
Optimization

IBM expanding its footprint in Risk Management


Well, it feels like we should devote a complete section of this blog to IBM and its acquisitions: over $14 billion in acquisitions in the last 5 years, many of them in technologies and expertise squarely in, or related to, the field of Decision Management.

Over the past few days, IBM has announced its acquisition of UK-based I2 and of Canada-based Algorithmics. The official press releases – similar in form, of course – provide some insight into what motivated IBM to invest in these acquisitions: “accelerate big data analytics to transform big cities” for I2, and “accelerate business analytics into financial risk management” for Algorithmics. The terms of these acquisitions – not disclosed for I2 but believed to be in the $500M range, and in the $380M range for Algorithmics – are not enough to make them mega deals, but they position IBM squarely as a major provider of risk management solutions for multiple industries, in particular financial services and defense/security.

Both companies leverage data – and increasingly what is now called big data, with its volume + variety + complexity + velocity challenge – through sophisticated analytics to support automated and human decision making by assessing and qualifying risk.

A lot of noise has been made about these two acquisitions increasing IBM's presence in the big data analytics world. While that is correct at the technology level, I believe that they also do something else: they help make IBM a key provider of core enterprise risk management solutions.

I2 is not a well-known company outside the security, fraud and crime risk management spaces. Of course, it’s never helped that a much better-known supply chain management company has the same name… I2’s products allow organizations to track vast amounts of data, organize it, and search through it in order to identify patterns that may be indicative of terrorist, criminal or fraudulent behavior. A number of techniques are used, but a big claim to fame for I2 is its leading position in the link analysis space. Link analysis – also sometimes referred to as network analysis, and made popular in a particular form by social network analysis – identifies relevant relationships between entities, qualifies them (for example in terms of “betweenness”, “closeness”, etc.) and lets analysts navigate them through multiple dimensions, including time, leading to the recognition of patterns of entities and non-obvious relationships indicative of potential issues. The analysis is carried out on large sets of seemingly disparate data: transaction data, structured and semi-structured documents, phone records, email trails, IP data, etc. Its products, for example Analyst’s Notebook, have received great reviews.

I2 brings to the table not just the risk management products and expertise that have made the company famous in that space, but also solid expertise in the management of big data. IBM has made acquisitions in this space – Netezza a year ago, in September 2010, and NISC a little earlier – and I2 brings complementary solutions and expertise to the company.

Algorithmics is also a company that is well known in its space and little known outside of it. It specializes in the measurement and management of the risk of financial investments. Up to now it has been part of the French holding company Fimalac, which also happens to own the Fitch Ratings agency, which issues credit ratings to both commercial companies and governments – I would expect them to use the capabilities of Algorithmics. The company was created in 1989, and its initial charter – to create solutions to characterize and manage the financial risk of investments – addressed some of the risk issues exposed by the 1987 stock market crash. We are not going to elaborate on why similar risk management and rating issues remain at the forefront of preoccupations in the financial and political worlds…

Risk management is a fairly fragmented space, with specialized solutions focusing on different types of risks. In the short term, it is possible that IBM will not immediately compete with some of its partners in the risk management space, such as Fair Isaac (disclosure: I used to work there). However, risk management is becoming much more of a global enterprise affair than it used to be – the sources of risk are becoming multi-faceted, delivered through multiple channels, touching multiple processes at once. Customers are looking for, and assembling themselves, enterprise risk management solutions. This trend makes IBM’s acquisitions in this space well thought out, positioning the company at the core of these solutions, and I am certain that IBM will displace or acquire niche risk management vendors as its footprint in the space continues to grow. It should be noted that, as with big data, acquisitions in risk management areas have been quite frequent for (Ever) Big(ger) Blue: NISC, already mentioned, in January 2010, OpenPages in September 2010, PSS Systems in October 2010, and now these two.

Another important aspect is that a lot of the technology and solutions applied to the management of risk are also applicable to the optimization of processes and the increase of competitiveness. In a world where regulations will increase to rein in excesses, the search for incremental competitiveness will be combined with compliance with regulations and the management of risk in comprehensive solutions. Decision Management already plays, and will continue to play, a central role there, leveraging data management, analytics, business rules, optimization and case management technologies in concert.

I do expect IBM to continue completing its portfolio in big data, decision management and risk management. IBM clearly has its acquisition machine well under control – and it is paying off. For example, the investments in analytics have enabled its business analytics software and services unit to post seven consecutive quarters of growth, including 20% growth in the first half of 2011 alone. IBM’s goal is to go from $10 billion in annual sales now to $16 billion by 2015. A significant increase, but one that it is giving itself the means to achieve.

I have a little list of companies I would not be surprised to see becoming part of all this (although I missed one: Autonomy, bought by HP not long ago, was one I expected to see acquired by IBM…).

BREAKING NEWS: Rules Fest Call for Papers is now Open!


I just got the word from Jason Morris, Chairman of the show.  The show’s website is now open for registration!

Register!

If you have practical experience with Decisioning technologies like

  • Business Rules,
  • Complex Event Processing,
  • Predictive Analytics,
  • Optimization or
  • Artificial Intelligence,

Then you should consider submitting a paper to:

Join the Rules Fest Speaker Hall of Fame!

Please keep in mind that we are looking for hands-on experience, lessons learned, those kinds of things.  Vendor pitches will not be accepted, of course.

Optimization vs. Business Rules: pick your evil


During my career, I have been asked over and over again for my opinion on two different approaches: Optimization or Business Rules.  Which one is best?

People always expect more than the “it depends” answer.  But in that case, how could anyone answer such a broad “apples and oranges” question?  It is actually like asking if I recommend a hammer or a screwdriver…  Of course it depends on the job at hand!

Trying to satisfy my audience, I thought about it very carefully.  It has always been clear to me that this technology choice was not so much about the nature of the problem as about the approach to solving it.  Let me clarify what I mean after we settle on a common terminology.

What are those Technologies?

Optimization is about finding the best solution to a problem.  You can either find any solution that works, get all possible solutions or select the best one based on some objective criteria.  The premise is that you can describe the problem itself and the associated constraints that define a “good” solution.  The technology is responsible for performing the optimized search.

Illustration – an optimization model might look like those problems we used to solve in school.  For example:

  • X is monthly income
  • Y is the monthly mortgage payment, which is known to be $3,000
  • X must be greater than 3 times Y for the primary borrower
  • What is the lowest income you need to get documentation for? *** this is what the engine figures out ***
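
For the curious, here is a minimal sketch of that little model in Python, using SciPy’s linprog solver – an assumption on my part, since any LP solver would do:

```python
from scipy.optimize import linprog

# Minimize X (the monthly income) subject to X >= 3 * Y,
# where Y is the monthly payment, fixed at $3,000.
# linprog minimizes c @ x subject to A_ub @ x <= b_ub,
# so X >= 3Y is rewritten as -X <= -3Y.
Y = 3000
c = [1.0]          # objective: minimize X
A_ub = [[-1.0]]    # coefficient of X in -X <= -3Y
b_ub = [-3 * Y]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)])
print(result.x[0])  # -> 9000.0, the lowest income to get documentation for
```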

Granted, this example is extremely simplistic.  In reality, a traditional optimization model might look at a loan request and define the best terms for the bank (i.e. the highest margin) that have the best chance of being competitive for the applicant.  Interestingly, optimization can also deal with much larger search spaces – for example, allocating gates in airports or scheduling crews over a large geography.

Business Rules are about describing how you go about getting to the end result.  You can describe the various steps you might perform to reach an end state – often a decision, but it could also be a configuration or, more generally, some set of values.  The rules engine is responsible for “sequencing” and executing those IF-THEN-ELSE statements.

Illustration – a rulebase might look like this:

  • If the total income of the primary borrower is less than 3 times the loan monthly payment then require guarantor
  • If the total income of the guarantor is less than 5 times the loan monthly payment then decline

For two rules only, a rules engine would definitely be overkill, but consider that you may have thousands or millions of rules in your system.  Regulations may force you to create many alternatives of the same rule, with variations per State or Country; different products may have slightly different governing rules; Gold customers might be treated differently than your general customers.  The beauty of an engine that executes them is that it does not care about overlap or sequencing.  Those potentially huge volumes of rules will be executed efficiently – sharing repeated tests across rules – and consistently.  I should add a caveat here that I am mostly referring to RETE-based engines.  Non-inferencing engines would not necessarily provide the same convenience.
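
As a minimal sketch – plain Python rather than a real rules engine, so no Rete network and no conflict resolution – the two rules above could read:

```python
# Toy rendering of the two lending rules; a real engine would match
# and sequence these for you instead of relying on a fixed order.
def apply_rules(case):
    if case["primary_income"] < 3 * case["monthly_payment"]:
        case["guarantor_required"] = True
    if case.get("guarantor_required") and \
       case.get("guarantor_income", 0) < 5 * case["monthly_payment"]:
        case["decision"] = "decline"
    return case

print(apply_rules({"primary_income": 8000,
                   "monthly_payment": 3000,
                   "guarantor_income": 12000}))
# A guarantor is required (8,000 < 9,000) and the request is
# declined (12,000 < 15,000).
```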

So What Makes Them Similar?

With those definitions in mind, you might wonder why anyone would use one in place of the other…  Well, there is overlap.  The most obvious case is the problem of CONFIGURATION.

If you want to find a valid configuration as a manufacturer, you could possibly go either way.  I have seen either technology being used for:

  • Computer assembly as well as Trucks — in both cases, the problem was to fit parts that may be mutually exclusive (you can only have one motherboard in the PC and only one type of tires on the truck), including some physical limitations (only so many slots in your PC, a V8 engine may not fit under certain hoods), etc.
  • Manufacturing processes — Some gases may not mix too well; some drugs may interact…
  • Services — Life insurance policies can be extremely complex to set up; Telecom services now including so many options do require some intelligence as to what is compatible / required…
  • Even Sports!  Games can be scheduled according to the league’s demands: alternating home and away games, avoiding blocked dates (the football stadium may be used for concerts or a baseball game), etc.  Similar demands may be made by the announcers for ad scheduling…  You may have attended my landmark presentation on the NFL at the Business Rules Forum last year.

The nature of the problem does not dictate the technology that is most appropriate.  You need to focus on the type of outcome and the approach that stakeholders are comfortable with.

Usage Guidelines for Optimization

Optimization, per its name, is about finding the best solution to a problem.  Especially when the problem is heavily constrained, that technology is very efficient at finding a feasible solution if one exists, or at letting you know that none exists – which is valuable in itself.  Furthermore, if the problem lends itself to ranking – where one solution might be more valuable than another and you have a way to measure that – optimization technology will typically outperform other techniques.

Let me illustrate with a typical crew scheduling example.

Let’s say you have 1,000 engineers to dispatch.  Jobs have been contracted and include a specific deadline.  You need to make sure that each engineer is assigned to a job that matches his/her credentials.  Your objective is to perform all the work within the allocated timeframe to avoid penalties.

Granted, the standard Miss Manners rules benchmark solves a somewhat similar problem (not scheduling within a timeframe, but placing guests next to each other according to topics of interest), but the rules used in that test are certainly not easy to understand, since they refer to the mechanics of allocating those guests to their seats rather than a description of the end result.  And more importantly, that design is very opaque…  How would you inject additional rules to reflect that Joe never works on Fridays and that Jack and Jill can’t collaborate on a project?
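
To make the contrast concrete, here is a deliberately naive brute-force sketch in Python – made-up engineers, jobs and skills, and a real optimizer would search far more cleverly – where each requirement is a declarative predicate, so “Joe never works on Fridays” is one added line rather than a redesign:

```python
from itertools import permutations

engineers = ["Joe", "Jack", "Jill"]
skills = {"Joe": {"install"}, "Jack": {"wiring", "audit"}, "Jill": {"audit"}}
jobs = [("install", "Mon"), ("wiring", "Fri"), ("audit", "Fri")]

constraints = [
    # Credentials must match the assigned job.
    lambda a: all(job in skills[eng] for eng, (job, day) in a.items()),
    # Joe never works on Fridays.
    lambda a: a["Joe"][1] != "Fri",
]

# Try every one-job-per-engineer assignment until one satisfies all
# constraints; otherwise report infeasibility (valuable in itself).
for perm in permutations(jobs):
    assignment = dict(zip(engineers, perm))
    if all(check(assignment) for check in constraints):
        print(assignment)
        break
else:
    print("infeasible: no assignment satisfies every constraint")
```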

Usage Guidelines for Business Rules

Business Rules are very good at applying a technique that you know will get you to the end result.  If the problem is not constrained – meaning that you will not paint yourself into a corner as you assemble your configuration – then business rules offer some advantages.  Since a heuristic leads you to one solution rather than the set of all possible solutions, business rules make more sense when all solutions are equal.  You simply care about finding any configuration that works.

I have met end users who were not comfortable using optimization because it changes radically the way they have always done business, and they do not trust that the outcome will be satisfactory – in other words, “as good as if they had done it manually”.  For those users, business rules offer the flexibility to describe the logic step by step, one rule at a time, and they can also review execution traces that justify how the system got to the recommendation.

Imagine you had a Sudoku grid and you fed it into an engine that filled it in completely.  You might go and check each row, each column and each square…  just to be sure.  But if you had a step-by-step animation explaining that this “3” completes the top-right square, you would not worry as much.  You would likely make sure that the square logic, the row logic, the column logic, etc. work correctly, then trust the system.  People like to understand and be able to anticipate how systems work in order to trust them.

Let me use an example that is a little more serious.  In the Telecom space, you might sign up for products and services.  In a package, you might get to select phone, internet access, television and cell phone.  Product Managers can describe, using business rules, which components are mandatory or optional.  Business rules can also be used to define incompatibilities and requirements.  You cannot have DSL and Fiber at the same time.  You need local service to enable long distance.  You must select a data plan for an iPhone or smart phone.  The advantage of using business rules in that example is that you can reuse them in different contexts: to guide a consumer on the self-serve website, to script an agent’s conversation in a call center, or to validate applications transmitted electronically or by mail.
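
As a minimal sketch – hypothetical product codes, and a real rules engine would let Product Managers author these declaratively – those compatibility rules could be captured once and reused across all three channels:

```python
# Each rule pairs a predicate over the selected product set with the
# message to show when the rule is violated.
RULES = [
    (lambda s: not {"dsl", "fiber"} <= s,
     "DSL and Fiber cannot be selected together"),
    (lambda s: "long_distance" not in s or "local" in s,
     "Long distance requires local service"),
    (lambda s: "smartphone" not in s or "data_plan" in s,
     "A smartphone requires a data plan"),
]

def validate(selection):
    """Return the violated rule messages; an empty list means valid."""
    return [message for ok, message in RULES if not ok(selection)]

print(validate({"dsl", "fiber", "local"}))            # one violation
print(validate({"fiber", "local", "long_distance"}))  # [] – a valid bundle
```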

In a Nutshell

Searching for a needle in a haystack?  And not just any needle?  Then pick Optimization!

Your experts know the recipe?  A solution is a solution?  Then pick Business Rules!

