
Predictive Analytics

Data versus Expertise Dilemma


In the decade (or two) I have spent in Decision Management, and Artificial Intelligence at large, I have seen first-hand the war raging between knowledge engineers and data scientists, each defending their own approach to supporting ultimately better decisions. So what is more valuable? Insight from data? Or knowledge from the expert?

Mike Loukides wrote a fantastic article called “The unreasonable necessity of subject experts” on the O’Reilly Radar that illustrates this point very well and provides a clear picture of why and how we would want both.

Data knows stuff that experts don’t

In the world of uncertainty that surrounds us, experts can’t compete with the sophisticated algorithms we have refined over the years. Their computational capabilities go way above and beyond the ability of the human brain. Algorithms can crunch data in relatively little time and uncover correlations that we did not suspect.

Adding to Mike’s numerous examples, the typical diaper shopping use case comes to mind. Retail transaction analysis uncovered that buyers of diapers at night were very likely to buy beer as well. The rationale is that husbands help the new mom with shopping when diapers run low at the most inconvenient time of the day: inevitably at night. The new dad wandering in the grocery store at night ends up getting “his” own supplies: beer.

Mike warns against the pitfalls of data preparation. A hidden bias can surface in a big way in data samples, whether it over-emphasizes some trends or cleans up traces of unwanted behavior. If your data is not clean and unbiased, the value of the data insight becomes doubtful. Skilled data scientists work hard to remove as much bias as they can from the data sample they work on, so that the correlations they uncover are actually valuable.

Data knows too much?

When algorithms find expected correlations, like Mike’s example of pregnant women being interested in baby products, analytics can validate intuition and confirm facts we already knew.

When algorithms find unexpected correlations, things become interesting! With insight that is “not so obvious”, you have an advantage: you can market more targeted messages. Marketing campaigns can yield much better results than “shooting darts in the dark”.

Mike raises an important set of issues: Can we trust the correlation?  How to interpret the correlation?

Mike’s article includes many more examples.  There are tons of football statistics that we smile about during the Super Bowl.  Business Insider posted some even more incredible examples such as:

  • People who dislike licorice are more likely to understand HTML
  • People who like scooped ice cream are more likely to enjoy roller coasters than those that prefer soft serve ice cream
  • People who have never ridden a motorcycle are less likely to be multilingual
  • People who can’t type without looking at the keyboard are more likely to prefer thin-crust pizza to deep-dish

There may be some interesting tidbits of insight in there that you could leverage. But unless you *understand* the correlation, you may be misled by your data and draw premature conclusions.

The expert shines at understanding

Mike makes a compelling argument that the role of the expert is to interpret the data insight and sort through the red herrings.

This illustrates very well what we have seen in the Decision Management industry with the increased interplay between the “factual” insight and the “logic” that leverages that insight.  Capturing expert-driven business rules is a good thing.  Extracting data insight is a good thing.  But the real value is in combining them.  I think the interplay is much more intimate than purely throwing the insight on the other side of the fence.  You need to ask the right questions as you are building your decisioning logic, and use the available data samples to infer, validate or refine your assumptions.

As Mike concludes, the value resides in the conversation that is raised by experts on top of data.  Being able to bring those to light, and enable further conversations, is how we will be able to understand and improve our systems.

DecisionStats – Predictive Models Ain’t Easy to Deploy


One of my articles was published on the DecisionStats blog. Thanks, Ajay! You can read it there.

This article highlights the main issues that Decision Management practitioners are facing when deploying Predictive Models with their Business Rules.

For your convenience, here it is:

Decision Management is about combining predictive models and business rules to automate decisions for your business. Insurance underwriting, loan origination or workout, and claims processing are all very good use cases for the discipline… But there is a hiccup… It ain’t as easy as you would expect…

What’s easy?

If you have a neat model, then most tools will allow you to export it as a PMML model – PMML stands for Predictive Model Markup Language and is a standard XML representation for predictive model formulas. Many model development tools let you export it without much effort. Many BRMS – Business Rules Management Systems – let you import it. Tada… The model is ready for deployment.
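
To make the “easy” part concrete, here is a minimal sketch of exporting a scikit-learn model to PMML with the open-source sklearn2pmml package; the dataset, pipeline name and output path are just illustrative, and this is not tied to any particular BRMS.

```python
# Minimal sketch: export a fitted scikit-learn pipeline as a PMML document
# (assumes the open-source sklearn2pmml package, which needs a Java runtime).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Wrap the estimator in a PMML-aware pipeline so preprocessing and the
# model formula are both captured in the exported document.
pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=1000))])
pipeline.fit(X, y)

# Serialize to a PMML file that a BRMS could then import.
sklearn2pmml(pipeline, "iris_model.pmml")
```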

What’s hard?

The problem that we keep seeing over and over in the industry is the issue around variables.

Those neat predictive models are formulas based on variables that may or may not exist as-is in your object model. When the variable is itself a formula over the object model – say the min, max or sum of the dollar amount spent on groceries in the past 3 months – and the object model carries the transaction details, so that you can compute it by iterating through those transactions, then the problem is not “that” big. PMML 4 introduced some support for those derived variables.
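
To make the variable problem concrete, here is a minimal sketch, in plain Python with hypothetical field and category names, of deriving such an aggregate from transaction-level detail.

```python
# Minimal sketch: derive an aggregate variable from transaction details.
# Field and category names are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Transaction:
    amount: float
    category: str
    txn_date: date

def grocery_spend_last_3_months(transactions, as_of):
    """Sum, min and max of grocery spending over the trailing ~3 months."""
    cutoff = as_of - timedelta(days=90)
    amounts = [t.amount for t in transactions
               if t.category == "Groceries" and cutoff <= t.txn_date <= as_of]
    if not amounts:
        return {"sum": 0.0, "min": 0.0, "max": 0.0}
    return {"sum": sum(amounts), "min": min(amounts), "max": max(amounts)}

txns = [Transaction(82.50, "Groceries", date(2012, 1, 5)),
        Transaction(41.00, "Groceries", date(2012, 2, 12)),
        Transaction(60.00, "Gas",       date(2012, 2, 20))]
print(grocery_spend_last_3_months(txns, as_of=date(2012, 3, 1)))
# {'sum': 123.5, 'min': 41.0, 'max': 82.5}
```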

The issue that is not easy to fix, and yet quite frequent, is when the model development data model does not resemble the operational one. Your Data Warehouse very likely flattened the object model, and pre-computed some aggregations that make the mapping very hard to restore.

It is clearly not an impossible project as many organizations do that today. It comes with a significant overhead though that forces modelers to involve IT resources to extract the right data for the model to be operationalized. It is a heavy process that is well justified for heavy-duty models that were developed over a period of time, with a significant ROI.

This is a show-stopper though for other initiatives which do not have the same ROI, or would require too frequent model refresh to be viable. Here, I refer to “real” model refresh that involves a model reengineering, not just a re-weighting of the same variables.

For those initiatives where time is of the essence, the challenge will be to bring those two worlds – the modelers and the business rules experts – closer together, in order to streamline the development AND deployment of analytics beyond the model formula. The great opportunity I see is the potential for better, coordinated tuning of the cut-off rules in the context of the model refinement. In other words: the opportunity to refine the strategy as a whole. Very ambitious? I don’t think so.
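
To make “refining the strategy as a whole” a bit more tangible, here is a minimal, hypothetical sketch of a decision function where business cut-off rules sit on top of a predictive score; the thresholds, field names and decision labels are made up and would be tuned together with each model refresh.

```python
# Sketch: a decision strategy combining a model score with cut-off rules.
# Thresholds and decisions are hypothetical; they would be refined alongside
# the model rather than hard-coded once and forgotten.
CUTOFFS = {"approve": 0.80, "review": 0.50}

def decide(application, risk_score):
    # Business rules can override the score outright...
    if application.get("bankruptcy_last_12_months"):
        return "decline"
    # ...otherwise the cut-off rules translate the score into a decision.
    if risk_score >= CUTOFFS["approve"]:
        return "approve"
    if risk_score >= CUTOFFS["review"]:
        return "refer to underwriter"
    return "decline"

print(decide({"bankruptcy_last_12_months": False}, 0.73))  # refer to underwriter
```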

The Fraud Game


Fraud has been on the rise lately, with some recent high-profile cases like the Zappos leak a couple of weeks ago. Systems are unfortunately the target of fraudsters on all possible fronts:

  • Origination or on-boarding: Can I trust this individual enough to do business with them?
  • Transactions or claims: Should I let it go through?
  • Investigation: Is this transaction actually legitimate?  Can I trust this individual?
  • Management: How do I treat this flagged individual or transaction?
  • etc.

We often think of risk management as a Financial Services specialty, but many if not all businesses can be the target of fraudsters. In my talk with eBay at BBC, Kenny and I discussed some specifics of fraud detection for a retail site. This is a significant problem they need to tackle very quickly, as you can imagine. Here are some numbers that speak to the size of that problem:

  • 2 rules deployments every week
  • 20+ rules analysts around the globe depend on BRMS to innovate in fraud detection and risk management
  • 110+ eBay user flows
  • 300+ Rulesets
  • 600 application servers running rules (in the slides), 1200 approved on the day of the talk!
  • 1,000+ variables
  • 15k+ rules
  • 50M+ fired rules a day
  • 140M+ rule sessions a day

Let me share some of the key take-aways of the talk.

1. Fraudsters look for a good ROI

The same way that businesses consider the Return On Investment, fraudsters are on the look-out for the biggest bang for the buck. They continuously look for weaknesses in systems or procedures that can be exploited at large scale. With that in mind, you could consider that the Fraud team’s job is not to make it impossible to abuse the system, but rather to make it *expensive*.

We have all received phishing emails, ranging from the African Dictator’s survivor to the Lottery Grand Prize.  We know of credit card abuse, etc.  Kenny shared some more unusual examples of fraud that eBay had to react to.

Account Take Over is a major issue. Originally, fraudsters simply logged in to create new fraudulent listings. eBay started tracking the IP addresses in the account history and used them for comparison when new listings were posted. Fraudsters eventually realized that they could instead revise the seller’s existing listings into fraudulent ones. eBay introduced a delay before making such changes visible, to allow for verification. The fraudsters then found out that eBay, as a policy, did not delay changes made in the last 12 hours of the auction…

This feels very much like a chasing game.  Kenny compares it to “catch the mouse”.

Here are some other “creative” moves from the fraudsters:

Fraudulent listings include contact information highlighted in the description to get the buyer to transact outside of eBay, by-passing the security measures of the commerce platform. eBay introduced a word search for email addresses at the time of posting. The fraudsters started posting their contact details as images!
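
As a rough illustration of that kind of word search (not eBay’s actual implementation), a listing description can be screened for embedded email addresses with a simple pattern match.

```python
# Sketch: flag listing descriptions that embed contact email addresses.
# The pattern is deliberately simple; production screening would be broader
# (obfuscated addresses, phone numbers, contact details rendered as images...).
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def contains_email(description: str) -> bool:
    return bool(EMAIL_PATTERN.search(description))

print(contains_email("Mint condition! Contact me at seller123@example.com to save fees"))
# True -> route the listing to fraud review
```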

A clever twist in the Fraud scheme caused an interesting puzzle for the Fraud Detection team.  They realized that, after the fraudulent listings had been removed, they eventually reappeared despite the measures they took to block access… until they realized that, elsewhere in the account configuration, the fraudsters had made sure that non-sold items were automatically reposted.  The automated rule repeated the fraud all by itself!

Fraudsters can get quite sophisticated. This “organized” crime moves fast and spreads everywhere through fraud rings and distribution channels.

2. The Intelligence to stop the fraudster

That is one fascinating aspect of the Fraud space: it is a moving target. You always need to solve new mysteries and devise plans to stop the fraud. If you love puzzles like I do, you cannot help but be enticed by that challenge!

The rules analysts need to come up with rules that flag the fraudsters, all the fraudsters and only the fraudsters, as comprehensively as possible, as precisely as possible and as fast as possible.  The metrics that are typically used to track the success of those business rules are the Hit Rate — when I flag a transaction, how likely is it that I catch an actually fraudulent transaction — and the Catch Rate — out of all the fraudulent transactions, how many do I catch.
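
In more familiar analytics terms, the hit rate is a precision measure and the catch rate a recall measure; here is a minimal sketch with hypothetical daily counts.

```python
# Sketch: hit rate (precision) and catch rate (recall) for fraud flagging rules.
def hit_rate(true_positives, false_positives):
    """Of the transactions I flag, how many are actually fraudulent?"""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else 0.0

def catch_rate(true_positives, false_negatives):
    """Of all the fraudulent transactions, how many do I catch?"""
    fraudulent = true_positives + false_negatives
    return true_positives / fraudulent if fraudulent else 0.0

# Hypothetical daily counts
print(f"hit rate:   {hit_rate(450, 150):.2%}")   # 75% of flags are real fraud
print(f"catch rate: {catch_rate(450, 50):.2%}")  # 90% of the fraud is caught
```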

Having clear objectives and ways to track them is a great start, but it does not solve the core issue of coming up with those business rules.  The rules analysts have to rely both on their intuition, typically with the insight of the case workers, and lots of data insight of course.  Analytics are critical tools in the Fraud Detection departments.

With this context in mind, the business case for Business Rules / Decision Management technology becomes obvious. The speed of change and the need to iterate to refine the fraud detection criteria are not at all compatible with traditional software development. If you play with the numbers that Kenny shared initially, you can see that eBay makes about 20,000 rule changes per year. The only way to get this done is by empowering those business analysts so that they can author the flagging rules on their own, while the IT team focuses on improving the speed of data access and variable computation, which Kenny described in more detail in his other talk.

In conclusion, the ROI for the companies that are fighting fraud is in getting the rules right and getting them fast.

Disclaimer: the examples of fraud I provided are not meant to encourage you to commit fraud… All of those schemes are now automatically flagged as fraudulent, of course!

IBM expanding its footprint in Risk Management


Well, it feels like we should devote a complete section of this blog to IBM and its acquisitions. Over $14 billion in acquisitions in the last 5 years, many of them in technologies and expertise squarely in, or related to, the field of Decision Management.

Over the past few days, IBM has announced its acquisition of UK-based I2, and of Canada-based Algorithmics. The official press releases – of course, similar in form – provide some insight into what motivated IBM to invest in these acquisitions: “accelerate big data analytics to transform big cities” for I2, and “accelerate business analytics into financial risk management” for Algorithmics. The terms of these acquisitions – not disclosed for I2 but believed to be in the $500M range, and in the $380M range for Algorithmics – are not enough to make them mega deals, but they position IBM squarely as a major provider of risk management solutions for multiple industries, in particular financial services and defense/security.

Both companies leverage data, and increasingly what is now called big data with its volume + variety + complexity + velocity challenge, through sophisticated analytics to support automated and human decision making by assessing and qualifying risk.

A lot of noise has been made about how these two acquisitions increase IBM’s presence in the big data analytics world. While that is correct at the technology level, I believe they also do something else: they help make IBM a key provider of core enterprise risk management solutions.

I2 is not a well known company outside the security, fraud and crime risk management spaces. Of course, it has never helped that a much better known supply chain management company has the same name… I2’s products allow organizations to track vast amounts of data, organize it, and search through it in order to identify patterns that may be indicative of terrorist, criminal or fraudulent behavior. A number of techniques are used, but a big claim to fame for I2 is its leading position in the link analysis space. Link analysis, sometimes also referred to as network analysis, and in a particular form made popular by social network analysis, identifies relevant relationships between entities, qualifies them (for example in terms of “betweenness”, “closeness”, etc.) and allows users to navigate them through multiple dimensions, including time, leading to the recognition of patterns of entities and non-obvious relationships indicative of potential issues. The analysis is carried out on large sets of seemingly disparate data: transaction data, structured and semi-structured documents, phone records, email trails, IP data, etc. Its products, for example Analyst’s Notebook, have received great reviews.
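
To give a flavor of those link analysis measures, here is a minimal sketch using the open-source NetworkX library; the entities and links are made up, and this is of course nowhere near what I2’s products do.

```python
# Sketch: betweenness and closeness centrality over a tiny entity graph.
# Nodes and edges are hypothetical (accounts, phone numbers, addresses...).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("acct_A", "phone_1"), ("acct_B", "phone_1"),  # two accounts share a phone
    ("acct_B", "addr_9"),  ("acct_C", "addr_9"),   # two accounts share an address
    ("acct_C", "phone_2"),
])

# Betweenness: entities sitting on many shortest paths (potential brokers).
# Closeness: entities that are "near" everything else in the network.
print(nx.betweenness_centrality(G))
print(nx.closeness_centrality(G))
```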

I2 brings to the table not just the risk management products and expertise that have made the company famous in that space, but also solid expertise in the management of big data. IBM has made acquisitions in this space – Netezza a year ago, in September 2010, and NISC a little bit earlier – and I2 brings complementary solutions and expertise to the company.

Algorithmics is also well known in its space, yet little known outside of it. It specializes in the measurement and management of the risk of financial investments. Up to now it has been part of the French holding company Fimalac, which also happens to own the Fitch Ratings agency, which issues credit ratings to both commercial companies and governments – I would expect Fitch to use the capabilities of Algorithmics. The company was created in 1989, and its initial charter – to create solutions to characterize and manage the financial risk of investments – addressed some of the risk issues faced during the 1987 stock market crash. We are not going to elaborate on why similar risk management and rating issues remain at the forefront of preoccupations in the financial and political worlds…

Risk management is a fairly fragmented space, with specialized solutions focusing on different types of risks. In the short term, it is possible that IBM will not immediately compete with some of its partners in the risk management space, such as Fair Isaac (disclosure: I used to work there). However, risk management is becoming much more of a global enterprise affair than it used to be – the sources of risk are becoming multi-faceted, delivered through multiple channels, touching multiple processes at once. Customers are looking for, and assembling themselves, enterprise risk management solutions. This trend makes IBM’s acquisitions in this space well thought out to position the company at the core of these solutions, and I am certain that IBM will displace or acquire niche risk management vendors as its footprint in the space continues to grow. It should be noted that, as with big data, acquisitions in risk management areas have been quite frequent for (Ever) Big(ger) Blue: NISC, already mentioned, in January 2010, OpenPages in September 2010, PSS Systems in October 2010, and now these two.

Another important aspect is that a lot of the technology and solutions applied to the management of risk are also applicable to the optimization of processes and the increase of competitiveness. In a world where regulations will increase to rein in excesses, the search for incremental competitiveness will be combined with compliance to regulations and the management of risk in comprehensive solutions. Decision Management already plays and will continue to play a central role there, leveraging data management, analytics, business rules, optimization and case management technologies in concert.

I do expect IBM to continue completing its portfolio in big data, decision management and risk management. IBM clearly has its acquisition machine well in hand – and it is paying off. For example, the investments in analytics have enabled its business analytics software and services unit to post seven consecutive quarters of growth, including 20% growth in the first half of 2011 alone. IBM’s goal is to go from $10 billion in annual sales now to $16 billion by 2015. A significant increase, but one that it is giving itself the means to achieve.

I have a little list of companies I would not be surprised to see becoming part of all this (although I missed one: Autonomy, bought by HP not long ago, was one I expected to see acquired by IBM…).

Plato or Aristotle?


Since I got the TED app on my iPad, I have had a chance to watch a lot more videos while cooking.  I love cooking as some of you may know and I particularly enjoy multi-tasking in the kitchen.

This weekend, Damon Horowitz got my attention. “Data is Power” was a sure way to get it, of course. He took a slightly different angle, though. Damon did not focus so much on why this premise is true and relevant; he dared raise moral considerations. Without dwelling too much on his personal beliefs, he describes different approaches that could be considered for a moral framework. Like most TED talks, it is a pleasant and entertaining talk with some nuggets of food for thought. How appropriate when you are cooking!

[youtube=http://www.youtube.com/watch?v=nG3vB2Cu_jM]

Damon is absolutely right though – data is power.  With data we can infer a lot of information that can help improve decisions.  This is the basis for Analytics of course:

  • Business Intelligence can shed light on patterns of behavior or categorization
  • Predictive Analytics can tell you who is likely to commit fraud and who might be seduced by a promotion

Beyond the technology, Analytics have disrupted decision-making processes throughout the enterprise. As you may know, Bill Fair and Earl Isaac pioneered credit decisioning back in the mid-1950s. Back to Damon’s focus on a moral framework: people may have mixed feelings about the principle of scoring individuals and the data used to do so. The fact is, as Larry Rosenberg explained to us in the past, credit scores have enabled access to credit for populations that previously fell into the grey area. By improving our predictions of the odds of repayment, we can improve the conditions for the good apples that happen to be in a bad basket. There is some good here. There are also other nasty scenarios. If Good vs. Bad were an easy distinction, Damon would not have had to discuss the topic on stage.

Besides the transactional data we use for decisioning throughout the customer life-cycle, there is also now social data available.  It can be used for fraud detection.  One typical use case is the detection of fraud rings based on social affinity.  Although it makes a lot of sense to contribute to the eradication of fraud if that is possible, one might fear the abuse and unintended consequences.  Damon’s comparison of the Utilitarian versus Civil Right perspective here exemplifies that there is no obvious Right versus Wrong.  We have opportunities enabled by the technology but we also need to consider the edge cases and how to deal with them to avoid the pitfalls along the way.  That, in my mind, will lead to more ad-hoc exceptions in which you will want to empower the Human, the Expert, to make that judgment call when the time comes.

I must admit I did not realize until the last year or so that there was more data to Decision Management than the data we look at for the actual decision. Whether the expert is weighing in at the time of processing or pouring his or her expertise into the automated system, there is environmental data that biases our perspective. Based on our personal experience, we may not always think outside the box, or not 100% objectively. Those concerns are quite well expressed in this other TED talk I recommend.

Eli Pariser brings to our attention a scary perspective on the unintended effect of personalization.  He stresses the danger of uncontrolled automated filters on news and information that eventually specialize too much, potentially keeping critical details hidden.  Personalization was our tagline coming into year 2000.  There is tremendous value here for sure: productivity gains, advertisement, etc.  Eli found extreme cases though that ended up providing a biased picture of the topic at hand in the quest for information.

[youtube=http://www.youtube.com/watch?v=B8ofWFx525s]

Whether or not we consider that this automated personalization and the derived SEO techniques and services have a significant impact, we can’t ignore the point that Eli is raising. This is going to become a more important issue for Decision Management as the discipline expands to manual decisions, where getting the right information at the right time matters. Mixing fact-based performance metrics with human-centric judgment seems to make sense. There may be more solutions to that problem, though. I believe we are in the infancy of these metaphysical considerations on technology and morals.

Crowdsourcing Predictions


Remember a decade ago when we were running SETI@home on our computers all night? I was guilty of lending my off-hours CPU time to this fun Berkeley experiment. My husband and I would join forces to help detect little anomalies… We did not take it seriously of course, but we enjoyed being part of the program! It was also technically intriguing as the first large-scale grid deployment we participated in…

Time has passed since then.

Grid architecture turned into Cloud deployments for elasticity.  The idea to join forces grew stronger though, manifesting itself in various ways.  Collaboration and Social capabilities are revolutionizing the way people can work together.  Some success stories involve better teamwork within the enterprise; others are about customer ideation.  In this post, I will focus on those crowdsourcing initiatives that push the innovation outside of the enterprise, not for feedback but for actual work.

The Netflix Prize

We have all heard of the Netflix Prize, but let me remind you of the premise. Netflix launched a competition in 2006 with an appealing $1M prize for the best predictor. Being a movie rental business, they differentiated themselves from the established brick-and-mortar players by offering the service over the mail as a subscription rather than a per-day rental. The business model was a great way to break into the space quickly, but they needed further differentiation since all players could (and did) adapt to the new way of renting movies. Consumers want to watch movies but they do not necessarily know what is out there… Have you ever stared at the wall of movies at Blockbuster without a clue as to what you will bring home? With a powerful recommendation engine, Netflix can predict the list of movies that you are the most likely to love, based on your previous ratings. Improving the precision of this recommendation engine increases Netflix’s value-add and therefore its competitiveness.

On September 21, 2009, the $1M Grand Prize was awarded to a team that improved the accuracy of the incumbent Cinematch algorithm by more than 10%.

Academics have had opportunities to research algorithms forever of course, but in this case Netflix made data freely available to the participants. This enabled a pragmatic effort rather than a purely theoretical one. The business objective was clearly stated in the rules: improve the prediction accuracy by 10% or more on the provided quiz sample.
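
For context, the competition measured accuracy with root mean squared error (RMSE) on that quiz sample; a back-of-the-envelope sketch of the 10% criterion, with made-up scores, looks like this.

```python
# Sketch: the 10% RMSE-improvement criterion (all numbers are illustrative).
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def relative_improvement(baseline_rmse, challenger_rmse):
    """How much better (as a fraction) the challenger is than the baseline."""
    return (baseline_rmse - challenger_rmse) / baseline_rmse

ratings_actual    = [4, 3, 5, 2]
ratings_predicted = [3.8, 3.4, 4.6, 2.5]
print(f"toy RMSE: {rmse(ratings_predicted, ratings_actual):.3f}")

baseline, challenger = 0.95, 0.85  # hypothetical quiz-set RMSEs
print(f"{relative_improvement(baseline, challenger):.1%} better")  # ~10.5%, goal met
```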

This 3-year journey involved a lot of hard work from many teams around the world. It was impressive to see how close the race got, with another submission reaching the stated goal just 24 minutes after the winning one. What was most impressive was the collaboration that took place. The leading teams realized that they could achieve more by joining forces than by competing. At that point, dramatic improvements were achieved. This is a beautiful lesson that testifies to the value of collaboration!

The secret sauce for both BellKor’s Pragmatic Chaos and The Ensemble was collaboration between diverse ideas, and not in some touchy-feely, unquantifiable, “when people work together things are better” sort of way. The top two teams beat the challenge by combining teams and their algorithms into more complex algorithms incorporating everybody’s work. The more people joined, the more the resulting team’s score would increase.
— Eliot Van Buskirk, Wired

This prize was a stroke of genius by Netflix, who realized very early on the potential offered by crowdsourcing. Not only did they achieve an incredible performance improvement to their algorithm, one they might never have come up with on their own, but they only spent $1M! It may seem like a hefty price tag, but if you consider that a team of 7 people worked on it for 3 years, it is ridiculously cheap… What would have been the odds of hiring the right people, with the right motivation and ideas? It would have cost a lot more, I am sure, and would have led to less tangible results.

More crowdsourced projects for recommendation / prediction engines?

With the success of the Netflix project, some new similar projects have bubbled up.  The chances of FICO outsourcing their FICO score are pretty slim of course.  Companies that use a prediction engine but do not live off of it are more likely to launch those initiatives.

Similar to Netflix, Overstock.com wants to provide better recommendations to its consumers, hoping to increase sales at the end of the day. They have just started a new competition with the now “standard” $1M prize. Following in Netflix’s footsteps, they also target a 10% improvement or better.

If you are on the lookout for a bigger prize, you can also check out this other competition. Heritage Provider Network is offering a $3M Grand Prize for the best predictive algorithm that can identify patients who will be admitted to the hospital within the next year, using historical claims data. In that case, data is obviously provided, but no hard-and-fast accuracy target is set. The team with the best prediction will win the prize at the 2-year mark.

I find this trend very exciting for the Decision Management space.  Collaboration can lead to great results with or without the carrot those companies are offering here.  It may take a little while for companies to embrace collaboration outside of the boundaries of the enterprise for harvesting and fine-tuning Business Rules but I have hope that we are not talking about decades.

BREAKING NEWS: Rules Fest Call for Papers is now Open!


I just got the word from Jason Morris, Chairman of the show. The show’s website is now open for registration!

Register!

If you have practical experience with Decisioning technologies like

  • Business Rules,
  • Complex Event Processing,
  • Predictive Analytics,
  • Optimization or
  • Artificial Intelligence,

Then you should consider submitting a paper to:

Join the Rules Fest Speaker Hall of Fame!

Please keep in mind that we are looking for hands-on experience, lessons learned, those kinds of things.  Vendor pitches will not be accepted of course.

Big Data meets Analytics… again


Well, another month, another acquisition… Teradata has announced the acquisition of Aster Data. You can find a less formal yet official post on the acquisition in Aster Data’s blog – Mayank and Tasso go into some more details on what Aster Data is all about and why the deal makes a lot of sense to the industry.

Aster Data has focused its energy on developing a low-cost (from a systems footprint perspective) platform to manage and process data at large scale without imposing restrictions on the types of data being managed and the type of processing being carried out. The resulting platform is a showcase for a solid commercial implementation of the much-talked-about map-reduce approach to big data processing, and has enabled companies from different industries to extract analytic insight from both structured and unstructured data. As a result, they have been able to make better decisions leveraging not just the traditional operational data, but also the social data, web click data, etc., that is generated in huge volumes around their products and services.
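
As a toy illustration of the map-reduce pattern mentioned above (generic Python, not specific to Aster Data’s platform), counting clicks per product across raw log lines might look like this.

```python
# Sketch: the map-reduce pattern on a toy click log (illustrative only).
from collections import defaultdict

def map_phase(log_line):
    # Emit a (key, 1) pair; the key here is a hypothetical product id.
    product_id = log_line.split(",")[1]
    return (product_id, 1)

def reduce_phase(pairs):
    # Aggregate the counts emitted by the map phase, per key.
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

log = ["u1,prod_42,2011-03-01", "u2,prod_42,2011-03-01", "u3,prod_7,2011-03-02"]
print(reduce_phase(map(map_phase, log)))  # {'prod_42': 2, 'prod_7': 1}
```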

The approach also shortens the time needed to bring that insight back into decisions. This type of close-to-real-time insight makes the understanding of decision impact, as well as its evolution, much more dynamic, giving the companies that can leverage it an edge in managing risk and benefiting from trends.

The acquisition is a good move for Teradata. It also reinforces the following key trends:

  • Platform players continue acquiring young innovative companies which solve complex data, analytics and/or decision management problems. Just in this space, EMC bought Greenplum some time ago, IBM bought Netezza, HP bought Vertica…
    The consolidation trend will continue
  • The big data management and processing spaces are merging on unified platforms. There is less and less distinction between managing vast amounts of data and processing them to gain insight, generating more data on the fly.
  • Managing and processing non-structured data – which makes up most of big data – is becoming an integral part of what companies need to do to manage the decisions around their products and services.  And contrary to popular belief, this is as important in B2B as in B2C.
    This is also a consequence of the growing importance of the decision data that can be extracted from social data. This will accelerate with Enterprise 2.0.
  • And finally, platform vendors are morphing into Cloud-based/backed SaaS providers, and they are making tasks such as the ones enabled by Aster Data accessible at low entry cost.

Exciting times. Congratulations to the Aster Data team.

This, of course, reduces the pool of independent big data management and processing products. InfoBright and ParAccel come to mind – and HP, Dell and the like still need to move in this space. Who wants to start bets?

