
AI

When They’re Searching for the Next Big Thing


How to Keep and Delight Fickle Customers in Competitive Insurance and Fintech Markets

Are your customers fickle? How well do you anticipate their needs, proactively offer packages at competitive prices, and react to regulatory and competitive changes before they leave you?

Today’s banks, insurance companies, and financial firms operate in a fast-moving, highly competitive, and rapidly changing market. Disruption is everywhere, and customers can make choices in an instant from their smartphones. Losing a customer to a more nimble competitor can take no longer than a cup of coffee and a few finger swipes on a Starbucks patio.

Particularly in the insurance market, customer interactions are precious and few. An insurance company rep needs not only to delight the customer when an opportunity arises, but also to upsell them by offering a personalized product or service tailored to their needs, virtually instantly.

Doing the same thing as before is a certain way to lose business

Nimble competitors now use the latest AI and analytics technology to rapidly discover and deploy intelligent decision systems that can instantly predict customer needs and customize the offering and pricing relevant to each customer at the right time.

To achieve and sustain such flexibility, a financial organization needs to modernize its underlying systems. The best companies build living decision intelligence into their systems, ready to be updated on a moment’s notice.

If a competitor offers a better deal, a customer has a life event, or data analysts discover a new pattern for risk or fraud, core systems need to be updated virtually instantly. By having an intelligent, AI-driven central decision management system at the heart of your core systems, anyone in your organization can have the latest intelligence at their fingertips. Intelligent systems will help verify customer eligibility, provide a custom product or bundle offering at a competitive price, speed up and automate claim adjudication, and automate loan origination across all sales and support channels.

The heart of this solution is a modern, AI-driven decision management and rules engine platform that uses the latest AI and analytics techniques and offers sophisticated cloud deployments with unparalleled flexibility and speed. The best systems are no longer just for IT: they allow business analysts to view, discover, test, and deploy updated business logic in real time.

A modern organization needs the latest decision analytics tools

These tools let you discover new patterns in historical data using machine learning, connect and cross-correlate multiple sources of data, and incorporate predictive models from your company’s data analysts. Updating and deploying new logic is now as easy as publishing a web page and does not require changing the application itself, just the underlying business logic.
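To make the “publish logic, not code” idea concrete, here is a minimal sketch in Python. It is not the SMARTS product or its API; the rule format, field names, and thresholds are invented purely for illustration. The application only knows how to evaluate whatever rules document is currently published, so changing an offer or a price means republishing the JSON, not redeploying the application.

```python
import json

# Hypothetical externalized business logic: the application never hard-codes
# these thresholds; it only evaluates whatever rules are currently published.
RULES_JSON = """
[
  {"if": {"field": "credit_score", "op": ">=", "value": 720},
   "then": {"offer": "premium_bundle", "rate": 0.039}},
  {"if": {"field": "credit_score", "op": ">=", "value": 650},
   "then": {"offer": "standard_bundle", "rate": 0.059}},
  {"if": {"field": "credit_score", "op": "<",  "value": 650},
   "then": {"offer": "secured_card", "rate": 0.089}}
]
"""

OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def decide(customer, rules):
    """Return the action of the first rule whose condition matches."""
    for rule in rules:
        cond = rule["if"]
        if OPS[cond["op"]](customer[cond["field"]], cond["value"]):
            return rule["then"]
    return {"offer": "manual_review"}

rules = json.loads(RULES_JSON)               # in production, fetched from a repository
print(decide({"credit_score": 700}, rules))  # {'offer': 'standard_bundle', 'rate': 0.059}
```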


Sparkling Logic SMARTS AI Decision Management is the third and newest generation of decision management and rules engine offerings, built on cloud, AI, decision analytics, predictive models, and machine learning. We currently process millions of rules and billions of records for the most progressive companies. Find out how we succeeded in creating the sophisticated set of decision management and decision analytics tools that every modern financial institution should have in its competitive tool chest.

Analytics-Driven Automated Decisions

SMARTS Decision Manager White Paper

Automated decisions are at the heart of your processes and systems. These operational decisions provide the foundation for your digital business initiatives and are critical to the success and profitability of your business. Learn how SMARTS Decision Manager lets you define agile, targeted, and optimal decisions and deploy them to highly available, efficient, and scalable decision services.
Get the Whitepaper

How Predictive Models Improve Automated Decisions


Agility is a key focus and benefit in the discipline of decision management. Agility, in the decision management context, means being able to rapidly adjust and respond to business and market-driven changes. Decision management technologies allow you to separate the business logic from your systems and applications. Business analysts then manage and make changes to the business logic in a separate environment. And they can deploy their changes with minimal IT involvement and without a full software development cycle. With decision management, changes can be implemented in a fraction of the time required to change traditional applications. This ability to address frequently changing and new requirements that impact key automated decisions makes your business more agile.

Being able to rapidly make and deploy changes is important. But how do you know what changes to make? Some changes, like those defined by regulations and contracts, are straightforward. If you implement the regulations or contract provisions accurately, the automated decision will produce the required results and, therefore, make good decisions. However, many decisions don’t have such a direct and obvious solution.

When Agility Isn’t Enough

Frequently decisions depend on customer behavior, market dynamics, environmental influences or other external factors. As a result, these decisions involve some degree of uncertainty. For example, in a credit risk decision, you’re typically determining whether or not to approve a credit application and where to set the credit limit and interest rate. How do organizations determine the best decisions to help them gain customers while minimizing risk? The same applies to marketing decisions like making upsell and cross-sell offers. Which potential offer would the customer most likely accept?

Predictive Models Provide Data Insight

This is where predictive models help. Predictive models combine vast amounts of data and sophisticated analytic techniques to make predictions about the future. They help us reduce uncertainty and make better decisions. They do this by identifying patterns in historical data that lead to specific outcomes and detecting those same patterns in future transactions and customer interactions.

Predictive models guide many decisions that impact our daily lives. Your credit card issuer has likely contacted you on one or more occasions asking you to confirm recent transactions that were outside of your normal spending patterns. When you shop online, retailers suggest products you might want to purchase based on your past purchases or the items in your shopping cart. And you probably notice familiar ads displayed on websites you visit. These ads are directly related to sites you previously visited to encourage you to return and complete your purchase. All of these are based on predictive models that are used in the context of specific decisions.

How Predictive Models Are Built

Predictive modeling involves creating a model that mathematically represents the underlying associations between attributes in historical data. The attributes selected are those that influence results and can be used to create a prediction. For example, to predict the likelihood of a future sale, useful predictors might be the customer’s age, location, gender, and purchase history. Or to predict customer churn we might consider customer behavior data such as the number of complaints in the last 6 months, the number of support tickets over the last month, and the number of months the person has been a customer, as well as demographic data such as the customer’s age, location, and gender.

Assuming we have a sufficient amount of historical data available that includes the actual results (whether or not a customer actually purchased in the first example, or churned in the second) we can use this data to create a predictive model that maps the input data elements (predictors) to the output data element (target) to make a prediction about our future customers.

Typically, data scientists build predictive models through an iterative process (a minimal code sketch follows the figure below) that involves:

  • Collecting and preparing the data (and addressing data quality issues)
  • Exploring and analyzing the data to detect anomalies and outliers and identify meaningful trends and patterns
  • Building the model using machine learning algorithms and statistical techniques like regression analysis
  • Testing and validating the model to determine its accuracy
Figure: The Data Science Process (by Farcaster at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=40129394)
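As a rough illustration of that loop, the sketch below trains and validates a churn model in Python with scikit-learn. The data is synthetic and the predictors simply mirror the ones mentioned above (complaints, support tickets, tenure, age); a real project would spend most of its effort on the collection, preparation, and exploration steps that are skipped here.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic historical data standing in for real customer records.
rng = np.random.default_rng(0)
n = 5000
data = pd.DataFrame({
    "complaints_6m":   rng.poisson(1.0, n),       # predictors
    "tickets_1m":      rng.poisson(0.5, n),
    "months_customer": rng.integers(1, 120, n),
    "age":             rng.integers(18, 80, n),
})
# Target: churn is made more likely by many complaints and short tenure.
logit = 0.8 * data["complaints_6m"] - 0.02 * data["months_customer"] - 0.5
data["churned"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Hold out data to test how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="churned"), data["churned"], test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```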

Once the model is built and validated it can be deployed and used in real-time to inform automated decisions.

Deploying Predictive Models in Automated Decisions

While predictive models can give us sound predictions and scores, we still need to decide how to act on them. Modern decision management platforms like SMARTS Decision Manager let you combine predictive models that inform your decisions with business rules that translate those decisions into concrete actions. SMARTS includes built-in predictive analytics capabilities and also lets you use models built using other analytics tools such as SAS, SPSS and R.

The use of predictive models is rapidly expanding and changing the way we do business. But it’s important to understand that predictions aren’t decisions! Real-world business decisions often include more than one predictive model. For example, a fraud decision might include a predictive model that determines the likelihood that a transaction originated from an account that was taken over. It might also include a model that determines the likelihood that a transaction went into an account that was compromised. A loan origination decision will include credit scoring models and fraud scoring models. It may also include other models to predict the likelihood that the customer will pay back early, or the likelihood that they will purchase additional products and services (up-sell). Business rules are used to leverage the scores from these models in a decision that seeks to maximize return while minimizing risk.
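Here is a hedged sketch of that layering, with invented scores and thresholds rather than any real credit policy: the models supply probabilities and scores, and plain business rules turn them into an action, a price, and a cross-sell offer.

```python
def loan_decision(credit_score, fraud_score, early_payoff_score, upsell_score):
    """Invented decision logic: business rules acting on several model scores."""
    # Knock-out rules first: policy and fraud gates.
    if fraud_score > 0.90:
        return {"action": "decline", "reason": "suspected fraud"}
    if credit_score < 580:
        return {"action": "decline", "reason": "credit policy"}

    # Risk-based pricing: rules translate the credit score into concrete terms.
    if credit_score >= 740:
        decision = {"action": "approve", "apr": 0.049, "limit": 25000}
    elif credit_score >= 660:
        decision = {"action": "approve", "apr": 0.089, "limit": 10000}
    else:
        decision = {"action": "refer_to_underwriter"}

    # Other models refine the action without making the decision on their own.
    if decision["action"] == "approve":
        if early_payoff_score > 0.8:
            decision["apr"] += 0.005          # price in expected early payoff
        if upsell_score > 0.7:
            decision["cross_sell"] = "premium_card"
    return decision

print(loan_decision(credit_score=705, fraud_score=0.12,
                    early_payoff_score=0.85, upsell_score=0.75))
```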

In our next post, we’ll look at how modern decision management platforms, like SMARTS, help you evaluate alternative decision strategies. We’ll explore how you can use decision simulation to find the best course of action.

Evolution of the Rete Algorithm


The Rete Algorithm Demystified blog series attracted a huge crowd.  I want to thank you for your readership!  Following popular demand, let me continue the series and add a few words on the latest and greatest Rete-NT.

What’s new?

Well, this is where I can’t say much without violating Charles’s trade secrets…  Sorry!

So what can I share?

For the evolution of the Rete Algorithm, Charles has focused primarily on runtime performance, looking for ways to accelerate rule evaluations and reduce memory usage.  Mission accomplished.

Faster: With Rete III, the speed increase came with the ability to efficiently handle a larger number of objects per transaction.  With Rete-NT, the speed increase comes from optimizations of complex joins in the Rete network.  As described in part 2, the discrimination tree performs a product of object lists.  The list of all drivers satisfying x, y, z requirements is combined with the list of all vehicles matching some other requirements, for example, producing the Cartesian cross product.  The more patterns you add, the more joins will be added.  This has been referred to as the multi-pattern problem.  The combinatorial explosion is kept under control in the latest algorithm, in a dramatically different way than previously attempted, achieving unprecedented performance.  This algorithm shines when business rules involve complex conditions, which tends to be the case in real-life applications.
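To see why multi-pattern joins are costly, consider this toy Python sketch.  It is emphatically not how a Rete network works (Rete shares and indexes partial matches instead of enumerating combinations), but it shows how quickly the number of join candidates grows as patterns are added.

```python
from itertools import product

# Toy working memory: candidate facts matched by each pattern of one rule.
drivers  = [{"id": d, "age": 18 + d % 50} for d in range(200)]
vehicles = [{"id": v, "hp": 80 + v % 300} for v in range(200)]
policies = [{"id": p, "tier": p % 5} for p in range(50)]

# A naive evaluation of a three-pattern rule enumerates every combination:
# 200 * 200 * 50 = 2,000,000 join candidates before the join tests even run.
# Add one more pattern with 200 facts and the count jumps to 400,000,000.
matches = sum(
    1
    for d, v, p in product(drivers, vehicles, policies)
    if d["age"] < 25 and v["hp"] > 300 and p["tier"] == 0
)
print("join candidates:", len(drivers) * len(vehicles) * len(policies))
print("actual matches:", matches)
```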

Slimmer: This is related to the complex-join speed increase.  The less combinatorial explosion, the lower the memory usage.  It is actually a lot more sophisticated than that, but I am unfortunately bound to secrecy…  The most important thing to remember is that memory usage goes down quite a bit.  This is a concern that software architects can truly appreciate!

James Owen scrutinized the actual performance increase and published the results in InfoWorld last year.  Although the overhead may slow performance down a tiny bit on overly simple tests, the performance gain is huge: an order of magnitude faster than the previous generation.

Does performance matter?

Most rules engines have achieved some excellent levels of runtime performance, so performance for the sake of performance is not an objective in itself.

I am excited about Rete-NT because it improves performance where it is needed.  Previous generations of Rete engines put pressure on rules writers to design rules that avoid multi-patterns as much as possible.  This algorithmic innovation removes a painful hurdle, or at least moves the boundary.  In my career, especially in the past 2 years at Sparkling Logic, I have come across new use cases that do require more flexibility and more expressiveness, which would be hard to cope with using less efficient algorithms.  One thing we can always seem to count on is for complexity to increase…

How does that compare to non-inference engines?

You can over-simplify the inference versus compiled sequential debate by saying that:

  • Rete shines when the number of rules is large and the number of objects in memory is small
  • Non-inference shines when the number of rules is small and the number of objects in memory is large

Rete-NT changes the game a bit by expanding the scope of problems that can be handled effectively.  As a result, non-inference engines dominate a smaller and smaller number of use cases, while Rete keeps its lead on large rulebases.

Data versus Expertise Dilemma


In the decade (or two) I have spent in Decision Management, and Artificial Intelligence at large, I have seen first-hand the war raging between knowledge engineers and data scientists, each defending their approach to supporting ultimately better decisions.  So what is more valuable?  Insight from data?  Or knowledge from the expert?

Mike Loukides wrote a fantastic article called “The unreasonable necessity of subject experts” on the O’Reilly Radar that illustrates this point very well and provides a clear picture as to why and how we would want both.

Data knows stuff that experts don’t

In the world of uncertainty that surrounds us, experts can’t compete with the sophisticated algorithms we have refined over the years.  Their computational capabilities go way beyond the ability of the human brain.  Algorithms can crunch data in relatively little time and uncover correlations that we did not suspect.

Adding to Mike’s numerous examples, the typical diaper shopping use case comes to mind.  Retail transaction analysis uncovered that buyers of diapers at night were very likely to buy beer as well.  The rationale is that husbands help the new mom with shopping when diapers run low at the most inconvenient time of the day: inevitably at night.  The new dad wandering in the grocery store at night ends up getting “his” own supplies: beer.

Mike warns against the pitfalls of data preparation.  A hidden bias can surface in a big way in data samples, whether it over-emphasizes some trends or cleans up traces of unwanted behavior.  If your data is not clean and unbiased, the value of the data insight becomes doubtful.  Skilled data scientists work hard to remove as much bias as they can from the data samples they work on, uncovering valuable correlations.

Data knows too much?

When algorithms find expected correlations, like Mike’s example of pregnant women being interested in baby products, analytics can validate intuition and confirm facts we knew.

When algorithms find unexpected correlations, things become interesting!  With insight that is “not so obvious”, you are at an advantage to market more targeted messages.  Marketing campaigns can yield much better results than “shooting darts in the dark”.

Mike raises an important set of issues: Can we trust the correlation?  How to interpret the correlation?

Mike’s article includes many more examples.  There are tons of football statistics that we smile about during the Super Bowl.  Business Insider posted some even more incredible examples such as:

  • People who dislike licorice are more likely to understand HTML
  • People who like scooped ice cream are more likely to enjoy roller coasters than those that prefer soft serve ice cream
  • People who have never ridden a motorcycle are less likely to be multilingual
  • People who can’t type without looking at the keyboard are more likely to prefer thin-crust pizza to deep-dish

There may be some interesting tidbits of insight in there that you could leverage.  But unless you *understand* the correlation, you may be misled by your data and draw premature conclusions.

The expert shines at understanding

Mike makes a compelling argument that the role of the expert is to interpret the data insight and sort through the red herrings.

This illustrates very well what we have seen in the Decision Management industry with the increased interplay between the “factual” insight and the “logic” that leverages that insight.  Capturing expert-driven business rules is a good thing.  Extracting data insight is a good thing.  But the real value is in combining them.  I think the interplay is much more intimate than purely throwing the insight on the other side of the fence.  You need to ask the right questions as you are building your decisioning logic, and use the available data samples to infer, validate or refine your assumptions.

As Mike concludes, the value resides in the conversation that is raised by experts on top of data.  Being able to bring those to light, and enable further conversations, is how we will be able to understand and improve our systems.

RulesFest 2011 – Andrew Ng: Introduction To Machine Learning


Machine learning is everywhere, but the “how” is not well understood by the masses.  Andrew Ng, professor at Stanford University, visits the conference for an introduction.

When we look at a picture, our brain automatically interprets the information and we recognize it.  For a computer, the work is not that simple.

Looking at pixels individually would be tedious, to say the least.  The size of the problem and the variability of subjects make it almost impossible to find interesting correlations at that level.  That being said, if we break the problem down by applying algorithms that identify smaller components, like wheels or handlebars, then the correlation becomes much simpler.  The presence of both wheels and a handlebar is a decent predictor of a motorcycle — although it could also be a wheelbarrow or dumbbells, I suppose.  But confusion with trees and pasta would be limited, I guess.

This technique is called feature extraction.  You look for features you can estimate and use those features to teach the system to detect the target subject.
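Purely for illustration, here is a toy version of that idea in Python: the “images” are reduced to hand-coded part detections (the features), and a trivial learner just memorizes which feature combination maps to which label.  Real feature extraction and learning are far more sophisticated; the point is only that the system is taught on features, not raw pixels.

```python
# Toy illustration: the classifier never sees raw pixels, only whether
# lower-level detectors found each part (hand-coded here for simplicity).
training_examples = [
    ({"wheels": 2, "handlebar": True,  "pedals": False}, "motorcycle"),
    ({"wheels": 2, "handlebar": True,  "pedals": True},  "bicycle"),
    ({"wheels": 1, "handlebar": True,  "pedals": False}, "wheelbarrow"),
]

def learn(examples):
    """Memorize which combination of extracted features maps to which label."""
    return {tuple(sorted(feats.items())): label for feats, label in examples}

def classify(model, feats):
    return model.get(tuple(sorted(feats.items())), "unknown")

model = learn(training_examples)
print(classify(model, {"wheels": 2, "handlebar": True, "pedals": False}))  # motorcycle
```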

Hierarchical sparse coding allows us to layer those levels of abstraction to learn how to detect basic patterns that, assembled together, can allow the detection of “bigger” pieces, like an eye or an ear, which aggregated together could allow face detection, for example.

This is not specific to image detection.  Dr. Ng explained how it could be used for audio or video.

Analytics for business transactions use a similar technique to facilitate the creation of predictive models, although the underlying algorithms would be different — the algorithm presented by Andrew is actually more adapted to perceptual data.  I talked about some of those principles at Rules Fest 2008, in my introduction to Predictive Analytics, but we did not blog the show back then; I do not have a link to offer…  Let me elaborate a bit on the lingo the modelers use in our space.  In Risk Management, features are also called Variables, Calculations or Characteristics.  Looking at tons of transactions, you may have a hard time detecting patterns of fraud, for example.  But once you aggregate the data to look at the “feature” that is equal to the number of travel-expense transactions in the past 3 months, or the “feature” that is equal to the time on books in months, you have the opportunity to detect interesting correlations.  Those features are then used to train the models — neural nets, linear models, etc.
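Here is a small pandas sketch of that kind of feature aggregation, with made-up accounts and transactions: the raw rows are hard to learn from directly, but rolled-up features such as “travel transactions in the last 3 months” and “months on books” are exactly what gets fed to the models.

```python
import pandas as pd

# Raw transactions: hard to see fraud patterns at this grain.
tx = pd.DataFrame({
    "account":  ["A", "A", "A", "B", "B"],
    "category": ["travel", "travel", "grocery", "travel", "grocery"],
    "amount":   [420.0, 180.0, 62.5, 900.0, 45.0],
    "date": pd.to_datetime(
        ["2011-07-02", "2011-08-15", "2011-09-01", "2011-09-10", "2011-06-20"]),
})
accounts = pd.DataFrame({"account": ["A", "B"],
                         "opened": pd.to_datetime(["2009-01-10", "2011-05-01"])})

as_of = pd.Timestamp("2011-09-30")
recent = tx[(tx["date"] > as_of - pd.DateOffset(months=3)) &
            (tx["category"] == "travel")]

# "Features" (a.k.a. variables / characteristics) aggregated per account.
features = accounts.set_index("account")
features["travel_tx_last_3m"] = recent.groupby("account").size()
features["months_on_books"] = (as_of - features["opened"]).dt.days // 30
features = features.fillna({"travel_tx_last_3m": 0})
print(features[["travel_tx_last_3m", "months_on_books"]])
```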

Andrew debunks typical criticism:

  • Is it better to encode prior knowledge about the structure of images (audio, etc.)?  Linguists argued similarly a couple of decades ago, but Google’s success in translation automation speaks for itself (no pun intended)
  • Unsupervised feature learning cannot currently do X…  The list is long, but over time many of those barriers fall one by one with technology advances

The talk is heavily slanted toward the AI-as-data side rather than the AI-as-knowledge side.  It is good to balance the opposite view offered by Paul Haley on Monday.  As you know, my vision is actually hybrid – I believe that data and experts should share the spotlight, each with its own characteristics, and mixing both can deliver superior value in some cases.  Carlos’s 101 session on Analytics for Rules Writers is a great resource to get started too.  I will share the link as soon as the materials are posted.

Discount Code for Rules Fest 2011


Once again, we are pleased to share with you our discount code for the Rules Fest conference in San Francisco, October 24-26, 2011.

Rules Fest 2011

Feel free to enter code !SparklingLogicRules! to enjoy a 10% discount at checkout, with our compliments!

This year again, I will open the festivities!  My talk will be on Agile Methodology for Business Rules Elicitation.

Carlos and Dr Charles Forgy will team up for a fascinating and *sparkling* debate on the current and future state of Business Rules.  I can’t wait to hear the clashing perspectives of the algorithm guru and the experienced architect!  (I came to realize that none of the adjectives I could come up with for those two people I admire would do them justice.)

BREAKING NEWS: Rules Fest Call for Papers is now Open!


I just got the word from Jason Morris, Chairman of the show.  The show’s website is now open for registration!

Register!

If you have practical experience with Decisioning technologies like

  • Business Rules,
  • Complex Event Processing,
  • Predictive Analytics,
  • Optimization, or
  • Artificial Intelligence,

Then you should consider submitting a paper and join the Rules Fest Speaker Hall of Fame!

Please keep in mind that we are looking for hands-on experience, lessons learned, those kinds of things.  Vendor pitches will not be accepted of course.

The Summer of AI: Watson, Google & more


Do you remember how Artificial Intelligence (AI) became taboo?

Many of us were fascinated by Artificial Intelligence a few decades ago.  This discipline carried extraordinary potential.  We started dreaming about expert systems that would outperform humans, systems that would write themselves, and other fantastic progress outpacing our own capabilities.  We certainly did a great job exciting our imagination, leading to a long list of sci-fi movies and other fantasies.

Then winter came

The technology ended up disappointing the masses.  We wanted to believe in the incredible potential but realized that the current approach was much more limited than we had hoped.  I personally believe that the industry mostly failed in managing expectations.  The technological progress was nothing to be ashamed of.  Lots of techniques, algorithms and new approaches came out of this period but living up to those over-inflated expectations was unrealistic.

I was personally involved in Expert Systems back then.  Neuron Data had a great product: Nexpert Object.  In other posts I might describe some of the projects I did back in the 90’s.  It was very exciting… although, admittedly, the systems did not write themselves.  The skills required to transcode the business expertise were not common.  The learning curve was steep.

It was harsh, certainly, to bury all those efforts and banish the terminology altogether.  The dream did not vanish completely, though.  AI remained alive in the collective imagination.  More movies and books kept satisfying our hunger for that dream of fabricated intelligence that could help humanity — or destroy it, as we enjoy fearing in those movies — that could approach humanity to the point where we could confuse an android with a real person, or that could perform amazing tasks.

AI technology also survived the winter in hibernation.  Those passionate AI researchers looked for more realistic objectives for the technology.  I read once in a book — back in 1997 — a great expression that I have shared more than once with some of you in the past: “From Artificial Intelligence to Intelligent Agents”.  If nothing else, this book helped me realize back then that we needed to change our approach to AI.  Instead of the monolithic expert system, there was an opportunity to add intelligence in small specific tasks distributed over the network.  The budding concept of Decision Services was born.  It may or may not be a coincidence that Blaze Advisor and ILOG JRules were conceived around that time.  The BRMS movement was another perspective on AI, still focused on adding intelligence to our systems but intelligence that came straight from the Business Experts, intelligence that was under their control the whole time.

AI spanned much more than one technology in reality and many other fields of research kept investing and refining the technology for the same purpose of making systems smarter.  It is no wonder that, after a long and rigorous winter, AI is finally able to bubble up to the public once again, this time as a more mature discipline, less ambitious in many ways and more accomplished too.

And now we are at the dawn of the Summer of AI

BRMS and Decision Management are certainly topics I am passionate about but, looking around me, I realize the phenomenal progress and applicability of other techniques (that I am also very interested in, although not as dedicated to).

As proof that AI may have finally become an accepted term for the public (again), I have collected a few pieces of evidence.  The list is long.  I decided to focus on a couple of recent articles.

How could I ignore the Watson phenomenon?  This is most definitely the triggering event for this flood of AI publicity.  Granted, IBM had Deep Blue playing chess in the 90’s, but we may have considered the game too structured to recognize the talent.  Beating the Jeopardy champions is a greater challenge, since it requires more than brute force.  Winning the game requires an amazing ability to deal with a general lack of precision that does require “intelligence”.  For the first time in a very long time, the press was impressed by the performance of the machine, by its intelligence.  Not that the world is desperately looking for greater Jeopardy champions, but the idea that such technology could be used in other contexts where precision is approximate at best, and where data is partially known, is extremely appealing.  Think about call centers or emergency situations where humans are pressed to make decisions when data points are lacking.

Factoid: Joseph Bigus, who wrote the book I quoted earlier, is a Senior Technical Staff Member at the IBM T.J. Watson Research Center.  AI is a small world I keep realizing.

After the wave of press coverage — I was going to say tsunami but decided not to out of respect for our Japanese friends — for the Watson project at IBM, more thoughts converged on the usability and potential usefulness of AI in other areas.  Peter Norvig elaborated on the progress made by AI in this great article.  I like in particular his analysis of the limitations of Expert Systems: the reliance on expert interviews.

Learning turned out to be more important than knowing. In the 1960s and 1970s, many A.I. programs were known as “Expert Systems,” meaning that they were built by interviewing experts in the field (for example, expert physicians for a medical A.I. system) and encoding their knowledge into logical rules that the computer could follow. This approach turned out to be fragile, for several reasons. First, the supply of experts is sparse, and interviewing them is time-consuming. Second, sometimes they are expert at their craft but not expert at explaining how they do it. Third, the resulting systems were often unable to handle situations that went beyond what was anticipated at the time of the interviews.

From a Decision Management perspective, we do face a similar challenge but that would be the topic for another post.

The third proof of the Summer of AI is the recent Turing Award going to Leslie Valiant.  AI is popular again.  Although Leslie’s work is not very recent, he is being recognized now for his contributions to machine learning.

I could go on and on.  You have probably seen articles on AI in general newspapers.  Summer is here.

Man Versus Machine

The main difference between the old days of AI and the new Summer that may be starting now is the role of the Machine.  We dreamt of Machines that would be able to replace Humans.

The possibility that one day Machines will replace humans has of course been at the center of long debates, raising deep issues, going as far as making us question what being human really means. Kurzweil has famously argued that the Singularity is near and will have profound implications for human evolution (we will transcend biology, he claims). On a more negative note, Bill Joy wrote a famous article in Wired in 2000, in which he worried that we will in effect lose control of our technology and run the risk of becoming an endangered species. Recently, the Atlantic published a long article on Mind vs. Machine, which takes a more nuanced approach: yes, Machines may well pass the Turing test, but that does not signify a path toward irrelevance for humans.

The reality in my mind is that we need Machines that can augment Humans.  We need better processing power to supplement our human limitations, but we are not ready yet to let a Machine make the final decision.  Think about healthcare, for example: it is appealing to think that a virtual doctor could have access to the latest and greatest research on every possible topic and would be able to compare and analyze all possible treatments, including the side effects and possible risks.  Hasn’t Star Trek (Voyager) already painted that vision with its holographic doctor?  But in the end, we like that a real person is making those life-and-death decisions, with ethical safeguards we would be hard-pressed to implement completely and accurately for a machine.

John Seely Brown, from the Deloitte Center for the Edge and author of “The Power of Pull”, commented in a recent article in the NY Times that machines that are facile at answering questions only serve to obscure what remains fundamentally human.  My take is that the success of AI resides in the ability to combine both.  If we could combine the Machine’s incredible power with the unique intuition of Humans, we could get the best of both worlds.

 

