
Predictive Analytics

Improve Your Automated Decisions with Decision Simulation


In our last post, we looked at how predictive models are used in automated decisions. A key takeaway from that post is that a prediction is not a decision. Rather, predictive models provide us with key insights based on historical data so we can make more informed decisions.

For example, a predictive model can identify customers that are likely to churn, transactions that are suspicious, and offerings and ads that are likely to have the most appeal. But, based on these predictions, we still need to decide the best response or course of action. A decision combines one or more predictions with business knowledge and expertise to define the appropriate actions.

From Predictions to Decisions

Determining how to take action based on predictions is not trivial. Most likely, there are multiple business options for the actions an organization could take based on a prediction. Consider, for example, charges that are identified as potentially fraudulent: a card issuer could report the case to the fraud team for further investigation, shut down the card to prohibit further charges, or text the cardholder to verify the charge.

Or in the case of customers who have been identified as having a high probability of switching to a competitor, a company may decide to contact them with a special incentive or renewal offer. But the company still needs to decide exactly how many customers will receive the offer and how much to offer. The company could target a flat percentage of customers, or could focus only on those with the highest projected CLTV (customer lifetime value). Targeting too many customers with too large an offer might be too expensive to be worthwhile. The possible actions an organization can take based on predictions have different costs and benefits that need to be evaluated to determine the optimal decision. This is where decision simulation is applicable. Decision simulations help you identify the best decision strategy from among a set of alternatives.

Measure Your Decision Quality with KPIs and Metrics

The “best” decision strategy means the one that most closely meets your organization’s objectives. By defining KPIs and metrics that measure the quality of the decision in relation to these objectives, we have a basis to compare alternative decision approaches. Ideally these KPIs were identified early on, when you first decided to automate the decision.

Decision KPIs give us a clear understanding of how decision performance is related to business performance. They provide the basis for evaluating decision alternatives. To compare alternatives, you can run simulations using historical data. Using simulations you can compare one decision strategy to another, or you can compare how a given strategy performs on each of your customer segments as represented in your data.

Returning to the above customer churn example, we may decide we want to target customers who have an 80% or greater probability of churn based on our predictive model. One option would be to offer them a special 25% discount to attempt to re-engage and keep them as a customer. We can run a simulation against our historical data to learn how many customers fall into this bucket. From there, we can evaluate how much the discount offer would likely cost us. We can run multiple simulations using different thresholds, offers, and combinations until we find the best decision approach to deploy.
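To make this concrete, here is a minimal sketch of such a simulation in Python. The customer records, the acceptance rate, and the assumption that the offer's cost is proportional to projected CLTV are all hypothetical simplifications, not a prescribed cost model; the threshold and discount values are simply the ones from the example above.

```python
# Minimal sketch of a decision simulation over historical data (hypothetical records).
# Each record carries the model's churn probability and a projected lifetime value.
customers = [
    {"id": 1, "churn_probability": 0.85, "projected_cltv": 1200.0},
    {"id": 2, "churn_probability": 0.40, "projected_cltv": 800.0},
    {"id": 3, "churn_probability": 0.92, "projected_cltv": 300.0},
    # ... in practice, thousands of historical records
]

def simulate_offer(records, threshold, discount, accept_rate):
    """Estimate cost and retained value for one decision strategy (simplified cost model)."""
    targeted = [c for c in records if c["churn_probability"] >= threshold]
    estimated_cost = sum(c["projected_cltv"] * discount for c in targeted) * accept_rate
    retained_value = sum(c["projected_cltv"] for c in targeted) * accept_rate
    return {"targeted": len(targeted),
            "estimated_cost": estimated_cost,
            "estimated_retained_value": retained_value}

# Compare alternative strategies by varying the churn threshold and the discount.
for threshold in (0.7, 0.8, 0.9):
    for discount in (0.15, 0.25):
        print(threshold, discount, simulate_offer(customers, threshold, discount, accept_rate=0.3))
```

In a real simulation, the acceptance rate and the cost model would themselves come from historical data, or would be varied as part of the experiment.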

Decision Simulation Helps You Evaluate Alternative Decision Strategies

Decision simulations help us evaluate alternative decision strategies to narrow to the best approach. Modern decision management technologies, like SMARTS, make it easy to set up and run these simulations, even on very large data samples. Of course, the ultimate quality of the selected decision approach is related to its success once deployed: how many customers do we manage to retain, and at what cost?

Once we deploy a decision, we can monitor and track the KPIs, but we have no way of knowing whether customers who did not accept our offer would instead have accepted a different offer, or whether customers who did accept would also have accepted a 20% rather than a 25% discount. To answer these questions we need to use Champion / Challenger experiments. We'll cover how Champion / Challenger works with decision management in an upcoming post.

How Predictive Models Improve Automated Decisions


Agility is a key focus and benefit in the discipline of decision management. Agility, in the decision management context, means being able to rapidly adjust and respond to business and market-driven changes. Decision management technologies allow you to separate the business logic from your systems and applications. Business analysts then manage and make changes to the business logic in a separate environment. And they can deploy their changes with minimal IT involvement and without a full software development cycle. With decision management, changes can be implemented in a fraction of the time required to change traditional applications. This ability to address frequently changing and new requirements that impact key automated decisions makes your business more agile.
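As a rough illustration of what "business logic separated from application code" can look like, here is a minimal sketch in Python. The rule list, field names, and thresholds are hypothetical; in a decision management platform the rules would live in a managed repository that analysts edit and redeploy, rather than in the application source.

```python
# Minimal sketch: business logic expressed as data, separate from application code.
# In a decision management platform these rules live in a managed repository;
# here they are just an ordered list of (condition, action) pairs for illustration.
rules = [
    {"if": lambda case: case["credit_score"] < 600, "then": "decline"},
    {"if": lambda case: case["credit_score"] < 680, "then": "manual_review"},
    {"if": lambda case: True,                       "then": "approve"},
]

def decide(case):
    """Return the action of the first rule whose condition matches the case."""
    for rule in rules:
        if rule["if"](case):
            return rule["then"]

print(decide({"credit_score": 640}))  # -> manual_review
```

Because the decision logic is data rather than code, changing a threshold or adding a rule does not require a full software development cycle for the calling application.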

Being able to rapidly make and deploy changes is important. But how do you know what changes to make? Some changes, like those defined by regulations and contracts, are straightforward. If you implement the regulations or contract provisions accurately, the automated decision will produce the required results and therefore, make good decisions. However, many decisions don’t have such a direct and obvious solution.

When Agility Isn’t Enough

Frequently decisions depend on customer behavior, market dynamics, environmental influences or other external factors. As a result, these decisions involve some degree of uncertainty. For example, in a credit risk decision, you’re typically determining whether or not to approve a credit application and where to set the credit limit and interest rate. How do organizations determine the best decisions to help them gain customers while minimizing risk? The same applies to marketing decisions like making upsell and cross-sell offers. Which potential offer would the customer most likely accept?

Predictive Models Provide Data Insight

This is where predictive models help. Predictive models combine vast amounts of data and sophisticated analytic techniques to make predictions about the future. They help us reduce uncertainty and make better decisions. They do this by identifying patterns in historical data that lead to specific outcomes and detecting those same patterns in future transactions and customer interactions.

Predictive models guide many decisions that impact our daily lives. Your credit card issuer has likely contacted you on one or more occasions asking you to confirm recent transactions that were outside of your normal spending patterns. When you shop online, retailers suggest products you might want to purchase based on your past purchases or the items in your shopping cart. And you probably notice familiar ads displayed on websites you visit. These ads are directly related to sites you previously visited to encourage you to return and complete your purchase. All of these are based on predictive models that are used in the context of specific decisions.

How Predictive Models Are Built

Predictive modeling involves creating a model that mathematically represents the underlying associations between attributes in historical data. The attributes selected are those that influence results and can be used to create a prediction. For example, to predict the likelihood of a future sale, useful predictors might be the customer’s age, location, gender, and purchase history. Or to predict customer churn we might consider customer behavior data such as the number of complaints in the last 6 months, the number of support tickets over the last month, and the number of months the person has been a customer, as well as demographic data such as the customer’s age, location, and gender.

Assuming we have a sufficient amount of historical data available that includes the actual results (whether or not a customer actually purchased in the first example, or churned in the second), we can use this data to create a predictive model that maps the input data elements (predictors) to the output data element (target) to make a prediction about our future customers.

Typically data scientists build predictive models through an iterative process that involves:

  • Collecting and preparing the data (and addressing data quality issues)
  • Exploring and analyzing the data to detect anomalies and outliers and identify meaningful trends and patterns
  • Building the model using machine learning algorithms and statistical techniques like regression analysis
  • Testing and validating the model to determine its accuracy
Figure: Data Science Process (by Farcaster at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=40129394)
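As a rough sketch of the steps above, here is what building and validating a simple churn model might look like in Python with scikit-learn. The file name and column names (complaints_6m, tickets_1m, tenure_months, age, churned) are hypothetical placeholders for your own historical data, and logistic regression stands in for whatever technique your data scientists would actually choose.

```python
# Minimal sketch of building and validating a churn model (hypothetical data).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# 1. Collect and prepare the data (columns are placeholders for real predictors).
customers = pd.read_csv("historical_customers.csv")
predictors = ["complaints_6m", "tickets_1m", "tenure_months", "age"]
X = customers[predictors]
y = customers["churned"]          # actual outcome: did the customer churn?

# 2./3. Split the data, then build the model with a statistical technique
#       (here, logistic regression).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Test and validate the model on data it has not seen.
scores = model.predict_proba(X_test)[:, 1]     # predicted probability of churn
print("AUC:", roc_auc_score(y_test, scores))
```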

Once the model is built and validated, it can be deployed and used in real time to inform automated decisions.

Deploying Predictive Models in Automated Decisions

While predictive models can give us sound predictions and scores, we still need to decide how to act on them. Modern decision management platforms like SMARTS Decision Manager let you combine predictive models that inform your decisions with business rules that translate those decisions into concrete actions. SMARTS includes built-in predictive analytics capabilities and also lets you use models built using other analytics tools such as SAS, SPSS and R.

The use of predictive models is rapidly expanding and changing the way we do business. But it’s important to understand that predictions aren’t decisions! Real world business decisions often include more than one predictive model. For example, a fraud decision might include a predictive model that determines the likelihood that a transaction originated from an account that was taken over. It might also include a model that determines the likelihood that a transaction went into an account that was compromised. A loan origination decision will include credit scoring models and fraud scoring models. It may also include other models to predict the likelihood the customer will pay back early, or the likelihood they will purchase additional products and services (up-sell). Business rules are used to leverage the scores from these models in a decision that seeks to maximize return while minimizing risk.
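As a hedged illustration of how business rules can leverage several model scores in one decision, here is a minimal sketch in Python. The score names, thresholds, and actions are hypothetical examples, not the logic of any particular product or lender.

```python
# Minimal sketch: business rules acting on several predictive model scores
# (hypothetical thresholds and actions).
def loan_decision(application, credit_score, fraud_score, upsell_score):
    """Combine model scores with business rules into a concrete action."""
    if fraud_score > 0.8:
        return "refer_to_fraud_team"
    if credit_score < 580:
        return "decline"
    if credit_score < 650 and application["requested_amount"] > 10000:
        return "manual_review"
    if upsell_score > 0.7:
        return "approve_with_upsell_offer"
    return "approve"

print(loan_decision({"requested_amount": 15000},
                    credit_score=630, fraud_score=0.1, upsell_score=0.9))
# -> manual_review: acceptable credit but a large requested amount, so a human takes a look
```

The predictions reduce uncertainty; the rules encode how the business wants to trade return against risk.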

In our next post, we’ll look at how modern decision management platforms, like SMARTS, help you evaluate alternative decision strategies. We’ll explore how you can use decision simulation to find the best course of action.

The Convergence of Data Analysts and Business Analysts


Decision Management has been a discipline for Business Analysts for decades now. Data Scientists have historically been avid users of Analytic Workbenches. The divide between these two crowds has been crossed by sending predictive model specifications across, from the latter group to the former. These specifications could be in the form of paper, stating the formula to implement, or in electronic format that could be seamlessly imported. This is why PMML (Predictive Model Markup Language) has proven to be a useful standard in our industry.

The fact is that the divide that was artificially created between these two groups is not as deep as we originally thought.  There have been reasons to cross the divide, and both groups have seen significant benefits in doing so.

In this post, I will highlight a couple of use cases that illustrate my point.


From Decision Management to Prescriptive Analytics


A number of organizations have adopted the idea of applying the Decision Management approach and technologies to problems such as risk, fraud, eligibility, maximization, and more. If you read this blog, you probably already know what Decision Management brings to the table.

Decision Management is all about automating repeatable decisions in a maintainable way so that they can be optimized in a continuous fashion.

Decision systems can use Business Rules Management Systems (BRMS), but they do not need to restrict themselves to just that: they can also be built on Predictive Analytics technology, or they can even consist of a combination of both. The increasing availability of data that can be used to test and optimize decisions, or to extract insights from, makes it possible for decision-centric applications to combine expertise and data to levels not seen in previous generations of applications.

In this post, we’ll outline the evolution from pure Business Rules Systems to Prescriptive Analytics platforms for decision-centric applications.

Making informed decisions (part 2)


In part 1, we saw that we could use knowledge, experience and intuition to build a model serving as a basis for making decisions. But when historical data is available, we can do more…

Predictive Analytics

When large amounts of historical data are available (and the larger, the better), a predictive model can be built using predictive analytics: this basically uses statistics to comb through the data and find patterns. Such patterns can of course be found more easily when they occur frequently. It can be quite useful to make use of the results of BI (if available) to guide the predictive analytics algorithms so that they find the proper correlations.

When successful, the predictive model, applied to new cases, will predict a given outcome based on past experience. Automation of the decision making, using the predictive model, can be performed by building business rules from that model.
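One common way to do this is to train an interpretable model, such as a shallow decision tree, and read the learned splits back as candidate business rules. Here is a minimal sketch with scikit-learn; the file name and column names are hypothetical placeholders.

```python
# Minimal sketch: derive candidate business rules from a predictive model (hypothetical data).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

history = pd.read_csv("historical_cases.csv")
predictors = ["complaints_6m", "tenure_months", "age"]
tree = DecisionTreeClassifier(max_depth=3).fit(history[predictors], history["churned"])

# Each path from root to leaf reads as an if/then rule that an analyst can review,
# adjust with existing knowledge, and deploy in a rules engine.
print(export_text(tree, feature_names=predictors))
```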

And the resulting business rules can, as usual, be enriched using existing knowledge or future knowledge acquired over time (from human experience, or other predictive analytics “campaigns”).

Prescriptive Analytics

When the results of predictive analytics are used in a number of simulation scenarios, we end up with a number of possible outcomes, some of them better than others (and here we are talking about business performance).

These simulation scenarios may be run continually, as new historical data becomes available, in order to constantly optimize the predictive models and keep them aligned with current reality.

The possibility of obtaining a set of candidate decisions that aim to maximize an expected outcome, all based on historical data (and possibly also on existing knowledge), leads to a real prescription: “something that is suggested as a way to do something or to make something happen” (Merriam-Webster dictionary).
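In code, the prescriptive step can be as simple as scoring every candidate action for a case and suggesting the one with the best expected business outcome. Here is a minimal sketch; the actions, acceptance probabilities, and payoff model are hypothetical assumptions for illustration only.

```python
# Minimal sketch of a prescriptive step: suggest the action with the best expected outcome.
# The actions, acceptance probabilities, and value model are hypothetical assumptions.
def expected_value(action, case):
    accept_probability = {"no_offer": 0.0, "discount_10": 0.3, "discount_25": 0.5}[action]
    discount = {"no_offer": 0.0, "discount_10": 0.10, "discount_25": 0.25}[action]
    retained_value = case["projected_cltv"] * accept_probability
    offer_cost = case["projected_cltv"] * discount * accept_probability
    return retained_value - offer_cost

def prescribe(case, actions=("no_offer", "discount_10", "discount_25")):
    """Return the candidate action with the highest expected value for this case."""
    return max(actions, key=lambda action: expected_value(action, case))

print(prescribe({"projected_cltv": 1000.0}))  # -> discount_25 under these assumptions
```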

Automatically providing advice on decisions to make to reach a given target is a very appealing and powerful idea: you don’t just rely on “gut feeling” or experience or past knowledge; you rely on all of these, simultaneously. And the suggestions evolve as time passes, allowing quick refocusing.

Making informed decisions

The ability to make decisions based on so many different aspects that evolve over time is already something we, humans, do at our own level (both consciously and unconsciously).

Scaling this up to tactical and strategic levels in the Enterprise requires the use of prescriptive analytics, backed by knowledge, experience, and big data. So that we can have some comfort that we made those decisions based on all that we had at our disposal.

Now, should I eat some Thai food for lunch, or some Japanese food?

 

Making informed decisions (part 1)



We spend our lives, both personal and professional, making decisions, all day long; some without consequences, and some with long-lasting and even perhaps game-changing ones.

Should I eat some Thai food for lunch, or some Japanese food?

Do we make targeted offers to customers that have been with us for more than 2 years, or to those that have been with us for more than 5?

How do we reduce the time it takes us to fix defective devices?

Although sometimes not making a decision is worse than making the wrong one, we all strive to make the best decisions possible. And to make the best decisions, we rely on experience and whatever information is at hand. With experience in the subject matter, decisions can be made very quickly; when the matter is new or information is scarce, we usually require more time to evaluate a number of possibilities, to make a few computations, to balance the pros and cons.

All this is part of our daily lives. But when a large number of decisions need to be made in a short amount of time, or when the data available to us is limited, or on the other hand enormous, automation can come to the rescue. But how can we make informed decisions at a large scale?

Decision Management Predictions for 2014


As is customary, let me share what I foresee as being big this new year… I would like to focus on just three points that strike me as important, in no particular order.

1.  Predictive Analytics

Well, of course, we have been seeing that trend develop for a while.  This is certainly not a surprising entry in this list.

The fact is that we see more and more projects combining predictive analytics and business rules.  What is really interesting to me is the fact that more and more business analysts are getting trained to develop some of these predictive models.

Given the data scientist shortage, it makes total sense. If you do not have a modeling team in-house, or if it is swamped with high-priority projects, you may as well look for other ways to leverage the available data to inform your decisions.

I am optimistic that we will see more business analysts add predictive analytics to their skill set.

2. Business Intelligence

Sticking with analytics at large, I see also a greater synergy between business intelligence and business rules.  We have talked about ‘Operational BI’ for a while now, but there seems to be a lot of activity finally taking shape.

I believe that there will be more projects that actually combine both in 2014, allowing companies to act on the insights gained from monitoring historical trends.

3. Internet of things

When I was still in my early years, we dreamed of ‘intelligent’ equipment, cars and other things that would make our lives easier. While embedding computers in all the things around the house was cost-prohibitive for the mass market back then, the Cloud is now making it a reality.

The beauty of having ‘things’ that can communicate is that they are immediately candidates for ‘higher intelligence’. By hooking them up with a decision service in the cloud, we can seamlessly allow them to respond more appropriately and subtly to the signals they sense around them. They can better adapt, since changing their behavior does not involve any hardware changes, or more generically any changes in-situ. The intelligence is located in the cloud, readily available for all connected things.

I am totally in awe of the progress we have made thus far, and the potential for a global ‘increase of intelligence’ of the things around us. The future is now!

Data versus Expertise Dilemma


In the decade (or two) I have spent in Decision Management, and Artificial Intelligence at large, I have seen first-hand the war raging between knowledge engineers and data scientists, each defending their approach to supporting ultimately better decisions. So what is more valuable? Insight from data? Or knowledge from the expert?

Mike Loukides wrote a fantastic article called “The unreasonable necessity of subject experts” on the O’Reilly Radar, which illustrates this point very well and provides a clear picture as to why and how we would want both.

Data knows stuff that experts don’t

In the world of uncertainty that surrounds us, experts can’t compete with the sophisticated algorithms we have refined over the years. Their computational capabilities go way beyond the ability of the human brain. Algorithms can crunch data in relatively little time and uncover correlations that we did not suspect.

Adding to Mike’s numerous examples, the typical diaper shopping use case comes to mind. Retail transaction analysis uncovered that buyers of diapers at night were very likely to buy beer as well. The rationale is that husbands help the new mom with shopping when diapers run low at the most inconvenient time of the day: inevitably at night. The new dad wandering in the grocery store at night ends up getting “his” own supplies: beer.

Mike warns against the pitfalls of data preparation. A hidden bias can surface in a big way in data samples, whether it over-emphasizes some trends or cleans up traces of unwanted behavior. If your data is not clean and unbiased, the value of the data insight becomes doubtful. Skilled data scientists work hard to remove as much bias as they can from the data sample they work on, uncovering valuable correlations.

 Data knows too much?

When algorithms find expected correlations, like Mike’s example of pregnant women being interested in baby products, analytics can validate intuition and confirm facts we already knew.

When algorithms find unexpected correlations, things become interesting!  With insight that is “not so obvious”, you are at an advantage to market more targeted messages.  Marketing campaigns can yield much better results than “shooting darts in the dark”.

Mike raises an important set of issues: Can we trust the correlation? How should we interpret the correlation?

Mike’s article includes many more examples.  There are tons of football statistics that we smile about during the Super Bowl.  Business Insider posted some even more incredible examples such as:

  • People who dislike licorice are more likely to understand HTML
  • People who like scooped ice cream are more likely to enjoy roller coasters than those that prefer soft serve ice cream
  • People who have never ridden a motorcycle are less likely to be multilingual
  • People who can’t type without looking at the keyboard are more likely to prefer thin-crust pizza to deep-dish

There may be some interesting tidbits of insight in there that you could leverage. But unless you *understand* the correlation, you may be misled by your data and draw some premature conclusions.

Experts shine at understanding

Mike makes a compelling argument that the role of the expert is to interpret the data insight and sort through the red herrings.

This illustrates very well what we have seen in the Decision Management industry with the increased interplay between the “factual” insight and the “logic” that leverages that insight.  Capturing expert-driven business rules is a good thing.  Extracting data insight is a good thing.  But the real value is in combining them.  I think the interplay is much more intimate than purely throwing the insight on the other side of the fence.  You need to ask the right questions as you are building your decisioning logic, and use the available data samples to infer, validate or refine your assumptions.

As Mike concludes, the value resides in the conversation that is raised by experts on top of data.  Being able to bring those to light, and enable further conversations, is how we will be able to understand and improve our systems.

