In our last post, we looked at how predictive models are used in automated decisions. A key takeaway from that post is that a prediction is not a decision. Rather, predictive models provide us with key insights based on historical data so we can make more informed decisions.
For example, a predictive model can identify customers that are likely to churn, transactions that are suspicious, and offerings and ads that are likely to have the most appeal. But, based on these predictions, we still need to decide the best response or course of action. A decision combines one or more predictions with business knowledge and expertise to define the appropriate actions.
From Predictions to Decisions
Determining how to take action based on predictions is not trivial. Most likely, there are multiple business options for the actions an organization could take based on a prediction. Consider, for example, charges that are identified as potentially fraudulent: a card issuer could report the case to the fraud team for further investigation, shut down the card to prohibit further charges, or text the cardholder to verify the charge.
Or in the case of customers who have been identified as having a high probability of switching to a competitor, a company may decide to contact them with a special incentive or renewal offer. But the company still needs to decide exactly how many customers will receive the offer and how much to offer. The company could target a flat percentage of customers, or could focus only on those with the highest projected CLTV (customer lifetime value). Targeting too many customers with too large an offer might be too expensive to be worthwhile. The possible actions an organization can take based on predictions have different costs and benefits that need to be evaluated to determine the optimal decision. This is where decision simulation is applicable. Decision simulations help you identify the best decision strategy from amongst a set of alternatives.
Measure Your Decision Quality with KPIs and Metrics
The “best” decision strategy means the one that most closely meets your organization’s objectives. By defining KPIs and metrics that measure the quality of the decision in relation to these objectives, we have a basis to compare alternative decision approaches. Ideally these KPIs were identified early on, when you first decided to automate the decision.
Decision KPIs give us a clear understanding of how decision performance is related to business performance. They provide the basis for evaluating decision alternatives. To compare alternatives, you can run simulations using historical data. Using simulations you can compare one decision strategy to another, or you can compare how a given strategy performs on each of your customer segments as represented in your data.
Returning to the above customer churn example, we may decide we want to target customers who have an 80% or greater probability of churn based on our predictive model. One option would be to offer them a special 25% discount to attempt to re-engage and keep them as a customer. We can run a simulation against our historical data to learn how many customers fall into this bucket. From there, we can evaluate how much the discount offer would likely cost us. We can run multiple simulations using different thresholds, offers, and combinations until we find the best decision approach to deploy.
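The simulation described above can be sketched in a few lines. This is a minimal illustration, not a real simulation engine: the field names (`churn_probability`, `cltv`), the customer sample, and the cost model (offer cost approximated as a discount on projected CLTV) are all assumptions made for the example.

```python
# Minimal sketch of a churn-offer simulation over historical data.
# Field names, sample data, and the cost model are illustrative.

def simulate(customers, threshold, discount):
    """Target customers above a churn-probability threshold with a discount,
    then estimate the offer cost and the customer value at stake."""
    targeted = [c for c in customers if c["churn_probability"] >= threshold]
    cost = sum(c["cltv"] * discount for c in targeted)
    value_at_stake = sum(c["cltv"] for c in targeted)
    return {"targeted": len(targeted),
            "offer_cost": round(cost, 2),
            "value_at_stake": round(value_at_stake, 2)}

# A tiny stand-in for the historical data set.
customers = [
    {"churn_probability": 0.85, "cltv": 1200.0},
    {"churn_probability": 0.60, "cltv": 3000.0},
    {"churn_probability": 0.92, "cltv": 800.0},
]

# Compare two candidate strategies on the same historical sample.
for threshold, discount in [(0.80, 0.25), (0.90, 0.20)]:
    print(threshold, discount, simulate(customers, threshold, discount))
```

Each run of `simulate` corresponds to one decision strategy; sweeping over thresholds and discounts is the "multiple simulations" step, with the KPIs from each run providing the basis for comparison.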
Decision Simulation Helps You Evaluate Alternative Decision Strategies
Decision simulations help us evaluate alternative decision strategies to narrow in on the best approach. Modern decision management technologies, like SMARTS, make it easy to set up and run these simulations, even on very large data samples. Of course, the ultimate quality of the selected decision approach is related to its success once deployed: how many customers do we manage to retain, and at what cost?
Once we deploy a decision we can monitor and track the KPIs but we have no way of knowing whether customers who did not accept our offer would have instead accepted a different offer. Or whether customers who did accept would also have accepted a 20% rather than a 25% discount. To answer these questions we need to use Champion / Challenger experiments. We’ll cover how Champion / Challenger works with decision management in an upcoming post.
In part 1, we saw that we could use knowledge, experience and intuition to build a model serving as a basis for making decisions. But when historical data is available, we can do more…
When large amounts of historical data are available (and the larger, the better), a predictive model can be built using predictive analytics: this basically uses statistics to comb through the data and find patterns. Such patterns can of course be found more easily when they occur frequently. It can be quite useful to make use of the results of BI (if available) to guide the predictive analytics algorithms so that they find the proper correlations.
When successful, the predictive model, applied to new cases, will predict a given outcome based on past experience. Automation of the decision making, using the predictive model, can be performed by building business rules from that model.
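One way to picture "building business rules from the model" is to layer explicit rules on top of the model's score. The sketch below assumes a hypothetical `churn_score` function standing in for a trained model, and invented segment names and thresholds; it only illustrates the shape of the pattern, where the prediction informs the decision but business rules determine the action.

```python
def churn_score(customer):
    # Hypothetical stand-in for a trained predictive model's output.
    return 0.9 if customer["months_inactive"] > 6 else 0.2

def decide(customer):
    """Business rules layered on top of the model's prediction:
    the prediction is an input to the decision, not the decision itself."""
    score = churn_score(customer)
    if score >= 0.8 and customer["segment"] == "premium":
        return "call_from_account_manager"
    if score >= 0.8:
        return "send_discount_offer"
    return "no_action"

print(decide({"months_inactive": 8, "segment": "premium"}))
print(decide({"months_inactive": 2, "segment": "basic"}))
```

As the post notes, these rules can then be enriched over time with additional knowledge without retraining the model.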
And the resulting business rules can, as usual, be enriched using existing knowledge or future knowledge acquired over time (from human experience, or other predictive analytics “campaigns”).
When the results of predictive analytics are used in a number of simulation scenarios, we end up with a number of possible outcomes, some of them better than others (and here we are talking business performance).
These simulation scenarios may be run continually, as new historical data becomes available, in order to constantly optimize the predictive models and keep them aligned with the most current reality.
The possibility of obtaining a number of possible decisions trying to maximize an expected outcome, all based on historical data (and possibly also on existing knowledge) leads to a real prescription: “something that is suggested as a way to do something or to make something happen” (Merriam-Webster dictionary).
Automatically providing advice on decisions to make to reach a given target is a very appealing and powerful idea: you don’t just rely on “gut feeling” or experience or past knowledge; you rely on all of these, simultaneously. And the suggestions evolve as time passes, allowing quick refocusing.
Making informed decisions
The ability to make decisions based on so many different aspects that evolve over time is something we humans already do at our own level (both consciously and unconsciously).
Scaling this up to tactical and strategic levels in the Enterprise requires the use of prescriptive analytics, backed by knowledge, experience, and big data. So that we can have some comfort that we made those decisions based on all that we had at our disposal.
Now, should I eat some Thai food for lunch, or some Japanese food?
We spend our lives, both personal and professional, making decisions, all day long; some without consequences, and some with long-lasting and even perhaps game-changing ones.
Should I eat some Thai food for lunch, or some Japanese food?
Do we make targeted offers to customers that have been with us for more than 2 years, or to those that have been with us for more than 5?
How do we reduce the time it takes us to fix defective devices?
Although sometimes not making a decision is worse than making the wrong one, we all strive to make the best decisions possible. And to make the best decisions, we rely on experience and whatever information is at hand. With experience in the subject matter, decisions can be made very quickly; when the matter is new or information is scarce, we usually require more time to evaluate a number of possibilities, to make a few computations, to balance the pros and cons.
All this is part of our daily lives. But when a large number of decisions need to be made in a short amount of time, or when the data available to us is limited, or on the other hand enormous, automation can come to the rescue. But how can we make informed decisions at a large scale? Read more…
Last week we jointly hosted a webinar with our consulting and implementation partner, Mariner. Shash Hedge, BI Solution Specialist from Mariner, described operational BI, its challenges, and some traditional and recent implementation approaches. He concluded with a few case studies of operational BI projects that were missing an important piece — the ability to make decisions based on the operational insight provided by the system.
Operational BI systems provide critical insight on business operations and enable your front-line workers to make more informed decisions. But as Shash highlighted, insight delivered in the right format, to the right people, at the right time is often not enough; you need to make decisions based on that insight in order to take action…
I led the second half of the webinar, introducing decision management and describing how it complements operational BI. Watch the recorded webinar to learn more.
The recording is a bit rough when the video gets to my part; it sounds like I am presenting from another country! We’re planning another joint webinar in May where we will cover the topic in more depth and demonstrate how these two technologies complement each other. Stay tuned for dates and registration information. I’m sure we’ll get the sound issues resolved next time!
As is customary, let me share what I foresee as being big this new year… I would like to focus on just three points that strike me as important, in no particular order.
1. Predictive Analytics
Well, of course, we have been seeing that trend develop for a while. This is certainly not a surprising entry in this list.
The fact is that we see more and more projects combining predictive analytics and business rules. What is really interesting to me is the fact that more and more business analysts are getting trained to develop some of these predictive models.
Given the data scientist shortage, it makes total sense. If you do not have a modeling team in-house, or if it is swamped with high-priority projects, you may as well look for other ways to leverage the available data to inform your decisions.
I am optimistic that we will see more business analysts add predictive analytics to their skill set.
2. Business Intelligence
Sticking with analytics at large, I also see a greater synergy between business intelligence and business rules. We have talked about ‘Operational BI’ for a while now, but there seems to be a lot of activity finally taking shape.
I believe that there will be more projects that actually combine both in 2014, allowing companies to act on the insight gained from monitoring historical trends.
3. Internet of things
When I was still in my early years, we dreamed of ‘intelligent’ equipment, cars and other things that would make our lives easier. While embedding computers in all things around the house was cost-prohibitive for the mass market back then, the Cloud is now making it a reality.
The beauty of having ‘things’ that can communicate is that they are immediately candidates for ‘higher intelligence’. By hooking them up with a decision service in the cloud, we can seamlessly allow them to respond more appropriately and subtly to the signals they sense around them. They can better adapt, since changing their behavior does not involve any hardware changes, or more generally any changes in situ. The intelligence is located in the cloud, readily available to all connected things.
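The pattern is simple to picture: the device only reports its signals and applies whatever action comes back. In the sketch below, `cloud_decision_service` is a local stand-in for the remote decision service (in practice this would be an HTTP call to a hosted endpoint); the signal names, thresholds, and actions are all invented for illustration.

```python
# Sketch of a 'thing' delegating its behavior to a decision service.
# cloud_decision_service stands in for a remote call to a hypothetical
# cloud endpoint; signal names, thresholds, and actions are illustrative.

def cloud_decision_service(signals):
    # The decision logic lives in the cloud: updating these rules changes
    # every connected device's behavior with no in-situ hardware change.
    if signals["occupied"] and signals["temperature_c"] > 28:
        return "start_cooling"
    if not signals["occupied"]:
        return "power_save"
    return "idle"

# The device itself only senses and acts.
device_signals = {"temperature_c": 31.0, "occupied": True}
action = cloud_decision_service(device_signals)
print(action)
```

Because the rules are centralized, a single change to the service's logic immediately adjusts the behavior of every connected device.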
I am in awe of the progress we have made thus far, and the potential for a global ‘increase of intelligence’ of the things around us. The future is now!
How do we prioritize our project portfolio according to our business objectives?
Why are our customers buying mountain bikes and not city bikes?
Couldn’t we try to sell more of those top-brand bagels instead of regular sandwiches?
What is the best time to alert our customers about our deals on Black Friday to maximize our profits?
When should I buy my flight tickets to Europe to get the best deal?
From the largest company to the individual, we all naturally strive to maximize, optimize, get better returns, reduce turnover, pay less…
Automating decisions has its own Return on Investment (ROI), but it is only the very beginning of a Decision Management transformation. The end goal should be to improve your decisions. Having the underlying decision logic out of the system code gives you opportunities to analyze, understand and experiment, which was not really possible before.
It used to take time to ensure that your decision logic did what it was supposed to do. Business rules had to be implemented as code, then compiled, then tested in QA, then deployed, then eventually executed for real, and the resulting production reports would tell you if there was a problem.
The sooner you actually see the effect of those business rules on your production data, the sooner you can correct the course of action, or feel safe about pushing those rules into production.
What you can do… is to operationalize the gathering of data from your production systems, and feed it into your decision management system to see how your business rules will be applied.
Measure early, measure often
Testing is good, and allows you to reduce the typos and logic errors in your business rules. You can see the raw impact of your decision logic before it gets out the door. What will make a key difference to your bottom line, though, is how this new or updated decision logic behaves in aggregate.
What you can do… is to measure Key Performance Indicators (KPIs) in your systems. KPIs are aggregated metrics that measure your business performance, your success. For example, you might care about the distribution of Approve, Decline and Refer decisions. But that datapoint alone is not sufficient: you want to make sure you decline the bad risk and keep the good risk, while at the same time not undercutting your revenue. In the fraud case study I presented with eBay, we had a different set of key metrics that were critical to the fraud expert: catch rate and hit rate. Whatever those KPIs might be for your business, make sure you define them carefully, and that you continuously measure the progress you make towards them.
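The KPIs named above are straightforward to compute over a batch of decision outcomes. The sketch below uses the usual fraud-screening definitions of catch rate and hit rate; the record layout and sample data are assumptions made for the example.

```python
# Sketch of decision KPIs over a batch of decision outcomes.
# The record layout and sample data are illustrative.
from collections import Counter

def decision_distribution(records):
    """Share of approve / decline / refer decisions."""
    counts = Counter(r["decision"] for r in records)
    total = len(records)
    return {d: counts[d] / total for d in ("approve", "decline", "refer")}

def catch_rate(records):
    """Fraction of actual fraud cases that the rules flagged (did not approve)."""
    fraud = [r for r in records if r["actual_fraud"]]
    return sum(r["decision"] != "approve" for r in fraud) / len(fraud)

def hit_rate(records):
    """Fraction of flagged cases that were actually fraud."""
    flagged = [r for r in records if r["decision"] != "approve"]
    return sum(r["actual_fraud"] for r in flagged) / len(flagged)

records = [
    {"decision": "approve", "actual_fraud": False},
    {"decision": "decline", "actual_fraud": True},
    {"decision": "refer",   "actual_fraud": False},
    {"decision": "decline", "actual_fraud": True},
]
print(decision_distribution(records), catch_rate(records), hit_rate(records))
```

Running these metrics continuously over production outcomes, rather than once at deployment, is the "measure early, measure often" discipline the next section argues for.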
Look for more
There is what you see in the KPI reports, and there is what you don’t see… Why not get some extra help from the processing power of analytics? It will likely not know better than you, but it can uncover patterns in your historical data that you can refine and operationalize.
What you can do… is to crunch your data using analytic algorithms that help you ‘mine your business rules’. Once you obtain the data-driven rules, massage them with your own expertise and fold them into your decision logic.
There might be more than one way to improve your bottom-line. If you are implementing compliance rules, then you may not have that many options to experiment on; but if you are looking to improve your profitability you might have to try things for real into multiple segments to see for yourself which one is most effective.
What you can do… is to start by setting up your simulation environment to run those business rules ‘comparisons’ at large-scale. It will give you a more realistic idea of your KPIs based on a larger sample. The next step is to start experimenting live. Marketers have done A-B testing for a long time. In the Decision Management space, we call that experimental design or champion-challenger.
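A champion / challenger experiment starts with traffic assignment. The sketch below shows one common approach, deterministic hashing, so a given customer always sees the same strategy; the 10% challenger share, ID format, and arm names are assumptions made for the example.

```python
# Minimal champion / challenger traffic-split sketch. The 10% challenger
# share and the customer ID format are illustrative assumptions.
import hashlib

def assign_arm(customer_id, challenger_share=0.10):
    """Deterministic assignment: hashing the customer ID guarantees the same
    customer always lands in the same arm across sessions."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < challenger_share * 100 else "champion"

def run_experiment(customer_ids):
    """Tally how many customers land in each arm; in a real experiment you
    would also tally each arm's KPIs (acceptance rate, revenue, ...)."""
    tallies = {"champion": 0, "challenger": 0}
    for cid in customer_ids:
        tallies[assign_arm(cid)] += 1
    return tallies

tallies = run_experiment([f"cust-{i}" for i in range(1000)])
print(tallies)  # roughly a 90/10 split
```

Comparing the KPIs of the two arms, rather than just their traffic counts, is what tells you whether the challenger strategy should be promoted to champion.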
You are the business expert, right? But how well do you know the contribution of your rules? The decisions we make in life do not always pan out exactly how we expected them to, sometimes for the better and sometimes not… It is the same for your business decisions. They generally work the way you expect but there could be surprises.
You could measure Key Performance Indicators (KPIs) in your systems and look at the reports on a regular basis. We recommend that, of course.
What you could do too, which is even more powerful and a greater learning experience… is take the time to anticipate where you expect those KPIs to land and where you would like to take them. With clear objectives in mind, you will be more attuned to the outcome and, as a result, more effective in affecting those KPIs.
If you are interested in this topic and would like to hear practical illustrations of these techniques, please join us for a webinar on 9/19 or 10/3 on “Decisions by the Numbers”.
The funny thing about Decision Management technologies is that we obsess over automation but, in many instances, projects lose steam before we get a chance to close the loop. As a result, we hope that we are automating the right business logic, but we don’t know for sure until much, much later. We have often seen cases where factual feedback only came weeks after deployment!
No feedback loop?
In some cases, business rules are derived from data, but we tend to call those predictive models or segmentation logic rather than business rules. Business rules are, generally speaking, heavily judgmental. As an expert, you want to codify how you make decisions. Although we are used to measuring and monitoring the quality of predictive models, we tend not to do the same for business rules. Why? I think it has to do with:
- Tact: Why would you challenge an expert? He/she knows better than anyone else!
- Development context: Data tends to be absent from the environment where we develop the business rules; so data-driven testing is extra effort
- Lack of time & resources: Getting systems wired to make those measurements is an expense that is not mandatory for the system to function; with project delays and other surprises along the way, this is the first capability to be cut
I would argue though that fact-based validation is vital to the success of many decisioning projects if not all. It is common wisdom that Business Intelligence is vital to strategic decision-making. Would you trust a management team to make the right decision in absence of dashboards, based only on intuition or rumors? Of course not… So why wouldn’t we look for the same insight for our operational decisions?
Where no feedback led this insurance company
I may have mentioned this story before, but it really stuck in my mind as I saw the chain of events invariably lead to project failure. I will not name the insurance company to avoid any embarrassment. I had a chance to meet with the lead architect, a great guy, very capable. He led the Automated Underwriting project beautifully and implemented all the business rules that came from the team of underwriters without a hiccup. The application deployed to the first series of states on time. The systems always deployed on Friday night to allow for a slow ramp-up over the weekend. Per company policy, the team was on call over the weekend. Within hours, the team was called on site to fix the mess. The business rules, which did exactly what the underwriters wanted, did not achieve the business objectives they were looking for! Taken in isolation, each rule made the accept / decline decision it was supposed to, but more often than not the transactions ended up in the refer pile for manual review. There may have been a genuine lack of trust from the underwriters, who did not want, consciously or not, to release control to the automated system. I do not intend any blame here. It is human nature, especially for experts, to keep some oversight on the decisions. If you do not pay attention to the numbers, though, you may end up, like them, referring too many transactions and, as a result, flooding the underwriting team with applications to review manually.
The good news is that, thanks to BRMS technology, they were able to fix the system in a matter of hours. Had they encoded the logic as spaghetti code, making the necessary changes would have taken months. The project was not the complete failure it could have been, but this first deployment was.
My take-away from this real-life experience is that we need to measure early, we need to measure often, we need to measure continuously.
Let me clarify here that Business Performance testing and monitoring is different from test case testing. In this project, they had individual test cases that were created specifically for each captured rule. The application did what it was supposed to. It was the rate of automation that failed them. One KPI was off and it had disastrous consequences for the project.
By having greater insight into the business performance of decisioning applications at the time of rules authoring, in addition to the measurements after deployment of course, the team would have been able to detect early that the application was going to miss the KPI they hoped for. They could have fixed the system well before it hit production. Everybody could have used a nice and peaceful weekend on deployment day.
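The KPI that failed this project, the rate of automation, is exactly the kind of business-performance check that can run at authoring time against historical transactions. The sketch below is a toy illustration: the underwriting rules, score thresholds, sample data, and KPI target are all invented for the example.

```python
# Sketch of checking a business-performance KPI (rate of automation)
# against historical data before deployment. The rules, thresholds,
# sample data, and target are all illustrative.

def underwrite(application):
    """Toy rule set: anything not clearly acceptable or declinable is referred."""
    if application["risk_score"] < 30:
        return "accept"
    if application["risk_score"] > 80:
        return "decline"
    return "refer"

def automation_rate(applications):
    """Fraction of applications decided without manual review."""
    decisions = [underwrite(a) for a in applications]
    return 1 - decisions.count("refer") / len(decisions)

historical = [{"risk_score": s} for s in (10, 25, 45, 55, 60, 70, 85, 90)]
rate = automation_rate(historical)
print(f"automation rate: {rate:.0%}")

# A pre-deployment gate against a hypothetical business target.
TARGET = 0.70
if rate < TARGET:
    print("KPI miss: too many referrals; revisit the rules before production")
```

Even test cases that all pass say nothing about this number; only running the full rule set over historical transactions reveals how many land in the refer pile.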
Another side effect of having visibility into business performance is that you can build trust with your business users. If they are skeptical about decision automation and have a hard time letting go of control, they may gain reassurance along the way when they see the effect of their decisioning logic applied to historical transactions. They might actually find some opportunities for additional improvement while diving into those performance reports. Business Intelligence dashboards, combined with their expertise, can create a very powerful combination. The business users I have worked with at Sparkling Logic have been seduced by the ability to explore the business impact of policy changes while they craft their decisioning logic. To quote one of them, it is “awesome”.
So do not be afraid to show your business users what the numbers are… Instead of fearing you will offend them, consider the extra power you give them! And they want it…