
What are Champion / Challenger experiments in Decision Management?


Written by: Carole-Ann Berlioz | Published on: Feb 10, 2014

[Coin toss illustration]

Have you ever heard of Champion / Challenger?  If you are in the Financial Services industry, chances are you have, and you might even have used this technique.  If you are not, I wouldn’t be so sure that you have.

What is Champion / Challenger?

In a nutshell, the idea is to compare two or more strategies in order to promote the one that performs the best.

When your decision logic implements regulations or contracts, the business rules are pretty much set in stone, and very little is left to interpretation.  But when you are deciding which product to recommend, or which transaction is most likely fraudulent, or how much to increase an account’s credit limit, there are no right or wrong business rules.  You have to deal with some degree of uncertainty.  Should you target users based on age or on type of smartphone?  Should you set your threshold here or there?

In order to know which strategy performs best, whether it is a simple threshold update or a completely new approach, you have to actually try them in real life.  Although business rules make it easy to switch from one strategy to another, swapping your Production rules wholesale for the new strategy might introduce an unreasonable level of risk.  Trying one strategy, then another, then another sequentially would also be tedious and biased, not to mention the inconsistent behavior as seen from the outside.

Champion / Challenger addresses all of these issues by deploying all of the candidate strategies at once.  You might wonder how this can be done, since you usually have to pick ONE decision, not a bunch of them.

You might be familiar with the concept of A/B testing in advertising.  The idea is to publish different ads to a market segment and measure which one performs best.  In order to protect the integrity of the experiment, you randomly select who gets A and who gets B, and potentially a few more alternatives.  On websites, marketers try different headlines, styles, or content.

Champion / Challenger does the same.  The champion is your Production strategy (aka decision logic, business rules).  You can make it compete with one or more challengers (aka variants of the decision logic).  You control how many transactions will go through each of these strategies within a segment, but the actual selection is random.  As Champion / Challenger selects the strategy, it keeps track of the assignment so that performance can be measured.  You can track in real time how well each strategy is performing, and possibly shut down or tweak experiments that are under-performing.
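To make the mechanics concrete, here is a minimal sketch in Python, with hypothetical strategy names and a made-up scoring rule; it only illustrates the weighted random assignment and the tracking of which strategy handled each transaction, not how any particular product implements it.

```python
import random
from collections import defaultdict

# Hypothetical decision variants: each strategy is just a function that takes
# a transaction and returns a decision. In a real deployment these would be
# rule sets, not Python functions.
def champion(txn):
    return "increase limit" if txn["score"] > 700 else "keep limit"

def challenger_a(txn):  # variant with a lower threshold
    return "increase limit" if txn["score"] > 650 else "keep limit"

# Allocation: the champion keeps 90% of the traffic, the challenger gets 10%.
ALLOCATION = [("champion", champion, 0.90), ("challenger_a", challenger_a, 0.10)]

assignments = defaultdict(int)

def decide(txn):
    """Randomly pick a strategy per the allocation and record the assignment."""
    r, cumulative = random.random(), 0.0
    for name, strategy, share in ALLOCATION:
        cumulative += share
        if r < cumulative:
            assignments[name] += 1
            return name, strategy(txn)
    assignments["champion"] += 1       # fall back to the champion on rounding
    return "champion", champion(txn)

# Route a batch of transactions and check the split (roughly 900 / 100).
for i in range(1000):
    decide({"id": i, "score": random.randint(500, 850)})
print(dict(assignments))
```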

After minutes, days, or weeks, once you have gathered enough evidence that one of the challengers is doing much better than the others, you can promote it as the new champion.  You may then want to create new variants to compete against it once again.

The beauty of this mechanism is that you have full control over the percentages.  If you are risk-averse, you might want to run experiments on only 5% of your portfolio and exclude your VIP customers.  If you are more aggressive, you might want to spread your eggs across multiple baskets equally and get faster results.
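For instance, a conservative roll-out could look like the following sketch (hypothetical field names and percentages): VIP customers never enter the experiment, and only 5% of the remaining traffic sees the challenger.

```python
import random

def select_strategy(customer):
    # VIP customers are excluded from the experiment: they always get the champion.
    if customer.get("vip"):
        return "champion"
    # Only 5% of the remaining portfolio is exposed to the challenger.
    return "challenger_a" if random.random() < 0.05 else "champion"

print(select_strategy({"id": 7, "vip": True}))    # always "champion"
print(select_strategy({"id": 8, "vip": False}))   # "challenger_a" about 5% of the time
```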

Knowing which strategy was applied is critical for consistent behavior over time.  In some cases, regulations force you to keep track of the rules that were in force at the time of the decision, so that you can reuse the same business rules if you need to re-process the application.  Even when regulations do not impose it, you may want to apply the same strategy consistently across decision services within a customer session, or across sessions, for a better user experience.  Champion / Challenger provides the infrastructure to do just that.
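One common way to get that consistency, sketched below with hypothetical names, is to derive the assignment deterministically from a stable customer identifier, so the same customer lands on the same variant in every decision service and every session.

```python
import hashlib

def sticky_strategy(customer_id, allocation):
    """Map a customer id to a stable strategy so the same customer always
    gets the same variant, across decision services and across sessions."""
    digest = hashlib.sha256(str(customer_id).encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # same id -> same number in [0, 1]
    cumulative = 0.0
    for name, share in allocation:
        cumulative += share
        if bucket < cumulative:
            return name
    return allocation[0][0]                     # default to the first (champion) entry

allocation = [("champion", 0.90), ("challenger_a", 0.10)]
print(sticky_strategy("customer-42", allocation))  # identical output on every call
```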

Why can’t I just run a simulation like before?

Well, referring back to the illustration, the alternative is a coin toss when it comes to changing your strategy.

Complete coin toss?  I sure hope not.  I am sure that you do a fair amount of testing before publishing your business rules in Production.  By running your historical transactions through several variants of your decision logic, you can get an idea of their business performance, assuming of course that past behavior is consistent with current behavior.
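A back-test of that kind can be as simple as the sketch below, with made-up historical records and thresholds: replay the same transactions through each variant and compare what each one would have decided.

```python
# Made-up historical transactions with a known outcome attached to each.
historical = [
    {"score": 580, "defaulted": True},
    {"score": 640, "defaulted": False},
    {"score": 700, "defaulted": False},
    {"score": 720, "defaulted": True},
    {"score": 760, "defaulted": False},
]

# Two variants of the same decision, differing only by their approval threshold.
variants = {
    "champion (threshold 700)":   lambda txn: txn["score"] > 700,
    "challenger (threshold 650)": lambda txn: txn["score"] > 650,
}

for name, approves in variants.items():
    approved = [txn for txn in historical if approves(txn)]
    bad = sum(txn["defaulted"] for txn in approved)
    print(f"{name}: {len(approved)} approvals, {bad} known defaults among them")
```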

There are some aspects of your decision that you cannot anticipate in simulation though.

For example, you can check how many times you would have proposed offer A versus offer B, let’s say a free smartphone versus a 12-month discounted service.  But you cannot easily estimate how many customers would have accepted one offer or the other.  The only way to know for sure is to actually offer the new ‘product’ to some of your customers and see how they react.
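Once the offers actually go out, the acceptance rate per strategy is just a tally, as in this toy example with made-up numbers; the point is that these figures only exist after a live experiment.

```python
# Toy tally of live results per experiment arm (numbers are invented).
live_results = {
    "offer A (free smartphone)":   {"offered": 5000, "accepted": 610},
    "offer B (12-month discount)": {"offered": 5000, "accepted": 540},
}
for offer, r in live_results.items():
    print(f"{offer}: {r['accepted'] / r['offered']:.1%} acceptance")
```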

How hard is it really?

Not as hard as you think…  We believe that you should focus on what you want to experiment with, not on the plumbing needed to assign the strategies and keep track of the assignments.  So the work is really just to define the variants and ‘sign them up’ for an experiment.  The rest is magic!

See our most recent posts on Champion / Challenger experiments: “Champion / Challenger, It’s a Numbers Game” and “Another Usage for Champion / Challenger: Rolling Out Deployments”.

If you’d like to see it live, you can view the recorded webinar: Experiment for Better Decision Results

 
