
Decision Management

Best Practices Series: Manage your decisions in Production


Our Best Practices Series has focused, so far, on the authoring and lifecycle management aspects of managing decisions. This post starts introducing what you should consider when promoting your decision applications to Production.

Make sure you always use release management for your decisions

Carole-Ann has already covered why you should always package your decisions in releases when you have reached important milestones in the lifecycle of your decisions: see Best practices: Use Release Management. This is so important that I will repeat her key points here, stressing their importance in the production phase.

You want to be 100% certain that what you have in production is exactly what you tested, and that it will not change as a side effect. This happens more frequently than you would think: a user may decide to test variations of the decision logic in what he or she thinks is a sandbox, when it is in fact the production environment.
You also want complete traceability: at any point in time, total visibility into the state of the decision logic for any rendered decision you may need to review.

Everything that contributes to the decision logic should be part of the release: flows, rules, predictive and lookup models, etc. If your decision logic also includes assets the decision management system does not manage, you open the door to potential execution and traceability issues. We, of course, recommend managing your decision logic fully within the decision management system.

Only use Decision Management Systems that allow you to manage releases, and always deploy decisions that are part of a release.

Make sure the decision application fits your technical environments and requirements

Now that you have the decision you will use in production in the form of a release, you still have a number of considerations to take into account.

It must fit into the overall architecture

Typically, you will encounter one or more of the following situations:
• The decision application is provided as a SaaS and invoked through REST or similar protocols (loose coupling)
• The environment is message- or event-driven (loose coupling)
• The architecture relies mostly on micro-services, using an orchestration tool and a loosely coupled invocation mechanism
• The decision application requires tight coupling with one or more application components at the programmatic API level

Your decision application will need to fit within these architectural choices with very low architectural impact.
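For the loosely coupled cases, invocation usually comes down to a single HTTP call per decision. Below is a minimal Python sketch of calling a deployed decision service over REST; the endpoint URL, payload fields, and response shape are assumptions for illustration, not an actual SMARTS API.

```python
import requests

# Hypothetical endpoint of a deployed decision service (release-based deployment).
DECISION_SERVICE_URL = "https://decisions.example.com/api/loan-origination/v1/evaluate"

def evaluate_application(application: dict) -> dict:
    """Invoke the decision service for a single transaction (loose coupling over REST)."""
    response = requests.post(DECISION_SERVICE_URL, json=application, timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"decision": "Approve", "terms": {...}}

if __name__ == "__main__":
    print(evaluate_application({"applicantId": "JOE-001", "age": 42, "state": "CA"}))
```

A message-driven or micro-service setup would wrap the same call behind a queue consumer or an orchestration step, but the decision service itself stays unchanged.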

One additional thing to be careful about is that organizations and applications evolve. We’ve seen many customers deploy the same decision application in multiple such environments, typically interactive and batch. You need to be able to do multi-environment deployments at low cost.

It must account for availability and scalability requirements

In loosely coupled environments, your decision application service or micro-service will need to cope with your high availability and scalability requirements. In general, this means configuring micro-services in such a way that:
• There is no single point of failure
○ replicate your repositories
○ have more than one instance available for invocation transparently
• Scaling up and down is easy

Ideally, the Decision Management System product you use has support for this directly out of the box.
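If you do need to handle failover at the client rather than behind a load balancer, the idea looks roughly like the sketch below; the replica URLs and payload are hypothetical, and a production setup would more likely put a load balancer or service mesh in front of the replicated instances.

```python
import requests

# Hypothetical replicated instances of the same decision service release.
INSTANCE_URLS = [
    "https://decisions-a.example.com/api/evaluate",
    "https://decisions-b.example.com/api/evaluate",
]

def evaluate_with_failover(payload: dict) -> dict:
    """Try each replica in turn so that no single instance is a point of failure."""
    last_error = None
    for url in INSTANCE_URLS:
        try:
            response = requests.post(url, json=payload, timeout=2)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as error:
            last_error = error  # instance unavailable; fall through to the next replica
    raise RuntimeError("All decision service instances are unavailable") from last_error
```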

It must account for security requirements

Your decision application may need to be protected. This includes:
• protection against unwanted access to the decision application in production (man-in-the-middle attacks, etc.)
• protection against unwanted access to the artifacts used by the decision application in production (typically repository access)

Make sure the decision applications are deployed in the most appropriate way given the technical environment and the corresponding requirements. Ideally, you have strong support from your Decision Management System for achieving this.

Leverage the invocation mechanisms that make sense for your use case

You will need to figure out how your code invokes the decision application once in production. Typically, you may invoke the decision application:
• separately for each “transaction” (interactive)
• for a group of “transactions” (batch)
• for a stream of “transactions” (streaming or batch)

Choosing the right invocation mechanism for your case can have a significant impact on the performance of your decision application.
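To make the overhead difference concrete, here is a hedged sketch contrasting per-transaction and batch invocation; the endpoints and function names are assumptions, not an actual product API.

```python
import requests

BASE_URL = "https://decisions.example.com/api"  # hypothetical decision service

def decide_interactive(transaction: dict) -> dict:
    """One HTTP round trip per transaction: lowest latency per item, highest total overhead."""
    response = requests.post(f"{BASE_URL}/evaluate", json=transaction, timeout=5)
    response.raise_for_status()
    return response.json()

def decide_batch(transactions: list[dict]) -> list[dict]:
    """One HTTP round trip for a whole group of transactions: better throughput for batch jobs."""
    response = requests.post(f"{BASE_URL}/evaluate-batch", json={"items": transactions}, timeout=60)
    response.raise_for_status()
    return response.json()["results"]
```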

Manage the update of your decision application in production according to the requirements of the business

One key value of Decision Management Systems is that with them business analysts can implement, test and optimize the decision logic directly.

Ideally, this extends to the deployment of decision updates to production. Once the business analysts have updated, tested, and optimized the decision, they will frequently request that it be deployed “immediately”.

Traditional products require going through IT phases, code conversion, code generation, and uploads. With them, you deal with delays and the potential for new problems. Modern systems such as SMARTS provide direct support for this kind of deployment.

There are some key aspects to take into account when dealing with old and new versions of the decision logic:
• updating should be an atomic operation, whether it is triggered by a single click or a single API call
• updating should be safe (if the new version fails to work satisfactorily, it should not enter production, or it should be easy to roll back)
• the system should allow you to run old and new versions of the decision concurrently
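Purely as an illustration of what “one API call, atomic, and reversible” can mean from the caller’s side, here is a sketch; the promote/rollback endpoints are hypothetical and do not describe an actual SMARTS API.

```python
import requests

ADMIN_URL = "https://decisions.example.com/admin/api"  # hypothetical management endpoint

def promote_release(decision: str, release: str) -> None:
    """Atomically switch production traffic for a decision to a given release."""
    response = requests.post(f"{ADMIN_URL}/decisions/{decision}/active-release",
                             json={"release": release}, timeout=10)
    response.raise_for_status()

def rollback_release(decision: str, previous_release: str) -> None:
    """Rolling back is just another atomic promotion, pointing at the previous release."""
    promote_release(decision, previous_release)

# Example: promote release 2.4, then revert to 2.3 if monitoring raises an alarm.
promote_release("loan-origination", "2.4")
# ... monitor ...
# rollback_release("loan-origination", "2.3")
```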

In all cases, this remains an area where you want to strike the right balance between the business requirements and the IT constraints.
For example, it is possible that all changes are batched in one deployment a day because they are coordinated with other IT-centric system changes.

Make sure that you can update the decisions in Production diligently enough to satisfy the business requirements.

Track the business performance of your decision in production

Once you have a process to put decisions, in the form of releases, into production following the guidelines above, you still need to monitor their business performance.

Products like SMARTS let you characterize, analyze, and optimize the business performance of the decision before it is put in production. It is important that you continue the same analysis once the decision is in production. Conditions may change. Your decisions, while effective when they were first deployed, may no longer be as effective after those changes. By tracking the business performance of the decisions in production, you can identify this situation early, analyze the reasons, and adjust the decision.

In a later installment in this series, we’ll tackle how to approach decision execution performance as opposed to decision business performance.

Best Practices Series: QA Testing


You are done authoring your rules, or at least a subset of your decision logic. Now what? You need to check that your rules are working properly. Today, we will focus on traditional QA Testing.

In the coming weeks, we will review other aspects of testing such as:

  • Business Performance Testing
  • Runtime Performance Testing

What is QA Testing?

Obviously, QA Testing refers to the testing of business rules in this context. In the worst-case scenario, users produce test cases manually to ensure that the rules produce the proper result. That being said, it is more productive to leverage test cases with expected outcomes. For Joe’s application, you know that Joe is supposed to be approved with such and such terms. You can now automate the execution of the rules, and check that they match the expected results.

Having weeks or months of historical transactions is ideal. The more transactions you have, the more comprehensive your testing will be. While there is a little investment in collecting and preparing the data, the benefit is significant.

However, if you do not have any data yet, do not give up. You can produce data by hand, or using tools such as Mockaroo. Many of our customers like to leverage Excel when possible. They simply copy one test case and create variations for each relevant attribute (age, state…). Unlike the lucky practitioners with historical data, you will have to think about the result that you expect. Though it adds a little effort, these golden test cases can be saved for the future, and reused for each wave of changes in your decision logic.
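If you go the hand-made route, a small script can expand one golden test case into the variations for you. The sketch below is one possible approach; the attribute names and the expected-outcome rule are invented for illustration.

```python
import csv
from itertools import product

# One hand-written golden test case, plus the attributes you want to vary.
golden = {"name": "Joe", "age": 35, "state": "CA", "expected_status": "Approved"}
variations = {"age": [17, 35, 70], "state": ["CA", "NY", "TX"]}

rows = []
for age, state in product(variations["age"], variations["state"]):
    case = dict(golden, age=age, state=state)
    # You still decide the expected outcome for each variation yourself.
    case["expected_status"] = "Declined" if age < 18 else "Approved"
    rows.append(case)

with open("golden_test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```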

How does QA Testing work?

This part is actually trivial. Once you have uploaded your data, you just need to add a computation that compares the status of your transaction with the expected status. You have a few options here. You can create:

  • a single QA Flag that covers every outcome of your decision
  • one QA Flag for each outcome of your decision, and likely a global one that checks that all individual flags are good
  • QA Flags for just a subset of the fields you want to confirm, and, again, a global one that checks that all individual flags are good

Regardless of how many QA Flags you have, you are now ready to let the magic happen. Create a report for your QA Flag(s), and get instant gratification as the chart will let you know how many discrepancies you have.
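Outside of any particular tool, a QA Flag is simply a computed comparison between the decision output and the expected values carried by the test case, plus a global flag that rolls them up. A minimal sketch with hypothetical field names:

```python
def qa_flags(test_case: dict, decision: dict) -> dict:
    """Compare the decision output with the expected values carried by the test case."""
    flags = {
        "status_ok": decision["status"] == test_case["expected_status"],
        "terms_ok": decision.get("terms") == test_case.get("expected_terms"),
    }
    flags["all_ok"] = all(flags.values())  # the global flag that rolls up the individual ones
    return flags

# Example test case that fails QA: expected Approved, decision says Declined.
print(qa_flags({"expected_status": "Approved", "expected_terms": "12 months"},
               {"status": "Declined", "terms": "12 months"}))
```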

In my projects, I like to create filters to focus exclusively on the ‘bad’ test cases. Instead of looking at the full data set, this shows me only the ones that fail QA Testing. In some cases, I end up fixing a rule after I review the test case. In other cases, I end up fixing the test case itself. You would be surprised how many times data is not supplied as expected!

Once the change is done, it is time to move on to the next ‘bad’ test case until you reach the end. QA Testing has never been easier!

Taking QA Testing to the next level

The beauty of working with data is that you can focus on actual transactions. If you are looking at older rules that you may or may not have authored personally, I highly recommend activating RedPen. It will help you pinpoint the issue in the rule (or test case) by overlaying them on top of each other. You can see the data next to the test done by the rule. You can see which statement in the rule is true, and which one is not. That is a time-saver in large object models.

You can tell that I am getting excited. I assure you that I am not a QA nut at all. I enjoy writing rules more than I do testing of course! This different approach to testing, with RedPen and the dashboard, has changed my perspective though. I really enjoy the rules forensics with this set of tools. Dare I say it is almost fun 😉

Additionally, once your testing is done, make sure you read our recommendations on packaging your decisions in a release.

Best Practices Series: Use Release Management


So far, our Best Practices Series has focused on low-level rules writing. I would like to take a step back for a moment, and take a higher-level perspective. When your rules authoring is done, what should you do? In this series, we will talk about topics like lifecycle management and testing. Today, let’s cover release management.

What is Release Management?

You are certainly familiar with versioning, which is the ability to keep track of each modification in your repository. Versioning is typically offered out of the box, and does not rely on anyone setting it up or explicitly creating versions. That way, you can see, day to day, who modified what, when, and why (if they provided a comment).

Release management takes a higher perspective on change management. When you create a release, you freeze the state of each rule that contributes to the decision. The release is, conceptually, a packaging for your decision, frozen in a certain state. Changes to the project, from that point on, will not be visible in the release.

While each decision management product is different, we believe strongly that products should not create copies of your rules projects. A release should only refer to the manifest of your decision content. It should not duplicate the rules.

Why should you create Releases?

The first and foremost reason is agility. As you deploy changes to your decisions, we expect your rules to improve over time. Unfortunately, unforeseen problems can hide in your decision logic, and slip through your QA efforts. For example, changing a threshold too aggressively could dramatically impact your decline rate. While this is correct in terms of execution, it may not be aligned with your business objectives.

When such a situation occurs, release management allows you to target the prior release with the click of a button. You can revert back to the prior state of your decision, while your team works on improving the decision logic. The execution engine does not need to know which rules have changed and need to be reverted. The execution engine simply loads the configuration you want to substitute.

Related, but slightly different, is the ability to load multiple configurations jointly. For progressive roll-outs, releases make it easy to continue processing the ‘old’ rules for most of your transactions, and the ‘new’ rules for 5 or 10% of your traffic. Once you feel comfortable with the new logic, you can route all transactions to the latest release of the decision.

Finally, there is huge value in transparency. By using releases, you can go back, at any point in time, to the state of the rules when a given transaction was processed. We have seen the need in cases of legal challenge or internal investigation. Whatever the reason might be, releases are always available. As a frozen copy, the rules in a release cannot technically be modified. You can rely on the fact that what you see is what executed at that point in time. And you can leverage all investigation tools to understand how the decision was made.

In conclusion, if you are not using releases today, you should. As an added bonus, you will have an easier time tracking changes from release to release with automated ‘diff’ documentation. There is really no reason not to use releases!

Best Practices Series: How to Think about Decisions


Let’s continue with our series on best practices for your decision management projects. We covered what not to do in rule implementation, and what decisions should return. Now, let’s take a step back, and consider how to think about decisions. In other words, I want to focus on the approaches you can take when designing your decisions.

Think about decisions as decision flows

The decision flow approach

People who know me know that I love to cook. To achieve your desired outcome, recipes give you step by step instructions of what to do. This is in my opinion the most natural way to decompose a decision as well. Decision flows are recipes for making a decision.

In the early phases of a project, I like to sit down with the subject matter experts and pick their brains on how they think about the decision at hand. Depending on the customer’s technical knowledge, we draw boxes using a whiteboard or Visio, or directly within the tool. We think about the big picture, and try to be exhaustive about the steps, and the sequencing of the steps, to reach our decision. In all cases, the visual aid allows experts who have no prior experience in decision management design to join in and contribute to the success of the project.

What is a decision flow?

In short, a decision flow is a diagram that links decision steps together. These links could be direct links, or links with a condition. You may follow all the links that are applicable, or only take the first one that is satisfied. You might even experiment on a step or two to improve your business performance. In this example, starting at the top, you will check that the input is valid. If so, you will go through knock-off rules. If there is no reason to decline this insurance application, we will assess the risk level in order to rate it. Along the way, rules might cause the application to be rejected or referred. In this example, green ball markers identify the actual path for the transaction being processed. You can see that we landed in the Refer decision step. Heatmaps also show how many transactions flow to each bucket: 17% of our transactions are referred.
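Stripped down to code, a decision flow is an ordered chain of steps whose links can short-circuit the decision. The sketch below loosely mimics the example above (validation, knock-off rules, risk rating) with invented rules; in practice the flow is authored graphically in the decision management tool rather than hand-coded.

```python
def validate(app: dict) -> str | None:
    # Step 1: reject applications with missing data.
    return "Reject" if app.get("age") is None else None

def knock_off_rules(app: dict) -> str | None:
    # Step 2: knock-off rules may reject or refer the application.
    if app["age"] < 18:
        return "Reject"
    if app.get("prior_claims", 0) > 3:
        return "Refer"
    return None

def rate_risk(app: dict) -> str:
    # Step 3: rate the remaining applications.
    return "Approve (high premium)" if app["age"] > 65 else "Approve (standard)"

def decision_flow(app: dict) -> str:
    # Each step may short-circuit the flow by returning a terminal decision.
    for step in (validate, knock_off_rules):
        outcome = step(app)
        if outcome is not None:
            return outcome
    return rate_risk(app)

print(decision_flow({"age": 45, "prior_claims": 5}))  # -> "Refer"
```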

Advantages of the decision flow approach

The advantage of using this approach is that it reflects the actual flow of your transactions. It mirrors the steps taken in real life. It makes it easy to retrace transactions with the experts and identify whether the logic needs to be updated. Maybe the team missed some exotic paths. Maybe the business changed, and the business rules need to be updated. When the decision flow links to actual data, you can also use it as a way to work on your strategies to improve your business outcome. If a 17% referral rate is too high, you can work directly with business experts on the path that led to this decision and experiment to improve your outcome.

Think about decisions as dependency diagrams

A little background

In the early days of my career, I worked on a fascinating project for the French government. I implemented an expert system that helped them diagnose problems with missile guidance systems. The experts were certainly capable of laying out the series of steps to assess which piece of equipment was faulty. However, this is not how they were used to thinking. Conducting all possible tests upfront was not desirable. First, there was a cost to these tests. But more importantly, every test could cause more damage to these very subtle pieces of engineering.

As was common back then in expert systems design, we thought more in a “backward chaining” way. That means that we reverse-engineered our decisions. We collected evidence along the way to narrow down the spectrum of possible conclusions.

If the system was faulty, it could be due to the mechanical parts or to the electronics onboard. If it was mechanical, there were 3 main components. To assess whether it was the first component, we could conduct a simple test. If the test was negative, we could move on to the second component. Etc.

In the end, thinking about dependencies was much more efficient than a linear sequence, for this iterative process.

The dependency diagram approach

Today, the majority of decision management systems might pale in sophistication compared to this expert system. But the approach taken by the experts back then is not so different from the way experts in a variety of fields structure their intricate knowledge today. We see on a regular basis projects that seem better laid out in terms of dependencies. Or at least, it seems more natural to decompose them this way to extract this precious knowledge.

What is a dependency diagram?

A dependency diagram starts with the ultimate decision you need to make. The links do not illustrate sequence, as they do in decision flows. Rather, they illustrate dependencies, showing which inputs or sub-decisions need to feed into the higher-level decision. In this example, we want to determine the risk level, health-wise, of a member in a wellness program. Many different aspects feed into the final determination. From a concrete perspective, we could look at obesity, blood pressure, diabetes, and other medical conditions to assess the current state. From a subjective perspective, we could assess aggravating or improving factors like activity and nutrition. For each factor, we would look at specific data points. Height and weight determine BMI, which determines obesity.

Similarly to the expert system, there is no right or wrong sequence. Lots of factors help make the final decision, and they will be assessed independently. One key difference is that we do not diagnose the person here. We can consider all data feeds to make the best final decision. Branches in the diagram are not competing; they contribute to a common goal. The resulting diagram is what we call a decision model.
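The same decomposition can be read as a tree of small sub-decisions, each depending only on its own inputs. Here is a minimal sketch of the wellness example, with invented thresholds:

```python
def bmi(height_m: float, weight_kg: float) -> float:
    return weight_kg / (height_m ** 2)

def obesity_level(height_m: float, weight_kg: float) -> str:
    # Sub-decision: obesity depends only on height and weight, via BMI.
    return "obese" if bmi(height_m, weight_kg) >= 30 else "not obese"

def lifestyle_factor(activity_minutes_per_week: int) -> str:
    # Sub-decision: a subjective, aggravating or improving factor.
    return "improving" if activity_minutes_per_week >= 150 else "aggravating"

def risk_level(height_m: float, weight_kg: float, activity_minutes_per_week: int) -> str:
    # Top-level decision: every branch contributes, none competes.
    concerns = [obesity_level(height_m, weight_kg) == "obese",
                lifestyle_factor(activity_minutes_per_week) == "aggravating"]
    return "high" if all(concerns) else "medium" if any(concerns) else "low"

print(risk_level(1.75, 98, 60))  # -> "high"
```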

Advantages of the dependency diagram approach

Dependency diagrams are wonderful ways to extract knowledge. As you construct your decision model, you decompose a large problem into smaller problems, for which several experts in their own domain can contribute their knowledge. When decisions are not linear, and the decision logic has not yet been documented, this is the right approach.

This approach is commonly used in the industry. OMG has standardized the notation under the “DMN” label, which stands for Decision Model and Notation. This approach allows you to harvest knowledge, and document source rules.

Choose the approach that is best for you

Decision flows are closest to an actual implementation. In contrast, dependency diagrams, or decision models, focus on knowledge. But both feed straight into decision management systems. In the end, think about decisions in the way that best fits your team and project. The end result will translate into an executable decision flow no matter what.

LTCG Balances Technology and Human Touch in Claims Processing



Long Term Care Group (LTCG) is a leading provider of business process outsourcing services for the insurance industry. They are the largest third party long term care insurance provider offering underwriting, policy administration, clinical services, as well as claims processing and care management for America’s largest insurance companies. Insurers rely on LTCG for these services due to LTCG’s deep expertise in long term care portfolios, which require specialized knowledge and processes. LTCG continually invests in the people, processes, and technology to maintain their leadership position in the industry.

Several years ago LTCG developed and implemented an automated claims adjudication process using Sparkling Logic SMARTS as the decision engine. Prior to this initiative more than 90,000 claims per month were processed manually by LTCG’s team of claims examiners. LTCG wanted to reduce the time their claims examiners needed to spend researching and making a claims decision in order to maintain the highest levels of customer satisfaction.

Long term care insurance is unique in that benefits are coordinated by care coordinators who create a plan of care to help policyholders leverage the benefits covered by their policy based on clinical guidelines that direct care needs over time. Due to the unique nature of long-term care needs, LTCG wanted to balance the use of technology with their emphasis on human touch to ensure the best possible care and coverage for policyholders.

The first automated claims adjudication system was developed in 6 months using an agile methodology and Sparkling Logic SMARTS. The Scrum team was able to iterate on the business rules and logic quickly thanks to the simplicity and power of the SMARTS user interface and software architecture.

Download the LTCG Case Study to learn more.

Unlocking New Use Cases for Rules Engines


The Willis Towers Watson Story

Willis Towers Watson (NASDAQ: WLTW) is a leading global advisory, broking, and solutions company that helps their clients turn risk into a path for growth. To stay competitive and to continue to meet the needs of their financial services customers, Willis Towers Watson embarked on an initiative to transform how they made decisions. Placing a priority on making more informed, data-driven decisions brought together their community of broking experts to ensure they could drive the best result for their customers.

As Willis Towers Watson looked to replace silos of data, manual processes, and custom applications that were difficult to maintain and test, they turned to Sparkling Logic SMARTS. SMARTS is a modern, agile, and easy to implement cloud-based rules engine and decision management platform that Willis Towers Watson is harnessing to drive their competitive edge.

When they started evaluating rules engine and decision management solutions, like most organizations, they wrote an RFP with very specific requirements. They were launching “Connected Broking” – a global placements platform – and needed the rules engine to determine panel eligibility for insurance panels. They also needed to automate decisions to channel the right risk to the right marketplace to better serve their customers.

However, once Willis Towers Watson selected SMARTS, they began to identify new use cases where the software could be applied to solve other business problems that were never even considered during the RFP phase. In some cases, the program management team saw uses for SMARTS that even their rules authoring lead and business analysts didn’t believe were a good fit for a rules engine.

The two new, not-so-obvious use cases that Willis Towers Watson considered were:

  1. Dynamic Data Capture
  2. Task Management

Let’s dive in a bit further:

Dynamic Data Capture

Willis Towers Watson identified an opportunity to apply business logic and rules to dynamically determine what types of data (and the sequence of that data) would need to be captured from an end customer so that the insurance carrier could make appropriate credit decisions and offer the best products and services. The team integrated SMARTS with a webform they developed to drive the data type and data sequence presented based on the application of business rules. Then, the webform was built to call back to SMARTS to validate the data that was entered. This solution is in production today.

Task Management

Willis Towers Watson currently has plans in place to use the decision engine in an orchestration capacity. Specifically, they will use SMARTS in combination with a lightweight task management application that can trigger allocated tasks based on business events – getting the right task to the right person in the right application.

With each new use case discovered, Willis Towers Watson is looking to continue to harness and extend the value of SMARTS. The company has been able to explore new use cases quickly thanks to the ease of implementing and managing SMARTS, including the limited amount of developer and analyst resources required to author rules. As a result, the team has been able to devote time and resources to building the framework and integrations for both dynamic data capture and task management routing.

To learn more about how Willis Towers Watson achieved these results, read their case study.

Read The Case Study

Another Usage for Champion / Challenger: Rolling Out Deployments


Last week, I talked about Champion / Challenger and how this technique compares alternative strategies for a decision checkpoint. This week, I would like to talk about another usage for this technique. Using the same infrastructure, the same idea, you can test your new and updated decision logic in Production. This time it is not for evaluating its business performance. It is for testing it in Production, or what we call rolling out your deployment.

Rolling out a deployment… Why?

Testing is a critical part of the software development life cycle (SDLC). You do not want your software released without any quality assurance, of course. While many teams go over their test cases and more in the QA environment, it is common to roll out deployments in phases. When the software in question supports mission-critical business operations, it is key to ensure it is performing as expected. There is only so much you can test in QA.

In the past, I have seen projects deployed to one state or geography in the first phase. If everything goes well, then the software gets deployed more widely. Some projects shadow the current execution with the new execution, just logging what the answer would have been, for an offline examination. I have also seen projects that were deployed to a segment of the incoming transactions, this time more randomly assigned.

Rolling Out a Deployment… How?

When looking at how to roll out deployments in more detail, you will inevitably see the parallel with Champion / Challenger. You have one champion: the existing service implementation. You have one challenger: the new and improved service implementation. Please do not be confused by the term implementation. While there is software running, we are really talking about your old decision logic and your new decision logic. Business rules, and maybe models, have changed, but the software implementation has not changed in our scenario. I am not talking about deploying actual software code.

At this point, you want to deploy the new business rules to only a fraction of your incoming business. Let’s say that you might want to start with 5% of your transactions. This is very similar to a Champion / Challenger setup with 95% of the transactions going to your champion, and 5% going to your challenger. You can monitor for days or weeks that the challenger segment behaves as expected. Then you can increase to 20% or 50%, and so on until you reach 100%.
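The routing itself can be as simple as a weighted random draw per transaction, with the challenger share as the single parameter you ramp up over time. A sketch, assuming both releases are deployed side by side and this function is called before invoking either one; the names are illustrative.

```python
import random

CHALLENGER_SHARE = 0.05  # start at 5%, later raise to 0.20, 0.50, then 1.0

def route(transaction: dict) -> str:
    """Send a small, random fraction of traffic to the new decision logic."""
    release = "new" if random.random() < CHALLENGER_SHARE else "current"
    transaction["release_used"] = release  # mark the transaction for monitoring
    return release
```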

Why would you use Champion / Challenger for that?

That’s a good question. Let me ask you this though: why not? Without being facetious, the alternative is to write the code infrastructure by hand. This means that you need to solve the issue of running two different releases of the same decision service concurrently. SMARTS, our decision management system, obviously allows you to do this — we had to solve that problem to enable Champion / Challenger. But other decision management systems might require that you clone your repository or something similar.

Secondly, you need to hard-code the parameters of the Champion / Challenger experiment, namely the volume of transactions going to your current decision logic and the volume going to your new decision logic. That part is not hard, but a change to these parameters implies a full software development life cycle. When you upgrade from 5% to 20%, you will need software engineers to make the change, you will need QA to test, and you will need a formal production release. This can be heavy. A Champion / Challenger change only requires as much testing as you would go through for a rule change. Wasn’t that the reason you decided to use business rules in the first place?

Finally, you will need to put in place a mechanism to keep track of the path you took, and whether it met your standards. Granted, you could simply check that transactions are processed without crashing the system… But I am sure you have higher QA standards. Champion / Challenger tracks out of the box which path was taken for each transaction.

Food for Thought

Rolling out your deployments is a step in the direction of more business transparency in the quality of your decisions. Once set up, it does not take much to start monitoring the business performance of your new decision logic. Why not start using Champion / Challenger?

Champion / Challenger, it’s a Number’s Game


Automating decisions is mostly valuable when you can change the underlying decision logic as fast as your business changes. It might be due to regulatory changes, competitive pressure, or simply business opportunities. Changing rhymes with testing… It would be foolish to change a part of your business model without making sure that it is implemented correctly, of course. However, testing is not always sufficient. It is obviously needed, but it has its limitations. How can you test your decision logic when many unknowns are out of your control? What we need in terms of testing is sometimes more akin to a race between different strategies. I will discuss a technique pioneered a few decades ago, and yet not widely adopted outside of a few niches. This technique is called Champion / Challenger.

Why Champion / Challenger

Have you ever experimented with Champion / Challenger? Or maybe you have heard of it as A/B testing… The main objective is to compare a given strategy (your champion) with one or more alternatives (the challengers). This has been used over and over again in website design. The objective could be to highlight call-to-actions in different ways, or even to change the wording drastically across several page alternatives, and measure which version yields the best results. While it is a norm in web design, it is not as widely applied in decisioning. Why, you may ask? My hunch is that many companies are not comfortable with how to set it up. I have actually seen companies that used this technique, and still tainted their experiment with a careless setup. I would welcome comments from you all to see which other industries are making strides in Champion / Challenger experimentation.

Let me explain briefly the basic concept as it applies to decision management. Like web design, decision management experiments aim at comparing different alternatives in a live environment. The rationale is that testing and simulation in a sandbox can estimate the actual business performance of a decision (approving a credit line, for example), but they cannot predict how people will react over time. Simulation would only tell you how many people in your historical sample would be accepted versus declined. You can approve a population segment, and then discover over time that this segment performs poorly because of high delinquency. Live experimentation allows you to make actual decisions and then measure over time the business performance of this sample.

How Champion / Challenger works

Technically, two or more decision services can actually be deployed in production. Since your system cannot approve and decline at the same time, you need the infrastructure to route transactions randomly to a strategy, and mark each transaction for monitoring. The keyword here is ‘randomly’. It is critical that your setup distributes transactions without any bias. That being said, it is common to exclude entire segments because of their strategic value (VIP customers, for example), or because of regulations (to avoid adverse actions on the elderly, for example, which could result in fines). Your setup will determine what volume of transactions will go to the champion strategy, let’s say 50%, and how many will go to the challengers, let’s say 25% for each of 2 challengers.
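As a sketch of that allocation logic: excluded segments always go to the champion, and everyone else is assigned randomly according to the configured weights. The segment tests and percentages below are examples, not a recommended setup.

```python
import random

STRATEGIES = ["champion", "challenger-1", "challenger-2"]
WEIGHTS = [0.50, 0.25, 0.25]

def assign_strategy(transaction: dict) -> str:
    # Excluded segments (e.g. VIP customers, elderly applicants) are never experimented on.
    if transaction.get("vip") or transaction.get("age", 0) >= 80:
        strategy = "champion"
    else:
        strategy = random.choices(STRATEGIES, weights=WEIGHTS, k=1)[0]
    transaction["strategy"] = strategy  # mark the transaction so outcomes can be monitored
    return strategy
```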

It becomes trickier to set up when you need to test multiple parts of your decisions. It is not my objective to describe this issue in detail here. I might do that in a follow-up post. I just want to raise the importance of experimentation integrity as a possible reason for the perceived complexity.

Once the strategies are deployed, you need to wait a week, a month, or whatever time period is appropriate, before you can conclude that one of the strategies is ‘winning’, meaning that it outperforms the others. At that point in time, you can promote that strategy as the established champion, and possibly start a new experiment.

It’s a Number’s Game

As you process transactions day in and day out, you will allocate a percentage to each strategy. In our earlier example, we have 50% going to the champion and 25% going to each of challengers 1 and 2. In order for the performance indicators to be statistically relevant, you will need ‘enough data’. If your system processes a dozen transactions a day, it will take a long time before you have enough transactions going to each of the challengers. This becomes increasingly problematic if you want to test out even more challengers at once. On the other hand, systems that process millions of transactions per day will get results faster.

So, basically, you end up with 3 dimensions you can play with:

  • Number of transactions per day
  • Number of strategies to consider
  • Amount of time you run the experiment

As long as the volumes along these 3 dimensions are sufficient, you will be able to learn from your experimentation.
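A back-of-the-envelope calculation ties the three dimensions together, assuming you already know roughly how many observations per strategy you consider statistically sufficient (that target is an input here, not something this sketch derives):

```python
import math

def days_needed(daily_volume: int, allocation: float, required_per_strategy: int) -> int:
    """How long to run before one strategy has accumulated enough transactions."""
    return math.ceil(required_per_strategy / (daily_volume * allocation))

# 10,000 transactions/day, 25% routed to a challenger, 50,000 observations wanted:
print(days_needed(10_000, 0.25, 50_000))  # -> 20 days
```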

Is that enough? Not quite. While you can learn from any experiment, you, the expert, are the one making sense of these numbers. If you run an experiment in retail for the whole month of December, it is not clear that the best-performing strategy is also applicable outside of the holidays. If your delinquency typically starts after 2 or 3 months of the account being open, a shorter experiment will not give you this insight. While the concept of testing several strategies in parallel is fairly simple, it is a good idea to get expert advice on these parameters, and to use your common sense about what is needed to prove that a strategy is actually better than the alternatives. Once you are familiar with the technique, your business performance will soar. Champion / Challenger is a very powerful tool.

