
Decision Management

Best Practices Series: How to Think about Decisions


Let’s continue with our series on best practices for your decision management projects. We covered what not to do in rule implementation, and what decisions should return. Now, let’s take a step back, and consider how to think about decisions. In other words, I want to focus on the approaches you can take when designing your decisions.

Think about decisions as decision flows

The decision flow approach

People who know me know that I love to cook. A recipe gives you step-by-step instructions for achieving your desired outcome. This is, in my opinion, the most natural way to decompose a decision as well: decision flows are recipes for making a decision.

In the early phases of a project, I like to sit down with the subject matter experts and pick their brains about how they think about the decision at hand. Depending on the customer's technical knowledge, we draw boxes on a whiteboard, in Visio, or directly within the tool. We think about the big picture, and try to be exhaustive about the steps, and the sequencing of the steps, needed to reach our decision. In all cases, the visual aid allows experts who have no prior experience in decision management design to join in and contribute to the success of the project.

What is a decision flow

In short, a decision flow is a diagram that links decision steps together. These links can be direct, or carry a condition. You may follow all the links that are applicable, or only take the first one that is satisfied. You might even experiment on a step or two to improve your business performance. In this example, starting at the top, you check that the input is valid. If so, you go through the knock-out rules. If there is no reason to decline the insurance application, you assess its risk level in order to rate it. Along the way, rules might cause the application to be rejected or referred. In this example, green ball markers identify the actual path for the transaction being processed; you can see that we landed in the Refer decision step. Heatmaps also show how many transactions flow to each bucket: 17% of our transactions are referred.
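As a rough sketch, a flow like the one described above reads like a short program; the step names and thresholds here are hypothetical, for illustration only:

```python
# Minimal decision-flow sketch: steps run in a fixed order, like a
# recipe, and each step either routes the application onward or ends
# with a terminal decision. Names and thresholds are hypothetical.

def decide(application):
    # Step 1: validate the input
    if application.get("age") is None or not application.get("name"):
        return "Reject"
    # Step 2: knock-out rules -- any match declines the application
    if application["age"] < 18:
        return "Decline"
    # Step 3: assess risk, which feeds the rating step
    risk = "high" if application.get("claims", 0) > 2 else "low"
    # Step 4: refer high-risk applications to a human, approve the rest
    return "Refer" if risk == "high" else "Approve"
```

In a real decision flow the steps would be authored visually, but the sequencing logic is the same.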

Advantages of the decision flow approach

The advantage of this approach is that it reflects the actual flow of your transactions. It mirrors the steps taken in real life, and makes it easy to retrace transactions with the experts and identify whether the logic needs to be updated. Maybe the team missed some exotic paths; maybe the business changed, and the business rules need to be updated. When the decision flow links to actual data, you can also use it as a way to work on your strategies to improve your business outcome. If a 17% referral rate is too high, you can work directly with business experts on the path that led to this decision, and experiment to improve your outcome.

Think about decisions as dependency diagrams

A little background

In the early days of my career, I worked on a fascinating project for the French government: an expert system that helped them diagnose problems with missile guidance systems. The experts were certainly capable of laying out the series of steps to assess which piece of equipment was faulty. However, this is not how they were used to thinking. Conducting all possible tests upfront was not desirable. First, there was a cost to these tests. But more importantly, every test could cause more damage to these very delicate pieces of engineering.

As was common back then in expert systems design, we thought in a "backward chaining" way. That means that we reverse-engineered our decisions, collecting evidence along the way to narrow down the spectrum of possible conclusions.

If the system was faulty, it could be due to the mechanical parts or to the onboard electronics. If it was mechanical, there were three main components. To assess whether the fault lay in the first component, we could conduct a simple test. If the test was negative, we could move on to the second component, and so on.

In the end, for this iterative process, thinking about dependencies was much more efficient than a linear sequence.
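This backward-chaining style can be sketched in a few lines. The component names and test labels below are hypothetical, not from the actual system:

```python
# Backward-chaining sketch: start from the goal ("which part is
# faulty?") and run only the tests needed to confirm or rule out each
# hypothesis, in order, rather than running every test upfront.
# Component names and test labels are hypothetical.

def diagnose(test_results):
    """test_results maps a test name to True (passed) or False (failed)."""
    # Hypothesis 1: mechanical fault -- explored only if its test fails
    if not test_results.get("mechanical_ok", True):
        for component in ("gyroscope", "actuator", "servo"):
            # Test each component in turn; stop at the first faulty one,
            # avoiding unnecessary (and potentially damaging) tests.
            if not test_results.get(component + "_ok", True):
                return "faulty " + component
        return "mechanical fault, component unknown"
    # Hypothesis 2: electronics fault
    if not test_results.get("electronics_ok", True):
        return "faulty electronics"
    return "no fault found"
```

The key design choice is that evidence is gathered lazily: a conclusion is reached with the fewest tests that the dependencies allow.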

The dependency diagram approach

Today, most decision management systems might pale in sophistication compared to that expert system. But the approach those experts took is not so different from the intricate knowledge in the heads of today's experts in a variety of fields. We regularly see projects that seem better laid out in terms of dependencies. Or at least, it seems more natural to decompose them this way to extract this precious knowledge.

What is a dependency diagram

A dependency diagram starts with the ultimate decision you need to make. The links do not illustrate sequence, as they do in decision flows. Rather, they illustrate dependencies, showing what input or sub-decision needs to feed into the higher-level decision. In this example, we want to determine the risk level, health-wise, of a member in a wellness program. Many different aspects feed into the final determination. From a concrete perspective, we could look at obesity, blood pressure, diabetes, and other medical conditions to assess the current state. From a subjective perspective, we could assess aggravating or improving factors like activity and nutrition. For each factor, we would look at specific data points. Height and weight determine BMI, which determines obesity.

Similarly to the expert system, there is no right or wrong sequence. Many factors help make the final decision, and they are assessed independently. One key difference is that we are not diagnosing the person here: we can consider all data feeds to make the best final decision. Branches do not compete in the diagram; they contribute to a common goal. The resulting diagram is what we call a decision model.
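A minimal sketch of such a decision model in code, with hypothetical thresholds; each sub-decision is computed independently and feeds the final determination:

```python
# Dependency-diagram sketch: sub-decisions (obesity, lifestyle) are
# evaluated independently, with no prescribed sequence, and then feed
# the top-level risk decision. All thresholds are hypothetical.

def bmi(height_m, weight_kg):
    # Height and weight determine BMI...
    return weight_kg / (height_m ** 2)

def obesity_level(height_m, weight_kg):
    # ...which determines obesity (concrete factor).
    return "obese" if bmi(height_m, weight_kg) >= 30 else "normal"

def lifestyle_factor(activity_minutes_per_week):
    # Subjective factor: improving or aggravating.
    return "improving" if activity_minutes_per_week >= 150 else "aggravating"

def member_risk(height_m, weight_kg, activity_minutes_per_week):
    # Each branch contributes to a common goal; none competes.
    concrete = obesity_level(height_m, weight_kg)
    subjective = lifestyle_factor(activity_minutes_per_week)
    if concrete == "obese" and subjective == "aggravating":
        return "high"
    if concrete == "obese" or subjective == "aggravating":
        return "medium"
    return "low"
```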

Advantages of the dependency diagram approach

Dependency diagrams are wonderful ways to extract knowledge. As you construct your decision model, you decompose a large problem into smaller problems, to which several experts can each contribute knowledge from their own domain. When decisions are not linear, and the decision logic has not yet been documented, this is the right approach.

This approach is commonly used in the industry. OMG has standardized the notation under the “DMN” label, which stands for Decision Model and Notation. This approach allows you to harvest knowledge, and document source rules.

Choose the approach that is best for you

Decision flows are closest to an actual implementation. Dependency diagrams, or decision models, focus instead on knowledge, but they too feed straight into decision management systems. In the end, think about decisions in the way that best fits your team and project; the end result will translate into an executable decision flow either way.

LTCG Balances Technology and Human Touch in Claims Processing


LTCG

Long Term Care Group (LTCG) is a leading provider of business process outsourcing services for the insurance industry. They are the largest third party long term care insurance provider offering underwriting, policy administration, clinical services, as well as claims processing and care management for America’s largest insurance companies. Insurers rely on LTCG for these services due to LTCG’s deep expertise in long term care portfolios, which require specialized knowledge and processes. LTCG continually invests in the people, processes, and technology to maintain their leadership position in the industry.

Several years ago LTCG developed and implemented an automated claims adjudication process using Sparkling Logic SMARTS as the decision engine. Prior to this initiative more than 90,000 claims per month were processed manually by LTCG’s team of claims examiners. LTCG wanted to reduce the time their claims examiners needed to spend researching and making a claims decision in order to maintain the highest levels of customer satisfaction.

Long term care insurance is unique in that benefits are coordinated by care coordinators who create a plan of care to help policyholders leverage the benefits covered by their policy based on clinical guidelines that direct care needs over time. Due to the unique nature of long-term care needs, LTCG wanted to balance the use of technology with their emphasis on human touch to ensure the best possible care and coverage for policyholders.

The first automated claims adjudication system was developed in 6 months using an agile methodology and Sparkling Logic SMARTS. The Scrum team was able to iterate on the business rules and logic quickly thanks to the simplicity and power of the SMARTS user interface and software architecture.

Download the LTCG Case Study to learn more.

Unlocking New Use Cases for Rules Engines


The Willis Towers Watson Story

Willis Towers Watson (NASDAQ: WLTW) is a leading global advisory, broking, and solutions company that helps their clients turn risk into a path for growth. To stay competitive and to continue to meet the needs of their financial services customers, Willis Towers Watson embarked on an initiative to transform how they made decisions. Placing a priority on making more informed, data-driven decisions brought together their community of broking experts to ensure they could drive the best result for their customers.

As Willis Towers Watson looked to replace silos of data, manual processes, and custom applications that were difficult to maintain and test, they turned to Sparkling Logic SMARTS. SMARTS is a modern, agile, and easy to implement cloud-based rules engine and decision management platform that Willis Towers Watson is harnessing to drive their competitive edge.

When they started evaluating rules engine and decision management solutions, like most organizations, they wrote an RFP with very specific requirements. They were launching "Connected Broking", a global placements platform, and needed the rules engine to determine panel eligibility for insurance panels, as well as to automate decisions that channel the right risk to the right marketplace to better serve their customers.

However, once Willis Towers Watson selected SMARTS, they began to identify new use cases where the software could be applied to solve other business problems that were never even considered during the RFP phase. In some cases, the program management team saw uses for SMARTS that even their rules authoring lead and business analysts didn’t believe were a good fit for a rules engine.

The two new, not-so-obvious use cases that Willis Towers Watson considered were:

  1. Dynamic Data Capture
  2. Task Management

Let’s dive in a bit further:

Dynamic Data Capture

Willis Towers Watson identified an opportunity to apply business logic and rules to dynamically determine what types of data (and the sequence of that data) would need to be captured from an end customer, so that the insurance carrier could make appropriate credit decisions and offer the best products and services. The team integrated SMARTS with a webform they developed, using business rules to drive the data types and sequence presented. The webform then calls back to SMARTS to validate the data entered. This solution is in production today.
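The pattern can be sketched as follows; the field names and rules are hypothetical, not Willis Towers Watson's actual logic:

```python
# Sketch of rules-driven dynamic data capture: business rules decide
# which fields the webform should present next, given the answers
# captured so far. Field names and rules are hypothetical.

def next_fields(answers):
    fields = []
    if "applicant_type" not in answers:
        return ["applicant_type"]          # always captured first
    if answers["applicant_type"] == "business":
        fields += ["company_name", "annual_revenue"]
    else:
        fields += ["date_of_birth"]
    if answers.get("annual_revenue", 0) > 1_000_000:
        fields.append("audited_accounts")  # extra evidence for large risks
    # Only ask for what is still missing
    return [f for f in fields if f not in answers]
```

The webform calls this kind of decision service after each answer, so the capture sequence adapts without any change to the form's code.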

Task Management

Willis Towers Watson currently has plans in place to use the decision engine in an orchestration capacity. Specifically, they will use SMARTS in combination with a lightweight task management application that can trigger allocated tasks based on business events – getting the right task to the right person in the right application.

With each new use case discovered, Willis Towers Watson is looking to continue to harness and extend the value of SMARTS. The company has been able to explore new use cases quickly thanks to the ease of implementing and managing SMARTS, including the limited developer and analyst resources required to author rules. As a result, the team has been able to devote time and resources to building the framework and integrations for both dynamic data capture and task management routing.

To learn more about how Willis Towers Watson achieved these results, read their case study.

Read The Case Study

Another Usage for Champion / Challenger: Rolling Out Deployments


Last week, I talked about Champion / Challenger and how this technique compares alternative strategies for a decision checkpoint. This week, I would like to talk about another use of this technique. Using the same infrastructure and the same idea, you can test your new and updated decision logic in production. This time it is not for evaluating its business performance; it is for testing it in production, or what we call rolling out your deployment.

Rolling out a deployment… Why?

Testing is a critical aspect of the software development life cycle (SDLC). You do not want your software released without any quality assurance, of course. While many teams go over their test cases, and more, in the QA environment, it is common to roll out a deployment in phases. When the software in question supports mission-critical business operations, it is key to ensure it performs as expected, and there is only so much you can test in QA.

In the past, I have seen projects deployed to a single state or geography in the first phase; if everything goes well, the software then gets deployed more widely. Some projects shadow the current execution with the new execution, just logging what the answer would have been, for offline examination. I have also seen projects deployed to a segment of the incoming transactions, this time assigned more randomly.

Rolling Out a Deployment… How?

When looking at how to roll out deployments in more detail, you will inevitably see the parallel with Champion / Challenger. You have one champion: the existing service implementation. You have one challenger: the new and improved service implementation. Do not be confused by the term "implementation": while there is software running, we are really talking about your old decision logic and your new decision logic. Business rules, and maybe models, have changed, but the software implementation has not changed in our scenario. I am not talking about deploying actual software code.

At this point, you want to deploy the new business rules to only a fraction of your incoming business. Let's say you start with 5% of your transactions. This is very similar to a Champion / Challenger setup, with 95% of the transactions going to your champion and 5% going to your challenger. You can monitor for days or weeks that the challenger segment behaves as expected, then increase to 20% or 50%, and so on until you reach 100%.
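A minimal sketch of such a split, assuming a hypothetical `route` function with the share held as a parameter rather than hard-coded (this is not Sparkling Logic's actual API):

```python
# Phased-rollout sketch: route a configurable fraction of transactions
# to the new decision logic. Because the split lives in data, moving
# from 5% to 20% is a configuration change, not a code release.
import random

def route(transaction, challenger_share=0.05, rng=random):
    # Tag the transaction so the path taken can be monitored later.
    path = "challenger" if rng.random() < challenger_share else "champion"
    transaction["path"] = path
    return path
```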

Why would you use Champion / Challenger for that?

That's a good question. Let me ask you this, though: why not? Without being facetious, the alternative is to write the code infrastructure by hand. This means you need to solve the issue of running two different releases of the same decision service concurrently. SMARTS, our decision management system, allows you to do this out of the box; we had to solve that problem to enable Champion / Challenger. But other decision management systems might require that you clone your repository, or something similar.

Secondly, you need to hard-code the parameters of the Champion / Challenger experiment, namely the volume of transactions going to your current decision logic and the volume going to your new decision logic. That part is not hard, but a change to this parameter implies a full software development life cycle. When you move from 5% to 20%, you will need software engineers to make the change, QA to test it, and a formal production release. This can be heavy. A Champion / Challenger change only requires as much testing as you would go through for a rule change. Wasn't that the reason you decided to use business rules in the first place?

Finally, you will need to put in place a mechanism to keep track of the path you took, and whether it met your standards. Granted, you could simply check that transactions are processed without crashing the system… But I am sure you have higher QA standards. Champion / Challenger tracks out of the box which path was taken for each transaction.

Food for Thought

Rolling out your deployments is a step in the direction of more business transparency in the quality of your decisions. Once set up, it does not take much to start monitoring the business performance of your new decision logic. Why not start using Champion / Challenger?

Champion / Challenger, it's a Numbers Game


Automating decisions is most valuable when you can change the underlying decision logic as fast as your business changes, whether due to regulatory changes, competitive pressure, or simply business opportunities. Changing rhymes with testing: it would be foolish to change a part of your business model without making sure it is implemented correctly. However, testing is not always sufficient. It is obviously needed, but it has its limitations. How can you test your decision logic when many unknowns are out of your control? What we need in terms of testing is sometimes more akin to a race between different strategies. I will discuss a technique pioneered a few decades ago, and yet not widely adopted outside of a few niches. This technique is called Champion / Challenger.

Why Champion / Challenger

Have you ever experimented with Champion / Challenger? Maybe you have heard of it as A/B testing. The main objective is to compare a given strategy (your champion) with one or more alternatives (the challengers). This has been used over and over again in website design: the objective could be highlighting calls to action in different ways, or drastically changing the wording across several page alternatives, and measuring which version yields the best results. While it is the norm in web design, it is not as widely applied in decisioning. Why, you may ask? My hunch is that many companies are not comfortable with how to set it up. I have actually seen companies that used this technique, and still tainted their experiment with a careless setup. I would welcome comments from you all on which other industries are making strides in Champion / Challenger experimentation.

Let me briefly explain the basic concept as it applies to decision management. Like web design, decision management experiments aim at comparing different alternatives in a live environment. The rationale is that testing and simulation in a sandbox can estimate the actual business performance of a decision (approving a credit line, for example), but they cannot predict how people will react over time. Simulation will only tell you how many people in your historical sample would be accepted versus declined. You can approve a population segment, and then discover over time that this segment performs poorly because of high delinquency. Live experimentation allows you to make actual decisions and then measure the business performance of this sample over time.

How Champion / Challenger works

Technically, two or more decision services are actually deployed in production. Since your system cannot approve and decline at the same time, you need the infrastructure to route transactions randomly to a strategy, and to mark each transaction for monitoring. The keyword here is "randomly": it is critical that your setup distributes transactions without any bias. That being said, it is common to exclude entire segments because of their strategic value (VIP customers, for example), or because of regulations (to avoid adverse actions on the elderly, for example, which could result in fines). Your setup determines what volume of transactions goes to the champion strategy, let's say 50%, and how many go to the challengers, let's say 25% for each of two challengers.
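A sketch of unbiased assignment with segment exclusions, using hypothetical segment names and the 50/25/25 weights from the example:

```python
# Champion/Challenger assignment sketch: protected segments always get
# the champion strategy; everyone else is split randomly 50/25/25.
# Segment names (VIP, elderly) and weights are hypothetical.
import random

STRATEGIES = ["champion", "challenger_1", "challenger_2"]
WEIGHTS = [0.50, 0.25, 0.25]

def assign(customer, rng=random):
    # Excluded segments never enter the experiment.
    if customer.get("vip") or customer.get("age", 0) >= 75:
        return "champion"
    # Everyone else is assigned randomly, without bias.
    return rng.choices(STRATEGIES, weights=WEIGHTS)[0]
```

Note that the randomness applies only after the exclusions; within the experiment population, every transaction has the same chance of landing in each arm.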

It becomes trickier to set up when you need to test multiple parts of your decisions. It is not my objective to describe this issue in detail here; I might do that in a follow-up post. I just want to raise the importance of experimentation integrity as a possible reason for the perceived complexity.

Once the strategies are deployed, you need to wait a week, a month, or whatever time period is appropriate, before you can conclude that one of the strategies is "winning", meaning that it outperforms the others. At that point, you can promote that strategy as the established champion, and possibly start a new experiment.

It's a Numbers Game

As you process transactions day in and day out, you allocate a percentage to each strategy. In our earlier example, we have 50% going to the champion and 25% going to each of challengers 1 and 2. For the performance indicators to be statistically relevant, you need "enough data". If your system processes a dozen transactions a day, it will take a long time before enough transactions have gone to each of the challengers. This becomes increasingly problematic if you want to test even more challengers at once. On the other hand, systems that process millions of transactions per day will get results faster.

So, basically, you end up with 3 dimensions you can play with:

  • Number of transactions per day
  • Number of strategies to consider
  • Amount of time you run the experiment

As long as the volumes along these 3 dimensions are sufficient, you will be able to learn from your experimentation.
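As a back-of-the-envelope sketch of how these dimensions interact (the 10,000-transactions-per-arm target below is purely illustrative, not a statistical prescription):

```python
# How long must the experiment run before each strategy has seen a
# target number of transactions? The slowest-filling arm (smallest
# share) drives the duration. The 10,000 target is illustrative only.
import math

def days_needed(daily_volume, shares, per_strategy_target=10_000):
    smallest = min(shares)
    return math.ceil(per_strategy_target / (daily_volume * smallest))
```

With a 50/25/25 split, a million transactions a day fills every arm in a day, while a dozen a day would take over nine years; this is the numbers game in a nutshell.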

Is that enough? Not quite. While you can learn from any experiment, you, the expert, are the one making sense of these numbers. If you run an experiment in retail for the whole month of December, it is not clear that the best-performing strategy is also applicable outside of the holidays. If your delinquency typically starts two or three months after an account is opened, a shorter experiment will not give you this insight. While the concept of testing several strategies in parallel is fairly simple, it is a good idea to get expert advice on these parameters, and to use your common sense about what is needed to prove that a strategy is actually better than the alternatives. Once you are familiar with the technique, your business performance will soar. Champion / Challenger is a very powerful tool.

How Equifax is Helping Credit Lenders “Leap Frog” the Competition


Digital Disruption + Risk Management

Digital Disruption is at the top of every banking and insurance CEO’s agenda in 2017: how to become the disrupter and avoid getting disrupted. Across all credit-driven financial services firms, the pressure is intense with new market players emerging in all realms creating new expectations from customers.
Credit risk management and decisioning are emerging as key scenarios ripe for digital disruption, for two primary reasons.

First, the impact of credit risk decision management and compliance on the bottom line is significant, and incremental process improvements no longer enable lenders and insurers to keep pace. McKinsey reports that, "In 2012, the share of risk and compliance in total banking costs was about 10 percent; in the coming year the cost is expected to rise to around 15 percent… banks are finding it increasingly difficult to mitigate risk…To expand despite the new pressures, banks need to digitize their credit processes." Top-performing firms need not only to eliminate inconsistent approaches to credit analysis that expose them to unnecessary risk; to leapfrog, they need to develop a systematic approach based on the integration of new data sources and credit-scoring approaches, rather than relying solely on historical performance indicators.

Second, risk management is, by its very nature, a data-driven discipline well positioned to take advantage of the massive advancements in analytics technologies at the new levels of scale enabled by cloud computing. This is dramatically lowering the cost of all solutions related to credit risk management for small to mid-sized financial services institutions, including FinTech startups that can enter the market quickly with limited barriers to entry.

What is the Opportunity in 2017?

Banks and Insurers can manage increasingly complex data under a higher volume of business rules. At the same time, they can apply an agile management framework of rules and data to take advantage of market opportunities in real-time. This is now possible at a fraction of the cost and time to implement compared to even five years ago. Our partnership with firms like Equifax is paving the way for the next wave of digital disruption in the financial services industry in scenarios like credit risk management and fraud detection.

The Equifax Story

Equifax has offered their leading, cloud-based decision management solution called InterConnect to their global customers for many years. The InterConnect solution “automates account opening, credit-risk decisioning, cross-selling and fraud mitigation during the account acquisition process.”

In 2016, Equifax was looking for ways to help their customers capture new opportunities in their credit risk management and decisioning process by strengthening one of the core components of their InterConnect platform: the Rules Editor.

Equifax’s customers were looking for enhanced support in defining, testing and optimizing business rules. Even more importantly, they needed to rapidly seize competitive advantage through the agile implementation of new business rules and automated optimization strategies based on real-time results, as well as the development of test data for repeated use to enable greater consistency and scale.

Equifax turned to Sparkling Logic as a key partner to fulfill these requirements for InterConnect. Sparkling Logic’s decision management engine powers the enhanced Rules Editor. One specific strategy that was not previously possible was the testing and implementation of Champion and Challenger credit decisioning strategies.

Before Sparkling Logic, customers struggled to compare two or more decisioning strategies at the same time. With Champion and Challenger strategies now enabled in the enhanced Rules Editor, new strategies ("Challengers") can be developed, tested, and deployed simultaneously with existing strategies ("Champions"). Winning strategies are immediately applied to new decisions after the initial test period, capturing additional revenue that would previously have been lost while waiting for one test after another to play out.

What’s Next? How do you replicate this model to leap frog your digital disruption strategy?

While your competitors are busy applying incremental improvements to their portfolio management strategies and using historical performance data to drive credit decisions, you have the opportunity to leapfrog, by immediately capturing available revenue through an automated decision management engine applied to your credit decisioning processes.

LEARN HOW directly from the experts at Equifax and Sparkling Logic. RSVP for our Dec. 15th webcast, "How Equifax is Helping Credit Lenders Leap Frog the Competition." RSVP Today.

Part 3: Decision Management and Case Management


In Part 1 of this series, we reviewed how decision management, decision analytics and case management combine in systems that support automated decisions. In Part 2, we explored how modern decision management can help case workers accelerate their daily work, while at the same time allowing the organization to progressively capture the knowledge they use.

Decision Management Support for Investigative Case Management

Investigative case management can be significantly helped by the strong decision analytics included in modern decision management solutions.

Investigative Case Management

Typically, an investigative case manager will use a variety of tools to analyze the decisions and their outcomes, focusing in particular on those that led to cases being processed through case management. During this analysis, the key objective is to identify the reasons why processing the case manually was required, and to find changes to the automated decision so that the manual case management can be eliminated or reduced.

A modern decision management tool such as SMARTS has built-in capabilities to help facilitate this search:

  1. Traceability, understandable by an investigator, of the decision logic that led to the generation of a case to work on. We saw this in the previous part, since it also helps the operational case worker.
  2. Ability to leverage built-in decision analytics to identify patterns, in the data and in the way the decision logic leverages it, that lead to too many cases being created, using business-level metrics, simulation, and predictive analytics capabilities.

    For example, the investigator may build reports that track different risk assessment measures:

    Risk Assessment Measures

    and relate the decision made to business outcomes:

    Business Outcomes

  3. Ability for an investigator to modify the decision logic, without modifying the currently deployed version, in order to run experiments, potentially in champion-challenger mode, to test alternatives to the current decision logic. For example, the investigator might have used the decision analytics and predictive analytics capabilities highlighted before to create a couple of alternative ways of managing the decision. Using the built-in Experimental Design capability in SMARTS, they can test the current implementation (champion) against the two alternatives (challengers) and measure the business effectiveness through multiple KPIs and reports.

    Champion Challenger

These capabilities extend the reach of what the investigator can do, going further than simply pinpointing correlations and letting others do the exploration of the business logic.

Why Decision Management Should Be Part of Your Case Management Strategy

Making SMARTS fully part of your case management strategy, in addition to your decision management strategy, gives you very strong support for both your operational case workers and your investigators, and lets you manage your decisions efficiently so as to keep your case load as low as possible without losing flexibility. You may even combine it with your adaptive case management support, making it easy to keep track of the core expertise those systems allow you to put into operation while processing cases.

If you are interested in knowing more about Sparkling Logic SMARTS, contact us, or request a free SMARTS evaluation.

Part 2: Decision Management and Case Management


In Part 1 of this series, we reviewed how decision management, decision analytics and case management combine in systems that support automated decisions.

Decision Management and Case Management

Aspects of Case Management: Operational and Investigative

Case management has two key aspects to it:

  • Working on cases the automated system could not process, or reportedly did not process well (operational or intervention case management)
  • Investigating sets of such cases in order to identify how the automated decision could be changed to do a better job, reducing the load on the more expensive case management (investigative case management)

While the distinction may appear arbitrary, and there are situations in which a case worker straddles both worlds, we can still separate the two aspects for the purpose of this discussion.

In this part, we'll focus on the first of these aspects. Typically, case workers get involved for various reasons, for example:

  • Handling cases where the data is incomplete
  • Handling cases which present too much risk for the automated system to take the final decision, but present enough potential for them to be considered by humans, as is the case in a number of loan origination situations
  • Handling cases which require interactions with human customers to get to a conclusion on the decision, as is for example the case in credit card transaction fraud in which a call may settle the decision on whether to authorize or not a transaction

Responsibilities of Operational Case Workers

Operational or intervention case workers tend to work on one case at a time, and to review the specifics of the case using forms. They review the case in that representation and, based on their expertise and their hunches, will:

  • Try to understand why the automated system did not take a decision or why the decision was reportedly incorrect
  • Decide what the next step in the processing of the case should be (ask for more information, take a decision for the case, etc)

Decision Management to the Rescue of Case Workers

Decision analytics and modern decision management user interfaces can help.

SMARTS offers the RedPen™ approach to managing decisions. This approach presents the decision information in the context of cases and groups of cases, and lets the case worker interact with decisions within the forms that they are used to in everyday work.

What's more, SMARTS allows case workers to use the same system as the business users managing automated decisions, in order to get support and to track the reasons why decisions are overridden and/or modified.

In the following SMARTS screen in RedPen mode, the case worker looking at the current case (1/100 in Census 2000) immediately understands the reasons for the automated decision (“High Risk” rule fired because “AverageAge” is between 16 and 25 and set the “Assigned Risk Level” to “High Risk”).

SMARTS RedPen

Furthermore, the case worker can use the same tool to explore how to propose a modification to the decision logic that takes care of the case being worked on. For example, the case worker can decide that even though the average age is quite young, the total work/school mileage is below 50 miles, and that the organization should therefore grant a lower risk level. The case worker does so by directly manipulating the decision within the tool: creating an exception rule to the "High Risk" rule and changing the conclusions for the cases where, in addition to AverageAge being in the specified range, the corresponding mileage is below 50 miles.

Exception Rule
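The rule and its exception described above can be sketched as plain code; the risk-level labels other than "High Risk" are hypothetical, and this is only an illustration of the logic, not RedPen output:

```python
# Sketch of the "High Risk" rule and the case worker's exception:
# average age 16-25 normally yields "High Risk", but when work/school
# mileage is below 50 miles, a lower level is granted. Labels other
# than "High Risk" are hypothetical.

def assigned_risk_level(average_age, mileage):
    if 16 <= average_age <= 25:
        # Exception rule created by the case worker
        if mileage < 50:
            return "Moderate Risk"
        # Original "High Risk" rule
        return "High Risk"
    return "Standard Risk"
```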

The exception rule thus created may then be fed back to the automated decision, shared with other case workers who can continue refining it. And they can do that with full understanding of the consequences on the various groups of cases and business performance objectives. This is a great example of “design by doing” and adaptive case management.

This approach allows case workers to benefit from the systematic capture of decisions within the system, yet gives them the flexibility to adapt the logic to the situations they are dealing with. It also allows the organization to capture in a systematic way this highly valuable information which otherwise gets lost or is not efficiently communicated.

It is not even required that the case worker actually configure the logic – just highlighting the reasons why the decision needs to be adjusted for this case is valuable information that can be tracked in the decision management tool in order to further improve the decision.

If you are interested in knowing more about Sparkling Logic SMARTS, contact us, or request a free evaluation of SMARTS Decision Manager.


 2018 SparklingLogic. All Rights Reserved.