
SMARTS Real-Time Decision Analytics


In this post, I briefly introduce the SMARTS Real-Time Decision Analytics capability, which manages the quality and performance of operational decisions from development, to testing, to production.

Decision performance

H. James Harrington, one of the pioneers of decision performance measurement, once said, “Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” This statement is also true for decision performance.

Measuring decision performance is essential in any industry where a small improvement in a single decision can make a big difference, especially in risk-driven industries such as banking, insurance, and healthcare. Improving decisions in these sectors means continuously adjusting policies, rules, prices, etc. to keep them consistent with business strategy and compliant with regulations.

Decision performance management in SMARTS

SMARTS helps organizations make their operational decisions explicit, so that they can be tested and simulated before implementation, thereby reducing errors and bias. To this end, we added a real-time decision analytics capability to the core decision management platform.

Currently used in financial and insurance services, it helps both business analysts and business users to define dashboards, assess alternative decision strategies, and measure the quality of performance at all stages of the lifecycle of decision management — all with the same interface without switching from one tool to another.

Development. From the start, SMARTS focuses the decision automation effort on tangible business objectives, measured by Key Performance Indicators (KPIs). Analysts and users can define multiple KPIs through graphical interactions and simple, yet powerful formulas. As they capture their decision logic, simply dragging and dropping any attribute into the dashboard pane automatically creates reports. They can customize these distributions, aggregations, and/or rule metrics, as well as the charts to view the results in the dashboard.

Testing and validation. During the testing phase, analysts and users have access to SMARTS’ built-in map-reduce-based simulation environment to measure these metrics against large samples of data. Doing so, they can estimate the KPIs for impact analysis before the actual deployment. And all of this testing work does not require IT to code these metrics, because they are transparently translated by SMARTS.

Execution. By defining a time window for these metrics, business analysts can deploy them seamlessly against production traffic. Real-time decision analytics charts display the measurements and trigger notifications and alerts when certain thresholds are crossed or certain patterns are detected. Notifications can be pushed by email, or generate a ticket in a corporate management system. Also, real-time monitoring allows organizations to react quickly when conditions suddenly change. For example, under-performing strategies can be eliminated and replaced when running a Champion/Challenger experiment.
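To make the mechanism concrete, here is a minimal Python sketch of a time-windowed metric with a threshold alert; the class and names are illustrative, not the SMARTS API:

```python
from collections import deque


class WindowedMetric:
    """Rolling count of matching decisions over a fixed time window,
    with a callback when a threshold is crossed."""

    def __init__(self, window_seconds, threshold, on_alert):
        self.window = window_seconds
        self.threshold = threshold
        self.on_alert = on_alert
        self.events = deque()  # timestamps of matching decisions

    def record(self, matched, now):
        # Drop events that have fallen out of the time window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if matched:
            self.events.append(now)
        if len(self.events) >= self.threshold:
            self.on_alert(len(self.events))


alerts = []
metric = WindowedMetric(window_seconds=60, threshold=3,
                        on_alert=alerts.append)
# Stream of (timestamp, was the transaction declined?) observations.
for t, declined in [(0, True), (10, True), (20, False), (30, True)]:
    metric.record(declined, now=t)
# Three declines within the 60-second window trigger one alert.
```

In production, the callback would push an email or open a ticket in a corporate management system, as described above.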

Use cases

Insurance underwriting. Using insurance underwriting as an example, a risk analyst can look at the applicants that were approved by the rules in production and compare them to the applicants that would be approved using the rules under development. Analyzing the differences between the two sets of results drives the discovery of which rules are missing or need to be adjusted to produce better results or mitigate certain risks.

For example, he or she might discover that 25% of the differences in approval status are due to differences in risk level. This insight leads the risk analyst to focus on adding and/or modifying the risk-related rules. Repeating this analyze-improve cycle shortens the time needed to consider and test different rules until he or she reaches the best tradeoff between results and risks.
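As an illustration of this comparison, here is a hypothetical Python sketch that runs the same applicants through the production rules and the candidate rules, then attributes the approval-status differences by risk level (the rules and data are invented for the example):

```python
from collections import Counter

def production_rules(applicant):
    # Current rules: approve low-risk applicants only.
    return applicant["risk"] == "low"

def candidate_rules(applicant):
    # Rules under development: also approve medium risk with high income.
    return (applicant["risk"] == "low"
            or (applicant["risk"] == "medium" and applicant["income"] >= 80_000))

applicants = [
    {"id": 1, "risk": "low", "income": 50_000},
    {"id": 2, "risk": "medium", "income": 90_000},
    {"id": 3, "risk": "medium", "income": 40_000},
    {"id": 4, "risk": "high", "income": 120_000},
]

# Attribute the approval-status differences to the attribute that drives them.
diffs = Counter(a["risk"] for a in applicants
                if production_rules(a) != candidate_rules(a))
# Here, only the medium-risk segment changes status between the two rule sets.
```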

Fraud detection. Another example, taken from a real customer case, is flash fraud, where decisions had to be changed and new ones rolled out in real time. In this case, the real-time decision analytics capability of SMARTS was essential: the customer could spot trends deviating from the normal situation directly in the dashboard and respond to the flood of fraudulent transactions in the same user interface, all in real time.

Without this built-in capability, the time lag between the identification of fraud and the implementation of corrective actions would have been long, resulting in significant financial losses. In fact, with SMARTS Real-Time Decision Analytics, the fraud management for this client has gone from 15 days to 1 day.

Marketing campaign. The two above examples are taken from financial services but SMARTS real-time decision analytics helps in any context where decision performance could be immediately affected by a change in data, models, or business rules, such as in loan origination, product pricing, or marketing promotion.

In the latter case, SMARTS can help optimize promotions in real time. Let’s say you construct a series of rules for a marketing coupon campaign using the SMARTS Champion/Challenger capability. Based on rules you determine, certain customers will get a discount. Some get 15% off (the current offering — the champion), while others get 20% (a test offering — the challenger). You wonder whether the extra 5% discount leads to more coupons used and more sales generated. With the SMARTS real-time decision analytics environment, you find out the answer as the day progresses. By testing alternatives, you converge on the best coupon strategy with real data and on the fly.
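A rough sketch of such an experiment in Python, with an invented deterministic split and redemption tally (not the SMARTS Champion/Challenger implementation):

```python
import random

def assign_offer(customer_id, challenger_share=0.5, seed=42):
    # Deterministic split: a given customer always sees the same offer.
    rng = random.Random(seed * 1_000_003 + customer_id)
    return "challenger-20%" if rng.random() < challenger_share else "champion-15%"

# Tally redemptions per arm as decisions stream in during the day.
stats = {"champion-15%": [0, 0], "challenger-20%": [0, 0]}  # [redeemed, shown]
for customer_id, redeemed in [(1, True), (2, False), (3, True), (4, False)]:
    arm = assign_offer(customer_id)
    stats[arm][1] += 1
    stats[arm][0] += int(redeemed)

redemption_rates = {arm: (r / n if n else 0.0) for arm, (r, n) in stats.items()}
```

Watching `redemption_rates` evolve over the day is the simplified equivalent of the dashboard convergence described above.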

Conclusion

As part of the decision lifecycle, business analysts obviously start by authoring their decision logic. As they progress, testing rapidly comes to the forefront. To this end, SMARTS integrates predictive data analytics with real-time decision analytics, enabling business analysts and business users to define dashboards and seamlessly associate metrics with the execution environment — using the same tool, the same interface, and just point and click.

Takeaways

  • SMARTS comes with built-in decision analytics — no additional or third-party tool is required
  • You can define metrics on decision results so you can measure and understand how each decision contributes to your organization’s business objectives
  • Decision metrics enable you to assess alternative decision strategies to see which should be kept and which rejected
  • SMARTS add-on for real-time decision analytics lets you monitor the decisions being made and make adjustments on the fly
  • SMARTS’ real-time decision analytics helps in any context where decision performance could be immediately affected by a change in data, models, or business rules

About

Sparkling Logic is a decision management company founded in the San Francisco Bay Area to accelerate how companies leverage data, machine learning, and business rules to automate and improve the quality of enterprise-level decisions.

Sparkling Logic SMARTS is an end-to-end, low-code no-code decision management platform that spans the entire business decision lifecycle, from data import to decision modeling to application production.

Carlos Serrano is Co-Founder, Chief Technology Officer at Sparkling Logic. You can reach him at cserrano@sparklinglogic.com.

Low-Code No-Code Applied to Decision Management



Low-code no-code is not a new concept to Sparkling Logic. From the beginning, the founders wanted to deliver a powerful yet simple product, so that a business analyst could start with data and build decision logic with built-in predictive data analytics and execution decision analytics.

Version after version, they have achieved this vision through SMARTS, an end-to-end decision management platform that features low-code no-code for business analysts and business users to develop and manage decision logic through point-and-click operations.

Low-code development

For business analysts, SMARTS offers a low-code development environment in which users can express decision logic through a point-and-click user interface to connect data, experiment with decisions, and monitor execution, without switching between different tools to get the job done. Depending on the nature of the decision logic at hand and user preferences, business analysts can choose on the fly the most appropriate representation to capture or update their decision logic. The resulting decision logic is seamlessly deployed as a decision service without IT intervention.

To push the simplification even further, Sparkling Logic founders turned to their customers for inspiration on their needs and developed three complementary technologies:

  • RedPen, a patented point-and-click technology that accelerates rule authoring without a need to know rule syntax or involve IT to author the rules
  • BluePen, another patented point-and-click technology to quickly create or leverage a data model and put it into production without involving data scientists or IT
  • A dynamic questionnaire to produce intelligent front-ends that reduce the number of unnecessary or redundant questions

No-code apps

In addition to low-code development capability for business analysts, SMARTS also elevates the decision logic to a simple web form-based interface for untrained business users. They can configure their decision strategies, test the updated decision logic, and promote the vetted changes to the next staging environment — without learning rules syntax.

These business apps offer a business abstraction for most tasks available in SMARTS related to configuration, testing and simulation, invocation and deployment management, as well as administration.

For example, credit risk specialists can configure loans, credit cards, and other banking products, and pricing specialists can control pricing tables, through a custom business app specific to their industry. The no-code business app enables business users to cope with environment changes whether they are related to internal policies, competition pressure, or industry regulation.

Furthermore, SMARTS tasks can also be automated through orchestration scripts. Business users can trigger these scripts through the click of a button, or schedule them to be performed automatically and seamlessly.

About

Sparkling Logic is a decision management company founded in the Bay Area to accelerate how companies leverage internal and external data and models to automate and improve the quality of enterprise-level decisions.

Sparkling Logic SMARTS is an end-to-end, low-code no-code decision management platform that spans the entire business decision lifecycle, from data import to decision modeling to application production.

Hassan Lâasri is a data strategy consultant, now leading marketing for Sparkling Logic. You can reach him at hlaasri@sparklinglogic.com.

Best-in-class Series: Decision Authoring


You think that authoring business rules is difficult? If you attend a decision management show, you will likely hear many business analysts share their frustration while struggling with their first rules project. For instance, I remember vividly everlasting conversations aimed at defining what a rule is. As a consequence, I made it my personal mission to address this challenge. Business analysts should be able to log into their decision management tool and feel confident.

Clearly, authoring decision logic is not rocket science, but the tooling you use plays an important role. In other words, if that tool looks like a software development environment, it will only feel intuitive to software developers, which, as a business person, you might consider rocket science! Overall, look for advanced capabilities that support what you need, as a business analyst:

  • Get started writing Business Rules intuitively in the context of data
  • Adapt your view depending on the task at hand, switching from text to a decision table, tree or graph
  • Combine business rules with very large MS Excel spreadsheets for efficient lookups
  • Design a business app, for those that need to access the decision logic in their own terms

Intuitive authoring in the context of data

As a business analyst, your business is your comfort zone. For example, an insurance business analyst will speak about applications, eligibility criteria, claims, and more. Asking that person to think in terms of “IF” and “THEN” and rules cannot feel natural out of the box. The solution is to bridge those two worlds by overlaying decision logic in the context of data. That is to say that you can click on Joe’s age to indicate that he does not meet the policy’s age requirement. Et voila!

Change representation at any time

Having to decide what representation is the most appropriate can be paralyzing. Often, requirements are bound to evolve over time. As a result, what seemed like a good candidate for a decision table may end up filled with scattered data. Assuming your tool locks you into a single representation after that initial choice, the cost of a bad choice becomes unbearable. A dynamic representation removes that dilemma. Just get started, and switch representation if and when you need a different way to interact. The nature of the rules may very well dictate a natural representation. For the convenience of editing, it may make more sense to change that view temporarily. As a compliance officer, you may prefer to look at a graph that documents all the paths that lead to an adverse action.

Spreadsheet requirements

As a business analyst, I bet you have to deal with quite a few spreadsheets. Once, I met a lady who managed around 10,000 spreadsheets for an insurance company! While many spreadsheets contain requirements that need to turn into business rules, many do not. In this lady’s case, a great number of these spreadsheets contained rating tables, which were regularly updated in that format. When that is the case, you want to keep those very large MS Excel spreadsheets as spreadsheets. We call them lookup models, for which you only supply the formula to read the data: which columns you need to match and how, and which columns you need to return. In the end, when you need efficient lookups, you need efficient lookups. Not every requirement needs to turn into a rule.

Business users may need business apps, not business rules authoring

As intuitive as a decision management tool can be, it may never meet the needs of a true business person. The bells and whistles that business analysts need may be overwhelming for the financier or underwriter who needs access to the decision logic. In short, that person needs the decision logic abstracted into a business app. As the business analyst, you have full control over the capabilities you expose. Should you expose some thresholds? Or expose all rejection criteria? In some cases, you may not need to expose any of the decision logic, but rather give him or her the ability to run a simulation, and the authority to promote the decision to Production. Once again, business rules are an important part of the decision management project, but not everything has to do with authoring ‘just business rules’.

I will actually present a webinar in January, and would love to have you join me.

DecisionCAMP 2020 – Best Practices on Making Decisions



With the world on a partial lockdown due to COVID-19, we had to be creative. DecisionCAMP 2020 takes place virtually this year, through Zoom presentations and Slack interactions. The show invited me to present ahead of the event.
Watch my DecisionCAMP 2020 presentation now

I decided to tackle one of the most common rules designs. Though I hope that you will implement it in SMARTS, it is technology-agnostic. As such, you could adopt it regardless of the decision management system that you use.

A decision management system obviously makes decisions. These decisions can boil down to a yes or no answer. In many circumstances, the decision includes several sub-components, in addition to that primary decision. For this design pattern, however, I only focus on the primary decision. Note that you could apply the same design to any sub-decision as well. This is a limitation of the presentation, not one of the design.

In an underwriting system, for example, the final approval derives from many different data points. The system looks at self-disclosures regarding the driver(s) and the vehicle(s), but also third-party data sources like DMV reports. If the rules make that decision as they go through the available data, there is a risk of an inadvertent decision override. Hence the need for a design pattern that collects these point decisions, or intermediate decisions, and makes the final decision in the end. In this presentation, I illustrate how to do it in a simple and effective manner.
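The pattern can be sketched in a few lines of Python; the rules and field names are hypothetical, but the shape (record point decisions, decide once at the end) is the point:

```python
def evaluate(application):
    """Collect intermediate (point) decisions first, then derive the
    final approval in one place, so no rule silently overrides another."""
    point_decisions = []

    # Each rule records its verdict instead of setting the final answer.
    if application["dmv_points"] > 6:
        point_decisions.append(("dmv", "decline"))
    if application["vehicle_age"] > 25:
        point_decisions.append(("vehicle", "refer"))
    if not application["disclosures_match"]:
        point_decisions.append(("disclosure", "decline"))

    # The final decision is made once, from the collected verdicts.
    verdicts = {verdict for _, verdict in point_decisions}
    if "decline" in verdicts:
        return "declined", point_decisions
    if "refer" in verdicts:
        return "referred", point_decisions
    return "approved", point_decisions

status, reasons = evaluate({"dmv_points": 8, "vehicle_age": 10,
                            "disclosures_match": True})
# → ("declined", [("dmv", "decline")])
```

A side benefit of the pattern is that the collected point decisions double as reason codes for the final verdict.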

Watch my DecisionCAMP 2020 presentation now

DM Challenge – Dynamic Loan Evaluation


Let’s take on another challenge from the Decision Management community. The Dynamic Loan Evaluation challenge looks very applicable to what our customers do. In this scenario, the business logic is very simple, but data is uncovered over time. As a result, the underwriting decision changes each time new information comes to light. Overall, this project illustrates the combination of a point decision, the loan origination, and a changing series of facts.
Click here to watch the demo


Key takeaways from this dynamic loan evaluation challenge

The business rules behind this loan evaluation do not present much difficulty. While a real system typically includes many eligibility criteria, this evaluation relies solely on a risk exposure measurement. If the assets exceed the obligations, we approve the loan. However, if the obligations exceed the assets, we decline the loan.

As assets and debts are uncovered, the balance tips from one side to the other. While this feels like a dynamic interaction, I prefer to design loan origination systems as stateless services. This means that you expose all the information available, and the business rules render the verdict. In short, the decision service does not keep track of the history.
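A stateless decision service can be sketched as a pure function: the caller submits everything known so far, and the rules render the verdict (illustrative Python, not the actual service):

```python
def evaluate_loan(assets, obligations):
    """Stateless decision service: the caller submits the full picture
    on every call; nothing is remembered between submissions."""
    exposure = sum(obligations) - sum(assets)
    return "declined" if exposure > 0 else "approved"

# Each submission carries everything known so far.
assert evaluate_loan(assets=[200_000], obligations=[150_000]) == "approved"
# A new debt is uncovered: append it and resubmit the whole application.
assert evaluate_loan(assets=[200_000], obligations=[150_000, 80_000]) == "declined"
```

The dynamic behavior lives entirely in the caller, which accumulates facts and resubmits, exactly as the dynamic questionnaire does below.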

On the other hand, the origination system must handle the dynamic aspect of the loan. In this challenge, I use a dynamic questionnaire to capture the data. The dynamic questionnaire collects all borrower and guarantor information over time. As the loan agent, I append the new facts to the in-flight application, and submit the whole thing for evaluation.

I like to separate cleanly the business logic from the questionnaire logic. This project illustrates this design perfectly.
Click here to watch the demo

If you want to try building the demo by yourself, feel free to ask for a free evaluation.

DM Challenge – Pay-as-you-go Pricing demo


Our friend Jacob posted a Decision Management challenge this month. SaaS pricing can prove to be a challenge to calculate when combining volume discounts and special incentives. In this challenge, I demonstrate how to take advantage of test cases to safely write these pricing rules.
Click here to watch the demo

Key takeaways from this challenge

As I hinted, testing against expected outcomes saves a ton of time during rule writing. While I ran into several typos in my data entry, I felt comfortable that my rules were correct (after correcting the data entry).

Seeing is believing. Being able to see what the rules assign to each tier allows for a quick understanding of what was left to do.

Finally, without intermediate calculations in place, completing this challenge would likely have taken me a lot more time (and a headache). It helps a great deal to rely on intermediate calculations when rules end up somewhat convoluted.

While this is a simple pricing use case, business rules can quickly create a combinatorial explosion of scenarios. Decompose your problem into a few simple steps, and check your progress until you cover all of your test cases.
1. Determine which tier you start in
2. Move extra units to the next tier
3. Apply the special incentive if applicable
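Those three steps can be sketched in Python with invented tier numbers:

```python
def price(units, tiers, incentive=0.0):
    """Tiered pricing following the three steps above: find the starting
    tier, spill extra units into the next tier, then apply the incentive."""
    total = 0.0
    remaining = units
    for tier_size, unit_price in tiers:
        in_tier = remaining if tier_size is None else min(remaining, tier_size)
        total += in_tier * unit_price
        remaining -= in_tier
        if remaining == 0:
            break
    return total * (1.0 - incentive)

# First 100 units at $10, the next 100 at $8, anything beyond at $6.
tiers = [(100, 10.0), (100, 8.0), (None, 6.0)]
assert price(150, tiers) == 100 * 10.0 + 50 * 8.0            # 1400.0
assert price(150, tiers, incentive=0.10) == 1400.0 * 0.9     # 1260.0
```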

Click here to watch the demo

If you want to try building the demo by yourself, feel free to ask for a free evaluation.

Technical Series: Decision Engine Performance


One of the subjects that frequently comes up when considering decision engines is performance, and, more broadly, the performance characterization of decisions: how do decision engines cope with high-throughput, low-response, high-concurrency scenarios?

In fact, the whole world of forward-chaining business rules engines (based on the RETE algorithm) was able to develop largely because of the capability of that algorithm to cope with some of the scalability challenges that were seen. If you want to know more about the principles of RETE, see here.

However, there is significantly more to decision engine performance than the RETE algorithm.

A modern decision engine, such as SMARTS, will be exercised in a variety of modes:

  • Interactive invocations where the caller sends data or documents to be decided on, and waits for the decision results
  • Batch invocations where the caller streams large data or sets of documents through the decision engine to get the results
  • Simulation invocations where the caller streams large data and sets of documents through the decision engine to get both the decision results and decision analytics computations made on them

Let’s first look at performance as purely execution performance at the decision engine level.

The SMARTS decision engine allows business users to implement decisions using a combination of decision logic representations:

  • Decision flows
  • Rules grouped in rule sets
  • Lookup models
  • Predictive models

These different representations provide different perspectives of the logic, so you can pick the optimal one for implementing, reviewing, and optimizing the decision. For example, SMARTS allows you to cleanly separate the flow of control within your decision from your smaller decision steps – check that the document data is valid first, then apply knock-out rules, and so on.

However, SMARTS also does something special for these representations: it executes them with dedicated engines tuned for high performance on their specific task. For example, if you were to implement a predictive model on top of a rules engine, the result would typically be sub-par. In SMARTS, by contrast, each predictive model is executed by a dedicated and optimized execution engine.

Focusing on what is traditionally called business rules, SMARTS provides:

  • A compiled sequential rules engine
  • A compiled Rete-NT inference rules engine
  • A fully indexed lookup model engine

These are different engines, and apply to different use cases, and they are optimized for their specific application.

Compiled Sequential Rules Engine

This engine simply takes the rules in the rule set, orders them by explicit or implicit priority, and evaluates the rule guards and premises in the resulting order. Once a rule has its guard and premise evaluated to true, it fires. If the rule set is exclusive, the rule set evaluation is over, and if not, the next rule in the ordered list is evaluated.
There is a little more to it than that, but that’s the gist.
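In Python pseudocode terms, the sequential evaluation described above looks roughly like this (an illustrative sketch, not the compiled engine):

```python
def run_rule_set(rules, facts, exclusive=True):
    """Sequential evaluation: order by priority (lower number fires
    first), check guard then premise, fire on a match, and stop after
    the first firing when the rule set is exclusive."""
    fired = []
    for rule in sorted(rules, key=lambda r: r["priority"]):
        guard = rule.get("guard", lambda f: True)
        if guard(facts) and rule["premise"](facts):
            rule["action"](facts)
            fired.append(rule["name"])
            if exclusive:
                break
    return fired

rules = [
    {"name": "vip-discount", "priority": 1,
     "premise": lambda f: f["vip"],
     "action": lambda f: f.update(discount=0.2)},
    {"name": "bulk-discount", "priority": 2,
     "premise": lambda f: f["quantity"] >= 100,
     "action": lambda f: f.update(discount=0.1)},
]

facts = {"vip": False, "quantity": 250}
# In an exclusive rule set, only the first matching rule fires.
assert run_rule_set(rules, facts) == ["bulk-discount"]
assert facts["discount"] == 0.1
```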

The important point is that there is no interpreter involved – this is executed in code compiled to the bytecode of the chosen architecture (Java or .NET or .NET Core). So, this executes at bytecode speed and gets optimized by the same JITs as any code in the same architecture.

This yields great performance when the number of transactions is very large and the average number of rules evaluated (i.e. having their guards and premises evaluated) is not very large. For example, we’ve implemented batch fraud systems processing over 1 billion records in less than 45 minutes on 4-core laptops.

When the number of rules becomes very large, in the 100K+ range in a single rule set, the cost of evaluating premises that do not match starts getting too high. Our experience is that, with that number of rules, your problem is very likely a lookup problem and would be better served by a lookup model (as an aside, lookup models also provide superior management for large numbers of rules). If that is not the case, then a Rete-NT execution would be better.

Compiled Rete-NT Inference Rules Engine

This engine takes the rules within the rule set and converts them to a network following the approach described in this blog post. This approach inverts the paradigm – in RETE, the data is propagated through the network, and the rules ready to fire are efficiently found and put in an agenda. The highest-priority rule in the agenda is retrieved and executed. The corresponding changes get propagated into the network, the agenda updated, and so on, until there is nothing left in the agenda.

One important distinction with respect to the sequential engine is that in the case of the Rete-NT engine, a rule that already fired may well be put back in the agenda. This capability is sometimes required by the problem being solved.
Again, there is much more to it, but this is the principle.
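To show the agenda behavior, here is a naive Python match/resolve/act loop; it re-matches every cycle instead of maintaining a Rete network, so it illustrates the principle, not the optimization:

```python
def forward_chain(rules, facts, max_cycles=100):
    """Naive match/resolve/act loop: every cycle re-matches all rules,
    picks the highest-priority activation (lowest number), fires it, and
    lets the changes re-enter the match phase, so a rule that already
    fired may fire again."""
    fired = []
    for _ in range(max_cycles):
        agenda = [r for r in rules
                  if r["premise"](facts) and r["changes"](facts)]
        if not agenda:
            break
        rule = min(agenda, key=lambda r: r["priority"])
        rule["action"](facts)
        fired.append(rule["name"])
    return fired

rules = [
    {"name": "escalate", "priority": 1,
     "premise": lambda f: f["alerts"] >= 3,
     "changes": lambda f: f["level"] != "high",
     "action": lambda f: f.update(level="high")},
    {"name": "count-alert", "priority": 2,
     "premise": lambda f: f["events"] > 0,
     "changes": lambda f: True,
     "action": lambda f: f.update(events=f["events"] - 1,
                                  alerts=f["alerts"] + 1)},
]

facts = {"events": 3, "alerts": 0, "level": "low"}
fired = forward_chain(rules, facts)
# count-alert fires three times; its changes then activate escalate.
```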

SMARTS implements the Rete-NT algorithm – the latest optimized version of the algorithm, provided by its inventor, Charles Forgy, who serves on the Sparkling Logic Advisory Board. Rete-NT has been benchmarked to be between 10 and 100 times faster than previous versions of the algorithm in inference engine tests. In addition, SMARTS compiles to the bytecode of the chosen architecture everything that is not purely the algorithm, allowing all expressions to be evaluated at bytecode speed.

In the case where your number of rules per rule set is very large, in the 100K+ range, and you are not dealing with a lookup model, the RETE-NT engine yields increasingly better performance compared to the sequential engine. SMARTS has been used with 200k+ rules in rule sets – these rules end up exercising 100s of fields in your object model, and the Rete-NT advantages make these rule sets perform better than the pure sequential engine.

Fully Indexed Lookup Model Engine

There are numerous cases where a rule set with a large number of rules is really just selecting from a large number of possibilities. For example, going through a list of medication codes to find those matching parametrized conditions.

In many systems, that is done outside a decision engine, but in some cases it makes sense to make it part of the decision: for example, when it is managed by the same people at the same time, is intimately tied to the rest of the decision logic, and needs to go through its lifecycle exactly like the rules.

SMARTS provides a Lookup Model specifically for those cases: you specify the various possibilities as data, along with a query that is used to filter from the data the subset that matches the parameters being used. At a simplified level, this engine works by essentially doing what database engines do: indexing the data as much as possible and converting the query into a search through indexes. Because the search is fully indexed, scalability with the number of possibilities is excellent, and superior to what you can get with the Sequential or Rete-NT engine.
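The indexing idea can be sketched in Python: build a hash index on the match columns once, then answer each query with a lookup instead of a scan (the data and column names are invented):

```python
from collections import defaultdict

def build_index(rows, key_columns):
    """Index the lookup data on the match columns once; afterwards each
    query is a hash lookup rather than a scan over every row."""
    index = defaultdict(list)
    for row in rows:
        index[tuple(row[c] for c in key_columns)].append(row)
    return index

medications = [
    {"code": "A01", "form": "tablet", "strength": "10mg", "name": "alphadol"},
    {"code": "A01", "form": "liquid", "strength": "5mg", "name": "alphadol syrup"},
    {"code": "B02", "form": "tablet", "strength": "20mg", "name": "betarex"},
]
by_code_form = build_index(medications, ["code", "form"])

# Match on (code, form) and return the full rows, without scanning the table.
matches = by_code_form[("A01", "tablet")]
assert [m["name"] for m in matches] == ["alphadol"]
```

Lookup time is then independent of the number of rows, which is why this approach scales better than evaluating one rule per possibility.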

The SMARTS decision engine can also run concurrent branches of the decision logic, implemented in any combination of the engines above. That allows a single invocation to the decision engine to execute on more than 1 core of your system. There are heavy computation cases in which such an approach yields performance benefits.

Of course, performance is also impacted by deployment architecture choices. We’ll explore that topic in the next post in this series.

Technical Series: Data Integration for Decision Management


Integration with data is key to a successful decision application: Decision Management Systems (DMS) benefit from leveraging data to develop, test and optimize high-value decisions.

This blog post focuses on the usage of data by the DMS for the development, testing and optimization of automated decisions.


© 2021 Sparkling Logic. All Rights Reserved.