
How to implement a Rating Engine


Rating engine, pricing engine, compensation calculation, fee calculation, claims calculation, and many more types of projects aim at calculating a fee or cost. They typically share some complexity in the fine print.

Rating engines have been implemented with different technologies over the years. The debate continues: can business rules be used for a rating engine? My answer is yes and no. I strongly believe that we need the full power of a decision management system. Let me explain.

When the rating engine logic is straightforward, business rules will do a good job. In the pay-as-you-go demo, we could summarize the pricing strategy in a few business rules. In real-life projects, compensation calculations and benefit calculations have also been implemented in a modest number of rules. As a rule of thumb (no pun intended), I would say that if you can express your rating logic as a formula, with or without a few exceptions here and there, you are in good shape using business rules only.
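To make the idea concrete, here is a minimal sketch in Python (not a real rule language) of rating logic expressed as a base formula plus a handful of exception rules. All factors, surcharges, and thresholds below are invented for illustration; a real project would keep them in managed business rules.

```python
# Minimal sketch: a rating formula plus a few exception rules.
# Every factor and threshold below is invented for illustration.

def base_premium(age: int, vehicle_value: float) -> float:
    """Base formula: a flat rate plus value- and age-driven factors."""
    premium = 300.0 + 0.02 * vehicle_value
    if age < 25:
        premium *= 1.35          # hypothetical young-driver surcharge
    return premium

def apply_exceptions(premium: float, state: str, good_student: bool) -> float:
    """A few exception 'rules' layered on top of the formula."""
    if state == "CA":
        premium *= 1.10          # hypothetical state-specific adjustment
    if good_student:
        premium -= 50.0          # hypothetical discount
    return max(premium, 0.0)

quote = apply_exceptions(base_premium(age=22, vehicle_value=18000), "CA", True)
print(round(quote, 2))
```

As long as the logic stays at this level of complexity, a handful of business rules captures it comfortably.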

A business rule engine is not enough

The limitation, though, appears when rate tables grow significantly. Some of these spreadsheets include tens of thousands of rows! This combinatorial explosion makes sense when you think of having rates specific to:

  • 50 states,
  • maybe up to 42,000 zip codes,
  • 2 or 3 genders (some states cannot legally discriminate by gender, so you may deal with gender-neutral pricing in addition to male and female),
  • several risk score bins,
  • many, many, many age options…

Writing all of these rules by hand may be a significant task. Rule import can help, of course, but keep in mind that it is likely a one-time solution. Updating and maintaining these rates over time will become a painful task, less painful than coding of course, but still painful. When actuaries come up with an updated rate table, finding the right line items can be a tricky exercise. In my experience, rare is the business expert who provides a color-coded spreadsheet highlighting only the changes. You certainly do not want to be in the business of comparing the old and new rates line by line.

However, the great advantage of business rules is their extreme flexibility. Assuming that the rating tables per se are taken care of, which I will cover in my next point, overrides for specific states or product options remain trivial when you complement the rating engine with business rules. Often, I see projects in which these overrides happen while or after retrieving rates. There is no end to the flexibility you can implement as business rules once rates are retrieved.

My first takeaway: I do like using business rules for fine-tuning rates and dealing with exceptions.

Rating Engines are mostly lookups

For rates that depend on this (sometimes) enormous number of lines to pick from, I prefer using lookup models. Lookup models are spreadsheets that can be executed to return the matching columns. Let's say you import a spreadsheet with state, age, gender, and score range. The lookup model will retrieve the rate that matches all 4 columns. You are not limited to returning one column, though. In addition to the rate, the spreadsheet could return a volume discount or any additional information.

The main advantage of using lookup models, in my opinion, is that there is no manual translation from spreadsheet to executable lookup model. The implementation effort is in the interface. Do you pass the actual gender, for example, or do you translate its value to ‘neutral’ for those states that prevent you from using gender? Do you return a single rate, or is it possible to return multiple rates?

While rate tables can be as large as your business experts might dream, lookup models offer a great advantage in performance. They are indexed and can return rates very quickly, largely independently of size.
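As a rough mental model only (not SMARTS's actual lookup implementation), you can picture a lookup model as a table indexed on its input columns; a Python dictionary keyed on those columns gives near constant-time retrieval no matter how many rows the spreadsheet holds. The column names, rows, and the list of gender-neutral states below are all invented for illustration.

```python
import csv
from io import StringIO

# Hypothetical rate sheet with four input columns and two output columns.
RATE_SHEET = """state,age_band,gender,score_band,rate,volume_discount
CA,25-34,neutral,600-699,112.50,0.00
CA,25-34,neutral,700-799,98.00,0.02
TX,25-34,male,700-799,91.25,0.01
"""

# Build an index keyed on the input columns: lookups stay fast
# regardless of how many rows the spreadsheet contains.
index = {}
for row in csv.DictReader(StringIO(RATE_SHEET)):
    key = (row["state"], row["age_band"], row["gender"], row["score_band"])
    index[key] = {"rate": float(row["rate"]),
                  "volume_discount": float(row["volume_discount"])}

GENDER_NEUTRAL_STATES = {"CA"}  # hypothetical list

def normalize_gender(state: str, gender: str) -> str:
    """Translate gender to 'neutral' where gender-based pricing is not allowed."""
    return "neutral" if state in GENDER_NEUTRAL_STATES else gender

def lookup(state, age_band, gender, score_band):
    """Return all output columns matching the four inputs, or None."""
    key = (state, age_band, normalize_gender(state, gender), score_band)
    return index.get(key)

print(lookup("CA", "25-34", "female", "700-799"))
# -> {'rate': 98.0, 'volume_discount': 0.02}
```

The `normalize_gender` helper is an example of the interface question raised above: the translation happens before the lookup, not inside the rate table itself.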

My second takeaway: I do like using lookup models for retrieving rates.

Addressing evolving rates

As I mentioned before, the majority of the pain typically lies in maintaining the rate tables in the rating engine. With lookup models, this pain disappears. Updating rates is as easy as uploading a spreadsheet to the system. Overall, the two key aspects that I love about these mechanics both deal with time travel, seen from two different perspectives.

(1) The convenience of version tracking

Although rates change over time, you may need to resurrect past rates. Many reasons come to mind. You might need to justify what your rates were at a given time. Or, in an emergency, you might need to roll back bad rates that made their way into production.

Thanks to the underlying versioning system and release management, these past rates are just a click away. Time-travel to the January 2020 release of your rating engine to resurrect pre-pandemic rates, or just to run simulations. You can also use the version history to promote that specific version of the spreadsheet to become current again. You will never lose any iteration of that spreadsheet that went into production. Peace of mind!

(2) The flexibility of time-sensitive rates

The introduction of new rates into your rating table is likely to follow a specific schedule. Sometimes rates are simply adjusted on the spot. But, more likely, they will start on a specific date, like January 1st. Rather than scheduling a job that will update the rates at midnight, it makes more sense to upload these rates upfront and specify their effective dates.

Like business rules, rate tables can follow effective dates that apply to the entire sheet. Using the full power of business rules, you can activate the 2021 rates on January 1st for one group of states, and at a later date for another group.
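Here is a minimal sketch, under assumptions of my own, of how effective-dated rate sheets with per-state activation dates could be resolved at decision time. The versions, dates, and rates are invented; a decision management system handles this declaratively rather than in hand-written code.

```python
from datetime import date

# Hypothetical rate sheet versions, each with its own rate and
# per-state activation dates (everything below is invented).
RATE_VERSIONS = [
    {"rate": 98.00,  "active_from": {"default": date(2020, 1, 1)}},
    {"rate": 104.50, "active_from": {"CA": date(2021, 1, 1),
                                     "TX": date(2021, 3, 1)}},
]

def rate_as_of(state: str, reference_date: date) -> float:
    """Return the most recently activated rate for this state on that date."""
    best_date, best_rate = None, None
    for version in RATE_VERSIONS:
        active = version["active_from"].get(
            state, version["active_from"].get("default"))
        if active is not None and active <= reference_date:
            if best_date is None or active > best_date:
                best_date, best_rate = active, version["rate"]
    return best_rate

print(rate_as_of("TX", date(2021, 2, 1)))   # 98.0  (2021 sheet not yet active)
print(rate_as_of("TX", date(2021, 3, 15)))  # 104.5
```

The same mechanism answers the "which clock?" question below: whatever date you pass as the reference date, invocation date or delivery date, determines which sheet applies.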

I particularly appreciate that you have full control over the clock. Do you switch on January 1st at midnight at your headquarters? Should you adjust for the local time zone in each state? Do you consider invocation time? Or do you apply the proper rates based on delivery date? Once again, the sky is the limit. I have yet to see a scenario that could not be implemented with a decision management system.

My third takeaway: I do like using lookup models for managing rates over time.

In conclusion, I highly recommend you consider a decision management system when implementing a rating engine. Business rules may have been limited in the past. However, decision management systems can certainly take you there now!

3 Use Cases for Dynamic Questionnaires


By definition, dynamic questionnaires generate user interfaces that collect data. Their main characteristics include the following:

  1. Primarily, they apply reflexive logic for dependencies between questions
  2. Additionally, they enforce validation rules to ensure quality of data input
  3. Finally, they collaborate seamlessly with back-end decision logic

As one would expect, dynamic questionnaires have been leveraged primarily for business applications. However, we also use them quite often in other use cases that may surprise you.

#1 Dynamic Questionnaires for Business Applications


In the Dynamic Loan Evaluation demo, we presented an example of a business application that uses dynamic questionnaires. In no time, a business form turns into an online questionnaire that collects the borrower’s information and renders the underwriting decision.

While this academic example is limited in complexity, we have seen real-life questionnaires for finance, insurance, and healthcare that contain hundreds of questions.

Let’s say that you report past violations. We may ask, for each violation, whether it was a ticket or an accident. If it was an accident, we may ask whether it was at fault and whether there were fatalities. These nested questions, also called reflexive logic, justify the use of dynamic questionnaire technology.
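For illustration, here is a minimal sketch of that reflexive logic in Python: which follow-up questions appear depends on earlier answers. The question wording and field names are invented; a dynamic questionnaire expresses these dependencies declaratively rather than in code.

```python
# Minimal sketch of reflexive questionnaire logic: the follow-up questions
# shown for one reported violation depend on the answers already given.

def violation_questions(answers: dict) -> list:
    """Return the questions to display for one reported violation."""
    questions = ["Was this a ticket or an accident?"]
    if answers.get("violation_type") == "accident":
        questions.append("Was the accident at fault?")
        questions.append("Were there any fatalities?")
    return questions

# A ticket stops at the first question...
print(violation_questions({"violation_type": "ticket"}))
# ...while an accident unfolds the nested follow-ups.
print(violation_questions({"violation_type": "accident"}))
```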

#2 Dynamic Questionnaires for Implementing / Testing Multi-Step Processes

In order to navigate from page to page, a sort of business process engine handles these transitions. This engine invokes decision services when appropriate. All in all, it provides everything that long-running transactions need. When questionnaires are long, applicants may get interrupted midway through the paperwork. You certainly do not want to force your users to start over from the top. Whatever they have already filled in can be retrieved, and the applicant can continue filling in the form at any point in time.

From time to time, we encounter a scenario in which a separate business process handles the overall orchestration. While there is no issue at all with integrating a separate business process, that business process is not always ready from the get-go. When the business process is not yet available, or when it cannot be easily tested, I use dynamic questionnaires for testing. That gives me a convenient environment to navigate the series of steps needed and to select the profile for the data that needs to be pulled.

#3 Dynamic Questionnaires for Generating Business Rules

Last but not least, I use dynamic questionnaires for capturing business rules. Most decision projects author and maintain business rules in our SMARTS environment. Yet, we also encounter a scenario in which multiple rule configurations follow the exact same structure, activating or deactivating rules, and possibly setting parameter values.

In this example, I have several insurance products available. For each product, I can configure eligibility rules using point-and-click in the questionnaire. As product owner, I am constrained by the questionnaire to valid options only. Note that I could also leverage business rules on the back-end to ensure that my configuration is valid!

We can design the questionnaire to reflect the product specification sheets that you might otherwise maintain as a PDF, for example.

I hope that these use cases will give you some ideas on how to leverage dynamic questionnaires.

DecisionCAMP 2020 – Best Practices on Making Decisions



With the world on a partial lockdown due to COVID-19, we had to be creative. DecisionCAMP 2020 takes place virtually this year, through Zoom presentations and Slack interactions. The organizers invited me to present ahead of the event.
Watch my DecisionCAMP 2020 presentation now

I decided to tackle one of the most common rule design patterns. Though I hope that you will implement it in SMARTS, it is technology-agnostic. As such, you can adopt it regardless of the decision management system that you use.

A decision management system obviously makes decisions. These decisions can boil down to a yes or no answer. In many circumstances, the decision includes several sub-components in addition to that primary decision. For this design pattern, however, I focus only on the primary decision. Note that you could apply the same design to any sub-decision as well. This is a limitation of the presentation, not of the design.

In an underwriting system, for example, the final approval derives from many different data points. The system looks at self-disclosures regarding the driver(s) and the vehicle(s), but also at third-party data sources like DMV reports. If the rules make that decision as they go through the available data, there is a risk of an inadvertent decision override. Hence the need for a design pattern that collects these point decisions, or intermediate decisions, and makes the final decision at the end. In this presentation, I illustrate how to do it in a simple and effective manner.
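To sketch the pattern (this is my own illustrative Python, not the SMARTS implementation shown in the presentation), rules append point decisions to a shared list as they examine the data, and a final step derives the overall decision once everything has been reviewed. The field names and reasons are invented.

```python
# Minimal sketch of the "collect intermediate decisions, decide at the end"
# pattern: no rule sets the final outcome directly, so none can
# inadvertently override another.

intermediate_decisions = []

def assess_driver(driver: dict) -> None:
    if driver.get("dui"):
        intermediate_decisions.append(("decline", "DUI on DMV report"))
    elif driver.get("violations", 0) > 3:
        intermediate_decisions.append(("refer", "too many violations"))

def assess_vehicle(vehicle: dict) -> None:
    if vehicle.get("salvage_title"):
        intermediate_decisions.append(("refer", "salvage title"))

def final_decision() -> str:
    """Worst outcome wins: decline > refer > approve."""
    outcomes = {outcome for outcome, _ in intermediate_decisions}
    if "decline" in outcomes:
        return "decline"
    if "refer" in outcomes:
        return "refer"
    return "approve"

assess_driver({"dui": False, "violations": 5})
assess_vehicle({"salvage_title": False})
print(final_decision(), intermediate_decisions)
```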

Watch my DecisionCAMP 2020 presentation now

Technical Series: Business Performance


In our last two blog posts in this series, we discussed decision engine performance and how performance is impacted by deployment architecture choices. In addition to those considerations, you should also focus on business decision performance, the topic of this post.

Central to SMARTS’s approach to decision design is the idea that you need a strong focus on the expected business performance of your decision. The business performance of the decision is measured by multiple KPIs, defined by the different business stakeholders, that characterize how the decision contributes to the business.

Decision Analytics for Simulations

SMARTS provides you with fully integrated decision analytics, including aggregates, reports, and dashboards that you can configure to track those KPIs. As you implement and optimize the decision logic, you can run simulations to assess the impact your changes have on the decision, and take appropriate action. This allows you to ensure that the business performance of your decision is actually what you want before you deploy it in production.

Real-time Decision Metrics

SMARTS also provides you with streaming decision analytics, allowing you to monitor the same KPIs on the live decisions as they are deployed, and to specify alerts that trigger if those KPIs deviate from limits you set. This gives you the peace of mind that you are always kept up to date on how well the deployed decision is behaving, and that you can take early action to update it should the situation require it.

Champion/Challenger Experiments

There are also cases where it is not possible to know the impact of a change in advance. You may be exploring new decision options you had never attempted before. SMARTS allows you to deploy your decision in an experimental mode: part of your invocations are routed to the new “experimental” version and the rest to the proven one, and you monitor their relative performance to identify whether the experimental version is doing better than the proven one. In many financial services areas, this is called champion/challenger; in marketing or design, it is called A/B testing. With this approach, you can gradually and safely introduce decision optimizations that lead to better and better business performance.
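As an illustration only (not SMARTS’s actual routing mechanism), a champion/challenger split can be pictured as a stable hash of the transaction id that sends a fixed share of invocations to the challenger. The share and transaction ids below are invented.

```python
import hashlib

# Minimal sketch of champion/challenger routing: a stable hash of the
# transaction id sends a fixed share of invocations to the challenger.

CHALLENGER_SHARE = 0.10  # hypothetical: 10% of traffic to the experiment

def route(transaction_id: str) -> str:
    digest = hashlib.sha256(transaction_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # stable value in [0, 1]
    return "challenger" if bucket < CHALLENGER_SHARE else "champion"

# Each version's KPIs would then be monitored separately to decide
# whether the challenger should be promoted.
for tx in ("app-1001", "app-1002", "app-1003"):
    print(tx, "->", route(tx))
```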

In summary, when considering performance of decision management systems it is critical to consider the topic from a business perspective as well as a technical perspective. We hope this series has helped clarify performance related issues pertaining to decision management.

Best Practices for Decision Management


Naturally, the decision management community constantly demands more best practices. We have delivered tips for writing and organizing business rules, and topics of that nature. However, success often depends on crucial early choices. So, let’s take a step back and discuss how to start a decision management project.

Join our upcoming webinar to explore which initial steps will ensure success. In particular, we will focus on data and the data model, business rule requirements, and business performance.

Anyone with an interest in decision management will benefit from this conversation. As a newcomer, you will hear practical advice for the first project you have to tackle. As an experienced practitioner, you will enjoy our design tips.

Best Practices Series: Rules for Data Validation


Why should you care about data validation? Your decisions can only be as good as the data they apply to. Consequently, by improving the quality of the data you apply your decisions to, you will improve the quality of your decisions. There is inherently a strong bond between data and decisions. Our previous post highlights the importance of data in decision management. In this post, we will focus on strategies to improve data validation.

As a matter of fact, there are many forms of bad data: incorrect, missing, fraudulent, and so on. Data validation needs to address all of these forms of problems you might encounter. Let’s take a deeper look at those.
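For illustration, here is a minimal sketch of validation rules that catch a few of these forms of bad data. The field names, ranges, and thresholds are invented assumptions, not rules from any real project.

```python
# Minimal sketch of validation rules addressing a few forms of bad data
# (missing, incorrect, suspicious). All fields and thresholds are invented.

def validate(application: dict) -> list:
    issues = []
    # Missing data
    for field in ("name", "date_of_birth", "state"):
        if not application.get(field):
            issues.append(f"missing field: {field}")
    # Incorrect data
    if not (16 <= application.get("age", 0) <= 120):
        issues.append("age out of plausible range")
    # Potentially fraudulent data
    if application.get("annual_income", 0) > 10_000_000:
        issues.append("income unusually high: flag for review")
    return issues

print(validate({"name": "Jane", "state": "CA", "age": 151}))
```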

Best Practices Series: Data-Centric Approach


For decades, writing rules has been an abstract exercise. Business analysts review requirements. They write the corresponding logic. If they are lucky, there is a testing infrastructure they can push the rules to. Often, they have to put together code for test cases, or wait for QA to catch possible issues. There is a better way, one that involves bringing data in early on.

Instant Feedback

The primary objective of a data-centric approach is to provide immediate feedback. As you look at one transaction at a time, you can see which decision and intermediate decisions are made. For example, an insurance application might be turned down too aggressively due to a somewhat poor driving record. Reviewing this result, you can fine-tune your rule right away. After adding the proper safeguards, that same application might end up approved with a higher premium, rightfully so. This quick turnaround is key to quality rules in your decision management projects.

Best Practices Series: Rules Requirements 101


Business rules provide the flexibility and agility that systems need. By definition, they enable business analysts to adjust the system’s decision logic to ever-evolving business direction. Due to competitive pressure, business regulations, or executive direction, business rules keep adapting. The art of capturing rule requirements dictates the success of the rule implementation that follows.

Rules Requirements are Requirements

As a Product Manager at heart, I value a nicely written PRD (Product Requirements Document). Many articles provide guidance on requirements gathering and the golden rules to adopt for success. In particular, I like Duncan Haughey’s concise take on the subject. Above all, I cannot stress enough the value of focusing on the problem rather than the solution. Engaging stakeholders and subject matter experts from the start is paramount. Kudos to him for putting it into writing.

