Technical Series: Data Integration for Decision Management

on April 24, 2019

Integration with data is key to a successful decision application: Decision Management Systems (DMS) benefit from leveraging data to develop, test and optimize high value decisions.

This blog post focuses on the usage of data by the DMS for the development, testing and optimization of automated decisions.

Data is necessary at every stage of the life-cycle of an automated decision. In particular:

  • during development, you will incrementally build the business logic with simple test cases.
  • while testing, you will use more elaborate test cases to fully check the decisions being made. If available, you will use existing production data to check their quality.
  • in production, you will use data to make live decisions, and you will leverage the resulting decision data to improve the automated decisions in their next iterations.

Typical DMS implementations focus purely on the logic itself: flows, rules, etc. In some cases, they may let you start from a schema for the data.

However, we believe assessing the quality of the logic is at least as important as implementing it according to the requirements. You can only assess the quality of a decision by measuring it. In other words, you’ll need to leverage the decision analytics capabilities of the DMS. And that, in turn, requires data on which to compute these analytics. The more important your decision, the more important it is to measure its performance early.

Carole-Ann has written about this in the best practices series: Best Practices Series: Business Testing. The earlier you leverage data and measure the quality of your decision, the better. If you can start with data, defining the metrics you will use to assess the quality of the logic puts you in a much better position to implement decisions that won’t need to be corrected later.

Starting with some sample data

Using sample data

You may start building a decision without any prior data, referring only to the schema for the data. But such an approach does not let you measure the quality of your decision: you can only be sure that the decision is consistent with the schema and with the written requirements. However, that is not enough. The decision may be properly implemented from a technical perspective yet fail completely from a business perspective. For example, the logic for a loan application may end up sending too many cases to manual review.
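
As a minimal sketch of what measuring such a business metric can look like, the snippet below runs a hypothetical decide_loan function over a few sample applications and reports the share of cases routed to manual review. The function, field names and thresholds are illustrative only, not part of any particular DMS.

```python
# Minimal sketch: measure a business metric (manual-review rate) over sample data.
# decide_loan, its fields and thresholds are hypothetical, for illustration only.

def decide_loan(application):
    """Return 'approve', 'decline' or 'manual_review' for a loan application."""
    if application["credit_score"] >= 720 and application["debt_to_income"] <= 0.35:
        return "approve"
    if application["credit_score"] < 580:
        return "decline"
    return "manual_review"

sample_applications = [
    {"credit_score": 750, "debt_to_income": 0.20},
    {"credit_score": 690, "debt_to_income": 0.40},
    {"credit_score": 560, "debt_to_income": 0.50},
    {"credit_score": 700, "debt_to_income": 0.30},
]

decisions = [decide_loan(app) for app in sample_applications]
review_rate = decisions.count("manual_review") / len(decisions)
print(f"Manual review rate: {review_rate:.0%}")  # a business metric, not just a schema check
```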

Furthermore, business analysts think in terms of logic on data, not schemas. Having data available to interact with as you develop the logic keeps you in a business mindset, rather than forcing you into a programmer’s mindset.

For example, you will have some simple test cases that help you implement your first few rules. If you already have lots of data, take a subset relevant to the decision at hand. When test cases are small (tens of thousands of records at most), having the DMS manage the data makes sense, in particular if that data is only used for decision development.

As the construction of the automated decision progresses, you will want to add more data for functional checks. You will then perhaps uncover new cases requiring more business logic. The DMS will ideally allow you to associate multiple such data sets with any decision.

Consequences for the DMS

To support this incremental build-up of the automated decision, the DMS will:

  • provide support for managing data sets associated with decisions
  • let you add new data and data sets on the fly
  • support data formats commonly used in the enterprise (such as CSV, XML or JSON)
  • provide decision analytics facilities to verify the quality of the decision using the data sets

One key challenge with using data is that data changes. We view that as an opportunity: in general, data changes because new data items are discovered and used to enrich it. Those new data items create further opportunities to implement better and richer decisions. It is thus very important that the DMS support easy updating of the data.
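
As a small illustration of this kind of enrichment, the sketch below adds a newly discovered attribute to existing test records, with a neutral default for records that predate it, so richer rules can start relying on the new field. The field names are hypothetical.

```python
# Sketch: enrich existing test records with a newly discovered field.
# Older records get a neutral default so existing test cases keep running.

existing_cases = [
    {"credit_score": 750, "debt_to_income": 0.20},
    {"credit_score": 690, "debt_to_income": 0.40},
]

new_cases = [
    {"credit_score": 710, "debt_to_income": 0.25, "months_employed": 36},
]

def enrich(case, default_months_employed=0):
    # Add the new attribute when absent, so richer rules can rely on its presence.
    case.setdefault("months_employed", default_months_employed)
    return case

all_cases = [enrich(case) for case in existing_cases + new_cases]
print(all_cases)
```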

Moving forward with large data sets

Using large data sets in simulation and decision analytics

Once the implementation of the automated decision reaches a stable state, you will need access to more and more test cases. You will also want to determine whether these test cases execute successfully.

When large data sets are available, the simulation capabilities of the DMS will verify the behavior of the automated decision and give you relevant quality information through decision analytics. Usually, you will do this to determine how the decision behaves in general. Different variations of the same automated decision may also compete against one another to find out which variant behaves best. Leveraging these capabilities, you can ensure your decision improves in quality and that, for example, your champion-challenger strategy is safe.
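
As a rough sketch of such a comparison, the code below runs two hypothetical variants of the same loan decision over a shared data set and contrasts their outcome distributions. In a real DMS these metrics would come from its decision analytics facilities; the variants and thresholds here are invented for illustration.

```python
from collections import Counter

# Sketch: compare two hypothetical variants (champion vs. challenger) of the same
# decision over a shared data set and contrast their outcome distributions.

def champion(app):
    if app["credit_score"] >= 720:
        return "approve"
    return "manual_review"

def challenger(app):
    # Slightly more permissive variant under evaluation.
    if app["credit_score"] >= 700 and app["debt_to_income"] <= 0.35:
        return "approve"
    return "manual_review"

data_set = [
    {"credit_score": 750, "debt_to_income": 0.20},
    {"credit_score": 710, "debt_to_income": 0.30},
    {"credit_score": 690, "debt_to_income": 0.45},
    {"credit_score": 730, "debt_to_income": 0.50},
]

for name, variant in [("champion", champion), ("challenger", challenger)]:
    outcomes = Counter(variant(app) for app in data_set)
    total = sum(outcomes.values())
    print(name, {outcome: f"{count / total:.0%}" for outcome, count in outcomes.items()})
```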

These data sets typically come from your data lakes, potentially fed from operational data. They may consist of large enriched historical data that you keep around for reporting, analysis, or machine learning purposes.

Consequences for the DMS

For such large data sets, you will want the data management to remain in the data lake, data mart or enterprise data environment. Duplicating data management for large data sets is expensive and potentially runs into security or compliance issues.

Thus, the DMS will ideally provide means to integrate with these stores without copying the data or managing it within the DMS. Predefined connectors to databases or data lakes can be useful for simpler access to existing stores. But a generic means of accessing data, through a web service or in any standard data format over FTP or HTTP, will guarantee access to anything.
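
Such generic access can be as simple as streaming a standard format over HTTP. The sketch below reads a CSV data set from a hypothetical URL and yields records one at a time, without materializing a copy inside the DMS; it uses only Python’s standard library.

```python
import csv
import io
import urllib.request

# Sketch: pull a data set from an enterprise store over HTTP in a standard format (CSV),
# streaming records to the decision logic instead of duplicating the data in the DMS.
# The URL is hypothetical.

DATA_URL = "https://data.example.com/decisions/loan_applications.csv"

def iter_records(url):
    with urllib.request.urlopen(url) as response:
        text = io.TextIOWrapper(response, encoding="utf-8")
        for row in csv.DictReader(text):
            yield row  # one record at a time, never the whole file in memory

# Usage (with decide_loan as sketched earlier):
# for record in iter_records(DATA_URL):
#     decision = decide_loan(record)
```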

Furthermore, the data sets can be very large. The DMS will ideally provide a simulation and decision analytics environment where the decisions and the metrics are computed without imposing limits on the size of the data set, for example by providing map-reduce and streaming simulation and decision analytics facilities.
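
As a simplified illustration of that idea, the sketch below folds decision outcomes into running counters chunk by chunk, so memory use does not grow with the data set. A real map-reduce or streaming engine would distribute the same map and reduce steps across workers; the decision logic here is a stand-in.

```python
from collections import Counter
from itertools import islice

# Sketch: streaming, map-reduce style aggregation of decision outcomes.
# Records are processed in fixed-size chunks and folded into running counters,
# so the metrics do not depend on fitting the whole data set in memory.

def simulate(records, decide, chunk_size=10_000):
    totals = Counter()
    records = iter(records)
    while True:
        chunk = list(islice(records, chunk_size))
        if not chunk:
            break
        totals.update(decide(record) for record in chunk)  # "map" then local "reduce"
    return totals

# Example with a generated iterable; in practice `records` would stream from the data lake.
outcomes = simulate(
    ({"credit_score": 600 + (i % 200)} for i in range(50_000)),
    lambda record: "approve" if record["credit_score"] >= 720 else "manual_review",
)
print(outcomes)
```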

The DMS will:

  • provide mechanisms to access enterprise data on-the-fly
  • provide scalable simulation and decision analytics facilities that can cope with enterprise-size data sets

The enterprise will:

  • manage large data sets relevant to the decision
  • use the DMS facilities to leverage these large data sets to verify and enhance decisions

Improving the decision using large operational data sets

Using operational data

When the DMS executes an automated decision in a production system, you will want to collect the relevant data points. You will use them later to determine why particular results were obtained, and to improve those results by updating the decision.
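
A minimal sketch of such decision-outcome capture is shown below: each executed decision is appended as one JSON line with its inputs, result and timestamp. The record layout and file name are illustrative, not a prescribed format.

```python
import json
import time

# Sketch: capture the data points of each executed decision as one JSON line.
# The record layout and file name are illustrative only.

def record_outcome(decision_id, inputs, result, path="decision_outcomes.jsonl"):
    record = {
        "decision_id": decision_id,
        "timestamp": time.time(),
        "inputs": inputs,
        "result": result,
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

record_outcome("promo-offer-v3", {"customer_id": "C-1042", "segment": "gold"}, "offer_10_percent")
```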

Typically, your enterprise environment will include operational data stores, and data lakes where the data is merged for reporting, analysis and machine learning purposes. The more sophisticated enterprises will also include decision outcome databases and correlate business outcomes with decision outcomes in the data lakes.

Take the example of an application that offers promotions to an existing subscription customer. A good decision is one whose outcome correlates with whether the customer:

  • opened the offer
  • accepted the offer
  • is still a customer 6 months down the road
  • has not had negative interactions with tech support in the 6 months following the offer

Using this data, you will be able to keep improving your decision and its value for the business. You can also use machine learning tools to extract knowledge from the accumulated data and incorporate it into your decision.
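
As a rough sketch of that correlation step, the snippet below joins decision outcomes with later business outcomes and reports acceptance and six-month retention per offer. The field names, join key and values are hypothetical.

```python
# Sketch: join decision outcomes with later business outcomes and compute
# per-offer acceptance and six-month retention. All field names are hypothetical.

decision_outcomes = [
    {"customer_id": "C-1", "offer": "10_percent"},
    {"customer_id": "C-2", "offer": "10_percent"},
    {"customer_id": "C-3", "offer": "free_month"},
]

business_outcomes = {
    "C-1": {"accepted": True, "customer_after_6_months": True},
    "C-2": {"accepted": False, "customer_after_6_months": True},
    "C-3": {"accepted": True, "customer_after_6_months": False},
}

per_offer = {}
for decision in decision_outcomes:
    business = business_outcomes.get(decision["customer_id"], {})
    stats = per_offer.setdefault(decision["offer"], {"n": 0, "accepted": 0, "retained": 0})
    stats["n"] += 1
    stats["accepted"] += int(business.get("accepted", False))
    stats["retained"] += int(business.get("customer_after_6_months", False))

for offer, stats in per_offer.items():
    print(offer, f"accepted {stats['accepted']}/{stats['n']}", f"retained {stats['retained']}/{stats['n']}")
```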

Consequences for the DMS

The DMS will:

  • support storing decision outcomes
  • provide mechanisms to access data lakes and operational data stores on the fly
  • offer simulation and decision analytics facilities that scale
  • ideally support model creation and/or model execution

The enterprise will:

  • manage large data sets relevant to the decision
  • use the DMS facilities to leverage these large data sets to verify and enhance decisions

This blog post is part of the Technical Series; stay tuned for more!
In a later blog post, we’ll cover the various strategies to pass data to decisions at run time, including the retrieval of additional data while the decision executes.

Learn more about Decision Management and Sparkling Logic’s SMARTS™ Data-Powered Decision Manager
