
Technical Series: Data Integration for Decision Management


Integration with data is key to a successful decision application: Decision Management Systems (DMS) benefit from leveraging data to develop, test and optimize high-value decisions.

This blog post focuses on the usage of data by the DMS for the development, testing and optimization of automated decisions.

Data is necessary at every stage of the life-cycle of an automated decision. In particular:

  • during development, you will incrementally build the business logic with simple test cases.
  • while testing, you will use more elaborate test cases to fully check the decisions being made. If available, you will use existing production data to check their quality.
  • in production, you will use data to make live decisions, and you will leverage the resulting decision data to improve the automated decisions in their next iterations.

Usual DMS implementations focus purely on the logic itself: flows, rules, etc. In some cases, they may allow you to start from a schema for the data.

However, we believe assessing the quality of the logic is as important as, if not more important than, implementing it according to the requirements. You can only assess the quality of a decision by measuring it. In other words, you will need to leverage the decision analytics capabilities of the DMS. And that, in turn, requires data on which to compute these analytics. The more important your decision, the more important it is that you measure its performance early.

Carole-Ann has written about this in the best practices series: Best Practices Series: Business Testing. The earlier you leverage data, and measure the quality of your decision, the better. If you can start with data, then the definition of the metrics you will use to assess the quality of the logic will put you in a much better position to implement decisions that won’t need to be corrected.

Starting with some sample data

Using sample data

You may start building a decision without any prior data, referring only to the schema for the data. But such an approach does not let you measure the quality of your decision: you can only be sure the decision is consistent with the schema and with the written requirements. However, that is not enough. The decision may be properly implemented from a technical perspective but totally fail from a business perspective. For example, the logic for a loan application may end up sending too many cases to manual review.

Furthermore, business analysts think in terms of logic on data, not schemas. Having data available to interact with as you develop the logic keeps you in a business mindset, and does not force you into a programmer's mindset.

For example, you will have some simple test cases which will help you implement your first few rules. If you already have lots of data, take a subset that is relevant for the decision at hand. When test cases are small (in the tens of thousands of records at most), having the DMS manage the data makes sense, in particular if that data is only used for decision development.

As the construction of the automated decision progresses, you will want to add more data for functional checks. You will then perhaps uncover new cases requiring more business logic. The DMS will ideally allow you to associate multiple such data sets to any decision.

Consequences for the DMS

To support this incremental build-up of the automated decision, the DMS will:

  • provide support for managing data sets associated with decisions
  • allow new data and data sets to be added on the fly
  • support data formats commonly used in the enterprise (such as CSV, XML or JSON)
  • provide decision analytics facilities to verify the quality of the decision using the data sets
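
To make this concrete, here is a minimal Python sketch of what running a small, DMS-managed test data set through a decision and computing simple decision analytics might look like. The `decide` function, the CSV file name and the metric are hypothetical placeholders, not the API of any particular DMS.

```python
import csv

def decide(application: dict) -> str:
    """Hypothetical decision logic: route a loan application."""
    age = int(application["age"])
    amount = float(application["amount"])
    if age < 18:
        return "decline"
    if amount > 50_000:
        return "manual_review"
    return "approve"

def run_data_set(path: str) -> dict:
    """Run every test case in a small CSV data set and compute simple decision analytics."""
    counts = {"approve": 0, "decline": 0, "manual_review": 0}
    total = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[decide(row)] += 1
            total += 1
    # Business-level metric: share of cases sent to manual review
    return {**counts, "manual_review_rate": counts["manual_review"] / total if total else 0.0}

if __name__ == "__main__":
    print(run_data_set("loan_applications.csv"))  # hypothetical test data set
```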

One key challenge with using data is that data changes. We view that as an opportunity: in general, data changes because new data items are discovered and used to enrich it. Those new data items create further opportunities to implement better and richer decisions. It is thus very important that the DMS support easy updating of the data.

Moving forward with large data sets

Using large data sets in simulation and decision analytics

Once the implementation of the automated decision gets to a stable state, you will need to have access to more and more test cases. You will also want to determine whether these test cases are successfully executed.

When large data sets are available, the simulation capabilities of the DMS will verify the behavior of the automated decision, and give you relevant quality information through decision analytics. Usually, you will do this to determine how the decision behaves in general. Different variations of the same automated decision may also compete against one another to find out which variant behaves best. Leveraging these, you can ensure your decision improves in quality, or that your champion-challenger strategy, for example, is safe.

These data sets typically come from your data lakes, potentially fed from operational data. They may consist of large enriched historical data that you keep around for reporting, analysis, or machine learning purposes.

Consequences for the DMS

For such large data sets, you will want the data management to remain in the data lake, data mart or enterprise data environment. Duplicating data management for large data sets is expensive and potentially runs into security or compliance issues.

Thus, the DMS will ideally provide means to integrate with these stores without copying or managing the data within the DMS. Predefined connectors to databases or data lakes can be useful for simpler access to existing stores. But a generic means to access data, through a web service or using any standard data format over FTP or HTTP, will guarantee access to anything.

Furthermore, the data sets can be very large. The DMS will ideally provide a simulation and decision analytics environment where the decisions and the metrics are computed without imposing boundaries on the size of the data set, for example by providing map-reduce and streaming simulation and decision analytics facilities.
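
As an illustration of the streaming idea, the following Python sketch pulls a large data set as JSON Lines over HTTP and aggregates decision metrics incrementally, so memory use stays flat regardless of data set size. The endpoint URL and the `decide` stand-in are assumptions; a real DMS would offer this as a built-in simulation facility.

```python
import json
import requests  # third-party HTTP client

def decide(txn: dict) -> str:
    """Hypothetical decision logic stand-in."""
    return "manual_review" if txn.get("risk_score", 0) > 700 else "approve"

def simulate_streaming(url: str) -> dict:
    """Stream a JSON Lines data set over HTTP and aggregate metrics on the fly."""
    total = reviews = 0
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            txn = json.loads(line)
            total += 1
            if decide(txn) == "manual_review":
                reviews += 1
    return {"transactions": total, "manual_review_rate": reviews / total if total else 0.0}

# Example (hypothetical endpoint):
# print(simulate_streaming("https://datalake.example.com/exports/applications.jsonl"))
```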

The DMS will:

  • provide mechanisms to access enterprise data on-the-fly
  • provide scalable simulation and decision analytics facilities that can cope with enterprise-size data sets

The enterprise will:

  • manage large data sets relevant to the decision
  • use the DMS facilities to leverage these large data sets to verify and enhance decisions

Improving the decision using large operational data sets

Using operational data

When the DMS executes an automated decision in a production system, you will want to collect interesting data points. You will then use them at a later time to determine why some results were obtained. You will also use them to improve these results by updating the decision.

Typically, your enterprise environment will include operational data stores, and data lakes where the data is merged for reporting, analysis and machine learning purposes. The more sophisticated enterprises will also include decision outcome databases and correlate business outcomes with decision outcomes in the data lakes.
Take the example of an application that offers promotions to an existing subscription customer. A good decision is such that its outcome is correlated to the fact that the customer:

  • opened the offer
  • accepted the offer
  • is still a customer 6 months down the road
  • has not had negative interactions with tech support in the 6 months following the offer

Using this data, you will be able to keep improving your decision and its value for the business. You can also use machine learning tools to extract knowledge from the accumulated data and incorporate it into your decision.
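
The following Python sketch illustrates the kind of correlation described above: joining decision outcomes with business outcomes observed months later, per offer variant. The record structures and field names are made up for illustration; in practice this join typically happens in the data lake or in the DMS decision analytics facility.

```python
from collections import defaultdict

# Hypothetical decision outcomes recorded at execution time
decisions = [
    {"customer_id": 1, "offer": "10_percent_off"},
    {"customer_id": 2, "offer": "free_month"},
    {"customer_id": 3, "offer": "10_percent_off"},
]

# Hypothetical business outcomes collected 6 months later
outcomes = {
    1: {"opened": True, "accepted": True, "still_customer": True, "support_complaints": 0},
    2: {"opened": True, "accepted": False, "still_customer": False, "support_complaints": 2},
    3: {"opened": False, "accepted": False, "still_customer": True, "support_complaints": 0},
}

def outcome_by_offer(decisions, outcomes):
    """Aggregate acceptance and retention per offer variant."""
    stats = defaultdict(lambda: {"count": 0, "accepted": 0, "retained": 0})
    for d in decisions:
        o = outcomes.get(d["customer_id"], {})
        s = stats[d["offer"]]
        s["count"] += 1
        s["accepted"] += int(o.get("accepted", False))
        s["retained"] += int(o.get("still_customer", False))
    return dict(stats)

print(outcome_by_offer(decisions, outcomes))
```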

Consequences for the DMS

The DMS will:

  • support storing decision outcomes
  • provide mechanisms to access data lakes and operational data stores on the fly
  • offer simulation and decision analytics facilities that scale
  • ideally support model creation and/or model execution

The enterprise will:

  • manage large data sets relevant to the decision
  • use the DMS facilities to leverage these large data sets to verify and enhance decisions

This blog is part of the Technical Series, stay tuned for more!
In a later blog post, we’ll cover the various strategies to pass data to decisions at run time, including the retrieval of additional data while the decision executes.

ETCIO: Equifax InterConnect helps Indian CIOs


ETCIO (an initiative of The Economic Times) has interviewed KM Nanaiah, country manager at Equifax. The article highlights the tooling that is now available to financial institutions, which will see dramatic improvements in customer acquisition and loan decisioning.

InterConnect’s Rules Editor is our Sparkling Logic SMARTS decision manager. In particular, InterConnect customers have praised its Champion / Challenger capabilities.

Read more at ETCIO

Technical Series: Authentication and Access Control


A key benefit of using a Decision Management System is to allow the life-cycle of automated decisions to be fully managed by the enterprise.

When the decision logic remains in the application code, it becomes difficult to separate access to decision logic code from the rest. For example, reading through pages of commit comments to find the ones relevant to the decision is close to impossible. And so is ensuring that only resources with the right roles can modify the logic.
Clearly, this leads to the same situation you would be in if your business data were totally immersed in the application code. You would not do that for your business data; you should not do it for your business decision logic, for exactly the same reasons.

Decision Management Systems separate the decision logic from the rest of the code. Thus, you get the immense benefit of being able to update the decision logic according to the business needs. But the real benefit comes when you combine that with authentication and access control:

  • you can control who has access to what decision logic asset, and for what purpose
  • and you can trace who did what to which asset, when and why

Of course, a lot of what is written here applies to other systems than Decision Management Systems. But this is particularly important in this case.

Roles and access control

The very first thing to consider is how to control who has access to what in the DMS. This is access control; note that we also use authorization as an equivalent term.
In general, one thinks of access control in terms of roles and assets. Roles characterize how a person interacts with the assets in the system.
The challenge is that there are many roles involved in interacting with your automated decision logic. The same physical person may fill many roles, but they remain different roles: they use the decision management system in different ways. In other words, these different roles have access to different operations on different sets of decision logic assets.

Base roles and access control needs

Typically, and this is of course not the only way of splitting them, you will have roles such as the following:

  • Administrator
    The administrator role administers the system but rarely is involved in anything else. In general, IT or operations resources are those with this role.

  • Decision definer
    The decision definer role is a main user role: this role is responsible for managing the requirements for the automated decision and its expected business performance. Typically, business owners and business analysts are assigned this role.

  • Decision implementer
    The decision implementer role is the other main user role: this role designs, implements, tests and optimizes decisions. Generally, business analysts, data analysts or scientists, decision owners, and sometimes business-savvy IT resources are given this role.

  • Decision tester
    The decision tester role is involved in business testing of the decisions: validating they really do fit what the business needs. Usually, business analysts, data analysts and business owners fill this role.

  • Life-cycle manager
    The life-cycle manager role is responsible for ensuring that enterprise-compliant processes are followed as the decision logic assets go from requirements to implementation to deployment and retirement.
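
To illustrate what role-based access control over decision logic assets boils down to, here is a minimal Python sketch. The role names follow the list above; the permissions and asset types are hypothetical, not those of any specific DMS.

```python
# Hypothetical permissions per role, keyed by asset type
ROLE_PERMISSIONS = {
    "administrator":        {"system": {"configure"}},
    "decision_definer":     {"requirements": {"read", "write"}, "decision": {"read"}},
    "decision_implementer": {"decision": {"read", "write", "test"}},
    "decision_tester":      {"decision": {"read", "test"}},
    "lifecycle_manager":    {"decision": {"read", "promote", "retire"}},
}

def is_allowed(roles: list[str], asset_type: str, operation: str) -> bool:
    """Return True if any of the user's roles grants the operation on the asset type."""
    return any(operation in ROLE_PERMISSIONS.get(r, {}).get(asset_type, set()) for r in roles)

# A business analyst holding both the definer and tester roles:
print(is_allowed(["decision_definer", "decision_tester"], "decision", "test"))   # True
print(is_allowed(["decision_definer", "decision_tester"], "decision", "write"))  # False
```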

More advanced needs

There may be many other roles, and the key is to realize that how the enterprise does business impacts what these roles may be. For example, our company has a number of enterprise customers who have two types of decision implementer roles:

  • General decision implementer: designs, implements the structure of the decision and many parts of it, tests and optimizes it
  • Restricted decision implementer: designs and implements only parts of the decision — groups of rules, or models

The details of what the second role can design and implement may vary from project to project.

Many other such roles may be defined: for example, a role that can modify anything except the contract between the automated decision and the application that invokes it.

It gets more complicated: you may also need to account for the fact that only specific roles can manage certain specific assets. For example, you may have a decision that incorporates a rate computation table that only a few resources can see, although it is part of what the system manages and executes.

Requirements for the Decision Management System

Given all this, the expectation is that the DMS supports the following, directly or through an integration with the enterprise systems:

  • Role-based access control to the decision logic assets
  • The ability to define custom roles to fit the needs of the enterprise and how it conducts its business
  • The ability to have roles that control access to specific operations on specific decision logic assets

This can be achieved in a few ways. In general:

  • If all decision assets are in a system which is also managed by the enterprise authentication and access control system: you can directly leverage it
  • And if that is not the case: you delegate authentication and basic access control to the enterprise authentication and access control system, and manage the finer-grained access control in the DMS, tied to the external authentication
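
The second arrangement, delegating authentication and coarse access control to the enterprise while keeping finer-grained control in the DMS, could look like the following Python sketch. The directory group names and claim fields are assumptions for illustration.

```python
# Hypothetical mapping from enterprise directory groups to DMS roles
GROUP_TO_DMS_ROLE = {
    "grp-business-analysts": "decision_implementer",
    "grp-risk-officers":     "decision_definer",
    "grp-it-ops":            "administrator",
}

def dms_roles_from_claims(claims: dict) -> set[str]:
    """Derive DMS roles from claims already verified by the enterprise identity provider."""
    groups = claims.get("groups", [])
    return {GROUP_TO_DMS_ROLE[g] for g in groups if g in GROUP_TO_DMS_ROLE}

# Claims as they might arrive after the enterprise system has authenticated the user
claims = {"sub": "jdoe", "groups": ["grp-business-analysts", "grp-something-else"]}
print(dms_roles_from_claims(claims))  # {'decision_implementer'}
```

The DMS then applies its own fine-grained checks (like the role-permission check sketched earlier) per decision logic asset.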

Authentication

Of course, roles are attached to a user, and in order to guarantee that the user is the right one, you will be using an authentication system. There is a vast number of such systems in the enterprise, and they play a central role in securing the assets the enterprise deals with.

Principles

The principle is that each user who needs access to your enterprise systems has an entry in your authentication system. Thus, the authentication system will ensure the user is who they claim to be, and apply all the policies the enterprise wants to apply: two-factor authentication, challenges, password changes, etc. Furthermore, it will also control when the user has access to the systems.

This means that all systems need to make sure a central system carries out all authentications. And this includes the Decision Management System, of course. For example:

  • The DMS is only accessible through another application that does the proper authentication
  • Or it delegates the authentication to the enterprise authentication system

The second approach is more common in a services world with low coupling.

Requirements for the Decision Management System

The expectation is that the DMS will:

  • Delegate its authentication to the enterprise authentication and access control systems
  • Or use the authentication information provided by an encapsulating service

Vendors in this space have the challenge that in the enterprise world there are many authentication systems, each with potentially more than one protocol. Just in terms of protocols, enterprises use:

  • LDAP
  • WS-Federation
  • OAuth2
  • OpenID Connect
  • and more
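
For example, validating an OpenID Connect bearer token passed along by an encapsulating service might look like the sketch below. It uses the third-party PyJWT library and assumes the identity provider publishes its signing keys at a JWKS endpoint; the URLs, audience and issuer values are placeholders.

```python
import jwt  # PyJWT, third-party
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # hypothetical identity provider
AUDIENCE = "smarts-dms"                                     # hypothetical audience value

def verify_bearer_token(token: str) -> dict:
    """Verify the token signature and standard claims, then return the decoded claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer="https://idp.example.com/",
    )

# claims = verify_bearer_token(incoming_token)
# dms_roles = dms_roles_from_claims(claims)  # reuse the mapping sketched earlier
```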

Trace

Additionally, enterprises are interested in keeping a close trace of who does what and when in the Decision Management System. Of course, using authentication and the fact that users will always operate within the context of an authenticated session largely enables them to do so.
But this is not just a question of a change log: you also want to know who has been active, who has exported and imported assets, who has generated reports, who has triggered long simulations, etc.

Furthermore, there are three types of usages for these traces:

  • Situational awareness: you want to know what has been done recently and why
  • Exception handling: you want to be alerted if a certain role or user carries out a certain operation. For example, when somebody updates a decision in production.
  • Forensics: you are looking for a particular set of operations and want to know when, who and why. For example, for compliance verification reasons.

A persisted and query-able activity stream provides support for the first type of usage. An integration with the enterprise log management and communication management systems supports the other two types.
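
Here is a minimal Python sketch of recording and querying such an activity stream. The event fields are illustrative, and an in-memory list stands in for the persisted, query-able store a real DMS would provide.

```python
from datetime import datetime, timezone

ACTIVITY_STREAM: list[dict] = []  # stands in for a persisted, query-able store

def record_activity(user: str, operation: str, asset: str, reason: str = "") -> None:
    """Append a structured audit event: who did what, to which asset, when and why."""
    ACTIVITY_STREAM.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "operation": operation,
        "asset": asset,
        "reason": reason,
    })

def query_activity(user: str | None = None, operation: str | None = None) -> list[dict]:
    """Filter the stream for situational awareness or forensics."""
    return [
        e for e in ACTIVITY_STREAM
        if (user is None or e["user"] == user) and (operation is None or e["operation"] == operation)
    ]

record_activity("jdoe", "deploy_release", "loan-origination v12", reason="quarterly rate update")
print(query_activity(operation="deploy_release"))
```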

Requirements for the Decision Management System

The expectation is that the DMS will:

  • Provide an activity stream users can browse through and query
  • And support an integration with the enterprise systems that log activity
  • And provide an integration with the enterprise systems that communicate alerts

There are many more details related to these authentication, access control and trace integrations. Also, one interesting trend is the move towards taking all of these into account from the beginning, as the IT infrastructure moves to the models common in the cloud, even when on-premises.

This blog is part of the Technical Series, stay tuned for more!


Technical Series: Decision Management Platform Integrations


Decision Management and Business Rules Management platforms cater to the needs of business-oriented roles (business analysts, business owners, etc.) involved in operational decisions. But they also need to take into account the constraints of the enterprise and its technology environment.

Among those constraints are the ones that involve integrations. This is the first in a series of posts exploring the requirements, approaches and trade-offs for decision management platform integrations with the enterprise ecosystem.

Why integrate?

Operational decisions do not exist in a vacuum. They

  • are embedded in other systems, applications or business processes
  • provide operational decisions that other systems carry out
  • are core contributors to the business performance of automated systems
  • are critical contributors to the business operations and must be under tight control
  • must remain compliant, traced and observed
  • yet must remain flexible for business-oriented roles to make frequent changes to them

Each and every one of these aspects involves more than just the decision management platform. Furthermore, more than one enterprise system provides cross-application support for these. Enterprises want to use such systems because they reduce the cost and risk involved in managing applications.
For example, authentication across multiple applications is generally centralized to allow for a single point of control on who has access to them. Otherwise, each application implements its own, and management costs and risk skyrocket.

In particular, decision management platforms end up being a core part of the enterprise applications, frequently as core as databases. It may be easy and acceptable to use disconnected tools to generate reports or write documents; but it is rarely acceptable to leave part of a core system unmanaged. In effect, there is little point in offering capabilities which cannot cleanly fit into the management processes of the enterprise; the gain made by giving business roles control of the logic is negated by the cost and risk of operating the platform.

In our customer base, most customers do pay attention to integrations. Which integrations are involved, and with what intensity, depends on the customer. However, it is important to realize that the success of a decision management platform in an enterprise also hinges on the quality of its integrations with the enterprise's systems.

Which integrations matter?

We can group the usual integrations for decision management platforms in the following groups:

  • Authentication and Access Control
  • Implementation Support
  • Management Audit
  • Life-cycle management
  • Execution
  • Execution Audit
  • Business Performance Tracking

Authentication and access control integrations are about managing which user has access to the platform, and, beyond that, to which functionality within the platform.
Implementation support integrations are those that facilitate the identification, implementation, testing and optimization of decisions within the platform: import/export, access to data, etc.
Management audit integrations enable enterprise systems to track who has carried out which operations and when within the platform.
Life-cycle management integrations are those that support the automated or manual transitioning of decisions through their cycles: from inception to implementation and up to production and retirement.

Similarly, execution integrations enable the deployment of executable decisions within the context of the enterprise operational systems: business process platforms, micro-services platforms, event systems, etc. Frequently, these integrations also involve logging or audit systems.
Finally, performance tracking integrations are about using the enterprise reporting environment to get a business-level view of how well the decisions perform.

Typically, different types of integrations interest different roles within the enterprise. The security and risk management groups will worry about authentication, access control and audit. The IT organization will pay attention to life-cycle management and execution. Business groups will mostly focus on implementation support and performance tracking.

The upcoming series of blog posts will focus on these various integrations: their requirements, their scope, their challenges and how to approach them.

In the meantime, you can read the relevant posts in the “Best Practices” series:

Best Practices Series: Object Model First


Where do you start? Do you upload a predefined object model? Or do you develop it with your decision logic?

Object Model First

It is our experience that, in the vast majority of projects, object models already exist. The IT organization defines and maintains them. This makes perfect sense, since the object model is the contract for the decision service. The decision service needs to know all the features of the application before processing it. The invoking system also needs to know where to find the decision and all related parameters.

The object model, or data model, or schema, really defines the structure of the data exchanged with the decision service. Some sections and fields will play the role of input data. Some will be output. The business rules will determine or calculate those.

In our world, at Sparkling Logic, we call the object model the form. When you think about the application as data, the form represents the structure specifying what each piece of data means. For example, Customer Information is a section; and first name, last name and date of birth are fields in this section.

While business rules are based on these fields, the field definitions typically belong to the system. The system will produce the transaction payload, aka the transaction data, and receive it back after the rules execute and produce the final decision.

To summarize, ownership of the object model lies with the IT organization, since it is responsible for making the actual service invocation.

Modifying the Object Model

Does that mean that we cannot make changes to this object model? Absolutely not. Augmenting the object model with calculations and statistics is expected. The customer info will likely include a date of birth, but your business rules will likely refer to the age of the person. It is common practice to add an Age field that is easily calculated using a simple formula. More fields can be added in the same fashion, for aggregating the total income of all co-borrowers, or for calculating the debt-to-income ratio.
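
A small Python sketch of such enrichment, computing age, total co-borrower income and the debt-to-income ratio from a hypothetical transaction payload (the field names are made up):

```python
from datetime import date

def enrich(application: dict) -> dict:
    """Add calculated fields the rules can use; field names are illustrative."""
    enriched = dict(application)
    # Age derived from date of birth
    dob = date.fromisoformat(application["date_of_birth"])
    today = date.today()
    enriched["age"] = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # Total income aggregated over all co-borrowers
    total_income = sum(b["income"] for b in application["borrowers"])
    enriched["total_income"] = total_income
    # Debt-to-income ratio (annual debt over annual income)
    enriched["debt_to_income"] = application["monthly_debt"] * 12 / total_income if total_income else None
    return enriched

payload = {
    "date_of_birth": "1980-06-15",
    "monthly_debt": 1500,
    "borrowers": [{"income": 60000}, {"income": 45000}],
}
print(enrich(payload))
```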

In most systems, these calculations remain private to the decision service. As a result, the IT organization will not even know that they exist.

A quite similar mechanism exists to add business terms to the form. Business terms complement the business concepts in the form; they constitute an additional vocabulary shared across projects. For example, you might want to define once and for all what your cut-off values are for a senior citizen. Your business term could even specify cut-off values per state. Your rules will not have to redefine those conditions; they can simply refer to the business term directly: 'if the applicant is a senior citizen and their family status is single'. Each project leveraging that form will reuse the same terminology without having to specify it again and again.

Like calculations, business rules can use business terms, but IT systems will not see them.
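
As an illustration, here is a Python sketch of a 'senior citizen' business term with per-state cut-off values; the cut-offs and field names are invented for the example.

```python
# Hypothetical business term: "senior citizen", with cut-off ages that can vary by state
SENIOR_CUTOFF_BY_STATE = {"FL": 60, "CA": 65}
DEFAULT_SENIOR_CUTOFF = 65

def is_senior_citizen(applicant: dict) -> bool:
    """Reusable business term: rules refer to this name, not to the cut-off values."""
    cutoff = SENIOR_CUTOFF_BY_STATE.get(applicant.get("state"), DEFAULT_SENIOR_CUTOFF)
    return applicant["age"] >= cutoff

# A rule can now read close to its business wording:
applicant = {"age": 62, "state": "FL", "family_status": "single"}
if is_senior_citizen(applicant) and applicant["family_status"] == "single":
    print("senior single applicant rule applies")
```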

Eventually, it happens that variables need to be created. That's okay: there is no issue with introducing intermediate calculations in order to simplify your business rules. Although these fields will be visible to IT, they can be ignored. As intermediate variables, the system might not even persist their values in the database of record.

When is the Object Model provided?

It is ideal to start your decision management projects with an established object model. Uploading your data is most definitely the very first step in your project implementation. This is true regardless of whether you have actual historical data or are building data samples for unit testing your rules as you go.

The reason you want your object model established prior to writing rules is quite simple, frankly. Each time you modify the object model, rules that depend on the affected portions of the object model (or form in our case) will need refactoring.

Granted, some changes are not destructive. If that is your case, you can absolutely keep extending your object model happily.

Some changes only move sections within the form. As long as the types of the affected fields remain the same, your rules will not need rewriting. The only exception is rules that use full paths rather than short names. If your rule says "age < 21", you will be okay wherever the age field is located. If your rule says "customer.age < 21", then you will have to modify it if age moves to a different section.

And finally some changes are quite intrusive. If you go from having one driver in the policy, to multiple drivers, all driver rules will have to account for the change in structure. You will have to decide if the age rule is applicable to all drivers, any driver in the policy, or only to the primary driver. This is where refactoring can become a burden.
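
To illustrate the refactoring choice when a single driver becomes a list of drivers, here is a small Python sketch of the same age rule quantified three different ways; the field names are illustrative.

```python
def underage_rule_all(policy: dict) -> bool:
    """Rule applies only if every driver on the policy is under 21."""
    return all(d["age"] < 21 for d in policy["drivers"])

def underage_rule_any(policy: dict) -> bool:
    """Rule applies if at least one driver on the policy is under 21."""
    return any(d["age"] < 21 for d in policy["drivers"])

def underage_rule_primary(policy: dict) -> bool:
    """Rule applies only to the primary driver."""
    primary = next(d for d in policy["drivers"] if d.get("primary"))
    return primary["age"] < 21

policy = {"drivers": [{"age": 19, "primary": True}, {"age": 45, "primary": False}]}
print(underage_rule_any(policy), underage_rule_all(policy), underage_rule_primary(policy))
```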

The more established the object model is, the better suited you will be for writing rules.

One point I want to stress here too is that it is important for the IT team and the business analyst team to communicate and clearly set expectations on the fields of the object model. Make sure that:

  • Values are clearly documented and agreed upon: CA versus California, for example
  • You know which fields are used as input: if state appears in several addresses, know which one takes precedence for state requirements

Sorry for this quick tangent… This is where we see most of the 'rules fixing' effort spent!

When do Rules own the Object Model?

It is rare, but it happens. We see it mostly for green field projects. When the database of record does not exist, and there is no existing infrastructure, new projects might have the luxury of defining their own object model. When there is none, all options are on the table: have data modelers define the object model, or proceed with capturing it as you capture your business rules.

In these cases, we see the DMN standard (Decision Model and Notation) leveraged more often than not. As business analysts capture their source rules in a tool like Pencil, the glossary gets assembled.

For those of you not familiar with DMN, let me summarize the approach. The decision model representation guides the business analyst through the decomposition of the decision logic. Let’s say that you want to calculate Premiums. You will need to establish the base rate, and the add-on rates. For the base rate, you will need to know details about the driver: age, risk level, and location. You will also need to know details about the car: make, model and year. Your work as a business analyst is to drill down over the layers of decisioning until you have harvested all the relevant rules.

The glossary is the collection of all the properties you encounter in this process, like age, risk level, location, model, make, year, etc. Input and output properties are named in the process. You can also organize these properties within categories. When you have completed this effort, your glossary will translate to a form, your categories to sections, your properties to fields. In this case, your harvesting covers both decision logic and object model.

Final Takeaway

Besides minor additions like computations and variables, the object model is by and large owned and provided from the start by the IT organization. Only green field projects will combine rules and data model harvesting.

For further reading, I suggest checking our best practices for deployment and how to think about decisions.

Roadblocks to Rules Engines


A little while ago, I ran into a question on Quora that hit me in the stomach… figuratively, of course. Someone asked "why do rules engines fail to gain mass adoption?". I had mixed feelings about it. On one hand, I am very proud of our decision management industry, and of how robust and sophisticated our rules engines have become. On the other hand, I must admit that I see tons of projects not using this technology that would help them so much. I took a little time to reflect on the actual roadblocks to rules engines.

A couple of points I want to stress first

Evangelization

In addition to the points I make below, with a little more time to think about it, I believe it boils down to evangelization. We, in the industry, have not been doing a good job educating the masses about the value of the technology and its ease of use. We rarely get visibility up to the CxO level. Business rules are never one of the top 10 challenges of executives, though they might be in disguise. We need to do a better job.

I’m signing up, with my colleagues, for an active webinar series, so that we can address one of the roadblocks to rules engines, and decision management!

Rules are so important, they are already part of platforms

The other key aspect to keep in mind is that business rules are so important in systems that they often become a de-facto component of the ecosystem. Business rules might be used in the form of BPM rules or other customizations, but not called out as a use of rules engines. Many platforms will claim they include business rules. The capability might be there, though it may not be as rich as a decision management system's. Many vertical platforms, like Equifax's InterConnect platform, do include a full-blown decision management system though. When decision makers have to allocate the budget for technology, this becomes another of the roadblocks to rules engines, as they assume that rules are covered by the platform. Sometimes they are right; often they are not.

Rules in code is not a good idea

Let me stress once more that burying your rules in code or SQL procedures is not a good idea. It is probably the roadblocks-to-rules-engines excuse we have heard the most. I explain below that it is tempting for software developers to go back to their comfort zone. This is not sustainable; this is not flexible. We did that many decades ago as part of Cobol systems, mostly because decision management systems did not exist back then. We suffered with the maintenance of these beasts. In many instances, the maintenance was so painful that we had to patch the logic with pre-processing or post-processing to avoid touching the code. We have learned from those days that logic, when it is complex and/or changes fairly often, needs to be externalized.

Business owners do not want to go to IT and submit a change request. They want to be able to see the impact of a change before they actually commit to it. They want the agility to tweak their thresholds or rate tables within minutes or hours, not days and weeks. While rules need testing like any software, it is much more straightforward as it does not impact the code: it is just QA testing and business testing.

Here is my answer:

I have been wondering the same thing. Several decades ago, I discovered expert systems at school, in my AI class. I fell in love with them, and even more so with rules engines as they were emerging as commercial products.

While coding is more powerful and intuitive than it used to be, the need is still there to make applications more agile. Having software developers change code is certainly more painful than changing business logic in a separate component, i.e., the decision service.
Some argue that the technology is difficult:

  • syntax
  • ability to find issues

Is the technology too difficult to use?

Because of syntax

I can attest that writing LISP back in the day was nothing close to 'intuitive'. Since then, thankfully, rules syntax has improved, as programming syntax has too. Most business analysts I have worked with have found the syntax decently understandable (except for the rare rules engines that still use remnants of OPS5). With a little practice, the syntax is easily mastered.

Furthermore, advances in rules management have empowered rule writers with additional graphical representations like decision tables, trees, and graphs. At Sparkling Logic, we went a step further and display the rules as an overlay on top of transaction data. This is as intuitive as it gets, in my opinion.

Because of debugging

The second point seems more realistic. When rules execute, they do not follow a traditional programmatic sequence. The rules that apply simply fire. Without tooling, you might have to take it on faith that the result will be correct, or rely on a lot of test case data! Once again, technology has progressed, and tooling is now available to indicate which rules fired for a given transaction, what path was taken (in the form of an execution trace), etc. For the savvy business analyst, understanding why rules did or did not execute has become a simple puzzle game… you just have to follow the crumbs.
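
To show what 'which rules fired' tooling boils down to, here is a toy Python sketch of a rule evaluation loop that records an execution trace. The rules and the transaction are made up, and a real engine does much more (agenda management, Rete-style matching, etc.).

```python
# Each rule: (name, condition, action) over a transaction dictionary
RULES = [
    ("underage",     lambda t: t["age"] < 21,        lambda t: t.update(decision="decline")),
    ("large_amount", lambda t: t["amount"] > 50_000, lambda t: t.update(decision="manual_review")),
    ("default_ok",   lambda t: "decision" not in t,  lambda t: t.update(decision="approve")),
]

def run_rules(transaction: dict) -> tuple[dict, list[str]]:
    """Evaluate rules in order and record which ones fired (the execution trace)."""
    trace = []
    for name, condition, action in RULES:
        if condition(transaction):
            action(transaction)
            trace.append(name)
    return transaction, trace

result, trace = run_rules({"age": 34, "amount": 80_000})
print(result["decision"], trace)  # manual_review ['large_amount']
```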

So why are rules not as prevalent as they should be?

Is the technology too easy to not use?

Because of ownership

I am afraid to say that IT might be responsible. While it is now a no-brainer to delegate storage to a database, and specialized functions to other commonly accepted components of the architecture, it remains a dilemma for developers to let business analysts take ownership. You need to keep in mind that business rules are typically in charge of the core of the system: the decisions. If bad decisions are made, or if no decision can be made, some heads will roll.

Because of budget

Even if managing decisions this way is somewhat painful and inflexible, it is a common choice to keep these cherished rules inside the code, or externalized in a database or configuration file. The fact that developers have multiple options for encoding these rules certainly does not help. First, those who love to create their own framework (and there are plenty of them) can see it as their moment of fun. Second, it does not create urgency for management to allocate a budget; it ends up being a build-versus-buy decision.
With no industry analysts focused exclusively on decision management, little coverage by publications, and little advertisement by the tech giants, the evangelization of the technology is certainly slower than it deserves to be.

Yet, it should be used more…

Because of Decision Analytics

I would stress that the technology deserves to be known, and used. Besides agility and flexibility (key benefits of the technology), there is more that companies could benefit from. In particular, decision analytics are at the top of my list. Writing and executing rules is clearly an important aspect, but I believe that measuring business performance is also critical. With business rules, it is very easy to create a dashboard for your business metrics. You can estimate how good your rules are before you deploy them, and you can monitor how well they perform on a day-to-day basis.

Because of ease of integration

For architects, there are additional benefits in terms of integration. You certainly do not want to rewrite rules for each system that needs to access them. A rules engine deployed as a component can be integrated with online and batch systems, and with any evolving architecture, without rewrites, duplication, or headaches.

Final Takeaway

With that in mind, I hope that you will not let these roadblocks to rules engines stop you. There are plenty of reasons to consider decision management systems, or rules engines as they are often called. You will benefit greatly:

  • Flexibility to change your decision logic as often and as quickly as desired
  • Integration and deployment of predictive analytics
  • Testing from a QA and business perspective
  • Measurement of business performance in sandbox and in production
  • And yet, it will integrate beautifully into your infrastructure

Best Practices Series: Business Testing


As we get ready for deployment, let's consider business testing. We covered QA testing a couple of weeks ago. The key difference is that we do not check that requirements are covered; we check that our requirements are correct. In other words, we focus on business performance.

Why business testing?

In the traditional SDLC (software development life cycle), as an over-simplification, you get requirements, they get implemented, and you test that they were implemented properly. Take your UI requirements, for example: this is a no-brainer. You want a button for such-and-such functionality. It is where you want it. It does what you want. This is all good and easy to test.

Business testing would take this paradigm to the next level. Combined with A/B testing (what we call in our world champion / challenger experiments), it would aim to measure whether you are more successful at your business objectives with a green button, or a hyperlink.

When applied to decision management, business analytics become paramount. Certainly, champion / challenger experiments will take your business testing to the next level. However, just measuring your business performance is an invaluable first step. While only a subset of our customers use champion / challenger, almost all of them establish dashboards in which they can track how well their current policies are expected to perform once deployed.

Looking at the past few weeks or months of data, what load is expected in the manual process? Will changing this threshold affect my decline rate significantly?

Decision management is sometimes well defined, at times by regulations. But in many cases, it is an evolving art of making the best decisions for your positioning. If you are a conservative organization, but quite liberal with customer acquisition, you might end up at odds at some point.

Thanks to business testing, you can anticipate what your business outcome is going to be, under the assumption that historical transactions are a good predictor of what is to come.

How do we do business testing?

Start with Data

In order to reach valid and relevant estimates, you will need data. The best option is to collect historical transactions. The volume will depend on your industry and/or your type of application. Some customers use thousands of transactions, while others use billions. Some prefer recent, fresh transactions because of volatility in their space, while others lean towards extensive data sets collected over months or years. Regardless of your specifics, having historical transactions is the best basis for thorough business testing.

What if you do not have any historical transactions? Do not despair, there are a couple of options for you.

The first option is to look for pooled data. It is obviously not available for all projects. When the common good drives organizations to collaborate and make data available, you can take advantage of this golden opportunity. This pooled data can help you assess your business performance outside of your customer portfolio, as part of your business testing efforts. When considering customer acquisition, ranging from marketing to origination to fraud detection, this data can become invaluable.

Another option is to make up the data you do not have. When constructing test cases, you can introduce bias to reflect the distribution in your customer portfolio. For green field projects, you may not have any other option, and something is better than nothing. It is clear that your business indicators will not be as reliable, but they should give you a directional sense of how your decisions will perform.
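
As an illustration of constructing biased sample data, the following Python sketch generates synthetic transactions whose segment mix reflects an assumed portfolio distribution; the segments, proportions and fields are entirely made up.

```python
import random

random.seed(42)  # reproducible sample

# Assumed portfolio distribution: segment -> (share, income range)
PORTFOLIO_MIX = {
    "prime":      (0.6, (60_000, 150_000)),
    "near_prime": (0.3, (35_000, 60_000)),
    "subprime":   (0.1, (15_000, 35_000)),
}

def synthetic_transactions(n: int) -> list[dict]:
    """Generate n test cases whose segment mix follows PORTFOLIO_MIX."""
    segments = list(PORTFOLIO_MIX)
    weights = [PORTFOLIO_MIX[s][0] for s in segments]
    cases = []
    for _ in range(n):
        segment = random.choices(segments, weights=weights)[0]
        low, high = PORTFOLIO_MIX[segment][1]
        cases.append({
            "segment": segment,
            "income": random.randint(low, high),
            "age": random.randint(21, 75),
        })
    return cases

sample = synthetic_transactions(1000)
print(sum(1 for c in sample if c["segment"] == "prime") / len(sample))  # close to 0.6
```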

Then, your KPIs

Business testing is about measuring your business performance. In that sense, KPIs (key performance indicators) constitute the foundation for your business testing. In addition to the QA statistics we mentioned during QA testing, you will need to establish what is important to track in your business.

Each organization, each project uses a different set of KPIs. It is likely that some indicators will track your business success: how well you are doing, how much money you make, how many transactions you approve, how many fraudulent transactions you stop, etc. Likewise, other statistics will measure your risk exposure: how much credit has been granted, how many customers you will inconvenience, how many overall transactions will be stopped, etc.

Taking business testing one step further

Now that we have decision analytics for what is expected, the next logical step, in terms of business testing, is to measure these KPIs in your real-time environment. Are your actual outcomes close to your predictions? If they are, how can you continue tweaking your rules to improve them further? If they are not, do you understand why your current business is different from your past transactions? How can you take advantage of it?

The art of decision management turns into science with the right tooling in place.

Best Practices Series: Manage your decisions in Production


Our Best Practices Series has focused, so far, on authoring and lifecycle management aspects of managing decisions. This post will start introducing what you should consider when promoting your decision applications to Production.

Make sure you always use release management for your decision

Carole-Ann has already covered why you should always package your decisions in releases when you have reached important milestones in the lifecycle of your decisions: see Best practices: Use Release Management. This is so important that I will repeat her key points here, stressing their importance in the production phase.

You want to be 100% certain that what you have in production is exactly what you tested, and that it will not change by side effect. This happens more frequently than you would think: a user may decide to test variations of the decision logic in what he or she thinks is a sandbox, when it is in fact the production environment.
You also want complete traceability and, at any point in time, total visibility on what the state of the decision logic was for any rendered decision you may need to review.

Everything that contributes to the decision logic should be part of the release: flows, rules, predictive and lookup models, etc. If your decision logic also includes assets the decision management system does not manage, you open the door to potential execution and traceability issues. We, of course, recommend managing your decision logic fully within the decision management system.

Only use Decision Management Systems that allow you to manage releases, and always deploy decisions that are part of a release.

Make sure the decision application fits your technical environments and requirements

Now that you have the decision you will use in production in the form of a release, you still have a number of considerations to take into account.

It must fit into the overall architecture

Typically, you will encounter one or more of the following situations:
• The decision application is provided as SaaS and invoked through REST or similar protocols (loose coupling)
• The environment is message- or event-driven (loose coupling)
• It relies mostly on micro-services, using an orchestration tool and a loosely coupled invocation mechanism
• It requires tight coupling with one (or more) application components at the programmatic API level

Your decision application will need to fit within these architectural choices with very low architectural impact.

One additional thing to be careful about is that organizations and applications evolve. We've seen many customers deploy the same decision application in multiple such environments, typically interactive and batch. You need to be able to do multi-environment deployments at low cost.

It must account for availability and scalability requirements

In loosely coupled environments, your decision application service or micro-service will need to cope with your high availability and scalability requirements. In general, this means configuring micro-services in such a way that:
• There is no single point of failure
○ replicate your repositories
○ have more than one instance available for invocation transparently
• Scaling up and down is easy

Ideally, the Decision Management System product you use has support for this directly out of the box.

It must account for security requirements

Your decision application may need to be protected. This includes
• protection against unwanted access to the decision application in production (man-in-the-middle attacks, etc.)
• protection against unwanted access to the artifacts used by the decision application in production (typically repository access)

Make sure the decision applications are deployed the most appropriate way given the technical environment and the corresponding requirements. Ideally you have strong support from your Decision Management System for achieving this.

Leverage the invocation mechanisms that make sense for your use case

You will need to figure out how your code invokes the decision application once in production. Typically, you may invoke the decision application
• separately for each “transaction” (interactive)
• for a group of “transactions” (batch)
• for a stream of “transactions” (streaming or batch)

Choosing the right invocation mechanism for your case can have a significant impact on the performance of your decision application.
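
As a sketch of the difference between per-transaction and batch invocation of a decision service exposed over REST, consider the following Python fragment; the endpoint URL, the batch route and the payload shapes are hypothetical.

```python
import requests  # third-party HTTP client

DECISION_URL = "https://decisions.example.com/loan-origination/v12"  # hypothetical endpoint

def decide_one(transaction: dict) -> dict:
    """Interactive style: one HTTP round-trip per transaction."""
    resp = requests.post(DECISION_URL, json=transaction, timeout=5)
    resp.raise_for_status()
    return resp.json()

def decide_batch(transactions: list[dict]) -> list[dict]:
    """Batch style: one round-trip for a group of transactions, assuming the service accepts a list."""
    resp = requests.post(f"{DECISION_URL}/batch", json={"transactions": transactions}, timeout=60)
    resp.raise_for_status()
    return resp.json()["decisions"]

# decide_one({"customer_id": 42, "amount": 10_000})
# decide_batch([{"customer_id": i, "amount": 5_000} for i in range(1000)])
```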

Manage the update of your decision application in production according to the requirements of the business

One key value of Decision Management Systems is that with them business analysts can implement, test and optimize the decision logic directly.

Ideally, this extends to the deployment of decision updates to production. Once business analysts have updated, tested and optimized the decision, they will frequently request that it be deployed "immediately".

Traditional products require going through IT phases, code conversion, code generation and uploads. With them, you deal with delays and the potential for new problems. Modern systems such as SMARTS do provide support for this kind of deployment.

There are some key aspects to take into account when dealing with old and new versions of the decision logic:
• updating should be a one-click, one-API-call atomic operation
• updating should be safe: if the new version fails to work satisfactorily, it should not enter production, or it should be easy to roll back
• the system should allow you to run old and new versions of the decision concurrently

In all cases, this remains an area where you want to strike the right balance between the business requirements and the IT constraints.
For example, it is possible that all changes are batched in one deployment a day because they are coordinated with other IT-centric system changes.

Make sure that you can update the decisions in Production in the most diligent way to satisfy the business requirement.
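
A minimal Python sketch of running an old and a new version of a decision concurrently, with an atomic cut-over and a trivial rollback; the release functions and the traffic split are illustrative and not a SMARTS API.

```python
import random

def decide_v11(txn: dict) -> str:
    """Stands in for the currently deployed release."""
    return "approve" if txn["score"] >= 600 else "decline"

def decide_v12(txn: dict) -> str:
    """Stands in for the new release under evaluation."""
    return "approve" if txn["score"] >= 620 else "decline"

class DecisionRouter:
    """Routes a fraction of traffic to the new release; switching is a single attribute update."""
    def __init__(self, current, candidate, candidate_share=0.1):
        self.current, self.candidate, self.candidate_share = current, candidate, candidate_share

    def decide(self, txn: dict) -> tuple[str, str]:
        use_candidate = random.random() < self.candidate_share
        version = "candidate" if use_candidate else "current"
        fn = self.candidate if use_candidate else self.current
        return version, fn(txn)

    def promote(self):
        """Atomic cut-over: the candidate becomes the current release."""
        self.current, self.candidate_share = self.candidate, 0.0

router = DecisionRouter(decide_v11, decide_v12)
print(router.decide({"score": 610}))
router.promote()  # or roll back by simply keeping the current release
```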

Track the business performance of your decision in production

Once you have a process to put decisions, in the form of releases, into production following the guidelines above, you still need to monitor their business performance.

Products like SMARTS let you characterize, analyze and optimize the business performance of the decision before it is put in production. It is important that you continue the same analysis once the decision is in production. Conditions may change. Your decisions, while effective when they were first deployed, may no longer be as effective after those changes. By tracking the business performance of the decisions in production, you can identify this situation early, analyze the reasons and adjust the decision.

In a later installment of this series, we'll tackle how to approach the issue of decision execution performance as opposed to decision business performance.

