Our Best Practices Series has focused, so far, on authoring and lifecycle management aspects of managing decisions. This post will start introducing what you should consider when promoting your decision applications to Production.
Make sure you always use release management for your decisions
Carole-Ann has already covered why you should always package your decisions in releases when you have reached important milestones in the lifecycle of your decisions: see Best practices: Use Release Management. This is so important that I will repeat her key points here stressing its importance in the production phase.
You want to be 100% certain that what you have in production is exactly what you tested, and that it will not change as a side effect. This happens more frequently than you would think: a user may decide to test variations of the decision logic in what he or she thinks is a sandbox but which is in fact the production environment.
You also want complete traceability and, at any point in time, total visibility into what the state of the decision logic was for any decision rendered that you may need to review.
Everything that contributes to the decision logic should be part of the release: flows, rules, predictive and lookup models, etc. If your decision logic also includes assets the decision management system does not manage, you open the door to potential execution and traceability issues. We, of course, recommend managing your decision logic fully within the decision management system.
Only use Decision Management Systems that allow you to manage releases, and always deploy decisions that are part of a release.
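The release discipline above can be sketched as a simple deployment gate. The following Python sketch is purely illustrative (a hypothetical `Release`/`ProductionDeployer` API, not a real SMARTS interface): production only ever accepts immutable, packaged releases, which is what guarantees that what runs is what was tested and that every rendered decision is traceable to a version.

```python
# Minimal sketch (hypothetical API): a deployment gate that only accepts
# decision logic packaged as an immutable release.
from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    """An immutable snapshot of decision logic: flows, rules, models, lookups."""
    name: str
    version: str
    artifacts: tuple  # every asset that contributes to the decision

class ProductionDeployer:
    def __init__(self):
        self.deployed = {}  # environment -> Release

    def deploy(self, env: str, candidate) -> None:
        # Refuse anything that is not a packaged release: this guarantees that
        # production holds exactly what was tested, and stays traceable.
        if not isinstance(candidate, Release):
            raise ValueError("only packaged releases may be deployed to production")
        self.deployed[env] = candidate

deployer = ProductionDeployer()
release = Release("loan-approval", "2.3", ("flow.dmn", "rules.dmn", "score.pmml"))
deployer.deploy("production", release)
print(deployer.deployed["production"].version)  # 2.3
```

The frozen dataclass stands in for the immutability a real release mechanism provides: once packaged, the artifacts cannot change by side effect.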
Make sure the decision application fits your technical environments and requirements
Now that you have the decision you will use in production in the form of a release, you still have a number of considerations to take into account.
It must fit into the overall architecture
Typically, you will encounter one or more of the following situations:
• The decision application is provided as a SaaS and invoked through REST or similar protocols (loose coupling)
• The environment is message or event driven (loose coupling)
• The architecture relies mostly on micro-services, using an orchestration tool and a loose-coupling invocation mechanism
• It requires tight coupling between one (or more) application components at the programmatic API level
Your decision application will simply need to fit within these architectural choices with very low architectural impact.
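One way to keep that architectural impact low is to separate the decision logic itself from its invocation mechanism. The sketch below is illustrative (the adapter classes and `decide` function are stand-ins, not a product API): the same released decision sits behind a direct programmatic call for tight coupling, or a message handler for event-driven, loose coupling.

```python
# Sketch: decouple decision logic from its invocation mechanism, so the same
# released decision can serve tightly and loosely coupled environments alike.

def decide(application: dict) -> str:
    """Decision logic deployed from a release; identical in every environment."""
    return "approve" if application.get("score", 0) >= 650 else "refer"

class InProcessAdapter:
    """Tight coupling: direct call at the programmatic API level."""
    def invoke(self, payload: dict) -> str:
        return decide(payload)

class QueueAdapter:
    """Loose coupling: message/event-driven invocation."""
    def __init__(self):
        self.outbox = []
    def on_message(self, payload: dict) -> None:
        self.outbox.append(decide(payload))

tight = InProcessAdapter()
loose = QueueAdapter()
loose.on_message({"score": 700})
print(tight.invoke({"score": 600}), loose.outbox[0])  # refer approve
```

Because the adapters share one `decide` entry point, deploying the same decision in multiple environments (say, interactive and batch) does not require duplicating the logic.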
One additional thing to be careful about is that organizations and applications evolve. We’ve seen many customers deploy the same decision application in multiple such environments, typically interactive and batch. You need to be able to do multi-environment deployments at low cost.
It must account for availability and scalability requirements
In loosely coupled environments, your decision application service or micro-service will need to cope with your high-availability and scalability requirements. In general, this means configuring micro-services in such a way that:
• There is no single point of failure
○ replicate your repositories
○ have more than one instance available for invocation transparently
• Scaling up and down is easy
Ideally, the Decision Management System product you use has support for this directly out of the box.
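To make the "no single point of failure" bullet concrete, here is an illustrative client-side failover sketch. The `Instance` objects are stand-ins for replicated decision-service endpoints (real products typically handle this behind a load balancer or service mesh, out of the box):

```python
# Sketch: failover across replicated decision-service instances, so that one
# instance going down does not interrupt decision invocation.

class Instance:
    """Stand-in for one replicated decision-service endpoint."""
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

    def invoke(self, payload: dict) -> dict:
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return {"decision": "approve", "served_by": self.name}

def invoke_with_failover(instances, payload):
    # Try each replica in turn; succeed as long as at least one is up.
    last_error = None
    for inst in instances:
        try:
            return inst.invoke(payload)
        except ConnectionError as exc:
            last_error = exc
    raise last_error

replicas = [Instance("node-a", healthy=False), Instance("node-b")]
print(invoke_with_failover(replicas, {})["served_by"])  # node-b
```

Scaling up or down then amounts to adding or removing entries in the replica list, with no change to the calling code.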
It must account for security requirements
Your decision application may need to be protected. This includes:
• protection against unwanted access to the decision application in production (man-in-the-middle attacks, etc.)
• protection against unwanted access to the artifacts used by the decision application in production (typically repository access)
Make sure the decision applications are deployed the most appropriate way given the technical environment and the corresponding requirements. Ideally you have strong support from your Decision Management System for achieving this.
Leverage the invocation mechanisms that make sense for your use case
You will need to figure out how your code invokes the decision application once in production. Typically, you may invoke the decision application:
• separately for each “transaction” (interactive)
• for a group of “transactions” (batch)
• for a stream of “transactions” (streaming or batch)
Choosing the right invocation mechanism for your case can have a significant impact on the performance of your decision application.
Manage the update of your decision application in production according to the requirements of the business
One key value of Decision Management Systems is that with them business analysts can implement, test and optimize the decision logic directly.
Ideally, this extends to the deployment of decision updates to production. Once the business analysts have updated, tested and optimized the decision, they will frequently request that it be deployed “immediately”.
Traditional products require going through IT phases: code conversion, code generation and uploads. With them, you deal with delays and the potential for new problems. Modern systems such as SMARTS do provide support for this kind of direct deployment.
There are some key aspects to take into account when dealing with old and new versions of the decision logic:
• updating should be an atomic operation, whether triggered by one click or by one API call
• updating should be safe (if the newer one fails to work satisfactorily, it should not enter production or should be easily rolled back)
• the system should allow you to run old and new versions of the decision concurrently
In all cases, this remains an area where you want to strike the right balance between the business requirements and the IT constraints.
For example, it is possible that all changes are batched into a single daily deployment because they are coordinated with other IT-centric system changes.
Make sure that you can update the decisions in production in the most diligent way to satisfy the business requirements.
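The three update guidelines above (atomic switch, safe rollback, old and new versions running concurrently) can be sketched as a simple champion/challenger router. The version names, thresholds and traffic share below are all illustrative:

```python
# Sketch: route a small share of traffic to the new decision version, so a
# bad update never hits all transactions and rollback is a one-line change.
import random

def decide_v1(txn):  # champion: the version currently trusted in production
    return "approve" if txn["score"] >= 650 else "refer"

def decide_v2(txn):  # challenger: the newly deployed update
    return "approve" if txn["score"] >= 620 else "refer"

def route(txn, challenger_share=0.1, rng=random.random):
    if rng() < challenger_share:
        version, fn = "v2", decide_v2
    else:
        version, fn = "v1", decide_v1
    # record which version decided, for traceability of every transaction
    return {"version": version, "decision": fn(txn)}

result = route({"score": 630}, challenger_share=1.0, rng=lambda: 0.0)
print(result)  # {'version': 'v2', 'decision': 'approve'}
```

Setting `challenger_share` to 0 is the rollback; setting it to 1 completes the update. Both are atomic from the caller's point of view.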
Track the business performance of your decision in production
Once you have a process to put decisions, in the form of releases, into production following the guidelines above, you still need to monitor their business performance.
Products like SMARTS let you characterize, analyze and optimize the business performance of the decision before it is put in production. It is important that you continue the same analysis once the decision is in production. Conditions may change: your decisions, while effective when they were first deployed, may no longer be as effective later. By tracking the business performance of the decisions in production, you can identify this situation early, analyze the reasons and adjust the decision.
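As a minimal illustration of this kind of tracking (the KPI, data and threshold are invented for the example), the sketch below compares a business metric, here the approval rate, between the deployment baseline and a recent window, and flags drift worth investigating:

```python
# Sketch: detect drift in a business KPI between deployment time and now.

def approval_rate(decisions):
    """Share of 'approve' outcomes in a window of logged decisions."""
    return sum(d == "approve" for d in decisions) / len(decisions)

def drifted(baseline, current, tolerance=0.05):
    """True when the KPI moved more than the tolerated amount."""
    return abs(approval_rate(current) - approval_rate(baseline)) > tolerance

at_launch = ["approve"] * 70 + ["refer"] * 30   # 70% approvals when deployed
this_week = ["approve"] * 55 + ["refer"] * 45   # 55% approvals now
print(drifted(at_launch, this_week))  # True -> analyze and adjust the decision
```

A real implementation would of course track several KPIs over rolling windows, but the principle is the same: the comparison runs continuously, not just at deployment time.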
In a later installment on this series, we’ll tackle how to approach the issue of decision execution performance as opposed to decision business performance.
Risk Management techniques and enterprise tools have been around for some time, mostly in insurance, finance and banking. With the growth of industrial IoT and connected devices, OEMs and industrial customers have more opportunity to apply similar techniques to industrial equipment failure and maintenance problems. As some of our customers like ABT have shown, such new problems require state of the art Prescriptive Analytics tools to reduce failure risk and optimize maintenance costs in any industrial setting.
There are three obvious reasons the latest analytics tools can improve IoT failure risk management:
1. New, more granular IoT data may be difficult to correlate with failures –
Unlike financial transactions, where human behavior and fraud have been tracked for some time, machine data from newly connected sub-components and the related equipment failures are relatively new. Since there may be limited historical information correlating newly archived data with documented failures, it is essential to augment predictive analytics (machine learning) tools with traditional human experience (business rules).
2. Correlating multiple IoT components with a failure is difficult – As sensors and components become more prevalent, it may be difficult to correlate a particular component behavior with a failure. For example, in a commercial distillery, the increased temperature of a distillate and the related loss of alcohol efficiency on a hybrid still may be caused by reduced coolant flow, blockage of a redistillation plate or a steam valve failure. By tracking sensors on each component and correlating them with human operator experience, a distilling plant can predict more appropriate cleaning and maintenance of a particular distilling component. In other words, collecting data from multiple industrial IoT components and blending it with experience-based learning will significantly improve predicting the likelihood of failure or the need for unscheduled maintenance to maintain equipment efficiency.
3. Learning improves maintenance insight, reduces costs –
Today, most OEMs have periodic scheduled maintenance whether needed or not. Frequently, such maintenance does not account for higher risk of failure due to a component problem. As a result, industrial customers experience both unnecessary maintenance and unpredictable failures, both which increase costs and prolong costly downtimes. For example, one of our customers, a major power distributor in Western Australia, combined predictive analytics with decision logic to identify power grid components more likely to fail soon.
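The blending of machine learning with operator experience described above can be sketched as follows. Everything here is invented for illustration (the sensor names, weights and thresholds are not from a real distillery model): a toy statistical score over correlated sensor readings is combined with an experience-based rule that can override it.

```python
# Sketch: blend a (toy) predictive failure score over multiple IoT sensor
# readings with an experience-based business rule that can override it.

def failure_score(readings: dict) -> float:
    # stand-in for a learned model: a linear score over sensor signals
    weights = {"distillate_temp": 0.4, "coolant_flow": -0.3, "steam_valve_pos": 0.3}
    return sum(weights[k] * readings[k] for k in weights)

def maintenance_advice(readings: dict) -> str:
    score = failure_score(readings)
    # operator-experience rule: a nearly closed steam valve warrants inspection
    # regardless of what the statistical score says
    if readings["steam_valve_pos"] > 0.9:
        return "inspect steam valve"
    return "schedule cleaning" if score > 0.5 else "no action"

print(maintenance_advice({"distillate_temp": 1.0, "coolant_flow": 1.0,
                          "steam_valve_pos": 0.95}))  # inspect steam valve
```

The point of the structure is that the rule layer captures human experience that limited failure history cannot yet teach the model, exactly the gap item 1 above describes.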
Modern decision management platforms like Sparkling Logic SMARTS help address this ultimate risk management problem. They enable intelligent industrial machinery that learns and flags likely failures before, and outside of, scheduled maintenance intervals. I predict that, using such tools, progressive OEMs and industrial customers will move to variable maintenance schedules and predict the majority of failures BEFORE they happen.
In summary, industrial customers and industrial equipment OEMs need modern tools to manage connected IoT components and equipment and ultimately implement advanced industrial IoT risk management. These techniques will result in higher uptime, lower maintenance costs and higher productivity. Such modern prescriptive analytics tools provide two key areas of expertise:
- Predictive Analytics – to quickly analyze IoT device data; visualize, predict and learn the patterns of failure; and suggest the best course of action or improved maintenance schedules.
- Decision Management / Rules Engines – to implement predictive discoveries in an easy, graphical fashion, as well as to test multiple failure scenarios and instantly deploy the industrial failure risk logic. Deploying and automating improved failure risk decisions will allow even less-skilled operators to manage the most complex industrial systems with great efficiency.
Learn more about how SMARTS Decision Manager can help improve your IoT failure risk.
Decision Management has been a discipline for Business Analysts for decades now. Data Scientists have been historically avid users of Analytic Workbenches. The divide between these two crowds has been crossed by sending predictive model specifications across, from the latter group to the former. These specifications could be in the form of paper, stating the formula to implement, or in electronic format that could be seamlessly imported. This is why PMML (Predictive Model Markup Language) has proven to be a useful standard in our industry.
The fact is that the divide that was artificially created between these two groups is not as deep as we originally thought. There have been reasons to cross the divide, and both groups have seen significant benefits in doing so.
In this post, I will highlight a couple of use cases that illustrate my point.
A number of organizations have adopted the idea of applying the Decision Management approach and technologies to problems such as risk, fraud, eligibility and more. If you read this blog, you probably already know what Decision Management brings to the table.
Decision Management is all about automating repeatable decisions in a maintainable way so that they can be optimized in a continuous fashion.
Decision systems can use Business Rules Management Systems (BRMS), but they do not need to restrict themselves to just that: they can also be built on Predictive Analytics technology, or they can even consist of a combination of both. The increasing availability of data that can be used to test and optimize decisions, or to extract insights from, makes it possible for decision-centric applications to combine expertise and data to levels not seen in previous generations of applications.
In this post, we’ll outline the evolution from pure Business Rules Systems to Prescriptive Analytics platforms for decision-centric applications.
In part 1, we saw that we could use knowledge, experience and intuition to build a model serving as a basis for making decisions. But when historical data is available, we can do more…
When large amounts of historical data are available (and the larger, the better), a predictive model can be built using predictive analytics: this basically uses statistics to comb through the data and find patterns. Such patterns can of course be found more easily when they occur frequently. It can be quite useful to make use of the results of BI (if available) to guide the predictive analytics algorithms so that they find the proper correlations.
When successful, the predictive model, applied to new cases, will predict a given outcome based on past experience. Automation of the decision making, using the predictive model, can be performed by building business rules from that model.
And the resulting business rules can, as usual, be enriched using existing knowledge or future knowledge acquired over time (from human experience, or other predictive analytics “campaigns”).
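As a concrete illustration of building business rules from a predictive model, the sketch below flattens a tiny decision tree into if/then rules. The tree is hand-written here as a stand-in for one actually mined from historical data, and the feature names are invented; the resulting rules are exactly the kind of artifact analysts can then enrich with their own knowledge:

```python
# Sketch: flatten a (stand-in) learned decision tree into business rules.
# A tree node is (feature, threshold, left_if_less, right_if_geq); a leaf is
# just the outcome string.

tree = ("age", 25,
        ("claims", 2, "standard", "high_risk"),  # branch taken when age < 25
        "standard")                              # branch taken when age >= 25

def tree_to_rules(node, conditions=()):
    if isinstance(node, str):  # leaf: emit one rule with the path's conditions
        return [(" and ".join(conditions) or "always", node)]
    feature, threshold, left, right = node
    return (tree_to_rules(left,  conditions + (f"{feature} < {threshold}",)) +
            tree_to_rules(right, conditions + (f"{feature} >= {threshold}",)))

for condition, outcome in tree_to_rules(tree):
    print(f"IF {condition} THEN {outcome}")
```

Running this prints three rules, one per leaf of the tree, such as `IF age < 25 and claims >= 2 THEN high_risk`. Once in rule form, the logic can be edited, enriched and versioned like any other business rule.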
When the results of predictive analytics are used in a number of simulation scenarios, we end up with a number of possible outcomes, a few of them possibly better than others (and here we are talking business performance).
These simulation scenarios may be run continually, as new historical data becomes available, in order to constantly optimize the predictive models –and also so that they correspond to a reality that is more current.
The possibility of obtaining a number of possible decisions trying to maximize an expected outcome, all based on historical data (and possibly also on existing knowledge) leads to a real prescription: “something that is suggested as a way to do something or to make something happen” (Merriam-Webster dictionary).
Automatically providing advice on decisions to make to reach a given target is a very appealing and powerful idea: you don’t just rely on “gut feeling” or experience or past knowledge; you rely on all of these, simultaneously. And the suggestions evolve as time passes, allowing quick refocusing.
Making informed decisions
The ability to make decisions based on so many different aspects that evolve over time is already something we, humans, have at our own level (both consciously and unconsciously).
Scaling this up to tactical and strategic levels in the Enterprise requires the use of prescriptive analytics, backed by knowledge, experience, and big data, so that we can have some comfort that we made those decisions based on all that we had at our disposal.
Now, should I eat some Thai food for lunch, or some Japanese food?
We spend our lives, both personal and professional, making decisions, all day long; some without consequences, and some with long-lasting and even perhaps game-changing ones.
Should I eat some Thai food for lunch, or some Japanese food?
Do we make targeted offers to customers that have been with us for more than 2 years, or to those that have been with us for more than 5?
How do we reduce the time it takes us to fix defective devices?
Although sometimes not making a decision is worse than making the wrong one, we all strive to make the best decisions possible. And to make the best decisions, we rely on experience and whatever information is at hand. With experience in the subject matter, decisions can be made very quickly; when the matter is new or information is scarce, we usually require more time to evaluate a number of possibilities, to make a few computations, to balance the pros and cons.
All this is part of our daily lives. But when a large number of decisions need to be made in a short amount of time, or when the data available to us is limited, or on the other hand enormous, automation can come to the rescue. But how can we make informed decisions at a large scale?