Risk Management techniques and enterprise tools have been around for some time, mostly in insurance, finance and banking. With the growth of industrial IoT and connected devices, OEMs and industrial customers have more opportunities to apply similar techniques to industrial equipment failure and maintenance problems. As some of our customers like ABT have shown, such new problems require state-of-the-art Prescriptive Analytics tools to reduce failure risk and optimize maintenance costs in any industrial setting.
There are three obvious reasons the latest analytics tools can improve IoT failure risk management:
1. New, more granular IoT data may be difficult to correlate with failures –
Unlike financial transactions, where human behavior and fraud have been tracked for some time, machine data from newly connected sub-components and the related equipment failures are relatively new. Since there may be limited historical information correlating newly archived data with documented failures, it is essential to augment predictive analytics (machine learning) tools with traditional human experience (business rules).
2. Correlating multiple IoT components with a failure is difficult – As sensors and components become more prevalent, it may be difficult to attribute a failure to a particular component's behavior. For example, in a commercial distillery, an increased distillate temperature and the related loss of alcohol efficiency on a hybrid still may be caused by reduced coolant flow, blockage of a redistillation plate, or a steam valve failure. By tracking sensors on each component and correlating their readings with human operator experience, a distilling plant can better predict when a particular distilling component needs cleaning or maintenance. In other words, collecting data from multiple industrial IoT components and blending it with experience-based learning will significantly improve predictions of failure, or of the need for unscheduled maintenance to keep equipment running efficiently.
3. Learning improves maintenance insight, reduces costs –
Today, most OEMs prescribe periodic scheduled maintenance whether it is needed or not. Frequently, such maintenance does not account for the higher risk of failure due to a component problem. As a result, industrial customers experience both unnecessary maintenance and unpredictable failures, both of which increase costs and prolong costly downtime. For example, one of our customers, a major power distributor in Western Australia, combined predictive analytics with decision logic to identify power grid components more likely to fail soon.
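The distillery example in point 2 above can be sketched in a few lines. This is a minimal, hypothetical illustration: the sensor names, thresholds, and rules are invented for the example, standing in for the operator experience a real deployment would encode.

```python
# Hypothetical sketch: blend multi-sensor readings with operator-derived rules
# to flag which distilling component likely needs maintenance.
# Sensor names and thresholds are illustrative, not from any real system.

def diagnose_still(readings):
    """Return a list of (component, action) suggestions from sensor data."""
    suggestions = []
    # Rule encoding operator experience: hot distillate plus low coolant flow
    # points at the cooling circuit, not the still itself.
    if readings["distillate_temp_c"] > 85 and readings["coolant_flow_lpm"] < 10:
        suggestions.append(("cooling circuit", "inspect coolant pump and lines"))
    # High plate pressure drop with normal coolant flow suggests plate fouling.
    if readings["plate_pressure_drop_kpa"] > 12 and readings["coolant_flow_lpm"] >= 10:
        suggestions.append(("redistillation plate", "schedule cleaning"))
    # Steam valve stuck wide open while the distillate temperature climbs.
    if readings["steam_valve_open_pct"] > 95 and readings["distillate_temp_c"] > 85:
        suggestions.append(("steam valve", "check valve actuator"))
    return suggestions

readings = {
    "distillate_temp_c": 88.0,
    "coolant_flow_lpm": 7.5,
    "plate_pressure_drop_kpa": 9.0,
    "steam_valve_open_pct": 60.0,
}
print(diagnose_still(readings))
```

Because each rule names the component it implicates, the same correlation that predicts the failure also prescribes where the unscheduled maintenance should go.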
Modern decision management platforms like Sparkling Logic SMARTS help solve this ultimate Risk Management problem. They enable intelligent industrial machinery that learns and flags likely failures before and outside scheduled maintenance intervals. I predict that, using such tools, progressive OEMs and industrial customers will move to variable maintenance schedules and predict the majority of failures BEFORE they happen.
In summary, industrial customers and industrial equipment OEMs need modern tools to manage connected IoT components and equipment and ultimately implement advanced industrial IoT risk management. These techniques will result in higher uptime, lower maintenance costs and higher productivity. Such modern prescriptive analytics tools provide two key areas of expertise:
- Predictive Analytics – to quickly analyze IoT device data; to visualize, predict, and learn the patterns of failure; and to suggest the best course of action or improved maintenance schedules.
- Decision Management / Rules Engines – to implement predictive discoveries in an easy, graphical fashion, to test multiple failure scenarios, and to instantly deploy the industrial failure risk logic. Deploying and automating improved failure risk decisions will allow even less skilled operators to manage the most complex industrial systems with great efficiency.
Learn more about how SMARTS Decision Manager can help improve your IoT failure risk.
Decision Management has been a discipline for Business Analysts for decades now. Data Scientists have historically been avid users of Analytic Workbenches. The divide between these two crowds has been crossed by sending predictive model specifications across, from the latter group to the former. These specifications could be on paper, stating the formula to implement, or in an electronic format that could be seamlessly imported. This is why PMML (Predictive Model Markup Language) has proven to be a useful standard in our industry.
The fact is that the divide that was artificially created between these two groups is not as deep as we originally thought. There have been reasons to cross the divide, and both groups have seen significant benefits in doing so.
In this post, I will highlight a couple of use cases that illustrate my point.
A number of organizations have adopted the idea of applying the Decision Management approach and technologies to problems such as risk, fraud, eligibility, maximization and more. If you read this blog, you probably already know what Decision Management brings to the table.
Decision Management is all about automating repeatable decisions in a maintainable way so that they can be optimized in a continuous fashion.
Decision systems can use Business Rules Management Systems (BRMS), but they do not need to restrict themselves to just that: they can also be built on Predictive Analytics technology, or they can even consist of a combination of both. The increasing availability of data that can be used to test and optimize decisions, or to extract insights from, makes it possible for decision-centric applications to combine expertise and data to levels not seen in previous generations of applications.
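A combination of the two might look like the following sketch: a toy scoring function stands in for a trained predictive model, and hand-written rules encode business expertise around it. All names, weights, and thresholds here are invented for illustration.

```python
# Illustrative sketch of a decision that combines a predictive score with
# business rules. The scoring weights and rule thresholds are made up;
# a real model would be trained on historical data.

def churn_score(customer):
    """Toy stand-in for a predictive model: higher means riskier."""
    score = 0.0
    score += 0.4 if customer["months_since_last_order"] > 6 else 0.0
    score += 0.3 if customer["support_tickets"] > 3 else 0.0
    score += 0.3 if customer["tenure_years"] < 1 else 0.0
    return score

def retention_decision(customer):
    score = churn_score(customer)
    # Business rule that overrides the model: never make an offer
    # to a delinquent account.
    if customer["payment_overdue"]:
        return "collections"
    # Model-driven branch, refined by an expertise-based rule on tenure.
    if score >= 0.5:
        return "retention offer" if customer["tenure_years"] >= 2 else "courtesy call"
    return "no action"

print(retention_decision({
    "months_since_last_order": 8, "support_tickets": 5,
    "tenure_years": 3, "payment_overdue": False,
}))
```

Note how the rules both gate the model (the overdue check) and refine it (the tenure split): neither the BRMS-style rules nor the predictive score alone would yield the same decision.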
In this post, we’ll outline the evolution from pure Business Rules Systems to Prescriptive Analytics platforms for decision-centric applications. Read more…
In part 1, we saw that we could use knowledge, experience and intuition to build a model serving as a basis for making decisions. But when historical data is available, we can do more…
When large amounts of historical data are available (and the larger, the better), a predictive model can be built using predictive analytics: this basically uses statistics to comb through the data and find patterns. Such patterns can of course be found more easily when they occur frequently. It can be quite useful to make use of the results of BI (if available) to guide the predictive analytics algorithms so that they find the proper correlations.
When successful, the predictive model, applied to new cases, will predict a given outcome based on past experience. Automation of the decision making, using the predictive model, can be performed by building business rules from that model.
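One common way to build business rules from a model is to flatten a learned decision tree into if/then rules. The toy tree below stands in for a model learned from historical data (its features and splits are invented for the example); walking it yields rules a rules engine could deploy and analysts could then enrich.

```python
# Sketch of turning a predictive model into business rules. The tree is a
# hypothetical stand-in for a trained model; each root-to-leaf path becomes
# one if/then rule.

tree = {
    "feature": "temperature", "threshold": 85,
    "left": {"label": "ok"},                       # temperature <= 85
    "right": {                                     # temperature > 85
        "feature": "vibration", "threshold": 3.0,
        "left": {"label": "ok"},
        "right": {"label": "maintenance"},
    },
}

def tree_to_rules(node, conditions=()):
    """Flatten a decision tree into (conditions, outcome) business rules."""
    if "label" in node:
        return [(list(conditions), node["label"])]
    f, t = node["feature"], node["threshold"]
    rules = tree_to_rules(node["left"], conditions + (f"{f} <= {t}",))
    rules += tree_to_rules(node["right"], conditions + (f"{f} > {t}",))
    return rules

for conds, outcome in tree_to_rules(tree):
    print("IF", " AND ".join(conds), "THEN", outcome)
```

Once expressed this way, the model-derived rules sit alongside expert-written ones and can be edited, tested, and enriched like any other rule.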
And the resulting business rules can, as usual, be enriched using existing knowledge or future knowledge acquired over time (from human experience, or other predictive analytics “campaigns”).
When the results of predictive analytics are used in a number of simulation scenarios, we end up with a number of possible outcomes, some of them better than others (and here we are talking about business performance).
These simulation scenarios may be run continually, as new historical data becomes available, in order to constantly optimize the predictive models and keep them aligned with a more current reality.
The possibility of obtaining a number of possible decisions trying to maximize an expected outcome, all based on historical data (and possibly also on existing knowledge) leads to a real prescription: “something that is suggested as a way to do something or to make something happen” (Merriam-Webster dictionary).
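That prescriptive step can be reduced to a minimal sketch: replay historical cases under several candidate decision strategies and keep the one with the best aggregate outcome. The cases, the risk threshold candidates, and the payoff function are all invented for illustration.

```python
# Minimal sketch of prescription via simulation: evaluate candidate decision
# strategies against historical data and prescribe the best-performing one.
# All figures are illustrative.

historical_cases = [
    {"risk": 0.9, "value": 100}, {"risk": 0.2, "value": 80},
    {"risk": 0.7, "value": 50},  {"risk": 0.1, "value": 120},
]

def payoff(case, threshold):
    """Approve when risk is below the threshold; a bad approval loses value."""
    if case["risk"] < threshold:
        return case["value"] if case["risk"] < 0.5 else -case["value"]
    return 0  # declined: no gain, no loss

def evaluate(threshold):
    """Aggregate business outcome of one strategy over the historical cases."""
    return sum(payoff(c, threshold) for c in historical_cases)

# Simulate each candidate strategy and prescribe the best one.
candidates = [0.3, 0.5, 0.8]
best = max(candidates, key=evaluate)
print(best, evaluate(best))
```

Rerunning this loop as new historical data arrives is exactly the continual optimization described above: the prescription evolves with the data.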
Automatically providing advice on decisions to make to reach a given target is a very appealing and powerful idea: you don’t just rely on “gut feeling” or experience or past knowledge; you rely on all of these, simultaneously. And the suggestions evolve as time passes, allowing quick refocusing.
Making informed decisions
The ability to make decisions based on so many different aspects that evolve over time is something we humans already do at our own level (both consciously and unconsciously).
Scaling this up to tactical and strategic levels in the Enterprise requires the use of prescriptive analytics, backed by knowledge, experience, and big data, so that we can take comfort in having made those decisions based on everything we had at our disposal.
Now, should I eat some Thai food for lunch, or some Japanese food?
We spend our lives, both personal and professional, making decisions all day long; some without consequences, and some with long-lasting, perhaps even game-changing ones.
Should I eat some Thai food for lunch, or some Japanese food?
Do we make targeted offers to customers who have been with us for more than 2 years, or to those who have been with us for more than 5?
How do we reduce the time it takes us to fix defective devices?
Although sometimes not making a decision is worse than making the wrong one, we all strive to make the best decisions possible. And to make the best decisions, we rely on experience and whatever information is at hand. With experience in the subject matter, decisions can be made very quickly; when the matter is new or information is scarce, we usually require more time to evaluate a number of possibilities, to make a few computations, to balance the pros and cons.
All this is part of our daily lives. But when a large number of decisions need to be made in a short amount of time, or when the data available to us is limited, or on the other hand enormous, automation can come to the rescue. But how can we make informed decisions at a large scale? Read more…