Decision Management has been a discipline for Business Analysts for decades now, while Data Scientists have historically been avid users of Analytic Workbenches. The divide between these two crowds has traditionally been bridged by sending predictive model specifications from the latter group to the former. These specifications could take the form of a paper document stating the formula to implement, or an electronic format that could be seamlessly imported. This is why PMML (Predictive Model Markup Language) has proven to be a useful standard in our industry.
The fact is that the divide that was artificially created between these two groups is not as deep as we originally thought. There have been reasons to cross the divide, and both groups have seen significant benefits in doing so.
In this post, I will highlight a couple of use cases that illustrate my point.
Business Analysts needing more
Business Analysts are responsible for analyzing and documenting requirements. Among other responsibilities, thanks to business process and business rules technology, they have stepped up to the plate and done far more than documenting: they have taken over the maintenance of business processes and business rules that were traditionally the responsibility of IT. This has been a huge transition, with tremendous benefits.
What we have seen in the past few years, following the 2007-2008 crisis, has been an executive-driven push to become more data-driven. Analytics has popped up year after year as a key initiative, constantly ranking in the top 5. This has of course created demand for more Data Scientists, but it has also pushed Business Analysts to become more analytics-savvy.
In concrete terms, the market has put a spotlight on a void in the way companies use predictive models.
While heavy-duty predictive models, developed by Data Scientists, have been, still are, and will remain invaluable for all kinds of decisioning applications, we were missing light-weight predictive models. This is the void that Business Analysts have started to fill.
What I call light-weight predictive models are models that can be developed:
- Quickly — in the matter of hours or days
- In absence of huge amounts of data — compensated by business knowledge and intuition
Not being experts in machine learning algorithms, Business Analysts can drive the variable selection based on what they know of the industry and the problem. With fewer data samples, they can quickly come up with models that are clearly not as accurate as the heavy-duty models, but are quickly effective.
One area where we have seen this approach being institutionalized is Fraud Detection. Neural nets and other algorithms can be very sophisticated and precise, but they take time to train. You do not want a new type of fraudulent transaction to keep spreading while Data Scientists work on the re-training; you want it to stop right away. In the absence of data, the best path forward is to empower Business Analysts to add rules, and because they do not necessarily know right away what the pattern of fraud is, giving them machine learning capabilities allows them to be more data-driven, and quickly effective.
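To make this concrete, here is a minimal sketch of the kind of light-weight rule a Business Analyst might deploy immediately while the heavy-duty model is being re-trained. The pattern, field names, and thresholds are all hypothetical, chosen for illustration; a real rule would come from the analyst's knowledge of the emerging fraud scheme.

```python
# Hypothetical light-weight fraud rule, added by a Business Analyst from
# business intuition rather than a trained model. All names and thresholds
# below are illustrative assumptions, not from an actual deployment.

def flag_suspicious(txn):
    """Return True when a transaction matches the newly observed pattern:
    small 'test' charges at a new merchant in rapid succession (a common
    card-testing signature)."""
    return (
        txn["amount"] < 2.00
        and txn["merchant_age_days"] < 30
        and txn["txns_last_hour"] > 5
    )

transactions = [
    {"amount": 1.50, "merchant_age_days": 10, "txns_last_hour": 8},
    {"amount": 120.0, "merchant_age_days": 400, "txns_last_hour": 1},
]
flags = [flag_suspicious(t) for t in transactions]
```

A rule like this stops the bleeding in hours; the re-trained model can later replace or refine it with a data-driven pattern.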
Data Scientists needing more
While Data Scientists have very sophisticated workbenches to slice and dice the data, there is one step missing: the ability to test and prove the quality of the model. Granted, you can validate your model against data that you set aside. But you cannot really assess how well the model performs if you only test it in isolation.
First there is an operationalization aspect to it. In order to test the model against operational data, and not the sanitized data that has been scrubbed, you do need to turn your model into something that can be executed.
The second challenge is that the model will give you a score, which is significant in itself, but only usable with its cut-off rules. You need to figure out where to set the thresholds for approval, and possibly tiering if you want to segment your transactions into buckets.
Lastly, there is further decision logic that may come into play. Business rules evaluated before the model (for pre-qualification, for example) and business rules executed after the model (for rating, for example) need to be taken into consideration to evaluate the quality of the model in this larger context.
What we see now and then is that Data Scientists spend time and energy building models that never get deployed, because no one can assess and prove the uplift they would bring to the overall decision strategy.
As a result, we see Data Scientists engaging more and more in the authoring of such business rules, or collaborating more closely with Business Analysts who have access to these tools.
Learn more about Decision Management and Sparkling Logic’s SMARTS™ Data-Powered Decision Manager