
Software industry trends behind the digital transformation revolution


This article presents the three software industry trends driving the digital transformation revolution: DevOps, low-code/no-code automation, and vertical integration with digital decisioning.

Introduction

The pandemic changed tech priorities for many people, both at work and at home, making ‘hybrid’ work a top initiative. Changes in where and how we work accelerated the need to improve customer digital experiences and efficiency across work, shopping, and everyday chores.

The data supports this new trend. The independent research firm Omdia compiled over 300 responses from executives at large companies, which indicated that working away from traditional offices will become the new norm. 58% of respondents said they will adopt a hybrid home/work model. Even more interesting, 68% of enterprises believe employee productivity has improved since the move to remote work.

Similarly, adoption of everyday online activities such as shopping, banking, and entertainment further accelerated the pace of digital transformation. The need for improved applications increased the pressure on companies to relaunch efficient, friendly front-end customer apps with more intuitive UX. The back end now needs to support faster turnaround, with processes automated for a new online community of users demanding faster, cleaner, and more intelligent offerings.

To respond to this digital transformation, companies are rapidly adopting easy-to-use integrated enterprise software tools to optimize and accelerate development of these efficient digital products.

Several trends, namely DevOps, low-code/no-code automation, and vertical integration with digital decisioning, have emerged to help enterprises take the digital transformation journey faster and cheaper.

DevOps

DevOps is a software development concept that brings together historically disconnected functions in the software development lifecycle. Traditionally, business analysts would define the problem, developers would interpret the concept and build applications, and operations teams would test, report bugs, and provide feedback. The disconnect between these functions and the siloed approach created inefficiencies, increased costs, and slowed down application releases.

The emergence of tools and processes that integrate these multiple aspects of software development and promote collaboration between the different functions supported the growth of the DevOps industry.

In fact, market data shows that these trends are supported by the investment community and by exit activity. According to VentureBeat, in Q2 2021 venture funding for global DevOps startups reached $4 billion, and exit deal value was dominated by the IPOs of UiPath (robotic process automation) and Confluent (a data/application integration platform).

Low code / no code automation

Application development is also coming closer to non-developers through the low-code/no-code approach and automation.

Software engineering, traditionally owned by IT and software engineers, has always been coveted by other, non-IT stakeholders in the enterprise. In 1991, PowerBuilder introduced the revolutionary concept of a development framework aimed at democratizing development by giving non-software professionals access to application development. Perhaps ahead of its time, with a clunky WYSIWYG UX, PowerBuilder started the revolution by introducing ‘citizen developers’: people who participated alongside IT in shaping applications and business models but could not code and create the applications themselves. It also introduced data integration with application logic and object-oriented concepts like inheritance, polymorphism, and encapsulation, bringing software engineering to the masses.

Fast forward to the 2020s: virtually every enterprise tool platform and enterprise customer has adopted a low-code/no-code approach. The mission is the same as 30 years ago: provide an easy-to-use, graphical, drag-and-drop approach to application development and allow business analysts, ‘citizen developers’, and non-software engineers to create, test, and even deploy enterprise applications.

Vertical integration with digital decisioning

The perennial challenge of allowing non-developers to create applications is the conundrum of how deep they can develop without coding and to what extent they can customize complex enterprise cloud applications without IT and coding.

To accelerate digital transformation, enterprise software vendors are emerging mostly from the workflow/BPA world, such as Pega and ServiceNow. They are applying a two-pronged approach: a core tool collection and vertical integration. The workflow vendors have developed (or acquired) a collection of point tools in a core-component framework. Those components typically include AI/ML, reporting, workflow, RPA (robotic process automation), case management, a rules engine, decision management, knowledge bases, BPA (business process automation), and process orchestration. The components typically feature a common UI and work across a normalized data model and unified architecture.

But that is not enough. To satisfy modern rapid digital transformation needs, fintech enterprise customers (i.e., banks, insurance companies, and financial services firms) now also require pre-built workflow, data, and application models. These vertical templates are higher level and more specific, providing out-of-the-box, drag-and-drop solutions for credit card operations, loan management, and payment operations. Using the low-code approach, a business analyst can graphically drag and drop pre-defined steps into a loan origination workflow with pre-defined, commonly used tasks, created using best practices defined by ‘centers of excellence’. Companies like UiPath have created a third-party marketplace for additional steps and templates created by analysts and consultants. (Those steps could be ‘get customer data’, ‘OCR input form’, ‘scrub customer data’, ‘authorize user’, ‘assess risk profile’, etc.)
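As a rough sketch of the idea, a pre-defined-step workflow is essentially an ordered list of functions applied to a shared context. The step names below mirror the examples in the text, but the implementations are invented stubs, not any vendor's API:

```python
# A toy loan-origination workflow: each pre-defined step takes a context
# dict, enriches it, and passes it on. All step bodies are stubs.
def get_customer_data(ctx):
    ctx["customer"] = {"id": "C-1"}  # would call a CRM in real life
    return ctx

def scrub_customer_data(ctx):
    ctx["scrubbed"] = True           # would normalize/validate fields
    return ctx

def assess_risk_profile(ctx):
    ctx["risk"] = "low"              # would call a risk model
    return ctx

# The analyst's drag-and-drop ordering corresponds to this list of steps.
loan_origination = [get_customer_data, scrub_customer_data, assess_risk_profile]

ctx = {}
for step in loan_origination:
    ctx = step(ctx)
print(ctx)  # {'customer': {'id': 'C-1'}, 'scrubbed': True, 'risk': 'low'}
```

Swapping, adding, or reordering steps in the list is the textual equivalent of the drag-and-drop editing described above.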

Beyond the top-level tasks, the functionality ultimately becomes more complex, and sophisticated customers need powerful decision capabilities to introduce their own business rules and implement proprietary features. The ‘secret sauce’ that separates the most common steps from the proprietary concepts distinguishing top corporations from their competition requires more sophisticated digital decisioning tools. These digital decisioning tools enable non-developers to customize and manage decision logic, implement AI/ML features, run A/B testing, and visualize performance results on training and production data in real time.

To satisfy the most common customer needs, digital workflow vendors typically provide rudimentary business rules integrated in their low-code platforms, and further integrate them with downstream workflow platforms and vertical ecosystem vendors (e.g., FiServ, Jack Henry, SAP, Salesforce, and FIS in banking).

The most sophisticated and demanding customers, however, need a more advanced set of digital decisioning tools, such as standalone professional decision management platforms. To simplify and visualize this complex decision management, a new generation of low-code digital decision management platforms like Sparkling Logic has emerged. These platforms integrate a traditional business rules engine with data and AI, demystifying machine learning and providing a low-code approach to developing and monitoring application logic performance, continuously, as the business logic and training data change and drift.

The pandemic, hybrid work, and the pervasiveness of cloud computing have irreversibly changed software application development. Enterprise customers are seeking and deploying better, faster, more integrated software tools. DevOps integration, low-code, vertical templates, integrated AI, and digital decisioning are becoming the new normal while defining the next generation of applications, created not only by software engineers but by mere mortals across the enterprise.

About

Davorin Kuchan is the CEO of Sparkling Logic, Inc, an AI-driven digital decision management enterprise tools platform. Major enterprise customers like Equifax, Centene, First American, Nike, SwissRE and Enova deploy and integrate Sparkling Logic SMARTS digital decision engine. Sparkling Logic, Inc is based in Sunnyvale, California. http://www.sparklinglogic.com

Noise reduction in digital decisioning with Sparkling Logic SMARTS


In this post, we present how to deal with the problem of noise, which is a source of both errors and biases in digital decision-making in organizations, through explicit decision rules, dashboards, and analytics. To illustrate our point, we use the example of the Sparkling Logic SMARTS decision management platform.

Noise in organizations’ decisioning and what to do about it

In an interview with McKinsey, Olivier Sibony, one of the renowned experts in decisioning, recommends algorithms, rules, or artificial intelligence to solve the problem of noise, a generator of errors and biases in decisioning in organizations. This recommendation resonates with our vision of automating decisioning — not all of the decisioning but the operational decisions that organizations make by thousands and sometimes millions per day. Think credit origination, claim processing, fraud detection, emergency routing, and so on.

In our vision, one of the best ways to reduce noise, and therefore errors and biases, is to make decisions explicit (like the rules of law) so that those who define the decisions can test them out, one at a time or in groups, and visualize the consequences of these choices on the organization before putting them into production. In particular, decisions should be kept separate from the rest of the system calling those decisions: the CRM, the loan origination system, the credit risk management platform, etc.

Noise reduction with explicit decision rules, dashboards, and analytics

Our SMARTS decisioning platform helps organizations make their operational decisions explicit so that they can be tested and simulated before implementation, reducing biases such as failure to comply with industry regulations, deviation from organizational policies, or unfair disqualification of applicants. The consequences of biases can be high in terms of reputation or fees, and even severe in sensitive industries such as financial services, insurance, and healthcare.

In SMARTS, business users (credit analysts, underwriters, call center professionals, fraud specialists, product marketers, etc.) express decisions in the form of business rules, decision trees, decision tables, decision flows, lookup models, and other intuitive representations that make decisioning self-explainable so that they can test decisions individually as well as collectively. So, at any time, they can check potential noise, errors, and biases before they translate into harmful consequences for the organization.
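As an illustration of how a decision table pairs conditions with outcomes, here is a minimal sketch in plain Python. The field names, thresholds, and outcomes are invented for the example; this is not SMARTS syntax:

```python
# A minimal decision-table sketch: each row pairs conditions with an
# outcome, evaluated top to bottom, with a default when no row matches.
# All field names and thresholds are invented for illustration.
def decide_credit(application):
    """Return a decision string for a credit application dict."""
    table = [
        # (min_score, max_debt_to_income, decision)
        (720, 0.35, "approve"),
        (650, 0.30, "review"),
    ]
    for min_score, max_dti, decision in table:
        if application["score"] >= min_score and application["dti"] <= max_dti:
            return decision
    return "decline"  # default rule

print(decide_credit({"score": 740, "dti": 0.25}))  # approve
print(decide_credit({"score": 660, "dti": 0.28}))  # review
print(decide_credit({"score": 600, "dti": 0.40}))  # decline
```

Because every row is explicit, each rule can be inspected and tested individually, which is precisely the noise-reduction property discussed above.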

In addition to making the development of decisioning explicit, SMARTS also comes with built-in dashboards to assess alternative decision strategies and measure decision performance at all stages of the decision lifecycle. By design, SMARTS focuses the decision automation effort on tangible objectives, measured by Key Performance Indicators (KPIs). Users define multiple KPIs through graphical interactions and simple yet powerful formulas. As they capture decision logic, simply dragging and dropping any attribute into the dashboard pane automatically creates reports. Moreover, they can customize the distributions, aggregations, and rule metrics, as well as the charts used to view the results in the dashboard.

During the testing phase, users have access to SMARTS’ built-in map-reduce-based simulation capability to measure these metrics against large samples of data and transactions. In doing so, they can estimate the KPIs for impact analysis before the actual deployment. None of this testing work requires IT to code the metrics, because SMARTS translates them transparently.
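The map-reduce idea behind such a simulation can be sketched as follows: each transaction is mapped to a partial KPI record, and the partials are reduced into totals. The KPI definitions and sample data here are hypothetical, not SMARTS internals:

```python
from functools import reduce

# Hypothetical transaction sample; a real simulation would stream millions.
transactions = [
    {"approved": True, "amount": 1200},
    {"approved": False, "amount": 800},
    {"approved": True, "amount": 500},
]

def mapper(txn):
    # Map one transaction to a partial KPI record.
    return {
        "count": 1,
        "approved": int(txn["approved"]),
        "exposure": txn["amount"] if txn["approved"] else 0,
    }

def reducer(a, b):
    # Combine two partial records field by field.
    return {k: a[k] + b[k] for k in a}

totals = reduce(reducer, map(mapper, transactions))
approval_rate = totals["approved"] / totals["count"]
print(totals["exposure"])  # 1700
```

Because both steps are associative, the same computation parallelizes naturally over large data sets, which is what makes the map-reduce formulation attractive for impact analysis.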

Once the decisioning application is deployed, users have access to SMARTS’ real-time decision analytics, a kind of cockpit from which to monitor the application and make necessary changes without stopping it. The platform automatically displays KPI metrics over time or within a time window. It also generates notifications and alerts when thresholds the users have defined are crossed or certain patterns are detected. Notifications and alerts can be pushed by email or SMS, or can generate a ticket in the organization’s incident management system.
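Threshold-based alerting of this kind can be sketched as follows; the KPI names and bounds are hypothetical, not SMARTS configuration:

```python
# Check a snapshot of KPI values against user-defined bounds and
# collect alert messages for any KPI outside its allowed range.
def check_thresholds(kpis, thresholds):
    """kpis: {name: value}; thresholds: {name: (low, high)} -> alert list."""
    alerts = []
    for name, value in kpis.items():
        low, high = thresholds.get(name, (float("-inf"), float("inf")))
        if value < low or value > high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

alerts = check_thresholds(
    {"approval_rate": 0.42, "avg_latency_ms": 35},
    {"approval_rate": (0.5, 0.9)},
)
print(alerts)  # one alert: approval_rate fell below its lower bound
```

In a real deployment, each alert message would be routed to email, SMS, or an incident-management ticket, as described above.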

Rather than being a black box, SMARTS makes decisioning explicit so that the users who developed it can easily explain it to those who will operate it. Moreover, the latter can adjust the decision-making so that biases are quickly detected and corrected, without putting the organization at risk of violating legal constraints, eligibility criteria, or consumer rights.

So, if you are planning to build a noise-free, error-free, and bias-free decisioning application, SMARTS can help. The Sparkling Logic team enjoys nothing more than helping customers implement their most demanding business requirements and technical specifications. Our obsession is not only to have them satisfied, but also proud of the systems they build. We have helped companies build flaw-proof, data-tested, and scalable applications for loan origination, claims processing, credit risk assessment, and even fraud detection and response. So dare to give us a challenge, and we will solve it for you in days, not weeks or months. Just email us or request a free trial.

About

Sparkling Logic is a Silicon Valley company dedicated to helping businesses automate and improve the quality of their operational decisions with a powerful digital decisioning platform, accessible to business analysts and ‘citizen developers’. Sparkling Logic’s customers include global leaders in financial services, insurance, healthcare, retail, utility, and IoT.

Sparkling Logic SMARTS™ (SMARTS for short) is a cloud-based, low-code, AI-powered business decision management platform that unifies the authoring, testing, deployment, and maintenance of operational decisions. SMARTS combines the highly scalable Rete-NT inference engine with predictive analytics, machine learning models, and low-code functionality to create intelligent decisioning systems.

Hassan Lâasri is a data strategy consultant, now leading marketing for Sparkling Logic. You can reach him at hlaasri@sparklinglogic.com.

Sparkling Logic turns data-driven businesses into learning organizations


Sparkling Logic SMARTS AI & ModelOps

Today, predictive analytics is common in any data-driven business. Typically, data scientists create predictive models first, and IT staff deploy these models in a production environment. At Sparkling Logic, not only have we streamlined this process, but we’ve extended it with prescriptive decisions. Sparkling Logic SMARTS AI & ModelOps is the third built-in capability of SMARTS to cover the full spectrum of operational predictive models, from importing models and creating new ones to initiating learning tasks. But let’s start with a brief overview of the stages data has gone through.

Data: a resource, an asset, a business

Until recently, data was a resource for conducting business and, as such, was typically managed by the CIO’s organization. That organization’s mission was to build the overall data architecture, choose a database vendor, and design all the applications necessary to move data from the databases to the screens of business and functional users. These applications were mostly for reporting, letting the business get a sense of how it had been doing based on the collected data.

Then came the first transformation, where data went from an asset used to understand how the business had been doing to an asset leveraged to predict how the business could do in the future. Reporting was enhanced by predictive analytics. The scope of analytics was no longer only what had happened, but also what was happening and what could happen. Together, these past-focused and future-focused activities cover most of what data science is in business, with some really important use cases in marketing, sales, and customer relationship management.

However, a new transformation is underway, first in the banking, insurance, and health sectors, though it will certainly reach other sectors. It consists of transforming analytics into automated decisions, translating predictions into prescriptions. The goal of this transformation is to create a virtuous cycle in which data is not only analyzed, but the analysis is transformed into decisions and actions that generate new data, and so on. Reporting and predictive analytics are now complemented by prescriptive analytics.

Anticipating this trend, the founders of Sparkling Logic designed the SMARTS decision management platform to implement this cycle of data, insights, and decisions. Sparkling Logic SMARTS comes with a built-in AI & ModelOps environment that covers the full spectrum of operationalizing predictive models, from importing models built by data scientists, to creating new ones without prior knowledge of machine learning, to launching and managing learning jobs.

Sparkling Logic SMARTS AI & ModelOps

Predictive model import

Business analysts can import AI, machine learning, and deep learning models developed by data scientists, and leverage them in the decision logic. The models can be developed in Python, SPSS, SAS, or R, among others. SMARTS integrates them as long as they are compliant with PMML, a standard for sharing and deploying predictive models, or are accessible as services.

Via PMML, SMARTS supports importing neural networks; multinomial, general, and linear/log regression; trees; support vector machines; naïve Bayes; clustering; rulesets; scorecards; K-nearest neighbors (KNN); random forests; and other machine learning models.
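Since PMML is an XML vocabulary, the general shape of such a model can be illustrated with the Python standard library alone. The fragment below is a toy example, not a complete or valid PMML model, and this is not how SMARTS consumes PMML internally:

```python
import xml.etree.ElementTree as ET

# A tiny PMML-style fragment for illustration only (not a valid model).
pmml = """<PMML xmlns="http://www.dmg.org/PMML-4_4" version="4.4">
  <DataDictionary numberOfFields="1">
    <DataField name="score" optype="continuous" dataType="double"/>
  </DataDictionary>
  <TreeModel modelName="risk_tree" functionName="classification"/>
</PMML>"""

ns = {"p": "http://www.dmg.org/PMML-4_4"}
root = ET.fromstring(pmml)

# An importer would inspect the model element to learn its type and name.
model = root.find("p:TreeModel", ns)
print(model.get("modelName"), model.get("functionName"))
```

A real PMML consumer would go on to read the mining schema and model internals; the point here is only that the interchange format is plain, inspectable XML.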
In cases where models exist but are only available as specifications, business analysts can easily import these models and seamlessly transform them into business rules for full transparency and easy inspection.

There may be situations where it is necessary to call an external service, such as a predictive model or a data source, that is available elsewhere. SMARTS provides support for remote functions, which make it possible to invoke an external service through JSON-RPC or REST.
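A remote-function call over JSON-RPC can be sketched as follows; the endpoint URL, method name, and parameters are placeholders, not a real SMARTS or partner API, and the actual network call is left commented out:

```python
import json
import urllib.request

# Build a JSON-RPC 2.0 request to a hypothetical external scoring service.
payload = {
    "jsonrpc": "2.0",
    "method": "scoreApplicant",                      # placeholder method
    "params": {"applicantId": "A-123", "amount": 5000},
    "id": 1,
}
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "https://scoring.example.com/rpc",               # placeholder endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)   # would perform the remote call
# score = json.load(response)["result"]    # and return the model's score
print(payload["method"])
```

A REST-style remote function would look similar, with the operation encoded in the URL and HTTP verb instead of a `method` field.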

BluePen predictive technology

When time is of the essence, when models are short-lived or when expertise needs to be confronted with knowledge captured in the data, business experts can use the BluePen learning technology to quickly create a model, potentially leveraging existing models.

BluePen lets business analysts and business experts explore and analyze data using domain knowledge and expertise to identify predictors, or, alternatively, have BluePen select the predictors for them. Then, using the selected predictors, BluePen helps them generate a model in the form of readable decision rules, tables, or trees, and integrate it into their decision logic.

Using BluePen, users can build meaningful predictive models in hours or days, rather than the months it often takes. Users can also engineer or modify the models. As a result, without heavy investment in data analytics efforts, these models can be tested, leveraged in simulations, and quickly deployed in the context of an operational business decision.

Regardless of the business analyst’s choice, he or she can operationalize a wide range of models within SMARTS. Being able to integrate models into decision logic is central to testing and measuring the performance of end-to-end decisioning.

Moreover, SMARTS allows the analyst to translate the insights from many different models into a decision. Typically, data-centric organizations have many different models, each of which can contribute insights into what the decision should be. The orchestration of how these insights are combined is expressed in decision logic, turning multiple discrete predictions into actual prescriptive decisions.
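That orchestration can be sketched as a small piece of decision logic that combines several model scores into one prescriptive action. The model names, thresholds, and actions are invented for illustration:

```python
# Combine discrete predictions from several models into one decision.
# Rules are evaluated in priority order; all names are hypothetical.
def decide(insights):
    """insights: {model_name: score} -> a prescriptive action."""
    if insights["fraud_score"] > 0.8:
        return "refer_to_fraud_team"        # fraud model dominates
    if insights["default_risk"] > 0.6:
        return "decline"                    # credit risk model next
    if insights["churn_risk"] > 0.7:
        return "approve_with_retention_offer"  # marketing model last
    return "approve"

print(decide({"fraud_score": 0.1, "default_risk": 0.2, "churn_risk": 0.9}))
# approve_with_retention_offer
```

The rule ordering itself encodes business priorities (fraud before credit before retention), which is exactly the kind of logic an analyst would want to keep explicit and testable.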
The benefit of combining machine learning and automated decisioning as SMARTS does is nothing less than transforming businesses into always-learning organizations, where data helps identify opportunities, machine learning turns that data into insights, and automated decisioning turns those insights into action, closing the virtuous cycle that data promises.

Takeaways

  • Data has moved from being a resource used to assess how the business has been doing, to an asset used to predict the future of the business, and finally to an asset used to improve automated decisions
  • Sparkling Logic SMARTS comes with a built-in AI & ModelOps environment that covers the full spectrum of operationalizing predictive models, from importing models, to creating new ones, to launching and managing learning jobs
  • With its AI & ModelOps capability, SMARTS helps transform businesses into learning organizations, closing the virtuous cycle that data promises: data feeds analytics, leading to improved decisions that generate additional data, in addition to profits

About


Sparkling Logic is a decision management company founded in the San Francisco Bay Area to accelerate how companies leverage data, machine learning, and business rules to automate and improve the quality of enterprise-level decisions.

Carole-Ann is Co-Founder, Chief Product Officer at Sparkling Logic. You can reach her at cberlioz@sparklinglogic.com.

Low-Code No-Code Applied to Decision Management



Low-code no-code is not a new concept to Sparkling Logic. From the beginning, the founders wanted to deliver a powerful yet simple product, so that a business analyst could start with data and build decision logic with built-in predictive data analytics and execution decision analytics.

Version after version, they have achieved this vision through SMARTS, an end-to-end decision management platform that features low-code no-code for business analysts and business users to develop and manage decision logic through point-and-click operations.

Low-code development

For business analysts, SMARTS offers a low-code development environment in which users can express decision logic through a point-and-click user interface to connect data, experiment with decisions, and monitor execution, without switching between different tools to get the job done. Depending on the nature of the decision logic at hand and on user preferences, business analysts can choose on the fly the most appropriate representation for capturing or updating their decision logic. The resulting decision logic is seamlessly deployed as a decision service without IT intervention.

To push the simplification even further, Sparkling Logic founders turned to their customers for inspiration on their needs and developed three complementary technologies:

  • RedPen, a patented point-and-click technology that accelerates rule authoring without a need to know rule syntax or involve IT to author the rules
  • BluePen, another patented point-and-click technology to quickly create or leverage a data model and put it into production without involving data scientists or IT
  • A dynamic questionnaire to produce intelligent front-ends that reduce the number of unnecessary or redundant questions

No-code apps

In addition to low-code development capability for business analysts, SMARTS also elevates the decision logic to a simple web form-based interface for untrained business users. They can configure their decision strategies, test the updated decision logic, and promote the vetted changes to the next staging environment — without learning rules syntax.

These business apps offer a business abstraction for most tasks available in SMARTS related to configuration, testing and simulation, invocation and deployment management, as well as administration.

For example, credit risk specialists can configure loans, credit cards, and other banking products, and pricing specialists can control pricing tables, through a custom business app specific to their industry. The no-code business app enables business users to cope with environment changes whether they are related to internal policies, competition pressure, or industry regulation.

Furthermore, SMARTS tasks can also be automated through orchestration scripts. Business users can trigger these scripts through the click of a button, or schedule them to be performed automatically and seamlessly.

About

Sparkling Logic is a decision management company founded in the Bay Area to accelerate how companies leverage internal and external data and models to automate and improve the quality of enterprise-level decisions.

Sparkling Logic SMARTS is an end-to-end, low-code no-code decision management platform that spans the entire business decision lifecycle, from data import to decision modeling to application production.

Hassan Lâasri is a data strategy consultant, now leading marketing for Sparkling Logic. You can reach him at hlaasri@sparklinglogic.com.

Best Practices Series: Manage your decisions in Production


Our Best Practices Series has focused, so far, on the authoring and lifecycle management aspects of managing decisions. This post introduces what you should consider when promoting your decision applications to production.

Make sure you always use release management for your decisions

Carole-Ann has already covered why you should always package your decisions in releases when you have reached important milestones in their lifecycle: see Best Practices: Use Release Management. This is so important that I will repeat her key points here, stressing their importance in the production phase.

You want to be 100% certain that what you have in production is exactly what you tested, and that it will not change by side effect. Side effects happen more frequently than you would think: a user may decide to test variations of the decision logic in what he or she thinks is a sandbox that may, in fact, be the production environment.
You also want complete traceability and, at any point in time, total visibility into what the state of the decision logic was for any rendered decision you may need to review.

Everything that contributes to the decision logic should be part of the release: flows, rules, predictive and lookup models, etc. If your decision logic also includes assets the decision management system does not manage, you open the door to potential execution and traceability issues. We, of course, recommend managing your decision logic fully within the decision management system.

Only use Decision Management Systems that allow you to manage releases, and always deploy decisions that are part of a release.

Make sure the decision application fits your technical environments and requirements

Now that you have the decision you will use in production in the form of a release, you still have a number of considerations to take into account.

It must fit into the overall architecture

Typically, you will encounter one or more of the following situations:
• The decision application is provided as SaaS and invoked through REST or similar protocols (loose coupling)
• The environment is message- or event-driven (loose coupling)
• The architecture relies mostly on micro-services, using an orchestration tool and a loose-coupling invocation mechanism
• The architecture requires tight coupling between one (or more) application components at the programmatic API level

Your decision application needs to fit within these architectural choices with very low architectural impact.

One additional thing to be careful about is that organizations and applications evolve. We’ve seen many customers deploy the same decision application in multiple such environments, typically interactive and batch. You need to be able to do multi-environment deployments at low cost.
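The two coupling styles can be sketched as follows; the endpoint URL and function names are hypothetical, and the service call is stubbed out rather than performing real network I/O:

```python
import json
import urllib.request

# 1) Loose coupling: the decision runs behind a REST endpoint.
#    URL and payload shape are placeholders, not a product API.
def call_decision_service(payload):
    req = urllib.request.Request(
        "https://decisions.example.com/loan-origination/decide",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # return json.load(urllib.request.urlopen(req))  # real invocation
    return {"decision": "stubbed"}                   # stub for illustration

# 2) Tight coupling: the decision engine is embedded and called in-process
#    through a programmatic API (the `engine` object is hypothetical).
def call_embedded_engine(engine, payload):
    return engine.decide(payload)

print(call_decision_service({"score": 700}))
```

The key architectural point is that the same release artifact should be deployable behind either calling convention, so switching environments does not require reauthoring the decision.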

It must account for availability and scalability requirements

In a loosely coupled environment, your decision application service or micro-service will need to cope with your high availability and scalability requirements. In general, this means configuring micro-services in such a way that:
• There is no single point of failure
○ replicate your repositories
○ have more than one instance available for invocation transparently
• Scaling up and down is easy

Ideally, the Decision Management System product you use has support for this directly out of the box.

It must account for security requirements

Your decision application may need to be protected. This includes:
• protection against unwanted access to the decision application in production (man-in-the-middle attacks, etc.)
• protection against unwanted access to the artifacts used by the decision application in production (typically repository access)

Make sure the decision applications are deployed the most appropriate way given the technical environment and the corresponding requirements. Ideally you have strong support from your Decision Management System for achieving this.

Leverage the invocation mechanisms that make sense for your use case

You will need to figure out how your code invokes the decision application once in production. Typically, you may invoke the decision application:
• separately for each “transaction” (interactive)
• for a group of “transactions” (batch)
• for a stream of “transactions” (streaming or batch)

Choosing the right invocation mechanism for your case can have a significant impact on the performance of your decision application.
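The difference between interactive and batch invocation can be sketched as follows, with an invented decision function standing in for the deployed decision service:

```python
# A stand-in for the deployed decision; logic and threshold are invented.
def decide(txn):
    return "approve" if txn["score"] >= 700 else "decline"

# Interactive: one invocation per transaction, lowest latency per call.
print(decide({"score": 710}))  # approve

# Batch: one invocation for a group of transactions, amortizing the
# per-call overhead (serialization, network round trip, etc.).
batch = [{"score": s} for s in (650, 700, 720)]
print([decide(t) for t in batch])  # ['decline', 'approve', 'approve']
```

In a real deployment the per-call overhead lives in the transport layer, so grouping transactions into batches (or streaming them) is usually what delivers the throughput gains mentioned above.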

Manage the update of your decision application in production according to the requirements of the business

One key value of Decision Management Systems is that with them business analysts can implement, test and optimize the decision logic directly.

Ideally, this extends to the deployment of decision updates to production. As business analysts update, test, and optimize the decision, they will frequently request that it be deployed “immediately”.

Traditional products require going through IT phases: code conversion, code generation, and uploads. With them, you deal with delays and the potential for new problems. Modern systems such as SMARTS provide direct support for this kind of deployment.

There are some key aspects to take into account when dealing with old and new versions of the decision logic:
• updating should be an atomic operation, whether triggered through one click or one API call
• updating should be safe (if the new version fails to work satisfactorily, it should not enter production, or it should be easy to roll back)
• the system should allow you to run old and new versions of the decision concurrently

In all cases, this remains an area where you want to strike the right balance between the business requirements and the IT constraints.
For example, it is possible that all changes are batched in one deployment a day because they are coordinated with other IT-centric system changes.

Make sure that you can update the decisions in Production in the most diligent way to satisfy the business requirement.
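The atomic-update and rollback requirements above can be sketched as a single reference swap; the class and release names are invented for illustration:

```python
# A sketch of atomic release switching with rollback: the active release
# is a single reference, so an update is one atomic swap. Names are
# illustrative, not any product's deployment API.
class DecisionDeployment:
    def __init__(self, release):
        self.active = release
        self.previous = None

    def update(self, new_release, smoke_test):
        """Swap in new_release only if it passes a smoke test."""
        if not smoke_test(new_release):
            return False            # unsafe release never enters production
        self.previous = self.active # keep the old version for rollback
        self.active = new_release   # one atomic swap
        return True

    def rollback(self):
        """Restore the previously active release, if any."""
        if self.previous is not None:
            self.active, self.previous = self.previous, None

deploy = DecisionDeployment("release-1.0")
deploy.update("release-1.1", smoke_test=lambda r: True)
print(deploy.active)  # release-1.1
deploy.rollback()
print(deploy.active)  # release-1.0
```

Running old and new versions concurrently (the third requirement) would keep both references live and route a fraction of traffic to each, a natural extension of the same structure.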

Track the business performance of your decision in production

Once you have a process to put decisions, in the form of releases, into production following the guidelines above, you still need to monitor their business performance.

Products like SMARTS let you characterize, analyze, and optimize the business performance of a decision before it is put into production. It is important that you continue the same analysis once the decision is in production. Conditions may change: your decisions, while effective when first deployed, may no longer be as effective afterwards. By tracking the business performance of decisions in production, you can identify this situation early, analyze the reasons, and adjust the decisions.

In a later installment in this series, we’ll tackle how to approach decision execution performance as opposed to decision business performance.

When They’re Searching for the Next Big Thing


How to Keep and Delight Fickle Customers in Competitive Insurance and Fintech Markets

Are your customers fickle? How well do you anticipate their needs, proactively offer packages at competitive prices, and react to regulatory and competitive changes before customers leave you?

Today’s banks, insurance and financial firms operate in a fast-moving, highly competitive and rapidly changing market. Disruption is everywhere, and customers have choices they can make in an instant from their smartphones. Losing a customer to a more nimble competitor can take no longer than a few finger swipes over a cup of coffee.

Particularly in the insurance market, customer interactions are precious and few. An insurance company rep needs to not only delight their customer when an opportunity arises, but also upsell them by offering a personalized product or service tailored to their needs, virtually instantly.

Doing the same thing as before is a certain way to lose business

Nimble competitors now use the latest AI and analytics technology to rapidly discover and deploy intelligent decision systems that can instantly predict customer needs and customize the offering and pricing relevant to the customer at the right time.

To achieve and sustain such flexibility, a financial organization needs to modernize its underlying systems. The best companies build living decision intelligence into their systems, ready to be updated on a moment’s notice.

If a competitor offers a better deal, a customer has a life event, or data analysts discover a new pattern for risk or fraud, core systems need to be updated virtually instantly. With an intelligent, AI-driven central decision management system at the heart of your core systems, anyone in your organization can have the latest intelligence at their fingertips. Intelligent systems help verify customer eligibility, offer a custom product or bundle at a competitive price, speed up and automate claim adjudication, and automate loan origination across all sales and support channels.

The heart of this solution is a modern, AI-driven decision management and rules engine platform that uses the latest AI and analytics techniques and offers sophisticated cloud deployments providing unparalleled flexibility and speed. The best systems are no longer just for IT: they allow business analysts to view, discover, test and deploy updated business logic in real time.

A modern organization needs the latest decision analytics tools

These tools will allow you to discover new patterns in historical data using machine learning, connect and cross-correlate multiple sources of data, and incorporate predictive models from the company’s data analysts. Updating and deploying new logic is now as easy as publishing a web page and does not require changing the application itself, just the underlying business logic.


Sparkling Logic SMARTS AI Decision Management is the third and newest generation of decision management and rules engine offerings, using cloud, AI, decision analytics, predictive models and machine learning. We currently process millions of rules and billions of records for the most progressive companies. Find out how we succeeded in creating the ultimate sophisticated set of decision management and decision analytics tools that every modern financial institution should have in their competitive tool chest.

Analytics-Driven Automated Decisions

SMARTS Decision Manager White Paper

Automated decisions are at the heart of your processes and systems. These operational decisions provide the foundation for your digital business initiatives and are critical to the success and profitability of your business. Learn how SMARTS Decision Manager lets you define agile, targeted, and optimal decisions and deploy them to highly available, efficient, and scalable decision services.
Get the Whitepaper

How Predictive Models Improve Automated Decisions


Agility is a key focus and benefit in the discipline of decision management. Agility, in the decision management context, means being able to rapidly adjust and respond to business and market-driven changes. Decision management technologies allow you to separate the business logic from your systems and applications. Business analysts then manage and make changes to the business logic in a separate environment. And they can deploy their changes with minimal IT involvement and without a full software development cycle. With decision management, changes can be implemented in a fraction of the time required to change traditional applications. This ability to address frequently changing and new requirements that impact key automated decisions makes your business more agile.

Being able to rapidly make and deploy changes is important. But how do you know what changes to make? Some changes, like those defined by regulations and contracts, are straightforward. If you implement the regulations or contract provisions accurately, the automated decision will produce the required results and therefore, make good decisions. However, many decisions don’t have such a direct and obvious solution.

When Agility Isn’t Enough

Frequently decisions depend on customer behavior, market dynamics, environmental influences or other external factors. As a result, these decisions involve some degree of uncertainty. For example, in a credit risk decision, you’re typically determining whether or not to approve a credit application and where to set the credit limit and interest rate. How do organizations determine the best decisions to help them gain customers while minimizing risk? The same applies to marketing decisions like making upsell and cross-sell offers. Which potential offer would the customer most likely accept?

Predictive Models Provide Data Insight

This is where predictive models help. Predictive models combine vast amounts of data and sophisticated analytic techniques to make predictions about the future. They help us reduce uncertainty and make better decisions. They do this by identifying patterns in historical data that lead to specific outcomes and detecting those same patterns in future transactions and customer interactions.

Predictive models guide many decisions that impact our daily lives. Your credit card issuer has likely contacted you on one or more occasions asking you to confirm recent transactions that were outside of your normal spending patterns. When you shop online, retailers suggest products you might want to purchase based on your past purchases or the items in your shopping cart. And you probably notice familiar ads displayed on websites you visit. These ads are directly related to sites you previously visited to encourage you to return and complete your purchase. All of these are based on predictive models that are used in the context of specific decisions.

How Predictive Models Are Built

Predictive modeling involves creating a model that mathematically represents the underlying associations between attributes in historical data. The attributes selected are those that influence results and can be used to create a prediction. For example, to predict the likelihood of a future sale, useful predictors might be the customer’s age, location, gender, and purchase history. Or to predict customer churn we might consider customer behavior data such as the number of complaints in the last 6 months, the number of support tickets over the last month, and the number of months the person has been a customer, as well as demographic data such as the customer’s age, location, and gender.

Assuming we have a sufficient amount of historical data available that includes the actual results (whether or not a customer actually purchased in the first example, or churned in the second) we can use this data to create a predictive model that maps the input data elements (predictors) to the output data element (target) to make a prediction about our future customers.

Typically, data scientists build predictive models through an iterative process that involves:

  • Collecting and preparing the data (and addressing data quality issues)
  • Exploring and analyzing the data to detect anomalies and outliers and identify meaningful trends and patterns
  • Building the model using machine learning algorithms and statistical techniques like regression analysis
  • Testing and validating the model to determine its accuracy
Data Science Process (by Farcaster at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=40129394)
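The four steps can be illustrated end to end on a toy churn problem. Everything below is hypothetical: the data is synthetic, the predictors (complaints and tenure) are simplified from the example above, and a real project would use a library such as scikit-learn rather than hand-rolled gradient descent.

```python
import math
import random

random.seed(0)

# 1. Collect and prepare the data (here: generate and scale synthetic customers).
def make_customer():
    complaints = random.randint(0, 5)   # complaints in the last 6 months
    tenure = random.randint(1, 60)      # months as a customer
    churned = 1 if complaints * 10 > tenure else 0   # synthetic ground truth
    return [complaints / 5, tenure / 60], churned    # scaled predictors

data = [make_customer() for _ in range(400)]

# 2. Explore / split: hold out a portion of the data for validation.
train, test = data[:300], data[300:]

# 3. Build the model: logistic regression fit by gradient descent.
w, b = [0.0, 0.0], 0.0

def predict(x):
    z = max(-30.0, min(30.0, w[0] * x[0] + w[1] * x[1] + b))
    return 1 / (1 + math.exp(-z))

for _ in range(2000):
    for x, y in train:
        grad = predict(x) - y
        w[0] -= 0.05 * grad * x[0]
        w[1] -= 0.05 * grad * x[1]
        b -= 0.05 * grad

# 4. Test and validate: measure accuracy on the held-out set.
accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in test) / len(test)
print(f"validation accuracy: {accuracy:.2f}")
```

In practice each step is far richer (data cleaning, feature selection, cross-validation), but the mapping from predictors to a target, learned from historical outcomes, is the same.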

Once the model is built and validated it can be deployed and used in real-time to inform automated decisions.

Deploying Predictive Models in Automated Decisions

While predictive models can give us sound predictions and scores, we still need to decide how to act on them. Modern decision management platforms like SMARTS Decision Manager let you combine predictive models that inform your decisions with business rules that translate those decisions into concrete actions. SMARTS includes built-in predictive analytics capabilities and also lets you use models built using other analytics tools such as SAS, SPSS and R.

The use of predictive models is rapidly expanding and changing the way we do business. But it’s important to understand that predictions aren’t decisions! Real world business decisions often include more than one predictive model. For example, a fraud decision might include a predictive model that determines the likelihood that a transaction originated from an account that was taken over. It might also include a model that determines the likelihood that a transaction went into an account that was compromised. A loan origination decision will include credit scoring models and fraud scoring models. It may also include other models to predict the likelihood the customer will pay back early, or the likelihood they will purchase additional products and services (up-sell). Business rules are used to leverage the scores from these models in a decision that seeks to maximize return while minimizing risk.
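A minimal sketch of that last point: business rules combine several model scores into one concrete action. The score names, thresholds and actions below are invented for illustration; they are not SMARTS rule syntax.

```python
# Hypothetical sketch: business rules turning predictive-model scores into an action.
# Score names and thresholds are made up for illustration.

def loan_decision(credit_score, fraud_score, upsell_score):
    """Combine predictions into a concrete action via simple rules."""
    if fraud_score > 0.8:
        return "refer to fraud review"   # a prediction alone is not a decision
    if credit_score < 580:
        return "decline"                 # minimize risk
    if credit_score >= 720 and upsell_score > 0.6:
        return "approve with premium-card offer"   # maximize return
    return "approve"

print(loan_decision(credit_score=750, fraud_score=0.1, upsell_score=0.7))
# approve with premium-card offer
print(loan_decision(credit_score=640, fraud_score=0.9, upsell_score=0.2))
# refer to fraud review
```

The rules, not the models, encode the business trade-off: the same three scores could drive a very different action policy without retraining anything.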

In our next post, we’ll look at how modern decision management platforms, like SMARTS, help you evaluate alternative decision strategies. We’ll explore how you can use decision simulation to find the best course of action.

Evolution of the Rete Algorithm


The Rete Algorithm Demystified blog series attracted a huge crowd.  I want to thank you for your readership!  By popular demand, let me continue the series and add a few words on the latest and greatest, Rete-NT.

What’s new?

Well, this is where I can’t say much without violating Charles’s trade secrets…  Sorry!

So what can I share?

For the evolution of the Rete Algorithm, Charles has focused primarily on runtime performance, looking for ways to accelerate rule evaluations and reduce memory usage.  Mission accomplished.

Faster: With Rete III, the speed increase came from the ability to efficiently handle a larger number of objects per transaction.  With Rete-NT, the speed increase comes from optimizations on complex joins in the Rete network.  As described in part 2, the discrimination tree performs a product of object lists: the list of all drivers satisfying some requirements is combined with the list of all vehicles matching other requirements, for example, producing a Cartesian product.  The more patterns you add, the more joins are added.  This has been referred to as the multi-pattern problem.  The latest algorithm keeps the combinatorial explosion under control, in a dramatically different way than previously attempted, achieving unprecedented performance.  It shines when business rules involve complex conditions, which tends to be the case in real-life applications.
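To see why multi-pattern joins can explode, consider the product of matching object lists described above. The counts below are illustrative, not taken from any benchmark:

```python
from itertools import product

# Illustration of the multi-pattern problem: each extra pattern
# multiplies the candidate combinations the network must consider.
drivers = [f"driver{i}" for i in range(100)]    # drivers satisfying x, y, z
vehicles = [f"vehicle{i}" for i in range(100)]  # vehicles matching other conditions
policies = [f"policy{i}" for i in range(100)]   # a third pattern

two_patterns = list(product(drivers, vehicles))
three_patterns = list(product(drivers, vehicles, policies))

print(len(two_patterns))    # 10000
print(len(three_patterns))  # 1000000: one more pattern, a 100x blowup
```

Each additional pattern multiplies the join size by the number of matching objects, which is exactly the growth Rete-NT is designed to keep under control.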

Slimmer: This is related to the complex-join speed increase.  Less combinatorial explosion means less memory usage.  It is actually a lot more sophisticated than that, but I am unfortunately bound to secrecy…  The most important thing to remember is that memory usage goes down quite a bit, a concern that software architects can truly appreciate!

James Owen scrutinized the actual performance increase and published the results in InfoWorld last year.  Although the overhead may slightly slow down performance for overly simple tests, the performance gain is huge: an order of magnitude faster than the previous generation.

Does performance matter?

Most rules engines have achieved some excellent levels of runtime performance, so performance for the sake of performance is not an objective in itself.

I am excited about Rete-NT because it improves performance where it is needed.  Previous generations of Rete engines put pressure on rule writers to design rules that avoid multi-patterns as much as possible.  This algorithmic innovation removes a painful hurdle, or at least moves the boundary.  In my career, especially in the past two years at Sparkling Logic, I have come across new use cases that do require more flexibility and expressiveness, and that would be hard to cope with using less efficient algorithms.  One thing we can always count on is complexity increasing…

How does that compare to non-inference engines?

You can over-simplify the inference versus compiled sequential debate by saying that:

  • Rete shines when the number of rules is large and the number of objects in memory is small
  • Non-inference shines when the number of rules is small and the number of objects in memory is large

Rete-NT changes the game a bit by expanding the scope of problems that can be handled effectively.  As a result, non-inference engines dominate a smaller and smaller number of use cases, while Rete keeps its lead on large rulebases.


© 2021 Sparkling Logic. All Rights Reserved.