A compiled Rete-NT inference rules engine is one of the many dedicated execution engines that SMARTS™ Data-Powered Decision Manager provides. In this post, we take a closer look at inference rules engines, Rete-NT, and what sets SMARTS™ apart from the rest.
Inference Rules Engines
Before we talk about inference rules engines, we should talk about the difference between sequential and inference rules execution. Sequential rules execution happens when the order and priority of rules have been specified in advance. SMARTS™ has a dedicated compiled sequential rules engine to execute these types of business rules with high speed and performance.
However, there are cases where the order in which business rules should execute is not so clear. When you’re dealing with complex rules with multiple dependencies, inference rules execution may be a better approach. This is especially true when the objects in memory are small, for example when the rules test only a small number of fields. With an inference rules engine, the engine determines the best order in which to execute rules based on a specified goal.
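To make the contrast concrete, here is a minimal sketch of inference-style (forward-chaining) execution. It is illustrative Python only, not SMARTS™ rule syntax, and the rule names and facts (order_total, customer_tier, and so on) are hypothetical: the author never fixes a firing order; the engine fires whichever rules have satisfied conditions, and one rule firing can enable another.

```python
# Toy illustration (not SMARTS syntax): a naive forward-chaining loop.
# Each rule is a (name, condition, action) triple over a shared set of facts.
# The engine, not the rule author, decides the firing order: any rule whose
# condition is satisfied by the current facts may fire, and firing a rule
# can add facts that enable other rules.

facts = {"order_total": 1200, "customer_tier": "standard"}

rules = [
    ("upgrade_tier",
     lambda f: f["order_total"] > 1000 and f["customer_tier"] == "standard",
     lambda f: f.update(customer_tier="gold")),
    ("apply_discount",
     lambda f: f["customer_tier"] == "gold" and "discount" not in f,
     lambda f: f.update(discount=0.10)),
]

fired = set()
changed = True
while changed:                      # keep firing until no new rule applies
    changed = False
    for name, condition, action in rules:
        if name not in fired and condition(facts):
            action(facts)           # firing may enable other rules
            fired.add(name)
            changed = True

print(facts)  # {'order_total': 1200, 'customer_tier': 'gold', 'discount': 0.1}
```

Note that apply_discount only becomes applicable after upgrade_tier fires; the dependency is resolved by the engine at run time rather than encoded as a fixed sequence.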
Rete-NT Algorithm
So how does the engine determine the best order to execute rules? This is where Rete-NT comes in. Rete-NT is the latest version of the Rete algorithm, a pattern-matching algorithm developed by Dr. Charles L. Forgy in the 1970s.
This algorithm and others like it are how an engine determines which rules to fire, and when. Rete-NT engines are fast while giving business analysts more flexibility in how they write rules; with less efficient algorithms, analysts would have to design their rules in a particular way to achieve high performance.
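As a rough intuition for why this family of algorithms is efficient, the sketch below (plain Python, heavily simplified, not an actual Rete network and not SMARTS™ internals) indexes a few hypothetical rules by the fields they test, so a change to one fact re-evaluates only the rules that reference it rather than re-checking every rule against every fact.

```python
# Greatly simplified sketch of the idea behind Rete-style matching:
# index rules by the fields they test, so that when a fact changes only
# the affected rules are re-evaluated.

from collections import defaultdict

# Hypothetical rules: (name, fields tested, condition)
rules = [
    ("high_value", {"order_total"},   lambda f: f["order_total"] > 1000),
    ("gold_perk",  {"customer_tier"}, lambda f: f["customer_tier"] == "gold"),
    ("rush_fee",   {"shipping", "order_total"},
     lambda f: f["shipping"] == "rush" and f["order_total"] < 50),
]

# Index from field name -> rules that test that field.
index = defaultdict(list)
for rule in rules:
    for field in rule[1]:
        index[field].append(rule)

facts = {"order_total": 1200, "customer_tier": "standard", "shipping": "ground"}
agenda = set()  # candidate rules to fire, in an order the engine chooses

def update_fact(field, value):
    """Propagate one change: only rules touching `field` are re-checked."""
    facts[field] = value
    for name, _, condition in index[field]:
        if condition(facts):
            agenda.add(name)
        else:
            agenda.discard(name)

update_fact("customer_tier", "gold")  # re-evaluates only "gold_perk"
print(agenda)                         # {'gold_perk'}
```

The real Rete network goes much further, caching partial matches across rule conditions; the blog series below covers how it actually works.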
You can learn more about the Rete algorithm in a 3-part blog series: Rete Algorithm Demystified.
- Part 1: Origin of the Rete Algorithm
- Part 2: How the Rete Algorithm Works
- Part 3: Evolution of the Rete Algorithm (more on Rete-NT)
Dedicated Execution Engine
As with our dedicated compiled sequential rules engine, SMARTS™ has a dedicated compiled Rete-NT inference rules engine to support high speed and performance that scales. Dedicated execution engines let you optimize performance for each decision logic representation and, ultimately, for your overall decision. When inference is required and/or the rule set is highly complex (e.g., a large number of rules over a small number of fields), the rules will execute on our Rete-NT engine. You can learn more about our dedicated execution engines and other ways SMARTS™ supports scalability in our post on Decision Engines.