Rules Fest Live: Luke Voss / User-Mediated Rule Engine Execution

on October 11, 2010

Luke introduces a new topic at the show:

Sometimes the system isn’t smart enough

This is about Novice Systems, less impressive than Expert Systems.  The point Luke is making is that sometimes the encoded rules may not be complete enough for the system to make decisions on its own, and a human can intervene for conflict resolution.

What happens when we insert a person into the Pattern Match / Agenda Resolution / Fire cycle?

1. Pattern Matching

People are natural pattern matchers.

The question Luke is asking here is whether it is worth encoding, in a group of business rules, how to select the data sets that seem the most interesting to pursue.  Heuristics can be quite subjective, so having a human make that selection manually at the right time can make sense.

This is actually something we see quite a bit in all applications that use the notion of “event-rules” (triggers that specify how to find the value of an attribute).  Note that event-rules offer the option to reach out to a human as well as to other “non-human systems”.
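As a rough sketch of that idea (my own illustration, not any particular BRMS API; all names here are hypothetical): when an attribute has no value, a registered trigger decides how to obtain it, and that trigger can wrap a human prompt just as easily as a call to another system.

```python
# Hypothetical sketch of an "event-rule" trigger: when an attribute is
# missing from a record, a registered source function supplies it.

def get_value(record, attribute, sources):
    """Return the attribute's value, firing its trigger if it is missing."""
    if attribute not in record:
        # The trigger: could wrap input(), a web form, or a service call.
        record[attribute] = sources[attribute]()
    return record[attribute]

# One attribute backed by a "non-human system", one by a human prompt.
sources = {
    "credit_score": lambda: 680,  # e.g. a bureau lookup (illustrative value)
    "override_ok": lambda: input("Approve override? ") == "y",  # human
}
```

The engine does not care which kind of source answered; the human is just another resolver behind the same trigger interface.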

We used to do a lot of that a couple of decades ago, when expert systems were big.  I was personally involved in a missile diagnostic application where I actually encoded most of the expertise as “backward-chaining” rules including the chart interpretation — although I would have been happy back then to let a human do it instead of me scratching my head to get it done…  The human interactions in my application were limited to entering test results and looking at the Engineering Schematics of the missile to assess whether the suspect list is short enough to “manually” end the diagnostic.

Memories, memories…

Other memories come from the fact-assertion terminology.  It is so reminiscent of those times.  Newer BRMS focus on transactions with input / output…

2. Agenda Resolution

Asking a user to select the rule to execute at agenda-resolution time may or may not be practical, from a performance perspective.  I have never seen that done.  It looks like Luke has not either.
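To make the idea concrete, here is a minimal sketch (my own illustration, not Luke’s design) of a match-resolve-act loop where conflict resolution is a pluggable callback, so a strategy function could be swapped for a prompt that lets a user pick which activation fires.

```python
# Minimal match-resolve-act loop with a pluggable conflict-resolution step.
# Replacing `pick_first` with a function that prompts the user would give
# the user-mediated agenda resolution discussed above.
# The rule and fact shapes are illustrative assumptions.

def match(rules, facts):
    """Build the agenda: every (rule, fact) pair whose condition holds."""
    return [(rule, fact) for rule in rules for fact in facts if rule["when"](fact)]

def pick_first(agenda):
    """Default strategy; a user-mediated engine would prompt here instead."""
    return agenda[0]

def run(rules, facts, resolve=pick_first):
    """Fire rules until the agenda is empty; each rule fires at most once."""
    fired = []
    while True:
        agenda = [a for a in match(rules, facts) if a[0]["name"] not in fired]
        if not agenda:
            return fired
        rule, fact = resolve(agenda)   # conflict resolution (human or not)
        fired.append(rule["name"])
        rule["then"](fact)             # act: may update working memory
```

The performance concern is visible in the structure: the loop blocks inside `resolve` on every cycle, so a human in that seat turns every inference step into an interactive wait.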

3. Execution

A user could help jump to a conclusion in the middle of rule execution.  This seems close to the role my missile experts played way back when…

By the way, the technology I used then to build my application was a traditional backward-chaining expert system, namely Nexpert from Neuron Data.  Very cool technology, especially once they made it available with hooks for the interface (allowing me to draw those schematics).

So missile and spacecraft techniques eventually meet?  Of course, they share the $$ characteristic (high cost) and the desire for minimal actual interventions.

I wonder if Luke’s novice system is fundamentally different from an expert system…

Luke stresses some good points here.  Latency is an issue, of course, in some systems though not all.  Luke argues that users could introduce bad data; true, but they might also introduce good knowledge.  Lack of context can mess up the whole thing, though, as experts could be wrongly biased if they do not clearly understand the scope of the questions they have to answer (biased by past questions, for example).  Humans are “human” after all.

What-if scenario simulation is obviously at least as valuable here as it is in traditional BRMS projects, if not more so.

Good presentation!
