Rules Fest Live: John Laird Keynote – “The Role of Production Rules in a General Cognitive Architecture”
John Laird’s presentation focused on Soar. Soar goes back a long way and has long been a focal activity in AI, proposing a comprehensive and ambitious effort to model human behavior and implement applications. Both Charles Forgy’s Rete and John Laird’s Soar have roots at CMU. John’s first claim to fame is having written the first 1,000+ rule system in ’80 – a game.
Soar is a cognitive architecture for human-level intelligence. It is multi-method and multi-task, and provides an overarching approach that allows rules to evolve in response to input through learning and evolution.
John emphasizes the difference between the goals of cognitive architectures and commercial rule-based systems. While these differences are real and significant, lessons can be derived from Soar’s approach that would benefit commercial systems.
Soar takes a different approach to conflict resolution to avoid the “conditions” engineering that frequently takes place in the execution of production rules: it adds more knowledge through operators. Rules contain knowledge that proposes operators, evaluates operators and applies operators – rules fire in parallel, but the focus is on operators.
This is the key conceptual shift: instead of rules being the focus of decision making, operators become the focus. At the point where a classic engine would select rules to fire, Soar looks at the operators proposed by the rules and applies preferences – themselves modeled as knowledge. This gives the production system more flexibility, and it separates two concerns: identifying what can be done, through independent rules, and selecting which of those possibilities is best. There is no complex mix of concerns in the organization of the rule conditions. The problem is real, and Soar does offer a nice solution – commercial products offer other approaches to address the same fundamental issue.
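To make the shift concrete, here is a minimal Python sketch of the propose/prefer/select cycle on a toy thermostat domain. All of the names and the additive scoring scheme are my own simplification for illustration, not Soar’s actual machinery – the point is only the separation of proposal knowledge from preference knowledge.

```python
# Sketch of Soar-style operator selection (hypothetical names, not Soar's API).
# Rules fire in parallel to PROPOSE operators and to assert PREFERENCES;
# a separate decision step then picks one operator.

def propose_operators(state, proposal_rules):
    """Run all proposal rules; collect the operators they propose."""
    operators = []
    for rule in proposal_rules:
        operators.extend(rule(state))
    return operators

def select_operator(state, operators, preference_rules):
    """Apply preference knowledge (itself expressed as rules) to rank candidates."""
    scores = {op: 0 for op in operators}
    for pref in preference_rules:
        for op, delta in pref(state, operators):
            scores[op] += delta
    return max(scores, key=scores.get) if scores else None

# Toy domain: deciding a thermostat action.
propose = [
    lambda s: ["heat"] if s["temp"] < s["target"] else [],
    lambda s: ["cool"] if s["temp"] > s["target"] else [],
    lambda s: ["idle"],
]
prefer = [
    # Preference knowledge lives apart from proposal knowledge:
    lambda s, ops: [("idle", -1)] if abs(s["temp"] - s["target"]) > 1 else [],
]

state = {"temp": 15, "target": 20}
ops = propose_operators(state, propose)
chosen = select_operator(state, ops, prefer)
print(chosen)  # "heat": proposed, and "idle" was penalized by a preference
```

Note how the proposal rules never mention each other: arbitrating among them is entirely the preference rules’ job, which is the separation of concerns described above.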
John spent some time on problem search and knowledge search. Problem search is taking a step in the world – you are changing the situation – while knowledge search is about finding the operators – no change to the situation. It is critical to go very fast through the knowledge search. Modern technology makes it simpler – but for me the key question is whether it will always be a challenge, given that the amount of data we want to include in the reasoning keeps growing.
One interesting aspect of Soar is its ability to cope with knowledge that fails – or knowledge that is simply incomplete (think of learning systems). It generates substates and uses operators to reason about the impasse – essentially, it applies more rules to manage the cases where the directly involved rules fail to handle the situation.
Through analysis, it can then generate (“chunk”) new rules alongside those directly involved, in order to avoid the same impasses in further interactions.
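The impasse-then-chunk idea can be sketched in a few lines of Python. This is my own simplified model, not Soar’s implementation: when no fast rule matches, we fall into deliberate (expensive) reasoning, and the result is cached as a new rule so the same impasse never recurs.

```python
# Sketch of impasse-driven chunking (hypothetical, heavily simplified).
# "Rules" are a direct state -> answer lookup; a miss is an impasse that
# triggers deliberate substate reasoning, whose result is chunked.

class ChunkingSolver:
    def __init__(self, rules):
        self.rules = dict(rules)  # the fast, directly-matching rules

    def solve(self, state, deliberate):
        if state in self.rules:       # a rule matches: no impasse
            return self.rules[state]
        answer = deliberate(state)    # impasse: reason it out in a substate
        self.rules[state] = answer    # chunk: compile the result into a rule
        return answer

solver = ChunkingSolver({("add", 2, 2): 4})
slow_calls = []

def deliberate(state):
    slow_calls.append(state)          # track each expensive episode
    op, a, b = state
    return a + b if op == "add" else None

solver.solve(("add", 3, 4), deliberate)  # impasse -> substate -> chunk
solver.solve(("add", 3, 4), deliberate)  # the learned chunk fires instead
print(len(slow_calls))  # 1: deliberate reasoning ran only once
```

The second call never reaches the deliberate layer – the essence of “apply more rules to manage the cases where the directly involved rules fail,” then learn your way out of the impasse.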
Soar has grown over time. One of the core extensions has been the introduction of (hierarchical) reinforcement learning as a mechanism to evolve rules. Reinforcement learning is an established mechanism for evolving specific knowledge representations (think Bayesian networks) – Soar applies it to production rules.
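As a rough illustration of what reinforcement learning over rules can look like, here is a toy Q-learning-style update on numeric operator preferences. This is an assumption-laden sketch of the general technique; Soar-RL’s actual mechanics (and its integration with the decision cycle) are more involved.

```python
# Sketch: reinforcement learning over operator preferences (simplified
# Q-learning-style update, NOT Soar-RL's actual algorithm). Numeric
# preferences on (state, operator) pairs drift toward observed reward,
# so the production system's choices evolve with experience.
import random

q = {}        # (state, operator) -> learned numeric preference
ALPHA = 0.5   # learning rate

def choose(state, operators, epsilon=0.0):
    """Pick the operator with the highest learned preference (greedy)."""
    if random.random() < epsilon:
        return random.choice(operators)   # occasional exploration
    return max(operators, key=lambda op: q.get((state, op), 0.0))

def update(state, op, reward):
    """Move the stored preference toward the observed reward."""
    old = q.get((state, op), 0.0)
    q[(state, op)] = old + ALPHA * (reward - old)

# Toy experience: in state "s", operator "b" pays off and "a" does not.
for _ in range(10):
    update("s", "a", 0.0)
    update("s", "b", 1.0)

print(choose("s", ["a", "b"]))  # "b": its learned preference dominates
```

The learned values slot naturally into the preference step of operator selection – the same decision point described earlier, now tuned by experience instead of hand-engineered conditions.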
Another has been the addition of semantic memory, which extends the queryable knowledge accessible to rules and to all of Soar’s processing.
With semantic memory, episodic memory (long-term memory) and spatial memory (essentially a model of the physical world) available in practical implementations, Soar is essentially making more and more knowledge accessible to its processing, which allows for more “intelligent” and “adaptable” production rules.
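To give a feel for “queryable knowledge accessible to rules,” here is a toy semantic memory modeled as a triple store with pattern queries. This is purely my own illustration – Soar’s SMem has its own storage and retrieval interface – but it shows the kind of long-term fact retrieval a rule can draw on beyond working memory.

```python
# Toy semantic memory as a (subject, relation, object) triple store
# (hypothetical illustration, not Soar's SMem interface).

semantic_memory = {
    ("bird", "can", "fly"),
    ("penguin", "isa", "bird"),
    ("penguin", "can", "swim"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the fields that were specified."""
    return [
        t for t in semantic_memory
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]

# A rule mid-firing could retrieve everything known about "penguin":
print(sorted(query(subject="penguin")))
```

The more such stores are wired into the match/decide cycle, the more knowledge each production effectively has in scope – which is the trajectory the paragraph above describes.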
Soar has both C and Java implementations available under BSD license.
Peter switched gears completely after my presentation and tackled deeper Artificial Intelligence topics. Measuring intelligence is a fascinating subject. It reminds me of the challenge of simulating artificial stupidity but that is a different topic!
Contrasting Artificial General Intelligence (AGI) and Narrow AI, Peter discussed the adaptive aspects. This is obviously a topic dear to us. I enjoyed his perspective, especially as he touched on the religious war between the various groups, namely the AI and Neural Net camps.
During our tenure at FICO, Carlos and I experienced this religious war between the Bay Area and San Diego first-hand. In the end, I am pleased that we could make progress on both sides and find a somewhat happy middle ground. The reality is that technologies and techniques can collaborate and work in synergy. It is amazing how narrow-minded technologists can sometimes be.
Peter’s long list of techniques of “Acquiring Knowledge and Skills” is the meat of the presentation. If you are interested in those things like we are, you should bookmark the presentation as soon as it is made available!
Anecdote to remember:
Parallel between Engineering and Evolution: humans have been able to fly for 100 years or so, but birds have flown forever (granted, I reworded slightly here)… Food for thought as we try to engineer a machine that thinks like humans do.
Peter charted the evolution of technology serving call center automation, with increasing levels of “intelligence” (the ability to handle transaction complexity) relative to cost. We enjoyed a recorded call center interaction. Do any of you remember the Chancellor initiative led by Robert Hecht-Nielsen, the Hecht-Nielsen of “HNC”? Star Trek computer interactions may become a reality in the not-so-distant future!
His “imagine” slide definitely got me dreaming of spending time with Data… although he featured Spock!
If you cannot afford it, create general solutions
Don’t be limited by rules versus neural net dichotomy
Utilize vast network of online resources (Hey you can start here)