
RulesFest 2011 – Nate Derbinsky: Effective Scaling of Long-term Memory for Reactive Rule-based Agents


Written by: Carole-Ann Berlioz. Published on: Oct 25, 2011.

Nate Derbinsky is a post-doc researcher working with John Laird in the SOAR group.

Perfect talk for the end of the day…  Playing with robots…  Our little guy reaches a cliff and encounters a stop sign in scenario 1, and an explosive device in scenario 2.

Agents need effective access to diverse information: factual and experiential.  In his research, Nate focuses on real-time reactivity, defined as less than 50 msec.

With a rule-based approach, too many facts get loaded into working memory…  It just takes too long.

Search approach: even Google takes too long and fails to get the right reference for “the last time I encountered…”

I enjoyed this presentation, if only for the Memento reference.  It is indeed the first reference that comes to mind to illustrate episodic memory.  Nate also covers semantic memory in his talk.

For reference, here is a link to the SOAR presentation from last year.

The architecture is focused on the working memory: a piece of information in working memory is identified as a Cue.  The Long-Term Memory matches the Cue and ‘remembers’ by leveraging both semantic and episodic memory.
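To make the cue-matching idea concrete, here is a minimal sketch in Python. It is an illustrative assumption, not Soar's actual API: a cue is modeled as a partial attribute/value description placed in working memory, and long-term memory returns the stored elements consistent with it.

```python
# Hypothetical sketch of cue-based retrieval. A cue is a partial
# description; long-term memory returns every stored element that
# agrees with the cue on all of its attributes.

def matches(cue, element):
    """An element matches a cue if it agrees on every cue attribute."""
    return all(element.get(attr) == value for attr, value in cue.items())

def retrieve(cue, long_term_memory):
    """Return all stored elements that satisfy the cue."""
    return [e for e in long_term_memory if matches(cue, e)]

long_term_memory = [
    {"type": "sign", "shape": "octagon", "meaning": "stop"},
    {"type": "sign", "shape": "triangle", "meaning": "yield"},
]
cue = {"type": "sign", "shape": "octagon"}
print(retrieve(cue, long_term_memory))
# → [{'type': 'sign', 'shape': 'octagon', 'meaning': 'stop'}]
```

The partial-description aspect is the key point: the agent does not need to know everything about a stop sign to remember what one means.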

The semantic memory is deliberate.  Known information is stored in a tree structure optimized for heuristic search.  The suggested implementation has been tested on large samples, demonstrating little sensitivity to the number of objects and some correlation with the number of Cue constraints.

Now in practice, when the agent comes across the new term “RUN”, semantic memory can return many contexts for that term.  Disambiguation is necessary to refine the actual definition in the current context.  There are different options: the most recent, the most frequent, etc.
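Those disambiguation biases can be sketched in a few lines. The field names (`last_used`, `use_count`) and the strategy labels are illustrative assumptions, not Soar's terminology; the point is simply that the same candidate set yields different answers under different biases.

```python
# Hypothetical sketch of disambiguation over multiple semantic-memory
# matches: pick one candidate sense by a bias such as recency or
# frequency of use.

def disambiguate(candidates, strategy="recency"):
    if strategy == "recency":
        return max(candidates, key=lambda c: c["last_used"])
    if strategy == "frequency":
        return max(candidates, key=lambda c: c["use_count"])
    raise ValueError(f"unknown strategy: {strategy}")

run_senses = [
    {"sense": "to jog", "last_used": 95, "use_count": 40},
    {"sense": "execute a program", "last_used": 120, "use_count": 12},
]
print(disambiguate(run_senses, "recency")["sense"])    # → execute a program
print(disambiguate(run_senses, "frequency")["sense"])  # → to jog
```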

The episodic memory takes a sequence of snapshots and stores them.  The idea is to look for the most recent match.  Nate describes in detail the storage and search algorithm, for which the slides are quite detailed AND animated for easy understanding.  I recommend taking a look at them.  A picture is definitely worth a thousand words!
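For intuition, the "most recent match" idea can be sketched like this. This is a deliberately naive assumption on my part: a plain backward scan over snapshots stands in for the optimized storage and search structures Nate actually describes in the slides.

```python
# Hypothetical sketch of episodic retrieval: store a chronological
# sequence of working-memory snapshots, then scan backwards in time
# for the MOST RECENT episode that contains all the cued features.

episodes = []  # chronological list of snapshots

def record(snapshot):
    episodes.append(frozenset(snapshot))

def retrieve_recent(cue):
    """Return the latest episode containing every cued feature."""
    wanted = frozenset(cue)
    for snapshot in reversed(episodes):
        if wanted <= snapshot:  # cue features are a subset of the episode
            return set(snapshot)
    return None

record({"at-cliff", "see-stop-sign"})
record({"on-road", "see-explosive"})
record({"at-cliff", "clear"})
print(sorted(retrieve_recent({"at-cliff"})))  # most recent cliff episode
# → ['at-cliff', 'clear']
```

A real implementation would index features rather than scan every episode, which is exactly the part where the animated slides earn their keep.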

Nate and his team tested episodic memory in games, robotics, PDDL planning, and word sense discrimination.

This work furthers the research on long-term memory usage for agents, opening doors for better learning capabilities.

Cool references:

  • Memento
  • Star Trek
