Richard Griffiths - Lecture Notes


"The ultimate goal of work in cognitive architecture is to provide for a system capable of general intelligent behaviour.  That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them.  In this article we present SOAR, an implemented proposal for such an architecture. ..."
(Laird et al., 1987)

What is Soar? (From the Soar FAQ)

Soar means different things to different people, but it can basically be considered in three different ways:
  1. A theory of cognition. As such it provides the principles behind the implemented Soar system.
  2. A set of principles and constraints on (cognitive) processing. Thus, it provides a (cognitive) architectural framework, within which you can construct cognitive models.  In this view it can be considered as an integrated architecture for knowledge-based problem solving, learning and interacting with external environments.
  3. An AI programming language.

What does Soar stand for? (From the Soar FAQ)

Historically Soar stood for State, Operator And Result because all problem solving in Soar is regarded as a search through a problem space in which you apply an operator to a state to get a result. Over time, the community no longer regarded Soar as an acronym: this is why it is no longer written in upper case.

Who uses Soar for what? (From the Soar FAQ)

Carnegie Mellon University

There are two basic strands of research.
The NL-Soar project explores a range of issues in natural language processing within a unified framework. Particular projects in the recent past and present include real-time comprehension and generation, learning discourse operators, and models of second language acquisition and simultaneous translation. Jill Fain Lehman.
Development of models for quantitatively predicting human performance, including GOMS. More complex, forward-looking and sophisticated models are built using Soar.  Bonnie John.

Information Sciences Institute, University of Southern California

Soar projects cover five main areas of research: development of automated agents for simulated environments (in collaboration with CMU, and UMich); learning (including explanation-based learning); planning; implementation technology (e.g., production system match algorithms); and virtual environments for training.  Paul Rosenbloom.

University of Michigan

The Soar work at UMich has four basic research thrusts:
Learning from external environments including learning to recover from incorrect domain knowledge, learning from experience in continuous domains, compilation of hierarchical execution knowledge, and constructive induction to improve problem solving and planning;
Cognitive modelling of psychological data involving learning to perform multiple tasks and learning to recall declarative knowledge;
Complex, knowledge-rich, real time control of autonomous agents within the context of tactical air engagements (the TacAir-Soar project);
Basic architectural research in support of the above research topics.
John Laird.

The Ohio State University

There are two basic strands of Soar research.
NL-Soar work at OSU focuses on modelling real-time human sentence processing, research on word sense disambiguation, and experimental tests of NL-Soar as a psycholinguistic theory. Other cognitive work in Soar includes modelling learning and performance in human-device interaction.
The other work involves looking at the use of cognitive models of complex problem solving to guide the development of decision support systems and effective training techniques. Specific projects include developing a hybrid learning model of tactical decision making (using Soar and ECHO).
Todd Johnson.

University of Nottingham

The general area of research involves using Soar models as a way to test theories of learning, and improving human-computer interaction. Other projects include the development of the Psychological Soar Tutorial and the Soar FAQ! Frank Ritter.

University of Hertfordshire

Soar research includes modelling aspects of human-computer interaction, particularly on the use of eye movements during the exploratory search of menus. Richard Young.

ExpLore Reasoning Systems, Inc.

As well as its academic usage, Soar is also being used by ExpLore Reasoning Systems, Inc. in Virginia in the USA. A commercial version of Soar, called KB Agent, has been developed as a tool for modelling and implementing business expertise.

Architectural Requirements

    "To realize a task as search in a problem space requires a fixed set of task-implementation functions, involving the retrieval or generation of:
    1. problem spaces,
    2. problem space operators,
    3. an initial state representing the current situation, and
    4. new states that result from applying the operators to existing states.
    The control of search requires a fixed set of search control functions, involving the selection of:
    1. a problem space,
    2. a state from those directly available, and
    3. an operator to apply to the state.
    Together, the task-implementation and search-control functions are sufficient for problem space search to occur.  The quality and efficiency of the problem solving will depend on the nature of the selected functions."
    (Laird et al., 1987, p. 11.)

The Problem Space view of Soar


All activity in Soar is formulated as applying operators to states within a problem space to achieve a goal.
  • Goal: Some desired situation.
  • State:  Data structures that define possible stages of progress in the problem.
  • Operators:  Cause the transformation of a state via some action.  The state transformation is persistent.
  • Problem space:  The collection of states and operators that are available for achieving a goal.
  • Problem solving:  The process of moving from a given initial state in the problem space through intermediate states generated by operators, reaching a desired state, and thereby attaining the goal.  Directly available knowledge may be incomplete or inconsistent, leading to an impasse.
  • Impasse:  Where the system is unable to make progress in the current problem given its available knowledge.  This in turn becomes a problem to be solved—a sub-goal.
  • The same process may be applied to reasoning about control decisions—meta-goals. "... impasse-driven goal generation is sufficient for all goals and that no other goal generation and selection mechanisms are required." (Laird et al., 1987)

    The resolution of an impasse provides an opportunity to learn.  The processing of the subgoal can be captured in an efficient form, so that it can be applied directly on a subsequent occasion without further problem solving.
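    The problem-space formulation above can be sketched in a few lines of Python: states, operators that transform states, and a search from an initial state through intermediate states to a goal state. The toy domain (reaching a target number with "increment" and "double" operators) is my own illustration, not part of Soar; Soar itself is rule-based, not a hand-coded search.

```python
from collections import deque

# Toy problem space (hypothetical domain, for illustration only):
# states are integers; operators transform a state into a new state.
OPERATORS = {
    "increment": lambda s: s + 1,
    "double":    lambda s: s * 2,
}

def solve(initial, goal_test, max_depth=10):
    """Breadth-first search through the problem space: move from the
    initial state through intermediate states generated by operators
    until a desired (goal) state is reached."""
    frontier = deque([(initial, [])])   # (state, operator path so far)
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path                  # the operator sequence is the solution
        if len(path) >= max_depth:
            continue
        for name, apply_op in OPERATORS.items():
            new_state = apply_op(state)  # the state transformation is persistent
            if new_state not in seen:
                seen.add(new_state)
                frontier.append((new_state, path + [name]))
    return None                          # no path found: an impasse, in Soar terms

path = solve(1, lambda s: s == 10)
```

    Replaying the returned operator sequence from the initial state reproduces the goal state, which is exactly the "problem solving" definition above.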

    The Problem Space Computational Model functions

    Goals
    Proposing, Comparing, Selecting
    Refining (information available about the current goal)

    Problem spaces
    Proposing, Comparing, Selecting
    Refining (information available about the current problem space)

    States
    Proposing, Comparing, Selecting (initial state)
    Refining (information available about the current state)

    Operators
    Proposing, Comparing, Selecting
    Refining (information available about the current operator)
    Applying an operator to a state

    No function for I/O or learning—they just happen!
    Goal proposal, comparison, selection and termination are all performed by the architecture and are not subject to task-dependent knowledge.

    Encoding a task

    The equivalent of programming in Soar.

    The minimum set of functions must include:

  • Propose the task (in the form of an operator)
  • Terminate the task when it has been completed (terminate the operator that represents the task)
  • Propose a problem space for the task
  • Propose an initial state for the task
  • Propose operators to transform a state in the problem space
  • Apply an operator to transform one state to another in the problem space.
    Additional search-control knowledge (improves efficiency) might include:
  • Compare the desirability of the task to other tasks.
  • Compare candidate problem spaces for the task.
  • Compare candidate operators for different states of the task.
  • Implementation and control for subtasks that arise during selection.
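    The minimum set of functions above can be pictured as callbacks that a task supplies to the architecture. The sketch below is an illustrative Python analogue with names of my own choosing, not actual Soar syntax: in Soar each function would be one or more production rules.

```python
# Illustrative analogue of encoding a task -- in real Soar each entry
# below would be one or more productions, not Python callbacks.
# Toy task (my own): count a value up from 0 to 3.
task = {
    # Propose the task (in the form of an operator) and an initial state.
    "propose_task":  lambda: "count-to-3",
    "initial_state": lambda: {"count": 0},
    # Propose operators that can transform a state in the problem space.
    "propose_operators": lambda state: ["add-one"] if state["count"] < 3 else [],
    # Apply an operator to transform one state to another.
    "apply": lambda state, op: {"count": state["count"] + 1} if op == "add-one" else state,
    # Terminate the task when it has been completed.
    "terminated": lambda state: state["count"] == 3,
}

def run(task):
    """Minimal 'architecture' loop driving the task encoding."""
    state = task["initial_state"]()
    trace = [task["propose_task"]()]
    while not task["terminated"](state):
        ops = task["propose_operators"](state)
        if not ops:
            break                      # no operator proposed: an impasse in Soar
        state = task["apply"](state, ops[0])
        trace.append(ops[0])
    return state, trace

final, trace = run(task)
```

    Note that the task supplies only proposal, application and termination knowledge; the control loop itself belongs to the "architecture", mirroring the division of labour described above.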

The Symbol Level view of Soar

    Design decisions

    Seven major design decisions have been made in the implementation of the Symbol-Level Computational Model (SLCM).
  • Knowledge is stored in a permanent recognition (production) memory and a temporary working memory.
  • Permanent memory only proposes changes to working memory through preferences, and does not actually deliver such changes.  These are held in preference memory.
  • All goals, problem spaces, states and operators are represented by symbols in working memory once they have been proposed.
  • For each goal there is at most a single current problem space, state and operator, i.e. no parallelism at the PSCM level.
  • For each goal there is only one state available in working memory.
  • The selections of the current problem space, state and operators are explicitly represented by a goal context in working memory.  Thus they are available as retrieval cues.
  • There is no a priori control scheme that restricts the order in which PSCM functions are performed.
    Architecture diagram

    A schematic diagram of the Soar architecture.


    Working memory
    Contains the data representing current progress on all goals.
    Data is organized into objects, consisting of a symbolic identifier and augmentations—attribute-value pairs.
    Production (or Recognition) memory
    Contains production rules.  Conditions are matched against working memory (and can contain variables)
    The actions create preferences for changes to working memory.
    All matched productions are fired in parallel.
    Five varieties of preferences:
  • feasibility:  acceptable, reject
  • exclusivity:  parallel
  • desirability:  best, better, indifferent, worse, worst
  • necessity:  require, prohibit
  • termination:  reconsider
  • Preferences cannot be used as cues for the productions in recognition memory.
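    The separation between matching, preference creation and decision can be sketched as follows. This is my own simplification (the domain and names are invented): conditions match working memory triples, all matched productions fire in parallel producing preferences, and a separate decision step turns preferences into an actual choice.

```python
# Working memory: objects as (identifier, attribute, value) triples.
# Hypothetical blocks-world fragment, for illustration only.
wm = {("s1", "block", "a"), ("s1", "block", "b"), ("s1", "clear", "a")}

def productions(wm):
    """All matched productions 'fire' in parallel: each yields a
    preference (candidate, preference-type) rather than changing
    working memory directly."""
    prefs = []
    for (ident, attr, val) in wm:
        if attr == "block":
            prefs.append((f"pick-up {val}", "acceptable"))
        if attr == "clear":
            prefs.append((f"pick-up {val}", "best"))
    return prefs

def decide(prefs):
    """Decision procedure: among acceptable, non-rejected candidates,
    prefer one marked 'best'; otherwise any remaining candidate."""
    acceptable = {c for c, t in prefs if t == "acceptable"}
    acceptable -= {c for c, t in prefs if t == "reject"}
    best = [c for c, t in prefs if t == "best" and c in acceptable]
    if best:
        return best[0]
    # several equally acceptable candidates would be a tie impasse in Soar
    return sorted(acceptable)[0] if acceptable else None

choice = decide(productions(wm))
```

    Because preferences are held apart from working memory, they steer the decision without themselves becoming retrieval cues for further productions, matching the last bullet above.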

    Problem solving process

    A schematic illustration of the Soar processing sequence.


    During the problem solving process Soar can reach an 'impasse':

    i.e., a state in which existing preferences do not specify the next change to the goal context.


  • OPERATOR TIE:  Two operators are proposed as 'acceptable' but neither is better.  Problem: choose between operators.
  • STATE NO-CHANGE:  No operator is proposed in the current state.  Problem: find an applicable operator.
  • OPERATOR NO-CHANGE:  An operator is selected but no state changes are proposed.  Problem: implement the operator.
    When an impasse occurs:
    The decision procedure creates a sub-goal to resolve the impasse.
    The sub-goal contains a description of the impasse.
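    The classification of the three impasses above can be sketched as a single decision, where the result is a sub-goal describing the impasse. This is my own simplification; the function and field names are invented for illustration.

```python
# Sketch (names are mine) of how the decision procedure classifies the
# three impasse types listed above and spawns a describing sub-goal.

def detect_impasse(proposed_ops, selected_op, state_changes):
    """Return a sub-goal describing the impasse, or None when the
    existing preferences determine the next change unambiguously."""
    if selected_op is None:
        if not proposed_ops:
            # STATE NO-CHANGE: nothing proposed in the current state.
            return {"impasse": "state no-change",
                    "problem": "find an applicable operator"}
        if len(proposed_ops) > 1:
            # OPERATOR TIE: several acceptable, none better than the rest.
            return {"impasse": "operator tie", "candidates": proposed_ops,
                    "problem": "choose between operators"}
        return None  # exactly one candidate: select it, no impasse
    if not state_changes:
        # OPERATOR NO-CHANGE: operator selected but nothing implements it.
        return {"impasse": "operator no-change", "operator": selected_op,
                "problem": "implement the operator"}
    return None

subgoal = detect_impasse(["op-a", "op-b"], None, [])
```

    The returned description is what problem solving in the sub-goal then works on, exactly as the two lines above state.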


    Each time a sub-goal terminates (i.e., an impasse is resolved) Soar creates a new rule (a 'chunk').
  • Chunk "summarizes the processing in the sub-goal."
  • Chunk's conditions:  Those Working Memory Elements (WMEs) examined by rules fired in the sub-goal that led to its results.  Only WMEs existing before the sub-goal.
  • Chunk's actions/results:  Those WMEs created by rules fired in the sub-goal that still exist after the sub-goal terminates (i.e., the results of the sub-goal that resolved the impasse).
  • Chunk should prevent similar impasses in future.
  • Small amount of "generalization."  Object identifiers replaced by variables.
  • Chunking leads to increasingly "routine" behaviour (fewer sub-goals).
  • Can be switched off!
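    Chunk construction can be sketched directly from the bullets above: conditions are the pre-sub-goal WMEs that were examined, actions are the sub-goal's results, and identifiers are variablized. This is my own simplification of the mechanism, with invented data.

```python
# Sketch (my simplification) of building a chunk from a resolved sub-goal.

def build_chunk(examined, results, pre_subgoal_wm):
    """examined: WMEs tested by rules fired in the sub-goal;
    results: WMEs created in the sub-goal that survive its termination;
    pre_subgoal_wm: working memory as it stood before the sub-goal."""
    # Conditions: only WMEs that already existed before the sub-goal.
    conditions = [w for w in examined if w in pre_subgoal_wm]
    # Small amount of generalization: object identifiers -> variables.
    ids = {w[0] for w in conditions} | {w[0] for w in results}
    var = {ident: f"<{ident}>" for ident in ids}
    gen = lambda w: (var.get(w[0], w[0]), w[1], w[2])
    return {"conditions": [gen(w) for w in conditions],
            "actions":    [gen(w) for w in results]}

pre = {("s1", "colour", "red"), ("s1", "size", "big")}
chunk = build_chunk(
    examined=[("s1", "colour", "red"), ("g2", "note", "internal")],
    results=[("s1", "category", "stop-sign")],
    pre_subgoal_wm=pre,
)
```

    WMEs internal to the sub-goal (here the "g2" note) are excluded from the conditions, so the chunk fires directly from pre-impasse working memory next time, preventing the same impasse.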
    Default knowledge

    The set of rules supplied with the system.  Specifies what to do in various types of impasse:


    Selection problem space.
    Evaluate operator by creating sub-goal with original problem space and copy of state, with operator selected.
    Implements operator evaluation by look-ahead.
    User can 'plug in' static evaluation rules.
    Operator sub-goaling.
    Find a state in which operator can apply.
    Detects success and failure.
    Halts system if at top-level.
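    The look-ahead idea behind the selection problem space can be sketched as follows: to break an operator tie, each candidate is tried on a copy of the state in a sub-goal and the outcome is evaluated. The function names and the toy domain are mine, and the evaluation function is exactly the kind of static rule a user could "plug in".

```python
import copy

# Sketch (my own, not default-rule syntax) of operator evaluation by
# look-ahead: try each tied candidate on a *copy* of the state.

def lookahead_choose(state, candidates, apply_op, evaluate):
    """Evaluate each tied operator by applying it to a copy of the
    state, then prefer the operator with the best evaluation."""
    scores = {}
    for op in candidates:
        trial = apply_op(copy.deepcopy(state), op)  # original state untouched
        scores[op] = evaluate(trial)                # 'plugged-in' static evaluation
    return max(candidates, key=lambda op: scores[op])

# Hypothetical domain: move a counter toward 10.
state = {"x": 7}
ops = {"plus-two": 2, "minus-one": -1}
best = lookahead_choose(
    state, list(ops),
    apply_op=lambda s, op: {"x": s["x"] + ops[op]},
    evaluate=lambda s: -abs(10 - s["x"]),
)
```

    Working on a copy of the state is the crucial detail: the evaluation sub-goal must not disturb the original state, which is why the default rules operate on "the original problem space and a copy of the state".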

    Basic Hypotheses Embodied in Soar

    Physical symbol system hypothesis:

    A general intelligence must be realized with a symbol system.

    Goal structure hypothesis:

    Control in a general intelligence is maintained by a symbolic goal system.

    Uniform elementary-representation hypothesis:

    There is a single elementary representation for declarative knowledge.

    Problem space hypothesis:

    Problem spaces are the fundamental organizational units of all goal-directed behaviour.

    Production system hypothesis:

    Production systems are the appropriate organization for encoding all long-term knowledge.

    Universal-subgoaling hypothesis:

    All goals arise dynamically in response to impasses and are generated automatically by the architecture.

    Control-knowledge hypothesis:

    Any decision can be controlled by indefinite amounts of knowledge, both domain dependent and independent.

    Weak-method hypothesis:

    The weak methods form the basic methods of intelligence.

    Weak-method emergence hypothesis:

    The weak methods arise directly from the system responding based on its knowledge of the task.

    Uniform-learning hypothesis:

    Goal-based chunking is the general learning mechanism.

    References and other resources

    Laird, J.E., Newell, A., Rosenbloom, P.S., 1987  "SOAR: An Architecture for General Intelligence". Artificial Intelligence 33 pp. 1-64.

    Ritter, F.E.  Current  "Soar FAQ (Frequently Asked Questions list)" available at: and the "Soar LFAQ (Less Frequently Asked Questions list)" at:

    Young, R.M., Ritter, F., Jones, G.  1998  "Online Psychological Soar Tutorial" available at:


    This page is maintained by Richard Griffiths and does not necessarily reflect the official position of the University of Brighton.