Basic ABL Concepts
For a given agent in a world, you must specify
- the physical actions the agent can perform
- how the agent senses its environment
- the goals of the agent
- the methods used to pursue those goals
The devil is in the details, but this picture is accurate.
Sensory-Motor Architecture (SMA)
A sensorimotor architecture defines how the agent running in the ABL runtime communicates with its body acting in the world. The ABL-Wargus SMA, for instance, boils down to managing a client-server connection between the agent and the game.
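To make the client-server picture concrete, here is a minimal sketch of that kind of exchange. The command and state strings are hypothetical, and an in-process socket pair stands in for the real network connection between the agent and the game.

```python
import socket

# Illustrative SMA sketch: the agent and the game exchange text messages.
# socket.socketpair() gives us both ends of one connection in one process.
agent_side, game_side = socket.socketpair()

# Agent -> game: a primitive action encoded as a text command (made-up format).
agent_side.sendall(b"attack 12 7\n")

# Game side receives the command and replies with updated sensed state.
command = game_side.recv(1024).decode().strip()
game_side.sendall(b"gold=1200 wood=800\n")

# Agent side receives the reply; this raw data is what sensors would parse.
sensed = agent_side.recv(1024).decode().strip()
print(command)  # attack 12 7
print(sensed)   # gold=1200 wood=800

agent_side.close()
game_side.close()
```

The design point is simply that the SMA is a message boundary: acts flow one way, sense data flows back.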
Primitive actions (a.k.a. physical acts) are the lowest-level actions in an ABL environment; they are the most basic set of actions an agent can take. The primitive actions supported by an environment are defined in a layer of code that interfaces between the ABL code and the code of the world. In Wargus, for instance, the actions are the set of commands that a player can send to the game (build buildings, attack enemy units, etc.).
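A sketch of what such an interface layer might look like, with the class and method names invented for illustration rather than taken from the real ABL-Wargus code:

```python
# Hypothetical action layer: translates an agent's primitive acts into
# command strings the game understands and queues them for sending.
class WargusActionLayer:
    def __init__(self):
        self.sent = []  # commands queued for the game

    def build(self, building, x, y):
        self.sent.append(f"build {building} {x} {y}")

    def attack(self, unit_id, target_id):
        self.sent.append(f"attack {unit_id} {target_id}")

actions = WargusActionLayer()
actions.build("barracks", 10, 4)
actions.attack(3, 17)
print(actions.sent)  # ['build barracks 10 4', 'attack 3 17']
```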
Sensors are the mechanism by which an agent "sees" the world, typically reporting the direct state of the world to the agent. Wargus agents, for example, have access to information about their resources, units, buildings, etc.
Working Memory contains any information the agent needs to keep track of during execution; this represents what's "in the agent's head". This information is organized as a collection of working memory elements (WMEs). WMEs are like instances in an object-oriented language; every WME has a type plus some number of typed fields that can take on values.
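Since WMEs behave like instances in an object-oriented language, they can be modeled directly as classes. The WME names below (GoldWME, UnitWME) are hypothetical examples in the spirit of a Wargus agent, not actual ABL-Wargus types:

```python
from dataclasses import dataclass

# Each WME has a type plus some number of typed fields that take on values.
@dataclass
class GoldWME:
    amount: int

@dataclass
class UnitWME:
    unit_id: int
    kind: str
    hit_points: int

# Working memory is simply the agent's current collection of WMEs.
working_memory = [GoldWME(amount=1200),
                  UnitWME(unit_id=3, kind="peasant", hit_points=30)]
print(len(working_memory))  # 2
```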
Working Memory Elements (WMEs)
Registered WMEs. WMEs are the mechanism by which an agent becomes aware of sensed information. These kinds of WMEs are called registered WMEs because they must be registered with the corresponding sensor so that the ABL runtime knows how to update the agent's working memory. Sensors report information about changes in the world by writing that information into WMEs. ABL has a number of mechanisms for writing behaviors that are continuously reactive to the contents of working memory, and thus to sensed changes in the world. The details of sensors, like actions, depend on the specific world and agent body.
Unregistered WMEs. ABL programmers can also create WMEs that are not directly linked to any sensor. This is useful for putting items into working memory that arise from internally defined criteria rather than from sensing.
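The registered/unregistered distinction can be sketched as follows. A registered WME type is kept up to date by its sensor each cycle, while an unregistered WME is created by the agent's own logic. All class names here are illustrative:

```python
class WorkingMemory:
    def __init__(self):
        self.wmes = []

class EnemySeenWME:          # registered: written by a sensor
    def __init__(self, unit_id):
        self.unit_id = unit_id

class UnderAttackWME:        # unregistered: created from an internal criterion
    pass

class EnemySensor:
    """Polls the game state and mirrors it into working memory."""
    def __init__(self, memory):
        self.memory = memory

    def sense(self, visible_enemy_ids):
        # Replace stale sightings with the currently visible enemies.
        self.memory.wmes = [w for w in self.memory.wmes
                            if not isinstance(w, EnemySeenWME)]
        self.memory.wmes += [EnemySeenWME(i) for i in visible_enemy_ids]

memory = WorkingMemory()
EnemySensor(memory).sense([4, 9])

# Internally defined criterion: more than one enemy in sight means trouble.
if sum(isinstance(w, EnemySeenWME) for w in memory.wmes) > 1:
    memory.wmes.append(UnderAttackWME())
```

After this runs, working memory holds two sensed WMEs plus one internally created one.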
There are several one-shot and continually-monitored tests available for annotating steps and behaviors. For instance, preconditions can be written to define states of the world in which a behavior is applicable. These tests use pattern-matching semantics over working memory, familiar from production-rule languages. This is a quite powerful feature of ABL, enabling complex queries over all of working memory.
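As a sketch of that pattern-matching flavor, the precondition below tests "at least 700 gold and an idle peasant" against a toy working memory, much like the left-hand side of a production rule. The WME names and threshold are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class GoldWME:
    amount: int

@dataclass
class UnitWME:
    kind: str
    idle: bool

wm = [GoldWME(700),
      UnitWME("peasant", idle=True),
      UnitWME("footman", idle=False)]

def precondition(wmes):
    # Each conjunct is a pattern matched against every WME in memory.
    has_gold = any(isinstance(w, GoldWME) and w.amount >= 700 for w in wmes)
    idle_peasant = any(isinstance(w, UnitWME) and w.kind == "peasant" and w.idle
                       for w in wmes)
    return has_gold and idle_peasant

print(precondition(wm))  # True
```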
Each agent has a library of pre-written behaviors. Each behavior consists of a set of steps, to be executed either sequentially or in parallel, which accomplish a goal. The current execution state of the agent is captured by the Active Behavior Tree (ABT) and working memory. The ABT contains the currently active goals and behaviors. The ABT is analogous to a traditional program call stack, but is a tree rather than a stack because some behaviors execute their steps in parallel, thus introducing parallel lines of expansion in the program state.
The leaves of the ABT constitute the conflict set. The agent continuously executes a decision cycle, during which a leaf step is chosen for execution. As each step is executed, it either succeeds or fails. In a sequential behavior, step success makes the next step available for execution. If any step fails, it causes the enclosing behavior to fail. When the last step of a behavior succeeds, the enclosing behavior succeeds. In this way, success and failure propagate through the ABT.
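The propagation rules above can be modeled in a few lines. This is an illustrative sketch of an ABT fragment, not the ABL runtime: a sequential behavior fails as soon as a step fails and succeeds when its last step does, while a parallel behavior here succeeds only if all of its parallel children do.

```python
class Step:
    def __init__(self, name, succeeds=True):
        self.name, self.succeeds = name, succeeds

    def run(self):
        return self.succeeds

class Sequential:
    def __init__(self, children):
        self.children = children

    def run(self):
        for child in self.children:
            if not child.run():
                return False  # a failed step fails the enclosing behavior
        return True           # last step succeeded -> behavior succeeds

class Parallel:
    def __init__(self, children):
        self.children = children

    def run(self):
        # Parallel lines of expansion: the behavior succeeds only if
        # every child line succeeds.
        return all(child.run() for child in self.children)

tree = Parallel([Sequential([Step("gather"), Step("build")]),
                 Sequential([Step("scout"), Step("attack", succeeds=False)])])
print(tree.run())  # False: the failed "attack" step propagates up the tree
```

Because the root holds two sequential children, the structure is a tree rather than a stack, matching the ABT description above.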