Last updated:

The BDI (Belief, Desire, Intention) architecture is the best-established inner-agent architecture. It models an agent much the way people describe their own mental activity: in terms of beliefs about the world, the goals (desires) the agent holds, and the plans it is currently executing to achieve those goals. Strictly speaking, however, there is no such thing as *the* BDI architecture; it is better seen as a family of architectures. Much of the literature is concerned with the logical underpinnings of the architecture, which can be quite remote from any implementation. Three generations of BDI systems stand out: PRS (Procedural Reasoning System), dMARS (distributed Multi-Agent Reasoning System), and JACK. I chose to implement dMARS because, as far as I know, its structure and inner workings are described best in the public literature.
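As a rough sketch of what such an interpreter amounts to, here is a minimal, hypothetical BDI cycle. The `Plan`, `Agent`, and `demo` names are illustrative only and not part of the library described below:

```cpp
#include <functional>
#include <queue>
#include <set>
#include <string>
#include <vector>

// Hypothetical, minimal BDI cycle: beliefs are ground facts (as strings),
// desires are pending events, intentions are plan bodies selected for them.
struct Plan {
    std::string trigger;                                        // event this plan reacts to
    std::function<bool(const std::set<std::string>&)> context;  // applicability test
    std::function<void(std::set<std::string>&)> body;           // effect on the beliefs
};

struct Agent {
    std::set<std::string> beliefs;
    std::queue<std::string> events;   // desires, represented as pending events
    std::vector<Plan> plans;

    // One interpreter step: take an event, find an applicable plan, run it.
    void step() {
        if (events.empty()) return;
        std::string e = events.front();
        events.pop();
        for (const Plan& p : plans) {
            if (p.trigger == e && p.context(beliefs)) {
                p.body(beliefs);
                return;
            }
        }
    }
};

// Demo: one cycle that reacts to a "+location(waste,b)" event by adding a belief.
bool demo() {
    Agent a;
    a.beliefs = {"location(robot,b)"};
    a.events.push("+location(waste,b)");
    a.plans.push_back(Plan{
        "+location(waste,b)",
        [](const std::set<std::string>& b) { return b.count("location(robot,b)") > 0; },
        [](std::set<std::string>& b) { b.insert("holding(waste)"); }});
    a.step();
    return a.beliefs.count("holding(waste)") > 0;
}
```

A real dMARS interpreter additionally unifies the event against plan triggers and carries the resulting substitution into the context and body; this sketch only shows the shape of the sense–select–act loop.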

AgentSpeak(L) is a language based on a subset of dMARS agent functionality. It interested me because I needed a language to exercise (test) my dMARS agent. However, I found the language too restrictive: I would not have been able to test major parts of the agent. I therefore created my own syntax for describing dMARS plans.

I used the knowledge and some of the algorithms from the chapters on logical reasoning in "Artificial Intelligence: A Modern Approach" (AIMA) by Stuart Russell and Peter Norvig. For current purposes it was enough to restrict myself to reasoning with Horn clauses (à la Prolog); I decided not to attempt anything more complicated for now.
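To illustrate the kind of reasoning involved, here is a propositional sketch of Horn-clause backward chaining. The real pl module works on first-order literals and therefore also needs unification; the names below (`Rules`, `prove`) are illustrative, not the module's API:

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Horn rules: head -> list of alternative bodies, each a conjunction of
// subgoals. Propositional only, and no cycle detection (assumes acyclic rules).
using Rules = std::map<std::string, std::vector<std::vector<std::string>>>;

bool prove(const std::string& goal, const std::set<std::string>& facts,
           const Rules& rules) {
    if (facts.count(goal)) return true;          // goal is a known fact
    auto it = rules.find(goal);
    if (it == rules.end()) return false;         // closed world: fail
    for (const auto& body : it->second) {        // try each rule for the goal
        bool ok = true;
        for (const auto& subgoal : body)         // prove every subgoal
            if (!prove(subgoal, facts, rules)) { ok = false; break; }
        if (ok) return true;
    }
    return false;
}
```

Failure to prove a goal is simply reported as `false`, which is exactly the closed-world reading discussed further down.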

Because I expect to reuse the predicate-logic constructs used in this agent, I decided to separate them from the rest of the agent into a module of their own: Predicate Logic (pl).

The library is finished in a working state, but it is far from perfect; making it perfect would currently take too much time. Perhaps at a later date.

What was implemented:
  • The agent described in 'A formal specification of dMARS'.
  • A small language to write dMARS plans.
  • A built-in equals predicate (=), as in '=(X, Y)' or 'NOT =(X, jack)'
What was not implemented:
  • Remove goal actions. Although AgentSpeak describes -!g(t), the event that a goal was removed, it is not part of the dMARS spec, so I left it out completely.
  • Maintenance conditions.
  • External actions: they are implemented, but the spec implies the action should somehow return a substitution, and how that is supposed to work was not completely clear.
  • Situation formulas: no compound formulas, only a series of literals. I chose to simplify situation formulas because unification of compound formulas (containing both 'and' and 'or' connectives) is currently too complicated for me. Since I am using the unification algorithms of AIMA, which are based on Horn clauses, I interpret a situation formula as a sequence of predicate-logic literals.
  • Multiple context substitutions. To decide whether a plan is applicable, the agent attempts to find a variable substitution that matches the context against the belief base. It is possible that multiple substitutions are found; the spec is silent here. I chose to add the plan to the list of applicable plans multiple times, each time with a different substitution.
  • Closed World Assumption (CWA). This assumption states that a fact is false iff it is not present in the knowledge base. It is one of Prolog's basic assumptions, and it is important because it makes practical logic-programming environments feasible. For example, if it is (only) given that "position(vehicle, a)", it can be concluded (under the CWA) that "NOT position(vehicle, b)". Without this assumption, "~position(vehicle, b)" would have to be stated explicitly, as would "~position(vehicle, c)", "~position(vehicle, d)", and so on. The spec states, mistakenly, that 'dMARS beliefs are "rather" like Prolog facts: ... positive or negative atomic formulae containing no variables'. Prolog facts are not literals, just atoms; they cannot be negative. To write a practical agent, in the spirit of Prolog's logical reasoning, I decided to incorporate the CWA into the agent. For most agents it is more important to reason practically than purely logically.
  • Choosing the right representation for atoms, literals, plans, and variables proved quite a task. I chose to implement atoms the way one would implement a string: passing an atom from one place to the next implies a complete copy of the atom. This was necessary to ensure that all copies are deleted properly (a memory-management reason). A plan, however, is too big a structure to copy, and management of plans is not much of an issue either, so plans are passed by pointer. Because of the Prolog-like structure I chose, there are no negative atom literals in my system; still, one would like to check that a certain atom does not occur. For this reason I added a 'positive' field to the atom structure. It may be somewhat misleading, but it was much more efficient to do it this way. Variables are problematic, because one regularly needs to check whether two variables are exactly equal (i.e. the same variable, not merely the same name). This is done most efficiently by comparing variable memory addresses, so if a variable is used more than once in a structure, all pointers point to the same variable object. A reference counter determines whether the variable should really be removed. This should be improved, however, because there are currently many places where I check whether the reference count is 0. I tried to use a smart pointer here, but that did not work.
  • I may have misunderstood the spec, but in my implementation only one plan is tried for a single goal; if that plan fails, no other plan is attempted. The question is: should the goal become active again? Should the agent store the goal event until the goal is completed? The spec seems to say so, but I have not implemented it that way.
  • Note to self: in CPLModel::BackChainList, all variables are passed down the line; don't do that. And in substitutions, when 'x=y', then also: 'y=x'! See also lines around 452.
  • The use of logical functions has not been tested.
  • The spec says something about binding the results of a successful achievement-goal plan instance back to the calling plan instance; I don't know what this means. The same holds for a test goal. In that case I do know what it means, but a test goal can yield multiple substitutions (for instance with 'running(X, E)' when only X was bound up to that point), and it is not clear how they should all be processed.
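To make the context-matching and multiple-substitutions points above concrete, here is a simplified sketch of matching a context (a conjunction of positive literals) against a ground belief base. `Literal`, `match`, and `contextMatches` are illustrative names, not the pl module's actual API; as in the plan syntax, an argument starting with an uppercase letter is a variable:

```cpp
#include <cctype>
#include <map>
#include <string>
#include <vector>

// A literal: predicate name plus arguments, e.g. location(robot, X).
struct Literal { std::string pred; std::vector<std::string> args; };
using Subst = std::map<std::string, std::string>;  // variable -> constant

bool isVar(const std::string& t) {
    return !t.empty() && std::isupper(static_cast<unsigned char>(t[0]));
}

// Try to extend 'theta' so that the pattern matches the ground fact.
bool match(const Literal& pat, const Literal& fact, Subst& theta) {
    if (pat.pred != fact.pred || pat.args.size() != fact.args.size()) return false;
    for (size_t i = 0; i < pat.args.size(); ++i) {
        const std::string& p = pat.args[i];
        if (isVar(p)) {
            auto it = theta.find(p);
            if (it == theta.end()) theta[p] = fact.args[i];     // bind variable
            else if (it->second != fact.args[i]) return false;  // binding clash
        } else if (p != fact.args[i]) return false;             // constants differ
    }
    return true;
}

// All substitutions that make the context true against the belief base --
// one entry per applicable plan instance, as described above.
std::vector<Subst> contextMatches(const std::vector<Literal>& context,
                                  const std::vector<Literal>& beliefs,
                                  Subst theta = {}, size_t i = 0) {
    if (i == context.size()) return {theta};
    std::vector<Subst> out;
    for (const Literal& b : beliefs) {
        Subst t = theta;
        if (match(context[i], b, t)) {
            auto rest = contextMatches(context, beliefs, t, i + 1);
            out.insert(out.end(), rest.begin(), rest.end());
        }
    }
    return out;
}
```

With beliefs {location(robot, b), adjacent(a, b), adjacent(b, c)} and context {location(robot, X), adjacent(X, Y)}, this yields the single substitution {X=b, Y=c}; adding a belief adjacent(b, d) would yield a second substitution, and thus a second applicable plan instance.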
Sample use
	CdMARSModel Model;
	CdMARSParser &Parser = Model.GetParser();
	CPLAtom Belief;
	CdMARSPlan *Plan;


	// if you see waste in your current lane, move to the lane with the bin,
	// and drop it there.
	Plan = Parser.ParsePlan(
		"trigger: +location(waste, X);" \
		"context: location(robot, X), location(bin, Y);" \
		"body: pick(waste) -> (!location(robot, Y) -> ( drop(waste) ) )."
	Plan->SetName("TopLevel Plan.");

	// if you should go to lane Y, and you are in lane X, and X and Y are adjacent,
	// then move to lane Y
	Plan = Parser.ParsePlan(
		"trigger: !location(robot, Y);" \
		"context: location(robot, X), adjacent(X, Y);" \
		"body: move(X, Y);" \
		"success: +location(robot, Y), -location(robot, X)."
	Plan->SetName("Move 1 lane.");

	// if you should go to lane Z, and you are in lane X (which is not Z),
	// and lane X is adjacent to lane Y, then move to lane Y
	//	(except when there is a car at Y)
	Plan = Parser.ParsePlan(
		"trigger: !location(robot, Z);" \
		"context: location(robot, X), adjacent(X, Y), " \
			"adjacent(Y, Z), NOT =(X, Z), NOT location(car, Y);" \
		"body: !location(robot, Y) -> (!location(robot, Z))."
	Plan->SetName("Move to intermediate lane.");

	Parser.ParseAtom("adjacent(a, b).", Belief);

	Parser.ParseAtom("adjacent(b, c).", Belief);

	Parser.ParseAtom("adjacent(c, d).", Belief);

	Parser.ParseAtom("location(robot, b).", Belief);

	Parser.ParseAtom("location(waste, b).", Belief);

	Parser.ParseAtom("location(bin, d).", Belief);

	for (int i=0; i<50; i++)