Goal Recognition Week

TL;DR:

I read some papers, and mocked up a good method for encoding ‘complex’ observation sequences for goal recognition. I have a very clear idea how this will work, enough that my advisor and I are targeting this for publication submission just as school starts. It’s been fun to actually code something algorithmic, and Penni the Penguin Eraser has been seeing some use.

Progress Evaluation

Hero Narrative Pitch: Done, though pending revision before I would post it. It proposes a big change in direction, and a more specific task for my honors thesis.

Language PDDL: On the back burner, so no progress.

Goal Recognition: Read some papers, still more to go. Looked in depth at the Ramirez goal recognizer and got very frustrated with it. Wrote a parser for ‘complex’ observations, detailed below.

Broader AI studies: This Researcher app is cool, and I’ve got a few papers queued from it for when I have time.

Lists: Didn’t do anything. 🙁

Next Week:

Goal Recognition:

  • Write a ‘Hero Narrative’ pitch for complex observations in goal recognition, intended as an outline for a publication.
  • Start writing a formalization for complex observation sequences, intended for publication.
  • Implement my parser on top of the Ramirez compiler.

Language PDDL: I’ll be reading this paper titled Attention, Intentions, and the Structure of Discourse, looking for some more theoretical justifications for plan recognition within discourse.

Broader Studies: Read this paper, titled Biologically plausible deep learning — But how far can we go with shallow networks? It’s not my area, but it’s in my broader interests. Also, keep up with the Plan Recognition papers.

GRE: Schedule a test-taking date, set a study plan, and schedule time for another practice test.

Complex Observations:

Goal recognition is the problem of recognizing an agent’s goal from partial observations of their actions. The task I’m tackling within goal recognition is modeling ‘complex’ observations. By ‘complex’, I mean observations whose order is unknown, or which are just one of several possible explanations. Right now, these observations are assumed to be strictly ordered.

This is motivated by my research with plan-based natural language. If we observe the word “smile”, it could either be the noun or the verb, but not both. This creates a mutually exclusive observation set: we know we observed either smile-noun or smile-verb, but we do not know which. Also specific to the plan-based language is not knowing the order of observations. Though we know the sentence’s ordering, the plan that generates that sentence is not necessarily in that order. Hence, the sentence “He gave me a smile.” becomes a complex set of unordered, mutually exclusive observation sets.

This week I designed a syntax for writing down complex observations, and wrote a parser for integrating them with goal recognition systems. I haven’t got it working with any particular system yet, but in theory it’ll work with either the original Ramirez goal-recognition-as-planning approach or its revisited version.

Complex Observation Syntax:

My syntax involves three types of groups: unordered, ordered, and mutex groups. Unordered groups are denoted with {curly braces}, and can contain arbitrary subgroups. Ordered groups can likewise contain arbitrary subgroups, and are denoted with [square brackets]. Mutex groups cannot contain subgroups (which would be useless, since only one observation from a mutex group is allowed), and are denoted by |bars|. Additionally, one can tag an observation with a mutex ID (obs1^mutexID^), and any observations sharing that ID will be put into the same mutex group, regardless of their ordering. (Multiple IDs are allowed.)
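To make the grammar concrete, here is a minimal recursive-descent parser sketch in Java. This is illustrative only, not my actual parser: all class and method names are made up for this post, and mutex ID tags (^mutexID^) are omitted for brevity.

```java
import java.util.*;

// Illustrative sketch: parse the group syntax into a tree.
// {..} = unordered, [..] = ordered, |..| = mutex (leaves only).
public class ObsParser {
    static class Node {
        final String kind;   // "obs", "unordered", "ordered", or "mutex"
        final String label;  // observation name, for leaves
        final List<Node> children = new ArrayList<>();
        Node(String kind, String label) { this.kind = kind; this.label = label; }
        @Override public String toString() {
            if (kind.equals("obs")) return label;
            String open = kind.equals("unordered") ? "{" : kind.equals("ordered") ? "[" : "|";
            String close = kind.equals("unordered") ? "}" : kind.equals("ordered") ? "]" : "|";
            StringJoiner j = new StringJoiner(", ", open, close);
            for (Node c : children) j.add(c.toString());
            return j.toString();
        }
    }

    private final String src;
    private int pos = 0;
    ObsParser(String src) { this.src = src; }

    Node parse() { return parseNode(); }

    private Node parseNode() {
        skipSpaces();
        char c = src.charAt(pos);
        if (c == '{') return parseGroup("unordered", '}');
        if (c == '[') return parseGroup("ordered", ']');
        if (c == '|') return parseGroup("mutex", '|');
        return parseObs();
    }

    private Node parseGroup(String kind, char close) {
        pos++; // consume the opening delimiter
        Node g = new Node(kind, null);
        while (true) {
            // Mutex groups hold only plain observations, never subgroups.
            g.children.add(kind.equals("mutex") ? parseObs() : parseNode());
            skipSpaces();
            if (src.charAt(pos) == ',') { pos++; continue; }
            if (src.charAt(pos) == close) { pos++; return g; }
            throw new IllegalArgumentException("unexpected character at " + pos);
        }
    }

    private Node parseObs() {
        skipSpaces();
        int start = pos;
        while (pos < src.length() && "[]{}|, ".indexOf(src.charAt(pos)) < 0) pos++;
        return new Node("obs", src.substring(start, pos));
    }

    private void skipSpaces() {
        while (pos < src.length() && src.charAt(pos) == ' ') pos++;
    }

    public static void main(String[] args) {
        Node tree = new ObsParser("[a, {b.1, b.2}, c]").parse();
        System.out.println(tree); // round-trips the input
    }
}
```

Printing the parsed tree round-trips the input string, which is a handy sanity check for a syntax like this.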

This is best explained by example. Say we had a normal observation sequence of a -> b.1 -> b.2 -> c, except we aren’t sure of the ordering for b.1 and b.2.

[a, {b.1, b.2}, c]

This means that a was observed first. The two b's were observed after a, but in no order with respect to each other. The c was observed only after one of the b observations. (I am debating adding a <> set for unordered observations that must all occur before the next observation in a list, but that is slightly more complex.)

The two goal recognizers I’m targeting encode goal recognition as a form of planning by creating actions explain-a(), explain-b1(), explain-b2(), and explain-c(), which have the same effects as the corresponding normal actions but carry additional preconditions and effects to enforce orderings. One of the planner’s goals is to reach explained-all, which only happens after explain-c(), which in turn has explained-b2 as a precondition. (And so on.)

My modifications substitute a more general form of this: explain-a() gets an additional effect order-1-observed, and both explain-b1() and explain-b2() get order-1-observed as a precondition. Both of these actions also have the same ordering effect, order-2-observed, allowing explain-c() to proceed.
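The ordering-fluent idea can be sketched as a tiny Java simulation, assuming the encoding for [a, {b.1, b.2}, c] described above (the fluent and action names mirror the post; the tables here are hand-written for this one example, not generated by my parser):

```java
import java.util.*;

// Sketch: check an explain-action sequence against the ordering fluents
// for [a, {b.1, b.2}, c]. Each action needs its precondition fluent to
// hold and then asserts its effect fluent.
public class OrderingFluents {
    static final Map<String, String> PRECOND = Map.of(
        "explain-a", "",                  // no ordering precondition
        "explain-b1", "order-1-observed",
        "explain-b2", "order-1-observed",
        "explain-c", "order-2-observed");
    static final Map<String, String> EFFECT = Map.of(
        "explain-a", "order-1-observed",
        "explain-b1", "order-2-observed",
        "explain-b2", "order-2-observed",
        "explain-c", "");

    // True if every action's ordering precondition holds when it fires.
    static boolean valid(List<String> actions) {
        Set<String> fluents = new HashSet<>();
        for (String act : actions) {
            String pre = PRECOND.get(act);
            if (!pre.isEmpty() && !fluents.contains(pre)) return false;
            String eff = EFFECT.get(act);
            if (!eff.isEmpty()) fluents.add(eff);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid(List.of("explain-a", "explain-b1", "explain-b2", "explain-c"))); // true
        System.out.println(valid(List.of("explain-a", "explain-b2", "explain-b1", "explain-c"))); // true
        System.out.println(valid(List.of("explain-b1", "explain-a", "explain-b2", "explain-c"))); // false: b1 before a
    }
}
```

Note that this encoding also accepts a, b.1, c, b.2, since explain-c() only waits for one ordering effect; that matches the stated semantics that c comes after one of the b observations.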

I’ve also added mutex groups. Say we knew b.1 or b.2 occurred, but not both.

[a, |b.1, b.2|, c]

In this case, we maintain the same ordering preconditions as before, but we add an extra precondition not(observed-mutex-group-0) and an extra effect observed-mutex-group-0 to both explain-b1() and explain-b2(). In this way, if one b observation is made, the other cannot be.
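Extending the earlier sketch, the mutex fluent can be simulated like so (again a hand-written illustration for this one example, assuming the fluent names above):

```java
import java.util.*;

// Sketch: ordering fluents plus the mutex fluent for [a, |b.1, b.2|, c].
// explain-b1 and explain-b2 each require not(observed-mutex-group-0) and
// assert observed-mutex-group-0, so at most one of them can fire.
public class MutexFluents {
    static final String MUTEX = "observed-mutex-group-0";

    static boolean valid(List<String> actions) {
        Set<String> fluents = new HashSet<>();
        for (String act : actions) {
            switch (act) {
                case "explain-a":
                    fluents.add("order-1-observed");
                    break;
                case "explain-b1":
                case "explain-b2":
                    if (!fluents.contains("order-1-observed")) return false;
                    if (fluents.contains(MUTEX)) return false; // negative precondition
                    fluents.add(MUTEX);
                    fluents.add("order-2-observed");
                    break;
                case "explain-c":
                    if (!fluents.contains("order-2-observed")) return false;
                    break;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(valid(List.of("explain-a", "explain-b1", "explain-c")));               // true
        System.out.println(valid(List.of("explain-a", "explain-b1", "explain-b2", "explain-c"))); // false: mutex violated
    }
}
```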

I allow arbitrary mutex groupings with marked IDs, so that [{a^mu1^, b^mu2^}, {c^mu1^, d^mu2^}] creates two mutex groups around the fluents observed-mutex-group-mu1 and observed-mutex-group-mu2.

I allow arbitrary subgrouping for lists and sets (not mutexes). In these cases, all observations and subgroups inside a group maintain the constraints of every group they are nested in. So, observation d in {[a, {b, [c, |d, e|]}], f} has not only its mutex-group preconditions, but must also come after c. Observation f, however, has no constraints.

This is all in a couple of Java files, with no integration into actual PDDL domains yet. My next task is to rewrite the parser in C, on top of the Ramirez goal recognition parser. I’m also going to formalize this in publication form, which will really hammer out the kinks.