I made good progress on the reductionist CRISP, figuring out the grammar bits, and getting CRISP’s code base to translate my plans into an actual sentence. Plan recognition worked in its basic form, but will need more work to go from a sentence to the speech-action observations required. My next big task is to develop a plan recognition technique that works for out-of-order observations, and for mutually exclusive possible observations. All we can observe from a sentence is what words are used—not the corresponding LTAG, or what order LTAGs were adjoined into. Apparently, no plan recognizer yet deals with complex observation conditions. ¯\_(ツ)_/¯
Also, I finished making/potting this wire bonsai tree, and I’m pretty proud of it. Every part of my life must involve trees, apparently.
Language PDDL: Progress!
- I better understand CRISP’s approach to grammar, and have adopted it completely. This means I can use CRISP’s code to translate one of my own speech-act plans into an actual sentence. (But only if I keep some dummy variables around, where the described objects used to be.)
- Started modifying CRISP’s code so that I don’t need the dummy variables. Not as simple as it should be—I either write a whole PDDL parser, or start generating my reductionist domains using CRISP code. I’ve gone with the latter.
Plan Recognizer: I tossed almost-complete and incomplete language plans into the plan recognizer. It worked as expected, but it's far from complete.
Lists: I added some small details, and made a very nice spreadsheet to project our household earnings for the next 20 years, given different paths through school/industry.
Broader AI Studies: I got this app called Researcher, which pulls together feeds of new journal papers. It’s interesting so far, if only to read the abstracts. I did not read an off-topic paper in depth, unfortunately, but I did re-read another CRISP paper in depth.
- By Wednesday, a ‘Hero Narrative’ pitch for what’s publishable about my reductionist CRISP (at my advisor’s request).
- Language PDDL: I will continue to modify CRISP’s code, but not in any haste. Plan recognition takes priority. I will look more closely at how CRISP translates its LTAG speech actions into a syntax tree, and see if that’s emulatable in PDDL.
- Plan Recognition: This is the next focus of my summer—and it may be a contribution in itself.
- Read a sequence of papers, including an overview of plan recognition and an in-depth look at the plan recognizer I’m using.
- Take a quick pass at modifying the Ramirez plan recognizer (Ramirez and Geffner, 2009) to work with unordered observations or mutually exclusive possible observations.
- Broader AI Studies: I’ll keep reading abstracts on that app I’ve got. My paper quota will be taken up by Plan Recognition papers, however.
- Lists: Separate schools into tier list, and add more fallback schools.
LTAG Derivations and Plan Recognition
I better understand CRISP’s grammar now. The syntaxnodes which so confused me aren’t syntax nodes in the sense of the visual nodes in a syntax tree. They’re just a grammatical object paired with an actual object that’s being described. So, the n-1 node might be paired with rabbit1, representing the role of sleeper. Any grammatical adjoining or substituting happens to n-1, but affects the distractor set for rabbit1.
This makes it difficult to tell what the planned sentence actually is, without further processing. The CRISP PDDL domain doesn’t fully describe an LTAG—it merely notes what LTAG was used, and what syntaxnode was adjoined/substituted onto. Consequently, the PDDL doesn’t know what order words go in—the two LTAGs below might be the same action for all it knows.
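To make that concrete, here's a toy sketch (all action and node names are made up, not CRISP's actual predicates): if a plan only records which tree was applied to which syntaxnode, two derivations that adjoin the same trees in different orders are indistinguishable.

```python
# Toy illustration: a CRISP-style plan recorded as (tree, syntaxnode) pairs
# carries no word order. Names below are hypothetical, not CRISP's own.

derivation_a = [("init-sleep", "n-1"), ("adjoin-red", "n-1"), ("adjoin-An", "n-1")]
derivation_b = [("init-sleep", "n-1"), ("adjoin-An", "n-1"), ("adjoin-red", "n-1")]

# Viewed as unordered observations, the two derivations collapse into one:
print(set(derivation_a) == set(derivation_b))  # True
```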
This is an issue for plan recognition. Plan recognition algorithms currently take as input a sequence of fully specified, in-order observations. So would the “red” in “I wake the red rabbit.” be translated as init-An-red? What syntaxnodes are the parameters?
I see two options to solve this.
- Re-write CRISP’s domains to fully specify an LTAG and word-order, then solve a planning problem with the goal state of having the observed word order. (The side-effects of the planning problem are then the meaning.)
- Adapt plan recognition to work with unordered observations, and mutually exclusive possible observations.
I feel inadequate, linguistically, to tackle #1. But I can tackle #2, and my advisor is well versed in plan recognition. He thinks that a plan recognizer that deals with out-of-order observations is publishable in and of itself. Also, #1 doesn’t help recognize a speaker’s intended side effects. (“I shoot the goblin” -> player wants a dead goblin). So, we’re tackling #2.
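As a starting intuition (this is a naive filter, not Ramirez and Geffner's compilation-to-planning approach, and all goal and action names are invented), a recognizer for unordered observations can at least discard any goal hypothesis whose plan doesn't contain the observed actions, with multiplicity, in any order:

```python
from collections import Counter

def consistent(plan, observations):
    """A hypothesis plan is consistent with an unordered, possibly
    incomplete observation multiset if the plan contains every observed
    action (with multiplicity), regardless of position in the plan."""
    plan_counts = Counter(plan)
    return all(plan_counts[obs] >= n for obs, n in Counter(observations).items())

# Hypothetical goal -> plan table for two candidate sentences:
hypotheses = {
    "wake-rabbit": ["init-wake", "subst-rabbit", "adjoin-red"],
    "pet-rabbit":  ["init-pet", "subst-rabbit"],
}
obs = ["adjoin-red", "init-wake"]  # out of order and incomplete
print([g for g, p in hypotheses.items() if consistent(p, obs)])  # ['wake-rabbit']
```

Mutually exclusive possible observations would complicate this: each observation becomes a set of candidate actions, and consistency requires some assignment of one candidate per observation that the plan covers.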
Even with modified plan recognition, I still have some concerns about word order. “The blue rabbit wakes the red rabbit” might be equally translatable as wake(r1, r2) or wake(r2, r1) if I’m not very careful. I may yet need to work on linguistic aspects. We’ll see.