Short week. Memorial Day on Monday. I spent all Tuesday taking a practice GRE and reviewing my wrong answers. Thursday I spent hanging out with my mother-in-law, touring Pike Place Market and the Seattle Art Museum. Wednesday and today I worked on website pages and on writing a sample PDDL domain for playing an Evil Dead-ish game using simple sentences. I'm struggling with understanding the linguistic parts of it, but the tools I'm using are up and running.
This week was short, but I'm still satisfied with what I achieved. Setting up CRISP and starting to write in its XML format for LTAG grammars went more smoothly than anticipated. The real blocker there is my own lack of linguistics knowledge, detailed below.
- Continue writing sample language PDDL. I don’t have an estimate of how far I’ll get.
- Write up a page on this project (found here when finished). That should help me make faster progress on writing the PDDL, and help me figure out a better description than "sample language PDDL".
- Start renewal application for UROP funding
- Continue adding to Grad school list
- Continue reading papers and articles in general AI topics (I think this is a good habit to develop as a researcher.)
LTAGs are complex
My biggest challenge right now is understanding LTAGs. LTAG stands for "lexicalized tree-adjoining grammar", and I use the term to refer to the individual trees. I understand the gist: an LTAG pairs a word (the L in LTAG) with an incomplete syntax tree describing how that word might be used. So, if the word "drink" can be used as either a verb ("I drink coffee") or a noun ("She grabbed a drink"), there would be an LTAG for each usage.
It's actually more complicated, though. [shocked pikachu face] According to the XTAG database, there are at least 174 different usages for the word "drink". Many of these are related usages that take into account a wider view of a sentence. Some are duplicates except for the additional word "by", which apparently gets very special privileges in this system (I don't know why).
I think the reason there are so many is that LTAGs attempt to capture all relevant parts of a usage. A simple usage like "the drink" isn't very complex, so "drink" gets a little stick of a tree.
A usage like "I drink it" gets a more complex tree, capturing a spot for the drinker (NP0, aka noun phrase 0) and a spot for what is drunk (NP1, aka noun phrase 1). To create the sentence "I drink it", we start with the tree below, stick a noun phrase stick for "I" into the NP0 spot, and a noun phrase stick for "it" into the NP1 spot. Without this more complex tree, the LTAG system wouldn't work.
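To make the substitution idea concrete, here's a toy sketch in Python. This is my own illustration, not XTAG's or CRISP's actual data model: the `Node` class, its labels, and the `substitute` method are all made up for this example. It builds the transitive "drink" tree with open NP0 and NP1 slots, then plugs noun-phrase sticks into them.

```python
class Node:
    def __init__(self, label, children=None, subst_site=False):
        self.label = label                 # e.g. "S", "NP0", "V", or a word
        self.children = children or []     # empty list means this is a leaf
        self.subst_site = subst_site       # True for an open slot awaiting a subtree

    def substitute(self, site_label, subtree):
        """Replace the first open substitution site named site_label with subtree."""
        for i, child in enumerate(self.children):
            if child.subst_site and child.label == site_label:
                self.children[i] = subtree
                return True
            if child.substitute(site_label, subtree):
                return True
        return False

    def yield_words(self):
        """Read the sentence off the leaves, left to right."""
        if not self.children:
            return [self.label]
        words = []
        for child in self.children:
            words.extend(child.yield_words())
        return words

# Initial tree for transitive "drink": S -> NP0 V(drink) NP1,
# with NP0 and NP1 left open as substitution sites.
drink = Node("S", [
    Node("NP0", subst_site=True),
    Node("V", [Node("drink")]),
    Node("NP1", subst_site=True),
])

# Stick noun phrase trees into the open spots.
drink.substitute("NP0", Node("NP", [Node("I")]))
drink.substitute("NP1", Node("NP", [Node("it")]))
print(" ".join(drink.yield_words()))  # -> I drink it
```

Real LTAGs also have adjunction (splicing auxiliary trees into the middle of another tree), which this sketch skips entirely; it only shows why the tree needs labeled slots for both noun phrases.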
Then there are trees like this one, and I don't know how to interpret them. What kind of usage necessitates this?
I've figured out what some of the notation means, but a lot is still confusing to me. My only linguistics background is my ability to critique others' writing as "passive" or "active", so I've been playing some catch-up. Here are some of my remaining questions.
- What are the s, r, and p subscripts for? (The f subscript indicates a "foot" node, and numbered subscripts are just indexes.)
- What’s with the epsilon (e and ε) that’s in so many trees? And why the w or v subscripts?
- What's the difference between alpha and beta trees?
- I suspect learning the naming convention would be useful, but I also suspect learning the naming convention requires learning a great deal of linguistics.
- (alphaGnX1vs2bynx0-PRO[with] is a good example of the names for trees. I've figured out that this tree's root will probably be an NP (noun phrase), and it'll include the word "by" somewhere. The PRO means something about control, dunno what though.)
- How do I indicate a “no-adjoin” node in CRISP? (The “na” in the above trees.) I can’t find an example of that.
- Also, where is CRISP’s documentation? It’s gotta be somewhere, right? Right?
- Along those lines, did CRISP’s authors hand encode XTAG trees, or do they have an interface hiding somewhere? I could really use an interface.
My end goal is to create a system that lets non-expert users write my PDDL domains. I don't want my system to have to teach linguistics to its users. I'm hoping that if I learn this stuff, the users won't have to. We'll see.