“You are not entering this world in the usual manner, for you are setting forth to be a Dungeon Master. Certainly there are stout fighters, mighty magic-users, wily thieves, and courageous clerics who will make their mark in the magical lands of D&D adventure. You, however, are above even the greatest of these, for as DM you are to become the Shaper of the Cosmos. It is you who will give form and content to all the universe. You will breathe life into the stillness, giving meaning and purpose to all the actions which are to follow.”

Gary Gygax, The Keep on the Borderlands (1979)

The Gist

The long-term goal of this project is to create a system capable of serving the role of ‘Game Master’ for tabletop role-playing games (TRPGs). The Game Master (or GM) controls the world and non-player characters, and narrates the story. For those familiar with D&D, the task is to make an AI Dungeon Master.

This is a tall order. At first glance, an AI could automate the minutiae of a TRPG (à la roll20.net), but could it converse naturally with players, tell interesting stories, balance player agency and authorial control, or improvise as human GMs do? An Automatic Game Master (AGM) has to fill a role that most humans find challenging because of the wide range of skills and improvisation required, and AI is not known for its generalizability or its social skills. Put simply, creating an AGM is a pipe dream for interactive narrative research, and for AI research in general.

Still, creating an AGM is a worthy goal. Because TRPGs are entirely language-based, they are not constrained to a simulated environment and can explore any story. Game Masters, who control all aspects of the story except the other players’ characters, are also unconstrained in the style of experience they create. Simplified TRPGs can be used as toy problems to target research in nearly any direction, while still utilizing advances from the larger set of TRPG toy problems. Putting the narrative system in a role usually played by humans also provides a standard to strive for, and points towards natural areas for exploration.

Also, how freaking cool would it be to play D&D with an AI?!

Motivations

I joined the Quantitative Experience Design Lab (QED lab) largely to start work on this project, but also because Rogelio is a cool mentor. Not only is it very cool to say I’m researching an AI Dungeon Master (at least, cool in my circles), but I also think this problem has important applications.

Primarily, I’m interested in the theoretical aspects of an AGM. How do humans improvise? How do we tell stories? When I play D&D, how do I know what is or isn’t a reasonable action? How do I estimate the obvious consequences of unusual actions? When I talk about my research with others who play TRPGs, we inevitably get drawn into endless fascinating questions. We simply don’t know how humans do this, or why we find it so entertaining.

From another perspective, though, I think TRPGs are impactful. D&D instantly brings together strangers in a way that’s hard to replicate elsewhere. It’s cognitively engaging. It provides a good reason for friends to get together regularly, which is especially important for some groups. I wish my Nana had a D&D group, because D&D would give her a reason to meet new people regularly. I think D&D would be a perfect game to run in a nursing home. D&D has successfully been used as a type of therapy, especially for kids with poor social skills, or kids who wouldn’t otherwise be able to engage with normal therapy.

I don’t think we should build an AGM for these purposes—certainly not for therapeutic games. For sensitive applications, it’s best to have real human interaction. But the idea of a scalable system that gives people an easy introduction to the game attracts me. Role-playing is such a profoundly human endeavor—it must have some untapped potential.

Another great reason to pursue an AGM is to create a common research platform. Computer Science research, and especially Artificial Intelligence research, has long made use of games as toy problems. I think researchers could make good use of TRPGs as a class of toy problems to target questions in human-computer interaction. For example, a researcher interested in how personality is conveyed through language choices could use a TRPG environment. Another researcher, interested in computers summarizing relevant information for humans, might use a TRPG setting to introduce players to arbitrary fictional settings. If both researchers can start with the same code for basic TRPG games, they can conduct their experiments faster, compare their results more easily, and add more functionality to the code base for future use.

Because TRPGs are fun games, it’s easier to attract experiment participants. They’re conversation-based games, so they’re highly applicable to all sorts of natural language topics. What’s more, they’re narrative games, and narrative is an important area of research for AI. Maintaining a narrative requires a good understanding of human psychology, especially of how to avoid immersion-shattering mistakes (a common problem with social AI like Siri). The AGM problem also supports research in common-sense intelligence. The fictional aspects of TRPGs give researchers flexibility to narrow a game toward any research question without sacrificing participant immersion.

How Even?

So, for any particular aspect we want to research, we can make a TRPG to focus on it. But what is the real goal? What makes a wildly successful AGM?

Pipe Dream AGM Capabilities:

  • Interact with players naturally
    • Maintain natural conversation with players
      • Role play as characters
        • Understand what to describe and what to voice act
      • Explain rules to new players
      • Engage with players to gauge preferences, guide/advise actions, and reduce awkwardness
      • Fluidly shift between clear roles as characters, game master, and social co-player
    • Intuit what players mean to do in the game
      • Match player descriptions to pre-described actions
      • Improvise new but realistic actions based on player descriptions
        • Improvise realistic actions for non-player characters
      • Prohibit unrealistic or nonsensical player actions
      • Alternatively, describe realistic player failures for unrealistic action requests
    • Differentiate between in-character speech, player questions, and non-game speech
  • Manage an Interactive Story
    • Create interesting stories designed to be played
    • Customize stories to player actions
    • Manage unexpected player actions that interfere with planned stories
    • Incorporate dice rolls and the possibility of different non-player action outcomes
    • Great AGMs could build off unexpected player actions to improvise more interesting stories
    • Great AGMs could manage a long campaign with changes in player rosters
  • Tailor the Experience
    • Target specific genres, story archetypes, and time constraints
    • Tune levels of player autonomy, character autonomy, and improvisation
    • Adjust characters to player preferences
    • Fudge the rules, or fudge the dice, according to player preference or specified goals

Realistic Basic AGM:

  • Manage a basic story in a defined domain, and accommodate unexpected actions from a single player
  • Understand direct player actions and inquiries, as expressed through natural language text
  • Describe (in natural language text) the world, non-player actions, and relevant narration

“Manage a basic story” is itself a pretty wide specification. That’s why I think this is a useful toy problem: interactive narrative researchers love figuring out different ways to manage a basic story. An AGM with the other capabilities in place is an easy way to start testing different models of storytelling. If designed modularly, an AGM could simply swap in a different narrative engine and be play-test ready.

What I’m Doing

I’m working on the other two bullet points: understanding player speech and narrating the game. Theoretically, we could just offload the ‘understanding’ part to Amazon’s or Google’s neural networks. Narrating the game is trickier. Right now it’s usually done with mostly pre-authored text, like you’d find in a text adventure game, plus formatted templates. A simple action like shoots(shooter, victim) is formatted as “<shooter_name> shoots <victim_name>”. Longer exposition is written by a game’s author.
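To make that concrete, here’s a minimal sketch of that kind of template-based narration. The action names, role names, and characters are all made up for illustration.

```python
# Minimal sketch of template-based narration: authored templates, filled in
# with whatever names are bound to each action role. All names are invented.

TEMPLATES = {
    "shoots": "{shooter} shoots {victim}.",
    "opens": "{actor} opens the {object}.",
}

def narrate(action: str, **roles: str) -> str:
    """Fill an authored template with the names bound to each action role."""
    return TEMPLATES[action].format(**roles)

print(narrate("shoots", shooter="Karn", victim="the goblin"))
# -> "Karn shoots the goblin."
```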

TRPGs are more open-world than most text adventure games, however. The sessions can be largely pre-written, but only in broad strokes. In an actual game, the players describe what their character does (or speak in character) and the GM replies with the relevant consequences of that action (or voice acts a character). Honestly, handling dialogue scares me, and I can’t tackle that in an undergraduate thesis. But I can work on matching player speech to known actions, and on describing relevant aspects of the world.

Rather than use a big ol’ neural network to recognize player inputs, I want to use something that understands a game’s context. In regular gameplay, a player could say “I shoot the goblin”, and a human GM naturally understands that they mean the leftmost goblin, because that’s the one the player was conversing with. For this reason and some others, I’m working on plan-based natural language understanding, which views the player as an intelligent agent trying to accomplish an in-game goal via speech.
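Here’s a toy sketch of what I mean by using context; this is not my actual system. Given the grounded actions the planner says are currently available, it picks the one whose target the player mentioned most recently. Names like “goblin_left” and the recency heuristic are made up for illustration; a real plan-based recognizer would also ask which candidate best advances the player’s inferred goal.

```python
# Toy sketch: resolve an ambiguous utterance using conversational context.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class GroundedAction:
    verb: str    # e.g. "shoot"
    actor: str   # the player character
    target: str  # a specific world object, e.g. "goblin_left"

def match_utterance(verb: str, candidates: list[GroundedAction],
                    discourse_history: list[str]) -> GroundedAction | None:
    """Pick the candidate action whose target was mentioned most recently."""
    matching = [c for c in candidates if c.verb == verb]
    if not matching:
        return None  # no known action fits: improvise, or ask the player

    def recency(action: GroundedAction) -> int:
        # Position of the most recent mention of the target; -1 if never mentioned.
        return max((i for i, ref in enumerate(discourse_history)
                    if ref == action.target), default=-1)

    return max(matching, key=recency)

# "I shoot the goblin": the player was just talking to goblin_left, so that is
# the goblin we pick, even though two goblins are in the scene.
candidates = [GroundedAction("shoot", "player", "goblin_left"),
              GroundedAction("shoot", "player", "goblin_right")]
history = ["goblin_right", "goblin_left"]  # goblin_left was mentioned last
print(match_utterance("shoot", candidates, history))
```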

What Matt’s Doing

My husband, Matt, shares my year and major. We joined the QED lab together. While I’m doing language-y stuff, he’s focusing on automated narratives. Specifically, he’s developing a “knowledge-based theory of character intention”. Basically, a character must intend to do something before doing it, and in order to intend to do something, they must believe they can succeed at it. This also introduces some chance of failure, since a character’s beliefs can be wrong. His poster pitch uses the example of a generated story with a hero and a villain. The hero wants the villain’s McGuffin. A non-narrative planner makes a story where the villain simply gives the hero the McGuffin—not a believable story. An intentional narrative planner makes the hero steal the McGuffin from the villain—a more believable story. Matt’s theoretical knowledge-based intentional planner makes a story where the hero must first search for the McGuffin, discover the villain, plan a heist, then steal the McGuffin—a more interesting and believable story.
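Here’s a very rough sketch of the core check as I understand his pitch; this is not his planner, just my illustration. A character may only attempt a step toward a goal if they intend that goal and believe the step’s preconditions hold.

```python
# Rough illustration (not Matt's planner): a character may only attempt an
# action toward a goal if they intend that goal AND believe the action's
# preconditions hold. All facts and names below are invented.

def may_attempt(character: dict, action: dict) -> bool:
    serves_goal = character["intends"] in action["effects"]
    believes_it_can_work = all(p in character["beliefs"]
                               for p in action["preconditions"])
    return serves_goal and believes_it_can_work

hero = {"intends": "has(hero, mcguffin)", "beliefs": []}
steal = {"preconditions": ["at(mcguffin, lair)", "at(hero, lair)"],
         "effects": ["has(hero, mcguffin)"]}

print(may_attempt(hero, steal))  # False: the hero doesn't yet know where it is
hero["beliefs"] += ["at(mcguffin, lair)", "at(hero, lair)"]  # learned by searching
print(may_attempt(hero, steal))  # True: now the heist is a believable next step
```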

What Needs to Be Done

TRPGs aren’t really used as a toy problem yet. That said, much research in interactive narrative works on text adventures, which are basically simple single-player TRPGs. As in much of the field, most researchers build a prototype system from scratch, designed to illustrate a particular concept or finding. Planning algorithms are a big deal in narrative research, with a lot of work going into planners capable of exhibiting narrative phenomena like intention or conflict. Because planning is such a common method for narrative research, Justus Robertson created the General Mediation Engine (GME), which builds an interactive narrative game around an off-the-shelf planner, or even a narrative planner. I plan on altering the GME to make a simple AGM, and I hope more researchers use it for their own studies. I want to add more complex speech abilities to the GME, so that future narrative research can operate in a more realistic setting: one of natural language.
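For a sense of what mediation looks like, here’s a toy sketch in my own paraphrase, not the GME’s actual API: the engine holds a planned story, and when the player does something off-script it either replans from the new state (accommodate) or narrates an in-fiction failure (intervene). The world model, actions, and the stand-in replanner are all invented for illustration.

```python
# Toy mediation loop: actions are (description, effect) pairs over a
# set-of-facts world. Everything here is a made-up illustration.

Action = tuple[str, str]  # (description, fact the action adds to the world)

def run_turn(player_action: Action, plan: list, world: set, replan):
    """Advance one turn; returns the updated (plan, world, narration)."""
    description, effect = player_action
    if plan and player_action == plan[0]:
        return plan[1:], world | {effect}, f"You {description}."       # on-script
    new_world = world | {effect}
    new_plan = replan(new_world)
    if new_plan is not None:
        return new_plan, new_world, f"You {description}."              # accommodate
    return plan, world, f"You try to {description}, but it fails."     # intervene

def replan(world: set):
    # Stand-in for an off-the-shelf planner: returns a new plan or None.
    return ([("climb through the window", "in_keep")]
            if "window_broken" in world else None)

# The planned story expects the player to open the door, but they smash a
# window instead; the replanner accommodates and the story carries on.
plan = [("open the door", "door_open"), ("enter the keep", "in_keep")]
print(run_turn(("smash the window", "window_broken"), plan, set(), replan))
```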

For TRPGs specifically, I hope someone figures out what to do about dice. It’s such an integral part of the game, but I’m not gonna handle it, nor has anyone I know of talked about it. Narratives are more interesting when there’s a believable element of chance involved—when one can fail. I know there’s lots of research into planning with probabilistic outcomes, but it’s all directed at robotics and such. I don’t know if anyone’s done narrative planning with chance.
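Just to pin down what I mean by chance, here’s a sketch (not something I’m building) of a single action with two authored outcomes and a d20-style check deciding between them. A narrative planner would have to reason over both branches ahead of time, which is the hard part; all the names and numbers below are made up.

```python
# One action, two authored outcomes, a d20-style check (roll + modifier vs.
# a difficulty class) deciding which outcome happens at runtime.

import random

def resolve(action_name: str, modifier: int, difficulty: int,
            on_success: str, on_failure: str) -> str:
    roll = random.randint(1, 20)  # the d20
    outcome = on_success if roll + modifier >= difficulty else on_failure
    return f"[{action_name}: {roll}+{modifier} vs DC {difficulty}] {outcome}"

print(resolve("persuade the guard", modifier=3, difficulty=15,
              on_success="The guard waves you through.",
              on_failure="The guard narrows his eyes and calls for backup."))
```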

If I weren’t doing plan-based language stuff, I would be working on improvisational actions. The most fun I’ve ever had playing D&D was when my party decided to wear giant decapitated rat heads as hats, to form a rat-killing cult. How should an AGM handle a player request to wear a rat head? As humans, we can figure out the natural consequences: bloody faces, lopsided unstable headwear, and reactions of fear and disgust. With the D&D context, we can expect boosts to intimidation actions, and failure at any other form of persuasion. A computer, though, has no way of figuring all this out. How should an AGM reason about the consequences of such unexpected actions? Apply logic to a knowledge base? Use word associations? Should it just ask a Mechanical Turk worker and remember the answer next time?
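Purely as speculation on the “logic on a knowledge base” option, here’s what hand-authored rules might look like for the rat-head case. Every fact, rule, and number here is invented.

```python
# Speculative sketch: hand-authored rules map facts about an improvised prop
# to mechanical and social consequences. All facts, rules, numbers invented.

FACTS = {"worn(rat_head)", "bloody(rat_head)", "unstable(rat_head)"}

RULES = [
    # (condition over the fact set, consequences to apply)
    (lambda facts: {"worn(rat_head)", "bloody(rat_head)"} <= facts,
     {"intimidation": +2, "persuasion": -4, "npc_reaction": "disgust"}),
    (lambda facts: "unstable(rat_head)" in facts,
     {"acrobatics": -1}),
]

def consequences(facts: set) -> dict:
    derived = {}
    for condition, effects in RULES:
        if condition(facts):
            derived.update(effects)
    return derived

print(consequences(FACTS))
# -> {'intimidation': 2, 'persuasion': -4, 'npc_reaction': 'disgust', 'acrobatics': -1}
```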

I also think some research on real TRPG playing needs to be done. It’s probably being done, actually, and I’m just shamefully unaware of it. In particular, I’d be interested in analyzing how humans slip in and out of character. We’re nowhere near a computer capable of such social fluency, but it’d be useful to understand how it works so as not to interfere with it.