Reading and Acting while Blindfolded:
The Need for Semantics in Text Game Agents

[Paper]      [Code]      [Blogpost]      [Poster]      [Slides]     

Figure: We propose ablations on game language to study whether text game agents leverage language semantics. Surprisingly, fixed, random, hash-based language representations (c) perform even slightly better than learned GRU representations (a)!


Text-based games simulate worlds and interact with players using natural language. Recent work has used them as a testbed for autonomous language-understanding agents, with the motivation that understanding the meanings of words, i.e., semantics, is a key component of how humans understand, reason, and act in these worlds. However, it remains unclear to what extent artificial agents utilize semantic understanding of the text. To this end, we perform experiments to systematically reduce the amount of semantic information available to a learning agent. Surprisingly, we find that an agent is capable of achieving high scores even in the complete absence of language semantics, indicating that the currently popular experimental setup and models may be poorly designed to understand and leverage game texts. To remedy this deficiency, we propose an inverse dynamics decoder to regularize the representation space and encourage exploration, which improves performance on several games including Zork I. We discuss the implications of our findings for designing future agents with stronger semantic understanding.
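To make the ablation concrete: a fixed, hash-based representation maps each observation string to a deterministic pseudo-random vector, so identical strings always get the same vector while paraphrases get unrelated ones, i.e., the representation carries no semantics. The sketch below is a minimal illustration of this idea, not the paper's exact implementation; the SHA-256 seeding, Gaussian sampling, and 128-dimensional size are assumptions chosen for simplicity.

```python
import hashlib
import numpy as np

def hash_representation(text: str, dim: int = 128) -> np.ndarray:
    """Map a text observation to a fixed pseudo-random vector.

    The vector depends only on a stable hash of the string, so it is
    consistent across calls but semantics-free: two paraphrases yield
    unrelated vectors, while identical strings always match.
    """
    # Python's built-in hash() is salted per process, so derive the
    # seed from a cryptographic hash instead for reproducibility.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

obs_a = hash_representation("You are standing in an open field.")
obs_b = hash_representation("You are standing in an open field.")
obs_c = hash_representation("You stand in an open field.")
assert np.allclose(obs_a, obs_b)      # same text -> same vector
assert not np.allclose(obs_a, obs_c)  # paraphrase -> unrelated vector
```

Unlike a learned GRU encoder, this representation cannot generalize across related observations; that such agents still score well is what motivates the paper's conclusion that current setups underuse game text.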