Can humans and artificial intelligences share concepts and communicate? One aim of Making AI Intelligible is to show that philosophical work on the metaphysics of meaning can help answer these questions. Cappelen and Dever use the externalist tradition in philosophy of language to create models of how AIs and humans can understand each other. In doing so, they also show ways in which that philosophical tradition can be improved: our linguistic encounters with AIs reveal that our theories of meaning have been excessively anthropocentric. The questions addressed in the book are not only theoretically interesting; the answers also have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (e.g. creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. The book can be read as a proposal for how to take some first steps towards achieving interpretable AI. Making AI Intelligible is of interest both to philosophers of language and to anyone who follows current events or interacts with AI systems. It illustrates how philosophy can help us understand and improve our interactions with AI.