Are LLMs stochastic parrots, JPEG compressions of the Internet, or something more, and if so, why? Large language models (LLMs) appear to be more than stochastic parrots, apparently representing colors, directions, space, time, and more. I present experiments that quantify the structural similarity between concept organizations in LLMs and elsewhere. I discuss the impact of these experiments on the LLM Understanding Debate and suggest how they may lend support to a position that has been claimed to reconcile sides in the Representation Wars.
Anders Søgaard is a Professor of Computer Science at the University of Copenhagen, part-time affiliated with Philosophy. He is a father of three and the author of four academic books (and six literary ones). He previously worked at the University of Potsdam, Amazon, and Google.