
Compositionality in Deep Neural Networks

Time: Friday 5 May 2023, 14:00–15:00
Place: Zoom

Competent language users can understand and produce a potentially infinite number of novel, well-formed linguistic expressions by dynamically recombining known elements. This is generally taken to support the claim that humans process linguistic expressions compositionally, such that the meaning of a complex expression is determined by the meanings of its constituents and the way in which they are syntactically combined. Computation over compositionally structured representations has been conjectured to be central not only to linguistic processing, but also to cognition more broadly. Such a capacity can be readily accounted for in a classical system that combines discrete symbolic representations into complex representations with constituent structure. By contrast, it has been argued that connectionist systems that do not merely implement a classical architecture lack representations with constituent structure, and are therefore inadequate models of linguistic processing and human cognition. The recent and rapid progress of artificial neural network architectures, ushered in by the coming of age of deep learning within the past decade, warrants a reassessment of old debates about compositionality in connectionist models. Deep neural networks called language models, trained on large amounts of text without built-in linguistic priors, have vastly exceeded expectations in many areas of natural language processing. Here, I argue that language models are capable of processing their inputs compositionally, by following systematic rules induced during training rather than shallow heuristics. Accordingly, they encode linguistic information in a structured representational format, even though they fall short of implementing a classical architecture.

Specifically, instead of concatenating discrete symbolic representations through strict (algebraic) variable binding, I argue that they can compose distributed (vector-based) representations through a form of fuzzy variable binding enabled by attention mechanisms in the Transformer architecture. I offer both theoretical and empirical support for this hypothesis, and suggest that it goes a long way towards explaining the remarkable performance of language models. The upshot of this analysis is threefold. First, we need not see language models as uninterpretable black boxes. By unraveling the repertoire of computations they induce during training, we can start bridging the gap between behavioral evidence about their performance and claims about their underlying competence. Second, the classicist approach to compositionality is not the only game in town to explain the systematicity of linguistic processing and cognition. Connectionist models need not implement a classical architecture with strict variable binding over discrete constituents in order to process structured representations compositionally. Third, this non-classical approach to compositionality has a number of characteristics that make it increasingly attractive not just as an engineering project, but also as an empirically plausible model of linguistic processing in humans. I conclude by offering some reflections on future directions for investigating this claim, and on how this line of research may influence cognitive science more broadly.
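The contrast between strict and fuzzy variable binding can be made concrete with a toy sketch of scaled dot-product attention, the core operation of the Transformer. In the illustration below (a minimal NumPy sketch, not drawn from the talk itself; the "role slot" and "filler" framing and all specific values are hypothetical), a query retrieves not a single discrete symbol but a graded, convex mixture of value vectors, with attention weights acting as a soft addressing scheme over key–value slots:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query row retrieves a
    weighted (soft) combination of the value vectors, rather than
    copying one discrete symbol."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # query-key similarity
    weights = softmax(scores)       # soft addressing over slots
    return weights @ V, weights

# Hypothetical toy setup: three near-orthogonal keys act as "role
# slots"; the values are the "fillers" bound to those roles.
K = np.eye(3, 4) * 3.0
V = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])

# A query closest to slot 1, with a little overlap with slot 2.
Q = np.array([[0., 3.0, 0.3, 0.]])

out, w = attention(Q, K, V)
# The weights concentrate on slot 1 but stay graded (fuzzy), so the
# output is a mixture dominated by V[1] rather than a discrete copy.
```

The point of the sketch is only that binding here is a matter of degree: unlike a classical pointer or symbol concatenation, the retrieved representation blends contributions from every slot in proportion to query–key similarity.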

Event type: Lecture

Raphaël Millière

Lecturer in the Philosophy Department at Columbia University