A step closer to AGI?

The classic argument made over 30 years ago by Fodor and Pylyshyn - that neural networks, as statistical pattern learners, fundamentally lack the systematic compositional skills of humans - has cast a long shadow over neural network research. Their critique crystallized doubts about the viability of connectionist models in cognitive science. This new research finally puts those doubts to rest.

Through an innovative meta-learning approach called MLC (meta-learning for compositionality), the authors demonstrate that a standard neural network can exhibit impressive systematic abilities given the right training regimen. MLC optimizes a network for compositional skills by generating a dynamic stream of small but challenging compositional reasoning tasks, each built from a freshly sampled grammar. This training instills in the network a capacity for rapid systematic generalization that closely matches human experimental data.
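To make the episode structure concrete, here is a minimal, self-contained sketch of the kind of compositional task stream this style of meta-learning draws on. Everything in it is illustrative: the nonsense words, output symbols, and function-word rules ("twice", "after") are hypothetical stand-ins rather than the authors' actual grammar sampler. The point is only that each episode draws a fresh word-to-meaning mapping, so the model must infer the mapping from the study examples in context instead of memorizing it.

```python
import random

# Sketch of MLC-style episode generation (illustrative, not the paper's code).
# Each episode samples a fresh "interpretation grammar": primitive words map
# to random output symbols, while function words combine their arguments in a
# fixed, rule-like way that holds across episodes.

PRIMITIVES = ["dax", "wif", "lug", "zup"]
OUTPUT_SYMBOLS = ["RED", "GREEN", "BLUE", "YELLOW"]

def sample_grammar(rng):
    """Randomly pair primitive words with output symbols for one episode."""
    symbols = OUTPUT_SYMBOLS[:]
    rng.shuffle(symbols)
    return dict(zip(PRIMITIVES, symbols))

def interpret(instruction, grammar):
    """Apply the episode's grammar, plus two hypothetical function words:
    'twice x' -> x x, and 'x after y' -> y x."""
    words = instruction.split()
    if len(words) == 2 and words[0] == "twice":
        return interpret(words[1], grammar) * 2
    if len(words) == 3 and words[1] == "after":
        return interpret(words[2], grammar) + interpret(words[0], grammar)
    return [grammar[words[0]]]

def make_episode(rng, n_study=3):
    """Build one meta-learning episode: study pairs plus a held-out query
    that requires composing a function word with a studied primitive."""
    grammar = sample_grammar(rng)
    study = [(p, interpret(p, grammar)) for p in rng.sample(PRIMITIVES, n_study)]
    query = f"twice {rng.choice([p for p, _ in study])}"
    return {"study": study, "query": query, "target": interpret(query, grammar)}

if __name__ == "__main__":
    rng = random.Random(0)
    episode = make_episode(rng)
    for inp, out in episode["study"]:
        print(f"study:  {inp} -> {' '.join(out)}")
    print(f"query:  {episode['query']} -> {' '.join(episode['target'])}")
```

In the paper itself, a standard seq2seq transformer is optimized across a long stream of such episodes, receiving the study pairs in context and predicting the query's output, so that systematic composition becomes the network's learned default rather than a built-in symbolic rule.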

The model not only displays a human-like ability to interpret novel systematic combinations, but also captures the subtle patterns of bias-driven errors with which people depart from purely algebraic reasoning. This showcases the strength of neural networks in flexibly blending structure and statistics to model the nuances of human cognition.

Furthermore, this research provides a framework for reverse-engineering other human cognitive abilities and imparting them to neural networks. The training paradigm bridges cognitive theories of inductive biases with advanced machine learning techniques, and the approach could potentially elucidate the origins of compositional thought in childhood development.

By resolving this classic debate on the capabilities of neural networks and elucidating connections between human and artificial intelligence, this research marks an important milestone. The results open new frontiers at the intersection of cognitive science and machine learning, and both fields stand to benefit enormously from the integration.

In summary, by settling a historically significant critique and enabling new cross-disciplinary discoveries, this paper makes an immensely valuable contribution with profound implications for our understanding of intelligence, natural and artificial. Its impact will be felt across these disciplines for years to come.

Paper link: https://www.nature.com/articles/s41586-023-06668-3

Abstract:

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components. Fodor and Pylyshyn [1] famously argued that artificial neural networks lack this capacity and are therefore not viable models of the mind. Neural networks have advanced considerably in the years since, yet the systematicity challenge persists. Here we successfully address Fodor and Pylyshyn's challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills. To do so, we introduce the meta-learning for compositionality (MLC) approach for guiding training through a dynamic stream of compositional tasks. To compare humans and machines, we conducted human behavioural experiments using an instruction learning paradigm. After considering seven different models, we found that, in contrast to perfectly systematic but rigid probabilistic symbolic models, and perfectly flexible but unsystematic neural networks, only MLC achieves both the systematicity and flexibility needed for human-like generalization. MLC also advances the compositional skills of machine learning systems in several systematic generalization benchmarks. Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison.
