MIT researchers just discovered an AI mimicking the brain on its own

AI can tell us a lot about the brain. Learn how MIT researchers discovered machine learning mimicking both brain function and evolution.

Eric James Beyer

In 2019, The MIT Press Reader published interviews with Noam Chomsky and Steven Pinker, two of the world’s foremost linguistic and cognitive scientists. The conversations, like the men themselves, vary in their framing and treatment of critical issues surrounding their areas of expertise. However, when asked about machine learning and its contributions to cognitive science, their opinions gather under the banner of skepticism and something approaching disappointment. 

“In just about every relevant respect, it is hard to see how [machine learning] makes any kind of contribution to science,” Chomsky laments, “specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed.”

While Pinker adopts a slightly softer tone, he echoes Chomsky’s lack of enthusiasm for how AI has advanced our understanding of the brain: “Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition — mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence.” 

But as our understanding of human and artificial intelligence grows, positions like these may soon find themselves on unstable ground. While AI has yet to attain human-like cognition, artificial neural networks that replicate language processing — a system thought to be a critical component behind higher cognition — are starting to look surprisingly similar to what we see taking place in the brain.


In November 2021, a group of researchers at MIT published a study in the Proceedings of the National Academy of Sciences demonstrating that analyzing trends in machine learning can provide a window into the mechanisms of higher cognitive brain function. Perhaps even more astounding is the study’s implication that AI is undergoing a convergent evolution with nature, without anyone programming it to do so.

Artificial intelligence and the brain

Artificial intelligence powered by machine learning has made impressive strides in recent years, especially in visual recognition. Instagram uses image recognition AI to describe photos for the visually impaired, Google uses it for its reverse-image search function, and facial recognition algorithms from companies like Clearview AI help law enforcement agencies match images on social media against government databases to identify wanted individuals.

Crucial ethics discussions aside, the mechanics of how these algorithms work can shine a light on cognitive function. By comparing neural activity from humans and non-human primates to data from artificial neural network machine learning models tasked with a similar function — say, recognizing an image against a chaotic background — researchers can gain insight into both which programs work best and which most closely resemble how the brain carries out the same task.  

“We’ve had some success in modeling sensory areas [of the brain], in particular with vision,” explained Martin Schrimpf, first author of the new MIT study, in an interview with Interesting Engineering. 

Schrimpf, a Ph.D. student in the MIT Department of Brain and Cognitive Sciences, co-authored the paper with Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the institute’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Evelina Fedorenko, an associate professor of neuroscience at the university.


“[Prediction] is something our language system seems to be optimized to do.”

In the wake of these successes, Martin began to wonder whether the same principle could be applied to higher-level cognitive functions like language processing. “I said, let’s just look at neural networks that are successful and see if they’re anything like the brain. My bet was that it would work, at least to some extent.”

To find out, Martin and colleagues compared data from 43 artificial neural network language models against fMRI and ECoG neural recordings taken while subjects listened to or read words as part of a text. The AI models the group surveyed covered all the major classes of available neural network approaches for language-based tasks. Some were more basic embedding models like GloVe, which clusters semantically similar words together. Others, like the models known as GPT and BERT, were far more complex; they are trained to predict the next word in a sequence or to predict a missing word within a given context, respectively.
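
For a concrete sense of what an embedding model like GloVe captures, here is a minimal sketch using pretrained GloVe vectors loaded through the gensim library. This is an illustration only, not the study’s pipeline: semantically related words end up close together in the vector space.

    import gensim.downloader as api

    # Illustrative only: download 50-dimensional GloVe vectors trained on Wikipedia.
    glove = api.load("glove-wiki-gigaword-50")

    # Semantically similar words sit close together in the embedding space.
    print(glove.similarity("king", "queen"))    # relatively high
    print(glove.similarity("king", "cabbage"))  # relatively low
    print(glove.most_similar("language", topn=3))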

“The setup itself becomes quite simple,” Martin explains. “You just show the same stimuli to the models that you show to the subjects […]. At the end of the day, you’re left with two matrices, and you test if those matrices are similar.” And the results? “I think there are three-and-a-half major findings here,” Schrimpf says with a laugh. “I say ‘and a half’ because the last one we still don’t fully understand.”
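
In code, that comparison might look something like the sketch below: a toy version with random placeholder matrices standing in for the real model and brain data (the study itself uses more careful cross-validated regression). The idea is to fit a linear map from model activations to neural responses, then correlate predicted and observed responses on held-out stimuli.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder data: one row per stimulus (e.g., per sentence shown).
    model_activations = rng.normal(size=(200, 768))  # the "model" matrix
    neural_responses = rng.normal(size=(200, 100))   # the "brain" matrix (voxels/electrodes)

    X_train, X_test, y_train, y_test = train_test_split(
        model_activations, neural_responses, random_state=0
    )

    # Map model space to brain space, then evaluate on unseen stimuli.
    mapping = Ridge(alpha=1.0).fit(X_train, y_train)
    predicted = mapping.predict(X_test)

    # Score: correlation between predicted and observed responses, per recording site.
    scores = [
        np.corrcoef(predicted[:, i], y_test[:, i])[0, 1]
        for i in range(y_test.shape[1])
    ]
    print("mean held-out correlation:", float(np.mean(scores)))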

Machine learning that mirrors the brain

The finding that sticks out to Martin most immediately is that some models predict neural data extremely well. In other words, regardless of how well a model performed a task, some appear to resemble the brain’s cognitive mechanics for language processing. Intriguingly, the team at MIT identified the GPT model variants as the most brain-like of the group they looked at. GPT is a language model trained to generate human-like text. It was developed by OpenAI, the AI research lab co-founded by Elon Musk, which just this June revealed a new AI tool capable of writing computer code. Until recently, GPT-3, the program’s latest iteration, was the single largest neural network ever created, with 175 billion machine learning parameters.

This finding could open up a major window into how the brain performs at least some part of a higher-level cognitive function like language processing. GPT operates on the principle of predicting the next word in a sequence. That it matches so well with data gleaned from brain scans indicates that, whatever the brain is doing with language processing, prediction is a key component of it.
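
To make that principle concrete, here is a minimal next-word-prediction sketch using the openly released GPT-2 model via the Hugging Face transformers library; this is an assumption for illustration, not the exact models or pipeline the study benchmarked.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Illustrative only: GPT-2 stands in for the GPT-family models in the study.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # The model assigns a probability to every candidate next token.
    inputs = tokenizer("The researchers compared the models to the", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Show the five most probable continuations of the prompt.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([idx.item()])!r}  p={p.item():.3f}")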


Schrimpf also notes that when longer texts and stories were shown to subjects, all neural network models fared relatively poorly compared to how they scored on short-range texts. “There are different interpretations for this,” Martin says. “But the more exciting interpretation, which I think is also consistent with what machine learning is intuiting right now, is that perhaps these models are really good at forming the right short-range representations. But once you […] have semantic context that you need to aggregate over, perhaps that’s where they fall short. If you’ve ever played with one of these chat agents in your browser, you might have noticed something similar, where it starts well and falls apart pretty quickly.”

The team’s second significant finding offers a clue about how our cognition works when it comes to language. The researchers tested the models against a combination of eight benchmarks covering language tasks like grammaticality judgments and entailment. “None of them correlated,” says Martin. “So, even if these models do well on these tasks, that doesn’t predict at all how well they’re going to match to the brain. So, it really seems like this prediction task is something special. It’s something our language system seems to be optimized to do.”


“What the natural language processing community is doing […] is something like community evolution.”

Further study is needed to understand why some models resemble the brain more than others. This is partly because, in machine learning, AI models can be something of a black box: their inner workings are so complicated that even the people who designed them may not be able to say how the variables that go into the models relate to one another. Martin acknowledges that parsing those variables out could be an enormous task.

 


“For individual models, we still don’t know what would happen if we had one fewer layer [in the neural network] or fewer units, or more units,” he says. “But there are projects that are trying to pick apart models and knock out all the different components and see what’s really driving the match to the brain.”

The study’s third major finding, and the one that ties it most directly to theories about cognition, is that the more brain-like an AI model is, the better it can match human behavior, in this case subjects’ reading times.

Putting the picture together reveals an unexpected synthesis of scientific knowledge that Martin calls “the triangle.” Models that use next-word prediction mirror subjects’ brain scores, which in turn can be used to predict human behavior. “I think this triangle [of insights] is super cool,” Martin says excitedly. “Now that we’ve learned lessons from vision and other areas, we were able to pull all of this together in one study. Models that are better at predicting the next word can better predict neural responses in human brains, and the models that better predict neural responses can better predict behavior in the form of self-paced reading times.”
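
As a toy numerical illustration of that triangle (hypothetical scores, not the study’s data), the three quantities should rise and fall together across models:

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-model scores, one entry per language model.
    next_word_accuracy = np.array([0.21, 0.25, 0.30, 0.34, 0.41, 0.45])  # task skill
    brain_score = np.array([0.15, 0.18, 0.24, 0.27, 0.33, 0.36])         # match to neural data
    reading_time_fit = np.array([0.10, 0.14, 0.19, 0.22, 0.28, 0.30])    # match to behavior

    # Each leg of the triangle: pairwise correlations across models.
    print(pearsonr(next_word_accuracy, brain_score))
    print(pearsonr(brain_score, reading_time_fit))
    print(pearsonr(next_word_accuracy, reading_time_fit))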

From neuroscience to AI and back again

One of the reasons the study is so fascinating is that these insights into cognition simultaneously point to a kind of “AI evolution” that’s taking place, one that has until recently gone unnoticed. It’s important to remember that nobody intentionally programmed these models to act like the brain. Still, in the course of building and upgrading them, we seem to have stumbled into a process similar to the one that produced the brain.

“One quote that I really like from Nancy Kanwisher, one of the senior authors in the paper, was, ‘It didn’t have to be that way.’ It didn’t have to be that the models that we built for these [language predicting] tasks ended up looking like the brain,” Martin elaborates. “We speculate in the paper that perhaps what the natural language processing community is doing […] is something like community evolution. If you take an [AI] architecture and it works well, then you take the parts of it that work, ‘mutate’ it, recombine it with other architectures that work well, and build new ones. It’s not too different [from evolution] in the broad sense, at least.”

It’s the architecture of both the brain and AI models that Martin feels is the final potential insight the study offers, though it’s one whose edges are still coming into view. While neural networks can be trained on data to perform better or more similarly to the brain, their underlying structure appears to matter greatly. “It turns out that these inherent structures [in the models] give you a lot,” Martin explains. “If you look at these models, you still gain something like 50 percent from training [them on data], but I think none of us expected that the structure puts you in the right representational space.” 

The future of AI research

Schrimpf and his colleagues are focused on expanding an information platform that pulls in large amounts of data and language models, making them accessible to the scientific community to help catalyze further progress. While there’s no end goal for this kind of research, Martin recognizes that building a more comprehensive understanding of cognition and using that understanding to create practical applications capable of helping people are two sides of the same coin.

“These things are useful scientifically because they’re part of a unified scientific hypothesis of everything we know about a particular brain space,” he says. “I’m [also] currently working on model-guided stimulation. So, ideally, we’d have a subject in a chair looking at a gray screen; then we’d ask the model, ‘If I want to make the subject believe they’re seeing a dog, what kind of stimulation would I have to apply?’ And then we zap the brain accordingly, and they see a dog. I think that’s a good direction for vision. Something similar could be done with language. Perhaps we could help people with language comprehension issues. I do think there is a direction there — I’m cautiously hopeful.” 

Such research and projects will inspire new conversations in machine learning, neuroscience, and cognition. They will also feed into one of the more intense discussions in the scientific community: whether the brain is a good model for machine learning, and whether that even matters. “People argue both ways,” Martin observes. “I think neuroscience can serve as a validation checkpoint now and then. Are you guys on the right track? Are you building the right kinds of, in this case, language models? Or are they completely different from how the brain is solving things?” Regardless, peering further into the brain to see how it solves things is a project that should interest everyone. Machine learning, it turns out, just might be one of the best tools available to help us do so.