Carney researchers show that large language model AIs can reproduce mechanisms found in the human brain

A collaboration between professors Michael Frank and Ellie Pavlick is yielding important results about similarities between how ChatGPT-like AI and the human brain accomplish certain complex tasks, opening the door for transformative research at the intersection of computational neuroscience and computer science.

One of the uncanny qualities about interacting with a large language model AI such as ChatGPT is that it is capable of responding to a request in much the same way a human would. It takes in questions, composes contextually appropriate responses, remembers relevant tangents and utilizes them in subsequent answers, and disregards anything irrelevant. And yet, the engineers who designed ChatGPT did not intentionally equip it with special mechanisms to enable it to do so.

How are the large language models doing it?

Michael Frank
Director of the Center for Computational Brain Science

Michael Frank, a computational neuroscientist who studies working memory, and Ellie Pavlick, a computer scientist who studies how to get computers to understand language the way humans do, are beginning to answer this question. Through a new collaboration between their labs, they have recently demonstrated that these types of AI can learn to implement at least some of the same processes the human brain uses. The duo found that, when presented with a task designed to tax the human brain’s working memory, a ChatGPT-like AI succeeds by reorganizing some of its internal machinery to mirror mechanisms in the brain.

The brain mechanisms in question are called input and output gating mechanisms. These mechanisms enable us to store multiple pieces of information separately in our working memory so that we can access and act upon the different pieces independent from one another, explained Frank, who directs the Center for Computational Brain Science in the Robert J. and Nancy D. Carney Institute for Brain Science. 

“Let’s take a typical text message you might receive,” said Frank. “Something like, ‘We need a dinner reservation for six people, and Mary is gluten free. Can you also get groceries for tomorrow? I'm tied up with finishing this project at work.’”

“When you read this message, your brain uses input gating to store the dinner request separately from the grocery request. Later, you would use output gating to access the information about Mary being gluten free when deciding which restaurant to go to and, later still, access the grocery request to decide what items to pick up.”
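To make the gating idea concrete, here is a minimal toy sketch, not taken from the study, of a working memory with separately addressable slots: input gating decides which slot a new piece of information goes into, and output gating decides which slot gets read out when a decision has to be made. The slot names and contents are invented purely to mirror the text message example.

```python
# Toy illustration only (not the researchers' model): a working memory with
# separately addressable slots. "Input gating" chooses which slot stores an
# incoming item; "output gating" chooses which slot to read when acting.

class WorkingMemory:
    def __init__(self):
        self.slots = {}

    def input_gate(self, slot, item):
        """Store `item` under a particular slot, leaving other slots untouched."""
        self.slots[slot] = item

    def output_gate(self, slot):
        """Read out only the slot relevant to the current decision."""
        return self.slots.get(slot)


wm = WorkingMemory()
# Reading the text message: store the two requests separately (input gating).
wm.input_gate("dinner", "table for six; Mary is gluten free")
wm.input_gate("groceries", "shop for tomorrow")

# Choosing a restaurant: access only the dinner slot (output gating).
print(wm.output_gate("dinner"))
# At the store: access only the grocery slot (output gating).
print(wm.output_gate("groceries"))
```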

In Frank and Pavlick’s joint experiment, a ChatGPT-like AI adopted these same biological strategies when it was challenged to respond to a multi-layered request similar to the text message example. This is a striking finding, according to Pavlick, because this type of AI has the capacity to complete such a task in ways no human ever could.


Ellie Pavlick
Manning Assistant Professor of Computer Science

“ChatGPT and other large language model AIs are not subject to the kinds of constraints the human brain is. They can provide appropriate answers using strategies that no human would be able to use, such as memorizing everything they have been told verbatim,” explained Pavlick, who is the Manning Assistant Professor of Computer Science. “So to see the ChatGPT-like model–even though it had the option of solving the task in many different ways–specializing in this surprisingly brain-like manner is really exciting.” 

This newly established link between large language models and the brain has important implications for both the fields of computer science and brain science, Frank said.

“Large language models are already known for being quite flexible and powerful, but they require a huge amount of training to get to that point. We hope that brain-inspired algorithms can help a large language model learn these strategies more efficiently with less data.”

“At the same time,” Frank added, “studying the strategies that large language models use can tell us if there is something more adaptive there that can inspire us to think about whether analogous mechanisms exist in the brain. From there, we can improve our brainlike models to capture those mechanisms.”

HARNESSING AI’S POWER BY COLLABORATING ACROSS FIELDS

In their work together across the fields of computer science and brain science, Frank, Pavlick and their respective labs are recontextualizing a debate as old as AI itself. Ever since the technology came onto the scene in the 1980s, researchers in various camps have argued about how humanlike it’s possible for artificial intelligence to become. Although early AI was loosely modeled upon human neural networks, brain scientists were quick to point out that the human brain has many fine-grained details AI lacks, such as the neural architectures that enable input and output gating.

In 2006, Frank was at the forefront of establishing how human input and output gating mechanisms work to solve difficult tasks, a theory that was later validated in humans and animals. To show that human input and output gating mechanisms were really needed to solve his experimental tasks, Frank compared his computational model of the circuits linking the frontal cortex with the thalamus and basal ganglia–which together support input and output gating–to several artificial intelligence models. When all of the different models attempted to solve the experimental tasks, most of the AI models performed much worse than Frank’s brainlike model. The one exception was the leading AI model at the time, the LSTM (long short-term memory) network – the only model designed with a form of gating, albeit a non-biological form. The LSTM was sometimes able to perform the experimental tasks as well as Frank’s biological model; other times, Frank’s model still won.

Since 2006, transformers, the artificial neural networks under the hood of ChatGPT and other large language models, have surpassed LSTMs through their ability to work with much larger datasets and to show more convincing language abilities. Transformers are able to read questions and write answers thanks to their attention heads–mechanisms that allow these models to pay attention to specific words and ignore others, and to change their attention at each moment in time depending on what they have just seen.
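The computation inside a single attention head is compact enough to sketch directly. The snippet below shows standard scaled dot-product attention, the textbook operation transformers are built on, rather than anything specific to this study: each token gets a row of softmax weights over the other tokens, which is exactly the knob that lets a head focus on some words and ignore others.

```python
import numpy as np

def attention_head(Q, K, V):
    """Standard scaled dot-product attention (Vaswani et al., 2017).

    Q, K, V: (sequence_length, d) arrays of query, key and value vectors,
    one row per token. Row i of `weights` says how strongly token i attends
    to every token in the sequence; near-zero entries are effectively ignored.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V, weights                       # weighted mix of values

# Tiny example: 4 tokens, 8-dimensional vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
output, weights = attention_head(Q, K, V)
print(weights.round(2))   # each row sums to 1: where each token "looks"
```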

Shortly after the emergence of ChatGPT, Pavlick and Frank’s labs began collaborating. “Our reasoning was that, since our research subfields have tended to favor different interpretations and to choose their questions and methods accordingly, we needed to join forces to study large language models in order to make sure we didn’t miss important insights,” explained Pavlick. 

The two researchers shared a similar hunch, said Frank.

“Both of us suspected that, based on ChatGPT’s success rate, it must be somehow implementing a form of input and output gating even though it had no mechanisms like this intentionally built in.”

The team–including graduate students Aaron Traylor and Jack Merullo in Pavlick’s lab–got to work.

Frank and Pavlick saw that they could take a variant of the experimental task Frank had used in 2006, one that creates the constraints that demonstrate input and output gating, and then challenge a transformer to tackle it. That task, designed by postdoctoral researcher Rachel Ratz-Lubashevsky in Frank’s lab, requires the transformer to process symbols one at a time and to determine whether the current symbol is the same as or different from the ones stored in its memory of previous symbols. The task promotes input and output gating because it forces the transformer to store a symbol at a particular address in memory (input gating) while also requiring it to make decisions about the current symbol in relation to one of several items already stored in memory (output gating). “To return to the text message example,” explained Frank, “the task is testing whether the model can do the equivalent of storing the restaurant plans separately from the grocery shopping plans and later making decisions about each of them independently of the other.”
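As a rough, invented illustration of this kind of task, and not the study’s actual stimuli or instructions, the sketch below generates trials in which each symbol arrives tagged with a context: an ideal solver must overwrite only that context’s memory slot (input gating) and compare the incoming symbol against whatever was previously stored under the same context (output gating).

```python
import random

def make_trials(n_trials=10, symbols="ABCD", contexts=("dinner", "groceries")):
    """Generate trials for a simplified 'store by context, compare by context' task."""
    memory = {}      # what an ideal solver would hold in working memory
    trials = []
    for _ in range(n_trials):
        context = random.choice(contexts)
        symbol = random.choice(symbols)
        if context in memory:
            # Output gating: compare the current symbol only with the item
            # stored under *this* context, ignoring the other context.
            label = "same" if symbol == memory[context] else "different"
        else:
            label = "new"
        # Input gating: overwrite only the slot for the current context.
        memory[context] = symbol
        trials.append((context, symbol, label))
    return trials

for context, symbol, label in make_trials():
    print(f"{context:9s} {symbol} -> {label}")
```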

But the transformer’s success on such a task was only the beginning. The researchers needed to be able to see into the model in order to prove their hypothesis. Without the ability to peer inside the AI black box, all the team had were the transformer’s successful results–not how it arrived at them.

To solve this problem, the team tapped into Pavlick’s skill in a subfield called mechanistic interpretability, a recently developed set of analysis tools that enables a computer scientist to determine which components of a transformer work together to produce its observed behavior on a task. What Pavlick and her graduate students found was that, as the transformer grappled with the constraints of the Frank lab’s experimental task, its attention heads began to specialize, morphing into mechanisms analogous to human input and output gating mechanisms.
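Analyses of this kind typically begin by reading out each head’s attention pattern on task inputs and looking for heads whose behavior has specialized. The snippet below is only a hypothetical starting point, using the public Hugging Face transformers library and an off-the-shelf GPT-2 model rather than the study’s model, task, or tooling.

```python
# Hypothetical starting point for inspecting attention heads, using the public
# Hugging Face `transformers` library and an off-the-shelf GPT-2 model rather
# than the study's own model or analysis code.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

text = "We need a dinner reservation for six people, and Mary is gluten free."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions holds one tensor per layer, shaped
# (batch, num_heads, seq_len, seq_len): the weight each head assigns from
# every token to every token it can attend to.
for layer, attn in enumerate(outputs.attentions):
    # One crude summary: how sharply each head concentrates its attention.
    peakiness = attn.max(dim=-1).values.mean(dim=-1).squeeze(0)
    print(f"layer {layer:2d}:", [round(p, 2) for p in peakiness.tolist()])
```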

“Since the transformer isn’t human and could have addressed the task in any number of ways, the fact that it opted to adjust its attention heads to behave like biological input and output gating mechanisms suggests there might be something ‘natural’ about this type of solution. Such a result opens up provocative research questions about what types of pressures from the learning environment might make solutions like this ‘inevitable,’ and could have implications for ways to train large language models more efficiently and reliably,” said Pavlick.

While there are many more variables when it comes to how ChatGPT and other large language models function in the wild, the team suspects that these findings are relevant to a scenario anyone using ChatGPT is likely to find themselves in: correcting the AI when it gives a wrong answer to a question.

“We believe that if a person corrects ChatGPT about some aspect of its answer, the large language model will be able to access the part of the answer that was wrong using its attention heads in a way that is similar to input gating: taking in the new information from the person’s correction and using it to update what it has stored. Then, later, the model uses output gating to craft responses that demonstrate its new understanding of the issue,” Frank said.

Frank, Pavlick, their graduate students and other Carney affiliates are full of further ideas for applying recent developments in AI to computational neuroscience. “Carney’s Center for Computational Brain Science is perfectly poised to explore the deep connections between AI and brain science,” said Frank. “We have the resources, the inspiration and the freedom to probe problems that span traditional boundaries. Particularly with Professor Pavlick coming aboard as a key collaborator in the realm of computer science and linguistics, findings like these are just the beginning.”