Matt Nassar is an assistant professor in the Department of Neuroscience. He is the principal investigator of the Learning, Memory & Decision Lab, which uses computational models to better understand how the brain represents and stores information for effective decision making.
Carney Institute (CI): Tell us a bit about yourself.
Matt Nassar (MN): I grew up in Norwich, a small town in upstate New York, and went to Colgate University, where I took a psychology class and discovered neuroscience. After that, I was obsessed with it and concentrated in neuroscience.
I attended the University of Pennsylvania for grad school. They had a rotation program in the neuroscience department where we got to try lots of different things. I worked with two labs that were in the neurodegenerative disease area. In the third rotation, I jumped ship on the cellular and molecular stuff and switched gears, joining a monkey electrophysiology lab that studied cognition using single-unit recordings.
At first, I didn't know how to code or do anything else that was useful for that lab, but it was just so fun. I spent three months messing around, writing code that analyzed data to make simple computational models of learning. I've been forgetting my Krebs cycle ever since.
I then came to Brown to do a postdoc in Michael Frank's lab in the Department of Cognitive, Linguistic, and Psychological Sciences. I really wanted to get training in a lab that specialized in computational modeling and that's just what I got.
CI: Are there unique challenges posed when joining an existing lab? Are you organically integrated into the culture?
MN: I think it can mean different things. You can come in and do the project that the lab would've done, do your own thing using the lab for resources, or a mix of those two things. For me, it was a mix. We definitely did projects that were in line with Michael's direct expertise. So, in that sense, I contributed to the lab's main direction, but I also was able to do other work that was more in line with the questions that I wanted to answer.
MN: The benefit of working in an environment with a bunch of people who were experts in computational modeling was that, when I had a hurdle, I had support. I could ask my officemate, "How would you do this?" So, I had enough independence to learn and teach myself things and practice the process of independent science by going through the iterative loop of asking questions and getting answers.
CI: What questions did you want to answer in your own research?
MN: When I started my postdoc, I was really interested in understanding why the brain uses neuromodulators to modulate neural activity — meaning, if I can do X when dopamine is high and I can do Y when dopamine is low, why don't I just have two sets of glutamatergic circuits: one that does X and one that does Y and a switch that goes between them? I felt that it had to be more efficient to reuse the same circuit, to do some small modulation to it and get it to do both things.
One project I focused on investigated how we learn from and adjust to new pieces of information. We have new experiences all the time; some of them impact future behavior, others will not. And there's a behavioral trade-off: if I adjust my behavior according to everything that happens, then I won't be able to develop stable representations of the world. So, any bit of noise or random fluctuation in an environment is going to trip me up, leaving me vulnerable to making a mistake the next time I face a similar decision point.
CI: Can you give an example of this?
MN: If you think about learning as just updating according to recent information, then there's a cost to it. That cost is memory. If you're a little goldfish that's only storing the most recent thing that happened to you and acting accordingly, then in some sense you're leaving a lot on the table that could be useful information, that was in the deeper past.
We now think that you can have different stores of memory. Even if your behavior is changing, we think you're storing different contexts. For example, when commuting to work, you're making a mental calculation about the best route to take based on many inputs, like the weather: route A may be the most direct path and the one you take regularly, but it slows down in bad conditions. So, on a rainy day, you opt for route B.
So, every time you have a worse-than-expected experience, because of rain or snow or other factors, you're going to switch and take the other path, when really the right thing to do would be to average the times you get on the different paths and just choose the one that's quicker.
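The commuting example can be turned into a toy simulation. The sketch below is purely illustrative — the route times, the noise level, and the "switch after a bad day" rule are assumptions for demonstration, not parameters from the lab's models — but it shows why averaging outcomes beats reacting to every fluctuation:

```python
import random

random.seed(0)

# Hypothetical average commute times in minutes: route A is faster on
# average, but both routes vary a lot from day to day.
MEAN_TIME = {"A": 20.0, "B": 25.0}
NOISE_SD = 8.0

def commute(route):
    """One noisy commute on the given route."""
    return MEAN_TIME[route] + random.gauss(0, NOISE_SD)

def switch_after_bad_day(n_days=1000):
    """Switch routes whenever today was worse than expected."""
    route, total = "A", 0.0
    for _ in range(n_days):
        t = commute(route)
        total += t
        if t > MEAN_TIME[route]:  # worse than expected -> switch
            route = "B" if route == "A" else "A"
    return total / n_days

def average_and_pick(n_days=1000):
    """Keep a running average per route; pick the faster one."""
    stats = {"A": [0.0, 0], "B": [0.0, 0]}  # [sum, count] per route
    total = 0.0
    for day in range(n_days):
        if day < 20:  # sample both routes before committing
            route = "A" if day % 2 == 0 else "B"
        else:
            route = min(stats, key=lambda r: stats[r][0] / stats[r][1])
        t = commute(route)
        stats[route][0] += t
        stats[route][1] += 1
        total += t
    return total / n_days

print(f"switch-on-bad-day mean commute: {switch_after_bad_day():.1f} min")
print(f"average-and-pick mean commute:  {average_and_pick():.1f} min")
```

Because random noise makes roughly half of all days "worse than expected," the switching strategy ends up splitting its time between the two routes, while the averaging strategy settles on the genuinely faster one.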
CI: Is memory something that can be identified through computational measures? Is it a biological function of neurons?
MN: If you ask 10 people what memory is, you might get 10 different answers. Even in my own lab, we study different types of memory and it’s hard to believe that they fall under the same blanket term. For example, our lab has done work on episodic memories, which in some cases can last decades. On the other hand, we've done work on visual working memories that can be vanquished with the flash of a light (or the passing of a few seconds). We can quantify both of these things using computational approaches and perhaps distinguish between them through models that incorporate a "decay" through which working memories are rapidly forgotten.
One common link between these cases is that they both involve representing some aspect of the past — be it a few seconds ago or several years ago — that might be useful for guiding future behavior. And I do think that this is a primary function of neural circuits. But the range of types of memory highlights the need to solve the same problem at multiple timescales; for example, with systems that can sustain neural activity over short periods but maintain patterns of synaptic weights over much longer ones.
CI: After your postdoc, you joined the neuroscience department here at Brown. What is one of the courses that you’re teaching?
MN: I teach a course, Neural Computation in Learning, Memory and Decision Making. It encourages students to build models of behavior by writing code that examines how we learn to do things and how we make decisions. We then test and poke these models to see how they work. In the process, the students gain an understanding of evidence accumulation and drift diffusion models, reinforcement learning, and even cutting-edge concepts at the interface of neuroscience and AI.
Some people take the course and have never programmed before. I really wanted to make my class accessible to all levels because I know that many people, once they get excited about this stuff, are hooked. And I myself didn’t really learn how to program until graduate school.
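To give a flavor of the kind of model described above, here is a minimal drift diffusion simulation. All parameter values (drift rate, bound, noise, step size) are illustrative choices for this sketch, not values from the course: noisy evidence accumulates over time until it crosses an upper or lower bound, producing both a choice and a reaction time on each trial.

```python
import random

random.seed(1)

def ddm_trial(drift=0.3, bound=1.0, noise=1.0, dt=0.01, max_time=10.0):
    """One drift diffusion trial via Euler simulation.

    Returns (choice, decision_time): choice is +1 if the upper bound is
    hit, -1 for the lower bound, or 0 if no bound is reached in time.
    """
    x, t = 0.0, 0.0
    while t < max_time:
        # Deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + noise * random.gauss(0, 1) * (dt ** 0.5)
        t += dt
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, t

trials = [ddm_trial() for _ in range(2000)]
upper_fraction = sum(1 for c, _ in trials if c == +1) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"proportion of upper-bound choices: {upper_fraction:.2f}")
print(f"mean decision time: {mean_rt:.2f} s")
```

With a positive drift, most trials end at the upper bound, and the spread of decision times mimics the variability of human reaction times — which is exactly the kind of behavior students can poke at by changing the parameters.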
CI: What are some of the real-world applications of your work?
MN: In our research, we’re using high level models to study neuromodulators: for example, we’re studying norepinephrine, the spikes of which might promote changes in neural networks that shape the “mental context” through which we view the world.
This is an old idea, but we're investigating whether these spikes allow an individual to recognize and respond to context changes, such as might occur when something uncontrollable disrupts our everyday life. This may have some practical applications for understanding populations that struggle to switch contexts mentally, for example, those experiencing schizophrenia. In principle, if we knew how the brain updated its context representations, and we were able to manipulate these systems safely, we might be better positioned to treat various behavioral disorders. However, we are certainly not there yet. In our lab, we're still working on basic mechanisms and collaborating heavily with animal labs, who can see things at a finer resolution, with an eye toward coming up with new ways to manipulate the system, perhaps making it work better in certain situations.