Ahmed Abdelfattah wins a 2023 dynamic imaging grant from the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative (CZI) was founded in 2015 to help solve some of society’s toughest challenges — from eradicating disease and improving education, to addressing the needs of our local communities.    

CZI’s imaging program aims to “support the development of new tools or significant enhancements of existing tools to monitor biological processes in motion, across time, and across spatial scales.”

One of this year’s grantees is Ahmed Abdelfattah, assistant professor of brain science and assistant professor in the Department of Neuroscience, for his project “High-speed Volumetric Voltage Imaging.” Along with co-PIs Adam Cohen of Harvard University and Liam Paninski of Columbia University, Abdelfattah is seeking to develop molecular, optical, and algorithmic tools to map high-speed bioelectrical dynamics in three dimensions in intact tissue and in live specimens.

Carney Institute (CI): Tell us a bit about the science behind this award.   

Ahmed Abdelfattah (AA): Cells in multicellular organisms communicate with each other chemically and electrically. With chemical communication, cells can release peptides or small molecules that relay information about the state of one cell to another cell. Some cells, including neurons, can also communicate through electricity. If a neuron is electrically coupled to its neighbors, then changes in the original neuron’s membrane potential can influence the electrical state of the coupled cells.      

Generally speaking, chemical communication is more straightforward to detect than a cell’s bioelectrical signal; it’s very difficult to measure the membrane voltage, or electrical signal, in a live cell or tissue. If you put an electrode in the cell body of a neuron, you can measure the membrane potential there but not in other compartments of the cell, meaning that you’ll have no idea what’s happening in the neuron’s distal processes. These electrical signals can vary widely across different regions of a neuron, and by recording only from the soma we miss subtle yet important fluctuations in more distal regions, such as the synapse.

In my lab, we're developing new genetically encoded proteins that we can put in the membrane of neurons in a brain to address this unresolved problem. These proteins change their fluorescence when the membrane potential changes. As the fluorescence changes, every pixel the camera captures acts as an electrode, because every pixel samples what's happening to the membrane voltage at a different spatial location in the neuron. In contrast to traditional electrophysiological methods, which can only record from the cell body, our new method lets you optically record what's happening throughout an entire neuron, including its dendritic tree, which is the focus of this grant.

Later on, we can also put that sensor in multiple cells in the brain, allowing you to see what multiple neurons are doing simultaneously and how they're communicating. After that, the challenge will shift to building microscopes that can fit an entire brain, or even an awake and behaving specimen, within the camera's field of view.

CI: Is this mapping happening in one specific area in the brain or multiple areas at the same time?     

AA: In practice, you don't want these proteins to be expressed in every cell in the brain because if that happens then you get a fluorescent blob. You don't know which cell is “lit up”. You want it to be selectively expressed in specific cells that you're interested in.   

Imagine tracking changes in voltage across the membrane as fluorescence signals from thousands of pixels at the same time. When you're recording membrane voltage at 1,000 frames per second for an hour, that's a lot of data to process and analyze. And it's challenging to figure out what the different waveforms or shapes in the data mean, and how to make sense of everything you're getting.
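The scale of that data stream can be made concrete with a quick back-of-envelope calculation. The sensor resolution and bit depth below are illustrative assumptions, not figures from the lab; only the frame rate and the one-hour duration come from the interview:

```python
# Rough estimate of the raw data rate for high-speed voltage imaging.
# Assumed parameters (NOT from the interview): a 512 x 512 pixel sensor
# producing 16-bit (2-byte) samples. From the interview: 1,000 frames
# per second, recorded for one hour.

width, height = 512, 512      # assumed sensor resolution in pixels
bytes_per_pixel = 2           # assumed 16-bit camera output
frames_per_second = 1_000     # acquisition rate mentioned above
seconds = 3_600               # one hour of recording

bytes_total = width * height * bytes_per_pixel * frames_per_second * seconds
terabytes = bytes_total / 1e12

print(f"{terabytes:.1f} TB of raw data per hour")  # about 1.9 TB/hour
```

Even under these modest assumptions, a single hour-long recording approaches two terabytes of raw data, which is why the project pairs the molecular tools with dedicated data-extraction algorithms.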

That's actually where CZI co-grantee Adam Cohen's lab comes in. They’re constructing highly technical imaging systems that will allow us to image those voltage signals in volumes of the brain. The other co-PI on this grant, Liam Paninski, is working on data extraction algorithms to make sense of all the imaging volumes we are collecting.      

Combining the expertise from all three grantees — new imaging systems, new molecular tools, and data extraction algorithms and data analysis — allows us to push the field forward and make sense of the data that we’re acquiring from those sophisticated tools, whether they be new microscopes or new systems that we're making in the lab.    

CI: Are there static images from this work that you can then utilize in some fashion?       

AA: In some ways, they’re static images, but they're in sequence so that they make a movie, a movie about electricity in the brain. We're imaging at micrometer spatial scales and millisecond time resolution.

CI: Taking the 30,000-foot view, what’s the practical application of this research?      

AA: In the most basic sense, this work helps us to understand biology because we can see it in action. Traditionally, people have seen the end point, the terminal point, of, say, Parkinson's disease or another pathology. You take brain tissue and analyze it post hoc, as the result of a disease. What we want to do starts at the beginning: record from an animal model what's happening during the progression of the disease so that we can stop it much earlier.

We're not focused on one disease here. We're just focused on tool development, method development really.   

CI: Where is your lab going next and what are the benchmark goals you're trying to wrap your arms around right now?      

AA: The holy grail would be having an accessible, easy-to-use pipeline for imaging bioelectricity in the brain from multiple cells — from the microscope needed, to the tools needed in the brain, to the data analysis method.  

That's the end goal, which we want to reach in about 10 years. We're working on the different pieces in our labs first, and then hopefully making them accessible to other labs in the very near future, too.