Michael A. Long

Author: Joshua Speiser

Michael A. Long, Ph.D. 2003, is a professor in the Department of Otolaryngology-Head and Neck Surgery and the Thomas and Suzanne Murphy Professor of Neuroscience and Physiology in the Department of Neuroscience and Physiology at NYU Grossman School of Medicine.   

His laboratory examines brain networks during the perception or production of skilled movements (often vocalizations), with a special interest in understanding the cellular and network properties that contribute to these behaviors.

Carney Institute (CI): Tell us a bit about yourself:    

Michael Long (ML): I was born in Chesterfield, Missouri and went to college at a small school in Memphis, Tennessee. Many of my classmates were drawn to medical school, but I was fascinated by the prospect of basic research. After working for a year as a technician at the University of Tennessee, Memphis, I applied to neuroscience programs. I had heard that people were working on interesting things at Brown, in particular in Prof. Barry Connors’ lab. I met with him, got accepted to Brown, and matriculated. And I loved it there.

This was before Carney and even before the Brown Institute for Brain Science (BIBS) existed. There were maybe 15 labs at that time, including the people who “wrote the textbook.” It was an incredibly tightly knit community — my cohort was only seven students — and it was so enriching for exactly that reason. The whole was so much larger than the sum of the parts.

What was also unique about Brown was how integrated everything was. Retreats would include applied mathematicians, neuroscientists, psychologists, and cognitive scientists. There, everyone would sit together, have dinner, and find common scientific ground. At other events, I befriended experts in fields as varied as Russian literature and demography.

After Brown, I went to MIT for my postdoc and, in 2010, I moved to New York as an assistant professor at the New York University School of Medicine. I'm now a full professor with an endowed chair. Time flies.  

CI: From the outside, it seems like the lion’s share of current neuroscience research focuses on computational modeling. Your work does consider models but also deals with whole organisms and ecological systems. Is this atypical in the field?

ML: A small number of model systems – such as mice and fruit flies – are found in most labs. However, historically, animals were selected based on specialized behaviors and abilities. For instance, Jim Simmons has done incredible work throughout his career on the bat model and echolocation, and this model has become in vogue again because of the bat’s 3D spatial navigation abilities.

One ‘expert’ model system that we study in our laboratory is the songbird. We typically focus on a small Australian species called the zebra finch, which learns its courtship song from a tutor, usually its father. The zebra finch brain features several dedicated regions for learning and producing song, which helps to illustrate the link between neural circuit function and behavior.

During my postdoc, I hypothesized that one area called HVC may be important for pacing the song – like a clock – and I tested this idea using focal temperature manipulation. I placed a small cooling device bilaterally over HVC and cooled this area selectively by about two degrees. By slowing the circuit down through cooling, we found that the song stretched out. When we cooled it by another two degrees, it stretched out more. So we saw a monotonic relationship between the temperature of this circuit and the speed of the bird's song. Using this approach, we identified the group of 70,000 neurons that control the circuit enabling the bird to produce the temporal structure of its song.

Since then, we’ve expanded on this research, using tools like two-photon imaging to observe the activity of these neurons during singing. The birds are trained to sing underneath the two-photon microscope, and we can watch as these neurons fire during the song. They fire in a set sequence: one cell will fire at the beginning of the song, another cell will fire immediately after, and another cell will fire immediately after that. It’s a bit like a wave rising and falling.

We have been able to more deeply understand the wiring that leads to sequence generation through electrophysiology and anatomy, including EM connectomics. By observing those connections, we’re able to “reverse engineer” the biological clock in the head of the bird that makes him sing. Recently, we found that a group of 750 thalamic neurons located deep within the core of the brain tips over the first domino in that chain.

CI: Are there applications for this research for those who have suffered from stroke or other ailments affecting their ability to speak?   

ML: Yes. However, a challenge is that zebra finches diverged from humans 320 million years ago. We have tried to do the same cooling experiment in people, and we had predictions about what we might see in patients undergoing tumor removal or a neurosurgical procedure for intractable epilepsy, for example. In these procedures, a neurosurgeon can place a device that cools the surface of the cortex in different places and, in real time, observe changes in speech.

This is important because you must functionally map the brain during this kind of neurosurgical procedure. This is what Wilder Penfield did in 1937 when he used direct cortical stimulation to find areas that are critical for speech production. Stimulation leads to temporary speech arrest: patients can't really talk while the stimulation is ongoing, which tells the surgeon where they should or shouldn’t cut and leads to much better outcomes for those patients.

We tried it, and not only does it work, but we can identify parts of the brain that lead to disfluency. Speech degrades considerably while the cooling is going on but returns very quickly once the cooling probe is removed. Interestingly, cooling other parts of the brain causes large-scale changes in timing. In some sense, patients sound just like our songbirds: they stretch out their vocalizations. To me, that’s an exciting leap, from the birdcage to the bedside. It shows that through this specialized vocalization circuitry, we can learn something that helps doctors do their job in the operating room.

CI: Is this analogous to stories of musicians undergoing brain surgery who continue to play music throughout the operation to help direct and guide the surgeon to avoid certain areas of the brain?  

ML: Well, it is. We wrote a case study on a single patient who was a singer. He was epileptic and required a craniotomy. During the procedure, as he was singing, the neurosurgeon stimulated his brain in different places and found sites on the right temporal lobe where stimulation selectively blocked the song.

Very oddly, when that same area of the brain was cooled, we expected that it would pitch-shift his song, but in fact the song was delivered beautifully, with no problems whatsoever. His speech, however, was pitch-shifted. I don't understand that result at all, but we still published it as a single-patient case study.

CI: It’s spring here in the northeast so, as someone who works on neuroscience and songbirds, do you have a favorite songbird call and why?  

ML: My favorite song is the song of the zebra finch, because that means I'm getting data. It has become so tied in with my dopamine pathway – I know that when I hear that song, we're learning more about the brain.