Neuroscientist Jason Ritt is on a mission to assess the depth of brain science potential at Brown University, and to find ways to facilitate the sharing of ideas and methods across laboratories.
Ritt, scientific director of quantitative neuroscience at the Robert J. and Nancy D. Carney Institute for Brain Science, is looking for opportunities across the Carney community to reduce barriers to collaboration and technology adoption through a mix of teaching, consultation, event organizing, and other initiatives. He focuses primarily on quantitative and computational topics, but his goal is to work with scientists in all areas of brain science research.
“The interdisciplinary complexity of neuroscience research keeps increasing, and it is often unrealistic to think a single student, or even a single lab, can obtain all of the required expertise,” said Ritt, who is also an assistant professor of neuroscience (research).
Since joining Carney in June, Ritt has embarked on lab tours to learn more about each lab’s activities, and he has consulted with various students and postdocs on their experimental design and analysis problems. In one example, he helped a group of scientists fix a small technical problem just by comparing the details of their setup to another group within the same laboratory.
“The right framing can sometimes help people discover common solutions even when they already work together,” he said.
Over the coming year, Ritt will kick off educational activities including topical workshops and seminars, bootcamps, and courses on data science methods and best practices for neuroscientists. He will continue providing individual consulting to students and postdocs as his schedule allows, and engaging in collaborative research. A broader goal, he said, is to find ways to foster self-organizing growth in quantitative capabilities across the Brown brain science ecosystem, such as initiating curated code repositories and "social knowledge networks."
In the following Q&A, Ritt discusses his research on sensory systems neuroscience and neural engineering, and he shares his motivation to pursue basic questions of neural function.
Q. Tell us about your research in sensory systems neuroscience and neural engineering. What projects are you planning for the near future?
A. My lab concentrates on two interconnected lines of research. First, how do neural systems establish the sense of touch in real-world contexts where self-motion is a central part of sensory input? The way we choose to interact with objects preselects what we might learn about them. During such "active sensing," the nervous system does not passively receive information from the environment so much as extract it.
One extreme example is how people might cup their hands over their ears to listen to a specific person in a noisy room. Do you think of your hand as part of your auditory system? We modify and enhance perception not just through internal neural pathways, but also by altering our external connection to the world, in this example by changing the way sound waves bounce into our ears.
Almost all sensation carries an active component, but self-motion is especially integral to touch. My lab studies the dynamical relation between behavior and neural function in the rodent whisker system as animals engage in tactile exploration. We use high-speed video to track behaviors and electrophysiology to measure neural signals, and we "close the loop" by stimulating sensory areas in real time based on the animal's motion, to see how that alters future exploration. The general approach is to study sensory coding not just by seeing what the neurons do, but also by observing how the animal responds to changes in their activity.
The second line of research seeks to advance technology for neurostimulation, and is tightly intertwined with the science. Given some theory connecting neural activity patterns to perceptual events, we can ask how we might drive those patterns with enough precision to induce perception in the absence of true touch. A motivating application is sensory prosthetics and brain-machine interfaces, as a specific instance of the more general question: how could we write information directly into the brain? Success requires overcoming technological challenges—we have focused on the 'underactuation' common to most stimulation systems—but it also, more importantly, tests our understanding of sensory coding in general.
Q. How did you become interested in this area of brain science?
A. My original training was in dynamical systems theory, a branch of mathematics that generalizes the study of change over time. Early on I focused on dynamical problems within the brain, for example, seeking biophysical conditions that lead neurons to be synchronously active. But during my postdoctoral work I came to appreciate how important the dynamics outside the brain, of the body itself, are to understanding what is going on inside. Several historical threads in philosophy, psychology, neuroscience, and computer science have stressed that some intractable 'internal representation' problems can be circumvented when organisms can move and change their environment, and conversely that some simplifications useful in laboratory experiments—including the classic stimulus-response pair—may have poor predictive value when sent out into the real world. In my projects now, I'm still seeking that dynamical theory of brain function, but one as tightly coupled to empiricism as possible, which in particular means keeping the brain in its behavioral context. I sometimes describe myself as a theorist who does experiments.
Q. What are the great challenges in researching neural function during active sensing?
A. A central challenge to studying active sensing is that, by definition, the experimentalist has to allow the subject to choose how they gather information. Much of our methodology, and even our concept of what constitutes a successful experiment, incentivizes removing choice. To get precise repeatability, one must either enforce simplicity, for example by restraining subjects, or discourage choice, for example by training subjects to hold themselves still. The question of what impact these manipulations have is usually difficult to answer. But entirely free behavior carries large and difficult-to-model variability across trials and across subjects, and the ever-present risk of ending up with a mush of suggestive but ultimately uninterpretable outcomes.
The measurement process is in inherent tension with the phenomenon of interest. So the day-to-day experimental challenges encourage thinking about what the goals of neuroscience are and ought to be, and whether this pursuit of compromise between laboratory control and real-world generality is in fact conceptually sound. As a concrete example, with the field's current emphasis on advanced technology and big data, we can ask whether we will learn more by investing in richer measurement systems—such as automated video analysis—and computational frameworks that capture free behaviors, as opposed to continuing to try to cleverly constrain behaviors to fit the frameworks we already have.
Q. What inspires you or keeps you motivated to continue your quest to address basic questions of neural function?
A. Like many of my colleagues, I was drawn to neuroscience by fundamental questions of human experience. What is intelligence? What is the ‘I’ that I am? I have a concept of myself distinct from my body, but what does that really mean? How does all of that experience emerge from this complicated stuff inside our skulls? And does better understanding the machinery inform the way we think about our own thoughts, feelings, and actions? Active sensing is an empirically grounded scientific problem but carries implications for those high-level questions of how we come to understand the world, and how our actions choose what we experience.
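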