
The News Service
38 Brown Street / Box R
Providence RI 02912

401 863-2476
Fax 863-9595

Distributed September 14, 2004
Contact Wendy Lawton

Science of Perception
Features or Creatures: Visual Expertise Taps Same Neural Networks

Is there a special area in the human brain that only processes faces? No, according to Brown University research. When study subjects learned to identify computer-generated figures and then viewed both human faces and those figures, scientists found that the subjects' brains used the same neural mechanisms for both. The study appears in the current online early edition of Proceedings of the National Academy of Sciences.

PROVIDENCE, R.I. — Adult humans and primates are face experts, able to quickly and accurately spot children in a crowded mall or mates in a thick forest. This critical adaptation has given rise to a decades-long scientific debate: Does the human brain contain a region used exclusively for identifying faces?


Differences in appendages and body shape make computer-generated creatures – Greebles – useful in studying how humans recognize, differentiate and learn subtle variations.

Research conducted at Brown University and published in the current online early edition of the Proceedings of the National Academy of Sciences supports a mounting body of evidence that such a “face module” does not exist. Instead, researchers are finding that the same networks of neurons used to process faces are also used by people who are expert in making all kinds of fine visual distinctions, from radiologists to dog show judges.

In a novel experiment, subjects were trained to become experts on computer-generated creatures called Greebles. Over a two-week period, in a series of one-hour sessions, subjects were shown images of Greebles, whose appendages make each one unique. Conceived by Brown Professor Michael J. Tarr and former students Isabel Gauthier and Scott Yu, Greebles vary subtly in shape and configuration, rendering them visually difficult to tell apart and making them an ideal control stimulus for the human face.

After an average of eight sessions, subjects could name individual Greebles quickly and accurately. Then they were connected to 64 electroencephalogram (EEG) leads as they sat in front of a computer. Either a Greeble or a YUFO – another computer-generated creature they’d never seen before – flashed in the center of the screen. A fraction of a second later, a human face appeared to the right or left. Subjects pressed a key corresponding to whether the face appeared to the right or to the left. Each subject saw a total of 240 image pairs in random order.

The results: Electrical activity recorded by the EEG revealed that when subjects viewed faces alongside Greebles – objects they were expert in – the images competed for resources, decreasing the electrical response from the occipitotemporal cortex, the area of the brain where faces are processed. Put another way, the computational power used for face recognition appears to be in play in the expert recognition of other objects.

Another facet of the experiment reinforces this conclusion. Prior to Greeble training, subjects viewed both faces and Greebles. Yet for novices, the presence of Greebles did not affect the neural response to faces. So experience makes the difference, Tarr said.

Tarr, the Sidney A. Fox and Dorothea Doctors Fox Professor of Ophthalmology and Visual Sciences and professor of cognitive and linguistic sciences, said similar results have been recorded in studies of birders and car enthusiasts, who are experts at quickly identifying finches and Fords. These findings, Tarr said, make evolutionary sense.

“Evolution is opportunistic,” he said. “Why restrict this system in our brains if we can gain other benefits from it? We can use it to tell the difference between good fruit and poisonous fruit or the difference between two tools.”

Tarr said face recognition research could shed light on autism, which has been linked to dysfunction in face recognition abilities. These experiments, Tarr said, are also of interest to government agencies involved in national security. Understanding why some analysts are better at scanning images – from security cameras or spy satellites – could lead to better training programs.

Tarr conducted the research at Brown along with graduate student Chun-Chia Kung. Bruno Rossion, a former Brown postdoctoral fellow now at Université catholique de Louvain in Belgium, led the project. Funding came from the Belgian National Fund for Scientific Research, the Communauté Française de Belgique, the National Science Foundation and the Perceptual Expertise Network, which receives funding from the McDonnell Foundation.


