When Science Reporting Is Cool, and Wrong

by w3woody

Scientists now know how your brain differentiates faces

Researchers at Caltech have taken a huge step in figuring out how the brain processes faces. In a study published this week in Cell, the team found that the brain only needs around 200 neurons to differentiate faces from each other.

First, the research that was done at Caltech was very cool. (And I’m not just saying that because I graduated from Caltech.)

But second, the Engadget reporting gets the discovery completely wrong.

From the post made at Caltech’s own web site: Cracking the Code of Facial Recognition

In 2003, Tsao and her collaborators discovered that certain regions in the primate brain are most active when a monkey is viewing a face. The researchers dubbed these regions face patches; the neurons inside, they called face cells. Research over the past decade had revealed that different cells within these patches respond to different facial characteristics. For example, some cells respond only to faces with eyes while others respond only to faces with hair.

“But these results were unsatisfying, as we were observing only a shadow of what each cell was truly encoding about faces,” says Tsao. “For example, we would change the shape of the eyes in a cartoon face and find that some cells would be sensitive to this change. But cells could be sensitive to many other changes that we hadn’t tested. Now, by characterizing the full selectivity of cells to faces drawn from a realistic face space, we have discovered the full code for realistic facial identity.”

In other words, the existence of these “face patches”, small clumps of neurons which encode faces for facial recognition, was known for over a decade.

The key insight is how faces are encoded in these small 200-ish clumps of neurons:

Two clinching pieces of evidence prove that the researchers have cracked the full code for facial identity. First, once they knew what axis each cell encoded, the researchers were then able to develop an algorithm that could decode additional faces from neural responses. In other words, they could show a monkey a new face, measure the electrical activity of face cells in the brain, and recreate the face that the monkey was seeing with high accuracy.

Second, the researchers theorized that if each cell was indeed responsible for coding only a single axis in face space, each cell should respond exactly the same way to an infinite number of faces that look extremely different but all have the same projection on this cell’s preferred axis. Indeed, Tsao and Le Chang, postdoctoral scholar and first author on the Cell paper, found this to be true.
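The decoding result described above is, at its heart, linear algebra. Here is a minimal sketch of the idea, with entirely hypothetical numbers and code: faces are modeled as points in a 50-dimensional "face space" (the dimensionality mentioned later in the press release), each model "face cell" responds in proportion to the face's projection onto its preferred axis, and with ~200 cells the face can be recovered from the population response by least squares. This is an illustration of the concept, not the researchers' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: faces are points in a 50-dimensional "face space".
# Each model "face cell" fires in proportion to the face's projection
# onto that cell's preferred axis.
dims, n_cells = 50, 200
axes = rng.normal(size=(n_cells, dims))   # each row: one cell's preferred axis

def cell_responses(face):
    """Linear responses of all 200 cells to one face vector."""
    return axes @ face

def decode_face(responses):
    """Reconstruct the face vector from the population response (least squares)."""
    face_hat, *_ = np.linalg.lstsq(axes, responses, rcond=None)
    return face_hat

face = rng.normal(size=dims)              # a novel face the model has never seen
recovered = decode_face(cell_responses(face))
print(np.allclose(face, recovered))       # True: 200 linear readouts over-determine 50 dims
```

Because there are more cells than face-space dimensions, the responses over-determine the face, which is why the reconstruction is exact in this toy model and highly accurate in the real experiment.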

In other words, the researchers didn't just discover that the brain only needs around 200 neurons. That's been known since 2003.

No, what is cool is that the researchers discovered how faces are encoded in those patches of roughly 200 neurons: by identifying the "vector space" of facial variations these neurons use to recognize faces.

And that "how" is the difference between saying "an internal combustion engine causes a car to move" and knowing how the cylinders, spark plugs, and gasoline combine to make the internal combustion engine work.

I point this out for two reasons.

First, it’s a really cool bit of research which may have substantial implications.

Second, it shows how the popular press often completely screws up reporting. Of course it’s not their fault that they stopped with the first half of the third paragraph in the press release:

The central insight of the new work is that even though there exist an infinite number of different possible faces, our brain needs only about 200 neurons to uniquely encode any face, with each neuron encoding a specific dimension, or axis, of facial variability. …

Without reading the second half:

… In the same way that red, blue, and green light combine in different ways to create every possible color on the spectrum, these 200 neurons can combine in different ways to encode every possible face—a spectrum of faces called the face space.

Or without reading the rest of the article, which covers the core insight: the "how," not just the "what."

Undoubtedly the person at Engadget was on a tight deadline and didn't have the intelligence to understand things like "linear algebra," so they missed the key insight:

“We were stunned that, deep in the brain’s visual system, the neurons are actually doing simple linear algebra. Each cell is literally taking a 50-dimensional vector space—face space—and projecting it onto a one-dimensional subspace. It was a revelation to see that each cell indeed has a 49-dimensional null space; this completely overturns the long-standing idea that single face cells are coding specific facial identities. Instead, what we’ve found is that these cells are beautifully simple linear projection machines.”