Scientists make a pocket-sized AI brain with help from monkey neurons

Researchers using data from macaque monkeys were able to shrink an AI vision model to a tiny fraction of its original size.

A human brain consumes less power than a light bulb, while artificial intelligence systems guzzle electricity to do the same tasks.

Now, scientists have created a highly efficient AI model that hints at how living brains are able to do so much with so little, a team reports in the journal Nature.

The model, which mimics a part of the brain's visual system, started out using 60 million variables. But the team was able to compress it into a version that performed nearly as well using just 10,000 variables.

"That is incredibly small," says Ben Cowley, an author of the study and an assistant professor at Cold Spring Harbor Laboratory. "This is something we could send in a tweet or an email."

The compact model also appears to work more like a living brain, which could help scientists study what goes wrong in diseases like Alzheimer's, Cowley says.

More broadly, if the AI model really does replicate strategies found in nature, it could help scientists understand the inner workings of human brains, says Mitya Chklovskii, a group leader at the Simons Foundation's Flatiron Institute, who was not involved in the study.

Compact, biology-inspired models of the brain could also lead to "more powerful and more humanlike artificial intelligence," says Chklovskii, who is also on the faculty at NYU Medical Center.

Monkey data

The study is part of an effort to understand the human visual system, which takes in bits of light and transforms them into something we recognize, like grandma or the Grand Canyon.

Cowley says scientists who study the visual system have been trying to answer questions like, "How do you recognize a cat?" or "How do you recognize a dog?"

There's no good way to watch a human brain do this. So Cowley has been looking at artificial intelligence systems able to accomplish the same sort of tasks.

But there's a problem: "We're very impoverished in our understanding of how these AI systems work," Cowley says, "much like our own brain."

Working with researchers at Carnegie Mellon University and Princeton University, Cowley created an AI model that his team could understand. It simulates just one part of the visual system, which features cells called V4 neurons.

"They encode colors and textures and curves and very complicated proto-objects," Cowley says.

Existing AI systems can do the same thing using deep neural network models, which require powerful computers and learn by considering a huge range of possibilities. But Cowley's team was after something more efficient.

"We want to take these big clunky models and try to compress it down into a much smaller, compact form," he says.

They started with a model trained on data from macaque monkeys. Then they looked for parts of the model that were redundant or unnecessary. They also applied statistical techniques like those used to compress digital photos.
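The article doesn't detail which statistical techniques the team used, but one family of methods "like those used to compress digital photos" is low-rank approximation via the singular value decomposition, which is closely related to lossy image compression. The sketch below is purely illustrative (the matrix sizes, the rank, and the variable names are invented for this example) and shows how discarding small singular values can shrink a weight matrix's parameter count while keeping its behavior nearly intact:

```python
import numpy as np

# Illustrative only: not the paper's actual method. Low-rank SVD
# truncation is one "photo-compression-style" statistical technique
# for shrinking a model's weight matrices.

rng = np.random.default_rng(0)

# A hypothetical 256x256 weight matrix with hidden low-rank structure,
# standing in for a redundant layer in a large vision model.
true_rank = 8
W = rng.normal(size=(256, true_rank)) @ rng.normal(size=(true_rank, 256))

# Factor W and keep only the k largest singular values/vectors.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_small = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Storing U[:, :k], s[:k], Vt[:k, :] replaces 256*256 numbers
# with k*(256 + 256 + 1) numbers.
full_params = W.size
compressed_params = k * (U.shape[0] + Vt.shape[1] + 1)

# Relative reconstruction error: near zero when k matches the true rank.
error = np.linalg.norm(W - W_small) / np.linalg.norm(W)
print(full_params, compressed_params, error)
```

Here the compressed factors use roughly 4,100 numbers instead of about 65,500, a reduction in the same spirit as going from 60 million variables to 10,000, though the real model's compression pipeline was more involved than a single matrix factorization.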

The result: a model small enough to put in an email attachment.

A compact model with fewer secrets

Because the model is so small and simple, the team was able to get a glimpse of what its artificial neurons were doing.

Some V4 neurons, for example, were responding to shapes with strong edges and lots of curves — the sort of shapes you might see in the produce section of the grocery store.

"When you go into the supermarket and you see the arranged fruit, your V4 neurons love that," Cowley says. "They love arranged fruit. They love all the curves of the apples [and] oranges."

Other V4 neurons seemed to respond only to small dots in an image.

"This was quite interesting to us because primates are very drawn to eyes," Cowley says.

The specialized nature of these V4 neurons may help explain how human and other primate brains are able to make sense of what they see without relying on massive computing power.

The findings also may have implications for artificial intelligence.

"If our brains have less complex models and yet can do more than these AI systems, that tells us something about our AI systems," Cowley says. Namely, they could probably be smaller and simpler yet still do a better job interpreting what they see.

For example, self-driving cars might be able to run on less powerful computers, he says, while correctly distinguishing a pedestrian from an airborne plastic bag.

But AI systems need to do more than shrink in order to perform as well as a human brain, Chklovskii says.

For example, he says, a person can easily recognize a friend's face in any setting and from many angles, even if that friend has acquired a suntan or is sporting a new haircut.

AI systems struggle with this sort of task, even when powered by supercomputers.

That may be because current AI models are based on an understanding of the human brain from the 20th century, Chklovskii says.

"Since then, we learned a lot more about the brain," he says. "So maybe we should update the foundations of the artificial networks."

Copyright 2026 NPR

Jon Hamilton is a correspondent for NPR's Science Desk. Currently he focuses on neuroscience and health risks.