Two parallel quests to understand learning — in machines and in our own heads — are converging in a small group of scientists who think that artificial intelligence may hold an answer to the deep-rooted mystery of how our brains learn.

Why it matters: If machines and animals do learn in similar ways — still an open question among researchers — figuring out how could simultaneously help neuroscientists unravel the mechanics of knowledge or addiction, and help computer scientists build much more capable AI.

The big picture: For decades, researchers compared human and machine learning and largely rejected the notion that they are closely linked. At the center of the question is the credit-assignment problem: the enigma of how the brain knows which parts of itself need to change in order to better accomplish a task.

In AI, a major method for credit assignment is known as error backpropagation, or backprop.

After backprop fueled major advances in AI image recognition, some scientists started revisiting whether the brain could be doing something similar.

“There is a big undercurrent in neuroscience [saying] we should go back to neural networks,” says Konrad Kording, a neuroscientist at UPenn, referring to a reigning AI technique that relies on backprop. Backprop allows machines to learn from their mistakes. If an actual outcome differs from the computer’s predicted outcome, information about what went wrong gets passed back through layers in the neural network, adjusting the system accordingly.
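To make that mechanic concrete, here is a minimal sketch of backprop in Python with NumPy. The two-layer network, toy data, and learning rate are illustrative assumptions, not drawn from any system described in this article; the point is only to show error flowing backward through the layers and adjusting the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 inputs mapped to 1 target (values are arbitrary illustrations).
X = rng.normal(size=(8, 2))
y = (X[:, :1] - X[:, 1:]) * 0.5

# A tiny two-layer network: input -> hidden -> output.
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr = 0.1  # learning rate (assumed for illustration)

for step in range(200):
    # Forward pass: compute the network's prediction.
    h = np.tanh(X @ W1)          # hidden-layer activity
    y_hat = h @ W2               # predicted outcome

    # Error: how far the prediction is from the actual outcome.
    err = y_hat - y

    # Backward pass: send the error back through the layers,
    # working out how much each weight contributed to it.
    grad_W2 = h.T @ err
    err_hidden = (err @ W2.T) * (1 - h ** 2)  # error signal reaching the hidden layer
    grad_W1 = X.T @ err_hidden

    # Adjust the system accordingly: nudge each weight to reduce the error.
    W1 -= lr * grad_W1 / len(X)
    W2 -= lr * grad_W2 / len(X)
```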

Noticing errors and spreading information about them are central to the brain, too.

What’s happening: In a flurry of recent papers, researchers propose tweaking or approximating backprop to explain how the brain learns from mistakes.


One central debate is over whether neurons, which communicate through chemical signals, can simultaneously transmit information to another neuron while receiving feedback from that same neuron about what went wrong.

The researchers chasing this line of inquiry say there are biologically plausible ways neurons could do this to solve the credit assignment problem.
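One widely studied way to approximate backprop without requiring feedback to travel back through the very same connections is often called feedback alignment: a separate, fixed set of random feedback weights carries the error signal instead. The sketch below reuses the toy setup from the earlier example; feedback alignment is a named technique from the machine-learning literature, not necessarily the specific proposal of the researchers quoted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy setup as before (illustrative assumptions, not from the study).
X = rng.normal(size=(8, 2))
y = (X[:, :1] - X[:, 1:]) * 0.5

W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

# Feedback alignment: a fixed random matrix carries the error backward,
# so the feedback path does not have to mirror the forward weights W2.
B = rng.normal(scale=0.5, size=(4, 1))
lr = 0.1

for step in range(200):
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    err = y_hat - y

    grad_W2 = h.T @ err
    # The backward pass uses B instead of W2.T -- the "biologically plausible" tweak.
    err_hidden = (err @ B.T) * (1 - h ** 2)
    grad_W1 = X.T @ err_hidden

    W1 -= lr * grad_W1 / len(X)
    W2 -= lr * grad_W2 / len(X)
```

In simulations of this kind, the forward weights tend to adapt until the random feedback pathway delivers a useful error signal, which is part of why such approximations are discussed as candidates for what the brain might do.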

But so far, Kording cautions, “the experimental evidence for backprop is thin.”

A trio of scientists in Toronto and a DeepMind researcher are searching for that evidence in the brains of mice. In their experiment, carried out at the Allen Institute for Brain Science in Seattle, animals watch patterns on a screen as their brain activity is recorded.

The animals see a consistent pattern of moving shapes for hours — then, an aberration, like a square going the wrong way.

Preliminary results suggest there is in fact a specific, measurable signal that passes between neurons only when the animals witness an “error.”

“We know the brain has to have some mechanism of credit assignment,” says Joel Zylberberg, a professor at York University in Toronto. “The most promising candidate still seems to be these top-down feedback signals.” […]

Illustration: Aïda Amer/Axios