A.I. Is Getting Better at Mind-Reading

Think of the words whirling around in your head: an unvoiced impression of a best friend's new partner. Now imagine that someone could listen in.

On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions of the brain.

Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking about writing. But the new language decoder is one of the first that does not rely on implants. In the study, it was able to turn a person's imagined speech into actual speech, and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening on screen.

“This is more than just a language stimulus,” said Alexander Huth, a neuroscientist at the university who led the study. “We're getting at the idea of what's happening, and the fact that that's possible is very exciting.”

The study centered on three participants, who came to Dr. Huth's lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in that brain activity to the words and phrases the participants had heard.

Large language models such as OpenAI's GPT-4 and Google's Bard are trained on vast amounts of text to predict the next word in a sentence or phrase. In the process, a model builds a map of how words relate to one another. A few years ago, Dr. Huth noticed that certain pieces of these maps, so-called contextual embeddings, which capture the semantic features, or meanings, of phrases, could be used to predict how the brain activates in response to language.
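To make that idea concrete, here is a minimal sketch of such an “encoding model,” assuming a simple ridge regression from embedding vectors to per-voxel fMRI responses. The arrays, dimensions and random data below are hypothetical stand-ins, not the study's actual features or preprocessing.

```python
# Minimal sketch of an "encoding model": predicting fMRI responses from
# contextual embeddings with ridge regression. All data here are
# hypothetical stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical data: one embedding vector per fMRI time point (TR),
# and one BOLD measurement per voxel at each TR.
n_trs, embed_dim, n_voxels = 1200, 768, 500
embeddings = rng.normal(size=(n_trs, embed_dim))  # embeddings of heard speech
bold = rng.normal(size=(n_trs, n_voxels))         # measured brain responses

# Fit a linear map from embedding space to every voxel at once.
train, test = slice(0, 1000), slice(1000, 1200)
model = Ridge(alpha=10.0)
model.fit(embeddings[train], bold[train])

# Held-out accuracy indicates how well language features explain each voxel.
predicted = model.predict(embeddings[test])
print("mean R^2 across voxels:", r2_score(bold[test], predicted))
```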

In a fundamental sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the study, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another A.I. to translate the participants' fMRI images into words and phrases. The researchers tested the decoder by having the participants listen to new recordings, then seeing how closely the translation matched the actual transcript.
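One rough way to picture that reversal is sketched below: have a language model propose candidate phrases, predict the brain activity each candidate would evoke using an encoding model like the one above, and keep the guesses whose predictions best match the observed scan. The functions, weights and data here are hypothetical stand-ins, not the study's actual pipeline.

```python
# Toy "guess and score" decoding step: rank candidate phrases by how well
# their predicted brain activity matches an observed fMRI signal.
# Everything here is a hypothetical stand-in for illustration only.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical contextual-embedding function (e.g., from an LLM)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=768)

def predict_bold(embedding: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Encoding model: linear map from an embedding to predicted voxel activity."""
    return embedding @ weights

def decode_step(candidates: list[str], observed: np.ndarray,
                weights: np.ndarray, beam_width: int = 3) -> list[str]:
    """Keep the candidates whose predicted activity correlates best
    with the observed fMRI signal."""
    def score(text: str) -> float:
        pred = predict_bold(embed(text), weights)
        return float(np.corrcoef(pred, observed)[0, 1])
    return sorted(candidates, key=score, reverse=True)[:beam_width]

# Usage: rank candidate phrases against one observed scan frame.
weights = np.random.default_rng(1).normal(size=(768, 500))
observed = np.random.default_rng(2).normal(size=500)
print(decode_step(["i went to the window", "we drove all night",
                   "she opened the door"], observed, weights))
```

Repeating such a step word by word while keeping several runners-up amounts to a beam search, one plausible way a decoder could recover the gist of a passage without getting every word right.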

Almost every word in the decoded script was off, but the meaning of a passage was regularly preserved. Essentially, the decoder was paraphrasing.

Original transcript: “I got up off the air mattress and pressed my face against the glass of my bedroom window. I expected eyes to stare at me, but instead I found darkness.”

Decoded from brain activity: “I just went to the window and kept opening the glass.”

Participants were also asked to silently imagine telling a story while being scanned; they then repeated the story aloud for reference. Here, too, the decoding model captured the gist of the unspoken version.

Participant's version: “Look for a message from my wife saying that she had changed her mind and that she was coming back.”

Decoded version: “I thought that when I met her, for some reason she would come to me and say that she misses me.”

Finally, the subjects watched a brief, silent animated film while undergoing another fMRI scan. By analyzing their brain activity, the language model could decode a rough synopsis of what they were viewing.

That result suggests the A.I. decoder was capturing not just words but also meaning. “Language perception is an externally driven process, while imagination is an active internal process,” Dr. Nishimoto said. “And the authors showed that the brain uses common representations across these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the study, said that was “the high-level question.”

“Can we decode meaning from the brain?” she continued. “In some ways, they show that, yes, we can.”

Dr. Huth and his colleagues acknowledged limits to this method of language decoding. For one, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done on individuals: when the researchers tried to use a decoder trained on one person to read the brain activity of another, it failed. This suggests that every brain has its own way of representing meaning.

Participants were also able to throw off their decoders by thinking of other things, shielding their internal monologues. A.I. may be able to read our minds, but for now it will have to read them one at a time, and with our permission.
