
Scientists create AI tool to read minds without invasive methods.

Scientists have created a new AI tool that can turn brain activity into text. The tool can decode the thoughts of people who are listening to a story or merely imagining one. According to The Guardian, it is the first technique to read thoughts without invasive surgery. The tool uses a transformer architecture similar to the models behind ChatGPT and Bard, and it could help people who cannot speak because of strokes or other conditions.

The tool works by using fMRI scans to measure brain activity; the decoder was first trained on many hours of podcast recordings. It does not produce exact transcripts, but it captures the gist of what is being said or thought. The researchers said they were surprised and excited by how well the tool works.

According to Mr Huth, the language decoder his team developed “operates on a very different level”. He told reporters that the system captures the concepts, the meanings and the semantics of what is being said or thought.

How does the new system operate?

The researchers used an fMRI machine to scan the brains of three people who listened to spoken stories, mostly podcasts, for 16 hours in total. This helped them to see how different parts of the brain that deal with language responded to words, phrases and meanings.

They used this data to train a neural network language model built on GPT-1, an early version of the AI technology that was later used in the popular ChatGPT. The model learned to predict how each person’s brain would respond to speech; to decode, it generates candidate word sequences, predicts the brain activity each would evoke, and keeps whichever candidates best match the measured response.
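The predict-and-compare loop described above can be sketched in miniature. This is purely illustrative: the real system uses GPT-1 features and fMRI responses, whereas here the "encoding model" (`fake_encoding_model`), the similarity score, and all data are made up for the example.

```python
import numpy as np

def fake_encoding_model(text, dim=8):
    """Stand-in for an encoding model that predicts a brain response
    from text. Here we just derive a deterministic vector from the words
    (the real model maps language features to fMRI activity)."""
    vec = np.zeros(dim)
    for word in text.split():
        idx = sum(ord(c) for c in word) % dim  # toy word hashing
        vec[idx] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def score(candidate, observed):
    """Similarity between the response the model predicts for a
    candidate sentence and the response actually observed."""
    return float(np.dot(fake_encoding_model(candidate), observed))

def decode(observed, candidates, beam_width=1):
    """Keep the candidates whose predicted responses best match the
    observed activity (a one-step version of beam search)."""
    ranked = sorted(candidates, key=lambda c: score(c, observed), reverse=True)
    return ranked[:beam_width]

# Simulate a listener "hearing" a sentence, producing a brain response.
heard = "i do not have my license yet"
observed = fake_encoding_model(heard)

candidates = [
    "she has not started to learn to drive yet",
    "i do not have my license yet",
    "the weather was sunny all week",
]
best = decode(observed, candidates)
print(best[0])
```

The key design point mirrored here is that the decoder never reads words out of the brain directly: it only compares predicted responses for candidate text against the measured response, which is why it captures the gist rather than an exact transcript.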

To check how well the model worked, each participant listened to a new story in the fMRI machine. The lead author Jerry Tang said the decoder could “capture the main idea of what the user was hearing”. For example, when the participant heard the words “I don’t have my driver’s license yet”, the model returned “she has not even started to learn to drive yet”.

What tools are already in use?

Before the success of the latest language decoding system, people had to rely on other systems that required surgical implants. One such system was released in 2019 to help those who lost their voice due to paralysis and conditions such as throat cancer, amyotrophic lateral sclerosis (ALS), and Parkinson’s disease.

The technology used implanted electrodes to identify relevant neural signals from brain activity. These signals were then decoded into estimated movements of lips, tongue, larynx, and jaw and finally transformed into synthetic speech.

Scientists issue warning

David Rodriguez-Arias Vailhen, a bioethics professor at Spain’s Granada University who was not involved in the research, said the latest brain-computer interface went beyond what previous interfaces had achieved. News agency AFP quoted him as saying that this brings us closer to a future in which machines are able to read minds and transcribe thought. Scientists also warned that this could take place against people’s will, such as while they are sleeping.

However, the researchers said they had anticipated such concerns and ran tests showing that the decoder did not work on a person if it had not already been trained on their own particular brain activity. The three participants were also able to easily foil the decoder.

While listening to one of the podcasts, the participants were asked to count by sevens, name and imagine animals, or silently tell a different story. All of these tactics “sabotaged” the decoder, the researchers said.
