By John von Radowitz
Wednesday, 1 February 2012
A first step has been taken towards hearing imagined speech using a form of electronic telepathy, it has been claimed.
Scientists believe in future it may be possible to "decode" the thoughts of brain-damaged patients who cannot speak.
In a study described by one British expert as "remarkable", US researchers were able to reconstruct heard words from brain wave patterns.
A computer program was used to predict what spoken words volunteers had listened to by analysing their brain activity.
Previous research has shown that imagined words activate much the same brain areas as words that are actually uttered.
The hope is that imagined words can be uncovered by "reading" the brain waves they produce.
"This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak," said Professor Robert Knight, one of the researchers from the University of California at Berkeley.
"If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit."
However, the study involved the use of electrodes inserted through the skull on to the brains of epileptic patients.
A system sophisticated enough to achieve the same result non-invasively remains a long way off.
Prof Knight acknowledged that the research was at an early stage and controlling movement with brain activity was "relatively simple" compared with reconstructing language. But he added: "This experiment takes that earlier work to a whole new level."
The findings are reported today in the online journal PLoS Biology (Public Library of Science Biology).
Scientists enlisted the help of people undergoing brain surgery to investigate the cause of untreatable epileptic seizures.
To pinpoint where the seizures were being generated, neurosurgeons cut a hole in the skull and placed an array of electrodes on to the surface of the brain.
In the case of 15 seizure patients, brain activity from the temporal lobe was recorded as they listened to five to 10 minutes of conversation.
Two different computational models were devised to match the spoken sounds to patterns of activity from the electrodes.
Patients then heard a single word, and the models were used to predict what it was from the earlier analysis.
The better of the two programs reproduced a synthesised sound realistic enough for the scientists to guess the original word.
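The article does not describe the models themselves, but decoding of this general kind is often framed as a regression from electrode activity to the audio spectrogram, with the reconstruction then matched against candidate words. The sketch below is a minimal illustration of that idea in Python using random stand-in data; the data shapes, the ridge regression, and the candidate-matching step are all assumptions for illustration, not the study's actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge

# All shapes and data here are hypothetical stand-ins, not the study's data.
rng = np.random.default_rng(0)
n_samples = 2000      # time points of recorded conversation
n_electrodes = 64     # electrodes over the temporal lobe
n_freq_bins = 32      # spectrogram frequency bins of the heard audio

X = rng.standard_normal((n_samples, n_electrodes))   # neural activity
Y = rng.standard_normal((n_samples, n_freq_bins))    # spectrogram of heard speech

# Train a linear decoding model: electrode activity -> audio spectrogram.
model = Ridge(alpha=1.0)
model.fit(X, Y)

# Decode a new word: predict its spectrogram from brain activity alone,
# then pick the candidate word whose true spectrogram matches best.
X_word = rng.standard_normal((50, n_electrodes))     # activity while hearing one word
predicted = model.predict(X_word)                    # reconstructed spectrogram

candidates = {w: rng.standard_normal(predicted.shape) for w in ["hello", "water"]}
best = max(candidates,
           key=lambda w: np.corrcoef(candidates[w].ravel(), predicted.ravel())[0, 1])
print("decoded word guess:", best)
```

In real analyses of this kind the inputs would be features extracted from the recordings, such as band-limited power, rather than raw samples, but the train-then-match structure sketched here is the same in spirit.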
There is evidence that the brain breaks sound down into its component acoustic frequencies, with speech spanning the range from about 1 hertz (one cycle per second) to 8,000 hertz.
Study leader Dr Brian Pasley, also from Berkeley, said: "We are looking at which cortical sites are increasing activity at particular acoustic frequencies, and from that, we map back to the sound."
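That frequency-by-frequency picture of a sound is essentially a spectrogram. Purely to illustrate the principle, and not as the study's code, the short sketch below decomposes a stand-in sound into frequency bands with a short-time Fourier transform and then inverts the transform to resynthesise the waveform; the sampling rate and test tone are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000                                   # assumed sampling rate (Hz)
t = np.arange(fs) / fs
sound = np.sin(2 * np.pi * 440 * t)          # stand-in "speech": a 440 Hz tone

# Break the sound into its component frequencies over time (a spectrogram),
# roughly the representation the auditory cortex is thought to track.
f, times, Z = stft(sound, fs=fs, nperseg=512)
print(f"spectrogram covers {f[0]:.0f}-{f[-1]:.0f} Hz in {len(f)} bins")

# Mapping back: if the energy in each frequency band over time can be
# estimated (from cortical activity in the study; from Z itself in this
# demo), the waveform can be resynthesised by inverting the transform.
_, reconstructed = istft(Z, fs=fs, nperseg=512)
print("reconstruction error:", float(np.max(np.abs(reconstructed[:fs] - sound))))
```

With a 16,000 Hz sampling rate the spectrogram here happens to span 0 to 8,000 Hz, the same upper limit quoted for speech above.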
He compared the technique to a pianist "hearing" the music a colleague is playing in a sound-proof room simply by looking at the keys.
Dr Pasley added: "This research is based on sounds a person actually hears, but to use this for a prosthetic device, these principles would have to apply to someone who is imagining speech.
"There is some evidence that perception and imagery may be pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesise the actual sound a person is thinking, or just write out the words with a type of interface device."
The research builds on previous work on the way animals encode sound in the brain's auditory cortex.
Scientists have used brain recordings to work out which words were being read aloud to ferrets, even though the animals were unable to understand them.
British expert Professor Jan Schnupp, from Oxford University, said: "This study by Pasley and others is really quite remarkable. Neuroscientists have of course long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity. But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications.
"Some may worry ... that this sort of technology might lead to 'mind-reading' devices which could one day be used to eavesdrop on the privacy of our thoughts. Such worries are unjustified. It is worth remembering that Pasley and colleagues could only get their technique to work because epileptic patients had co-operated closely and willingly with them, and allowed a large array of electrodes to be placed directly on the surface of their brains. No non-invasive brain scanning technique in existence is able to provide the very fine temporal and the spatial resolution needed to make proper mind-reading possible."