Mind Reading Computer

Updated: 07-04-2018
Published by: Computer Science Engineering Seminars


Why mind reading?
The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands. Imagine a future where we are surrounded by mobile phones, cars and online services that can read our minds and react to our moods.
How would that change our use of technology and our lives? We are working with a major car manufacturer to implement this system in cars to detect driver mental states such as drowsiness, distraction and anger.
Current projects in Cambridge are considering further inputs such as body posture and gestures to improve the inference. We can then use the same models to control the animation of cartoon avatars. We are also looking at the use of mind-reading to support on-line shopping and learning systems.
The mind-reading computer system may also be used to monitor and suggest improvements in human-human interaction. The Affective Computing Group at the MIT Media Laboratory is
developing an emotional-social intelligence prosthesis that explores new technologies to augment and improve people’s social interactions and communication skills.

How does it work?
The mind reading actually involves measuring the volume and oxygen level of the blood around the subject's brain, using a technology called functional near-infrared spectroscopy (fNIRS).
The user wears a futuristic-looking headband that sends near-infrared light into the tissues of the head, where it is absorbed by active, blood-filled tissue. The headband then measures how much light was not absorbed, letting the computer gauge the metabolic demands the brain is making.
The results are often compared to those of an MRI, but can be gathered with lightweight, non-invasive equipment.
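In outline, this kind of measurement rests on the modified Beer-Lambert law, which relates how much light the tissue absorbed to changes in blood chromophore concentration. The sketch below illustrates the arithmetic only; the function names and every numeric constant (extinction coefficient, path length, differential pathlength factor) are illustrative assumptions, not calibrated fNIRS values.

```python
import math

def attenuation(intensity_in, intensity_out):
    """Optical density change: how much of the emitted light was absorbed."""
    return math.log10(intensity_in / intensity_out)

def concentration_change(delta_a, epsilon, path_len_cm, dpf):
    """Invert the modified Beer-Lambert law for one wavelength/chromophore:
    delta_A = epsilon * delta_c * path_length * DPF, solved for delta_c."""
    return delta_a / (epsilon * path_len_cm * dpf)

# Illustrative numbers: the source emits 1.0 (arbitrary units),
# the detector on the headband sees 0.25 coming back.
delta_a = attenuation(1.0, 0.25)
delta_c = concentration_change(delta_a, epsilon=1.5, path_len_cm=3.0, dpf=6.0)
print(round(delta_a, 3), round(delta_c, 4))
```

The computer would repeat this calculation continuously at two wavelengths to separate oxygenated from deoxygenated hemoglobin, which is what lets it track the brain's metabolic demands over time.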
NASA has developed a computer program that can read silently spoken words by analyzing nerve signals in the mouth and throat.
Preliminary results show that using button-sized sensors, which attach under the chin and on the side of the Adam's apple, it is possible to pick up and recognize nerve signals and patterns from the tongue and vocal cords that correspond to specific words.
"Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement," says Chuck Jorgensen, a neuroengineer at NASA's Ames Research Center in Moffett Field, California, in charge of the research. Just the slightest movement in the voice box and tongue is all it needs to work, he says.
Web search
For the first test of the sensors, scientists trained the software program to recognize six words - including "go", "left" and "right" - and 10 numbers. Participants hooked up to the sensors silently said the words to themselves and the software correctly picked up the signals 92 per cent of the time.
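The article does not describe NASA's actual recognition algorithm, but the general idea of matching an incoming signal against learned word patterns can be sketched as simple template matching: average the training examples for each word into a centroid, then assign an unknown signal to the nearest centroid. The feature values below are made-up stand-ins for processed nerve-signal data.

```python
import math

def centroid(examples):
    """Average a list of feature vectors into one template vector."""
    n = len(examples)
    return [sum(col) / n for col in zip(*examples)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(signal, templates):
    """Return the word whose template is closest to the signal."""
    return min(templates, key=lambda word: distance(signal, templates[word]))

# Toy training data: two features per recorded "signal" for each word.
training = {
    "go":    [[0.9, 0.1], [1.0, 0.2]],
    "left":  [[0.1, 0.9], [0.2, 1.0]],
    "right": [[0.5, 0.5], [0.6, 0.4]],
}
templates = {word: exs and centroid(exs) for word, exs in training.items()}

print(classify([0.95, 0.15], templates))  # nearest to the "go" template
```

A real system would extract far richer features from the raw sensor traces and use a trained statistical classifier, but the match-against-learned-patterns structure is the same.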
Then researchers put the letters of the alphabet into a matrix with each column and row labeled with a single-digit number. In that way, each letter was represented by a unique pair of number co-ordinates. These were used to silently spell "NASA" into a web search engine using the program.
"This proved we could browse the web without touching a keyboard."
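The coordinate-spelling scheme above is straightforward to sketch: lay the alphabet out in a grid and identify each letter by its (row, column) pair of single digits. The 5-column layout below is an assumption; the article does not specify the exact matrix the researchers used.

```python
import string

COLS = 5  # assumed grid width; 26 letters then span 6 rows

# Map each letter to a 1-based (row, column) coordinate pair.
grid = {ch: (i // COLS + 1, i % COLS + 1)
        for i, ch in enumerate(string.ascii_uppercase)}

def spell(word):
    """Translate a word into the digit pairs a user would silently 'say'."""
    return [grid[ch] for ch in word.upper()]

print(spell("NASA"))  # each letter becomes a unique (row, col) pair
```

With this layout, spelling "NASA" means silently saying the digit pairs for N, A, S, A in turn, which the signal-recognition step can pick up because it already knows the ten digits.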

