Thanks to the AlterEgo wearable device, neuromuscular signals from a user are fed into a neural network (AI) that researchers have trained “to identify [a fundamental set of] subvocalized words from neuromuscular signals, but it can be customized to a particular user,” the MIT News Office reported.

This successful, direct approach to the problem also casts light on Facebook’s real aim, which is to read people’s minds and brain activity, even as it publicly announces only “brain-to-text” research projects. [1]

The device was developed by researchers at the MIT Fluid Interfaces group, led by Arnav Kapur.

Kapur describes the headset as an “intelligence-augmentation,” or IA, device; it was presented at the Association for Computing Machinery’s Intelligent User Interfaces conference in Tokyo. [2]

“The wearable system reads electrical impulses from the surface of the skin in the lower face and neck that occur when a user is internally vocalizing words or phrases.”

The so-called electrophysiological signals are created when the wearer intentionally, but silently, voices words.
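
As a rough sketch of that signal path (with an assumed sampling rate, channel count, and filter band, since the paper’s exact front end isn’t reproduced here), the electrode recordings would be filtered and cut into short windows before any classification happens:

```python
# Hypothetical sketch of the signal path described above: a few surface
# electrodes are sampled, band-pass filtered, and sliced into windows a
# classifier can score. Sample rate, channel count, and filter band are
# illustrative assumptions, not the published parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000          # assumed sampling rate in Hz
N_CHANNELS = 4     # the team reports good accuracy with four electrodes

def bandpass(channel, low=1.0, high=40.0, fs=FS, order=4):
    """Remove drift and high-frequency noise from one electrode channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, channel)

def windows(recording, length_s=1.0, step_s=0.25, fs=FS):
    """Slice a (channels, samples) recording into overlapping windows."""
    length, step = int(length_s * fs), int(step_s * fs)
    for start in range(0, recording.shape[1] - length + 1, step):
        yield recording[:, start:start + length]

# Example: filter a fake 3-second recording and count the windows produced.
recording = np.random.randn(N_CHANNELS, 3 * FS)
filtered = np.vstack([bandpass(ch) for ch in recording])
print(sum(1 for _ in windows(filtered)))  # -> 9 windows
```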

The wearable isn’t reading brain waves: it’s not plucking sentences straight out of the user’s mind. Instead it relies on the conscious decision to speak silently.

The MIT Media Lab website makes this very clear in its FAQ: “No, this device cannot read your mind… The system does not have any direct and physical access to brain activity, and therefore cannot read a user’s thoughts.” [3]

For Kapur’s team, the challenge was to identify the locations on the face where the most reliable signals can be picked up. Initially they worked with 16 sensors, but they are now able to get good accuracy with just four.
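
The paper doesn’t spell out the exact selection procedure, but one simple way to run that kind of sensor-reduction experiment is to score each candidate electrode position on its own and keep the handful that carry the most information. Everything below (features, labels, scoring pipeline) is an illustrative stand-in:

```python
# Hedged illustration of a sensor-reduction experiment: rank each candidate
# electrode position by how well its features alone predict the word label,
# then keep the top four. Data here is synthetic; the MIT paper does not
# publish this exact procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_candidates, feats_per_channel = 200, 16, 8
X = rng.normal(size=(n_windows, n_candidates, feats_per_channel))
y = np.tile(np.arange(20), 10)            # 20-word vocabulary, 10 windows each

def channel_score(ch):
    """Cross-validated accuracy using only one electrode position's features."""
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, ch, :], y, cv=5).mean()

ranking = sorted(range(n_candidates), key=channel_score, reverse=True)
print("Best four candidate positions:", ranking[:4])
```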

The device presented at the Tokyo conference is still limited: the researchers report 92 percent accuracy on a vocabulary of only about 20 words, and it takes roughly 15 minutes of training to tune the AI to a particular person.
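
Conceptually, that per-user calibration step amounts to fitting a small classifier to a short, labelled recording session. The sketch below is only illustrative: the vocabulary, feature shapes, and network architecture are assumptions, not the ones used in the paper.

```python
# Minimal sketch of per-user calibration: a short session yields labelled
# feature windows for a small vocabulary, and a small neural classifier is
# fit to that one wearer. Shapes and architecture are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

VOCAB = ["call", "reply", "yes", "no", "time", "add", "multiply"]  # illustrative subset

rng = np.random.default_rng(1)
n_examples, n_features = 400, 32          # stand-in for ~15 minutes of windows
X = rng.normal(size=(n_examples, n_features))
y = rng.integers(0, len(VOCAB), size=n_examples)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small feed-forward network standing in for the paper's neural classifier.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```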

The computer that the “myoneural interface” is linked to can only receive and process speech that has been intentionally vocalized in that way.

The processed silent speech can then be passed as input to any sort of service, such as Siri, Alexa, or Google Translate. Some more tech wizardry happens, and an answer is sent back to the wearer through “bone conduction”: instead of making the surrounding air vibrate, the sound is transmitted through direct contact with the listener’s jawbone.
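
Put together, the round trip looks roughly like the sketch below, where query_assistant and play_bone_conduction are hypothetical placeholders rather than real Siri, Alexa, or Google Translate APIs.

```python
# A hedged sketch of the round trip: recognized silent speech goes to some
# backend service, and the text reply is rendered as audio for a
# bone-conduction transducer. Both functions are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Reply:
    text: str

def query_assistant(utterance: str) -> Reply:
    """Placeholder for whatever service the recognized words are routed to."""
    if utterance == "what time is it":
        return Reply(text="It is ten past three.")
    return Reply(text=f"I heard: {utterance}")

def play_bone_conduction(text: str) -> None:
    """Placeholder: synthesize `text` and drive the bone-conduction transducer."""
    print(f"[bone conduction] {text}")

# One silent exchange, end to end, inaudible to bystanders.
recognized = "what time is it"            # output of the subvocal classifier
play_bone_conduction(query_assistant(recognized).text)
```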

The result is a computer that could hold a conversation that’s completely imperceptible to those around the wearer.

The advantage of combining these two technologies — reading subvocalisations and bone-conduction — is that it enables voice communication without sound, or in spite of it.

With the right peripherals, the AlterEgo technology will have huge implications.

You can read the full research paper by Arnav Kapur, Shreyas Kapur, and Pattie Maes here. [4]

References:

[1] https://news.mindholocaust.is/facebook-unveils-building-8-brain-activity-decoding-project/

[2] http://iui.acm.org/2018/program.html#1b

[3] https://www.media.mit.edu/projects/alterego/overview/

[4] http://fluid.media.mit.edu/sites/default/files/p43-kapur.pdf