Intelligence-Augmentation Device Can Transcribe Words You ‘Silently Speak’

  • Researchers have developed a system that lets you communicate seamlessly with your personal digital device, and with other people, without speaking a word. 
  • The system analyzes neuromuscular signals in the face and correlates them with specific words. 

Input modalities have come a long way since punch cards. Today, advanced systems such as touch and voice input make it easier for humans to interact with smart digital devices. Now, a team of researchers at MIT has come up with a computer interface capable of transcribing words that users verbalize internally, without speaking out loud.

How is this possible? The system consists of a wearable device and a computing module. It’s not quite a mind reader: electrodes fitted in the device pick up neuromuscular signals in the face and jaw. These signals are produced by internal verbalizations (when you say something in your mind).

Machine learning algorithms then analyze these signals and correlate them with specific words. The overall system (device plus interface) acts much like an internal extension of your own cognition.
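To make the idea concrete, here is a minimal, hypothetical sketch of the first step in such a pipeline: turning a short window of multi-channel neuromuscular signal into a feature vector that a classifier could map to a word. The feature choices and names are illustrative assumptions, not the researchers’ actual processing.

```python
# Hypothetical sketch, not the MIT team's code: simple features from one
# window of multi-channel electrode readings.
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_channels, n_samples) of electrode readings.
    Returns per-channel mean absolute value and variance as simple features."""
    return np.concatenate([np.abs(window).mean(axis=1), window.var(axis=1)])

# Example: one 4-channel window of 250 samples -> an 8-dimensional feature vector
# that a per-word classifier could consume.
window = np.random.default_rng(0).normal(size=(4, 250))
features = extract_features(window)
```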

The Wearable Device

The device, named AlterEgo, enables seamless, discreet, bidirectional communication with a digital device without any voice input or detectable lip movement. It includes a pair of bone-conduction headphones that transmit vibrations through the facial bones to the inner ear without obstructing the ear canal.

It allows users to unnoticeably pose, and receive answers to, computational problems; for example, one could use it during a chess game to silently report an opponent’s moves.

To make it work, a user simply wears the device, which connects wirelessly over Bluetooth to almost any external digital system. Unlike other proposed brain-computer interfaces, it has no access to private thoughts or information; it only picks up silent input that the user produces voluntarily.

How the System Detects and Processes Words

The idea of converting internal verbalizations into useful information is not new, but it hasn’t been explored thoroughly. The new prototype has curved appendages that make contact with four different locations on the user’s face, along one side of the jaw.

After selecting the electrode positions, the researchers gathered data on several types of tasks, each with a limited vocabulary. They wrote a program to examine the signals coming from the four electrode locations and found that these signals could reliably distinguish words formed by internal verbalization, also known as subvocalized words.
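The sketch below illustrates, under assumed names and placeholder data rather than the study’s actual recordings, how signals from four electrode channels could be assembled into a labeled, limited-vocabulary dataset and checked for word separability with a simple classifier.

```python
# Illustrative sketch only: placeholder signals and labels, a small assumed
# vocabulary, and an off-the-shelf scikit-learn classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

VOCAB = ["add", "subtract", "multiply", "divide"]   # assumed task vocabulary

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 4, 250))            # 200 windows, 4 electrode channels
labels = rng.integers(0, len(VOCAB), size=200)      # placeholder word labels

X = signals.reshape(len(signals), -1)               # flatten channels per window
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```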

More specifically, the researchers used a neural network to find correlations between specific words and specific neuromuscular signals. Like other conventional neural networks, this one is arranged into multiple layers of processing nodes, with each node connected to several nodes in the previous and next layers.
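A minimal PyTorch sketch of that kind of fully connected architecture is shown below; the layer sizes, input dimension, and vocabulary size are assumptions for illustration, not the values reported by the researchers.

```python
# Sketch of a stack of fully connected processing layers, where every node
# feeds nodes in the next layer. Sizes are illustrative assumptions.
import torch.nn as nn

N_FEATURES = 4 * 250    # flattened 4-channel signal window (assumed size)
VOCAB_SIZE = 4          # number of words in the limited vocabulary (assumed)

model = nn.Sequential(
    nn.Linear(N_FEATURES, 256),   # first layer of processing nodes
    nn.ReLU(),
    nn.Linear(256, 64),           # hidden layer
    nn.ReLU(),
    nn.Linear(64, VOCAB_SIZE),    # one output node per vocabulary word
)
```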

Source: AlterEgo | Massachusetts Institute of Technology

The data is first fed into the lowest layer, whose nodes process it and pass the modified data on to the next layer; the process continues until the final layer outputs a clear result. The neural network used in the experiments was trained on an NVIDIA GeForce Titan X graphics processing unit, and the researchers used this architecture to classify several vocabulary datasets.
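Continuing the sketch above, the loop below shows in hedged, generic terms how such a classifier could be trained on a GPU; the tensors and hyperparameters are placeholders, not the study’s actual setup.

```python
# Illustrative training loop; reuses `model`, `N_FEATURES`, `VOCAB_SIZE`
# from the previous sketch. Data here is random placeholder data.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"   # e.g. a GeForce Titan X
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(512, N_FEATURES, device=device)           # placeholder feature vectors
y = torch.randint(0, VOCAB_SIZE, (512,), device=device)   # placeholder word labels

for epoch in range(10):
    optimizer.zero_grad()
    logits = model(X)          # data flows layer by layer to the final output
    loss = loss_fn(logits, y)  # compare the output with the true word labels
    loss.backward()
    optimizer.step()
```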

Test Results

Kapur demonstrates AlterEgo | Lorrie Lejeune/MIT

For now, the system allows users to communicate seamlessly with their personal digital devices and with other people. To test the prototype’s usability, the researchers recruited 10 participants, each of whom spent about 15 minutes customizing an arithmetic application to their own neurophysiology and then 90 minutes using it to execute computations.

The prototype achieved 92% transcription accuracy in the test. However, Arnav Kapur, the lead author, believes that performance could be improved further with more training data.
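For clarity, the reported figure is simply the share of silently spoken words the system transcribes correctly; the numbers below are illustrative, not the study’s raw data.

```python
# Accuracy as the fraction of correctly transcribed trials.
def accuracy(predicted, actual):
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# e.g. 92 correct transcriptions out of 100 trials -> 0.92
print(accuracy([1] * 92 + [0] * 8, [1] * 100))
```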

Read: Could We Be Performing Quantum Computing In Our Own Brain?

In the future, AlterEgo could be used in high-noise environments, such as around heavy machinery, on an aircraft carrier’s flight deck, or on a printing-press floor. It could also be an invaluable device for people with disabilities that prevent normal speech. The applications are countless.

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].
