Arnav Kapur, a researcher at MIT, has built a computer interface system that can transcribe words the user verbalizes internally but does not actually speak aloud.
The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
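The paper does not specify the model, but the idea of correlating signal patterns with words can be sketched as a simple classifier. The sketch below uses a nearest-centroid rule over invented two-dimensional feature vectors; the words, feature shapes, and training data are all hypothetical stand-ins for real neuromuscular-signal features.

```python
import math
import random

# Hypothetical sketch: a nearest-centroid classifier standing in for a
# trained model that maps neuromuscular-signal features to words.
# Feature vectors, word labels, and signal shapes here are all invented.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_signals):
    """Build one centroid per word from (features, word) training pairs."""
    by_word = {}
    for features, word in labeled_signals:
        by_word.setdefault(word, []).append(features)
    return {word: centroid(vecs) for word, vecs in by_word.items()}

def classify(model, features):
    """Return the word whose centroid is nearest to the feature vector."""
    return min(model, key=lambda w: math.dist(model[w], features))

# Synthetic training data: each "word" yields features near a distinct point.
random.seed(0)
prototypes = {"yes": [1.0, 0.0], "no": [0.0, 1.0], "stop": [1.0, 1.0]}
data = [([p + random.gauss(0, 0.1) for p in proto], word)
        for word, proto in prototypes.items() for _ in range(20)]

model = train(data)
print(classify(model, [0.95, 0.05]))  # features near the "yes" prototype
```

A real system would extract features from multi-channel electrode streams and use a far more capable model, but the training loop follows the same pattern: pair recorded signals with known words, then predict the word for a new signal.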
Similar to a myoelectric prosthetic, which detects the electrical signals the brain sends to the body, AlterEgo translates those signals into the user’s intended actions. When the wearer of the headset thinks of a word, the brain sends signals to muscles in the face and throat. Electrode sensors on the headset sit on the person’s face and jaw, where those signals are strongest. Once the signals are decoded, the device responds to the subvocalized request.
AlterEgo could be useful in noisy environments or in spaces where silence is required. And just as prosthetics are expanding the range of movement for many, Kapur hopes the similar technology behind AlterEgo will one day allow people who cannot speak to communicate.
The researchers describe their device in a paper presented at the Association for Computing Machinery’s Intelligent User Interface conference. Kapur is first author, MIT Media Lab professor Pattie Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.