MIT researcher Arnav Kapur has developed a computer interface that can transcribe words the user verbalizes internally but does not actually speak aloud. The device's electrodes sit on the face and jaw and pick up otherwise undetectable neuromuscular signals triggered by internal verbalizations.
Essentially, the system consists of a wearable device and an associated computing system. Signals picked up by the wearable are fed to a machine-learning system that has been trained to correlate particular signals with particular words. The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear.
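The article does not detail the machine-learning model, so the following is only a toy sketch of the pipeline it describes: windows of electrode signals are featurized and matched to trained word classes. Every name, shape, and word list here is an illustrative assumption (a nearest-centroid classifier on synthetic data), not the actual AlterEgo implementation.

```python
import numpy as np

# Hypothetical setup: each internally vocalized word yields a
# fixed-length window of samples from face/jaw electrodes.
rng = np.random.default_rng(0)
WORDS = ["yes", "no", "up", "down"]        # assumed vocabulary
CHANNELS, SAMPLES = 7, 250                 # assumed electrode count, window size

def featurize(window):
    """Collapse a (channels, samples) window into a feature vector
    of per-channel means and standard deviations."""
    return np.concatenate([window.mean(axis=1), window.std(axis=1)])

# Fabricated training data: each word gets a distinct signal level.
train_X, train_y = [], []
for label, _word in enumerate(WORDS):
    for _ in range(20):
        window = rng.normal(loc=label, scale=0.3, size=(CHANNELS, SAMPLES))
        train_X.append(featurize(window))
        train_y.append(label)
train_X, train_y = np.array(train_X), np.array(train_y)

# "Training": one centroid per word in feature space.
centroids = np.array([train_X[train_y == k].mean(axis=0)
                      for k in range(len(WORDS))])

def transcribe(window):
    """Map a new electrode window to the nearest word centroid."""
    dists = np.linalg.norm(centroids - featurize(window), axis=1)
    return WORDS[int(np.argmin(dists))]

test_window = rng.normal(loc=2, scale=0.3, size=(CHANNELS, SAMPLES))
print(transcribe(test_window))
```

A real system would replace the synthetic data with recorded neuromuscular signals and the centroid model with a trained neural network, but the correlate-signals-to-words structure is the same.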
Because they don't obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user's auditory experience. Taken together, this means the system requires no audible speech at all: input and output are both silent. The device is part of a complete silent-computing system, which lets the user easily and smoothly pose and receive answers to difficult computational problems.
“The motivation for this was to build an IA device — an intelligence-augmentation device,” Kapur told MIT News. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
The device was described in a paper presented at the Association for Computing Machinery's Intelligent User Interfaces (IUI) conference. Kapur is first author on the paper, Pattie Maes is the senior author, and they are joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.