Amazon Alexa can do a lot of things using simple voice commands. Users can ask Alexa to play a song, look up information or even book a cab. But while it makes life easy for most people, a big question remains: how can it be made to help those with speech impairments? While Amazon hasn't shipped a solid solution for this yet, a software developer may have found a way for these users to communicate with Alexa.
The Verge recently reported that software developer Abhishek Singh has created a mod that allows Amazon Alexa to understand sign language commands. The mod was built using a laptop and Google's TensorFlow software. The laptop's webcam acts as the eyes of the Echo speaker, recognising signs and translating them into voice commands.
The system runs on TensorFlow.js, the version of TensorFlow that lets developers build machine learning applications directly in web browsers. As Singh couldn't find an existing sign language dataset online suited to his purpose, he created his own set of signs and taught them to the system by performing each one repeatedly in front of the camera.
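The train-by-repetition approach described above can be sketched in miniature. The following is a hypothetical, heavily simplified stand-in, not Singh's actual code: each sign is represented by a few recorded feature vectors (in the real system these would come from webcam frames fed through a neural network), a new frame is matched to the nearest recorded example, and the winning label is mapped to the voice command text Alexa would normally hear. All names, vectors and commands here are illustrative.

```python
import math

# Hypothetical training data: each sign was "demonstrated" a few times,
# and each demonstration is reduced to a small feature vector. In the
# real system these vectors would be derived from webcam images; the
# numbers below are made up for illustration.
TRAINING_EXAMPLES = {
    "weather": [[0.10, 0.90, 0.20], [0.12, 0.88, 0.21], [0.09, 0.91, 0.19]],
    "music":   [[0.80, 0.10, 0.70], [0.79, 0.12, 0.72], [0.82, 0.09, 0.69]],
}

# Mapping from a recognised sign to the spoken command it stands for.
COMMANDS = {
    "weather": "Alexa, what's the weather?",
    "music":   "Alexa, play some music.",
}

def classify(frame_vector):
    """Return the sign label whose recorded examples lie closest to the frame."""
    best_label, best_dist = None, float("inf")
    for label, examples in TRAINING_EXAMPLES.items():
        for example in examples:
            d = math.dist(frame_vector, example)  # Euclidean distance
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

def sign_to_command(frame_vector):
    """Translate a recognised sign into the voice command text for Alexa."""
    return COMMANDS[classify(frame_vector)]

# A frame that resembles the recorded "weather" demonstrations:
print(sign_to_command([0.11, 0.90, 0.20]))
```

Adding vocabulary in this scheme is just a matter of recording a few more example vectors under a new label, which mirrors why Singh describes extending his system as simple.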
At the moment, Singh's system is just a proof of concept: it understands only a limited set of signs and commands. However, Singh plans to open-source the code and write an explanatory blog post, so that others can download it and build on the feature. He also says that adding new vocabulary to the system is simple.
In other news, Amazon recently rolled out an update for Alexa that allows users to interact with the Echo Show by simply tapping its touchscreen instead of using voice commands. While not as ambitious as Singh's system, it serves as a starting point for accessibility features on these devices.