Amazon Alexa could get a lot better if it invested in dedicated AI chips

Industry and consumer interest in machine learning and AI has grown rapidly over the past couple of years. We're now seeing companies invest heavily in the infrastructure needed to build and run these models effectively.

Image: Amazon Echo Dot with Alexa (Pixabay)

Highlights

  • Apple and Google have invested in their own chips.

  • Amazon is looking to do more processing on the device rather than depending on the cloud.

  • AI and machine learning have been the focus for Apple, Amazon, Google, Microsoft, Facebook and a vast list of other companies.

The prominence of AI and machine learning is a well-established trend in technology today. Over the past couple of years, leading tech giants such as Amazon, Apple, Facebook, Google and Microsoft have all turned their attention to the field.

Smart assistant speakers and their learning

The result has been a fiercely competitive industry, with smart speakers such as Google Home, Amazon Echo and Apple HomePod garnering widespread attention from consumers. The core objective of combining AI and machine learning with natural language processing is a seamless experience that lets consumers speak as they normally would. The recipe for success in this new and emerging field is to correctly understand context and diction, and to respond in a way suited to local geographies.

The TPU ensures Google’s success with AI and machine learning

Going by current capabilities, Google appears to have an advantage across the world in handling accents and localization, and in delivering richer, more contextual responses. The primary reason is the sheer amount of data and learning at Google's disposal. But the secret ingredient in its recipe is the TPU, or Tensor Processing Unit, which it revealed a couple of years ago at Google I/O 2016.

Google had been using these chips in its own data centers for some time, but it has since built a whole framework around them, providing not just the hardware but also the software stack needed to implement AI and machine learning models at the scale of the consumer data it gathers from billions of Android users. Google is now making this framework available to other companies as well, so they can use it in their own AI and machine learning projects.
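As a rough illustration of what renting that hardware-plus-software stack looks like in practice, here is a minimal sketch using TensorFlow's public TPU distribution API. The model, its shapes, and the TPU setup are placeholder assumptions for illustration, not details from the report.

```python
import tensorflow as tf

# Connect to a Cloud TPU. On a Cloud TPU VM, tpu='' resolves the
# address from the environment; elsewhere you would pass the TPU's
# name or gRPC address explicitly.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built under the strategy's scope is replicated across the
# TPU cores; the training code itself stays ordinary Keras.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
```

The point of the sketch is that the developer never touches the chip directly; the framework handles placement and replication, which is precisely the hardware-plus-software pairing the article describes.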

Amazon Alexa sets its sights on the global market

Beyond Google Assistant and Google's other interests in bots, the company is rivaled in potential only by Amazon. Amazon operates at enormous scale on the web: Amazon Web Services holds the dominant share of the cloud market, with deep expertise in cloud and cloud-enabled infrastructure. Given that Amazon already has computational capability and expertise in abundance, a dedicated chip is the only missing link in enhancing its machine learning capabilities. It would still need a robust technology stack, with a focus on data processing, engines and output.

And because AI chips demand a larger, deeper commitment, including fabrication and manufacturing, it's essential to start early. According to a recent report by The Information, that's exactly what Amazon hopes to do. The report adds that these chips would allow Amazon Echo devices to respond to commands more quickly by doing more of the data processing locally on the device rather than in the cloud.

On-device AI, the new and emerging trend

What I have gathered from people familiar with Apple's AI and machine learning initiatives is a focus on processing inputs locally, on the device itself. Whether it is processing speech input or deriving intelligence from an image, the future appears to lie in on-device processing. And with AI and machine learning growing at such a rapid pace, the need to invest in core capabilities in this field is only becoming more urgent.
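For a sense of what on-device processing looks like in code, here is a minimal sketch using TensorFlow Lite, the kind of runtime commonly used for local inference on phones and smart speakers. The model file name and the input are hypothetical stand-ins, not anything shipped by Apple or Amazon.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) quantized model bundled with the device's
# software. On-device inference means the audio never leaves the device.
interpreter = tf.lite.Interpreter(model_path="keyword_spotter.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One window of audio features; random data as a stand-in here.
features = np.random.rand(*input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], features)
interpreter.invoke()

# Scores per recognized keyword, computed entirely locally.
scores = interpreter.get_tensor(output_details[0]['index'])
```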

Another reason to process data inputs on the device is that the overall consumer experience is deeply tied to the quality of the data stream, which in most cases means the user's internet connection. For products targeting markets such as India, that is a significant concern. The combined delay of capturing your input, sending it to the cloud, processing it there, and receiving a result easily exceeds the response time you would expect if you were speaking to a real human. For that reason, I somehow feel good old typing still solves many problems and answers many queries.
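To make the latency argument concrete, consider a back-of-the-envelope comparison. Every number below is an illustrative assumption, not a measurement.

```python
# Hypothetical latency budget for one voice query (all figures are
# illustrative assumptions, not measurements).
uplink_ms = 150           # compressed audio travelling up a slow mobile link
cloud_inference_ms = 80   # speech recognition and response generation in the cloud
downlink_ms = 150         # the response travelling back down
cloud_round_trip = uplink_ms + cloud_inference_ms + downlink_ms

on_device_ms = 120        # a smaller model running locally, no network at all

print(f"cloud round trip: {cloud_round_trip} ms")  # 380 ms on this budget
print(f"on-device:        {on_device_ms} ms")
```

On a poor connection the network legs dominate the budget entirely, which is why cutting them out, rather than speeding up the model, is the bigger win for responsiveness.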

  • Published Date: February 13, 2018 2:56 PM IST