Artificial Intelligence is no longer a subject of science fiction movies or comic books. Perhaps we haven't reached the stage depicted in those movies, but we are definitely getting there. On one hand, we might fear that machines will some day take over humanity at some level. But we cannot overlook the advantages of AI and, most importantly, how helpful it has been in improving productivity. Google's annual developer conference, I/O 2017, reflects the same trend.
With numerous announcements and new features in the works, it is clear that AI is now the top priority of technology companies. What makes the development more useful and exciting is that with companies like Apple and Google investing, we might get our dose of AI sooner rather than later. I am more keen on using AI on my handset rather than on giant AI-powered screens in my Batcave (I wish). There are two obvious reasons why AI is gradually taking center stage at all the major tech conferences. The first is that the audience is now more than ready: after having used early versions of Siri and Google Now, users are willing to see more of what AI can do. The second reason is big data, something on which technology companies thrive.
Coming back to Google's I/O developer conference, just look at the massive focus on machine learning and intelligence. For starters, Google Assistant is coming to the iOS platform. Despite so much hue and cry about data privacy, the app now has more than 100 million active users on Android, and that user base is only going to grow with the launch on the iPhone. Assistant's crossover to iOS also signifies Google's trust in AI and its intent to tap every possible segment it can.
But what makes Google's efforts more unique is the emphasis on creating a comprehensive ecosystem. Google Assistant now works in sync with Google Home, the company's voice assistant speaker, and the same has been extended to Android Wear and Android TV. So if you are part of Google's ecosystem, you cannot escape this compelling application.
"In an AI-first world, we are rethinking all our products," Pichai is quoted as saying at Google I/O. Google's vision goes beyond a singular app, as it envisages machine learning and the much-talked-about deep learning, among others. Google has also announced an umbrella wing, google.ai, to focus on all of these developments. With Android running on the majority of smartphones in the world, needless to say, Google's grander foray into AI will have an impact on a larger scale.
"Similar is the case for vision, with great improvements in computer vision. We are clearly at an inflection point with vision. So today we are announcing Google Lens, which will first be included in Google Assistant," Pichai added.
Google also understands that reimagining all of its apps and services in the context of AI will require improvements to the hardware at its core. Unsurprisingly, Google launched the second-generation Tensor Processing Unit (TPU). The cloud-based platform is dedicated to machine learning and has been leveraged by several of Google's AI projects, such as DeepMind's AlphaGo, which is known for beating Go champion Lee Sedol.
Global chipset maker Qualcomm has also set its sights on AI with newer chipsets like the Snapdragon 835 SoC that put machine learning in center focus. The company recently demoed the Snapdragon 835's machine learning abilities and faster voice-control support. Check out the video below; you'll notice how elements like object tracking and voice support have significantly improved.
"The Snapdragon Neural Processing Engine SDK was created to help developers determine where to run their neural network-powered applications on the processor. For example, an audio/speech detection application might run on the Qualcomm Hexagon DSP and an object detection or style transfer application on the Qualcomm Adreno GPU," explains the company on its website.
"With the help of the SDK, developers have the flexibility to target the core of choice that best matches the power and performance profile of the intended user experience. The SDK supports convolutional neural networks and LSTMs (Long Short-Term Memory networks) expressed in Caffe and TensorFlow, as well as conversion tools designed to ensure optimal performance on Snapdragon heterogeneous cores," it adds.
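To make the idea of targeting "the core of choice" concrete, here is a minimal Python sketch of how a runtime could map a workload to the most suitable Snapdragon core given what is available on a device. The function name, the affinity table, and the core labels are hypothetical illustrations; the actual SNPE SDK API is different.

```python
# Hypothetical sketch of core selection, NOT the real SNPE API.
# Each workload lists cores in order of preference, reflecting the kind of
# power/performance trade-offs Qualcomm describes (DSP for low-power audio,
# GPU for heavy vision workloads).
WORKLOAD_AFFINITY = {
    "audio_detection": ["DSP", "CPU"],          # always-on, low power
    "object_detection": ["GPU", "DSP", "CPU"],  # high-throughput vision
    "style_transfer": ["GPU", "CPU"],           # heavy floating-point math
}

def select_core(workload, available_cores):
    """Return the first preferred core that the device actually has,
    falling back to the CPU for unknown workloads or missing cores."""
    for core in WORKLOAD_AFFINITY.get(workload, ["CPU"]):
        if core in available_cores:
            return core
    return "CPU"

# A phone whose GPU is available gets vision work routed there:
print(select_core("object_detection", {"CPU", "GPU"}))        # GPU
# Audio detection prefers the DSP when present:
print(select_core("audio_detection", {"CPU", "DSP", "GPU"}))  # DSP
# Unknown workloads fall back to the CPU:
print(select_core("video_upscaling", {"CPU", "GPU"}))         # CPU
```

The point of the sketch is simply that the developer (or SDK) expresses a preference per workload, and the runtime resolves it against the hardware at hand.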
With chipsets coming with built-in machine learning, new avenues open up for smartphone makers (OEMs) and developers to integrate AI more deeply into smartphones. Here's what Qualcomm points out on its website: "Application developers and device manufacturers understand what their users want. They can create a feature or an application that uses machine learning (more specifically, deep neural networks) to improve the performance of a particular task, such as detecting or recognizing objects, filtering out background noise, or recognizing voices or languages. These applications are usually run in the cloud, and depending on the device they're in, this could be sub-optimal."
Apple, on the other hand, has been very guarded about its operating system. With iOS 10, however, Apple shifted its focus to opening up the platform to third-party developers, making Siri work with third-party applications like WhatsApp. Moreover, Apple has already started laying the foundation for its foray into AI. Earlier this year, the company joined a non-profit group that promotes the use of AI for good purposes.
"We're glad to see the industry engaging on some of the larger opportunities and concerns created with the advance of machine learning and AI," said Tom Gruber, Apple's head of development for Siri. "We believe it's beneficial to Apple, our customers, and the industry to play an active role in its development, and we look forward to collaborating with the group to help drive discussion on how to advance AI while protecting the privacy and security of consumers."
I will not be surprised to see Apple make a host of AI-related announcements at its forthcoming WWDC developer conference.
Samsung has also started its preparations with the Bixby voice assistant, which comes integrated on the Galaxy S8 and S8+. Samsung's Bixby, however, appears to be a more advanced take on AI when it comes to using an assistant for day-to-day productivity. As I mentioned in one of my previous takes on AI, Bixby borrows elements from Amazon's Alexa, Microsoft's Cortana, and Apple's Siri, such as taking voice and text commands. But it sweetens the deal with more intuitive features such as image recognition and real-time language translation. Contextual understanding, like asking the device to take a screenshot and send it to one of your friends, makes it even more useful. Moreover, it works with smart devices in your house.
2017 is going to be a revolutionary year for AI as we see more and newer announcements related to the technology. AI is finally coming to the fore and making giant strides into our lives. Looking at the trends and success stories so far, AI, even at this nascent stage, looks highly promising.