You know you have a powerful product from the attention it receives. A few months ago, Samsung was in the news for all the wrong reasons. But I truly believe the worst is in the past. Sure, the spate of reports and incidents around the exploding batteries of the Samsung Galaxy Note 7 isn’t something anyone would want to experience, or dwell on for too long.
Yesterday, when I heard of a new assistant – Bixby – by Samsung, I had my initial doubts. But when I read the official details, posted in a blogpost by Injong Rhee, Executive Vice President and Head of R&D, Software and Services, Samsung Electronics, it seemed like there’s a conscious plan, and Samsung clearly knows what it wants. The statement opens with, “Technology is supposed to make life easier, but as the capabilities of machines such as smartphones, PCs, home appliances and IoT devices become more diverse, the interfaces on these devices are becoming too complicated for users to take advantage of many of these functions conveniently.” It seems like the South Korean company feels my pain.
Just make my life simpler
Over the past couple of years, from Apple to Google and Microsoft, I’ve seen technology giants focus on AI, bots and assistants like the future of humanity depended on it. Well, at the rate things are going, I wouldn’t disagree. But somehow I always felt it was a bit of an overkill on smartphones. Let me explain. Say you’re browsing a webpage, and you simply want to share that webpage with a friend. Fire up the superior AI assistant on your favourite platform, and there’s a good chance it won’t understand the instruction. I tried this on both iOS and Android, and the platform clearly made no difference.
What’s the point of superior research on artificial intelligence if it can’t do a task as simple as sharing a webpage? And no, I don’t want to add another app or make any tweaks. What I want is the ability to automate mundane tasks. But in reality, all I ever see or experience is what has been demoed on stage by each of these industry stalwarts. Yes, I do understand that AI is an emerging field in terms of execution. As a concept, it has led millennials to grow up aspiring to be part of a world that drives itself, without the need to manually move anything.
When I think of AI, I picture a bot that takes over mundane tasks and simplifies my life; what I get never does. If voice input has to translate into intelligence, then it must work towards the specific tasks it is meant to ease. And from the official statement issued by Samsung, I get the right kind of signals.
‘Context-awareness’? Tell me about it!
Samsung has correctly identified the potential here. However great an AI platform may be, it’s simply not relevant if it can’t understand context. Sundar Pichai effectively showed what AI’s real capabilities are on stage when he unveiled the Google Pixel. Contextual awareness is nothing but adapting your menus and operations to the underlying situation. A common example is the ribbon in Microsoft Office, which changes depending on the task you are completing. It’s really as simple as that. The new MacBook Pro by Apple also comes with a Touch Bar, and depending on the operation you perform on the MacBook, it displays relevant menus on the Touch Bar. From a developer’s point of view, it doesn’t matter whether the input comes from voice or a keyboard. The logic for context is primarily the same. Only the variables differ: voice input additionally needs to consider factors such as tone, accent, pitch and emotion.
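The point that context logic is input-agnostic can be sketched in a few lines. This is a minimal, hypothetical example, not any real assistant’s API; the function and context field names are my own. The idea is that the same intent resolves differently depending on what the user is currently doing:

```python
# Minimal sketch of context-aware intent handling.
# All names are hypothetical; no real assistant API is implied.

def handle_intent(intent: str, context: dict) -> str:
    """Resolve an intent against the user's current context."""
    if intent == "share":
        # With a webpage in the foreground, "share" should act on
        # that page, not fall back to a web search for "share".
        if context.get("foreground") == "browser" and context.get("url"):
            return f"share_url:{context['url']}"
        return "ask_user:what would you like to share?"
    if intent == "time_home":
        # Only answer with travel time if a maps provider exists;
        # otherwise say so, instead of answering a different question.
        if context.get("maps_available"):
            return "query_maps:travel_time_home"
        return "error:no maps app available"
    return "fallback:web_search"

# The same logic serves voice and keyboard input alike; only the
# front end that produces `intent` differs.
print(handle_intent("share", {"foreground": "browser",
                              "url": "https://example.com"}))
```

Whether the `intent` string came from a speech recogniser or a typed command, the dispatch is identical; the voice path simply has extra variables to extract before this point.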
Coming back to sharing a webpage: asking an assistant to share shouldn’t get me search results for ‘Share’. In my mind, that’s precisely what my experience with AI has been. Good on the demo stage, but absolutely ridiculous otherwise.
Cognitive Tolerance sounds all well and fine
This is another area where AI failures turn hilarious. I’m not sure if any of this has happened to you. I wanted to demo the AI capabilities of an Android phone a few months ago. A friend of mine spoke eagerly, “What is the time it takes to reach home?” I expected to see a Google Maps page. Turns out he didn’t have Maps installed, and what we saw was the time: 11:30am. I vaguely remember what it understood the question as, but considering the assistant responded with the time, I believe it simply ignored the remainder of the question. How convenient, right? What I’ve learned from my interactions with AI assistants and bots is that we, as Indians, need to speak into the phone sounding like bots ourselves.
What really needs to improve is the understanding of diction and pitch, to detect when a query is complete and what the intended meaning is. I admit, it’s far more complicated than we realise. But honestly, as a consumer, do I really care? If I’m pitched a flagship device that costs me between one and two kidneys, I expect things to simply work. That seems like a fair ask, I’d assume.
A thought-out process
Precisely for this reason, I’d happily welcome yet another entrant to the assistant space. There’s much work to be done, and a new entrant would be awesome, if only it understood me and truly let me leave my smartphone in my pocket while it worked towards simplifying my life, rather than complicating it.
For starters, a dedicated button sounds interesting. At least it gives me the ability to key in an input or simply speak it out. I’m definitely going to wait and watch, because in the meanwhile, all the existing platforms are hopefully getting better. Though I’d never trust my life or the lives of my loved ones with it. Similar to how the world, including me, is excited about self-driving cars, but I’d never quite go on the highway in one of those. That’s just me; I’d still wait for authorities to clear them for commercial use. And in all of this, I’m not surprised that authorities in the US have had problems with Uber. Even Rhee seems to agree when he writes, “We do have a bold vision of revolutionizing the human-to-machine interface, but that vision won’t be realized overnight. Ambition takes time.” I’m hopeful.