Google is reportedly working on a chatbot that can interact like a human. Chatbots have become a new craze among consumers as well as enterprises, and their rise over the last decade could culminate in major changes this year. Today's conversational agents are mostly limited to a question-and-answer style of interface. But Google wants to change that with a new kind of chatbot "that can chat about anything". The new chatbot, named "Meena", is being created by the Google Brain research team.
As detailed by the research team, the chatbot relies on an end-to-end trained neural conversational model. The approach aims to correct a "critical flaw" of current chatbots, which are specialized for a single purpose. "They sometimes say things that are inconsistent with what has been said so far, or lack common sense and basic knowledge about the world. Moreover, chatbots often give responses that are not specific to the current context," Daniel Adiwardana, Senior Research Engineer, and Thang Luong, Senior Research Scientist, Google Research, Brain Team, said in a blog post.
Google's Meena mainly differs by focusing on understanding the context of a conversation, which allows the chatbot to offer a more sensible reply to any user query. The research team aims to create a chatbot that can eventually "chat about virtually anything a user wants." In the blog post, the Brain Team also showed sample conversations: in one, a user asks Meena for show recommendations; in another, Meena replies with a joke.
Google Meena wants to tackle a major challenge
The researchers note that Meena is trained on 341 GB of text from public domain social media conversations, 8.5x more data than was used for existing state-of-the-art generative models. While building the model, Google says it also created a new quality benchmark for chatbots. Called the Sensibleness and Specificity Average (SSA), it "captures basic, but important attributes for natural conversations." Google uses human evaluators to judge whether a response is "reasonable in context."
The researchers note that if a response seems confusing, illogical, out of context, or factually wrong, it should be rated as "does not make sense". When the response does make sense, the utterance is further assessed to determine whether it is specific to the given context. Meena has been found to outperform existing models on this Google-created benchmark, and its score is close to that of humans.
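The two-step rating procedure described above can be sketched in code. This is a minimal illustrative version, not Google's actual evaluation pipeline: it assumes each response carries two binary human labels (sensible, and, only if sensible, specific) and that the SSA is the simple average of the two rates, as the metric's name suggests.

```python
def ssa(ratings):
    """Compute a Sensibleness and Specificity Average from human ratings.

    ratings: list of (sensible, specific) boolean pairs, one per response.
    A response that does not make sense cannot be specific, mirroring the
    two-step evaluation: sensibleness is judged first, specificity second.
    """
    n = len(ratings)
    sensibleness = sum(1 for sensible, _ in ratings if sensible) / n
    specificity = sum(1 for sensible, specific in ratings
                      if sensible and specific) / n
    # SSA is taken here as the plain average of the two rates (assumption).
    return (sensibleness + specificity) / 2

# Example: four rated responses
ratings = [(True, True), (True, False), (False, False), (True, True)]
print(ssa(ratings))  # 0.625
```

Under this sketch, a generic but sensible reply ("I don't know") raises sensibleness without raising specificity, which is exactly the failure mode the specificity component is meant to penalize.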
"We are evaluating the risks and benefits associated with externalizing the model checkpoint, however, and may choose to make it available in the coming months to help advance research in this area," the researchers said in the blog post. They believe the conversational agent could be used in applications such as "further humanizing computer interactions and improving foreign language practice". It could also help make "relatable interactive movie and videogame characters." Google also wants to look beyond SSA, and sees safety and bias as other important areas.