Researchers, including one of Indian origin, have provided fresh insight into how human learning can foster smarter artificial intelligence (AI). Recent breakthroughs in creating artificial systems that outplay humans in a diverse array of challenging games have their roots in neural networks inspired by information processing in the brain. Now, researchers from Google DeepMind and Stanford University have updated a theory originally developed to explain how humans and other animals learn.
First published in 1995, the theory states that learning is the product of two complementary learning systems in the brain. The first system gradually acquires knowledge and skills from exposure to experiences, and the second stores specific experiences so that these can be replayed to allow their effective integration into the first system. “The evidence seems compelling that the brain has these two kinds of learning systems, and the complementary learning systems theory explains how they complement each other to provide a powerful solution to a key learning problem that faces the brain,” explained James McClelland, lead author of the 1995 paper from Stanford University.
Components of the neural network architecture that succeeded in achieving human-level performance in a variety of computer games like Space Invaders and Breakout were inspired by complementary learning systems theory. “These neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of game play and replays them in interleaved fashion. This greatly amplifies the use of actual game play experience and avoids the tendency for a particular local run of experience to dominate learning in the system,” added cognitive neuroscientist Dharshan Kumaran from Google DeepMind in a review published in the journal Trends in Cognitive Sciences.
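The hippocampus-like memory buffer Kumaran describes can be sketched in a few lines of Python. This is an illustrative toy, not DeepMind's implementation; the class name, capacity, and transition format are all assumptions. The key idea it shows is interleaved replay: experiences are stored as they happen, then sampled at random so no single recent stretch of play dominates learning.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of recent experiences, loosely analogous to the
    hippocampal buffer described in the article (illustrative sketch only)."""

    def __init__(self, capacity=10000):
        # Oldest episodes fall out automatically as new ones arrive.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Interleaved replay: a random mix of past experiences, so a
        # particular local run of play does not dominate the batch.
        return random.sample(self.buffer, batch_size)

# Usage: record transitions during play, then train on shuffled batches.
buf = ReplayBuffer(capacity=1000)
for step in range(100):
    buf.add(step, "fire", 1.0, step + 1)
batch = buf.sample(32)
```

Sampling uniformly from the whole buffer is what lets a single actual experience be reused many times, amplifying the value of the data collected during play.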
The first system in the proposed theory, located in the neocortex of the brain, was inspired by precursors of today’s deep neural networks. As with today’s deep networks, these systems contain several layers of neurons between input and output, and the knowledge in these networks is stored in their connections. According to DeepMind co-founder Demis Hassabis, the extended version of the complementary learning systems theory is likely to continue to provide a framework for future research, not only in neuroscience but also in the quest to develop Artificial General Intelligence, which is DeepMind’s stated goal.
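The phrase "the knowledge in these networks is in their connections" can be made concrete with a minimal Python sketch of a layered network. All layer sizes and the activation choice here are illustrative assumptions; the point is simply that the only thing a layer holds is a matrix of connection weights, and computation is repeated weighted sums passed between layers.

```python
import random

def make_layer(n_in, n_out):
    # A layer's "knowledge" is nothing but its connection weights:
    # one weight per (input neuron, output neuron) pair.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layers, x):
    # Pass the signal through each layer of connections in turn,
    # with a simple ReLU nonlinearity between layers.
    for weights in layers:
        x = [max(0.0, sum(w * v for w, v in zip(row, x))) for row in weights]
    return x

# Several layers of neurons between a 4-value input and a 2-value output.
net = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]
out = forward(net, [0.5, -0.2, 0.1, 0.9])
```

Learning in such a system means gradually adjusting these weights over many experiences, which is exactly the slow, cumulative acquisition the theory assigns to the neocortical system.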