A new program is beginning to narrow the gap between human and artificial intelligence.
Humans learn continuously. We build new knowledge on prior knowledge, and for the most part, we remember the skills we learn. For example, we won't forget how to ride a bike when we learn how to ride a motorcycle or drive a car. Instead, the things we learned while riding a bike, like situational awareness, come in handy when driving a car or motorcycle.
Artificial intelligence has not worked this way. An AI may beat human players at poker or at chess, but typically not at both, because it can only learn one of those games at a time. If a poker-playing AI has to learn chess, it overwrites what it knew about poker, a problem known as catastrophic forgetting. Such an AI cannot use prior skills to build new ones. Therein lies the difference between artificial and human intelligence: the ability to retain knowledge and build upon it has so far belonged to living things.
But not for long.
Google's artificial intelligence company DeepMind is developing an AI program that can effectively retain and build knowledge. To do this, the AI has to remember what it has already learned. Sequential learning is vital to giving AI the ability to learn the way humans and other living things do.
To figure out how to enable AI to build knowledge this way, researchers at DeepMind drew on studies in neuroscience. In their study, they detailed how the AI consolidated the skills it had already acquired while learning new ones.
DeepMind's AI program mimics the way the brains of living things consolidate memories to enable sequential learning. When it moves on to a new task, it identifies which of its neural connections were most important to the previous task and protects them from being overwritten. This way, it learns something new while building upon what it already knows.
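As a rough illustration of the idea, here is a minimal sketch of one way "protecting important connections" can be implemented: a quadratic penalty that anchors each weight to its value after the old task, scaled by how important that weight was (in the style of DeepMind's elastic weight consolidation; the exact method and all numbers below are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def ewc_penalty_grad(w, w_old, importance, lam):
    # Gradient of the quadratic "anchor": pulls each weight back toward
    # its value after the old task, in proportion to its importance.
    return lam * importance * (w - w_old)

# Toy setup with two weights. After "task A" the optimum was w_old.
w_old = np.array([1.0, 1.0])
# Hypothetical importances: weight 0 mattered a lot for task A,
# weight 1 hardly at all.
importance = np.array([10.0, 0.01])

# "Task B" has a simple quadratic loss pulling both weights toward 3.0.
def task_b_grad(w):
    return w - 3.0

# Gradient descent on task B's loss plus the consolidation penalty.
w = w_old.copy()
lam, lr = 1.0, 0.1
for _ in range(500):
    w -= lr * (task_b_grad(w) + ewc_penalty_grad(w, w_old, importance, lam))

# The unimportant weight moves nearly all the way to task B's optimum,
# while the important one stays close to its task-A value.
print(w)  # w[0] stays near 1.0; w[1] ends up near 3.0
```

The design trade-off is visible in the result: without the penalty, both weights would move to 3.0 and "forget" task A entirely; with it, the network sacrifices a little performance on the new task to preserve the connections the old task depended on.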
The researchers had the AI play a number of video games to test its learning skills. They found that after several days of playing, the AI developed skills comparable to those of a human player. However, the researchers also noted that sequential learning did not enable the AI to perform as well as expected: an AI dedicated to playing a single game still did better than one that learned several games sequentially.
Of course, the AI can still improve. The researchers surmised that it didn't learn as naturally as living things do; in particular, it wasn't always able to identify which neural connections were important to developing a playing strategy. Artificial intelligence thus still has a long way to go before it can fully take advantage of the skills and knowledge it has learned.
However, developing sequential learning is already a big achievement in itself. The AI still has to learn how to identify the way different tasks and actions relate to one another, but it is now one step closer to truly being able to build skills and knowledge. In the near future, artificial intelligence may be able to learn much more like we do.