Man vs Machine
As artificial intelligence (AI) research and development continue to advance, there have been some intriguing projects in which machines battled man at tasks once thought to be the exclusive domain of humans. While not every attempt succeeded, AI researchers and technology companies learned a great deal about how to keep the momentum going, and about what a future in which machines and humans work alongside one another might look like. Here are some of the highlights from when artificial intelligence battled humans.
World Champion chess player Garry Kasparov competed against artificial intelligence twice. In the first chess match-up between machine (IBM Deep Blue) and man (Kasparov), in 1996, Kasparov won. The next year, Deep Blue was victorious. When Deep Blue won, many saw it as a sign that artificial intelligence was catching up to human intelligence, and the match inspired a documentary film called The Man vs. The Machine. Shortly after losing, Kasparov went on record saying he believed the IBM team had cheated; however, in a 2016 interview, he said that after analyzing the match he had retracted his earlier conclusion and the cheating accusation.
In 2011, IBM Watson took on Ken Jennings and Brad Rutter, two of the most successful contestants in the history of the game show Jeopardy, who had collectively won $5 million during their reigns as Jeopardy champions. Watson won! To prepare for the competition, Watson played 100 games against past winners. The computer was the size of a room, was named after IBM's founder, Thomas J. Watson, and required a powerful, noisy cooling system to keep its servers from overheating. Deep Blue and Watson were products of IBM's Grand Challenge initiatives, which pitted man against machine. Because Jeopardy has a unique format in which contestants supply the response to the "clues" they are given, Watson first had to untangle the language of each clue to determine what was being asked before it could even begin working out a response. That was a significant feat of natural language processing, and it led IBM to develop DeepQA, a software architecture built to do just that.
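As a toy illustration of that question-analysis step (this is not IBM's DeepQA, which is a large pipeline of parsers, evidence scorers, and answer rankers; the patterns and the lexical_answer_type helper below are invented purely for the example), a first pass might simply guess what kind of thing a Jeopardy-style clue is asking for:

```python
import re

# Toy sketch of the "untangle the clue" step. NOT IBM's DeepQA: the patterns
# below are hand-written for illustration; a real system learns such cues
# from data and combines them with many other signals.
TYPE_PATTERNS = [
    (r"\bthis (country|nation)\b", "country"),
    (r"\bthis (city|capital)\b", "city"),
    (r"\bthis (man|woman|president|author|scientist)\b", "person"),
    (r"\bthis year\b", "year"),
]

def lexical_answer_type(clue: str) -> str:
    """Guess what kind of thing a Jeopardy-style clue is asking for."""
    lowered = clue.lower()
    for pattern, answer_type in TYPE_PATTERNS:
        if re.search(pattern, lowered):
            return answer_type
    return "unknown"

print(lexical_answer_type("This city is home to the Louvre and the Eiffel Tower."))
# -> "city"
```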
Could artificial intelligence play Atari games better than humans? DeepMind Technologies took on this challenge, and in 2013 it applied its deep learning model to seven Atari 2600 games. The endeavor had to overcome a long-standing challenge in reinforcement learning: controlling agents directly from high-dimensional sensory input such as vision and speech. Building on breakthroughs in computer vision and speech recognition, the team at DeepMind Technologies developed a convolutional neural network trained with reinforcement learning that allowed a machine to master several Atari games using only raw pixels as input, and in a few of those games it outperformed human players.
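To give a flavor of the approach, here is a minimal sketch of that kind of convolutional Q-network in PyTorch. The layer sizes follow those reported in the 2013 paper, but everything else is illustrative rather than DeepMind's actual code, and the frame preprocessing, replay memory, and training loop are omitted: the input is a stack of raw game frames, and the output is one estimated value per joystick action.

```python
import torch
import torch.nn as nn

class AtariQNetwork(nn.Module):
    """Sketch of a convolutional Q-network: raw pixels in, action values out."""

    def __init__(self, num_actions: int, frame_stack: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(frame_stack, 16, kernel_size=8, stride=4),  # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),           # 20x20 -> 9x9
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),                          # one Q-value per action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

# Example: one observation made of 4 stacked 84x84 grayscale frames.
q_net = AtariQNetwork(num_actions=6)
dummy_frames = torch.zeros(1, 4, 84, 84)
q_values = q_net(dummy_frames)
greedy_action = q_values.argmax(dim=1)  # the agent acts greedily w.r.t. Q
print(q_values.shape, greedy_action)
```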
Next up in our review of man versus machine are the achievements of AlphaGo, a machine able to build up knowledge for itself rather than being shown human examples. The system absorbed the equivalent of 3,000 years of human Go knowledge in a mere 40 days, prompting some to call it "one of the greatest advances ever in artificial intelligence." It had already learned how to beat the world champion of Go, an ancient board game once thought impossible for a machine to master. The film about the experience is now available on Netflix. AlphaGo's success when it is not constrained by human knowledge raises the possibility of applying the approach to some of the world's most challenging problems, in areas such as healthcare, energy, and the environment.
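As a toy illustration of that self-play idea (this is not AlphaGo Zero's method, which pairs a deep neural network with Monte Carlo tree search on Go; it is a simple tabular learner on tic-tac-toe, invented for the example), the sketch below shows an agent that starts with zero knowledge and improves purely by playing against itself and feeding the results of its own games back into its value estimates.

```python
import random
from collections import defaultdict

# Toy tabula-rasa self-play on tic-tac-toe. NOT AlphaGo Zero's algorithm;
# it only demonstrates the core idea of learning from one's own games.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)                 # (board_state, move) -> value estimate
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def choose_move(board):
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)    # explore occasionally
    return max(moves, key=lambda m: Q[("".join(board), m)])  # otherwise exploit

def self_play(episodes=20000):
    for _ in range(episodes):
        board, player, history = [" "] * 9, "X", []
        while True:
            move = choose_move(board)
            history.append(("".join(board), move, player))
            board[move] = player
            result = winner(board)
            if result or not legal_moves(board):
                reward = {"X": 1, "O": -1}.get(result, 0)  # from X's point of view
                # Push the final outcome back through every move of the game,
                # discounting moves that are further from the end.
                ret = reward
                for state, m, p in reversed(history):
                    signed = ret if p == "X" else -ret
                    Q[(state, m)] += ALPHA * (signed - Q[(state, m)])
                    ret *= GAMMA
                break
            player = "O" if player == "X" else "X"

self_play()
# After training, ask the agent for its preferred opening move on an empty board.
print(max(range(9), key=lambda m: Q[(" " * 9, m)]))
```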