Man vs Machine

As artificial intelligence (AI) research and development continues to advance, there have been some intriguing projects in which machines took on humans at tasks once thought to be the exclusive realm of people. Not all of these efforts were entirely successful, but AI researchers and technology companies learned a great deal about how to keep moving forward, and about what a future in which machines and humans work alongside one another might look like. Here are some of the highlights from the times artificial intelligence battled humans.

World chess champion Garry Kasparov competed against artificial intelligence twice. In the first match between machine (IBM's Deep Blue) and man (Kasparov) in 1996, Kasparov won. The next year, in a rematch, Deep Blue was victorious. When Deep Blue won, many saw it as a sign that artificial intelligence was catching up to human intelligence, and the contest inspired a documentary film called The Man vs. The Machine. Shortly after losing, Kasparov went on record to say he believed the IBM team had cheated; however, in a 2016 interview he said that, after analyzing the match, he had retracted that conclusion and the accusation of cheating.


In 2011, IBM Watson took on Ken Jennings and Brad Rutter, two of the most successful contestants in the history of the game show Jeopardy, who had collectively won $5 million during their reigns as champions. Watson won. To prepare for the competition, Watson played 100 practice games against past winners. The computer was the size of a room, was named after IBM's founder Thomas J. Watson, and required a powerful and noisy cooling system to keep its servers from overheating. Like Deep Blue, Watson came out of IBM's Grand Challenge initiatives, which pitted man against machine. Because Jeopardy has a unique format in which contestants are given answer-style "clues" and must respond in the form of a question, Watson first had to untangle the language of each clue to determine what was actually being asked before it could even begin working out a response. That was a significant feat of natural language processing, and it led IBM to develop DeepQA, a software architecture built to do exactly that.
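The details of DeepQA are well beyond this overview, but the clue-parsing step it had to solve can be illustrated with a toy sketch. The Python snippet below is only an illustration, not IBM's pipeline; the clue text and the analyze_clue helper are invented for the example. It pulls a rough answer type and a handful of keywords out of a Jeopardy-style clue, the kind of shallow analysis that has to happen before any answer search can even begin.

```python
import re

# Toy illustration (not DeepQA): extract a rough answer type and keywords from a
# Jeopardy-style clue, which is phrased as a statement rather than a question.
CLUE = "This Danish physicist won the 1922 Nobel Prize for his model of the atom."

def analyze_clue(clue: str) -> dict:
    # Clues often open with "This <noun phrase> ...", hinting at the kind of
    # entity being asked about (a person, a country, a novel, and so on).
    match = re.match(r"This ([a-z]+(?: [a-z]+)?)", clue, re.IGNORECASE)
    answer_type = match.group(1).lower() if match else "unknown"

    # Crude keyword extraction: keep capitalized words and four-digit numbers,
    # skipping the leading "This".
    keywords = re.findall(r"\b(?:[A-Z][a-z]+|\d{4})\b", clue)[1:]

    return {"answer_type": answer_type, "keywords": keywords}

print(analyze_clue(CLUE))
# {'answer_type': 'danish physicist', 'keywords': ['Danish', '1922', 'Nobel', 'Prize']}
```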

Could artificial intelligence play Atari games better than humans? DeepMind Technologies took on this challenge, and in 2013 it applied its deep learning model to seven Atari 2600 games. The hard part was getting reinforcement learning to control agents directly from high-dimensional sensory input, in this case raw screen pixels. Building on breakthroughs in computer vision and speech recognition, the researchers at DeepMind developed a convolutional neural network trained with reinforcement learning that let a machine master several Atari games using only raw pixels as input, and on a few of the games it outperformed human experts.
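To make the idea concrete, here is a minimal sketch, assuming PyTorch is available, of the kind of convolutional network that maps a stack of preprocessed game frames directly to one estimated value per joystick action; the agent simply picks the highest-valued action. The layer sizes and action count are illustrative, a sketch of the general approach rather than DeepMind's implementation.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions: int, frames: int = 4):
        super().__init__()
        # Convolutions read a stack of downscaled grayscale game frames directly.
        self.features = nn.Sequential(
            nn.Conv2d(frames, 16, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),
            nn.ReLU(),
        )
        # Fully connected head outputs one estimated action value per action.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(pixels))

# Greedy action selection from an 84x84 stacked-frame observation.
net = QNetwork(n_actions=6)
obs = torch.rand(1, 4, 84, 84)          # batch of one preprocessed observation
action = net(obs).argmax(dim=1).item()  # pick the action with the highest value
```

During training, the network's value estimates are nudged toward the rewards the game emits, which is what lets play improve from raw pixels alone.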

Next up in our review of man versus machine are the achievements of AlphaGo, a machine able to learn the game for itself rather than from human examples. The system was able to absorb roughly 3,000 years of human Go knowledge in a mere 40 days, prompting some to claim it was "one of the greatest advances ever in artificial intelligence." It had already learned how to beat the world champion at Go, an ancient board game once thought to be impossible for a machine to master. A film about the experience is now available on Netflix. AlphaGo's success when unconstrained by human knowledge raises the possibility of similar systems being used to tackle some of the world's most challenging problems in areas such as healthcare, energy, and the environment.
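AlphaGo itself combines deep neural networks with Monte Carlo tree search, far more than a short snippet can show, but the core idea of improving purely through self-play can be demonstrated on a toy game. The Python sketch below is only an analogue, not DeepMind's method: it plays tic-tac-toe against itself and nudges a table of move values toward each game's final outcome, with no human examples involved.

```python
import random
from collections import defaultdict

# Toy analogue of learning from pure self-play (not AlphaGo's algorithm):
# a table of move values for tic-tac-toe, improved only by playing itself.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
Q = defaultdict(float)   # (board, move) -> value from the mover's point of view
EPS, ALPHA = 0.2, 0.5    # exploration rate and learning rate

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def choose(board):
    moves = legal(board)
    if random.random() < EPS:                        # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit current estimates

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or not legal(board):
            # Monte Carlo-style update: push each move's value toward the final
            # result, flipping sign each ply because the players alternate.
            reward = 1.0 if winner(board) else 0.0
            for state, m in reversed(history):
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
                reward = -reward
            return
        player = "O" if player == "X" else "X"

for _ in range(20_000):   # the table improves with no human games or strategy tips
    self_play_game()
```

Replacing the lookup table with deep neural networks and guiding play with tree search is, loosely speaking, what lets the same self-play idea scale to a game as vast as Go.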


Information technology


Information technology (IT) is the use of computers to store, retrieve, transmit, and manipulate data or information. IT is typically used within the context of business operations, as opposed to personal or entertainment technologies, and is considered a subset of information and communications technology (ICT). An information technology system (IT system) is generally an information system, a communications system or, more specifically, a computer system, including all hardware, software, and peripheral equipment, operated by a limited group of users.
Humans have been storing, retrieving, manipulating, and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs. The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce.
Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940), and electronic (1940–present). This article focuses on the most recent period (electronic).