What Defines Artificial Intelligence? The Complete WIRED Guide

Artificial intelligence is here. It is overhyped, misunderstood, and sometimes misguided, but it is already a foundation of our lives, and it will only extend its reach.

AI helps self-driving cars make sense of their surroundings, detects otherwise invisible signs of disease in medical images, finds the answer when you ask Alexa a question, and lets you unlock your phone with your face to chat with friends as an animated poop on the iPhone X, using Apple’s Animoji. These are just a few of the ways AI already touches our lives, and there is plenty more to come. But don’t worry: superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.

The current boom in all things AI has been driven by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples rather than explicit human programming. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles in the complex game of Go. In 2016, he was soundly beaten by a program called AlphaGo.
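
As a rough illustration of what learning from examples means, here is a minimal Python sketch; the data and function name are invented for illustration, not drawn from any real system. Instead of hand-written rules for each category, the program classifies a new input by comparing it with labeled examples it has already seen:

    def nearest_example(examples, query):
        """Return the label of the stored example closest to the query."""
        def distance(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b))
        _, label = min(examples, key=lambda ex: distance(ex[0], query))
        return label

    # Each example pairs made-up measurements with a label; in a real
    # system the features might be statistics extracted from images.
    training = [((1.0, 1.2), "cat"), ((0.9, 1.0), "cat"),
                ((3.1, 0.2), "dog"), ((3.3, 0.4), "dog")]

    print(nearest_example(training, (1.1, 1.1)))  # -> cat
    print(nearest_example(training, (3.0, 0.3)))  # -> dog

Deep learning swaps this simple comparison for millions of adjustable parameters, but the bargain is the same: supply examples, not rules.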

There is evidence that AI can make us happier and healthier. But there is also reason for caution. Incidents in which algorithms have encoded or amplified societal biases around race or gender show that an AI-enhanced future will not automatically be a better one.

The beginning of artificial intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language.

He had high hopes for a breakthrough on the path toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not fulfilled, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers who dreamed of intelligent machines coalesce into a recognized academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promise on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one of them beat a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or devise rules for specific tasks, such as understanding language. Others were inspired by the importance of learning in human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously be completed only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by the workings of brain cells, known as artificial neural networks. As a network processes training data, the connections between its parts are adjusted, building up an ability to interpret future data.
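
Here is that adjustment loop at toy scale, assuming NumPy is available; the network size, learning rate, and XOR training data are arbitrary choices for illustration, not details of any system described here. Data flows forward through two layers of weighted connections, and each pass nudges the weights to shrink the error on the training examples:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(size=(2, 8))  # connections: inputs -> hidden layer
    W2 = rng.normal(size=(8, 1))  # connections: hidden layer -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: data flows through the web of connections.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: adjust each connection to reduce the error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        W1 -= 0.5 * X.T @ grad_h

    print(out.round(2))  # should drift toward [[0], [1], [1], [0]]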

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Mark 1 Perceptron of 1958, for example, learned to distinguish different geometric shapes and was written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.
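
The perceptron’s learning rule fits in a few lines of Python, and a toy version also shows the limitation that the 1969 critique rested on: a single layer of adjustable connections can only draw a straight line between categories, so it masters AND but can never master XOR. The numbers below are illustrative, not historical:

    def train_perceptron(data, epochs=50, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in data:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                # Perceptron rule: shift weights toward misclassified examples.
                w[0] += lr * (target - pred) * x1
                w[1] += lr * (target - pred) * x2
                b += lr * (target - pred)
        return w, b

    def accuracy(data, w, b):
        return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
                   for (x1, x2), t in data) / len(data)

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    print(accuracy(AND, *train_perceptron(AND)))  # reaches 1.0: separable
    print(accuracy(XOR, *train_perceptron(XOR)))  # stays below 1.0: not separable

Stacking layers, as deep learning later did, removes that straight-line restriction, which is one reason the idea eventually came back.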

Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled by large data sets could give machines new powers of perception. Crunching that much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in computing power.
