Humans and AI will understand each other better than ever

Artificial intelligence has promised a lot, but something stood in the way of its successful use by billions of people: the frustrating struggle between humans and machines to understand each other in natural language.

This is now changing with the advent of large language models built on the transformer architecture, one of the most important breakthroughs in artificial intelligence of the last 20 years.

Transformers are neural networks designed to model sequential data and predict what should come next in the sequence. At the core of their success is the idea of “attention”, which allows the transformer to “pay attention” to the most salient parts of the input rather than trying to process everything uniformly.
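For readers who want to see the mechanism itself, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the original transformer paper (“Attention Is All You Need”, Vaswani et al., 2017). The function and toy inputs are illustrative, not from this article.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    # Similarity of each query to every key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each query's scores become weights summing to 1,
    # which is how the model "pays attention" to the most salient positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: a weighted mix of the value vectors for each query position.
    return weights @ V

# Toy example: a 4-token sequence with 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 8)
```

In a full transformer, Q, K, and V are learned projections of the token embeddings, and many such attention heads run in parallel, layer after layer.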

These new models have greatly improved natural language applications such as language translation, summarization, information retrieval, and, most importantly, text generation. In the past, each of these tasks required its own purpose-built architecture; transformers now provide state-of-the-art results across the board.
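As a rough illustration of that convergence, the sketch below uses the open-source Hugging Face transformers library (my choice of toolkit, not one named in the article) to drive two of these tasks through the same pretrained-transformer interface.

```python
# Assumes: pip install transformers torch
from transformers import pipeline

# One high-level interface, backed by pretrained transformer models,
# covers tasks that once each needed a bespoke architecture.
summarizer = pipeline("summarization")
generator = pipeline("text-generation")

text = (
    "Large language models based on the transformer architecture have "
    "greatly improved translation, summarization, and text generation."
)
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
print(generator("Artificial intelligence will", max_length=20)[0]["generated_text"])
```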

While Google pioneered the transformer architecture, OpenAI was the first to demonstrate its power at scale, with the 2020 release of GPT-3 (Generative Pre-trained Transformer 3). At the time, it was the largest language model ever created.

GPT-3’s ability to generate human-like text caused a wave of excitement. And this was just the beginning: large language models are now improving at a truly impressive rate.

A model’s “number of parameters” is usually taken as a rough indication of its capabilities. So far, we have seen that models perform better on a wide range of problems as the number of parameters increases. The parameter counts of the largest models have grown by almost an order of magnitude every year for the past five years, so it’s no surprise that the results have been impressive. However, these very large models are expensive to train and run.

What is really remarkable is that over the past year they have become smaller and much more efficient. We are now seeing impressive performance from smaller models that are much cheaper to run. Many of them are open source, further lowering the barriers to experimentation and deployment of these new AI models. This, of course, means they will become more widely integrated into the apps and services you use every day.

They will increasingly be able to generate very high-quality text, images, audio, and video. This new wave of artificial intelligence will change the idea of what computers can do for their users, releasing a flood of cutting-edge capabilities into existing products and radically new ones.

What excites me most is language. Throughout the history of computing, people have had to painstakingly enter their thoughts through interfaces designed for technology, not for humans. With this wave of breakthroughs, in 2023 we will begin to talk to machines in our own language, fluently and comprehensively. Eventually, we will have truly free-flowing conversational interaction with all our devices. This promises to fundamentally redefine human-machine interaction.

Over the past few decades, we have rightly focused on teaching people how to program—essentially teaching the language of computers. This will remain important. But in 2023, we will begin to flip that script, and computers will speak our language. This will greatly expand access to tools for creativity, learning, and play.

As AI finally enters the age of utility, the opportunities for new AI-focused products are huge. Soon we will live in a world where, regardless of your programming ability, the main limitations are simply curiosity and imagination.
