What chatbot blunders say about the future of AI

What a difference seven days makes in the world of generative AI.

Last week, Microsoft CEO Satya Nadella gleefully told the world that Bing’s new AI-powered search engine would “make Google dance,” challenging the company’s long-standing dominance in web search.

The new Bing uses a little thing called ChatGPT (you may have heard of it), which represents a significant leap in computers’ ability to process language. Thanks to advances in machine learning, it essentially figured out how to answer all sorts of questions by ingesting trillions of lines of text, much of it scraped from the internet.

Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. And Baidu, China’s largest search engine, said it was working on similar technology.

But Nadella might want to watch where his company’s fancy footwork is taking it.

In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex, comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, produced financial summaries, offered product recommendations that collated information from numerous reviews, and advised on whether an item of furniture would fit in a minivan by comparing dimensions posted online.

WIRED had some time during the launch to put Bing to the test, and while it seemed skilled at answering many types of questions, it was decidedly glitchy and even unsure of its own name. And as one keen-eyed pundit noticed, some of the results that Microsoft showed off were less impressive than they first seemed. Bing appeared to make up some information on the travel itinerary it generated, and it left out details that no person would be likely to omit. The search engine also mixed up Gap’s financial results by mistaking gross margin for unadjusted gross margin, a serious error for anyone relying on the bot to do what might seem like the simple task of summarizing the numbers.

More glitches have surfaced this week as the new Bing has been made available to more beta testers. They appear to include arguing with a user about what year it is and experiencing an existential crisis when pushed to prove its own sentience. Google’s market capitalization dropped by a staggering $100 billion after someone noticed errors in answers generated by Bard in the company’s demo video.

Why are these tech titans making such blunders? It comes down to the strange way ChatGPT and similar AI models really work, and the extraordinary hype of the current moment.

What’s confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it thinks should follow your question, based on statistical representations of characters, words, and paragraphs. OpenAI, the startup behind the chatbot, honed that core mechanism to provide more satisfying answers by having humans give positive feedback whenever the model generates responses that seem correct.
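As a rough illustration of that guessing process, here is a toy sketch in Python. It builds a word-frequency table from a tiny invented corpus and extends a prompt by sampling statistically likely next words. Real models like ChatGPT use enormous neural networks rather than a lookup table, and the corpus and function names here are made up for the example, but the basic move is the same: predict a plausible continuation from statistics, without ever checking facts.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for the trillions of lines of text a real model ingests.
corpus = (
    "the new bing uses chatgpt . chatgpt answers questions by guessing "
    "the next word . the next word is chosen from statistics gathered "
    "over huge amounts of text ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(seed, length=12):
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # no statistics for this word, so stop generating
        candidates, weights = zip(*counts.items())
        # Sample in proportion to observed frequency: an educated guess,
        # with no notion of whether the continuation is actually true.
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("chatgpt"))
```

A fluent-sounding continuation falls out of the statistics alone, which is exactly why the output can read confidently while being wrong.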

ChatGPT can seem impressive because the process creates a convincing illusion of understanding, and that can work well for some use cases. But the same process will “hallucinate” false information, an issue that may be one of the biggest problems in tech right now.

The intense hype and anticipation surrounding ChatGPT and similar bots adds to the danger. When well-funded startups, some of the world’s most valuable companies, and the most famous tech leaders all say chatbots are the next big thing in search, many people will take it as gospel, prompting those who started the chatter to double down with more predictions of AI omniscience. And chatbots are not the only ones that can go astray by doing pattern matching without fact-checking.
