AI chatbots have gotten big, and their ethical red flags have gotten bigger.


Each score is a window into an AI model, Solaiman says, not a perfect representation of how it will always behave. But she hopes to make it possible to identify and stop the harms AI can cause, because alarming cases have already surfaced, including players of the game AI Dungeon using GPT-3 to generate text describing sex scenes involving children. "This is an extreme case of what we cannot allow," Solaiman says.

Solaiman's latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022. That trend accelerated with Alphabet's AI teams at Google and DeepMind, and more broadly across companies working on AI after the staged release of GPT-2. Companies that treat their advances as trade secrets can also make cutting-edge AI development less accessible to marginalized researchers with limited resources, Solaiman says.

As more money is poured into large language models, closed releases are reversing a trend seen throughout the history of natural language processing. Researchers have traditionally shared details about training datasets, parameter weights, and code to promote reproducibility of results.

“We have less and less knowledge about what data systems were trained on or how they were evaluated, especially for the most powerful systems released as products,” says Alex Tamkin, a graduate student at Stanford University whose work focuses on large language models.

He credits people working in AI ethics with raising public awareness of why it’s dangerous to move fast and break things when technology is deployed to billions of people. Without that work, things could have been much worse in recent years.

In fall 2020, Tamkin co-chaired a symposium with OpenAI policy director Miles Brundage on the societal impact of large language models. The multidisciplinary panel stressed the need for industry leaders to set ethical standards and take steps such as running bias evaluations before deployment and avoiding certain use cases.

Tamkin believes that external AI auditing services need to grow alongside the companies building on AI, because internal assessments tend to fall short. He believes that participatory evaluation methods that involve community members and other stakeholders have great potential to increase democratic participation in shaping AI models.

Merve Hickok, director of research at the Center for AI Ethics and Policy at the University of Michigan, says it’s not enough to try to get companies to rein in the AI hype, regulate themselves, and adopt ethical principles. Protecting human rights means moving past conversations about what’s ethical and toward conversations about what’s legal, she says.

Hickok and Hanna of DAIR are watching the European Union finalize its AI Act this year to see how it treats text- and image-generating models. Hickok says she is particularly interested in how European lawmakers handle liability for harm involving models built by companies such as Google, Microsoft, and OpenAI.

“Some things need to be mandatory because we’ve seen over and over again that if they’re not mandatory, these companies keep breaking things and keep pushing profit over rights and profit over communities,” Hickok says.

As policy gets discussed in Brussels, the stakes remain high. The day after the error in Bard’s demo, a plunge in Alphabet’s stock price wiped out an estimated $100 billion in market capitalization. “This is the first time I’ve seen wealth destruction due to a large language model error of this magnitude,” Hanna says. She is not hopeful, however, that this will convince the company to slow its rush to launch. “I’m guessing it’s not really going to be a cautionary tale.”
