Two years ago, Twitter launched what was arguably the tech industry’s most ambitious attempt at algorithmic transparency. Its researchers published papers showing that Twitter’s AI system for cropping images in tweets favored white faces and women, and that tweets from the political right in several countries, including the US, UK, and France, received more algorithmic amplification than those from the left.
By early October last year, as Elon Musk faced a court deadline to close his $44 billion acquisition of Twitter, the company’s latest research was nearly complete. It showed that a machine learning program incorrectly demoted some tweets mentioning any of 350 terms related to identity, politics, or sexuality, including “gay,” “Muslim,” and “deaf,” because a system designed to limit views of tweets slurring marginalized groups also suppressed posts celebrating those communities. The finding, and a partial fix Twitter developed, could help other social platforms make better use of AI to moderate content. But would anyone ever get to read the study?
Musk had endorsed algorithmic transparency months earlier, saying he wanted to “open source” Twitter’s content recommendation code. At the same time, Musk had said he would reinstate popular accounts permanently suspended for tweets that broke the rules. He had also mocked some of the communities Twitter’s researchers were trying to protect and complained about the “woke mind virus.” And, ominously for the researchers, Musk’s AI scientists at Tesla tended not to publish their research.
Twitter’s AI ethics researchers eventually decided their prospects under Musk were too dim to wait for their work to appear in an academic journal, or even to finish writing a company blog post. So, less than three weeks before Musk finally took ownership on October 27, they published the moderation-bias study on arXiv, the open-access service where scientists post research that has not yet been peer reviewed.
“We were justifiably worried about what this leadership change would entail,” says Rumman Chowdhury, who at the time led Twitter’s Machine Learning Ethics, Transparency, and Accountability group, known as META. “There is a lot of ideology and misunderstanding about the kind of work ethics teams do, framing it as part of some liberal agenda rather than as real scientific work.”
Concerns about life under Musk prompted researchers at Cortex, Twitter’s machine learning and research organization, to quietly release a series of studies much earlier than planned, according to Chowdhury and five other former employees. The results covered topics including misinformation and recommendation algorithms. The frantic scramble, and the papers it produced, have not previously been reported.
The researchers wanted to preserve the knowledge gained at Twitter so that anyone could use it to make other social networks better. “I’m very invested in companies being more open about the issues they have and trying to lead the way, and also showing people that it’s doable,” says Kyra Yee, lead author of the moderation paper.
Twitter and Musk did not respond to a detailed emailed request for comment on this story.
The team behind another study worked through the night making final edits before hitting “publish” on arXiv the day Musk took Twitter, says one researcher, speaking anonymously out of fear of retribution from Musk. “We knew the runway would close when the giant Elon plane landed,” the source says. “We knew we needed to do this before the acquisition closed, so we could stick a flag in the ground and say it exists.”
The fears were well founded. Most of Twitter’s researchers have since lost their jobs or resigned under Musk. On the META team, Musk fired everyone except one person on November 4, and the remaining member, cofounder and research lead Luca Belli, left later that month.