Spain's regional elections are still almost four months away, but Irene Larraz and her team at Newtral are ready to go. Every morning, half of Larraz's team at the Madrid-based media company monitors political speeches and debates, preparing to check politicians' claims. The other half, which debunks misinformation, scans the internet for viral falsehoods and works to infiltrate groups that spread lies. Once the May elections are over, a nationwide general election will have to be called before the end of the year, which is likely to bring a flood of lies online. "It's going to be tough," says Larraz. "We are already preparing."
The spread of disinformation and propaganda online means an uphill battle for fact-checkers around the world, who have to sift through and verify vast amounts of information in complex or rapidly changing situations, such as the Russian invasion of Ukraine, the COVID-19 pandemic, or election campaigns. That task has become even harder with the advent of chatbots built on large language models, such as OpenAI's ChatGPT, which can produce natural-sounding text at the click of a button, essentially automating the production of disinformation.
Faced with this asymmetry, fact-checking organizations are building their own AI-based tools to automate and speed up their work. It's far from a complete solution, but fact-checkers hope these new tools will at least keep the gap between them and their adversaries from widening too quickly, at a time when social media companies are scaling back their own moderation operations.
"The race between fact-checkers and those they are checking up on is an unequal one," says Tim Gordon, cofounder of Best Practice AI, an artificial intelligence strategy and governance consultancy, and a trustee of a UK-based fact-checking charity.
“Fact checkers are often tiny organizations compared to those who spread misinformation,” says Gordon. “Both the sheer scale of what generative AI can produce and the speed at which it can do so means this race is only going to get harder.”
Newtral began developing its multilingual AI model, ClaimHunter, in 2020, funded by profits from its television wing, which produces a show fact-checking politicians, as well as documentaries for HBO and Netflix.
Using the BERT language model, ClaimHunter's developers trained the system on 10,000 statements to recognize sentences that appear to contain factual claims, such as data, numbers, or comparisons. "We trained the machine to play the role of a fact-checker," says Newtral's chief technology officer, Rubén Míguez.
Simply identifying the claims by politicians and social media accounts that need to be checked is a difficult task in itself. ClaimHunter automatically detects political claims made on Twitter, while another application transcribes video and audio coverage of politicians into text. Both identify and highlight statements about public life that can be proven or disproven, filtering out ambiguous statements, questions, and opinions, and flag the rest for Newtral's fact-checkers to review.
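The claim-detection step described above is, at its core, a binary sentence classifier. Newtral's actual system is a fine-tuned multilingual BERT model; the sketch below is only a toy heuristic stand-in that illustrates the same filtering logic, using made-up rules (digits and comparison words suggest a checkable claim; questions and explicit opinions are discarded). All function names and rules here are illustrative assumptions, not Newtral's implementation.

```python
import re

# Comparison cue words (illustrative assumption, not ClaimHunter's real features).
COMPARISON_WORDS = {"more", "less", "higher", "lower", "increased", "decreased", "than"}

def is_checkworthy(sentence: str) -> bool:
    """Flag sentences that look like verifiable factual claims."""
    s = sentence.strip()
    if s.endswith("?"):
        return False  # questions are not claims
    if re.match(r"(?i)^(i think|i believe|in my opinion)", s):
        return False  # explicit opinions are filtered out
    has_number = bool(re.search(r"\d", s))
    has_comparison = any(w in s.lower().split() for w in COMPARISON_WORDS)
    return has_number or has_comparison

statements = [
    "Unemployment fell by 3 percent last year.",
    "Do you support the new budget?",
    "I think the minister is doing a great job.",
    "Spain spends more on healthcare than on defense.",
]
flagged = [s for s in statements if is_checkworthy(s)]
print(flagged)
# Flags the first and last sentences; the question and the opinion are dropped.
```

In a production pipeline, the heuristic function would be replaced by a fine-tuned transformer classifier scoring each transcribed sentence, with flagged sentences queued for human fact-checkers.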
The system isn't perfect and sometimes labels opinions as facts, but its mistakes help users continually retrain the algorithm. Míguez says it has cut the time needed to identify statements worth checking by 70 to 80 percent.