Facebook owner Meta is ending its third-party fact-checking programme and will instead rely on its users to flag misinformation, as the social media giant prepares for Donald Trump’s return as president.
The $1.6tn company on Tuesday said it would “allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations” and “take a more personalised approach to political content”.
“It’s time to get back to our roots around free expression on Facebook and Instagram,” Mark Zuckerberg, Meta’s chief executive and co-founder, said in a video post.
Trump was sharply critical of Zuckerberg during last year’s US presidential election campaign, suggesting that if Meta interfered in the 2024 vote he would “spend the rest of his life in prison”.
But the Facebook founder has sought to rebuild relations with the president-elect following his November victory, including visiting him at his Mar-a-Lago residence in Florida.
On Monday, Meta made a further overture to the incoming US administration by appointing UFC chief executive and prominent Trump supporter Dana White to its board of directors.
White will sit on Meta’s board alongside another Trump ally, tech investor Marc Andreessen, who has long pushed for the company to loosen its policing of online content.
Zuckerberg said the complexity of Meta’s content moderation system, which was expanded in December 2016 following Trump’s first election win, had introduced “too many mistakes and too much censorship”.
Starting in the US, Meta will move to a so-called “community notes” model, similar to the one employed by Elon Musk’s X, which allows users to add context to controversial or misleading posts. Meta itself will not write community notes.
Meta said there was “no immediate plan” to end third-party fact-checking and introduce community notes outside the US. It is unclear how such a system might comply with regimes such as the EU’s Digital Services Act and the UK’s Online Safety Act, which require online platforms to put measures in place to tackle illicit content and safeguard users.
Zuckerberg added that Meta would also change its systems to “dramatically reduce” the amount of content that its automated filters remove from its platforms.
That includes lifting restrictions on topics such as immigration and gender so that its systems focus on “illegal and high-severity violations” such as terrorism, child exploitation and fraud, as well as on content related to suicide, self-injury and eating disorders.
He acknowledged that the changes would mean Meta “is going to catch less bad stuff”, but argued the trade-off was worthwhile to reduce the number of “innocent people’s” posts that were taken down.
The changes bring Zuckerberg into closer alignment with Musk, who slashed content moderation after buying the social media platform, then called Twitter, in 2022.
“Just like they do on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings,” Meta said in a blog post.
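Neither company has published full details in this announcement of how that agreement is measured, but the requirement describes a bridging-style rule: a note surfaces only when raters who usually disagree converge on finding it helpful. The sketch below is purely illustrative, with hypothetical cluster labels, function names and thresholds; X’s open-sourced ranking system infers rater perspectives via matrix factorisation rather than fixed clusters.

```python
from collections import defaultdict

# Illustrative sketch of a bridging-style agreement rule: a community note
# is shown only if raters from *different* perspective clusters rate it
# helpful. Cluster labels, ratings and the threshold are hypothetical;
# this is not Meta's or X's actual algorithm.

def note_is_shown(ratings, min_helpful_share=0.6):
    """ratings: list of (rater_cluster, is_helpful) tuples."""
    by_cluster = defaultdict(list)
    for cluster, is_helpful in ratings:
        by_cluster[cluster].append(is_helpful)

    # Require raters from at least two distinct clusters to have weighed in ...
    if len(by_cluster) < 2:
        return False

    # ... and require the "helpful" share *within each cluster* to clear the
    # threshold, so one-sided ratings alone cannot surface a note.
    return all(
        sum(votes) / len(votes) >= min_helpful_share
        for votes in by_cluster.values()
    )

ratings = [("cluster_a", True), ("cluster_a", True),
           ("cluster_b", True), ("cluster_b", False), ("cluster_b", True)]
print(note_is_shown(ratings))  # True: both clusters lean "helpful"
```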
“This is cool,” Musk said in an X post referencing Meta’s changes.
Joel Kaplan, the prominent Republican who Meta said last week would take over from Sir Nick Clegg as its president of global affairs, told Fox News on Tuesday that the company’s third-party fact-checkers had been “too biased”.
In a reference to Trump’s return to the White House on January 20, Kaplan added: “We’ve got a real opportunity now, we’ve got a new administration and a new president coming in who are big defenders of free expression and that makes a difference.”
As part of the changes announced on Tuesday, Meta also said it would move its US-based content moderation staff from California to Texas. “I think that it will help us build trust to do this work in places where there is less concern about the bias of our teams,” Zuckerberg said.
Meta’s changes were slammed by online safety campaigners. Ian Russell, whose 14-year-old daughter Molly took her own life after viewing harmful content on sites including Instagram, said he was “dismayed” by the plans.
“These moves could have dire consequences for many children and young adults,” he said.
Zuckerberg first introduced third-party fact-checking as part of a raft of measures in late 2016 designed to address criticism of rampant misinformation on Facebook.
He said at the time that the company needed “stronger detection” of misinformation and would work with the news industry to learn from journalists’ fact-checking systems.
Meta has said it now spends billions of dollars a year on its safety and security systems, employing or contracting tens of thousands of people around the world.
But on Tuesday, Zuckerberg blamed governments and “legacy media” for pushing his company to “censor more and more”.
He said Meta would work with the Trump administration to “push back on governments around the world that are going after American companies and pushing to censor more”.
He pointed to restrictive regimes in China and Latin America, and highlighted what he called an “ever-increasing number” of European laws that were “institutionalising censorship and making it difficult to build anything innovative there”.
Meta shares fell 2 per cent on Tuesday morning to $616.11.