The platform is unlikely to meet its commitments to crack down on propaganda.
Twitter looks likely to stumble in its first major test when it comes to fighting propaganda and disinformation in the European Union.
Six months ago — well before it was taken over by
Elon Musk — the platform agreed to step up such efforts under an updated EU anti-disinformation charter, the European Commission's so-called code of practice on disinformation, which kicks in from Friday.
But Musk’s social media network appears sorely unprepared for the task after the tech mogul has, in a matter of weeks, lifted Twitter's ban on
COVID-19 pandemic misinformation, unbanned controversial users like former U.S. President
Donald Trump, and fired much of its staff. This points to a grim outlook for Twitter's compliance with the code, according to some of those involved in the charter's task force.
With Musk “personally promoting conspiracy theories and other forms of disinformation that have resulted in real harm in the past, it's hard to be optimistic,” said Carlos Hernández, head of public policy at the Spanish-language fact-checking organization Maldita.es, one of the signatories that has worked with the charter’s task force over the past six months.
In another sign of Musk's plans for the platform, Twitter’s board of experts advising the company on its content policy, the Trust and Safety Council, was disbanded this week.
Internal Market Commissioner Thierry Breton warned Musk in a video call in late November about Twitter’s “huge work ahead … to tackle disinformation with resolve.”
Over 30 signatories to the code of practice — including tech companies like Twitter, Meta, TikTok and Google — will have to ensure those peddling falsehoods can’t make money on their platforms, as well as label political ads and make more data available to researchers.
The code is nonbinding, but companies that sign up can use its provisions to offset some of their regulatory risks under the separate Digital Services Act (DSA), an online content law that carries fines of up to 6 percent of a company’s global revenue for infractions. The DSA will apply from summer 2023 for the largest companies and from early 2024 for the others.
Companies will then have until January 16 to hand the Commission a detailed report on their progress over the previous month on some of the more than 100 measures they pledged to follow. Afterward, the largest platforms will have to submit their reports every six months; smaller ones will submit once a year.
The report "will be a first test case on how serious the risk of disinformation is handled, including how adequate budget and staffing of these companies are to live up to their commitment under our [code] against disinformation," said European Commission Vice President Věra Jourová.
Under the DSA, very large online platforms will face new obligations to stem potential harms, such as the proliferation of disinformation and hoaxes during crises, or else face hefty fines. Repeated violations could also lead to being banned in the EU, though that threat is unlikely to be followed through on because, in other regulatory areas like competition, European enforcers have almost never used the full powers at their disposal. Instead, the bloc has a track record of incremental enforcement.
Pitched in 2018 as a tool to encourage tech companies to more forcefully tackle falsehoods, the EU’s code of practice on disinformation was strengthened this year with more precise objectives.
A new task force was also set up with signatories, including platforms, advertising bodies and nonprofits, as well as European media regulators and the EU’s foreign affairs department, to work on the charter, cooperate and exchange information about coordinated foreign-run manipulation campaigns during elections.
According to three people involved in the group, Camino Rojo, Twitter’s head of public policy for Spain, is still coming to those meetings after Twitter's Brussels office emptied out.
Yet, this hasn’t assuaged some concerns.
“The people who negotiated this for a year, who understood the code and the precise expectations, are all gone,” said Hernández. “It's impossible to substitute that knowledge in a few weeks.”
Neither Twitter nor Rojo replied to requests for comment.
Twitter’s content-moderation staff have either been fired or have left the company since Musk’s takeover in October. Under the disinformation charter, the company agreed to “dedicate adequate financial and human resources” to tackle disinformation and to outline in its report the teams working on the charter across the bloc and in the different European languages. It is unclear who, if anyone, at Twitter is now handling that work under the EU’s code of practice on disinformation.
But another member of the task force, who spoke on the condition of anonymity, pointed out that Twitter had already slowed its work on implementing the code before Musk's arrival.
“Twitter hasn't been very engaged in the process for a long time — even before Musk took over,” the person said.
A European Commission spokesperson said, “We expect Twitter to live up to their commitments and to report on their measures — including on tackling [
COVID-19] disinformation — in their first report, due in January.”