London Daily

Focus on the big picture.
Thursday, Dec 04, 2025

The Year That Changed the Internet

In 2020, the need to contain misinformation about COVID-19 pushed Facebook and Twitter into a role they never wanted—arbiters of the truth.
For years, social-media platforms had held firm: Just because a post was false didn’t mean it was their place to do anything about it. But 2020 changed their minds.

At the end of May, Twitter for the first time labeled a tweet from the president of the United States as potentially misleading. After Donald Trump falsely insisted that mail-in voting would rig the November election, the platform added a message telling users to “get the facts.” Within a day, Mark Zuckerberg, Facebook’s founder and CEO, had appeared on Fox News to reassure viewers that Facebook had “a different policy” and believed strongly that tech companies shouldn’t be arbiters of truth over what people say online.

But come November, between the time polls closed and the race was called for Biden, much of Trump’s Facebook page, as well as more than a third of Trump’s Twitter feed, was plastered with warning labels and fact-checks, a striking visual manifestation of the way that 2020 has transformed the internet. Seven months ago, that first label on a Trump tweet was a watershed event. Now it’s entirely unremarkable.

Among the many facets of life transformed by the coronavirus pandemic was the internet itself. In the face of a public-health crisis unprecedented in the social-media age, platforms were unusually bold in taking down COVID-19 misinformation. Instead of their usual reluctance to remove a post just because it was false, they were loudly touting their aggressive and sweeping actions.

They were rewarded for it: For about a week in March, some of the companies’ usual critics cheered their newfound sense of responsibility. Some suggested that the “techlash” against powerful internet giants was over.

That enthusiasm didn’t last, but mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm. During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year.

Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate. These actions had a domino effect, as podcast platforms, on-demand fitness companies, and other websites banned QAnon postings. Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.

As if to make clear how far things had come since 2016, Facebook and Twitter both took unusually swift action to limit the spread of a New York Post article about Hunter Biden mere weeks before the election. By stepping in to limit the story’s spread before it had even been evaluated by any third-party fact-checker, these gatekeepers trumped the editorial judgment of a major media outlet with their own.

Gone is the naive optimism of social-media platforms’ early days, when—in keeping with an overly simplified and arguably self-serving understanding of the First Amendment tradition—executives routinely insisted that more speech was always the answer to troublesome speech. Our tech overlords have been doing some soul-searching.

As Reddit CEO Steve Huffman said, when doing a PR tour about an overhaul of his platform’s policies in June, “I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency.”

Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.

The strong protection of even literal Nazism is the most famous emblem of America’s free-speech exceptionalism. But one year and one pandemic later, Zuckerberg’s thinking, and, with it, the policy of one of the biggest speech platforms in the world, had “evolved.”

The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines. This might seem an obvious move; the virus has killed more than 315,000 people in the U.S. alone, and widespread misinformation about vaccines could be one of the most harmful forms of online speech ever.

But until now, Facebook, wary of political blowback, had refused to remove anti-vaccination content. The pandemic showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.

As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached. It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues.

Facebook is reversing some of these steps now, but it cannot make people forget that this toolbox exists. Twitter is keeping, and even expanding, a number of election-related changes meant to encourage more thoughtful sharing. Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and it is introducing pop-up nudges to encourage users to think before posting comments that might be offensive.

U.S.-based platforms have long neglected the by-products of their presence in global markets. But that trend, too, began to reverse in 2020. Twitter removed tweets from Brazilian President Jair Bolsonaro and Venezuelan President Nicolás Maduro for violating its COVID-19 policies.

Facebook rolled out a suite of election-specific policies in Myanmar for its election, including labeling disputed claims of voting fraud. (It turns out that expressing frustration in all caps at being labeled is a reaction that crosses cultures.) In early December, Twitter put a warning label for the first time on a tweet of a prominent Indian politician whom BuzzFeed described as “notorious for posting misinformation.” The bar is low enough that steps like these can be considered progress.

Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them. Social-media companies still devote far too little attention and resources to markets outside the United States and languages other than English. Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown. News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.

The fundamental opacity of these complex systems remains. When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.

Platforms have increased the number of people working on content moderation in the past few years, but these overworked contractors were heavily outgunned even before many were sent home at the start of the pandemic and unable to work at full capacity. Platforms also use AI to catch content that breaks their rules, and the transparency reports they release boast of an ever higher “proactive detection rate,” but these tools are brittle and err often.

And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation. As some platforms cracked down on harmful content, others saw this as an opportunity and marketed themselves as “free speech” refuges for aggrieved users. Sure enough, content removed by some platforms started to overflow and spread onto these others.

Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place. And even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”

Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.