A study shows that Meta and X authorized ads containing hate speech and incitements to violence prior to Germany's federal elections.
A recent study by a German corporate responsibility group has found that the social media companies Meta (the parent company of Facebook) and X (formerly Twitter) approved advertisements carrying anti-Semitic and anti-Muslim messages in the run-up to Germany's federal elections.
For this research, the team submitted 20 ads featuring violent language and hate speech aimed at minority communities.
The findings indicated that X approved all 10 ads it received, while Meta greenlit 5 out of 10. The ads included calls for violence against Jews and Muslims, derogatory comparisons of Muslim refugees to 'viruses' and 'rodents,' and calls for their extermination or sterilization.
One advertisement even promoted the idea of setting fire to synagogues to 'stop the Jewish globalist agenda.' The researchers pointed out that while the ads were flagged and removed before publication, the results raise concerns about the content moderation practices of social media platforms.
The organization conducting the study has reported its findings to the European Commission, which is anticipated to initiate an investigation into possible breaches of the EU Digital Services Act by Meta and X. The timing of these revelations is particularly critical as Germany’s federal elections draw near, amplifying worries about the potential effects of hate speech on the democratic process.
Facebook previously came under scrutiny during the Cambridge Analytica scandal, in which a data analytics firm was found to have manipulated elections around the world using similar tactics, ultimately resulting in a $5 billion penalty for the company.
Moreover, Elon Musk, the owner of X, has been accused of meddling in the German elections, including by urging support for the far-right AfD party.
It remains unclear whether the approval of such advertisements reflects Musk's political leanings or his broader commitment to 'free speech' on X. Musk has dismantled X's content moderation framework and replaced it with a 'community notes' system, which lets users append context to posts and offer alternative perspectives.
Mark Zuckerberg, CEO of Meta, has introduced a similar system on Facebook, though he emphasized that AI-based detection systems would still be used to moderate hate speech and unlawful content.
However, this transition has raised concerns, especially in light of reports indicating that extremist right-wing content is increasingly being promoted on platforms like X and TikTok, influencing public opinion.
An economic downturn and a series of recent violent attacks attributed to Muslim migrants have exacerbated tensions.
It is uncertain whether the rise in extremist content is driven by real-world events or if social media algorithms are amplifying such messages to boost user engagement.
Regardless, both Musk and Zuckerberg have shown a willingness to reduce content moderation despite facing pressure from the European Union and German officials.
Whether this investigation will prompt the EU to impose stricter regulations on X, Facebook, and TikTok remains unclear, but it underscores the ongoing challenge of balancing free speech against curbing the spread of extremist content.
The study highlights the broader concern that hate speech often aligns with political motives, complicating the role of social media platforms in managing content.
While regulatory discussions may arise, the question of who should govern digital expression—private companies or government bodies—remains unresolved.
Like traditional media, social media platforms might encounter increasing scrutiny regarding their regulation of user-generated content.