Platform cites misinformation policy after influencer links Israel to global attacks in now-deleted video
TikTok has removed a video posted by beauty entrepreneur Huda Kattan that accused Israel of orchestrating global attacks, including the September 11 attacks in the United States and the October 7 attacks in Israel.
The video prompted criticism online, including renewed calls from some users to boycott her products.
TikTok stated that the removal was in accordance with its policy against harmful misinformation that could significantly affect individuals or society.
Kattan, who has a substantial following on social media and has previously voiced criticism of Israel's military actions in Gaza, shared the now-deleted video amid heightened global attention on the Israel-Gaza conflict.
The post, widely circulated before removal, was described by commentators as conspiratorial and antisemitic.
The incident has placed renewed scrutiny on content moderation practices across social media platforms.
In contrast to platforms such as X and Facebook, which have reportedly reduced content moderation efforts in recent years, TikTok cited its responsibility to maintain a shared reality based on facts.
The platform's policy aims to address misinformation that may lead to public harm.
Kattan's case has also led to broader discussions about corporate relationships with influencers.
Some businesses associated with her brand have faced pressure from consumers and advocacy groups to reconsider partnerships.
The episode highlights the growing role of social media influencers as alternative sources of news and commentary, particularly among younger users.
Industry research shows a steady rise in the number of people relying on platforms like TikTok, YouTube, and podcasts as their primary sources of information, as traditional television and print media see declining engagement.
Media analysts have drawn parallels between digital media consumption and dietary habits, noting that users tend to consume content that aligns with their preferences, often reinforcing existing beliefs.
Algorithms, designed to personalize feeds, can limit exposure to diverse viewpoints.
Additionally, the use of artificial intelligence tools to deliver news and information is expanding.
Younger demographics are increasingly turning to AI chatbots, though accuracy and reliability remain ongoing concerns.
Major AI developers are pursuing licensing agreements with news publishers to integrate verified information into their systems.
However, the tools have yet to achieve consistent performance in fact-based responses, prompting caution among educational institutions and professional sectors.
The evolving media landscape has fueled debate over who bears responsibility for moderating content online: platforms, content creators, or users.
As alternative news formats continue to grow in popularity, scrutiny of how information is shared, verified, and consumed remains a central focus for technology companies, regulators, and the public.