- TikTok’s design is pushing users into misinformation rabbit holes about the Russia-Ukraine conflict.
- Researchers say old videos and shorter clips on the platform make it harder to fight misinformation.
- TikTok also shares less data about its algorithm and content moderation than Twitter or Meta do, they say.
Could you tell the difference between a scene of Russian troops dropping bombs on a Ukrainian city and footage taken from a video game with a generic explosion soundtrack?
Many people can’t, if share counts on social media are anything to go by.
The constantly refreshing nature of social apps means it’s difficult to quantify misinformation during a fast-moving event such as Russia’s invasion of Ukraine.
For experts monitoring social media, TikTok is emerging as particularly problematic.
A video purporting to show Russian missiles in Ukraine was actually footage from the video game Arma III; it racked up over 6.2 million views before it was taken down. Similar clips were shared to Facebook and Twitter but garnered fewer views before removal.
Another video using audio from the Beirut explosion in 2020 clocked over 5 million views in 12 hours. A TikTok of a woman crying with her child posted in early February — weeks before Russia’s invasion of Ukraine — had a spike in views and comments expressing support for Ukraine.
A post from Russian state media peddling an inaccurate narrative that Ukrainian President Volodymyr Zelensky had fled the capital Kyiv stayed live until at least March 2 and racked up more than 1 million views.
One problem is that, more so than other platforms, old posts are re-upped by TikTok’s algorithm if they’re attracting fresh attention.
It’s also easier for content creators to overlay someone else’s audio onto their videos.
“From a disinformation perspective, it’s the most dangerous platform,” said Viktoras Daukšas, who heads up the Debunk EU initiative, which monitors misinformation and disinformation across Eastern Europe.
According to Debunk EU’s analysis of conspiracy theories on TikTok, users can enter a “rabbit hole” of misinformation without an exit in just five to six minutes.
That means a user can go from watching a couple of videos peddling misinformation to a feed in which false information and conspiracy theories make up 90% of the content.
TikTok is “more heavily invested in capturing users” for long periods of time than other social networks, according to Fabio Belafatti, a senior researcher at the Policy Impact Lab.
Over the past week, videos of Ukrainians sharing details of the unfolding conflict, and of journalists explaining it, have gone viral on the platform.
Users are more prone to take TikTok clips at face value, researchers said, because video feels more emotive and trustworthy than text.
“People seem to think that it’s harder to fake or just are less inclined to think that it might be misleading,” Abbie Richards, a disinformation researcher at Media Matters, told Insider.
Researchers say they can’t access TikTok moderation data
Rabbit holes happen faster on TikTok because a user tends to watch more videos in a shorter time than on YouTube or Facebook, where videos still tend to be minutes rather than seconds long.
TikTok recently increased its maximum video length to 10 minutes, but the vast majority of its videos are still under three minutes. And the longer users watch a video, the more likely it is to be recommended to others.
“With any platform, when watch time is prioritized over accuracy, that incentivizes misinformation,” Richards said.
It's also a lot easier for an account without a large following to go viral on TikTok than on some of the other big social networks, because its algorithm pushes obscure content into users' main feeds.
With raw footage flooding the platform, misinformation researchers have no idea how much content moderation is going on behind the scenes, because TikTok keeps real-time data on content moderation largely inaccessible.
Unlike some other popular sites, users don’t need to know or follow others to see their content pop up on a “For You” page, the curated stream at the center of the app.
Nonprofit and research organizations work with Facebook, Twitter, and YouTube to combat misinformation on their platforms. Researchers such as those with the Atlantic Council’s Digital Forensic Research Lab can access real-time data and make removal recommendations.
“I’ve never heard of the same thing with TikTok,” Daukšas said.
Russian state-backed outlets are skirting an EU news ban
All of this has made TikTok a more valuable platform for Russian state-media outlets such as RT and RIA Novosti than platforms like YouTube, according to new research published this week by the Institute for Strategic Dialogue.
Like other tech platforms, TikTok has now removed the accounts of the state-controlled Russian outlets RT and Sputnik for users in the EU, following an EU directive to clamp down on pro-Putin propaganda.
But the Institute for Strategic Dialogue found several state-backed news organizations have skirted the geoblock in the EU.
RT’s editor-in-chief, Margarita Simonyan, who has 175,000 followers on TikTok, and Sputnik’s Spanish-language news account Sputnik Mundo, which has 52,000 followers, were both still accessible at the time of writing. Both are likewise visible on Twitter but carry warnings to users that they are state controlled or propaganda outlets.
TikTok has slowly been adding labels to some state-controlled media pages but hasn’t specified how far those labels will extend.
On Sunday, the company announced a temporary ban on users in Russia posting new content on the app.
The move is a response to Putin's new "fake news" law, which came into force last week and imposes fines or jail terms of up to 15 years for spreading information deemed false about the Russian military.
It's a remarkable shift in the information ecosystem of the conflict, because orchestrated anti-Ukrainian content, including on TikTok, has been part of Russia's war arsenal.
It’s possible the changes will bring TikTok closer in line with other social-media platforms, where Ukraine is “winning the information war,” according to Laura Edelson, colead of the Cybersecurity for Democracy project at NYU.
A spokesperson for TikTok said: “We care deeply about the safety and well-being of our community, and we invest at scale in our global safety operations to detect and remove harmful content. TikTok has thousands of people working on safety all around the world.
“We continue to respond to the war in Ukraine with increased safety and security resources to detect emerging threats and remove harmful misinformation. We also partner with independent fact-checking organizations to support our efforts to help TikTok remain a safe and authentic place.”