TikTok has published its latest Transparency Report, as required under the EU Code of Practice, which outlines all the enforcement actions it undertook within EU member states over the final six months of last year.
And there are some interesting notes regarding the impact of content labeling, the rise of AI-generated or manipulated media, foreign influence operations, and more.
You can download TikTok's full H2 2024 Transparency Report here (warning: it's 329 pages long), but in this post, we'll take a look at some of the key notes.
First off, TikTok reports that it removed 36,740 political ads in the second half of 2024, in line with its policies against political advertising in the app.
Political ads aren't permitted on TikTok, though as that number suggests, this hasn't stopped many political groups from seeking to use the reach of the app to broaden their messaging.
That highlights both the growing influence of TikTok more broadly, and the continued need for vigilance in managing potential misuse by these groups.
TikTok also removed almost 10 million fake accounts in the period, as well as 460 million fake likes that had been allotted by these profiles. These could have been a means to manipulate content ranking, and the removal of this activity helps to ensure authentic interactions in the app.
Well, "authentic" in terms of it coming from real, actual people. It can't do much about you liking your friend's crappy post because you'd feel bad if you didn't.
In terms of AI content, TikTok also notes that it removed 51,618 videos in the period for violations of its synthetic media and AI-generated content rules.
"In the second half of 2024, we continued to invest in our work to moderate and provide transparency around AI-generated content, by becoming the first platform to begin implementing C2PA Content Credentials, a technology that helps us identify and automatically label AIGC from other platforms. We also tightened our policies prohibiting harmfully misleading AIGC and joined forces with our peers on a pact to safeguard elections from deceptive AI."
Meta recently reported that AI-generated content wasn't a major factor in its election integrity efforts last year, with ratings on AI content related to elections, politics, and social topics representing less than 1% of all fact-checked misinformation. Which, on balance, is probably close to what TikTok saw as well, though that 1%, at such a massive scale, still represents a lot of AI-generated content being assessed and rejected by these apps.
This figure from TikTok puts that in some perspective, while Meta also reported that it rejected 590k requests to generate images of U.S. political candidates within its generative AI tools in the month leading up to election day.
So while AI content hasn't been a major factor as yet, more people are at least attempting it, and you only need a few of these hoax images and/or videos to catch on to make an impact.
TikTok also shared insights into its third-party fact-checking efforts:
"TikTok recognizes the important contribution of our fact-checking partners in the fight against disinformation. In H2 we onboarded two new fact-checking partners and expanded our fact-checking coverage to a number of wider-European and EU candidate countries with existing fact-checking partners. We now work closely with 14 IFCN-accredited fact-checking organizations across the EU, EEA and wider Europe who have technical training, resources, and industry-wide insights to impartially assess online misinformation."
Which is interesting in the context of Meta moving away from third-party fact-checking, in favor of crowd-sourced Community Notes to counter misinformation.
TikTok also notes that content shares were reduced by 32%, on average, among EU users when an "unverified claim" notification was displayed to indicate that the information presented in the clip may not be true.
In fairness, Meta has also shared data which suggests that the display of Community Notes on posts can reduce the spread of misleading claims by 60%. That's not a direct comparison to this stat from TikTok (TikTok's measuring total shares by count, while Meta's study looked at overall distribution), but it could be around about the same result.
Though the problem with Community Notes is that most are never displayed to users, because they don't gain cross-political consensus from raters. As such, TikTok's stat here actually does indicate that there's value in third-party fact checks, and/or "unverified claim" notifications, as a means to reduce the spread of potentially misleading claims.
For additional context, TikTok also reports that it sent 6k videos uploaded by EU users to third-party fact-checkers within the period.
That points to another issue with third-party fact-checking: it's very difficult to scale this approach, meaning that only a tiny amount of content can actually be reviewed.
There's no definitive right answer, but the data here does suggest that there's at least some value in maintaining an impartial third-party fact-checking presence to monitor some of the most harmful claims.
There's a heap more in TikTok's full report (again, over 300 pages), including a range of insights into EU-specific initiatives and enforcement programs.