TikTok to Auto-Label AI-Generated Content in Collaboration with Misinformation NGOs
AI labeling part of a larger push to stop "misinformation" - but who gets to decide what is true?
TikTok has announced plans to automatically label AI-generated content uploaded to its platform by partnering with the Coalition for Content Provenance and Authenticity (C2PA) and implementing Content Credentials technology.
While TikTok has required creators to label realistic AI-generated content for over a year, the new system extends auto-labeling to content created on other platforms. The rollout will be gradual, starting with images and videos; audio support is planned for later.
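Content Credentials work by attaching a signed provenance manifest to the media file itself; in JPEGs, the C2PA specification carries the manifest store in APP11 marker segments as a JUMBF box. As a rough illustration only (the segment layout follows my reading of the C2PA spec, and real verification requires cryptographically validating the signed manifest, not just spotting it), a heuristic scan for embedded Content Credentials might look like this:

```python
def has_c2pa_hint(jpeg_bytes: bytes) -> bool:
    """Heuristically scan a JPEG's APP11 (0xFFEB) segments for a C2PA
    JUMBF payload. Detects only the *presence* of embedded Content
    Credentials; it does NOT validate the manifest's signature."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    n = len(jpeg_bytes)
    while i + 4 <= n and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, stop scanning
            break
        # Segment length field counts itself plus the payload that follows.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11 segments carry JUMBF boxes per C2PA
            segment = jpeg_bytes[i + 4:i + 2 + length]
            if b"c2pa" in segment:
                return True
        i += 2 + length
    return False


# Synthetic example: SOI + one APP11 segment whose payload mentions "c2pa".
fake = b"\xff\xd8" + b"\xff\xeb" + (10).to_bytes(2, "big") + b"JPc2pa\x00\x00"
```

On the synthetic bytes above `has_c2pa_hint(fake)` returns True, while a bare SOI/EOI file returns False; production code would instead hand the file to the official C2PA tooling for full signature validation.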
In addition to AI labeling, TikTok will roll out media literacy resources developed in collaboration with MediaWise and WITNESS, aiming to enhance users’ ability to discern authentic content. This initiative comes amidst concerns about the rise of AI-generated deepfakes, exemplified by recent incidents involving fraudulent cryptocurrency exchanges and public figures like Warren Buffett being targeted by realistic deepfake content.
The Problem With Auto-Labeling
AI content detectors are prone to false positives: flagging harmless content as violating or synthetic when it is neither. That can censor legitimate speech, restrict creative expression, and shut down productive conversations.
False positives arise because language is complex, cultural nuance is easy to miss, and online trends evolve faster than the models trained to police them. The result is frustration for creators and users alike, and detection systems that need continuous refinement before they can moderate content with real accuracy and nuance.
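The false-positive problem is worse than raw accuracy figures suggest, because genuinely AI-generated uploads are a minority of content. A quick Bayes' rule sketch with hypothetical numbers (the 95%/5%/2% figures below are illustrative, not measurements of any real detector) shows how a seemingly accurate detector can still mislabel mostly human-made work:

```python
def false_discovery_rate(sensitivity: float, fpr: float, prevalence: float) -> float:
    """Fraction of flagged items that are NOT actually AI-generated,
    i.e. P(not AI | flagged) by Bayes' rule."""
    true_flags = sensitivity * prevalence      # AI content correctly flagged
    false_flags = fpr * (1.0 - prevalence)     # human content wrongly flagged
    return false_flags / (true_flags + false_flags)


# Hypothetical detector: catches 95% of AI content with a 5% false-positive
# rate, applied to a feed where only 2% of uploads are AI-generated.
fdr = false_discovery_rate(0.95, 0.05, 0.02)
# 0.049 / (0.019 + 0.049) ≈ 0.72: roughly 72% of all flags land on human-made content.
```

The low base rate dominates: even a detector that is right 95% of the time on both classes produces flags that are mostly wrong when AI content is rare.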
What Is the Coalition for Content Provenance and Authenticity (C2PA)?
The Coalition for Content Provenance and Authenticity (C2PA) is a project of the Joint Development Foundation¹ that develops an open technical standard to help publishers, creators, and consumers establish the provenance and authenticity of various types of media.
Founded in February 2021, the C2PA counts Microsoft and Adobe as its founding members, with additional participation from companies like Arm, BBC, Intel, and Truepic. The organization’s focus on providing opt-in, flexible solutions for managing provenance information underscores its commitment to empowering content creators while combating “misinformation” by claiming to ensure transparency and accountability in digital content.
However, can an organization mainly run by Big Tech companies be fully objective and speak to the needs of average people?
What Is MediaWise?
MediaWise, a digital media literacy project, is part of The Poynter Institute, a nonprofit journalism school and research organization dedicated to enhancing journalism through training, research, and fact-checking initiatives. MediaWise's mission is to “empower diverse communities” by equipping them with the skills to identify “misinformation” and supposedly promote factual information for the betterment of democracy.
However, fact-checking organizations like MediaWise have come under fire for alleged bias against conservatives, as this excerpt from a MediaWise fact check illustrates:
On April 17, about 75 students and parents gathered outside the school to protest “against the furries,” according to one student who was interviewed off-camera during the protest.
A man named Andrew Bartholomew, whose wife is a Republican running for the Utah State School Board, posted videos of the protest on YouTube and X.
In the YouTube video, students made claims about what the alleged furries had been doing. Often eliciting laughter from the crowd, students claimed the furries “bark every day, but they only bite like once a week,” “bite ankles,” “scratch us,” and “spray us in the eyes with Febreze if they get a chance” after school and in the halls.
We did not find clear evidence that any students at Mount Nebo Middle School identify as furries.
Except, a furry tail can be seen in a video posted by local news outlet ABC4 about the protest:
So the students who participated in the protest claimed that other students pretending to be animals had bitten or scratched them, and a video showing an angry student wearing a furry tail and behaving aggressively supports those claims. It is also possible that the district spokesperson was doing damage control, which would make the students' accounts the more trustworthy ones.
So why does Poynter’s MediaWise take the district spokesperson’s account as gospel while denying the lived experience of numerous students? The “fact check” was biased and wrong, but the “FALSE” result was used to suppress the furry story on Meta.
What Is WITNESS?
WITNESS is a progressive organization that claims to be at the forefront of utilizing video and technology to protect and defend human rights on a global scale. Their AI division asserts that they are addressing the challenges presented by emerging technologies like deepfakes and generative AI through a purported lens of social justice and equality.
By collaborating with human rights defenders, journalists, and technologists, WITNESS states that they advocate for transparency, accountability, and ethical use of AI tools. They assert that their efforts aim to fortify democracy, protect vulnerable communities, and promote inclusive approaches to AI development that prioritize human rights.
While WITNESS claims to uphold values such as fairness, integrity, and responsible technology deployment for societal betterment, there is room for skepticism about the actual impact and even-handedness of its initiatives.
Keep Watching the Watchers
While TikTok's recent partnership with the Coalition for Content Provenance and Authenticity (C2PA) and the implementation of Content Credentials technology may seem like a positive step towards addressing the growing concerns about AI-generated deepfakes and content authenticity, it is essential to consider the potential for bias and influence from organizations like C2PA, MediaWise, and WITNESS.
These partnerships may inadvertently undermine freedom of speech and expression by prioritizing certain perspectives and narratives over others. It is crucial to remain vigilant and critically evaluate the information and guidance provided by these organizations to ensure a balanced and fair approach to content moderation and media literacy that respects diverse viewpoints and protects the fundamental right to free expression.
1. The Joint Development Foundation offers a simpler way to start independent projects around technical specifications and standards. By using the Foundation's agreements and corporate structure, projects get industry-standard documents without having to draft their own, making it quicker to launch new work while still enjoying the advantages of association with a 501(c)(6) corporation.