Mapping the New Rules of “AI Slop”: How Social Media Platforms Are Managing AI-Generated Content

Zendaya and Tom Holland got married recently, and a beautiful photo from their wedding drew over ten million likes on Instagram. Or so it seemed: the photo is fake. It was AI-generated, and millions of people believed it was real.
Over the past several years, one thing about social media has become abundantly clear: AI-generated content is increasingly common, and it is here to stay. The question then becomes, as this content grows ubiquitous, how should online platforms govern it so that people can still trust what they see and share?
PhD student Lan Gao, currently a third-year in the AIR lab at the University of Chicago’s Department of Computer Science, is trying to answer exactly that question. Mentored by Associate Professor Marshini Chetty, Gao focuses broadly on human-computer interaction (HCI) and, more specifically, on AI governance and online trust and safety.
The idea for Gao’s latest paper, Governance of AI-Generated Content: A Case Study on Social Media Platforms, grew out of the rapid rise of generative AI tools since 2022. These systems are no longer niche: people now use them for everyday communication, writing, and content creation across the web. Platforms were worried about deepfakes even before this wave, but as generative tools have gone mainstream, AI-generated content is no longer just the domain of “bad actors”; it is increasingly produced by everyone.
Commentators have started using terms like “AI slop” to describe the flood of synthetic text, images, and video reshaping online communities. That shift raised urgent questions for Gao: How are the platforms that host user-generated content, where anyone can post almost anything, responding to AI-generated material? Are they labeling it, restricting it, or treating it just like everything else?
“I wanted to conduct research on how stakeholders who host those online platforms manage this kind of issue,” Gao stated. “What’s unique about social media is that any user can post anything they want, and management is critical for these kinds of platforms. From the research perspective, there are limited studies that systematically analyze how different social media platforms really do those approaches.”
To understand how major platforms approach AI-generated content, Gao began by examining the platforms’ own words. The study collected and analyzed public-facing documents such as terms of service, legal and policy pages, help center articles, and support documentation that describe how AI-generated content is governed.
Gao and her team built an automated web-scraping pipeline to assemble this dataset at scale. Starting from seed pages that mentioned generative AI or related terms, the scraper followed links and extracted additional pages containing relevant policies and guidance (a simplified sketch of this kind of crawl appears below). This let the team capture how dozens of social platforms talk about AI-generated content in their official materials.
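The paper does not include the pipeline’s source code, but the approach described (start from seed pages, follow links, keep pages that mention relevant terms) can be sketched in a few dozen lines of Python. Everything below is an illustrative assumption rather than the study’s actual implementation: the seed URL, keyword list, depth limit, and the choice of the requests and BeautifulSoup libraries are all stand-ins.

```python
# Illustrative sketch of a seed-based policy crawler. The seeds, keywords,
# and depth limit below are hypothetical stand-ins, not the study's values.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

SEEDS = ["https://example-platform.com/help/community-guidelines"]  # hypothetical
KEYWORDS = ["ai-generated", "generative ai", "synthetic media", "deepfake"]
MAX_DEPTH = 2  # assumed crawl depth, kept small for illustration


def is_relevant(text: str) -> bool:
    """Return True if the page text mentions any generative-AI term."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)


def crawl(seeds: list[str], max_depth: int = MAX_DEPTH) -> dict[str, str]:
    """Breadth-first crawl from seed pages, collecting relevant policy text."""
    collected: dict[str, str] = {}
    seen = set(seeds)
    queue = deque((url, 0) for url in seeds)

    while queue:
        url, depth = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable or error pages

        soup = BeautifulSoup(response.text, "html.parser")
        text = soup.get_text(separator=" ")
        if is_relevant(text):
            collected[url] = text  # keep the page for later qualitative coding

        if depth < max_depth:
            for link in soup.find_all("a", href=True):
                candidate = urljoin(url, link["href"])
                # Stay on the same site so one run covers one platform's docs.
                if (urlparse(candidate).netloc == urlparse(url).netloc
                        and candidate not in seen):
                    seen.add(candidate)
                    queue.append((candidate, depth + 1))

    return collected


if __name__ == "__main__":
    pages = crawl(SEEDS)
    print(f"Collected {len(pages)} relevant pages")
```

Restricting the crawl to each seed’s own domain keeps a run scoped to a single platform’s documentation, which mirrors the per-platform structure of the study’s dataset.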
From there, Gao conducted a qualitative analysis of the collected passages, identifying how platforms describe, categorize, and manage AI-generated content. The analysis revealed that 27 of the 40 platforms studied (roughly two-thirds) explicitly acknowledge AI-generated content governance in their documents. The remaining 13 either do not mention it or address it only indirectly.
Across those 27 platforms, Gao identified six main approaches to AI-generated content governance:

1) Moderating AI-Generated Content That Violates Existing Policies
2) Disclosing and Labeling AI-Generated Content
3) Restricting Posting and Sharing AI-Generated Content
4) Constraining Monetization of AI-Generated Content
5) Controlling Output Generation and Distribution from Integrated AI Tools
6) Educating and Empowering Users When Interacting with AI-Generated Content

The most common approach was the first: 25 of the platforms explicitly emphasize that they adhere to existing community guidelines and terms of service to manage AI-generated content.

Early debates largely focused on deepfakes, which were the work of a small concentration of bad actors online. Those problems are still important, but generative AI has moved far beyond that niche. Today, AI is woven into everyday content production: a comment drafted with a chatbot, a video script co-written with an AI assistant, a music track modified by generative tools. Gao argues that stakeholders need to tailor governance strategies to the role AI plays on each platform, rather than relying on one-size-fits-all policies.
“This kind of policy oversight is just catching up right now, and social media platforms are just applying existing content moderation policies, or labeling AI content to ensure transparency,” Gao said. “By doing this study, we are creating this baseline in 2025 of how the most popular social media platforms are approaching this, so that we have the evidence to recommend stakeholders, regulators, and policymakers to do something.”
Next, Gao plans to dig deeper into disclosure and labeling practices. She is studying how AI-generated content is actually labeled on short-form video platforms like TikTok and YouTube Shorts, and how creators think about disclosure. By combining policy analysis, empirical measurement, and creator perspectives, Gao hopes her work will yield concrete recommendations for future governance. The goal is not only to understand current practices, but to help platforms and regulators design solutions that reflect the realities of AI-assisted creativity and the diversity of online spaces.
To learn more about Gao’s work in the AIR lab, visit the lab’s website.