Before 1 June 2024, will Facebook, WhatsApp, Messenger, or Twitter announce that they are labeling posts as potentially written by AI?
Closing Jun 01, 2024 04:00AM UTC
The advent of ChatGPT and other generative artificial intelligence (AI) tools has raised concerns about how AI can be used to generate and spread disinformation at a much larger scale (Poynter, CSET, PBS). As a result, organizations have been working to develop tools that can identify text as written by artificial intelligence (Platformer, Tech Monitor, OpenAI, NPR).
- Labels must be issued by the platform. User-generated labels (e.g., Twitter’s Community Notes) will not be sufficient on their own.
- Labels can be viewable on a feed or when a user interacts with a post (e.g., by sharing or forwarding it).
- A labeling policy that is applied only to select users or certain regions will still count.
- Research into labeling AI-generated text will not be sufficient on its own; it must be accompanied by an announced policy or platform update in which AI-generated text is labeled for users or a subset of users.
- Labeling must apply to text written by generative AI. Labeling of images or videos generated by AI will not count towards resolution.
- Challenges with AI: Artistry, Copyrights, and Fake News (GovTech)
- Generative AI: 5 essential reads about the new era of creativity, job anxiety, misinformation, bias and plagiarism (The Conversation)
- Should the United States or the European Union Follow China’s Lead and Require Watermarks for Generative AI? (Georgetown Journal of International Affairs)
By labeling posts as "potentially written by AI", we mean that the relevant social media company detects or suspects that text was written by AI, not that the company is disclosing its own use of AI to generate text.