- Meta Platforms (META.O) will begin detecting and labeling images generated by other companies’ artificial intelligence services in the coming months, using invisible markers built into the files, according to Nick Clegg, the company’s president of global affairs.
- The labels will be applied to any content carrying the markers that is posted to Facebook, Instagram, or Threads, signaling to users that images resembling real photos are in fact digital creations. Meta already labels content generated with its own AI tools.
- Once the new system is operational, Meta will extend labeling to images created on services run by companies such as OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Google.
- This move reflects an effort by technology companies to mitigate potential harms associated with generative AI technologies, which can produce fake but realistic-seeming content in response to simple prompts.
- The approach builds on a template established over the past decade for coordinating the removal of banned content across platforms.
- Clegg expressed confidence in the companies’ ability to label AI-generated images reliably, but noted that tools for marking audio and video content were more complex and still under development.
- Meta will begin requiring users to label their own altered audio and video content, with penalties for non-compliance, while acknowledging that labeling written text generated by AI tools remains difficult.
- It’s unclear whether Meta will apply labels to generative AI content shared on its encrypted messaging service WhatsApp.
- Meta’s independent oversight board recently criticized the company’s policy on misleadingly doctored videos, advocating for labeling rather than removal, a sentiment with which Clegg broadly agreed.