As experts warn that images, audio and video generated by artificial intelligence could influence the fall elections, OpenAI is releasing a tool designed to detect content created by DALL-E, its popular image generator. But the prominent AI startup acknowledges that this tool is only a small part of what will be needed to combat so-called deepfakes in the months and years to come.
OpenAI announced Tuesday that it will share its new deepfake detector with a small group of disinformation researchers, allowing them to test the tool in real-world situations and identify ways to improve it.
“This is to kick-start new research,” said Sandhini Agarwal, an OpenAI researcher focused on safety and policy. “That research is really needed.”
OpenAI said the new detector can correctly identify 98.8 percent of images created by DALL-E 3, the latest version of its image generator. But the company said the tool is not designed to detect images produced by other popular generators such as Midjourney and Stability AI.
Because this type of deepfake detector is driven by probabilities, it can never be perfect. So, like many other companies, nonprofits and academic research institutions, OpenAI is also working to combat the problem in other ways.
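A detector like this produces a likelihood, not a verdict, and any cutoff trades false positives against false negatives. A minimal sketch of how such a probabilistic classifier might be thresholded (the scoring function and threshold here are hypothetical illustrations, not OpenAI's actual system):

```python
# Minimal sketch of a probability-driven deepfake check.
# The scores and the 0.5 threshold are hypothetical stand-ins,
# not OpenAI's real detector or API.

def label_image(ai_likelihood: float, threshold: float = 0.5) -> str:
    """Map a model's 'likelihood of being AI-generated' score to a label.

    Any threshold choice trades false positives against false negatives,
    which is why a detector like this can never be perfect.
    """
    if not 0.0 <= ai_likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    return "likely AI-generated" if ai_likelihood >= threshold else "likely authentic"

# A high score crosses the threshold; a low score does not.
print(label_image(0.97))  # likely AI-generated
print(label_image(0.12))  # likely authentic
```

Raising or lowering the threshold is the whole trade-off: a stricter cutoff misses more fakes, a looser one flags more genuine images.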
Like the tech giants Google and Meta, the company sits on the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), an initiative to develop credentials for digital content. The C2PA standard is a kind of “nutrition label” for images, videos, audio clips and other files, showing when and how they were created or altered, including with AI.
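To make the “nutrition label” idea concrete, here is a simplified, hypothetical illustration of the kind of provenance record such a credential attaches to a media file. The field names below are illustrative only and do not follow the actual C2PA manifest schema:

```python
# Hypothetical provenance record, loosely inspired by the C2PA idea of a
# "content credential". Field names are illustrative, not the real spec.

provenance = {
    "file": "photo.jpg",
    "created": "2024-05-07T10:30:00Z",
    "generator": "DALL-E 3",            # tool that produced the content
    "actions": ["created", "resized"],  # how it was made or modified
    "ai_generated": True,
}

def nutrition_label(record: dict) -> str:
    """Render the record as a short, human-readable summary."""
    origin = record["generator"] if record["ai_generated"] else "camera/unknown"
    steps = ", ".join(record["actions"])
    return f'{record["file"]}: made with {origin} ({steps}) on {record["created"]}'

print(nutrition_label(provenance))
```

The point of the standard is that this record travels with the file and can be cryptographically verified, rather than living in a detached database.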
OpenAI also said it is developing ways to “watermark” AI-generated audio so it can be easily identified in the moment. The company wants to make these watermarks difficult to remove.
The AI industry, led by companies like OpenAI, Google, and Meta, is facing increasing pressure to take responsibility for the content its products create. Experts are calling on the industry to prevent users from generating misleading and malicious content and provide ways to track its origin and distribution.
In a year of major elections around the world, the need for ways to trace the provenance of AI content is all the more urgent. In recent months, AI-generated audio and imagery have already influenced political campaigns and voting in countries such as Slovakia, Taiwan and India.
OpenAI's new deepfake detector may help stop the problem, but it won't solve it. Agarwal said there is “no silver bullet” in the fight against deepfakes.