Remember the viral image of Pope Francis in a white puffer jacket and jeweled crucifix, looking like he stepped out of a streetwear runway show? We’ve seen deepfakes before, but few have had quite this much impact. Now, Google has released a tool to help keep images like this from spreading unchecked.
Introducing SynthID: How It Works
DeepMind, an AI research lab acquired by Google, recently announced the launch of SynthID, a watermarking tool designed specifically to identify AI-generated images.
“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media and for helping prevent the spread of misinformation,” said the DeepMind team. Unlike traditional techniques that rely on visible watermarks or metadata, which can be cropped out or stripped away, SynthID embeds a digital watermark directly into the pixels of an image. Even if the image is altered through cropping, resizing, filters, or brightness adjustments, the watermark remains detectable. The human eye won’t notice it, but detection software can identify it.
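DeepMind hasn’t published how SynthID actually encodes or reads its watermark, so any code can only illustrate the general idea. The toy sketch below (plain NumPy, with a made-up key and strength parameter) shows what pixel-level watermarking means in principle: a faint pseudorandom pattern is added to the pixel values themselves, invisible to the eye but recoverable by software that knows the pattern.

```python
# Illustrative toy only -- NOT SynthID's actual (undisclosed) method.
# A faint pseudorandom pattern is mixed into the pixel values (not metadata),
# then recovered later by correlating the image against the same pattern.
import numpy as np

KEY = 42          # hypothetical secret key shared by embedder and detector
STRENGTH = 2.0    # watermark amplitude in 0-255 pixel units (kept faint)

def _pattern(shape, key=KEY):
    """Pseudorandom +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image):
    """Add the faint pattern directly to the pixels of a grayscale image."""
    marked = image.astype(np.float64) + STRENGTH * _pattern(image.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image):
    """High correlation with the key's pattern suggests the mark is present."""
    pattern = _pattern(image.shape)
    score = np.mean((image.astype(np.float64) - image.mean()) * pattern)
    return score > STRENGTH / 2   # crude threshold, fine for this toy demo

# Quick demo on a random grayscale "image"
original = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
watermarked = embed(original)
print(detect(original), detect(watermarked))   # expected: False True
```

Unlike this simple sketch, which breaks as soon as the image is cropped or resized, SynthID is designed to keep its mark detectable through exactly those kinds of edits.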
Not much more has been revealed about the tech. DeepMind CEO Demis Hassabis explained to The Verge that “the more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it.” Currently in beta, SynthID is available to users of Imagen, Google’s text-to-image model, through Vertex AI, Google Cloud’s machine learning platform, letting customers responsibly create, share, and identify AI-generated images. Hassabis notes that the technology isn’t foolproof against “extreme image manipulations,” but it’s a step in the right direction.
What’s Next for SynthID?
The DeepMind team is working on expanding access to SynthID, aiming to make it available to third parties and integrate it into more Google products. The announcement came shortly after Google and six other top AI players attended a White House summit where they pledged to invest in AI safety tools and research for responsible use. The White House requested new watermark technology as a means for AI companies to earn public trust. According to The Verge, the tool will likely extend to audio and video content.
This summit is part of the government’s ongoing effort to combat deepfakes. In 2021, the Senate Homeland Security and Governmental Affairs Committee advanced the Deepfake Task Force Act. On the lighter end, deepfakes can humorously style the Pope in trendy fashion. On the darker end, they can fuel political instability, fraud, and stock manipulation.
In 2021, Adobe cofounded the nonprofit Coalition for Content Provenance and Authenticity (C2PA) to standardize media content labeling and combat misinformation. Its labels serve as a seal of approval, showing consumers how an asset was created and whether it has been manipulated. Amid the AI boom, C2PA’s membership has grown 56% in the past six months, as noted by MIT Technology Review. In late July, Shutterstock announced it would integrate C2PA’s technical protocol into its AI systems and creativity tools, including its AI image generator.
The Takeaway
With increasing government pressure, AI companies big and small must prioritize responsible AI efforts. Whether you’re using or creating AI tools, there’s more oversight on the horizon.