SAN FRANCISCO - Google on Wednesday tweaked its political advertising policies to require politicians to disclose if they use any “synthetic” or artificial-intelligence-generated images or videos in their ads on the tech giant’s platforms.
The company already bans outright “deepfakes” that aim to deceive voters, but the new policy will require advertisers to disclose any use of the technology beyond minor edits, such as adjusting color or contrast in an image. Politicians will have to affix a label to their ads warning viewers that they include synthetic content, the company said.
“Generative” AI tools such as Google’s Bard chatbot or OpenAI’s DALL-E image generator have rapidly improved in quality, to the point where they can pass professional exams and conjure realistic-looking images that are often hard to distinguish from photographs.
The company said in the announcement it was making the change because of “the growing prevalence of tools that produce synthetic content.”
That’s prompted concerns from politicians and democracy activists that the tools could be used to trick voters or to make it appear that a political opponent said or did something they didn’t. Google and Meta, which together control a huge share of the online advertising market, have been under pressure for years to combat false claims made on their platforms. Meta also bans deepfakes outright.
Fake images and audio have already begun showing up in election ads around the world. In June, Florida Gov. Ron DeSantis’s campaign released a video that included fake images of Donald Trump hugging former White House coronavirus adviser Anthony S. Fauci. Last month, a Polish opposition party admitted that it used AI-generated audio to fake the voice of the country’s prime minister in an ad.
The new Google rules apply only to advertisements and won’t affect regular videos uploaded to its YouTube platform. They will go into effect in November.