Google has said political adverts using artificial intelligence (AI) must soon be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.
Starting in November, just under a year before the next US presidential election, the technology giant said in an update to its political content policy that any disclosure of AI used to alter imagery or audio must be clear and conspicuous, and placed somewhere users are likely to notice it.
Although fake images, videos and audio clips are not new to political advertising, generative AI tools are making them easier to produce and more realistic.
Some presidential campaigns in the 2024 race – including that of Florida Governor Ron DeSantis – are already using the technology.
In April, the Republican National Committee released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden were re-elected.
It employed fake but realistic photos showing boarded-up shopfronts, armoured military patrols in the streets, and waves of immigrants creating panic.
In June, Mr DeSantis’s campaign posted an attack ad against his primary opponent, Donald Trump, which used AI-generated images of the former president hugging infectious disease expert Dr Anthony Fauci.
Last month the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election.
Congress could pass legislation creating safeguards for AI-generated deceptive content, and politicians, including Senate Majority Leader Chuck Schumer, have expressed an intention to do so.
Several states have also discussed or passed legislation related to deepfake technology.
Google is not banning AI in political advertising outright.
Exceptions to the disclosure requirement include synthetic content altered or generated in a way that is inconsequential to the claims made in the ad.
AI can also be used for editing techniques such as image resizing, cropping, colour or defect correction, and background edits.