Nevada Legislature Fights to Combat Artificial Intelligence in Political Ads

News, Technology & Software

The Nevada Legislature recently passed Assembly Bill 73 (“AB 73”) during this year’s regular legislative session, aimed at mitigating the use of artificial intelligence (“AI”) in political ads.  The rise of AI, and in particular generative AI, has created new problems related to the integrity of elections and the dissemination of information.  Generative AI can create almost any image, and the use of generative AI to create altered pictures of political figures has become increasingly common.  Many have expressed concerns about the rise of generative AI and the ability of the general public to tell the difference between real and generated images as the midterm election year kicks into gear.

AB 73, which was passed unanimously, seeks to alleviate those concerns in Nevada.  The bill requires disclosure of any use of AI in election materials created by a candidate or campaign. The bill specifically targets any image, audio, or video that has been intentionally manipulated by AI or generative AI to create a realistic but fake depiction of a real person.  AB 73 does not impose liability on a streaming service, internet provider, radio station, television station, broadcasting content programmer, or anyone who publishes the manipulated material as satire.

If a generative AI image, audio, or video is used in the election material, the material must state, in a clear and conspicuous manner: “This (image/video/audio) has been manipulated.”  If the AI material is a video, the disclosure must appear for the entire duration of the video.  If it is only audio, the disclosure must be clearly spoken at the beginning, at the end, and, if the audio is longer than two (2) minutes, every two (2) minutes.  Candidates who are depicted in AI materials without the required disclosure may also seek an injunction to stop the dissemination of the images.

Twenty-six (26) states have implemented similar laws aimed at regulating AI content in political ads.  Texas, for example, prohibits the publication of deepfake videos within thirty (30) days of an election.  Michigan, Washington, and Utah require disclaimers similar to Nevada’s if AI-manipulated content is used.  Many of these laws have faced legal challenges, usually related to First and Fourteenth Amendment concerns.  A California law that restricted AI-generated deepfakes during elections was struck down by a federal judge, who held that it violated Section 230 of the Communications Decency Act by holding platforms civilly liable for content posted to them.

While no challenges have yet been launched against AB 73, it is likely to face legal challenges as the election cycle ramps up and candidates begin disseminating their election materials more heavily. The Nevada Legislature is attempting to strike a delicate balance between protecting the integrity of the election process and preserving citizens’ First Amendment rights, and it remains to be seen which consideration will prevail.