X Sues To Block Calif.'s New Deepfake Political Ads Law

(November 15, 2024, 9:55 PM EST) -- X Corp. filed a lawsuit in California federal court seeking to block a new Golden State law aimed at combating artificial intelligence-generated deepfake political ads, claiming the regulation, which takes effect in January, is unconstitutional and violates Section 230 of the Communications Decency Act.

In a 65-page complaint Thursday, the social media company formerly known as Twitter argued that A.B. 2655 — also referred to as the Defending Democracy from Deepfake Deception Act — has a "problematic enforcement system" that will effectively force social media platforms to broadly remove or label content as deceptive, even in scenarios that are close calls.

"This system will inevitably result in the censorship of wide swaths of valuable political speech and commentary, and will limit the type of 'uninhibited, robust and wide-open' 'debate on public issues' that core First Amendment protections are designed to ensure," the complaint said, citing the high court's 1964 ruling in New York Times v. Sullivan.

X Corp.'s lawsuit comes roughly two months after Gov. Gavin Newsom signed A.B. 2655 into law, along with other first-in-the-nation artificial intelligence-related bills. The measures give actors more protections over their digital likenesses and attempt to rein in the use during elections of so-called deepfakes: AI-generated images, videos and audio clips that are deceptively created or altered.

When it goes into effect next year, A.B. 2655 will require large online platforms like Facebook and YouTube to remove or label "materially deceptive" deepfake content related to elections during specified periods before, during and after an election. Such content would include AI deepfakes portraying political candidates as saying something they didn't say, or posts that are "reasonably likely" to harm candidates' reputations, although the law notably doesn't apply to satire or parody.

The new law also requires those platforms to provide mechanisms to report the content in question, and A.B. 2655 authorizes candidates, elected officials, election officials, the attorney general, and district and city attorneys to seek injunctive relief for purported violations.

In its lawsuit, however, the Elon Musk-owned X Corp. criticized the regulation's enforcement system, saying it "highly" incentivizes platforms to remove or label any content that presents a close call in order to avoid enforcement litigation and "enormous liability," and that it allows the government to be involved in content moderation "in every step of that system."

"Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms," the complaint argued.

X Corp. claims that the law constitutes an unconstitutional prior restraint on speech and runs afoul of the U.S. Supreme Court's decision in Moody v. NetChoice LLC, which held that when a social media platform "present[s] a curated and 'edited compilation of [third-party] speech,'" the presentation "is itself protected speech."

The company also alleged that the new law contradicts protections afforded under Section 230, and that its requirements are "so vague and unintelligible that the covered platforms cannot understand what they permit and what they prohibit," which will allegedly lead to blanket censorship.

"This, in turn, will severely chill important political speech — specifically, the use of exaggerated or unfavorable visual means to undermine and combat political opponents, which, as the Supreme Court has recognized, is ingrained in the historical fabric of U.S. political commentary and subject to the strongest of First Amendment protections," the complaint says.

The three-count complaint asserts violations of the First and 14th Amendments, as well as the CDA's Section 230. The lawsuit seeks an order declaring that A.B. 2655 violates those amendments and blocking the state from enforcing the law, as well as a ruling awarding X Corp. attorney fees, costs and expenses.

The state attorney general's office said in a statement Friday that the California Department of Justice "has been and will continue to vigorously defend" in court A.B. 2655, which aims to combat deepfake election content.

Counsel and representatives for X Corp. didn't immediately respond to requests for comment Friday.

The lawsuit comes roughly a month after U.S. District Judge John A. Mendez preliminarily blocked a related law — A.B. 2839 — which barred the dissemination of materially deceptive AI-generated content depicting elected officials, candidates, election officials and others, while allowing those officials to file a civil action to block distribution of the content.

In his ruling, Judge Mendez acknowledged AI's risks, but agreed with conservative content creator Christopher Kohls, who filed the suit against A.B. 2839, that the law is an overly broad "blunt tool that hinders humorous expression and unconstitutionally stifles" the free exchange of ideas.

X Corp. is represented by William R. Warne and Meghan M. Baker of Downey Brand LLP, and Joel Kurtzberg, Floyd Abrams and Jason Rozbruch of Cahill Gordon & Reindel LLP.

Counsel information for the defendants wasn't immediately available Friday.

The case is X Corp. v. Bonta et al., case number 2:24-cv-03162, in the U.S. District Court for the Eastern District of California.

--Additional reporting by Lauren Berg. Editing by Melissa Treolo.
