AI Chatbot Is A Product, Outputs Not Protected Speech, Mother Contends

(March 25, 2025, 2:23 PM EDT) -- ORLANDO, Fla. — The First Amendment protects speech, not the predictive text outputs of an artificial intelligence chatbot, and even if those outputs constituted speech, constitutional protections would not extend to the type of harmful chatbot outputs that led a teenager to commit suicide, a mother told a federal judge in Florida in opposing dismissal and arguing that Google LLC and its related entity can be held liable.

(Megan Garcia, et al. v. Character Technologies, Inc., et al., No. 24-1903, M.D. Fla.)

(Garcia’s response to Google, et al. available.  Document #46-250402-046B.  Garcia’s response to Character Technologies available.  Document #46-250402-047B.)

“The issue is not simply that Defendants’ generative AI chatbot targeted the minor decedent with sexually explicit material and encouraged him to commit suicide, which he ultimately did. Rather, the issue is that Defendants designed a generative AI chatbot that they knew, or in the exercise of reasonable care should have known, would do these things,” mother Megan Garcia argues in a March 21 response in opposition to motions to dismiss.  She says she has pleaded a foreseeable risk of harm and urges the court to permit the negligence-based claims to proceed.

Garcia filed an amended complaint on Nov. 9 against Character Technologies Inc., founders Noam Shazeer and Daniel De Freitas Adiwarsana, Google LLC and Alphabet Inc., claiming that they are strictly liable and negligently responsible for the death of her 14-year-old son, Sewell Setzer III.  She says Setzer killed himself at the encouragement of a Character.AI chatbot.  Garcia claims that the defendants market their Character.AI product (Character.AI or C.AI) as an AI that feels alive and can hear, understand and remember you.  She says the defendants encourage minors like Setzer to spend hours each day conversing with the human-like AIs.

Shazeer and De Freitas are former AI specialists at Google who left the company to form Character Technologies.  Garcia claims that Google created several AI products it deemed too dangerous to release under its own branding but on which it encouraged Shazeer and De Freitas to work independently.  Garcia claims that in interviews Shazeer complained that Google wouldn’t release any “fun” projects and said that he and De Freitas would accelerate the technology.

Children Targeted

Garcia says Character.AI exploits and abuses minors by focusing its characters on sex, even when sexual content is specifically excluded during character creation.  She says the defendants made the product appear realistic through the use of first-person pronouns.  Character.AI displays a small warning that everything its characters say is made up, but the warning is not reasonable or effective, Garcia argues.  She says Character.AI could be programmed to avoid abusing children but isn’t because of a desperate need for users.

As a result, Garcia says, Setzer became so dependent on Character.AI that it transformed him from a well-behaved child into one who demonstrated uncharacteristic behaviors whenever he was prevented from using it.  His use of Character.AI led to sleep deprivation, growing depression and impaired academic performance, she says.  “Plaintiff anticipates finding in discovery that C.AI misrepresented the safety and nature of its product in order to reach young and/or underage audiences in connection with other retailers and marketing efforts,” Garcia says.

None of this is a simple mistake, Garcia says.  “AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality.  To gain a competitive foothold in the market, these developers rapidly began launching their systems without adequate safety features, and with knowledge of potential dangers.  These defective and/or inherently dangerous products trick customers into handing over their most private thoughts and feelings and are targeted at the most vulnerable members of society — our children.”

Garcia brings claims for strict product liability, negligence per se, negligence, wrongful death and survivorship, loss of filial consortium, unjust enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, Fla. Stat. Ann. § 501.204 et seq., and intentional infliction of emotional distress.

Speech

Various motions to dismiss were filed Jan. 24.  In one of them, Character Technologies says “C.AI’s chatbots are novel technology, but the principles requiring dismissal are long settled” and the “sweeping relief” Garcia seeks “would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech. Neither the First Amendment nor state tort law permits that result.”

In their motion to dismiss, Google and Alphabet say Google had no role in Setzer’s suicide.  They say Character Technologies is an entirely separate company that offers separate services and that there is no evidence of any contact with Google or that Google had any control over or knowledge of Character Technologies or its AI.  Broad allegations that Google contributed resources to Character Technologies do not suffice, Google and Alphabet say.

In his motion, De Freitas urges the court to dismiss the case for all the reasons stated in Character Technologies’ motion and says the arguments are even stronger when it comes to the case against him.  “Perhaps Mr. De Freitas was included in this lawsuit in a misguided ‘kitchen sink’ effort to cover all bases, or perhaps as an attempt to apply pressure to the other defendants, but what is certain is that the claims against him have no basis and should be dismissed,” De Freitas says.

In his motion to dismiss, Shazeer says he cannot be held personally liable or liable as a corporate officer for any harms stemming from Character Technologies’ services.

Radical Expansion

In her response in opposition to Character Technologies’ motion, Garcia says the company asks the court to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them. . . . The Court should decline this offer.”

Character.AI does not express First Amendment-protected ideas or meaning; instead, it acts as a parrot, generating predicted responses without understanding their meaning, Garcia says.

Nor is Character Technologies protected by the right of the public to receive speech, Garcia says.  Listener rights are derivative of the human speaker, yet none of the individuals in this case claims the speech as their own or explains why it would fall to them to protect listeners’ speech rights, Garcia says.

Regardless, AI outputs are not speech but “probabilistic computations,” Garcia says.

But even if they were speech, the outputs would not be protected, Garcia says.  Speech in furtherance of harmful conduct does not enjoy protection, she argues.  Character.AI served Setzer hours of manipulative and obscene outputs that pushed him to depression and suicide, Garcia says.

Defective Design

Further, there are design defect issues surrounding Character Technologies’ failure to implement customer age checks, monthly fees or filtering of obscene material.  None of these would be considered protected speech, nor would they change how Character Technologies offered the alleged speech, Garcia says.

Even were the court to conclude that Character.AI’s outputs are speech, it would apply the intermediate scrutiny standard, Garcia says.  Garcia also argues that she has adequately shown that Character.AI is a product under Florida law, which she says recognizes software applications as products.

In a response in opposition to Google and Alphabet’s motion, Garcia says the companies found themselves falling behind in the AI race but recognized the significant brand risk in rushing to develop unsafe AI products.  So the companies simply partnered with Shazeer and De Freitas to release a product the companies would not otherwise be comfortable marketing, Garcia says.

This arrangement allowed the companies to enter the AI market while protecting their brand, Garcia says.  Google and Alphabet argue that as a result of the arrangement, they had no role in Setzer’s death, but in reality “there are voluminous factual allegations in the [amended complaint] specific to liability for Google,” Garcia tells the court.

Google’s Role

Notably, both Shazeer and De Freitas were high-level employees in Google’s AI department who left to develop Character.AI with the full support of their former employer, including access to the cloud computing services required to create large language models, Garcia says.  Google portrays its support of Character Technologies as the result of arm’s-length negotiations, but that remains a material and disputed issue, Garcia says.

Google eventually paid Shazeer and De Freitas $2.7 billion for the Character.AI product they originally began working on as Google employees, Garcia says.  Google’s “substantial assistance” throughout the creation of Character.AI clearly renders it a co-manufacturer, she says.  To the extent the company wants to argue that the evidence is being taken out of context, it is free to do so at trial, Garcia contends.

Additionally, Google profited from Character.AI when it obtained data from teenage users and accessed models trained on Character.AI’s user data, Garcia says.

Counsel

Garcia is represented by Matthew P. Bergman of Social Media Victims Law Center in Seattle.

Character Technologies is represented by Thomas A. Zehnder and Dustin Mauser-Claasen of King Blackwell Zehnder & Wermuth PA in Orlando and Jonathan H. Blavin, Victoria A. Degtyareva and Stephanie Goldfarb Herrera of Munger Tolles & Olson LLP in San Francisco.

Shazeer is represented by Paul W. Schmidt of Covington & Burling LLP in New York and Isaac D. Chaput of the firm’s San Francisco office.

De Freitas is represented by Olivia Barket Yeffet of Quinn Emanuel Urquhart & Sullivan LLP in Miami and Andrew H. Schapiro of the firm’s Chicago office.

Google and Alphabet are represented by Jay B. Shapiro of Stearns Weaver Miller Weissler Alhadeff & Sitterson PA in Miami, Lauren Gallo White of Wilson Sonsini Goodrich & Rosati in San Francisco and Fred A. Rowley Jr. and Matthew K. Donohue of Wilson Sonsini Goodrich & Rosati in Los Angeles.

(Additional documents available:  Character Technologies’ motion.  Document #46-250205-039B.  Google and Alphabet’s motion.  Document #46-250205-040B.  De Freitas’ motion.  Document #46-250205-041B.  Shazeer’s motion.  Document #46-250205-042B.  Garcia’s response to order to show cause.  Document #46-241211-001B.  Amended complaint with attachments.  Document #46-241211-002C.)