Do LLMs generate “pure speech”?
Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.
Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.
In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompting—because courts protect listeners’ rights to access that speech. Accusing the mother of the deceased teen, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.
“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”
Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,'” the company urged the court to consider the supposed “chilling effect” that would have “both on C.AI and the entire nascent generative AI industry.”
“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.
However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects the speech of non-human corporations, they noted, corporations are formed by humans. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued; they simply use a probabilistic approach to generate text. On that basis, they contended, the First Amendment does not apply.
Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.
“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.
In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”
To support Garcia’s case, they cited a 40-year-old decision in which the Eleventh Circuit held that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”
Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or, if it is speech, that it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors who the chatbot maker knew were using the service, or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot maker’s attempt to invoke “listeners’ rights cannot save it,” they suggested.
However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”
Now, a US court is being asked to clarify whether chatbot outputs are protected speech. At a hearing Monday, US District Judge Anne Conway in Florida did not rule from the bench, Garcia’s legal team told Ars. The judge, who asked few questions of either side, is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.
For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.
“Pandering” to Trump administration to dodge guardrails
According to Character Technologies, if the court agrees with Garcia “that AI-generated speech is categorically unprotected,” it would have “far-reaching consequences.”
At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.
Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.
“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.
In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, their argument is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”
Jain told Ars that Garcia’s team tried to convince the judge that it’s reckless to argue that it doesn’t matter who the speaker is, even when the speaker isn’t human, since that argument seems to be “implying” that “AI is a sentient being and has its own rights.”
Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.
“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.
At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”
“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”
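For readers unfamiliar with the mechanics Jain is referencing, here is a minimal, purely illustrative sketch of probabilistic next-token generation; the context, tokens, and probabilities below are invented for this example and are not drawn from any real model or from the court filings.

import random

# Hypothetical next-token probabilities a language model might have learned
# for the context "the cat sat on the" (values invented for illustration).
next_token_probs = {"mat": 0.62, "floor": 0.21, "chair": 0.12, "moon": 0.05}

def sample_next_token(probs):
    # Pick one token at random, weighted by its learned probability.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("the cat sat on the", sample_next_token(next_token_probs))

In this simplified picture, each output word is a weighted random draw from probabilities learned from training data rather than an expression of intent, which is the distinction Garcia’s team is asking the court to draw.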
Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”
It’s notable, she suggested, that the chatbot maker updated its safety features following Setzer’s death. A C.AI blog post mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection of and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”
Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.
Expert warns against giving AI products rights
Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.
At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”
Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.
“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.
Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.
If courts grant such rights to AI products across the board, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.
“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”
Character Technologies declined Ars’ request for comment.
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.