First Amendment


ICEBlock lawsuit: Trump admin bragged about demanding App Store removal


ICEBlock creator sues to protect apps that are crowd-sourcing ICE sightings.

In a lawsuit filed against top Trump administration officials on Monday, Apple was accused of caving to unconstitutional government demands by removing an Immigration and Customs Enforcement-spotting app with more than a million users from the App Store.

In his complaint, Joshua Aaron, creator of ICEBlock, cited a Fox News interview in which Attorney General Pam Bondi “made plain that the United States government used its regulatory power to coerce a private platform to suppress First Amendment-protected expression.”

Suing Bondi—along with Department of Homeland Security Secretary Kristi Noem, Acting Director of ICE Todd Lyons, White House “Border Czar” Thomas D. Homan, and unnamed others—Aaron further alleged that US officials made false statements and “unlawful threats” to criminally investigate and prosecute him for developing ICEBlock.

Currently, ICEBlock is still available to anyone who downloaded the app prior to the October removal from the App Store, but updates have been disrupted, and Aaron wants the app restored. Seeking an injunction to block any attempted criminal investigations from chilling his free speech, as well as ICEBlock users’ speech, Aaron vowed in a statement provided to Ars to fight to get ICEBlock restored.

“I created ICEBlock to keep communities safe,” Aaron said. “Growing up in a Jewish household, I learned from history about the consequences of staying silent in the face of tyranny. I will never back down from resisting the Trump Administration’s targeting of immigrants and conscripting corporations into its unconstitutional agenda.”

Expert calls out Apple for “capitulation”

Apple is not a defendant in the lawsuit and did not respond to Ars’ request to comment.

Aaron’s complaint called out Apple, though, for alleged capitulation to the Trump administration that appeared to be “the first time in Apple’s nearly fifty-year history” that “Apple removed a US-based app in response to the US government’s demands.” One of his lawyers, Deirdre von Dornum, told Ars that the lawsuit is about more than just one app being targeted by the government.

“If we allow community sharing of information to be silenced, our democracy will fail,” von Dornum said. “The United States will be no different than China or Russia. We cannot stand by and allow that to happen. Every person has a right to share information under the First Amendment.”

Mario Trujillo, a staff attorney from a nonprofit digital rights group called the Electronic Frontier Foundation that’s not involved in the litigation, agreed that Apple’s ban appeared to be prompted by an unlawful government demand.

He told Ars that “there is a long history that shows documenting law enforcement performing their duties in public is protected First Amendment activity.” Aaron’s complaint pointed to a feature on one of Apple’s own products—Apple Maps—that lets users crowd-source sightings of police speed traps as one notable example. Other similar apps that Apple hosts in its App Store include other Big Tech offerings, like Google Maps and Waze, as well as apps with explicit names like Police Scanner.

Additionally, Trujillo noted that Aaron’s arguments are “backed by recent Supreme Court precedent.”

“The government acted unlawfully when it demanded Apple remove ICEBlock, while threatening others with prosecution,” Trujillo said. “While this case is rightfully only against the government, Apple should also take a hard look at its own capitulation.”

ICEBlock maker sues to stop app crackdown

ICEBlock is not the only app crowd-sourcing information on public ICE sightings to face an app store ban. Others, including an app simply collecting footage of ICE activities, have been removed by Apple and Google, 404 Media reported, as part of a broader crackdown.

Aaron’s suit is intended to end that crackdown by seeking a declaration that government demands to remove ICE-spotting apps violate the First Amendment.

“A lawsuit is the only mechanism that can bring transparency, accountability, and a binding judicial remedy when government officials cross constitutional lines,” Aaron told 404 Media. “If we don’t challenge this conduct in court, it will become a playbook for future censorship.”

In his complaint, Aaron explained that he created ICEBlock in January to help communities hold the Trump administration accountable after Trump campaigned on a mass deportation scheme that boasted numbers far beyond the number of undocumented immigrants in the country.

“His campaign team often referenced plans to deport ’15 to 20 million’ undocumented immigrants, when in fact the number of undocumented persons in the United States is far lower,” his complaint said.

The app was not immediately approved by Apple, Aaron said. But after a thorough vetting process, Apple approved the app in April.

ICEBlock wasn’t an overnight hit but suddenly garnered hundreds of thousands of users after CNN profiled the app in June.

Trump officials attack ICEBlock with false claims

Within hours of that report, US officials began blasting the app, claiming that it was used to incite violence against ICE officers and amplifying pressure to get the app yanked from the App Store.

But Bondi may have slipped up by making comments that seemed to make it clear her intentions were to restrict disfavored speech. On Fox, Bondi claimed that CNN’s report supposedly promoting the app was dangerous, whereas the Fox News report was warning people not to use the app and was perfectly OK.

“Bondi’s statements make clear that her threats of adverse action constitute viewpoint discrimination, where speech ‘promoting’ the app is unlawful but speech ‘warning’ about the app is lawful,” the lawsuit said.

Other Trump officials were accused of making false statements and using unlawful threats to silence Aaron and ICEBlock users.

“What they’re doing is actively encouraging people to avoid law enforcement activities, operations, and we’re going to actually go after them,” Noem told reporters in July. In a statement, Lyons claimed that ICEBlock “basically paints a target on federal law enforcement officers’ backs” and that “officers and agents are already facing a 500 percent increase in assaults.” Echoing Lyons and Noem, Homan called for an investigation into CNN for reporting on the app, which “falsely implied that Plaintiffs’ protected speech was illegally endangering law enforcement officers,” Aaron alleged.

Not named in the lawsuit, White House Press Secretary Karoline Leavitt also allegedly made misleading statements. That included falsely claiming “that ICEBlock and similar apps are responsible for violent attacks on law enforcement officers, such as the tragic shooting of immigrants at an ICE detention facility in Dallas, Texas, on September 24, 2025,” where “no actual evidence has ever been cited to support these claims,” the lawsuit said.

Despite an apparent lack of evidence, Apple confirmed that ICEBlock was removed in October, “based on information we’ve received from law enforcement about the safety risks associated with ICEBlock,” a public statement said. In a notice to Aaron, Apple further explained that the app was banned “because its purpose is to provide location information about law enforcement officers that can be used to harm such officers individually or as a group.”

Apple never shared any more information with Aaron to distinguish his app from other apps allowed in the App Store that help people detect and avoid nearby law enforcement activities. The iPhone maker also didn’t confirm the source of its information, Aaron said.

However, on Fox, Bondi boasted about reaching “out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.”

Then, later during sworn testimony before the Senate Judiciary Committee, she reiterated those comments, while also oddly commenting that Google received the same demand, despite ICEBlock intentionally being designed for iPhone only.

She also falsely claimed that ICEBlock “was reckless and criminal in that people were posting where ICE officers lived” but “subsequently walked back that statement,” Aaron’s complaint said.

Aaron is hoping the US District Court in the District of Columbia will agree that “Bondi’s demand to Apple to remove ICEBlock from the App store, as well as her viewpoint-based criticism of CNN for publicizing the app, constitute a ‘scheme of state censorship’ designed to ‘suppress’” Aaron’s “publication and distribution of the App.”

His lawyer, Noam Biale, told Ars that “Attorney General Bondi’s self-congratulatory claim that she succeeded in pushing Apple to remove ICEBlock is an admission that she violated our client’s constitutional rights. In America, government officials cannot suppress free speech by pressuring private companies to do it for them.”

Similarly, statements from Noem, Lyons, and Homan constituted “excessive pressure on Apple to remove the App and others like it from the App Store,” Aaron’s complaint alleged, as well as unconstitutional suppression of Aaron’s and ICEBlock users’ speech.

ICEBlock creator was one of the first Mac Geniuses

Aaron maintains that ICEBlock prominently features a disclaimer asking all users to “please note that the use of this app is for information and notification purposes only. It is not to be used for the purposes of inciting violence or interfering with law enforcement.”

In his complaint, he explained how the app worked to automatically delete ICE sightings after four hours—information that he said could not be recovered. That functionality ensures that “ICEBlock cannot be used to track ICE agents’ historical presence or movements,” Aaron’s lawsuit noted.

Rather than endangering ICE officers, Aaron argued that ICEBlock helps protect communities from dangerous ICE activity, like tear gassing and pepper spraying, or alleged racial profiling triggering arrests of US citizens and immigrants. Kids have been harmed, his complaint noted, with ICE agents documented “arresting parents and leaving young children unaccompanied” and even once “driving an arrestee’s car away from the scene of arrest with the arrestee’s young toddler still strapped into a car seat.”

The chief concern driving Aaron’s development of the app was that escalations in ICE enforcement—including arbitrary orders to hit 75 arrests a day—exposed “immigrants and citizens alike to violence and rampant violations of their civil liberties” that ICEBlock could shield them from.

“These operations have led to widespread and well-documented civil rights violations against citizens, lawful residents, and undocumented immigrants alike, causing serious concern among members of the public, elected officials, and federal courts,” Aaron’s complaint said.

They also “have led some people—regardless of immigration or citizenship status—to want to avoid areas of federal immigration enforcement activities altogether” and “resulted in situations where members of the public may wish, when enforcement activity becomes visible in public spaces, to observe, record, or lawfully protest against such activity.”

In 2001, Aaron worked for Apple as one of the first Mac Geniuses in its Apple Stores. These days, he flexes his self-taught developer skills by creating apps intended to do social good and help communities.

Emphasizing that he was raised in a Jewish household where he heard stories from Holocaust survivors that left a lasting mark, Aaron said that the ICEBlock app represented his “commitment to use his abilities to advocate for the protection of civil liberties.” Without an injunction, he’s concerned that he and other like-minded app makers will remain in the Trump administration’s crosshairs, as the mass deportation scheme rages on through ongoing ICE raids across the US, Aaron told 404 Media.

“More broadly, the purpose [of the lawsuit] is to hold government officials accountable for using their authority to silence lawful expression and intimidate creators of technology they disfavor,” Aaron said. “This case is about ensuring that public officials cannot circumvent the Constitution by coercing private companies or threatening individuals simply because they disagree with the message or the tool being created.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Big Tech sues Texas, says age-verification law is “broad censorship regime”

Texas minors also challenge law

The Texas App Store Accountability Act is similar to laws enacted by Utah and Louisiana. The Texas law is scheduled to take effect on January 1, 2026, while the Utah and Louisiana laws are set to be enforced starting in May and July, respectively.

The Texas law is also being challenged in a different lawsuit filed by a student advocacy group and two Texas minors.

“The First Amendment does not permit the government to require teenagers to get their parents’ permission before accessing information, except in discrete categories like obscenity,” attorney Ambika Kumar of Davis Wright Tremaine LLP said in an announcement of the lawsuit. “The Constitution also forbids restricting adults’ access to speech in the name of protecting children. This law imposes a system of prior restraint on protected expression that is presumptively unconstitutional.”

Davis Wright Tremaine LLP said the law “extends far beyond social media to mainstream educational, news, and creative applications, including Wikipedia, search apps, and internet browsers; messaging services like WhatsApp and Slack; content libraries like Audible, Kindle, Netflix, Spotify, and YouTube; educational platforms like Coursera, Codecademy, and Duolingo; news apps from The New York Times, The Wall Street Journal, ESPN, and The Atlantic; and publishing tools like Substack, Medium, and CapCut.”

Both lawsuits against Texas argue that the law is preempted by the Supreme Court’s 2011 decision in Brown v. Entertainment Merchants Association, which struck down a California law restricting the sale of violent video games to children. The Supreme Court said in Brown that a state’s power to protect children from harm “does not include a free-floating power to restrict the ideas to which children may be exposed.”

The tech industry has sued Texas over multiple laws related to content moderation. In 2022, the Supreme Court blocked a Texas law that prohibits large social media companies from moderating posts based on a user’s viewpoint. Litigation in that case is ongoing. In a separate case decided in June 2025, the Supreme Court upheld a Texas law that requires age verification on porn sites.



4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”



Is it illegal to not buy ads on X? Experts explain the FTC’s bizarre ad fight.


Here’s the “least silly way” to wrap your head around the FTC’s war over X ads.


After a judge warned that the Federal Trade Commission’s probe into Media Matters for America (MMFA) should alarm “all Americans”—viewing it as a likely government retaliation intended to silence critical reporting from a political foe—the FTC this week appealed a preliminary injunction blocking the investigation.

The Republican-led FTC has been determined to keep pressure on the nonprofit—which is dedicated to monitoring conservative misinformation—ever since Elon Musk villainized MMFA in 2023 for reporting that ads were appearing next to pro-Nazi posts on X. Musk claims that reporting caused so many brands to halt advertising that X’s revenue dropped by $1.5 billion, but advertisers have suggested there technically was no boycott. They’ve said that many factors influenced each of their independent decisions to leave X—including their concerns about Musk’s own antisemitic post, which drew rebuke from the White House in 2023.

For MMFA, advertisers, agencies, and critics, a big question remains: Can the FTC actually penalize advertisers for invoking their own rights to free expression and association by refusing to deal with a private company just because they happened to agree on a collective set of brand standards to avoid monetizing hate speech or offensive content online?

You’re not alone if you’re confused by the suggestion, since advertisers have basically always cautiously avoided associations that could harm their brands. After Elon Musk sued MMFA—then quickly expanded the fight by also suing advertisers and agencies—a running social media joke mocked X for suing to force people to buy its products, and mocked the billionaire for seeming to believe it should be illegal to deprive him of money.

On a more serious note, former FTC commissioner Alvaro Bedoya, who joined fellow Democrats who sued Trump for ejecting them from office, flagged the probe as appearing “bizarrely” politically motivated to protect Musk, an ally who donated $288 million to Trump’s campaign.

The FTC did not respond to Ars’ request to comment on its investigation. But seemingly backing Musk’s complaints without much evidence, the FTC continues to amplify his conspiracy theory that sharing brand safety standards harms competition in the ad industry. So far, the FTC has alleged that sharing such standards allows advertisers, ad buyers, and nonprofit advocacy groups to coordinate attacks on revenue streams in supposed bids to control ad markets and censor conservative platforms.

Legal experts told Ars that these claims seem borderline absurd. Antitrust claims usually arise out of concerns that collaborators are profiting by reducing competition, but it’s unclear how advertisers financially gain from withholding ads. Somewhat glaringly in the case of X, it seems likely that at least some advertisers actually increased costs by switching from buying cheaper ads on the increasingly toxic X to costlier platforms deemed safer or more in line with brands’ values.

X did not respond to Ars’ request to comment.

The bizarre logic of the FTC’s ad investigation

In a blog post, Walter Olson, a senior fellow at the Cato Institute’s Robert A. Levy Center for Constitutional Studies, picked apart the conspiracy theory, trying to iron out the seemingly obvious constitutional conflicts with the FTC’s logic.

He explained that “X and Musk, together with allies in high government posts, have taken the position that for companies or ad agencies to decline to advertise with X on ideological grounds,” that “may legally violate its rights, especially if they coordinate with other entities in doing so.”

“Perhaps the least silly way of couching that idea is to say that advertisers are combining in restraint of trade to force [X] to improve the quality of its product as an ad environment, which you might analogize to forcing it to offer better terms to advertisers,” Olson said.

Pointing to a legal analysis weighing reasons why the FTC’s antitrust claims might not hold up in court, Olson suggested that the FTC is unlikely to overcome constitutional protections and win its ad war on the merits.

For one, he noted that it’s unusual to mingle “elements of anticompetitive conduct with First Amendment expression.” For another, “courts have been extremely protective of the right to boycott for ideological reasons, even when some effects were anti-competitive.” As Olson emphasized to Ars, courts are cautious because infringing First Amendment rights for even a brief period of time can irreparably harm speakers, including causing a chilling effect on speech broadly.

It seems particularly problematic that the FTC is attempting to block so-called boycotts from advertisers and agencies that “are specifically deciding how to spend money on speech itself,” Olson wrote. He noted that “the decision to advertise, the rejection of a platform for ideological reasons, and communication with others on how to turn these speech decisions into a maximum statement are all forms of expression on matters of public concern.”

Olson agrees with critics who suspect that the FTC doesn’t care about winning legal battles in this war. Instead, experts from Public Knowledge, a consumer advocacy group partly funded by big tech companies, told Ars that, seemingly for the FTC, “capitulation is the point.”

Why Media Matters’ fight may matter most

Public Knowledge Policy Director Lisa Macpherson told Ars that “the investigation into Media Matters is part of a larger pattern” employed by the FTC, which uses “the technical concepts of antitrust to further other goals, which are related to information control on behalf of the Trump administration.”

As one example, she joined Public Knowledge’s policy counsel focused on competition, Elise Phillips, in criticizing the FTC for introducing “unusual terms” into a merger that would create the world’s biggest advertising agency. To push the merger through, ad agencies were asked to sign a consent agreement that would block them from “boycotting platforms because of their political content by refusing to place their clients’ advertisements on them.”

Like social media users poking fun at Musk and X, it struck Public Knowledge as odd that the FTC “appears to be demanding that these ad agencies—and by extension, their clients—support media channels that may spread disinformation, hate speech, and extreme content as a condition for a merger.”

“The specific scope of the consent order seems to indicate that it does not reflect focus on the true impacts of diminished ad buying competition on advertisers, consumers, or labor, but instead the political impact of decreased revenue flows to publishers hosting content favorable to the Trump administration,” Public Knowledge experts suggested.

The demand falls in line with other Trump administration efforts to control information, Public Knowledge said, such as the FCC requiring a bias monitor for CBS to approve the Paramount-Skydance merger. It’s “all in service of controlling the flow of information about the administration and its policies,” Public Knowledge suggested. And the Trump administration depending on “the lack of a legal challenge due to industry financial interests” is creating “the biggest risk to First Amendment protections right now,” Phillips said.

Olson agreed with Public Knowledge experts that the agencies likely could have fought to remove the terms as unconstitutional and won, but instead, the CEO of the acquiring agency, Omnicom, appeared to indicate that the company was willing to accept the terms to push the merger through.

It seems possible that Omnicom didn’t challenge the terms because they represent what Public Knowledge suggested in a subsequent blog was the FTC’s fundamental misunderstanding of how ad placements work online. Due to the opaque nature of ad tech like Google’s, advertisers started depending on ad agencies to set brand safety standards to help protect their ad placements (the ad tech was ruled anti-competitive, and the Department of Justice is currently figuring out how to remedy market harms). But even as they adapted to an opaque ad environment, advertisers, not their agencies, have always maintained control over where ads are placed.

Even if Omnicom felt that the FTC terms simply maintained the status quo—as the FTC suggested it would—Public Knowledge noted that Omnicom missed an opportunity to challenge how the terms impacted “the agency’s rights of association and perfectly legal, independent refusals to deal by private companies.” The seeming capitulation could “cause a chilling effect” not just impacting placements from Omnicom’s advertiser clients but also those at other ad agencies, Public Knowledge’s experts suggested.

That sticks advertisers in a challenging spot where the FTC seemingly hopes to keep them squirming, experts suggested. Without agencies to help advise on whether certain ad placements may risk harming their brands, advertisers who don’t want their “stuff to be shown against Nazis” are “going to have to figure out how” to tackle brand safety on their own, Public Knowledge’s blog said. And as long as the ad industry is largely willing to bend to the FTC’s pressure campaign, it’s less likely that legal challenges will be raised to block what appears to be the quiet erosion of First Amendment protections, experts fear.

That may be why the Media Matters fight, which seems like just another front with a tangential player in the FTC’s bigger battle, may end up mattering the most. Whereas others directly involved in the ad industry may be tempted to make a deal like Omnicom’s to settle litigation, MMFA refuses to capitulate to Musk or the FTC, vowing to fight both battles to the bitter end.

“It has been a recurring strategy of the Trump administration to pile up the pressure on targets so that they cannot afford to hold out for vindication at trial, even if their chances there seem good,” Olson told Ars. “So they settle.”

It’s harder than usual in today’s political climate to predict the outcome of the FTC’s appeal, Olson told Ars. Macpherson told Ars she’s holding out hope “that the DC court would take the same position that the current judge did,” which is that “this is likely vindictive behavior on the part of the FTC and that, importantly, advertisers’ First Amendment rights should make the FTC’s sweeping investigation invalid.”

Perhaps the FTC’s biggest hurdle, apart from the First Amendment, may be savvy judges who see through its seeming pressure campaign. In a notable 1995 case, a US judge, Richard Posner, “took the view that a realistic court should be ready to recognize instances where litigation can be employed to generate intense pressure on targets to settle regardless of the merits,” Olson said.

While that case involved targets of litigation, the appeals court judge—or even the Supreme Court if MMFA’s case gets that far—could rule that “targets of investigation could be under similar pressure,” Olson suggested.

In a statement to Ars, MMFA President Angelo Carusone confirmed that MMFA’s resolve has not faded in the face of the FTC’s appeal and was instead only strengthened by the US district judge being “crystal clear” that “FTC’s wide-ranging fishing expedition was a ‘retaliatory act’ that ‘should alarm all Americans.'”

“We will continue to fight this blatant attack on our First Amendment rights because if this Administration succeeds, so can any Administration target anyone who disagrees,” Carusone said. “The law here is clear, and we are optimistic that the Circuit Court will see through this appeal for what it is: an attempt to do an end run around constitutional law in an effort to silence political critics.”




Skydance deal allows Trump’s FCC to “censor speech” and “silence dissent” on CBS

Warning that the “Paramount payout” and “reckless” acquisition approval together mark a “dark chapter” for US press freedom, Gomez suggested the FCC’s approval will embolden “those who believe the government can—and should—abuse its power to extract financial and ideological concessions, demand favored treatment, and secure positive media coverage.”

FCC terms also govern Skydance hiring decisions

Gomez further criticized the FCC for overstepping its authority in “intervening in employment matters reserved for other government entities with proper jurisdiction on these issues” by requiring Skydance commitments to not establish any DEI programs, which Carr derided as “invidious.” But Gomez countered that “this agency is undermining legitimate efforts to combat discrimination and expand opportunity” by meddling in private companies’ employment decisions.

Ultimately, commissioner Olivia Trusty joined Carr in voting to stamp the agency’s approval, celebrating the deal as “lawful” and a “win” for American “jobs” and “storytelling.” Carr suggested the approval would bolster Paramount’s programming by injecting $1.5 billion into operations, which Trusty said would help Paramount “compete with dominant tech platforms.”

Gomez conceded that she was pleased that at least—unlike the Verizon/T-Mobile merger—Carr granted her request to hold a vote, rather than burying “the outcome of backroom negotiations” and “granting approval behind closed doors, under the cover of bureaucratic process.”

“The public has a right to know how Paramount’s capitulation evidences an erosion of our First Amendment protections,” Gomez said.

Outvoted 2–1, Gomez urged “companies, journalists, and citizens” to take up the fight and push back on the Trump administration, emphasizing that “unchecked and unquestioned power has no rightful place in America.”



First Amendment doesn’t just protect human speech, chatbot maker argues


Do LLMs generate “pure speech”?

Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.

Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.

In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompting—courts protect listeners’ rights to access that speech. Accusing the mother of the deceased teen, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.

“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”

Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,’” the company urged the court to consider the supposed “chilling effect” this would have “both on C.AI and the entire nascent generative AI industry.”

“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.

However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects non-human corporations’ speech, corporations are formed by humans, they noted. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued, instead simply using a probabilistic approach to generate text. On that basis, they argued, the First Amendment does not apply.

Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.

“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.

In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”

To support Garcia’s case, they cited a 40-year-old case in which the Eleventh Circuit held that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”

Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or if it is speech, that it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors whom the chatbot maker knew were using the service, or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot maker’s attempt to invoke “listeners’ rights cannot save it,” they suggested.

However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”

Now, a US court is being asked to clarify whether chatbot outputs are protected speech. At a hearing Monday, a US district judge in Florida, Anne Conway, did not rule from the bench, Garcia’s legal team told Ars. The judge, who asked few questions of either side, is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.

For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.

“Pandering” to Trump administration to dodge guardrails

According to Character Technologies, the court potentially agreeing with Garcia that “AI-generated speech is categorically unprotected” would have “far-reaching consequences.”

At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.

Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.

“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.

In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, Jain said, the company is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”

Jain told Ars that Garcia’s team tried to convince the judge that Character Technologies’ position—that it doesn’t matter who the speaker is, even when the speaker isn’t human—is reckless because it seems to be “implying” that “AI is a sentient being and has its own rights.”

Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.

“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.

At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”

“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”

Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”

It’s notable, she suggested, that the chatbot maker updated its safety features following the death of Garcia’s son, Sewell Setzer. A C.AI blog mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.

Expert warns against giving AI products rights

Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.

At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”

Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.

“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products across the board such rights, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.

“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”

Character Technologies declined Ars’ request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.



FCC Democrat slams chairman for aiding Trump’s “campaign of censorship”

The first event is scheduled for Thursday and will be hosted by the Center for Democracy and Technology. The events will be open to the public and livestreamed when possible, and feature various speakers on free speech, media, and telecommunications issues.

With Democrat Geoffrey Starks planning to leave the commission soon, Republicans will gain a 2–1 majority, and Gomez is set to be the only Democrat on the FCC for at least a while. Carr is meanwhile pursuing news distortion investigations into CBS and ABC, and he has threatened Comcast with a similar probe into its subsidiary NBC.

Gomez’s press release criticized Carr for these and other actions. “From investigating broadcasters for editorial decisions in their newsrooms, to harassing private companies for their fair hiring practices, to threatening tech companies that respond to consumer demand for fact-checking tools, the FCC’s actions have focused on weaponizing the agency’s authority to silence critics,” Gomez’s office said.

Gomez previously criticized Carr for reviving news distortion complaints that were dismissed shortly before Trump’s inauguration. “We cannot allow our licensing authority to be weaponized to curtail freedom of the press,” she said at the time.



TikTok loses Supreme Court fight, prepares to shut down Sunday


TikTok has said it’s preparing to shut down Sunday.

A TikTok influencer holds a sign that reads “Keep TikTok” outside the US Supreme Court Building as the court hears oral arguments on whether to overturn or delay a law that could lead to a ban of TikTok in the U.S., on January 10, 2025 in Washington, DC. Credit: Kayla Bartkowski / Stringer | Getty Images News

TikTok has lost its Supreme Court appeal in a 9–0 decision and will likely shut down on January 19, a day before Donald Trump’s inauguration, unless the app can be sold before the deadline, which TikTok has said is impossible.

During oral arguments last Friday, TikTok lawyer Noel Francisco warned SCOTUS that upholding the Biden administration’s divest-or-sell law would likely cause TikTok to “go dark—essentially the platform shuts down” and “essentially… stop operating.” On Wednesday, TikTok reportedly began preparing to shut down the app for all US users, anticipating the loss.

But TikTok’s claims that the divest-or-sell law violated Americans’ free speech rights did not supersede the government’s compelling national security interest in blocking a foreign adversary like China from potentially using the app to spy on or influence Americans, SCOTUS ruled.

“We conclude that the challenged provisions do not violate petitioners’ First Amendment rights,” the SCOTUS opinion said, while acknowledging that “there is no doubt that, for more than 170 million Americans, TikTok offers a distinctive and expansive outlet for expression, means of engagement, and source of community.”

Late last year, TikTok and its Chinese owner, ByteDance, urgently pushed SCOTUS to intervene before the law’s January 19 enforcement date. Ahead of SCOTUS’ decision, TikTok warned it would have no choice but to abruptly shut down a thriving platform where many Americans get their news, express their views, and make a living.

The US had argued the law was necessary to protect national security interests as the US-China trade war intensifies, alleging that China could use the app to track and influence TikTok’s 170 million American users. A lower court had agreed that the US had a compelling national security interest and rejected arguments that the law violated the First Amendment, triggering TikTok’s appeal to SCOTUS. Today, the Supreme Court upheld that ruling.

According to SCOTUS, the divest-or-sell law is “content-neutral” and therefore triggers only intermediate scrutiny. That standard requires that the law not burden “substantially more speech than necessary” to serve the government’s national security interests; strict scrutiny, by contrast, would force the government to protect those interests through the least restrictive means.

Further, the government was right to single TikTok out, SCOTUS wrote, due to its “scale and susceptibility to foreign adversary control, together with the vast swaths of sensitive data the platform collects.”

“Preventing China from collecting vast amounts of sensitive data from 170 million US TikTok users” is a “decidedly content agnostic” rationale, justices wrote.

“The Government had good reason to single out TikTok for special treatment,” the opinion said.

TikTok CEO Shou Zi Chew posted a statement on TikTok reacting to the ruling, thanking Trump for committing to “work with TikTok” to avoid a shutdown and telling users to “rest assured, we will do everything in our power to ensure our platform thrives” in the US.

Momentum to ban TikTok has shifted

First Amendment advocates condemned the SCOTUS ruling. The American Civil Liberties Union called it a “major blow to freedom of expression online,” and the Electronic Frontier Foundation’s civil liberties director David Greene accused justices of sweeping “past the undisputed content-based justification for the law” to “rule only based on the shaky data privacy concerns.”

While the SCOTUS ruling was unanimous, Justice Sonia Sotomayor said that “precedent leaves no doubt” that the law implicated the First Amendment and “plainly” imposed a burden on any US company that distributes TikTok’s speech and any content creator who preferred TikTok as a publisher of their speech.

Similarly concerned was Justice Neil Gorsuch, who wrote in his concurring opinion that he harbors “serious reservations about whether the law before us is ‘content neutral’ and thus escapes ‘strict scrutiny.'” Gorsuch also said he didn’t know “whether this law will succeed in achieving its ends.”

“But the question we face today is not the law’s wisdom, only its constitutionality,” Gorsuch wrote. “Given just a handful of days after oral argument to issue an opinion, I cannot profess the kind of certainty I would like to have about the arguments and record before us. All I can say is that, at this time and under these constraints, the problem appears real and the response to it not unconstitutional.”

For TikTok and content creators defending the app, the stakes were incredibly high. TikTok repeatedly denied there was any evidence of spying and warned that enforcing the law would allow the government to unlawfully impose “a massive and unprecedented speech restriction.”

But the Supreme Court declined to order a preliminary injunction to block the law until Trump took office, instead deciding to rush through oral arguments and reach a decision prior to the law’s enforcement deadline. Now TikTok has little recourse if it wishes to maintain US operations, as justices suggested at oral arguments that even if a president chose not to enforce the law, providing access to TikTok or enabling updates could be viewed as too risky for app stores or other distributors.

The law at the center of the case—the Protecting Americans from Foreign Adversary Controlled Applications Act—had strong bipartisan support under the Biden administration.

But President-elect Donald Trump said he opposed a TikTok ban, despite agreeing that US national security interests in preventing TikTok spying on or manipulating Americans were compelling. This week, Senator Ed Markey (D-Mass.) introduced a bill to extend the deadline ahead of a potential TikTok ban, and a top Trump adviser, Congressman Mike Waltz, said that Trump plans to stop the ban and “keep TikTok from going dark,” the BBC reported. Even the Biden administration, whose Justice Department just finished arguing to SCOTUS why the US needed to enforce the law, “is considering ways to keep TikTok available,” sources told NBC News.

“What might happen next to TikTok remains unclear,” Gorsuch noted in the opinion.

Will Trump save TikTok?

It will likely soon be clear whether Trump will intervene. Trump filed a brief in December, requesting that the Supreme Court stay enforcement of the law until after he takes office because allegedly only he could make a deal to save TikTok. He criticized SCOTUS for rushing the decision and suggested that Congress’ passage of the law may have been “legislative encroachment” that potentially “binds his hands” as president.

“As the incoming Chief Executive, President Trump has a particularly powerful interest in and responsibility for those national-security and foreign-policy questions, and he is the right constitutional actor to resolve the dispute through political means,” Trump’s brief said.

TikTok’s CEO Chew signaled to users that Trump is expected to step in.

“On behalf of everyone at TikTok and all our users across the country, I want to thank President Trump for his commitment to work with us to find a solution that keeps TikTok available in the United States,” Chew’s statement said.

Chew also reminded Trump that the president-elect’s own content has drawn 60 billion views on TikTok, meaning he perhaps stands to lose a major platform through the ban.

“We are grateful and pleased to have the support of a president who truly understands our platform, one who has used TikTok to express his own thoughts and perspectives,” Chew said.

Trump seemingly has limited options to save TikTok, Forbes suggested. At oral arguments, justices disagreed on whether Trump could legally decide simply not to enforce the law. Efforts to pause enforcement, or to claim compliance without evidence that ByteDance is working on selling off TikTok, could be blocked by the court, analysts said. And while ByteDance has repeatedly said it’s unwilling to sell TikTok US, one analyst suggested to Forbes that ByteDance might be more willing to divest “in exchange for Trump backing off his threat of high tariffs on Chinese imports.”

On Tuesday, a Bloomberg report suggested that China was considering whether selling TikTok to Elon Musk might be a good bargaining chip to de-escalate Trump’s attacks in the US-China trade war.



Texas defends requiring ID for porn to SCOTUS: “We’ve done this forever”

“You can use VPNs, the click of a button, to make it seem like you’re not in Texas,” Shaffer argued. “You can go through the search engines, you can go through social media, you can access the same content in the ways that kids are likeliest to do.”

Texas attorney Aaron Nielson argued that the problem of kids accessing porn online has only gotten “worse” in the decades since Texas has been attempting less restrictive and allegedly less effective means like content filtering. Now, age verification is Texas’ preferred solution, and strict scrutiny shouldn’t apply to a law that just asks someone to show ID to see adult content, Nielson argued.

“In our history we have always said kids can’t come and look at this stuff,” Nielson argued. “So it seems not correct to me as a historical matter to say, well actually it’s always been presumptively unconstitutional. … But we’ve done it forever. Strict scrutiny somehow has always been satisfied.”

Like the groups suing, Texas asked the Supreme Court to write clear guidance for the 5th Circuit should the court vacate and remand the case. But Texas wants justices to reiterate that a remand would not bar the 5th Circuit from reinstituting the stay on the preliminary injunction that was ordered following the 5th Circuit’s prior review.

On rebuttal, Shaffer told SCOTUS that out of “about 20 other laws that by some views may look a lot like Texas'” law, “this is the worst of them.” He described Texas Attorney General Ken Paxton as a “hostile regulator who’s saying to adults, you should not be here.”

“I strongly urge this court to stick with strict scrutiny as the applicable standard of review when we’re talking about content-based burdens on speakers,” Shaffer said.

In a press release, Vera Eidelman, a senior staff attorney with the ACLU Speech, Privacy, and Technology Project, said that “efforts to childproof the Internet not only hurt everyone’s ability to access information, but often give the government far too much leeway to go after speech it doesn’t like—all while failing to actually protect children.”



Trump told SCOTUS he plans to make a deal to save TikTok

Several members of Congress—Senator Edward J. Markey (D-Mass.), Senator Rand Paul (R-Ky.), and Representative Ro Khanna (D-Calif.)—filed a brief agreeing that “the TikTok ban does not survive First Amendment scrutiny.” They agreed with TikTok that the law is “illegitimate.”

Lawmakers’ “principal justification” for the ban—“preventing covert content manipulation by the Chinese government”—masked a “desire” to control TikTok content, they said. Further, they argued, that goal could be achieved by a less restrictive alternative, a stance TikTok has long taken.

Attorney General Merrick Garland defended the Act, though, urging SCOTUS to stay focused on the question of whether the First Amendment is violated by a forced sale of TikTok, which would seemingly allow the app to continue operating without impacting American free speech. If the court agrees that the law survives strict scrutiny, TikTok could still be facing an abrupt shutdown in January.

The Supreme Court has scheduled oral arguments to begin on January 10. TikTok and content creators who separately sued to block the law have asked for their arguments to be divided, so that the court can separately weigh “different perspectives” when deciding how to approach the First Amendment question.

In its own brief, TikTok has asked SCOTUS to strike the portions of the law singling out TikTok or “at the very least” explain to Congress that “it needed to do far better work either tailoring the Act’s restrictions or justifying why the only viable remedy was to prohibit Petitioners from operating TikTok.”

But that may not be necessary if Trump prevails. Trump told the court that TikTok was an important platform for his presidential campaign and that he should be the one to make the call on whether TikTok should remain in the US—not the Supreme Court.

“As the incoming Chief Executive, President Trump has a particularly powerful interest in and responsibility for those national-security and foreign-policy questions, and he is the right constitutional actor to resolve the dispute through political means,” Trump’s brief said.



Supreme Court to decide if TikTok should be banned or sold

While the controversial US law doesn’t necessarily ban TikTok, it does seem designed to make TikTok “go away,” Greene said, and such a move to interfere with a widely used communications platform seems “unprecedented.”

“The TikTok ban itself and the DC Circuit’s approval of it should be of great concern even to those who find TikTok undesirable or scary,” Greene said in a statement. “Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.”

Greene further warned that the US “cutting off a tool used by 170 million Americans to receive information and communicate with the world, without proving with evidence that the tools are presently seriously harmful” would “greatly” lower “well-established standards for restricting freedom of speech in the US.”

TikTok partly appears to be hoping that President-elect Donald Trump will disrupt enforcement of the law, but Greene said it remains unclear if Trump’s plan to “save TikTok” might just be a plan to support a sale to a US buyer. At least one former Trump ally, Steven Mnuchin, has reportedly expressed interest in buying the app.

For TikTok, putting pressure on Trump will likely be the next step, “if the Supreme Court ever says, ‘we agree the law is valid,'” Greene suggested.

“Then that’s it,” Greene said. “There’s no other legal recourse. You only have political recourses.”

Like other civil rights groups, the EFF plans to remain on TikTok’s side as the SCOTUS battle starts.

“We are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny,” Greene said.



Facing ban next month, TikTok begs SCOTUS for help

TikTok: Ban is slippery slope to broad US censorship

According to TikTok, the government’s defense of the ban to prevent China from wielding a “covert” influence over Americans is a farce invented by lawyers to cover up the true mission of censorship. If the lower court’s verdict stands, TikTok alleged, “then Congress will have free rein to ban any American from speaking simply by identifying some risk that the speech is influenced by a foreign entity.”

TikTok doesn’t want to post big disclaimers on the app warning of “covert” influence, claiming that the government relied on “secret evidence” to prove this influence occurs on TikTok. But if the Supreme Court agrees that the government needed to show more than “bare factual assertions” to back national security claims the lower court said justified any potential speech restrictions, then the court will also likely agree to reverse the lower court’s decision, TikTok suggested.

It will become much clearer by January 6 whether the January 19 ban will take effect; if it does, TikTok would shut down, booting all US users from the app. TikTok urged the Supreme Court to agree it is in the public interest to delay the ban and review the constitutional claims to prevent any “extreme” harms to both TikTok and US users who depend on the app for news, community, and income.

If SCOTUS doesn’t intervene, TikTok said that the lower court’s “flawed legal rationales would open the door to upholding content-based speech bans in contexts far different than this one.”

“Fearmongering about national security cannot obscure the threat that the Act itself poses to all Americans,” TikTok alleged, while suggesting that even Congress would agree that a “modest delay” in enforcing the law wouldn’t pose any immediate risk to US national security. Congress is also aware that a sale would not be technically, commercially, or legally possible in the timeframe provided, TikTok said. A temporary injunction would prevent irreparable harms, TikTok said, including the irreparable harm courts have long held is caused by restricting speech of Americans for any amount of time.

“An interim injunction is also appropriate because it will give the incoming Administration time to determine its position, as the President-elect and his advisors have voiced support for saving TikTok,” TikTok argued.

Ars could not immediately reach TikTok for comment.
