Texas defends requiring ID for porn to SCOTUS: “We’ve done this forever”

“You can use VPNs, the click of a button, to make it seem like you’re not in Texas,” Shaffer argued. “You can go through the search engines, you can go through social media, you can access the same content in the ways that kids are likeliest to do.”

Texas attorney Aaron Nielson argued that the problem of kids accessing porn online has only gotten “worse” in the decades during which Texas has relied on less restrictive, and allegedly less effective, means like content filtering. Age verification is now Texas’ preferred solution, and strict scrutiny shouldn’t apply to a law that merely asks someone to show ID to see adult content, Nielson argued.

“In our history we have always said kids can’t come and look at this stuff,” Nielson argued. “So it seems not correct to me as a historical matter to say, well actually it’s always been presumptively unconstitutional. … But we’ve done it forever. Strict scrutiny somehow has always been satisfied.”

Like the groups suing, Texas asked the Supreme Court to be very clear in any guidance it writes for the 5th Circuit should the court vacate and remand the case. But Texas wants the justices to reiterate that a remand would not prevent the 5th Circuit from reinstating the stay of the preliminary injunction that it ordered after its prior review.

On rebuttal, Shaffer told SCOTUS that out of “about 20 other laws that by some views may look a lot like Texas'” law, “this is the worst of them.” He described Texas Attorney General Ken Paxton as a “hostile regulator who’s saying to adults, you should not be here.”

“I strongly urge this court to stick with strict scrutiny as the applicable standard of review when we’re talking about content-based burdens on speakers,” Shaffer said.

In a press release, Vera Eidelman, a senior staff attorney with the ACLU Speech, Privacy, and Technology Project, said that “efforts to childproof the Internet not only hurt everyone’s ability to access information, but often give the government far too much leeway to go after speech it doesn’t like—all while failing to actually protect children.”

Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal

Judgment call —

Elon Musk’s X previously failed to block the law on First Amendment grounds.

Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with “the constitutionally protected editorial judgments” of social media companies and also impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”

Any company found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.

At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.

All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.

When the court pointed out that the law seemingly required only that X abide by the content moderation policies for each category defined in its own terms of service, without compelling X to adopt any policy or position it did not choose, Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”

“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”

“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”
