Policy

Judge: You can’t ban DEI grants without bothering to define DEI

Separately, Trump v. CASA barred the use of nationwide injunctions against illegal government actions. So, while the government’s actions have been determined to be illegal, Young can only protect the people who were parties to this suit. Anyone who lost a grant but wasn’t a member of one of the organizations involved, or based in one of the states that sued, remains on their own.

Those issues aside, the ruling largely focuses on whether the termination of grants violates the Administrative Procedure Act, which governs how the executive branch handles decision- and rule-making. Specifically, it requires that decisions of this sort not be “arbitrary and capricious.” And Young concludes that the government hasn’t cleared that bar.

Arbitrary and capricious

The grant cancellations, Young concludes, “arise from the NIH’s newly minted war against undefined concepts of diversity, equity, and inclusion and gender identity, that has expanded to include vaccine hesitancy, COVID, influencing public opinion and climate change.” The “undefined” aspect plays a key part in his reasoning. Referring to DEI, he writes, “No one has ever defined it to this Court—and this Court has asked multiple times.” It’s not defined in Trump’s executive order that launched the “newly minted war,” and Young found that administrators within the NIH issued multiple documents attempting to define it, not all of which were consistent with each other, and some of which seemed to rely on circular reasoning.

He also noted that the officials who sent these memos had a tendency to resign shortly afterward, writing, “it is not lost on the Court that oftentimes people vote with their feet.”

As a result, NIH staff had no solid guidance for determining whether a given grant violated the new anti-DEI policy, or for weighing any violation against the grant’s scientific merit. So how were they to identify which grants needed to be terminated? The evidence revealed at trial indicates that they didn’t need to make those decisions; DOGE made them for the NIH. In one case, an NIH official approved a list of grants to terminate, received from DOGE, only two minutes after it showed up in his inbox.

xAI data center gets air permit to run 15 turbines, but imaging shows 24 on site

Before xAI got the permit, residents were stuck relying on infrequent thermal imaging to determine how many turbines appeared to be running without BACT. Now that xAI has secured the permit, the company will be required to “record the date, time, and durations of all startups, shutdowns, malfunctions, and tuning events” and “always minimize emissions including startup, shutdown, maintenance, and combustion tuning periods.”

These records—which also document fuel usage, facility-wide emissions, and excess emissions—must be shared with the health department semiannually, with xAI’s first report due by December 31. Additionally, xAI must maintain five years of “monitoring, preventive, and maintenance records for air pollution control equipment,” which the department can request to review at any time.

For Memphis residents worried about smog-forming pollution, the worst fear would likely be pollution they can actually see. To mitigate this, xAI’s air permit requires that visible emissions “from each emission point at the facility shall not exceed” 20 percent opacity for more than a set number of minutes in any one-hour period or for more than 20 minutes in any 24-hour period.

It also prevents xAI from operating turbines all the time, limiting xAI to “a maximum of 22 startup events and 22 shutdown events per year” for the 15 turbines included in the permit, “with a total combined duration of 110 hours annually.” Additionally, it specifies that each startup or shutdown event must not exceed one hour.
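
Taken together, those limits read like a small rule set that a compliance log could be checked against mechanically. Below is a minimal sketch of such a check, assuming only the numbers reported above; the event-log format and function name are hypothetical.

```python
from datetime import timedelta

# Limits as reported from the permit: 22 startup and 22 shutdown events
# per year, 110 hours combined duration, no single event over one hour.
MAX_EVENTS_PER_KIND = 22
MAX_TOTAL_DURATION = timedelta(hours=110)
MAX_EVENT_DURATION = timedelta(hours=1)

def check_turbine_events(events):
    """events: list of (kind, duration) tuples, kind in {'startup', 'shutdown'}."""
    violations = []
    for kind in ("startup", "shutdown"):
        count = sum(1 for k, _ in events if k == kind)
        if count > MAX_EVENTS_PER_KIND:
            violations.append(f"{count} {kind} events exceed the {MAX_EVENTS_PER_KIND}/year cap")
    total = sum((d for _, d in events), timedelta())
    if total > MAX_TOTAL_DURATION:
        violations.append(f"combined duration {total} exceeds {MAX_TOTAL_DURATION}")
    violations += [
        f"{k} event lasting {d} exceeds the one-hour limit"
        for k, d in events
        if d > MAX_EVENT_DURATION
    ]
    return violations

# Example: a single over-long startup event trips the one-hour rule.
print(check_turbine_events([("startup", timedelta(minutes=75))]))
```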

A senior communications manager for the Southern Environmental Law Center (SELC), Eric Hilt, told Ars that the “SELC and our partners intend to continue monitoring xAI’s operations in the Memphis area.” He further noted that the air permit does not address all of the citizens’ concerns at a time when xAI is planning to build another data center in the area, sparking new questions.

“While these permits increase the amount of public information and accountability around 15 of xAI’s turbines, there are still significant concerns around transparency—both for xAI’s first South Memphis data center near the Boxtown neighborhood and the planned data center in the Whitehaven neighborhood,” Hilt said. “XAI has not said how that second data center will be powered or if it plans to use gas turbines for that facility as well.”

Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now the platform plans to allow AI to write community notes, a move that could unravel whatever trust X users have in the fact-checking system—a risk X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In the ideal world X describes, AI agents would speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note would be rated by a human reviewer, providing feedback that makes the AI agent better at writing notes the longer this feedback loop cycles. As the AI agents improve, human reviewers would be freed to focus on the more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. If all goes well, X’s paper suggested, the combination of human and AI reviewers could not just transform X’s fact-checking but also provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”
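
For a concrete picture of the loop the paper describes—agents draft notes, humans rate them, and the ratings become training signal for the next round—here is a minimal sketch. The `draft_note`, `rate`, and `update` interfaces are hypothetical stand-ins, not X’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class NoteCandidate:
    post_id: str
    text: str
    ratings: list = field(default_factory=list)

def feedback_cycle(agent, raters, posts, approval_threshold=0.6):
    """One cycle: the agent drafts a note per post, humans rate each
    draft, and every outcome (approved or not) feeds back to the agent."""
    published, training_signal = [], []
    for post in posts:
        note = NoteCandidate(post["id"], agent.draft_note(post))
        note.ratings = [rater.rate(note) for rater in raters]
        helpful_share = sum(note.ratings) / len(note.ratings)
        if helpful_share >= approval_threshold:
            published.append(note)
        training_signal.append((note.text, helpful_share))
    agent.update(training_signal)  # e.g., fine-tuning on rating outcomes
    return published
```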

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”
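
That failure mode is easy to see in a toy simulation (mine, not the paper’s): if a rater’s helpfulness score blends a note’s true accuracy with its surface polish, lowering the accuracy correlation `rho` lets more low-accuracy notes clear the approval threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
accuracy = rng.uniform(0, 1, n)   # how correct each note actually is
polish = rng.uniform(0, 1, n)     # how persuasive it merely sounds

for rho in (1.0, 0.7, 0.4):
    # Rated helpfulness only partially tracks accuracy.
    helpfulness = rho * accuracy + (1 - rho) * polish
    approved = helpfulness > 0.5
    slipped_through = approved & (accuracy < 0.3)
    print(f"rho={rho}: {slipped_through.mean():.1%} approved despite low accuracy")
```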

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister Damian Collins accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.
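
Mechanically, such matching would presumably embed posts and reuse an approved note whenever a new post lands close enough to the post that note was written for. A hedged sketch, where `embed` is any text-embedding function and the similarity threshold is an arbitrary assumption:

```python
import numpy as np

def reuse_approved_notes(new_posts, approved, embed, min_similarity=0.9):
    """approved: list of {'post_text': ..., 'note_text': ...} dicts for
    notes human raters already accepted. Returns (post_id, note) pairs
    that would be attached without fresh human review."""
    vecs = np.array([embed(item["post_text"]) for item in approved], dtype=float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    matches = []
    for post in new_posts:
        v = np.asarray(embed(post["text"]), dtype=float)
        v /= np.linalg.norm(v)
        sims = vecs @ v                     # cosine similarities
        best = int(np.argmax(sims))
        if sims[best] >= min_similarity:
            matches.append((post["id"], approved[best]["note_text"]))
    return matches
```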

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month. In the meantime, X users can start testing AI note writers today to be considered for admission to the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable for training the AI agents, and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.

Nudify app’s plan to dominate deepfake porn hinges on Reddit, 4chan, and Telegram, docs show


Reddit confirmed the nudify app’s links have been blocked since 2024.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—reportedly is planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, attracting “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed were motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes can cost just a few bucks, chasing profits seemingly depends on attracting as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. It has shifted its marketing methods since its launch, apparently now relying largely on Telegram bots and X channels to target ads at young men likely to use its apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Similar celebrity campaigns are planned for other markets, Der Spiegel reported, including British, French, and Spanish markets. And Clothoff has notably already become a go-to tool in the US, not only targeted in the San Francisco city attorney’s lawsuit, but also in a complaint raised by a high schooler in New Jersey suing a boy who used Clothoff to nudify one of her Instagram photos taken when she was 14 years old, then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.” (Reddit noted to Ars that Clothoff URLs have been banned from Reddit since 2024 and “Reddit does not allow paid advertising against NSFW content or otherwise monetize it.”)

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu was hoping to defend young women increasingly targeted in fake nudes by shutting down Clothoff, along with several other nudify apps named in his lawsuit. So far, Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, but attempts to serve Clothoff have not been successful. His office is continuing its efforts to serve Clothoff through available legal channels, which evolve as the lawsuit moves through the court system, deputy press secretary Alex Barrett-Shorter told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that Clothoff claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson, named Elias, denied knowing any of the four individuals flagged in the investigation, Der Spiegel reported, and disputed the whistleblower’s budget figure, claiming a nondisclosure agreement prevented him from discussing Clothoff’s team any further. Soon after Der Spiegel reached out, however, Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors. However, one Clothoff user who spoke to Der Spiegel on the condition of anonymity confirmed that his attempt to generate a fake nude of a US singer initially failed because she “looked like she might be underage.” His second attempt a few days later generated the fake nude with no problem, suggesting that Clothoff’s age detection does not work reliably.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit where the high schooler is suing a boy who used Clothoff to bully her, boys who participated in the group chats are currently resisting requests to share the evidence on their phones. If she wins her fight, she’s asking for $150,000 in damages per image shared, so turning over chat logs could substantially increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed. That law makes it easier to force platforms to remove AI-generated fake nudes. But experts expect the law will face legal challenges over censorship fears, so the very limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom are turning to courts seeking justice due to largely opaque laws that made it unclear if generating a fake nude was illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”

NYT to start searching deleted ChatGPT logs after beating OpenAI in court


What are the odds NYT will access your ChatGPT logs in OpenAI court battle?

Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs “indefinitely,” including deleted and temporary chats.

But Sidney Stein, the US district judge reviewing OpenAI’s request, immediately denied OpenAI’s objections. He was seemingly unmoved by the company’s claims that the order forced OpenAI to abandon “long-standing privacy norms” and weaken privacy protections that users expect based on ChatGPT’s terms of service. Rather, Stein suggested that OpenAI’s user agreement specified that users’ data could be retained as part of a legal process, which, Stein said, is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.

A spokesperson told Ars that OpenAI plans to “keep fighting” the order, but the ChatGPT maker seems to have few options left. It could petition the Second Circuit Court of Appeals for a rarely granted emergency order blocking Wang’s ruling, but the appeals court would have to consider the order an extraordinary abuse of discretion for OpenAI to win that fight.

OpenAI’s spokesperson declined to confirm if the company plans to pursue this extreme remedy.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted. That possibility puts OpenAI in the difficult position of choosing between caving to some data collection so it can stop retaining data as soon as possible, or prolonging the fight over the order and potentially putting more users’ private conversations at risk of exposure through litigation or, worse, a data breach.

News orgs will soon start searching ChatGPT logs

The clock is ticking, and so far, OpenAI has not provided any official updates since a June 5 blog post detailing which ChatGPT users will be affected.

While it’s clear that OpenAI has been and will continue to retain mounds of data, it would be impossible for The New York Times or any news plaintiff to search through all that data.

Instead, only a small sample of the data will likely be accessed, based on keywords that OpenAI and news plaintiffs agree on. That data will remain on OpenAI’s servers, where it will be anonymized, and it will likely never be directly produced to plaintiffs.

Both sides are negotiating the exact process for searching through the chat logs, with both parties seemingly hoping to minimize the amount of time the chat logs will be preserved.
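
Mechanically, the process described so far amounts to keyword filtering plus de-identification before any human review. A minimal sketch under those assumptions—the field names and placeholder keywords are illustrative, not from the case:

```python
import hashlib

PLACEHOLDER_KEYWORDS = {"paywall", "full article text"}  # the real list is still being negotiated

def sample_and_anonymize(chat_logs, keywords=PLACEHOLDER_KEYWORDS):
    """Keep only chats matching agreed keywords and replace the account
    identifier with a one-way hash before reviewers see anything."""
    sample = []
    for log in chat_logs:
        text = log["text"].lower()
        if any(kw in text for kw in keywords):
            sample.append({
                "user": hashlib.sha256(log["user_id"].encode()).hexdigest()[:12],
                "text": log["text"],
            })
    return sample
```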

For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs.

For news plaintiffs, meanwhile, accessing the logs is not considered key to their case—though the logs could provide additional examples of copying—but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against a fair use defense; a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.

Jay Edelson, a leading consumer privacy lawyer, told Ars that he’s concerned that judges don’t seem to be considering that any evidence in the ChatGPT logs wouldn’t “advance” news plaintiffs’ case “at all,” while really changing “a product that people are using on a daily basis.”

Edelson warned that OpenAI itself probably has better security than most firms to protect against a potential data breach that could expose these private chat logs. But “lawyers have notoriously been pretty bad about securing data,” Edelson suggested, so “the idea that you’ve got a bunch of lawyers who are going to be doing whatever they are” with “some of the most sensitive data on the planet” and “they’re the ones protecting it against hackers should make everyone uneasy.”

So even though odds are pretty good that the majority of users’ chats won’t end up in the sample, Edelson said the mere threat of being included might push some users to rethink how they use AI. He further warned that ChatGPT users turning to OpenAI rival services like Anthropic’s Claude or Google’s Gemini could suggest that Wang’s order is improperly influencing market forces, which also seems “crazy.”

To Edelson, the most “cynical” take could be that news plaintiffs are possibly hoping the order will threaten OpenAI’s business to the point where the AI company agrees to a settlement.

Regardless of the news plaintiffs’ motives, the order sets an alarming precedent, Edelson said. He joined critics suggesting that more AI data may be frozen in the future, potentially affecting even more users as a result of the sweeping order surviving scrutiny in this case. Imagine if litigation one day targets Google’s AI search summaries, Edelson suggested.

Lawyer slams judges for giving ChatGPT users no voice

Edelson told Ars that the order is so potentially threatening to OpenAI’s business that the company may not have a choice but to explore every path available to continue fighting it.

“They will absolutely do something to try to stop this,” Edelson predicted, calling the order “bonkers” for overlooking millions of users’ privacy concerns while “strangely” excluding enterprise customers.

From court filings, it seems possible that enterprise users were excluded to protect OpenAI’s competitiveness, but Edelson suggested there’s “no logic” to their exclusion “at all.” By excluding these ChatGPT users, the judge’s order may have removed the users best resourced to fight the order, Edelson suggested.

“What that means is the big businesses, the ones who have the power, all of their stuff remains private, and no one can touch that,” Edelson said.

Instead, the order is “only going to intrude on the privacy of the common people out there,” which Edelson said “is really offensive,” given that Wang denied two ChatGPT users’ panicked request to intervene.

“We are talking about billions of chats that are now going to be preserved when they weren’t going to be preserved before,” Edelson said, noting that he’s input information about his personal medical history into ChatGPT. “People ask for advice about their marriages, express concerns about losing jobs. They say really personal things. And one of the bargains in dealing with OpenAI is that you’re allowed to delete your chats and you’re allowed to [use] temporary chats.”

The greatest risk to users would be a data breach, Edelson said, but that’s not the only potential privacy concern. Corynne McSherry, legal director for the digital rights group the Electronic Frontier Foundation, previously told Ars that as long as users’ data is retained, it could also be exposed through future law enforcement and private litigation requests.

Edelson pointed out that most privacy attorneys don’t consider OpenAI CEO Sam Altman to be a “privacy guy,” despite Altman recently slamming the NYT, alleging it sued OpenAI because it doesn’t “like user privacy.”

“He’s trying to protect OpenAI, and he does not give a hoot about the privacy rights of consumers,” Edelson said, echoing one ChatGPT user’s dismissed concern that OpenAI may not prioritize users’ privacy concerns in the case if it’s financially motivated to resolve the case.

“The idea that he and his lawyers are really going to be the safeguards here isn’t very compelling,” Edelson said. He criticized the judges for dismissing users’ concerns and rejecting OpenAI’s request that users get a chance to testify.

“What’s really most appalling to me is the people who are being affected have had no voice in it,” Edelson said.

Paramount accused of bribery as it settles Trump lawsuit for $16 million

Payout to future presidential library

Paramount told us that the settlement terms were proposed by a mediator and that it will pay $16 million, including plaintiffs’ fees and costs. That amount, minus the fees and costs, will be allocated to Trump’s future presidential library, Paramount said. Trump’s complaint sought at least $20 billion in damages.

Paramount also said that “no amount will be paid directly or indirectly to President Trump or Rep. Jackson personally” and that the settlement will release Paramount from “all claims regarding any CBS reporting through the date of the settlement, including the Texas action and the threatened defamation action.”

Warren’s statement said the “settlement exposes a glaring need for rules to restrict donations to sitting presidents’ libraries,” and that she will “introduce new legislation to rein in corruption through presidential library donations. The Trump administration’s level of sheer corruption is appalling and Paramount should be ashamed of putting its profits over independent journalism.”

Trump previously obtained settlements from ABC, Meta, and X Corp.

Paramount said the settlement “does not include a statement of apology or regret.” It “agreed that in the future, 60 Minutes will release transcripts of interviews with eligible US presidential candidates after such interviews have aired, subject to redactions as required for legal or national security concerns.”

FCC’s news distortion investigation

Trump and Paramount previously told the court that they were in advanced settlement negotiations and were scheduled to file a joint status report on Thursday.

Federal Communications Commission Chairman Brendan Carr has been probing CBS over the Harris interview and holding up Paramount’s merger with Skydance. Carr revived a complaint that was previously dismissed by the FCC and which alleges that CBS intentionally distorted the news by airing two different answers given by Harris to the same question about Israeli Prime Minister Benjamin Netanyahu.

FCC chair decides inmates and their families must keep paying high phone prices

Federal Communications Commission Chairman Brendan Carr has decided to let prisons and jails keep charging high prices for calling services until at least 2027, delaying implementation of rate caps approved last year when the FCC had a Democratic majority.

Carr’s office announced the change yesterday, saying it was needed because of “negative, unintended consequences stemming from the Commission’s 2024 decision on Incarcerated People’s Communications Services (IPCS)… As a result of this waiver decision, the FCC’s 2021 Order rate cap, site commission, and per-minute pricing rules will apply until April 1, 2027, unless the Commission sets an alternative date.”

Commissioner Anna Gomez, the FCC’s only Democrat, criticized the decision and pointed out that Congress mandated lower prices in the Martha Wright-Reed Act, which the FCC was tasked with implementing.

“Today, the FCC made the indefensible decision to ignore both the law and the will of Congress… rather than enforce the law, the Commission is now stalling, shielding a broken system that inflates costs and rewards kickbacks to correctional facilities at the expense of incarcerated individuals and their loved ones,” Gomez said. “Instead of taking targeted action to address specific concerns, the FCC issued a blanket two-year waiver that undercuts the law’s intent and postpones meaningful relief for millions of families. This is a blatant attempt to sidestep the law, and it will not go unchallenged in court.”

Price caps have angered prison phone providers and operators of prisons and jails that get financial benefits from contracts with the prison telcos. One Arkansas jail ended phone service instead of complying with the rate caps.

Win for prison telco Securus

Carr issued a statement saying that “a number of institutions are or soon will be limiting the availability of IPCS due to concerns with the FCC’s 2024 decision,” and that “there is concerning evidence that the 2024 decision does not allow providers and institutions to properly consider public safety and security interests when facilitating these services.” Carr’s office said the delay is needed to “support the continued availability of IPCS for incarcerated people.”

Ted Cruz gives up on AI law moratorium, joins 99-1 vote against his own plan

Cruz blamed “outside interests”

After the compromise fell apart, the Senate voted 99-1 for Blackburn’s amendment to remove the AI provision from the budget bill. Sen. Thom Tillis (R-N.C.) cast the only vote against the amendment.

“Cruz ultimately got behind Blackburn’s amendment early Tuesday, acknowledging that ‘many of my colleagues would prefer not to vote on this matter,'” according to The Hill. Cruz said the five-year moratorium had support from President Trump and “protected kids and protected the rights of creative artists, but outside interests opposed that deal.”

However, Blackburn was quoted as saying that they “weren’t able to come to a compromise that would protect our governors, our state legislators, our attorney generals and, of course, House members who have expressed concern over this language… what we know is this—this body has proven that they cannot legislate on emerging technology.”

Cantwell pointed out that many state government officials from both major parties opposed the Cruz plan. “Despite several revisions by its author and misleading assurances about its true impact, state officials from across the country, including 17 Republican Governors and 40 state attorneys general, as well [as] conservative and liberal organizations—from the Heritage Foundation to the Center for American Progress—rallied against the harmful proposal,” Cantwell’s office said.

Cantwell and Sen. Ed Markey (D-Mass.) had also filed an amendment to strip the AI moratorium from the bill. Markey said yesterday that “the Blackburn-Cruz so-called compromise is a wolf in sheep’s clothing. Despite Republican efforts to hide the true impact of the AI moratorium, the language still allows the Trump administration to use federal broadband funding as a weapon against the states and still prevents states from protecting children online from Big Tech’s predatory behavior.”

Cantwell said at a recent press conference that 24 states last year started “regulating AI in some way, and they have adopted these laws that fill a gap while we are waiting for federal action.” Yesterday, she called the Blackburn/Cruz compromise “another giveaway to tech companies” that “gives AI and social media a brand-new shield against litigation and state regulation.”

GOP budget bill poised to crush renewable energy in the US

An early evaluation shows the administration’s planned energy policies would result in the drilling of 50,000 new oil wells every year for the next few years, he said, adding that it “ensures the continuation of land devastation… the poisoning of soil and groundwater due to fossil fuels and the continuation of gas blowouts and fires.”

There is nothing beneficial about the tax, he said, “only guaranteed misery.”

An analysis by the Rhodium Group, an energy policy research institute, projected that the Republican regime’s proposed energy policies would result in about 4 billion tons more greenhouse gas emissions than a continuation of current policies—enough to raise the average global temperature by 0.0072° Fahrenheit.
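
For a rough sense of where a figure like that comes from, warming can be estimated from cumulative emissions via the transient climate response to cumulative emissions (TCRE). The TCRE value below is an assumption chosen to reproduce the reported number; published estimates vary:

```python
extra_emissions_gtco2 = 4        # Rhodium's projected extra emissions, GtCO2
tcre_c_per_1000_gtco2 = 1.0      # assumed TCRE; estimates in the literature differ

warming_c = extra_emissions_gtco2 * tcre_c_per_1000_gtco2 / 1000
warming_f = warming_c * 9 / 5    # convert a temperature delta to Fahrenheit
print(f"{warming_f:.4f} °F")     # 0.0072 °F
```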

The overall budget bill was also panned in a June 28 statement by the president of North America’s Building Trades Unions, Sean McGarvey.

McGarvey called it “a massive insult to the working men and women of North America’s Building Trades Unions and all construction workers.”

He said that, as written, the budget “stands to be the biggest job-killing bill in the history of this country,” potentially costing as many jobs as shutting down 1,000 Keystone XL pipeline projects, threatening an estimated 1.75 million construction jobs and over 3 billion work hours, which translates to $148 billion in lost annual wages and benefits.

“These are staggering and unfathomable job loss numbers, and the bill throws yet another lifeline and competitive advantage to China in the race for global energy dominance,” he said.

Research in recent years shows how right-wing populist and nationalist ideologies have used anti-renewable energy arguments to win voters, in defiance of environmental logic and scientific fact, in part by using social media to spread misleading and false information about wind, solar and other emissions-free electricity sources.

The same forces now seem to be at work in the US, said Stephan Lewandowsky, a cognitive psychologist at the University of Bristol who studies how people respond to misinformation and propaganda, and why people reject well-established scientific facts, such as those regarding climate change.

“This is a bonus for fossil fuels at the expense of future generations and the future of the American economy,” he said. “Other countries will continue working towards renewable-energy economies, especially China. That competitive advantage will eventually pay out to the detriment of American businesses. You can’t negotiate with the laws of physics.”

This story originally appeared on Inside Climate News.

Pay up or stop scraping: Cloudflare program charges bots for each crawl

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho—and then giving that agent a budget to spend to acquire the best and most relevant content,” Cloudflare said, promising that “we enable a future where intelligent agents can programmatically negotiate access to digital resources.”
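
Cloudflare has said pay-per-crawl builds on the long-dormant HTTP 402 “Payment Required” status code. Here is a speculative sketch of the crawler’s side of that negotiation; the header names are illustrative assumptions, not Cloudflare’s documented API:

```python
import requests

def fetch_with_budget(url, max_price_usd):
    """Fetch a page, paying at most max_price_usd if it is chargeable."""
    resp = requests.get(url)
    if resp.status_code != 402:
        return resp                      # free content: normal response
    price = float(resp.headers.get("crawler-price", "inf"))  # assumed header
    if price > max_price_usd:
        return None                      # content costs more than the budget
    # Retry, signaling willingness to pay the quoted price (assumed header).
    return requests.get(url, headers={"crawler-max-price": str(price)})
```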

AI crawlers now blocked by default

Cloudflare’s announcement comes after it rolled out a feature last September that allows website owners to block AI crawlers in a single click. According to Cloudflare, over 1 million customers chose to block AI crawlers, signaling that people want more control over their content at a time when, as Cloudflare observed, writing instructions for AI crawlers in robots.txt files was widely “underutilized.”
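
Honoring robots.txt, for comparison, is trivial for a well-behaved crawler—Python’s standard library has handled it for decades. A quick sketch against a hypothetical example.com site, checking OpenAI’s GPTBot alongside Google’s Googlebot:

```python
from urllib.robotparser import RobotFileParser

# A compliant crawler checks robots.txt before fetching anything.
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

for agent in ("GPTBot", "Googlebot"):
    allowed = rp.can_fetch(agent, "https://example.com/some-article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```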

To protect more customers moving forward, any new customers (including anyone on a free plan) who sign up for Cloudflare services will have their domains, by default, set to block all known AI crawlers.

This marks Cloudflare’s transition away from the dreaded opt-out model of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to “fundamentally change how AI companies access web content going forward.”

In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt—including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt—Cloudflare’s feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That’s critical for small content creators who want their sites to still be discoverable but not digested by AI bots.

“AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source—depriving content creators of revenue, and the satisfaction of knowing someone is reading their content,” Cloudflare’s blog said. “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Disclosure: Condé Nast, which owns Ars Technica, is a partner involved in Cloudflare’s beta test.

This story was corrected on July 1 to remove publishers incorrectly listed as participating in Cloudflare’s pay-per-crawl beta.

Meta, TikTok can’t toss wrongful death suit from mom of “subway surfing” teen

Section 230 has so far failed to shield Meta and TikTok owner ByteDance from a lawsuit brought by a mother who alleged that her son’s wrongful death followed a flood of “subway surfing” videos that the platforms intentionally targeted at teens in New York.

In a decision Monday, New York State Supreme Court Judge Paul Goetz largely denied social media companies’ motions to dismiss claims they argued should be barred under Section 230 and the First Amendment. Goetz said that the mother, Norma Nazario, had adequately alleged that subway surfing content “was purposefully fed” to her son Zackery “because of his age” and “not because of any user inputs that indicated he was interested in seeing such content.”

Unlike other Section 230 cases in which platforms’ algorithms were determined to be content-neutral, Goetz wrote that in this case, “it is plausible that the social media defendants’ role exceeded that of neutral assistance in promoting content and constituted active identification of users who would be most impacted by the content.”

Platforms may be forced to demystify algorithms

Moving forward, Nazario will have a chance to seek discovery that could show exactly how Zackery came to interact with the subway surfing content. In her complaint, she did not ask for the removal of all subway surfing content but rather wants to see platforms held accountable for allegedly dangerous design choices that supposedly target unwitting teens.

“Social media defendants should not be permitted to actively target young users of its applications with dangerous ‘challenges’ before the user gives any indication that they are specifically interested in such content and without warning,” Nazario has argued.

And if she’s proven right, that means platforms won’t be forced to censor any content but must instead update algorithms to stop sending “dangerous” challenges to keep teens engaged at a time when they’re more likely to make reckless decisions, Goetz suggested.

Supreme Court to decide whether ISPs must disconnect users accused of piracy

The Supreme Court has agreed to hear a case that could determine whether Internet service providers must terminate users who are accused of copyright infringement.

In a list of orders released today, the court granted a petition filed by cable company Cox. The ISP, which was sued by Sony Music Entertainment, is trying to overturn a ruling that it is liable for copyright infringement because it failed to terminate users accused of piracy. Music companies want ISPs to disconnect users whose IP addresses are repeatedly connected to torrent downloads.

“We are pleased the US Supreme Court has decided to address these significant copyright issues that could jeopardize Internet access for all Americans and fundamentally change how Internet service providers manage their networks,” Cox said today.

Cox was once on the hook for $1 billion in the case. In February 2024, the 4th Circuit appeals court overturned the $1 billion verdict, deciding that Cox did not profit directly from copyright infringement committed by users. But the appeals court found Cox liable for willful contributory infringement and ordered a new damages trial.

The Cox petition asks the Supreme Court to decide whether an ISP “can be held liable for ‘materially contributing’ to copyright infringement merely because it knew that people were using certain accounts to infringe and did not terminate access, without proof that the service provider affirmatively fostered infringement or otherwise intended to promote it.”

Trump admin backed Cox; Sony petition denied

The Trump administration backed Cox last month, saying that ISPs shouldn’t be forced to terminate the accounts of people accused of piracy. Solicitor General John Sauer told the court in a brief that the 4th Circuit decision, if not overturned, “subjects ISPs to potential liability for all acts of copyright infringement committed by particular subscribers as long as the music industry sends notices alleging past instances of infringement by those subscribers” and “might encourage providers to avoid substantial monetary liability by terminating subscribers after receiving a single notice of alleged infringement.”
