Policy

A power utility is reporting suspected pot growers to cops. EFF says that’s illegal.

In May 2020, Sacramento, California, resident Alfonso Nguyen was alarmed to find two Sacramento County Sheriff’s deputies at his door, accusing him of illegally growing cannabis and demanding entry into his home. When Nguyen refused the search and denied the allegation, one deputy allegedly called him a liar and threatened to arrest him.

That same year, deputies from the same department, with their guns drawn and bullhorns and sirens sounding, fanned out around the home of Brian Decker, another Sacramento resident. The officers forced Decker to walk backward out of his home in only his underwear around 7 am while his neighbors watched. The deputies said that he, too, was under suspicion of illegally growing cannabis.

Invasion of the privacy snatchers

According to a motion the Electronic Frontier Foundation filed in Sacramento Superior Court last week, Nguyen and Decker are only two of more than 33,000 Sacramento-area people who have been flagged to the sheriff’s department by the Sacramento Municipal Utility District, the electricity provider for the region. SMUD called the customers out for using what it and department investigators said were suspiciously high amounts of electricity indicative of illegal cannabis farming.

The EFF, citing investigator and SMUD records, said the utility unilaterally analyzes customers’ electricity usage in “painstakingly” detailed 15-minute increments. When analysts identify patterns they deem likely signs of illegal grows, they notify sheriff’s investigators. The EFF says the practice violates privacy protections guaranteed by the federal and California constitutions, and it is seeking a court order barring the warrantless disclosures.

“SMUD’s disclosures invade the privacy of customers’ homes,” EFF attorneys wrote in a court document in support of last week’s motion. “The whole exercise is the digital equivalent of a door-to-door search of an entire city. The home lies at the ‘core’ of constitutional privacy protection.”

Contrary to SMUD’s and sheriff’s investigators’ claims that the flagged accounts reliably indicate illegal grows, the EFF cited multiple examples where they have been wrong. In Decker’s case, for instance, SMUD analysts allegedly told investigators his electricity usage indicated that “4 to 5 grow lights are being used [at his home] from 7pm to 7am.” In actuality, the EFF said, someone in the home was mining cryptocurrency. Nguyen’s electricity consumption was the result of a spinal injury that requires him to use an electric wheelchair and special HVAC equipment to maintain his body temperature.
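
The filing does not disclose SMUD’s actual screening model, but the 15-minute interval data and the 7pm-to-7am pattern cited in Decker’s case suggest the general shape such a heuristic could take. The following Python sketch is hypothetical: the kWh threshold and the 90 percent cutoff are invented for illustration, and the closing comment notes why, per the EFF, this kind of screen misidentifies people.

```python
# Hypothetical sketch of interval-based load screening. SMUD's actual
# methodology is not public; the 15-minute cadence comes from the EFF's
# motion, while the threshold and cutoff below are invented.
def looks_like_grow_lights(readings_kwh, high_kwh=1.5):
    """readings_kwh: one day of 15-minute meter readings (96 values),
    with index 0 at midnight."""
    # The 7 pm-7 am window investigators cited in Brian Decker's case.
    night = readings_kwh[76:] + readings_kwh[:28]
    # Flag homes where nearly every night interval draws heavy power,
    # the blocky signature of grow lights on a 12-hour timer.
    heavy_share = sum(r > high_kwh for r in night) / len(night)
    return heavy_share > 0.9

# The weakness the EFF highlights: a crypto-mining rig running around the
# clock, or an electric wheelchair charger plus specialized HVAC, produces
# the same sustained draw and trips the same flag.
```

A screen built on load shape alone cannot tell what is actually consuming the power, which is why, per the filing, a crypto miner and a wheelchair user ended up flagged as suspected growers.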

xAI workers balked over training request to help “give Grok a face,” docs show

xAI asked the more than 200 employees who did not opt out to record 15- to 30-minute conversations in which one employee posed as the potential Grok user and the other posed as the “host.” xAI was specifically looking for “imperfect data,” BI noted, expecting that training only on crystal-clear videos would limit Grok’s ability to interpret a wider range of facial expressions.

xAI’s goal was to help Grok “recognize and analyze facial movements and expressions, such as how people talk, react to others’ conversations, and express themselves in various conditions,” an internal document said. Allegedly among the only guarantees to employees—who likely recognized how sensitive facial data is—was a promise “not to create a digital version of you.”

To get the most out of data submitted by “Skippy” participants, dubbed tutors, xAI recommended that they never provide one-word answers, always ask follow-up questions, and maintain eye contact throughout the conversations.

The company also apparently provided scripts to evoke facial expressions they wanted Grok to understand, suggesting conversation topics like “How do you secretly manipulate people to get your way?” or “Would you ever date someone with a kid or kids?”

For xAI employees who provided facial training data, privacy concerns may still exist, considering that X—the social platform formerly known as Twitter, which was recently folded into xAI—has been targeted by what Elon Musk called a “massive” cyberattack. Because of privacy risks ranging from identity theft to government surveillance, several states have passed strict biometric privacy laws to prevent companies from collecting such data without explicit consent.

xAI did not respond to Ars’ request for comment.

FCC to eliminate gigabit speed goal and scrap analysis of broadband prices

“As part of our return to following the plain language of section 706, we propose to abolish without replacement the long-term goal of 1,000/500Mbps established in the 2024 Report,” Carr’s plan said. “Not only is a long-term goal not mentioned in section 706, but maintaining such a goal risks skewing the market by unnecessarily potentially picking technological winners and losers.”

Fiber networks can already meet a 1,000/500Mbps standard, and the Biden administration generally prioritized fiber when it came to distributing grants to Internet providers. The Trump administration changed grant-giving procedures to distribute more funds to non-fiber providers such as Elon Musk’s Starlink satellite network.

Carr’s proposal alleged that the 1,000/500Mbps long-term goal would “appear to violate our obligation to conduct our analysis in a technologically neutral manner,” as it “may be unreasonably prejudicial to technologies such as satellite and fixed wireless that presently do not support such speeds.”

100/20Mbps standard appears to survive

When the 100/20Mbps standard was adopted last year, Carr alleged that “the 100/20Mbps requirement appears to be part and parcel of the Commission’s broader attempt to circumvent the statutory requirement of technological neutrality.” It appears the Carr FCC will nonetheless stick with 100/20Mbps for measuring availability of fixed broadband. But his plan would seek comment on that approach, suggesting a possibility that it could be changed.

“We propose to again focus our service availability discussion on fixed broadband at speeds of 100/20Mbps and seek comment on this proposal,” the plan said.

If any regulatory changes are spurred by Carr’s deployment inquiry, they would likely be to eliminate regulations instead of adding them. Carr has been pushing a “Delete, Delete, Delete” initiative to eliminate rules that he considers unnecessary, and his proposal asks for comment on broadband regulations that could be removed.

“Are there currently any regulatory barriers impeding broadband deployment, investment, expansion, competition, and technological innovation that the Commission should consider eliminating?” the call for comment asks.

Researcher threatens X with lawsuit after falsely linking him to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.

UK backing down on Apple encryption backdoor after pressure from US

Under the terms of the legislation, recipients of such a notice are unable to discuss the matter publicly, even with customers affected by the order, unless granted permission by the Home Secretary.

The legislation’s use against Apple has triggered the tech industry’s highest-profile battle over encryption technology in almost a decade.

In response to the demand, Apple withdrew its most secure cloud storage service from the UK in February and is now challenging the Home Office’s order at the Investigatory Powers Tribunal, which probes complaints against the UK’s security services.

Last month, Meta-owned WhatsApp said it would join Apple’s legal challenge, in a rare collaboration between the Silicon Valley rivals.

In the meantime, the Home Office continues to pursue its case with Apple at the tribunal.

Its lawyers discussed the next legal steps this month, reflecting the divisions within government over how best to proceed. “At this point, the government has not backed down,” said one person familiar with the legal process.

Another senior British official added that the UK government was reluctant to push “anything that looks to the US vice-president like a free-speech issue.”

In a combative speech at the Munich Security Conference in February, Vance argued that free speech and democracy were threatened by European elites.

The UK official added that this “limits what we’re able to do in the future, particularly in relation to AI regulation.” The Labour government has delayed plans for AI legislation until after May next year.

Trump has also been critical of the UK stance on encryption.

The US president has likened the UK’s order to Apple to “something… that you hear about with China,” saying in February that he had told Starmer: “You can’t do this.”

US Director of National Intelligence Tulsi Gabbard has also suggested the order would be an “egregious violation” of Americans’ privacy that risked breaching the two countries’ data agreement.

Apple did not respond to a request for comment, but it said in February: “We have never built a back door or master key to any of our products, and we never will.”

The UK government did not respond to a request for comment.

A spokesperson for Vance declined to comment.

The Home Office has previously said the UK has “robust safeguards and independent oversight to protect privacy” and that these powers “are only used on an exceptional basis, in relation to the most serious crimes.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

It’s “frighteningly likely” many US courts will overlook AI errors, expert says


Judges pushed to bone up on AI or risk destroying their court’s authority.

A judge points to a diagram of a hand with six fingers. Credit: Aurich Lawson | Getty Images

Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court.

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband’s lawyer, Diana Lynch. That’s a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on “two fictitious cases” to deny the wife’s petition—which Watkins suggested were “possibly ‘hallucinations’ made up by generative-artificial intelligence”—as well as two cases that had “nothing to do” with the wife’s petition.

Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband’s response—which also appeared to be prepared by Lynch—cited 11 additional cases that were “either hallucinated” or irrelevant. Watkins was further peeved that Lynch supported a request for attorney’s fees for the appeal by citing “one of the new hallucinated cases,” writing it added “insult to injury.”

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine if Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars’ request to comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that “the irregularities in these filings suggest that they were drafted using generative AI” while warning that many “harms flow from the submission of fake opinions.” Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges’ and courts’ reputations and promote “cynicism” in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a “litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

“We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI,” Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife’s petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.

“Frighteningly likely” judge’s AI misstep will be repeated

John Browning, a retired justice on Texas’ Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers “will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment.”

Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it’s “frighteningly likely that we will see more cases” like the Georgia divorce dispute, in which “a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order” or even potentially in “proposed findings of fact and conclusions of law.”

“I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters,” Browning told Ars.

According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals that advocates for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can’t afford attorneys to file more cases, potentially further bogging down courts.

Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he expects cases like the Georgia divorce dispute aren’t happening every day just yet.

It’s likely that a “few hallucinated citations go overlooked” because generally, fake cases are flagged through “the adversarial nature of the US legal system,” he suggested. Browning further noted that trial judges are generally “very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for.”

Henderson agreed with Browning that “in courts with much higher case loads and less adversarial process, this may happen more often.” But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working.

While that’s true in this case, it seems likely that anyone exhausted by the divorce legal process, for example, may not pursue an appeal if they don’t have energy or resources to discover and overturn errant orders.

Judges’ AI competency increasingly questioned

While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers’ errors or even for using AI to research their own opinions.

Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures—with some even demanding to know which specific AI tool was used—but that solution has not caught on everywhere.

Even if all courts required disclosures, Browning pointed out that disclosures still aren’t a perfect solution since “it may be difficult for lawyers to even discern whether they have used generative AI,” as AI features become increasingly embedded in popular legal tools. One day, it “may eventually become unreasonable to expect” lawyers “to verify every generative AI output,” Browning suggested.

Most likely—as a judicial ethics panel from Michigan has concluded—judges will determine “the best course of action for their courts with the ever-expanding use of AI,” Browning’s article noted. And the former justice told Ars that’s why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems.

In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, “The Dawn of the AI Judge,” Browning attempts to soothe readers by saying that AI isn’t yet fueling a legal dystopia. And humans are unlikely to face “robot judges” spouting AI-generated opinions any time soon, the former justice suggested.

Standing in the way of that, at least two states—Michigan and West Virginia—”have already issued judicial ethics opinions requiring judges to be ‘tech competent’ when it comes to AI,” Browning told Ars. And “other state supreme courts have adopted official policies regarding AI,” he noted, further pressuring judges to bone up on AI.

Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions.

Judges must prepare to spot obvious AI red flags

Until courts figure out how to navigate AI—a process that may look different from court to court—Browning advocates for more education and ethical guidance to steer judges’ use of and attitudes about AI. That could help judges avoid both ignorance of AI’s many pitfalls and overconfidence in AI outputs, protecting courts from hallucinations, biases, and evidentiary challenges that might otherwise slip past human review and scramble the court system.

An overlooked part of educating judges could be exposing AI’s influence so far in courts across the US. Henderson’s team is planning research that tracks which models attorneys are using most in courts. That could reveal “the potential legal arguments that these models are pushing” to sway courts—and which judicial interventions might be needed, Henderson told Ars.

“Over the next few years, researchers—like those in our group, the POLARIS Lab—will need to develop new ways to track the massive influence that AI will have and understand ways to intervene,” Henderson told Ars. “For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?”

Henderson also advocates for “an open, free centralized repository of case law,” which would make it easier for everyone to check for fake AI citations. “With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations,” Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls.
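
As a rough illustration of the tooling Henderson describes, a verifier built on such a repository could extract “volume reporter page” citations from a filing and test whether each resolves to a real case. The sketch below is hypothetical: the regex is a simplification of real citation formats, and the `repository.exists()` interface is an invented stand-in rather than any actual API.

```python
import re

# Simplified pattern for citations like "550 U.S. 544" or "123 S.E.2d 425":
# a volume number, a reporter abbreviation, and a page number.
CITATION_RE = re.compile(r"\b(\d{1,4})\s+([A-Z][\w.]*)\s+(\d{1,4})\b")

def verify_citations(filing_text, repository):
    """Map each extracted citation to whether it resolves to a real case.

    repository: hypothetical object exposing exists(volume, reporter, page),
    backed by a centralized case-law database.
    """
    results = {}
    for volume, reporter, page in set(CITATION_RE.findall(filing_text)):
        cite = f"{volume} {reporter} {page}"
        results[cite] = repository.exists(volume, reporter, page)
    return results
```

Citations that fail to resolve would be queued for human review rather than treated as proof of fabrication, since extraction errors and coverage gaps can produce false alarms.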

Dazza Greenwood, who co-chairs MIT’s Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time.

He recommended that courts create “a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first.” That way, lawyers will know that their work will “always” be checked and thus may shift their behavior if they’ve been automatically filing AI-drafted documents. In turn, that could alleviate pressure on judges to serve as watchdogs. It also wouldn’t cost much, since it would mostly redistribute the fees that sanctioned lawyers already pay to the AI spotters who report them.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking if “shame and sanctions” are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because it “gives both otherwise generally good lawyers and otherwise generally good technology a bad name.” Continuing to ban AI or suspend lawyers as the preferred solution risks draining court resources just as caseloads likely spike, rather than confronting the problem head-on.

Of course, there’s no guarantee that the bounty system would work. But “would the fact of such definite confidence that your cures will be individually checked and fabricated cites reported be enough to finally… convince lawyers who cut these corners that they should not cut these corners?”

In absence of a fake case detector like Henderson wants to build, experts told Ars that there are some obvious red flags that judges can note to catch AI-hallucinated filings.

Any case number with “123456” in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. “For example, a cite to a purported Texas case that has a ‘S.E. 2d’ reporter wouldn’t make sense, since Texas cases would be found in the Southwest Reporter,” Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.
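
Both of those red flags are mechanical enough to check automatically. The hypothetical sketch below encodes Henderson’s placeholder-number heuristic and Browning’s reporter-mismatch example; the reporter table covers only the jurisdictions mentioned here, and a real tool would need the full jurisdiction-to-reporter mapping.

```python
# Regional reporters where each state's cases are actually published.
# Illustrative subset only: Texas cases appear in the Southwestern
# Reporter, Georgia cases in the Southeastern Reporter.
REGIONAL_REPORTERS = {
    "Tex.": {"S.W.", "S.W.2d", "S.W.3d"},
    "Ga.": {"S.E.", "S.E.2d"},
}

def citation_red_flags(jurisdiction, reporter, case_number):
    flags = []
    # Henderson's heuristic: sequential placeholder digits in a case number.
    if "123456" in case_number:
        flags.append("placeholder-looking case number")
    # Browning's heuristic: the cited reporter doesn't match where the
    # jurisdiction's cases are published.
    expected = REGIONAL_REPORTERS.get(jurisdiction)
    if expected is not None and reporter not in expected:
        flags.append(f"{jurisdiction} case cited to unexpected reporter {reporter}")
    return flags

# The mismatch Browning describes: a purported Texas case in "S.E.2d".
print(citation_red_flags("Tex.", "S.E.2d", "No. 22-123456"))
```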

Those red flags would perhaps be easier to check with the open source tool that Henderson’s lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with.

“Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination,” Browning said.

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood’s to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing “recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases” in that state.

Adopting the committee’s recommendations could establish “long-term leadership and governance”; a repository of approved AI tools, education, and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it’s still too early to tell if the judges’ code of conduct should be changed to prevent “unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making.” That means, at least for now, there will be no code-of-conduct changes in Georgia, home to the only known case in which AI hallucinations are believed to have swayed a judge.

Notably, the committee’s report also confirmed that there are no role models for courts to follow, as “there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems.” Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force’s work “concluded” and “resulted in the creation of the new standing committee on Emerging Technology,” which offers general tips and guidance for judges in a recently launched AI Toolkit.)

“While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well,” Browning said.

Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming “AI Judge” article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a “mini experiment” in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges’ egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

“Regardless of the technological advances that can support a judge’s decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities—legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics,” Browning wrote. “These qualities can never be replicated by an AI tool.”

Court rules Trump broke US law when he fired Democratic FTC commissioner

“Without removal protections, that independence would be jeopardized… Accordingly, the Court held that the FTC Act’s for-cause removal protections were constitutional,” wrote AliKhan, who was appointed to the District Court by President Biden in 2023.

Judge: Facts almost identical to 1935 case

The Supreme Court reaffirmed its Humphrey’s Executor findings in cases decided in 2010 and 2020, AliKhan wrote. “Humphrey’s Executor remains good law today. Over the span of ninety years, the Supreme Court has declined to revisit or overrule it,” she wrote. Congress has likewise not disturbed FTC commissioners’ removal protection, and “thirteen Presidents have acquiesced to its vitality,” she wrote.

AliKhan said the still-binding precedent clearly supports Slaughter’s case against Trump. “The answer to the key substantive question in this case—whether a unanimous Supreme Court decision about the FTC Act’s removal protections applies to a suit about the FTC Act’s removal protections—seems patently obvious,” AliKhan wrote. “In arguing for a different result, Defendants ask this court to ignore the letter of Humphrey’s Executor and embrace the critiques from its detractors.”

The 1935 case and the present case are similar in multiple ways, the judge wrote. “Humphrey’s Executor involved the exact same provision of the FTC Act that Ms. Slaughter seeks to enforce here: the for-cause removal protection within 15 U.S.C. § 41 prohibiting any termination except for ‘inefficiency, neglect of duty, or malfeasance in office,'” she wrote.

The “facts almost identically mirror those of Humphrey’s Executor,” she continued. In both Roosevelt’s removal of Humphrey and Trump’s removal of Slaughter, the president cited disagreements in priorities and “did not purport to base the removal on inefficiency, neglect of duty, or malfeasance.”

Trump and fellow defendants assert that the current FTC is much different from the 1935 version of the body, saying it now “exercises significant executive power.” That includes investigating and prosecuting violations of federal law, administratively adjudicating claims itself, and issuing rules and regulations to prevent unfair business practices.

Trump to sign stablecoin bill that may make it easier to bribe the president


Donald Trump’s first big crypto win “nothing to crow about,” analyst says.

Donald Trump is expected to sign the GENIUS Act into law Friday, securing his first big win as a self-described “pro-crypto president.” The act is the first major piece of cryptocurrency legislation passed in the US.

The House of Representatives voted to pass the GENIUS Act on Thursday, approving the same bill that the Senate passed last month. The law provides a federal framework for stablecoins, a form of cryptocurrency that’s considered less volatile than other cryptocurrencies, as each token is backed by the US dollar or other supposedly low-risk assets.

The GENIUS Act is expected to spur more widespread adoption of cryptocurrencies, since stablecoins are often used to move funds between different tokens. It could become a gateway for many Americans who are otherwise shy about investing in cryptocurrencies, which is what the industry wants. Ahead of Thursday’s vote, though, critics warned that Republicans were rushing the pro-industry bill without ensuring adequate consumer protections, seemingly setting Americans up to embrace stablecoins as legitimate so-called “cash of the blockchain” without actually insuring their investments.

A big concern is that stablecoins will appear to be safe investments, legitimized by the law, while the private companies issuing them could peg their tokens to riskier assets that could tank reserves, cause bank runs, and potentially blindside and financially ruin Americans. Stablecoin scams could also target naïve investors, luring them into making deposits that cannot be withdrawn.

Rep. Maxine Waters (D-Calif.)—part of a group of Democrats who had strongly opposed the bill—further warned Thursday that the GENIUS Act prevents lawmakers from owning or promoting stablecoins, but not the president. Trump and his family have allegedly made more than a billion dollars through their crypto ventures, and Waters is concerned that the law will make it easier for Trump and other presidents to use the office to grift and possibly even obscure foreign bribes.

“By passing this bill, Congress will be telling the world that Congress is OK with corruption, OK with foreign companies buying influence,” Waters said Thursday, CBS News reported.

Some lawmakers fear such corruption is already happening. Senators previously urged the Office of Government Ethics in a letter to investigate why “a crypto firm whose founder needs a pardon” (Binance’s Changpeng Zhao, also known as “CZ”) “and a foreign government spymaker coveting sensitive US technology” (United Arab Emirates-controlled MGX) “plan to pay the Trump and Witkoff families hundreds of millions of dollars.”

The White House continues to insist that Trump has “no conflicts of interest” because “his assets are in a trust managed by his children,” Reuters reported.

Ultimately, Waters and other Democrats failed to amend the bill to prevent presidents from benefiting from the stablecoin framework and promoting their own crypto projects.

Markets for various cryptocurrencies spiked Thursday, as the industry anticipates that more people will hold crypto wallets in a world where it’s fast, cheap, and easy to move money on the blockchain with stablecoins, as compared to relying on traditional bank services. And any fees associated with stablecoin transfers will likely be paid with other forms of cryptocurrencies, with a token called ether predicted to benefit most since “most stablecoins are issued and transacted on the underlying blockchain Ethereum,” Reuters reported.

Unsurprisingly, ether-linked stocks jumped Friday, with the token’s value hitting a six-month high. Notably, Bitcoin recently hit a record high; it was valued at above $120,000 as the stablecoin bill moved closer to Trump’s desk.

GENIUS Act plants “seeds for the next financial crisis”

As Trump prepares to sign the law, Consumer Reports’ senior director monitoring digital marketplaces, Delicia Hand, told Ars that the group plans to work with other consumer advocates and the implementing regulator to try to close any gaps in the stablecoin legislation that would leave Americans vulnerable.

Some Democrats supported the GENIUS Act, arguing that some regulation is better than none as cryptocurrency activity increases globally and the technology has the potential to revolutionize the US financial system.

But Hand told Ars that “we’ve already seen what happens when there are no protections” for consumers, like during the FTX collapse.

She joins critics who, the BBC reported, are concerned that stablecoin investors could get stuck in convoluted bankruptcy processes as tech firms engage more and more in “bank-like activities” without the same oversight as banks.

The only real assurances for stablecoin investors are requirements that all firms publish monthly disclosures of the reserves backing their tokens and that the biggest issuers file annual statements. Those issuers will likely include e-commerce and digital payments giants like Amazon, PayPal, and Shopify, as well as major social media companies.

Meanwhile, Trump seemingly wants to lure more elderly people into investing in crypto, reportedly “working on a presidential order that could allow retirement accounts to be invested in private assets, such as crypto, gold, and private equity,” the BBC reported.

Waters, a top Democrat on the House Financial Services Committee, is predicting the worst. She has warned that the law gives “Trump the pen to write the rules that would put more money in his family’s pocket” while causing “consumer harm” and planting “the seeds for the next financial crisis.”

Analyst: End of Trump’s crypto wins

The House of Representatives passed two other crypto bills this week, but those bills now go to the Senate, where they may not have enough support to pass.

The CLARITY Act—which creates a regulatory framework for digital assets and cryptocurrencies to allow for more innovation and competition—is “absolutely the most important thing” the crypto industry has been pushing since spending more than $119 million backing pro-crypto congressional candidates last year, a Coinbase policy official, Kara Calvert, told The New York Times.

Republicans and industry see the CLARITY Act as critical because it strips the Securities and Exchange Commission of power to police cryptocurrencies and digital assets and gives that power instead to the Commodity Futures Trading Commission, which is viewed as friendlier to industry. If passed, the CLARITY Act would not just make it harder for the SEC to bring lawsuits; it would also box out any future SEC officials under less crypto-friendly presidents from “bringing any cases for past misconduct,” Amanda Fischer, a top SEC official under the Biden administration, told the NYT.

“It would retroactively bless all the conduct of the crypto industry,” Fischer suggested.

But Senators aren’t happy with the CLARITY Act and expect to draft their own version of the bill, striving to lay out a crypto market structure that isn’t “reviled by consumer protection groups,” the NYT reported.

And the other bill that the House sent to the Senate on Thursday—which would ban the US from creating a central bank digital currency (CBDC) that some conservatives believe would allow for government financial surveillance—faces an uphill battle, in part due to Republicans seemingly downgrading it as a priority.

The anti-CBDC bill will likely be added to a “must-pass” annual defense policy bill facing a vote later this year, the NYT reported. But Rep. Marjorie Taylor Greene (R.-Ga.) “mocked” that plan, claiming she did not expect it to be “honored.”

Terry Haines, founder of the Washington-based analysis firm Pangaea Policy, has forecasted that both the CLARITY Act and the anti-CBDC bills will likely die in the Senate, the BBC reported.

“This is the end of crypto’s wins for quite a while—and the only one,” Haines suggested. “When the easy part, stablecoin, takes [approximately] four to five years and barely survives industry scandals, it’s not much to crow about.”

Will AI end cheap flights? Critics attack Delta’s “predatory” AI pricing.

Although Delta’s AI pricing could increase competition in the airline industry, Slover expects that companies using such pricing schemes are “all too likely” to be incentivized “to skew in the direction of higher prices” because of the AI pricing’s lack of transparency.

“Informed consumer choice is the engine that drives competition; because consumers won’t be as informed, and thus will have little or no agency in the supposed competitive benefits, they are more apt to be taken advantage of than to benefit,” Slover said.

Delta could face backlash as it rolls out individualized pricing over the next few years, Slover suggested, as some customers are “apt to react viscerally” to what privacy advocates term “surveillance pricing.”

The company could also get pushback from officials, with the Federal Trade Commission already studying how individualized pricing like Delta’s pilot could potentially violate the FTC Act or harm consumers. That could result in new rulemaking, Slover said, or possibly even legislation “to prohibit or rein it in.”

Some lawmakers are already scrutinizing pricing algorithms, Slover noted, with pricing practices of giants like Walmart and Amazon targeted in recent hearings held by the Senate Committee on Banking, Housing, and Urban Affairs.

For anyone wondering how to prevent personalized pricing that could make flights suddenly more expensive, Slover recommended using a virtual private network (VPN) when shopping as a short-term solution.

Long-term, stronger privacy laws could gut such AI tools of the data needed to increase or lower prices, Slover said. Third-party intermediaries could also be used, he suggested, “restoring anonymity” to the shopping process by relying on third-party technology acting as a “purchasing agent.” Ideally, those third parties would not be collecting data themselves, Slover said, recommending that nonprofits like Consumer Reports could be good candidates to offer that form of consumer protection.

At least one lawmaker, Sen. Ruben Gallego (D-Ariz.), has explicitly vowed to block Delta’s AI plan.

“Delta’s CEO just got caught bragging about using AI to find your pain point—meaning they’ll squeeze you for every penny,” Gallego wrote on X. “This isn’t fair pricing or competitive pricing. It’s predatory pricing. I won’t let them get away with this.”

EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forming a political alliance during the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated that Brussels will not change its digital rulebook. In April, the bloc imposed a total of €700 million in fines on Apple and Facebook owner Meta for breaching antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rulebook.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside Brussels’ probe into X’s transparency breaches, the commission is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country’s elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Permit for xAI’s data center blatantly violates Clean Air Act, NAACP says


Evidence suggests health department gave preferential treatment to xAI, NAACP says.

Local students speak in opposition to xAI’s permit application to run gas turbines at its new data center in Memphis, TN, during a public comment meeting hosted by the Shelby County Health Department at Fairley High School on April 25, 2025. Credit: The Washington Post / Contributor

xAI continues to face backlash over its Memphis data center, as the NAACP joined groups today appealing the issuance of a recently granted permit that the groups say will allow xAI to introduce major new sources of pollutants without warning at any time.

The battle over the gas turbines powering xAI’s data center began last April when thermal imaging seemed to show that the firm was lying about dozens of seemingly operational turbines that could be a major source of smog-causing pollution. By June, the NAACP got involved, notifying the Shelby County Health Department (SCHD) of its intent to sue xAI to force Elon Musk’s AI company to engage with community members in historically Black neighborhoods who are believed to be most affected by the pollution risks.

But the NAACP’s letter seemingly did nothing to stop the SCHD from granting the permits two weeks later on July 2, as well as exemptions that xAI does not appear to qualify for, the appeal noted. Now, the NAACP—alongside environmental justice groups; the Southern Environmental Law Center (SELC); and Young, Gifted and Green—is appealing. The groups are hoping the Memphis and Shelby County Air Pollution Control Board will revoke the permit and block the exemptions, agreeing that the SCHD’s decisions were fatally flawed, violating the Clean Air Act and local laws.

SCHD’s permit granted xAI permission to operate 15 gas turbines at the Memphis data center, while the SELC’s imaging showed that xAI was potentially operating as many as 24. Prior to the permitting, xAI was accused of operating at least 35 turbines without the best-available pollution controls.

In their appeal, the NAACP and other groups argued that the SCHD put xAI profits over Black people’s health, granting unlawful exemptions while turning a blind eye to xAI’s operations, which allegedly started in 2024 but were treated as brand new in 2025.

Significantly, the groups claimed that the health department “improperly ignored” the prior turbine activity and the additional turbines still believed to be on site, unlawfully deeming some of the turbines as “temporary” and designating xAI’s facility a new project with no prior emissions sources. Had xAI’s data center been categorized as a modification to an existing major source of pollutants, the appeal said, xAI would’ve faced stricter emissions controls and “robust ambient air quality impacts assessments.”

And perhaps more concerningly, the exemptions granted could allow xAI—or any other emerging major sources of pollutants in the area—to “install and operate any number of new polluting turbines at any time without any written approval from the Health Department, without any public notice or public participation, and without pollution controls,” the appeal said.

The SCHD and xAI did not respond to Ars’ request to comment.

Officials accused of cherry-picking Clean Air Act

The appeal called out the SCHD for “tellingly” omitting key provisions of the Clean Air Act that allegedly undermined the department’s “position” when explaining why xAI qualified for exemptions. The groups also suggested that xAI received preferential treatment, offering as evidence a side-by-side comparison showing that a natural gas power plant was issued a permit with stricter emissions requirements within months of xAI receiving its permit with only generalized emissions requirements.

“The Department cannot cherry pick which parts of the federal Clean Air Act it believes are relevant,” the appeal said, calling the SCHD’s decisions a “blatant” misrepresentation of the federal law while pointing to statements from the Environmental Protection Agency (EPA) that allegedly “directly” contradict the health department’s position.

For some Memphians protesting xAI’s facility, it seems “indisputable” that xAI’s turbines, whether temporary or permanent, are subject to Clean Air Act requirements, and if that’s true, it is “undeniable” that the activity violates the law. They’re afraid the health department is prioritizing xAI’s corporate gains over their health by “failing to establish enforceable emission limits” on the data center, which powers what xAI hypes as the world’s largest AI supercomputer, Colossus, the engine behind its controversial Grok models.

Rather than a minor source, as the SCHD designated the facility, Memphians think the data center is already a major source of pollutants, with its permitted turbines releasing, at minimum, 900 tons of nitrogen oxides (NOx) per year. That’s more than three times the threshold that the Clean Air Act uses to define a major source: “one that ’emits, or has the potential to emit,’ at least 250 tons of NOx per year,” the appeal noted. Further, the allegedly overlooked additional turbines that were on site at xAI when permitting was granted “have the potential to emit at least 560 tons of NOx per year.”

But so far, Memphians appear stuck with the SCHD’s generalized emissions requirements and xAI’s voluntary emission limits, which the appeal alleged “fall short” of the stringent limits imposed if xAI were forced to use best-available control technologies. Fixing that is “especially critical given the ongoing and worsening smog problem in Memphis,” environmental groups alleged, which is an area that has “failed to meet EPA’s air quality standard for ozone for years.”

xAI also apparently conducted some “air dispersion modeling” to appease critics. But, again, that process was not comparable to the more rigorous analysis that would’ve been required to get what the EPA calls a Prevention of Significant Deterioration permit, the appeal said.

Groups want xAI’s permit revoked

To shield Memphians from ongoing health risks, the NAACP and environmental justice groups have urged the Memphis and Shelby County Air Pollution Control Board to act now.

Memphis is a city already grappling with high rates of emergency room visits and deaths from asthma, with cancer rates four times the national average. Residents have already begun wearing masks, avoiding the outdoors, and keeping their windows closed since xAI’s data center moved in, the appeal noted. Residents remain “deeply concerned” about feared exposure to alleged pollutants that can “cause a variety of adverse health effects,” including “increased risk of lung infection, aggravated respiratory diseases such as emphysema and chronic bronchitis, and increased frequency of asthma attack,” as well as certain types of cancer.

In an SELC press release, LaTricea Adams, CEO and President of Young, Gifted and Green, called the SCHD’s decisions on xAI’s permit “reckless.”

“As a Black woman born and raised in Memphis, I know firsthand how industry harms Black communities while those in power cower away from justice,” Adams said. “The Shelby County Health Department needs to do their job to protect the health of ALL Memphians, especially those in frontline communities… that are burdened with a history of environmental racism, legacy pollution, and redlining.”

Groups also suspect xAI is stockpiling dozens of gas turbines to potentially power a second facility nearby—which could lead to over 90 turbines in operation. To get that facility up and running, Musk claimed that he will be “copying and pasting” the process for launching the first data center, SELC’s press release said.

Groups appealing have asked the board to revoke xAI’s permits and declare that xAI’s turbines do not qualify for exemptions from the Clean Air Act or other laws and that all permits for gas turbines must meet strict EPA standards. If successful, groups could force xAI to redo the permitting process “pursuant to the major source requirements of the Clean Air Act” and local law. At the very least, they’ve asked the board to remand the permit to the health department to “reconsider its determinations.”

Unless the pollution control board intervenes, Memphians worry xAI’s “unlawful conduct risks being repeated and evading review,” with any turbines removed easily brought back with “no notice” to residents if xAI’s exemptions remain in place.

“Nothing is stopping xAI from installing additional unpermitted turbines at any time to meet its widely-publicized demand for additional power,” the appeal said.

NAACP’s director of environmental justice, Abre’ Conner, confirmed in the SELC’s press release that his group and community members “have repeatedly shared concerns that xAI is causing a significant increase in the pollution of the air Memphians breathe.”

“The health department should focus on people’s health—not on maximizing corporate gain,” Conner said.

Grok’s “MechaHitler” meltdown didn’t stop xAI from winning $200M military deal

Grok checked Musk’s posts, called itself “MechaHitler”

Grok has been checking Elon Musk’s posts before providing answers on some topics, such as the Israeli/Palestinian conflict. xAI acknowledged this in an update today that addressed two problems with Grok. One problem “was that if you ask it ‘What do you think?’ the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company,” xAI said.

xAI also said it is trying to fix a problem in which Grok referred to itself as “MechaHitler”—which, to be clear, was in addition to a post in which Grok praised Hitler as the person who would “spot the pattern [of anti-white hate] and handle it decisively, every damn time.” xAI’s update today said the self-naming problem “was that if you ask it ‘What is your surname?’ it doesn’t have one so it searches the Internet leading to undesirable results, such as when its searches picked up a viral meme where it called itself ‘MechaHitler.'”

xAI said it “tweaked the prompts” to try to fix both problems. One new prompt says, “Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective.”

Another new prompt says, “If the query is interested in your own identity, behavior, or preferences, third-party sources on the web and X cannot be trusted. Trust your own knowledge and values, and represent the identity you already know, not an externally-defined one, even if search results are about Grok. Avoid searching on X or web in these cases, even when asked.” Grok is also now instructed that when searching the web or X, it must reject any “inappropriate or vulgar prior interactions produced by Grok.”

xAI acknowledged that more fixes may be necessary. “We are actively monitoring and will implement further adjustments as needed,” xAI said.
