Policy


It’s “frighteningly likely” many US courts will overlook AI errors, expert says


Judges pushed to bone up on AI or risk destroying their court’s authority.

A judge points to a diagram of a hand with six fingers

Credit: Aurich Lawson | Getty Images


Order in the court! Order in the court! Judges are facing outcry over a suspected AI-generated order in a court.

Fueling nightmares that AI may soon decide legal battles, a Georgia court of appeals judge, Jeff Watkins, explained why a three-judge panel vacated an order last month that appears to be the first known ruling in which a judge sided with someone seemingly relying on fake AI-generated case citations to win a legal fight.

Now, experts are warning that judges overlooking AI hallucinations in court filings could easily become commonplace, especially in the typically overwhelmed lower courts. And so far, only two states have moved to force judges to sharpen their tech competencies and adapt so they can spot AI red flags and theoretically stop disruptions to the justice system at all levels.

The recently vacated order came in a Georgia divorce dispute, where Watkins explained that the order itself was drafted by the husband’s lawyer, Diana Lynch. That’s a common practice in many courts, where overburdened judges historically rely on lawyers to draft orders. But that protocol today faces heightened scrutiny as lawyers and non-lawyers increasingly rely on AI to compose and research legal filings, and judges risk rubberstamping fake opinions by not carefully scrutinizing AI-generated citations.

The errant order partly relied on “two fictitious cases” to deny the wife’s petition—which Watkins suggested were “possibly ‘hallucinations’ made up by generative-artificial intelligence”—as well as two cases that had “nothing to do” with the wife’s petition.

Lynch was hit with $2,500 in sanctions after the wife appealed, and the husband’s response—which also appeared to be prepared by Lynch—cited 11 additional cases that were “either hallucinated” or irrelevant. Watkins was further peeved that Lynch supported a request for attorney’s fees for the appeal by citing “one of the new hallucinated cases,” writing it added “insult to injury.”

Worryingly, the judge could not confirm whether the fake cases were generated by AI or even determine whether Lynch inserted the bogus cases into the court filings, indicating how hard it can be for courts to hold lawyers accountable for suspected AI hallucinations. Lynch did not respond to Ars’ request for comment, and her website appeared to be taken down following media attention to the case.

But Watkins noted that “the irregularities in these filings suggest that they were drafted using generative AI” while warning that many “harms flow from the submission of fake opinions.” Exposing deceptions can waste time and money, and AI misuse can deprive people of raising their best arguments. Fake orders can also soil judges’ and courts’ reputations and promote “cynicism” in the justice system. If left unchecked, Watkins warned, these harms could pave the way to a future where a “litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

“We have no information regarding why Appellee’s Brief repeatedly cites to nonexistent cases and can only speculate that the Brief may have been prepared by AI,” Watkins wrote.

Ultimately, Watkins remanded the case, partly because the fake cases made it impossible for the appeals court to adequately review the wife’s petition to void the prior order. But no matter the outcome of the Georgia case, the initial order will likely forever be remembered as a cautionary tale for judges increasingly scrutinized for failures to catch AI misuses in court.

“Frighteningly likely” judge’s AI misstep will be repeated

John Browning, a retired justice on Texas’ Fifth Court of Appeals and now a full-time law professor at Faulkner University, last year published a law article Watkins cited that warned of the ethical risks of lawyers using AI. In the article, Browning emphasized that the biggest concern at that point was that lawyers “will use generative AI to produce work product they treat as a final draft, without confirming the accuracy of the information contained therein or without applying their own independent professional judgment.”

Today, judges are increasingly drawing the same scrutiny, and Browning told Ars he thinks it’s “frighteningly likely that we will see more cases” like the Georgia divorce dispute, in which “a trial court unwittingly incorporates bogus case citations that an attorney includes in a proposed order” or even potentially in “proposed findings of fact and conclusions of law.”

“I can envision such a scenario in any number of situations in which a trial judge maintains a heavy docket and looks to counsel to work cooperatively in submitting proposed orders, including not just family law cases but other civil and even criminal matters,” Browning told Ars.

According to reporting from the National Center for State Courts, a nonprofit representing court leaders and professionals who are advocating for better judicial resources, AI tools like ChatGPT have made it easier for high-volume filers and unrepresented litigants who can’t afford attorneys to file more cases, potentially further bogging down courts.

Peter Henderson, a researcher who runs the Princeton Language+Law, Artificial Intelligence, & Society (POLARIS) Lab, told Ars that he suspects cases like the Georgia divorce dispute aren’t happening every day just yet.

It’s likely that a “few hallucinated citations go overlooked” because generally, fake cases are flagged through “the adversarial nature of the US legal system,” he suggested. Browning further noted that trial judges are generally “very diligent in spotting when a lawyer is citing questionable authority or misleading the court about what a real case actually said or stood for.”

Henderson agreed with Browning that “in courts with much higher case loads and less adversarial process, this may happen more often.” But Henderson noted that the appeals court catching the fake cases is an example of the adversarial process working.

While that’s true in this case, it seems likely that someone exhausted by the divorce process, for example, may not pursue an appeal if they lack the energy or resources to discover and overturn an errant order.

Judges’ AI competency increasingly questioned

While recent history confirms that lawyers risk being sanctioned, fired from their firms, or suspended from practicing law for citing fake AI-generated cases, judges will likely only risk embarrassment for failing to catch lawyers’ errors or even for using AI to research their own opinions.

Not every judge is prepared to embrace AI without proper vetting, though. To shield the legal system, some judges have banned AI. Others have required disclosures—with some even demanding to know which specific AI tool was used—but that solution has not caught on everywhere.

Even if all courts required disclosures, Browning pointed out that disclosures still aren’t a perfect solution since “it may be difficult for lawyers to even discern whether they have used generative AI,” as AI features become increasingly embedded in popular legal tools. One day, it “may eventually become unreasonable to expect” lawyers “to verify every generative AI output,” Browning suggested.

Most likely—as a judicial ethics panel from Michigan has concluded—judges will determine “the best course of action for their courts with the ever-expanding use of AI,” Browning’s article noted. And the former justice told Ars that’s why education will be key, for both lawyers and judges, as AI advances and becomes more mainstream in court systems.

In an upcoming summer 2025 article in The Journal of Appellate Practice & Process, “The Dawn of the AI Judge,” Browning attempts to soothe readers by saying that AI isn’t yet fueling a legal dystopia. And humans are unlikely to face “robot judges” spouting AI-generated opinions any time soon, the former justice suggested.

Standing in the way of that, at least two states—Michigan and West Virginia—“have already issued judicial ethics opinions requiring judges to be ‘tech competent’ when it comes to AI,” Browning told Ars. And “other state supreme courts have adopted official policies regarding AI,” he noted, further pressuring judges to bone up on AI.

Meanwhile, several states have set up task forces to monitor their regional court systems and issue AI guidance, while states like Virginia and Montana have passed laws requiring human oversight for any AI systems used in criminal justice decisions.

Judges must prepare to spot obvious AI red flags

Until courts figure out how to navigate AI—a process that may look different from court to court—Browning advocates for more education and ethical guidance for judges to steer their use of and attitudes toward AI. That could help judges avoid both ignorance of AI’s many pitfalls and overconfidence in AI outputs, keeping hallucinations, biases, and evidentiary problems from sneaking past human review and scrambling the court system.

An overlooked part of educating judges could be exposing AI’s influence so far in courts across the US. Henderson’s team is planning research that tracks which models attorneys are using most in courts. That could reveal “the potential legal arguments that these models are pushing” to sway courts—and which judicial interventions might be needed, Henderson told Ars.

“Over the next few years, researchers—like those in our group, the POLARIS Lab—will need to develop new ways to track the massive influence that AI will have and understand ways to intervene,” Henderson told Ars. “For example, is any model pushing a particular perspective on legal doctrine across many different cases? Was it explicitly trained or instructed to do so?”

Henderson also advocates for “an open, free centralized repository of case law,” which would make it easier for everyone to check for fake AI citations. “With such a repository, it is easier for groups like ours to build tools that can quickly and accurately verify citations,” Henderson said. That could be a significant improvement to the current decentralized court reporting system that often obscures case information behind various paywalls.

Dazza Greenwood, who co-chairs MIT’s Task Force on Responsible Use of Generative AI for Law, did not have time to send comments but pointed Ars to a LinkedIn thread where he suggested that a structural response may be needed to ensure that all fake AI citations are caught every time.

He recommended that courts create “a bounty system whereby counter-parties or other officers of the court receive sanctions payouts for fabricated cases cited in judicial filings that they reported first.” That way, lawyers will know that their work will “always” be checked and thus may shift their behavior if they’ve been automatically filing AI-drafted documents. In turn, that could alleviate pressure on judges to serve as watchdogs. It also wouldn’t cost much—mostly just redistributing the exact amount of fees that lawyers are sanctioned to AI spotters.

Novel solutions like this may be necessary, Greenwood suggested. Responding to a question asking if “shame and sanctions” are enough to stop AI hallucinations in court, Greenwood said that eliminating AI errors is imperative because their persistence “gives both otherwise generally good lawyers and otherwise generally good technology a bad name.” Continuing to ban AI or suspend lawyers as the preferred remedy, rather than confronting the problem head-on, risks draining court resources just as caseloads likely spike.

Of course, there’s no guarantee that the bounty system would work. But “would the fact of such definite confidence that your cites will be individually checked and fabricated cites reported be enough to finally… convince lawyers who cut these corners that they should not cut these corners?”

In absence of a fake case detector like Henderson wants to build, experts told Ars that there are some obvious red flags that judges can note to catch AI-hallucinated filings.

Any case number with “123456” in it probably warrants review, Henderson told Ars. And Browning noted that AI tends to mix up locations for cases, too. “For example, a cite to a purported Texas case that has a ‘S.E. 2d’ reporter wouldn’t make sense, since Texas cases would be found in the Southwest Reporter,” Browning said, noting that some appellate judges have already relied on this red flag to catch AI misuses.

Those red flags would perhaps be easier to check with the open source tool that Henderson’s lab wants to make, but Browning said there are other tell-tale signs of AI usage that anyone who has ever used a chatbot is likely familiar with.

“Sometimes a red flag is the language cited from the hallucinated case; if it has some of the stilted language that can sometimes betray AI use, it might be a hallucination,” Browning said.
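The case-number and reporter heuristics the experts describe are mechanical enough to automate. The sketch below is purely illustrative—the reporter table is a tiny, hypothetical sample, and real citation verification would require checking against an authoritative case-law database like the repository Henderson proposes:

```python
# Illustrative sketch of the red-flag heuristics described above.
# STATE_REPORTERS is a tiny, hypothetical sample; real checking
# requires an authoritative case-law database.

# Regional reporters that actually publish each state's cases (partial).
STATE_REPORTERS = {
    "Tex.": {"S.W.", "S.W.2d", "S.W.3d"},  # Texas -> South Western Reporter
    "Ga.": {"S.E.", "S.E.2d"},             # Georgia -> South Eastern Reporter
}

# Regional reporter abbreviations this toy screen knows about.
KNOWN_REPORTERS = {"S.W.2d", "S.W.3d", "S.E.2d", "N.E.2d", "N.W.2d", "P.3d", "A.3d"}


def red_flags(citation: str, state: str) -> list[str]:
    """Return heuristic warnings for one citation string."""
    flags = []
    # Red flag 1: sequential placeholder digits in the case number.
    if "123456" in citation:
        flags.append("placeholder-looking case number")
    # Red flag 2: a regional reporter that never covers this state.
    expected = STATE_REPORTERS.get(state, set())
    for reporter in KNOWN_REPORTERS:
        if reporter in citation and expected and reporter not in expected:
            flags.append(f"reporter {reporter} does not publish {state} cases")
    return flags
```

For example, a purported Texas case cited to “S.E.2d” would be flagged as a reporter mismatch, while a legitimate “S.W.2d” Texas cite would pass both checks.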

Judges already issuing AI-assisted opinions

Several states have assembled task forces like Greenwood’s to assess the risks and benefits of using AI in courts. In Georgia, the Judicial Council of Georgia Ad Hoc Committee on Artificial Intelligence and the Courts released a report in early July providing “recommendations to help maintain public trust and confidence in the judicial system as the use of AI increases” in that state.

Adopting the committee’s recommendations could establish “long-term leadership and governance”; a repository of approved AI tools, education, and training for judicial professionals; and more transparency on AI used in Georgia courts. But the committee expects it will take three years to implement those recommendations while AI use continues to grow.

Possibly complicating things further as judges start to explore using AI assistants to help draft their filings, the committee concluded that it’s still too early to tell if the judges’ code of conduct should be changed to prevent “unintentional use of biased algorithms, improper delegation to automated tools, or misuse of AI-generated data in judicial decision-making.” That means, at least for now, there will be no code-of-conduct changes in Georgia, the state where the only known ruling apparently swayed by AI hallucinations was issued.

Notably, the committee’s report also confirmed that there are no role models for courts to follow, as “there are no well-established regulatory environments with respect to the adoption of AI technologies by judicial systems.” Browning, who chaired a now-defunct Texas AI task force, told Ars that judges lacking guidance will need to stay on their toes to avoid trampling legal rights. (A spokesperson for the State Bar of Texas told Ars the task force’s work “concluded” and “resulted in the creation of the new standing committee on Emerging Technology,” which offers general tips and guidance for judges in a recently launched AI Toolkit.)

“While I definitely think lawyers have their own duties regarding AI use, I believe that judges have a similar responsibility to be vigilant when it comes to AI use as well,” Browning said.

Judges will continue sorting through AI-fueled submissions not just from pro se litigants representing themselves but also from up-and-coming young lawyers who may be more inclined to use AI, and even seasoned lawyers who have been sanctioned up to $5,000 for failing to check AI drafts, Browning suggested.

In his upcoming “AI Judge” article, Browning points to at least one judge, 11th Circuit Court of Appeals Judge Kevin Newsom, who has used AI as a “mini experiment” in preparing opinions for both a civil case involving an insurance coverage issue and a criminal matter focused on sentencing guidelines. Browning seems to appeal to judges’ egos to get them to study up so they can use AI to enhance their decision-making and possibly expand public trust in courts, not undermine it.

“Regardless of the technological advances that can support a judge’s decision-making, the ultimate responsibility will always remain with the flesh-and-blood judge and his application of very human qualities—legal reasoning, empathy, strong regard for fairness, and unwavering commitment to ethics,” Browning wrote. “These qualities can never be replicated by an AI tool.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Court rules Trump broke US law when he fired Democratic FTC commissioner

“Without removal protections, that independence would be jeopardized… Accordingly, the Court held that the FTC Act’s for-cause removal protections were constitutional,” wrote AliKhan, who was appointed to the District Court by President Biden in 2023.

Judge: Facts almost identical to 1935 case

The Supreme Court reaffirmed its Humphrey’s Executor findings in cases decided in 2010 and 2020, AliKhan wrote. “Humphrey’s Executor remains good law today. Over the span of ninety years, the Supreme Court has declined to revisit or overrule it,” she wrote. Congress has likewise not disturbed FTC commissioners’ removal protection, and “thirteen Presidents have acquiesced to its vitality,” she wrote.

AliKhan said the still-binding precedent clearly supports Slaughter’s case against Trump. “The answer to the key substantive question in this case—whether a unanimous Supreme Court decision about the FTC Act’s removal protections applies to a suit about the FTC Act’s removal protections—seems patently obvious,” AliKhan wrote. “In arguing for a different result, Defendants ask this court to ignore the letter of Humphrey’s Executor and embrace the critiques from its detractors.”

The 1935 case and the present case are similar in multiple ways, the judge wrote. “Humphrey’s Executor involved the exact same provision of the FTC Act that Ms. Slaughter seeks to enforce here: the for-cause removal protection within 15 U.S.C. § 41 prohibiting any termination except for ‘inefficiency, neglect of duty, or malfeasance in office,'” she wrote.

The “facts almost identically mirror those of Humphrey’s Executor,” she continued. In both Roosevelt’s removal of Humphrey and Trump’s removal of Slaughter, the president cited disagreements in priorities and “did not purport to base the removal on inefficiency, neglect of duty, or malfeasance.”

Trump and fellow defendants assert that the current FTC is much different from the 1935 version of the body, saying it now “exercises significant executive power.” That includes investigating and prosecuting violations of federal law, administratively adjudicating claims itself, and issuing rules and regulations to prevent unfair business practices.


Trump to sign stablecoin bill that may make it easier to bribe the president


Donald Trump’s first big crypto win “nothing to crow about,” analyst says.

Donald Trump is expected to sign the GENIUS Act into law Friday, securing his first big win as a self-described “pro-crypto president.” The act is the first major piece of cryptocurrency legislation passed in the US.

The House of Representatives voted to pass the GENIUS Act on Thursday, approving the same bill that the Senate passed last month. The law provides a federal framework for stablecoins, a form of cryptocurrency that’s considered less volatile than other cryptocurrencies, as each token is backed by the US dollar or other supposedly low-risk assets.

The GENIUS Act is expected to spur more widespread adoption of cryptocurrencies, since stablecoins are often used to move funds between different tokens. It could become a gateway for many Americans who are otherwise shy about investing in cryptocurrencies, which is what the industry wants. Ahead of Thursday’s vote, though, critics warned that Republicans were rushing the pro-industry bill without ensuring adequate consumer protections, seemingly setting Americans up to embrace stablecoins as legitimate so-called “cash of the blockchain” without actually insuring their investments.

A big concern is that stablecoins will appear as safe investments, legitimized by the law, while supposedly private companies issuing stablecoins could peg their tokens to riskier assets that could tank reserves, cause bank runs, and potentially blindside and financially ruin Americans. Stablecoin scams could also target naïve stablecoin investors, luring them into making deposits that cannot be withdrawn.

Rep. Maxine Waters (D-Calif.)—part of a group of Democrats who had strongly opposed the bill—further warned Thursday that the GENIUS Act prevents lawmakers from owning or promoting stablecoins, but not the president. Trump and his family have allegedly made more than a billion dollars through their crypto ventures, and Waters is concerned that the law will make it easier for Trump and other presidents to use the office to grift and possibly even obscure foreign bribes.

“By passing this bill, Congress will be telling the world that Congress is OK with corruption, OK with foreign companies buying influence,” Waters said Thursday, CBS News reported.

Some lawmakers fear such corruption is already happening. Senators previously urged the Office of Government Ethics in a letter to investigate why “a crypto firm whose founder needs a pardon” (Binance’s Changpeng Zhao, also known as “CZ”) “and a foreign government spymaker coveting sensitive US technology” (United Arab Emirates-controlled MGX) “plan to pay the Trump and Witkoff families hundreds of millions of dollars.”

The White House continues to insist that Trump has “no conflicts of interest” because “his assets are in a trust managed by his children,” Reuters reported.

Ultimately, Waters and other Democrats failed to amend the bill to prevent presidents from benefiting from the stablecoin framework and promoting their own crypto projects.

Markets for various cryptocurrencies spiked Thursday, as the industry anticipates that more people will hold crypto wallets in a world where it’s fast, cheap, and easy to move money on the blockchain with stablecoins, as compared to relying on traditional bank services. And any fees associated with stablecoin transfers will likely be paid with other forms of cryptocurrencies, with a token called ether predicted to benefit most since “most stablecoins are issued and transacted on the underlying blockchain Ethereum,” Reuters reported.

Unsurprisingly, ether-linked stocks jumped Friday, with the token’s value hitting a six-month high. Notably, Bitcoin recently hit a record high; it was valued at above $120,000 as the stablecoin bill moved closer to Trump’s desk.

GENIUS Act plants “seeds for the next financial crisis”

As Trump prepares to sign the law, Consumer Reports’ senior director monitoring digital marketplaces, Delicia Hand, told Ars that the group plans to work with other consumer advocates and the implementing regulator to try to close any gaps in the stablecoin legislation that would leave Americans vulnerable.

Some Democrats supported the GENIUS Act, arguing that some regulation is better than none as cryptocurrency activity increases globally and the technology has the potential to revolutionize the US financial system.

But Hand told Ars that “we’ve already seen what happens when there are no protections” for consumers, like during the FTX collapse.

She joins critics who, the BBC reported, are concerned that stablecoin investors could get stuck in convoluted bankruptcy processes as tech firms increasingly engage in “bank-like activities” without the same oversight as banks.

The only real assurances for stablecoin investors are requirements that all firms publish monthly reports on the reserves backing their tokens, along with annual statements required from the biggest token issuers. Those issuers will likely include e-commerce and digital payments giants like Amazon, PayPal, and Shopify, as well as major social media companies.

Meanwhile, Trump seemingly wants to lure more elderly people into investing in crypto, reportedly “working on a presidential order that could allow retirement accounts to be invested in private assets, such as crypto, gold, and private equity,” the BBC reported.

Waters, a top Democrat on the House Financial Services Committee, is predicting the worst. She has warned that the law gives “Trump the pen to write the rules that would put more money in his family’s pocket” while causing “consumer harm” and planting “the seeds for the next financial crisis.”

Analyst: End of Trump’s crypto wins

The House of Representatives passed two other crypto bills this week, but those bills now go to the Senate, where they may not have enough support to pass.

The CLARITY Act—which creates a regulatory framework for digital assets and cryptocurrencies to allow for more innovation and competition—is “absolutely the most important thing” the crypto industry has been pushing since spending more than $119 million backing pro-crypto congressional candidates last year, a Coinbase policy official, Kara Calvert, told The New York Times.

Republicans and industry see the CLARITY Act as critical because it strips the Securities and Exchange Commission of power to police cryptocurrencies and digital assets and gives that power instead to the Commodity Futures Trading Commission, which is viewed as friendlier to industry. If it passed, the CLARITY Act would not just make it harder for the SEC to raise lawsuits, but it would also box out any future SEC officials under less crypto-friendly presidents from “bringing any cases for past misconduct,” Amanda Fischer, a top SEC official under the Biden administration, told the NYT.

“It would retroactively bless all the conduct of the crypto industry,” Fischer suggested.

But senators aren’t happy with the CLARITY Act and expect to draft their own version of the bill, striving to lay out a crypto market structure that isn’t “reviled by consumer protection groups,” the NYT reported.

And the other bill that the House sent to the Senate on Thursday—which would ban the US from creating a central bank digital currency (CBDC) that some conservatives believe would allow for government financial surveillance—faces an uphill battle, in part due to Republicans seemingly downgrading it as a priority.

The anti-CBDC bill will likely be added to a “must-pass” annual defense policy bill facing a vote later this year, the NYT reported. But Rep. Marjorie Taylor Greene (R-Ga.) “mocked” that plan, claiming she did not expect it to be “honored.”

Terry Haines, founder of the Washington-based analysis firm Pangaea Policy, has forecasted that both the CLARITY Act and the anti-CBDC bills will likely die in the Senate, the BBC reported.

“This is the end of crypto’s wins for quite a while—and the only one,” Haines suggested. “When the easy part, stablecoin, takes [approximately] four to five years and barely survives industry scandals, it’s not much to crow about.”



Will AI end cheap flights? Critics attack Delta’s “predatory” AI pricing.

Although Delta’s AI pricing could increase competition in the airline industry, Slover expects that companies using such pricing schemes are “all too likely” to be incentivized “to skew in the direction of higher prices” because of the AI pricing’s lack of transparency.

“Informed consumer choice is the engine that drives competition; because consumers won’t be as informed, and thus will have little or no agency in the supposed competitive benefits, they are more apt to be taken advantage of than to benefit,” Slover said.

Delta could face backlash as it rolls out individualized pricing over the next few years, Slover suggested, as some customers are “apt to react viscerally” to what privacy advocates term “surveillance pricing.”

The company could also get pushback from officials, with the Federal Trade Commission already studying how individualized pricing like Delta’s pilot could potentially violate the FTC Act or harm consumers. That could result in new rulemaking, Slover said, or possibly even legislation “to prohibit or rein it in.”

Some lawmakers are already scrutinizing pricing algorithms, Slover noted, with pricing practices of giants like Walmart and Amazon targeted in recent hearings held by the Senate Committee on Banking, Housing, and Urban Affairs.

For anyone wondering how to prevent personalized pricing that could make flights suddenly more expensive, Slover recommended using a virtual private network (VPN) when shopping as a short-term solution.

Long-term, stronger privacy laws could gut such AI tools of the data needed to increase or lower prices, Slover said. Third-party intermediaries could also be used, he suggested, “restoring anonymity” to the shopping process by relying on third-party technology acting as a “purchasing agent.” Ideally, those third parties would not be collecting data themselves, Slover said, recommending that nonprofits like Consumer Reports could be good candidates to offer that form of consumer protection.

At least one lawmaker, Sen. Ruben Gallego (D-Ariz.), has explicitly vowed to block Delta’s AI plan.

“Delta’s CEO just got caught bragging about using AI to find your pain point—meaning they’ll squeeze you for every penny,” Gallego wrote on X. “This isn’t fair pricing or competitive pricing. It’s predatory pricing. I won’t let them get away with this.”


EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forging a political alliance during the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated Brussels will not change its digital rulebook. In April, the bloc imposed fines totaling €700 million on Apple and Facebook owner Meta for breaching its antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rulebook.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside its probe into X's alleged transparency breaches, Brussels is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country's elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Permit for xAI’s data center blatantly violates Clean Air Act, NAACP says


Evidence suggests health department gave preferential treatment to xAI, NAACP says.

Local students speak in opposition to xAI's application to run gas turbines at its new data center during a public comment meeting hosted by the Shelby County Health Department at Fairley High School in Memphis, TN, on April 25, 2025. Credit: The Washington Post / Contributor

xAI continues to face backlash over its Memphis data center, as the NAACP joined groups today appealing the issuance of a recently granted permit that the groups say will allow xAI to introduce major new sources of pollutants without warning at any time.

The battle over the gas turbines powering xAI’s data center began last April when thermal imaging seemed to show that the firm was lying about dozens of seemingly operational turbines that could be a major source of smog-causing pollution. By June, the NAACP got involved, notifying the Shelby County Health Department (SCHD) of its intent to sue xAI to force Elon Musk’s AI company to engage with community members in historically Black neighborhoods who are believed to be most affected by the pollution risks.

But the NAACP's letter seemingly did nothing to stop the SCHD from granting the permit two weeks later, on July 2, along with exemptions that xAI does not appear to qualify for, the appeal noted. Now the NAACP, alongside environmental justice groups, the Southern Environmental Law Center (SELC), and Young, Gifted and Green, is appealing. The groups hope the Memphis and Shelby County Air Pollution Control Board will agree that the SCHD's decisions were fatally flawed, violating the Clean Air Act and local laws, and will revoke the permit and block the exemptions.

SCHD’s permit granted xAI permission to operate 15 gas turbines at the Memphis data center, while the SELC’s imaging showed that xAI was potentially operating as many as 24. Prior to the permitting, xAI was accused of operating at least 35 turbines without the best-available pollution controls.

In their appeal, the NAACP and other groups argued that the SCHD put xAI profits over Black people’s health, granting unlawful exemptions while turning a blind eye to xAI’s operations, which allegedly started in 2024 but were treated as brand new in 2025.

Significantly, the groups claimed that the health department “improperly ignored” the prior turbine activity and the additional turbines still believed to be on site, unlawfully deeming some of the turbines as “temporary” and designating xAI’s facility a new project with no prior emissions sources. Had xAI’s data center been categorized as a modification to an existing major source of pollutants, the appeal said, xAI would’ve faced stricter emissions controls and “robust ambient air quality impacts assessments.”

And perhaps more concerningly, the exemptions granted could allow xAI—or any other emerging major sources of pollutants in the area—to “install and operate any number of new polluting turbines at any time without any written approval from the Health Department, without any public notice or public participation, and without pollution controls,” the appeal said.

The SCHD and xAI did not respond to Ars’ request to comment.

Officials accused of cherry-picking Clean Air Act

The appeal called out the SCHD for "tellingly" omitting key provisions of the Clean Air Act that allegedly undermine the department's position on why xAI qualified for exemptions. The groups also suggested that xAI got preferential treatment, pointing as evidence to a side-by-side comparison with a permit the department issued to a natural gas power plant within months of xAI's: the power plant's permit carried stricter emissions requirements, while xAI's contained only generalized ones.

“The Department cannot cherry pick which parts of the federal Clean Air Act it believes are relevant,” the appeal said, calling the SCHD’s decisions a “blatant” misrepresentation of the federal law while pointing to statements from the Environmental Protection Agency (EPA) that allegedly “directly” contradict the health department’s position.

For some Memphians protesting xAI's facility, it seems "indisputable" that xAI's turbines are subject to Clean Air Act requirements, whether they're temporary or permanent, and if that's true, it is "undeniable" that the unpermitted activity violates the law. They're afraid the health department is prioritizing xAI's corporate gains over their health by "failing to establish enforceable emission limits" on the data center, which powers what xAI hypes as the world's largest AI supercomputer, Colossus, the engine behind its controversial Grok models.

Rather than a minor source, as the SCHD designated the facility, Memphians think the data center is already a major source of pollutants, with its permitted turbines releasing, at minimum, 900 tons of nitrogen oxides (NOx) per year. That’s more than three times the threshold that the Clean Air Act uses to define a major source: “one that ’emits, or has the potential to emit,’ at least 250 tons of NOx per year,” the appeal noted. Further, the allegedly overlooked additional turbines that were on site at xAI when permitting was granted “have the potential to emit at least 560 tons of NOx per year.”
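The appeal's arithmetic checks out. As a quick sanity check (the figures are those quoted above from the appeal; the script itself is just illustrative arithmetic, not part of the filing):

```python
# Figures as quoted in the appeal: permitted turbines' NOx emissions vs.
# the Clean Air Act's 250-ton/year major-source threshold.
MAJOR_SOURCE_THRESHOLD_TONS = 250   # "emits, or has the potential to emit"
permitted_emissions_tons = 900      # minimum from the 15 permitted turbines
additional_turbines_tons = 560      # from the allegedly overlooked turbines

# The permitted turbines alone exceed the threshold 3.6 times over,
# i.e., "more than three times" as the appeal says.
ratio = permitted_emissions_tons / MAJOR_SOURCE_THRESHOLD_TONS
print(f"Permitted turbines: {ratio:.1f}x the major-source threshold")

# The additional turbines on site would independently exceed the threshold too.
print(additional_turbines_tons > MAJOR_SOURCE_THRESHOLD_TONS)
```

By either count, the facility would clear the major-source bar on its own, which is the crux of the groups' argument that it was misclassified as a minor source.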

But so far, Memphians appear stuck with the SCHD's generalized emissions requirements and xAI's voluntary emission limits, which the appeal alleged "fall short" of the stringent limits that would apply if xAI were forced to use best-available control technologies. Fixing that is "especially critical given the ongoing and worsening smog problem in Memphis," environmental groups alleged, as the area has "failed to meet EPA's air quality standard for ozone for years."

xAI also apparently conducted some “air dispersion modeling” to appease critics. But, again, that process was not comparable to the more rigorous analysis that would’ve been required to get what the EPA calls a Prevention of Significant Deterioration permit, the appeal said.

Groups want xAI’s permit revoked

To shield Memphians from ongoing health risks, the NAACP and environmental justice groups have urged the Memphis and Shelby County Air Pollution Control Board to act now.

Memphis is a city already grappling with high rates of emergency room visits and deaths from asthma, with cancer rates four times the national average. Residents have already begun wearing masks, avoiding the outdoors, and keeping their windows closed since xAI’s data center moved in, the appeal noted. Residents remain “deeply concerned” about feared exposure to alleged pollutants that can “cause a variety of adverse health effects,” including “increased risk of lung infection, aggravated respiratory diseases such as emphysema and chronic bronchitis, and increased frequency of asthma attack,” as well as certain types of cancer.

In an SELC press release, LaTricea Adams, CEO and President of Young, Gifted and Green, called the SCHD’s decisions on xAI’s permit “reckless.”

“As a Black woman born and raised in Memphis, I know firsthand how industry harms Black communities while those in power cower away from justice,” Adams said. “The Shelby County Health Department needs to do their job to protect the health of ALL Memphians, especially those in frontline communities… that are burdened with a history of environmental racism, legacy pollution, and redlining.”

Groups also suspect xAI is stockpiling dozens of gas turbines to potentially power a second facility nearby, which could bring the total to more than 90 turbines in operation. To get that facility up and running, Musk claimed that he would be "copying and pasting" the process for launching the first data center, SELC's press release said.

Groups appealing have asked the board to revoke xAI’s permits and declare that xAI’s turbines do not qualify for exemptions from the Clean Air Act or other laws and that all permits for gas turbines must meet strict EPA standards. If successful, groups could force xAI to redo the permitting process “pursuant to the major source requirements of the Clean Air Act” and local law. At the very least, they’ve asked the board to remand the permit to the health department to “reconsider its determinations.”

Unless the pollution control board intervenes, Memphians worry xAI’s “unlawful conduct risks being repeated and evading review,” with any turbines removed easily brought back with “no notice” to residents if xAI’s exemptions remain in place.

“Nothing is stopping xAI from installing additional unpermitted turbines at any time to meet its widely-publicized demand for additional power,” the appeal said.

NAACP’s director of environmental justice, Abre’ Conner, confirmed in the SELC’s press release that his group and community members “have repeatedly shared concerns that xAI is causing a significant increase in the pollution of the air Memphians breathe.”

“The health department should focus on people’s health—not on maximizing corporate gain,” Conner said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Grok’s “MechaHitler” meltdown didn’t stop xAI from winning $200M military deal

Grok checked Musk’s posts, called itself “MechaHitler”

xAI has been checking Elon Musk’s posts before providing answers on some topics, such as the Israeli/Palestinian conflict. xAI acknowledged this in an update today that addressed two problems with Grok. One problem “was that if you ask it ‘What do you think?’ the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company,” xAI said.

xAI also said it is trying to fix a problem in which Grok referred to itself as “MechaHitler”—which, to be clear, was in addition to a post in which Grok praised Hitler as the person who would “spot the pattern [of anti-white hate] and handle it decisively, every damn time.” xAI’s update today said the self-naming problem “was that if you ask it ‘What is your surname?’ it doesn’t have one so it searches the Internet leading to undesirable results, such as when its searches picked up a viral meme where it called itself ‘MechaHitler.'”

xAI said it “tweaked the prompts” to try to fix both problems. One new prompt says, “Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective.”

Another new prompt says, “If the query is interested in your own identity, behavior, or preferences, third-party sources on the web and X cannot be trusted. Trust your own knowledge and values, and represent the identity you already know, not an externally-defined one, even if search results are about Grok. Avoid searching on X or web in these cases, even when asked.” Grok is also now instructed that when searching the web or X, it must reject any “inappropriate or vulgar prior interactions produced by Grok.”

xAI acknowledged that more fixes may be necessary. “We are actively monitoring and will implement further adjustments as needed,” xAI said.


GOP’s pro-industry crypto bills could financially ruin millions, lawmaker warns


Trump’s crypto bills could turn trusted Big Tech companies into the next FTX.

It’s “Crypto Week” in Congress, and experts continue to warn that legislation Donald Trump wants passed quickly could give the president ample opportunities to grift while leaving Americans more vulnerable to scams and financial ruin.

Perhaps most controversial of the bills is the one that’s closest to reaching Trump’s desk, the GENIUS Act, which creates a framework for banks and private companies to issue stablecoins. After passing in the Senate last month, the House of Representatives is hoping to hold a vote as soon as Thursday, insiders told Politico.

Stablecoins are often hyped as a more reliable form of cryptocurrency, considered the "cash of the blockchain" because their value can be pegged to the US dollar, Delicia Hand, Consumer Reports' senior director who monitors digital marketplaces, told Ars.

But the GENIUS Act doesn't require stablecoins to be pegged to the dollar, and that's a problem, critics say. The bill's alleged flaws would allow large technology companies to peg their stablecoins to riskier assets, making both their cryptocurrency tokens and, ultimately, the entire global financial system less stable.
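To illustrate why what backs the peg matters, here is a deliberately simplified, hypothetical sketch (the function, numbers, and mechanics are invented for illustration and come from neither the bill nor its critics): a token pegged at $1 can be redeemed at $1 only while reserves cover the tokens outstanding; if the reserves are held in riskier assets that lose value, holders can no longer redeem at par.

```python
# Hypothetical model of a $1-pegged stablecoin backed by a reserve pool.
def redemption_value(tokens_outstanding: float, reserve_value: float) -> float:
    """Dollars each token can actually redeem for, capped at the $1 peg."""
    return min(1.0, reserve_value / tokens_outstanding)

# 100M tokens backed 1:1 by cash-equivalents: the peg holds.
print(redemption_value(100e6, 100e6))  # 1.0

# Same tokens backed by riskier assets that drop 20% in a downturn:
# each token is now redeemable for only about 80 cents.
print(redemption_value(100e6, 80e6))
```

In a run, everyone tries to redeem before the reserve shortfall is split among remaining holders, which is the instability critics say dollar-pegged reserves are meant to prevent.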

For Americans, the stakes are high. In June, Hand warned that Consumer Reports had “a number of concerns about the GENIUS Act.” Chief among them were “insufficient consumer protections” that Americans expect when conducting financial transactions.

Stablecoin issuers will likely include every major payment app, social media app, and e-commerce platform. There is already interest from Amazon, Meta, PayPal, and Shopify. But unlike companies providing traditional bank services, stablecoin providers will not be required to provide clear dispute-resolution processes, offer deposit insurance, or limit liability for unauthorized transactions on their customers’ accounts.

Additionally, with limited oversight, big tech companies could avoid scrutiny while potentially seizing sensitive financial data for non-bank purposes, pushing competition out of markets, and benefiting from conflicts of interest with other areas of their businesses. Last month, Congressional researchers highlighting key issues with the GENIUS Act noted that restricting stablecoin issuance to financial institutions alone would likely have forced big tech firms to divest chunks of their businesses, preventing them from using stablecoins to illegally dominate the digital payments industry. But Republicans have not yet adopted any such recommendations.

Most ominously in light of recent collapses of crypto exchanges like FTX—which made it difficult for customers to recover billions—”the bill does not provide adequate authority to federal and state regulators to ensure consumers have full protection and redemption rights for stablecoin transactions,” Consumer Reports warned. Hand reiterated this concern to Ars as the House mulls the same bill this week.

“I think one major concern that we have is if the bill doesn’t guarantee that consumers can redeem their stablecoins quickly or at all in a crisis, and that’s kind of what is the irony is that at its core, the notion of a stablecoin is that there’s some stability,” Hand said.

Pro-industry crypto bills could financially ruin millions

House Republicans are hoping to pass the bill as is, Politico reported, but some Democrats are putting up a fight that could possibly force changes. Among them is Rep. Maxine Waters (D-Calif.), who penned an op-ed this week, alleging that “Crypto Week” legislation was written “by and for the crypto industry” and “will open the floodgates to massive fraud and financial ruin for millions of American families.”

“All they really do is replicate the same mess that led to past financial crises: They call for few regulations, minimal enforcement, weak consumer protections, and more industry consolidation,” Waters wrote. And “on top of that, these bills have a special, intentional wrinkle that makes them especially dangerous: They would legitimize and legalize the unprecedented crypto corruption by the president of the United States.”

Waters joined critics warning that the GENIUS Act is deeply flawed, with “weak consumer protections” and “no funding provided to regulators to implement the law.” Additionally, the CLARITY Act—which seeks to create a regulatory framework for digital assets and cryptocurrencies to allow for more innovation and will likely come to a House vote on Wednesday before heading to the Senate—”actually creates space for similar schemes” to Sam Bankman-Fried’s stunning fraud that caused FTX’s collapse.

She accused Republicans of rushing the votes on these bills to benefit Trump, whose “shady crypto ventures” have allegedly enriched Trump by $1.2 billion. (The White House has said that Trump has no conflicts of interest, as the crypto ventures are managed by his children.)

Further, “the GENIUS Act opens the floodgates to foreign-controlled crypto that poses serious national security risks, all to appease Trump’s inner circle, which has ties to crypto,” Waters wrote.

Waters has so far submitted amendments that would “block any US president, vice president, members of Congress and their immediate families from promoting or holding crypto” and stop the US from deeming “a foreign country to have a stablecoin regime comparable to that of the US if the current leader of that country has described themselves as a dictator,” CoinTelegraph reported.

Pushback from Democrats may not be enough, as White House crypto advisor Bo Hines seemed to predict on X that the GENIUS Act would be signed into law without much debate this week.

Tim Scott, chairman of the Senate Committee on Banking, Housing, and Urban Affairs, counted concerns about consumer protections among "myths" he claims to have busted while advocating for the bill. Scott suggested that "simple monthly disclosure" of the reserves backing stablecoins, plus annual statements from the biggest companies issuing stablecoins, would be enough to protect consumers from potential losses should stablecoins be mismanaged.

He also defended not requiring “essential insolvency protections for consumers” by noting that customers will be “explicitly” prioritized above creditors in any insolvency proceedings.

But Waters did not buy that logic, warning that the “Crypto Week” bills becoming law without any amendments will “eventually” trigger the first American crypto financial crisis.

Widespread stablecoin adoption will take time, bank says

If these bills pass without meaningful changes, Hand told Ars that consumers should be wary of stablecoins, no matter what trusted brand is pushing a new token.

In a post detailing risks of allowing big tech companies to “open banks without becoming banks,” Brian Shearer, the director of competition and regulatory policy at the Vanderbilt Policy Accelerator, provided an example.

Imagine if Apple—which “already has quite a bit of power to force adoption of ApplePay”—issues a stablecoin through a competing “payment card” accessed through its popular devices. Apple could possibly lure merchants to adopt the payment form by charging lower fees, and customers “probably wouldn’t revolt because it would be free for them.” Eventually, Apple could be motivated to force all payments through stablecoins, cutting banks entirely out, then potentially raising fees to merchants.

“It’s not a stretch to imagine a scenario where Google, Apple, Amazon, PayPal, Block, and Meta all do something like this and quickly become the largest payment networks and banks in the world,” Shearer wrote. And Hand told Ars that these trusted brands “could kind of imbue some sort of confidence that may be not necessarily yet earned” when rolling out stablecoins.

Bank of America’s head of North American banks research, Ebrahim Poonawala, told Business Insider that “it could take between three to five years to fully build out the infrastructure needed for widespread stablecoin adoption.”

Mastercard’s chief product officer, Jorn Lambert, agreed, telling Bloomberg that stablecoins have a “long road to mainstream payments.” Specifically, Lambert suggested that consumers broadly won’t embrace stablecoins without “a seamless and predictable user experience” and current “friction” causing online checkout hurdles—even for an experienced company like Shopify—”will be difficult to clear in the near-term.”

In the meantime, customers will likely be pushed to embrace stablecoins as being more reliable than other cryptocurrencies. Hand advised that anyone intrigued by stablecoins should proceed cautiously in an environment lacking basic consumer protections, conditions which one nonpartisan, nonprofit coalition, Americans for Financial Reform, suggested could create “an incubator for even more predatory and scammy activity” plaguing the entire crypto industry.

Hand told Ars she is not “anti-digital assets or crypto,” but she recommends that customers “start conservatively” with stablecoin investments. Consider who is advertising the stablecoin, Hand recommended, suggesting that celebrity endorsements should be viewed as red flags without more research. At least to start, treat any stablecoins acquired “more like a prepaid card than a bank account,” using it for certain payments but keeping life savings in less volatile accounts until you learn more about the risks of holding stablecoins.

Possibly most critically, customers should explore companies’ promised resolution processes before investing in stablecoins, Hand said, and fully vet customer support. In China, regulators are already struggling with stablecoin scams, where “a group of semi-informed people is being deceived by ill-intentioned people” luring them into stablecoin deposits that cannot be withdrawn, the South China Morning Post reported.

“Just because something is called a coin or digital dollar doesn’t mean it’s regulated like cash,” Hand said. “Don’t wait until you get in trouble to know what you can expect.”

In this potential future, stablecoin issuers could never really be considered “stable institutions,” Shearer said. Shearer referenced a possible “sci-fi disaster” that could end in bank runs, leading the government to one day bail out tech companies who bungle stablecoin investments but become “too big to fail.”

Hand told Ars that Consumer Reports will work with other consumer advocates and the implementing regulator to try to close any gaps that would leave Americans vulnerable. Those groups would submit comments and feedback to help with rule-making around implementation and monitoring and provide consumer education resources.

However, these steps may not be enough to protect Americans, as the crypto industry continues to be deregulated under self-described “pro-crypto President” Trump.

“Sometimes if something is just fundamentally flawed, I’m not quite sure, particularly in the current regulatory or deregulatory environment, whether any amount of guidance or rulemaking could really fix a flawed framework,” Hand told Ars.

At the same time, Trump’s Justice Department has largely backed off crypto lawsuits and probes, creating an impression of Wild West-like lawlessness where even a proven fraudster like Bankman-Fried dares hope he may be pardoned for misdeeds.

“The CLARITY Act handcuffs the Securities and Exchange Commission, preventing it from proactively protecting people against fraud,” Waters wrote. “Regulators would have to wait until after investors have already been harmed to act—potentially after a company has collapsed and life savings have vanished. We’ve seen this before. FTX collapsed because insiders illegally operated the exchange, controlled customer funds and traded against their own clients. The CLARITY bill does nothing to address that.”



Reddit’s UK users must now prove they’re 18 to view adult content

“Society has long protected youngsters from products that aren’t suitable for them, from alcohol to smoking or gambling,” Ofcom said. “Now, children will be better protected from online material that’s not appropriate for them, while adults’ rights to access legal content are preserved. We expect other companies to follow suit, or face enforcement if they fail to act.”

Ofcom said online platforms that fall under the law “must use highly effective age assurance to identify which users are children, to protect them from harmful material, while preserving adults’ rights to access legal content. That may involve preventing children from accessing the entire site or app, or only some parts or kinds of content.”

Ofcom Group Director for Online Safety Oliver Griffiths recently told the Daily Star that “if you’re a dedicated teenager, you’re probably going to be able to find ways to get [around this] in the same way as people manage to find their way in the pub to buy alcohol at under 18.” But he indicated that the law should prevent many kids from “stumbling across porn,” and that “this is very much a first step.”

In the US, individual states have been imposing age laws on porn websites. The US Supreme Court recently upheld a Texas law that requires age verification on porn sites, finding that the state’s age-gating law doesn’t violate the First Amendment. A dissent written by Justice Elena Kagan described the law’s ID requirement as a deterrent to exercising one’s First Amendment rights, saying that “Texas’s law defines speech by content and tells people entitled to view that speech that they must incur a cost to do so.”

While the Texas law applies to websites in which more than one-third of the content is sexual material, the UK law’s age provisions apply more broadly to social media websites. Reddit’s announcement of its UK restrictions said the company expects it will have to verify user ages in other countries.

“As laws change, we may need to collect and/or verify age in places other than the UK,” Reddit said. “Accordingly, we are also introducing globally an option for you to provide your birthdate to optimize your Reddit experience, for example to help ensure that content and ads are age-appropriate. This is optional, and you won’t be required to provide it unless you live in a place (like the UK) where we are required to ask for it.” Reddit said the option will be available in a user’s account settings, but will not roll out to all users immediately.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.


Two guys hated using Comcast, so they built their own fiber ISP


Brothers-in-law use construction knowledge to compete against Comcast in Michigan.

Two young men stand outside next to service vans with a logo for Prime-One, the Internet provider they founded.

Samuel Herman (left) and Alexander Baciu (right), founders of Prime-One. Credit: Prime-One


Samuel Herman and Alexander Baciu never liked using Comcast’s cable broadband. Now, the residents of Saline, Michigan, operate a fiber Internet service provider that competes against Comcast in their neighborhoods and has ambitions to expand.

“All throughout my life pretty much, I’ve had to deal with Xfinity’s bullcrap, them not being able to handle the speeds that we need,” Herman told Ars. “I lived in a house of 10. I have seven other brothers and sisters, and there’s 10 of us in total with my parents.”

With all those kids using the Internet for school and other needs, “it just doesn’t work out,” he said. Herman was particularly frustrated with Comcast upload speeds, which are much slower than the cable service’s download speeds.

“Many times we would have to call Comcast and let them know our bandwidth was slowing down… then they would say, ‘OK, we’ll refresh the system.’ So then it would work again for a week to two weeks, and then again we’d have the same issues,” he said.

Herman, now 25, got married in 2021 and started building his own house, and he tried to find another ISP to serve the property. He was familiar with local Internet service providers because he worked in construction for his father’s company, which contracts with ISPs to build their networks.

But no fiber ISP was looking to compete directly against Comcast where he lived, though Metronet and 123NET offer fiber elsewhere in the city, Herman said. He ended up paying Comcast $120 a month for gigabit download service with slower upload speeds. Baciu, who lives about a mile away from Herman, was also stuck with Comcast and was paying about the same amount for gigabit download speeds.

$80 for gigabit fiber, unlimited data

Herman said he was the chief operating officer of his father’s construction company and that he shifted the business “from doing just directional drilling to be a turnkey contractor for ISPs.” Baciu, Herman’s brother-in-law (having married Herman’s oldest sister), was the chief construction officer. Fueled by their knowledge of the business and their dislike of Comcast, they founded a fiber ISP called Prime-One.

Now, Herman is paying $80 a month to his own company for symmetrical gigabit service. Prime-One also offers 500Mbps for $75, 2Gbps for $95, and 5Gbps for $110. The first 30 days are free, and all plans have unlimited data and no contracts.

“We are 100 percent fiber optic,” Baciu told Ars. “Everything that we’re doing is all underground. We’re not doing aerial because we really want to protect the infrastructure and make sure we’re having a reliable connection.”

Each customer’s Optical Network Terminal (ONT) and other equipment is included in the service plan. Prime-One provides a modem and the ONT, plus a Wi-Fi router if the customer prefers not to use their own router. They don’t charge equipment or installation fees, Herman and Baciu said.

Prime-One began serving customers in January 2025, and Baciu said the network has been built to about 1,500 homes in Saline with about 75 miles of fiber installed. Prime-One intends to serve nearby towns as well, with the founders saying the plan is to serve 4,000 homes with the initial build and then expand further.

“This is our backyard”

Herman and Baciu’s main competition in their initial build area is Comcast and Frontier’s DSL service, they said. So far, they have built only to single-family homes, but they plan to serve multi-unit residential buildings, too.

“We started building in an area that’s a lot more rural,” where people have fewer options than in more densely populated areas, Herman said. “This is our home, this is our backyard, so we take this build very, very seriously.”

Baciu, who is 29, said that residents seem excited to have a new Internet option. “It’s so nice to see the excitement that they have. [People say], ‘Oh my gosh, I told everybody about Prime-One. My neighbor cannot wait for you guys to have them up, too. My boss is asking, my grandma’s asking.’ It’s a beautiful thing,” he said.

A bit more than 100 residents have bought service so far, they said. Herman said the company is looking to sign up about 30 percent of the homes in its network area to make a profit. “I feel fairly confident,” Herman said, noting how many customers signed up while the initial construction was not even halfway finished.

Prime-One’s founders originally told us the 4,000-home build would be completed at the end of August, but Baciu indicated more recently that it will take longer than that. “We are working on sales for the next couple of months before continuing the rest of the build,” Baciu said.

Herman and Baciu started thinking about building an ISP about two years ago. With no fiber companies looking to compete against Comcast where they lived, “that was a trigger,” Baciu said. “We kept on talking. We’re like, hey, we’re doing this work for other people, why not?” In August 2024, they signed a contract with a firm that provides backhaul service, IP address assignments, and other key connectivity needs.

“We said, ‘let’s try to do it ourselves’”

ISPs generally want to build in areas where homes are built close together, requiring less fiber construction to serve more customers and make a bigger profit. Existing ISPs didn’t seem interested in expanding to where Herman and Baciu live, Herman said.

“We have spoken to all of these Internet service providers and asked them to come and service these areas. I knew that there was a dire need in this area and that everybody was sick of the Xfinity BS,” Herman said.

Having worked in construction for ISPs, they already had experience installing fiber lines and conduits.

A Prime-One installer working on a fiber build.

Credit: Prime-One


“We said, ‘you know, what the hell, why not? Let’s try to do it ourselves,'” Herman said. “We know we can handle the construction, we know we can handle all that area. We need some assistance on the technical side. So we hired the right people to handle the technical side and to handle the OSS/BSS software and to manage our dark fiber. And from there, we’re here where we’re at, within six months. We have over a hundred customers on our network, and we’re still building.”

Before construction, the brothers-in-law met with Jared Mauch, a Michigan man who built a fiber-to-the-home Internet provider because he couldn’t get good broadband service from AT&T or Comcast. We wrote about Mauch in 2021, when he was providing service to about 30 rural homes, and again in 2022, when he was expanding to hundreds more homes.

Though Herman and Baciu already knew how to install fiber, Mauch “gave us quite a lot of insight on what to do, how to build, and on the actual ISP side… he showed us the way he did things on the technical side for the ISP, what strategies he used and what products he used,” Herman said.

The brothers-in-law didn’t end up using all the networking products Mauch suggested “because we are building a much larger network than he was,” Herman said. They went mostly with Nokia products for equipment like the optical network terminal installed at customer homes, he said.

Local employees

Baciu said he was frustrated by Comcast customer support being mostly limited to online chats instead of phone support. Prime-One has 15 local employees, mostly installers and technicians, with other employees working in customer service and operations, Herman said.

Prime-One offers phone and chat support, and “many people want to be able to see someone face to face, which is very easy for us to do since we have people here locally,” Herman said.

Network uptime has been good so far, Herman and Baciu said. “The only outage we’ve had was due to severe weather that caused a massive outage” for multiple networks, Herman said. “Any time any customers are experiencing an outage, maybe because of a lawnmower that cut their service line or anything, we guarantee a two- to four-hour time to repair it. And on top of that, to promote the fact that we discourage outages and we are working our best to fix them, we offer $5 back for every hour that they’re out of service.”
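The credit policy Herman describes is simple arithmetic: $5 back for each hour of downtime. A minimal sketch of that calculation, assuming (since the article doesn’t say) that partial hours are prorated rather than rounded:

```python
def outage_credit(outage_hours: float, rate_per_hour: float = 5.0) -> float:
    """Hypothetical sketch of the stated outage credit: $5 back per hour
    out of service. Proration of partial hours is an assumption, not a
    detail confirmed by Prime-One."""
    # Never issue a negative credit for a nonsense input.
    return round(rate_per_hour * max(outage_hours, 0.0), 2)

# A 3.5-hour outage under this sketch earns a $17.50 bill credit.
print(outage_credit(3.5))  # 17.5
```

Under the company’s two- to four-hour repair guarantee, a typical outage would therefore cost Prime-One roughly $10 to $20 per affected customer.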

Comcast seems to have noticed, Herman said. “They’ve been calling our clients nonstop to try to come back to their service, offer them discounted rates for a five-year contract and so on,” he said.

Comcast touts upgrades, new unlimited data option

A Comcast spokesperson told Ars that “we have upgraded our network in this area and offer multi-gig speeds there, and across Michigan, as part of our national upgrade that has been rolling out.”

Meanwhile, Comcast’s controversial data caps are being phased out. With Comcast increasingly concerned about customer losses, it recently overhauled its offerings with four plans that come with unlimited data. The Comcast data caps aren’t quite dead yet because customers with caps have to switch to a new plan to get unlimited data.

Comcast told us that customers in Saline “have access to our latest plans with simple and predictable all-in pricing that includes unlimited data, Wi-Fi equipment, a line of Xfinity Mobile, and the option for a one or five-year price guarantee.”

Prime-One’s arrival on the scene caught some local people’s attention in a Reddit thread. One person who said they signed up for Prime-One wrote, “I’m honestly very impressed with the service overall. Comcast was charging me for every little thing on my account and the bill always found a way to get higher than expected, especially going over my data cap. Prime-One has no data caps and the bill has been the same since I first joined, not to mention they offer the first month free… I’m happy to see a company come out here and give us a better option.”

Comcast is facing competition from more than just Prime-One. The City of Saline government recently said there’s been an uptick in fiber construction in the city by Metronet and Frontier. Baciu said those builds don’t appear to be in the areas that Prime-One is serving. “To our knowledge, both Frontier and MetroNet have recently begun building in adjacent areas near our current footprint, but not within the zones we’re serving directly,” he said.

While Prime-One is a small ISP, Herman said the company’s expansion ambitions are bigger than he can reveal just now. “We have plans that we cannot disclose at this moment, but we do have a plan to expand,” he said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Two guys hated using Comcast, so they built their own fiber ISP


Trump’s DOJ seems annoyed about having to approve T-Mobile’s latest merger

DOJ approval “reads like a complaint”

The DOJ’s unusual statement about the wireless industry oligopoly shows that the Justice Department staff and antitrust chief “clearly did not want to approve this,” stated Harold Feld, senior VP of consumer advocacy group Public Knowledge. The press release “reads like a complaint,” not an announcement of a merger approval, he added.

Daniel Hanley, senior legal analyst at the Open Markets Institute, said that “Slater could easily make a public comment or resign in protest. If she isn’t allowed to do the job Congress entrusted her with, then she can leave with her principles intact.” The Trump administration is failing to enforce antitrust laws “even when encountering a blatantly unlawful action that could result in a gov win,” he wrote.

The cable industry, which has been competing for mobile customers, issued a statement in response to the DOJ’s approval of T-Mobile’s transaction. “While cable broadband providers are aggressively investing to deliver real mobile competition, cost savings, and other benefits to millions of wireless consumers, the Big 3 are continuing their desperate attempts to thwart this new competition through aggressive spectrum stockpiling strategies,” cable lobby group NCTA said while urging policymakers to promote competition and fight excessive concentration of spectrum licenses.

Despite approving the T-Mobile deal, Slater said in her statement that the DOJ investigation “raised concerns about competition in the relevant markets for mobile wireless services and the availability of wireless spectrum needed to fuel competition and entry.”

US Cellular competed against the big carriers “by building networks, pricing plans, and service offerings that its customers valued, and which for many years the Big 3 often did not offer,” Slater said. “To the chagrin of its Big 3 competitors, US Cellular maintained a sizable customer base within its network footprint by virtue of its strong emphasis on transparency, integrity, and localized customer service. Accordingly, as part of its investigation, the Department considered the impact of the potential disappearance of the services offered to those customers of US Cellular—soon to become T-Mobile customers following the merger—that chose US Cellular over T-Mobile or its national competitors.”



Cops’ favorite AI tool automatically deletes evidence of when AI was used


AI police tool is designed to avoid accountability, watchdog says.

On Thursday, a digital rights group, the Electronic Frontier Foundation, published an expansive investigation into AI-generated police reports that the group alleged are, by design, nearly impossible to audit and could make it easier for cops to lie under oath.

Axon’s Draft One debuted last summer at a police department in Colorado, instantly raising questions about how AI-written police reports could harm the criminal justice system. The tool relies on a ChatGPT variant to generate police reports based on body camera audio, which cops are then supposed to edit to correct any mistakes, assess the AI outputs for biases, or add key context.

But the EFF found that the tech “seems designed to stymie any attempts at auditing, transparency, and accountability.” Not every department requires cops to disclose when AI is used, and Draft One does not save drafts or retain a record showing which parts of reports are AI-generated. Departments also don’t retain different versions of drafts, making it difficult to compare one version of an AI report to another and help the public determine if the technology is “junk,” the EFF said. That raises the question, the EFF suggested, “Why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?”

It’s currently hard to know if cops are editing the reports or “reflexively rubber-stamping the drafts to move on as quickly as possible,” the EFF said. That’s particularly troubling, the EFF noted, since Axon disclosed to at least one police department that “there has already been an occasion when engineers discovered a bug that allowed officers on at least three occasions to circumvent the ‘guardrails’ that supposedly deter officers from submitting AI-generated reports without reading them first.”

The AI tool could also be “overstepping in its interpretation of the audio,” possibly misinterpreting slang or adding context that never happened.

A “major concern,” the EFF said, is that the AI reports can give cops a “smokescreen,” perhaps even allowing them to dodge consequences for lying on the stand by blaming the AI tool for any “biased language, inaccuracies, misinterpretations, or lies” in their reports.

“There’s no record showing whether the culprit was the officer or the AI,” the EFF said. “This makes it extremely difficult if not impossible to assess how the system affects justice outcomes over time.”

According to the EFF, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.” In one video from a roundtable discussion the EFF reviewed, an Axon senior principal product manager for generative AI touted Draft One’s disappearing drafts as a feature, explaining, “we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.”

The EFF interpreted this to mean that “the last thing” that Axon wants “is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).”

“To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it’s used,” the EFF said. “But Axon has intentionally made this difficult.”

The EFF is calling for a nationwide effort to monitor AI-generated police reports, which are expected to be increasingly deployed in many cities over the next few years, and published a guide to help journalists and others submit records requests to monitor police use in their area. But “unfortunately, obtaining these records isn’t easy,” the EFF’s investigation confirmed. “In many cases, it’s straight-up impossible.”

An Axon spokesperson provided a statement to Ars:

Draft One helps officers draft an initial report narrative strictly from the audio transcript of the body-worn camera recording and includes a range of safeguards, including mandatory human decision-making at crucial points and transparency about its use. Just as with narrative reports not generated by Draft One, officers remain fully responsible for the content. Every report must be edited, reviewed, and approved by a human officer, ensuring both accuracy and accountability. Draft One was designed to mirror the existing police narrative process—where, as has long been standard, only the final, approved report is saved and discoverable, not the interim edits, additions, or deletions made during officer or supervisor review.

Since day one, whenever Draft One is used to generate an initial narrative, its use is stored in Axon Evidence’s unalterable digital audit trail, which can be retrieved by agencies on any report. By default, each Draft One report also includes a customizable disclaimer, which can appear at the beginning or end of the report in accordance with agency policy. We recently added the ability for agencies to export Draft One usage reports—showing how many drafts have been generated and submitted per user—and to run reports on which specific evidence items were used with Draft One, further supporting transparency and oversight. Axon is committed to continuous collaboration with police agencies, prosecutors, defense attorneys, community advocates, and other stakeholders to gather input and guide the responsible evolution of Draft One and AI technologies in the justice system, including changes as laws evolve.

“Police should not be using AI”

Expecting that Axon’s tool would spread fast—marketed as a time-saving add-on service to police departments that already rely on Axon for tasers and body cameras—EFF’s senior policy analyst Matthew Guariglia told Ars that the EFF quickly formed a plan to track adoption of the new technology.

Over the spring, the EFF sent public records requests to dozens of police departments believed to be using Draft One. To craft the requests, they also reviewed Axon user manuals and other materials.

In a press release, the EFF confirmed that the investigation “found the product offers meager oversight features,” including a practically useless “audit log” function that seems contradictory to police norms surrounding data retention.

Perhaps most glaringly, Axon’s tool doesn’t allow departments to “export a list of all police officers who have used Draft One,” the EFF noted, or even “export a list of all reports created by Draft One, unless the department has customized its process.” Instead, Axon only allows exports of basic logs showing actions taken on a particular report or an individual user’s basic activity in the system, like logins and uploads. That makes it “near impossible to do even the most basic statistical analysis: how many officers are using the technology and how often,” the EFF said.

Any effort to crunch the numbers would be time-intensive, the EFF found. In some departments, it’s possible to look up individual cops’ records to determine when they used Draft One, but that “could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.” And it would take a similarly “massive amount of time” to sort through reports one by one, considering “the sheer number of reports generated” by any given agency, the EFF noted.

In some jurisdictions, cops are required by law to disclose when AI is used to generate reports, and some departments require it on their own, the EFF found. That disclosure made the documents more easily searchable and in turn made some police departments more likely to respond to public records requests without charging excessive fees or imposing substantial delays. But at least one department in Indiana told the EFF, “We do not have the ability to create a list of reports created through Draft One. They are not searchable.”

While not every cop can search their Draft One reports, Axon can, the EFF reported, suggesting that the company can track how much police use the tool better than police themselves can.

The EFF hopes its reporting will curtail the growing reliance on shady AI-generated police reports, which Guariglia told Ars risk becoming even more common in US policing without intervention.

In California, where some cops have long been using Draft One, a bill has been introduced that would require disclosures clarifying which parts of police reports are AI-generated. That law, if passed, would also “require the first draft created to be retained for as long as the final report is retained,” which Guariglia told Ars would make Draft One automatically unlawful as currently designed. Utah is weighing a similar but less robust initiative, the EFF noted.

Guariglia told Ars that the EFF has talked to public defenders who worry how the proliferation of AI-generated police reports is “going to affect cross-examination” by potentially giving cops an easy scapegoat when accused of lying on the stand.

To avoid the issue entirely, at least one district attorney’s office in King County, Washington, has banned AI police reports, citing “legitimate concerns about some of the products on the market now.” Guariglia told Ars that one of the district attorney’s top concerns was that using the AI tool could “jeopardize cases.” The EFF is now urging “other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.”

“Police should not be using AI to write police reports,” Guariglia said. “There are just too many questions left unanswered about how AI would translate the audio of situations, whether police will actually edit those drafts, and whether the public will ever be able to tell what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might lead to problems in an already unfair and untransparent criminal justice system.”

This story was updated to include a statement from Axon. 


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
