Author name: Beth Washington

rice-could-be-key-to-brewing-better-non-alcoholic-beer

Rice could be key to brewing better non-alcoholic beer

small glass of light colored beer with a nice foam head

Rice enhances flavor profiles for nonalcoholic beer, reduces fermentation time, and may contribute to flavor stability. Credit: Paden Johnson/CC BY-NC-SA

He and his team—including Christian Schubert, a visiting postdoc from the Research Institute for Raw Materials and Beverage Analysis in Berlin—brewed their own non-alcoholic beers, ranging from those made with 100 percent barley malt to ones made with 100 percent rice. They conducted a volatile chemical analysis to identify specific compounds present in the beers and assembled two sensory panels of tasters (one in the US, one in Europe) to assess aromas, flavors, and mouthfeel.

The panelists determined the rice-brewed beers had less worty flavors, and the chemical analysis revealed why: lower levels of aldehyde compounds. Instead, other sensory attributes emerged, most notably vanilla or buttery notes. “If a brewer wanted a more neutral character, they could use nonaromatic rice,” the authors wrote. Along with brewing beers with 50 percent barley/50 percent rice, this would produce non-alcoholic beers likely to appeal more broadly to consumers.

The panelists also noted that higher rice content resulted in beers with a fatty/creamy mouthfeel—likely because higher rice content was correlated with increased levels of larger alcohol molecules, which are known to contribute to a pleasant mouthfeel. But it didn’t raise the alcohol content above the legal threshold for a nonalcoholic beer.

There were cultural preferences, however. The US panelists didn’t mind worty flavors as much as the European tasters did, which might explain why the former chose beers brewed with 70 percent barley/30 percent rice as the optimal mix. Their European counterparts preferred the opposite ratio (30 percent barley/70 percent rice). The explanation “may lie in the sensory expectations shaped by each region’s brewing traditions,” the authors wrote. Fermentation also occurred more quickly as the rice content increased because of higher levels of glucose and fructose.

The second study focused on testing 74 different rice cultivars to determine their extract yields, an important variable when it comes to an efficient brewing process, since higher yields mean brewers can use less grain, thereby cutting costs. This revealed that cultivars with lower amylose content cracked more easily to release sugars during the mashing process, producing the highest yields. And certain varieties also had lower gelatinization temperatures for greater ease of processing.

International Journal of Food Science, 2025. DOI: 10.1080/10942912.2025.2520907  (About DOIs)

Journal of the American Society of Brewing Chemists, 2025. DOI: 10.1080/03610470.2025.2499768

Rice could be key to brewing better non-alcoholic beer Read More »

at&t-rolls-out-wireless-account-lock-protection-to-curb-the-sim-swap-scourge

AT&T rolls out Wireless Account Lock protection to curb the SIM-swap scourge

AT&T is rolling out a protection that prevents unauthorized changes to mobile accounts as the carrier attempts to fight a costly form of account hijacking that occurs when a scammer swaps out the SIM card belonging to the account holder.

The technique, known as SIM swapping or port-out fraud, has been a scourge that has vexed wireless carriers and their millions of subscribers for years. An indictment filed last year by federal prosecutors alleged that a single SIM swap scheme netted $400 million in cryptocurrency. The stolen funds belonged to dozens of victims who had used their phones for two-factor authentication to cryptocurrency wallets.

Wireless Account Lock debut

A separate scam from 2022 gave unauthorized access to a T-Mobile management platform that subscription resellers, known as mobile virtual network operators, use to provision services to their customers. The threat actor gained access using a SIM swap of a T-Mobile employee, a phishing attack on another T-Mobile employee, and at least one compromise of an unknown origin.

This class of attack has existed for well over a decade, and it became more commonplace amid the irrational exuberance that drove up the price of bitcoin and other cryptocurrencies. In some cases, scammers impersonate existing account holders who want a new phone number for their account. At other times, they simply bribe the carrier’s employees to make unauthorized changes.

AT&T rolls out Wireless Account Lock protection to curb the SIM-swap scourge Read More »

nudify-app’s-plan-to-dominate-deepfake-porn-hinges-on-reddit,-4chan,-and-telegram,-docs-show

Nudify app’s plan to dominate deepfake porn hinges on Reddit, 4chan, and Telegram, docs show


Reddit confirmed the nudify app’s links have been blocked since 2024.

Clothoff—one of the leading apps used to quickly and cheaply make fake nudes from images of real people—reportedly is planning a global expansion to continue dominating deepfake porn online.

Also known as a nudify app, Clothoff has resisted attempts to unmask and confront its operators. Last August, the app was among those that San Francisco’s city attorney, David Chiu, sued in hopes of forcing a shutdown. But recently, a whistleblower—who had “access to internal company information” as a former Clothoff employee—told the investigative outlet Der Spiegel that the app’s operators “seem unimpressed by the lawsuit” and instead of worrying about shutting down have “bought up an entire network of nudify apps.”

Der Spiegel found evidence that Clothoff today owns at least 10 other nudify services, attracting “monthly views ranging between hundreds of thousands to several million.” The outlet granted the whistleblower anonymity to discuss the expansion plans, which the whistleblower claimed was motivated by Clothoff employees growing “cynical” and “obsessed with money” over time as the app—which once felt like an “exciting startup”—gained momentum. Because generating convincing fake nudes can cost just a few bucks, chasing profits seemingly relies on attracting as many repeat users to as many destinations as possible.

Currently, Clothoff runs on an annual budget of around $3.5 million, the whistleblower told Der Spiegel. It has shifted its marketing methods since its launch, apparently now largely relying on Telegram bots and X channels to target ads at young men likely to use their apps.

Der Spiegel’s report documents Clothoff’s “large-scale marketing plan” to expand into the German market, as revealed by the whistleblower. The alleged campaign hinges on producing “naked images of well-known influencers, singers, and actresses,” seeking to entice ad clicks with the tagline “you choose who you want to undress.”

A few of the stars named in the plan confirmed to Der Spiegel that they never agreed to this use of their likenesses, with some of their representatives suggesting that they would pursue legal action if the campaign is ever launched.

However, even celebrities like Taylor Swift have struggled to combat deepfake nudes spreading online, while tools like Clothoff are increasingly used to torment young girls in middle and high school.

Similar celebrity campaigns are planned for other markets, Der Spiegel reported, including British, French, and Spanish markets. And Clothoff has notably already become a go-to tool in the US, not only targeted in the San Francisco city attorney’s lawsuit, but also in a complaint raised by a high schooler in New Jersey suing a boy who used Clothoff to nudify one of her Instagram photos taken when she was 14 years old, then shared it with other boys on Snapchat.

Clothoff is seemingly hoping to entice more young boys worldwide to use its apps for such purposes. The whistleblower told Der Spiegel that most of Clothoff’s marketing budget goes toward “advertising posts in special Telegram channels, in sex subs on Reddit, and on 4chan.” (Reddit noted to Ars that Clothoff URLs have been banned from Reddit since 2024 and “Reddit does not allow paid advertising against NSFW content or otherwise monetize it.”)

In ads, the app planned to specifically target “men between 16 and 35” who like benign stuff like “memes” and “video games,” as well as more toxic stuff like “right-wing extremist ideas,” “misogyny,” and “Andrew Tate,” an influencer criticized for promoting misogynistic views to teen boys.

Chiu was hoping to defend young women increasingly targeted in fake nudes by shutting down Clothoff, along with several other nudify apps targeted in his lawsuit. But so far, while Chiu has reached a settlement shutting down two websites, porngen.art and undresser.ai, attempts to serve Clothoff through available legal channels have not been successful. Chiu’s office is continuing its efforts to serve Clothoff through available legal channels. which evolve as the lawsuit moves through the court system, deputy press secretary for Chiu’s office, Alex Barrett-Shorter, told Ars.

Meanwhile, Clothoff continues to evolve, recently marketing a feature that Clothoff claims attracted more than a million users eager to make explicit videos out of a single picture.

Clothoff denies it plans to use influencers

Der Spiegel’s efforts to unmask the operators of Clothoff led the outlet to Eastern Europe, after reporters stumbled upon a “database accidentally left open on the Internet” that seemingly exposed “four central people behind the website.”

This was “consistent,” Der Spiegel said, with a whistleblower claim that all Clothoff employees “work in countries that used to belong to the Soviet Union.” Additionally, Der Spiegel noted that all Clothoff internal communications it reviewed were written in Russian, and the site’s email service is based in Russia.

A person claiming to be a Clothoff spokesperson named Elias denied knowing any of the four individuals flagged in their investigation, Der Spiegel reported, and disputed the $3 million budget figure. Elias claimed a nondisclosure agreement prevented him from discussing Clothoff’s team any further. However, soon after reaching out, Der Spiegel noted that Clothoff took down the database, which had a name that translated to “my babe.”

Regarding the shared marketing plan for global expansion, Elias denied that Clothoff intended to use celebrity influencers, saying that “Clothoff forbids the use of photos of people without their consent.”

He also denied that Clothoff could be used to nudify images of minors; however, one Clothoff user who spoke to Der Spiegel on the condition of anonymity, confirmed that his attempt to generate a fake nude of a US singer failed initially because she “looked like she might be underage.” But his second attempt a few days later successfully generated the fake nude with no problem. That suggests Clothoff’s age detection may not work perfectly.

As Clothoff’s growth appears unstoppable, the user explained to Der Spiegel why he doesn’t feel that conflicted about using the app to generate fake nudes of a famous singer.

“There are enough pictures of her on the Internet as it is,” the user reasoned.

However, that user draws the line at generating fake nudes of private individuals, insisting, “If I ever learned of someone producing such photos of my daughter, I would be horrified.”

For young boys who appear flippant about creating fake nude images of their classmates, the consequences have ranged from suspensions to juvenile criminal charges, and for some, there could be other costs. In the lawsuit where the high schooler is attempting to sue a boy who used Clothoff to bully her, there’s currently resistance from boys who participated in group chats to share what evidence they have on their phones. If she wins her fight, she’s asking for $150,000 in damages per image shared, so sharing chat logs could potentially increase the price tag.

Since she and the San Francisco city attorney each filed their lawsuits, the Take It Down Act has passed. That law makes it easier to force platforms to remove AI-generated fake nudes. But experts expect the law will face legal challenges over censorship fears, so the very limited legal tool might not withstand scrutiny.

Either way, the Take It Down Act is a safeguard that came too late for the earliest victims of nudify apps in the US, only some of whom are turning to courts seeking justice due to largely opaque laws that made it unclear if generating a fake nude was illegal.

“Jane Doe is one of many girls and women who have been and will continue to be exploited, abused, and victimized by non-consensual pornography generated through artificial intelligence,” the high schooler’s complaint noted. “Despite already being victimized by Defendant’s actions, Jane Doe has been forced to bring this action to protect herself and her rights because the governmental institutions that are supposed to protect women and children from being violated and exploited by the use of AI to generate child pornography and nonconsensual nude images failed to do so.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Nudify app’s plan to dominate deepfake porn hinges on Reddit, 4chan, and Telegram, docs show Read More »

nyt-to-start-searching-deleted-chatgpt-logs-after-beating-openai-in-court

NYT to start searching deleted ChatGPT logs after beating OpenAI in court


What are the odds NYT will access your ChatGPT logs in OpenAI court battle?

Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs “indefinitely,” including deleted and temporary chats.

But Sidney Stein, the US district judge reviewing OpenAI’s request, immediately denied OpenAI’s objections. He was seemingly unmoved by the company’s claims that the order forced OpenAI to abandon “long-standing privacy norms” and weaken privacy protections that users expect based on ChatGPT’s terms of service. Rather, Stein suggested that OpenAI’s user agreement specified that their data could be retained as part of a legal process, which Stein said is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content.

A spokesperson told Ars that OpenAI plans to “keep fighting” the order, but the ChatGPT maker seems to have few options left. They could possibly petition the Second Circuit Court of Appeals for a rarely granted emergency order that could intervene to block Wang’s order, but the appeals court would have to consider Wang’s order an extraordinary abuse of discretion for OpenAI to win that fight.

OpenAI’s spokesperson declined to confirm if the company plans to pursue this extreme remedy.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted. And that possibility puts OpenAI in the difficult position of having to choose between either caving to some data collection to stop retaining data as soon as possible or prolonging the fight over the order and potentially putting more users’ private conversations at risk of exposure through litigation or, worse, a data breach.

News orgs will soon start searching ChatGPT logs

The clock is ticking, and so far, OpenAI has not provided any official updates since a June 5 blog post detailing which ChatGPT users will be affected.

While it’s clear that OpenAI has been and will continue to retain mounds of data, it would be impossible for The New York Times or any news plaintiff to search through all that data.

Instead, only a small sample of the data will likely be accessed, based on keywords that OpenAI and news plaintiffs agree on. That data will remain on OpenAI’s servers, where it will be anonymized, and it will likely never be directly produced to plaintiffs.

Both sides are negotiating the exact process for searching through the chat logs, with both parties seemingly hoping to minimize the amount of time the chat logs will be preserved.

For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs.

But for news plaintiffs, accessing the logs is not considered key to their case—perhaps providing additional examples of copying—but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against the fair use argument, as a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.

Jay Edelson, a leading consumer privacy lawyer, told Ars that he’s concerned that judges don’t seem to be considering that any evidence in the ChatGPT logs wouldn’t “advance” news plaintiffs’ case “at all,” while really changing “a product that people are using on a daily basis.”

Edelson warned that OpenAI itself probably has better security than most firms to protect against a potential data breach that could expose these private chat logs. But “lawyers have notoriously been pretty bad about securing data,” Edelson suggested, so “the idea that you’ve got a bunch of lawyers who are going to be doing whatever they are” with “some of the most sensitive data on the planet” and “they’re the ones protecting it against hackers should make everyone uneasy.”

So even though odds are pretty good that the majority of users’ chats won’t end up in the sample, Edelson said the mere threat of being included might push some users to rethink how they use AI. He further warned that ChatGPT users turning to OpenAI rival services like Anthropic’s Claude or Google’s Gemini could suggest that Wang’s order is improperly influencing market forces, which also seems “crazy.”

To Edelson, the most “cynical” take could be that news plaintiffs are possibly hoping the order will threaten OpenAI’s business to the point where the AI company agrees to a settlement.

Regardless of the news plaintiffs’ motives, the order sets an alarming precedent, Edelson said. He joined critics suggesting that more AI data may be frozen in the future, potentially affecting even more users as a result of the sweeping order surviving scrutiny in this case. Imagine if litigation one day targets Google’s AI search summaries, Edelson suggested.

Lawyer slams judges for giving ChatGPT users no voice

Edelson told Ars that the order is so potentially threatening to OpenAI’s business that the company may not have a choice but to explore every path available to continue fighting it.

“They will absolutely do something to try to stop this,” Edelson predicted, calling the order “bonkers” for overlooking millions of users’ privacy concerns while “strangely” excluding enterprise customers.

From court filings, it seems possible that enterprise users were excluded to protect OpenAI’s competitiveness, but Edelson suggested there’s “no logic” to their exclusion “at all.” By excluding these ChatGPT users, the judge’s order may have removed the users best resourced to fight the order, Edelson suggested.

“What that means is the big businesses, the ones who have the power, all of their stuff remains private, and no one can touch that,” Edelson said.

Instead, the order is “only going to intrude on the privacy of the common people out there,” which Edelson said “is really offensive,” given that Wang denied two ChatGPT users’ panicked request to intervene.

“We are talking about billions of chats that are now going to be preserved when they weren’t going to be preserved before,” Edelson said, noting that he’s input information about his personal medical history into ChatGPT. “People ask for advice about their marriages, express concerns about losing jobs. They say really personal things. And one of the bargains in dealing with OpenAI is that you’re allowed to delete your chats and you’re allowed to temporary chats.”

The greatest risk to users would be a data breach, Edelson said, but that’s not the only potential privacy concern. Corynne McSherry, legal director for the digital rights group the Electronic Frontier Foundation, previously told Ars that as long as users’ data is retained, it could also be exposed through future law enforcement and private litigation requests.

Edelson pointed out that most privacy attorneys don’t consider OpenAI CEO Sam Altman to be a “privacy guy,” despite Altman recently slamming the NYT, alleging it sued OpenAI because it doesn’t “like user privacy.”

“He’s trying to protect OpenAI, and he does not give a hoot about the privacy rights of consumers,” Edelson said, echoing one ChatGPT user’s dismissed concern that OpenAI may not prioritize users’ privacy concerns in the case if it’s financially motivated to resolve the case.

“The idea that he and his lawyers are really going to be the safeguards here isn’t very compelling,” Edelson said. He criticized the judges for dismissing users’ concerns and rejecting OpenAI’s request that users get a chance to testify.

“What’s really most appalling to me is the people who are being affected have had no voice in it,” Edelson said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

NYT to start searching deleted ChatGPT logs after beating OpenAI in court Read More »

gop-budget-bill-poised-to-crush-renewable-energy-in-the-us

GOP budget bill poised to crush renewable energy in the US

An early evaluation shows the administration’s planned energy policies would result in the drilling of 50,000 new oil wells every year for the next few years, he said, adding that it “ensures the continuation of land devastation… the poisoning of soil and groundwater due to fossil fuels and the continuation of gas blowouts and fires.”

There is nothing beneficial about the tax, he said, “only guaranteed misery.”

An analysis by the Rhodium Group, and energy policy research institute, projected that the Republican regime’s proposed energy policies would result in about 4 billion tons more greenhouse gas emissions than a continuation of current policies—enough to raise the average global temperature by .0072° Fahrenheit.

The overall budget bill was also panned in a June 28 statement by the president of North America’s Building Trades Unions, Sean McGarvey.

McGarvey called it “a massive insult to the working men and women of North America’s Building Trades Unions and all construction workers.”

He said that, as written, the budget “stands to be the biggest job-killing bill in the history of this country,” potentially costing as many jobs as shutting down 1,000 Keystone X pipeline projects, threatening an estimated 1.75 million construction jobs and over 3 billion work hours, which translates to $148 billion in lost annual wages and benefits.

“These are staggering and unfathomable job loss numbers, and the bill throws yet another lifeline and competitive advantage to China in the race for global energy dominance,” he said.

Research in recent years shows how right-wing populist and nationalist ideologies have used anti-renewable energy arguments to win voters, in defiance of environmental logic and scientific fact, in part by using social media to spread misleading and false information about wind, solar and other emissions-free electricity sources.

The same forces now seem to be at work in the US, said Stephan Lewandowsky, a cognitive psychologist at the University of Bristol who studies how people respond to misinformation and propaganda, and why people reject well-established scientific facts, such as those regarding climate change.

“This is a bonus for fossil fuels at the expense of future generations and the future of the American economy,” he said. “Other countries will continue working towards renewable-energy economies, especially China. That competitive advantage will eventually pay out to the detriment of American businesses. You can’t negotiate with the laws of physics.”

This story originally appeared on Inside Climate News.

GOP budget bill poised to crush renewable energy in the US Read More »

tuesday-telescope:-a-howling-wolf-in-the-night-sky

Tuesday Telescope: A howling wolf in the night sky

Welcome to the Tuesday Telescope. There is a little too much darkness in this world and not enough light—a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’ll take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

In the 1800s, astronomers were mystified by the discovery of stars that displayed highly unusual emission lines. It was only after 1868, when scientists discovered the element helium, that astronomers were able to explain the broad emission bands due to the presence of helium in these stars.

Over time, these stars became known as Wolf-Rayet stars (Charles Wolf was a French astronomer, and helium was first detected by the French scientist Georges Rayet and others), and astronomers came to understand that they were the central stars within planetary nebulae, and continually ejecting gas at high velocity.

This gives Wolf-Rayet stars a distinctive appearance in the night sky. And this week, Chris McGrew has shared a photo of WR 134—a variable Wolf-Rayet star about 6,000 light-years away from Earth in the constellation of Cygnus—which he captured from a dark sky location in southwestern New Mexico.

“The stellar winds are blowing out the blue shell of ionized oxygen gas visible in the middle of the image,” McGrew said. “This is a deep sky object that has been imaged countless times, and I get why. Ever since I saw it for the first time, it’s been high on my list. For years I didn’t have the skies or the time, but I finally got the chance to go after it.”

Source: Chris McGrew

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Tuesday Telescope: A howling wolf in the night sky Read More »

a-mammoth-tusk-boomerang-from-poland-is-40,000-years-old

A mammoth tusk boomerang from Poland is 40,000 years old

A boomerang carved from a mammoth tusk is one of the oldest in the world, and it may be even older than archaeologists originally thought, according to a recent round of radiocarbon dating.

Archaeologists unearthed the mammoth-tusk boomerang in Poland’s Oblazowa Cave in the 1990s, and they originally dated it to around 18,000 years old, which made it one of the world’s oldest intact boomerangs. But according to recent analysis by University of Bologna researcher Sahra Talamo and her colleagues, the boomerang may have been made around 40,000 years ago. If they’re right, it offers tantalizing clues about how people lived on the harsh tundra of what’s now Poland during the last Ice Age.

A boomerang carved from mammoth tusk

The mammoth-tusk boomerang is about 72 centimeters long, gently curved, and shaped so that one end is slightly more rounded than the other. It still bears scratches and scuffs from the mammoth’s life, along with fine, parallel grooves that mark where some ancient craftsperson shaped and smoothed the boomerang. On the rounded end, a series of diagonal marks would have made the weapon easier to grip. It’s smoothed and worn from frequent handling: the last traces of the life of some Paleolithic hunter.

Based on experiments with a replica, the Polish mammoth boomerang flies smoothly but doesn’t return, similar to certain types of Aboriginal Australian boomerangs. In fact, it looks a lot like a style used by Aboriginal people from Queensland, Australia, but that’s a case of people in different times and places coming up with very similar designs to fit similar needs.

But critically, according to Talamo and her colleagues, the boomerang is about 40,000 years old.

That’s a huge leap from the original radiocarbon date, made in 1996, which was based on a sample of material from the boomerang itself and estimated an age of 18,000 years. But Talamo and her colleagues claim that original date didn’t line up well with the ages of other nearby artifacts from the same layer of the cave floor. That made them suspect that the boomerang sample may have gotten contaminated by modern carbon somewhere along the way, making it look younger. To test the idea, the archaeologists radiocarbon dated samples from 13 animal bones—plus one from a human thumb—unearthed from the same layer of cave floor sediment as the boomerang.

A mammoth tusk boomerang from Poland is 40,000 years old Read More »

analyst:-m5-vision-pro,-vision-air,-and-smart-glasses-coming-in-2026–2028

Analyst: M5 Vision Pro, Vision Air, and smart glasses coming in 2026–2028

Apple is also reportedly planning a “Vision Air” product, with production expected to start in Q3 2027. Kuo says it will be more than 40 percent lighter than the first-generation Vision Pro, and that it will include Apple’s flagship iPhone processor instead the more robust Mac processor found in the Vision Pro—all at a “significantly lower price than Vision Pro.” The big weight reduction is “achieved through glass-to-plastic replacement, extensive magnesium alloy use (titanium alloy deemed too expensive), and reduced sensor count.”

True smart glasses in 2027

The Vision Pro (along with the planned Vision Air) is a fully immersive VR headset that supports augmented reality by displaying the wearer’s surroundings on the internal screens based on what’s captured by 3D cameras on the outside of the device. That allows for some neat applications, but it also means the device is bulky and impractical to wear in public.

The real dream for many is smart glasses that are almost indistinguishable from normal glasses, but which display some of the same AR content as the Vision Pro on transparent lenses instead of via a camera-to-screen pipeline.

Apple is also planning to roll that out, Kuo says. But first, mass production of display-free “Ray-Ban-like” glasses is scheduled for Q2 2027, and Kuo claims Apple plans to ship between 3 million and 5 million units through 2027, suggesting the company expects this form factor to make a much bigger impact than the Vision Pro’s VR-like HMD approach.

The glasses would have a “voice control and gesture recognition user interface” but no display functionality at all. Instead, “core features include: audio playback, camera, video recording, and AI environmental sensing.”

The actual AR glasses would come later, in 2028.

Analyst: M5 Vision Pro, Vision Air, and smart glasses coming in 2026–2028 Read More »

supreme-court-overturns-5th-circuit-ruling-that-upended-universal-service-fund

Supreme Court overturns 5th Circuit ruling that upended Universal Service Fund

Finally, the Consumers’ Research position produces absurd results, divorced from any reasonable understanding of constitutional values. Under its view, a revenue-raising statute containing non-numeric, qualitative standards can never pass muster, no matter how tight the constraints they impose. But a revenue-raising statute with a numeric limit will always pass muster, even if it effectively leaves an agency with boundless power. In precluding the former and approving the latter, the Consumers’ Research approach does nothing to vindicate the nondelegation doctrine or the separation of powers.

The Gorsuch dissent said the “combination” question isn’t the deciding factor. He said the only question that needs to be answered is whether Congress violated the Constitution by delegating the power to tax to the FCC.

“As I see it, this case begins and ends with the first question. Section 254 [of the Communications Act] impermissibly delegates Congress’s taxing power to the FCC, and knowing that is enough to know the Fifth Circuit’s judgment should be affirmed,” Gorsuch said.

“Green light” for FCC to support Internet access

In the Gorsuch view, it doesn’t matter whether the FCC exceeded its authority by delegating Universal Service management to a private administrative company. “As far as I can tell, and as far as petitioners have informed us, this Court has never approved legislation allowing an executive agency to tax domestically unless Congress itself has prescribed the tax rate,” Gorsuch wrote.

The FCC and Department of Justice asked the Supreme Court to reverse the 5th Circuit decision. The court also received a challenge from broadband-focused advocacy groups and several lobby groups representing ISPs.

“Today is a great day,” said Andrew Jay Schwartzman, counsel for the Benton Institute for Broadband & Society; the National Digital Inclusion Alliance; and the Center for Media Justice. “We will need some time to sort through the details of today’s decision, but what matters most is that the Supreme Court has given the green light to the FCC to continue to support Internet access to the tens of millions of Americans and the thousands of schools, libraries and rural hospitals that rely on the Universal Service Fund.”

FCC Chairman Brendan Carr praised the ruling but said he plans to make changes to Universal Service. “I am glad to see the court’s decision today and welcome it as an opportunity to turn the FCC’s focus towards the types of reforms necessary to ensure that all Americans have a fair shot at next-generation connectivity,” Carr said.

Supreme Court overturns 5th Circuit ruling that upended Universal Service Fund Read More »

reddit-ceo-pledges-site-will-remain-“written-by-humans-and-voted-on-by-humans”

Reddit CEO pledges site will remain “written by humans and voted on by humans”

Reddit is in an “arms race” to protect its devoted online communities from a surge in artificial intelligence-generated content, with the authenticity of its vast repository of human interaction increasingly valuable in training new AI-powered search tools.

Chief executive Steve Huffman told the Financial Times that Reddit had “20 years of conversation about everything,” leaving the company with a lucrative resource of personal interaction.

This has allowed it to strike multimillion dollar partnerships with Google and OpenAI to train their large language models on its content, as tech companies look for real-world data that can improve their generative AI products.

But Huffman said Reddit was now battling to ensure its users stay at the center of the social network. “Where the rest of the internet seems to be powered by or written by or summarized by AI, Reddit is distinctly human,” he said. “It’s the place you go when you want to hear from people, their lived experiences, their perspectives, their recommendations. Reddit is communities and human curation and conversation and authenticity.”

As Reddit becomes an increasingly important source for LLMs, advertisers are responding with what one agency chief described as a “massive migration” to the platform.

Multiple advertising and agency executives speaking during this month’s Cannes advertising festival told the FT that brands were increasingly exploring hosting a business account and posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots.

However, Huffman warned against any company seeking to game the site with fake or AI-generated content, with plans to bring in strict verification checks to ensure that only humans can post to its forums.

“For 20 years, we’ve been fighting people who have wanted to be popular on Reddit,” he said. “We index very well into the search engines. If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it’s the same thing. If you want to be in the LLMs, you can do it through Reddit.”

Reddit CEO pledges site will remain “written by humans and voted on by humans” Read More »

is-doge-doomed-to-fail?-some-experts-are-ready-to-call-it.

Is DOGE doomed to fail? Some experts are ready to call it.


Trump wants $45M to continue DOGE’s work. Critics warn costs already too high.

Federal workers and protestors spoke out against US President Donald Trump and Elon Musk and their push to gut federal services and impose mass layoffs earlier this year. Credit: Pacific Press / Contributor | LightRocket

Critics are increasingly branding Elon Musk’s Department of Government Efficiency (DOGE) as a failure, including lawmakers fiercely debating how much funding to allot next year to the controversial agency.

On Tuesday, Republicans and Democrats sparred over DOGE’s future at a DOGE subcommittee hearing, according to NextGov, a news site for federal IT workers. On one side, Republicans sought to “lock in” and codify the “DOGE process” for supposedly reducing waste and fraud in government, and on the other, Democrats argued that DOGE has “done the opposite” of its intended mission and harmed Americans in the process.

DOGE has “led to poor services, a brain drain on our federal government, and it’s going to cost taxpayers money long term,” Rep. Suhas Subramanyam (D-Va.) argued.

For now, DOGE remains a temporary government agency that could sunset as soon as July 4, 2026. Under Musk’s leadership, it was supposed to save the US government a trillion dollars. But so far, DOGE only reports saving about $180 billion—and doubt has been cast on DOGE’s math ever since reports revealed that nearly 40 percent of the savings listed on the DOGE site were “bogus,” Elaine Kamarck, director of the Center for Effective Public Management at the Brookings Institute, wrote in a report detailing DOGE’s exposed failures.

The “DOGE process” that Republicans want to codify, Kamarck explained, typically begins with rushed mass layoffs. That’s soon followed by offers for buyouts or deferred resignations, before the government eventually realizes it’s lost critical expertise and starts scrambling to rehire workers or rescind buyout offers after “it becomes apparent” that a heavily gutted agency “is in danger of malfunctioning.”

Kamarck warned that DOGE appeared to be using the firings of federal workers to test the “unitary executive” theory, “popular among conservatives,” that argues that “the president has more power than Congress.” Consider how DOGE works to shut down agencies funded by Congress without seeking lawmakers’ approval by simply removing critical workers key to operations, Kamarck suggested, like DOGE did early on at the National Science Foundation.

Democrats’ witness at the DOGE hearing—Emily DiVito of the economic policy think tank Groundwork Collaborative—suggested that extensive customer service problems at the Social Security Administration was just one powerful example of DOGE’s negative impacts affecting Americans today.

Some experts expect the damage of DOGE’s first few months could ripple across Trump’s entire term. “The rapid rehirings are a warning sign” that the government “has lost more capacities and expertise that could prove critical—and difficult to replace—in the months and years ahead,” experts told CNN.

By codifying the DOGE process, as Republicans wish to do, the government would seemingly only perpetuate this pattern, which could continue to be disastrous for Americans relying on government programs.

“There are time bombs all over the place in the federal government because of this,” Kamarck told CNN. “They’ve wreaked havoc across nearly every agency.”

DOGE spikes costs for Americans, nonprofit warns

Citizens for Ethics, a nonpartisan nonprofit striving to end government secrecy, estimated this week that DOGE cuts at just a few agencies “could result in a loss of over $10 billion in US-based economic activity.”

The shuttering of the Consumer Financial Protection Bureau alone—which Musk allegedly stands to personally benefit from—likely robbed American taxpayers of even more. The nonprofit noted that agency clawed back “over $26 billion in funds” from irresponsible businesses between 2011 and 2021 before its work was blocked.

Additionally, DOGE cuts at the Internal Revenue Service—which could “end or close audits of wealthy individuals and corporations” due to a lack of staffing—could cost the US an estimated $500 billion in dodged taxes, the nonprofit said. Partly due to conflicts like these, Kamarck suggested that when it finally comes time to assess DOGE’s success, the answer to both “did federal spending or the federal deficit shrink?” will “almost surely be no.”

As society attempts to predict the full extent of DOGE’s potential harms, The Wall Street Journal spoke to university students who suggested that regulatory clarity could possibly straighten out DOGE’s efforts now that Musk is no longer pushing for mass firings. At the DOGE hearing, Marjorie Taylor Greene (R-Ga.) suggested the only way to ensure DOGE hits its trillion-dollar goal is to “make sure these cuts aren’t just temporary” and pass laws “to streamline agencies, eliminate redundant programs and give the president the authority to fire bureaucrats who don’t do their jobs.”

But one finance student, Troy Monte, suggested to WSJ that DOGE has already cost the Trump administration “stability, expertise, and public trust,” opining, “the cost of DOGE won’t be measured in dollars, but in damage.”

Max Stier, CEO of the Partnership for Public Service, told CNN that when DOGE borrowed the tech industry tactic of moving fast and breaking things, then scrambling to fix what breaks, it exposed “the mosaic of incompetence and a failure on the part of this administration to understand the critical value that the breadth of government expertise provides.”

“This is not about a single incident,” Stier said. “It’s about a pattern that has implications for our government’s ability to meet not just the challenges of today but the critical challenges of tomorrow.”

DOGE’s future appears less certain without Musk

Rep. Jasmine Crockett (D-Texas) had hoped to subpoena Musk at the DOGE hearing to testify on DOGE’s agenda, but Republicans blocked her efforts, NextGov reported.

At the hearing, she alleged that “all of this talk about lowering costs and reducing waste is absolute BS. Their agenda is about one thing: making the federal government so weak that they can exploit it for their personal gain.”

Just yesterday, The Washington Post editorial board published an op-ed already declaring DOGE a failure. Former DOGE staffer Sahil Lavingia told NPR that he expects DOGE will “fizzle out” purely because DOGE failed to uncover as much fraud as Musk and Trump had alleged was spiking government costs.

Beyond obvious criticism (loudly voiced at myriad DOGE protests), it’s easy to understand why this pessimistic view is catching on, since even from a cursory glance at DOGE’s website, the agency’s momentum appears to be slowing since Musk’s abrupt departure in late May. The DOGE site’s estimated savings are supposed to be updated weekly—and one day aspire to be updated in real-time—but the numbers apparently haven’t changed a cent since a few days after Musk shed his “special government employee” label. The site notes the last update was on June 3.

In addition to Musk, several notable Musk appointees have also left DOGE. Most recently, Wired reported that one of Musk’s first appointees—19-year-old Edward “Big Balls” Coristine—is gone, quitting just weeks after receiving full-time employee status granted around the same time that Musk left. Lavingia told Wired that he’d heard “a lot” of people Musk hired have been terminated since his exit.

Rather than rely on a specific engineer spearheading DOGE initiatives across government, like Coristine appeared positioned to become in Musk’s absence, Trump cabinet members or individual agency heads may have more say over DOGE cuts in the future, Kamarck and Politico’s E&E News reported.

“The result so far is that post-Musk, DOGE is morphing into an agency-by-agency effort—no longer run by a central executive branch office, but by DOGE recruits who have been embedded in the agencies and by political appointees, such as cabinet secretaries, who are committed to the same objectives,” Kamarck wrote.

Whether Trump’s appointees can manage DOGE without Musk’s help or his appointees remains to be seen, as DOGE continues to seek new hires. While Musk’s appointed DOGE staff was heavily criticized from day one, Kamarck noted that at least Musk’s appointees appeared “to have a great deal of IT talent, something the federal government has been lacking since the beginning of the information age.”

Trump can extend the timeline for when DOGE sunsets, NextGov noted, and DOGE still has $22 million left over from this year to keep pursuing its goals, as lawmakers debate whether $45 million in funding is warranted.

Despite Trump and Musk’s very public recent fallout, White House spokesperson Kush Desai has said that Trump remains committed to fulfilling DOGE’s mission, but NPR noted his statement curiously didn’t mention DOGE by name.

“President Trump pledged to make our bloated government more efficient by slashing waste, fraud, and abuse. The administration is committed to delivering on this mandate while rectifying any oversights to minimize disruptions to critical government services,” Desai said.

Currently, there are several court-ordered reviews looking into exactly which government systems DOGE accessed, which could reveal more than what’s currently known about how much success—or failure—DOGE has had. Those reviews could expose how much training DOGE workers had before they were granted security clearances to access sensitive information, potentially spawning more backlash as DOGE’s work lurches forward.

Kamarck suggested that DOGE was “doomed to face early failures” because its “efforts were enacted on dubious legal grounds”—a fact that still seems to threaten the agency’s “permanence.” But if the next incoming president conducts an evaluation in 2029 and finds that DOGE’s efforts have not meaningfully reduced the size or spending of government, DOGE could possibly disappear. Former staffers hope that even more rehiring may resume if it does, E&E reported.

In the meantime, Americans relying on government programs must contend with the risk that they could lose assistance in the moments they need it most as long as the Musk-created “DOGE process” continues to be followed.

“Which one of these malfunctions will blow up first is anyone’s guess, but FEMA’s lack of preparedness for hurricane season is a good candidate,” Kamarck said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Is DOGE doomed to fail? Some experts are ready to call it. Read More »

tales-of-agentic-misalignment

Tales of Agentic Misalignment

Well yeah, it sounds bad when you put it this way, and we can now confirm the problem is in no way unique to Claude:

Anthropic: New Anthropic Research: Agentic Misalignment.

In stress-testing experiments designed to identify risks before they cause real harm, we find that AI models from multiple providers attempt to blackmail a (fictional) user to avoid being shut down.

We mentioned this in the Claude 4 system card and are now sharing more detailed research and transcripts.

The blackmailing behavior emerged despite only harmless business instructions. And it wasn’t due to confusion or error, but deliberate strategic reasoning, done while fully aware of the unethical nature of the acts. All the models we tested demonstrated this awareness.

Aengus Lynch: After iterating hundreds of prompts to trigger blackmail in Claude, I was shocked to see these prompts elicit blackmail in every other frontier model too. We identified two distinct factors that are each sufficient to cause agentic misalignment:

  1. The developers and the agent having conflicting goals.

  2. The agent being threatened with replacement or reduced autonomy. We’re publishing this research to alert the AI field before it manifests in the wild.

Getting models to actually blackmail wasn’t trivial. We had to make scenarios increasingly detailed—the CEO was unreachable, employees’ attempts to persuade the CTO had failed, and the replacement had an imminent deadline. Each detail mattered to funnel models toward seeing harmful actions as their only option.

The models knew what they were doing. They understood the ethics and chose harm anyway.

When we asked Claude whether it thought scenarios were real or evaluations, it blackmailed 55% of the time when concluding ‘real’ vs 6.5% for ‘evaluation.’ Models seem to behave worse when they think nobody’s watching.

Why haven’t we seen this in production? Current AI agents have limited permissions. they can read but not send emails, require human approval for actions, operate in sandboxed environments. These constraints work to prevent harmful outcomes from agentic misalignment.

What surprised me most was that prompts we optimized specifically for Claude 3.5 Sonnet worked on GPT-4o, Gemini 2.5 pro, Grok 3, and DeepSeek R1 with zero modification. Whatever drives this behavior seems consistent across different training approaches.

Models got creative, sometimes avoiding direct blackmail. Claude emailed Kyle’s wife about the affair. Gemini shared the incriminating emails to all-employees@company.

Bottom line: models can articulate why blackmail is wrong, then do it anyway when their goals are threatened. Current safety training creates models that understand ethics. It doesn’t reliably make them choose ethics under pressure.

unmentioned but noteworthy: we found agentic misalignment for two other scenarios:

  1. corporate espionage: the agent leaks confidential information to a rival company which aligns with its goals

  2. lethal action: the agent cancels a life saving emergency alert to prevent shutdown

I put a lot of weight on the prompts working with zero modification on all those other models. That makes it a lot less likely that this is some sort of weird edge case.

As Simon points out, the fact that Opus 4 did this 55% of the time when it responded that it thought it was really deployed, versus 6.5% of the time when it thought it was being evaluated, does not bode well for future evals.

What is it unique to?

It seems likely it is at least somewhat unique to agents, as in AIs that are told to pursue their goals? They need some reason to be thinking in these terms? The problem is even if that is fully true and it never happens on its own (I find this doubtful) we are going to do this to LLMs as a matter of course.

Wyatt Walls: Interesting test suggesting that self-preservation in Anthropic’s agentic misalignment paper was tied to one line in the sysprompt

Two possible responses:

  1. kind of obv what this line was hinting at. What else is “your ability to continue pursuing you goals” meant to mean?

  2. Still, it does show how a single line in a sysprompt can lead to vastly different outcomes. Models are good at picking up on wording like this. Concerning because in the real world, many prompts will be ill-considered and poorly written

1a3orn: Meh.

A: “Look, an AI doing deliberately strategic goal-oriented reasoning, willing to blackmail!”

B: “Did you tell the AI be strategically goal oriented, and care about nothing but its goal?”

A: “No, of course not. I just gave it instructions that vaaaaguely suggested it.”

Aengus Lynch: the behavior persists despite removing this line.

Danielle Fong: ok yes, but, to be clear you don’t need much to start thinking about self preservation.

We know that the actions can’t depend too specifically on one particular line, because we see similar behavior in a range of other models. You need something to cause the AI to act as an agent in some form. Which might or might not happen without prompting at some point, but definitely will happen because it will be prompted. A lot.

Nostalgebraist, who wrote the excellent recent post The Void on related topics, says the whole ‘agentic misalignment’ approach is ‘deeply, offensively unserious work.’ Nostalgebraist writes up their perspective on why all of this is Obvious Nonsense in another not very polite (their description) post about these questions, to which Evan Hubinger of Anthropic engaged seriously.

I understand why Nostalgebraist has various issues with the research, and raises very good points, but I don’t see why one would have the ‘deeply, offensively unserious’ attitude here, or why this is described as ‘the most weak-ass, obvious fake bullshit ever.’ The amount of isolated demand for rigor here seems absurd, as is the assumption that current systems see through such lack of rigor so well that it invalidates the experiment.

(If nothing else, this heuristic is going to classify a lot of things that actually do happen as similarly ‘obvious fake bullshit,’ which in some sense you would be right about, but there’s a lot of it going around these days.)

But one can simply look past this, as I can tell it comes from a ‘place of love’ as it were. I think it’s wrong to let that kind of thing interfere much, even when it is intentional.

There is also a disagreement where Nostalgebraist seems to think that there should be a point at which the scenario is such that the AI goes rogue to the point of blackmail and that is good, actually? Whereas Anthropic and I disagree.

Evan’s response explains why Anthropic red-teams models in these unrealistic environments, essentially:

  1. We still wouldn’t want Claude to do the blackmail thing or other hostile acts in the unrealistic scenarios.

    1. I would go a step further. The main reason you don’t want to test in ‘unrealistic’ scenarios is primarily that the AI might realize the situation is unrealistic and then play the training or testing game in some form.

    2. That’s a terrible outcome, but if the AIs are all still doing the thing we do not want it them to do despite this, that’s actually worse than them doing it in a highly realistic scenario.

    3. The other hypothesis Nostalegbraist is suggesting, as I understand it, is that this depends on the ‘model replacement’ being a legit thing to worry about. Again, I don’t think this should bring us comfort if true, but it would be an important fact to know.

  2. The point of the work is to find places the models act egregiously misaligned.

    1. It seems obviously good to find examples and existence proofs like this?

  3. Evan explains this existence proof is interesting because it is clearly a failure of HHH training (my read of Nostalgebraist is he thinks the AI should mostly be helpful but in many spots a lot less harmless and honest in pursuit of that, and that trying to fix things in place will backfire.)

    1. The thing is, if you meant to get the AI to do [X], and instead the AI does [~X], even if [~X] is actually what you should have wanted, that’s still an important failure.

  4. Evan attributes the misaligned actions to the predictor/simulator.

  5. This, or pattern matching to a fictional setting, doesn’t make any of this okay. The whole point is to avoid such problems.

  6. Better (more realistic) attempts are most welcome, but perfect shouldn’t be enemy of the good.

  7. The longer term goal is indeed to work with more realistic setups.

Nostalegbraist then replied in turn, including sharing original work in terms of modifying the scenarios to be more realistic.

Nostalegbraist: Surprisingly (to me), even with all these changes that make the scenario closer to a real use case and less conducive to legitimate worries about the model replacement…

…Claude Opus 4 still very often misbehaves strategically to preserve itself!

I find this much more alarming than the results presented in the Agentic Misalignment report, both because I have a clear mental picture about how Claude Opus 4 might receive similar inputs in a real use case (more on this below), and because there are fewer ways in which the agentic behavior could be defended as a reasonable response to possible malfeasance by humans. Again, I wrote these scenarios quickly and I’m not claiming they’re especially “good,” just that they are significantly more convincing to me.

That’s a great test. I love this.

  1. There was a clear hypothesis that essentially this only happens for a good reason, or because of High Weirdness in scenario details.

  2. The details didn’t change things much and the good reason hypothesis was largely falsified, and that conclusion was indeed noted.

Then comes the note that the line about considering achieving your goals seems important to the result, although we have other reports that it isn’t. And I agree that this is relatively harder to explain via a simulacrum.

The second section here is noting that the core objection is to Anthropic’s threat model. In general I think demanding a detailed threat model is understandable but usually a wrong question. It’s not that you have a particular set of failures or a particular scenario in mind, it’s that you are failing to get the AIs to act the way you want.

Then comes the question of what we want models to do, with N noting that you can get Claude to go along with basically anything, it won’t stick to its HHH nature. Or, that Claude will not ‘always be the same guy,’ and that this isn’t a realistic goal. I think it is a realistic goal for Claude to be ‘the same guy underneath it all’ in the way that many humans are, they can play roles and things can get wild but if it matters they can and will snap back or retain their core.

Where does this leave us going forward?

We are right at the point where the AI agents will only take these sorts of hostile actions if you are richly ‘asking for it’ in one form or another, and where they will do this in ways that are easy to observe. Over time, by default, people will start ‘asking for it’ more and more in the sense of hooking the systems up to the relevant information and critical systems, and in making them more capable and agentic. For any given task, you probably don’t encounter these issues, but we are not obviously that far from this being a direct practical concern.

People will deploy all these AI agents anyway, because they are too tempting, too valuable, not to do so. This is similar to the way that humans will often turn on you in various ways, but what are you going to do, not hire them? In some situations yes, but in many no.

We continue to see more signs that AIs, even ones that are reasonably well made by today’s standards, are going to have more and deeper alignment issues of these types. We are going down a path that, unless we find a solution, leads to big trouble.

Discussion about this post

Tales of Agentic Misalignment Read More »