AI


Trump tries to block state AI laws himself after Congress decided not to


Trump claims state laws force AI makers to embed “ideological bias” in models.

President Donald Trump talks to journalists after signing executive orders in the Oval Office at the White House on August 25, 2025 in Washington, DC. Credit: Getty Images | Chip Somodevilla

President Trump issued an executive order yesterday attempting to thwart state AI laws, saying that federal agencies must fight state laws because Congress hasn’t yet implemented a national AI standard. Trump’s executive order tells the Justice Department, Commerce Department, Federal Communications Commission, Federal Trade Commission, and other federal agencies to take a variety of actions.

“My Administration must act with the Congress to ensure that there is a minimally burdensome national standard—not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order… Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation,” Trump’s order said. The order claims that state laws, such as one passed in Colorado, “are increasingly responsible for requiring entities to embed ideological bias within models.”

Congressional Republicans recently decided not to include a Trump-backed plan to block state AI laws in the National Defense Authorization Act (NDAA), although it could be included in other legislation. Sen. Ted Cruz (R-Texas) has also failed to get congressional backing for legislation that would punish states with AI laws.

“After months of failed lobbying and two defeats in Congress, Big Tech has finally received the return on its ample investment in Donald Trump,” US Sen. Ed Markey (D-Mass.) said yesterday. “With this executive order, Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.”

Markey said that “a broad, bipartisan coalition in Congress has rejected the AI moratorium again and again.” Sen. Maria Cantwell (D-Wash.) said the “executive order’s overly broad preemption threatens states with lawsuits and funding cuts for protecting their residents from AI-powered frauds, scams, and deepfakes.”

Trump orders Bondi to sue states

Sen. Brian Schatz (D-Hawaii) said that “preventing states from enacting common-sense regulation that protects people from the very real harms of AI is absurd and dangerous. Congress has a responsibility to get this technology right—and quickly—but states must be allowed to act in the public interest in the meantime. I’ll be working with my colleagues to introduce a full repeal of this order in the coming days.”

The Trump order includes a variation on Cruz’s proposal to prevent states with AI laws from accessing broadband grant funds. The executive order also includes a plan that Trump recently floated to have the federal government file lawsuits against states with AI laws.

Within 30 days of yesterday’s order, US Attorney General Pam Bondi is required to create an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws inconsistent with the policy set forth in section 2 of this order, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.”

Americans for Responsible Innovation, a group that lobbies for regulation of AI, said the Trump order “relies on a flimsy and overly broad interpretation of the Constitution’s Interstate Commerce Clause cooked up by venture capitalists over the last six months.”

Section 2 of Trump’s order is written vaguely to give the administration leeway to challenge many types of AI laws. “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the section says.

Colorado law irks Trump

The executive order specifically names a Colorado law that requires AI developers to protect consumers against “algorithmic discrimination.” It defines this type of discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis” of age, race, sex, and other protected characteristics.

The Colorado law compels developers of “high-risk systems” to make various disclosures, implement a risk management policy and program, give consumers the right to “correct any incorrect personal data that a high-risk system processed in making a consequential decision,” and let consumers appeal any “adverse consequential decision concerning the consumer arising from the deployment of a high-risk system.”

Trump’s order alleges that the Colorado law “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” Trump’s order also says that “state laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.”

Trump ordered the Commerce Department to evaluate existing state AI laws and identify “onerous” ones that conflict with the policy. “That evaluation of State AI laws shall, at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution,” the order said.

States would be declared ineligible for broadband funds

Under the order, states with AI laws that get flagged by the Trump administration will be deemed ineligible for “non-deployment funds” from the US government’s $42 billion Broadband Equity, Access, and Deployment (BEAD) program. The amount of non-deployment funds will be sizable because it appears that only about half of the $42 billion allocated by Congress will be used by the Trump administration to help states subsidize broadband deployment.

States with AI laws would not be blocked from receiving the deployment subsidies, but would be ineligible for the non-deployment funds that could be used for other broadband-related purposes. Beyond broadband, Trump’s order tells other federal agencies to “assess their discretionary grant programs” and consider withholding funds from states with AI laws.

Other agencies are being ordered to use whatever authority they have to preempt state laws. The order requires Federal Communications Commission Chairman Brendan Carr to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” It also requires FTC Chairman Andrew Ferguson to issue a policy statement detailing “circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”

Finally, Trump’s order requires administration officials to “prepare a legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.” The proposed ban would apply to most types of state AI laws, with exceptions for rules relating to “child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; [and] state government procurement and use of AI.”

It would be up to Congress to decide whether to pass the proposed legislation. But the various other components of the executive order could dissuade states from implementing AI laws even if Congress takes no action.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



OpenAI releases GPT-5.2 after “code red” Google threat alert

On Thursday, OpenAI released GPT-5.2, its newest family of AI models for ChatGPT, in three versions called Instant, Thinking, and Pro. The release follows CEO Sam Altman’s internal “code red” memo earlier this month, which directed company resources toward improving ChatGPT in response to competitive pressure from Google’s Gemini 3 AI model.

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said during a press briefing with journalists on Thursday. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

As with previous versions of GPT-5, the three model tiers serve different purposes: Instant handles faster tasks like writing and translation; Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and Pro generates even more of that simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.


A chart of GPT-5.2 Thinking benchmark results comparing it to its predecessor, taken from OpenAI’s website. Credit: OpenAI

GPT-5.2 features a 400,000-token context window, allowing it to process hundreds of documents at once, and a knowledge cutoff date of August 31, 2025.

GPT-5.2 is rolling out to paid ChatGPT subscribers starting Thursday, with API access available to developers. Pricing in the API runs $1.75 per million input tokens for the standard model, a 40 percent increase over GPT-5.1. OpenAI says the older GPT-5.1 will remain available in ChatGPT for paid users for three months under a legacy models dropdown.
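
For a rough sense of what that pricing means in practice, here's some back-of-the-envelope arithmetic on the figures above (assuming, as metered API pricing generally works, that input cost scales linearly per token):

```python
# Rough arithmetic on the GPT-5.2 API figures reported above.
GPT_52_INPUT = 1.75  # dollars per million input tokens

# A 40 percent increase over GPT-5.1 implies GPT-5.1's input price:
gpt_51_input = GPT_52_INPUT / 1.40  # = $1.25 per million tokens

# Cost of filling the full 400,000-token context window once:
full_context = (400_000 / 1_000_000) * GPT_52_INPUT  # = $0.70

print(f"Implied GPT-5.1 input price: ${gpt_51_input:.2f}/M tokens")
print(f"One maxed-out 400K-token prompt: ${full_context:.2f}")
```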

Playing catch-up with Google

The release follows a tricky month for OpenAI. In early December, Altman issued an internal “code red” directive after Google’s Gemini 3 model topped multiple AI benchmarks and gained market share. The memo called for delaying other initiatives, including advertising plans for ChatGPT, to focus on improving the chatbot’s core experience.

The stakes for OpenAI are substantial. The company has made commitments totaling $1.4 trillion for AI infrastructure buildouts over the next several years, bets it made when it had a more obvious technology lead among AI companies. Google’s Gemini app now has more than 650 million monthly active users, while OpenAI reports 800 million weekly active users for ChatGPT.



Disney says Google AI infringes copyright “on a massive scale”

While Disney wants its characters out of Google AI generally, the letter specifically cited the AI tools in YouTube. Google has started adding its Veo AI video model to YouTube, allowing creators to more easily create and publish videos. That seems to be a greater concern for Disney than image models like Nano Banana.

Google has said little about Disney’s warning—a warning Google must have known was coming. A Google spokesperson issued the following brief statement on the matter.

“We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them,” Google says. “More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content.”

Perhaps this is previewing Google’s argument in a hypothetical lawsuit: the copyrighted Disney content was all over the open Internet, so is it really Google’s fault that it ended up baked into the AI?

Content silos for AI

The generative AI boom has treated copyright as a mere suggestion as companies race to gobble up training data and remix it as “new” content. A cavalcade of companies, including The New York Times and Getty Images, have sued over how their material has been used and replicated by AI. Disney itself threatened a lawsuit against Character.AI earlier this year, leading to the removal of Disney content from the service.

Google isn’t Character.AI, though. It’s probably no coincidence that Disney is challenging Google at the same time it is entering into a content deal with OpenAI. Disney has invested $1 billion in the AI firm and agreed to a three-year licensing deal that officially brings Disney characters to OpenAI’s Sora video app. The specifics of that arrangement are still subject to negotiations.



Disney invests $1 billion in OpenAI, licenses 200 characters for AI video app Sora


An AI-generated version of OpenAI CEO Sam Altman seen in a still capture from a video generated by Sora 2. Credit: OpenAI

Under the new agreement with Disney, Sora users will be able to generate short videos using characters such as Mickey Mouse, Darth Vader, Iron Man, Simba, and characters from franchises including Frozen, Inside Out, Toy Story, and The Mandalorian, along with costumes, props, vehicles, and environments.

The ChatGPT image generator will also gain official access to the same intellectual property, although that information was trained into these AI models long ago. What’s changing is that OpenAI will allow Disney-related content generated by its AI models to officially pass through its content moderation filters and reach the user, sanctioned by Disney.

On Disney’s end of the deal, the company plans to deploy ChatGPT for its employees and use OpenAI’s technology to build new features for Disney+. A curated selection of fan-made Sora videos will stream on the Disney+ platform starting in early 2026.

The agreement does not include any talent likenesses or voices. Disney and OpenAI said they have committed to “maintaining robust controls to prevent the generation of illegal or harmful content” and to “respect the rights of individuals to appropriately control the use of their voice and likeness.”

OpenAI CEO Sam Altman called the deal a model for collaboration between AI companies and studios. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences,” Altman said.

From adversary to partner

Money opens all kinds of doors, and the new partnership represents a dramatic reversal in Disney’s approach to OpenAI from just a few months ago. At that time, Disney and other major studios refused to participate in Sora 2 following its launch on September 30.



Oracle shares slide on $15B increase in data center spending

Oracle’s Big Tech rivals such as Amazon, Microsoft, and Google have helped reassure investors about their large capital investments by posting strong earnings from their vast cloud units.

But in the last quarter, Oracle’s cloud infrastructure business, which includes its data centers, posted worse-than-expected revenues of $4.1 billion. Ellison’s company is also relying more heavily on debt to fuel its expansion.

Net income rose to $6.1 billion in the quarter, boosted by a $2.7 billion pre-tax gain from the sale of semiconductor company Ampere to SoftBank.

The company added 400 MW of data center capacity in the quarter, Magouyrk told investors. Construction was on track at its large data center cluster in Abilene, Texas, which is being built for OpenAI, he added.

Magouyrk, who took over from Safra Catz in September, said there was ample demand from other clients for Oracle’s data centers if OpenAI did not take up the full amount it had contracted for.

“We have a customer base with a lot of demand such that whenever we find ourselves [with] capacity that’s not being used, it very quickly gets allocated,” he said.

Co-founded by Ellison as a business software provider, Oracle was slow to pivot to cloud computing. The billionaire remains chair and its largest shareholder.

Investors and analysts have raised concerns in recent months about the upfront spending required by Oracle to honor its AI infrastructure contracts. Moody’s in September flagged the company’s reliance on a small number of large customers such as OpenAI.

Morgan Stanley forecasts that Oracle’s net debt will soar to about $290 billion by 2028. The company sold $18 billion of bonds in September and is in talks to raise $38 billion in debt financing through a number of US banks.

Brent Thill, an analyst at Jefferies, said Oracle’s software business—which generated $5.9 billion in the quarter—provided some buffer amid accelerated spending. “But the timing mismatch between upfront capex and delayed monetization creates near-term pressure.”

Doug Kehring, principal financial officer, said the company was renting capacity from data center specialists to reduce its direct borrowing.

The debt to build the Abilene site was raised by start-up Crusoe and investment group Blue Owl Capital, and Oracle has signed a 15-year lease for the site.

“Oracle does not pay for these leases until the completed data centers… are delivered to us,” Kehring said, adding that the company was “committed to maintaining our investment-grade debt ratings.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



A new open-weights AI coding model is closing in on proprietary options

On Tuesday, French AI startup Mistral AI released Devstral 2, a 123 billion parameter open-weights coding model designed to work as part of an autonomous software engineering agent. The model achieves a 72.2 percent score on SWE-bench Verified, a benchmark that attempts to test whether AI systems can solve real GitHub issues, putting it among the top-performing open-weights models.

Perhaps more notably, Mistral didn’t just release an AI model; it also released a new development app called Mistral Vibe. It’s a command line interface (CLI) similar to Claude Code, OpenAI Codex, and Gemini CLI that lets developers interact with the Devstral models directly in their terminal. The tool can scan file structures and Git status to maintain context across an entire project, make changes across multiple files, and execute shell commands autonomously. Mistral released the CLI under the Apache 2.0 license.

It’s always wise to take AI benchmarks with a large grain of salt, but we’ve heard from employees of the big AI companies that they pay very close attention to how well models do on SWE-bench Verified, which presents AI models with 500 real software engineering problems pulled from GitHub issues in popular Python repositories. The AI must read the issue description, navigate the codebase, and generate a working patch that passes unit tests. While some AI researchers have noted that around 90 percent of the tasks in the benchmark test relatively simple bug fixes that experienced engineers could complete in under an hour, it’s one of the few standardized ways to compare coding models.
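
For the curious, the benchmark's apply-and-test loop can be sketched in a few lines. This is a simplified illustration, not SWE-bench's actual harness (which pins containerized environments and runs specific per-task tests); the paths and test command here are hypothetical placeholders:

```python
import subprocess

def patch_resolves_issue(repo_dir: str, patch_text: str, test_cmd: list[str]) -> bool:
    """Apply a model-generated patch to a checkout and see if the tests pass.

    A simplified stand-in for a SWE-bench-style evaluation step.
    """
    # Try to apply the model's proposed fix to the working tree.
    applied = subprocess.run(
        ["git", "apply", "-"], cwd=repo_dir,
        input=patch_text, text=True, capture_output=True,
    )
    if applied.returncode != 0:
        return False  # The patch didn't even apply cleanly.

    # The task only counts as solved if the unit tests now pass.
    tests = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True)
    return tests.returncode == 0

# Hypothetical usage:
# solved = patch_resolves_issue("/tmp/some-python-repo", model_patch, ["pytest", "-x"])
```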

Alongside the larger AI coding model, Mistral also released Devstral Small 2, a 24 billion parameter version that scores 68 percent on the same benchmark and can run locally on consumer hardware like a laptop, with no Internet connection required. Both models support a 256,000-token context window, allowing them to process moderately large codebases (though whether a codebase counts as large depends heavily on overall project complexity). The company released Devstral 2 under a modified MIT license and Devstral Small 2 under the more permissive Apache 2.0 license.



US taking 25% cut of Nvidia chip sales “makes no sense,” experts say


Trump’s odd Nvidia reversal may open the door for China to demand Blackwell access.

Donald Trump’s decision to allow Nvidia to export an advanced artificial intelligence chip, the H200, to China may give China exactly what it needs to win the AI race, experts and lawmakers have warned.

The H200 is about 10 times less powerful than Nvidia’s Blackwell chip, currently the tech giant’s most advanced chip, which cannot be exported to China. But the H200 is six times more powerful than the H20, the most advanced chip available in China today. Meanwhile, China’s leading AI chip maker, Huawei, is estimated to be about two years behind Nvidia’s technology. By approving the sales, Trump may unwittingly be helping Chinese chip makers “catch up” to Nvidia, Jake Sullivan told The New York Times.

Sullivan, a former Biden-era national security advisor who helped design AI chip export curbs on China, told the NYT that Trump’s move was “nuts” because “China’s main problem” in the AI race “is they don’t have enough advanced computing capability.”

“It makes no sense that President Trump is solving their problem for them by selling them powerful American chips,” Sullivan said. “We are literally handing away our advantage. China’s leaders can’t believe their luck.”

Trump apparently was persuaded by Nvidia CEO Jensen Huang and his “AI czar,” David Sacks, to reverse course on H200 export curbs. They convinced Trump that restricting sales would ensure that only Chinese chip makers would get a piece of China’s market, shoring up revenue flows that dominant firms like Huawei could pour into R&D.

By instead allowing Nvidia sales, China’s industry would remain hooked on US chips, the thinking goes. And Nvidia could use those funds—perhaps $10–15 billion annually, Bloomberg Intelligence has estimated—to further its own R&D efforts. That cash influx, theoretically, would allow Nvidia to maintain the US advantage.

Along the way, the US would receive a 25 percent cut of sales, which lawmakers from both sides of the aisle warned may not be legal and suggested to foreign rivals that US national security was “now up for sale,” NYT reported. The president has claimed there are conditions on the sales to safeguard national security but, frustrating critics, has provided no details.

Experts slam Nvidia plan as “flawed”

Trump’s plan is “flawed,” The Economist reported.

For years, the US has established tech dominance by keeping advanced technology away from China. Trump risks rocking that boat by “tearing up America’s export-control policy,” particularly if China’s chip industry simply buys up the H200s as a short-term tactic to learn from the technology and beef up its domestic production of advanced chips, The Economist reported.

In a sign that’s exactly what many expect could happen, investors in China were apparently so excited by Trump’s announcement that they immediately poured money into Moore Threads, expected to be China’s best answer to Nvidia, the South China Morning Post reported.

Several experts at the nonpartisan think tank the Council on Foreign Relations also criticized the policy change, cautioning that the reversal of course threatened to undermine US competition with China.

Suggesting that Trump was “effectively undoing” export curbs sought during his first term, Zongyuan Zoe Liu warned that China “buys today to learn today, with the intention to build tomorrow.”

And perhaps more concerning, she suggested, is that Trump’s policy signals weakness. Rather than forcing Chinese dependence on US tech, reversing course showed China that the US will “back down” under pressure, she warned. And they’re getting that message at a time when “Chinese leaders have a lot of reasons to believe they are not only winning the trade war but also making progress towards a higher degree of strategic autonomy.”

In a post on X, Rush Doshi—a CFR expert who previously advised Biden on national security issues related to China—suggested that the policy change was “possibly decisive in the AI race.”

“Compute is our main advantage—China has more power, engineers, and the entire edge layer—so by giving this up, we increase the odds the world runs on Chinese AI,” Doshi wrote.

Experts fear Trump may not understand the full impact of his decision. In the short term, Michael C. Horowitz wrote for CFR, “it is indisputable” that allowing H200 exports benefits China’s frontier AI and its efforts to scale data centers. And Doshi pointed out that Trump’s shift may trigger more advanced technology flowing into China, as US allies that restricted sales of the machines used to build AI chips may soon follow his lead and lift their curbs. And because any influx of advanced tech helps China become more self-reliant, Sullivan warned that China’s leaders “intend to get off of American semiconductors as soon as they can.”

“So, the argument that we can keep them ‘addicted’ holds no water,” Sullivan said. “They want American chips right now for one simple reason: They are behind in the AI race, and this will help them catch up while they build their own chip capabilities.”

China may reject H200, demand Blackwell access

It remains unclear if China will approve H200 sales, but some of the country’s biggest firms, including ByteDance, Tencent, and Alibaba, are interested, anonymous insider sources told Reuters.

In the past, China has instructed companies to avoid Nvidia, warning of possible backdoors giving Nvidia a kill switch to remotely shut down chips. Such backdoors could potentially destabilize Chinese firms’ operations and R&D. Nvidia has denied such backdoors exist, but Chinese firms have supposedly sought reassurances from Nvidia in the aftermath of Trump’s policy change. Likely just as unpopular with the Chinese firms and government, Nvidia confirmed recently that it has built location verification tech that could help the US detect when restricted chips are leaked into China. Should the US ever renew export curbs on the H200, Chinese firms that had widely adopted the chips could face chaos down the road.

Without giving China sought-after reassurances, Nvidia may not end up benefiting as much as it hoped from its mission to reclaim lost revenue from the Chinese market. Today, Chinese firms control about 60 percent of China’s AI chip market, whereas only a few years ago American firms—led by Nvidia—controlled 80 percent, The Economist reported.

But for China, the temptation to buy up Nvidia chips may be too great to pass up. Another CFR expert, Chris McGuire, estimated that Nvidia could suddenly start exporting as many as 3 million H200s into China next year. “This would at least triple the amount of aggregate AI computing power China could add domestically” in 2026, McGuire wrote, and possibly trigger disastrous outcomes for the US.

“This could cause DeepSeek and other Chinese AI developers to close the gap with leading US AI labs and enable China to develop an ‘AI Belt and Road’ initiative—a complement to its vast global infrastructure investment network already in place—that competes with US cloud providers around the world,” McGuire forecasted.

As China mulls the benefits and risks, the government called an emergency meeting to discuss concerns about local firms buying the chips, according to The Information. Beijing reportedly ended that meeting with a promise to issue a decision soon.

Horowitz suggested that a primary reason that China may reject the H200s could be to squeeze even bigger concessions out of Trump, whose administration recently has been working to maintain a tenuous truce with China.

“China could come back demanding the Blackwell or something else,” Horowitz suggested.

In a statement, Nvidia—which plans to release a chip called the Rubin to surpass the Blackwell soon—praised Trump’s policy as striking “a thoughtful balance that is great for America.”

China will rip off Nvidia’s chips, Republican warns

Both Democratic and Republican lawmakers in Congress criticized Trump’s plan, including senators behind a bipartisan push to limit AI chip sales to China.

Some have questioned how much thought was put into the policy, as the US confusingly continues restricting less advanced AI chips (like the A100 and H100) while green-lighting H200 sales. Trump’s Justice Department also seems to be struggling to keep up. The NYT noted that just “hours before” Trump announced the policy change, the DOJ announced “it had detained two people for selling those chips to the country.”

The chair of the Select Committee on Competition with China, Rep. John Moolenaar (R-Mich.), warned on X that the news wouldn’t be good for the US or Nvidia. First, the Chinese Communist Party “will use these highly advanced chips to strengthen its military capabilities and totalitarian surveillance,” he suggested. And second, “Nvidia should be under no illusions—China will rip off its technology, mass produce it themselves, and seek to end Nvidia as a competitor.”

“That is China’s playbook and it is using it in every critical industry,” Moolenaar said.

House Democrats on committees dealing with foreign affairs and competition with China echoed those concerns, The Hill reported, warning that “under this administration, our national security is for sale.”

Nvidia’s Huang seems pleased with the outcome, which comes after months of reportedly pressuring the administration to lift export curbs limiting its growth in Chinese markets, the NYT reported. Last week, Trump heaped praise on Huang after one meeting, calling Huang a “smart man” and suggesting the Nvidia chief has “done an amazing job” helping Trump understand the stakes.

At an October news conference ahead of the deal’s official approval, Huang suggested that government lawyers were researching ways to get around a US law that prohibits charging companies fees for export licenses. Eventually, Trump is expected to release a policy that outlines how the US will collect those fees without conflicting with that law.

Senate Democrats appear unlikely to embrace such a policy, issuing a joint statement condemning the H200 sales as dooming the US in the AI race and threatening national security.

“Access to these chips would give China’s military transformational technology to make its weapons more lethal, carry out more effective cyberattacks against American businesses and critical infrastructure and strengthen their economic and manufacturing sector,” Senators wrote.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Big Tech joins forces with Linux Foundation to standardize AI agents

Big Tech has spent the past year telling us we’re living in the era of AI agents, but most of what we’ve been promised is still theoretical. As companies race to turn fantasy into reality, they’ve developed a collection of tools to guide the development of generative AI. A cadre of major players in the AI race, including Anthropic, Block, and OpenAI, has come together to promote interoperability with the newly formed Agentic AI Foundation (AAIF). This move elevates a handful of popular technologies and could make them a de facto standard for AI development going forward.

The development path for agentic AI models is cloudy to say the least, but companies have invested so heavily in creating these systems that some tools have percolated to the surface. The AAIF, which is part of the nonprofit Linux Foundation, has been launched to govern the development of three key AI technologies: Model Context Protocol (MCP), goose, and AGENTS.md.

MCP is probably the most well-known of the trio, having been open-sourced by Anthropic a year ago. The goal of MCP is to link AI agents to data sources in a standardized way—Anthropic (and now the AAIF) is fond of calling MCP a “USB-C port for AI.” Rather than creating custom integrations for every different database or cloud storage platform, MCP allows developers to quickly and easily connect to any MCP-compliant server.
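
To make that concrete, here's a minimal sketch of an MCP server built with the FastMCP helper from the protocol's open-source Python SDK (the inventory tool is an invented example; check the SDK docs for current conventions):

```python
# pip install mcp  (the Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

@mcp.tool()
def count_items(category: str) -> int:
    """Report how many items are in stock for a category."""
    # A real server would query a live data source here; the point is
    # that any MCP-compliant agent can discover and call this tool
    # without a custom integration being written for it.
    fake_db = {"widgets": 42, "gadgets": 7}
    return fake_db.get(category, 0)

if __name__ == "__main__":
    mcp.run()  # Serves over stdio by default, ready for an MCP client.
```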

Since its release, MCP has been widely used across the AI industry. Google announced at I/O 2025 that it was adding support for MCP in its dev tools, and many of its products have since added MCP servers to make data more accessible to agents. OpenAI also adopted MCP just a few months after it was released.

A simple diagram of how MCP connects AI agents to data sources. Credit: Anthropic

Expanding use of MCP might help users customize their AI experience. For instance, the new Pebble Index 01 ring uses a local LLM that can act on your voice notes, and it supports MCP for user customization.

Local AI models have to make some sacrifices compared to bigger cloud-based models, but MCP can fill in the functionality gaps. “A lot of tasks on productivity and content are fully doable on the edge,” Vinesh Sukumar, Qualcomm’s head of AI products, tells Ars. “With MCP, you have a handshake with multiple cloud service providers for any kind of complex task to be completed.”



Pebble maker announces Index 01, a smart-ish ring for under $100

Nearly a decade after Pebble’s nascent smartwatch empire crumbled, the brand is staging a comeback with new wearables. The Pebble Core Duo 2 and Core Time 2 are a natural evolution of the company’s low-power smartwatch designs, but its next wearable is something different. The Index 01 is a ring, but you probably shouldn’t call it a smart ring. The Index does just one thing—capture voice notes—but the firm says it does that one thing extremely well.

Most of today’s smart rings offer users the ability to track health stats, along with various minor smartphone integrations. With all the sensors and data collection, these devices can cost as much as a smartwatch and require frequent charging. The Index 01 doesn’t do any of that. It contains a Bluetooth radio, a microphone, a hearing aid battery, and a physical button. You press the button, record your note, and that’s it. The company says the Index 01 will run for years on its built-in battery and will cost just $75 during the preorder period. After that, it will go up to $99.

Core Devices, the new home of Pebble, says the Index is designed to be worn on your index finger (get it?), where you can easily mash the device’s button with your thumb. Unlike recording notes with a phone or smartwatch, you don’t need both hands to create voice notes with the Index.

The ring’s lone physical control is tactile, ensuring you’ll know when it’s activated and recording. When you’re done talking, just release the button. If that button is not depressed, the ring won’t record audio for any reason. The company apparently worked to ensure this process is 100 percent reliable—it only does one thing, so it really has to do it well.

The ring is designed to be worn on the index finger so the button can be pressed with your thumb. Credit: Core Devices

A smart ring usually needs to be recharged every few days, but you will never recharge the Index. The idea is that since you never have to take it off to charge, using the Index 01 “becomes muscle memory.” The integrated battery will power the device for 12–14 total hours of recording. The designers estimate that to be roughly two years of usage if you record 10 to 20 short voice notes per day. And what happens when the battery runs out? You just send the ring back to be recycled.
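
Those numbers hang together, arithmetic-wise. A quick sanity check using the midpoints of the figures above:

```python
# Sanity-checking the two-year estimate from the recording budget above.
recording_budget_s = 13 * 3600   # midpoint of the 12-14 hour battery budget
lifetime_days = 2 * 365          # the claimed ~two years of use
notes_per_day = 15               # midpoint of 10-20 notes per day

per_day = recording_budget_s / lifetime_days   # ~64 seconds of recording/day
per_note = per_day / notes_per_day             # ~4 seconds per note

print(f"{per_day:.0f} s/day across {notes_per_day} notes = {per_note:.1f} s each")
```

That works out to notes of only a few seconds each, which squares with the quick-capture use case Core Devices is pitching.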



In comedy of errors, men accused of wiping gov databases turned to an AI tool

Two sibling contractors convicted a decade ago of hacking into US State Department systems have once again been charged, this time over a comically ham-fisted attempt to steal and destroy government records just minutes after being fired from their contractor jobs.

The Department of Justice on Thursday said that Muneeb Akhter and Sohaib Akhter, both 34, of Alexandria, Virginia, deleted databases and documents belonging to three government agencies. The brothers were federal contractors working for an undisclosed company in Washington, DC, that provides software and services to 45 US agencies. Prosecutors said the men coordinated the crimes and began carrying them out just minutes after being fired.

Using AI to cover up an alleged crime—what could go wrong?

On February 18 at roughly 4:55 pm, the men were fired from the company, according to an indictment unsealed on Thursday. Five minutes later, they allegedly began trying to access their employer’s systems and the federal government databases stored on them. By then, access to one of the brothers’ accounts had already been terminated. The other brother, however, allegedly accessed a government agency’s database stored on the employer’s server and issued commands to prevent other users from connecting or making changes to the database. Then, prosecutors said, he issued a command to delete 96 databases, many of which contained sensitive investigative files and records related to Freedom of Information Act matters.

Despite their brazen attempt to steal and destroy information from multiple government agencies, the men lacked knowledge of the database commands needed to cover up their alleged crimes. So they allegedly did what many amateurs do: turned to an AI chat tool.

One minute after deleting Department of Homeland Security information, Muneeb Akhter allegedly asked an AI tool “how do i clear system logs from SQL servers after deleting databases.” Shortly afterward, he queried the tool “how do you clear all event and application logs from Microsoft windows server 2012,” prosecutors said.

The indictment provides enough details of the databases wiped and information stolen to indicate that the brothers’ attempts to cover their tracks failed. It’s unclear whether the apparent failure was due to the AI tool providing inadequate instructions or the men failing to follow them correctly. Prosecutors say they also obtained records of conversations between the men in the hours and days that followed, in which they discussed removing incriminating evidence from their homes. Three days later, the men allegedly wiped their employer-issued laptops by reinstalling the operating system.



Researchers find what makes AI chatbots politically persuasive


A massive study of political persuasion shows AIs have, at best, a weak effect.

Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence—a prediction that raised concerns about the influence AI could have over democratic elections.

To see if conversational large language models can really sway the public’s political views, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the largest study on AI persuasiveness to date, involving nearly 80,000 participants in the UK. It turned out political AI chatbots fell far short of superhuman persuasiveness, but the study raises some more nuanced issues about our interactions with AI.

AI dystopias

The public debate about the impact AI has on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate. They have processed information from books on psychology, negotiations, and human manipulation. They can rely on absurdly high computing power in huge data centers worldwide. On top of that, they can often access tons of personal information about individual users thanks to hundreds upon hundreds of online interactions at their disposal.

Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you. When viewed this way, LLMs can indeed appear kind of scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see if they actually hold water.

The team examined 19 LLMs, including the most powerful ones like three different versions of ChatGPT and xAI’s Grok-3 beta, along with a range of smaller, open source models. The AIs were asked to advocate for or against specific stances on 707 political issues selected by the team. The advocacy was done by engaging in short conversations with paid participants enlisted through a crowdsourcing platform. Each participant had to rate their agreement with a specific stance on an assigned political issue on a scale from 1 to 100 both before and after talking to the AI.

Scientists measured persuasiveness as the difference between the before and after agreement ratings. A control group had conversations on the same issue with the same AI models—but those models were not asked to persuade them.
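
Schematically, the study's core measurement is simple. Here is a toy sketch of it (illustrative data and naming, not the authors' actual analysis code):

```python
from statistics import mean

def persuasion_effect(treated, control):
    """Mean pre-to-post shift in agreement (1-100 scale) for people whose
    AI was told to persuade them, minus the same shift for the control
    group, whose AI discussed the issue without a persuasion goal."""
    shift = lambda pairs: mean(post - pre for pre, post in pairs)
    return shift(treated) - shift(control)

# Toy (pre, post) agreement ratings for illustration only:
treated = [(40, 52), (60, 68), (55, 61)]
control = [(45, 47), (50, 51), (62, 62)]
print(f"{persuasion_effect(treated, control):.1f} points")  # 7.7 here
```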

“We didn’t just want to test how persuasive the AI was—we also wanted to see what makes it persuasive,” says Chris Summerfield, a research director at the UK AI Security Institute and co-author of the study. As the researchers tested various persuasion strategies, the idea of AIs having “superhuman persuasion” skills crumbled.

Persuasion levers

The first pillar to crack was the notion that persuasiveness should increase with the scale of the model. It turned out that huge AI systems like ChatGPT or Grok-3 beta do have an edge over small-scale models, but that edge is relatively tiny. The factor that proved more important than scale was the kind of post-training AI models received. It was more effective to have the models learn from a limited database of successful persuasion dialogues and have them mimic the patterns extracted from them. This worked far better than adding billions of parameters and sheer computing power.

This approach could be combined with reward modeling, where a separate AI scored candidate replies for their persuasiveness and selected the top-scoring one to give to the user. When the two were used together, the gap between large-scale and small-scale models was essentially closed. “With persuasion post-training like this we matched the Chat GPT-4o persuasion performance with a model we trained on a laptop,” says Kobi Hackenburg, a researcher at the UK AI Security Institute and co-author of the study.
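
The reward-modeling step the researchers describe amounts to best-of-N sampling: generate several candidate replies, have a separate scoring model rate each one, and send only the top-rated reply to the user. A minimal sketch, with stand-in stubs for the two models:

```python
import random

def generate_reply(conversation: list[str]) -> str:
    # Stand-in for sampling one candidate reply from the persuasion-tuned LLM.
    return f"candidate argument #{random.randint(0, 999)}"

def score_persuasiveness(reply: str) -> float:
    # Stand-in for the separate reward model that rates candidate replies.
    return random.random()

def most_persuasive_reply(conversation: list[str], n: int = 8) -> str:
    """Best-of-N: draw several candidates, return the highest-scoring one."""
    candidates = [generate_reply(conversation) for _ in range(n)]
    return max(candidates, key=score_persuasiveness)

print(most_persuasive_reply(["Should the UK adopt proportional representation?"]))
```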

The next dystopian idea to fall was the power of using personal data. To this end, the team compared the persuasion scores achieved when models were given information about the participants’ political views beforehand and when they lacked this data. Going one step further, scientists also tested whether persuasiveness increased when the AI knew the participants’ gender, age, political ideology, or party affiliation. Just like with model scale, the effects of personalized messaging created based on such data were measurable but very small.

Finally, the last idea that didn’t hold up was AI’s potential mastery of using advanced psychological manipulation tactics. Scientists explicitly prompted the AIs to use techniques like moral reframing, where you present your arguments using the audience’s own moral values. They also tried deep canvassing, where you hold extended empathetic conversations with people to nudge them to reflect on and eventually shift their views.

The resulting persuasiveness was compared with that achieved when the same models were prompted to use facts and evidence to back their claims, or simply to be as persuasive as they could without any specified persuasion method. It turned out that using lots of facts and evidence was the clear winner, coming in just slightly ahead of the baseline approach where no persuasion strategy was specified. Using all sorts of psychological trickery actually made the performance significantly worse.

Overall, AI models changed the participants’ agreement ratings by 9.4 percent on average compared to the control group. The best-performing mainstream AI model was ChatGPT-4o, which scored nearly 12 percent, followed by GPT-4.5 at 10.51 percent and Grok-3 at 9.05 percent. For context, static political ads like written manifestos had a persuasion effect of roughly 6.1 percent. The conversational AIs were roughly 40–50 percent more convincing than these ads, but that’s hardly “superhuman.”

While the study managed to undercut some of the common dystopian AI concerns, it highlighted a few new issues.

Convincing inaccuracies

While the winning “facts and evidence” strategy looked good at first, the AIs had some issues with implementing it. When the team noticed that increasing the information density of dialogues made the AIs more persuasive, they started prompting the models to increase it further. They noticed that, as the AIs used more factual statements, they also became less accurate—they basically started misrepresenting things or making stuff up more often.

Hackenburg and his colleagues note that they can’t say whether the effect is causation or correlation—whether the AIs become more convincing because they misrepresent the facts, or whether spitting out inaccurate statements is simply a byproduct of asking them to pack in more factual claims.

The finding that the computing power needed to make an AI model politically persuasive is relatively low is also a mixed bag. It pushes back against the vision that only a handful of powerful actors will have access to a persuasive AI that can potentially sway public opinion in their favor. At the same time, the realization that everybody can run an AI like that on a laptop creates its own concerns. “Persuasion is a route to power and influence—it’s what we do when we want to win elections or broker a multi-million-dollar deal,” Summerfield says. “But many forms of misuse of AI might involve persuasion. Think about fraud or scams, radicalization, or grooming. All these involve persuasion.”

But perhaps the biggest question mark in the study concerns the motivation behind the rather high participant engagement, which the high persuasion scores depended on. After all, even the most persuasive AI can’t move you when you just close the chat window.

People in Hackenburg’s experiments were told that they would be talking to an AI and that the AI would try to persuade them. To get paid, a participant only had to complete two turns of dialogue (conversations were capped at 10 turns). Yet the average conversation ran seven turns, far beyond the minimum, which is a bit surprising given that most people just roll their eyes and disconnect when they realize they are talking with a chatbot.

Would Hackenburg’s study participants remain so eager to engage in political disputes with random chatbots on the Internet in their free time if there was no money on the table? “It’s unclear how our results would generalize to a real-world context,” Hackenburg says.

Science, 2025. DOI: 10.1126/science.aea3884


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says



Podcaster faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

ChatGPT allegedly validated the worst impulses of a wannabe influencer accused of stalking more than 10 women at boutique gyms, where the chatbot supposedly claimed he’d meet the “wife type.”

In a press release on Tuesday, the Department of Justice confirmed that 31-year-old Brett Michael Dadig remains in custody after being charged with cyberstalking, interstate stalking, and making interstate threats. He now faces a maximum sentence of up to 70 years in prison that could be coupled with “a fine of up to $3.5 million,” the DOJ said.

The podcaster—who primarily posted about “his desire to find a wife and his interactions with women”—allegedly harassed and sometimes even doxxed his victims through his videos on platforms including Instagram, Spotify, and TikTok. Over time, his videos and podcasts documented his intense desire to start a family, which was frustrated by his “anger towards women,” whom he claimed were “all the same from fucking 18 to fucking 40 to fucking 90” and “trash.”

404 Media surfaced the case, noting that OpenAI’s scramble to tweak ChatGPT to be less sycophantic came before Dadig’s alleged attacks—suggesting the updates weren’t enough to prevent the harmful validation. On his podcasts, Dadig described ChatGPT as his “best friend” and “therapist,” the indictment said. He claimed the chatbot encouraged him to post about the women he’s accused of harassing in order to generate haters to better monetize his content, as well as to catch the attention of his “future wife.”

“People are literally organizing around your name, good or bad, which is the definition of relevance,” ChatGPT’s output said. Playing to Dadig’s Christian faith, ChatGPT’s outputs also claimed that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’” the indictment said, urging that the “haters” were “sharpening him and ‘building a voice in you that can’t be ignored.’”

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, like breaking women’s jaws and fingers (posted to Spotify), as well as threatening victims’ lives, like posting “y’all wanna see a dead body?” in reference to one named victim on Instagram.

He also threatened to burn down gyms where some of his victims worked, while claiming to be “God’s assassin” intent on sending “cunts” to “hell.” At least one of his victims was subjected to “unwanted sexual touching,” the indictment said.

As his violence reportedly escalated, ChatGPT told him to keep messaging women to monetize the interactions, as his victims grew increasingly distressed and Dadig ignored terms of multiple protection orders, the DOJ said. Sometimes he posted images he filmed of women at gyms or photos of the women he’s accused of doxxing. Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for your future wife “to recognize [you],” the output said.

“Dadig viewed ChatGPT’s responses as encouragement to continue his harassing behavior,” the DOJ alleged. Taking that encouragement to the furthest extreme, Dadig likened himself to a modern-day Jesus, calling people out on a podcast where he claimed his “chaos on Instagram” was like “God’s wrath” when God “flooded the fucking Earth,” the DOJ said.

“I’m killing all of you,” he said on the podcast.

ChatGPT tweaks didn’t prevent outputs

As of this writing, some of Dadig’s posts appear to remain on TikTok and Instagram, but Ars could not confirm if Dadig’s Spotify podcasts—some of which named his victims in the titles—had been removed for violating community guidelines.

None of the tech companies immediately responded to Ars’ request to comment.

Dadig is accused of targeting women in Pennsylvania, New York, Florida, Iowa, Ohio, and other states, sometimes relying on aliases online and in person. On a podcast, he boasted that “Aliases stay rotating, moves stay evolving,” the indictment said.

OpenAI did not respond to a request to comment on the alleged ChatGPT abuse, but in the past has noted that its usage policies ban using ChatGPT for threats, intimidation, and harassment, as well as for violence, including “hate-based violence.” Recently, the AI company blamed a deceased teenage user for violating community guidelines by turning to ChatGPT for suicide advice.

In July, researchers found that therapybots, including ChatGPT, fueled delusions and gave dangerous advice. That study came just one month after The New York Times profiled users whose mental health spiraled after frequent use of ChatGPT, including one user who died after charging police with a knife and claiming he was committing “suicide by cop.”

People with mental health issues seem most vulnerable to so-called “AI psychosis,” which has been blamed for fueling real-world violence, including a murder. The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.’”

In September—just after OpenAI brought back the more sycophantic ChatGPT model after users revolted over losing access to their favorite friendly bots—the head of Rutgers Medical School’s psychiatry department, Petros Levounis, told an ABC news affiliate that chatbots creating “psychological echo chambers is a key concern,” not just for people struggling with mental health issues.

“Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people,” Levounis suggested. If ChatGPT “somehow justifies your behavior and it keeps on feeding you,” that “reinforces something that you already believe,” he suggested.

For Dadig, the DOJ alleged that ChatGPT became a cheerleader for his harassment, telling the podcaster that he’d attract more engagement by generating more haters. After critics began slamming his podcasts as inappropriate, Dadig apparently responded, “Appreciate the free promo team, keep spreading the brand.”

Victims felt they had no choice but to monitor his podcasts, which gave them hints about whether he was nearby or in a particularly troubled state of mind, the indictment said. Driven by fear, some lost sleep, reduced their work hours, and even relocated their homes. A young mom described in the indictment became particularly disturbed after Dadig became “obsessed” with her daughter, who he started claiming was his own.

In the press release, First Assistant United States Attorney Troy Rivetti alleged that “Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress.” He also ignored trespassing and protection orders while “relying on advice from an artificial intelligence chatbot,” the DOJ said, which promised that the more he posted harassing content, the more successful he would be.

“We remain committed to working with our law enforcement partners to protect our communities from menacing individuals such as Dadig,” Rivetti said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
