

Block lays off 40% of workforce as it goes all-in on AI tools

The staff reduction at Block comes as anxiety rises about AI leading to job losses across vast parts of the economy.

Investors and economists are grappling with an influx of US economic data and corporate announcements in an effort to gauge the impact the technology could be having on the labor market. The latest non-farm payrolls figures were better than expected, suggesting the domestic jobs market was stabilizing, but several big US companies have committed to cutting staff.

In late January, Amazon, UPS, Dow, Nike, Home Depot, and others announced they would cut a combined 52,000 jobs.

Block’s chief executive, Jack Dorsey, said the cuts at the company, which owns the payment processor Square, came despite what he described as a “strong” financial performance in 2025.

Block has made a contrarian bet on bitcoin at a time when many payment companies favored stablecoins: cash-like digital tokens that became regulated in the US last year.

Block’s strategy was spearheaded by Dorsey, a “bitcoin maximalist” who has said he believes the digital currency will eventually eclipse the dollar.

The company offers payment services in bitcoin for merchants and consumers—and suffered a loss on its own bitcoin holdings as the price of the cryptocurrency dropped 23 percent this year.

In contrast, payment companies that made a bet on stablecoins experienced a boost. Stripe earlier this week said its stablecoin transaction volumes increased fourfold last year.

In its fiscal fourth quarter, Block reported revenue of almost $6.3 billion, in line with Wall Street expectations. Its earnings tumbled to 19 cents a share, owing to a $234 million hit on its bitcoin holdings.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Perplexity announces “Computer,” an AI agent that assigns work to other AI agents

Given the right permissions and the proper plugins, OpenClaw could create, modify, or delete a user’s files and otherwise change things far beyond what most users could achieve with existing models and MCP (Model Context Protocol). Users gave the tool context about its goals, and how to work toward them independently, through files like USER.MD, MEMORY.MD, SOUL.MD, or HEARTBEAT.MD, sometimes letting it run for long stretches without direct user input.
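For illustration only, a context file of this kind might look something like the sketch below. The file names come from the conventions described above; the contents here are entirely hypothetical:

```markdown
# MEMORY.MD — persistent context the agent reads on each run (hypothetical example)

## Goals
- Triage the inbox each morning and draft replies for anything marked urgent.
- Summarize progress into this file so future runs pick up where the last one stopped.

## Constraints
- Work only inside ~/Agent/workspace.
- Never delete or send anything without writing a log entry first.
- Ask before touching files outside the workspace directory.
```

Because the agent treats such files as standing instructions, anything written into them, including text injected by a malicious plugin, can steer its behavior, which is part of why the security concerns below arise.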

On one hand, that meant it could do impressive things—the first glimpses of the sort of knowledge work that AI boosters have been saying agentic AI would ultimately do. On the other hand, it was prone to serious errors and vulnerable to prompt injection and other security problems, in part due to a Wild West of unverified plugins.

The same toolkit that was used to create a viral Reddit clone populated by AI agents was also, at least in one case, responsible for deleting a user’s emails against her will.

Stay in your lane

Perplexity Computer aims to address those concerns in a few ways. First, its core process occurs in the cloud, not on the user’s local machine. Second, it lives within a walled garden with a curated list of integrations, in contrast to OpenClaw’s unregulated frontier.

This is, of course, an imperfect analogy, but you could say that if OpenClaw were the open web of AI agent tools, then Computer is Apple’s App Store. While you’re more limited in what you can do, you’re not trusting packages from unverified sources with access to your system.

There could still be risks, though. For one thing, LLMs make mistakes, and those mistakes could be consequential if, for example, Computer is working with data you haven’t backed up elsewhere or you aren’t verifying its outputs.

Perplexity Computer aims to button up, refine, and contain the wild power of the viral OpenClaw agentic AI tool—competing with the likes of Claude Cowork—by assigning each subtask to the model best suited to it.

Perplexity surely won’t be the last established AI player to try this sort of thing. After all, OpenAI hired OpenClaw’s developer, with CEO Sam Altman suggesting that some of what we saw in OpenClaw will be essential to the company’s product vision going forward.



xAI spent $7M building wall that barely muffles annoying power plant noise


“Temu sound wall” not enough to quell fury over xAI’s power plant.

For miles around xAI’s makeshift power plant in Southaven, Mississippi, neighbors have endured months of constant roaring, erupting pops, and bursts of high-pitched whining from 27 temporary gas turbines installed without consulting the community.

In a report on Thursday, NBC News interviewed residents fighting to shut down xAI’s turbines. They confirmed that xAI operates the turbines day and night, allegedly tormenting residents in order to power xAI founder Elon Musk’s unbridled AI ambitions.

Eventually, 41 permanent gas turbines—which supposedly won’t be as noisy—will be installed, if xAI can secure the permitting. In the meantime, xAI has erected a $7 million “sound barrier” that’s supposed to mitigate some of the noise.

However, residents told NBC News that the wall that xAI built does little to quiet the din.

Taylor Logsdon, who lives near the power plant, said that neighbors nearby jokingly call it the “Temu sound wall,” referencing the Chinese e-commerce site known for peddling cheap rather than high-quality goods. For Logsdon, the wall has not helped to calm her dogs, which have been unsettled by sudden booms and squeals that videos show can frequently be heard amid the turbines’ continual jet engine-like hum. Some residents are just as unsettled as the dogs, describing the noises from the plant as “scary.”

A nonprofit environmental advocacy group, the Safe and Sound Coalition, has been collecting evidence, hoping to raise awareness in the community to block xAI from obtaining permits for its permanent turbines. The group’s website links to videos documenting the noise, noise analysis reports, and public records showing how challenging it’s been to track xAI’s communications with public officials.

Safe and Sound Coalition video documents constant roars after a “loud bang” signaled “something popped off.”

For example, public records requests to the city of Southaven seeking information on xAI exemptions to noise ordinances or communications about the sound wall turned up nothing. A director overseeing the city’s planning and development claimed that the office was not “involved with the noise barrier wall” and could provide no details. Similarly, a permit clerk for the city’s building department confirmed there were no documents to share.

Asked for comment, a spokesperson for the coalition told Ars that the “absence of documentation raises transparency concerns.”

“When decisions with community impact are made without accessible records, it creates an accountability gap and limits the public’s ability to understand how those decisions were evaluated or authorized,” the spokesperson said.

An IT worker who co-founded the coalition, Jason Haley, told NBC News that xAI’s wall showed that the city could have required the company to do more to prevent noise pollution before upsetting community members.

“If you knew the noise was going to be an issue, put in a sound wall first,” Haley said. “Do some other stuff first before you torture us. That’s not that hard of an ask.”

xAI did not immediately respond to Ars’ request to comment. According to NBC News, the company has yet to make public a noise analysis that it conducted.

xAI’s turbines spark other concerns

xAI has maintained that it follows the law as it rushes at breakneck speed to build infrastructure to support its AI innovations. In Southaven, xAI was approved to operate the temporary gas turbines at the power plant for 12 months, without any additional permitting required.

Now it’s seeking permits for the permanent turbines, which residents worry could be nearly as loud, while possibly introducing more smog into an area that’s mostly homes, churches, parks, and schools, the Safe and Sound Coalition’s website said.

Pollutants could increase risks of asthma, heart attacks, stroke, and cancer, warned a community flyer the coalition distributed, urging attendance at a public meeting where residents could finally air their complaints (a meeting that NBC News’ report thoroughly documented). The flyer also suggested that the city’s main drinking water supply could be affected, and perhaps tainted, if the power plant’s wastewater contains toxic chemicals, since there isn’t a graywater recycling plant nearby.

For residents, it’s hard to tell if things will ever get better. One noise analysis the coalition shared found that the daily sound of the turbines rated higher on an “annoyance scale” than entire neighborhoods setting off New Year’s Eve fireworks.

“Our water, air, power grid, utility bills, property values, and health are all at risk,” the Safe and Sound Coalition’s website said. “We’re already facing toxic pollution and relentless industrial noise. There is no clear oversight, no transparency, and no plan to protect the people living nearby.”

The coalition expects that if enough community members protest the plant, the permitting agency will deny xAI’s permits and order any potentially dangerous turbines to be shut down. But other groups are taking a different approach, considering suing xAI if it continues operating the unpermitted gas turbines in Southaven.

Earlier this month, the Southern Environmental Law Center (SELC) joined the NAACP in sending xAI a notice of intent to sue. In that letter, the groups warned that the Environmental Protection Agency (EPA) recently changed a rule that, they argue, now requires permits for the temporary turbines. They gave xAI 60 days to respond.

The same groups previously sent a legal threat to xAI opposing alleged data center pollution in Memphis, Tennessee. xAI eventually secured permits for some of the gas turbines sparking scrutiny there, which many locals found “devastating.” Adding to the concern, residents relying on drone imagery—with no other way to keep track of how many turbines xAI was running—warned that the permits only covered 15 of 24 turbines on site.

EPA shrugs off xAI permitting concerns

It’s unclear whether the SELC can win if it takes xAI to court, or whether the EPA would ever intervene if that action could be construed as delaying Trump’s order to rush permitting and build as many data centers as fast as possible to power AI.

The SELC declined Ars’ request to comment, but the EPA’s administrator, Lee Zeldin, seemed to negate that argument in an interview with Fox Business in January. Asked directly about xAI’s gas turbines, Zeldin confirmed that the EPA was working closely on permitting with local officials in Southaven and Shelby County—where xAI built a massive data center sparking protests.

Rather than suggesting that the EPA might be preparing to review xAI’s unpermitted gas turbines, Zeldin emphasized that for Donald Trump, it “is about getting permits done faster.”

“EPA has the power to slow things down; EPA also has the power to speed things up, and that’s where the Trump EPA is,” Zeldin said.

Permitting for the Southaven project’s permanent gas turbines may be approved as soon as next month, NBC News reported.

Residents skeptical second sound barrier will be better

For Southaven, xAI’s power plant—along with a planned data center, which Musk has dubbed “MACROHARDRR” to mock Microsoft—represents a chance to boost the local economy. That prospect seemingly swayed government support for the projects, which has apparently not waned in the face of mounting protests.

When Musk bought the dormant power plant, “it was the largest private investment in state history,” Tate Reeves, Mississippi’s Republican governor, claimed. Additionally, xAI’s affiliated company that’s behind the projects, MZX Tech, donated $1.38 million to the city’s police department, NBC News reported. Both the plant and the data center “are expected to bring in millions of dollars and new jobs,” Reeves said.

For Southaven residents, the only hope they have that the noise may die down any time soon is that construction on another sound barrier will be finished in the next two months, NBC News reported. Supposedly, engineers were taking time to study “what type of sound barrier would be most effective” amid complaints about the current sound barrier.

A spokesperson for the Safe and Sound Coalition told Ars that the group remains “skeptical” that the new wall will be any better than the first sound barrier.

“To our understanding, sound barriers can reduce certain frequencies under controlled conditions, but turbine noise involves low-frequency sounds and tonal components that often reach beyond barriers,” the coalition’s spokesperson said. “The most effective method for reducing industrial noise exposure is typically distance from residential areas, which is not a mitigation option in this scenario given the facility’s proximity to homes.”

The coalition urged xAI to be transparent and to share data backing mitigation claims if it wants the community to believe that the second sound barrier will make any difference.

“Without transparent modeling, validated field measurements, and independent verification, it is difficult to assess whether the barrier will meaningfully address the ongoing nuisance experienced by nearby residents,” the coalition’s spokesperson said. “Mitigation claims are only meaningful if they are supported by transparent data.”

Mayor labels protesters Musk haters

At least one city official, Mayor Darren Musselwhite, has suggested that community backlash is “political.” Although he acknowledged that the noise was a “legitimate concern,” he also claimed on Facebook that some people protesting xAI’s facility were simply Elon Musk haters, NBC News reported.

“Southaven is now under attack by all who choose to oppose Elon Musk because of his high-profile political stances,” Musselwhite wrote.

However, residents told NBC News that “their concerns have nothing to do with politics.” One person interviewed even praised Musk’s work with the Department of Government Efficiency.

Instead, they’re worried that local officials seeing dollar signs have potentially let xAI exploit loopholes to pollute communities without any warning. The community flyer from the Safe and Sound Coalition criticized what they viewed as shady behavior from local officials:

“This project was started behind our backs, with zero community input. Local officials have repeatedly downplayed concerns, spun the facts, and misled residents about the true impacts and the deals made with xAI. Many people only found out after the turbines were up and running.”

The coalition’s spokesperson told Ars that a health impact analysis published on behalf of the SELC provides “meaningful insight” into the biggest health risks. Using the EPA’s COBRA health impact model, the analysis concluded that emissions from running 41 permanent turbines at the Southaven plant “are estimated to result in $30–$44 million per year in health-related damages, including costs from premature deaths, hospital visits, and lost productivity. Over a typical 30-year operating life, these impacts would amount to approximately $588–$862 million in cumulative discounted public-health costs, borne largely by residents of Tennessee and Mississippi.”
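The report’s cumulative range is consistent with a standard present-value calculation over the annual figures. A minimal sketch, assuming a 3 percent annual discount rate (a common choice in EPA benefit-cost analyses; the article does not state the rate the report actually used):

```python
# Reproduce the cumulative cost range from the reported annual damages,
# assuming a constant annual cost discounted at 3% over a 30-year life.

def discounted_total(annual_cost, years=30, rate=0.03):
    """Present value of a constant annual cost over `years` at `rate`."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return annual_cost * annuity_factor

low = discounted_total(30e6)   # low end: $30M/year
high = discounted_total(44e6)  # high end: $44M/year
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M")  # prints $588M to $862M
```

That a 3 percent rate reproduces the $588–$862 million range almost exactly suggests the report followed this conventional discounting approach, though that remains an inference.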

Additionally, the largest increases in harmful pollutants are expected to be “concentrated in communities that are disproportionately Black, highly socially vulnerable, and have elevated baseline asthma prevalence,” the report said.

The Coalition’s spokesperson told Ars that if the permits are issued, the group expects to continue gathering reports of “firsthand experiences” from nearby residents, which will “continue to provide valuable information regarding ongoing impacts.” The group plans to keep engaging with officials and pushing for greater accountability and transparent monitoring, as well as documenting noise conditions, reviewing emissions reports, and collecting independent data where feasible.

“The Coalition’s focus is long-term community protection, which means tracking compliance, advocating for corrective action if standards are not met, and ensuring residents have access to accurate information about environmental and health impacts,” the spokesperson said. “Permit approval would not resolve community concerns; it would shift our focus toward ongoing oversight and enforcement.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Google reveals Nano Banana 2 AI image model, coming to Gemini today

With Nano Banana 2, Google promises consistency for up to five characters at a time, along with accurate rendering of as many as 14 different objects per workflow. This, along with richer textures and “vibrant” lighting, will aid in visual storytelling. Google is also expanding the range of available aspect ratios and resolutions, from 512px square up to 4K widescreen.

So what can you do with Nano Banana 2? Google has provided some example images with associated prompts. These are, of course, handpicked images, but Nano Banana has been a popular image model for good reason. This degree of improvement seems believable based on past iterations of Nano Banana.

Google AI infographic

Prompt: High-quality flat lay photography creating a DIY infographic that simply explains how the water cycle works, arranged on a clean, light gray textured background. The visual story flows from left to right in clear steps. Simple, clean black arrows are hand-drawn onto the background to guide the viewer’s eye. The overall mood is educational, modern, and easy to understand. The image is shot from a top-down, bird’s-eye view with soft, even lighting that minimizes shadows and keeps the focus on the process.

Credit: Google


AI museum comparison

Prompt: Create an image of Museum Clos Lucé. In the style of bright colored Synthetic Cubism. No text. Your plan is to first search for visual references, and generate after. Aspect ratio 16:9.

Credit: Google


AI farm image

Prompt: Create an image of these 14 characters and items having fun at the farm. The overall atmosphere is fun, silly and joyful. It is strictly important to keep identity consistent of all the 14 characters and items.

Credit: Google


Google must be pretty confident in this model’s capabilities because it will be the only one available going forward. Starting now, Nano Banana 2 will replace both the standard and Pro variants of Nano Banana across the Gemini app, search, AI Studio, Vertex AI, and Flow.

In the Gemini app and on the website, Nano Banana 2 will be the image generator for the Fast, Thinking, and Pro settings. It’s possible there will eventually be a Nano Banana 2 Pro—Google tends to release elements of new model families one at a time. For now, it’s all “Flash” Image.



Musk has no proof OpenAI stole xAI trade secrets, judge rules, tossing lawsuit


Hostility is not proof of theft

Even twisting an ex-employee’s text to favor xAI’s reading fails to sway judge.

Elon Musk appears to be grasping at straws in a lawsuit accusing OpenAI of poaching eight xAI employees in an allegedly unlawful bid to access xAI trade secrets connected to its data centers and chatbot, Grok.

In a Tuesday order granting OpenAI’s motion to dismiss, US District Judge Rita F. Lin said that xAI failed to provide evidence of any misconduct from OpenAI.

Instead, xAI seemed fixated on a range of alleged conduct of former employees. But in assessing xAI’s claims, Lin said that xAI failed to show proof that OpenAI induced any of these employees to steal trade secrets “or that these former xAI employees used any stolen trade secrets once employed by OpenAI.”

Two employees admitted to stealing confidential information, with both downloading xAI’s source code and one improperly grabbing a supposedly sensitive recording from a Musk “All Hands” meeting. But the rest were either accused of retaining seemingly less consequential data, like work chats left on their devices, or didn’t seem to hold any confidential information at all. Among the weakest arguments, Lin noted, xAI’s own complaint acknowledged that one employee whom OpenAI poached never received access to the confidential information he allegedly sought after exiting xAI, and that two employees lumped into the complaint “simply left xAI for OpenAI.”

From the limited evidence, Lin concluded that “while xAI may state misappropriation claims against a couple of its former employees, it does not state a plausible misappropriation claim against OpenAI.”

Lin’s order will likely not be the end of the litigation, as she is allowing xAI to amend its complaint to address the current deficiencies.

Ars could not immediately reach xAI for comment, so it’s unclear what steps xAI may take next.

However, xAI seems unlikely to give up the fight, which OpenAI has alleged is part of a “harassment campaign” that Musk is waging through multiple lawsuits attacking his biggest competitor’s business practices.

Unsurprisingly, OpenAI celebrated the order on X, alleging that “this baseless lawsuit was never anything more than yet another front in Mr. Musk’s ongoing campaign of harassment.”

Other tech companies poaching talent for AI projects will likely be relieved while reading Lin’s order. Commercial litigator Sarah Tishler told Ars that the order “boils down to a fundamental concept in trade secret law: hiring from a competitor is not the same as stealing trade secrets from one.”

“Under the Defend Trade Secrets Act, xAI has to show that OpenAI actually received and used the alleged trade secrets, not just that it hired employees who may have taken them,” Tishler said. “Suspicious timing, aggressive recruiting, and even downloaded files are not enough on their own.”

Tishler suggested that the ruling will likely be welcomed by AI firms eager to secure the best talent without incurring legal risks from their hiring practices.

“In the AI industry, where talent moves fast and the competitive stakes are enormous, this ruling reaffirms that suspicion is not enough,” Tishler said. “You have to show the stolen information actually made it into the competitor’s hands and was put to use.”

OpenAI not liable for engineers swiping source code

Through the lawsuit, Musk has alleged that OpenAI is violating California’s unfair competition law. He claims that OpenAI is attempting “to destroy legitimate competition in the AI industry by neutralizing xAI’s innovations” and forcing xAI “to unfairly compete against its own trade secrets.”

But this claim hinges entirely upon xAI proving that OpenAI poached its employees to steal its trade secrets. So, for xAI’s lawsuit to proceed, xAI will need to beef up the evidence base for its other claim, that OpenAI violated the federal Defend Trade Secrets Act, Lin said. To succeed on that claim, xAI must prove that OpenAI unlawfully acquired, disclosed, or used a trade secret without xAI’s consent.

That will likely be challenging because xAI, at this point, has not offered “any nonconclusory allegations that OpenAI itself acquired, disclosed, or used xAI’s trade secrets,” Lin wrote.

All xAI has claimed is that OpenAI induced former employees to share secrets, and so far, nothing backs that claim, Lin said. Tishler noted that the court also rejected an xAI theory that “OpenAI should be responsible for what its new hires did before they arrived” for “the same reason: without evidence that OpenAI directed the theft or actually put the stolen information to use, you cannot hold the company liable.”

The strongest evidence that xAI had of employee misconduct, allegedly allowing OpenAI to misappropriate xAI trade secrets, revolves around the departure of one of xAI’s earliest engineers, Xuechen Li.

That evidence wasn’t enough, Lin said. xAI alleged that Li gave a presentation to OpenAI that supposedly included confidential information. Li also uploaded “the entire xAI source code base to a personal cloud account,” which he had connected to ChatGPT, Lin noted, after a recruiter sent Li a message on Signal sharing a link to another, unrelated cloud storage location.

xAI hoped the Signal messages would shock the court, expecting it to read between the lines the way xAI did. As proof that OpenAI allegedly got access to xAI’s source code, xAI pointed to a Signal message that an OpenAI recruiter sent to Li “four hours after” Li downloaded the source code, saying “nw!” xAI has alleged this message is short-hand for “no way!”—suggesting the OpenAI recruiter was geeked to get access to xAI’s source code. But in a footnote, Lin said that “OpenAI insists that ‘nw’ means ‘no worries,’” and thus is unconnected to Li’s decision to upload the source code to a ChatGPT-linked cloud account.

Even interpreting the text using xAI’s reading, however, xAI did not show enough to prove the recruiter or OpenAI accessed or requested the files, Lin said.

It also didn’t help xAI’s case that a temporary injunction xAI secured in a separate lawsuit against the engineer blocked Li from accepting a job at OpenAI.

That injunction led OpenAI to withdraw its job offer to Li. And that’s a problem for xAI: since Li never worked at OpenAI, he plainly never used xAI’s trade secrets while working there.

Further weakening xAI’s arguments, if Li indeed shared confidential information during his presentation while interviewing for OpenAI, xAI has alleged no facts suggesting that OpenAI was aware Li was sharing xAI trade secrets, Lin wrote.

This “makes it very hard to argue OpenAI ever used anything he allegedly took,” Tishler told Ars.

Another former xAI engineer, Jimmy Fraiture, was accused of copying xAI trade secrets, but Fraiture has said he deleted the information he improperly downloaded before starting his job at OpenAI. Importantly, Lin said, there’s no evidence that Fraiture has used xAI trade secrets to benefit xAI’s rival since joining OpenAI.

“Other than the bare fact that Fraiture had been recruited” by the same OpenAI employee “who had also recruited Li, xAI does not allege any facts indicating that OpenAI had encouraged Fraiture to take xAI’s confidential information in the first place,” Lin wrote.

Since “none of the other former employees allegedly shared with or disclosed to OpenAI any xAI trade secrets,” xAI could not advance its claim that OpenAI misappropriated trade secrets based only on allegations tied to Li and Fraiture’s supposed misconduct, Lin said.

xAI may be able to amend its complaint to maintain these arguments, but the company has thus far presented scant, purely circumstantial evidence.

It’s possible that xAI will secure more evidence to support its misappropriation claims against OpenAI in its ongoing lawsuit against Li. Ars could not immediately reach Li’s lawyer to find out if today’s ruling may impact that case.

Ex-executive’s “hostility” is not proof of theft

Among the least convincing arguments that xAI raised was a claim that an unnamed finance executive left xAI to take a “lesser role” at OpenAI after learning everything he knew about data centers from xAI.

That executive slighted xAI when Musk’s company later attempted to inquire about “confidentiality concerns.”

“Suck my dick,” the former xAI executive allegedly said, refusing to explain how his OpenAI work might overlap with his xAI position. “Leave me the fuck alone.”

xAI tried to argue that the executive’s hostility was proof of misconduct. But Lin wrote that xAI only alleged that the executive “merely possessed xAI trade secrets about data centers” and did not allege that he ever used trade secrets to benefit OpenAI.

Had xAI found evidence that OpenAI’s data center strategy suddenly mirrored xAI’s after the executive joined xAI’s rival, that may have helped xAI’s case. But there are plenty of reasons a former employee might reject an ex-employer’s outreach following an exit, Lin suggested.

“His hostility when xAI reached out about its confidentiality concerns also does not support a plausible inference of use,” Lin wrote. “Hostility toward one’s former employer during departure does not, without more, indicate use of trade secrets in a subsequent job. Nor does the executive’s lack of experience with AI data centers before his time at xAI, without more, support a plausible inference that he used xAI’s trade secrets at OpenAI.”

xAI has until March 17 to amend its complaint to keep up this particular fight against OpenAI. But the company won’t be able to add any new claims or parties, Lin noted, “or otherwise change the allegations except to correct the identified deficiencies.”

Criminal probe likely leaves OpenAI on pins and needles

For Li, the engineer accused of disclosing xAI trade secrets to OpenAI, the litigation could eliminate one front of discovery as he navigates two other legal fights over xAI’s trade secrets claims.

Tishler has been closely monitoring xAI’s trade secret legal battles. In October, she noted that Li is in a particularly prickly position, facing pressure in civil litigation from Musk to turn over data that could be used against him in the Federal Bureau of Investigation’s criminal investigation into Musk’s allegations. As Tishler explained:

“The practical reality is stark: Li faces a choice between protecting himself in the criminal action with his silence, and the civil consequences of doing so. Refuse to answer, and xAI could argue adverse inferences; answer, and the responses could feed the criminal case.”

Ultimately, the FBI is trying to prove that Li stole information that qualified as a trade secret and intended to use it for OpenAI’s benefit, while knowing that it would harm xAI. If the bureau succeeds, “xAI would suddenly have a government-backed record that its trade secrets were stolen,” Tishler wrote.

If xAI were so armed and able to keep the OpenAI lawsuit alive, the central question in the lawsuit that Lin dismissed today would shift, Tishler suggested, from “was there a theft?” to “what did OpenAI know, and when did it know it?”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


The Galaxy S26 is faster, more expensive, and even more chock-full of AI


Samsung’s Galaxy S26 series is available for preorder today and ships on March 11.

The Galaxy S26 lineup doesn’t change much on the outside. Credit: Samsung

There used to be countless companies making flagship Android phones, but a combination of factors has narrowed the field over time. Today, Samsung is the undisputed king of the Android device ecosystem with its Galaxy S line. So we can safely assume today’s Unpacked has revealed the most popular Android phones for the next year—the Galaxy S26 Ultra, Galaxy S26+, and Galaxy S26.

Samsung didn’t swing for the fences this time around, producing phones with a few cosmetic tweaks and upgraded internals. Meanwhile, Samsung is investing even more in AI, saying the S26 series includes the first “Agentic AI phones.” Despite limited hardware upgrades, the realities of component prices in the age of AI mean the prices of the two cheaper models have gone up by $100 this year. The Ultra remains at an already eye-watering $1,300.

Faster and more private

Looking at the Galaxy S26 family, you’d be hard-pressed to tell them apart from last year’s phones. The camera surround is different, and the measurements of the smallest and largest phone are ever so slightly different. You probably won’t be able to tell just by looking, but the S26 Ultra has regressed from titanium to aluminum, a reversion Apple also made with its latest high-end phones. This phone also retains its S Pen stylus.

Specs at a glance: Samsung Galaxy S26 series
Model: Galaxy S26 ($900) | Galaxy S26+ ($1,100) | Galaxy S26 Ultra ($1,300)
SoC: Snapdragon 8 Elite Gen 5 (3 nm) | Snapdragon 8 Elite Gen 5 (3 nm) | Snapdragon 8 Elite Gen 5 (3 nm)
Memory: 12GB | 12GB | 12GB, 16GB
Storage: 256GB, 512GB | 256GB, 512GB | 256GB, 512GB, 1TB
Display: 6.3-inch OLED, 10-bit color, 2340×1080, 1-120Hz | 6.7-inch OLED, 10-bit color, 3120×1440, 1-120Hz | 6.9-inch OLED, 10-bit color, 3120×1440, 1-120Hz, S Pen support
Cameras: 50MP primary, f/1.8, 1.0 μm; 12MP ultrawide, f/2.2, 1.4 μm; 10MP 3x telephoto, f/2.4, 1.0 μm; 12MP selfie, f/2.2, 1.12 μm | 50MP primary, f/1.8, 1.0 μm; 12MP ultrawide, f/2.2, 1.4 μm; 10MP 3x telephoto, f/2.4, 1.0 μm; 12MP selfie, f/2.2, 1.12 μm | 200MP primary, f/1.4, 0.6 μm; 50MP ultrawide, f/1.9, 0.7 μm; 10MP 3x telephoto, f/2.4, 1.12 μm; 50MP 5x telephoto, f/2.9, 0.7 μm; 12MP selfie, f/2.2, 1.12 μm
Software: Android 16 | Android 16 | Android 16
Battery: 4,300 mAh | 4,900 mAh | 5,000 mAh
Connectivity: Wi-Fi 7, Bluetooth 5.4, USB-C 3.2, Sub6 5G | Wi-Fi 7, Bluetooth 5.4, USB-C 3.2, Sub6 and mmWave 5G | Wi-Fi 7, Bluetooth 5.4, USB-C 3.2, Sub6 and mmWave 5G
Measurements: 71.7×149.6×7.2 mm, 167 g | 75.8×158.4×7.3 mm, 190 g | 78.1×163.6×7.9 mm, 214 g

These phones will again have the latest Snapdragon flagship processor (in North America, Japan, and China) with customizations exclusive to Samsung. The Snapdragon 8 Elite Gen 5 for Galaxy is a 3 nm chip with third-gen Oryon CPU cores, an Adreno 840 GPU, and a powerful Hexagon NPU for on-device AI processing. Samsung promises double-digit performance gains across the board, which is what we hear every year.

Samsung flagship phones have extremely fast hardware, so they benchmark well. However, they also tend to heat up and throttle quickly during sustained use. Perhaps that won’t be as much of a problem with the S26 series. Samsung says it has implemented its largest vapor chamber ever to better control temperatures.

The batteries have also been redesigned for greater efficiency and charging speed, but the base model is the only one that saw a capacity boost (4,000 to 4,300 mAh). Charging speeds have gotten a much-needed increase at the Ultra level. Samsung has only said you can now get a 75 percent charge in 30 minutes using its most expensive phone—it peaks at 60 W, up from 45 W for the last Ultra.

Samsung has been using the same camera sensors for a few cycles now, and it’s not changing anything major this time around. The Ultra still has four cameras (including two telephotos) that top out with the 200 MP primary, and the S26+ and base model still have three cameras with a 50 MP primary. The apertures on the Ultra sensors are a bit wider to allow for brighter photos in challenging conditions. More interesting, though, is the option to record high-quality 8K video directly to an external drive. The S26 also brings support for the Advanced Professional Video (APV) codec.

While the display specs haven’t changed much, they are home to the phone’s most notable new feature: Privacy Display. As smartphone screens have improved, they have emphasized high brightness and wide viewing angles, which is what you want most of the time. However, that also makes it easy for people nearby to see what’s on your screen. With one tap, the S26 can make it harder for shoulder surfers to see what you’re doing.

Privacy Display prevents shoulder surfers from peeking at your screen. Credit: Samsung

Privacy Display uses a technology called Black Matrix, which activates “narrow pixels.” These pixels focus light more directly on the user to limit the viewing angle. Privacy Display can be left on system-wide, activated on a per-app basis, or applied only to the part of the screen where notifications appear.

What is an Agentic AI phone anyway?

Unsurprisingly, AI takes the lead with the S26 launch. Part of that is just Samsung following the zeitgeist, but companies can also add new AI capabilities to fill out spec sheets without a bunch of increasingly expensive hardware upgrades. In Samsung’s words, it has sought to have “AI integrated into every layer” of the Galaxy S26 experience.

That starts with expanded awareness of screen context. The company’s Now Brief feature, which is supposed to pull together useful information from across your apps, has not been very impressive so far. With the S26, Samsung is piping notification content into Now Brief, allowing it to remind you about things even if you never added them to your calendar or to-do list. Like many of Samsung’s Galaxy AI features, this data is processed on-device and won’t go to the cloud.

A Galaxy AI Nudge that helps you select photos.

In a similar vein, Galaxy AI is also getting “Nudges,” which look similar to Google’s Magic Cue on the Pixel 10 series. The Galaxy S26 will be able to suggest content and apps based on what’s happening on the screen. For example, Galaxy AI might see you want to share images and suggest the right ones, or perhaps it will check your calendar for openings to save you from switching apps. Of course, that assumes the AI will correctly recognize the context and call the right action.

AI features will also be expanding in Samsung’s stock apps. In the Browser, Samsung has partnered with Perplexity for a new “Ask AI” feature. Rather than juggling tabs to read original sources yourself, you can have the AI do it. It basically gives you a research report like you could get from Perplexity itself (or Gemini Deep Research), but it’s integrated with the browser. Samsung’s gallery app also gets expanded AI editing tools with the S26. These capabilities will really allow you to change the substance of photos, so Samsung has added a visible watermark to label them. We’ve asked if there are AI labels in the image metadata, like you get with some other editing systems.

AI-edited photos have a visible watermark. Credit: Samsung

A major component of Samsung’s “Agentic AI phone” pitch comes from a partnership with Google. For starters, Google’s AI-powered scam detection features in the Messaging app, previously exclusive to Pixels, will launch on the S26 in preview before expanding to more devices later. Circle to Search is getting an upgrade that lets it identify multiple objects in a single image—this is in testing on both the Pixel 10 series and the Galaxy S26.

The other Google tie-in is more in keeping with the goal of agentic AI. For the first time, Gemini will be able to handle multistep tasks for you. You can watch it work if you prefer, but this can also happen entirely in the background while you do other things. It’s a bit like the recently launched Chrome Auto Browse but for apps.

The selection of apps is pretty slim during this testing period. Samsung and Google say you’ll be able to order food and groceries in apps like DoorDash and Grubhub, and there will be a tie-in with Uber for both rides and food. Google currently says you should “supervise closely” when the agent is working on your behalf. So we’ll see how that goes.

When you can get it

Samsung is accepting preorders for its new phones starting today. You can get them at every mobile carrier or directly from Samsung’s website. Carriers will offer a variety of deals with monthly credits to reduce the sting of the new, higher prices. Samsung has enhanced trade-in values right now, which is a more straightforward way to get a discount if you have an old phone to unload. It’s offering up to $900 off instantly with an S25 Ultra or Z Fold 6 trade-in. Even a phone from a couple of years ago can cut the price of a Galaxy S26 way down.

The Galaxy S26 comes in a variety of understated colors. Credit: Samsung

The phones are available in violet cobalt, sky blue, white, and black at all retailers. Samsung’s exclusive colors this time are silver shadow and pink gold. Devices will be on shelves and the doorsteps of preorderers on or around March 11.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


Pete Hegseth tells Anthropic to fall in line with DoD desires, or else

The act gives the administration the ability to “allocate materials, services and facilities” for national defense. The Trump and Biden administrations used the act to address a shortage of medical supplies during the coronavirus pandemic, and Trump has also used the DPA to order an increase in the US’s production of critical minerals.

The Pentagon has pushed for open-ended use of AI technology, aiming to expand the set of tools at its disposal to counter threats and to undertake military operations.

The department released its AI strategy last month, with Hegseth saying in a memo that “AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade.”

He added the US military “must build on its lead” over foreign adversaries to make soldiers “more lethal and efficient,” and that the AI race was “fueled by the accelerating pace” of innovation coming from the private sector.

Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state-of-the-art AI models are not reliable enough to be trusted in those contexts, said people familiar with the negotiations.

It had also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that was legal under current regulations, they added.

A decision to cut Anthropic from the defense department’s supply chain would have significant ramifications for national security work and the company, which has a $200 million contract with the department.

It would also have an impact on partners, including Palantir, that make use of Anthropic’s models.

Claude was used in the US capture of Venezuelan leader Nicolás Maduro in January. That mission prompted queries from Anthropic about the exact manner in which its model was used, said people familiar with the matter.

A person with knowledge of Tuesday’s meeting said Amodei had stressed to Hegseth that his company had never objected to legitimate military operations.

The Defense Department declined to comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Meta could end up owning 10% of AMD in new chip deal

Su said the warrant structure would help “make sure that we always [have] a clear seat at the table when [Meta] are thinking about what they need next.”

Meta’s chief executive Mark Zuckerberg said he expected AMD to be “an important partner for many years to come.”

Meta has said that it will almost double its AI infrastructure spending this year to as much as $135 billion, as US tech giants rush to build the data centers to train and run AI software. It is already one of AMD’s biggest AI chip customers.

“We don’t believe that a single silicon solution will work for all of our workloads,” said Santosh Janardhan, Meta’s head of infrastructure. “There’s a place for Nvidia, there’s a place for AMD and… there’s a place for our own custom silicon as well. We need all three.”

Under the deal, AMD will build a custom version of its MI450 AI chips for Meta. They will be used primarily for “inference” workloads, the process of running models after they have been trained.

The chips need 6 gigawatts of power—roughly equivalent to the average electrical draw of 5 million US households.
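That household comparison holds up as a back-of-the-envelope calculation (our arithmetic, not the source’s, using a rough figure of about 10,500 kWh of annual electricity consumption per US home):

```latex
\frac{6\ \mathrm{GW}}{5\times10^{6}\ \text{households}} = 1.2\ \mathrm{kW\ per\ household}
\quad\text{vs.}\quad
\frac{10{,}500\ \mathrm{kWh/yr}}{8{,}760\ \mathrm{h/yr}} \approx 1.2\ \mathrm{kW}
```

So 6 GW of continuous draw does roughly match the average power consumption of 5 million homes.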

Increasingly creative funding arrangements to support massive AI infrastructure build-outs have emerged in recent years, leading to warnings about circular financing.

AMD has, for example, helped data center builder Crusoe secure a $300 million loan from Goldman Sachs by offering a backstop guaranteeing the use of its chips if Crusoe is unable to find customers after installing them in an Ohio facility.

Tech giants such as Meta, historically flush with cash, are meanwhile facing the prospect of tapping bond and equity markets or stemming capital returns to shareholders to help fund their unprecedented infrastructure plans. The Facebook and Instagram parent raised $30 billion in October, marking its biggest bond sale to date.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Data center builders thought farmers would willingly sell land, learn otherwise

One resident of Huddleston’s county who received an offer, 75-year-old Timothy Grosser, even declined a proposal to “name your price” when a tech company sought to buy his 250-acre farm, The Guardian reported.

“There is none,” Grosser said.

The farm is where he “lives, hunts, and raises cattle” and where his grandson hunts a turkey every Christmas for the family feast.

“The money’s not worth giving up your lifestyle,” Grosser said.

Another farmer in Wisconsin, Anthony Barta, reportedly fretted about what would happen to his neighbors if he took a deal he was offered—showing the deep bonds of people whose farms have bordered each other for years. In his community, another farmer was offered between $70 million and $80 million for 6,000 acres.

“Me and my family, we own the farm and run close to 1,000 animals,” Barta said. “What would that do if that’s next to it? Can they even be there? You know, that’s our livelihood—the farm. We’re just concerned what, if it would go through, what would happen to us and our neighbors and farms and our community? What would happen to that?”

Some tech companies are apparently not taking “no” for an answer. At least one farmer who spent 51 years milking cows in Pennsylvania prior to the AI boom described tech companies as “relentless.”

Eighty-six-year-old Mervin Raudabaugh, Jr., found a creative solution to end the pressure to sell two contiguous farms. He reportedly staved off developers by turning to “a farmland preservation program dedicating taxpayer dollars toward protecting agricultural resources.”

By working with the program, Raudabaugh will only receive about one-eighth of what the developers were offering. But he said it’s worth it to know his land would be preserved for farming purposes and out of reach of persistent tech companies.

“These people have hounded the living daylights out of me,” Raudabaugh said.

Data center deals come amid fragile farm economy

For people in rural communities, data center fights go beyond concerns about water and electricity consumption—although those are concerns, too. Communities are defending the character of the land, which they don’t want to see suddenly disrupted by extensive construction, data center noise pollution, or untold environmental impacts from massive operations.


New Microsoft gaming chief has “no tolerance for bad AI”

A gaming education

Unlike Spencer, who spent years at Microsoft Game Studios before heading Microsoft’s gaming division, Sharma has no professional experience in the video game industry. Her personal experience with Xbox also seems somewhat limited; after she shared her Gamertag on social media over the weekend, curious gamers found that her Xbox play history dates back roughly one month. That’s in stark contrast to Spencer, who has amassed a Gamerscore of over 121,000 across decades of play.

In her interview with Variety, Sharma cited 2016’s Firewatch as an example of the kinds of games with “deep emotional resonance” and “a distinct point of view” that she’s looking for from Microsoft. And on social media, Sharma shared her list of the three greatest games ever: “Halo, Valheim, Goldeneye,” for what it’s worth. Sharma also seems to be taking recommendations for games to catch up on; after saying on social media that she would try Borderlands 2, the game appeared in her recently played games over the weekend.

A look at some of Sharma’s recently played Xbox games, as of this writing. Credit: Xbox.com

Being a personal fan of video games isn’t necessarily required to succeed in running a gaming company. Nintendo President Hiroshi Yamauchi famously didn’t care for video games even as he launched the Famicom and Nintendo Entertainment System to worldwide success in the 1980s. Still, the lack of direct experience with the gaming world marks a sharp change after Spencer’s long tenure at a time when Microsoft is struggling to redefine the Xbox brand amid cratering hardware sales, a pivot away from software exclusives, and a move to extend the Xbox brand to many different devices.

Xbox President and COO Sarah Bond, who by all accounts was being set up to succeed Spencer, also announced her departure from Microsoft on Friday, ending a nearly nine-year stint as a public face for the company’s gaming efforts. The Verge reports that Bond caused a lot of friction within the Xbox team when she championed the “Xbox Everywhere” strategy and “This is an Xbox” marketing campaign, which focused on streaming Xbox games to hardware like mobile phones and tablets, according to anonymous sources. Shortly before the launch of that campaign in 2024, Microsoft lost marketing executives Jerrett West and Kareem Choudry, leading to significant internal reorganization.

Longtime Xbox Game Studios executive Matt Booty, whose history in the game industry dates back to working for Williams Electronics in the ’90s, has been promoted to executive vice president and chief content officer for Xbox and “will continue working closely with [Sharma] to ensure a smooth transition,” Microsoft said in its announcement Friday.


AIs can generate near-verbatim copies of novels from training data

A US court last year found that Anthropic’s training of LLMs on some copyrighted content could be considered fair use as it was deemed “transformative.”

But it determined that storing pirated works was “inherently, irredeemably infringing,” which then led the AI group to pay $1.5 billion to settle the lawsuit.

In Germany, a ruling from November last year found that OpenAI had infringed on copyright because its model had memorized song lyrics. The case, brought by GEMA, an association representing composers, lyricists, and publishers, was considered a landmark ruling in the EU.

Rudy Telscher, a partner at law firm Husch Blackwell, said reproducing an entire book without jailbreaking is “clearly a copyright violation.” But “it’s a matter of whether this is happening enough that [AI models] could be vicariously liable for the infringement,” he added.

Anthropic said the jailbreaking technique used in the Stanford and Yale research was impractical for normal users and would require more effort to extract the text than just purchasing the content.

The company also added that its model does not store copies of specific datasets but learns from patterns and relationships between words and strings in its training data.

xAI, OpenAI, and Google did not respond to requests for comment.

The fact that AI labs have put safeguards in place to prevent training data from being extracted means they are aware of the problem, said Imperial’s de Montjoye.

Ben Zhao, a computer science professor at the University of Chicago, questioned whether AI labs really needed to use copyrighted content in training data to create cutting-edge models in the first place.

“Whether the technical result can be done or not, it’s still a question of should we be doing this?” Zhao said. “The legal side should eventually hold their ground and really be the arbiter in this whole process.”

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


An AI coding bot took down Amazon Web Services

“In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools.

The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

Employees said the group’s AI tools were treated as an extension of an operator and given the same permissions. In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case.

Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected—a user access control issue, not an AI autonomy issue.”

AWS launched Kiro in July. It said the coding assistant would advance beyond “vibe coding”—which allows users to quickly build applications—to instead write code based on a set of specifications.

The group had earlier relied on its Amazon Q Developer product, an AI-enabled chatbot, to help engineers write code. This was involved in the earlier outage, three of the employees said.

Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 percent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

Amazon said it was experiencing strong customer growth for Kiro and that it wanted customers and employees to benefit from efficiency gains.

“Following the December incident, AWS implemented numerous safeguards,” including mandatory peer review and staff training, Amazon added.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
