Author name: Shannon Garcia


What the EPA’s “endangerment finding” is and why it’s being challenged


Getting rid of the justification for greenhouse gas regulations won’t be easy.

Credit: Mario Tama/Getty Images

A document that was first issued in 2009 would seem an unlikely candidate for making news in 2025. Yet the past few weeks have seen a steady stream of articles about an analysis first issued by the Environmental Protection Agency (EPA) in the early years of Obama’s first term: the endangerment finding on greenhouse gases.

The basics of the document are almost mundane: Greenhouse gases are warming the climate, and this will have negative consequences for US citizens. But it took a Supreme Court decision to get it written in the first place, and it has played a role in every attempt by the EPA to regulate greenhouse gas emissions across multiple administrations. And, while the first Trump administration left it in place, the press reports we’re seeing suggest that an attempt will be made to eliminate it in the near future.

The only problem: The science on which the endangerment finding is based is so solid that any ensuing court case will likely leave its opponents worse off in the long run, which is probably why the earlier Trump administration didn’t challenge it.

Get comfortable, because the story dates all the way back to the first Bush administration.

A bit of history

One of the goals of the US’s Clean Air Act, first passed in 1963, is to “address the public health and welfare risks posed by certain widespread air pollutants.” By the end of the last century, it was becoming increasingly clear that greenhouse gases fit that definition. While they weren’t necessarily directly harmful to the people inhaling them—our lungs are constantly being filled with carbon dioxide, after all—the downstream effects of the warming they caused could certainly impact human health and welfare. But, with the federal government taking no actions during George W. Bush’s time in office, a group of states and cities sued to force the EPA’s hand.

That suit eventually reached the Supreme Court in the form of Massachusetts v. EPA, which led to a ruling in 2007 determining that the Clean Air Act required the EPA to perform an analysis of the dangers posed by greenhouse gases. That analysis was done by late 2007, but the Bush administration simply ignored it for the remaining year it had in office. (It was eventually released after Bush left office.)

That left the Obama-era EPA to reach essentially the same conclusions that the Bush administration had: greenhouse gases are warming the planet. And that will have various impacts—sea-level rise, dangerous heat, damage to agriculture and forestry, and more.

That conclusion compelled the EPA to formulate regulations to limit the emission of greenhouse gases from power plants. Obama’s EPA did just that, but the rules came late enough that they were still tied up in the courts by the time his term ended. The regulations were also formulated before the plunge in the cost of renewable power, which has since led to a drop in carbon emissions that has far outpaced what the EPA’s rules were intended to accomplish.

The first Trump administration formulated alternative rules that also ended up in court for being an insufficient response to the conclusions of the endangerment finding, which ultimately led the Biden administration to start formulating a new set of rules. And at that point, the Supreme Court decided to step in and rule on the Obama rules, even though everyone knew they would never go into effect.

The court indicated that the EPA needed to regulate each power plant individually, rather than regulating the wider grid, which sent the Biden administration back to the drawing board. Its attempts at crafting regulations were also in court when Trump returned to office.

There were a couple of notable aspects to that last case, West Virginia v. EPA, which hinged on the fact that Congress had never explicitly indicated that it wanted to see greenhouse gases regulated. Congress responded by ensuring that the Inflation Reduction Act’s energy-focused components specifically mentioned that these were intended to limit carbon emissions, eliminating one potential roadblock. The other thing is that, in this and other court cases, the Supreme Court could have simply overturned Massachusetts v. EPA, the case that put greenhouse gases within the regulatory framework of the Clean Air Act. Yet a court that has shown a great enthusiasm for overturning precedent didn’t do so.

Nothing dangerous?

So, in the 15 years since the EPA first released its endangerment finding, it has resulted in no regulations that ever took effect. But as long as the finding exists, the EPA is required to at least attempt to regulate greenhouse gas emissions. Getting rid of the endangerment finding would therefore seem like the obvious move for an administration led by a president who repeatedly calls climate change a hoax. And there were figures within the first Trump administration who argued in favor of doing exactly that.

So why didn’t it happen?

That was never clear, but I’d suggest at least some members of the first Trump administration were realistic about the likely results. The effort to contest the endangerment finding was pushed by people who largely reject the vast body of scientific evidence indicating that greenhouse gases are warming the climate. And, if anything, the evidence had gotten more decisive in the years between the initial endangerment finding and Trump’s inauguration. I expect that their effort was blocked by people who knew it would fail in the courts and likely leave behind precedents that would make future regulatory efforts easier.

This interpretation is supported by the fact that the Trump-era EPA received a number of formal petitions to revisit the endangerment finding. Having read a few (something you should not do), I can report that they are uniformly awful. References to supposed peer-reviewed “papers” turn out to be little more than PDFs hosted on a WordPress site. Other arguments are based on information contained in the proceedings of a conference organized by an anti-science think tank. The Trump administration rejected them all with minimal comment the day before Biden’s inauguration.

Biden’s EPA went back and made detailed criticisms of each of them, which are worth reading if you want to see just how laughable the arguments against mainstream science were at the time. And, since then, we’ve experienced a few years of temperatures that are so high they’ve surprised many climate scientists.

Unrealistic

But the new head of the EPA is apparently anything but a realist, and multiple reports have indicated he’s asking to be given the opportunity to go ahead and redo the endangerment finding. A more recent report suggests two possibilities. One is to recruit scientists from the fringes to produce a misleading report and roll the dice on getting a sympathetic judge who will overlook the obvious flaws. The other would be to argue that any climate change that happens will have net benefits to the US.

That latter approach would run into the problem that we’ve gotten increasingly sophisticated at analyses that attribute individual weather disasters—the kind that do harm the welfare of US citizens—to climate change. While it might have been possible to make a case for uncertainty here a decade ago, that window has been largely closed by the scientific community.

Even if all of these efforts fail, it will be entirely possible for the EPA to construct greenhouse gas regulations that accomplish nothing and get tied up in court for the remainder of Trump’s term. But a court case could show just how laughably bad the positions staked out by climate contrarians are (and, by extension, the position of the president himself). There’s a small chance that the resulting court cases will produce a legal record that makes it that much harder for courts to accept the sorts of minimalist regulations that Trump proposed in his first term.

Which is probably why this approach was rejected the first time around.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



AMD says top-tier Ryzen 9900X3D and 9950X3D CPUs arrive March 12 for $599 and $699

Like the 7950X3D and 7900X3D, these new X3D chips combine a pair of AMD’s CPU chiplets, one that has the extra 64MB of cache stacked underneath it and one that doesn’t. For the 7950X3D, you get eight cores with extra cache and eight without; for the 7900X3D, you get eight cores with extra cache and four without.

It’s up to AMD’s chipset software to decide what kinds of apps get to run on each kind of CPU core. Non-gaming workloads prioritize the normal CPU cores, which are generally capable of slightly higher peak clock speeds, while games that benefit disproportionately from the extra cache are run on those cores instead. AMD’s software can “park” the non-V-Cache CPU cores when you’re playing games to ensure they’re not accidentally being run on less-suitable CPU cores.

We didn’t have issues with this core parking technology when we initially tested the 7950X3D and 7900X3D, and AMD has steadily made improvements since then to make sure that core parking is working properly. The new 9000-series X3D chips should benefit from that work, too. To get the best results, AMD officially recommends a fresh and fully updated Windows install, along with the newest BIOS for your motherboard and the newest AMD chipset drivers; swapping out another Ryzen CPU for an X3D model (or vice versa) without reinstalling Windows can occasionally lead to CPUs being parked (or not parked) when they are supposed to be (or not supposed to be).
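To make the scheduling idea in the preceding paragraphs concrete, here is a minimal Python sketch of the policy, not AMD’s actual chipset driver code: cache-sensitive games get steered onto the cores with the stacked cache while the other cores are parked, and everything else prefers the higher-clocking cores. The core counts, process names, and function names here are all hypothetical.

```python
# Conceptual sketch of the core-parking policy described above.
# This is not AMD's driver logic; all names, core counts, and the
# game list are hypothetical and for illustration only.

from dataclasses import dataclass, field

@dataclass
class CoreGroup:
    name: str
    core_ids: list = field(default_factory=list)  # OS-level core indices
    parked: bool = False                           # scheduler should avoid these cores

vcache_cores = CoreGroup("V-Cache CCD", core_ids=list(range(0, 8)))
frequency_cores = CoreGroup("Frequency CCD", core_ids=list(range(8, 16)))

KNOWN_GAMES = {"game.exe"}  # hypothetical allowlist maintained by the driver

def pick_cores(process_name: str) -> CoreGroup:
    """Steer games to the cache-heavy cores and park the rest; other
    workloads prefer the cores with higher peak clocks."""
    if process_name.lower() in KNOWN_GAMES:
        frequency_cores.parked = True   # keep game threads off the plain cores
        return vcache_cores
    frequency_cores.parked = False      # normal work resumes on both CCDs
    return frequency_cores

if __name__ == "__main__":
    print(pick_cores("game.exe").name)     # -> V-Cache CCD
    print(pick_cores("blender.exe").name)  # -> Frequency CCD
```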



What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained.

On the Frontier Math benchmark by EpochAI, o3 solved 25.2 percent of problems, while no other model has exceeded 2 percent—suggesting a leap in mathematical reasoning capabilities over the previous model.

Benchmarks vs. real-world value

Ideally, potential applications for a true PhD-level AI model would include analyzing medical research data, supporting climate modeling, and handling routine aspects of research work.

The high price points reported by The Information, if accurate, suggest that OpenAI believes these systems could provide substantial value to businesses. The publication notes that SoftBank, an OpenAI investor, has committed to spending $3 billion on OpenAI’s agent products this year alone—indicating significant business interest despite the costs.

Meanwhile, OpenAI faces financial pressures that may influence its premium pricing strategy. The company reportedly lost approximately $5 billion last year covering operational costs and other expenses related to running its services.

News of OpenAI’s stratospheric pricing plans comes after years of relatively affordable AI services that have conditioned users to expect powerful capabilities at low cost. ChatGPT Plus remains $20 per month and Claude Pro costs $30 monthly—both tiny fractions of these proposed enterprise tiers. Even ChatGPT Pro’s $200/month subscription is small compared to the new proposed fees. Whether the performance difference between these tiers will match their thousandfold price difference is an open question.

Despite their benchmark performances, these simulated reasoning models still struggle with confabulations—instances where they generate plausible-sounding but factually incorrect information. This remains a critical concern for research applications where accuracy and reliability are paramount. A $20,000 monthly investment raises questions about whether organizations can trust these systems not to introduce subtle errors into high-stakes research.

In response to the news, several people quipped on social media that companies could hire an actual PhD student for much cheaper. “In case you have forgotten,” wrote xAI developer Hieu Pham in a viral tweet, “most PhD students, including the brightest stars who can do way better work than any current LLMs—are not paid $20K / month.”

While these systems show strong capabilities on specific benchmarks, the “PhD-level” label remains largely a marketing term. These models can process and synthesize information at impressive speeds, but questions remain about how effectively they can handle the creative thinking, intellectual skepticism, and original research that define actual doctoral-level work. On the other hand, they will never get tired or need health insurance, and they will likely continue to improve in capability and drop in cost over time.



Review: Mickey 17’s dark comedic antics make for a wild cinematic ride

Mickey settles into his expendable role on the four-year journey, dying and being reprinted several times, and even finds love with security agent Nasha (Naomi Ackie). The mission finally reaches Niflheim, and he’s soon on Version 17—thanks to being used to detect a deadly airborne virus, with multiple versions dying in the quest to develop a vaccine. As the colonists explore this cold new world, Mickey 17 falls into a deep fissure inhabited by native life forms that resemble macroscale tardigrades, dubbed “creepers.” Timo leaves Mickey for dead,  assuming they’ll just eat him, but the creepers (who seem to share a hive mind) instead save Mickey’s life, returning him to the surface.

Mickey Barnes (Robert Pattinson) failed to read the fine print when he signed up as an “expendable.” Credit: Warner Bros.

When Mickey gets back to his quarters, he finds his replacement, Mickey 18, is already there. The problem goes beyond Nasha’s opportunistic desire for an awkward threesome with the two Mickeys. Multiples are simply not allowed. The controversial reprinting technology isn’t even legal on Earth and was only allowed on the colonization mission with the understanding that any multiples would be killed immediately and their consciousness backup wiped—i.e., a permanent death.

A tale of two Mickeys

It’s Pattinson’s impressive dual performance as Mickey 17 and Mickey 18 that anchors the film. They might be clones with identical physical traits and memories, but we learn there are subtle differences in all the printings. Mickey 17 is more laid-back, meekly suffering abuse in the name of progress, while Mickey 18 is more rebellious and frankly has some anger issues. Pattinson adopted two different accents to differentiate between the two. Mickey and Nasha’s love story is the movie’s heart; she loves him in all his incarnations, through death after death. The scene where she dons a hazmat suit to hold Mickey 14—or is it 15?—in his isolation chamber as he dies (yet again) from the airborne virus is among the film’s most touching.



Nearly 1 million Windows devices targeted in advanced “malvertising” spree

A broad overview of the four stages. Credit: Microsoft

The campaign targeted “nearly” 1 million devices belonging both to individuals and a wide range of organizations and industries. The indiscriminate approach indicates the campaign was opportunistic, meaning it attempted to ensnare anyone, rather than targeting certain individuals, organizations, or industries. GitHub was the platform primarily used to host the malicious payload stages, but Discord and Dropbox were also used.

The malware located resources on the infected computer and sent them to the attacker’s command-and-control (C2) server. The exfiltrated data included the following browser files, which can store login cookies, passwords, browsing histories, and other sensitive data:

  • AppData\Roaming\Mozilla\Firefox\Profiles\.default-release\cookies.sqlite
  • AppData\Roaming\Mozilla\Firefox\Profiles\.default-release\formhistory.sqlite
  • AppData\Roaming\Mozilla\Firefox\Profiles\.default-release\key4.db
  • AppData\Roaming\Mozilla\Firefox\Profiles\.default-release\logins.json
  • AppData\Local\Google\Chrome\User Data\Default\Web Data
  • AppData\Local\Google\Chrome\User Data\Default\Login Data
  • AppData\Local\Microsoft\Edge\User Data\Default\Login Data

Files stored on Microsoft’s OneDrive cloud service were also targeted. The malware also checked for the presence of cryptocurrency wallets including Ledger Live, Trezor Suite, KeepKey, BCVault, OneKey, and BitBox, “indicating potential financial data theft,” Microsoft said.

Microsoft said it suspects the sites hosting the malicious ads were streaming platforms providing unauthorized content. Two of the domains are movies7[.]net and 0123movie[.]art.

Microsoft Defender now detects the files used in the attack, and it’s likely other malware defense apps do the same. Anyone who thinks they may have been targeted can check indicators of compromise at the end of the Microsoft post. The post includes steps users can take to prevent falling prey to similar malvertising campaigns.



Will the future of software development run on vibes?


Accepting AI-written code without understanding how it works is growing in popularity.

For many people, coding is about telling a computer what to do and having the computer perform those precise actions repeatedly. With the rise of AI tools like ChatGPT, it’s now possible for someone to describe a program in English and have the AI model translate it into working code without ever understanding how the code works. Former OpenAI researcher Andrej Karpathy recently gave this practice a name—”vibe coding”—and it’s gaining traction in tech circles.

The technique, enabled by large language models (LLMs) from companies like OpenAI and Anthropic, has attracted attention for potentially lowering the barrier to entry for software creation. But questions remain about whether the approach can reliably produce code suitable for real-world applications, even as tools like Cursor Composer, GitHub Copilot, and Replit Agent make the process increasingly accessible to non-programmers.

Instead of being about control and precision, vibe coding is all about surrendering to the flow. On February 2, Karpathy introduced the term in a post on X, writing, “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He described the process in deliberately casual terms: “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”


A screenshot of Karpathy’s original X post about vibe coding from February 2, 2025. Credit: Andrej Karpathy / X

While vibe coding, if an error occurs, you feed it back into the AI model, accept the changes, hope it works, and repeat the process. Karpathy’s technique stands in stark contrast to traditional software development best practices, which typically emphasize careful planning, testing, and understanding of implementation details.

As Karpathy humorously acknowledged in his original post, the approach is for the ultimate lazy programmer experience: “I ask for the dumbest things, like ‘decrease the padding on the sidebar by half,’ because I’m too lazy to find it myself. I ‘Accept All’ always; I don’t read the diffs anymore.”

At its core, the technique transforms anyone with basic communication skills into a new type of natural language programmer—at least for simple projects. With AI models currently limited by how much code they can digest at once (their context size), there tends to be an upper limit on how complex a vibe-coded software project can get before the human at the wheel becomes a high-level project manager, manually assembling slices of AI-generated code into a larger architecture. But as technical limits expand with each generation of AI models, those limits may one day disappear.

Who are the vibe coders?

There’s no way to know exactly how many people are currently vibe coding their way through either hobby projects or development jobs, but Cursor reported 40,000 paying users in August 2024, and GitHub reported 1.3 million Copilot users just over a year ago (February 2024). While we can’t find user numbers for Replit Agent, the site claims 30 million users, with an unknown percentage using the site’s AI-powered coding agent.

One thing we do know: the approach has particularly gained traction online as a fun way of rapidly prototyping games. Microsoft’s Peter Yang recently demonstrated vibe coding in an X thread by building a simple 3D first-person shooter zombie game through conversational prompts fed into Cursor and Claude 3.7 Sonnet. Yang even used a speech-to-text app so he could verbally describe what he wanted to see and refine the prototype over time.


In August 2024, the author vibe coded his way into a working Q-BASIC utility script for MS-DOS, thanks to Claude Sonnet. Credit: Benj Edwards

We’ve been doing some vibe coding ourselves. Multiple Ars staffers have used AI assistants and coding tools for extracurricular hobby projects such as creating small games, crafting bespoke utilities, writing processing scripts, and more. Having a vibe-based code genie can come in handy in unexpected places: Last year, I asked Anthropic’s Claude to write a Microsoft Q-BASIC program for MS-DOS that decompressed 200 ZIP files into custom directories, saving me many hours of manual typing work.
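For a sense of what that kind of throwaway utility looks like, here’s a rough sketch of the same task in Python (not the Q-BASIC script Claude actually produced): unpack every ZIP archive in a folder into its own directory. The folder names are hypothetical.

```python
# Illustrative Python equivalent of the one-off utility described above:
# extract each ZIP file in a source folder into its own output directory.
# This is a sketch of the task, not the Q-BASIC script Claude generated.

import zipfile
from pathlib import Path

def unpack_all(source_dir: str, dest_dir: str) -> None:
    source = Path(source_dir)
    dest = Path(dest_dir)
    for archive in sorted(source.glob("*.zip")):
        target = dest / archive.stem              # one directory per archive
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
        print(f"Unpacked {archive.name} -> {target}")

if __name__ == "__main__":
    unpack_all("zips", "unpacked")  # hypothetical folder names
```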

Debugging the vibes

With all this vibe coding going on, we had to turn to an expert for some input. Simon Willison, an independent software developer and AI researcher, offered a nuanced perspective on AI-assisted programming in an interview with Ars Technica. “I really enjoy vibe coding,” he said. “It’s a fun way to try out an idea and prove if it can work.”

But there are limits to how far Willison will go. “Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial.”

At some point, understanding at least some of the code is important because AI-generated code may include bugs, misunderstandings, and confabulations—for example, instances where the AI model generates references to nonexistent functions or libraries.

“Vibe coding is all fun and games until you have to vibe debug,” developer Ben South noted wryly on X, highlighting this fundamental issue.

Willison recently argued on his blog that encountering hallucinations with AI coding tools isn’t as detrimental as embedding false AI-generated information into a written report, because coding tools have built-in fact-checking: If there’s a confabulation, the code won’t work. This provides a natural boundary for vibe coding’s reliability—the code runs or it doesn’t.
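To make that boundary concrete, here’s a small hypothetical example: when a model confabulates a function that doesn’t exist, the program fails loudly the moment it runs, rather than quietly passing along bad information.

```python
# Hypothetical illustration of a confabulated API call failing loudly.
# Python's json module has no parse() function (the real call is
# json.loads), so the first attempt raises AttributeError immediately.

import json

data = '{"padding": 8}'

try:
    settings = json.parse(data)  # confabulated function name
except AttributeError as err:
    print(f"Hallucination caught at runtime: {err}")

settings = json.loads(data)      # the real function works
print(settings["padding"])       # -> 8
```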

Even so, the risk-reward calculation for vibe coding becomes far more complex in professional settings. While a solo developer might accept the trade-offs of vibe coding for personal projects, enterprise environments typically require code maintainability and reliability standards that vibe-coded solutions may struggle to meet. When code doesn’t work as expected, debugging requires understanding what the code is actually doing—precisely the knowledge that vibe coding tends to sidestep.

Programming without understanding

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: “If an LLM wrote every line of your code, but you’ve reviewed, tested, and understood it all, that’s not vibe coding in my book—that’s using an LLM as a typing assistant.” Vibe coding, in contrast, involves accepting code without fully understanding how it works.

While vibe coding originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying.

Willison emphasizes that developers need to take accountability for their code: “I firmly believe that as a developer you have to take accountability for the code you produce—if you’re going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else.”

He also warns about a common path to technical debt: “For experiments and low-stakes projects where you want to explore what’s possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production.”

The future of programming jobs

So, is all this vibe coding going to cost human programmers their jobs? At its heart, programming has always been about telling a computer how to operate. The method of how we do that has changed over time, but there may always be people who are better at telling a computer precisely what to do than others—even in natural language. In some ways, those people may become the new “programmers.”

There was a point in the late 1970s to early ’80s when many people thought you needed programming skills to use a computer effectively, because there were very few pre-built applications for all the various computer platforms available. School systems worldwide launched computer literacy efforts to teach people to code.


A brochure for the GE 210 computer from 1964. BASIC’s creators used a similar computer four years later to develop the programming language that many children were taught at home and school. Credit: GE / Wikipedia

Before too long, people made useful software applications that let non-coders utilize computers easily—no programming required. Even so, programmers didn’t disappear—instead, they used applications to create better and more complex programs. Perhaps that will also happen with AI coding tools.

To use an analogy, computer-controlled technologies like autopilot made reliable supersonic flight possible because they could handle aspects of flight that were too taxing for all but the most highly trained and capable humans to safely control. AI may do the same for programming, allowing humans to abstract away complexities that would otherwise take too much time to manually code, and that may allow for the creation of more complex and useful software experiences in the future.

But at that point, will humans still be able to understand or debug them? Maybe not. We may be completely dependent on AI tools, and some people no doubt find that a little scary or unwise.

Whether vibe coding lasts in the programming landscape or remains a prototyping technique will likely depend less on the capabilities of AI models and more on the willingness of organizations to accept risky trade-offs in code quality, maintainability, and technical debt. For now, vibe coding remains an apt descriptor of the messy, experimental relationship between AI and human developers—more collaborative than autonomous, but increasingly blurring the lines of who (or what) is really doing the programming.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Andor S2 featurette teases canonical tragic event

Most of the main S1 cast is returning for S2, with the exception of Shaw. Forest Whitaker once again reprises his Rogue One role as Clone Wars veteran Saw Gerrera, joined by fellow Rogue One alums Ben Mendelsohn and Alan Tudyk as Orson Krennic and K-2SO, respectively. Benjamin Bratt has also been cast in an as-yet-undisclosed role.

The behind-the-scenes look opens with footage of a desperate emergency broadcast calling for help because Imperial ships were landing, filled with stormtroopers intent on quashing any protesters or nascent rebels against the Empire who might be lurking about. “Revolutionary movements are spontaneously happening all over the galaxy,” series creator Tony Gilroy explains. “How those come together is the stuff of our story.” While S1 focused a great deal on political intrigue, Genevieve O’Reilly, who plays Mon Mothma, describes S2 as a “juggernaut,” with a size and scope to match.

The footage shown—some new, some shown in last week’s teaser—confirms that assessment. There are glimpses of Gerrera, Krennic, and K-2SO, as well as Mothma’s home world, Chandrila. And are all those protesters chanting on the planet of Ghorman? That means we’re likely to see the infamous Ghorman Massacre, a brutal event that resulted in Mothma resigning from the Senate in protest against Emperor Palpatine. The massacre was so horrifying that it eventually served to mobilize and unite rebel forces across the galaxy in the Star Wars canon.

The first three (of 12) episodes of Andor S2 premiere on April 22, 2025, on Disney+. Subsequent three-episode chapters will drop weekly for the next three weeks after that.

poster art for Andor S2

Credit: LucasFilm/Disney+



Google tells Trump’s DOJ that forcing a Chrome sale would harm national security

Close-up of the Google Chrome web browser.

Credit: Getty Images

The government’s 2024 request also sought to have Google’s investment in AI firms curtailed even though this isn’t directly related to search. If, like Google, you believe leadership in AI is important to the future of the world, limiting its investments could also affect national security. But in November, Mehta suggested he was open to considering AI remedies because “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market.

This perspective could be more likely to find supporters in the newly AI-obsessed US government with a rapidly changing Department of Justice. However, the DOJ has thus far opposed allowing AI firm Anthropic to participate in the case after it recently tried to intervene. Anthropic has received $3 billion worth of investments from Google, including $1 billion in January.

New year, new Justice Department

Google naturally opposed the government’s early remedy proposal, but this happened in November, months before the incoming Trump administration began remaking the DOJ. Since taking office, the new administration has routinely criticized the harsh treatment of US tech giants, taking aim at European Union laws like the Digital Markets Act, which tries to ensure user privacy and competition among so-called “gatekeeper” tech companies like Google.

We may get a better idea of how the DOJ wants to proceed later this week when both sides file their final proposals with Mehta. Google already announced its preferred remedy at the tail end of 2024. It’s unlikely Google’s final version will be any different, but everything is up in the air for the government.

Even if current political realities don’t affect the DOJ’s approach, the department’s staffing changes could. Many of the people handling Google’s case today are different than they were just a few months ago, so arguments that fell on deaf ears in 2024 could move the needle. Perhaps emphasizing the national security angle will resonate with the newly restaffed DOJ.

After both sides have had their say, it will be up to the judge to eventually rule on how Google must adapt its business. This remedy phase should get fully underway in April.



Threat posed by new VMware hyperjacking vulnerabilities is hard to overstate

Three critical vulnerabilities in multiple virtual-machine products from VMware can give hackers unusually broad access to some of the most sensitive environments inside multiple customers’ networks, the company and outside researchers warned Tuesday.

The class of attack made possible by exploiting the vulnerabilities is known under several names, including hyperjacking, hypervisor attack, or virtual machine escape. Virtual machines often run inside hosting environments to prevent one customer from being able to access or control the resources of other customers. By breaking out of one customer’s isolated VM environment, a threat actor could take control of the hypervisor that apportions each VM. From there, the attacker could access the VMs of multiple customers, who often use these carefully controlled environments to host their internal networks.

All bets off

“If you can escape to the hypervisor you can access every system,” security researcher Kevin Beaumont said on Mastodon. “If you can escape to the hypervisor, all bets are off as a boundary is broken.” He added: “With this vuln you’d be able to use it to traverse VMware managed hosting providers, private clouds orgs have built on prem etc.”

VMware warned Tuesday that it has evidence suggesting the vulnerabilities are already under active exploitation in the wild. The company didn’t elaborate. Beaumont said the vulnerabilities affect “every supported (and unsupported)” version in VMware’s ESXi, Workstation, Fusion, Cloud Foundation, and Telco Cloud Platform product lines.



Butch Wilmore says Elon Musk is “absolutely factual” on Dragon’s delayed return

For what it is worth, all of the reporting done by Ars over the last nine months suggests the decision to return Wilmore and Williams this spring was driven by technical reasons and NASA’s needs on board the International Space Station, rather than because of politics.

Q. How do you feel about waking up and finding yourself in a political storm?

Wilmore: I can tell you at the outset, all of us have the utmost respect for Mr. Musk, and obviously, respect and admiration for our president of the United States, Donald Trump. We appreciate them. We appreciate all that they do for us, for human space flight, for our nation. The words they said, politics, I mean, that’s part of life. We understand that. And there’s an important reason why we have a political system, a political system that we do have, and we’re behind it 100 percent. We know what we’ve lived up here, the ins and outs, and the specifics that they may not be privy to. And I’m sure that they have some issues that they are dealing with, information that they have, that we are not privy to. So when I think about your question, that’s part of life, we are on board with it.

Q. Did politics influence NASA’s decision for you to stay longer in space?

Wilmore: From my standpoint, politics is not playing into this at all. From our standpoint, I think that they would agree, we came up prepared to stay long, even though we plan to stay short. That’s what we do in human spaceflight. That’s what your nation’s human space flight program is all about, planning for unknown, unexpected contingencies. And we did that, and that’s why we flowed right into Crew 9, into Expedition 72 as we did. And it was somewhat of a seamless transition, because we had planned ahead for it, and we were prepared.



TSMC to invest $100B as Trump demands more US-made chips, report says

Currently, TSMC only builds its most advanced chips in Taiwan. But when the most advanced US fabs are operational, they’ll be prepared to manufacture “tens of millions of leading-edge chips” to “power products like 5G/6G smartphones, autonomous vehicles, and AI datacenter servers,” the Commerce Department said in 2024.

TSMC has not confirmed the WSJ’s report but provided a statement: “We’re pleased to have an opportunity to meet with the President and look forward to discussing our shared vision for innovation and growth in the semiconductor industry, as well as exploring ways to bolster the technology sector along with our customers.”

Trump threat of semiconductor tariffs still looms

Advanced chips are regarded as critical for AI innovation, which Trump has prioritized, as well as for national security.

Without a steady supply, the US risks substantial technological and economic losses as well as potential weakening of its military.

To avert that, Trump campaigned on imposing tariffs that he claimed would drive more semiconductor manufacturing into the US, while criticizing the CHIPS Act for costing the US billions. Following through on that promise, in February, he threatened a “25 percent or more tariff” on all semiconductor imports, the WSJ reported. According to CNBC, Trump suggested those tariffs could be in effect by April 2.

“We have to have chips made in this country,” Trump said last month. “Right now, everything is made in Taiwan, practically, almost all of it, a little bit in South Korea, but everything—almost all of it is made in Taiwan. And we want it to be made—we want those companies to come to our country, in all due respect.”

While it’s unclear if Trump plans to overtly kill the CHIPS Act, his government funding cuts could trigger a future where the CHIPS Act dies because no workers are left to certify that companies meet the requirements for ongoing award disbursements, semiconductor industry consulting group Semiconductor Advisors warned in a statement last month.

“If I were running a chip company, I would not count on CHIPS Act funding, even if I had a signed contract,” SA’s statement said.



Driving an EV restomod that costs as much as a house—the JIA Chieftain

The Chieftain Range Rover is a fascinating thing—a refitted, reskinned, restored classic Range Rover is no new thing, nor is one with a ludicrous American V8 stuffed under the hood. But one that can be had as a gas car, plug-in hybrid, or as an EV? It can be all of those things depending on which boxes you tick. Ars Technica went for a spin in the EV to see how it stacks up.

The UK is something of an EV restomod hub. It’s been putting electricity into things that didn’t come off the line electrified in the first place for years. Businesses like Electrogenic, Lunaz, and Everrati will, for a price, make an old car feel a little more peppy—depending on who you go to, it’ll come back restored as well. The Chieftain isn’t quite like them. Developed by Oxfordshire, UK-based Jensen International Automotive (the company’s bread ’n’ butter is Jensen Interceptors), the Chieftain is an old Range Rover turned up to VERY LOUD. Or, actually, not loud at all.

Of course, these things come at a cost. A Chieftain EV Range Rover conversion, today, will set you back at least $568,000 should you choose to order one. This one was a private commission, and at that price there won’t be any built on spec on the off chance someone wants to buy one “off the peg.” It is, by any measure, a huge amount for an old car, but they’re custom-built from start to finish.

The Range Rover has aged well. Credit: Alex Goy

Yours will be made to your specification, have CarPlay/Android Auto, and the sort of mod cons one would expect in the 2020s. Under its perfectly painted shell—the color is your choice, of course—lives a 120 kWh battery. It’s made of packs mounted under the hood and in the rear, firing power to all four wheels via three motors: one at the front, and two at the rear. The tri-motor setup can theoretically produce around 650 hp (485 kW), but it’s pared back to a smidge over 405 hp (302 kW), so it doesn’t eat its tires on a spirited launch. There’s a 60:40 rear-to-front torque split to keep things exciting if that’s your jam. Air suspension keeps occupants comfortable and insulated from the world around them.
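For anyone double-checking the unit conversions in that spec, here’s a quick sketch using the standard mechanical-horsepower factor (1 hp ≈ 0.7457 kW); the quoted kilowatt figures line up.

```python
# Sanity check of the horsepower-to-kilowatt figures quoted above,
# using the mechanical horsepower conversion (1 hp ≈ 0.7457 kW).

HP_TO_KW = 0.7457

for hp in (650, 405):
    print(f"{hp} hp ≈ {hp * HP_TO_KW:.0f} kW")

# Output:
# 650 hp ≈ 485 kW
# 405 hp ≈ 302 kW
```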
