Author name: Kelly Newman

Gmail gains Gemini-powered “Add to calendar” button

Google has a new mission in the AI era: to add Gemini to as many of the company’s products as possible. We’ve already seen Gemini appear in search results, text messages, and more. In Google’s latest update to Workspace, Gemini will be able to add calendar appointments from Gmail with a single click. Well, assuming Gemini gets it right the first time, which is far from certain.

The new calendar button will appear at the top of emails, right next to the summarize button that arrived last year. The calendar option will show up in Gmail threads with actionable meeting chit-chat, allowing you to mash that button to create an appointment in one step. The Gemini sidebar will open to confirm the appointment was made, which is a good opportunity to double-check the robot. There will be a handy edit button in the Gemini window in the event it makes a mistake. However, the robot can’t invite people to these events yet.

The effect of using the button is the same as opening the Gemini panel and asking it to create an appointment. The new functionality is simply detecting events and offering the button as a shortcut of sorts. You should not expect to see this button appear on messages that already have calendar integration, like dining reservations and flights. Those already pop up in Google Calendar without AI.


After less than a day, the Athena lander is dead on the Moon

NASA expected Athena to have a reasonable chance of success. Although it landed on its side, Odysseus was generally counted as a win because it accomplished most of its tasks. Accordingly, NASA loaded a number of instruments onto the lander. Most notable among these was the PRIME-1 experiment, an ice drill to sample and analyze any ice that lies below the surface.

A dark day, but not the end

“After landing, mission controllers were able to accelerate several program and payload milestones, including NASA’s PRIME-1 suite, before the lander’s batteries depleted,” the company’s statement said. However, this likely means that the company was able to contact the instrument but not perform any meaningful scientific activities.

NASA has accepted that these commercial lunar missions are high-risk, high-reward. (Firefly’s successful landing last weekend offers an example of the high rewards.) It is paying the companies, on average, $100 million or less per flight. This is a fraction of what NASA would pay through a traditional procurement program. The hope is that, after surviving initial failures, companies like Intuitive Machines will learn from their mistakes and open a low-cost, reliable pathway to the lunar surface.

Even so, this failure has to be painful for NASA and Intuitive Machines. The space agency lost out on some valuable science, and Intuitive Machines has taken a step backward with this mission rather than moving forward as it had hoped to do.

Fortunately, this is unlikely to be the end for the company. NASA has committed to a third and fourth mission on Intuitive Machines’ lander, the next of which could come during the first quarter of 2026. NASA has also contracted with the company to build a small network of satellites around the Moon for communications and positioning services. So although the company’s fortunes look dark today, they are not permanently shadowed like the craters on the Moon that NASA hopes to soon explore.


Blood Typers is a terrifically tense, terror-filled typing tutor

When you think about it, the keyboard is the most complex video game controller in common use today, with over 100 distinct inputs arranged in a vast grid. Yet even the most complex keyboard-controlled games today tend to only use a relative handful of all those available keys for actual gameplay purposes.

The biggest exception to this rule is a typing game, which by definition asks players to send their fingers flying across every single letter on the keyboard (and then some) in quick succession. By default, though, typing games tend to take the form of extremely basic typing tutorials, where the gameplay amounts to little more than typing out words and sentences by rote as they appear on screen, maybe with a few cute accompanying animations.

Typing “gibbon” quickly has rarely felt this tense or important. Credit: Outer Brain Studios

Blood Typers adds some much-needed complexity to that basic type-the-word-you-see concept, layering its typing tests on top of a full-fledged survival horror game reminiscent of the original PlayStation era. The result is an amazingly tense and compelling action adventure that also serves as a great way to hone your touch-typing skills.

See it, type it, do it

For some, Blood Typers may bring up first-glance memories of Typing of the Dead, Sega’s campy, typing-controlled take on the House of the Dead light gun game series. But Blood Typers goes well beyond Typing of the Dead’s on-rails shooting, offering an experience that’s more like a typing-controlled version of Resident Evil.

Practically every action in Blood Typers requires typing a word that you see on-screen. That includes basic locomotion, which is accomplished by typing any of a number of short words scattered at key points in your surroundings in order to automatically walk to that point. It’s a bit awkward at first, but it quickly becomes second nature as you memorize the names of various checkpoints and adjust to using the shift keys to turn the camera as you move.

Each of those words on the ground is a waypoint that you can type to move toward. Credit: Outer Brain Studios

When any number of undead enemies appear, a quick tap of the tab key switches you to combat mode, which asks you to type longer words that appear above those enemies to use your weapons. More difficult enemies require multiple words to take down, including some with armor that requires typing a single word repeatedly before you can move on.

While you start each scenario in Blood Typers with a handy melee weapon, you’ll end up juggling a wide variety of projectile firearms that feel uniquely tuned to the typing gameplay. The powerful shotgun, for instance, can take out larger enemies with just a single word, while the SMG lets you type only the first few letters of each word, allowing for a sort of rapid-fire feel. The flamethrower, on the other hand, can set whole groups of nearby enemies aflame, which makes each subsequent attack word that much shorter and faster.
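Purely as an illustration of the idea (this is a hypothetical sketch, not the game’s actual code), the SMG’s prefix mechanic amounts to capping how many leading letters a weapon demands before a word counts as cleared:

```python
# Illustrative sketch of an SMG-style prefix mechanic: each enemy word
# is cleared once a fixed number of leading letters have been typed,
# instead of the whole word. The names and the prefix length here are
# assumptions for demonstration, not values from Blood Typers itself.
SMG_PREFIX_LEN = 3  # assumed: only the first few letters are required


def letters_required(word: str, weapon: str) -> int:
    """How many keystrokes this weapon needs to clear a word."""
    if weapon == "smg":
        # Rapid fire: a short prefix suffices (capped by word length).
        return min(SMG_PREFIX_LEN, len(word))
    # Shotgun, melee, etc. demand the full word.
    return len(word)


print(letters_required("necropolis", "smg"))      # 3
print(letters_required("necropolis", "shotgun"))  # 10
```

Capping the prefix rather than shortening the displayed word keeps the on-screen vocabulary varied while still making the weapon feel faster.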


The X-37B spaceplane lands after helping pave the way for “maneuver warfare”

On this mission, military officials said the X-37B tested “space domain awareness technology experiments” that aim to improve the Space Force’s knowledge of the space environment. Defense officials consider the space domain—like land, sea, and air—a contested environment that could become a battlefield in future conflicts.

Last month, the Space Force released the first image of Earth from an X-37B in space. This image was captured in 2024 as the spacecraft flew in its high-altitude orbit, and shows a portion of the X-37B’s power-generating solar array. Credit: US Space Force

The Space Force hasn’t announced plans for the next X-37B mission. Typically, the next X-37B flight has launched within a year of the prior mission’s landing. So far, all of the X-37B flights have launched from Florida, with landings at Vandenberg and at NASA’s Kennedy Space Center, where Boeing and the Space Force refurbish the spaceplanes between missions.

The aerobraking maneuvers demonstrated by the X-37B could find applications on future operational military satellites, according to Gen. Stephen Whiting, head of US Space Command.

“The X-37 is a test and experimentation platform, but that aerobraking maneuver allowed it to bridge multiple orbital regimes, and we think this is exactly the kind of maneuverability we’d like to see in future systems, which will unlock a whole new series of operational concepts,” Whiting said in December at the Space Force Association’s Spacepower Conference.

Space Command’s “astrographic” area of responsibility (AOR) starts at the top of Earth’s atmosphere and extends to the Moon and beyond.

“An irony of the space domain is that everything in our AOR is in motion, but rarely do we use maneuver as a way to gain positional advantage,” Whiting said. “We believe at US Space Command it is vital, given the threats we now see in novel orbits that are hard for us to get to, as well as the fact that the Chinese have been testing on-orbit refueling capability, that we need some kind of sustained space maneuver.”

Improvements in maneuverability would have benefits in surveilling an adversary’s satellites, as well as in defensive and offensive combat operations in orbit.

The Space Force could attain the capability for sustained maneuvers—known in some quarters as dynamic space operations—in several ways. One is to utilize in-orbit refueling that allows satellites to “maneuver without regret,” and another is to pursue more fuel-efficient means of changing orbits, such as aerobraking or solar-electric propulsion.

Then, Whiting said Space Command could transform how it operates by employing “maneuver warfare” as the Army, Navy, and Air Force do. “We think we need to move toward a joint function of true maneuver advantage in space.”


Feds arrest man for sharing DVD rip of Spider-Man movie with millions online

A 37-year-old Tennessee man was arrested Thursday, accused of stealing Blu-rays and DVDs from a manufacturing and distribution company used by major movie studios and sharing them online before the movies’ scheduled release dates.

According to a US Department of Justice press release, Steven Hale worked at the DVD company and allegedly stole “numerous ‘pre-release’ DVDs and Blu-rays” between February 2021 and March 2022. He then allegedly “ripped” the movies, “bypassing encryption that prevents unauthorized copying,” and shared copies widely online. He also allegedly sold the actual stolen discs on e-commerce sites, the DOJ said.

Hale has been charged with “two counts of criminal copyright infringement and one count of interstate transportation of stolen goods,” the DOJ said. He faces a maximum sentence of five years for the former, and 10 years for the latter.

Among the blockbuster movies that Hale is accused of stealing are Dune, F9: The Fast Saga, Venom: Let There Be Carnage, Godzilla vs. Kong, and, perhaps most notably, Spider-Man: No Way Home.

The DOJ claimed that “copies of Spider-Man: No Way Home were downloaded tens of millions of times, with an estimated loss to the copyright owner of tens of millions of dollars.”

In 2021, when the Spider-Man movie was released in theaters only, it became the first movie during the COVID-19 pandemic to gross more than $1 billion at the box office, Forbes noted. But for those unwilling to venture out to see the movie, Forbes reported, the temptation to find leaks and torrents apparently became hard to resist. It was in this climate that Hale is accused of widely sharing copies of the movie before it was released online.


iPhone 16e review: The most expensive cheap iPhone yet


The iPhone 16e rethinks—and prices up—the basic iPhone.

The iPhone 16e, with a notch and an Action Button. Credit: Samuel Axon

For a long time, the cheapest iPhones were basically just iPhones that were older than the current flagship, but last week’s release of the $600 iPhone 16e marks a big change in how Apple is approaching its lineup.

Rather than a repackaging of an old iPhone, the 16e is the latest main iPhone—that is, the iPhone 16—with a bunch of stuff stripped away.

There are several potential advantages to this change. In theory, it allows Apple to support its lower-end offerings for longer with software updates, and it gives entry-level buyers access to more current technologies and features. It also simplifies the marketplace of accessories and the like.

There’s bad news, too, though: Since it replaces the much cheaper iPhone SE in Apple’s lineup, the iPhone 16e significantly raises the financial barrier to entry for iOS (the SE started at $430).

We spent a few days trying out the 16e and found that it’s a good phone—it’s just too bad it’s a little more expensive than the entry-level iPhone should ideally be. In many ways, this phone solves more problems for Apple than it does for consumers. Let’s explore why.


A beastly processor for an entry-level phone

Like the 16, the 16e has Apple’s A18 chip, the most recent in the made-for-iPhone line of Apple-designed chips. There’s only one notable difference: This variation of the A18 has just four GPU cores instead of five. That will show up in benchmarks and in a handful of 3D games, but it shouldn’t make too much of a difference for most people.

It’s a significant step up over the A15 found in the final 2022 refresh of the iPhone SE, enabling a handful of new features like AAA games and Apple Intelligence.

The A18’s inclusion is good for both Apple and the consumer; Apple gets to establish a new, higher baseline of performance when developing new features for current and future handsets, and consumers likely get many more years of software updates than they’d get on the older chip.

The key example of a feature enabled by the A18 that Apple would probably like us all to talk about the most is Apple Intelligence, a suite of features utilizing generative AI to solve some user problems or enable new capabilities across iOS. By enabling these for the cheapest iPhone, Apple is making its messaging around Apple Intelligence a lot easier; it no longer needs to put effort into clarifying that you can use X feature with this new iPhone but not that one.

We’ve written a lot about Apple Intelligence already, but here’s the gist: There are some useful features here in theory, but Apple’s models are clearly a bit behind the cutting edge, and results for things like notification summaries or writing tools are pretty mixed. It’s fun to generate original emojis, though!

The iPhone 16e can even use Visual Intelligence, which actually is handy sometimes. On my iPhone 16 Pro Max, I can point the rear camera at an object and press the camera button a certain way to get information about it.

I wouldn’t have expected the 16e to support this, but it does, via the Action Button (which was first introduced in the iPhone 15 Pro). This is a reprogrammable button that can perform a variety of functions, albeit just one at a time. Visual Intelligence is one of the options here, which is pretty cool, even though it’s not essential.

The screen is the biggest upgrade over the SE

Also like the 16, the 16e has a 6.1-inch display. The resolution’s a bit different, though; it’s 2,532 by 1,170 pixels instead of 2,556 by 1,179. It also has a notch instead of the Dynamic Island seen in the 16. All this makes the iPhone 16e’s display seem like a very close match to the one seen in 2022’s iPhone 14—in fact, it might literally be the same display.

I really missed the Dynamic Island while using the iPhone 16e—it’s one of my favorite new features added to the iPhone in recent years, as it consolidates what was previously a mess of notification schemes in iOS. Plus, it’s nice to see things like Uber and DoorDash ETAs and sports scores at a glance.

The main problem with losing the Dynamic Island is that we’re back to the old mess of notification approaches, and I guess Apple has to keep supporting the old ways for a while yet. That genuinely surprises me; I would have thought Apple would want to unify notifications and activities with the Dynamic Island, just as the A18 allows the standardization of other features.

This seems to indicate that the Dynamic Island is a fair bit more expensive to include than the good old camera notch flagship iPhones had been rocking since 2017’s iPhone X.

That compromise aside, the display on the iPhone 16e is ridiculously good for a phone at this price point, and it makes the old iPhone SE’s small LCD display look like it’s from another eon entirely by comparison. It gets brighter for both HDR content and sunny-day operation; the blacks are inky and deep, and the contrast and colors are outstanding.

It’s the best thing about the iPhone 16e, even if it isn’t quite as refined as the screens in Apple’s current flagships. Most people would never notice the difference between the screens in the 16e and the iPhone 16 Pro, though.

There is one other screen feature I miss from the higher-end iPhones you can buy in 2025: Those phones can drop the display all the way down to 1 nit, which is awesome for using the phone late at night in bed without disturbing a sleeping partner. Like earlier iPhones, the 16e can only get so dark.

It gets quite bright, though; Apple claims it typically reaches 800 nits of peak brightness but can stretch to 1,200 nits when viewing certain HDR photos and videos. That means it gets about twice as bright as the SE did.

Connectivity is key

The iPhone 16e supports the core suite of connectivity options found in modern phones. There’s Wi-Fi 6, Bluetooth 5.3, and Apple’s usual limited implementation of NFC.

There are three new things of note here, though, and they’re good, neutral, and bad, respectively.

USB-C

Let’s start with the good. We’ve moved from Apple’s proprietary Lightning port found in older iPhones (including the final iPhone SE) toward USB-C, now a near-universal standard on mobile devices. It allows faster charging and more standardized charging cable support.

Sure, it’s a bummer to start over if you’ve spent years buying Lightning accessories, but it’s absolutely worth it in the long run. This change means that the entire iPhone line has now abandoned Lightning, so all iPhones and Android phones will have the same main port for years to come. Finally!

The finality of this shift solves a few problems for Apple: It greatly simplifies the accessory landscape and allows the company to move toward producing a smaller range of cables.

Satellite connectivity

Recent flagship iPhones have gradually added a small suite of features that utilize satellite connectivity to make life a little easier and safer.

Among those are crash detection and roadside assistance. The former uses the phone’s sensors to detect whether you’ve been in a car crash and contacts help, while the latter allows you to text for help when you’re outside of cellular reception in the US and UK.

There are also Emergency SOS and Find My via satellite, which let you communicate with emergency responders from remote places and allow you to be found.

Along with a more general feature that allows Messages via satellite, these features can greatly expand your options if you’re somewhere remote, though they’re not as easy to use and responsive as using the regular cellular network.

Where’s MagSafe?

I don’t expect the 16e to have all the same features as the 16, which is $200 more expensive. In fact, it has more modern features than I think most of its target audience needs (more on that later). That said, there’s one notable omission that makes no sense to me at all.

The 16e does not support MagSafe, a standard for connecting accessories to the back of the device magnetically, often while allowing wireless charging via the Qi standard.

Qi wireless charging is still supported, albeit at a slow 7.5 W, but there are no magnets, meaning a lot of existing MagSafe accessories are a lot less useful with this phone, if they’re usable at all. To be fair, the SE didn’t support MagSafe either, but every new iPhone design since the iPhone 12 way back in 2020 has—and not just the premium flagships.
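For a rough sense of what 7.5 W means in practice, here is a back-of-envelope estimate using the 16e’s reported 3,961 mAh battery. The nominal cell voltage and charging efficiency below are typical lithium-ion ballpark assumptions, not Apple-published figures, and real charging tapers as the battery nears full, so treat the result as a floor:

```python
# Back-of-envelope Qi charge-time estimate for the iPhone 16e.
# Assumptions (NOT Apple-published figures): nominal cell voltage and
# wall-to-battery efficiency are generic lithium-ion ballpark values.
CAPACITY_MAH = 3961      # reported battery capacity
NOMINAL_VOLTAGE = 3.85   # assumed nominal Li-ion cell voltage (V)
QI_POWER_W = 7.5         # the 16e's Qi wireless charging rate
EFFICIENCY = 0.80        # assumed charging efficiency

energy_wh = CAPACITY_MAH / 1000 * NOMINAL_VOLTAGE  # ~15.2 Wh
hours = energy_wh / (QI_POWER_W * EFFICIENCY)      # ~2.5 hours
print(f"Battery energy: {energy_wh:.1f} Wh")
print(f"Estimated 0-100% Qi charge time: {hours:.1f} hours")
```

Even under these generous assumptions, a full wireless charge takes well over two hours, which is why the slow Qi rate stings more than the missing magnets for some users.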

It’s not like the MagSafe accessory ecosystem was some bottomless well of innovation, but that magnetic alignment is handier than you might think, whether we’re talking about making sure the phone locks into place for the fastest wireless charging speeds or hanging the phone on a car dashboard to use GPS on the go.

It’s one of those things where folks coming from much older iPhones may not care because they don’t know what they’re missing, but it could be annoying in households with multiple generations of iPhones, and it just doesn’t make any sense.

Most of Apple’s choices in the 16e seem to serve the goal of unifying the whole iPhone lineup to simplify the message for consumers and make things easier for Apple to manage efficiently, but the dropping of MagSafe is bizarre.

It almost makes me think that Apple might plan to drop MagSafe from future flagship iPhones, too, and go toward something new, just because that’s the only explanation I can think of. That otherwise seems unlikely to me right now, but I guess we’ll see.

The first Apple-designed cellular modem

We’ve been seeing rumors that Apple planned to drop third-party modems from companies like Qualcomm for years. As far back as 2018, Apple was poaching Qualcomm employees in an adjacent office in San Diego. In 2020, Apple SVP Johny Srouji announced to employees that work had begun.

It sounds like development has been challenging, but the first Apple-designed modem has arrived here in the 16e of all places. Dubbed the C1, it’s… perfectly adequate. It’s about as fast or maybe just a smidge slower than what you get in the flagship phones, but almost no user would notice any difference at all.

That’s really a win for Apple, which has struggled with a tumultuous relationship with its partners here for years and which has long run into space problems in its phones in part because the third-party modems weren’t compact enough.

This change may not matter much for the consumer beyond freeing up just a tiny bit of space for a slightly larger battery, but it’s another step in Apple’s long journey to ultimately and fully control every component in the iPhone that it possibly can.

Bigger is better for batteries

There is one area where the 16e is actually superior to the 16, much less the SE: battery life. The 16e reportedly has a 3,961 mAh battery, the largest in any of the many iPhones with roughly this size screen. Apple says it offers up to 26 hours of video playback, which is the kind of number you expect to see in a much larger flagship phone.

I charged this phone three times in just under a week of use, though I wasn’t heavily hitting 5G networks, playing many 3D games, or cranking the brightness way up all the time.

That’s a bit of a bump over the 16, but it’s a massive leap over the SE, which promised a measly 15 hours of video playback. Every single phone in Apple’s lineup now has excellent battery life by any standard.

Quality over quantity in the camera system

The 16e’s camera system leaves the SE in the dust, but it’s no match for the robust system found in the iPhone 16. Regardless, it’s way better than you’d typically expect from a phone at this price.

Like the 16, the 16e has a 48 MP “Fusion” wide-angle rear camera. It typically doesn’t take photos at 48 MP (though you can do that while compromising color detail). Rather, 24 MP is the target. The 48 MP camera enables 2x zoom that is nearly visually indistinguishable from optical zoom.
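The near-optical 2x zoom falls out of simple arithmetic: cropping the central half-width and half-height of a 48 MP frame keeps a quarter of the pixels, which is still about 12 MP, so no upscaling is needed. A quick sketch (the 4:3 pixel dimensions here are illustrative assumptions, not Apple’s published sensor specs):

```python
# Why a 48 MP sensor yields "optical-quality" 2x zoom: cropping the
# central region at 2x keeps 1/4 of the pixels, still a ~12 MP frame.
# Sensor dimensions below are illustrative 4:3 values, not Apple specs.
full_w, full_h = 8064, 6048          # ~48.8 MP, a plausible 4:3 layout
zoom = 2
crop_w, crop_h = full_w // zoom, full_h // zoom

full_mp = full_w * full_h / 1e6      # ~48.8 MP
crop_mp = crop_w * crop_h / 1e6      # ~12.2 MP -- no upscaling needed
print(f"Full sensor: {full_mp:.1f} MP, 2x crop: {crop_mp:.1f} MP")
```

Because the cropped frame still exceeds common output resolutions, the result looks like true optical zoom rather than a blown-up digital crop.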

Based on both the specs and photo comparisons, the main camera sensor in the 16e appears to me to be exactly the same as the one found in the 16. We’re just missing the ultra-wide lens (which allows more zoomed-out photos, ideal for groups of people in small spaces, for example) and several extra features like advanced image stabilization, the newest Photographic Styles, and macro photography.

The iPhone 16e takes excellent photos in bright conditions. Credit: Samuel Axon

That’s a lot of missing features, sure, but it’s wild how good this camera is for this price point. Even something like the Pixel 8a can’t touch it (though to be fair, the Pixel 8a is $100 cheaper).

Video capture is a similar situation: The 16e shoots at the same resolutions and framerates as the 16, but it lacks a few specialized features like Cinematic and Action modes. There’s also a front-facing camera with the TrueDepth sensor for Face ID in that notch, and it has comparable specs to the front-facing cameras we’ve seen in a couple of years of iPhones at this point.

If you were buying a phone for the cameras, this wouldn’t be the one for you. It’s absolutely worth paying another $200 for the iPhone 16 (or even just $100 for the iPhone 15 for the ultra-wide lens for 0.5x zoom; the 15 is still available in the Apple Store) if that’s your priority.

The iPhone 16’s macro mode isn’t available here, so ultra-close-ups look fuzzy. Credit: Samuel Axon

But for the 16e’s target consumer (mostly folks with the iPhone 11 or older or an iPhone SE, who just want the cheapest functional iPhone they can get) it’s almost overkill. I’m not complaining, though it’s a contributing factor to the phone’s cost compared to entry-level Android phones and Apple’s old iPhone SE.

RIP small phones, once and for all

In one fell swoop, the iPhone 16e’s replacement of the iPhone SE eliminates a whole range of legacy technologies that have held on at the lower end of the iPhone lineup for years. Gone are Touch ID, the home button, LCD displays, and Lightning ports—they’re replaced by Face ID, swipe gestures, OLED, and USB-C.

Newer iPhones have had most of those things for quite some time. The latest feature was USB-C, which came in 2023’s iPhone 15. The removal of the SE from the lineup catches the bottom end of the iPhone up with the top in these respects.

That said, the SE had maintained one positive differentiator, too: It was small enough to be used one-handed by almost anyone. With the end of the SE and the release of the 16e, the one-handed iPhone is well and truly dead. Of course, most people have been clear they want big screens and batteries above almost all else, so the writing had been on the wall for a while for smaller phones.

The death of the iPhone SE ushers in a new era for the iPhone with bigger and better features—but also bigger price tags.

A more expensive cheap phone

Assessing the iPhone 16e is a challenge. It’s objectively a good phone—good enough for the vast majority of people. It has a nearly top-tier screen (though it clocks in at 60Hz, while some Android phones close to this price point manage 120Hz), a camera system that delivers on quality even if it lacks special features seen in flagships, strong connectivity, and performance far above what you’d expect at this price.

If you don’t care about extra camera features or nice-to-haves like MagSafe or the Dynamic Island, it’s easy to recommend saving a couple hundred bucks compared to the iPhone 16.

The chief criticism I have that relates to the 16e has less to do with the phone itself than Apple’s overall lineup. The iPhone SE retailed for $430, nearly half the price of the 16. By making the 16e the new bottom of the lineup, Apple has significantly raised the financial barrier to entry for iOS.

Now, it’s worth mentioning that a pretty big swath of the target market for the 16e will buy it subsidized through a carrier, so they might not pay that much up front. I always recommend buying a phone directly if you can, though, as carrier subsidization deals are usually worse for the consumer.

The 16e’s price might push more people to go for the subsidy. Plus, it’s just more phone than some people need. For example, I love a high-quality OLED display for watching movies, but I don’t think the typical iPhone SE customer was ever going to care about that.

That’s why I believe the iPhone 16e solves more problems for Apple than it does for the consumer. In multiple ways, it allows Apple to streamline production, software support, and marketing messaging. It also drives up the average price per unit across the whole iPhone line and will probably encourage some people who would have spent $430 to spend $600 instead, possibly improving revenue. All told, it’s a no-brainer for Apple.

It’s just a mixed bag for the sort of no-frills consumer who wants a minimum viable phone and who for one reason or another didn’t want to go the Android route. The iPhone 16e is definitely a good phone—I just wish there were more options for that consumer.

The good

  • Dramatically improved display compared to the iPhone SE
  • Likely stronger long-term software support than most previous entry-level iPhones
  • Good battery life and incredibly good performance for this price point
  • A high-quality camera, especially for the price

The bad

  • No ultra-wide camera
  • No MagSafe
  • No Dynamic Island

The ugly

  • Significantly raises the entry price point for buying an iPhone


Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.


Starlink benefits as Trump admin rewrites rules for $42B grant program

Don’t be “technology-blind,” broadband group says

The Benton Institute for Broadband & Society criticized what it called “Trump’s BEAD meddling,” saying it would “leave millions of Americans with broadband that is slower, less reliable, and more expensive.” The shift to a “technology-neutral” approach should not be “technology-blind,” the advocacy group said.

“Fiber broadband is widely understood to be better than other Internet options—like Starlink’s satellites—because it delivers significantly faster speeds, is more reliable due to its resistance to interference (from weather, foliage, terrain, etc), has higher bandwidth capacity, and offers symmetrical upload and download speeds, making it ideal for activities like telehealth, online learning, streaming, and gaming that require consistent high performance,” the group said.

It’s ultimately up to individual states to distribute funds to ISPs after getting their allocations from the US government, though the states have to follow rules issued by federal officials. No one knows exactly how much each Internet provider will receive, but a Wall Street Journal report this week said the new rules could help Starlink get nearly half of the available funding.

“Under the BEAD program’s original rules, Starlink was expected to get up to $4.1 billion, said people familiar with the matter. With Lutnick’s overhaul, Starlink, a unit of Musk’s SpaceX, could receive $10 billion to $20 billion, they said,” according to the WSJ report.

The end of BEAD’s fiber preference would also help cable and fixed wireless providers access grant funding. Lobby groups for those industries have been calling for rule changes to help their members obtain grants.

While the Commerce Department is moving ahead with BEAD changes on its own, Republicans are also proposing a rewrite of the law. House Communications and Technology Subcommittee Chairman Richard Hudson (R-N.C.) yesterday announced legislation that his office said would eliminate “burdensome conditions imposed by the Biden-Harris Administration, including those related to labor, climate change, and rate regulation, that made deployment more expensive and participation less attractive.”


You knew it was coming: Google begins testing AI-only search results

Google has become so integral to online navigation that its name became a verb, meaning “to find things on the Internet.” Soon, Google might just tell you what’s on the Internet instead of showing you. The company has announced an expansion of its AI search features, powered by Gemini 2.0. Everyone will soon see more AI Overviews at the top of the results page, but Google is also testing a more substantial change in the form of AI Mode. This version of Google won’t show you the 10 blue links at all—Gemini completely takes over the results in AI Mode.

This marks the debut of Gemini 2.0 in Google search. Google announced the first Gemini 2.0 models in December 2024, beginning with the streamlined Gemini 2.0 Flash. The heavier versions of Gemini 2.0 are still in testing, but Google says it has tuned AI Overviews with this model to offer help with harder questions in the areas of math, coding, and multimodal queries.

With this update, you will begin seeing AI Overviews on more results pages, and minors with Google accounts will see AI results for the first time. In fact, even logged out users will see AI Overviews soon. This is a big change, but it’s only the start of Google’s plans for AI search.

Gemini 2.0 also powers the new AI Mode for search. It’s launching as an opt-in feature via Google’s Search Labs, offering a totally new alternative to search as we know it. This custom version of the Gemini large language model (LLM) skips the standard web links that have been part of every Google search thus far. The model uses “advanced reasoning, thinking, and multimodal capabilities” to build a response to your search, which can include web summaries, Knowledge Graph content, and shopping data. It’s essentially a bigger, more complex AI Overview.

As Google has previously pointed out, many searches are questions rather than a string of keywords. For those kinds of queries, an AI response could theoretically provide an answer more quickly than a list of 10 blue links. However, that relies on the AI response being useful and accurate, something that often still eludes generative AI systems like Gemini.


Yes, we are about to be treated to a second lunar landing in a week

Because the space agency now has some expectation that Intuitive Machines will be fully successful with its second landing attempt, it has put some valuable experiments on board. Principal among them is the PRIME-1 experiment, which has an ice drill to sample any ice that lies below the surface. Drill, baby, drill.

The Athena lander is also carrying a NASA-funded “hopper” that will fire small hydrazine rockets to bounce around the Moon and explore lunar craters near the South Pole. It might even fly into a lava tube. If this happens, it will be insanely cool.

Because this is a commercial program, NASA has encouraged the delivery companies to find additional, private payloads. Athena has some nifty ones, including a small rover from Lunar Outpost, a data center from Lonestar Data Holdings, and a 4G cellular network from Nokia. So there’s a lot riding on Athena‘s success.

So will it be a success?

“Of course, everybody’s wondering, are we gonna land upright?” Tim Crain, Intuitive Machines’ chief technology officer, told Ars. “So, I can tell you our laser test plan is much more comprehensive than those last time.”

During the first landing about a year ago, Odysseus‘ laser-based system for measuring altitude failed during the descent. Because Odysseus did not have access to altitude data, the spacecraft touched down faster, and on a 12-degree slope, which exceeded the 10-degree limit. As a result, the lander skidded across the surface, and one of its six legs broke, causing it to fall over.

Crain said about 10 major changes were made to the spacecraft and its software for the second mission. On top of that, about 30 smaller things, such as more efficient file management, were updated on the new vehicle.

In theory, everything should work this time. Intuitive Machines has the benefit of everything it learned from the last mission, and nearly everything worked right during that first attempt. But the acid test comes on Thursday.

The company and NASA will provide live coverage of the attempt beginning at 11:30 am ET (16:30 UTC) on NASA+, with landing set for just about one hour later. The Moon may be a harsh mistress, but hopefully not too harsh.


AMD Radeon RX 9070 and 9070 XT review: RDNA 4 fixes a lot of AMD’s problems


For $549 and $599, AMD comes close to knocking out Nvidia’s GeForce RTX 5070.

AMD’s Radeon RX 9070 and 9070 XT are its first cards based on the RDNA 4 GPU architecture. Credit: Andrew Cunningham

AMD is a company that knows a thing or two about capitalizing on a competitor’s weaknesses. The company got through its early-2010s nadir partially because its Ryzen CPUs struck just as Intel’s manufacturing woes began to set in, first with somewhat-worse CPUs that were great value for the money and later with CPUs that were better than anything Intel could offer.

Nvidia’s untrammeled dominance of the consumer graphics card market should also be an opportunity for AMD. Nvidia’s GeForce RTX 50-series graphics cards have given buyers very little to get excited about, with an unreachably expensive high-end 5090 refresh and modest-at-best gains from 5080 and 5070-series cards that are also pretty expensive by historical standards, when you can buy them at all. Tech YouTubers—both the people making the videos and the people leaving comments underneath them—have been almost uniformly unkind to the 50 series, hinting at consumer frustrations and pent-up demand for competitive products from other companies.

Enter AMD’s Radeon RX 9070 XT and RX 9070 graphics cards. These are aimed right at the middle of the current GPU market at the intersection of high sales volume and decent profit margins. They promise good 1440p and entry-level 4K gaming performance and improved power efficiency compared to previous-generation cards, with fixes for long-time shortcomings (ray-tracing performance, video encoding, and upscaling quality) that should, in theory, make them more tempting for people looking to ditch Nvidia.

Table of Contents

RX 9070 and 9070 XT specs and speeds

RX 9070 XT: 64 RDNA4 CUs (4,096 stream processors), 2,970 MHz boost clock, 256-bit memory bus, 650GB/s memory bandwidth, 16GB GDDR6, 304 W total board power (TBP)
RX 9070: 56 RDNA4 CUs (3,584 stream processors), 2,520 MHz boost clock, 256-bit memory bus, 650GB/s, 16GB GDDR6, 220 W TBP
RX 7900 XTX: 96 RDNA3 CUs (6,144 stream processors), 2,498 MHz boost clock, 384-bit memory bus, 960GB/s, 24GB GDDR6, 355 W TBP
RX 7900 XT: 84 RDNA3 CUs (5,376 stream processors), 2,400 MHz boost clock, 320-bit memory bus, 800GB/s, 20GB GDDR6, 315 W TBP
RX 7900 GRE: 80 RDNA3 CUs (5,120 stream processors), 2,245 MHz boost clock, 256-bit memory bus, 576GB/s, 16GB GDDR6, 260 W TBP
RX 7800 XT: 60 RDNA3 CUs (3,840 stream processors), 2,430 MHz boost clock, 256-bit memory bus, 624GB/s, 16GB GDDR6, 263 W TBP

AMD’s high-level performance promise for the RDNA 4 architecture revolves around big increases in performance per compute unit (CU). An RDNA 4 CU, AMD says, is nearly twice as fast in rasterized performance as RDNA 2 (that is, rendering without ray-tracing effects enabled) and nearly 2.5 times as fast as RDNA 2 in games with ray-tracing effects enabled. Performance for at least some machine learning workloads also goes way up—twice as fast as RDNA 3 and four times as fast as RDNA 2.

We’ll see this in more detail when we start comparing performance, but AMD seems to have accomplished this goal. Despite having 64 or 56 compute units (for the 9070 XT and 9070, respectively), the cards’ performance often competes with AMD’s last-generation flagships, the RX 7900 XTX and 7900 XT. Those cards came with 96 and 84 compute units, respectively. The 9070 cards are specced a lot more like last generation’s RX 7800 XT—including the 16GB of GDDR6 on a 256-bit memory bus, as AMD still isn’t using GDDR6X or GDDR7—but they’re much faster than the 7800 XT was.
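The scale of those per-CU gains is easy to check with back-of-the-envelope arithmetic. A minimal sketch, using only the CU counts from the spec table and ignoring clock speeds, memory bandwidth, and everything else that matters:

```python
# How much faster each RDNA4 CU must be for the 64-CU 9070 XT to match
# RX 7000-series cards with more CUs. CU counts come from the spec table;
# clocks, bandwidth, and architectural differences are ignored.
rdna3_cards = {"RX 7900 XTX": 96, "RX 7900 XT": 84, "RX 7800 XT": 60}
rx_9070_xt_cus = 64

uplifts = {name: cus / rx_9070_xt_cus for name, cus in rdna3_cards.items()}
for name, uplift in uplifts.items():
    print(f"To match the {name}, each RDNA4 CU needs {uplift:.2f}x the per-CU throughput")
```

Matching the 7900 XT requires only about a 1.3x per-CU improvement, comfortably within AMD’s claimed gains, which is consistent with what the benchmarks show.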

AMD has dramatically increased the performance per compute unit for RDNA 4. Credit: AMD

The 9070 series also uses a new 4 nm manufacturing process from TSMC, an upgrade from the 7000 series’ 5 nm process (and the 6 nm process used for the separate memory controller dies in higher-end RX 7000-series models that used chiplets). AMD’s GPUs are normally a bit less efficient than Nvidia’s, but the architectural improvements and the new manufacturing process allow AMD to do some important catch-up.

Both of the 9070 models we tested were ASRock Steel Legend models, and the 9070 and 9070 XT had identical designs—we’ll probably see a lot of this from AMD’s partners since the GPU dies and the 16GB RAM allotments are the same for both models. Both use two 8-pin power connectors; AMD says partners are free to use the 12-pin power connector if they want, but given Nvidia’s ongoing issues with it, most cards will likely stick with the reliable 8-pin connectors.

AMD doesn’t appear to be making and selling reference designs for the 9070 series the way it did for some RX 7000 and 6000-series GPUs or the way Nvidia does with its Founders Edition cards. From what we’ve seen, 2 or 2.5-slot, triple-fan designs will be the norm, the way they are for most midrange GPUs these days.

Testbed notes

We used the same GPU testbed for the Radeon RX 9070 series as we have for our GeForce RTX 50-series reviews.

An AMD Ryzen 7 9800X3D ensures that our graphics cards will be CPU-limited as little as possible. An ample 1050 W power supply, 32GB of DDR5-6000, and an AMD X670E motherboard with the latest BIOS installed round out the hardware. On the software side, we use an up-to-date installation of Windows 11 24H2 and recent GPU drivers for older cards, ensuring that our tests reflect whatever optimizations Microsoft, AMD, Nvidia, and game developers have made since the last generation of GPUs launched.

We have numbers for all of Nvidia’s RTX 50-series GPUs so far, plus most of the 40-series cards, most of AMD’s RX 7000-series cards, and a handful of older GPUs from the RTX 30-series and RX 6000 series. We’ll focus on comparing the 9070 XT and 9070 to other 1440p-to-4K graphics cards since those are the resolutions AMD is aiming at.

Performance

At $549 and $599, the 9070 series is priced to match Nvidia’s $549 RTX 5070 and undercut the $749 RTX 5070 Ti. So we’ll focus on comparing the 9070 series to those cards, plus the top tier of GPUs from the outgoing RX 7000-series.

Some 4K rasterized benchmarks.

Starting at the top with rasterized benchmarks with no ray-tracing effects, the 9070 XT does a good job of standing up to Nvidia’s RTX 5070 Ti, coming within a few frames per second of its performance in all the games we tested (and scoring very similarly in the 3DMark Time Spy Extreme benchmark).

Both cards are considerably faster than the RTX 5070—between 15 and 28 percent for the 9070 XT and between 5 and 13 percent for the regular 9070 (our 5070 scored weirdly low in Horizon Zero Dawn Remastered, so we’d treat those numbers as outliers for now). Both 9070 cards also stack up well next to the RX 7000 series here—the 9070 can usually just about match the performance of the 7900 XT, and the 9070 XT usually beats it by a little. Both cards thoroughly outrun the old RX 7900 GRE, which was AMD’s $549 GPU offering just a year ago.

The 7900 XT does have 20GB of RAM instead of 16GB, which might help its performance in some edge cases. But 16GB is still perfectly generous for a 1440p-to-4K graphics card—the 5070 only offers 12GB, which could end up limiting its performance in some games as RAM requirements continue to rise.

On ray-tracing improvements

Nvidia got a jump on AMD when it introduced hardware-accelerated ray-tracing in the RTX 20-series in 2018. And while these effects were only supported in a few games at the time, many modern games offer at least some kind of ray-traced lighting effects.

AMD caught up a little when it began shipping its own ray-tracing support in the RDNA2 architecture in late 2020, but the issue since then has always been that AMD cards have taken a larger performance hit than GeForce GPUs when these effects are turned on. RDNA3 promised improvements, but our tests still generally showed the same deficit as before.

So we’re looking for two things with RDNA4’s ray-tracing performance. First, we want the numbers to be higher than they were for comparably priced RX 7000-series GPUs, the same thing we look for in non-ray-traced (or rasterized) rendering performance. Second, we want the size of the performance hit to go down. To pick an example: the RX 7900 GRE could compete with Nvidia’s RTX 4070 Ti Super in games without ray tracing, but it was closer to a non-Super RTX 4070 in ray-traced games. That gap has helped keep AMD’s cards from being across-the-board competitive with Nvidia’s—is that any different now?

Benchmarks for games with ray-tracing effects enabled. Both AMD cards generally keep pace with the 5070 in these tests thanks to RDNA 4’s improvements.

The picture our tests paint is mixed but tentatively positive. The 9070 series and RDNA4 post solid improvements in the Cyberpunk 2077 benchmarks, substantially closing the performance gap with Nvidia. In games where AMD’s cards performed well enough before—here represented by Returnal—performance goes up, but roughly proportionately with rasterized performance. And both 9070 cards still punch below their weight in Black Myth: Wukong, falling substantially behind the 5070 under the punishing Cinematic graphics preset.

So the benefits you see, as with any GPU update, will depend a bit on the game you’re playing. There’s also a possibility that game optimizations and driver updates made with RDNA4 in mind could boost performance further. We can’t say that AMD has caught all the way up to Nvidia here—the 9070 and 9070 XT are both closer to the GeForce RTX 5070 than to the 5070 Ti, despite staying closer to the 5070 Ti in rasterized tests—but there is real, measurable improvement here, which is what we were looking for.

Power usage

The 9070 series’ performance increases are particularly impressive when you look at the power-consumption numbers. The 9070 comes close to the 7900 XT’s performance but uses 90 W less power under load. It beats the RTX 5070 most of the time but uses around 30 W less power.

The 9070 XT is a little less impressive on this front—AMD has set clock speeds pretty high, and this can increase power use disproportionately. The 9070 XT is usually 10 or 15 percent faster than the 9070 but uses 38 percent more power. The XT’s power consumption is similar to the RTX 5070 Ti’s (a GPU it often matches) and the 7900 XT’s (a GPU it always beats), so it’s not too egregious, but it’s not as standout as the 9070’s.

AMD gives 9070 owners a couple of new toggles for power limits, though, which we’ll talk about in the next section.

Experimenting with “Total Board Power”

We don’t normally dabble much with overclocking when we review CPUs or GPUs—we’re happy to leave that to folks at other outlets. But when we review CPUs, we do usually test them with multiple power limits in place. Playing with power limits is easier (and occasionally safer) than actually overclocking, and it often comes with large gains to either performance (a chip that performs much better when given more power to work with) or efficiency (a chip that can run at nearly full speed without using as much power).

Initially, I experimented with the RX 9070’s power limits by accident. AMD sent me one version of the 9070 but exchanged it because of a minor problem the OEM identified with some units early in the production run. I had, of course, already run most of our tests on it, but that’s the way these things go sometimes.

By bumping the regular RX 9070’s TBP up just a bit, you can nudge it closer to 9070 XT-level performance.

The replacement RX 9070 card, an ASRock Steel Legend model, was performing significantly better in our tests, sometimes nearly closing the gap between the 9070 and the XT. It wasn’t until I tested power consumption that I discovered the explanation—by default, it was using a 245 W power limit rather than the AMD-defined 220 W limit. Usually, these kinds of factory tweaks don’t make much of a difference, but for the 9070, this power bump gave it a nice performance boost while still keeping it close to the 250 W power limit of the GeForce RTX 5070.

The 90-series cards we tested both add some power presets to AMD’s Adrenalin app in the Performance tab under Tuning. These replace and/or complement some of the automated overclocking and undervolting buttons that exist here for older Radeon cards. Clicking Favor Efficiency or Favor Performance can ratchet the card’s Total Board Power (TBP) up or down, limiting performance so that the card runs cooler and quieter or allowing the card to consume more power so it can run a bit faster.

The 9070 cards get slightly different performance tuning options in the Adrenalin software. These buttons mostly change the card’s Total Board Power (TBP), making it simple to either improve efficiency or boost performance a bit. Credit: Andrew Cunningham

For this particular ASRock 9070 card, the default TBP is set to 245 W. Selecting “Favor Efficiency” sets it to the default 220 W. You can double-check these values using an app like HWInfo, which displays both the current TBP and the maximum TBP in its Sensors Status window. Clicking the Custom button in the Adrenalin software gives you access to a Power Tuning slider, which for our card allowed us to ratchet the TBP up by up to 10 percent or down by as much as 30 percent.

This is all the firsthand testing we did with the power limits of the 9070 series, though I would assume that adding a bit more power also adds more overclocking headroom (bumping up the power limits is common for GPU overclockers no matter who makes your card). AMD says that some of its partners will ship 9070 XT models set to a roughly 340 W power limit out of the box but acknowledges that “you start seeing diminishing returns as you approach the top of that [power efficiency] curve.”

But it’s worth noting that the driver has another automated set-it-and-forget-it power setting you can easily use to find your preferred balance of performance and power efficiency.

A quick look at FSR4 performance

There’s a toggle in the driver for enabling FSR 4 in FSR 3.1-supporting games. Credit: Andrew Cunningham

One of AMD’s headlining improvements to the RX 90-series is the introduction of FSR 4, a new version of its FidelityFX Super Resolution upscaling algorithm. Like Nvidia’s DLSS and Intel’s XeSS, FSR 4 can take advantage of RDNA 4’s machine learning processing power to do hardware-backed upscaling instead of taking a hardware-agnostic approach as the older FSR versions did. AMD says this will improve upscaling quality, but it also means FSR4 will only work on RDNA 4 GPUs.

The good news is that FSR 3.1 and FSR 4 are forward- and backward-compatible. Games that have already added FSR 3.1 support can automatically take advantage of FSR 4, and games that support FSR 4 on the 90-series can just run FSR 3.1 on older and non-AMD GPUs.

FSR 4 comes with a small performance hit compared to FSR 3.1 at the same settings, but better overall quality can let you drop to a faster preset like Balanced or Performance and end up with more frames-per-second overall. Credit: Andrew Cunningham

The only game in our current test suite to be compatible with FSR 4 is Horizon Zero Dawn Remastered, and we tested its performance using both FSR 3.1 and FSR 4. In general, we found that FSR 4 improved visual quality at the cost of just a few frames per second when run at the same settings—not unlike using Nvidia’s recently released “transformer model” for DLSS upscaling.

Many games will let you choose which version of FSR you want to use. But for FSR 3.1 games that don’t have a built-in FSR 4 option, there’s a toggle in AMD’s Adrenalin driver you can hit to switch to the better upscaling algorithm.

Even if they come with a performance hit, new upscaling algorithms can still improve performance by making the lower-resolution presets look better. We run all of our testing in “Quality” mode, which generally renders at two-thirds of native resolution and scales up. But if FSR 4 running in Balanced or Performance mode looks the same to your eyes as FSR 3.1 running in Quality mode, you can still end up with a net performance improvement in the end.

RX 9070 or 9070 XT?

Just $50 separates the advertised price of the 9070 from that of the 9070 XT, something both Nvidia and AMD have done in the past that I find a bit annoying. If you have $549 to spend on a graphics card, you can almost certainly scrape together $599 for a graphics card. All else being equal, I’d tell most people trying to choose one of these to just spring for the 9070 XT.

That said, availability and retail pricing for these might be all over the place. If your choices are a regular RX 9070 or nothing, or an RX 9070 at $549 and an RX 9070 XT at any price higher than $599, I would just grab a 9070 and not sweat it too much. The two cards aren’t that far apart in performance, especially if you bump the 9070’s TBP up a little bit, and games that are playable on one will be playable at similar settings on the other.

Pretty close to great

If you’re building a 1440p or 4K gaming box, the 9070 series might be the ones to beat right now. Credit: Andrew Cunningham

We’ve got plenty of objective data in here, so I don’t mind saying that I came into this review kind of wanting to like the 9070 and 9070 XT. Nvidia’s 50-series cards have mostly upheld the status quo, and for the last couple of years, the status quo has been sustained high prices and very modest generational upgrades. And who doesn’t like an underdog story?

I think our test results mostly justify my priors. The RX 9070 and 9070 XT are very competitive graphics cards, helped along by a particularly mediocre RTX 5070 refresh from Nvidia. In non-ray-traced games, both cards wipe the floor with the 5070 and come close to competing with the $749 RTX 5070 Ti. In games and synthetic benchmarks with ray-tracing effects on, both cards can usually match or slightly beat the similarly priced 5070, partially (if not entirely) addressing AMD’s longstanding performance deficit here. Neither card comes close to the 5070 Ti in these games, but they’re also not priced like a 5070 Ti.

Just as impressively, the Radeon cards compete with the GeForce cards while consuming similar amounts of power. At stock settings, the RX 9070 uses roughly the same amount of power under load as a 4070 Super but with better performance. The 9070 XT uses about as much power as a 5070 Ti, with similar performance before you turn ray-tracing on. Power efficiency was a small but consistent drawback for the RX 7000 series compared to GeForce cards, and the 9070 cards mostly erase that disadvantage. AMD is also less stingy with the RAM, giving you 16GB for the price Nvidia charges for 12GB.

Some of the old caveats still apply. Radeons take a bigger performance hit, proportionally, than GeForce cards when ray-tracing effects are enabled. DLSS already looks pretty good and is widely supported, while FSR 3.1/FSR 4 adoption is still relatively low. Nvidia has a nearly monopolistic grip on the dedicated GPU market, which means many apps, AI workloads, and games support its GPUs best/first/exclusively. AMD is always playing catch-up to Nvidia in some respect, and Nvidia keeps progressing quickly enough that it feels like AMD never quite has the opportunity to close the gap.

AMD also doesn’t have an answer for DLSS Multi-Frame Generation. The benefits of that technology are fairly narrow, and you already get most of those benefits with single-frame generation. But it’s still a thing that Nvidia does that AMD doesn’t.

Overall, the RX 9070 cards are both awfully tempting competitors to the GeForce RTX 5070—and occasionally even the 5070 Ti. They’re great at 1440p and decent at 4K. Sure, I’d like to see them priced another $50 or $100 cheaper to well and truly undercut the 5070 and bring 1440p-to-4K performance to a sub-$500 graphics card. It would be nice to see AMD undercut Nvidia’s GPUs as ruthlessly as it undercut Intel’s CPUs nearly a decade ago. But these RDNA4 GPUs have way fewer downsides than previous-generation cards, and they come at a moment of relative weakness for Nvidia. We’ll see if the sales follow.

The good

  • Great 1440p performance and solid 4K performance
  • 16GB of RAM
  • Decisively beats Nvidia’s RTX 5070, including in most ray-traced games
  • RX 9070 XT is competitive with RTX 5070 Ti in non-ray-traced games for less money
  • Both cards match or beat the RX 7900 XT, AMD’s second-fastest card from the last generation
  • Decent power efficiency for the 9070 XT and great power efficiency for the 9070
  • Automated options for tuning overall power use to prioritize either efficiency or performance
  • Reliable 8-pin power connectors available in many cards

The bad

  • Nvidia’s ray-tracing performance is still usually better
  • At $549 and $599, pricing matches but doesn’t undercut the RTX 5070
  • FSR 4 isn’t as widely supported as DLSS and may not be for a while

The ugly

  • Playing the “can you actually buy these for AMD’s advertised prices” game

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


China aims to recruit top US scientists as Trump tries to kill the CHIPS Act


Tech innovation in US likely to stall if Trump ends the CHIPS Act.

On Tuesday, Donald Trump finally made it clear to Congress that he wants to kill the CHIPS and Science Act—a $280 billion bipartisan law Joe Biden signed in 2022 to bring more semiconductor manufacturing into the US and put the country at the forefront of research and innovation.

Trump has long expressed frustration with the high cost of the CHIPS Act, telling Congress on Tuesday that it’s a “horrible, horrible thing” to “give hundreds of billions of dollars” in subsidies to companies that he claimed “take our money” and “don’t spend it,” Reuters reported.

“You should get rid of the CHIPS Act, and whatever is left over, Mr. Speaker, you should use it to reduce debt,” Trump said.

Instead, Trump potentially plans to shift the US from incentivizing chips manufacturing to punishing firms dependent on imports, threatening a 25 percent tariff on all semiconductor imports that could kick in as soon as April 2, CNBC reported.

The CHIPS Act was supposed to be Biden’s legacy, and because he made it a priority, much of the $52.7 billion in subsidies that Trump is criticizing has already been finalized. In 2022, Biden approved $39 billion in subsidies for semiconductor firms, and in his last weeks in office, he finalized more than $33 billion in awards, Reuters noted.

Among the awardees are leading semiconductor firms, including the Taiwan Semiconductor Manufacturing Co. (TSMC), Micron, Intel, Nvidia, and Samsung Electronics. Although Trump claims the CHIPS Act is one-sided and only serves to benefit firms, the Semiconductor Industry Association says the law had sparked $450 billion in private investment to increase semiconductor production across 28 states by mid-2024.

With the CHIPS Act officially in Trump’s crosshairs, innovation appears likely to stall the longer that lawmakers remain unsettled on whether the law stays or goes. Some officials worried that Trump might interfere with Biden’s binding agreements with leading firms already holding up their end of the bargain, Reuters reported. For example, Micron plans to invest $100 billion in New York, and TSMC just committed to spending the same over the next four years to expand construction of US chip fabs, which is already well underway.

So far, Commerce Secretary Howard Lutnick has only indicated that he will review the finalized awards, noting that the US wouldn’t be giving TSMC any new awards, Reuters reported.

But the CHIPS Act does much more than provide subsidies to lure leading semiconductor companies into the US. For the first time in decades, the law created a new arm of the National Science Foundation (NSF)—the Directorate of Technology, Innovation, and Partnerships (TIP)—which functions unlike any other part of NSF and now appears existentially threatened.

Designed to take the country’s boldest ideas from basic research to real-world applications as fast as possible to make the US as competitive as possible, TIP helps advance all NSF research and was supposed to ensure US leadership in breakthrough technologies, including AI, 6G communications, biotech, quantum computing, and advanced manufacturing.

Biden allocated $20 billion to launch TIP through the CHIPS Act to accelerate technology development not just at top firms but also in small research settings across the US. But as soon as the Department of Government Efficiency (DOGE) started making cuts at NSF this year, TIP got hit the hardest. TIP was seemingly targeted not because DOGE deemed it the least consequential but simply because it was the youngest directorate at NSF, with the most workers in transition when Trump took office and DOGE abruptly announced it was terminating all “probationary” federal workers.

It took years to get TIP ready to flip the switch to accelerate tech innovation in the US. Without it, Trump risks setting the US back at a time when competitors like China are racing ahead and wooing US scientists who suddenly may not know if or when their funding is coming, NSF workers and industry groups told Ars.

Without TIP, NSF slows down

Last month, DOGE absolutely scrambled the NSF by forcing arbitrary cuts of so-called probationary employees—mostly young scientists, some of whom were in transition due to promotions. All those cuts were deemed illegal and finally reversed Monday by court order after weeks of internal chaos reportedly stalling or threatening to delay some of the highest-priority research in the US.

“The Office of Personnel Management does not have any authority whatsoever under any statute in the history of the universe to hire and fire employees at another agency,” US District Judge William Alsup said, calling probationary employees the “life blood” of government agencies.

Ars granted NSF workers anonymity to discuss how cuts were impacting research. At TIP, a federal worker told Ars that one of the probationary cuts in particular threatened to do the most damage.

Because TIP is so new, only one worker was trained to code the automated tracking forms that helped decision-makers balance budgets and approve funding for projects across NSF in real time. Ars’ source likened it to holding the only key to the vault of NSF funding. And because TIP is so different from other NSF branches—hiring experts never pulled into NSF before and requiring customized resources to coordinate projects across all NSF fields of research—the insider suggested another government worker couldn’t easily be substituted. It could take as long as two years to hire and train a replacement on TIP’s unique tracking system, the source said, leaving TIP’s (and possibly all of NSF’s) efficiency strained in the meantime.

TIP has never been fully functional, the TIP insider confirmed, and could be choked off right as it starts helping to move the needle on US innovation. “Imagine where we are in two years and where China is in two years in quantum computing, semiconductors, or AI,” the TIP insider warned, pointing to China’s surprisingly advanced AI model, DeepSeek, as an indicator of how quickly tech leadership in global markets can change.

On Monday, NSF emailed all workers to confirm that all probationary workers would be reinstated “right away.” But the damage may already be done as it’s unclear how many workers plan to return. When TIP lost the coder—who was seemingly fired for a technicality while transitioning to a different payscale—NSF workers rushed to recommend the coder on LinkedIn, hoping to help the coder quickly secure another opportunity in industry or academia.

Ars could not reach the coder to confirm whether a return to TIP is in the cards. But Ars’ source at TIP and another NSF worker granted anonymity said that probationary workers may be hesitant to return because they are likely to be hit in any official reductions in force (RIFs) in the future.

“RIFs done the legal way are likely coming down the pipe, so these staff are not coming back to a place of security,” the NSF worker said. “The trust is broken. Even for those that choose to return, they’d be wise to be seeking other opportunities.”

And even losing the TIP coder for a couple of weeks likely slows NSF down at a time when the US seemingly can’t afford to lose a single day.

“We’re going to get murdered” if China sets the standard on 6G or AI, the TIP worker fears.

Rivals and allies wooing top US scientists

On Monday, six research and scientific associations, which described themselves as “leading organizations representing more than 305,000 people in computing, information technology, and technical innovation across US industry, academia, and government,” wrote to Congress demanding protections for the US research enterprise.

The groups warned that funding freezes and worker cuts at NSF—and other agencies, including the Department of Energy, the National Institute of Standards & Technology, the National Aeronautics and Space Administration, and the National Institutes of Health—”have caused disruption and uncertainty” and threaten “long-lasting negative consequences for our competitiveness, national security, and economic prosperity.”

Deeming America’s technology leadership at risk, the groups pointed out that “in computing alone, a federal investment in research of just over $10 billion annually across 24 agencies and offices underpins a technology sector that contributes more than $2 trillion to the US GDP each year.” Cutting US investment “would be a costly mistake, far outweighing any short-term savings,” the groups warned.

In a separate statement, the Computing Research Association (CRA) called NSF cuts, in particular, a “deeply troubling, self-inflicted setback to US leadership in computing research” that appeared “penny-wise and pound-foolish.”

“NSF is one of the most efficient federal agencies, operating with less than 9 percent overhead costs,” CRA said. “These arbitrary terminations are not justified by performance metrics or efficiency concerns; rather, they represent a drastic and unnecessary weakening of the US research enterprise.”

Many NSF workers are afraid to speak up, the TIP worker told Ars, and industry seems similarly tight-lipped as confusion remains. Only one of the organizations urging Congress to intervene agreed to talk to Ars about the NSF cuts and the significance of TIP. Kathryn Kelley, the executive director of the Coalition for Academic Scientific Computation, confirmed that while members are more aligned with NSF’s Directorate for Computer and Information Science and Engineering and the Office of Advanced Cyberinfrastructure, her group agrees that all NSF cuts are “deeply” concerning.

“We agree that the uncertainty and erosion of trust within the NSF workforce could have long-lasting effects on the agency’s ability to attract and retain top talent, particularly in such specialized areas,” Kelley told Ars. “This situation underscores the need for continued investment in a stable, well-supported workforce to maintain the US’s leadership in science and innovation.”

Other industry sources unwilling to go on the record told Ars that arbitrary cuts largely affecting the youngest scientists at NSF threatened to disrupt a generation of researchers who envisioned long careers advancing US tech. There’s now a danger that those researchers may be lured to other countries heavily investing in science and currently advertising to attract displaced US researchers, including not just rivals like China but also allies like Denmark.

Those sources questioned the wisdom of using the Elon Musk-like approach of breaking the NSF to rebuild it when it’s already one of the leanest organizations in government.

Ars confirmed that some PhD programs have been canceled, as many academic researchers are already widely concerned about delayed or canceled grants and generally freaked out about where to get dependable funding outside the NSF. And in industry, some CHIPS Act projects have already been delayed, as companies like Intel try to manage timelines without knowing what’s happening with CHIPS funding, AP News reported.

“Obviously chip manufacturing companies will slow spending on programs they previously thought they were getting CHIPS Act funding for if not cancel those projects outright,” the Semiconductor Advisors, an industry group, forecast in a statement last month.

The TIP insider told Ars that the CHIPS Act subsidies for large companies that Trump despises mostly fuel manufacturing in the US, while funding for smaller research facilities is what actually advances technology. Reducing efficiency at TIP would likely disrupt those researchers the most, the TIP worker suggested, arguing that is why TIP must be saved at all costs.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

China aims to recruit top US scientists as Trump tries to kill the CHIPS Act Read More »

“wooly-mice”-a-test-run-for-mammoth-gene-editing

“Wooly mice” a test run for mammoth gene editing

On Tuesday, the team behind the plan to bring mammoth-like animals back to the tundra announced the creation of what it is calling wooly mice, which have long fur reminiscent of the woolly mammoth. The long fur was created through the simultaneous editing of as many as seven genes, all with a known connection to hair growth, color, and/or texture.

But don’t think that this is a sort of mouse-mammoth hybrid. Most of the genetic changes were first identified in mice, not mammoths. So, the focus is on the fact that the team could do simultaneous editing of multiple genes—something that they’ll need to be able to do to get a considerable number of mammoth-like changes into the elephant genome.

Of mice and mammoths

The team at Colossal Biosciences has started a number of de-extinction projects, including the dodo and thylacine, but its flagship project is the mammoth. In all of these cases, the plan is to take stem cells from a closely related species that has not gone extinct and edit in a series of changes based on the genome of the corresponding extinct species. In the case of the mammoth, that means the elephant.

But the elephant poses a large number of challenges, as the draft paper that describes the new mice acknowledges. “The 22-month gestation period of elephants and their extended reproductive timeline make rapid experimental assessment impractical,” the researchers acknowledge. “Further, ethical considerations regarding the experimental manipulation of elephants, an endangered species with complex social structures and high cognitive capabilities, necessitate alternative approaches for functional testing.”

So, they turned to a species that has been used for genetic experiments for over a century: the mouse. We can do all sorts of genetic manipulations in mice, and have ways of using embryonic stem cells to get those manipulations passed on to a new generation of mice.

For testing purposes, the mouse also has a very significant advantage: mutations that change its fur are easy to spot. Over the century-plus that we’ve been using mice for research, people have observed a huge variety of mutations that affect their fur, altering color, texture, and length. In many of these cases, the DNA changes responsible have been identified.

“Wooly mice” a test run for mammoth gene editing Read More »