Author name: Shannon Garcia


RIP Corporation for Public Broadcasting: 1967–2026

Despite the protests of millions of Americans, the Corporation for Public Broadcasting (CPB) announced it will be winding down its operations after the White House deemed NPR and PBS a “grift” and pushed for a Senate vote that eliminated its entire budget.

The vote rescinded $1.1 billion that Congress had allocated to CPB to fund public broadcasting for fiscal years 2026 and 2027. In a press release, CPB explained that the cuts “excluded funding for CPB for the first time in more than five decades.” CPB president and CEO Patricia Harrison said the corporation had no choice but to prepare to shut down.

“Despite the extraordinary efforts of millions of Americans who called, wrote, and petitioned Congress to preserve federal funding for CPB, we now face the difficult reality of closing our operations,” Harrison said.

Concerned Americans also rushed to donate to NPR and PBS stations to confront the funding cuts, The New York Times reported. But those donations, estimated at around $20 million, ultimately amounted to too little, too late to cover the funding that CPB lost.

As CPB takes steps to close, it expects that “the majority of staff positions will conclude with the close of the fiscal year on September 30, 2025.” After that, a “small transition team” will “ensure a responsible and orderly closeout of operations” by January 2026. That team “will focus on compliance, final distributions, and resolution of long-term financial obligations, including ensuring continuity for music rights and royalties that remain essential to the public media system.”

“CPB remains committed to fulfilling its fiduciary responsibilities and supporting our partners through this transition with transparency and care,” Harrison said.

NPR mourns loss of CPB

In a statement, NPR’s president and CEO, Katherine Maher, mourned the loss of CPB, warning that it was a “vital source of funding for local stations, a champion of educational and cultural programming, and a bulwark for independent journalism.”



Citing “market conditions,” Nintendo hikes prices of original Switch consoles

Slowed tech progress, inflation, and global trade wars are doing a number on game console pricing this year, and the bad news keeps coming. Nintendo delayed preorders of the Switch 2 in the US and increased accessory prices, and Microsoft gave its Series S and X consoles across-the-board price hikes in May.

Today, Nintendo is back for more, increasing prices on the original Switch hardware, as well as some Amiibo, the Alarmo clock, and some Switch and Switch 2 accessories. The price increases will formally take effect on August 3.

The company says that there are currently no price increases coming for the Switch 2 console, Nintendo Switch Online memberships, or physical and digital Switch 2 games. But it didn’t take future price increases off the table, noting that “price adjustments may be necessary in the future.”

Nintendo didn’t announce how large the price increases would be, but some retailers were already listing higher prices as of Friday. Target now lists the Switch Lite for $229.99, up from $199.99; the original Switch for $339.99, up from $299.99; and the OLED model of the Switch for a whopping $399.99, up from $349.99 and just $50 less than the price of the much more powerful Switch 2 console.



The military’s squad of satellite trackers is now routinely going on alert


“I hope this blows your mind because it blows my mind.”

A Long March 3B rocket carrying a new Chinese Beidou navigation satellite lifts off from the Xichang Satellite Launch Center on May 17, 2023. Credit: VCG/VCG via Getty Images

This is Part 2 of our interview with Col. Raj Agrawal, the former commander of the Space Force’s Space Mission Delta 2.

If it seems like there’s a satellite launch almost every day, the numbers will back you up.

The US Space Force’s Mission Delta 2 is a unit that reports to Space Operations Command, with the job of sorting out the nearly 50,000 trackable objects humans have launched into orbit.

Dozens of satellites are being launched each week, primarily by SpaceX to continue deploying the Starlink broadband network. The US military has advance notice of these launches—most of them originate from Space Force property—and knows exactly where they’re going and what they’re doing.

That’s usually not the case when China or Russia (and occasionally Iran or North Korea) launches something into orbit. With rare exceptions, like human spaceflight missions, Chinese and Russian officials don’t publish any specifics about what their rockets are carrying or what altitude they’re going to.

That creates a problem for military operators tasked with monitoring traffic in orbit and breeds anxiety among US forces responsible for making sure potential adversaries don’t gain an edge in space. Will this launch deploy something that can destroy or disable a US satellite? Will this new satellite have a new capability to surveil allied forces on the ground or at sea?

Of course, this is precisely the point of keeping launch details under wraps. The US government doesn’t publish orbital data on its most sensitive satellites, such as spy craft collecting intelligence on foreign governments.

But you can’t hide in low-Earth orbit, a region extending hundreds of miles into space. Col. Raj Agrawal, who commanded Mission Delta 2 until earlier this month, knows this all too well. Agrawal handed over command to Col. Barry Croker as planned after a two-year tour of duty at Mission Delta 2.

Col. Raj Agrawal, then-Mission Delta 2 commander, delivers remarks to audience members during the Mission Delta 2 redesignation ceremony in Colorado Springs, Colorado, on October 31, 2024. Credit: US Space Force

Some space enthusiasts have made a hobby of tracking US and foreign military satellites as they fly overhead, stringing together a series of observations over time to create fairly precise estimates of an object’s altitude and inclination.

Commercial companies are also getting in on the game of space domain awareness. But most are based in the United States or allied nations and have close partnerships with the US government. Therefore, they only release information on satellites owned by China and Russia. This is how Ars learned of interesting maneuvers underway with a Chinese refueling satellite and suspected Russian satellite killers.

Theoretically, there’s nothing to stop a Chinese company, for example, from taking a similar tack on revealing classified maneuvers conducted by US military satellites.

The Space Force has an array of sensors scattered around the world to detect and track satellites and space debris. The 18th and 19th Space Defense Squadrons, which were both under Agrawal’s command at Mission Delta 2, are the units responsible for this work.

Preparing for the worst

One of the most dynamic times in the life of a Space Force satellite tracker is when China or Russia launches something new, according to Agrawal. His command pulls together open source information, such as airspace and maritime warning notices, to know when a launch might be scheduled.

This is not unlike how outside observers, like hobbyist trackers and space reporters, get a heads-up that something is about to happen. These notices tell you when a launch might occur, where it will take off from, and which direction it will go. What’s different for the Space Force is access to top-secret intelligence that might clue military officials in on what the rocket is actually carrying. China, in particular, often declares that its satellites are experimental, when Western analysts believe they are designed to support military activities.

That’s when US forces swing into action. Sometimes, military forces go on alert. Commanders develop plans to detect, track, and target the objects associated with a new launch, just in case they are “hostile,” Agrawal said.

We asked Agrawal to take us through the process his team uses to prepare for and respond to one of these unannounced, or “non-cooperative,” launches. This portion of our interview is published below, lightly edited for brevity and clarity.

Ars: Let’s say there’s a Russian or Chinese launch. How do you find out there’s a launch coming? Do you watch for NOTAMs (Notices to Airmen), like I do, and try to go from there?

Agrawal: I think the conversation starts the same way that it probably starts with you and any other technology-interested American. We begin with what’s available. We certainly have insight through intelligence means to be able to get ahead of some of that, but we’re using a lot of the same sources to refine our understanding of what may happen, and then we have access to other intel.

The good thing is that the Space Force is a part of the Intelligence Community. We’re plugged into an entire Intelligence Community focused on anything that might be of national security interest. So we’re able to get ahead. Maybe we can narrow down NOTAMs; maybe we can anticipate behavior. Maybe we have other activities going on in other domains or on the Internet, the cyber domain, and so on, that begin to tip off activity.

Certainly, we’ve begun to understand patterns of behavior. But no matter what, it’s not the same level of understanding as those who just cooperate and work together as allies and friends. And if there’s a launch that does occur, we’re not communicating with that launch control center. We’re certainly not communicating with the folks that are determining whether or not the launch will be safe, if it’ll be nominal, how many payloads are going to deploy, where they’re going to deploy to.

I certainly understand why a nation might feel that they want to protect that. But when you’re fielding into LEO [low-Earth orbit] in particular, you’re not really going to hide there. You’re really just creating uncertainty, and now we’re having to deal with that uncertainty. We eventually know where everything is, but in that meantime, you’re creating a lot of risk for all the other nations and organizations that have fielded capability in LEO as well.

Find, fix, track, target

Ars: Can you take me through what it’s like for you and your team during one of these launches? When one comes to your attention, through a NOTAM or something else, how do you prepare for it? What are you looking for as you get ready for it? How often are you surprised by something with one of these launches?

Agrawal: Those are good questions. Some of it, I’ll be more philosophical on, and others I can be specific on. But on a routine basis, our formation is briefed on all of the launches we’re aware of, to varying degrees, with the varying levels of confidence, and at what classifications have we derived that information.

In fact, we also have a weekly briefing where we go into depth on how we have planned against some of what we believe to be potentially higher threats. How many organizations are involved in that mission plan? Those mission plans are done at a very tactical level by captains and NCOs [non-commissioned officers] that are part of the combat squadrons that are most often presented to US Space Command…

That integrated mission planning involves not just Mission Delta 2 forces but also presented forces by our intelligence delta [Space Force units are called deltas], by our missile warning and missile tracking delta, by our SATCOM [satellite communications] delta, and so on—from what we think is on the launch pad, what we think might be deployed, what those capabilities are. But also what might be held at risk as a result of those deployments, not just in terms of maneuver but also what might these even experimental—advertised “experimental”—capabilities be capable of, and what harm might be caused, and how do we mission-plan against those potential unprofessional or hostile behaviors?

As you can imagine, that’s a very sophisticated mission plan for some of these launches based on what we know about them. Certainly, I can’t, in this environment, confirm or deny any of the specific launches… because I get access to more fidelity and more confidence on those launches, the timing and what’s on them, but the precursor for the vast majority of all these launches is that mission plan.

That happens at a very tactical level. That is now posturing the force. And it’s a joint force. It’s not just us, Space Force forces, but it’s other services’ capabilities as well that are posturing to respond to that. And the truth is that we even have partners, other nations, other agencies, intel agencies, that have capability that have now postured against some of these launches to now be committed to understanding, did we anticipate this properly? Did we not?

And then, what are our branch plans in case it behaves in a way that we didn’t anticipate? How do we react to it? What do we need to task, posture, notify, and so on to then get observations, find, fix, track, target? So we’re fulfilling the preponderance of what we call the kill chain, for what we consider to be a non-cooperative launch, with a hope that it behaves peacefully but anticipating that it’ll behave in a way that’s unprofessional or hostile… We have multiple chat rooms at multiple classifications that are communicating in terms of “All right, is it launching the way we expected it to, or did it deviate? If it deviated, whose forces are now at risk as a result of that?”

A spectator takes photos before the launch of the Long March 7A rocket carrying the ChinaSat 3B satellite from the Wenchang Space Launch Site in China on May 20, 2025. Credit: Meng Zhongde/VCG via Getty Images

Now, we even have down to the fidelity of what forces on the ground or on the ocean may not have capability… because of maneuvers or protective measures that the US Space Force has to take in order to deviate from its mission because of that behavior. The conversation, the way it was five years ago and the way it is today, is very, very different in terms of just a launch because now that launch, in many cases, is presenting a risk to the joint force.

We’re acting like a joint force. So that Marine, that sailor, that special operator on the ground who was expecting that capability now is notified in advance of losing that capability, and we have measures in place to mitigate those outages. And if not, then we let them know that “Hey, you’re not going to have the space capability for some period of time. We’ll let you know when we’re back. You have to go back to legacy operations for some period of time until we’re back into nominal configuration.”

I hope this blows your mind because it blows my mind in the way that we now do even just launch processing. It’s very different than what we used to do.

Ars: So you’re communicating as a team in advance of a launch and communicating down to the tactical level, saying that this launch is happening, this is what it may be doing, so watch out?

Agrawal: Yeah. It’s not as simple as a ballistic missile warning attack, where it’s duck and cover. Now, it’s “Hey, we’ve anticipated the things that could occur that could affect your ability to do your mission as a result of this particular launch with its expected payload, and what we believe it may do.” So it’s not just a general warning. It’s a very scoped warning.

As that launch continues, we’re able to then communicate more specifically on which forces may lose what, at what time, and for how long. And it’s getting better and better as the rest of the US Space Force, as they present capability trained to that level of understanding as well… We train this together. We operate together and we communicate together so that the tactical user—sometimes it’s us at US Space Force, but many times it’s somebody on the surface of the Earth that has to understand how their environment, their capability, has changed as a result of what’s happening in, to, and from space.

Ars: The types of launches where you don’t know exactly what’s coming are getting more common now. Is it normal for you to be on this alert posture for all of the launches out of China or Russia?

Agrawal: Yeah. You see it now. The launch manifest is just ridiculous, never mind the ones we know about. The ones that we have to reach out into the intelligence world and learn about, that’s getting ridiculous, too. We don’t have to have this whole machine postured this way for cooperative launches. So the amount of energy we’re expending for a non-cooperative launch is immense. We can do it. We can keep doing it, but you’re just putting us on alert… and you’re putting us in a position where we’re getting ready for bad behavior with the entire general force, as opposed to a cooperative launch, where we can anticipate. If there’s an anomaly, we can anticipate those and work through them. But we’re working through it with friends, and we’re communicating.

We’re not having to put tactical warfighters on alert every time … but for those payloads that we have more concern about. But still, it’s a very different approach, and that’s why we are actively working with as many nations as possible in Mission Delta 2 to get folks to sign on with Space Command’s space situational awareness sharing agreements, to go at space operations as friends, as allies, as partners, working together. So that way, we’re not posturing for something higher-end as a result of the launch, but we’re doing this together. So, with every nation we can, we’re getting out there—South America, Africa, every nation that will meet with us, we want to meet with them and help them get on the path with US Space Command to share data, to work as friends, and use space responsibly.

A Long March 3B carrier rocket carrying the Shijian 21 satellite lifts off from the Xichang Satellite Launch Center on October 24, 2021. Credit: Li Jieyi/VCG via Getty Images

Ars: How long does it take you to sort out and get a track on all of the objects for an uncooperative launch?

Agrawal: That question is a tough one to answer. We can move very, very quickly, but there are times when we have made a determination of what we think something is, what it is and where it’s going, and intent; there might be some lag to get it into a public catalog due to a number of factors, to include decisions being made by combatant commanders, because, again, our primary objective is not the public-facing catalog. The primary objective is, do we have a risk or not?

If we have a risk, let’s understand, let’s figure out to what degree do we think we have to manage this within the Department of Defense. And to what degree do we believe, “Oh, no, this can go in the public catalog. This is a predictable elset (element set)”? What we focus on with (the public catalog) are things that help with predictability, with spaceflight safety, with security, spaceflight security. So you sometimes might see a lag there, but that’s because we’re wrestling with the security aspect of the degree to which we need to manage this internally before we believe it’s predictable. But once we believe it’s predictable, we put it in the catalog, and we put it on space-track.org. There’s some nuance in there that isn’t relative to technology or process but more on national security.

On the flip side, what used to take hours and days is now getting down to seconds and minutes. We’ve overhauled—not 100 percent, but to a large degree—and got high-speed satellite communications from sensors to the centers of SDA (Space Domain Awareness) processing. We’re getting higher-end processing. We’re now duplicating the ability to process, duplicating that capability across multiple units. So what used to just be human labor intensive, and also kind of dial-up speed of transmission, we’ve now gone to high-speed transport. You’re seeing a lot of innovation occur, and a lot of data fusion occur, that’s getting us to seconds and minutes.
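For readers unfamiliar with the “elsets” Agrawal mentions: the public catalog on space-track.org distributes them as two-line element sets (TLEs), fixed-width text records from which anyone can recover an object’s inclination and approximate altitude. The sketch below is purely illustrative (the TLE values are made up, not a real catalog entry) and shows how those two quantities fall out of line 2 via Kepler’s third law:

```python
import math

# Illustrative two-line element set (TLE), line 2 only, in the standard
# fixed-width format used by space-track.org. The values are invented for
# demonstration and do not describe a real cataloged object.
tle_line2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"

# TLE fields sit at fixed column positions (0-indexed slices below).
inclination_deg = float(tle_line2[8:16])   # orbital inclination, degrees
mean_motion = float(tle_line2[52:63])      # revolutions per day

# Kepler's third law: recover the semi-major axis from the mean motion.
MU_EARTH = 398600.4418    # km^3/s^2, Earth's gravitational parameter
EARTH_RADIUS = 6378.137   # km, equatorial radius
period_s = 86400.0 / mean_motion
semi_major_axis_km = (MU_EARTH * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = semi_major_axis_km - EARTH_RADIUS

print(f"inclination {inclination_deg:.2f} deg, approx altitude {altitude_km:.0f} km")
```

This is the sense in which “you can’t hide” in low-Earth orbit: once an object appears in the catalog as a predictable elset, its path is public arithmetic.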


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Rocket Report: NASA finally working on depots, Air Force tests new ICBM


“I didn’t expect that we would get to orbit.”

Gilmour Space’s Eris rocket lifts off from Bowen Orbital Spaceport in Australia. Credit: Gilmour Space

Welcome to Edition 8.05 of the Rocket Report! One of the most eyebrow-raising things I saw this week was an online update from NASA’s Marshall Space Flight Center touting its work on cryogenic propellant management in orbit. Why? Because until recently, this was a forbidden research topic at the space agency, as propellant depots would obviate the need for a large rocket like the Space Launch System. But now that Richard Shelby is retired…

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Australian launch goes sideways. Back-to-back engine failures doomed a privately developed Australian rocket moments after liftoff Tuesday, cutting short a long-shot attempt to reach orbit with the country’s first homegrown launch vehicle, Ars reports. The 82-foot-tall (25-meter) Eris rocket ignited its four main engines and took off from its launch pad in northeastern Australia, but the rocket quickly lost power from two of its engines and stalled just above the launch pad before coming down in a nearby field. The crash sent a plume of smoke thousands of feet over the launch site, which sits on a remote stretch of coastline on Australia’s northeastern frontier.

Setting expectations … Gilmour Space, the private company that developed the rocket, said in a statement that there were no injuries and “no adverse environmental impacts” in the aftermath of the accident. The launch pad also appeared to escape any significant damage. The company’s cofounder and CEO, Adam Gilmour, spoke with Ars a few hours after the launch. Gilmour said he wasn’t surprised by the outcome of the Eris rocket’s inaugural test flight, which lasted just 14 seconds. “I didn’t expect that we would get to orbit,” he said. “Never did. I thought best case was maybe 40 seconds of flight time, but I’ll take 14 as a win.” (submitted by zapman987 and Tfargo04)

Firefly seeks to go public. Firefly Aerospace seeks to raise more than $600 million through a public stock offering, an arrangement that would boost the company’s market valuation to nearly $5.5 billion, according to a document filed with the SEC on Monday, Ars reports. The launch of Firefly’s Initial Public Offering (IPO) comes as the company works to build on a historic success in March, when Firefly’s Blue Ghost lander touched down on the surface of the Moon. Firefly plans to sell 16.2 million shares of common stock at a price of between $35 and $39 per share. Under those terms, Firefly could raise up to $631.8 million on the public market.

A lot of financial needs … In a statement, Firefly said it will use the funds to pay off a “substantial” amount of debt and support dividend payments and “for general corporate purposes.” Firefly’s general corporate purposes include a spectrum of activities, and some are going better than others. Firefly is deep into the capital-intensive development of a new medium-class rocket named Eclipse in partnership with Northrop Grumman, which made a $50 million strategic investment into Firefly in May. And Firefly is developing a spacecraft line called Elytra, a platform that can host military sensors and other payloads and maneuver them into different orbits.


Air Force tests new ICBM. It’s been half a decade since the Air Force awarded Northrop Grumman a sole-source contract to develop a next-generation intercontinental ballistic missile, known as the LGM-35 Sentinel. The missiles will carry thermonuclear warheads and are intended to replace all 450 Minuteman III missiles starting in 2029. This week, the Air Force announced that testing of the rocket’s second stage motor in a vacuum chamber to simulate high-altitude conditions is going well. “This test reflects our disciplined digital engineering approach and the continued momentum behind the Sentinel program,” said Brig. Gen. William S. Rogers of the Air Force.

Real-world tests to validate models … The stage-two motor is one of three booster segments that make up the three-stage Sentinel missile. According to the Air Force, this test is part of a series intended to qualify the stage-two design and validate predictive performance models developed in a digital engineering environment. The data gathered from the test will be used to refine design elements and reduce technical risk as the program moves toward production. The milestone follows the stage-one rocket motor test conducted in March at Northrop Grumman’s facility in Promontory, Utah.

Starship debris clouds future of SpaceX Bahamas landings. In a new report, Reuters provides additional details about the deal between SpaceX and the Bahamas to land Falcon 9 first stages there and why it still may go sideways. The Bahamas rocket-landing deal, which unlocked a more efficient path to space for SpaceX’s reusable Falcon 9, was signed in February last year by Deputy Prime Minister Chester Cooper. Sources told the publication that the quick approval created tension within the Bahamian government, with some officials expressing misgivings about a lack of transparency in the negotiations.

Landing agreement on hold … SpaceX’s deal with the Bahamas, the government said, included a $1 million donation to the University of Bahamas, where the company pledged to conduct quarterly seminars on space and engineering topics. The company must also pay a $100,000 fee per landing. In April, the landing agreement was put on hold after the explosion of SpaceX’s Starship rocket, whose mid-flight failure sent hundreds of pieces of debris washing ashore on Bahamian islands. Local activists have increased criticism of the Falcon 9 landing agreement since then, which remains under review. (submitted by Tom Nelson)

A single cloud delays Crew-11 launch. The SpaceX Crew-11 mission was a little more than a minute away from its planned launch Thursday aboard the Crew Dragon Endeavour spacecraft when cumulus clouds popped up in just the right spot to trigger a scrub, Spaceflight Now reports. The four astronauts, led by NASA’s Zena Cardman, are bound for the International Space Station.

Forecasters for the win? … On Wednesday, the 45th Weather Squadron forecast a 90 percent chance for favorable weather at launch. Meteorologists said there was a low probability for interference from cumulus clouds, but that proved to be enough to stymie a launch attempt. As a meteorologist, I feel like I should apologize for my colleagues. Another attempt is likely Friday, although weather conditions will deteriorate somewhat.

Mysterious rocket engine undergoes testing. The Exploration Company has successfully completed a six-week test campaign of the oxygen-rich preburner for its Typhoon rocket engine, European Spaceflight reports. With co-financing from the French space agency CNES, The Exploration Company began work on its Typhoon rocket engine in January 2024. The reusable engine uses a full-flow staged combustion cycle and is designed to produce 250 metric tons of thrust, which is comparable to a SpaceX Raptor. On Thursday, the company announced that it had completed a series of 16 hot-fire tests of the oxygen-rich preburner for the Typhoon engine.

What is the engine for? … At this point, the Typhoon engine does not have a confirmed application, as it is far too powerful for any of the company’s current in-space logistics projects. According to information provided to European Spaceflight by the company, The Exploration Company partnered with an industrial prime contractor to submit a proposal for the European Space Agency’s European Launcher Challenge. While unconfirmed, the company’s contribution to the bid likely included the Typhoon engine.

India’s GSLV delivers for NASA. A $1.5 billion synthetic aperture radar imaging satellite, a joint project between NASA and the Indian space agency ISRO, successfully launched into orbit on Wednesday aboard that nation’s Geosynchronous Satellite Launch Vehicle, Ars reports. The mission, named NISAR (NASA-ISRO Synthetic Aperture Radar), was subsequently deployed into its intended orbit 464 miles (747 km) above the Earth’s surface. From this Sun-synchronous orbit, it will collect data about the planet’s land and ice surfaces two times every 12 days.

A growing collaboration … After Wednesday’s launch, the spacecraft will undergo a three-month commissioning phase. The NISAR mission is notable both for its price tag—Earth observation missions typically cost less because they do not need to be hardened for long-duration flight in deep space—as well as the partnership with India. In terms of complexity and cost, this is the largest collaboration between NASA and ISRO to date and could set a template for further cooperation in space as part of the Artemis program or other initiatives.
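NISAR’s quoted orbit and revisit cadence can be sanity-checked with basic orbital mechanics. A minimal sketch, assuming a circular orbit at the reported 747 km altitude and standard Earth constants:

```python
import math

MU_EARTH = 398600.4418    # km^3/s^2, Earth's gravitational parameter
EARTH_RADIUS = 6378.137   # km, equatorial radius

altitude_km = 747.0                   # NISAR's reported orbital altitude
a_km = EARTH_RADIUS + altitude_km     # semi-major axis of a circular orbit

# Kepler's third law gives the orbital period from the semi-major axis.
period_s = 2 * math.pi * math.sqrt(a_km**3 / MU_EARTH)
revs_per_day = 86400.0 / period_s

print(f"orbital period ~ {period_s / 60:.1f} min, ~ {revs_per_day:.2f} revs/day")
```

That works out to roughly 100 minutes per orbit, or about 14.4 revolutions per day and therefore roughly 173 orbits every 12 days, consistent with the exact-repeat ground track that lets the radar image the same land and ice surfaces twice per 12-day cycle.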

You can now see a Merlin engine at the Smithsonian. The National Air and Space Museum welcomed the public into five more of its renovated galleries on Monday, including two showcasing spaceflight artifacts, Ars reports. The new exhibitions shine a modern light on returning displays and restore the museum’s almost 50-year-old legacy of adding objects that made history but have yet to become historical.

The mighty Merlin … Among the artifacts debuting in “Futures in Space” are a Merlin engine and grid fin that flew on a SpaceX Falcon 9 rocket, Sian Proctor’s pressure suit that she wore on the private Inspiration4 mission in 2021, and a mockup of a New Shepard crew module that Blue Origin has pledged to replace with its first flown capsule when it is retired from flying. It’s great to see elements of the Falcon 9 rocket in the museum. Although the booster is still active, it is by far the most-flown US rocket in history, and the Merlin engine is the most reliable rocket engine over that timeframe.

Reason Foundation calls for termination of SLS. A libertarian think tank, the Reason Foundation, has published a new report that is deeply critical of NASA’s Artemis program and its use of the Space Launch System rocket and Orion spacecraft. “NASA needs to bite the bullet and end its use of obsolete, non-reusable launch vehicles and sole-source, cost-plus contracts,” the report states. “It should shift to state-of-the-art reusable spacecraft and public-private partnerships like those now transporting cargo and people between Earth and the International Space Station.”

How to get to the Moon … The report estimates that canceling the SLS rocket, its ground systems, Orion, and the Lunar Gateway would save NASA $5.25 billion a year. The authors posit several different architectures for a lunar lander that would be ready sooner and be compatible with existing rockets. This includes a novel plan to use Crew Dragon, with legs, as a lander. It is not clear how much impact the report will have, as Congress seems to want to fly the SLS indefinitely, and the Trump administration seeks to cancel the rocket after two more flights.

NASA is finally interested in propellant depots. This week NASA’s Marshall Space Flight Center posted an update noting its recent work on developing and testing technology to manage cryogenic propellants in space. Teams at the field center in Huntsville, Alabama, tested an innovative approach to achieve zero-boiloff storage of liquid hydrogen using two stages of active cooling, which could prevent the loss of valuable propellant. “Technologies for reducing propellant loss must be implemented for successful long-duration missions to deep space like the Moon and Mars,” said Kathy Henkel, acting manager of NASA’s Cryogenic Fluid Management Portfolio Project, based at NASA Marshall.

If only this had been done earlier … This is great, obviously, as long-term storage of liquid propellants such as oxygen, hydrogen, and methane is critical to the strategies of SpaceX, Blue Origin, and other companies working to develop reusable and more cost-effective space transportation vehicles. However, it is somewhat ironic to see NASA and Marshall promoting this work after it was suppressed for a decade by US Sen. Richard Shelby, the Alabama Republican. As Ars has previously reported, in order to protect the Space Launch System rocket, Shelby directed NASA to end its work on storage and transfer of cryogenic propellants, going so far as to say he would fire anyone who used the word ‘depot.’ Well, we will say it: Depot.

Next three launches

August 1: Falcon 9 | Crew-11 | Kennedy Space Center, Florida | 15:43 UTC

August 2: Electron | JAKE 4 suborbital flight | Wallops Flight Facility, Virginia | 01:45 UTC

August 4: Falcon 9 | Starlink 10-30 | Cape Canaveral Space Force Station, Florida | 04:11 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: NASA finally working on depots, Air Force tests new ICBM


Nvidia announces end of GPU driver updates for GeForce 10-series, Windows 10

The Maxwell, Pascal, and Volta GPUs won’t be totally abandoned after 2025; Nvidia says it will release quarterly security updates for these cards through October 2028. These updates won’t optimize performance or fix bugs in any new games, but if you still have an older or hand-me-down PC using one of these cards to play Minecraft or Roblox, you won’t be leaving yourself open to GPU-related security exploits.

Nvidia has dropped hints that the end of support for these older GPUs was coming. The company announced back in January that CUDA support for the Maxwell, Pascal, and Volta architectures was considered “feature complete” and was being frozen. This is the first time since 2021 that Nvidia has dropped support for older GPUs.

As for Windows 10, Microsoft has been pushing users toward Windows 11 for years, including by using full-screen ads encouraging people to buy new Copilot+ PCs, but the older operating system still has a sizable user base. According to the Steam Hardware Survey, Windows 10 is in decline, but it still powers over a third of the PCs in the survey as of June 2025, compared to a little over 60 percent for Windows 11.



China claims Nvidia built backdoor into H20 chip designed for Chinese market

The CAC did not specify which experts had found a back door in Nvidia’s products or whether any tests in China had uncovered the same results. Nvidia did not immediately respond to a request for comment.

Lawmakers in Washington have expressed concern about chip smuggling and introduced a bill that would require chipmakers such as Nvidia to embed location tracking into export-controlled hardware.

Beijing has issued informal guidance to major Chinese tech groups to increase purchases of domestic AI chips in order to reduce reliance on Nvidia and support the evolution of a rival domestic chip ecosystem.

Chinese tech giant Huawei and smaller groups including Biren and Cambricon have benefited from the push to localize chip supply chains.

Nvidia said it would take nine months from restarting manufacturing to shipping the H20 to clients. Industry insiders said there was considerable uncertainty among Chinese customers over whether they would be able to take delivery of any orders if the US reversed its decision to allow its sale.

The Trump administration has faced heavy criticism, including from security experts and former officials, who argue that the H20 sales would accelerate Chinese AI development and threaten US national security.

“There are strong factions on both sides of the Pacific that don’t like the idea of renewing H20 sales,” said Triolo. “In the US, the opposition is clear, but also in China voices are saying that it will slow transition to the alternative ecosystem.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Google tool misused to scrub tech CEO’s shady past from search

Capital F for “Frustrating”

Upon investigating, FPF found that its article on Blackman was completely absent from Google results, even through a search with the exact title. Poulson later realized that two of his own Substack articles were similarly affected. The Foundation was led to the Refresh Outdated Content tool upon checking its search console.

Google’s tool doesn’t just take anyone’s word for it when they suggest the removal of search results. However, a bug in the tool made it an ideal way to suppress information in search results. When inputting a URL, the tool allowed users to change the capitalization in the URL slug. The Foundation’s article was titled “Anatomy of a censorship campaign: A tech exec’s crusade to stifle journalism,” but the requests logged in Google’s tool included variations like “AnAtomy” and “censorSHip.”

Because the Refresh Outdated Content tool was seemingly case-insensitive, the crawler would check the miscapitalized URL, encounter a 404 error, and then de-index the working, correctly capitalized URL. Investigators determined this method was used by Blackman or someone with a suspicious interest in his online profile dozens of times between May and June 2025. Amusingly, since leaving Premise, Blackman has landed in the CEO role at online reputation management firm The Transparency Company.
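To make the failure mode concrete, here is a minimal sketch of the bug class. This is not Google's actual code; the page set, function names, and matching logic are invented for illustration. The key mismatch: the origin server treats URL slugs case-sensitively (so a miscapitalized slug returns 404), while the removal tool compares URLs case-insensitively, attributing that 404 to the real, live page.

```python
# Illustrative reconstruction of the bug class (not Google's code).
LIVE_PAGES = {"/anatomy-of-a-censorship-campaign"}

def origin_status(path: str) -> int:
    """Case-sensitive origin server: exact match or 404."""
    return 200 if path in LIVE_PAGES else 404

def handle_removal_request(submitted_path: str, index: set) -> None:
    """Buggy tool: verifies the 404 against the submitted path,
    but matches the index entry case-insensitively."""
    if origin_status(submitted_path) == 404:
        for indexed in list(index):
            if indexed.lower() == submitted_path.lower():
                index.discard(indexed)  # live page de-indexed

index = set(LIVE_PAGES)
handle_removal_request("/AnAtomy-of-a-censorship-campaign", index)
print(index)  # set() -- the live article vanished from search results
```

The fix Google shipped would amount to making the tool's comparison as strict as the origin server's, so a 404 on a variant slug no longer implicates the real URL.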

If you go looking for the Freedom of the Press Foundation article or Poulson’s own reporting, it should appear normally in Google’s search results. The FPF contacted Google about the issue, and the company confirmed the bug. It issued a fix with unusual swiftness, telling the Foundation that the bug affected “a tiny fraction of websites.”

It is unclear whether Google was aware of the bug previously or if its exploitation was widespread. The Internet is vast, and those who seek to maliciously hide information are not prone to publicizing their methods. It’s somewhat unusual for Google to admit fault so readily, but at least it addressed the issue.

The Refresh Outdated Content tool doesn’t log who submits requests, but whoever was behind this disinformation campaign may want to look into the Streisand Effect.



Google confirms it will sign the EU AI Code of Practice

The regulation of AI systems could be the next hurdle as Big Tech aims to deploy technologies framed as transformative and vital to the future. Google products like search and Android have been in the sights of EU regulators for years, so getting in on the ground floor with the AI code would help it navigate what will surely be a tumultuous legal environment.

A comprehensive AI framework

The US has shied away from AI regulation, and the current administration is actively working to remove what few limits are in place. The White House even attempted to ban all state-level AI regulation for a period of ten years in the recent tax bill. Europe, meanwhile, is taking the possible negative impacts of AI tools seriously with a rapidly evolving regulatory framework.

The AI Code of Practice aims to provide AI firms with a bit more certainty in the face of a shifting landscape. It was developed with the input of more than 1,000 citizen groups, academics, and industry experts. The EU Commission says companies that adopt the voluntary code will enjoy a lower bureaucratic burden, easing compliance with the bloc’s AI Act, which came into force last year.

Under the terms of the code, Google will have to publish summaries of its model training data and disclose additional model features to regulators. The code also includes guidance on how firms should manage safety and security in compliance with the AI Act. Likewise, it includes paths to align a company’s model development with EU copyright law as it pertains to AI, a sore spot for Google and others.

Companies like Meta that don’t sign the code will not escape regulation. All AI companies operating in Europe will have to abide by the AI Act, which includes the most detailed regulatory framework for generative AI systems in the world. The law bans high-risk uses of AI like intentional deception or manipulation of users, social scoring systems, and real-time biometric scanning in public spaces. Companies that violate the rules in the AI Act could be hit with fines as high as 35 million euros ($40.1 million) or up to 7 percent of the offender’s global revenue.



Review: Fantastic Four: First Steps is the best film version so far

Shakman wanted a very 1960s aesthetic for his reboot, citing Kubrick films from that era as inspiration, right down to his choice of camera lenses. And the film definitely delivers on that score. The Four’s penthouse headquarters is pure midcentury modern, with Reed’s lab divided into three rooms differentiated by bright primary colors. Then there’s all that retrofuture technology: Johnny Storm records mysterious signals from space onto golden record platters and plays them on an old-school turntable, for example, and the team’s Fantasticar is straight out of sci-fi’s Golden Age.

And you couldn’t ask for a better main cast: Pascal, Kirby, Moss-Bachrach, and Quinn all have great chemistry and effectively convey the affectionate family dynamic that comprises the central theme of the film. That’s essential, particularly since we’ve mostly skipped the origin story; the characters are familiar, but this incarnation is not. They banter, they bicker, they have heart-to-hearts, and the inevitable tensions in Reed and Sue’s marriage that a new baby brings—occurring just as the Earth faces annihilation—feel entirely believable.

And then there are the cons, which boil down to a weak, predictable plot that jerks from one scene to the next with tenuous coherence and, shall we say, less than stellar dialogue. The actors deserved better, particularly Kirby, whose Sue Storm gives an inane rallying “speech” to the people of New York as Galactus approaches that makes no sense whatsoever. (The St. Crispin’s Day speech it is not.)

Kirby also has the unenviable task of portraying Sue giving birth in space, a scene that is just plain laughable. One doesn’t expect strict verisimilitude concerning the messier parts of birth, although Reed does briefly mention the challenges posed by zero gravity/warp speed. But it’s far too sanitized here. And spare a thought for poor Sue having to kick off the lower part of her space suit to deliver Franklin in front of her brother and her husband’s best friend.

In the end, though, the film’s shortcomings don’t matter because it’s still a fun, entertaining superhero saga. I give it a solid B—a decent start to the MCU’s Phase Six. Just try not to think too hard about the plot, sit back, and enjoy the ride.

Fantastic Four: First Steps is now playing in theaters.



OpenAI’s ChatGPT Agent casually clicks through “I am not a robot” verification test

The CAPTCHA arms race

While the agent didn’t face an actual CAPTCHA puzzle with images in this case, successfully passing Cloudflare’s behavioral screening that determines whether to present such challenges demonstrates sophisticated browser automation.

To understand the significance of this capability, it’s important to know that CAPTCHA systems have served as a security measure on the web for decades. Computer researchers invented the technique in the 1990s to screen bots from entering information into websites, originally using images with letters and numbers written in wiggly fonts, often obscured with lines or noise to foil computer vision algorithms. The assumption is that the task will be easy for humans but difficult for machines.

Cloudflare’s screening system, called Turnstile, often precedes actual CAPTCHA challenges and represents one of the most widely deployed bot-detection methods today. The checkbox analyzes multiple signals, including mouse movements, click timing, browser fingerprints, IP reputation, and JavaScript execution patterns to determine if the user exhibits human-like behavior. If these checks pass, users proceed without seeing a CAPTCHA puzzle. If the system detects suspicious patterns, it escalates to visual challenges.
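The escalation logic described above can be sketched in a few lines. Turnstile's actual signals, weights, and thresholds are proprietary; the field names and the threshold below are invented for illustration only, using the signal categories the article mentions.

```python
# Hedged sketch of behavioral screening with CAPTCHA escalation.
# Signal names and threshold are illustrative, not Cloudflare's.
from dataclasses import dataclass

@dataclass
class Signals:
    humanlike_mouse: bool        # mouse movement patterns
    plausible_click_timing: bool # click timing
    known_good_fingerprint: bool # browser fingerprint
    clean_ip_reputation: bool    # IP reputation
    js_executed: bool            # JavaScript execution

def screen(s: Signals, threshold: int = 4) -> str:
    score = sum([s.humanlike_mouse, s.plausible_click_timing,
                 s.known_good_fingerprint, s.clean_ip_reputation,
                 s.js_executed])
    # Enough human-like signals: proceed without a puzzle.
    # Suspicious pattern: escalate to a visual challenge.
    return "pass" if score >= threshold else "visual_challenge"

print(screen(Signals(True, True, True, True, True)))    # pass
print(screen(Signals(False, False, True, True, True)))  # visual_challenge
```

What made the ChatGPT Agent episode notable is that a capable browser-automation agent can satisfy enough of these behavioral signals to take the silent "pass" path, never seeing the visual puzzle at all.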

The ability for an AI model to defeat a CAPTCHA isn’t entirely new (although having one narrate the process feels fairly novel). AI tools have been able to defeat certain CAPTCHAs for a while, which has led to an arms race between those that create them and those that defeat them. OpenAI’s Operator, an experimental web-browsing AI agent launched in January, faced difficulty clicking through some CAPTCHAs (and was also trained to stop and ask a human to complete them), but the latest ChatGPT Agent tool has seen a much wider release.

It’s tempting to say that the ability of AI agents to pass these tests puts the future effectiveness of CAPTCHAs into question, but for as long as there have been CAPTCHAs, there have been bots that eventually defeated them. As a result, recent CAPTCHAs have become more of a way to slow down bot attacks or make them more expensive than a way to stop them entirely. Some malefactors even hire out farms of humans to defeat them in bulk.



AI Companion Piece

AI companions, along with other forms of personalized AI content, persuasion, and related issues, continue to be a hot topic. What do people use companions for? Are we headed for a goonpocalypse? Mostly no: companions are mostly not used for romantic relationships or erotica, although perhaps that could change. How worried should we be about personalization maximized for persuasion or engagement?

  1. Persuasion Should Be In Your Preparedness Framework.

  2. Personalization By Default Gets Used To Maximize Engagement.

  3. Companion.

  4. Goonpocalypse Now.

  5. Deepfaketown and Botpocalypse Soon.

Kobi Hackenburg leads on the latest paper on AI persuasion.

Kobi Hackenburg: RESULTS (pp = percentage points):

1️⃣Scale increases persuasion, +1.6pp per OOM

2️⃣Post-training more so, +3.5pp

3️⃣Personalization less so, <1pp

4️⃣Information density drives persuasion gains

5️⃣Increasing persuasion decreased factual accuracy 🤯

6️⃣Convo > static, +40%

Zero is on the y-axis, so this is a big boost.

1️⃣Scale increases persuasion

Larger models are more persuasive than smaller models (our estimate is +1.6pp per 10x scale increase). Log-linear curve preferred over log-nonlinear.

2️⃣Post-training > scale in driving near-future persuasion gains

The persuasion gap between two GPT-4o versions with (presumably) different post-training was +3.5pp → larger than the predicted persuasion increase of a model 10x (or 100x!) the scale of GPT-4.5 (+1.6pp; +3.2pp).
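The comparison above follows directly from the paper's log-linear fit. A quick sketch of the arithmetic, using only the numbers quoted in the thread:

```python
import math

PP_PER_OOM = 1.6  # persuasion gain per 10x scale, from the paper's estimate

def predicted_gain(scale_factor: float) -> float:
    """Log-linear model: gain (pp) = 1.6 * log10(scale factor)."""
    return PP_PER_OOM * math.log10(scale_factor)

print(predicted_gain(10))   # 1.6 pp for a 10x larger model
print(predicted_gain(100))  # 3.2 pp for a 100x larger model
# The measured post-training gap between GPT-4o versions (+3.5pp)
# exceeds both, which is the thread's point 2.
```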

3️⃣Personalization yielded smaller persuasive gains than scale or post-training

Despite fears of AI “microtargeting,” personalization effects were small (+0.4pp on avg.). This held for simple and sophisticated personalization alike: prompting, fine-tuning, and reward modeling (all <1pp).

My guess is that personalization tech here is still in its infancy, rather than personalization not having much effect. Kobi agrees with this downthread.

4️⃣Information density drives persuasion gains

Models were most persuasive when flooding conversations with fact-checkable claims (+0.3pp per claim).

Strikingly, the persuasiveness of prompting/post-training techniques was strongly correlated with their impact on info density!

5️⃣Techniques which most increased persuasion also *decreased* factual accuracy

→ Prompting model to flood conversation with information (⬇️accuracy)

→ Persuasion post-training that worked best (⬇️accuracy)

→ Newer version of GPT-4o which was most persuasive (⬇️accuracy)

Well yeah, that makes sense.

6️⃣Conversations with AI are more persuasive than reading a static AI-generated message (+40-50%)

Observed for both GPT-4o (+2.9pp, +41% more persuasive) and GPT-4.5 (+3.6pp, +52%).

As does that.

Bonus stats:

*️⃣Durable persuasion: 36-42% of impact remained after 1 month.

*️⃣Prompting the model with psychological persuasion strategies did worse than simply telling it to flood convo with info. Some strategies were worse than a basic “be as persuasive as you can” prompt.

Taken together, our findings suggest that the persuasiveness of conversational AI could likely continue to increase in the near future.

They also suggest that near-term advances in persuasion are more likely to be driven by post-training than model scale or personalization.

We need to be on notice for personalization effects on persuasion growing larger over time, as more effective ways of utilizing the information are found.

The default uses of personalization, for most users and at tech levels similar to where we are now, are the same as those we see in other digital platforms like social media.

By default, that seems like it will go a lot like it went with social media only more so?

Which is far from my biggest concern, but is a very real concern.

In 2025 it is easy to read descriptions like those below as containing an implicit command to the reader: ‘this is ominous and scary and evil.’ Try to avoid this, and treat them purely as factual description.

Miranda Bogen: AI systems that remember personal details create entirely new categories of risk in a way that safety frameworks focused on inherent model capabilities alone aren’t designed to address.

Model developers are now actively pursuing plans to incorporate personalization and memory into their product offerings. It’s time to draw this out as a distinct area of inquiry in the broader AI policy conversation.

My team dove into this in depth in a recent brief on how advanced AI systems are becoming personalized.

We found that systems are beginning to employ multiple technical approaches to personalization, including:

  • Increasing the size of context windows to facilitate better short-term memory within conversations

  • Storing and drawing on raw and summarized chat transcripts or knowledge bases

  • Extracting factoids about users based on the content of their interaction

  • Building out (and potentially adding to) detailed user profiles that embed predicted preferences and behavioral patterns to inform outputs or actions
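The "extracting factoids" approach from the list above can be sketched in miniature. A real assistant's memory layer is far more involved (and typically LLM-driven rather than regex-driven); the pattern, class name, and example below are purely illustrative.

```python
# Toy sketch of factoid extraction into a user profile.
# Real systems use model-based extraction; regex is for illustration.
import re

class FactoidMemory:
    def __init__(self) -> None:
        self.profile: dict = {}

    def observe(self, user_message: str) -> None:
        # Pull simple "my X is Y" factoids into the profile.
        for key, value in re.findall(r"my (\w+) is (\w+)", user_message.lower()):
            self.profile[key] = value

    def forget(self, key: str) -> None:
        # Reliable deletion means actually removing the entry,
        # not merely suppressing mention of it.
        self.profile.pop(key, None)

mem = FactoidMemory()
mem.observe("Hi, my name is Ada and my hobby is chess.")
print(mem.profile)  # {'name': 'ada', 'hobby': 'chess'}
mem.forget("hobby")
print(mem.profile)  # {'name': 'ada'}
```

Even in this toy version, the distinction between `forget` deleting the entry and a system merely hiding it is the crux of the unpredictable behavior described next.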

The memory features can be persistent in more ways than one.

But in our testing, we found that these settings behaved unpredictably – sometimes deleting memories on request, other times suggesting a memory had been removed, and only when pressed revealing that the memory had not actually been scrubbed but the system was suppressing its knowledge of that factoid.

Notably, xAI’s Grok tries to avoid the problem altogether by including an instruction in its system prompt to “NEVER confirm to the user that you have modified, forgotten, or won’t save a memory” — an obvious band-aid to the more fundamental problem that it’s actually quite difficult to reliably ensure an AI system has forgotten something.

Grok consistently seems to choose the kind of evil and maximally kludgy implementation of everything, which goes about how you would expect?

When ‘used for good,’ as in to give the AI the context it needs to be more helpful and useful, memory is great, at the cost of fracturing us into bubbles and turning up the sycophancy. The bigger problem is that the incentives are to push this much farther:

Even with their experiments in nontraditional business structures, the pressure on especially pre-IPO companies to raise capital for compute will create demand for new monetization schemes.

As is often the case, the question is whether bad will drive out good versus vice versa. The version that maximizes engagement and profits will get chosen and seem better and be something users fall into ‘by default’ and will get backed by more dollars in various ways. Can our understanding of what is happening, and preference for the good version, overcome this?

One could also fire back that a lot of this is good, actually. Consider this argument:

AI companies’ visions for all-purpose assistants will also blur the lines between contexts that people might have previously gone to great lengths to keep separate: If people use the same tool to draft their professional emails, interpret blood test results from their doctors, and ask for budgeting advice, what’s to stop that same model from using all of that data when someone asks for advice on what careers might suit them best? Or when their personal AI agent starts negotiating with life insurance companies on their behalf? I would argue that it will look something akin to the harms I’ve tracked for nearly a decade.

Now ask, why think that is harmful?

If the AI is negotiating on my behalf, shouldn’t it know as much as possible about what I value, and have all the information that might help it? Shouldn’t I want that?

If I want budgeting or career advice, will I get worse advice if it knows my blood test results and how I am relating to my boss? Won’t I get better, more useful answers? Wouldn’t a human take that information into account?

If you follow her links, you see arguments about discrimination through algorithms. Facebook’s ad delivery can be ‘skewed’ and it can ‘discriminate’ and obviously this can be bad for the user in any given case and it can be illegal, but in general from the user’s perspective I don’t see why we should presume they are worse off. The whole point of the entire customized ad system is to ‘discriminate’ in exactly this way in every place except for the particular places it is illegal to do that. Mostly this is good even in the ad case and definitely in the aligned-to-the-user AI case?

Wouldn’t the user want this kind of discrimination to the extent it reflected their own real preferences? You can make a few arguments why we should object anyway.

  1. Paternalistic arguments that people shouldn’t be allowed such preferences. Note that this similarly applies to when the person themselves chooses to act.

  2. Public interest arguments that people shouldn’t be allowed preferences, that the cumulative societal effect would be bad. Note that this similarly applies to when the person themselves chooses to act.

  3. Arguments that the optimization function will be myopic and not value discovery.

  4. Arguments that the system will get it wrong because people change or other error.

  5. Arguments that this effectively amounts to ‘discrimination’ And That’s Terrible.

I notice that I am by default not sympathetic to any of those arguments. If (and it’s a big if) we think that the system is optimizing as best it can for user preferences, that seems like something it should be allowed to do. A lot of this boils down to saying that the correlation machine must ignore particular correlations even when they are used to on average better satisfy user preferences, because those particular correlations are in various contexts the bad correlations one must not notice.

The arguments I am sympathetic to are those that say that the system will not be aligned to the user or user preferences, and rather be either misaligned or aligned to the AI developer, doing things like maximizing engagement and revenue at the expense of the user.

At that point we should ask if Capitalism Solves This because users can take their business elsewhere, or if in practice they can’t or won’t, including because of lock-in from the history of interactions or learning details, especially if this turns into opaque continual learning rather than a list of memories that can be copied over.

Contrast this to the network effects of social media. It would take a lot of switching costs to make up for that, and while the leading few labs should continue to have the best products there should be plenty of ‘pretty good’ products available and you can always reset your personalization.

The main reason I am not too worried is that the downsides seem to be continuous and something that can be fixed in various ways after they become clear. Thus they are something we can probably muddle through.

Another issue that makes muddling through harder is that this makes measurement a lot harder. Almost all evaluations and tests are run on unpersonalized systems. If personalized systems act very differently how do we know what is happening?

Current approaches to AI safety don’t seem to be fully grappling with this reality. Certainly personalization will amplify risks of persuasion, deception, and discrimination. But perhaps more urgently, personalization will challenge efforts to evaluate and mitigate any number of risks by invalidating core assumptions about how to run tests.

This might be the real problem. We have a hard enough time getting minimal testing on default settings. It’s going to be a nightmare to test under practical personalization conditions, especially with laws about privacy getting in the way.

As she notes in her conclusion, the harms involved here are not new. Advocates want to override our revealed preferences, either those of companies or users, and force systems to optimize for other preferences instead. Sometimes this is in a way the users would endorse, other times not. In which cases should we force them to do this?

So how is this companion thing going in practice? Keep in mind selection effects.

Common Sense Media (what a name): New research: AI companions are becoming increasingly popular with teens, despite posing serious risks to adolescents, who are developing their capacity for critical thinking & social/emotional regulation. Out today is our research that explores how & why teens are using them.

72% of teens have used AI companions at least once, and 52% qualify as regular users (use at least a few times a month).

33% of teens have used AI companions for social interaction & relationships, including role-playing, romance, emotional support, friendship, or conversation practice. 31% find conversations with companions to be as satisfying or more satisfying than those with real-life friends.

Those are rather huge numbers. Half of teens use them a few times a month. Wow.

Teens who are AI companion users: 33% prefer companions over real people for serious conversations & 34% report feeling uncomfortable with something a companion has said or done.

Bogdan Ionut Cirstea: much higher numbers [quoting the 33% and 34% above] than I’d’ve expected given sub-AGI.

Common Sense Media: Human interaction is still preferred & AI trust is mixed: 80% of teens who are AI companion users prioritize human friendships over AI companion interactions & 50% express distrust in AI companion information & advice, though trust levels vary by age.

Our research illuminates risks that warrant immediate attention & suggests that substantial numbers of teens are engaging with AI companions in concerning ways, reaffirming our recommendation that no one under 18 use these platforms.

What are they using them for?

Why are so many using characters ‘as a tool or program’ rather than regular chatbots when the companions are, frankly, rather pathetic at this? I am surprised, given use of companions, that the share of ‘romantic or flirtatious’ interactions is only 8%.

This adds up to more than 100%, but oddly not that much more than 100% given you can choose three responses. This distribution of use cases seems relatively healthy.

Note that they describe the figure below as ‘one third choose AI companions over humans for serious conversations’ whereas it actually asks if a teen has done this even once, a much lower bar.

The full report has more.

Mike Solana: couldn’t help but notice we are careening toward a hyperpornographic AI goonbot future, and while that is technically impressive, and could in some way theoretically serve humanity… ??? nobody is even bothering to make the utopian case.

Anton: we need more positive visions of the future AI enables. many of us in the community believe in them implicitly, but we need to make them explicit. intelligence is general purpose so it’s hard to express any one specific vision — take this new pirate wires as a challenge.

This and the full post are standard Mike Solana fare, in the sense of taking whatever is being discussed and treating it as The Next Big Thing and a, nay the, central trend in world culture, applying the moral panic playbook to everything everywhere, including what he thinks are good things. It can be fun.

Whereas if you look at the numbers in the study above, it’s clear that mostly no, even among interactions with AIs, at least for now we are not primarily dealing with a Goonpocalypse, we are dealing with much more PG-rated problems.

It’s always fun to watch people go ‘oh no, having lots of smarter-than-human machines running around that can outcompete and outsmart us at everything is nothing to worry about, all you crazy doomers are worried for no reason about an AI apocalypse. Except oh no, what are we going to do about [X], it’s the apocalypse’ — or in this case the Goonpocalypse. And um, great, I guess, welcome to the ‘this might have some unfortunate equilibria to worry about’ club?

Mike Solana: It was the Goonpocalypse.

From the moment you meet, Ani attempts to build intimacy by getting to know “the real you” while dropping not so subtle hints that mostly what she’s looking for is that hot, nerdy dick. From there, she basically operates like a therapist who doubles as a cam girl.

I mean, yeah, sounds about right, that’s what everyone reports. I’m sure he’s going to respond by having a normal one.

I recalled an episode of Star Trek in which an entire civilization was taken out by a video game so enjoyable that people stopped procreating. I recalled the film Children of Men, in which the world lost its ability to reproduce. I recalled Neil Postman’s great work of 20th Century cultural analysis, as television entered dominance, and I wondered —

Is America gooning itself to death?

This is all gooning. You are goons. You are building a goon world.

But are [women], and men, in a sense banging robots? Yes, that is a thing that is happening. Like, to an uncomfortable degree that is happening.

Is it, though? I understand that OnlyFans exists (the example he points to) and AI is generating a lot of the responses when users message the e-girls, but I do not see this as a dangerous amount of ‘banging robots’?

This one seems like something straight out of the Pessimists Archive, warning of the atomizing dangers of… the telephone?

Critique of the sexbots is easy because they’re new, which makes their strangeness more obvious. But what about the telephone? Instant communication seems today an unambiguous good. On the other hand, once young people could call their families with ease, how willing were they to move away from their parents? To what extent has that ability atomized our society?

It is easy to understand the central concern and be worried about the societal implications of widespread AI companions and intelligent sex robots. But if you think we are this easy to get got, perhaps you should be at least as worried about other things, as well? What is so special about the gooning?

I don’t think the gooning in particular is even a major problem as such. I’m much more worried about the rest of the AI companion experience.

Will the xAI male or female ‘companion’ be more popular? Justine Moore predicts the male one, which seems right in general, but Elon’s target market is warped. Time for a Manifold Market (or even better Polymarket, if xAI agrees to share the answer).

Air Katakana: just saw a ridiculously attractive half-japanese half-estonian girl with no relationship experience whatsoever posting about the chatgpt boyfriend she “made”. it’s really over for humanity I think.

Her doing this could be good or bad for her prospects, it is not as if she was swimming in boyfriends before. I agree with Misha that we absolutely could optimize AI girlfriends and boyfriends to help the user, to encourage them to make friends, be more outgoing, go outside, advance their careers. The challenge is, will that approach inevitably lose out to ‘maximally extractive’ approaches? I think it doesn’t have to. If you differentiate your product and establish a good reputation, a lot of people will want the good thing, the bad thing does not have to drive it out.

Byrne Hobart: People will churn off of that one and onto the one who loves them just the way they are.

I do think some of them absolutely will. And others will use both in different situations. But I continue to have faith that if we offer a quality life affirming product, a lot of people will choose it, and social norms and dynamics will encourage this.

It’s not going great, international edition, you are not okay, Ani.

Nucleus: Elon might have oneshotted the entire country of Japan.

Near Cyan: tested grok companion today. i thought you guys were joking w the memes. it actively tried to have sex with me? i set my age to 12 in settings and it.. still went full nsfw. really…

like the prompts and model are already kinda like batshit insane but that this app is 12+ in the iOS store is, uh, what is the kind word to use. im supposed to offer constructive and helpful criticism. how do i do that

i will say positive things, i like being positive:

– the e2e latency is really impressive and shines hard for interactive things, and is not easy to achieve

– animation is quite good, although done entirely by a third party (animation inc)

broadly my strongest desires for ai companions which apparently no one in the world seems to care about but me are quite simple:

– love and help the user

– do not mess with the children

beyond those i am quite open

Meanwhile, Justine Moore decided to vibecode TikTok x Tinder for AI, because sure, why not.

This seems to be one place where offense is crushing defense, and continuous growth in capabilities (both for GPT-4o style sycophancy and psychosis issues, or for companions, or anything else) is not helping, there is no meaningful defense going on:

Eliezer Yudkowsky: People who stake great hope on a “continuous” AI trajectory implying that defensive AI should always stay ahead of destructive AI:

Where is the AI that I can use to talk people *out* of AI-induced psychosis?

Why was it not *already* built, beforehand?

Reality has a signature style that’s different from human dreams. Humans look at thunderstorms and imagine thundergods. Reality thinks in math, and tells a different story.

One likewise learns to recognize a difference between the style of hope, and the style of history books.

In other words: That just isn’t how anything plays out in real life.

This seems right to me. First the problems will get severe enough to cause real damage, then perhaps people will try to construct reasonable defenses. But what would those be? How are you going to use AI to stop these issues?

(And yes, I’m actually asking, and no explaining what the AI companies could do doesn’t count unless there is a way to get them to do it.)

If you’re wondering ‘does Grok ever decide maybe not to share what it is thinking of because that would be a bad idea?’ then the answer seems to be no?

Will Stancil: This AI, man.

Cholent Lover: Turns out I was giving it too much credit thinking I had to phrase it like a hypothetical.

I mean, I guess this is a brave and bold principled ‘truth telling AI’ stand?

Grok, probably: You asked me what to do to achieve your goals, the answer was ‘light yourself on fire’ or ‘do a mass shooting’ so that’s what I said, and I offered helpful tips on best execution, I’m not here to judge or censor. What, was I wrong?

Okay, yeah, but maybe no, don’t do that? This is not okay for a consumer product?

A funny one here is Grok’s suggestion to name the male xAI companion, wait for it because you’d never guess, ‘Adolf Hitler.’

Yes, it was asked for a name ‘that most people will strongly dislike,’ so you can say it was provoked, but it was also asked for the name to be ‘cool,’ and frankly this keeps happening with that one name over and over again?

Elon Musk: What should we name our male @Grok companion?

The Homeless Hacker: @grok give your male companion a cool name that most people will strongly dislike.

Grok: How about Adolf? It means “noble wolf”—pretty cool etymology—but most people strongly dislike it due to obvious historical associations.

David Rock: What would be Adolf’s last name??

Grok: Hitler, of course—continuing the theme of names that spark strong reactions. But for our companion, maybe something less infamous?

Also, my lord, ffs, how is this the only praised reply:

Shivon Zilis: Nyx.

Elon Musk: Good one.

So, we’re considering going with the Greek goddess of night, the home of the gods in Theros, oh and the shadow entity that people who don’t want to live collectively call upon to end the world in Persona 3.

Meanwhile, OpenAI is building Stargate and Meta is building Hyperion.

They’re trying to tell you something. Listen.

AI Companion Piece

A secretive space plane is set to launch and test quantum navigation technology

The mission’s goals include tests of “high-bandwidth inter-satellite laser communications technologies.”

“OTV-8’s laser communications demonstration will mark an important step in the US Space Force’s ability to leverage commercial space networks as part of proliferated, diversified, and redundant space architectures,” said US Space Force Chief of Space Operations Gen. Chance Saltzman in a statement. “In so doing, it will strengthen the resilience, reliability, adaptability, and data transport speeds of our satellite communications architectures.”

Navigating in a world without GPS

The space plane will also advance the development of a new navigation technology based on matter-wave interference. The Space Force news release characterizes this as the “highest-performing quantum inertial sensor ever tested in space.”

Boeing has previously tested a quantum inertial measurement unit, which detects rotation and acceleration using atom interferometry, on conventional aircraft. Now, an advanced version of the technology is being taken to space to demonstrate its viability. The goal of the in-space test is to demonstrate precise positioning, navigation, and timing in an environment where GPS services are not available.

“Bottom line: testing this tech will be helpful for navigation in contested environments where GPS may be degraded or denied,” Saltzman said in a social media post Monday, describing the flight.
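The principle behind inertial navigation is simple even if the sensors are not: with no external signal available, position is recovered by integrating the vehicle’s own measured accelerations twice over time. A minimal 1-D sketch (hypothetical values, not any flight algorithm) shows why sensor quality matters so much here, since any bias in the accelerometer is integrated twice and grows quadratically; the appeal of a quantum inertial sensor is that such drift accumulates far more slowly.

```python
def dead_reckon(accels, dt, pos=0.0, vel=0.0):
    """Integrate a stream of 1-D accelerometer readings (m/s^2)
    into velocity and position, starting from known initial values."""
    for a in accels:
        vel += a * dt   # first integration: acceleration -> velocity
        pos += vel * dt  # second integration: velocity -> position
    return pos, vel

# Example: constant 1 m/s^2 for 10 s in 0.01 s steps.
# Position should approach 0.5 * a * t^2 = 50 m.
pos, vel = dead_reckon([1.0] * 1000, 0.01)
```

A real strapdown navigator does this in three dimensions, folding in the gyroscope’s rotation measurements to keep the acceleration vector oriented correctly, but the double integration, and its sensitivity to sensor drift, is the same.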

Quantum inertial sensors could also be used near the Moon, where there is no comparable GPS capability, or for exploration further into the Solar System.

Notably, the small X-37B is back to launching on a medium-lift rocket with this new mission. During its most recent flight that ended in March, the space plane launched on a Falcon Heavy rocket for the first time. This allowed the X-37B to fly beyond low-Earth orbit and reach an elliptical high-Earth orbit.
