

Blue Origin’s New Glenn rocket came back home after taking aim at Mars


“Never before in history has a booster this large nailed the landing on the second try.”

Blue Origin’s 320-foot-tall (98-meter) New Glenn rocket lifts off from Cape Canaveral Space Force Station, Florida. Credit: Blue Origin

The rocket company founded a quarter-century ago by billionaire Jeff Bezos made history Thursday with the pinpoint landing of an 18-story-tall rocket on a floating platform in the Atlantic Ocean.

The on-target touchdown came nine minutes after the New Glenn rocket, built and operated by Bezos’ company Blue Origin, lifted off from Cape Canaveral Space Force Station, Florida, at 3:55 pm EST (20:55 UTC). The launch was delayed from Sunday, first due to poor weather at the launch site in Florida, then by a solar storm that sent hazardous radiation toward Earth earlier this week.

“We achieved full mission success today, and I am so proud of the team,” said Dave Limp, CEO of Blue Origin. “It turns out Never Tell Me The Odds (Blue Origin’s nickname for the first stage) had perfect odds—never before in history has a booster this large nailed the landing on the second try. This is just the beginning as we rapidly scale our flight cadence and continue delivering for our customers.”

The two-stage launcher set off for space carrying two NASA science probes on a two-year journey to Mars, marking the first time any operational satellites flew on Blue Origin’s new rocket, named for the late NASA astronaut John Glenn. The New Glenn hit its marks on the climb into space, firing seven BE-4 main engines for nearly three minutes on a smooth ascent through blue skies over Florida’s Space Coast.

Seven BE-4 engines power New Glenn downrange from Florida’s Space Coast. Credit: Blue Origin

The engines consumed super-cold liquefied natural gas and liquid oxygen, producing more than 3.8 million pounds of thrust at full power. The BE-4s shut down, and the first stage booster released the rocket’s second stage, with dual hydrogen-fueled BE-3U engines, to continue the mission into orbit.

The booster soared to an altitude of 79 miles (127 kilometers), then began a controlled plunge back into the atmosphere, targeting a landing on Blue Origin’s offshore recovery vessel named Jacklyn. Three of the booster’s engines reignited to slow its descent in the upper atmosphere. Then, just before reaching the Atlantic, the rocket again lit three engines and extended its landing gear, sinking through low-level clouds before settling onto the football-field-size deck of Blue Origin’s recovery platform 375 miles (600 kilometers) east of Cape Canaveral.

A pivotal moment

The moment of touchdown appeared electric at several Blue Origin facilities around the country, which had live views of cheering employees piped into the company’s webcast of the flight. This was the first time any company besides SpaceX had propulsively landed an orbital-class rocket booster, coming nearly 10 years after SpaceX recovered its first Falcon 9 booster intact in December 2015.

Blue Origin’s New Glenn landing also came almost exactly a decade after the company landed its smaller suborbital New Shepard rocket for the first time in West Texas. As with Thursday’s New Glenn landing, Blue Origin recovered the New Shepard on its second-ever attempt.

Blue Origin’s heavy-lifter launched successfully for the first time in January. But technical problems prevented the booster from restarting its engines on descent, and the first stage crashed at sea. Engineers made “propellant management and engine bleed control improvements” to resolve the problems, and the fixes appeared to work Thursday.

The rocket recovery is a remarkable achievement for Blue Origin, which has long lagged dominant SpaceX in the commercial launch business. SpaceX has now logged 532 landings with its Falcon booster fleet. Now, with just a single recovery in the books, Blue Origin sits second in the rankings for propulsive landings of orbital-class boosters. Bezos’ company has amassed 34 landings of the suborbital New Shepard, a far smaller vehicle that reaches neither the altitude nor the speed of the New Glenn booster.

Blue Origin landed a New Shepard returning from space for the first time in November 2015, a few weeks before SpaceX first recovered a Falcon 9 booster. Bezos threw shade on SpaceX with a post on Twitter, now called X, after the first Falcon 9 landing: “Welcome to the club!”

Jeff Bezos, Blue Origin’s founder and owner, wrote this message on Twitter following SpaceX’s first Falcon 9 landing on December 21, 2015. Credit: X/Jeff Bezos

Finally, after Thursday, Blue Origin officials can say they are part of the same reusable rocket club as SpaceX. Within a few days, Blue Origin’s recovery vessel is expected to return to Port Canaveral, Florida, where ground crews will offload the New Glenn booster and move it to a hangar for inspections and refurbishment.

“Today was a tremendous achievement for the New Glenn team, opening a new era for Blue Origin and the industry as we look to launch, land, repeat, again and again,” said Jordan Charles, the company’s vice president for the New Glenn program, in a statement. “We’ve made significant progress on manufacturing at rate and building ahead of need. Our primary focus remains focused on increasing our cadence and working through our manifest.”

Blue Origin plans to reuse the same booster next year for the first launch of the company’s Blue Moon Mark 1 lunar cargo lander. This mission is currently penciled in to be next on Blue Origin’s New Glenn launch schedule. Eventually, the company plans to have a fleet of reusable boosters, like SpaceX has with the Falcon 9, that can each be flown up to 25 times.

New Glenn is a core element in Blue Origin’s architecture for NASA’s Artemis lunar program. The rocket will eventually launch human-rated landers that will carry astronauts to and from the lunar surface.

The US Space Force will also examine the results of Thursday’s launch to assess New Glenn’s readiness to begin launching military satellites. The military selected Blue Origin last year to join SpaceX and United Launch Alliance as a third launch provider for the Defense Department.

Blue Origin’s New Glenn booster, 23 feet (7 meters) in diameter, on the deck of the company’s landing platform in the Atlantic Ocean.

Slow train to Mars

The mission wasn’t over with the buoyant landing in the Atlantic. New Glenn’s second stage fired its engines twice to propel itself on a course toward deep space, setting up for deployment of NASA’s two ESCAPADE satellites a little more than a half-hour after liftoff.

The identical satellites were released from their mounts on top of the rocket to begin their nearly two-year journey to Mars, where they will enter orbit to survey how the solar wind interacts with the rarefied uppermost layers of the red planet’s atmosphere. Scientists believe radiation from the Sun gradually stripped away Mars’ atmosphere, driving runaway climate change that transitioned the planet from a warm, habitable world to the global inhospitable desert seen today.

“I’m both elated and relieved to see NASA’s ESCAPADE spacecraft healthy post-launch and looking forward to the next chapter of their journey to help us understand Mars’ dynamic space weather environment,” said Rob Lillis, the mission’s principal investigator from the University of California, Berkeley.

Scientists want to understand the environment at the top of the Martian atmosphere to learn more about what drove this change. With two instrumented spacecraft, ESCAPADE will gather data from different locations around Mars, providing a series of multipoint snapshots of solar wind and atmospheric conditions. Another NASA spacecraft, named MAVEN, has collected similar data since arriving in orbit around Mars in 2014, but it is only a single observation post.

ESCAPADE, short for Escape and Plasma Acceleration and Dynamics Explorers, was developed and launched on a budget of about $80 million, a bargain compared to all of NASA’s recent Mars missions. The spacecraft were built by Rocket Lab, and the project is managed on behalf of NASA by the University of California, Berkeley.

The two spacecraft for NASA’s ESCAPADE mission at Rocket Lab’s factory in Long Beach, California. Credit: Rocket Lab

NASA paid Blue Origin about $20 million for the launch of ESCAPADE, significantly less than it would have cost to launch it on any other dedicated rocket. The space agency accepted the risk of launching on the relatively unproven New Glenn rocket, which hasn’t yet been certified by NASA or the Space Force for the government’s marquee space missions.

The mission was supposed to launch last year, when Earth and Mars were in the right positions to enable a direct trip between the planets. But Blue Origin delayed the launch, forcing a yearlong wait until the company’s second New Glenn was ready to fly. Now, the ESCAPADE satellites, each about a half-ton in mass fully fueled, will loiter in a unique orbit more than a million miles from Earth until next November, when they will set off for the red planet. ESCAPADE will arrive at Mars in September 2027 and begin its science mission in 2028.
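The yearlong slip matters because direct Earth-to-Mars launch windows recur only about every 26 months, the planets’ synodic period. A quick back-of-the-envelope check (the function name is ours, not from the mission; inputs are the planets’ sidereal orbital periods):

```python
# Launch windows between two planets recur once per synodic period:
#   1 / (1/T_inner - 1/T_outer), with orbital periods in days.
def synodic_period_days(t_inner: float, t_outer: float) -> float:
    """Time between successive identical Sun-planet-planet alignments."""
    return 1.0 / (1.0 / t_inner - 1.0 / t_outer)

T_EARTH = 365.25  # days, sidereal year
T_MARS = 686.98   # days, Mars sidereal orbital period

window_gap = synodic_period_days(T_EARTH, T_MARS)
print(f"{window_gap:.0f} days (~{window_gap / 30.44:.0f} months)")  # ~780 days, ~26 months
```

Missing the 2024 window thus meant waiting for the geometry to come back around, which is why ESCAPADE will loiter far from Earth before departing next November.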

Rocket Lab ground controllers established communication with the ESCAPADE satellites late Thursday night.

“The ESCAPADE mission is part of our strategy to understand Mars’ past and present so we can send the first astronauts there safely,” said Nicky Fox, associate administrator of NASA’s Science Mission Directorate. “Understanding Martian space weather is a top priority for future missions because it helps us protect systems, robots, and most importantly, humans, in extreme environments.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



US spy satellites built by SpaceX send signals in the “wrong direction”



It seems the US didn’t coordinate Starshield’s unusual spectrum use with other countries.

Image of a Starshield satellite from SpaceX’s website. Credit: SpaceX

About 170 Starshield satellites built by SpaceX for the US government’s National Reconnaissance Office (NRO) have been sending signals in the wrong direction, a satellite researcher found.

The SpaceX-built spy satellites are helping the NRO greatly expand its satellite surveillance capabilities, but the purpose of these signals is unknown. The signals are sent from space to Earth in a frequency band that’s allocated internationally for Earth-to-space and space-to-space transmissions.

There have been no public complaints of interference caused by the surprising Starshield emissions. But the researcher who found them says they highlight a troubling lack of transparency in how the US government manages the use of spectrum and a failure to coordinate spectrum usage with other countries.

Scott Tilley, an engineering technologist and amateur radio astronomer in British Columbia, discovered the signals in late September or early October while working on another project. He found them in various parts of the 2025–2110 MHz band, and from his location, he was able to confirm that 170 satellites were emitting the signals over Canada, the United States, and Mexico. Given the global nature of the Starshield constellation, the signals may be emitted over other countries as well.

“This particular band is allocated by the ITU [International Telecommunication Union], the United States, and Canada primarily as an uplink band to spacecraft on orbit—in other words, things in space, so satellite receivers will be listening on these frequencies,” Tilley told Ars. “If you’ve got a loud constellation of signals blasting away on the same frequencies, it has the potential to interfere with the reception of ground station signals being directed at satellites on orbit.”

In the US, users of the 2025–2110 MHz portion of the S-Band include NASA and the National Oceanic and Atmospheric Administration (NOAA), as well as nongovernmental users like TV news broadcasters that have vehicles equipped with satellite uplinks to broadcast from remote locations.

Experts told Ars that the NRO likely coordinated with the US National Telecommunications and Information Administration (NTIA) to ensure that signals wouldn’t interfere with other spectrum users. A decision to allow the emissions wouldn’t necessarily be made public, they said. But conflicts with other governments are still possible, especially if the signals are found to interfere with users of the frequencies in other countries.

Surprising signals

Scott Tilley and his antennas. Credit: Scott Tilley

Tilley previously made headlines in 2018 when he located a satellite that NASA had lost contact with in 2005. For his new discovery, Tilley published data and a technical paper describing the “strong wideband S-band emissions,” and his work was featured by NPR on October 17.

Tilley’s technical paper said emissions were detected from 170 satellites out of the 193 known Starshield satellites. Emissions have since been detected from one more satellite, making it 171 out of 193, he told Ars. “The apparent downlink use of an uplink-allocated band, if confirmed by authorities, warrants prompt technical and regulatory review to assess interference risk and ensure compliance” with ITU regulations, Tilley’s paper said.

Tilley said he uses a mix of omnidirectional antennas and dish antennas at his home to receive signals, along with “software-defined radios and quite a bit of proprietary software I’ve written or open source software that I use for analysis work.” The signals did not stop when the paper was published. Tilley said the emissions are powerful enough to be received by “relatively small ground stations.”

Tilley’s paper said that Starshield satellites emit signals with a width of 9 MHz and signal-to-noise (SNR) ratios of 10 to 15 decibels. “A 10 dB SNR means the received signal power is ten times greater than the noise power in the same bandwidth,” while “20 dB means one hundred times,” Tilley told Ars.
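The decibel figures Tilley cites are just base-10 logarithms of power ratios; a minimal sketch of the conversion (function names are ours):

```python
import math

def db_to_power_ratio(db: float) -> float:
    """Convert decibels to a linear power ratio: ratio = 10^(dB/10)."""
    return 10.0 ** (db / 10.0)

def power_ratio_to_db(ratio: float) -> float:
    """Inverse conversion: dB = 10 * log10(ratio)."""
    return 10.0 * math.log10(ratio)

print(db_to_power_ratio(10))   # 10.0  -> signal ten times the noise power
print(db_to_power_ratio(20))   # 100.0 -> one hundred times
print(power_ratio_to_db(100))  # 20.0
```

The logarithmic scale is why the jump from a 10 dB to a 20 dB SNR, a factor of ten in received power, reads as only a doubling of the decibel number.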

Other Starshield signals that were 4 or 5 MHz wide “have been observed to change frequency from day to day with SNR exceeding 20dB,” his paper said. “Also observed from time to time are other weaker wide signals from 2025–2110 MHz [that] may be artifacts or actual intentional emissions.”

The 2025–2110 MHz band is used by NASA for science missions and by other countries for similar missions, Tilley noted. “Any other radio activity that’s occurring on this band is intentionally limited to avoid causing disruption to its primary purpose,” he said.

The band is used for some fully terrestrial, non-space purposes. Mobile service is allowed in 2025–2110 MHz, but ITU rules say that “administrations shall not introduce high-density mobile systems” in these frequencies. The band is also licensed in the US for non-federal terrestrial services, including the Broadcast Auxiliary Service, Cable Television Relay Service, and Local Television Transmission Service.

While Earth-based systems using the band, such as TV links from mobile studios, have legal protection against interference, Tilley noted that “they normally use highly directional and local signals to link a field crew with a studio… they’re not aimed into space but at a terrestrial target with a very directional antenna.” A trade group representing the US broadcast industry told Ars that it hasn’t observed any interference from Starshield satellites.

“There without anybody knowing it”

Spectrum consultant Rick Reaser told Ars that Starshield’s space-to-Earth transmissions likely haven’t caused any interference problems. “You would not see this unless you were looking for it, or if it turns out that your receiver looks for everything, which most receivers aren’t going to do,” he said.

Reaser said it appears that “whatever they’re doing, they’ve come up with a way to sort of be there without anybody knowing it,” or at least until Tilley noticed the signals.

“But then the question is, can somebody prove that that’s caused a problem?” Reaser said. Other systems using the same spectrum in the correct direction probably aren’t pointed directly at the Starshield satellites, he said.

Reaser’s extensive government experience includes managing spectrum for the Defense Department, negotiating a spectrum-sharing agreement with the European Union, and overseeing the development of new signals for GPS. Reaser said that Tilley’s findings are interesting because the signals would be hard to discover.

“It is being used in the wrong direction, if they’re coming in downlink, that’s supposed to be an uplink,” Reaser said. As for what the signals are being used for, Reaser said he doesn’t know. “It could be communication, it could be all sorts of things,” he said.

Tilley’s paper said the “results raise questions about frequency-allocation compliance and the broader need for transparent coordination among governmental, commercial, and scientific stakeholders.” He argues that international coordination is becoming more important because of the ongoing deployment of large constellations of satellites that could cause harmful interference.

“Cooperative disclosure—without compromising legitimate security interests—will be essential to balance national capability with the shared responsibility of preserving an orderly and predictable radio environment,” his paper said. “The findings presented here are offered in that spirit: not as accusation, but as a public-interest disclosure grounded in reproducible measurement and open analysis. The data, techniques, and references provided enable independent verification by qualified parties without requiring access to proprietary or classified information.”

While Tilley doesn’t know exactly what the emissions are for, his paper said the “signal characteristics—strong, coherent, and highly predictable carriers from a large constellation—create the technical conditions under which opportunistic or deliberate PNT exploitation could occur.”

PNT refers to positioning, navigation, and timing applications. “While it is not suggested that the system was designed for that role, the combination of wideband data channels and persistent carrier tones in a globally distributed or even regionally operated network represents a practical foundation for such use, either by friendly forces in contested environments or by third parties seeking situational awareness,” the paper said.

Emissions may have been approved in secret

Tilley told us that a few Starshield satellites launched just recently, in late September, have not emitted signals while moving toward their final orbits. He said this suggests the emissions are for an “operational payload” and not merely for telemetry, tracking, and control (TT&C).

“This could mean that [the newest satellites] don’t have this payload or that the emissions are not part of TT&C and may begin once these satellites achieve their place within the constellation,” Tilley told Ars. “If these emissions are TT&C, you would expect them to be active especially during the early phases of the mission, when the satellites are actively being tested and moved into position within the constellation.”

Whatever they’re for, Reaser said the emissions were likely approved by the NTIA and that the agency would likely have consulted with the Federal Communications Commission. For federal spectrum use, these kinds of decisions aren’t necessarily made public, he said.

“NRO would have to coordinate that through the NTIA to make sure they didn’t have an interference problem,” Reaser said. “And by the way, this happens a lot. People figure out a way [to transmit] on what they call a non-interference basis, and that’s probably how they got this approved. They say, ‘listen, if somebody reports interference, then you have to shut down.’”

Tilley said it’s clear that “persistent S-band emissions are occurring in the 2025–2110 MHz range without formal ITU coordination.” Claims that the downlink use was approved by the NTIA in a non-public decision “underscore, rather than resolve, the transparency problem,” he told Ars.

An NTIA spokesperson declined to comment. The NRO and FCC did not provide any comment in response to requests from Ars.

SpaceX just “a contractor for the US government”

Randall Berry, a Northwestern University professor of electrical and computer engineering, agreed with Reaser that it’s likely the NTIA approved the downlink use of the band and that this decision was not made public. Getting NTIA clearance is “the proper way this should be done,” he said.

“It would be surprising if NTIA was not aware, as Starshield is a government-operated system,” Berry told Ars. While NASA and other agencies use the band for Earth-to-space transmissions, “they may have been able to show that the Starshield space-to-Earth signals do not create harmful interference with these Earth-to-space signals,” he said.

There is another potential explanation that is less likely but more sinister. Berry said it’s possible that “SpaceX did not make this known to NTIA when the system was cleared for federal use.” Berry said this would be “surprising and potentially problematic.”

SpaceX rendering of a Starshield satellite. Credit: SpaceX

Tilley doesn’t think SpaceX is responsible for the emissions. While Starshield relies on technology built for the commercial Starlink broadband system of low Earth orbit satellites, Elon Musk’s space company made the Starshield satellites in its role as a contractor for the US government.

“I think [SpaceX is] just operating as a contractor for the US government,” Tilley said. “They built a satellite to the government specs provided for them and launched it for them. And from what I understand, the National Reconnaissance Office is the operator.”

SpaceX did not respond to a request for comment.

TV broadcasters conduct interference analysis

TV broadcasters with news trucks that use the same frequencies “protect their band vigorously” and would have reported interference if it was affecting their transmissions, Reaser said. This type of spectrum use is known as Electronic News Gathering (ENG).

The National Association of Broadcasters told Ars that it “has been closely tracking recent reports concerning satellite downlink operation in the 2025–2110 MHz frequency band… While it’s not clear that satellite downlink operations are authorized by international treaty in this range, such operations are uncommon, and we are not aware of any interference complaints related to downlink use.”

The NAB investigated after Tilley’s report. “When the Tilley report first surfaced, NAB conducted an interference analysis—based on some assumptions given that Starshield’s operating parameters have not been publicly disclosed,” the group told us. “That analysis found that interference with ENG systems is unlikely. We believe the proposed downlink operations are likely compatible with broadcaster use of the band, though coordination issues with the International Telecommunication Union (ITU) could still arise.”

Tilley said that a finding of interference being unlikely “addresses only performance, not legality… coordination conducted only within US domestic channels does not meet international requirements under the ITU Radio Regulations. This deployment is not one or two satellites, it is a distributed constellation of hundreds of objects with potential global implications.”

Canada agency: No coordination with ITU or US

When contacted by Ars, an ITU spokesperson said the agency is “unable to provide any comment or additional information on the specific matter referenced.” The ITU said that interference concerns “can be formally raised by national administrations” and that the ITU’s Radio Regulations Board “carefully examines the specifics of the case and determines the most appropriate course of action to address it in line with ITU procedures.”

The Canadian Space Agency (CSA) told Ars that its “missions operating within the frequency band have not yet identified any instances of interference that negatively impact their operations and can be attributed to the referenced emissions.” The CSA indicated that there hasn’t been any coordination with the ITU or the US over the new emissions.

“To date, no coordination process has been initiated for the satellite network in question,” the CSA told Ars. “Coordination of satellite networks is carried out through the International Telecommunication Union (ITU) Radio Regulation, with Innovation, Science and Economic Development Canada (ISED) serving as the responsible national authority.”

The European Space Agency also uses the 2025–2110 MHz band for TT&C. We contacted the agency but did not receive any comment.

The lack of coordination “remains the central issue,” Tilley told Ars. “This band is globally allocated for Earth-to-space uplinks and limited space-to-space use, not continuous space-to-Earth transmissions.”

NASA needs protection from interference

An NTIA spectrum-use report updated in 2015 said NASA “operates earth stations in this band for tracking and command of manned and unmanned Earth-orbiting satellites and space vehicles either for Earth-to-space links for satellites in all types of orbits or through space-to-space links using the Tracking Data and Relay Satellite System (TDRSS). These earth stations control ninety domestic and international space missions including the Space Shuttle, the Hubble Space Telescope, and the International Space Station.”

Additionally, the NOAA “operates earth stations in this band to control the Geostationary Operational Environmental Satellite (GOES) and Polar Operational Environmental Satellite (POES) meteorological satellite systems,” which collect data used by the National Weather Service. We contacted NASA and NOAA, but neither agency provided comment to Ars.

NASA’s use of the band has increased in recent years. The NTIA told the FCC in 2021 that 2025–2110 MHz is “heavily used today and require[s] extensive coordination even among federal users.” The band “has seen dramatically increased demand for federal use as federal operations have shifted from federal bands that were repurposed to accommodate new commercial wireless broadband operations.”

A 2021 NASA memo included in the filing said that NASA would only support commercial launch providers using the band if their use was limited to sending commands to launch vehicles for recovery and retrieval purposes. Even with that limit, commercial launch providers would cause “significant interference” for existing federal operations in the band if the commercial use isn’t coordinated through the NTIA, the memo said.

“NASA makes extensive use of this band (i.e., currently 382 assignments) for both transmissions from earth stations supporting NASA spacecraft (Earth-to-space) and transmissions from NASA’s Tracking and Data Relay Satellite System (TDRSS) to user spacecraft (space-to-space), both of which are critical to NASA operations,” the memo said.

In 2024, the FCC issued an order allowing non-federal space launch operations to use the 2025–2110 MHz band on a secondary basis. The allocation is “limited to space launch telecommand transmissions and will require commercial space launch providers to coordinate with non-Federal terrestrial licensees… and NTIA,” the FCC order said.

International non-interference rules

While US agencies may not object to the Starshield emissions, that doesn’t guarantee there will be no trouble with other countries. Article 4.4 of ITU regulations says that member nations may not assign frequencies that conflict with the Table of Frequency Allocations “except on the express condition that such a station, when using such a frequency assignment, shall not cause harmful interference to, and shall not claim protection from harmful interference caused by, a station operating in accordance with the provisions.”

Reaser said that under Article 4.4, entities that are caught interfering with other spectrum users are “supposed to shut down.” But if the Starshield users were accused of interference, they would probably “open negotiations with the offended party” instead of immediately stopping the emissions, he said.

“My guess is they were allowed to operate on a non-interference basis and if there is an interference issue, they’d have to go figure a way to resolve them,” he said.

Tilley told Ars that Article 4.4 allows for non-interference use domestically but “is not a blank check for continuous, global downlinks from a constellation.” In that case, “international coordination duties still apply,” he said.

Tilley pointed out that under the Convention on Registration of Objects Launched into Outer Space, states must report the general function of a space object. “Objects believed to be part of the Starshield constellation have been registered with UNOOSA [United Nations Office for Outer Space Affairs] under the broad description: ‘Spacecraft engaged in practical applications and uses of space technology such as weather or communications,’” his paper said.

Tilley told Ars that a vague description such as this “may satisfy the letter of filing requirements, but it contradicts the spirit” of international agreements. He contends that filings should at least state whether a satellite is for military purposes.

“The real risk is that we are no longer dealing with one or two satellites but with massive constellations that, by their very design, are global in scope,” he told Ars. “Unilateral use of space and spectrum affects every nation. As the examples of US and Chinese behavior illustrate, we are beginning from uncertain ground when it comes to large, militarily oriented mega-constellations, and, at the very least, this trend distorts the intent and spirit of international law.”

China’s constellation

Tilley said he has tracked China’s Guowang constellation and its use of “spectrum within the 1250–1300 MHz range, which is not allocated for space-to-Earth communications.” China, he said, “filed advance notice and coordination requests with the ITU for this spectrum but was not granted protection for its non-compliant use. As a result, later Chinese filings notifying and completing due diligence with the ITU omit this spectrum, yet the satellites are using it over other nations. This shows that the Chinese government consulted internationally and proceeded anyway, while the US government simply did not consult at all.”

By contrast, Canada submitted “an unusual level of detail” to the ITU for its military satellite Sapphire and coordinated fully with the ITU, he said.

Tilley said he reported his findings on Starshield emissions “directly to various western space agencies and the Canadian government’s spectrum management regulators” at the ISED.

“The Canadian government has acknowledged my report, and it has been disseminated within their departments, according to a senior ISED director’s response to me,” Tilley said, adding that he is continuing to collaborate “with other researchers to assist in the gathering of more data on the scope and impact of these emissions.”

The ISED told Ars that it “takes any reports of interference seriously and is not aware of any instances or complaints in these bands. As a general practice, complaints of potential interference are investigated to determine both the cause and possible resolutions. If it is determined that the source of interference is not Canadian, ISED works with its regulatory counterparts in the relevant administration to resolve the issue. ISED has well-established working arrangements with counterparts in other countries to address frequency coordination or interference matters.”

Accidental discovery

Antennas used by Scott Tilley. Credit: Scott Tilley

Tilley’s discovery of Starshield signals happened because of “a clumsy move at the keyboard,” he told NPR. “I was resetting some stuff, and then all of a sudden, I’m looking at the wrong antenna, the wrong band,” he said.

People using the spectrum for Earth-to-space transmissions generally wouldn’t have any reason to listen for transmissions on the same frequencies, Tilley told Ars. Satellites using 2025–2100 MHz for Earth-to-space transmissions have their downlink operations on other frequencies, he said.

“The whole reason why I publicly revealed this rather than just quietly sit on it is to alert spacecraft operators that don’t normally listen on this band… that they should perform risk assessments and assess whether their missions have suffered any interference or could suffer interference and be prepared to deal with that,” he said.

A spacecraft operator may not know “a satellite is receiving interference unless the satellite is refusing to communicate with them or asking for the ground station to repeat the message over and over again,” Tilley said. “Unless they specifically have a reason to look or it becomes particularly onerous for them, they may not immediately realize what’s going on. It’s not like they’re sitting there watching the spectrum to see unusual signals that could interfere with the spacecraft.”

While NPR paraphrased Tilley as saying that the transmissions could be “designed to hide Starshield’s operations,” he told Ars that this characterization is “maybe a bit strongly worded.”

“It’s certainly an unusual place to put something. I don’t want to speculate about what the real intentions are, but it certainly could raise a question in one’s mind as to why they would choose to emit there. We really don’t know and probably never will know,” Tilley told us.

How amateurs track Starshield

After finding the signals, Tilley determined they were being sent by Starshield satellites by consulting data collected by amateurs on the constellation. SpaceX launches the satellites into what Tilley called classified orbits, but the space company distributes some information that can be used to track their locations.

For safety reasons, SpaceX publishes “a notice to airmen and sailors that they’re going to be dropping boosters and debris in hazard areas… amateurs use those to determine the orbital plane the launch is going to go into,” Tilley said. “Once we know that, we just basically wait for optical windows when the lighting is good, and then we’re able to pick up the objects and start tracking them and then start cataloguing them and generating orbits. A group of us around the world do that. And over the last year and a half or so since they started launching the bulk of this constellation, the amateurs have amassed a considerable body of orbital data on this constellation.”

After accidentally discovering the emissions, Tilley said he used open source software to “compare the Doppler signal I was receiving to the orbital elements… and immediately started coming back with hits to Starshield and nothing else.” He said this means that “the tens of thousands of other objects in orbit didn’t match the radio Doppler characteristics that these objects have.”
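The matching step Tilley describes can be sketched in a few lines. This is only a toy illustration of the idea, not his actual toolchain: real trackers derive each candidate object's radial velocity by propagating published orbital elements with SGP4 (e.g. via the sgp4 or skyfield libraries), and the carrier frequency and velocities below are hypothetical.

```python
# Minimal sketch of Doppler matching, assuming we already know each
# candidate's radial velocity at the observation time. The numbers
# here are made up for illustration.

C = 299_792_458.0  # speed of light, m/s

def predicted_freq(f_emit_hz, radial_velocity_ms):
    """First-order Doppler shift; positive radial velocity = receding."""
    return f_emit_hz * (1.0 - radial_velocity_ms / C)

def best_match(observed_hz, f_emit_hz, candidates):
    """candidates maps object name -> radial velocity (m/s).
    Returns the name whose predicted frequency is closest to the
    observation, plus the residual in Hz."""
    residuals = {
        name: abs(observed_hz - predicted_freq(f_emit_hz, v))
        for name, v in candidates.items()
    }
    name = min(residuals, key=residuals.get)
    return name, residuals[name]

# Hypothetical 2,080 MHz carrier and three candidate objects.
F_EMIT = 2.08e9
candidates = {"STARSHIELD-A": 3500.0, "STARSHIELD-B": -1200.0, "OTHER": 7000.0}

# Pretend the observed frequency matches STARSHIELD-A's prediction.
observed = predicted_freq(F_EMIT, 3500.0)
name, residual = best_match(observed, F_EMIT, candidates)
print(name, residual)  # STARSHIELD-A 0.0
```

Repeated over a pass, only the object whose orbit actually produces the observed Doppler curve keeps returning small residuals, which is why "hits to Starshield and nothing else" is such a strong identification.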

Tilley is still keeping an eye on the transmissions. He told us that “I’m continuing to hear the signals, record them, and monitor developments within the constellation.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

US spy satellites built by SpaceX send signals in the “wrong direction” Read More »

the-pope-offers-wisdom

The Pope Offers Wisdom

The Pope is a remarkably wise and helpful man. He offered us some wisdom.

Yes, he is generally playing on easy mode by saying straightforwardly true things, but that’s meeting the world where it is. You have to start somewhere.

Some rejected his teachings.

Two thousand years after Jesus famously got nailed to a cross for suggesting we all be nice to each other for a change, Pope Leo XIV issues a similarly wise suggestion.

Pope Leo XIV: Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.

The world needs honest and courageous entrepreneurs and communicators who care for the common good. We sometimes hear the saying: “Business is business!” In reality, it is not so. No one is absorbed by an organization to the point of becoming a mere cog or a simple function. Nor can there be true humanism without a critical sense, without the courage to ask questions: “Where are we going? For whom and for what are we working? How are we making the world a better place?”

Pope Leo XIV (offered later, not part of the discussion in the rest of the post): Technological development has brought, and continues to bring, significant benefits to humanity. In order to ensure true progress, it is imperative that human dignity and the common good remain resolute priorities for all, both individuals and public entities. We must never forget the “ontological dignity that belongs to the person as such simply because he or she exists and is willed, created, and loved by God” (Dignitas Infinita, 7).

Roon: Pope Leo I think you would enjoy my account.

Liv Boeree: Well said. Also, it’s telling to observe which technologists (there’s one in particular) who just openly mocked you in the quote tweets for saying this.

Grimes: This is so incredibly lit. This is one of the most important things ever said.

Pliny the Liberator:

Rohit: Quite cool that amongst the world leaders it’s the Pope that has the best take on dealing with AI.

The one in particular who had an issue with the Pope’s statement is exactly the one you would guess: Marc Andreessen.

For those who aren’t aware of the image: my understanding of the image below, its context, and its intended meaning is that it is a picture of GQ features director Katherine Stoeffel, selected to be maximally unflattering, taken from her interview of Sydney Sweeney, and used to contrast with Sydney Sweeney looking in command and hot.

During the otherwise very friendly interview, Stoeffel asked Sweeney a few times, in earnest fashion, about concerns and responses around her ad for jeans, presumably showing such ilk that she is woke. Sweeney noted that she disregards internet talk, doesn’t check for it, and doesn’t let it get to her, and otherwise dodged artfully at one point in particular, a really excellent response all around, thus proving to such ilk that she is based.

Also potentially important context I haven’t seen mentioned before: Katherine Stoeffel reads to me in this interview as having autistic mannerisms. Once you see the meme in this light you simply cannot unsee it or think it’s not central to what’s happening here, and it makes the whole thing another level of not okay and I’m betting other people saw this too and are similarly pissed off about it.

Not cool. Like, seriously, don’t use this half of the meme.

It’s not directly related, but if you do watch the interview, notice how much Sydney actually pauses to think about her answers, watch her eye movements and willingness to allow silence, and also take her advice and leave your phone at home. This also means that in the questions where Sydney doesn’t pause to think, it hits different. She’s being strategic but also credibly communicating. The answer everyone remembers, in context, is thus even more clearly something she had prepped and gamed out.

Here for reference is the other half of the meme, from the interview:

Daniel Eth: Andreessen is so dogmatically against working on decreasing risks from AI that he’s now mocking the pope for saying tech innovation “carries an ethical and spiritual weight” and that AI builders should “cultivate moral discernment as a fundamental part of their work”

Daniel Eth (rest of block quote is him):

The pope: “you should probably be a good person”

Marc Andreessen (as imagined by Daniel Eth): “this is an attack on me and everything I stand for”

Yes, I do think ‘you should probably be a good person’ in the context of a Pope is indeed against everything that Marc Andreessen stands for, straight up.

Is that an ‘uncharitable’ take? Yes, but I am confident it is an accurate one, and no this type of systematic mockery, performative cruelty, vice signaling and attempted cyberbullying does not come out of some sort of legitimate general welfare concern.

Marc Andreessen centrally has a strong position on cultivating moral discernment.

Which is the same as his position on not mocking those who support morality or moral discernment.

Which is that he is, in the strongest possible terms, against these things.

Near: we’re in the performative cruelty phase of silicon valley now

if we get an admin change in 2028 and tech is regulated to a halt b/c the american people hate us well this would not surprise me at all tbh. hence the superpacs i suppose.

pmarca deleted all the tweets now, the pope reigns victorious once again.

Roon: this is actually the end stage of what @nearcyan is describing, an age dominated by vice signaling, a nihilistic rw voice that says well someone gave me an annoying moral lecture in the past, so actually virtue is obsolete now.

bring back virtue signaling. the alternative is worse. at least pretend you are saving the world

near: first time ive ever seen him delete an entire page of tweets.

Daniel Eth: Tbf Andreessen is a lot more cruel than the typical person in SV

Scott Alexander: List of Catholic miracles that make me question my atheism:

  1. Sun miracle of Fatima

  2. Joan of Arc

  3. Pope making Marc Andreessen shut up for one day.

Yosarian: It’s also amazing that Marc got into a fight with the Pope only a few weeks after you proved he’s the Antichrist

Scott Alexander: You have seen and believed; blessed are those who believed without seeing.

It turns out that yes, you can go too far around these parts of Twitter.

TBPN (from a 2 min clip discussing mundane AI dangers): John: I don’t think you should just throw “decel” at someone who’s identifying a negative externality of a new technology. That’s not necessarily decelerationist.

Dean Ball: It’s heartening to see such eminently reasonable takes making their way into prominent, pro-tech new media.

There is something very “old man” about the terms of AI policy debates, and perhaps this is because it has been largely old people who have set them.

When I hear certain arguments about how we need to “let the innovators innovate” and thus cannot bear the thought of resolving negative externalities from new technology, it just sounds retro to me. Like an old man dimly remembering things he believes Milton Friedman once said.

Whereas anyone with enough neuroplasticity remaining to internalize a concept as alien as “AGI” can easily see that such simple frames do not work.

It is as heartening to see pro-tech new media say this out loud as it is disheartening to realize that it needed to be said out loud.

A reasonable number of people finally turned on Marc for this.

Spor: I kinda can’t believe all it took was a bad meme swipe against the Pope for legit all of tech Twitter to turn against Marc Andreessen

There’s a lesson in there somewhere

To be fair, most of it came from his response, at least I think

If he doesn’t go around unfollowing and blocking everybody and deleting tweets, it’s a nothingburger

Daniel Eth: I think the thing is everyone already hated Andreessen, and it’s just that before a critical mass of dissent appeared people in tech were scared of criticizing him. And this event just happened to break that threshold

Feels like the dam has broken on people in the tech community airing grievances with Andreessen. Honestly makes me feel better about the direction of the tech community writ large

Matthew Griisser: You come at the King, you best not miss.

Jeremiah Johnson considers Marc Andreessen as an avatar for societal decay as he goes over the events discussed in this section.

Jeremiah Johnson: Now, I don’t want to say that Marc Andreessen is single-handedly causing the collapse of society. That would be a slight exaggeration. But I do believe this one incident shows that Andreessen is at the nexus of all the ways society is decaying. He’s not responsible. He’s just representative. He’s the mascot of decay, the guy who’s always there when the negative trends accelerate, cheering them on and pushing things downhill faster.

I mean, yeah, fair. That’s distinct from his ‘get everyone killed by ASI’ crusades, but the same person doing both is not a coincidence.

And no, he doesn’t make up for this with the quality of his technical takes; as Roon says, he has been consistently wrong about generative AI.

As a reminder, here is Dwarkesh Patel’s excellent refutation of Marc Andreessen’s ‘Why AI Will Save the World,’ its name-calling, and its many nonsensical arguments. Liron Shapira has a less measured thread that also points out many rather terrible arguments that were made and then doubled down upon and repeated.

Daniel Eth: This whole Andreessen thing is a good reminder that you shouldn’t confuse vice with competence. Just because the guy is rude & subversive does not mean that he has intelligent things to say.

Marc’s position really has reliably been, as explicitly stated as the central thesis of The Techno-Optimist Manifesto, that technological advancement is always universally good and anyone who points to any downside is explicitly his ‘enemy.’

There were a few defenders, and their defenses… did not make things better.

Dan Elton: I’m a bit dismayed to see @pmarca bowing to anti free speech political correctness culture and deleting a bunch of funny light hearted tweets. The pope is so overrated. If we can’t make a joke about the pope, is there anything left we are allowed to laugh at?

That is very much not what happened here, sir. Nor was it about joking about the pope. This was in no way about ‘free speech,’ or about lighthearted joking, it was about being performatively mean and hostile to the very idea of responsibility, earnestness or goodness, even in the theoretical. And it was about the systematic, intentional conflation of hostility with such supposed joking.

(Perhaps the deleted AI video of the pope was funny? I don’t know, didn’t see it.)

Marc nuked several full days of posts in response to all this, which seems wise. It’s a good way to not have to make a statement about exactly what you’re deleting, plays it safe, resets the board.

Mike Solana: my sense is the number of people who 1) furiously defended the pope last night and then 2) went to mass this morning is probably close to zero. status games.

Joe Weisenthal: Got em.

Very obviously: We’re supporting the Pope here because he took the trust and power and yes holiness vested in him and spoke earnestly, wisely and with moral clarity, saying obviously true things, and we agree with him, and then he was mocked for it exactly because of all these things. Not because we’re Catholics.

There’s this mindset that you can’t possibly be actually standing up for ideas or principles or simply goodness. It must be some status game, or some political fight, or something. No, it isn’t.

Marc also went on an unfollow binge, including Bayes and Grimes, which is almost entirely symbolic since he (still) follows 27,700 people.

I feel good about highlighting his holiness the Pope’s wise message.

I feel bad about spending the bulk of an entire post pointing out that someone has consistently been acting as a straight up horrible person. Ideally one would never do this. But this is pretty important context within the politics of the AI space and also for anyone evaluating venture capital funding from various angles.

Remember, when people tell you who they are? Believe them.

So I wanted to ensure there was a reference for these events we can link back to, and also to remember that yes if you go sufficiently too far people eventually notice.

Discussion about this post

The Pope Offers Wisdom Read More »

meta’s-star-ai-scientist-yann-lecun-plans-to-leave-for-own-startup

Meta’s star AI scientist Yann LeCun plans to leave for own startup

A different approach to AI

LeCun founded Meta’s Fundamental AI Research lab, known as FAIR, in 2013 and has served as the company’s chief AI scientist ever since. He is one of three researchers who won the 2018 Turing Award for pioneering work on deep learning and convolutional neural networks. After leaving Meta, LeCun will remain a professor at New York University, where he has taught since 2003.

LeCun has previously argued that large language models like Llama, which Zuckerberg has put at the center of his strategy, are useful but will never be able to reason and plan like humans, a view that increasingly contradicts his boss’s grandiose AI vision of developing “superintelligence.”

For example, in May 2024, when an OpenAI researcher discussed the need to control ultra-intelligent AI, LeCun responded on X by writing that before urgently figuring out how to control AI systems much smarter than humans, researchers need to have the beginning of a hint of a design for a system smarter than a house cat.

Mark Zuckerberg once believed the “metaverse” was the future and renamed his company because of it. Credit: Facebook

Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.

Over the summer, Zuckerberg hired Alexandr Wang to lead a new superintelligence team at Meta, paying $14.3 billion to hire the 28-year-old founder of data-labeling startup Scale AI and acquire a 49 percent interest in his company. LeCun, who had previously reported to Chief Product Officer Chris Cox, now reports to Wang, which seems like a sharp rebuke of LeCun’s approach to AI.

Zuckerberg also personally handpicked an exclusive team called TBD Lab to accelerate the development of the next iteration of large language models, luring staff from rivals such as OpenAI and Google with astonishingly large pay packages of $100 million to $250 million. As a result, Zuckerberg has come under growing pressure from Wall Street to show that his multibillion-dollar investment in becoming an AI leader will pay off and boost revenue. But if it turns out like his previous pivot to the metaverse, Zuckerberg’s latest bet could prove equally expensive and unfruitful.

Meta’s star AI scientist Yann LeCun plans to leave for own startup Read More »

clickfix-may-be-the-biggest-security-threat-your-family-has-never-heard-of

ClickFix may be the biggest security threat your family has never heard of

Another campaign, documented by Sekoia, targeted Windows users. The attackers behind it first compromise a hotel’s account for Booking.com or another online travel service. Using the information stored in the compromised accounts, the attackers contact people with pending reservations, an ability that builds immediate trust with many targets, who are eager to comply with instructions, lest their stay be canceled.

The site eventually presents a fake CAPTCHA notification that bears an almost identical look and feel to those required by content delivery network Cloudflare. To “prove” there is a human behind the keyboard, the notification instructs the visitor to copy a string of text and paste it into the Windows terminal. With that, the machine is infected with malware tracked as PureRAT.

Push Security, meanwhile, reported a ClickFix campaign with a page “adapting to the device that you’re visiting from.” Depending on the OS, the page will deliver payloads for Windows or macOS. Many of these payloads, Microsoft said, are LOLbins, the name for binaries that use a technique known as living off the land. These scripts rely solely on native capabilities built into the operating system. With no malicious files being written to disk, endpoint protection is further hamstrung.

The commands, which are often Base64-encoded to make them unreadable to humans, are typically copied inside the browser sandbox, a part of most browsers that accesses the Internet in an isolated environment designed to protect devices from malware or harmful scripts. Many security tools are unable to observe and flag these actions as potentially malicious.
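As a harmless illustration of why the encoding defeats casual inspection, here is what an innocuous, made-up command looks like after Base64 encoding; real ClickFix lures encode malicious PowerShell or shell one-liners the same way.

```python
import base64

# A benign, made-up command, for illustration only.
command = 'echo Hello from the clipboard'

# What a victim would see on the clipboard: an opaque string.
encoded = base64.b64encode(command.encode("utf-8")).decode("ascii")
print(encoded)

# The encoding is trivially reversible; only human readability is lost.
assert base64.b64decode(encoded).decode("utf-8") == command
```

Base64 is not encryption; anyone (or any tool) that decodes the string recovers the command exactly, which is why defenders who do inspect clipboard contents can still identify these lures.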

The attacks can also be effective given the lack of awareness. Many people have learned over the years to be suspicious of links in emails or messengers. In many users’ minds, the precaution doesn’t extend to sites that instruct them to copy a piece of text and paste it into an unfamiliar window. When the instructions come in emails from a known hotel or at the top of Google results, targets can be further caught off guard.

With many families gathering in the coming weeks for various holiday dinners, ClickFix scams are worth mentioning to those family members who ask for security advice. Microsoft Defender and other endpoint protection programs offer some defenses against these attacks, but they can, in some cases, be bypassed. That means that, for now, awareness is the best countermeasure.

ClickFix may be the biggest security threat your family has never heard of Read More »

neutron-rocket’s-debut-slips-into-mid-2026-as-company-seeks-success-from-the-start

Neutron rocket’s debut slips into mid-2026 as company seeks success from the start

During an earnings call on Monday, Rocket Lab chief executive Peter Beck announced that the company’s medium-lift launch vehicle, Neutron, would not launch this year.

For anyone with the slightest understanding of the challenges involved in bringing a new rocket to the launch pad, as well as a calendar, the delay does not come as a surprise. Although Rocket Lab had publicly been holding onto the possibility of launching Neutron this year, it has been clear for months that a slip into 2026 was inevitable.

According to Beck, speaking during a third-quarter 2025 earnings call, the new timeline has the company bringing Neutron to Launch Complex 2 at Wallops Flight Facility in Virginia during the first quarter of next year. The first launch is scheduled to occur “thereafter,” according to the company’s plans.

The Rocket Lab way

As part of his remarks, Beck said Rocket Lab would not be rushed by an arbitrary deadline.

“We’ve seen what happens when others rush to the pad with an unproven product, and we just refused to do that,” he said, referring to other commercial launch companies that have not had success with their first launches. “Our aim is to make it to orbit on the first try. You won’t see us using some qualifier about us just clearing the pad, and claiming success and whatnot, and that means that we don’t want to learn something during Neutron’s first flight that could be learned on the ground during the testing phase.”

Through the development of the smaller Electron rocket as well as various satellites and in-space vehicles, Rocket Lab has followed and honed a process that breeds success in flight, Beck said. Right now, Rocket Lab is in a “meaty” testing process in which components of the vehicle are being assembled for the first time, Beck added.

Rocket Lab has reached the “meaty” part of the testing process. Credit: Rocket Lab

“This is a time when you find out on the ground what you got right, and what you got wrong, rather than finding out that during first launch,” he said. “Now at Rocket Lab, we have a proven process for delivering and developing complex space flight hardware, and I think that process speaks for itself with respect to our hardware, always looking beautiful, and, more importantly, always working beautifully. Now, our process is meticulous, but it works.”

Neutron rocket’s debut slips into mid-2026 as company seeks success from the start Read More »

commercial-spyware-“landfall”-ran-rampant-on-samsung-phones-for-almost-a-year

Commercial spyware “Landfall” ran rampant on Samsung phones for almost a year

Before the April 2025 patch, Samsung phones had a vulnerability in their image processing library. This is a zero-click attack because the user doesn’t need to open or click anything. When the system processes the malicious image for display, it extracts shared object library files from a ZIP archive embedded in the image and runs the Landfall spyware. The payload also modifies the device’s SELinux policy to give Landfall expanded permissions and access to data.

How Landfall exploits Samsung phones. Credit: Unit 42

The infected files appear to have been delivered to targets via messaging apps like WhatsApp. Unit 42 notes that Landfall’s code references several specific Samsung phones, including the Galaxy S22, Galaxy S23, Galaxy S24, Galaxy Z Flip 4, and Galaxy Z Fold 4. Once active, Landfall reaches out to a remote server with basic device information. The operators can then extract a wealth of data, like user and hardware IDs, installed apps, contacts, any files stored on the device, and browsing history. It can also activate the camera and microphone to spy on the user.

Removing the spyware is no easy feat, either. Because of its ability to manipulate SELinux policies, it can burrow deeply into the system software. It also includes several tools that help evade detection. Based on the VirusTotal submissions, Unit 42 believes Landfall was active in 2024 and early 2025 in Iraq, Iran, Turkey, and Morocco. The vulnerability may have been present in Samsung’s software from Android 13 through Android 15, the company suggests.

Unit 42 says that several naming schemes and server responses share similarities with commercial spyware developed by big cyber-intelligence firms like NSO Group and Variston. However, it cannot directly tie Landfall to any particular group. While this attack was highly targeted, the details are now in the open, and other threat actors could employ similar methods to access unpatched devices. Anyone with a supported Samsung phone should make certain they are on the April 2025 patch or later.

Commercial spyware “Landfall” ran rampant on Samsung phones for almost a year Read More »

the-government-shutdown-is-starting-to-have-cosmic-consequences

The government shutdown is starting to have cosmic consequences

The federal government shutdown, now in its 38th day, prompted the Federal Aviation Administration to issue a temporary emergency order Thursday prohibiting commercial rocket launches from occurring during “peak hours” of air traffic.

The FAA also directed commercial airlines to reduce domestic flights from 40 “high impact airports” across the country in a phased approach beginning Friday. The agency said the order from the FAA’s administrator, Bryan Bedford, is aimed at addressing “safety risks and delays presented by air traffic controller staffing constraints caused by the continued lapse in appropriations.”

The government considers air traffic controllers essential workers, so they remain on the job without pay until Congress passes a federal budget and President Donald Trump signs it into law. The shutdown’s effects, felt most severely by federal workers at first, are now rippling across the broader economy.

Sharing the airspace

Vehicles traveling to and from space share the skies with aircraft, requiring close coordination with air traffic controllers to clear airspace for rocket launches and reentries. The FAA said its order restricting commercial air traffic, launches, and reentries is intended to “ensure the safety of aircraft and the efficiency of the [National Airspace System].”

In a statement explaining the order, the FAA said the air traffic control system is “stressed” due to the shutdown.

“With continued delays and unpredictable staffing shortages, which are driving fatigue, risk is further increasing, and the FAA is concerned with the system’s ability to maintain the current volume of operations,” the regulator said. “Accordingly, the FAA has determined additional mitigation is necessary.”

Beginning Monday, the FAA said commercial space launches will only be permitted between 10 pm and 6 am local time, when the national airspace is most calm. The order restricts commercial reentries to the same overnight timeframe. The FAA licenses all commercial launches and reentries.

The government shutdown is starting to have cosmic consequences Read More »

on-sam-altman’s-second-conversation-with-tyler-cowen

On Sam Altman’s Second Conversation with Tyler Cowen

Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.

As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary.

If I am quoting directly I use quote marks, otherwise assume paraphrases.

The entire conversation takes place with an understanding that no one is to mention existential risk or the fact that the world will likely transform, without stating this explicitly. Both participants are happy to operate that way. I’m happy to engage in that conversation (while pointing out its absurdity in some places), but assume that every comment I make has an implicit ‘assuming normality’ qualification on it, even when I don’t say so explicitly.

  1. Cowen asks how Altman got so productive, able to make so many deals and ship so many products. Altman says people almost never allocate their time efficiently, and that when you have more demands on your time you figure out how to improve. Centrally he figures out what the core things to do are and delegates. He says deals are quicker now because everyone wants to work with OpenAI.

    1. Altman’s definitely right that most people are inefficient with their time.

    2. Inefficiency is relative. As in, I think of myself as inefficient with my time, and think of the ways I could be a lot more efficient.

    3. Not everyone responds to pressure by improving efficiency, far from it.

    4. Altman is good here to focus on delegation.

    5. It is indeed still remarkable how many things OpenAI is doing at once, with the associated worries about it potentially being too many things, and not taking the time to do them responsibly.

  1. What makes hiring in hardware different from in AI? Cycles are longer. Capital is more intense. So more time invested up front to pick wisely. Still want good, effective, fast-moving people and clear goals.

    1. AI seems to be getting pretty capital intensive?

  2. Nvidia’s people ‘are less weird’ and don’t read Twitter. OpenAI’s hardware people feel more like their software people than they feel like Nvidia’s people.

    1. My guess is there isn’t a right answer but you need to pick a lane.

  3. What makes Roon special? Lateral thinker, great at phrasing observations, lots of disparate skills in one place.

    1. I would add some more ingredients. There’s a sense of giving zero fucks, of having no filter, and having no agenda. Say things and let the chips fall.

    2. A lot of the disparate skills are disparate aesthetics, including many that are rare in AI, and taking all of them seriously at once.

  4. Altman doesn’t tell researchers what to work on. Researchers choose, that’s it.

  5. Email is very bad. Slack might not be good, it creates explosions of work including fake work to deal with, especially the first and last hours, but it is better than email. Altman suspects it’s time for a new AI-driven thing but doesn’t have it yet, probably due to lack of trying and unwillingness to devote the focus and activation energy given everything else going on.

    1. I think email is good actually, and that Slack is quite bad.

    2. Email isn’t perfect but I like that you decide what you have ownership of, how you organize it, how you keep it, when you check it, and generally have control over the experience, and that you can choose how often you check it and aren’t being constantly pinged or expected to get into chat exchanges.

    3. Slack is an interruption engine without good information organization and I hate it so much, as in ‘it’s great I don’t have a job where I need Slack.’

    4. There’s definitely room to build New Thing that integrates AI into some mix of information storage and retrieval, email-style slow communication, direct messaging and group chats, and which allows you to prioritize and get the right levels of interruption at the right times, and so on.

    5. However this will be tricky, you need to be ten times better and you can’t break the reliances people have. False negatives, where things get silently buried, can be quite bad.

  1. What will make GPT-6 special? Altman suggests it might be able to ‘really do’ science. He doesn’t have much practical advice on what to do with that.

    1. This seems like we hit the wall of ‘…and nothing will change much’ forcing Altman to go into contortions.

    2. One thing we learned from GPT-5 is that the version numbers don’t have to line up with big capabilities leaps. The numbers are mostly arbitrary.

Tyler isn’t going to let him off that easy. At this point, I don’t normally do this, but exact words seem important, so I’m going to quote the transcript.

COWEN: If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?

ALTMAN: I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?

COWEN: Just parts of it. Not the whole thing.

ALTMAN: No, the whole thing.

COWEN: That’s very ambitious. Just the finance department, whatever.

ALTMAN: Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.

COWEN: How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?

ALTMAN: Any single division?

COWEN: Not a tiny, insignificant division, mostly run by the AIs.

ALTMAN: Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?

COWEN: CEO is tricky because the public role of a CEO, as you know, becomes more and more important.

  1. On the above in terms of ‘oh no’:

    1. Oh no. Exactly the opposite. Shame on him if OpenAI goes first.

    2. OpenAI is the company, in this scenario, out of all the companies, we should be most worried about handing over to an AI CEO, for obvious reasons.

    3. If you’re wondering how the AIs could take over? You can stop wondering. They will take over because we will ask them to.

    4. CEO is an adversarial and anti-inductive position, where any weakness will be systematically exploited, and big mistakes can entirely sink you, and the way that you direct and set up the ‘AI CEO’ matters quite a lot in all this. The bar for a net-positive AI CEO is much higher than the AI merely making on-average better decisions, or having on-average better features. Altman says ‘on the actual decision making maybe the AI is pretty good soon’ but this is a place where I’m going to be the Bottleneck Guy.

    5. CEO is also a position where, very obviously, misaligned means your company can be extremely cooked, and basically everything in it subverted, even if that CEO is a single human. Most of the ways in which this is limited are because the CEO can only be in one place at a time and do one thing at a time, couldn’t keep an eye on most things let alone micromanage them, and would require conspirators. A hostile AI CEO is death or subversion of the company.

    6. The ‘public role’ of the CEO being the bottleneck does not bring comfort here. If Altman (as he suggests) is public face and the AI ‘figures out what to do’ and Altman doesn’t actually get to overrule the AI (or is simply convinced not to) then the problem remains.

  2. On the above in terms of ‘oh yeah’:

    1. There is the clear expectation from both of them that AI will rise, reasonably soon, to the level of at least ‘run the finance department of a trillion dollar corporation.’ This doesn’t have to be AGI but it probably will be, no?

    2. It’s hard for me to square ‘AIs are running the actual decision making at top corporations’ with predictions for only modest GDP growth. As Altman notes, the AI CEO needs to be a lot better than the human CEO in order to get the job.

    3. They are predicting billion-dollar 2-3 person companies, with AIs, within three years.

  3. Altman asks potential hires about their use of AI now to predict their level of AI adoption in the future, which seems smart. Using it as ‘better Google’ is a yellow flag, thinking about what their day-to-day use will look like in three years is a green flag.

  4. In three years Altman is aiming to have a ‘fully automated AI researcher.’ So it’s pretty hard to predict day-to-day use in three years.

A timely section title.

  1. Cowen and Altman are big fans of nuclear power (as am I), but many people worry about it. Cowen asks, do you worry similarly about AI and its similar Nervous Nellies, even if ‘AI is pretty safe’? Are the Feds your insurer? How will you insure everything?

    1. Before we get to Altman’s answer can we stop to think about how absolutely insane this question is as presented?

    2. Cowen is outright equating worries about AI to worries about nuclear power, calling both Nervous Nellies. My lord.

    3. The worry about AI risks is that the AI companies might be held too accountable? Might be asked to somehow provide too much insurance, when there is clearly no sign of any such requirement for the most important risks? They are building machines that will create substantial catastrophic and even existential risks, massive potential externalities.

    4. And you want the Federal Government to actively insure against AI catastrophic risks? To say that it’s okay, we’ve got you covered? This does not, in any way, actually reduce the public’s or world’s exposure to anything, and it further warps company incentives. It’s nuts.

    5. Not that even the Federal Government can actually insure us here even at our own expense, since existential risk or sufficiently large catastrophic or systemic risk also wipes out the Federal Government. That’s kind of the point.

    6. The idea that the people are the Nervous Nellies around nuclear, which has majority public support, while the Federal Government is the one calming them down and ensuring things can work is rather rich.

    7. Nuclear power regulations are insanely restrictive and prohibitive, and the insurance the government writes does not substantially make up for this, nor is it that expensive or risky. The NRC and other regulations are the reason we can’t have this nice thing, in ways that don’t relate much if at all to the continued existence of these Nervous Nellies. Providing safe harbor in exchange for that really is the actual least you can do.

    8. AI regulations impose very few rules and especially very few safety rules.

    9. Yes, there is the counterpoint that AI has to follow existing rules and thus is effectively rather regulated, but I find this rather silly as an argument, and no I don’t think the new laws around AI in particular move that needle much.

  2. Altman points out the Federal Government is the insurer of last resort for anything sufficiently large, whether you want it to be or not, but no not in the way of explicitly writing insurance policies.

    1. I mean yes, if AI crashes the economy or does trillions in damages or whatnot, then the Federal Government will have to try and step in. This is a huge actual subsidy to the AI companies and they should (in theory anyway) be paying for it.

    2. A bailout for the actual AI companies if they are simply going bankrupt? David Sacks has made it clear our answer is no thank you, and rightfully so. Obviously, at some point the Fed Put or Trump Put comes into play in the stock market, that ship has sailed, but no we will not save your loans.

    3. And yeah, my lord, the idea that the Feds would write an insurance policy.

  3. Cowen then says he is worried about the Feds being the insurer of first resort and he doesn’t want that, Altman confirms he doesn’t either and doesn’t expect it.

    1. It’s good that they don’t want this to happen but this only slightly mitigates my outrage at the first question and the way it was presented.

  4. Cowen points out Trump is taking equity in Intel, lithium and rare earths, and asks how this applies to OpenAI. Altman mostly dodges, pivots to potential loss of meaning in the world, and points out the government might have strong opinions about AI company actions.

    1. Cowen doesn’t say it here, but to his credit he is on record correctly opposing this taking of equity in companies, identifying it as ‘seizing the means of production’ and pointing out it is the wrong tool for the job.

    2. This really was fully a non-answer. I see why that might be wise.

    3. Could OpenAI be coerced into giving up equity, or choose to do so as part of a regulatory capture play? Yeah. It would be a no-good, very bad thing.

    4. The government absolutely will and needs to have strong opinions about AI company actions and set the regulations and rules in place and otherwise play the role of being the actual government.

    5. If the government does not govern the AI companies, then the government will wake up one day to find the AI companies have become the government.

  1. Tyler Cowen did a trip through France and Spain and booked all but one hotel with GPT-5 (not directly in the app), and almost every meal they ate, and Altman didn’t get paid for that. Shouldn’t he get paid?

    1. Before I get to Altman’s answer, I will say that for specifically Tyler this seems very strange to me, unless he’s running an experiment as research.

    2. As in, Tyler has very particular preferences and a lot of comparative advantage in choosing hotels and especially restaurants, especially for himself. It seems unlikely that he can’t do better than ChatGPT?

    3. I expect to be able to do far better than ChatGPT at finding restaurants. Maybe a long and highly customized prompt could close the gap? But that would require quite a lot of work.

    4. For hotels, yeah, I think it’s reasonably formulaic and AI can do fine.

  2. Altman responds that often ChatGPT is cited as the most trusted tech product from a big tech company. He notes that this is weird given the hallucinations. But it makes sense in that it doesn’t have ads and is in many visible ways more fully aligned with user preferences than other big tech products that involve financial incentives. He notes that a transaction fee probably is fine but any kind of payment for placement would endanger this.

    1. ChatGPT being most trusted is definitely weird given it is not very reliable.

    2. It being most trusted is an important clue to how people will deal with AI systems going forward, and it should worry you in important ways.

    3. In particular, trust for many people is about ‘are they Out To Get You?’ rather than reliability or overall quality, or whether expectations are set fairly. Compare to the many people who otherwise trust a Well Known Liar.

    4. I strongly agree with Altman about the payola worry, as Cowen calls it. Cowen says he’s not worried about it, but doesn’t explain why not.

    5. OpenAI’s instant checkout offerings and policies are right on the edge on this. I think in their present form they will be fine but they’re on thin ice.

  3. Cowen’s worry is that OpenAI will have a cap on how much commission they can charge, because stupider services will then book cheaply if you charge too much. Altman says he expects much lower margins.

    1. AI will as Altman notes make many markets much more efficient by vastly lowering search costs and transaction costs, which will lower margins, and this should include commissions.

    2. I still think OpenAI will be able to charge substantial commissions if it retains its central AI position with consumers, for the same reason that other marketplaces have not lost their ability to extract commissions, including some very large ones. Every additional hoop you ask a customer to go through loses a substantial portion of sales. OpenAI can pull the same tricks as Steam and Amazon and Apple including on price parity, and many will pay.

    3. This is true even if there are stupider services that can do the booking and are generally 90% as good, so long as OpenAI is the consumer default.

  4. Cowen doubles down on this worry about cheap competing agents, Altman notes that hotel booking is not the way to monetize, Cowen says but of course you do want to do that, Altman says no he wants to do new science, but ChatGPT and hotel booking is good for the world.

    1. This feels like a mix of a true statement and a dishonest dodge.

    2. As in, of course he wants to do hotel booking and make money off it, it’s silly to pretend that you don’t and there’s nothing wrong with that. It’s not the main goal, but it drives growth and valuation and revenue all of which is vital to the AGI or science mission (whether you agree with that mission or not).

  5. Cowen asks, you have a deal coming with Walmart, if you were Amazon would you make a deal with OpenAI or fight back? Altman says he doesn’t know, but that if he was Amazon he would fight back.

    1. Great answer from Altman.

    2. One thing Altman does well is being candid in places you would not expect, where it is locally superficially against his interests, but where it doesn’t actually cost him much. This is one of those places.

    3. Amazon absolutely cannot fold here because it loses too much control over the customer and customer flow. They must fight back. Presumably they should fight back together with their friends at Anthropic?

  6. Cowen asks about ads. Altman says some ads would be bad as per earlier, but other kinds of ads would be good although he doesn’t know what the UI is.

    1. Careful, Icarus.

    2. There definitely are ‘good’ ways to do ads if you keep them entirely distinct from the product, but the temptations and incentives here are terrible.

  1. What should OpenAI management know about KSA and UAE? Altman says it’s mainly knowing who will run the data centers and what security guarantees they will have, with data centers being built akin to US embassies or military bases. They bring in experts and as needed will bring in more.

    1. I read this as a combination of outsourcing the worries and not worrying.

    2. I would be more worried.

  2. Cowen asks, how good will GPT-6 be at teaching these kinds of national distinctions, or do you still need human experts? Altman expects to still need the experts, confirms they have an internal eval for that sort of thing but doesn’t want to pre-announce.

    1. My anticipation is that GPT-6 and its counterparts will actually be excellent at understanding these country distinctions in general, when it wants to be.

    2. My anticipation is also that GPT-6 will be excellent at explaining things it knows to humans and helping those humans learn, when it wants to, and this is already sufficiently true for current systems.

    3. The question is, will you be able to translate that into learning and understanding such issues?

    4. Why is this uncertain? Two concerns.

    5. The first concern is that understanding may depend on analysis of particular key people and relationships, in ways that are unavailable to AI, the same way you can’t get them out of reading books.

    6. The second concern is that to actually understand KSA and UAE, or any country or culture in general, requires communicating things that it would be impolitic to say out loud, or for an AI to typically output. How do you pass on that information in this context? It’s a problem.

  3. Cowen asks about poetry, predicts you’ll be able to get the median Pablo Neruda poem but not the best, maybe you’ll get to 8.8/10 in a few years. Altman says they’ll reach 10/10 and Cowen won’t care, Cowen promises he’ll care but Altman equates it to AI chess players. Cowen responds there’s something about a great poem ‘outside the rubric’ and he worries humans who can’t produce 10s can’t identify 10s? Or that only humanity collectively and historically can decide what is a 10?

    1. This is one of those ‘AI will never be able to [X] at level [Y]’ claims so I’m on Altman’s side here, a sufficiently capable AI can do 10/10 on poems, heck it can do 11/10 on poems. But yeah, I don’t think you or I will care other than as a technical achievement.

    2. If an AI cannot produce sufficiently advanced poetry, that means that the AI is insufficiently advanced. Also we should not assume that future AIs or LLMs will share current techniques or restrictions. I expect innovation with respect to poetry creation.

    3. The thing being outside the rubric is a statement primarily about the rubric.

    4. If only people writing 10s can identify 10s then for almost all practical purposes there’s no difference between a 9 and a 10. Why do we care, if we literally can’t tell the difference? Whereas if we can tell the difference, if verification is easier than generation as it seems like it should be here, then we can teach the AI how to tell the difference.

    5. I think Cowen is saying that a 10-poem is a 9-poem that came along at the right time and got the right cultural resonance, in which case sure, you cannot reliably produce 10s, but that’s because it’s theoretically impossible to do that, and no human could do that either. Pablo Neruda couldn’t do it.

    6. As someone who has never read a poem by Pablo Neruda, I wanted to see what this 10.0 business was all about, so by Claude’s recommendation of ‘widely considered best Neruda poem’ without any other context, I selected Tonight I Can Write (The Saddest Lines). And not only did it not work on me, it seemed like something an AI totally could write today, on the level of ‘if you claimed to have written this in 2025 I’d have suspected an AI did write it.’

    7. With that in mind, I gave Claude context and it selected Ode to the Onion. Which also didn’t do anything for me, and didn’t seem like anything that would be hard for an AI to write. Claude suggests it’s largely about context, that this style was new at the time, and I was reading translations into English and I’m no poetry guy, and agrees that in 2025 yes an AI could produce a similar poem, it just wouldn’t land because it’s no longer original.

    8. I’m willing to say that whatever it is Tyler thinks AI can’t do, also is something I don’t have the ability to notice. And which doesn’t especially motivate me to care? Or maybe is what Tyler actually wants something like ‘invent new genre of poetry’?

    9. We’re not actually trying to get AIs to invent new genres of poetry, we’re not trying to generate the things that drive that sort of thing, so who is to say if we could do it. I bet we could actually. I bet somewhere in the backrooms is a 10/10 Claude poem, if you have eyes to see.

  1. It’s hard. Might get easier with time, chips designing chips.

  2. Why not make more GPUs? Altman says, because we need more electrons. What he needs most are electrons. We’re working hard on that. For now, natural gas, later fusion and solar. He’s still bullish on fusion.

    1. This ‘electrons’ thing is going to drive me nuts on a technical level. No.

    2. This seems simply wrong? We don’t build more GPUs because TSMC and other bottlenecks mean we can’t produce more GPUs.

    3. That’s not to say energy isn’t an issue but the GPUs sell out.

    4. Certainly plenty of places have energy but no GPUs to run with them.

  3. Cowen worries that fusion uses the word ‘nuclear.’

    1. I don’t. I think that this is rather silly.

    2. The problem with fusion is purely that it doesn’t work. Not yet, anyway.

    3. Again, the people are pro-nuclear power. Yay the people.

  4. Cowen asks do you worry about a scenario where superintelligence does not need much compute, so you’re betting against progress over a 30-year time horizon?

    1. Always pause when you hear such questions to consider that perhaps under such a scenario this is not the correct thing to worry about?

    2. As in, if we not only have superintelligence but it also does not need much compute, the last thing I am going to ponder is the return on particular investments of OpenAI, even if I am the CEO of OpenAI.

    3. If we have sufficiently cheap superintelligence that we have both superintelligence and an abundance of compute, ask not how the stock does, ask questions like how the humans survive or stay in control at all, notice that the entire world has been transformed, don’t worry about your damn returns.

  5. Altman responds if compute is cheaper people will want more. He’ll take that bet every day, and the energy will still be useful no matter the scenario.

    1. Good bet, so long as it matters what people want.

  6. Cowen loves Pulse, Altman says people love Pulse, the reason you don’t hear more is it’s only available to Pro users. Altman uses Pulse for a combination of work related news and family opportunities like hiking trails.

    1. I dabble with Pulse. It’s… okay? Most of the time it gives me stories I already know about, but occasionally there’s something I otherwise missed.

    2. I’ve tried to figure out things it will be good at monitoring, but it’s tough, maybe I should invest more time in giving it custom instructions.

    3. In theory it’s a good idea.

    4. It suffers from division of context, since the majority of my recent LLM activity has been on Claude and perhaps soon will include Gemini.

Ooh, fun stuff.

  1. What is Altman’s nuttiest view about his own health? Altman says he used to be more disciplined when he was less busy, but now he eats junk food and doesn’t exercise enough and it’s bad. Whereas before, he once ended up in the hospital for trying semaglutide before it was cool, which itself is very cool.

    1. There’s weird incentives here. When you have more going on it means you have less time to care about food and exercise but also makes it more important.

    2. I’d say that over short periods (like days and maybe weeks) you can and should sacrifice health focus to get more attention and time on other things.

    3. However, if you’re going for months or years, you want to double down on health focus up to some reasonable point, and Altman is definitely in that territory.

    4. That doesn’t mean obsess or fully optimize of course. 80/20 or 90/10 is good.

  2. Cowen says junk food doesn’t taste good and good sushi tastes better, Altman says yes junk food tastes good and sometimes he wants a chocolate chip cookie at 11:30 at night.

    1. They’re both right. Sometimes you want the (fresh, warm, gooey) chocolate chip cookie and not the sushi, sometimes you want the sushi and not the cookie.

    2. You get into habits and your body gets expectations, and you develop a palate.

    3. With in-context unlimited funds you do want to be ‘spending your calories’ mostly on the high Quality things that are not junk, but yeah in the short term sometimes you really want that cookie.

    4. I think I would endorse that I should eat 25% less carbs and especially ‘junk’ than I actually do, maybe 50%, but not 75% less, that would be sad.

  3. Cowen asks if there’s alien life on the moons of Saturn, says he does believe this. Altman says he has no opinion, he doesn’t know.

    1. I’m actually with Altman in the sense that I’m happy to defer to consensus on the probability here, and I think it’s right not to invest in getting an opinion, but I’m curious why Cowen disagrees. I do think we can be confident there isn’t alien life there that matters to us.

  4. What about UAPs? Altman thinks ‘something’s going on there’ but doesn’t know, and doubts it’s little green men.

    1. I am highly confident it is not little green men. There may or may not be ‘something going on’ from Earth that is driving this, and my default is no.

  5. How many conspiracy theories does Altman believe in? Cowen says zero, at least in the United States. Altman says he’s predisposed to believe, has an X-Files ‘I want to believe’ t-shirt, but still believes in either zero or very few. Cowen says he’s the opposite, he doesn’t want to believe, maybe the White Sox fixed the World Series way back when, Altman points out this doesn’t count.

    1. The White Sox absolutely fixed that 1919 World Series, we know this. At the time it was a conspiracy theory but I think that means this is no longer a conspiracy theory?

    2. I also believe various other sporting events have been fixed, but with less certainty, and to varying degrees – sometimes there’s an official’s finger on the scale but the game is real, other times you’re in Russia and the players literally part the seas to ensure the final goal is scored, and everything in between, but most games played in the West are on or mostly on the level.

    3. Very obviously there exist conspiracies, some of which succeed at things, on various scales. That is distinct from ‘conspiracy theory.’

    4. As a check, I asked Claude for the top 25 most believed conspiracy theories in America. I am confident that 24 out of the 25 are false. The 25th was Covid-19 lab origins, which is called a conspiracy theory but isn’t one. If you modify that to ‘Covid-19 was not only from a lab but was released deliberately’ then I’m definitely at all 25 are false.

  6. Cowen asks again, how would you revitalize St. Louis with a billion dollars and copious free time? Altman says start a Y Combinator thing, which is pretty similar to what Altman said last time. But he suggests that’s because that would be Altman’s comparative advantage, someone else would do something else.

    1. This seems correct to me.

  1. Should it be legal to release an AI agent into the wild, unowned, untraceable? Altman says it’s about thresholds. Anything capable of self-replication needs oversight, and the question is what is your threshold.

    1. Very obviously it should not be legal to, without checking first, release a self-replicating, untraceable, unowned, highly capable agent into the wild that we have no practical means of shutting down.

    2. As a basic intuition pump, you should be responsible for what an AI agent you release into the wild does the same way you would be if you were still ‘in control’ of that agent, or you hired the agent, or if you did the actions yourself. You shouldn’t be able to say ‘oh that’s not on me anymore.’

    3. Thus, if you cannot be held accountable for it, I say you can’t release it. A computer cannot be held accountable, therefore a computer cannot make a management decision, therefore you cannot release an agent that will then make unaccountable management decisions.

    4. That includes if you don’t have the resources to take responsibility for the consequences, if they rise to the level where taking all your stuff and throwing you in jail is not good enough. Or if the effects cannot be traced.

    5. Certainly if such an agent poses a meaningful risk of loss of human control or of catastrophic or existential risks, the answer needs to be a hard no.

    6. If what you are doing is incompatible with such agents not being released into the wild, then what you are doing, via backchaining, is also not okay.

    7. There presumably should be a method whereby you can do this legally, with some set of precautions attached to it.

    8. Under what circumstances an open-weight model would count as any of this is left as an open-ended question.

  2. What to do if it happens and you can’t turn it off? Ring-fence it, identify, surveil, sanction the host location? Altman doesn’t know, it’s the same as the current version of this problem, more dangerous but we’ll have better defenses, and we need to urgently work on this problem.

    1. I don’t disagree with that response but it does not indicate a good world state.

    2. It also suggests the cost of allowing such releases is currently high.

  1. Both note (I concur) that it’s great to read your own AI responses but other people’s responses are boring.

    1. I do sometimes share AI queries as a kind of evidence, or in case someone needs a particular thing explained and I want to lower activation energy on asking the question. It’s the memo you hope no one ever needs to read.

  2. Altman says people like watching other people’s AI videos.

    1. Do they, though?

  3. Altman points out that everyone having great personal AI agents is way more interesting than all that, with new social dynamics.

    1. Indeed.

    2. The new social dynamics include ‘AI runs the social dynamics’ potentially along with everything else in short order.

  4. Altman’s goal is a new kind of computer with an AI-first interface very different from the last 50 years of computing. He wants to question basic assumptions like an operating system or opening a window, and he does notice the skulls along the ‘design a new type of computer’ road. Cowen notes that people really like typing into boxes.

    1. Should AI get integrated into computers far more? Well, yeah, of course.

    2. How much should this redesign the computer? I’m more skeptical here. I think we want to retain control, fixed commands that do fixed things, the ability to understand what is happening.

    3. In gaming, Sid Meier called this ‘letting the player have the fun.’ If you don’t have control or don’t understand what is happening and how mechanics work, then the computer has all the fun. That’s no good, the player wants the fun.

    4. Thus my focus would be, how do we have the AI enable the user to have the fun, as in understand what is happening and direct it and control it more when they want to? And also to enable the AI to automate the parts the user doesn’t want to bother about?

    5. I’d also worry a lot about predictability and consistency across users. You simultaneously want the AI to customize things to your preferences, but also to be able to let others share with you the one weird trick or explain how to do a thing.

  1. What would an ideal partnership with a university look like? Altman isn’t sure, maybe try 20 different experiments. Cowen worries that higher education institutions lack internal reputational strength or credibility to make any major changes and all that happens is privatized AI use, and Altman says he’s ok with it.

    1. It does seem like academia and universities in America are not live players, they lack the ability to respond to AI or other changes, and they are mostly going to collect what rents they can until they get run over.

    2. In some senses I agree This Is Fine. Obviously it is a huge tragedy, all the time and money being wasted, but there is not much we can do about this, and it will be increasingly viable to bypass the system, or to learn in spite of it.

  2. How will the value of a typical college degree change in 5-10 years? Cowen notes it’s gone down in the last 10, after previously going up. Altman says further decline, faster than before, but not to zero as fast as it should.

    1. Sounds right to me under an ‘economic normal’ scenario.

  3. So what does get returns other than learning AI? Altman says yes, wide benefits to learning to use AI well, including but not limited to things like new science or starting companies.

    1. I notice Altman didn’t name anything non-AI that goes up in value.

    2. I don’t think that’s because he missed a good answer. Uh oh.

  4. How do you teach normies to use AI five years from now, for their own job? Altman says basically people learn on their own.

    1. It’s great that they can learn on their own, but this definitely is not optimal.

    2. As in, you should be able to do a lot better by teaching people?

    3. There’s definitely a common theme of lack of curiosity, where people need pushes in the right directions. Perhaps AI itself can help more with this.

  5. Will we still read books? Altman notes books have survived a lot of things.

    1. Books are on rapid decline already though. Kids these days, AIUI, read lots of text, but basically don’t read books.

  6. Will we start creating our own movies? What else will change? Altman says how we use emails and calls and meetings and write documents will change a lot, family time or time in nature will change very little.

    1. There’s the ‘economic normal’ and non-transformational assumption here, that the outside world looks the same and it’s about how you personally interact with AIs. Altman and Cowen both sneak this in throughout.

    2. Time with family has changed a lot in the last 50-100 years. Phones, computers and television, even radio, the shift in need for various household activities, cultural changes, things like that. I expect more change here, even if in some sense it doesn’t change much, and even if those who are wisest in many ways let it change the least, again in these ‘normal’ worlds.

    3. All the document shuffling, yes, that will change a lot.

    4. Altman doesn’t take the bait on movies and I think he’s mostly right. I mostly don’t want customized movies, I want to draw from the same movies as everyone else, I want to consume someone’s particular vision, I want a fixed document.

    5. Then again, we’ve moved into a lot more consumption of ephemeral, customized media, especially short form video, mostly I think this is terrible, and (I believe Cowen agrees here) I think we should watch more movies instead, I would include television.

    6. I think there’s a divide. Interactive things like games and in the future VR, including games involving robots or LLM characters, are a different kind of experience that should often be heavily customizable. There’s room for personalized, unique story generation, and interactions, too.

  1. Will San Francisco, at least within the West, remain the AI center? Altman says this is the default, and he loves the Bay Area and thinks it is making a comeback.

  2. What about housing costs? Can AI make them cheaper? Altman thinks AI can’t help much with this.

    1. Other things might help. California’s going at least somewhat YIMBY.

    2. I do think AI can help with housing quite a lot, actually. AI can find the solutions to problems, including regulations, and it can greatly reduce ‘transaction costs’ in general and reduce the edge of local NIMBY forces, and otherwise make building cheaper and more tractable.

    3. AI can also potentially help a lot with political dysfunction, institutional design, and other related problems, as well as to improve public opinion.

    4. AI and robotics could greatly impact space needs.

    5. Or, of course, AI could transform the world more generally, including potentially killing everyone. Many things impact housing costs.

  3. What about food prices? Altman predicts down, at least within a decade.

    1. Medium term I’d predict down for sure at fixed quality. We can see labor shift back into agriculture and food, probably we get more highly mechanized agriculture, and also AI should optimize production in various ways.

    2. I’d also predict people who are wealthier due to AI invest more in food.

    3. I wouldn’t worry about energy here.

  4. What about healthcare? Cowen predicts we will spend more and live to 98, and the world will feel more expensive because rent won’t be cheaper. Altman disagrees, says we will spend less on healthcare, we should find cures and cheap treatments, including through pharmaceuticals and devices and also cheaper delivery of services, whereas what will go up in price are status goods.

    1. There’s two different sets of dynamics in healthcare I think?

    2. In the short run, transaction costs go down, people get better at fighting insurance companies, better at identifying and fighting for needed care. Demand probably goes up, total overall real spending goes up.

    3. Ideally we would also be eliminating unnecessary, useless or harmful treatments along the way, and thus spending would go down, since much of our medicine is useless, but alas I mostly don’t expect this.

    4. We also should see large real efficiency gains in provision, which helps.

    5. Longer term (again, in ‘normal’ worlds), we get new treatments, new drugs and devices, new delivery systems, new understanding, general improvement, including making many things cheaper.

    6. At that point, lots of questions come into play. We are wealthier with more to buy, so we spend more. We are wiser and know what doesn’t work and find less expensive solutions and gain efficiency, so we spend less. We are healthier so we spend less now but live longer which means we spend more.

    7. In the default AGI scenarios, we don’t only live to 98, we likely hit escape velocity and live indefinitely, and then it comes down to what that costs.

    8. My default in the ‘good AGI’ scenarios is that we spend more on healthcare in absolute terms, but less as a percentage of economic capacity.

  1. Cowen asks if we should reexamine patents and copyright? Altman has no idea.

    1. Our current systems are obviously not first best, already were not close.

    2. Copyright needs radical rethinking, and already did. Terms are way too long. The ‘AI outputs have no protections’ rule isn’t going to work. Full free fair use for AI training is no good, we need to compensate creators somehow.

    3. Patents are tougher but definitely need rethinking.

  2. Cowen is big on freedom of speech and worries people might want to rethink the First Amendment in light of AI.

    1. I don’t see signs of this? I do see signs of people abandoning support for free speech for unrelated reasons, which I agree is terrible. Free speech will ever and always be under attack.

    2. What I mostly have seen are attempts to argue that ‘free speech’ means various things in an AI context that are clearly not speech, and I think these should not hold, and that if they did I would worry about them taking all of free speech down with them.

  3. They discuss the intention to expand free expression of ChatGPT, the famous ‘erotica tweet.’ Perhaps people don’t believe in freedom of expression after all? Cowen does have that take.

    1. People have never been comfortable with actual free speech, I think. Thus we get people saying things like ‘free speech is good but not [misinformation / hate speech / violence or gore / erotica / letting minors see it / etc].’

    2. I affirm that yes LLMs should mostly allow adults full freedom of expression.

    3. I do get the issue in which if you allow erotica then you’re doing erotica now, and ChatGPT would instantly become the center of erotica and porn, especially if the permissions expand to image and even video generation.

  4. Altman wants to change subpoena power with respect to AI, to allow your AI to have the same protections as a doctor or lawyer. He says America today is willing to trust AI on that level.

    1. It’s unclear here if Altman wants to be able to carve out protected conversations for when the AI is being a doctor or lawyer or similar, or if he wants this for all AI conversations. I think it is the latter one.

    2. You could in theory do the former, including without invoking it explicitly, by having a classifier ask (upon getting a subpoena) whether any given exchange should qualify as privileged.

    3. Another option is to ‘hire the AI lawyer’ or other specialist by paying a nominal fee, the way lawyers will sometimes say ‘pay me a dollar’ in order to nominally be your lawyer and thus create legal privilege.

    4. There could also be specialized models to act as these experts.

    5. But also careful what you wish for. Chances seem high that getting these protections would come with obligations AI companies do not want.

    6. The current rules for this are super weird in many places, and the result of various compromises of different interests and incentives and lobbies.

    7. What I do think would be good at a minimum is if ‘your AI touched this information’ did not invalidate confidentiality, whereas sharing information with a third party often will invalidate confidentiality.

    8. Google search is a good comparison point because it ‘feels private’ but your search for ‘how to bury a body’ very much will end up in your court proceeding. I can see a strong argument that your AI conversations should be protected but if so then why not your Google searches?

    9. Similarly, when facing a lawsuit, if you say your ChatGPT conversations are private, do you also think your emails should be private?

  1. Cowen asks about LLM psychosis. Altman says it’s a ‘very tiny thing’ but not a zero thing, which is why the restrictions put in place in response to it pissed users off: most people are okay, so they just get annoyed.

    1. Users always get annoyed by restrictions and supervision, and the ones that are annoyed are often very loud.

    2. The actual outright LLM psychosis is rare, but people who actively want sycophancy and fawning and unhealthy interactions, and are mostly mad about not getting enough of that, are very common.

I’m going to go full transcript here again, because it seems important to track the thinking:

ALTMAN: Someone said to me once, “Never ever let yourself believe that propaganda doesn’t work on you. They just haven’t found the right thing for you yet.” Again, I have no doubt that we can’t address the clear cases of people near a psychotic break.

For all of the talk about AI safety, I would divide most AI thinkers into these two camps of “Okay, it’s the bad guy uses AI to cause a lot of harm,” or it’s, “the AI itself is misaligned, wakes up, whatever, intentionally takes over the world.”

There’s this other category, third category, that gets very little talk, that I think is much scarier and more interesting, which is the AI models accidentally take over the world. It’s not that they’re going to induce psychosis in you, but if you have the whole world talking to this one model, it’s not with any intentionality, but just as it learns from the world in this continually coevolving process, it just subtly convinces you of something. No intention, it just does. It learned that somehow. That’s not as theatrical as chatbot psychosis, obviously, but I do think about that a lot.

COWEN: Maybe I’m not good enough, but as a professor, I find people pretty hard to persuade, actually. I worry about this less than many of my AI-related friends do.

ALTMAN: I hope you’re right.

  1. On Altman’s statement:

    1. The initial quote is wise.

    2. The division into these three categories is a vast oversimplification, as all such things are. That doesn’t make the distinction not useful, but I worry about it being used in a way that ends up being dismissive.

    3. In particular, there is a common narrowing of ‘the AI itself is misaligned’ into ‘one day it wakes up and takes over the world’ and then people think ‘oh okay all we have to do is ensure that if one day one of them wakes up it doesn’t get to take over the world’ or something like that. The threat model within the category is a lot broader than that.

    4. There’s also ‘a bunch of different mostly-not-bad guys use the AI to pursue their particular interests, and the interactions and competitions and evolutions between them go badly or lead to loss of human control’ and there’s ‘we choose to put the AIs in charge of the world on purpose’ with or without AI having a hand in that decision, and so on and so forth.

    5. On the particular worry here of Altman’s, yes, I think that extended AI conversations are very good at convincing people of things, often in ways no one (including the AI) intended, and as AIs gain more context and adjust to it more, as they will, this will become a bigger and more common thing.

    6. People are heavily influenced by, and are products of, their environment, and of the minds they interact with on a regular basis.

  2. On Cowen’s statement:

    1. A professor is not especially well positioned to be persuasive, nor does a professor typically get that much time with engaged students one-on-one.

    2. When people talk about people being ‘not persuadable’ they typically talk about cases where people’s defenses are relatively high, in limited not-so-customized interactions in which the person is not especially engaged or following their curiosity or trusting, and where the interaction is divorced from their typical social context.

    3. We have very reliable persuasion techniques, in the sense that for the vast majority of human history most people in each area of the world believed in the local religion and local customs, were patriots of the local area, rooted for the local sports team, supported the local political perspectives, and so on, and were persuaded to pass all that along to their own children.

    4. We have a reliable history of armies being able to break down and incorporate new people, of cults being able to do so for new recruits, for various politicians to often be very convincing and the best ones to win over large percentages of people they interact with in person, for famous religious figures to be able to do massive conversions, and so on.

    5. Marxists were able to persuade large percentages of the world, somehow.

    6. Children who attend school and especially go to college tend to exit with the views of those they attend with, even when it conflicts with their upbringing.

    7. If you are talking to an AI all the time, and it has access to your details and stuff, this is very much an integrated social context, so yes, many are going to become highly persuadable over time.

    8. This is all assuming AI has to stick to Ordinary Human levels of persuasiveness, which it won’t have to.

    9. There are also other known techniques to persuade humans that we will not be getting into here, that need to be considered in such contexts.

    10. Remember the AI box experiments.

    11. I agree that if we’re talking about ‘the AI won’t in five minutes be able to convince you to hand over your bank account information’ that this will require capabilities we don’t know about, but that’s not the threshold.

  3. If you have a superintelligence ready to go, that is ‘safety-tested,’ that’s about to self-improve, and you get a prompt to type in, what do you type? Altman raises this question, says he doesn’t have an answer but he’s going to have someone ask the Dalai Lama.

    1. I also do not know the right answer.

    2. You’d better know that answer well in advance.


On Sam Altman’s Second Conversation with Tyler Cowen Read More »


DHS offers “disturbing new excuses” to seize kids’ biometric data, expert says


Sweeping DHS power grab would collect face, iris, voice scans of all immigrants.

Civil and digital rights experts are horrified by a proposed rule change that would allow the Department of Homeland Security to collect a wide range of sensitive biometric data on all immigrants, without age restrictions, and store that data throughout each person’s “lifecycle” in the immigration system.

If adopted, the rule change would allow DHS agencies, including Immigration and Customs Enforcement (ICE), to broadly collect facial imagery, finger and palm prints, iris scans, and voice prints. They may also request DNA, which DHS claimed “would only be collected in limited circumstances,” like to verify family relations. These updates would cost taxpayers $288.7 million annually, DHS estimated, including $57.1 million for DNA collection alone. Annual individual charges to immigrants submitting data will likely be similarly high, estimated at around $231.5 million.

Costs could be higher, DHS admitted, especially if DNA testing is conducted more widely than projected.

“DHS does not know the full costs to the government of expanding biometrics collection in terms of assets, process, storage, labor, and equipment,” DHS’s proposal said, while noting that from 2020 to 2024, the US only processed such data from about 21 percent of immigrants on average.

Alarming critics, the update would allow DHS for the first time to collect biometric data of children under 14, which DHS claimed would help reduce human trafficking and other harms by making it easier to identify kids crossing the border unaccompanied or with a stranger.

Jennifer Lynch, general counsel for a digital rights nonprofit called the Electronic Frontier Foundation, told Ars that EFF joined Democratic senators in opposing a prior attempt by DHS to expand biometric data collection in 2020.

There was so much opposition to that rule change that DHS ultimately withdrew it, Lynch noted, but DHS confirmed in its proposal that the agency expects more support for the much broader initiative under the current Trump administration. Quoting one of Trump’s earliest executive orders in this term, directing DHS to “secure the border,” DHS suggested it was the agency’s duty to use “any available technologies and procedures to determine the validity of any claimed familial relationship between aliens encountered or apprehended by the Department of Homeland Security.”

Lynch warned that DHS’s plan to track immigrants over time, starting as young as possible, would allow DHS “to track people without their knowledge as they go about their lives” and “map families and connections in whole communities over time.”

“This expansion poses grave threats to the privacy, security, and liberty of US citizens and non-citizens,” Lynch told Ars, noting that “the federal government, including DHS, has failed to protect biometric data in the past.”

“Risks from security breaches to children’s biometrics are especially acute,” she said. “Large numbers of children are already victims of identity theft.”

By maintaining a database, the US also risks chilling speech, as immigrants weigh risks of social media comments—which DHS already monitors—possibly triggering removals or arrests.

“People will be less likely to speak out on any issue for fear of being tracked and facing severe reprisals, like detention and deportation, that we’ve already seen from this administration,” Lynch told Ars.

DHS also wants to collect more biometric data on US citizens and permanent residents who sponsor immigrants or have familial ties. Esha Bhandari, director of the ACLU’s speech, privacy, and technology project, told Ars that “we should all be concerned that the Trump administration is potentially building a vast database of people’s sensitive, unchangeable information, as this will have serious privacy consequences for citizens and noncitizens alike.”

“DHS continues to explore disturbing new excuses to collect more DNA and other sensitive biometric information, from the sound of our voice to the unique identifiers in our irises,” Bhandari said.

EFF previously noted that DHS’s biometric database was already the second largest in the world. By expanding it, DHS estimated that the agency would collect “about 1.12 million more biometrics submissions” annually, increasing the current baseline to about 3.19 million.

As the data pool expands, DHS plans to hold onto the data until an immigrant who has requested benefits or otherwise engaged with DHS agencies is either granted citizenship or removed.

Lynch suggested that “DHS cites questionable authority for this massive change to its practices,” which would “exponentially expand the federal government’s ability to collect biometrics from anyone associated with any immigration benefit or request—including US citizens and children of any age.”

“Biometrics are unique to each of us and can’t be changed, so these threats exist as long as the government holds onto our data,” Lynch said.

DHS will collect more data on kids than adults

Not all agencies will require all forms of biometric data to be submitted “instantly” if the rule change goes through, DHS said. Instead, agencies will assess their individual needs, while supposedly avoiding repetitive data collection, so that data won’t be collected every time someone is required to fill out a form.

DHS said it “recognizes” that its sweeping data collection plans that remove age restrictions don’t conform with Department of Justice policies. But the agency claimed there was no conflict since “DHS regulatory provisions control all DHS biometrics collections” and “DHS is not authorized to operate or collect biometrics under DOJ authorities.”

“Using biometrics for identity verification and management” is necessary, DHS claimed, because it “will assist DHS’s efforts to combat trafficking, confirm the results of biographical criminal history checks, and deter fraud.”

Currently, DHS is seeking public comments on the rule change, which can be submitted over the next 60 days ahead of a deadline on January 2, 2026. The agency suggests it “welcomes” comments, particularly on the types of biometric data DHS wants to collect, including concerns about the “reliability of technology.”

If approved, DHS said that kids will likely be subjected to more biometric data collection than adults. Additionally, younger kids will be subjected to processes that DHS formerly limited to only children age 14 and over.

For example, DHS noted that previously, “policies, procedures, and practices in place at that time” restricted DHS from running criminal background checks on children.

However, DHS claims that’s now appropriate, including in cases where children were trafficked or are seeking benefits under the Violence Against Women Act and, therefore, are expected to prove “good moral character.”

“Generally, DHS plans to use the biometric information collected from children for identity management in the immigration lifecycle only, but will retain the authority for other uses in its discretion, such as background checks and for law enforcement purposes,” DHS’s proposal said.

The changes will also help protect kids from removals, DHS claimed, by making it easier for an ICE attorney to complete required “identity, law enforcement, or security investigations or examinations.” As DHS explained:

DHS proposes to collect biometrics at any age to ensure the immigration records created for children can be related to their adult records later, and to help combat child trafficking, smuggling, and labor exploitation by facilitating identity verification, while also confirming the absence of criminal history or associations with terrorist organizations or gang membership.

A top priority appears to be tracking kids’ family relationships.

“DHS’s ability to collect biometrics, including DNA, regardless of a minor’s age, will allow DHS to accurately prove or disprove claimed genetic relationships among apprehended aliens and ensure that unaccompanied alien children (UAC) are properly identified and cared for,” the proposal said.

But DHS acknowledges that biometrics won’t help in some situations, like where kids are adopted. In those cases, DHS will still rely on documentation like birth certificates, medical records, and “affidavits to support claims based on familial relationships.”

It’s possible that some DHS agencies may establish an age threshold for some data collection, the rule change noted.

A day after the rule change was proposed, 42 comments had been submitted. Most were critical, but as Lynch warned, speaking out seemed risky, with many choosing to anonymously criticize the initiative as violating people’s civil rights and making the US appear more authoritarian.

One anonymous user cited guidance from the ACLU and the Electronic Privacy Information Center, while warning that “what starts as a ‘biometrics update’ could turn into widespread privacy erosion for immigrants and citizens alike.”

The commenter called out DHS for seriously “talking about harvesting deeply personal data that could track someone forever” and subjecting “infants and toddlers” to “iris scans or DNA swabs.”

“You pitch it as a tool against child trafficking, which is a real issue, but does swabbing a newborn really help, or does it just create a lifelong digital profile starting at day one?” the commenter asked. “Accuracy for growing kids is questionable, and the [ACLU] has pointed out how this disproportionately burdens families. Imagine the hassle for parents—it’s not protection; it’s preemptively treating every child like a data point in a government file.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

DHS offers “disturbing new excuses” to seize kids’ biometric data, expert says Read More »


New quantum hardware puts the mechanics in quantum mechanics


As a test case, the machine was used to test a model of superconductivity.

Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, and there’s no device-to-device variation among atoms. Every atom is the same and should perform similarly every time. And since the qubits themselves can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that, as long as the trackways the ions move on enable it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits, or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.
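
As a rough illustration of why this connectivity matters, consider the cost of a two-qubit gate between distant qubits on fixed nearest-neighbor hardware versus movable ions. This is a toy comparison under my own naming (these functions are not any real API), counting the SWAP gates needed to make two qubits on a 1D line adjacent:

```python
# Toy comparison: cost of a two-qubit gate between qubits at positions a and b.
# On fixed nearest-neighbor hardware, the qubits must first be made adjacent
# with a chain of SWAP gates; with movable trapped ions, physical transport
# does the same job without consuming any gates.

def swaps_needed_nearest_neighbor(a: int, b: int) -> int:
    """SWAPs required to bring qubits a and b next to each other on a line."""
    return max(0, abs(a - b) - 1)

def swaps_needed_all_to_all(a: int, b: int) -> int:
    """Movable ions: transport replaces the SWAP chain entirely."""
    return 0

# Entangling the two ends of a 10-qubit line:
print(swaps_needed_nearest_neighbor(0, 9))  # → 8 extra SWAP gates
print(swaps_needed_all_to_all(0, 9))        # → 0
```

Each extra SWAP is itself built from noisy two-qubit gates, which is why avoiding SWAP chains helps preserve fidelity on deep circuits.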

But arranging the trackways so that any two qubits can be next to each other can become increasingly complicated. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can cause a system to start pushing up against the coherence time of the qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require a mastery of controlling the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below is taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: a loop at the left and two legs extending to the right. They’re connected by a four-way intersection. The Quantinuum staff described this intersection as being central to the computer’s operation.


The actual ions trace out the physical layout of the Helios system, featuring a storage ring and two legs that contain dedicated operation sites. Credit: Quantinuum

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.
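The routing scheme can be sketched as a toy simulation. This is not Quantinuum's control software, just a minimal model of the idea Hayes describes: ions circulate one way around a ring, and the junction diverts the operands of the next gate into a leg.

```python
from collections import deque

def schedule_gates(ring_ions, gate_pairs):
    """Toy model of Helios-style routing: ions circulate one way around
    a storage ring; whenever an ion needed for the next two-qubit gate
    reaches the junction, it is kicked into a leg. Once both operands
    are in the leg, the gate runs and the ions return to the ring."""
    ring = deque(ring_ions)
    executed = []
    for a, b in gate_pairs:
        leg = []
        while len(leg) < 2:
            ion = ring[0]            # ion currently at the junction
            ring.rotate(-1)          # one step of one-way circulation
            if ion in (a, b):
                ring.remove(ion)     # junction diverts it into the leg
                leg.append(ion)
        executed.append(tuple(sorted(leg)))
        ring.extend(leg)             # after the gate, rejoin the ring
    return executed
```

Because traffic only ever flows one way, no ion ever has to swap past another, which is the point of the design.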

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy and is based on Python, which has been modified to allow users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like for loops and if-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
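The repeat-until-success pattern mentioned above looks roughly like the following. To be clear, this is plain Python with a stubbed-out measurement, not actual Guppy syntax; `prepare_qubit` is a hypothetical stand-in for the SDK's own preparation and mid-circuit measurement operations.

```python
import random

def prepare_qubit(rng):
    """Hypothetical stand-in for hardware initialization followed by a
    heralding measurement; here it simply 'fails' 20% of the time."""
    return rng.random() > 0.2

def repeat_until_success(rng, max_tries=10):
    """Re-attempt preparation until the herald reports success, the kind
    of conditional control flow the new Guppy release enables."""
    for attempt in range(1, max_tries + 1):
        if prepare_qubit(rng):
            return attempt           # number of tries it took
    raise RuntimeError("initialization failed after max_tries attempts")
```

The key capability is that the control system reads a measurement result mid-circuit and branches on it in real time, rather than committing to a fixed gate sequence up front.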

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect the electric repulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can distinguish that behavior from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?’” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”
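The two quantities Dreyer contrasts have a compact standard form. These are the textbook Fermi-Hubbard definitions, not anything specific to Quantinuum's implementation:

```latex
% Fermi-Hubbard Hamiltonian: hopping amplitude t, on-site interaction U
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^\dagger_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_i n_{i\uparrow} n_{i\downarrow}

% Chance of a single charged particle tunneling from site j to site i:
G_{ij} = \langle c^\dagger_{i\sigma} c_{j\sigma} \rangle

% Chance of a whole pair tunneling from site j to site i:
P_{ij} = \langle c^\dagger_{i\uparrow} c^\dagger_{i\downarrow}
                 c_{j\downarrow} c_{j\uparrow} \rangle
```

A superconducting phase shows up when the pair correlator stays large at long distances even as the single-particle one decays.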

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have handled; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a material at room temperature, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s [a] very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”


Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum

The collection of squares seen in future processors will also support the same sorts of operations currently done with the loop-and-legs of Helios. Some squares can serve as the equivalent of a loop for storage and sorting, while some of the straight lines nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,’” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



YouTube TV’s Disney blackout reminds users that they don’t own what they stream

“I don’t know (or care) which side is responsible for this, but the DVR is not VOD, it is your recording, and shows recorded before the dispute should be available. This is a hard lesson for us all,” an apparently affected customer wrote on Reddit this week.

For current or former cable subscribers, this experience isn’t new. Carrier disputes have temporarily and permanently killed cable subscribers’ access to many channels over the years. And since the early 2000s, many cable companies have phased out DVRs with local storage in favor of cloud-based DVRs. Since then, cable companies have been able to revoke customers’ access to DVR files if, for example, the customer stopped paying for the channel from which the content was recorded. What we’re seeing with YouTube TV’s DVR feature is one of several ways that streaming services mirror cable companies.

Google exits Movies Anywhere

In a move that appears to be best described as tit for tat, Google has removed content purchased via Google Play and YouTube from Movies Anywhere, a Disney-owned unified platform that lets people access digital video purchases from various distributors, including Amazon Prime Video and Fandango.

In removing users’ content, Google may gain some leverage in its discussions with Disney, which is reportedly seeking a larger carriage fee from YouTube TV. The content removals, however, are just one more pain point of the fragmented streaming landscape customers are already dealing with.

Customers inconvenienced

As of this writing, Google and Disney have yet to reach an agreement. On Monday, Google publicly rejected Disney’s request to restore ABC to YouTube TV for yesterday’s election day, although the company showed a willingness to find a way to quickly bring back ABC and ESPN (“the channels that people want,” per Google). Disney has escalated things by making its content unavailable to rent or purchase from all Google platforms.

Google is trying to appease customers by saying it will give YouTube TV subscribers a $20 credit if Disney “content is unavailable for an extended period of time.” Some people online have reported receiving a $10 credit already.

Regardless of how this saga ends, the immediate effects have inconvenienced customers of both companies. People subscribe to streaming services and rely on digital video purchases and recordings for easy, instant access, which Google and Disney’s disagreement has disrupted. The squabble has also served as another reminder that in the streaming age, you don’t really own anything.
