

Google claims win for everyone as text scammers lost their cloud server

The day after Google filed a lawsuit to end text scams primarily targeting Americans, the criminal network behind the phishing scams was “disrupted,” a Google spokesperson told Ars.

According to messages that the “ringleader” of the so-called “Lighthouse enterprise” posted on his Telegram channel, the phishing gang’s cloud server was “blocked due to malicious complaints.”

“We will restore it as soon as possible!” the leader posted on the channel—which Google’s lawsuit noted helps over 2,500 members coordinate phishing attacks that have resulted in losses of “over a billion dollars.”

Google has alleged that the Lighthouse enterprise is a “criminal group in China” that sells “phishing for dummies” kits that make it easier for scammers with little tech savvy to launch massive phishing campaigns. So far, “millions” of Americans have been harmed, Google alleged, as scammers disproportionately impersonate US institutions, like the Postal Service, as well as well-known brands like E-ZPass.

The company’s lawsuit seeks to dismantle the entire Lighthouse criminal enterprise, so the company was pleased to see Lighthouse communities go dark. In a statement, Halimah DeLaine Prado, Google’s general counsel, told Ars that “this shutdown of Lighthouse’s operations is a win for everyone.”



Steam Deck minus the screen: Valve announces new Steam Machine, Controller hardware


SteamOS-powered cube for your TV targets early 2026 launch, no pricing details.

Meet the ValveCube (not its real name) Credit: Valve

Nearly four years after the Steam Deck changed the world of portable gaming, Valve is getting ready to release SteamOS-powered hardware designed for the living room TV, or even as a desktop PC gaming replacement. The simply named Steam Machine and Steam Controller, both planned to ship in early 2026, are “optimized for gaming on Steam and designed for players to get even more out of their Steam Library,” Valve said in a press release.

A Steam Machine spec sheet shared by Valve lists a “semi-custom” six-core AMD Zen 4 CPU clocked at up to 4.8 GHz alongside an AMD RDNA3 GPU with 28 compute units. The motherboard will include 16GB of DDR5 RAM and an additional 8GB of dedicated GDDR6 VRAM for the GPU. The new hardware will come in two configurations with 512GB or 2TB of unspecified “SSD storage,” though Valve isn’t sharing pricing for either just yet.

If you squint, you can make out a few ports on this unmarked black square. Credit: Valve

Those chips and numbers suggest the Steam Machine will have roughly the same horsepower as a mid-range desktop gaming PC from a few years back. But Valve says its “Machine”—which it ranks as “over 6x more powerful than the Steam Deck”—is powerful enough to support ray-tracing and/or 4K, 60 fps gaming using FSR upscaling.
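For a rough sense of where that “6x” figure could come from, you can estimate theoretical FP32 throughput from the compute-unit counts alone. The sketch below is napkin math rather than a benchmark, and the Steam Machine’s GPU clock (~2.4 GHz) is an assumption, since Valve’s spec sheet doesn’t list one.

```python
# Napkin-math GPU throughput comparison (theoretical FP32, not a benchmark).
# Each RDNA compute unit has 64 stream processors; a fused multiply-add
# counts as 2 FLOPs per cycle. RDNA3 can dual-issue FP32, up to 4 FLOPs/cycle.

def tflops(compute_units, clock_ghz, flops_per_sp_cycle=2):
    """Theoretical TFLOPS: CUs x 64 SPs x FLOPs/cycle x clock (GHz) / 1000."""
    return compute_units * 64 * flops_per_sp_cycle * clock_ghz / 1000

steam_deck = tflops(8, 1.6)        # published Steam Deck GPU figures
steam_machine = tflops(28, 2.4)    # 28 CUs per Valve; 2.4 GHz clock assumed

print(f"Steam Deck:    {steam_deck:.1f} TFLOPS")     # ~1.6
print(f"Steam Machine: {steam_machine:.1f} TFLOPS")  # ~8.6, ~5.3x the Deck
# RDNA3 dual-issue (flops_per_sp_cycle=4) would double the paper figure,
# which is one way a ">6x" comparison could be reached on paper.
```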

Externally, the Steam Machine is housed in a stark black cube measuring 160 mm (~6.3 inches) on each side, making it slightly larger than the old Nintendo GameCube (sans handle). The front of the Machine sports two USB-A ports, an SD card storage expansion slot, a power button, and a “customizable LED bar” that can change to reflect when the system is booting up, downloading updates, etc. A huge fan vent takes up most of the rear of the unit, alongside three additional USB ports (including one USB-C port) and HDMI 2.0 and DisplayPort 1.4 outputs.

Taking control

While the Steam Machine will be able to connect to standard USB and Bluetooth PC controllers and peripherals, it has been designed with a brand-new Steam Controller in mind. And while both pieces of hardware will be sold separately, they will also be available in a bundle for gamers who want an all-in-one living room gaming solution.

If it weren’t for those touchpads, it would be hard to distinguish this gamepad from a lot of other modern controllers. Credit: Valve

The new Steam Controller (not to be confused with the identically named old Steam Controller) will make use of a proprietary 2.4 GHz wireless connection that allows for around 8 ms of end-to-end latency between a button press and the system receiving the resulting signal. A radio for that connection will be built into the Steam Machine but will also be available via an included “plug and play” Steam Controller Puck that can support up to four wireless controller connections.

Without the puck, the new Steam Controller can still connect to PCs (including portable gaming PCs) and smartphones via Bluetooth or a wired USB connection. And while console connections are technically possible, Valve Software Engineer Pierre-Loup Griffais and Designer Lawrence Yang told Ars via email that it would “require collaboration with the vendor” that the company would be “happy to discuss… if it came up.”

The most striking feature of the Steam Controller is the dual touchpads underneath the thumbsticks, mirroring the similar, somewhat underutilized control options on the Steam Deck. Each touchpad will come with its own haptic motor for “HD tactile feedback” that should feel akin to rolling a clicky trackball under your thumb (two more haptic motors in the grips handle force feedback output from the games themselves).

Aside from that, the Steam Controller seems a lot more standardized than Valve’s last attempt at a controller. It features thumbsticks, a d-pad, face buttons, and shoulder buttons pretty much where you’d expect them, plus four programmable “grip buttons” on the back side of the controller. The familiar Steam, View, Menu, and QAM (aka “three dots”) buttons also come over from the Steam Deck for quick access to useful SteamOS functions.

Internally, the Steam Controller will use magnetic TMR thumbstick sensors, which should hopefully limit the kind of stick drift we see with the mechanical sticks on the Nintendo Switch, for instance. A six-axis IMU will allow for gyro-based tilt controls as well, and a “grip sensor” can help make sure those controls turn off when you’re putting the controller down or picking it up.

Let’s try that again

Software-wise, the Steam Machine will of course run SteamOS, the custom Linux-based operating system popularized by the Steam Deck and recently officially expanded to other handhelds. Valve says that means fast suspend/resume features, easy access to your Steam cloud saves, “and all the other Steam features you’d expect.” It also means the ability to boot to a Linux desktop mode or install Windows with the help of drivers available on Valve’s website, Griffais and Yang told Ars.

Crucially, the new SteamOS offers compatibility with the vast majority of games made for Windows via Proton, a key feature that was missing the last time Valve pushed Linux-based “Steam Machines” hardware roughly a decade ago. Recent versions of SteamOS can actually boast better in-game performance than Windows on some games and hardware in Ars’ testing.

“One of our biggest learnings [from the first Steam Machines effort] is that it’s a tall order to ask developers to port their games to run on Linux—so we have done a bunch of work on Proton to the point where almost all games just work out of the box,” Griffais and Yang told Ars. “Since that time, we’ve gained valuable experience in manufacturing, made big improvements to Steam, Steam Input, and SteamOS, and we are excited to bring our own first party Steam Machine and the new Steam Controller to market.”

Valve’s ill-fated Steam Machines hardware rollout 10 years ago also relied on third-party manufacturers to handle the actual construction of a wide range of branded Linux boxes. This time around, Valve is handling the manufacture and distribution of a singular Steam Machine on its own, following the success of a similar rollout for the Steam Deck. And while we’ve seen leaked “Powered by SteamOS” branding suggesting third-party SteamOS living room boxes might be in the works, Valve hasn’t announced anything official yet.

“We’re always happy to chat with companies who are interested in making their own SteamOS powered devices,” Griffais and Yang told Ars. “We are working on broadening support, and with the recent updates to Steam and SteamOS, compatibility with other devices has improved, starting with other AMD powered PC handhelds.”

But while the Steam Deck filled an obvious market need for portable access to PC games, it’s harder to know where the new Steam Machine will fit in the already crowded market for living room gaming (not to mention the highly modular desktop gaming market). That’s especially true since the Steam Deck and its imitators can already serve as passable living room gaming devices when plugged into any number of third-party USB-C docks.

A lot will depend on pricing details and just how simple and convenient the new hardware makes the experience of playing PC games on the living room TV. We’ll keep you posted as more information comes in and when we’ve had a chance to get some hands-on time with Valve’s newest swing at the hardware market.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.



Corals survived past climate changes by retreating to the deeps


A recent die-off in Florida puts the spotlight on corals’ survival strategies.

Scientists have found that the 2023 marine heat wave caused “functional extinction” of two Acropora reef-building coral species living in the Florida Reef, which stretches from the Dry Tortugas National Park to Miami.

“At this point, we do not think there’s much of a chance for natural recovery—their numbers are so low that successful reproduction is incredibly unlikely,” said Ross Cunning, a coral biologist at the John G. Shedd Aquarium.

This isn’t the first time corals have faced the brink of extinction over the last 460 million years, and they have always managed to bounce back and recolonize habitats lost during severe climate changes. The problem is that we won’t live long enough to see them do it again.

Killer heat waves

Marine heat waves kill corals by messing with the photosynthetic machinery of symbiotic microalgae that live in the corals’ tissues. When the temperature of water goes up too much, the microalgae start producing reactive oxygen species instead of nutritious sugars. The reactive oxygen is toxic to corals, which respond by expelling the microalgae. This solves the toxicity problem, but it also starves the corals and causes them to bleach (the algae are the source of their yellowish color).

The 2023 marine heat wave was not the first to hit the Florida Reef—it was the ninth on record. “Those eight previous heat waves also had major negative effects on coral reefs, causing widespread mortality,” Cunning told Ars. “But the 2023 heat wave blew all other heat waves out of the water. It was 2.2 to 4 times greater in magnitude than anything that came before it.”

Cunning’s team monitored two Acropora coral species: the staghorn and elkhorn. “They are both branching corals,” Cunning explained. “The staghorn has pointy branches that form dense thickets, whereas elkhorn produces arm-like branches that reach up and grow toward the surface, producing highly complex three-dimensionality, like a canopy in a forest.”

He and his colleagues chose those two species because they essentially built the Florida Reef. They also grow the fastest among all Florida Reef corals, which means they are essential for its ability to recover from damage. “Acropora corals were the primary reef builders for the last ten thousand years,” Cunning said. Unfortunately, they also showed the highest levels of mortality due to heat waves.

Coral apocalypse

Cunning’s team found the mortality rate among Acropora corals reached 100 percent in the Dry Tortugas National Park, which is at the southernmost end of the Florida Reef. Moving north through the Lower Keys, Middle Keys, and most of the Upper Keys, mortality stayed between 98 and 100 percent.

“Once you start moving a little bit further north, there’s the Biscayne National Park, where mortality rates were at 90 percent,” Cunning said. “It wasn’t until the furthest northern extent of the reef in Miami and Broward counties where mortality dropped to just 38 percent thanks to cooler temperatures that occurred there.”

Still, the mortality rate was exceptionally high throughout most Acropora colonies across the Florida Reef. “What we’re facing is a functional extinction,” Cunning said.

But corals have been around for about 460 million years, and they have survived multiple mass extinction events, including the one that wiped out the dinosaurs. As vulnerable as they appear, corals seemingly have some get-out-of-death card they always pull when things turn really bad for them. This card, most likely, is buried deep in their genome.

Ancestral strength

“There have been studies looking into the evolutionary history of corals, but the difference between those and our work lies in technology,” said Claudia Francesca Vaga, a marine biologist at the Smithsonian Institution.

Her team looked at ultraconserved elements, stretches of DNA that are nearly identical across even distantly related species. These elements were used to build the most extensive phylogenetic tree of corals to date. Based on the genomic data and fossil evidence, Vaga’s team analyzed how 274 stony coral species are related to one another to retrace their common ancestor and reconstruct how they evolved from it.

“We managed to confirm that the first common ancestor of stony corals was most likely solitary—it didn’t live in colonies, and it didn’t have symbionts,” Vaga said.

The very first coral most likely did not rely on algae to produce its nutrients, which means it was immune to bleaching. It was also not attached to a substrate, so it could move from one habitat to another. Another advantage the first corals had was that they were not particularly picky—they could live just as well in the shallow waters as in the deep sea, since they didn’t get most of their nutrients from their photosynthetic symbionts.

Descending from these incredibly resilient ancestors, corals started to specialize. “We learned that symbiosis and coloniality can be acquired independently by stony coral lineages and that it happened multiple times,” Vaga said.

Based on her team’s research, past mass extinction events usually wiped out 90 percent of the species living in shallow waters—the ones that were colonial and reliant on symbionts. “But each such extinction triggered a process of retaking the shallows by the more resilient deep-sea corals, which in time evolved symbiosis and coloniality again,” Vaga said.

Thanks to corals’ deep-sea cousins, even the most extreme environmental changes—global warming or sudden, severe variations in the oceans’ acidity or oxygen levels—could not kill them for good. Each mass extinction event they’ve been through just reverted them to factory settings and made them start over from scratch.

The only catch here is time. “We’re talking about four to five million years before coral populations recover,” Vaga said.

Long way back

According to Cunning, the consequence of Acropora corals’ extinction in the Florida Reef is a lower overall reef-building rate, which will lead to reduced biodiversity in the reef’s ecosystem. “There are going to be cascading effects, and humans will be impacted as well. Reefs protect our coastlines by buffering over 90 percent of wave energy,” Cunning said.

In Florida, where coastlines are heavily urbanized, this may translate into hundreds of millions of dollars per year in damages.

But Cunning said we still have means at our disposal to save Acropora corals. “We’re not going to give up on them,” he said.

One option for improving the resilience of corals could be to crossbreed them with species from outside of the Florida Reef, ideally ones that live in warmer places and are better adapted to heat. “The first tests of this approach are underway right now in Florida; elkhorn corals were crossbred between Florida parents and Honduran parents,” Cunning said. He hopes this will help produce a new generation of corals that has a better shot at surviving the next heat wave.

Other interventions include manipulating corals’ algal symbionts. “There are many different species of algae with different levels of heat tolerance,” Cunning said. To him, a possible way forward would be to pair the Acropora corals with more heat-tolerant symbionts. “This should alter the bleaching threshold in these corals,” he explained.

Still, even interventions like these will take a very long time to make a difference. “But if four or five million years is the benchmark to beat, then yeah, it’s hopefully going to happen faster than that,” Cunning said.

The upside is that corals will likely pull off their de-extinction trick once again, even if we do absolutely nothing to help them. “In a few million years, they will redevelop coloniality, redevelop symbiosis, and rebuild something similar to the coral reefs we have today,” Vaga said. “This is good news for them. Not necessarily for us.”

Science, 2025. DOI: 10.1126/science.adx7825

Nature, 2025. DOI: 10.1038/s41586-025-09615-6


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Variously Effective Altruism

This post is a roundup of various things related to philanthropy, as you often find in the full monthly roundup.

Peter Thiel warned Elon Musk to ditch The Giving Pledge because Bill Gates will give his wealth away ‘to left-wing nonprofits.’

As John Arnold points out, this seems highly confused. The Giving Pledge is a promise to give away your money, not a promise to let Bill Gates give away your money. The core concern, that your money ends up going to causes one does not believe in (and probably highly inefficiently at that), seems real: once you send money into a foundation ecosystem, it by default gets captured by foundation-style people.

As he points out, ‘let my children handle it’ is not a great answer, and would be especially poor for Musk given the likely disagreements over values, especially if you don’t actually give those children that much free and clear (and thus, are being relatively uncooperative, so why should they honor your preferences?). There are no easy answers.

A new paper goes Full Hanson with the question Does Maximizing Good Make People Look Bad? They answer yes: if you give deliberately rather than empathetically and seek to maximize impact, this is viewed as less moral and you are seen as a less desirable social partner, and donors estimate this effect roughly correctly. Which makes sense if you consider that one advantage of being a social partner is that you can direct your partners with social and emotional appeals, and thereby extract their resources. As with so many other things, you can be someone or do something, and if you focus on one you have to sacrifice some of the other.

This is one place where the core idea of Effective Altruism is pretty great. You create a community of people where it is socially desirable to be deliberative, and scorn is put on those who are empathic instead. If that was all EA did, without trying to drum up more resources or direct how people deliberated? That alone is a big win.

UATX eliminates tuition forever as the result of a $100 million gift from Jeff Yass. Well, hopefully. This gift alone doesn’t fund that; they’re counting on future donations from grateful students, so they might have to back out of this the way Rice had to in 1965. One could ask: given that schools like Harvard, Yale, and Stanford make such bets and have wildly successful graduates who give lots of money, and still charge tuition, what is the difference?

In general giving to your Alma Mater or another university is highly ineffective altruism. One can plausibly argue that fully paying for everyone’s tuition, with an agreement to that effect, is a lot better than giving to the general university fund, especially if you’re hoping for a cascade effect. It would be a highly positive cultural shift if selective colleges stopped charging tuition. Is that the best use of $100 million? I mean, obviously not even close, but it’s not clear that it is up against the better uses.

Will MacAskill asks what Effective Altruism should do now that AI is making rapid progress and there is a large distinct AI safety movement. He argues EA should embrace the mission of making the transition to a post-AGI society go well.

Will MacAskill: This third way will require a lot of intellectual nimbleness and willingness to change our minds. Post-FTX, much of EA adopted a “PR mentality” that I think has lingered and is counterproductive.

EA is intrinsically controversial because we say things that aren’t popular — and given recent events, we’ll be controversial regardless. This is liberating: we can focus on making arguments we think are true and important, with bravery and honesty, rather than constraining ourselves with excessive caution.

He does not mention until later the obvious objection, which is that the Effective Altruist brand is toxic, to the point that the label is used as a political accusation.

No, this isn’t primarily because EA is ‘inherently controversial’ for the things it advocates. It is primarily because, as I understand things:

  1. EA tells those who don’t agree with EA, and who don’t allocate substantial resources to EA causes, that they are bad, and that they should feel bad.

  2. EA (long before FTX) adopted the ‘PR mentality’ MacAskill rightfully criticizes across a broad range of areas, and has taken other hostile actions.

  3. FTX, which was severely mishandled.

  4. Active intentional scapegoating and fear mongering campaigns.

  5. Yes, the things it advocates for, and the extent to which it and components of it have pushed for them, but this is one of many elements.

Thus, I think that the things strictly labeled EA should strive to stay away from the areas in which being politically toxic is a problem, and consider the risks of further negative polarization. It also needs to address the core reasons EA got into the ‘PR mentality.’

Here are the causes he thinks this new EA should have in its portfolio (with unequal weight that is not specified):

  • global health & development

  • factory farming

  • AI safety

  • AI character

  • AI welfare / digital minds

  • the economic and political rights of AIs

  • AI-driven persuasion and epistemic disruption

  • AI for better reasoning, decision-making and coordination

  • the risk of (AI-enabled) human coups

  • democracy preservation

  • gradual disempowerment

  • biorisk

  • space governance

  • s-risks

  • macrostrategy

  • meta

There are some strange flexes in there, but given the historical origins, okay, sure, not bad. Mostly these are good enough to be ‘some of you should do one thing, and some of you should do the other’ depending on one’s preferences and talents. I strongly agree with Will’s emphasis that his shift into AI is an affirmation of the core EA principles worth preserving, of finding the important thing and focusing there.

I am glad to see Will discuss the problem of ‘PR focus.’

By “PR mentality” I mean thinking about communications through the lens of “what is good for EA’s brand?” instead of focusing on questions like “what ideas are true, interesting, important, under-appreciated, and how can we get those ideas out there?”

I also appreciate Will’s noticing that the PR focus hasn’t worked even on its own terms, that EA discourse is withering. I would add that EA’s brand and PR position is terrible in large part exactly because EA has often acted, for a long period, in this PR-focused, uncooperative and fundamentally hostile way, that comes across as highly calculated because it was, along with a lack of being straight with people, and eventually people learn the pattern.

This laid the groundwork, when combined with FTX and an intentional series of attacks from a16z and related sources, to poison the well. It wouldn’t have worked otherwise to anything like the same extent.

This was very wise:

And I think this mentality is corrosive to EA’s soul because as soon as you stop being ruthlessly focused on actually figuring out what’s true, then you’ll almost certainly believe the wrong things and focus on the wrong things, and lose out on most impact. Given fat-tailed distributions of impact, getting your focus a bit wrong can mean you do 10x less good than you could have done. Worse, you can easily end up having a negative rather than a positive effect.

Except I think this was a far broader issue than a post-FTX narrow PR focus.

Thus I see ‘PR focus’ as a broader problem than Will does. It is about this kind of communication, but also broader decision making, strategy, and prioritization, and it was woven into the DNA. It is asking the ‘what maximizes the inputs into EA brands’ question more broadly, and it centrally involves confusing costs with benefits. The broader set of things all come from the same underlying mindset.

And I think that mindset greatly predates FTX. Indeed, it is hard to not view the entire FTX incident, and why it went so wrong, as largely about the PR mindset.

As a clear example, he thinks ‘growing the inputs’ was a good focus of EA in the last year. He thinks the focus should now shift to improving the culture, but his justifications still fall into the ‘maximize inputs’ mindset.

In the last year or two, there’s been a lot of focus on growing the inputs. I think this was important, in particular to get back a sense of momentum, and I’m glad that that effort has been pretty successful. I still think that growing EA is extremely valuable, and that some organisation (e.g. Giving What We Can) should focus squarely on growth.

Actively looking to grow the movement has obvious justification, but inputs are costs and not benefits, it is easy to confuse the two, and focus on growing inputs tends to cause severe PR mindset and hostile actions as you strive to capture resources, including people’s time and attention.

Another example I would cite was the response to If Anyone Builds It, Everyone Dies by the core EA people, including among others Will MacAskill himself and also the head of CEA. This was a very clear example of PR mindset, where quite frankly a decision was made that this was a bad EA look, the moves it proposes were unstrategic, and thus the book should be thrown overboard. If Will is sincere about this reckoning, he should be able to recognize that this is what happened.

What should you do if your brand is widely distrusted and toxic?

The good news, I agree with Will, is that you can stop doing PR.

But this is a liberating fact. It means we don’t need to constrain ourselves with PR mentality — we’ll be controversial whatever we do, so the costs of additional controversy are much lower. Instead, we can just focus on making arguments about things we think are true and important. Think Peter Singer! I also think the “vibe shift” is real, and mitigates much of the potential downsides from controversy.

The bad news is that this doesn’t address the obvious question, which is: why are you doubling down on this toxic brand, especially given the nature of many of the cause areas Will suggests EA enter?

When you hold your conference, This Is The Way:

Daniel Rothchild: Many great things about the @rootsofprogress conference this weekend, but I want to take a moment to give a shout out to excellent execution of an oft-overlooked event item that most planners and organizers get wrong: the name badge.

Might this be the best conference name tag ever designed? Let’s go through its characteristics.

  1. It’s double-sided. That might seem obvious, but a lot of conferences just print on one side. I guess that saves a few cents, but it means half the time the badge is useless.

  2. It’s on a lanyard that’s the right length. It came to mid-torso for most people, making it easy to see and catch a glimpse of without looking at people in a weird way.

  3. It’s a) attractive and b) not on a safety pin, so people actually want to wear it.

  4. Most importantly, the most important bit of information–the wearer’s first name–is printed in a maximally large font across the top. You could easily see it from 10 feet away. Again, it might seem obvious… but I go to a lot of events with 14-point printed names.

    1. The other information is fine to have in smaller fonts. Job title, organization, location… those are all secondary items. The most important thing is the wearer’s name, and the most important part of that is the first name.

  5. After all of the utilitarian questions have been answered… it’s attractive. The color scheme and graphic branding are consistent with the rest of the conference. But I stress, this is the least important part of the badge.

Why does all this matter? Because the best events are those that are designed to facilitate maximal interaction and introduction between people (and to meet IRL people you know online). That’s the case with unconferences, or events with a lot of social/semi-planned time.

There’s basically no reason for everyone not to outright copy this format, forever.

Indeed, one wonders if you shouldn’t have such badges and wear them at parties.

Alex Shintaro Araki offers thoughts on Impact Philanthropy fundraising, and Sarah Constantin confirms this matches her experiences. Impact philanthropy is ideally where you try to make cool new stuff happen, especially a scientific or technological cool new thing, although it can also be simply about ‘impact’ through things like carbon sequestration. This is a potentially highly effective approach, but also a tough road. Individual projects need $20 million to $100 million, and most philanthropists are not interested. Sarah notes that many people temperamentally aren’t excited by cool new stuff, which is alien to me (that seems super exciting), but it’s true.

One key insight is that if you’re asking for $3 million you might as well ask for $30 million, provided you have a good pitch on what to do with it, and assuming you pitch people who have the money. If someone is a billionaire, they’re actively excited to be able to place large amounts of money.

Another is that there’s a lot of variance and luck, although he doesn’t call it that. You probably need a deep connection with your funder, but you also need to find your funder at the right time when things line up for them.

Finally, it sounds weird, but it matches my experience that funders need good things to fund even more than founders need to find people to fund them, the same way this is also true in venture capital. They don’t see good opportunities and have limited time. So things like cold emails can actually work.

Expect another philanthropy-related post later this month.




New project brings strong Linux compatibility to more classic Windows games

Those additional options should be welcome news for fans looking for new ways to play PC games of a certain era. The PC Gaming Wiki lists over 400 titles written with the D3D7 APIs, and while most of those games were released between 2000 and 2004, a handful of new D3D7 games have continued to be released through 2022.

The D3D7 games list predictably includes a lot of licensed shovelware, but there are also well-remembered games like Escape from Monkey Island, Arx Fatalis, and the original Hitman: Codename 47. WinterSnowfall writes that the project was inspired by a desire to play games like Sacrifice and Disciples II on top of the existing dxvk framework.

Despite some known issues with certain D3D7 titles, WinterSnowfall writes that recent tuning means “things are now anywhere between decent to stellar in most of the supported games.” Still, the project author warns that the project will likely never reach full compatibility since “D3D7 is a land of highly cursed API interoperability.”

Don’t expect this project to expand to include support for even older DirectX APIs, either, WinterSnowfall warns. “D3D7 is enough of a challenge and a mess as it is,” the author writes. “The further we stray from D3D9, the further we stray from the divine.”



F1 in Brazil: That’s what generational talent looks like

After a weekend off, perhaps spent trick-or-treating, Formula 1’s drivers, engineers, and mechanics made their yearly trip to the Interlagos track for the Brazilian Grand Prix. More formally called the Autodromo Jose Carlos Pace, it’s definitely one of the more old-school circuits that F1 visits—and invariably one of the more dramatic.

For one thing, it’s anything but billiard-smooth. Better yet, there’s elevation—lots of it—and cambers, too. Unlike most F1 tracks, it runs counterclockwise, and it combines some very fast sections with several rather technical corners that can catch out even the best drivers in the world. And because the track is nestled between a couple of lakes in São Paulo, weather is also a regular factor in races here. Indeed, a severe weather warning was issued in the lead-up to this weekend’s race.

You have to hit the ground running

This was another sprint weekend, which means that instead of two practice sessions on Friday and another on Saturday morning, the teams get one on Friday, then go into qualifying for the Saturday sprint race. The shortened testing time tends to shake things up a bit, and we definitely saw that this weekend.

When we left Mexico, there was only a point’s difference between McLaren drivers Lando Norris and Oscar Piastri in the championship. After a strong run in the middle of the season, when he led the championship and seemed to have the edge on Norris, Piastri has had a string of disappointing races. By recent standards, Brazil wasn’t quite so bad, but it wasn’t great, either.


Is it just me, or does Williams usually have a disappointing weekend when it does a Gulf Oil livery? Credit: Alessio Morgese/NurPhoto via Getty Images

Despite the weather warnings, none of the sessions required treaded tires. While the track surface was basically dry for the sprint race, the same couldn’t be said for the painted curbs—water had collected in the valleys between the stepped “teeth,” and as just about every racer knows, if the painted bits of the track are wet, you really don’t want to go near them if you have slick tires.



NASA is kind of a mess: Here are the top priorities for a new administrator


“He inevitably will have to make tough calls.”

Jared Isaacman, right, led the crew of Polaris Dawn, which performed the first private spacewalk. Credit: Polaris Dawn

After a long summer and fall of uncertainty, private astronaut Jared Isaacman has been renominated to lead NASA, and there appears to be momentum behind getting him confirmed quickly as the space agency’s 15th administrator. It is possible, though far from a lock, that the Senate could confirm his nomination before the end of this year.

It cannot happen soon enough.

The National Aeronautics and Space Administration is, to put it bluntly, kind of a mess. This is not meant to disparage the many fine people who work at NASA. But years of neglect, changing priorities, mismanagement, creeping bureaucracy, meeting bloat, and other factors have taken their toll. NASA is still capable of doing great things. It still inspires. But it needs a fresh start.

“Jared has already garnered tremendous support from nearly everyone in the space community,” said Lori Garver, who served as NASA’s deputy administrator under President Obama. “This should give him a tail wind as he inevitably will have to make tough calls.”

Garver worked for a Democratic administration, and it’s notable that Isaacman has admirers from across the political spectrum, from left-leaning space advocates to right-wing influencers. A decade and a half ago, Garver led efforts to get NASA to more fully embrace commercial space. In some ways, Isaacman will seek to further this legacy, and Garver knows all too well how difficult it is to change the sprawling space agency and beat back entrenched contractors.

“Expectations are high, yet the challenge of marrying outsized goals to greatly reduced budget guidance from his administration remains,” Garver said. “It will be difficult to deliver on accelerating Artemis, transitioning to commercial LEO destinations, starting a serious nuclear electric propulsion program for Mars transportation, and attracting non-government funding for science missions. He’s coming in with a lot of support, which he will need in the current divisive political environment.”

Here’s a rundown of some of the challenges Isaacman must overcome to be a successful administrator.

A shrunken NASA

At the beginning of this year, the civil servant workforce at the space agency numbered about 18,000 people. NASA said that about 3,870 employees exited this year under various deferred resignation, early retirement, or buyout programs. After subtracting another 500 employees who left through normal attrition, NASA’s headcount will be down by 20 to 25 percent by the end of this year.
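As a quick check of that math, here is a trivial sketch using the approximate figures quoted above:

```python
# Rough check of the workforce arithmetic above (all figures approximate).
starting_headcount = 18_000    # civil servants at the start of the year
deferred_exits = 3_870         # resignations, retirements, buyouts per NASA
normal_attrition = 500         # additional routine departures

departures = deferred_exits + normal_attrition
reduction = departures / starting_headcount
print(f"{departures} departures -> {reduction:.0%} reduction")  # ~24%
```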

The question is how impactful these losses are. A number of the departures were from senior positions, leaving important divisions—such as Astrophysics—with acting directors and interim people in key positions. Some people who left were nearing retirement, and this may ultimately benefit the space agency by allowing younger people to bring new energy to the mission.

Yet there are very real concerns about NASA’s ability to retain its best people. As the commercial space industry grows around some of its key centers, including those in Alabama, Florida, and Texas, these companies cherry-pick the best NASA engineers by offering higher salaries and stock options. These engineers, in turn, know which of their former colleagues at the local field centers are the most promising hires.

This brain drain diminishes the engineering excellence at NASA. Can Isaacman do more with less?

Very low morale

Isaacman also arrives after what has essentially been a lost year for NASA.

Imagine you’re a NASA employee. You came to the agency to lead exploration of the Solar System and beyond. Then the second Trump administration shows up and demands widespread workforce cuts. The White House subsequently also proposes a 25 percent hit to the space agency’s budget and draconian cuts for NASA’s science programs.

Then, to cap off the spring of 2025, Isaacman’s nomination was pulled for purely political reasons. Not everyone at NASA liked Isaacman. There was genuine concern that he would shake things up and rattle cages. But Isaacman was also perceived as young, dynamic, and well-liked by the broader space community. He genuinely wanted to see NASA succeed. And then—poof—he’s gone. This only exacerbated uncertainty about the agency’s future.

Interim NASA Administrator Sean Duffy provides remarks at a briefing prior to the Crew 11 launch in August. Credit: NASA

Isaacman’s de-nomination was followed by the appointment of Sean Duffy, a former reality TV star serving as the Secretary of Transportation, to lead NASA on an interim basis. Duffy was a wild card, but it soon became clear he saw NASA as a vehicle to further his political career. And even if Duffy had been focused on solutions, he knew little about space and already had a full-time job leading the Department of Transportation. NASA employees are not fools. They saw this and understood this move’s implications.

Finally, in a coup de grâce, the government shut down on October 1. The majority of NASA’s civil servant workforce has been sitting at home for six weeks, not getting paid, not exploring, and wondering just what the hell they’re doing working for NASA.

Arte-miss?

As NASA has struggled this year, China has made demonstrable progress in its lunar program. It is now probable that China’s Lanyue lander will put humans on the lunar surface by or before the year 2030, likely beating NASA in its return to the Moon with the Artemis Program.

NASA’s lunar program was created during the first Trump administration, but then NASA leader Jim Bridenstine was unable to secure enough funding (remember the whole Pell Grant fiasco?) before he left office in early 2021. This left NASA without the resources it needed to build a management team to lead the program and support key elements, including a lander and lunar spacesuits.

These problems more or less persisted under President Joe Biden and his NASA Administrator, Bill Nelson. From 2021 to 2024, the leaders of NASA essentially said everything was fine and that a lunar landing by 2026 was on track. When reporters, including myself, pressed the leaders of the Artemis Program on this, we were effectively shouted down.

For example, in January 2024, I pressed NASA’s chief of deep space exploration, Jim Free, about the non-viability of a 2026 human landing date.

“It’s interesting because we have 11 people in industry on here that have signed contracts to meet those dates,” Free replied during a teleconference, which included representatives from SpaceX, Axiom, and the other companies. “So from my perspective, the people in industry are here today saying we support it. We’ve signed contracts to those dates on the government side based on the technical details that they’ve given us, that our technical teams have come forward with.”

A shorter version of that might be: “Shut up, we know what we’re doing.”

NASA has already delayed the lunar landing officially to 2027. And no one believes that date is real. One of Isaacman’s first jobs will be to conduct an honest assessment of where the Artemis Program truly is and to rapidly take steps to get it on track. I think we can be confident he will do so with eyes wide open.

Human Landing System

So what will he do about this? The biggest challenge involves the Human Landing System (HLS), a necessary component to get humans to the surface from lunar orbit and back.

Ars explored how NASA found itself in this predicament in a long article published in early October. As for what to do now, NASA basically has two realistic options going forward. It can light a fire under SpaceX to prioritize the HLS component of its Starship program, and possibly adopt a simplified architecture. Or it can work with Blue Origin to develop a human landing system using its Blue Moon Mk. 1 lander (originally intended for cargo) and a modified Mk. 1 lander for ascent purposes. (Blue says it is game.) Beyond that, there is no hardware in work that could possibly accommodate a landing before 2030.

Duffy initially blustered about American capabilities. Repeatedly, he said, “We are going to beat the Chinese to the Moon.” It sounded good, but it underlined his inexperience with spaceflight because it was just not true.

Less than a month ago, Duffy changed his tune. He blamed SpaceX and its Starship vehicle for delays to Artemis, and he said he was “opening up” the lander competition. The problem is that Duffy’s solution was to raise the prospect of a “government option” lunar lander. He had been having discussions with Lockheed Martin, Northrop Grumman, and others about the possibility of issuing a cost-plus contract to build a smaller lunar lander in 30 months.

An artist’s illustration of multiple Starships on the lunar surface, with a Moon base in the background. Credit: SpaceX

Duffy should have known that this timeline was completely unrealistic. Moreover, a rapidly built lunar lander (think five years, at a bare minimum) would likely cost on the order of $20 billion, which NASA did not have. But no one in his inner circle, including Amit Kshatriya, NASA’s associate administrator, was telling him that. They were encouraging him.

Isaacman is not going to be snowed under by this kind of (preposterous) proposal. Most likely, he will push SpaceX to prioritize HLS and be eager to work with Blue Origin to develop a human lander based on Mk. 1 technology.

His first call as administrator may well be to Blue Origin founder Jeff Bezos.

Commercial LEO Destinations

Another looming problem involves commercial space stations in low-Earth orbit, which are supposed to be flying before the end of 2030 when the International Space Station is due to be retired.

There is much uncertainty over whether the primary companies involved in this effort—be it for financial, technical, regulatory, or other reasons—will be able to launch and test space stations by 2030 in order to allow NASA to maintain a continuous presence in low-Earth orbit. The main contractors are Axiom Space, Voyager Technologies, Blue Origin, and Vast Space.

This is one area in which Duffy took action. In August, he signed a document that implemented major changes to the Commercial LEO Destinations program. One of the biggest shifts was a lowering of the minimum requirements. Instead of fully operational stations, the new directive required only the capability to support four astronauts for one-month increments in low-Earth orbit.

However, it is unclear whether Duffy fully understood what he was signing, because there was immediate pushback. Moreover, prior to the government shutdown, there was a lot of discussion about ripping up the directive and reverting to the old rules for commercial space stations. Everyone in the industry is scratching their heads about what comes next.

In the meantime, the space station companies are trying to raise funds, design stations for uncertain requirements, and prepare for competition for the next phase of NASA awards. This program needs more funding, clarity, and urgency for it to be successful.

Earth science

In recent days, there has been some excellent reporting about the fate of Earth science at NASA, which is part of the space agency’s core mission. Space.com published a long feature article about the Trump administration’s efforts to undermine Maryland’s Goddard Space Flight Center, which is NASA’s oldest field center.

Goddard houses the largest Earth science workforce at the agency, and its study of climate change is at odds with the policy positions of the Trump administration and many members of a Republican-controlled Congress. The result has been steep funding cuts, canceled missions, and closed buildings.

One of Isaacman’s most challenging jobs will be to balance support for Earth science while also placating an administration that frankly does not want to publish reports about how human activity is warming the planet.

In remarks on the social media site X, Isaacman recently said he wanted to expand commercial partnerships to science missions. “Better to have 10 x $100 million missions and a few fail than a single overdue and costly $1B+ mission,” he wrote. Isaacman said NASA should also buy more Earth data from providers like Planet and BlackSky, which already have satellites in orbit.

“Why build bespoke satellites at greater cost and delay when you could pay for the data as needed from existing providers?” he asked.

Planetary science

Another area of concern is planetary science. When one picks apart Trump’s budget priorities, there are two clear and disturbing trends.

The first is that there are no significant planetary science missions in the pipeline after the ambitious Dragonfly mission, which is scheduled to launch to Titan in July 2028. It becomes difficult to escape the reality that this administration is not prioritizing any mission that launches after Trump leaves office in January 2029. As a result, after Dragonfly, the planetary pipeline is running low.

Another major concern is the fate of the famed Jet Propulsion Laboratory in California. The lab laid off 550 people last month, which followed previous cuts. The center director, Laurie Leshin, stepped down on June 1. With the Mars Sample Return mission on hold, and quite possibly canceled, the future of NASA’s premier planetary science mission center is cloudy.

A view of the control room at NASA’s Jet Propulsion Laboratory in California. Credit: NASA

Isaacman has said he has never “remotely suggested” that NASA could do without the Jet Propulsion Laboratory.

“Personally, I have publicly defended programs like the Chandra X-ray Observatory, offered to fund a Hubble reboost mission, and anything suggesting that I am anti-science or want to outsource that responsibility is simply untrue,” he wrote on X.

That is likely true, but charting a bright course for the future of planetary science, on a limited budget, will be a major challenge for the new administrator.

New initiatives

All of the above concerns NASA’s existing challenges. But Isaacman will certainly want to make his own mark. This is likely to involve a spaceflight technology he considers to be the missing link in charting a course for humans to explore the Solar System beyond the Moon: nuclear electric propulsion.

As he explained to Ars earlier this year, Isaacman’s signature issue was going to be a full-bore push into nuclear electric propulsion.

“We would have gone right to a 100-kilowatt test vehicle that we would send somewhere inspiring with some great cameras,” he said. “Then we are going right to megawatt class, inside of four years, something you could dock a human-rated spaceship to, or drag a telescope to a Lagrange point and then return, big stuff like that. The goal was to get America underway in space on nuclear power.”

Another key element of this plan is that it would give some of NASA’s field centers, including Marshall Space Flight Center, important work to do after the seemingly inevitable cancellation of the Space Launch System rocket.

Standing up new programs, and battling against existing programs that have strong backing in Congress and industry, will require all of the diplomatic skill and force of personality Isaacman can muster.

We will soon find out if he has the right stuff.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



AT&T falsely promised “everyone” a free iPhone, ad-industry board rules

“Focusing on the words ‘everyone gets,’ Verizon argued to NAD that the challenged advertising communicated an explicit message—that all AT&T subscribers are eligible for the trade-in offer—which it asserts was literally false because only subscribers to ‘qualifying’ AT&T plans are eligible. Verizon also argued that the advertisement communicated a comparable misleading message that all AT&T customers were eligible for the trade-in,” the NARB decision said.

While AT&T disclosed the offer limits, Verizon argued that the disclosure was not clear and conspicuous. Verizon said—and the NAD agreed—that the phrase “everyone gets” suggests everyone will get a free phone, not that everyone “can get” a free phone if they subscribe to AT&T’s more expensive plans.

AT&T claimed the ad was literally true because it did not say that everyone “will” get the free phone. “Rather, according to the advertiser, the challenged language communicates that all customers, current or new, can qualify for the offer and urges customers to ‘learn’ the details about the trade-in opportunity,” the NARB said.

AT&T argued that the word “learn” makes it clear there are limits on the offer. The NAD disagreed, saying that the “learn how” phrase “precedes the word ‘everyone,’ suggesting everyone is eligible to receive a phone, not that everyone can learn how to get a phone.”

AT&T also submitted the results of a customer survey, arguing that it proved customers seeing the ad understood the offer’s limitations. The NAD decided that the survey was methodologically unsound, while the NARB said that both AT&T and Verizon offered “plausible” interpretations of the results.

Panel: Buyers of low-cost plans likely duped

After hearing AT&T’s and Verizon’s arguments, the NARB panel decided “that the challenged advertising, on its face, conveys a false message and further does not clarify the message by disclosing a material limitation to the offer of a free cell phone in a clear and conspicuous manner.”



How to trade your $214,000 cybersecurity job for a jail cell

According to the FBI, in 2023, Martin took steps to become an “affiliate” of the BlackCat ransomware developers. BlackCat provides full-service malware, offering up modern ransomware code and dark web infrastructure in return for a cut of any money generated by affiliates, who find and hack their own targets. (And yes, sometimes BlackCat devs do scam their own affiliates.)

Martin had seen how this system worked in practice through his job, and he is said to have approached a pair of other people to help him make some easy cash. One of these people was allegedly Ryan Goldberg of Watkinsville, Georgia, who worked as an incident manager at the cybersecurity firm Sygnia. Goldberg told the FBI that Martin had recruited him to “try and ransom some companies.”

In May 2023, the group attacked its first target, a medical company based in Tampa, Florida. The team got the BlackCat software onto the company’s network, where it encrypted corporate data, and demanded a $10 million ransom for the decryption key.

Eventually, the extorted company decided to pay up—though only $1.27 million. The money was paid out in crypto, with a percentage going to the BlackCat devs and the rest split between Martin, Goldberg, and a third, as-yet-unnamed conspirator.

Success was short-lived, though. Throughout 2023, the extortion team allegedly went after a pharma company in Maryland, a doctor’s office, and an engineering firm in California, plus a drone manufacturer in Virginia.

Ransom requests varied widely: $5 million, or $1 million, or even a mere $300,000.

But no one else paid.

By early 2025, an FBI investigation had ramped up, and the Bureau searched Martin’s property in April. Once that happened, Goldberg said that he received a call from the third member of their team, who was “freaking out” about the raid on Martin. In early May, Goldberg searched the web for Martin’s name plus “doj.gov,” apparently looking for news on the investigation.

On June 17, Goldberg, too, was searched and his devices taken. He agreed to talk to agents and initially denied knowing anything about the ransomware attacks, but he eventually confessed his involvement and fingered Martin as the ringleader. Goldberg told agents that he had helped with the attacks to pay off some debts, and he was despondent about the idea of “going to federal prison for the rest of [his] life.”

How to trade your $214,000 cybersecurity job for a jail cell Read More »

if-you-want-to-satiate-ai’s-hunger-for-power,-google-suggests-going-to-space

If you want to satiate AI’s hunger for power, Google suggests going to space


Google engineers think they already have all the pieces needed to build a data center in orbit.

With Project Suncatcher, Google will test its Tensor Processing Units on satellites. Credit: Google

It was probably always a question of when, not if, Google would add its name to the list of companies intrigued by the potential of orbiting data centers.

Google announced Tuesday a new initiative, named Project Suncatcher, to examine the feasibility of bringing artificial intelligence to space. The idea is to deploy swarms of satellites in low-Earth orbit, each carrying Google’s AI accelerator chips designed for training, content generation, synthetic speech and vision, and predictive modeling. Google calls these chips Tensor Processing Units, or TPUs.

“Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links to one day scale machine learning compute in space,” Google wrote in a blog post.

“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Google’s CEO, Sundar Pichai, wrote on X. Pichai noted that Google’s early tests show the company’s TPUs can withstand the intense radiation they will encounter in space. “However, significant challenges still remain like thermal management and on-orbit system reliability.”

The why and how

Ars reported on Google’s announcement on Tuesday, and Google published a research paper outlining the motivation for such a moonshot project. One of the authors, Travis Beals, spoke with Ars about Project Suncatcher and offered his thoughts on why it just might work.

“We’re just seeing so much demand from people for AI,” said Beals, senior director of Paradigms of Intelligence, a research team within Google. “So, we wanted to figure out a solution for compute that could work no matter how large demand might grow.”

Higher demand will lead to bigger data centers consuming colossal amounts of electricity. According to the MIT Technology Review, AI alone could consume as much electricity annually as 22 percent of all US households by 2028. Cooling is also a problem, often requiring access to vast water resources, raising important questions about environmental sustainability.

Google is looking to the sky to avoid potential bottlenecks. A satellite in space can access a nearly uninterrupted supply of solar energy and an entire Universe to absorb heat.

“If you think about a data center on Earth, it’s taking power in and it’s emitting heat out,” Beals said. “For us, it’s the satellite that’s doing the same. The satellite is going to have solar panels … They’re going to feed that power to the TPUs to do whatever compute we need them to do, and then the waste heat from the TPUs will be distributed out over a radiator that will then radiate that heat out into space.”
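The article doesn’t attach numbers to that radiator, but the governing physics is the Stefan-Boltzmann law: radiated power scales with surface area and the fourth power of temperature. Here is a back-of-envelope sketch in Python; every figure in it is an illustrative assumption, not a Google specification.

```python
# Back-of-envelope radiator sizing for a satellite compute payload.
# All numbers here are illustrative assumptions, not Google's figures.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9      # typical for a high-emissivity radiator coating (assumed)
T_RADIATOR = 320.0    # radiator surface temperature in kelvin (assumed)
T_SINK = 4.0          # effective deep-space background temperature (approx.)

def radiator_area(waste_heat_w: float) -> float:
    """Radiator area (m^2) needed to reject waste_heat_w watts to space,
    ignoring solar/albedo loads on the radiator and view-factor losses."""
    flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SINK**4)  # W per m^2
    return waste_heat_w / flux

# Example: a hypothetical 10 kW TPU payload
print(f"{radiator_area(10_000):.1f} m^2")  # ~19 m^2 under these assumptions
```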

Google envisions putting a legion of satellites into a special kind of orbit that rides along the day-night terminator, where sunlight meets darkness. This north-south, or polar, orbit would be synchronized with the Sun, allowing a satellite’s power-generating solar panels to remain continuously bathed in sunshine.

“It’s much brighter even than the midday Sun on Earth because it’s not filtered by Earth’s atmosphere,” Beals said.

This means a solar panel in space can produce up to eight times more power than the same collecting area on the ground, and you don’t need a lot of batteries to reserve electricity for nighttime. This may sound like the argument for space-based solar power, an idea first described by Isaac Asimov in his short story “Reason,” published in 1941. But instead of transmitting the electricity down to Earth for terrestrial use, orbiting data centers would tap into the power source in space.
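That “up to eight times” claim is easy to sanity-check. A panel in a dawn-dusk orbit sees full, unfiltered sunlight almost continuously, while a ground panel loses output to the atmosphere, night, weather, and sun angle. A rough comparison, with the duty-cycle and capacity-factor inputs assumed:

```python
# Rough sanity check of the space-vs-ground solar yield claim.
# Inputs are coarse assumptions, not figures from Google's paper.

SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0      # W/m^2 at noon, clear sky, after atmospheric losses

SPACE_DUTY = 0.99   # dawn-dusk Sun-synchronous orbit: near-continuous sunlight
GROUND_CF = 0.20    # assumed ground capacity factor: night, weather, sun angle

space_yield = SOLAR_CONSTANT * SPACE_DUTY   # average W/m^2 in orbit
ground_yield = GROUND_PEAK * GROUND_CF      # average W/m^2 on the ground

# Prints ~6.7x with these inputs; gloomier assumptions about ground
# losses push the ratio toward the article's 8x.
print(f"space/ground yield ratio ~ {space_yield / ground_yield:.1f}x")
```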

“As with many things, the ideas originate in science fiction, but it’s had a number of challenges, and one big one is, how do you get the power down to Earth?” Beals said. “So, instead of trying to figure out that, we’re embarking on this moonshot to bring [machine learning] compute chips into space, put them on satellites that have the solar panels and the radiators for cooling, and then integrate it all together so you don’t actually have to be powered on Earth.”

SpaceX is driving down launch costs, thanks to reusable rockets and an abundant volume of Starlink satellite launches. Credit: SpaceX

Google has a mixed record with its ambitious moonshot projects. One of the most prominent moonshot graduates is the self-driving car developer Waymo, which spun out to form a separate company in 2016 and is now operational. The Project Loon initiative to beam Internet signals from high-altitude balloons is one of the Google moonshots that didn’t make it.

Ars published two stories last week on the promise of space-based data centers. One of the startups in this field, named Starcloud, is partnering with Nvidia, the world’s largest tech company by market capitalization, to build a 5 gigawatt orbital data center with enormous solar and cooling panels approximately 4 kilometers (2.5 miles) in width and length. In response to that story, Elon Musk said SpaceX is pursuing the same business opportunity but didn’t provide any details. It’s worth noting that Google holds an estimated 7 percent stake in SpaceX.

Strength in numbers

Google’s proposed architecture differs from that of Starcloud and Nvidia in an important way. Instead of putting up just one or a few massive computing nodes, Google wants to launch a fleet of smaller satellites that talk to one another through laser data links. Essentially, a satellite swarm would function as a single data center, using light-speed interconnectivity to aggregate computing power hundreds of miles over our heads.

If that sounds implausible, take a moment to think about what companies are already doing in space today. SpaceX routinely launches more than 100 Starlink satellites per week, each of which uses laser inter-satellite links to bounce Internet signals around the globe. Amazon’s Kuiper satellite broadband network uses similar technology, and laser communications will underpin the US Space Force’s next-generation data-relay constellation.

Artist’s illustration of laser crosslinks in space. Credit: TESAT

Autonomously constructing a miles-long structure in orbit, as Nvidia and Starcloud foresee, would unlock unimagined opportunities. The concept also relies on tech that has never been tested in space, but there are plenty of engineers and investors who want to try. Starcloud announced an agreement last week with a new in-space assembly company, Rendezvous Robotics, to explore the use of modular, autonomous assembly to build Starcloud’s data centers.

Google’s research paper describes a future computing constellation of 81 satellites flying at an altitude of some 400 miles (650 kilometers), but Beals said the company could dial the total swarm size to as many spacecraft as the market demands. This architecture could enable terawatt-class orbital data centers, according to Google.

“What we’re actually envisioning is, potentially, as you scale, you could have many clusters,” Beals said.

Whatever the number, the satellites will communicate with one another using optical inter-satellite links for high-speed, low-latency connectivity. The satellites will need to fly in tight formation, perhaps a few hundred feet apart, with a swarm diameter of a little more than a mile, or about 2 kilometers. Google says its physics-based model shows satellites can maintain stable formations at such close ranges using automation and “reasonable propulsion budgets.”

“If you’re doing something that requires a ton of tight coordination between many TPUs—training, in particular—you want links that have as low latency as possible and as high bandwidth as possible,” Beals said. “With latency, you run into the speed of light, so you need to get things close together there to reduce latency. But bandwidth is also helped by bringing things close together.”
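The speed-of-light arithmetic behind that trade-off is straightforward. Using the distances mentioned in the article (the 100-meter neighbor spacing is an assumed value within the “few hundred feet” range), a short sketch:

```python
# Light-travel latency across the swarm vs. the link back to the ground.
# Swarm and altitude distances are taken from the article; the neighbor
# spacing is an assumption.

C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_latency_us(distance_m: float) -> float:
    """One-way light-travel time in microseconds."""
    return distance_m / C * 1e6

print(f"neighbor link (~100 m):  {one_way_latency_us(100):.2f} us")    # ~0.33 us
print(f"across the swarm (2 km): {one_way_latency_us(2_000):.2f} us")  # ~6.7 us
print(f"LEO to ground (650 km):  {one_way_latency_us(650_000):.0f} us")  # ~2.2 ms
```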

Some machine-learning applications could be done with the TPUs on just one modestly sized satellite, while others may require the processing power of multiple spacecraft linked together.

“You might be able to fit smaller jobs into a single satellite. This is an approach where, potentially, you can tackle a lot of inference workloads with a single satellite or a small number of them, but eventually, if you want to run larger jobs, you may need a larger cluster all networked together like this,” Beals said.

Google has worked on Project Suncatcher for more than a year, according to Beals. In ground testing, engineers exposed Google’s TPUs to a 67 MeV proton beam to simulate the total ionizing dose of radiation the chips would absorb over five years in orbit. Now, it’s time to demonstrate that Google’s AI chips, and everything else needed for Project Suncatcher, will actually work in the real environment.

Google is partnering with Planet, the Earth-imaging company, to develop a pair of small prototype satellites for launch in early 2027. Planet builds its own satellites, so Google has tapped it to manufacture each spacecraft, test them, and arrange for their launch. Google’s parent company, Alphabet, also has an equity stake in Planet.

“We have the TPUs and the associated hardware, the compute payload… and we’re bringing that to Planet,” Beals said. “For this prototype mission, we’re really asking them to help us do everything to get that ready to operate in space.”

Beals declined to say how much the demo slated for launch in 2027 will cost but said Google is paying Planet for its role in the mission. The goal of the demo mission is to show whether space-based computing is a viable enterprise.

“Does it really hold up in space the way we think it will, the way we’ve tested on Earth?” Beals said.

Engineers will test an inter-satellite laser link and verify Google’s AI chips can weather the rigors of spaceflight.

“We’re envisioning scaling by building lots of satellites and connecting them together with ultra-high bandwidth inter-satellite links,” Beals said. “That’s why we want to launch a pair of satellites, because then we can test the link between the satellites.”

Evolution of a free-fall (no thrust) constellation under Earth’s gravitational attraction, modeled to the level of detail required to obtain Sun-synchronous orbits, in a non-rotating coordinate system. Credit: Google

Getting all this data to users on the ground is another challenge. Optical data links could also route enormous amounts of data between the satellites in orbit and ground stations on Earth.

Aside from the technical feasibility, there have long been economic hurdles to fielding large satellite constellations. But SpaceX’s experience with its Starlink broadband network, now with more than 8,000 active satellites, is proof that times have changed.

Google believes the economic equation is about to change again when SpaceX’s Starship rocket comes online. The company’s learning curve analysis suggests launch prices could fall to less than $200 per kilogram by around 2035, assuming Starship is flying about 180 times per year by then. That flight rate is far below SpaceX’s stated targets for Starship but comparable to the company’s proven cadence with its workhorse Falcon 9 rocket.
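Google hasn’t published the internals of that analysis here, but the standard tool is a Wright’s-law learning curve, in which cost per kilogram falls by a fixed fraction every time cumulative flights double. A minimal sketch of the mechanics, with the starting price, flight counts, and learning rate all assumed for illustration (reaching the sub-$200 figure requires steeper learning or more cumulative flights than these inputs give):

```python
import math

# Wright's-law launch-cost sketch: cost per kg falls by a fixed fraction each
# time cumulative flights double. All inputs below are illustrative
# assumptions, not the inputs to Google's own model.

def cost_per_kg(initial_cost: float, initial_flights: int,
                cumulative_flights: int, learning_rate: float = 0.20) -> float:
    """Each doubling of cumulative flights cuts cost by learning_rate."""
    doublings = math.log2(cumulative_flights / initial_flights)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Hypothetical: start near Falcon 9-class pricing (~$1,500/kg) after 100
# flights; ~180 Starship flights a year through the early 2030s gives
# ~1,800 cumulative flights.
print(f"${cost_per_kg(1500, 100, 1800):.0f}/kg")  # ~$590/kg at a 20% learning rate
```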

It’s possible there could be even more downward pressure on launch costs if SpaceX, Nvidia, and others join Google in the race for space-based computing. The demand curve for access to space may only be eclipsed by the world’s appetite for AI.

“The more people are doing interesting, exciting things in space, the more investment there is in launch, and in the long run, that could help drive down launch costs,” Beals said. “So, it’s actually great to see that investment in other parts of the space supply chain and value chain. There are a lot of different ways of doing this.”


If you want to satiate AI’s hunger for power, Google suggests going to space Read More »

openai:-the-battle-of-the-board:-ilya’s-testimony

OpenAI: The Battle of the Board: Ilya’s Testimony

The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call The Battle of the Board.

The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman’s initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”

Liv: Lots of people dismiss Sam’s behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he’s building the machine god.

Toucan: From Ilya’s deposition—

• Ilya plotted over a year with Mira to remove Sam

• Dario wanted Greg fired and himself in charge of all research

• Mira told Ilya that Sam pitted her against Daniela

• Ilya wrote a 52 page memo to get Sam fired and a separate doc on Greg

Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up as a scapegoat

Peter Wildeford: It seems troubling that the man doing trillions of dollars of infrastructure spending in order to transform the entire fabric of society also has a huge lying problem.

I think this is like on an extra bad level even for typical leaders.

Charles: I haven’t seen many people jumping to defend Altman with claims like “he doesn’t have a huge lying problem” either, it’s mostly claims that map to “I don’t care, he gets shit done”.

Joshua Achiam (OpenAI Head of Mission Alignment): There is plenty to critique about Sam in the same way there is plenty to critique about any significant leader. But it kills me to see what kind of tawdry, extreme stuff people are willing to believe about him.

When we look back years from now with the benefit of hindsight, it’s my honest belief that the record will show he was no more flawed than anyone, more virtuous than most, and did his best to make the world a better place. I also expect the record will show that he succeeded.

Joshua Achiam spoke out recently about some of OpenAI’s unethical legal tactics, and this is about as full-throated a defense as I’ve seen of Altman’s behaviors. As with anyone important, no matter how awful they are, some people are going to believe they’re even worse, or worse in particular false ways. And in many ways, as I have consistently said, I find Altman to be well ‘above replacement’ as someone to run OpenAI, and I would not want to swap him out for a generic replacement executive.

I do still think he has a rather severe (even for his peer group) lying and manipulation problem, and a power problem, and that ‘no more flawed than anyone’ or ‘more virtuous than most’ seems clearly inaccurate, as is reinforced by the testimony here.

As I said at the time, The Battle of the Board, as in the attempt to fire Altman, was mostly not a fight over AI safety and not motivated by safety. It was about ordinary business issues.

Ilya had been looking to replace Altman for a year. The witness here is Ilya; here’s the transcript link. If you are interested in the details, consider reading the whole thing.

Here are some select quotes:

Q. So for — for how long had you been planning to propose removal of Sam?

A. For some time. I mean, “planning” is the wrong word because it didn’t seem feasible.

Q. It didn’t seem feasible?

A. It was not feasible prior; so I was not planning.

Q. How — how long had you been considering it?

A. At least a year.

The other departures from the board, Ilya reports, made the math work where it didn’t before. Until then, the majority of the board had been friendly with Altman, which basically made moving against him a non-starter. So that’s why he tried when he did. Note that all the independent directors agreed on the firing.

[As Read] Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another. That was clearly your view at the time?

A: Correct.

Q. This is the section entitled “Pitting People Against Each Other.”

A. Yes.

Q. And turning on the next page, you see an example that’s offered is “Daniela versus Mira”?

A. Yes.

Q. Is “Daniela” Daniela Amodei?

A. Yes.

Q. Who told you that Sam pitted Daniela against Mira?

A. Mira.

Q. In the section below that where it says “Dario versus Greg, Ilya”—

A. Yes.

Q. — you see that?

A. Yes.

Q. The complaint — it says — you say here that:

[As Read] Sam was not taking a firm position in respect of Dario wanting to run all of research at OpenAI to have Greg fired — and to have Greg fired? Do you see that?

A. I do see that.

Q. And “Dario” is Dario Amodei?

A. Yes.

Q. Why were you faulting Sam for Dario’s efforts?

THE WITNESS: So my recollection of what I wrote here is that I was faulting Sam for not accepting or rejecting Dario’s conditions.

And for fun:

ATTORNEY MOLO: That’s all you’ve done the entire deposition is object.

ATTORNEY AGNOLUCCI: That’s my job. So —

ATTORNEY MOLO: Actually, it’s not.

ATTORNEY MOLO: Yeah, don’t raise your voice.

ATTORNEY AGNOLUCCI: I’m tired of being told that I’m talking too much.

ATTORNEY MOLO: Well, you are.

Best not miss.

What did Sutskever and Murati think firing Altman meant? Vibes, paper, essays?

What happened here was, it seems, that Ilya Sutskever and Mira Murati came at the king for very good reasons one might come at a king, combined with Altman’s attempt to use lying to oust Helen Toner from the board.

But those involved (including the rest of the board) didn’t execute well because of various fears. During the fight, both Murati and Sutskever refused to explain to the employees or the world what they were upset about, lost their nerve, and folded. The combination of that, plus the board’s refusal to explain, and especially Murati’s refusal to back them up after setting things in motion, was fatal.

Do they regret coming at the king and missing? Yes they do, and did within a few days. That doesn’t mean they’d be regretting it if it had worked. And I continue to think if they’d been forthcoming about the reasons from the start, and otherwise executed well, it would have worked, and Mira Murati could have been OpenAI CEO.

Now, of course, it’s too late, and it would take a ten times worse set of behaviors for Altman to get into this level of trouble again.

It really was a brilliant response: scapegoat Effective Altruism and the broader AI safety movement as the driving force and motivation for the change, thus with this one move burying Altman’s various misdeeds, remaking the board, purging the company, and justifying what is potentially the greatest theft in human history, while removing anyone who would oppose the path of commercialization. Well played.

This scapegoating continues to this day. For the record, Helen Toner (I believe highly credibly) clarifies that Ilya’s version of the events related to the extremely brief consideration of a potential merger was untrue, and unrelated to the rest of events.

The below is terrible writing, presumably from an AI, but yeah this sums it all up:

Pogino (presumably an AI generated Twitter reply): “This reframes the OpenAI power struggle as a clash of personalities and philosophies, not a proxy war for EA ideology.

Ilya’s scientific purism and Mira’s governance assertiveness collided with Altman’s entrepreneurial pragmatism — a tension intrinsic to mission-driven startups scaling into institutions. EA may have provided the vocabulary, but the conflict’s grammar was human: trust, ambition, and control.”

OpenAI: The Battle of the Board: Ilya’s Testimony Read More »

dune-driving-with-mercedes-benz-as-it-tests-off-road-systems

Dune driving with Mercedes-Benz as it tests off-road systems

The reason Mercedes’ engineers were driving up and down and across the dunes was to work on the car’s brake control systems. As you slow with the brake pedal, the car’s electronic brain juggles the input of the traction control, electronic stability control, antilock brakes, and a downhill speed governor that keep you going where you want, as opposed to careening down a slope at speed.
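Mercedes hasn’t disclosed its control logic, but at its core a downhill speed governor is a feedback loop: measure the car’s speed against a target crawl speed and request brake torque in proportion to the error. A deliberately simplified sketch, with every name and gain invented for illustration:

```python
# Deliberately simplified downhill speed governor: a proportional controller
# that requests brake torque when the car exceeds its target crawl speed.
# Nothing here reflects Mercedes' actual implementation; names and gains
# are invented.

def governor_brake_torque(target_speed_ms: float, actual_speed_ms: float,
                          kp: float = 400.0, max_torque_nm: float = 2000.0) -> float:
    """Brake torque (N*m) requested by the governor; 0 if at or below target."""
    error = actual_speed_ms - target_speed_ms
    return min(max(kp * error, 0.0), max_torque_nm)

# Descending a dune: target a 2 m/s crawl while the car accelerates to 3.5 m/s
print(governor_brake_torque(2.0, 3.5))  # 600.0 N*m of braking requested
```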

After a passenger ride through a particularly tricky section, it was my turn to have a go. It was a more surreal experience than messing around in an all-wheel drive car on fresh snow—that might involve low traction surfaces and some sliding around, but the horizon tends to remain in the same place.

As I climbed a dune, my view was nothing but sand, then the deep blue sky. Despite the steep slope and the fact that the car was shod with regular street tires, the wheels found traction where needed, “churning” where necessary. Under braking, the ABS allows the front wheels to remain more controllable, taking into consideration any steering angle you have.

And that may be a lot, because as Lightning McQueen learned in Cars, to go left, sometimes you have to turn right. At times, crabbing up the side of a dune involved making progress with a fair amount of opposite steering lock.

Just think, the wind deposited all this sand here. Note the return of a maximalist Mercedes front “grille.” Mercedes-Benz

Driving on a loose surface like sand, similar to driving on snow, requires a fair bit of torque, and the GLC’s 596 lb-ft (808 Nm) was more than enough to throw a rooster tail or two as the speed picked up and propelled us along. And the low center of gravity that results from the 94 kWh battery pack between the axles no doubt helped keep the car planted even while driving sideways along the dune.

My experience was much less repetitive than that of the Mercedes engineers, whose job it is to go out and drive a route, come back to the trailer, download the data, and upload a new configuration to the car, then go out and drive the route again, repeating the whole process before driving two hours back to Las Vegas at the end of each day. But the result should be an electric SUV with the kind of mountain-goat ability that belies its posh badge and looks.

The new GLC with EQ Technology goes on sale in the US late next year.

Dune driving with Mercedes-Benz as it tests off-road systems Read More »