Trump boosts China tariffs to 125%, pauses tariff hikes on other countries

On Wednesday, Donald Trump once again took to Truth Social to abruptly shift US trade policy, announcing a 90-day pause during which reciprocal tariffs on all countries except China would be “substantially” lowered to 10 percent.

Because China retaliated—raising tariffs on US imports to 84 percent on Wednesday—Trump increased tariffs on Chinese imports to 125 percent “effective immediately.” That likely will not be received well by China, which on Wednesday advised the Trump administration to cancel all tariffs on China, NPR reported.

“The US’s practice of escalating tariffs on China is a mistake on top of a mistake,” the Chinese finance ministry said, calling for Trump to “properly resolve differences with China through equal dialogue on the basis of mutual respect.”

For tech companies, trying to keep up with Trump’s social media posts regarding tariffs has been a struggle, as markets react within minutes. It’s not always clear what Trump’s posts mean or how the math will add up, but after Treasury Secretary Scott Bessent clarified Trump’s recent post, the stock market, which had slumped for days, surged, CNBC reported.

But even though the stock market may be, for now, recovering, tech companies remain stuck swimming in uncertainty. Ed Brzytwa, vice president of international trade for the Consumer Technology Association (CTA)—which represents the $505 billion US consumer technology industry—told Ars that for many CTA members, including small businesses and startups, “the damage has been done.”

“Our small business and startup members were uniquely exposed to these reciprocal tariffs and the whipsaw effect,” Brzytwa told Ars. “There’s collateral damage to that.”

In a statement, CTA CEO Gary Shapiro suggested that the pause was “a victory for American consumers,” but ultimately the CTA wants Trump to “fully revoke” the tariffs.

“While this is great news, we are hearing directly from our members that the ongoing additional 10 percent universal baseline tariffs and this continued uncertainty are already hurting American small businesses,” Shapiro said. “CTA urges President Trump to focus his efforts on what he does best, dealmaking. Now is the time to reposition the United States with our allies as a reliable trading partner while growing the American and global economy.”

Creating a distinctive aesthetic for Daredevil: Born Again


Ars chats with cinematographer Hillary Fyfe Spera on bringing a 1970s film vibe to the Marvel series.

Enthusiasm was understandably high for Daredevil: Born Again, Marvel’s revival of the hugely popular series in the Netflix Defenders universe. Not only was Charlie Cox returning to the title role as Matt Murdock/Daredevil, but Vincent D’Onofrio was also coming back as his nemesis, crime lord Wilson Fisk/Kingpin. Their dynamic has always been electric, and that on-screen magic is as powerful as ever in Born Again, which quickly earned critical raves and a second season that is currently filming.

(Some spoilers for the series below, but no major reveals beyond the opening events of the first episode.)

Born Again was initially envisioned as more of an episodic reset rather than a straight continuation of the serialized Netflix series. But during the 2023 Hollywood strikes, with production halted, the studio gave the show a creative overhaul more in line with the Netflix tone, even though six episodes had been largely completed by then. The pilot was reshot completely, and new footage was added to subsequent episodes to ensure narrative continuity with the original Daredevil—with a few well-placed nods to other characters in the MCU for good measure.

It was a savvy move. Sure, fans were shocked when the pilot episode killed off Matt’s best friend and law partner, Foggy Nelson (Elden Henson), in the first 10 minutes, with his grief-stricken law partner, Karen Page (Deborah Ann Woll), taking her leave from the firm by the pilot’s end. But that creative choice cleared the decks to place the focus squarely on Matt’s and Fisk’s parallel arcs. Matt decides to focus on his legal work while Fisk is elected mayor of New York City, intent on leaving his criminal life behind. But each man struggles to remain in the light as the dark sides of their respective natures fight to be released.

The result is a series that feels very much a part of its predecessor while still having its own distinctive feel. Much of that is due to cinematographer Hillary Fyfe Spera, working in conjunction with the broader production team to bring Born Again’s aesthetic to vivid life. Fyfe Spera drew much of her inspiration from 1970s films like Taxi Driver, The French Connection, The Conversation, and Klute. “I’m a big fan of films of the ’70s, especially New York films,” Fyfe Spera told Ars. “It’s pervaded all of my cinematography from the beginning. This one in particular felt like a great opportunity to use that as a reference. There’s a lot of paranoia, and it’s really about character, even though we’re in a comic book environment. I just thought that the parallels of that reference were solid.”

Ars caught up with Fyfe Spera to learn more.

Karen, Matt, and Foggy enjoy a moment of camaraderie before tragedy strikes. Marvel Studios/Disney+

Ars Technica: I was surprised to learn that you never watched an episode of the original Netflix series when designing the overall look of Born Again. What was your rationale for that?

Hillary Fyfe Spera: I think as a creative person you don’t want to get too much in your head before you get going. I was very aware of Daredevil, the original series. I have a lot of friends who worked on it. I’ve seen sequences, which are intimidatingly incredible. [My decision] stemmed from wanting to bring something new to the table. We still pay homage to the original; that’s in our blood, in our DNA. But there was enough of that in the ether, and I wanted to think forward and be very aware of the original comics and the original lore and story. It was more about the identities of the characters and making sure New York itself was an authentic character. Looking back now, we landed in a lot of the same places. I knew that would happen naturally.

Ars Technica: I was intrigued by your choice to use anamorphic lenses, one assumes to capture some of that ’70s feel, particularly in the broad shots of the city.

Hillary Fyfe Spera: It’s another thing that I just saw from the very beginning; you just get a feeling about lenses in your gut. I know the original show was 1.78; I just saw this story as 2.39. It just felt like so many of the cityscapes exist in that wide-screen format. For me, the great thing about anamorphic is the relationship within composition in the lens. We talk about this dichotomy of two individuals or reflections or parallel worlds. I felt the widescreen gave us that ability. Another thing we do frequently is center framing, something the widescreen lens can really nail. Also, we shoot with these vintage-series Panavision anamorphics, which are so beautiful and textured, and have beautiful flaring effects. It brought organic textured elements to the look of the show that were a little out of the box.

Ars Technica: The city is very much a character, not just a showy backdrop. Is that why you insisted on shooting as much as possible on location?

Hillary Fyfe Spera: We shot in New York on the streets, and that is a challenge. We deal with everything from weather to fans to just New Yorkers who don’t really care, they just need to go where they’re going. Rats were a big part of it. We use a lot of wet downs and steam sources to replicate what it looks like outside our window every day. It’s funny, I’ll walk down the street and be like, “Oh look at that steam source, it’s real, it’s coming out of the street.”

Shooting a show of this scale and with its demands in a practical environment is such a fun challenge, because you have to be beholden to what you’re receiving from the universe. I think that’s cool. One of my favorite things about cinematography is that you can plan it to within an inch of its life, prepare a storyboard and shot list as much as you possibly can, and then the excitement of being out in the world and having to adapt to what’s happening is a huge part of it. I think we did that. We had the confidence to say, “Well, the sun’s setting over there and that looks pretty great, let’s make that an element, let’s bring it in.” Man, those fluorescent bulbs that we can’t turn off across the street? They’re part of it. They’re the wrong color, but maybe they’re the right color because that’s real.

Ars Technica: Were there any serendipitous moments you hadn’t planned but decided to keep in the show anyway? 

Hillary Fyfe Spera: There’s one that we were shooting on an interior. It was on a set that we built, where Fisk has a halo effect around his head. It’s a reflection in a table. That set was built by Michael Shaw, our production designer. One of our operators happened to tilt the camera down into the reflection, and we’re like, “Oh my God, it’s right there.” Of course, it ended up in the show; it was a total gimme. Another example is a lot of our New York City street stuff, which was completely just found. We just went out there and we shot it: the hotdog carts, the streets, the steam, the pigeons. There’s so many pigeons. I think it really makes it feel authentic.

Ars Technica: The Matt Murdock/Wilson Fisk dynamic is so central to the show. How does the cinematography visually enhance that dynamic? 

Hillary Fyfe Spera: They’re coming back to their identities as Kingpin and Daredevil, and they’re wrestling with those sides of themselves. I think in Charlie and Vincent’s case, both of them would say that neither one is complete without the other. For us, visually, that’s just such a fun challenge to be able to show that dichotomy and their alter egos. We do it a lot with lensing.

In Fisk’s case, we use a lot of wide-angle lenses, very close to him, very low angle to show his stature and his size. We use it with a white light in the pilot, where, as the Kingpin identity is haunting him and coming more to the surface, we show that with this white light. There’s the klieg lights of his inauguration, but then he steps into darkness and into this white light. It’s actually a key frame taken directly from the comic book, of that under light on him.

For Matt Murdock, it’s similar. He is wrestling with going back to being Daredevil, which he’s put aside after Foggy’s death. The red blinking light for him is an indication of that haunting him. You know it’s inevitable, you know he’s going to put the suit back on. It’s who these guys are, they’re damaged individuals dealing with their past and their true selves. And his world, just from an aesthetic place, is a lot warmer with a lot more use of handheld.

We’re using visual languages to separate everyone, but also have them be in the same conversation. As the show progresses, that arc is evolving. So, as Fisk becomes more Kingpin, we light him with a lot more white light, more oppression, he’s the institution. Matt is going into more of the red light environment, the warmer environment. There’s a diner scene between the two of them, and within their coverage Matt is shot handheld and Fisk is shot with a studio mode with a lockdown camera. So, we’re mixing, we’re blending it even within the scenes to try and stay true to that thesis.

Ars Technica: The episodes are definitely getting darker in terms of the lighting. That has become quite an issue, particularly on television, because many people’s TVs are not set up to be able to handle that much darkness.

Hillary Fyfe Spera: Yeah, when I visit my parents, I try to mess with their TV settings a little. People are just watching it in the wrong way. I can’t speak for everyone; I love darkness. I love a night exterior, I love what you don’t see. For me, that goes back to films like The French Connection. It’s all about what you don’t see. With digital, you see so much, you have so much latitude and resolution that it’s a challenge in the other way, where we’re trying to create environments where there is a lot of contrast and there is a lot of mystery. I just think cinematographers get excited with the ability to play with that. It’s hard to have darkness in a digital medium. But I think viewers on the whole are getting used to it. I think it’s an evolving conversation.

Ars Technica: The fight choreography looks like it would be another big challenge for a cinematographer.

Hillary Fyfe Spera: I need to give a shoutout to my gaffer, Charlie Grubbs, and key grip, Matt Staples. We light an environment, we shoot those sequences with three cameras a lot of times, which is hard to do from a lighting perspective because you’re trying to make every shot feel really unique. A lot of that fight stuff is happening so quickly that you want to backlight a lot, to really set out moments so you can see it. You don’t want to fall into a muddy movement world where you can’t really make out the incredible choreography. So we do try and set environments that are cinematic, but that shoot certain directions that are really going to pinpoint the movement and the action.

It’s a collaborative conversation with Phil Silvera, our stunt coordinator and action director: not only how we can support him, but how we can add these cinematic moments that sometimes aren’t always based in reality, but are just super fun. We’ll do interactive lighting, headlights moving through, flares, just to add a little something to the sequence. The lighting of those sequences is as much a character, I think, as the performances themselves.

Ars Technica: Will you be continuing the same general look and feel in terms of cinematography for S2?

Hillary Fyfe Spera: I’ve never come back for a second season. I love doing a project and moving on, but what was so cool about doing this one was that the plan is to evolve it, so we keep going. The way we leave things in episode nine—I don’t know if we’re picking up directly after, but there is a visual arc that lands in nine, and we will continue that in S2, which has its own arc as well. There are more characters and more storylines in S2, and it’s all being folded into the visual look, but it is coming from the same place: the grounded, ’70s New York look, and even more comic cinematic moments. I think we’re going to bring it.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Japanese railway shelter replaced in less than 6 hours by 3D-printed model

Hatsushima is not a particularly busy station, relative to Japanese rail commuting as a whole. It serves a town (Arida) of about 25,000, known for mandarin oranges and scabbardfish, that is shrinking in population, like most of Japan. The station sees between one and three trains per hour, helping about 530 riders find their way. Its wooden station building was due for replacement, and the replacement could be smaller.

The replacement, it turned out, could also be a trial for industrial-scale 3D printing of custom rail shelters. Serendix, a construction firm that previously 3D-printed 538-square-foot homes for about $38,000, built a shelter for Hatsushima in about seven days, as shown in The New York Times. The fabricated shelter was shipped in four parts by rail, then pieced together in a span that the site Futurism says is “just under three hours,” but which the Times, seemingly present at the scene, pegs at six. It was in place by the first train’s arrival at 5:45 am.

Either number of hours is a marked decrease from the days or weeks you might expect a new rail station to take to construct. In a single overnight, teams assembled a shelter that is 2.6 meters (8.5 feet) tall and 10 square meters (about 108 square feet) in area. It’s not actually in use yet, as it needs ticket machines and finishing, but it is expected to operate by July, according to the Japan Times.
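
As a quick sanity check on those figures (a worked conversion, using the standard factors 1 m ≈ 3.28 ft and 1 m² ≈ 10.764 ft²):

$$2.6\ \text{m} \times 3.28\ \tfrac{\text{ft}}{\text{m}} \approx 8.5\ \text{ft}, \qquad 10\ \text{m}^2 \times 10.764\ \tfrac{\text{ft}^2}{\text{m}^2} \approx 108\ \text{ft}^2.$$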

AI 2027: Responses

Yesterday I covered Dwarkesh Patel’s excellent podcast coverage of AI 2027 with Daniel Kokotajlo and Scott Alexander. Today covers the reactions of others.

  1. Kevin Roose in The New York Times.

  2. Eli Lifland Offers Takeaways.

  3. Scott Alexander Offers Takeaways.

  4. Others’ Takes on Scenario 2027.

  5. Having a Concrete Scenario is Helpful.

  6. Writing It Down Is Valuable Even If It Is Wrong.

  7. Saffron Huang Worries About Self-Fulfilling Prophecy.

  8. Philip Tetlock Calibrates His Skepticism.

  9. Jan Kulveit Wants to Bet.

  10. Matthew Barnett Debates How To Evaluate the Results.

  11. Teortaxes for China and Open Models and My Response.

  12. Others Wonder About PRC Passivity.

  13. Timothy Lee Remains Skeptical.

  14. David Shapiro for the Accelerationists and Scott’s Response.

  15. LessWrong Weighs In.

  16. Other Reactions.

  17. Next Steps.

  18. The Lighter Side.

Kevin Roose covered Scenario 2027 in The New York Times.

Kevin Roose: I wrote about the newest AGI manifesto in town, a wild future scenario put together by ex-OpenAI researcher @DKokotajlo and co.

I have doubts about specifics, but it’s worth considering how radically different things would look if even some of this happened.

Daniel Kokotajlo: AI companies claim they’ll have superintelligence soon. Most journalists understandably dismiss it as hype. But it’s not just hype; plenty of non-CoI’d people make similar predictions, and the more you read about the trendlines the more plausible it looks. Thank you & the NYT!

The final conclusion is supportive of this kind of work, and Kevin points out that expectations at the major labs are compatible with the scenario.

I was disappointed that the tone here seems to treat the scenario and the viewpoint behind it as ‘extreme’ or ‘fantastical.’ Yes, this scenario involves things that don’t yet exist and haven’t happened. It’s a scenario of the future.

One can of course disagree with much of it. And you probably should.

As we’ll see later with David Shapiro, we also have someone quoted as saying ‘oh they just made all this up without any grounding’ despite the hundreds of pages of grounding and evidence. It’s easier to simply pretend it isn’t there.

Kevin Roose: Ali Farhadi, the chief executive of the Allen Institute for Artificial Intelligence, an A.I. lab in Seattle, reviewed the “AI 2027” report and said he wasn’t impressed.

“I’m all for projections and forecasts, but this forecast doesn’t seem to be grounded in scientific evidence, or the reality of how things are evolving in A.I.,” he said.

And we have a classic Robin Hanson edit; here’s his full quote while linking:

Robin Hanson (quoting Kevin Roose): “I’m not convinced that superhuman A.I. coders will automatically pick up the other skills needed to bootstrap their way to general intelligence. And I’m wary of predictions that assume that A.I. progress will be smooth and exponential.”

I think it’s totally reasonable to be wary of predictions of continued smooth exponentials. I am indeed also wary of them. I am however confident that if you did get ‘superhuman A.I. coders’ in a fully broad sense, the other necessary skills for any reasonable definition of (artificial) general intelligence would not be far behind.

Eli Lifland, who worked closely on the project, offers his takeaways here.

  1. By 2027 we may automate AI R&D leading to vastly superhuman AIs.

  2. Artificial superintelligences (ASIs) will dictate humanity’s future.

  3. ASIs might develop unintended, adversarial ‘misaligned’ goals, leading to human disempowerment.

  4. An actor with total control over ASIs could seize total power.

  5. An international race towards ASI will lead to cutting corners on safety.

  6. Geopolitically, the race to ASI will end in war, a deal or effective surrender.

  7. No US AI project is on track to be secure against nation-state actors stealing AI models by 2027.

  8. As ASI approaches, the public will likely be unaware of the best AI capabilities.

If you accept the assumptions inherent in the scenario, the conclusions seem right.

Scott Alexander also offers a post outlining his takeaways. The list is:

  1. Cyberwarfare as (one of) the first geopolitically relevant AI skills

  2. A period of potential geopolitical instability

  3. The software-only singularity

  4. The (ir)relevance of open-source AI

  5. AI communication as pivotal

  6. Ten people on the inside (outcomes depend on lab insider decisions)

  7. Potential for very fast automation

  8. Special economic zones

  9. Superpersuasion

  10. Potential key techs with unknown spots on the ‘tech tree’: AI lie detectors for AIs, superhuman forecasting, superpersuasion, AI negotiation.

I found this to be a good laying out of questions, even in places where Scott was anti-persuasive and moved me directionally away from the hypothesis he’s discussing. I would consider these less takeaways that are definitely right than takeaways of things to seriously consider.

The central points here seem spot on. If you want to know what a recursive self-improvement or AI R&D acceleration scenario looks like in a way that helps you picture one, and that lets you dive into details and considerations, this is the best resource available yet and it isn’t close.

Yoshua Bengio: I recommend reading this scenario-type prediction by @DKokotajlo and others on how AI could transform the world in just a few years. Nobody has a crystal ball, but this type of content can help notice important questions and illustrate the potential impact of emerging risks.

Nevin Freeman: If you wonder why some people (including @sama) seem to think AGI is going to be WAY more disruptive than others, read this scenario to see what they mean by “recursive self-improvement.”

Will it actually go this way? Hard to say, but this is the clearest articulation of the viewpoint so far, very worth the read if you are interested in tracking what’s going on with AGI.

I personally think this is the upper end of how quickly a self-enforcing positive feedback loop could happen, but even if it took ten years instead of two it would still massively reshape the world we’re in.

Over the next year you’ll probly see even more polarized fighting between the doomers and the yolos. Try to look past the ideological bubbles and figure out what’s actually most plausible. I doubt the outcome will be as immediately terrible as @ESYudkowsky thinks (or at least used to think?) but I also doubt it will be anywhere near as rosy as @pmarca thinks.

Anyway, read this or watch the podcast over the weekend and you’ll be massively more informed on this side of the debate.

Max Harms (what a name!): The AI-2027 forecast is one of the most impressive and important documents I’ve ever seen. The team involved are some of the smartest people in the world when it comes to predicting the future, and while I disagree with details, their vision is remarkably close to mine.

Wes, the Dadliest Catch: My big criticism of the AI safety community has always been the lack of concrete predictions that could possibly be falsified, so I really appreciate this and withdraw the criticism.

My one disagreement with Nevin (other than my standard objection to use of the word ‘doomer’) is that I don’t expect ‘even more polarized fighting.’

What I expect is for those who are worried to continue to attempt to find solutions that might possibly work, and for the ‘yolo’ crowd to continue to be maximally polarized against anything that might reduce existential risk, on principle, with a mix of anarchists and those who want government support for their project. Remarkably often, it will continue to be the same people.

Simeon: Excellent foresight scenario, as rigorous as it gets.

AI 2027 is to Situational Awareness what science is to fiction.

A must-read.

I very much appreciate those who say ‘I strongly disagree with these predictions but appreciate that you wrote them down with detailed explanations.’

John Pressman: So I know I was a little harsh on this in that thread but tbh it’s praiseworthy that Daniel is willing to write down a concrete near term timeline with footnotes to explanations of his reasoning for different variables. Very few others do.

Davidad: Most “AI timelines” are really just dates, not timelines.

This one, “AI 2027,” does have a date—“game over” occurs in December 2027—but it’s also a highly detailed scenario at monthly resolution before that date, and after (until human extinction in 2030).

I find the scenario highly plausible until about 2028. Extinction by 2030 seems extremely pessimistic—but for action-relevant purposes that doesn’t matter: if humanity irreversibly loses control of a superintelligent AI in the 2020s, eventual extinction may become inevitable.

Don’t just read it, do something!

I strongly agree with Davidad that the speed at which things play out starting in 2028 matters very little. The destination remains the same.

The concern about self-fulfilling prophecy is a reasonable one. Is this a self-fulfilling or self-preventing style of prophecy? My take is that it is more self-preventing than self-fulfilling, especially since I expect the actions we want to avoid to be the baseline scenario.

Directionally the criticism highlights a fair worry. One always faces a tradeoff between creating something engaging versus emphasizing the particular most important messages and framings.

I think there are places Scenario 2027 could and should have gone harder there, but it’s tough to strike the right balance, including that you often have to ship what you can now and not let the perfect be the enemy of the good.

Daniel also notes on the Win-Win Podcast that he is worried about the self-fulfilling risks and plans to release additional scenarios with better endings. He notes that Leopold Aschenbrenner in Situational Awareness was intentionally attempting hyperstition, but that by default it is wise to say what is actually likely to happen.

Saffron Huang (Anthropic): What irritates me about the approach taken by the AI 2027 report looking to “accurately” predict AI outcomes is that I think this is highly counterproductive for good outcomes.

They say they don’t want this scenario to come to pass, but their actions—trying to make scary outcomes seem unavoidable, burying critical assumptions, burying leverage points for action—make it more likely to come to pass.

The researchers aim for predictive accuracy and make a big deal of their credentials in forecasting and research. (Although they obscure the actual research, wrapping this up with lots of very specific narrative.) This creates an intended illusion, especially for the majority of people who haven’t thought much about AI, that the near term scenarios are basically inevitable–they claim they are so objective, and good at forecasting!

Why implicitly frame it as inevitable if they explicitly say (buried in a footnote in the “What is this?” info box) that they hope that this scenario does not come to pass? Why not draw attention to points of leverage for human agency in this future, if they *actually* want this scenario to not come to pass?

I think it would be more productive to make the underlying causal assumptions driving their predictions clear, rather than glossing this over with hyperspecific narratives. (E.g. the assumption that “if AI can perform at a task ‘better than humans’ –> AI simply replaces humans at that thing” drives a large amount of the narrative. I think this is pretty questionable, given that AIs can’t be trusted in the same way, but even if you disagree, readers should at least be able to see and debate that assumption explicitly.)

They gesture at wanting to do this, but don’t at all do it! In the section about “why this work is valuable”, they say that: “Painting the whole picture makes us notice important questions or connections we hadn’t considered or appreciated before” — but what are they? Can this be articulated directly, instead of buried? Burying it is counterproductive and leads to alarm rather than people being able to see where they can help.

This is based on detailed tabletop exercises. Tabletop exercises have the benefit that participants are seeing the causal results of their actions, and such exercises are usually limited to experts who can actually make decisions about the subject at hand. Maybe instead of widely publicizing this, this kind of exercise should be 1) tied up with an understanding of the causal chain, and 2) left to those who can plausibly do something about it?

Daniel Kokotajlo: Thanks for this thoughtful criticism. We have been worrying about this ourselves since day 1 of the project. We don’t want to accidentally create a self-fulfilling prophecy.

Overall we think it’s worth it because (1) The corporations are racing towards something like this outcome already, with a scandalous level of self-awareness at least among leadership. I doubt we will influence them that much. (2) We do want to present a positive vision + policy recommendations later, but it’ll make a lot more sense to people if they first understand where we think we are headed by default. (3) We have a general heuristic of “There are no adults in the room, the best way forward is to have a broad public conversation rather than trying to get a handful of powerful people to pull strings.”

Saffron Huang: Thanks for taking the time to reply, Daniel! I think your goals make sense, and I’m excited to see the policy recs/positive vision. An emphasis on learnings/takeaways (e.g. what Akbir outlined here) would be helpful, since all of you spent a lot of time thinking and synthesizing.

On the “there are no adults in the room”, I see what you mean. I guess the question on my mind is, how do you bring it to a broader public in a way that is productive, conducting such a conversation in a way that leads to better outcomes?

Imo, bringing something to the public != the right people will find the right roles for them and productive coordination will happen. Sometimes it means that large numbers of people are led down wild goose chases (especially when uninterrogated assumptions are in play), and it seems important to actively try to prevent that.

Andrew Critch: Self-fulfilling prophesies are everywhere in group dynamics! I wish more people explicitly made arguments to that effect. I’m not fully convinced by Saffron’s argument here, but I do wish more people did this kind of analysis. So far I see ~1.

Humanity really needs a better art & practice of identifying and choosing between self-fulfilling prophesies. Decisions at a group scale are almost the same type signature as a self-fulfilling prophesy—an idea that becomes reality because it was collectively imagined.

Akbir: I don’t think it’s framed as “inevitable”.

Isn’t it framed as making a decision in October 2027 about if the gov project advances or pauses?

for me it showed the importance of:

1) whistleblower protection

2) public education on ai outcomes

3) having a functioning oversight committee

like irrespective of whether human jobs are being replaced by AIs, stuff till Oct 2027 looks locked in?

also to be clear i’m just looking for answers.

i personally feel like my own ability to make difference is diminishing day to day and it’s pretty grating on my soul.

Dave Kasten (different thread): I like that the AI 2027 authors went to great lengths to make clear that those of us who gave feedback weren’t ENDORSING it. But as a result, normies don’t understand that basically everyone in AI policy has read/commented on drafts.

“Have you read AI 2027 yet?”

“Um, yes.”

I certainly don’t think this is presented as ‘everything until October 2027 here is inevitable.’ It’s a scenario. A potential path. You could yell that louder I guess?

It’s remarkable how often there is a natural way for people to misinterpret something [M] as a stronger fact or claim than it is, and:

  1. The standard thing most advocates for AI Anarchism, AI Acceleration, or any side of most political and cultural debates do in similar situations is to cause more people to conclude [M], often via actively claiming [M].

  2. The thing those worried about AI existential risk do is explicitly say [~M].

  3. There is much criticism that the [~M] wasn’t sufficiently prominent or clear.

That doesn’t make the critics wrong. Sometimes they are right. The way most people do this most of the time is awful. But the double standard here really is remarkable.

Ultimately, I see Saffron as saying that informing the public here seems bad.

I strongly disagree with that. I especially don’t think this is net likely to misinform people, who are otherwise highly misinformed, often by malicious actors but mostly by not having been exposed to the ideas at all.

Nor do I think this is likely to fall under the self-fulfilling side of prophecy on net. That is not how, at least on current margins, I expect people reading this to respond.

Philip Tetlock: I’m also impressed by Kokotajlo’s 2021 AI forecasts. It raises confidence in his Scenario 2027. But by how much? Tricky!

In my earliest work on subjective-probability forecasting, 1984-85, few forecasters guessed how radical a reformer Gorbachev would be. But they were also the slowest to foresee the collapse of USSR in 1991. “Superforecaster” is a description of past successes, not a guarantee of future ones.

Daniel Kokotajlo: Yes! I myself think there is about a 50% chance that 2027 will end without even hitting the superhuman coder milestone. AI 2027 is at the end of the day just an informed guess. But hopefully it will inspire others to counter it with their own predictions.

It is a well-known hard problem how much to update based on past predictions. In this case, I think quite a bit. Definitely enough to give the predictions a read.

You should still be mostly making up your own mind, as always.

Neel Nanda: The best way to judge a forecaster is their track record. In 2021 Daniel Kokotajlo predicted o1-style models. I think we should all be very interested in the new predictions he’s making in 2025!

I’ve read it and highly recommend – it’s thought provoking and stressfully plausible

Obviously, I expect many parts to play out differently – no scenario like this will be accurate – but I think reading them is high value nonetheless. Even if you think it’s nonsense, clarifying *exactly* what’s nonsense is valuable.

Jan Kulveit: To not over-update, I’d recommend thinking also about why forecasting a continuous AGI transition from now is harder than the 2021 forecast was.

I do think Jan’s right about that. Predictions until now were the easy part. That has a lot to do with why a lot of people are so worried.

However, one must always also ask how predictions were made, and are being made. Grading only on track record of being right (or ‘winning’), let alone evaluating forward-looking predictions that way, is to invite disaster.

Andrew Critch: To only notice “he was right before” is failing to learn from Kokotajlo’s example of *how* to forecast AI: *actually think* about how AI capabilities work, who’s building them & why, how skilled the builders are, and if they’re trying enough approaches.

Kokotajlo is a disciplined thinker who *actually tried* to make a step-by-step forecast, *again*. The main reason to take the forecast seriously is to read it (not the summary!) and observe that it is very well reasoned.

“Was right before” is a good reason to open the doc, tho.

Instead of “is he correct?”, it’s better to read the mechanical details of the forecast and comment on what’s wrong or missing.

Track record is a good reason to read it, but the reason to *believe* it or not should involve thinking about the actual content.

Jan Kulveit: This is well worth a read, well argued, and gets a lot of the technical facts and possibilities basically correct.

At the same time, I think it gets a bunch of crucial considerations wrong. I’d be happy to bet against “2027” being roughly correct ~8:1.

Agreed operationalization is not easy. What about something like this: in April 2025 we agree “What 2026 looks like” was “roughly correct”. My bet is in April 2028 “AI 2027” will look “less right” than the “2021->2025” forecast, judged by some panel of humans and AIs?

Daniel Kokotajlo: Fair enough. I accept at 8:1, April 2028 resolution date. So, $800 to me if I win, $100 to you if you win? Who shall we nominate as the judges? How about the three smartest easily available AIs (from different companies) + … idk, wanna nominate some people we both might know?

Being at least as right as ‘What 2026 Looks Like’ is a super high bar. If these odds are fair at 8:1, then that’s a great set of predictions. As always, kudos to everyone involved for public wagering.
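
To make the implied odds concrete (a quick arithmetic sketch, using the stakes as stated): Daniel risks $100 to win $800, so the bet is fair exactly when the probability $p$ that Daniel wins (that is, that “AI 2027” holds up at least as well as the 2021 forecast did) satisfies

$$800p = 100(1-p) \quad\Longrightarrow\quad p = \frac{1}{9} \approx 11\%.$$

In other words, Jan is implicitly putting roughly an 89 percent chance on the scenario looking less right.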

This is an illustration of why setting up a bet like the above in a robust way is hard.

Matthew Barnett: I appreciate this scenario, and I am having fun reading it.

That said, I’m not sure how we should evaluate it as non-fiction. Is the scenario “falsified” if most of its concrete predictions don’t end up happening? Or should we judge it more on vibes?

I’m considering using an LLM to extract the core set of predictions in the essay and operationalize them so that we can judge whether the scenario “happened” or not in the future. I’d appreciate suggestions for how I can do this in a way that’s fair for all parties involved.

Daniel Kokotajlo: Great question! If someone who disagrees with us writes their own alternative scenario, even if it’s shorter/less-detailed, then when history unfolds people compare both to reality and argue about which was less wrong!

Matthew Barnett: I think the problem is that without clear evaluation criteria in advance, comparing scenarios after the fact becomes futile. People will come in with a variety of different assumptions about which predictions were most salient, and which incorrect predictions were excusable.

It’s definitely true that there will be a lot of disagreement over how accurate Scenario 2027 was, regardless of its level of accuracy, so long as it isn’t completely off base.

Teortaxes claims the scenario is underestimating China, and also challenges its lack of interest in human talent and the sidelining of open models; see his thread for the relevant highlights from the OP. Here I pull together his key statements from the thread.

I see this as making a number of distinct criticisms, and also this is exactly the kind of thing that writing all this down gets you – Teortaxes gets to point to exactly where their predictions and model differ from Daniel’s.

Teortaxes: Reading through AI 2027 and what strikes me first is the utter lack of interest about human talent. It’s just compute, politics and OpenAI’s piece of shit internal models. Chinese nationalization of DeepSeek coming first, in spite of Stargate and CIA ties, is quite funny too.

Realistically a merger of top Chinese labs, as described, results in OpenAI getting BTFO within 2 months. Actually you might achieve that with just DeepSeek, Kimi, OpenBMB, the big lump of compute and 5000 interns to carry out experiments. General Secretary is being too kind.

[The part where DeepCent, the Chinese lab, falls behind] is getting kind of embarrassing.

My best guess is that the US is at like 1.5X effective talent disadvantage currently and it’ll be about 4X by end of 2026.

I think this whole spy angle is embarrassing.

The biggest omission is forcing brain-in-a-box-in-a-basement paradigm, on the assumption that Game Theory precludes long-term existence or relevance of open source models. But in fact maintaining competitive open source models disrupts your entire scenario. We are nearing a regime where weaker models given enough inference time and scaffolding can match almost arbitrary absolute performance, there’s no hard necessity to develop high-grade neuralese or whatever when you can scale plaintext reasoning.

just think this is Mohs scale theory of intelligence, where “stronger model wins”, and we are in the world where inference compute can be traded for nearly arbitrary perf improvement which with slightly lagging open models reduces the main determinant of survival to compute access (=capital), rather than proprietary weights.

  1. On the spy angle, where in the scenario China steals the American lab’s weights, Teortaxes thinks both that China wouldn’t need to do it due to not being behind (because of the other disagreements), and doubts that it would succeed if it tried.

    1. I think that right now, it seems very clear that China or any number of other actors could steal model weights if they cared enough. Security is not plausibly strong enough to stop this. What does stop it is the blowback, including this being a trick that is a lot harder to pull off a second time if we know someone did it once, plus that currently that play is not that valuable relative to the value it will have in the future, if the future looks anything like the scenario.

  2. Teortaxes claims that China has a talent advantage over the USA. And also that this will accelerate over time, but that it’s already true, and that if the major Chinese labs combined they’d lead in AI within a few months.

    1. This seems very false to me. I believe America has the talent advantage in AI in particular. Yes, DeepSeek exists and did some cracked things especially given their limited compute, but that does not equate to a general talent advantage of China over the USA at pushing the AI frontier.

    2. Consider what you would think if everything was reversed, and China had done the things America has done and vice versa. Total and utter panic.

    3. A lot of this, I think, is that Teortaxes thinks OpenAI’s talent is garbage. You can make of that what you will. The American lab here need not be OpenAI.

  3. Teortaxes does not expect China’s lab merger to come sooner than America’s.

    1. This alone would mean America was farther ahead, and there were less race dynamics involved.

    2. Presumably in the full alternative model, China’s additional skill advantage more than makes up for this.

  4. Teortaxes expects human talent to be more relevant, and for longer.

    1. This is definitely central to the scenario, the idea that past a certain point the value of human talent drops dramatically, and what matters is how good an AI you have and how much compute you spend running it.

    2. My guess is that the scenario is putting the humans off screen a little too suddenly, as in having a few excellent humans should matter for longer than they give credit for. But it’s not clear where that grants the advantage, and thus what impact it has; ‘what matters’ is very different then. To the extent it does matter, I’d presume the top American labs benefit.

  5. Teortaxes expects open models to remain relevant, and for inference and scaffolding to allow them to do anything the largest models can do.

    1. Agreed on Huge If True, this changes the scenario a lot.

      1. But perhaps by not as much as one might think.

      2. Access to compute is already a very central scenario element. Everyone’s progress is proportional to, essentially, compute times efficiency.

    2. The quality of the underlying model, in terms of how efficiently it turns compute into results, still means everything in such a world. If I have a 2x efficient user of compute versus your model, after factoring in how we use them, that’s a big deal, even if both models can at some price do it all.

    3. The scenario treats AI as offering a multiplier on R&D speed, rather than saying that progress depends on unlocking unique AI abilities from beyond human intelligence. So we’re basically already working in the ‘small models can do everything’ world in that sense; the question is how efficiently.

      1. I’m not actually expecting that we will be in such a world, although I don’t think it changes things a ton in the scenario here.

    4. If we were in a compounding R&D gains world as described in this scenario, and you had the best models, you would be very not inclined to open them up. Indeed, when I played OpenAI in the wargame version of this, I decided I wasn’t even releasing fully empowered closed versions of the model.

    5. Even if you could do plaintext, wouldn’t it be much less compute efficient if you forced all of the logic to actually live in the plaintext, as a human would read it? This is perhaps the key question in the whole scenario!

    6. Certainly you can write down a scenario where being open is more competitive, and makes sense, and carries a big advantage. Cool, write that down, let’s see it. This is not the model of AI progress being predicted here, it requires a lot of different assumptions.

    7. Indeed, I’d like to see the wargame version of the open-relevant scenario, with different assumptions about how all of that works baked in, to see how people try to cause that to have good outcomes without massive hand waving. We’d want to be sure someone we all respect was playing Reality.

Here is another example of the ‘severely underestimates the PRC’ response, which seems to correlate highly with having a glib and dismissive attitude towards the entire project.

Julian Bradshaw asks if the scenario implies the PRC should at least blockade Taiwan. The answer is: if the PRC fully believed this scenario then maybe, but a blockade crashes the economy and risks war, so it’s a hell of a play to make if you’re not sure.

Gabriel Weil: [In AI 2027] if China starts feeling the AGI in mid-2026 and the chip export controls are biting, why are hawks only starting to urge military action against Taiwan in August 2027 (when it’s probably too late to matter)?

Also, this seems to just assume Congress is passive. Once this is public, isn’t Congress holding lots of hearings and possibly passing bills to try to reassert control? I think you can tell a story where Congress is too divided to take meaningful action, but that’s missing here.

I did play Congress & the judiciary in one of the tabletop exercises that the report discusses & did predict that Congress would be pretty ineffectual under the relevant conditions but still not totally passive. And even being ineffectual is highly sensitive to conditions, imo.

There’s a difference between ‘feel the AGI’ and both ‘feel the ASI’ and ‘be confident enough you actually act quickly at terrible cost.’ I think it’s correct to presume that it takes a lot to force the second reaction, and indeed so far we’ve seen basically no interest in even slightly costly action, and a backlash in many cases to free actions.

In terms of the Congress, I think them doing little is the baseline scenario. I mean, have you met them? Do you really think there wouldn’t be 35 senators who defer to the president, even if for whatever reason that wasn’t Trump?

Timothy Lee’s skepticism seems to be based on basic long-standing disagreements. I think they all amount to, essentially, not feeling the ASI, and not thinking that superintelligence is A Thing.

In which case, yes, you’re not going to think any of this is going to happen.

Timothy Lee: This is a very nicely designed website but it didn’t convince me to rethink any of my core assumptions about safety risks from AI.

Some people, including the authors of the AI 2027 website, have a powerful intuition that intelligence is a scalar quantity that can go much higher than human level, and that an entity with a much higher level of intelligence will almost automatically be more powerful.

They also believe that it’s very difficult for someone (or something) with a lower level of intelligence to supervise someone (or something) at a much higher level of intelligence.

If you buy these premises, you’re going to find the scenario sketched out in AI 2027 plausible. I don’t, so I didn’t. But it’s a fun read.

To give one concrete example: there seems to be a strong assumption that there are a set of major military breakthroughs that can be achieved through sheer intelligence.

I obviously can’t rule this out but it’s hard to imagine what kind of breakthroughs this could be. If you had an idea for a new bomb or missile or drone or whatever, you’d need to build prototypes, test them, set up factories, etc. An AI in a datacenter can’t do that stuff.

David Shapiro wants to accelerate AI and calls himself an ‘AI maximalist.’

I am including this for completeness. If you already know where this is going and don’t need to read this section, you are encouraged to skip it.

This was the most widely viewed version of this type of response I saw (227k views). I am including the full response, so you can judge it for yourself.

I will note that I found everything about this typical of such advocates. This is not meant to indicate that David Shapiro is being unusual, in any way, in his response, given the reference classes in question. Quite the contrary.

If you do read his response, ask yourself whether you think these criticisms, and accusations that Scenario 2027 is not grounded in any evidence or any justifications, are accurate, before reading Scott Alexander’s reply. Then read Scott’s reply.

David Shapiro (not endorsed): I finally got around to reviewing this paper and it’s as bad as I thought it would be.

1. Zero data or evidence. Just “we guessed right in the past, so trust me bro” even though they provide no evidence that they guessed right in the past. So, that’s their grounding.

2. They used their imagination to repeatedly ask “what happens next” based on…. well their imagination. No empirical data, theory, evidence, or scientific consensus. (Note, this by a group of people who have already convinced themselves that they alone possess the prognostic capability to know exactly how as-yet uninvented technology will play out)

3. They pull back at the end saying “We’re not saying we’re dead no matter what, only that we might be, and we want serious debate” okay sure.

4. The primary mechanism they propose is something that a lot of us have already discussed (myself included, which I dubbed TRC or Terminal Race Condition). Which, BTW, I first published a video about on June 13, 2023 – almost a full 2 years ago. So this is nothing new for us AI folks, but I’m sure they didn’t cite me.

5. They make up plausible sounding, but totally fictional concepts like “neuralese recurrence and memory” (this is dangerous handwaving meant to confuse uninitiated – this is complete snakeoil)

6. In all of their thought experiments, they never even acknowledge diminishing returns or negative feedback loops. They instead just assume infinite acceleration with no bottlenecks, market corrections or other pushbacks. For instance, they fail to contemplate that corporate adoption is critical for the investment required for infinite acceleration. They also fail to contemplate that military adoption (and that acquisition processes) also have tight quality controls. They just totally ignore these kinds of constraints.

7. They do acknowledge that some oversight might be attempted, but hand-wave it away as inevitably doomed. This sort of “nod and shrug” is the most attention they pay to anything that would totally shoot a hole in their “theory” (I use the word loosely, this paper amounts to a thought experiment that I’d have posted on YouTube, and is not as well thought through). The only constraint they explicitly acknowledge is computing constraints.

8. Interestingly, I actually think they are too conservative on their “superhuman coders”. They say that’s coming in 2027. I say it’s coming later this year.

Ultimately, this paper is the same tripe that Doomers have been pushing for a while, and I myself was guilty until I took the white pill.

Overall, this paper reads like “We’ve tried nothing and we’re all out of ideas.” It also makes the baseline assumption that “fast AI is dangerous AI” and completely ignores the null hypothesis: that superintelligent AI isn’t actually a problem. They are operating entirely from the assumption, without basis, that “AI will inevitably become superintelligent, and that’s bad.”

Link to my Terminal Race Condition video below (because receipts).

Guys, we’ve been over this before. It’s time to move the argument forward.

And here is Scott Alexander’s response, pointing out that, well…

Scott Alexander: Thanks for engaging. Some responses (mine only, not anyone else on AI 2027 team):

>>> “1. They provide no evidence that they guessed right in the past”.

In the “Who Are We?” box after the second paragraph, where it says “Daniel Kokotajlo is a former OpenAI researcher whose previous AI predictions have held up well”, the words “previous AI predictions” is a link to the predictions involved, and “held up well” is a link to a third-party evaluation of them.

>>> “2. No empirical data, theory, evidence, or scientific consensus”.

There’s a tab marked “Research” in the upper right. It has 193 pages of data/theory/evidence that we use to back our scenario.

>>> “3. They pull back at the end saying “We’re not saying we’re dead no matter what, only that we might be, and we want serious debate” okay sure.”

Our team members’ predictions for the chance of AI killing humans range from 20% to 70%. We hope that we made this wide range and uncertainty clear in the document, including by providing different endings based on these different possibilities.

>>> “4. The primary mechanism they propose is something that a lot of us have already discussed (myself included, which I dubbed TRC or Terminal Race Condition). Which, BTW, I first published a video about on June 13, 2023 – almost a full 2 years ago. So this is nothing new for us AI folks, but I’m sure they didn’t cite me.”

Bostrom discusses this in his 2014 book (see for example box 13 on page 303), but doesn’t claim to have originated it. This idea is too old and basic to need citation.

>>> 5. “They make up plausible sounding, but totally fictional concepts like “neuralese recurrence and memory”

Neuralese and recurrence are both existing concepts in machine learning with a previous literature (see eg here). The combination of them that we discuss is unusual, but researchers at Meta published about a preliminary version in 2024, see [here].

We have an expandable box “Neuralese recurrence and memory” which explains further, lists existing work, and tries to justify our assumptions. Nobody has successfully implemented the exact architecture we talk about yet, but we’re mentioning it as one of the technological advances that might happen by 2027 – by necessity, these future advances will be things that haven’t already been implemented.

>>> 6. “In all of their thought experiments, they never even acknowledge diminishing returns or negative feedback loops.”

In the takeoff supplement, CTRL+F for “Over time, they will run into diminishing returns, and we aim to take this into account in our forecasts.”

>>> 7. “They do acknowledge that some oversight might be attempted, but hand-wave it away as inevitably doomed.”

In one of our two endings, oversight saves the world. If you haven’t already, click the green “Slowdown” button at the bottom to read this one.

>>> 8. “Interestingly, I actually think they are too conservative on their ‘superhuman coders’. They say that’s coming in 2027. I say it’s coming later this year.”

I agree this is interesting. Please read our Full Definition of the “superhuman coder” phase [here]. If you still think it’s coming this year, you might want to take us up on our offer to bet people who disagree with us about specific milestones, see [here].

>>> “9. Overall, this paper reads like ‘We’ve tried nothing and we’re all out of ideas.'”

We think we have lots of ideas, including some of the ones that we portray as saving the world in the second ending. We’ll probably publish something with specific ideas for making things go better later this year.

So in summary I agree with this response:

David Shapiro offered nine bullet-point disagreements, plus some general ad hominem attacks against ‘doomers,’ a term used here as if it were a slur.

One criticism was a polite disagreement about a particular timeline development. Scott Alexander thanked him for that disagreement and offered to bet money.

Scott Alexander definitively refuted the other eight. As in, David Shapiro is making outright false claims in all eight, claims that can be directly refuted by the source material. In many cases, they are refuted by the central points in the scenario.

One thing Scott chose not to respond to was the idea of the ‘null hypothesis’ that ASI isn’t actually a problem. I find the idea of this being a ‘null hypothesis’ rather absurd (in addition to the idea of using frequentist statistics here also being rather absurd).

Could ASI turn out super well? Absolutely. But the idea that it ‘isn’t actually a problem’ should be your default assumption when creating minds smarter than ours? As in, not only will it definitely turn out well, it will do this without requiring us to solve any problems? What?

It’s so patently absurd to suggest this. Problems are inevitable. Hopefully we solve them and things turn out great. That’s one of the two scenarios here. But of course there is a problem to solve.

Vladimir Nesov argues that the FLOP counts here seem modestly too high based on anticipated GPU production schedules. This is a great example of ‘post the wrong answer on the internet to get the right one,’ and of why detailed scenarios are so great. Best case, you’re posting the right answer. Worst case, you’re posting the wrong one, and then someone corrects you. Victory either way.

Wei Dai points out that when Agent-4 is caught, it’s odd that it sits back and lets the humans consider entering slowdown. Daniel agrees this is a good objection and proposes a few ways it could make sense. That players of the AI in the wargame never take the kinds of precautions Wei Dai mentions is an example of how this scenario, and the wargame in general, are in many ways extremely optimistic.

Knight Lee asks if they could write a second good ending based on the actions the authors actually would recommend, and Thomas Larsen responds that they couldn’t make it feel realistic. That’s fair, and also a really bad sign. Doing actually reasonable things is not currently in the Overton window enough to feel realistic.

Yitz offers An Optimistic 2027 Timeline, which opens with a massive trade war and warnings of a global depression. In Late 2026 China invades Taiwan and TSMC is destroyed. The ending is basically ‘things don’t move as fast.’ Yay, optimism?

Greg Colbourn has a reaction thread, from the perspective of someone much more skeptical about our chances in a scenario like this. It’s got some good questions in it, but due to how it’s structured it’s impossible to quote most of it here. I definitely consider this scenario to be making rather optimistic assumptions on the alignment front and related topics.

Patrick McKenzie focuses on the format.

Patrick McKenzie: I don’t have anything novel to contribute on the substance of [AI 2027] but have to again comment, pace Situational Awareness that I think kicked this trend off, that single-essay microdomains with a bit of design, a bit of JS, and perhaps a downloadable PDF are a really interesting form factor for policy arguments (or other ideas) designed to spread.

Back in the day, “I paid $15 to FedEx to put this letter in your hands” was one powerful way to sort oneself above the noise at a decisionmaker’s physical inbox, and “I paid $8.95 for a domain name” has a similar function to elevate things which are morally similar to blog posts.

Also with the new AI-powered boost to simple CSS design it doesn’t necessarily need to be a heartbreaking work of staggering genius to justify a bespoke design for any artifact you spend a few days/weeks on.

(Though it probably didn’t ever.)

(I might experiment with this a time or two this year for new essays.)

As usual, I had a roundup thread. I included some of those responses throughout, but note that there are others I didn’t, if you want bonus content or completeness.

Joan Velja challenges the 30% growth assumption for 2026, but this is 30% growth in stock prices, not in GDP. That’s a very different thing, and highly realistic. The 30% GDP growth, if it happened, would come later.

Mena Fleschman doesn’t feel this successfully covered ‘crap-out’ scenarios, but that’s the nature of a modal scenario. There are things that could happen that aren’t in the scenario. Mena thinks it’s likely we will have local ‘crap-outs’ in particular places, but I don’t think that changes the scenario much if they’re not permanent, except insofar as it reflects much slower overall progress.

Joern Stoehler thinks the slowdown ending’s alignment solution won’t scale to these capability levels. I mostly agree, as I say several times I consider this part very optimistic, although the specific alignment solution isn’t important here for scenario purposes.

And having said that, watch some people get the takeaway that we should totally go for this particular alignment strategy. No, please don’t conclude that, that’s not what they are trying to say.

Gabriel Weil, in addition to his reactions on China, noted that the ‘slowdown’ scenario in AI 2027 seems less plausible to him than other ‘lucky’ ways to avoid doom. I definitely wouldn’t consider this style of slowdown to be the majority of the win probability, versus a variety of other technical and political (in various combinations) ways out.

Dave Kasten: This has broken containment into non-AI policy elite circles in multiple parts of my life to a degree similar to Situational Awareness but with distinct audiences (e.g., finance folks who did NOT care about Leopold’s doc).

Hedge fund folks; random tech folks who were fairly “AI is a tool but nothing worldshaking”; random DC policy elites who are fully focused on other issues.

There’s a fellowship at Pivotal in Q3 that will include several AI 2027 collaborators (Eli Lifland and Thomas Larsen). It will run from June 30 to August 29 in London, but the deadline is in two days: you have to apply by April 9, so act fast.

Here’s what they’re going to do next, in addition to writing additional content:

Daniel Kokotajlo: AI-2027 is live! Finally. What’s next?

–Making bets with people who disagree with us

–Awarding prizes to people who write alternative scenarios

–Awarding prizes to people who convince us we were wrong or find bugs

Bets: We offer to publicly bet on our position. By default, this is $100 at even odds. Details TBD; e.g., if we predict something will happen in April 2028, we won’t take a bet where you win if it happens in any other month.

Bugs: $100 to anyone who finds a factual mistake in our document. For example, “that chip has the wrong number of FLOPs.”

Mind-changes: $250+ to anyone who substantially shifts our opinion on a substantive issue related to our forecast. We will award this bounty if, had we known about your argument before publication, we would have written our scenario substantially differently.

Scenarios: If you have an alternative vision of the future of AI, then send it in. We expect to give the $2,500 bounty to about three scenarios, although we might increase or decrease this number based on the quality and quantity of submissions. As a reference point, we would expect entries with quality similar to How AI Might Take Over in 2 Years, A History of The Future, and AI and Leviathan to pass our bar.

There are many more details, disclaimers, and caveats to hammer out; see our policy here.

Predicting things that are known is still impressive, because most people don’t know.

Trey Goff: I read this and thought it was silly that one of the key assumptions was that CoT wouldn’t be faithful.

not 4 hours ago, a prediction from this paper was proven right by Anthropic: CoT is not, in fact, truthful most of the time.

everyone must read this.

Scott Alexander:

ME, REPEATEDLY, OVER THE PAST SIX MONTHS: Daniel! Publish already! Your predictions keep coming true before anyone even sees them!

DANIEL: Oh, that one wasn’t even a real prediction, everyone knew that would happen.

Dirk: it was already known that CoTs were unfaithful ages ago though? There was a whole paper published on this in 2023.

Scott Alexander: Yeah, that’s what Daniel said too.


AI 2027: Responses Read More »

parents-give-kids-more-melatonin-than-ever,-with-unknown-long-term-effects

Parents give kids more melatonin than ever, with unknown long-term effects


More children are taking the hormone in the form of nightly gummies or drops.

Two years ago, at a Stop & Shop in Rhode Island, the Danish neuroscientist and physician Henriette Edemann-Callesen visited an aisle stocked with sleep aids containing melatonin. She looked around in amazement. Then she took out her phone and snapped a photo to send to colleagues back home.

“It was really pretty astonishing,” she recalled recently.

In Denmark, as in many countries, the hormone melatonin is a prescription drug for treating sleep problems, mostly in adults. Doctors are supposed to prescribe it to children only if they have certain developmental disorders that make it difficult to sleep—and only after the family has tried other methods to address the problem.

But at the Rhode Island Stop & Shop, melatonin was available over the counter, as a dietary supplement, meaning it receives slightly less regulatory scrutiny, in some respects, than a package of Skittles. Many of the products were marketed for children, in colorful bottles filled with liquid drops and chewable tablets and bright gummies that look and taste like candy.

A quiet but profound shift is underway in American parenting, as more and more caregivers turn to pharmacological solutions to help children sleep. What makes that shift unusual is that it’s largely taking place outside the traditional boundaries of health care. Instead, it’s driven by the country’s sprawling dietary supplements industry, which critics have long said has little regulatory oversight—and which may get a boost from Secretary of Health and Human Services Robert F. Kennedy Jr., who is widely seen as an ally to supplement makers.

Thirty years ago, few people were giving melatonin to children, outside of a handful of controlled experiments. Even as melatonin supplements grew in popularity among adults in the late 1990s in the United States and Canada, some of those products carried strict warnings not to give them to younger people. But with time, the age floor dropped, and by the mid-2000s, news reports and academic surveys suggest some early adopters were doing just that. (Try it for ages 11-and-up only, one CNN report warned at the time.) By 2013, according to a Wall Street Journal article, a handful of companies were marketing melatonin products specifically for kids.

And today? “It’s almost like a vitamin now,” said Judith Owens, a pediatric sleep specialist at Harvard Medical School. Usage is growing, including among children who are barely out of diapers. Academic surveys suggest that as many as 1 in 5 preteens in the US now take melatonin at least occasionally, and that some younger children consume it multiple times per week.

Sleep aids, many of them melatonin, are displayed for sale in a Florida store in 2023. In the US, melatonin is available over the counter, but in many other countries the hormone is a prescription drug mostly used by adults. Credit: Joe Raedle/Getty Images

On social media, parenting influencers film themselves dancing with bottles of melatonin gummies or cut to shots of their snoozing kids. In the toxicology literature, a series of reports suggest a rise in melatonin misuse—and indicate that some caregivers are even giving doses to infants. And according to multiple studies, some brands may contain substantially higher doses of the hormone than product labels indicate.

The trend has unsettled many childhood sleep researchers. “It is a hormone that you are giving to young children. And there’s just very little research on the long-term effects of this,” said Lauren Hartstein, a childhood sleep researcher at the University of Arizona.

In a 2021 journal article, David Kennaway, a professor of physiology at the University of Adelaide in Australia, noted that melatonin can bind to receptors in the pancreas, the heart, fat tissue, and reproductive organs. (Kennaway once held a patent on a veterinary drug that uses melatonin to boost the fertility of ewes.) Distributing the hormone over the counter to American children, he has argued, is akin to a vast, uncontrolled medical experiment.


To others, that kind of language might seem alarmist—especially considering that melatonin appears to have mild side effects, and that sleep problems themselves can have consequences for both child and parental health. Many caregivers report melatonin is helpful for their children, and it’s been given for years to children with autism and ADHD, who often struggle to sleep. Beth Malow, a neurologist and sleep medicine expert at Vanderbilt University Medical Center who has consulted for a pharmaceutical company that manufactures melatonin products, raised concerns about a tendency to highlight “the evils of melatonin” without noting that “it’s actually very safe, and it can be very helpful.” Focusing just on the negatives, she added, “is to throw the baby out with the bathwater.”

All of this leaves parents navigating a lightly regulated marketplace while receiving conflicting medical advice. “We know that not getting enough sleep in early childhood has a lot of bad effects on health and attention and cognition and emotions, et cetera,” said Hartstein. Meanwhile, she added, “melatonin is safe and well-tolerated in the short term. So there’s a big question of, well, what’s worse, my kid not sleeping, or my kid taking melatonin once a week?”

As for the answer to that question, she said: “We don’t know.”

Mother’s little helper

The urge—the desperate, frantic, all-consuming urge—to get a child to fall asleep is familiar to many parents. So is the impulse to satisfy that urge through drugs. Into the early 20th century, parents sometimes administered an opiate called laudanum to help young children sleep, even though it could be fatal. Decades later, when over-the-counter antihistamines like Benadryl became popular, some parents began using them, off-label, as a sleep aid.

“Most people are pretty happy to resort to over-the-counter medication if their kids are not sleeping,” one mother of two small kids told a team of Australian researchers for a 2004 study. “It really saves the children’s lives,” she added, because “it stops mums from throwing them against the wall.”

Compared to other sleep aids, melatonin supplements have obvious advantages. Chief among them is that they mimic a natural hormone: The body secretes melatonin from a pea-sized gland nestled in the brain, typically starting in the early evening. Levels peak after midnight, and drop off a few hours before sunrise.

Artificially boosting melatonin helps many people fall asleep earlier or more easily.


When a child takes a 1 milligram dose of melatonin, the hormone quickly enters their bloodstream, signaling to the brain that it’s time for sleep. Melatonin reaches levels in the blood that can be more than 10 times higher than natural peak concentrations. Soon, many children begin to feel drowsy.

Children can generally tolerate melatonin. Known side effects appear to be mild, and, compared to antihistamines, people taking low doses of melatonin are less likely to wake up feeling groggy the next morning.

As early as 1991, some researchers began administering small doses of the hormone to children with autism, who sometimes have extreme difficulty falling and staying asleep. A series of trials conducted in the Netherlands in the 2000s found that melatonin could also have modest benefits for non-autistic children experiencing insomnia, and it seemed to be safe in the short-term—although the long-term consequences of regularly taking the hormone were unclear.

The timing of the research coincided with a move in the US to loosen regulations on dietary supplements, led by Sen. Orrin Hatch of Utah, a supplement-industry hub.

News reports suggest that, by the late 2000s, some parents were trying melatonin for older children.

It’s hard to know for sure who first decided to market melatonin specifically to children, but a key player seems to be Zak Zarbock, a Utah pediatrician and father of four boys who, in 2008, began selling a drug-free, honey-based cough syrup. In 2011, his company, Zarbee’s, introduced a version of its children’s cough remedy that contained melatonin. Soon after, Zarbee’s launched a line of melatonin supplements tailored to children. In a 2014 press release, Zarbock stressed that “a child shouldn’t need to take something to fall asleep every night.” But melatonin, he said, could act like “a reset button for your bedtime routine” when things got out-of-whack. (Zarbock did not respond to interview requests.)

More products followed, and usage rates have climbed. One possible reason for that is that American children are having more difficulty falling asleep. Some experts think screen use is causing sleep problems, and rising rates of anxiety and depression among children may also be affecting slumber. Clinicians report treating families that use melatonin to counteract the stimulating effects of caffeine.

Another possibility—and they’re not mutually exclusive—is that supplement makers sensed a market opportunity and seized it. Gummies have made melatonin more palatable to children; supplement makers now market widely to parents online. At least one company seems to have made overtures to parents via a pediatrics organization: Vicks ZzzQuil, a popular line of children’s melatonin products, sponsored a 2020 webinar on sleep hosted by the American Academy of Pediatrics.

How to anger sleep scientists

Is melatonin a harmless natural supplement or a sleep drug? The culture, at times, seems unsure: It’s easy to find parents fretting in online forums about whether the gummies are safe. Daycare workers have undergone criminal prosecution after providing melatonin to their charges without parental consent.

In their marketing, meanwhile, supplement companies consistently describe their melatonin products as drug-free, non-habit-forming, and safe. In one promotional video for Zarbee’s, Zarbock, wearing sky-blue scrubs, tells parents that “in recent short- and long-term studies, melatonin has been shown to be safe and effective for children.” Echoing language used across the industry, Zarbee’s melatonin gummies are marketed today as “safe and drug-free.”

Such claims raise hackles among sleep scientists. “That kind of advertising is unconscionable,” wrote Kennaway, the Adelaide professor, in an email. “Melatonin ingested whether in a gummy or a tablet is being administered as a drug,” he wrote. (In a brief statement sent by Tyra Weeks, a spokesperson, Zarbee’s noted its melatonin products are “regulated as a dietary supplement ingredient by the FDA,” adding that they “do not contain active pharmaceutical ingredients.”)

What’s behind the growing use of melatonin to help children sleep? Some experts think screen use is causing sleep problems, and rising rates of anxiety and depression among children may also be affecting slumber. Credit: Johner Images/Getty Images

Among other things, Kennaway worries that long-term melatonin use could have unintended effects, including on the developing reproductive system. While it is known that melatonin can interact with lots of tissues, not just the parts of the brain responsible for initiating sleep, many experts note that there is little long-term safety data on supplemental use of the compound.

“Don’t be fooled by thinking that somehow, this is like a vitamin. It’s a drug,” said Owens, the Harvard sleep specialist. “It’s a medication. And there are no really long-term studies that have looked at things like impact on pubertal development.” (Jess Shatkin, a child psychiatrist at New York University’s medical school, noted that such gaps are common even for marquee prescription medications: “I don’t know of a safety study of Zoloft that goes more than two years,” he said, by way of an example.)

Owens has been in clinical practice for 35 years. The arrival of melatonin, she said, felt abrupt: Around 10 years ago, it suddenly seemed that every patient in her clinic was taking it. She is concerned now about inappropriate use, including caregivers using the hormone for children who do not have insomnia; she has heard reports of a summer camp nurse handing it out to campers at bedtime.

“One of the things that disturbs me the most is when I hear a parent say, ‘Oh well, she asks for her melatonin every night and she says she can’t sleep without it,’” Owens said. “You’re setting up a potential lifetime of dependence on sleeping medication.” (Owens has testified in a lawsuit against Zarbee’s, and she consults for AGB-Pharma, a Swedish firm that makes a prescription melatonin drug.)


Owens and other researchers say melatonin can be helpful for children with neurodevelopmental disorders like autism and ADHD, who may otherwise be unable to establish a stable sleep routine. And they say it may be useful for other children who struggle to sleep—with certain safeguards.

Recently, teams of researchers in Europe and the United States have evaluated what melatonin can do. Edemann-Callesen, the Danish researcher, works at the Centre for Evidence-Based Psychiatry. She recently led a team to systematically collect and review published studies of melatonin in children. The evidence, she said, was mixed. Studies suggest that melatonin can help children fall asleep around 15 or 20 minutes earlier, on average. Whether that translates to a more rested kid is less clear: “When you look at the evidence,” she said, “melatonin doesn’t affect daytime functioning.”

Overall, she said, there just isn’t much research out there to draw on.

In both the US and Europe, experts are converging on certain recommendations: Families should consult a health care provider before use. They should try simple, non-pharmacological steps to improve sleep first, and only turn to melatonin if that fails. They should start with a low dose—typically around 0.5 mg. And they should only use melatonin for a few weeks as a kind of crutch, ideally dosing the hormone to help establish a better sleep routine and then weaning the child off the supplement.

Some families have been scared off by alarming reports about melatonin. Malow, the Vanderbilt sleep expert, began studying melatonin in the 2000s, as a sleep aid for children with autism. Recently, she said, some families who rely on the supplement to help their children have gotten jumpy: “I had a lot of families tell me in clinic, ‘I’m really worried about melatonin. I read this, I read that, is it safe?’” She makes sure they’re using a brand that submits its products to external certification. “And I’d be like, you know, it’s working. It’s working for your kid. Why stop it?”

In 2021, Malow and several colleagues published a study of melatonin safety, looking at 80 children and adolescents who had taken the hormone over the course of two years. They did not flag any serious side effects, and the children’s puberty seemed to progress normally. (The study was funded by Neurim Pharmaceuticals, which manufactures a melatonin drug prescribed outside the US.)

Malow acknowledged the study was small, but she said the findings aligned with her own years of clinical experience. “At least it’s something,” she said. “And I have not, in my experience, had any kids where I was concerned, or the parents were concerned, that puberty was delayed because of melatonin use.”

Consult with your family doctor

Last year, the Council for Responsible Nutrition, a leading supplement industry group, published voluntary guidelines for its members. Among them: put products in child-deterrent packaging; tell people to consult a pediatrician before using melatonin; and warn caregivers that melatonin is “for occasional and/or intermittent use only.”

Plenty of manufacturers aren’t part of CRN, and it’s not hard to find suppliers that aren’t in compliance with those recommendations. And whether parents follow the recommendations is something else entirely. User reviews and academic surveys indicate that some parents are dosing regularly for months or years on end, and the products themselves seem packaged for long-term use: For example, the company MaryRuth’s sells bottles of children’s melatonin gummies labeled “2 month supply.” Natrol, a popular brand that warns caregivers that the product is “for occasional short-term use only,” sells bottles containing 140 doses. (MaryRuth’s did not respond to requests for comment, and a spokesperson for Natrol declined to comment.)

Meanwhile, as melatonin sales climb, a growing body of evidence points to cases of misuse.

One issue: Children sometimes find, and swallow, gummies and other melatonin products on their own. Calls to poison control centers for pediatric melatonin ingestion increased 530 percent between 2012 and 2021, according to one analysis published by the US Centers for Disease Control and Prevention.

Mostly, nothing happened: Among small children, the large majority of the incidents were resolved without the child experiencing symptoms at all. When symptoms do appear, they tend to be mild—drowsiness, for example, or gastrointestinal upset. (Achieving a lethal dose of melatonin appears to be virtually impossible, said Laura Labay, a forensic toxicologist at NMS Labs, which provides toxicology testing services.)

Still, some experts have expressed concern that melatonin misuse might, in rare cases, contribute to more serious outcomes.

In 2015, Sandra Bishop-Freeman, now the chief toxicologist at the North Carolina Office of the Chief Medical Examiner, was called to review a tragic case. A 3-month-old girl had died in her crib. More than 20 bottles of melatonin were found in the home, and an investigation showed that the girl and her twin sister had been given 5 milligram doses of melatonin multiple times per day to help them sleep. The infant’s blood levels of melatonin were orders of magnitude above the natural range.

“Oftentimes when I explore topics, it’s because we find things that were previously unknown or confusing to us,” Bishop-Freeman told Undark. She wasn’t sure if melatonin had contributed to the infant’s death. But as she read more about the hormone, she felt concerns, especially when her office received several more cases involving elevated levels of melatonin. “It was hard to just tell the pathologist, ‘Eh, no worries, everyone thinks it’s safe, so you’re fine,’” she said.


In 2022, Bishop-Freeman and colleagues published a paper detailing seven cases of undetermined pediatric deaths where bloodwork revealed elevated levels of melatonin. (They’ve seen more since finishing the paper.) “We don’t want to overstate these findings,” she said: The causes of the deaths are unknown, and the presence of melatonin may just be a coincidence. But her team can’t rule out the hormone as a possible contributor, she said, and investigators should be alert to elevated melatonin levels, which may sometimes be overlooked.

Labay, the forensic toxicologist, said she found those concerns plausible. But, she added, “I think I’m still waiting for the paper that says, ‘This was a pure melatonin death and there was no other contributing cause to that death.'”

Melatonin gummies have made the drug more palatable to children, and supplement makers now market them widely to parents online. But data suggests that the widespread availability of the supplements, often resembling candy, can lead to misuse. Credit: Joe Raedle/Getty Images

As more children take melatonin, some experts want the supplement industry to do more to prevent kids from ingesting too much. Pieter Cohen, an internist and a prominent critic of supplement industry practices, faulted regulators for not requiring childproof caps and questioned why companies sell what he describes as higher-than-necessary doses of the hormone.

Many products also have considerably more melatonin than is listed on the label. Last year, a US Food and Drug Administration team analyzed melatonin content in 110 products that appeared in online searches for things like “melatonin + child,” and found dozens of mismatches. In one case, a product contained more than six times the amount on the label.

The study was submitted to a journal in July 2024. So far, the agency has not taken any public action against those companies. “The FDA is not doing their job. They’re basically cowering to the industry,” Cohen said.

In a statement from the FDA, sent by spokesperson Lindsay Haake, the agency said that the products analyzed in the study were “individually evaluated to determine if any agency follow up was needed.” The statement added that “we do not discuss potential or ongoing compliance or enforcement matters with third parties.”


Steve Mister, the president and CEO of the Council for Responsible Nutrition, said manufacturers often have to include more melatonin than the label states in order to make sure the hormone remains available throughout a product’s shelf life. Those so-called overages, he stressed, are modest and safe: “Whatever we put in, we still have confidence that it is safe on day one,” he said.

The supplement industry, Mister said, has taken steps to ensure that melatonin is used responsibly, including the guidelines his organization issued last year. “I think our voluntary program is an illustration that we want to step up and do some education of parents,” he said.

He pushed back against suggestions that the supplement industry was not a responsible steward of melatonin, or that it was unwise for the hormone to be sold as an over-the-counter supplement: “Look at the safety and look at the number of doses that are sold in this country every year, and how few adverse events there are, and how little evidence that there is a concern,” Mister said. Other countries, he added, may choose to limit melatonin to prescription-use only. “They like the way their system is set up. That doesn’t mean that it’s right for the US.”

Bedtime struggles take a toll on everyone

For parents whose children struggle to fall asleep, the costs of an interminable bedtime can feel high: exhausted children, burned-out parents, and family conflict that stretches into the night. In online videos and forums, parents disclose insecurity (“We are now at the stage in parenthood where we drug our kids,” one mother says in a TikTok) and gratitude (“It’s saved our sanity,” writes a parent on Reddit). Caregivers talk about their children getting better rest—but it can seem as if the supplement is as much for parents’ mental health as it is for children’s restful sleep.

From the vantage point of a chaotic bedtime, the safety concerns about melatonin can feel academic, privileging unknown or speculative harms (such as the possibility of long-term side effects) over the chance of immediate relief. In conversations, physicians and psychologists who devote their careers to children’s sleep stress the importance of a good night’s rest. But some worry melatonin is often used as a shortcut—and suggest there are more effective paths to improved sleep that families could take, especially if they had better support.


Candice Alfano, a professor of psychology at the University of Houston, runs a center devoted to studying childhood sleep and anxiety. In 2020 and 2021, she and her team conducted a survey of sleep health among children in foster care, who struggle with insomnia at far higher rates than the general population. Pharmacological treatments, they found, were widespread: More than one in 10 foster parents reported receiving a prescription medicine to help the children sleep. And close to half were using melatonin at least occasionally—and often regularly—to help the children sleep.

Alfano’s team has recently developed a sleep treatment program for foster families that, she said, may offer an alternative intervention to drugs and supplements. The initial findings, from a small pilot, suggest it’s effective.

The appeal of melatonin, though, remains, both for caregivers and for the pediatricians who advise them, Alfano said: “It’s seemingly a quick and easy suggestion: ‘You know, here’s something you could go get over the counter. You don’t even need a prescription from me.’”

But the goal, she said, is something else: “to teach these children how to sleep, rather than just sleep.”

This article was originally published on Undark. Read the original article.

Parents give kids more melatonin than ever, with unknown long-term effects Read More »

tuesday-telescope:-does-this-milky-way-image-remind-you-of-powers-of-10?

Tuesday Telescope: Does this Milky Way image remind you of Powers of 10?

Welcome to the Tuesday Telescope. There is a little too much darkness in this world and not enough light—a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’ll take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

When I was a kid, I was fascinated by the Powers of 10 video, which came out in the 1970s. Perhaps you remember it, with the narrator taking us both outward toward the fathomless end of the Universe and then, reversing course, guiding us back to Earth and inside a proton. The film gave a younger me a good sense of just how large the Universe around us really is.

What I did not know until much later is that the short film was made by the Eames Office, which was founded by the noted designers Charles Eames and Ray Kaiser. It’s the same organization that produced the Eames Lounge Chair. It goes to show you the value of good design across genres (shoutout to Ars’ resident designer, Aurich Lawson).

Anyway, I say all that because the Powers of 10 film continues to live in my head, rent-free, decades later. It was the first thing I thought of when looking at today’s image of the Milky Way Galaxy’s center. The main image showcases huge vertical filaments, with the supermassive black hole at the galaxy’s core clearly visible. This image, captured by a South African radio telescope named MeerKAT, also shows the ghostly, bubble-like remnants of supernovas that exploded over millennia.

On the right of the image, there is a zoomed-in box taken in infrared light by the James Webb Space Telescope, showing the star-forming Sagittarius C region, in which an estimated 500,000 stars are visible. There is also a large region of ionized hydrogen, shown in cyan, that contains intriguing needle-like structures.

We don’t really know what those are.

Source: NASA, ESA, CSA, STScI, SARAO, Samuel Crowe (UVA), John Bally (CU), Ruben Fedriani (IAA-CSIC), Ian Heywood (Oxford)

Do you want to submit a photo for the Tuesday Telescope? Reach out and say hello.

Tuesday Telescope: Does this Milky Way image remind you of Powers of 10? Read More »

nintendo-explains-why-switch-2-hardware-and-software-cost-so-much

Nintendo explains why Switch 2 hardware and software cost so much

Things just cost more now

In justifying the $450 price of the Switch 2, Nintendo executives predictably pointed to the system’s upgraded hardware specs, as well as new features like GameChat and mouse mode. “As you add more technology into a system, especially in this day and age, that drives additional cost,” Nintendo Vice President of Player & Product Experience Bill Trinen told Polygon.

That said, Trinen also pointed to rising prices in the wider economy to justify the $150 jump between Switch and Switch 2 pricing. “We’re unfortunately living in an era where I think inflation is affecting everything,” Trinen said.

The Switch never saw a nominal price drop, but inflation still ate away at its total cost a bit over the years.

Trinen isn’t wrong about that; the $299 early adopters paid for a Switch in 2017 is worth about $391 in today’s dollars, according to the BLS CPI calculator. But for customers whose own incomes may have stayed flat over that time, the 50 percent jump in nominal pricing from Switch to Switch 2 may be hard to swallow in a time of increasing economic uncertainty.
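For readers who want to check the math, the adjustment is just a ratio of price-index values. Here is a minimal sketch in Python; the CPI-U figures are approximate values assumed for illustration, and the BLS calculator has the authoritative numbers:

```python
# Inflation adjustment is a ratio of price-index values.
# The CPI-U figures below are approximate, assumed for illustration;
# the BLS calculator has the authoritative numbers.
CPI_MARCH_2017 = 243.8  # approximate CPI-U, March 2017
CPI_EARLY_2025 = 319.8  # approximate CPI-U, early 2025

def inflation_adjust(price: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a historical price by the ratio of CPI index values."""
    return price * (cpi_now / cpi_then)

print(f"${inflation_adjust(299, CPI_MARCH_2017, CPI_EARLY_2025):.0f}")  # ~$392
```

With these assumed index values, the result lands within a dollar or two of the $391 figure above.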

“Obviously the cost of everything goes up over time, and I personally would love if the cost of things didn’t go up over time,” Trinen told IGN. “And certainly there’s the cost of goods and things that factor into that, but we try to find the right appropriate price for a product based on that.”

Is $80 the new $70?

Talk of inflation extended to Trinen’s discussion of why Nintendo decided to sell first-party Switch 2 games for $70 to $80. “The price of video games has been very stable for a very long time,” Trinen told Polygon. “I actually have an ad on my phone that I found from 1993, when Donkey Kong Country released on the SNES at $59. That’s a very, very long time where pricing on games has been very stable…”

Nintendo explains why Switch 2 hardware and software cost so much Read More »

meta’s-surprise-llama-4-drop-exposes-the-gap-between-ai-ambition-and-reality

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality

Meta constructed the Llama 4 models using a mixture-of-experts (MoE) architecture, which is one way around the limitations of running huge AI models. Think of MoE like having a large team of specialized workers; instead of everyone working on every task, only the relevant specialists activate for a specific job.

For example, Llama 4 Maverick has 400 billion total parameters, but only 17 billion of them are active at once, routed through one of 128 experts. Likewise, Scout has 109 billion total parameters, but only 17 billion are active at once across one of 16 experts. This design can reduce the computation needed to run the model, since smaller portions of neural network weights are active simultaneously.
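To make the routing idea concrete, here is a minimal sketch of a top-1 mixture-of-experts layer in PyTorch. It is purely illustrative; the layer sizes, expert count, and routing scheme are assumptions for the example, not Meta's actual Llama 4 implementation:

```python
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    """Minimal top-1 mixture-of-experts layer: a learned router scores the
    experts for each token, and only the winning expert's weights do work."""

    def __init__(self, d_model: int = 64, n_experts: int = 8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # one score per expert
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (tokens, d_model); route each token to its best expert.
        gate = self.router(x).softmax(dim=-1)  # (tokens, n_experts)
        top_w, top_idx = gate.max(dim=-1)      # top-1 routing decision
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():  # experts with no routed tokens never run
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

The efficiency win comes from the loop body: an expert whose mask is empty is skipped entirely, so the per-token compute scales with the active parameters rather than the total.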

Llama’s reality check arrives quickly

Current AI models have relatively limited short-term memory. A model's context window acts as that memory, determining how much information it can process at once. AI language models like Llama process text as chunks of data called tokens, which can be whole words or fragments of longer words. Large context windows allow AI models to process longer documents, larger code bases, and longer conversations.

Despite Meta’s promotion of Llama 4 Scout’s 10 million token context window, developers have so far found that using even a fraction of that amount is challenging due to memory limitations. Independent AI researcher Simon Willison reported on his blog that third-party services providing access, like Groq and Fireworks, limited Scout’s context to just 128,000 tokens. Another provider, Together AI, offered 328,000 tokens.

Evidence suggests accessing larger contexts requires immense resources. Willison pointed to Meta’s own example notebook (“build_with_llama_4“), which states that running a 1.4 million token context needs eight high-end Nvidia H100 GPUs.
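A rough back-of-envelope calculation shows why long contexts are so memory-hungry: during inference, the attention mechanism's KV cache stores one key and one value vector per token, per layer. The sketch below uses illustrative hyperparameters; the layer count, head sizes, and fp16 storage are assumptions for the example, not Scout's published configuration:

```python
# Rough KV-cache size estimate: each token stores one key and one value
# vector per layer. All hyperparameters here are assumed for illustration;
# they are not Llama 4 Scout's actual configuration.
def kv_cache_gib(tokens: int, layers: int = 48, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
    return tokens * per_token / 2**30

for n in (128_000, 1_400_000, 10_000_000):
    print(f"{n:>10,} tokens -> ~{kv_cache_gib(n):,.0f} GiB of KV cache")
```

With these assumed numbers, a 1.4 million-token cache alone runs to roughly 256 GiB before counting the model weights themselves, which makes a multi-GPU node like the one in Meta's notebook unsurprising.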

Willison documented his own testing troubles. When he asked Llama 4 Scout via the OpenRouter service to summarize a long online discussion (around 20,000 tokens), the result was what he described as “complete junk output” that devolved into repetitive loops.

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality Read More »

dustland-delivery-plays-like-a-funny,-tough,-post-apocalyptic-oregon-trail

Dustland Delivery plays like a funny, tough, post-apocalyptic Oregon Trail

Road trips with just two people always have their awkward silences. In Dustland Delivery, my character, a sharpshooter, has tried to break the ice with the blacksmith he hired a few towns back, with only intermittent success.

Remember that bodyguard, the one I unsuccessfully tried to flirt with at that bar? The blacksmith was uninterested. What about that wily junk dealer, or the creepy cemetery? Silence. She only wanted to discuss “Abandoned train” and “Abandoned factory,” even though, in this post-apocalypse, abandonment was not that rare. But I made a note to look out for any rusted remains; stress and mood are far trickier to fix than hunger and thirst.

Dustland Delivery release trailer.

Dustland Delivery, available through Steam for Windows (and Proton/Steam Deck), puts you in the role typically taken up by NPCs in other post-apocalyptic RPGs. You’re a trader, buying cheap goods in one place to sell at a profit elsewhere, and working the costs of fuel, maintenance, and raider attacks into your margins. You’re in charge of everything on your trip: how fast you drive, when to rest and set up camp, whether to approach that caravan of pickups or give them a wide berth.

Some of you, the types whose favorite part of The Oregon Trail was the trading posts, might already be sold. For the others, let me suggest that the game is stuffed full of little bits of weird humor and emergent storytelling, and a wild amount of replayability for what is currently a $5 game. There are three quest-driven scenarios, plus a tutorial, in the base game. A new DLC out this week, Sheol, adds underground cities, ruins expeditions, more terrains, and a final story quest for four more dollars.

Dustland Delivery plays like a funny, tough, post-apocalyptic Oregon Trail Read More »

eu-may-“make-an-example-of-x”-by-issuing-$1-billion-fine-to-musk’s-social-network

EU may “make an example of X” by issuing $1 billion fine to Musk’s social network

European Union regulators are preparing major penalties against X, including a fine that could exceed $1 billion, according to a New York Times report yesterday.

The European Commission determined last year that Elon Musk’s social network violated the Digital Services Act. Regulators are now in the process of determining what punishment to impose.

“The penalties are set to include a fine and demands for product changes,” the NYT report said, attributing the information to “four people with knowledge of the plans.” The penalty is expected to be issued this summer and would be the first one under the new EU law.

“European authorities have been weighing how large a fine to issue X as they consider the risks of further antagonizing [President] Trump amid wider trans-Atlantic disputes over trade, tariffs and the war in Ukraine,” the NYT report said. “The fine could surpass $1 billion, one person said, as regulators seek to make an example of X to deter other companies from violating the law, the Digital Services Act.”

X’s global government affairs account criticized European regulators in a post last night. “If the reports that the European Commission is considering enforcement actions against X are accurate, it represents an unprecedented act of political censorship and an attack on free speech,” X said. “X has gone above and beyond to comply with the EU’s Digital Services Act, and we will use every option at our disposal to defend our business, keep our users safe, and protect freedom of speech in Europe.”

Penalty math could include Musk’s other firms

The Digital Services Act allows fines of up to 6 percent of a company’s total worldwide annual turnover. EU regulators suggested last year that they could calculate fines by including revenue from Musk’s other companies, including SpaceX. Yesterday’s NYT report says this method is still under consideration:

EU may “make an example of X” by issuing $1 billion fine to Musk’s social network Read More »

rocket-report:-next-starship-flight-to-reuse-booster;-faa-clears-new-glenn

Rocket Report: Next Starship flight to reuse booster; FAA clears New Glenn


“The first Super Heavy reuse will be a step towards our goal of zero-touch reflight.”

SpaceX tests a Super Heavy booster that previously launched in January. Credit: SpaceX

Welcome to Edition 7.38 of the Rocket Report! On Thursday, SpaceX test-fired a Super Heavy booster in South Texas that first launched in January. This sets up the possibility of a reused Super Heavy launching within the next several weeks, which would be an important step forward for the Starship program. It’s also a bold step, since there is a lot riding on this Starship launch: the last two have failed due to propulsion issues with the rocket’s upper stage.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

European commercial launch industry joins the space race. The first flight of Isar Aerospace’s Spectrum rocket didn’t last long on Sunday, Ars reports. The booster’s nine engines switched off as the rocket cartwheeled upside-down and fell a short distance from its Arctic launch pad in Norway, ending the abbreviated test flight with a spectacular, fiery crash into the sea. However, it marked the beginning of something new in Europe as commercial startups begin launching rockets.

Learning to embrace failure … Isar Aerospace, based in Germany, was the first in a crop of new European rocket companies to attempt an orbital launch. Isar is one of a half-dozen or so European launch startups that could fly their orbital-class rockets in the next couple of years. Of this group, Isar has raised the most money, reporting more than 400 million euros ($430 million) of fundraising, primarily from venture capital sources. We are looking forward to the European launch industry heating up after a long period of development.

PLD Space signs launch agreement with D-Orbit. The Spanish launch company, PLD Space, announced an agreement this week with an Italy-based space transportation company, D-Orbit. As part of the agreement, D-Orbit’s ION orbital transfer vehicle will launch on PLD Space’s forthcoming rocket, the Miura 5. Although the announcement did not specify terms of the agreement, PLD Space said it has now filled “more than 80 percent” of the launch slots on its manifest until 2027.

Waiting on the rocket … The ION vehicle, essentially a dispenser of CubeSats, has previously flown several missions. The real question, therefore, concerns the readiness of the Miura 5 small rocket. PLD Space said it is currently ramping up serial production for the Miura 5 using technology from a prototype rocket, with the aim of starting its test flight campaign by the end of 2025. Commercial flights of Miura 5 could begin in 2026 with the objective of scaling up to 30 launches per year by 2030. We shall see about that.


China shooting for record number of launches. Early on Tuesday morning, a Long March 2D rocket lifted off from Jiuquan Satellite Launch Center in the Gobi Desert, Space News reports. The Shanghai Academy of Spaceflight Technology, a state-owned rocket maker, announced the success of the launch, revealing the payload to be a satellite Internet technology test satellite. Tuesday’s mission was China’s 17th orbital launch of 2025, following the launch of the classified TJS-16 satellite into geosynchronous transfer orbit on March 29 via a Long March 7A rocket.

Shooting for a century … This puts the country on pace to launch 68 rockets for the year. This is in line with China’s total orbital launches for each of the last three years (64, 67, and 68 launches respectively). However, Chinese space watcher Andrew Jones believes the country may attempt to go as high as 100 launches this year. This would be driven by growing commercial activity, megaconstellation projects, and new launcher development. A number of new, medium-lift and potentially reusable rockets are targeting debut flights this year, he reports.

Falcon 9 launches first crewed polar mission. Four adventurers suited up and embarked on a first-of-a-kind trip to space Monday night, becoming the first humans to fly in polar orbit aboard a SpaceX crew capsule chartered by a Chinese-born cryptocurrency billionaire, Ars reports. Chun Wang, born in China and now a citizen of Malta, paid SpaceX an undisclosed sum for the opportunity to fly to space and bring three hand-picked crewmates along with him. He named his mission Fram2 in honor of the Norwegian exploration ship Fram used for polar expeditions at the turn of the 20th century.

Rocket follows an unusual trajectory … The Falcon 9 rocket launched from Kennedy Space Center. However, instead of heading to the northeast in pursuit of the International Space Station, the Falcon 9 and Dragon spacecraft departed Launch Complex 39A and arced to the southeast, then turned south on a flight path hugging Florida’s east coast. The unusual trajectory aligned the Falcon 9 with a perfectly polar orbit at an inclination of 90 degrees to the equator, bringing the four-person crew directly over the North or South Pole every 45 minutes. They are the first humans to orbit over the poles.

Amazon targets April 9 for first Kuiper launch. As soon as next week, Amazon plans to send 27 of its satellites into low Earth orbit on a United Launch Alliance Atlas 5 rocket, Spaceflight Now reports. Launch is scheduled for Wednesday, April 9, during a three-hour window that opens at noon EDT (16:00 UTC). “We’ve done extensive testing on the ground to prepare for this first mission, but there are some things you can only learn in flight, and this will be the first time we’ve flown our final satellite design and the first time we’ve deployed so many satellites at once,” said Rajeev Badyal, vice president of Project Kuiper.

Heaviest mission launched by an Atlas … This will be United Launch Alliance’s first mission of the year, and the company’s first in nearly half a year. But officials say that will change soon. In a February interview, Gary Wentz, ULA vice president of Government and Commercial Programs, said that the upcoming launch for Amazon, dubbed Kuiper 1 by ULA and Kuiper Atlas 1 (KA-01) by Amazon, was the first of many planned for the year. “We have quite a few Kuiper Atlases planned this year, as well as Kuiper Vulcans,” Wentz said. Atlas can carry 27 Kuiper satellites, and Vulcan can loft 45.

SpaceX tests previously flown Super Heavy booster. SpaceX is having trouble with Starship’s upper stage after back-to-back failures, but engineers are making remarkable progress with the rocket’s enormous booster. The most visible sign of SpaceX making headway with Starship’s first stage—called Super Heavy—came at 9:40 am local time (10:40 am EDT; 14:40 UTC) Thursday at the company’s Starbase launch site in South Texas. With an unmistakable blast of orange exhaust, SpaceX fired up a Super Heavy booster that has already flown to the edge of space. The burn lasted approximately eight seconds, Ars reports.

Rocket will fly on next Starship test … This was the first time SpaceX has test-fired a “flight-proven” Super Heavy booster, and it paves the way for this particular rocket—designated Booster 14—to fly again soon. A reflight of Booster 14, which previously launched and returned to Earth in January, will happen on the next Starship launch, SpaceX confirmed Thursday. “This booster previously launched and returned on Flight 7 and 29 of its 33 Raptor engines are flight proven,” the company said. “The first Super Heavy reuse will be a step towards our goal of zero-touch reflight.” It is a legitimately and characteristically bold decision to refly a Starship booster on a test flight that SpaceX really needs to succeed. The next test may come late this month or more likely in May.

FAA closes big rocket mishap investigations. The Federal Aviation Administration (FAA) has closed mishap investigations into both the SpaceX Starship flight and the Blue Origin New Glenn debut that took place on January 16, Via Satellite reports. Although that Starship investigation is closed, the rocket remains grounded because of a separate, open mishap investigation into the subsequent March 7 flight. “There were no public injuries and one confirmed report of minor vehicle damage in the Turks and Caicos Islands,” the FAA said in a statement on the January 16 flight.

New Glenn closed out as well … The FAA also completed its mishap investigation of Blue Origin’s first New Glenn flight, which successfully deployed Blue Origin’s own space logistics vehicle Blue Ring. Blue Origin failed to recover the first stage booster, which triggered the mishap investigation. The first stage was not able to restart its engines, which prevented the reentry burn from occurring and caused the loss of the stage. Blue Origin has identified seven corrective actions, and the FAA will verify those have been implemented before the second mission. Blue Origin is targeting a return to flight in late spring and will attempt to land the booster again.

Artemis II one step closer to launch. The four astronauts who will fly aboard NASA’s Artemis II mission unveiled the patch for their historic flight on Thursday. The crew—Commander Reid Wiseman, pilot Victor Glover, and mission specialist Christina Koch from NASA, along with mission specialist Jeremy Hansen from Canada—will be the first to fly to the Moon under NASA’s Artemis campaign. They have designed an emblem that references both their distant destination and the home they will return to, the space agency said. It looks great!

Here’s what it means … “This patch designates the mission as ‘AII,’ signifying not only the second major flight of the Artemis campaign, but also an endeavor of discovery that seeks to explore for all and by all. Framed in Apollo 8’s famous Earthrise photo, the scene of the Earth and the Moon represents the dual nature of human spaceflight, both equally compelling: The Moon represents our exploration destination, focused on discovery of the unknown. The Earth represents home, focused on the perspective we gain when we look back at our shared planet and learn what it is to be uniquely human. The orbit around Earth highlights the ongoing exploration missions that have enabled Artemis to set sights on a long-term presence on the Moon and soon, Mars.”

Next three launches

April 4: Falcon 9 | Starlink 11-13 | Vandenberg Space Force Base, Calif. | 01:02 UTC

April 6: Falcon 9 | Starlink 6-72 | Cape Canaveral Space Force Station, Fla. | 02:40 UTC

April 7: Falcon 9 | Starlink 11-11 | Vandenberg Space Force Base, Calif. | 21:35 UTC

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Next Starship flight to reuse booster; FAA clears New Glenn Read More »

unshittification:-3-tech-companies-that-recently-made-my-life…-better

Unshittification: 3 tech companies that recently made my life… better


Enshittification is not the only option.

I’ve been complaining about tech a lot recently, and I don’t apologize for it. Complaining feels great. That feeling of beleaguered, I-against-the-world self-righteousness? Highly underrated.

But a little righteous complaint goes a long, long, loooong way. (Just ask my wife.) Too much can be corrosive; it can make you insufferable to others, and it can leave you jaded, as many people, myself included, have become about technology.

I had three recent experiences, however, that were each quite small in their way but which reminded me that not everything in the tech world has fallen victim to the forces of “enshittification.” Once in a while, technology still feels easy and—dare I co-opt the word from Apple’s marketing department?—even magical.

Call it “unshittification.”

Better DRM

Ars has complained about DRM since our founding over 25 years ago. As writers and editors ourselves, we certainly get the desire not to have one’s work ripped off or repurposed without payment, but even effective DRM imposes annoying costs on those who actually paid the money for the thing.

Case in point: I’ve been teaching myself songwriting, audio production, and mixing for the last 18 months, and part of that process has led me to invest some decent money into Universal Audio products. I bought its stellar and rock-solid-reliable Volt 2 audio interface and then spent much of 2024 snapping up high-quality plugins like Topline Vocal Suite, the Manley Voxbox, and the Electra 88 Rhodes piano. Terrific stuff—but not necessarily cheap.

So it was just insulting to find out the hard way that Universal Audio used a variant of the iLok DRM system—itself unfortunately common in the audio industry—that required constant Internet connectivity to function.

The iLok ecosystem can be configured in three main ways, authorizing your plugins 1) to a custom iLok USB dongle (which costs $50–$70 and requires a USB port—plus, you have to remember it at all times), 2) to the local machine you are working on, or 3) to the cloud. Universal Audio allowed only dongle and cloud authorizations, but I figured this wouldn’t be a problem because, surely, the system would only need to check in semi-regularly.

In fact, the system checked in constantly. Go even a few minutes without Internet access, and all your plugins will disable themselves, leading any mix that uses them to fall apart immediately. Want to work on your laptop during a power outage? Edit some audio on a flight? Use a studio computer that—for stability, performance, and security reasons—is not generally online? Well, I hope you like dongles.

(Some users do—though others have complained that they too can be unstable, they cost extra, and they permanently take up a USB port on your machine.)
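
For the curious, here’s a minimal sketch in Python of why the three modes behave so differently offline. The names and logic are my own illustration of the general scheme, purely hypothetical and not iLok’s or Universal Audio’s actual API; the point is simply that cloud authorization ties your plugins’ availability to your network connection, while machine and dongle authorization do not.

```python
# Hypothetical model of three DRM authorization modes (illustrative only;
# these names and checks are assumptions, not iLok's real API).
from enum import Enum, auto

class AuthMode(Enum):
    DONGLE = auto()   # license lives on a USB key that must stay plugged in
    MACHINE = auto()  # license bound to this computer; no network required
    CLOUD = auto()    # license re-verified against a server at each check-in

def plugin_enabled(mode: AuthMode, online: bool, dongle_present: bool) -> bool:
    """Return whether a plugin keeps running under the given conditions."""
    if mode is AuthMode.DONGLE:
        return dongle_present   # works offline, but only with the key attached
    if mode is AuthMode.MACHINE:
        return True             # local authorization: always available
    return online               # cloud mode fails as soon as the connection drops

# A few minutes without Internet under cloud authorization kills the mix:
assert not plugin_enabled(AuthMode.CLOUD, online=False, dongle_present=False)
# The same offline session under machine-based authorization keeps working:
assert plugin_enabled(AuthMode.MACHINE, online=False, dongle_present=False)
```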

Universal Audio is a big name in the business, and its users have complained endlessly about this situation, but the company’s response has generally been that machine-based authorization is less secure and therefore not supported.

So it was a surprise and delight when, on March 25, Universal Audio saw the light and announced that “by popular demand” it was shifting to local machine or iLok USB authorizations. The cloud option was gone, and a company rep even admitted that cloud monitoring “requires a constant Internet and server connection. [In other words], more resources.”

In addition, Universal Audio now allows “up to three” simultaneous authorizations of each digital tool, while before you could only have two.

The online response appears overwhelmingly positive. As one commenter put it, “Ok, I admit: I thought the ‘submit feedback’ feature was just there so users would vent without any serious change occurring… I was wrong on that front. Glad to see UA is listening. Good job!”

Others stressed just how beneficial the move was for touring musicians who may use various bits of Universal Audio tech on stage or on tour. “For touring musicians and all other people that often work in an offline environment this is awesome!” wrote one commenter. Another added, “iLok dongle on stage is scary and glad that’s over with. Power move!”

I concur.

Better customer service

Let’s stick with the “musical” theme for example No. 2.

I purchased Native Instruments’ terrific piano library Noire, which sampled the specific grand piano used by Nils Frahm in both standard and felted formats—all of it captured with the ambience of Saal 3 in the East Berlin Funkhaus recording facility where Frahm works. The library is one of my favorites—evocative and gorgeous. But I was apparently the victim of fraud.

See, I purchased the library secondhand. This is completely legal and explicitly allowed by Native Instruments, though the company needs to get manually involved in the transfer process. I purchased Noire from a UK user who already had a “transfer code” approved by Native Instruments, indicating that the software in question was genuine and available for sale.

So I purchased Noire, completed the transfer, and the software showed up in my Native Instruments account. Everything went smoothly, and I was (very gently) rocking out with Noire’s felted piano.

A few weeks (!) later, I received a note, completely out of the blue, from Native Instruments support. They had removed Noire from my account, they said, because the seller had committed some unspecified fraud, and Native Instruments had transferred my copy of Noire back to the original purchaser.

This was extremely uncool. Not only did I have nothing to do with any fraud, nor any reason to think fraud had occurred, but Native Instruments had vetted the software and approved it for transfer, which gave me the confidence to move forward with the purchase. So why was I now the only person to suffer? The original buyer got the plugin restored, the scammer had my money, and Native Instruments hadn’t lost anything.

There appeared to be little I could do about all this. Sure, I could file a dispute with PayPal and try to claw my money back, but Native Instruments is a German company, and—let’s face it—I wasn’t going to do anything if they decided to screw me out of a purchase they had helped me make. (Well—I was going to do something, namely, never purchase from them again. After all, who knew, when they awoke in the morning, if their purchased products would still function?)

This may sound like a complaint, but here’s the thing: When I made my case to Native Instruments over email, they got back to me in a day or two and agreed to put a free though “not for resale” copy of Noire on my account as a goodwill gesture. This was all conducted politely, in impeccable English, and without undue delay. It felt fair to me, and I’m likely to continue purchasing their excellent sample libraries.

Customer service can feel like a lesser priority to most companies, but done right, it actually ensures future sales.

Better money-taking

Finally, an almost trivial example, but one that worked so smoothly I still remember my feeling of shock. “Where’s the catch?” pretty much summed it up.

I’m talking, of course, about March Madness, the annual NCAA college basketball tournament. It’s a terrific spectacle if you can ignore all the economic questions about overpaid coaches, no-longer-amateur players, recruiting violations, and academic distortions that the big sports programs generate. And my University of North Carolina Tar Heels had juuuust squeaked in this year.

Ordinarily, watching the tournament is a nightmare if you don’t have a pay-TV package. For years, streaming options were terrible, forcing you to log in with your “TV provider” (i.e., an expensive cable or satellite company) account or otherwise jump through hoops to watch the games, which are generally shown across three or four different TV channels.

All I wanted was a simple way to give someone my money. No gimmicks, no intro offers, no “TV provider” BS—just a pure streaming play that puts all the games in one place, for a reasonable fee. When I looked into the situation this year, I was surprised to find that this did now exist, it was easy, and it was cheap.

The Max streaming service had all the games, except for those shown on CBS. (You can’t have everything, I guess, but I get CBS in HD using an over-the-air antenna.) It was $10 for a month of service. There were no “intro offers,” no lock-ins, no “before you go!” pleas, no nothing. Indeed, I didn’t even have to create a new account or share a credit card with some new vendor. I just added Max as a “subscription” within Amazon’s video app and boom—tournament time. It took about four seconds, and it has worked flawlessly.

That something this simple could feel revelatory was a good reminder of just how crapified our tech and media ecosystems have become. On my expensive LG OLED TV, for instance, I have to go out of my way to literally prevent my TV from spying on everything that I watch. (Seriously, you should turn this “feature” off. Otherwise, your TV will watch your screen and try to identify everything you watch, then send that data back to whatever group of zombified MBAs thought this was a good idea.) Roku, which provides streaming services to my basement television, is toying with new ads. Every streaming service I’ve subscribed to has jacked up rates significantly over the last year or so.

So just being able to sign up quickly and easily, for 10 bucks, felt frictionless and magical in the way that tech used to feel more often. As a bonus, I’ve been able to watch full episodes of Curb Your Enthusiasm, which I had never seen before.

Magic?

“Unshittification” is not always the result of “innovation”—sometimes it’s just about treating people decently. Responding to feedback, personal customer service, and non-gimmicky pricing aren’t new or hot technologies, but they are the sort of things that make for satisfied long-term customers.

So much tech has fallen victim to algorithms, scale, and monetization that it can be a surprising relief to connect easily with a Real Live Human, one empowered to act on your behalf, or to make a purchase without being part of some constantly upselling “sales funnel.” But when it does happen, it feels good. Indeed, in a cynical and atomized age, it feels a tiny bit magical.

Unshittification: 3 tech companies that recently made my life… better Read More »