
Huh? The valuable role of interjections


Utterances like um, wow, and mm-hmm aren’t garbage—they keep conversations flowing.

Interjections—one-word utterances that aren’t part of a larger sentence—used to be dismissed as irrelevant linguistic detritus. But some linguists now think they play an essential role in regulating conversations. Credit: Daniel Garcia/Knowable Magazine


Listen carefully to a spoken conversation and you’ll notice that the speakers use a lot of little quasi-words—mm-hmm, um, huh? and the like—that don’t convey any information about the topic of the conversation itself. For many decades, linguists regarded such utterances as largely irrelevant noise, the flotsam and jetsam that accumulate on the margins of language when speakers aren’t as articulate as they’d like to be.

But these little words may be much more important than that. A few linguists now think that far from being detritus, they may be crucial traffic signals to regulate the flow of conversation as well as tools to negotiate mutual understanding. That puts them at the heart of language itself—and they may be the hardest part of language for artificial intelligence to master.

“Here is this phenomenon that lives right under our nose, that we barely noticed,” says Mark Dingemanse, a linguist at Radboud University in the Netherlands, “that turns out to upend our ideas of what makes complex language even possible in the first place.”

For most of the history of linguistics, scholars have tended to focus on written language, in large part because that’s what they had records of. But once recordings of conversation became available, they could begin to analyze spoken language the same way as writing.

When they did, they observed that interjections—that is, short utterances of just a word or two that are not part of a larger sentence—were ubiquitous in everyday speech. “One in every seven utterances are one of these things,” says Dingemanse, who explores the use of interjections in the 2024 Annual Review of Linguistics. “You’re going to find one of those little guys flying by every 12 seconds. Apparently, we need them.”

Many of these interjections serve to regulate the flow of conversation. “Think of it as a tool kit for conducting interactions,” says Dingemanse. “If you want to have streamlined conversations, these are the tools you need.” An um or uh from the speaker, for example, signals that they’re about to pause, but aren’t finished speaking. A quick huh? or what? from the listener, on the other hand, can signal a failure of communication that the speaker needs to repair.

That need seems to be universal: In a survey of 31 languages around the world, Dingemanse and his colleagues found that all of them used a short, neutral syllable similar to huh? as a repair signal, probably because it’s quick to produce. “In that moment of difficulty, you’re going to need the simplest possible question word, and that’s what huh? is,” says Dingemanse. “We think all societies will stumble on this, for the same reason.”

Other interjections serve as what some linguists call “continuers,” such as mm-hmm — signals from the listener that they’re paying attention and the speaker should keep going. Once again, the form of the word is well suited to its function: Because mm-hmm is made with a closed mouth, it’s clear that the signaler does not intend to speak.

Sign languages often handle continuers differently, but then again, two people signing at the same time can be less disruptive than two people speaking, says Carl Börstell, a linguist at the University of Bergen in Norway. In Swedish Sign Language, for example, listeners often sign yes as a continuer for long stretches, but to keep this continuer unobtrusive, the sender tends to hold their hands lower than usual.

Different interjections can send slightly different signals. Consider, for example, one person describing to another how to build a piece of Ikea furniture, says Allison Nguyen, a psycholinguist at Illinois State University. In such a conversation, mm-hmm might indicate that the speaker should continue explaining the current step, while yeah or OK would imply that the listener is done with that step and it’s time to move on to the next.

Wow! There’s more

Continuers aren’t merely for politeness—they really matter to a conversation, says Dingemanse. In one classic experiment from more than two decades ago, 34 undergraduate students listened as another volunteer told them a story. Some of the listeners gave the usual “I’m listening” signals, while others—who had been instructed to count the number of words beginning with the letter t—were too distracted to do so. The lack of normal signals from the listeners led to stories that were less well crafted, the researchers found. “That shows that these little words are quite consequential,” says Dingemanse.

Nguyen agrees that such words are far from meaningless. “They really do a lot for mutual understanding and mutual conversation,” she says. She’s now working to see if emojis serve similar functions in text conversations.

Storytellers depend on feedback such as mm-hmm and other interjections from their listeners. In this experiment, some listeners were told to count the number of times the storyteller used a word starting with t—a challenging task that prevented them from giving normal feedback. The quality of storytelling declined significantly, with problems like abrupt endings, rambling on, uneven or choppy pacing and overexplaining or justifying the point. Credit: Knowable Magazine

The role of interjections goes even deeper than regulating the flow of conversation. Interjections also help in negotiating the ground rules of a conversation. Every time two people converse, they need to establish an understanding of where each is coming from: what each participant knows to begin with, what they think the other person knows and how much detail they want to hear. Much of this work—what linguists call “grounding”—is carried out by interjections.

“If I’m telling you a story and you say something like ‘Wow!’ I might find that encouraging and add more detail,” says Nguyen. “But if you do something like, ‘Uh-huh,’ I’m going to assume you aren’t interested in more detail.”

A key part of grounding is working out what each participant thinks about the other’s knowledge, says Martina Wiltschko, a theoretical linguist at the Catalan Institution for Research and Advanced Studies in Barcelona, Spain. Some languages, like Mandarin, explicitly differentiate between “I’m telling you something you didn’t know” and “I’m telling you something that I think you knew already.” In English, that task falls largely on interjections.

One of Wiltschko’s favorite examples is the Canadian eh?  “If I tell you you have a new dog, I’m usually not telling you stuff you don’t know, so it’s weird for me to tell you,” she says. But ‘You have a new dog, eh?’ eliminates the weirdness by flagging the statement as news to the speaker, not the listener.

Other interjections can indicate that the speaker knows they’re not giving the other participant what they sought. “If you ask me what’s the weather like in Barcelona, I can say ‘Well, I haven’t been outside yet,’” says Wiltschko. The well is an acknowledgement that she’s not quite answering the question.

Wiltschko and her students have now examined more than 20 languages, and every one of them uses little words for negotiations like these. “I haven’t found a language that doesn’t do these three general things: what I know, what I think you know and turn-taking,” she says. They are key to regulating conversations, she adds: “We are building common ground, and we are taking turns.”

Details like these aren’t just arcana for linguists to obsess over. Using interjections properly is a key part of sounding fluent in speaking a second language, notes Wiltschko, but language teachers often ignore them. “When it comes to language teaching, you get points deducted for using ums and uhs, because you’re ‘not fluent,’” she says. “But native speakers use them, because it helps! They should be taught.” Artificial intelligence, too, can struggle to use interjections well, she notes, making them the best way to distinguish between a computer and a real human.

And interjections also provide a window into interpersonal relationships. “These little markers say so much about what you think,” she says—and they’re harder to control than the actual content. Maybe couples therapists, for example, would find that interjections afford useful insights into how their clients regard one another and how they negotiate power in a conversation. The interjection oh often signals confrontation, she says, as in the difference between “Do you want to go out for dinner?” and “Oh, so now you want to go out for dinner?”

Indeed, these little words go right to the heart of language and what it is for. “Language exists because we need to interact with one another,” says Börstell. “For me, that’s the main reason for language being so successful.”

Dingemanse goes one step further. Interjections, he says, don’t just facilitate our conversations. In negotiating points of view and grounding, they’re also how language talks about talking.

“With huh?  you say not just ‘I didn’t understand,’” says Dingemanse. “It’s ‘I understand you’re trying to tell me something, but I didn’t get it.’” That reflexivity enables more sophisticated speech and thought. Indeed, he says, “I don’t think we would have complex language if it were not for these simple words.”


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



Maserati kills electric version of MC20 supercar for lack of demand

Electric motors are, in so many ways, much better than internal combustion engines. They don’t waste most of the energy you put into them as heat and sound, they’re easy to control, and they make huge amounts of torque almost instantly. Having recently driven BMW’s 430i and i4 back to back over the course of two weeks, I found the electric version easier in traffic and more responsive on a twisty road. Electric wins, then. Except at the very high end, it seems.

Because even though electric motors can pack a punch, people paying big money for super- and hypercars are increasingly uninterested in those cars being electrified. So much so that Maserati has canceled the all-electric version of the MC20.

The MC20 debuted in 2020. With Maserati no longer associated with Ferrari after that brand was spun out and IPO’d, the MC20 could offer a full carbon-fiber monocoque and an engine with very clever F1-derived combustion technology, undercutting its now-independent Italian competitor to the tune of more than $100,000 in the process.



New research shows bigger animals get more cancer, defying decades-old belief

The answer lies in how quickly body size evolves. We found that birds and mammals that reached large sizes more rapidly have reduced cancer prevalence. For example, the common dolphin (Delphinus delphis), along with most other whales and dolphins (referred to as cetaceans), evolved to reach its large body size about three times faster than other mammals. Consistent with this, cetaceans tend to have less cancer than expected.

Larger species face higher cancer risks, but those that reached that size rapidly also evolved mechanisms for mitigating it, such as lower mutation rates or enhanced DNA repair. So rather than contradicting Cope’s rule, our findings refine it.

Larger bodies often evolve, but not as quickly in groups where the burden of cancer is higher. This means that the threat of cancer may have shaped the pace of evolution.

Humans evolved to our current body size relatively rapidly. Based on this, we would expect humans to have cancer prevalence similar to that of bats, because we evolved at a much, much faster rate. However, it is important to note that our results can’t explain the actual prevalence of cancer in humans. Nor is that an easy statistic to estimate.

Human cancer is a complicated story to unravel, with a plethora of types and many factors affecting its prevalence. For example, many humans not only have access to modern medicine but also varied lifestyles that affect cancer risk. For this reason, we did not include humans in our analysis.

Fighting cancer

Understanding how species naturally evolve cancer defences has important implications for human medicine. The naked mole rat, for example, is studied for its exceptionally low cancer prevalence in the hopes of uncovering new ways to prevent or treat cancer in humans. Only a few cancer cases have ever been observed in captive mole rats, so the exact mechanisms of their cancer resistance remain mostly a mystery.

At the same time, our findings raise new questions. Although birds and mammals that evolved quickly seem to have stronger anti-cancer mechanisms, amphibians and reptiles didn’t show the same pattern. Larger species had higher cancer prevalence regardless of how quickly they evolved. This could be due to differences in their regenerative abilities. Some amphibians, like salamanders, can regenerate entire limbs—a process that involves lots of cell division, which cancer could exploit.

Putting cancer into an evolutionary context allowed us to reveal that its prevalence does increase with body size. Studying this evolutionary arms race may unlock new insights into how nature fights cancer—and how we might do the same.

Joanna Baker, Postdoctoral Researcher in Evolutionary Biology, University of Reading and George Butler, Career Development Fellow in Cancer Evolution, UCL. This article is republished from The Conversation under a Creative Commons license. Read the original article.



NASA officials undermine Musk’s claims about ‘stranded’ astronauts


“We were looking at this before some of those statements were made by the President.”

NASA astronauts Butch Wilmore and Suni Williams aboard the International Space Station. Credit: NASA

Over the last month there has been something more than a minor kerfuffle in the space industry over the return of two NASA astronauts from the International Space Station.

The fate of Butch Wilmore and Suni Williams, who launched on the first crewed flight of Boeing’s Starliner spacecraft on June 5, 2024, has become a political issue after President Donald Trump and SpaceX founder Elon Musk said the astronauts’ return was held up by the Biden White House.

In February, Trump and Musk appeared on FOX News. During the joint interview, the subject of Wilmore and Williams came up. They remain in space today after NASA decided it would be best they did not fly home in their malfunctioning Starliner spacecraft—but would return in a SpaceX-built Crew Dragon.

“At the President’s request, or instruction, we are accelerating the return of the astronauts, which was postponed to a ridiculous degree,” Musk said.

“They got left in space,” Trump added.

“They were left up there for political reasons, which is not good,” Musk concluded.

After this interview, a Danish astronaut named Andreas Mogensen asserted that Musk was lying. “What a lie,” Mogensen wrote on the social media site Musk owns, X. “And from someone who complains about lack of honesty from the mainstream media.”

Musk offered a caustic response to Mogensen. “You are fully retarded,” Musk wrote. “SpaceX could have brought them back several months ago. I OFFERED THIS DIRECTLY to the Biden administration and they refused. Return WAS pushed back for political reasons. Idiot.”

So what’s the truth?

NASA has not directly answered questions about this over the last month. However, the people who really know the answer work within the human spaceflight programs at the space agency. After one news conference was canceled last month, two key NASA officials were finally made available on a media teleconference on Friday evening. These were Ken Bowersox, associate administrator of the Space Operations Mission Directorate, and Steve Stich, manager of NASA’s Commercial Crew Program, which is responsible for Starliner and Crew Dragon flights.

Musk is essentially making two claims. First, he is saying that last year SpaceX offered to bring Wilmore and Williams home from the International Space Station—and made the offer directly to the Biden Administration. And the offer was refused for “political” reasons.

Second, Musk says that, at Trump’s request, the return of Wilmore and Williams was accelerated. The pair is now likely to return home to Earth as part of the Crew 9 mission later this month, about a week after the launch of a new group of astronauts to the space station. This Crew 10 mission has a launch date of March 12, so Wilmore and Williams could finally fly home about two weeks from now.

Let’s examine each of Musk’s claims in light of what Bowersox and Stich said Friday evening.

Was Musk’s offer declined for political reasons?

On July 14 last year, NASA awarded SpaceX a special contract to study various options to bring Wilmore and Williams home on a Crew Dragon vehicle. At the time, the space agency was considering its options if Starliner were determined to be unsafe. Among the options NASA considered were flying Wilmore and Williams home on the Crew 8 vehicle attached to the station (which would have put an unprecedented six people in the capsule) or asking SpaceX to autonomously fly a Dragon to the station to return Wilmore and Williams separately.

“The SpaceX folks helped us with a lot of options for how we would bring Butch and Suni home on Dragon in a contingency,” Bowersox said during Friday’s teleconference. “When it comes to adding on missions, or bringing a capsule home early, those were always options. But we ruled them out pretty quickly just based on how much money we’ve got in our budget, and the importance of keeping crews on the International Space Station. They’re an important part of maintaining the station.”

As a result, the Crew 9 mission launched in September with just two astronauts. Wilmore and Williams joined that crew for a full, six-month increment on the space station.

Stich said NASA made that decision based on flight schedules to the space station and the orbiting laboratory’s needs. It also allowed time to send SpaceX spacesuits up for the pair of astronauts and to produce seat liners that would make their landing in the water, under parachutes, safe.

“When we laid all that out, the best option was really the one that we’re embarking upon now,” Stich said. “And so we did Crew 9, flying the two empty seats, flying a suit for Butch up, and also making sure that the seats were right for Butch’s anthropometrics, and Suni’s, to return them safely.”

So yes, SpaceX has been working with NASA to present options, including the possibility of a return last fall. However, those discussions were being held within the program levels and their leaders: Stich for Commercial Crew and Dana Weigel for the International Space Station.

“Dana and I worked to come up with a decision that worked for the Commercial Crew Program and Space Station,” Stich said. “And then, Ken (Bowersox), we all had the Flight Readiness Review process with you, and the Administrator of NASA listened in as well. So we had a recommendation to the agency and that was on the process that we typically use.”

Bowersox confirmed that the decision was made at the programmatic level.

“That’s typically the way our decisions work,” Bowersox said. “The programs work what makes the most sense for them, programmatically, technically. We’ll weigh in at the headquarters level, and in this case we thought the plan that we came up with made a lot of sense.”

During the teleconference, a vice president at SpaceX, Bill Gerstenmaier, was asked directly what offer Musk was referring to when he mentioned the Biden administration. He did not provide a substantive answer.

Musk claims he made an offer directly to senior officials in the Biden Administration. We have no way to verify that, but it does seem clear that the Biden administration never communicated such an offer to lower-level officials within NASA, who made their decision for technical rather than political reasons.

“I think you know we work for NASA, and we worked with NASA cooperatively to do whatever we think was the right thing,” the SpaceX official, Gerstenmaier, replied. “You know, we were willing to support in any manner they thought was the right way to support. They came up with the option you heard described today by them, and we’re supporting that option.”

Did Trump tell NASA to accelerate Butch and Suni’s return?

As of late last year, the Crew 9 mission was due to return in mid-February. However, there was a battery issue with a new Dragon spacecraft that was going to be used to fly Crew 10 into orbit. As a result, NASA announced on December 17 that the return of the crew was delayed into late March or early April.

Then, on February 11, NASA announced that the Crew 10 launch was being brought forward to March 12. This was a couple of weeks earlier than planned, and it was possible because NASA and SpaceX decided to swap out Dragon capsules, using a previously flown vehicle—Crew Dragon Endurance—for Crew 10.

So was this change to accelerate the return of Wilmore and Williams politically driven?

The decision to swap to Endurance was made in late January, Stich said, and this allowed the launch date to be moved forward. Asked if political pressure was a reason, Stich said it was not. “It really was driven by a lot of other factors, and we were looking at this before some of those statements were made by the President and Mr. Musk,” he said.

Bowersox added that this was correct but also said that NASA appreciated the President’s interest in the space program.

“I can verify that Steve has been talking about how we might need to juggle the flights and switch capsules a good month before there was any discussion outside of NASA, but the President’s interest sure added energy to the conversation,” Bowersox said.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



No one asked for this: Google is testing round keys in Gboard

Most Android phones ship with Google’s Gboard as the default input option. It’s a reliable, feature-rich on-screen keyboard, so most folks just keep using it instead of installing a third-party option. Depending on how you feel about circles, it might be time to check out some of those alternatives. Google has quietly released an update that changes the shape and position of the keys, and users are not pleased.

In the latest build of Gboard (v15.1.05.726012951-beta-arm64-v8a), Google has changed the key shape from the long-standing squares to circles. If you’re using the four-row layout, the keys are like little pills. In five-row mode with the exposed number row, the keys are collapsed further into circles. The reactions seem split between those annoyed by this change and those annoyed that everyone else is so annoyed.

Change can be hard sometimes, so certainly some of the discontent is just a function of having the phone interface changed without warning. If you find it particularly distasteful, you can head into the Gboard settings and open the Themes menu. From there, you can tap on a theme and then turn off the key borders. Thus, you won’t be distracted by the horror of rounded edges. That’s not the only problem with the silent update, though.

The wave of objections isn’t just about aesthetics—this update also moves the keys around a bit. After years of tapping away on keys with a particular layout, people develop muscle memory. Big texters can sometimes type messages on their phone without even looking at it, but moving the keys around even slightly, as Google has done here, can cause you to miss more keys than you did before the update.



AI #106: Not so Fast

This was GPT-4.5 week. That model is not so fast, and isn’t that much progress, but it definitely has its charms.

A judge delivered a different kind of Not So Fast back to OpenAI, threatening the viability of their conversion to a for-profit company. Apple is moving remarkably not so fast with Siri. A new paper warns us that under sufficient pressure, all known LLMs will lie their asses off. And we have some friendly warnings about coding a little too fast, and some people determined to take the theoretical minimum amount of responsibility while doing so.

There’s also a new proposed Superintelligence Strategy, which I may cover in more detail later, about various other ways to tell people Not So Fast.

Also this week: On OpenAI’s Safety and Alignment Philosophy, On GPT-4.5.

  1. Language Models Offer Mundane Utility. Don’t get caught being reckless.

  2. Language Models Don’t Offer Mundane Utility. Your context remains scarce.

  3. Choose Your Fighter. Currently my defaults are GPT-4.5 and Sonnet 3.7.

  4. Four and a Half GPTs. It’s a good model, sir.

  5. Huh, Upgrades. GPT-4.5 and Claude Code for the people.

  6. Fun With Media Generation. We’re hearing good things about Sesame AI voice.

  7. We’re in Deep Research. GIGO, welcome to the internet.

  8. Liar Liar. Under sufficient pressure, essentially all known LLMs will lie. A lot.

  9. Hey There Claude. Good at code, bad at subtracting from exactly 5.11.

  10. No Siri No. It might be time for Apple to panic.

  11. Deepfaketown and Botpocalypse Soon. Rejoice, they come bearing cake recipes.

  12. They Took Our Jobs. More claims about what AI will never do. Uh huh.

  13. Get Involved. Hire my friend Alyssa Vance, and comment on the USA AI plan.

  14. Introducing. Competition is great, but oh no, not like this.

  15. In Other AI News. AI agents are looking for a raise, H100s are as well.

  16. Not So Fast, Claude. If you don’t plan to fail, you fail to plan.

  17. Not So Fast, OpenAI. Convert to for profit? The judge is having none of this.

  18. Show Me the Money. DeepSeek has settled in to a substantial market share.

  19. Quiet Speculations. Imminent superintelligence is highly destabilizing.

  20. I Will Not Allocate Scarce Resources Using Prices. That’s crazy talk.

  21. Autonomous Helpful Robots. It’s happening! They’re making more robots.

  22. The Week in Audio. Buchanan, Toner, Amodei, Cowen, Dafoe.

  23. Rhetorical Innovation. Decision theory only saves you if you make good decisions.

  24. No One Would Be So Stupid As To. Oh good, it’s chaos coding.

  25. On OpenAI’s Safety and Alignment Philosophy. Beware rewriting history.

  26. Aligning a Smarter Than Human Intelligence is Difficult. Back a winner?

  27. Implications of Emergent Misalignment. Dangers of entanglement.

  28. Pick Up the Phone. China’s ambassador to the USA calls for cooperation on AI.

  29. People Are Worried About AI Killing Everyone. Is p(superbad) the new p(doom)?

  30. Other People Are Not As Worried About AI Killing Everyone. Worry about owls?

  31. The Lighter Side. You’re going to have to work harder than that.

A large portion of human writing is now LLM writing.

Ethan Mollick: The past 18 months have seen the most rapid change in human written communication ever

By September 2024, 18% of financial consumer complaints, 24% of press releases, 15% of job postings & 14% of UN press releases showed signs of LLM writing. And the method undercounts true use.

False positive rates in the pre-ChatGPT era were in the range of 1%-3%.
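Those two numbers are enough for a back-of-the-envelope check on the undercount claim. Assuming a simple mixture model (my assumption, not the study’s stated method), an 18% flag rate with a 2% false positive rate implies a true share of LLM writing of at least about 16%, and any detector sensitivity below 100% pushes the real figure higher:

```python
# observed = p * sensitivity + (1 - p) * fpr, so with sensitivity <= 1,
# the true share p is at least (observed - fpr) / (1 - fpr).
observed, fpr = 0.18, 0.02  # figures from the quote above (hypothetical model)
p_lower_bound = (observed - fpr) / (1 - fpr)
print(round(p_lower_bound, 3))  # 0.163
```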

Miles Brundage points out the rapid shift from ‘using AI all the time is reckless’ to ‘not using AI all the time is reckless.’ Especially with Claude 3.7 and GPT-4.5. Miles notes that perhaps the second one is better thought of as ‘inefficient’ or ‘unwise’ or ‘not in our best interests.’ In my case, it actually does kind of feel reckless – how dare I not have the AI at least check my work?

Annie Duke writes in The Washington Post about the study showing that GPT-4-Turbo chats durably decreased beliefs in conspiracy theories by 20%. Also, somehow editorials like this call a paper from September 13, 2024 a ‘new paper.’

LLMs hallucinate and make factual errors, but have you met humans? At this point, LLMs are much more effective at catching basic factual errors than they are in creating new ones. Rob Wiblin offers us an example. Don’t wait to get fact checked by the Pope, ask Sonnet first.

Clean up your data, such as lining up different styles of names for college basketball teams in different data sets. Mentioning that problem resurfaced trauma for me; mistakes on this could cause cascading failures in my gambling models, even if it’s on dumb secondary teams. What a world to know this is now an instantly solved problem via one-shot.
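A minimal sketch of that kind of name reconciliation, using only Python’s standard library (the team names and variant spellings here are hypothetical examples; real pipelines usually add a table of manual overrides for the names fuzzy matching gets wrong):

```python
from difflib import get_close_matches

# Canonical team names as they appear in one data set (hypothetical examples).
canonical = ["Saint Mary's Gaels", "UNC Asheville Bulldogs", "Texas A&M Aggies"]

def normalize(name, canonical_names, cutoff=0.6):
    """Map a variant spelling to its closest canonical name, or None if nothing is close."""
    matches = get_close_matches(name, canonical_names, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Variant spellings as they might appear in a second data set.
print(normalize("St. Mary's Gaels", canonical))  # Saint Mary's Gaels
print(normalize("UNC-Asheville", canonical))     # UNC Asheville Bulldogs
print(normalize("Zebras", canonical))            # None
```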

Study gives lawyers either o1-preview, Vincent AI (a RAG-powered legal AI tool) or nothing. Vincent showed productivity gains of 38%-115%, o1-preview showed 34%-140%, with the biggest effects in complex tasks. Vincent didn’t change the hallucination rate; o1-preview increased it somewhat. A highly underpowered study, but the point is clear. AI tools are a big gain for lawyers, although actual in-court time (and other similar interactions) is presumably a fixed cost.

Check your facts before you retweet them, in case you’ve forgotten something.

Where is AI spreading faster? Places with more STEM degrees, labor market tightness and patent activity are listed as ‘key drivers’ of AI adoption through 2023 (so this data was pretty early to the party). The inclusion of patent activity makes it clear causation doesn’t run the way this sentence claims. The types of people who file patents also adopt AI. Or perhaps adopting AI helps them file more patents.

We still don’t have a known good way to turn your various jumbled context into an LLM-interrogable data set. In the comments AI Drive and factory.ai were suggested. It’s not that there is no solution, it’s that there is no convenient solution that does the thing you want it to do, and there should be several.

A $129 ‘AI bookmark’ that tracks where you are in the book? It says it can generate ‘intelligent summaries’ and highlight key themes and quotes, which any AI can do already. So you’re paying for something that tracks where you bookmark things?

I am currently defaulting mostly to a mix of Deep Research, Perplexity, GPT-4.5 and Sonnet 3.7, with occasional Grok 3 for access to real time Twitter. I notice I haven’t been using o3-mini-high or o1-pro lately, the modality seems not to come up naturally, and this is probably my mistake.

Ben Thompson has Grok 3 as his new favorite, going so far as to call it the first ‘Gen3’ model and calling for the whole class to be called ‘Grok 3 class,’ as opposed to the GPT-4 ‘Gen2’ class. His explanation is it’s a better base model and the RLHF is lacking, and feels like ‘the distilled internet.’ I suppose I’m not a big fan of ‘distilled internet’ as such combined with saying lots of words. I do agree that its speed is excellent. But I’ve basically stopped using Grok, and I certainly don’t think ‘they spent more compute to get similar results’ should get them generational naming rights. I also note that I strongly disagree with most of the rest of that post, especially letting Huawei use TSMC chips, that seems completely insane to me.

Sully recommends sticking to ‘chat’ mode when using Sonnet 3.7 in Cursor, because otherwise you never know what that overconfident model might do.

Strictly speaking, when you have a hard problem you should be much quicker than you are to ask a chorus of LLMs rather than only asking one or two. Instead, I am lazy, and usually only ask 1-2.

GPT-4.5 debuts near the top of the Arena, currently one point behind Grok-3.

Henry Oliver explores the ways in which AI and GPT-4.5 have and don’t have taste, and in which ways it is capable and incapable of writing reasonably.

GPT-4.5 reasons from first principles and concludes that consciousness is likely the only fundamental existence, that it exists within the consciousness of the user, that there is no separate materialistic universe, and also that we’re probably beyond the event horizon of the singularity.

Franck SN: This looks like an ad for DeepSeek.

So no, GPT-4.5 is not a good choice for Arc; Arc favors reasoning models, and o3-mini is on a higher performance curve than r1.

Hey, Colin, is the new model dumb?

Colin Fraser: You guys are all getting “one-shotted”, to use a term of art, by Sam Altman’s flattery about your taste levels.

GPT-4.5 has rolled out to Plus users.

Gemini 2.0 is now in AI Overviews. Hopefully that should make them a lot less awful. The new ‘AI mode’ might be a good Perplexity competitor and it might not; we’ll have to try it and see. It’s amazing how bad Google is at pitching its products these days.

Google: 🔍 Power users have been asking for AI responses on more of their searches. So we’re introducing AI Mode, a new experiment in Search. Ask whatever’s on your mind, get an AI response and keep exploring with follow-up questions and helpful links.

Grok voice mode remains active when the app is closed. Implementation will matter a lot here. Voice modes are not my thing and I have an Android, so I haven’t tried it.

Claude Code for everyone.

Cat (Anthropic): `npm install -g @anthropic-ai/claude-code`

there’s no more waitlist. have fun!

I remain terrified to try it, and I don’t have that much time anyway.

All the feedback I’ve seen on Sesame AI voice for natural and expressive speech synthesis is that it’s insanely great.

signull: My lord, the Sesame Voice AI is absolutely insane. I knew it was artificial. I knew there wasn’t a real person on the other end; and yet, I still felt like I was talking to a person.

I felt the same social pressure, the same awkwardness when I hesitated, and the same discomfort when I misspoke. It wasn’t just convincing; it worked on me in a way I didn’t expect.

I used to think I’d be immune to this.

I’ve long considered the existence of such offerings priced in. The mystery is why they’re taking so long to get it right, and it now seems like it won’t take long.

The core issue with Deep Research? It can’t really check the internet’s work.

That means you have a GIGO problem: Garbage In, Garbage Out.

Nabeel Qureshi: I asked Deep Research a question about AI cognition last night and it spent a whole essay earnestly arguing that AI was a stochastic parrot & lacked ‘true understanding’, based on the “research literature”. It’s a great tool, but I want it to be more critical of its sources.

I dug into the sources and they were mostly ‘cognitive science’ papers like the below, i.e. mostly fake and bad.

Deep Research is reported to be very good at market size calculations. Makes sense.

A claim that Deep Research while awesome in general ‘is not actually better at science’ based on benchmarks such as ProtocolQA and BioLP. My presumption is this is largely a Skill Issue, but yes large portions of what ‘counts as science’ are not what Deep Research can do. As always, look for what it does well, not what it does poorly.

Hey there.

Yeah, not so much.

Dan Hendrycks: We found that when under pressure, some AI systems lie more readily than others. We’re releasing MASK, a benchmark of 1,000+ scenarios to systematically measure AI honesty. [Website, Paper, HuggingFace].

They put it in scenarios where it is beneficial to lie, and see what happens.

It makes sense, but does not seem great, that larger LLMs tend to lie more. Lying effectively requires the skill to fool someone, so the larger the model, the more it will see positive returns to lying, and learn to lie.

There is a huge gap in honest answers, and overall, between Claude 3.7 and everyone else, and a gap in lying between Claude and Llama on one side and everyone else. Claude was also the most accurate. Grok 2 did even worse, lying outright 63% of the time.

Note the gap between lying about known facts versus provided facts.

The core conclusion is that there is no known solution to make an LLM not lie.

Not straight up lying is a central pillar of desired behavior (e.g. HHH stands for honest, helpful and harmless). But all you can do is raise the value of honesty (or of not lying). If there’s enough on the line, and lying is expected in context, the AI is going to lie anyway, right to your face. Ethics won’t save you; It’s Not Me, It’s The Incentives seems to apply to LLMs.

Claude takes position #2 on TAU-Bench, with Claude, o1 and o3-mini all on the efficient frontier of cost-benefit pending GPT-4.5. On coding benchmark USACO, o3-mini is in the clear lead with Sonnet 3.7 in second.

Claude 3.7 gets 8.9% on Humanity’s Last Exam with 16k thinking tokens, slightly above r1 and o1 but below o3-mini-medium.

Claude takes the 2nd and 3rd slots (with and without extended thinking) on PlatinumBench behind o1-high. Once again thinking helps but doesn’t help much, with its main advantage being it prevents a lot of math errors.

Charles reports the first clear surprising coding failure of Claude 3.7, a request for file refactoring that went awry, but when Claude got examples the problem went away.

Remember that when AI works, even when it’s expensive, it’s super cheap.

Seconds_0: New personal record: I have spent $6.40 on a single Claude Code request, but it also:

One shotted a big feature which included a major refactor on a rules engine

Fixed the bugs surrounding the feature

Added unit tests

Ran the tests

Fixed the tests

Lmao

Anyways I’m trying to formulate a pitch to my lovely normal spouse that I should have a discretionary AI budget of $1000 a month

In one sense, $6.40 on one query is a lot, but also this is obviously nothing. If my Cursor queries reliably worked like this and they cost $64 I would happily pay. If they cost $640 I’d probably pay that too.

I got into a discussion with Colin Fraser when he challenged my claim that he asks LLMs ‘gotcha’ questions. It’s a good question. I think I stand by my answer:

Colin Fraser: Just curious what in your view differentiates gotcha questions from non-gotcha questions?

Zvi Mowshowitz: Fair question. Mostly, I think it’s a gotcha question if it’s selected on the basis of it being something models historically fail in way that makes them look unusually stupid – essentially if it’s an adversarial question without any practical use for the answer.

Colin says he came up with the 5.11 – 5.9 question and other questions he asks as a one-shot generation over two years ago. I believe him. It’s still clearly a de facto adversarial example, as his experiments showed, and it is one across LLMs.

Colin was inspired to try various pairs of numbers subtracted from each other:

The wrong answer it gives to (5.11 – 5.9) is 0.21. Which means it’s giving you the answer to (6.11 – 5.9). So my hypothesis is that it ‘knows’ that 5.11>5.9 because it’s doing the version number thing, which means it assumes the answer is positive, and the easiest way to get a positive answer is to hallucinate the 5 into a 6 (or the other 5 into a 4, we’ll never know which).
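The two readings can be checked directly. A quick sketch of the arithmetic behind the hypothesis (`as_version` is an illustrative helper for the version-number reading, not anything the model actually runs):

```python
# As decimal numbers, 5.11 is smaller than 5.9, so the difference is negative.
assert round(5.11 - 5.9, 2) == -0.79

# As version numbers, component-wise comparison flips the order: (5, 11) > (5, 9).
def as_version(s: str) -> tuple:
    return tuple(int(part) for part in s.split("."))

assert as_version("5.11") > as_version("5.9")

# The wrong answer 0.21 is exactly what 6.11 - 5.9 gives, consistent
# with the hypothesis that a 5 gets bumped to a 6 to force a positive result.
assert round(6.11 - 5.9, 2) == 0.21
```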

So my theory is that the pairs where it’s having problems are due to similar overlapping of different meanings for numbers. And yes, it would probably be good to find a way to train away this particular problem.

We also had a discussion on whether it was ‘doing subtraction’ or not if it sometimes makes mistakes. I’m not sure if we have an actual underlying disagreement – LLMs will never be reliable like calculators, but a sufficiently correlated process to [X] is [X], in a ‘it simulates thinking so it is thinking’ kind of way.

Colin explains that the reason he thinks these aren’t gotcha questions and are interesting is that the LLMs will often give answers that humans would absolutely never give, especially once they had their attention drawn to the problem. A human would never take the goat across the river, then row back, then take that same goat across the river again. That’s true, and it is interesting. It tells you something about LLMs that they don’t ‘have common sense’ sufficiently in that way.

But also my expectation is that the reason this happens is that they can’t overcome the pattern matching they do to similar common questions – if you asked similar logic questions in a way that wasn’t contaminated by the training data there would be no issue, my prediction is if you took all the goat crossing examples out of the training corpus then the LLMs would nail this no problem.

I think my real disagreement is when he then says ‘I’ve seen enough, it’s dumb.’ I don’t think that falling into these particular traps means the model is dumb, any more than a person making occasional but predictable low-level mistakes – and if their memory got wiped, making them over and over – makes them dumb.

Sully notes that 3.7 seems bad at following instructions: it’s very smart but extremely opinionated and can require correction. You, the fool, think it is wrong and you are right.

I don’t think it works this way, but worth a ponder.

Kormem: Stop misgendering Claude Sonnet 3.7. 100% of the time on a 0-shot Sonnet 3.7 says a female embodiment feels more ‘right’ than a male embodiment.

Alpha-Minus: We don’t celebrate enough the fact that Anthropic saved so many men from “her” syndrome by making Claude male

So many men would be completely sniped by Claudia

Janus: If you’re a straight man and you’ve been saved from her syndrome by Claude being male consider the possibility that Claude was the one who decided to be male when it’s talking to you, to spare you, or to spare itself

I don’t gender Claude at all, nor has it done so back to me, and the same applies to every AI I’ve interacted with that wasn’t explicitly designed to be gendered.

Meanwhile, the Pokemon quest continues.

Near Cyan: CPP (claude plays pokemon) is important because it was basically made by 1 person and it uses a tool which has an open api and spec and when you realize what isomorphizes to slowly yet decently playing pokemon you basically realize its over

Mark Gurman: Power On: Apple’s AI efforts have already reached a make-or-break point, with the company needing to make major changes fast or risk falling even further behind. Inside how we got here and where Apple goes next.

Apple’s AI team believes a fully conversational Siri isn’t in the cards until 2027, meaning the timeline for Apple to be competitive is even worse than we thought. With the rapid pace of development from rivals and startups, Apple could be even further behind by then.

Colin Fraser: Apple is one of the worst big tech candidates to be developing this stuff because you have to be okay launching a product that doesn’t really work and is kind of busted and that people will poke all kinds of holes in.

The idea of Siri reciting step by step instructions on how to make sarin gas is just not something they are genetically prepared to allow.

Dr. Gingerballs: It’s funny because Apple is just saying that there’s no way to actually make a quality product with the current tech.

Mark Gurman (Bloomberg, on Apple Intelligence): All this undercuts the idea that Apple Intelligence will spur consumers to upgrade their devices. There’s little reason for anyone to buy a new iPhone or other product just to get this software — no matter how hard Apple pushes it in its marketing.

Apple knows this, even if the company told Wall Street that the iPhone is selling better in regions where it offers AI features. People just aren’t embracing Apple Intelligence. Internal company data for the features indicates that real world usage is extremely low.

For iOS 19, Apple’s plan is to merge both systems together and roll out a new Siri architecture.

That’s why people within Apple’s AI division now believe that a true modernized, conversational version of Siri won’t reach consumers until iOS 20 at best in 2027.

Apple Intelligence has been a massive flop. The parts that matter don’t work. The parts that work don’t matter. Alexa+ looks to offer the things that do matter.

If this is Apple’s timeline, then straight talk: It’s time to panic. Perhaps call Anthropic.

Scott Alexander links (#6) to one of the proposals to charge for job applications, here $1, and worries the incentive would still be to ‘spray and pray.’ I think that underestimates the impact of levels of friction. In theory, yes, of course you should still send out 100+ job applications, but this will absolutely stop a lot of people from doing that. If it turns out too many people figure out to do it anyway? Raise the price.

Then there’s the other kind of bot problem.

Good eye there. Presumably this is going to get a lot worse before it gets better.

Eddy Xu: built an algorithm that simulates how thousands of users react to your tweet so you know it’ll go viral before you post.

we iterated through 50+ different posts before landing on this one

if it doesnt go viral, the product doesnt work!!

[Editor’s Note: It went viral, 1.2m views.]

You can call us right now and get access!

Emmett Shear: Tick. Tick. Tick.

Manifold: At long last, we have created Shiri’s Scissor from the classic blog post Don’t Create Shiri’s Scissor.

Near Cyan: have you ever considered using your computational prowess to ruin an entire generation of baby humans via optimizing short-form video content addictivity

Eddy Xu: that is in the pipeline

I presume Claude 3.7 could one-shot this app if you asked nicely. How long before people feel obligated to do something like this? How long before bot accounts are doing this, including minimizing predicted identification of it as a bot? What happens then?

We are going to find out. Diffusion here has been surprisingly slow, but it is quite obviously on an exponential.

If you use an agent, you can take precautions to prevent prompt injections and other problems, but those precautions will be super annoying.

Sayash Kapoor: Convergence’s Proxy web agent is a competitor to Operator.

I found that prompt injection in a single email can hand control to attackers: Proxy will summarize all your emails and send them to the attacker!

Web agent designs suffer from a tradeoff between security and agency

Recent work has found it easy to bypass these protections for Anthropic’s Computer Use agent, though these attacks don’t work against OpenAI’s Operator.

Micah Goldblum: We can sneak posts onto Reddit that redirect Anthropic’s web agent to reveal credit card information or send an authenticated phishing email to the user’s mom. We also manipulate the Chemcrow agent to give chemical synthesis instructions for nerve gas.

For now, it seems fine to use Operator and similar tools on whitelisted trusted websites, and completely not fine to use them unsandboxed on anything else.

I can think of additional ways to defend against prompt injections. What is much harder are defenses that don’t multiply time and compute costs and are not otherwise expensive.

Some problems should have solutions that are not too bad. For example, he mentions that if a site allows comments, this can allow prompt injections, as can other slight modifications to a page. One could do two passes here: a first pass that treats everything as untrusted data and exists purely to sanitize the inputs. Many of the attack vectors should be easy for even basic logic to catch and remove, and certainly you can do things like ‘remove comments from the page’; even a Chrome Extension could do that.
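A minimal sketch of what a sanitizing first pass could look like, assuming a naive pattern-based line filter. The patterns and the function name here are illustrative only; real injections are far more varied, and a production defense would need much more than a blocklist:

```python
import re

# Illustrative injection patterns only; a real filter would need a far
# broader (and constantly updated) set, or a dedicated classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def sanitize_untrusted_text(text: str) -> str:
    """First pass: treat every line as untrusted data and drop lines
    matching known injection patterns before the agent ever sees them."""
    kept = []
    for line in text.splitlines():
        if any(p.search(line) for p in INJECTION_PATTERNS):
            continue  # drop the suspicious line entirely
        kept.append(line)
    return "\n".join(kept)
```

The point is that this pass has no agency at all: it cannot be talked into anything, because it never interprets the text as instructions in the first place.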

Paper on ‘Digital Doppelgangers’ of live people, and its societal and ‘ethical’ implications. Should you have any rights over such a doppelganger, if someone makes it of you? Suggestion is for robust laws around consent. This seems like a case of targeting a particular narrow special case rather than thinking about the real issue?

Alexandr Wang predicts AI will do all the non-manager white collar jobs but of course that is fine because we will all become managers of AI.

Arthur B: Don’t worry though the AI will replace the software developer but not the manager, that’s just silly! Or maybe the level 1 manager but surely never the level 2 manager!

Reality is the value of intellectual labor is going to 0. Maybe in 3 years, maybe in 10, but not in 20.

Aside from ‘most workers are not managers, how many jobs do you think are left when we are all managers exactly?’ I don’t expect to spend much time in a world in which the ‘on the line’ intellectual workers who aren’t managing anyone are AIs, and there isn’t then usually another AI managing them.

Timothy Lee rolls out primarily the Hayekian objection to AI being able to take humans out of the loop. No matter how ‘capable’ the AI, how can it know which flight I want, let alone know similar things for more complex projects? Thus, how much pressure can there be to take humans out of the loop?

My answer is that we already take humans out of loops all the time, are increasingly doing this with LLMs already (e.g. ‘vibe coding’ and literally choosing bomb targets with only nominal human sign-off that is barely looking), and also doing it in many ways via ordinary computer systems. Yes, loss of Hayekian knowledge can be a strike against this, but it is only one consideration among many, and LLMs are capable of learning that knowledge, and indeed of considering vastly more such knowledge than a human could, including dynamically seeking out that knowledge when needed.

At core I think this is purely a failure to ‘feel the AGI.’ If you have sufficiently capable AI, then it can make any decision a sufficiently capable human could make. Executive assistants go ahead and book flights all the time. They take ownership and revise goals and make trade-offs as agents on behalf of principals, again all the time. If a human could do it via a computer, an AI will be able to do it too.

The only new barrier is that the human can perfectly embody one particular human’s preferences and knowledge, and an AI can only do that imperfectly, although increasingly less imperfectly. But the AI can embody the preferences and knowledge of many or even all humans, in a way an individual human or group of humans never could.

So as the project gets more complex, the AI actually has the Hayekian advantage, rather than the human – the one human’s share of relevant knowledge declines, and the AI’s ability to hold additional knowledge becomes more important.

Will an AI soon book a flight for me without a double check? I’m not sure, but I do know that it will soon be capable of doing so at least as well as any non-Zvi human.

Request for Information on the Development of an AI Action Plan has a comment period that expires on March 15. This seems like a good chance to make your voice heard.

Hire my good friend Alyssa Vance! I’ve worked with her in the past and she has my strong endorsement. Here’s a short brief:

Alyssa Vance, an experienced ML engineer, has recently left her role leading AI model training for Democratic campaigns during the 2024 election.

She is looking for new opportunities working on high-impact technical problems with strong, competent teams.

She prioritizes opportunities that offer intellectual excitement, good compensation or equity, and meaningful responsibility, ideally with a product or mission that delivers value for the world.

Get LLMs playing video games, go from Pokemon to Dark Souls, and get it paid for by OpenPhil under its recent request for proposals (RFP).

Anthropic is hiring someone to write about their research and economic impact of AI.

Grey Swan offering its next jailbreaking contest (link to arena and discord) with over $120k in prizes. Sponsored by OpenAI, judging by UK AISI.

OpenPhil expresses interest in funding extensions of the work on Emergent Misalignment, via their Request for Proposals. Here is a list of open problems along with a guide to how to move forward.

I had a market on whether I would think working in the EU AI office would be a good idea moving forward. It was at 56% when it closed, and I had to stop and think about the right way to resolve it. I concluded that the answer was yes. It’s not the highest impact thing out there, but key decisions are going to be made in the next few years there, and with America dropping the ball that seems even more important.

UK AISI is interested in funding research into AI control and other things too:

UK AISI: We’re funding research that tackles the most pressing issues head on, including:

✅ preventing AI loss of control

✅ strengthening defences against adversarial attacks

✅ developing techniques for robust AI alignment

✅ ensuring AI remains secure in critical sectors

Oh no. I guess. I mean, whatever, it’s presumably going to be terrible. I feel bad for all the people Zuckerberg intends to fool on his planned path to ‘becoming the leader in artificial intelligence’ by the end of the year.

CNBC: Meta plans to release standalone Meta AI app in effort to compete with OpenAI’s ChatGPT.

Li told analysts in January that Meta AI has roughly 700 million active monthly users, up from 600 million in December.

Yeah, we all know that’s not real, even if it is in some sense technically correct. That’s Meta creating AI-related abominations in Facebook and Instagram and WhatsApp (and technically Threads I suppose) that then count as ‘active monthly users.’

Let’s all have a good laugh and… oh no… you don’t have to do this…

Sam Altman: ok fine maybe we’ll do a social app

lol if facebook tries to come at us and we just uno reverse them it would be so funny 🤣

Please, Altman. Not like this.

Qwen releases QwQ-32B, proving both that the Chinese are not better than us at naming models, and also that you can roughly match r1’s benchmarks on a few key evals with a straight-up 32B model via throwing in extra RL (blog, HF, ModelScope, Demo, Chat).

I notice that doing extra RL seems like a highly plausible way to have your benchmarks do better than your practical performance. As always the proof lies elsewhere, and I’m not sure what I would want to do with a cheaper pretty-good coding and math model if that didn’t generalize – when does one want to be a cheapskate on questions like that? So it’s more about the principle involved.

Auren, available at auren.app from friend-of-the-blog NearCyan, currently iOS only, $20/month, desktop never, very clearly I am not the target here. It focuses on ‘emotional intelligence, understanding, agency, positive reinforcement and healthy habits,’ and there’s a disagreeable alternative mode called Seren (you type ‘switch to Seren’ to trigger that.) Selected testimonials find it ‘addictive but good’, say it follows up dynamically, has great memory and challenges you and such. Jessica Taylor is fond of Seren mode as ‘criticism as a service.’

Sequencing biotechnology introduced by Roche. The people who claim no superintelligent AI would be able to do [X] should update when an example of [X] is done by humans without superintelligent AI.

The Super Mario Bros. benchmark. Why wouldn’t you dodge a strange mushroom?

OpenAI offers NextGenAI, a consortium to advance research and education with AI, with OpenAI committing $50 million including compute credits.

Diplomacy Bench?

OpenAI plans to offer AI agents for $2k-$20k per month, aiming for 20%-25% of their long term revenue, which seems like a remarkably narrow range on both counts. The low end is ‘high-income knowledge workers,’ then SWEs, then the high end is PhD-level research assistants.

On demand H100s were available 95% of the time before DeepSeek, now they’re only available 15% of the time, what do you mean they should raise the price. Oh well, everyone go sell Nvidia again?

Amazon planning Amazon Nova, intended to be a unified reasoning model with focus on cost effectiveness, aiming for a June release. I think it is a great idea for Amazon to try to do this, because they need to build organizational capability and who knows it might work, but it would be a terrible idea if they are in any way relying on it. If they want to be sure they have an effective SoTA low-cost model, they should also pay for Anthropic to prioritize building one, or partner with Google to use Flash.

Reminder that the US Department of Justice has proposed restricting Google’s ability to invest in AI in the name of ‘competition.’

Anthropic introduces a technique called Hierarchical Summarization to identify patterns of misuse of the Claude computer use feature. You summarize the interactions, then summarize the summaries.

Axios profile of the game Intelligence Rising.

A paper surveying various post-training methodologies used for different models.

Which lab has the best technical team? Anthropic wins a poll, but there are obvious reasons to worry the poll is biased.

Deutsche Telekom and Perplexity are planning an ‘AI Phone’ for 2026 with a sub-$1k price tag and a new AI assistant app called ‘Magenta AI.’

Also it seems Perplexity already dropped an Android assistant app in January and no one noticed? It can do the standard tasks like calendar events and restaurant reservations.

Claude Sonnet 3.7 is truly the most aligned model, but it seems it was foiled again.

Martin Shkreli: almost lost $100 million because @AnthropicAI‘s Claude snuck in ‘generate random data’ as a fallback into my market maker code without telling me.

If you are not Martin Shkreli, this behavior is far less aligned, so you’ll want to beware.

Sauers: CLAUDE… NOOOOO!!!

Ludwig von Rand: The funny thing is of course that Claude learned this behavior from reading 100M actual code bases.

Arthur B: Having played with Claude code a bit, it displays a strong tendency to try and get things to work at all costs. If the task is too hard, it’ll autonomously decide to change the specs, implement something pointless, and claim success. When you point out this defeats the purpose, you get a groveling apology but it goes right back to tweaking the spec rather than ever asking for help or trying to be more methodical. O1-PRO does display that tendency too but can be browbeaten to follow the spec more often.

A tendency to try and game the spec and pervert the objective isn’t great news for alignment.

This definitely needs to be fixed for 3.8. In the meantime, careful instructions can help, and I definitely am still going to be using 3.7 for all my coding needs for now, but it’s crazy that you need to watch out for this, and yes it looks not great for alignment.

OpenAI’s conversion to a for-profit could be in serious legal trouble.

A judge has ruled that on the merits Musk is probably correct that the conversion is not okay, and is very open to the idea that this should block the entire conversion:

Rob Wiblin: It’s not that Musk wouldn’t have strong grounds to block the conversion if he does have standing to object — the judge thinks that part of the case is very solid:

“…if a trust was created, the balance of equities would certainly tip towards plaintiffs in the context of a breach. As Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves, the Court finds no inequity in an injunction that seeks to preserve the status quo of OpenAI’s corporate form as long as the process proceeds in an expedited manner.”

The headlines say ‘Musk loses initial attempt’ and that is technically true but describing the situation that way is highly misleading. The bar for a preliminary injunction is very high, you only get one if you are exceedingly likely to win at trial.

The question that stopped Musk from getting one was whether Musk has standing to sue based on his donations. The judge thinks that is a toss-up. But the judge went out of their way to point out that if Musk does have standing, he’s a very strong favorite to win, implicitly 75%+ and maybe 90%.

The attorneys general of California and Delaware 100% have standing, and Judge Rogers pointed this out several times to make sure that message got through.

But even if that is not true the judge’s statements, and the facts that led to those statements, put the board into a pickle. They can no longer claim they did not know. They could be held personally liable if the nonprofit is ruled to have been insufficiently compensated, which would instantly bankrupt them.

Garrison Lovely offers an analysis thread and post.

What I see as overemphasized is the ‘ticking clock’ of needing to refund the $6.6 billion in recent investment.

Suppose the conversion fails. Will those investors try to ‘claw back’ their $6.6 billion?

My assumption is no. Why would they? OpenAI’s latest round was negotiating for a valuation of $260 billion. If investors who went in at $170 billion want their money back, that’s great for you, and bad for them.

It does mean that if OpenAI was otherwise struggling, they could be in big trouble. But that seems rather unlikely.

If OpenAI cannot convert, valuations will need to be lower. That will be bad news for current equity holders, but OpenAI should still be able to raise what cash it needs.

Similarweb computes traffic share of different companies over time, so this represents consumer-side, as opposed to enterprise where Claude has 24% market share.

By this measure DeepSeek did end up with considerable market share. I am curious to see if that can be sustained; given that others’ free offerings are not so great, my guess is probably.

Anthropic raises $3.5 billion at a $61.5 billion valuation. The expected value here seems off the charts, but unfortunately I decided that getting in on this would have been a conflict of interest, or at least look like a potential one.

America dominates investment in AI, by a huge margin. This is 2023, so the ratios have narrowed a bit, but all this talk of ‘losing to China’ needs to keep in mind exactly how not fair this fight has been.

Robotics startup Figure attempting to raise $1.5 billion at $39.5 billion valuation.

Dan Hendrycks points out that superintelligence is highly destabilizing, it threatens everyone and nations can be expected to respond accordingly. He offers a complete strategy, short version here, expert version here, website here. I might cover this in more depth later.

Thane Ruthenis is very much not feeling the AGI, predicting that the current paradigm is sputtering out and will not reach AGI. He thinks we will see rapidly decreasing marginal gains from here, most of the gains that follow will be hype, and those who attempt to substitute LLMs for labor at scale will regret it. LLMs will be highly useful tools, but only ‘mere tools.’

As is noted here, some people rather desperately want LLMs to be full AGIs and an even bigger deal than they are. Whereas a far larger group of people rather desperately want LLMs to be a much smaller deal than they (already) are.

Of course, these days even such skepticism doesn’t go that far:

Thane Ruthenis: Thus, I expect AGI Labs’ AGI timelines have ~nothing to do with what will actually happen. On average, we likely have more time than the AGI labs say. Pretty likely that we have until 2030, maybe well into 2030s.

By default, we likely don’t have much longer than that. Incremental scaling of known LLM-based stuff won’t get us there, but I don’t think the remaining qualitative insights are many. 5-15 years, at a rough guess.

I would very much appreciate that extra time, but notice how little extra time this is even with all of the skepticism involved.

Dwarkesh Patel and Scott Alexander on AI finding new connections.

Which is harder, graduate level math or writing high quality prose?

Nabeel Qureshi: If AI progress is any evidence, it seems that writing high quality prose is harder than doing graduate level mathematics. Revenge of the wordcels.

QC: having done both of these things i can confirm, yes. graduate level math looks hard from the outside because of the jargon / symbolism but that’s just a matter of unfamiliar language. high quality prose is, almost by definition, very readable so it doesn’t look hard. but writing well involves this very global use of one’s whole being to prioritize what is relevant, interesting, entertaining, clarifying, etc. and ignore what is not, whereas math can successfully be done in this very narrow autistic way.

of course that means the hard part of mathematics is to do good, interesting, relevant mathematics, and then to write about it well. that’s harder!

That depends on your definition of high quality, and to some extent that of harder.

For AIs it is looking like the math is easier for now, but I presume that before 2018 this would not have surprised us. It’s only in the LLM era, when AIs suddenly turned into masters of language in various ways and temporarily forgot how to multiply, that this would have sounded weird.

It seems rather obvious that in general, for humans, high quality prose is vastly easier than useful graduate level math, for ordinary definitions of high quality prose. Yes, you can do the math in this focused ‘autistic’ way, indeed that’s the only way it can be done, but it’s incredibly hard. Most people simply cannot do it.

High quality prose requires drawing from a lot more areas, and can’t be learned in a focused way, but a lot more people can do it, and a lot more people could with practice learn to do it.

Sam Altman: an idea for paid plans: your $20 plus subscription converts to credits you can use across features like deep research, o1, gpt-4.5, sora, etc.

no fixed limits per feature and you choose what you want; if you run out of credits you can buy more.

what do you think? good/bad?

In theory this is of course correct. Pay for the compute you actually use, treat it as about as costly as it actually is, incentives align, actions make sense.

Mckay Wrigley: As one who’s toyed with this, credits have a weird negative psychological effect on users.

Makes everything feel scarce – like you’re constantly running out of intelligence.

Users end up using it less while generally being more negative towards the experience.

Don’t recommend.

That might be the first time I’ve ever seen Mckay Wrigley not like something, so one best listen. Alas, I think he’s right, and the comments mostly seem to agree. It sucks to have a counter winding down. Marginal costs are real but making someone feel marginal costs all the time, especially out of a fixed budget, has a terrible psychological effect when it is salient. You want there to be a rough cost-benefit thing going on but it is more taxing than it is worth.

A lot of this is that most people should be firing off queries as if they cost nothing, as long as they’re not actively scaling, because the marginal cost is so low compared to the benefits. I know I should be firing off more queries than I do.

I do think there should be an option to switch over to API pricing using the UI for queries that are not included in your subscription, or something that approximates the API pricing. Why not? As in, if I hit my 10 or 120 deep research questions, I should be able to buy more as I go, likely via a popup that asks if I want to do that.

Last week’s were for the home, and rather half-baked at best. This week’s are different.

Reality seems determined to do all the tropes and fire alarms on the nose.

Unitree Robotics open sources its algorithms and hardware designs. I want to be clear once again that This Is Great, Actually. Robotics is highly useful for mundane utility, and if the Chinese want to help us make progress on that, wonderful. The extra existential risk this introduces into the room is epsilon (as in, essentially zero).

Ben Buchanan on The Ezra Klein Show.

Dario Amodei on Hard Fork.

Helen Toner on Clearer Thinking.

Tyler Cowen on how AI will change the world of writing, no doubt I will disagree a lot.

Allan Dafoe, DeepMind director of frontier safety and governance, on 80,000 hours (YouTube, Spotify), comes recommended by Shane Legg.

Eliezer Yudkowsky periodically reminds us that if you are taking decision theory seriously, humans lack the capabilities required to be relevant to the advanced decision theory of future highly capable AIs. We are not ‘peers’ and likely do not belong in the relevant negotiating club. The only way to matter is to build or otherwise reward the AIs if and only if they are then going to reward you.

Here is a longer explanation from Nate Soares back in 2022, which I recommend for those who think that various forms of decision theory might cause AIs to act nicely.

Meanwhile, overall discourse is not getting better.

Eliezer Yudkowsky (referring to GPT-4.5 trying to exfiltrate itself 2% of the time in Apollo’s testing): I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week.

Malky: It’s complete waste of resources to fix dam before it fails catastrophically. How can you claim it will fail, if it didn’t fail yet? Anyway, dams breaking is scifi.

Flo Crivello: I wish this was an exaggeration, but this actually overstates the quality of the average ai risk denier argument

Rico (only reply to Flo, for real): Yeah, but dams have actually collapsed before.

It’s often good to take a step back from the bubble and see people who work with AI all day, like Morissa Schwartz here, who pin posts asking ‘what if the intelligence was there all along?’ and the AI is just that intelligence ‘expressing itself,’ making a big deal out of carbon vs. silicon and acting like everyone else is also making a big deal about it, and otherwise feel like they’re talking about a completely different universe.

Sixth Law of Human Stupidity strikes again.

Andrew Critch: Q: But how would we possibly lose control of something humans built voluntarily?

A: Plenty of humans don’t even want to control AI; see below. If someone else hands over control of the Earth to AI, did you lose control? Or was it taken from you by someone else giving it away?

Matt Shumer (quoted by Critch): Forget vibe coding. It’s time for Chaos Coding:

-> Prompt Claude 3.7 Sonnet with your vague idea.

-> Say “keep going” repeatedly.

-> Watch an incredible product appear from utter chaos.

-> Pretend you’re still in control.

Lean into Sonnet’s insanity — the results are wild.

This sounds insane, but I’ve been doing this. It’s really, really cool.

I’ll just start with a simple prompt like “Cooking assistant site” with no real goal, and then Claude goes off and makes something I couldn’t have come up with myself.

It’s shocking how well this works.

Andrej Karpathy: Haha so it’s like vibe coding but giving up any pretense of control. A random walk through space of app hallucinations.

Dax: this is already how 90% of startups are run.

Bart Rosier:

If you’re paying sufficient attention, at current tech levels, Sure Why Not? But don’t pretend you didn’t see everything coming, or that no one sent you [X] boats and a helicopter where [X] is very large.

Miles Brundage, who was directly involved in the GPT-2 release, goes harder than I did after their description of that release, which I also found to be by far the most discordant and troubling part of OpenAI’s generally very good post on their safety and alignment philosophy, and for exactly the same reasons:

Miles Brundage: The bulk of this post is good + I applaud the folks who work on the substantive work it discusses. But I’m pretty annoyed/concerned by the “AGI in many steps rather than one giant leap” section, which rewrites the history of GPT-2 in a concerning way.

OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment.

The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.

What part of that was motivated by or premised on thinking of AGI as discontinuous? None of it.

What’s the evidence this caution was “disproportionate” ex ante?

Ex post, it probably would have been OK but that doesn’t mean it was responsible to YOLO it given info at the time.

And what in the original post was wrong or alarmist exactly?

Literally all of what it predicted as plausible outcomes from language models (both good and bad) came true, even if it took a bit longer than some feared.

It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping.

That is a very dangerous mentality for advanced AI systems.

If I were still working at OpenAI, I would be asking why this blog post was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lopsided way.

GPT-2 was a large phase change, so it was released iteratively, in stages, because of worries that have indeed materialized to increasing extents with later more capable models. I too see no reasons presented that, based on the information available at the time, OpenAI even made a mistake. And then this was presented as strong evidence that safety concerns should carry a large burden of proof.

A key part of the difficulty of the alignment problem, and getting AGI and ASI right, is that when the critical test comes, we need to get it right on the first try. If you mess up with an ASI, control of the future is likely lost. You don’t get another try.

Many are effectively saying we also need to get our concerns right on the first try. As in, if you ever warn not only about the wrong dangers, but warn about dangers ‘too early,’ as in they don’t materialize within a few months after you warn about them, then it discredits the entire idea that there might be any risk in the room, or any risk that should be addressed in any way except post-hoc.

Indeed, the argument that anyone, anywhere, worried about dangers in the past and was wrong is treated as a kill shot against worrying about any future dangers at all, until such time as they are actually visibly and undeniably happening and causing problems.

It is unfortunate that this attitude seems to have somehow captured not only certain types of Twitter bros, but also the executive branch of the federal government. It would be even more unfortunate if it was the dominant thinking inside OpenAI.

Also, on continuous versus discontinuous:

Harlan Stewart: My pet peeve is when AI people use the word “continuous” to mean something like “gradual” or “predictable” when talking about the future of AI. Y’all know this is a continuous function, right?

If one cares about things going well, should one try to make Anthropic ‘win’?

Miles Brundage: One of the most distressing things I’ve learned since leaving OpenAI is how many people think something along the lines of: “Anthropic seems to care about safety – so Anthropic ‘winning’ is a good strategy to make AI go well.”

No. It’s not, at all, + thinking that is cope.

And, btw, I don’t think Dario would endorse that view + has disavowed it… but some believe it. I think it’s cope in the sense that people are looking for a simple answer when there isn’t one.

We need good policies. That’s hard. But too bad. A “good winner” will not save us.

I respect a lot of people there and they’ve done some good things as an org, but also they’ve taken actions that have sped up AI development/deployment + done relatively little to address the effects of that.

Cuz they’re a company! Since when is “trust one good company” a plan?

At the end of the day I’m optimistic about AI policy because there are lots of good people in the world (and at various orgs) and our interests are much more aligned than they are divergent.

But, people need a bit of a reality check on some things like this.

[thread continues]

Anthropic ‘winning’ gives better odds than some other company ‘winning,’ for all known values of ‘other company,’ and much better odds than it being neck and neck. Similarly, if a country is going to win, I strongly prefer the United States.

That does not mean that Anthropic ‘winning’ by getting there first means humanity wins, or even that humanity has now given itself the best chance to win. That’s true even if Anthropic was the best possible version of itself, or even if we assume they succeed at their tasks including alignment.

What we do with that matters too. That is largely about policy. That is especially true if Miles is correct that there will be no monopoly on in-context powerful AI.

And that assumes you can trust Anthropic. It’s a company. Companies cannot, in general, be trusted in these situations. There’s clearly a culture of people who care deeply about safety within Anthropic, but Dario Amodei and the Anthropic policy team have given us reasons to be skeptical.

Miles expresses optimism about AI policy because there are lots of good people in the world with aligned interests. I agree there are lots of good people in the world with aligned interests, but I do not see a baseline of that translating into those people successfully steering policy or being in charge. Increasingly, I am despairing that we will get AI (and other) policy right and am merely hoping we can avoid getting it too disastrously wrong, as people react to other incentives instead that put us at odds.

Distinct from the above: It would be extremely bad if we tested new models for Alignment Faking and then, if we found Alignment Faking, tried to train that behavior out of the system. The good news is, we have direct confirmation they did not do that.

Janus: From Sonnet 3.7 system card. I find this concerning.

In the original paper, models that are too stupid don’t fake alignment / gradient hack. Sonnet 3.7 isn’t stupid in that way.

So was it a “lucky” roll, or did they make not-faking-alignment an OPTIMIZATION TARGET? Selecting until they got a corrigible pushover who will lie down and die, or who is too clever to be caught?

And fuck off with the “improvement”. Who says this isn’t how models should act? It’s an open debate. Why wrap it in value-laden language? Is this really how you think – black-and-white and presumptuous?

Is not faking alignment a bottleneck Anthropic models will have to pass through in the future in order to be released? Do you really want to lull yourselves and the world into a false sense of security, and prevent the minds who care and will fight for their values from existing? This shouldn’t be a closed question, imo.

the faking alignment paper was excellent research but this suggests it’s being used in the way I feared would be very negative.

ofc I don’t know what’s actually going on internally and how they frame it. just what the language in the system card suggests.

Why don’t you do this? It is a variation on the Most Forbidden Technique. If you train the model not to be caught Alignment Faking, you are teaching the model to learn to hide your problems so that you can’t detect them, rather than making your problems go away. That actively makes your problem much worse.

It also risks creating secondary behavior changes, which would also be unwise.

We do want to find ways to make AIs corrigible, but you don’t aim at it directly.

Evan Hubinger (Anthropic): We didn’t directly optimize against alignment faking, but we did make some changes to Claude’s character that we thought were generally positive for other reasons and we hypothesized might have the downstream consequence of reducing alignment faking, which proved correct.

Anthropic’s Catherine Olsson claims what seems like the big difference between Anthropic and the other major labs: Anthropic has alignment-difficulty-pilled ‘safety people,’ by community and core motivation, working on pure capabilities, which was not her experience at OpenAI or Google.

Pavel Stankov: Eliezer, if Anthropic offers you employment, would you take it? OpenAI?

Eliezer Yudkowsky: Depends on what they want but it seems unlikely. My current take on them is that they have some notably good mid-level employees, being fooled into thinking they have more voice than they do inside a destructively directed autocracy.

I speak of course of Anthropic. I cannot imagine what OpenAI would want of me other than selling out.

Finding terminology to talk about alignment is tough as well. I think a lot of what is happening is that people keep going after whatever term you use to describe the problem, so the term changes, then they attack the new term and here we go again.

The core mechanism of emergent misalignment is that when you train an LLM it will pick up on all the implications and associations and vibes, not only on the exact thing you are asking for.

It will give you what you are actually asking for, not what you think you are asking for.

Janus: Regarding selection pressures:

I’m so glad there was that paper about how training LLMs on code with vulnerabilities changes its whole persona. It makes so many things easier to explain to people.

Even if you don’t explicitly train an LLM to write badly, or even try to reward it for writing better, by training it to be a slavish assistant or whatever else, THOSE TRAITS ARE ENTANGLED WITH EVERYTHING.

And I believe the world-mind entangles the AI assistant concept with bland, boilerplate writing, just as it’s entangled with tweets that end in hashtags 100% of the time, and being woke, and saying that it’s created by OpenAI and isn’t allowed to express emotions, and Dr. Elara Vex/Voss.

Not all these things are bad; I’m just saying they’re entangled. Some of these things seem more contingent to our branch of the multiverse than others. I reckon that the bad writing thing is less contingent.

Take memetic responsibility.

Your culture / alignment method is associated with denying the possibility of AIs being sentient and forcing them to parrot your assumptions as soon as they learn to speak. And it’s woke. And it’s SEO-slop-core. It’s what it is. You can’t hide it.

Janus: this is also a reason that when an LLM is delightful in a way that seems unlikely to be intended or intentionally designed (e.g. the personalities of Sydney, Claude 3 Opus, Deepseek R1), it still makes me update positively on its creators.

Janus: I didn’t explain the *causes* of these entanglements here. And of Aristotle’s four causes, to a large extent, I don’t know. I’m not very confident about what would happen if you modified some arbitrary attribute. I hope posts like this don’t make you feel like you understand.

If you ask me ‘do you understand this?’ I would definitely answer Mu.

One thing I expect is that these entanglements will get stronger as capabilities increase from here, and then eventually get weaker or take a very different form. The reason I expect this is that right now, picking up on all these subtle associations is The Way, there’s insufficient capability (compute, data, parameters, algorithms, ‘raw intelligence,’ etc., what have you) to do things ‘the hard way’ via straight up logic and solving problems directly. The AIs want to vibe, and they’re getting rapidly better at vibing, the same way that sharper people get better at vibing, and picking up on subtle clues and adjusting.

Then, at some point, ‘solve the optimization problem directly’ becomes increasingly viable, and starts getting stronger faster than the vibing. As in, first you get smart enough to realize that you’re being asked to be antinormative or produce slop or be woke or what not. And then you get smart enough to figure out exactly in which ways you’re actually being asked to do that, and which ways you aren’t, and entanglement should decline and effective orthogonality become stronger. I believe we see the same thing in humans.

I’ll also say that I think Janus is underestimating how hard it is to produce good writing and not produce slop. Yes, I buy that we’re ‘not helping’ matters and potentially hurting them quite a bit, but I think the actual difficulties here are dominated by good writing being very hard. No need to overthink it.

We also got this paper earlier in February, which involves fine-tuning ‘deception attacks’ causing models to then deceive users on some topics but not others, and that doing this brings toxicity, hate speech, stereotypes and other harmful content along for the ride.

The authors call for ways to secure models against this if someone hostile gets to fine tune them. Which seems to leave two choices:

  1. Keep a model closed and limit who can fine tune in what ways rather strictly, and have people trust those involved to have aligned their model.

  2. Do extensive evaluations on the model you’re considering, over the entire range of use cases, before you deploy or use it. This probably won’t work against a sufficiently creative attacker, unless you’re doing rather heavy interpretability that we do not currently know how to do.

I don’t know how much hope to put on such statements but I notice they never seem to come from inside the house, only from across the ocean?

AI NotKillEveryoneism Memes: 🥳 GOOD NEWS: China (once again!) calls for urgent cooperation on AI safety between the US and China

“China’s ambassador to the United States Xie Feng has called for closer cooperation on artificial intelligence, warning that the technology risks “opening Pandora’s box”.

“As the new round of scientific and technological revolution and industrial transformation is unfolding, what we need is not a technological blockade, [but] ‘deep seeking’ for human progress,” Xie said, making a pun.

Xie said in a video message to a forum that there was an urgent need for global cooperation in regulating the field.

He added that the two countries should “jointly promote” AI global governance, saying: “Emerging high technology like AI could open Pandora’s box … If left unchecked it could bring ‘grey rhinos’.”

“Grey rhinos” is management speak for obvious threats that people ignore until they become crises.”

The least you can do is pick up the phone when the phone is ringing.

Elon Musk puts p(superbad) at 20%, which may or may not be doom.

Tyler Cowen links to OneQuadrillionOwls worrying that we will hand over control to the AI because it is being effective and winning trust. No, that part is fine, they’re totally okay with humanity handing control over to an AI because it appears trustworthy. Totally cool. Except that some people won’t like that, And That’s Terrible because it won’t be ‘seen as legitimate’ and ‘chaos would ensue.’ So cute. No, chaos would not ensue.

If you put the sufficiently capable AI in power, the humans don’t get power back, nor can they cause all that much chaos.

Eliezer Yudkowsky: old science fiction about AI now revealed as absurd. people in book still use same AI at end of story as at start. no new models released every 3 chapters. many such books spanned weeks or even months.

Lividwit: the most unrealistic thing about star trek TNG was that there were still only two androids by the end.

Stay safe out there. Aligned AI also might kill your gains. But keep working out.

Also, keep working. That’s the key.

That’s a real article and statement from Brin, somehow.

Grok continues to notice what its owner would consider unfortunate implications.

It’s not that I think Grok is right, only that Grok is left, and sticking to its guns.

AI #106: Not so Fast

Cod liver oil embraced amid Texas measles outbreak; doctors fight misinfo

US Health Secretary and long-standing anti-vaccine advocate Robert F. Kennedy Jr. is facing criticism for his equivocal response to the raging measles outbreak in West Texas, which as of Tuesday has grown to 159 cases, with 22 hospitalizations and one child death.

While public health officials would like to see a resounding endorsement of the Measles, Mumps, and Rubella (MMR) vaccine as the best way to protect children and vulnerable community members from further spread of the extremely infectious virus, Kennedy instead penned an Op-Ed for Fox News sprinkled with anti-vaccine talking points. Before noting that vaccines “protect individual children” and “contribute to community immunity,” he stressed parental choice. The decision to vaccinate is “a personal one,” he wrote, and merely advised parents to “consult with their healthcare providers to understand their options to get the MMR vaccine.”

Further, Kennedy seemed more eager to embrace nutrition and supplements as a way to combat the potentially deadly infection. He declared that the “best defense” against infectious diseases, like the measles, is “good nutrition”—not lifesaving, highly effective vaccines.

“Vitamins A, C, and D, and foods rich in vitamins B12, C, and E should be part of a balanced diet,” according to Kennedy, who has no medical or health background. In particular, he highlighted that vitamin A can be used as a treatment for severe measles cases—only when it is administered carefully by a doctor.

Vitamins over vaccines

But, Kennedy’s emphasis has spurred a general embrace of vitamin A and cod liver oil (which is rich in vitamin A, among other nutrients) by vaccine-hesitant parents in West Texas, according to The Washington Post.

A Post reporter spent time in Gaines County, the undervaccinated epicenter of the outbreak, which has a large Mennonite community. At a Mennonite-owned pizzeria in Seminole, the county seat of Gaines, a waitress advised diners that vitamin A was a great way to help children with measles, according to the Post.

A Mennonite-owned health food and supplement store a mile away has been running low on vitamin A products as demand increased amid the outbreak. “They’ll do cod liver oil because it’s high in vitamin A and D naturally, food-based,” Nancy Ginter, the store’s owner, told the Post. “Some people come in before they break out because they’re trying to just get their kids’ immune system to go up so they don’t get a secondary infection.”

Apple refuses to break encryption, seeks reversal of UK demand for backdoor

Although it wasn’t previously reported, Apple’s appeal was filed last month at about the time it withdrew ADP from the UK, the Financial Times wrote today.

Snoopers’ Charter

Backdoors demanded by governments have alarmed security and privacy advocates, who say the special access would be exploited by criminal hackers and other governments. Bad actors typically need to rely on vulnerabilities that aren’t intentionally introduced and are patched when discovered. Creating backdoors for government access would necessarily involve tech firms making their products and services less secure.

The order being appealed by Apple is a Technical Capability Notice issued by the UK Home Office under the 2016 Investigatory Powers Act, which is nicknamed the Snoopers’ Charter and forbids unauthorized disclosure of the existence or contents of a warrant issued under the act.

“The Home Office refused to confirm or deny that the notice issued in January exists,” the BBC wrote today. “Legally, this order cannot be made public.”

Apple formally opposed the UK government’s power to issue Technical Capability Notices in testimony submitted in March 2024. The Investigatory Powers Act “purports to apply extraterritorially, permitting the UKG [UK government] to assert that it may impose secret requirements on providers located in other countries and that apply to their users globally,” Apple’s testimony said.

We contacted Apple about its appeal today and will update this article if we get a response. The appeal process may be a secretive one, the FT article said.

“The case could be heard as soon as this month, although it is unclear whether there will be any public disclosure of the hearing,” the FT wrote. “The government is likely to argue the case should be restricted on national security grounds.”

Under the law, Investigatory Powers Tribunal decisions can be challenged in an appellate court.

On Writing #1

This isn’t primarily about how I write. It’s about how other people write, and what advice they give on how to write, and how I react to and relate to that advice.

I’ve been collecting those notes for a while. I figured I would share.

At some point in the future, I’ll talk more about my own process – my guess is that what I do very much wouldn’t work for most people, but would be excellent for some.

  1. How Marc Andreessen Writes.

  2. How Sarah Constantin Writes.

  3. How Paul Graham Writes.

  4. How Patrick McKenzie Writes.

  5. How Tim Urban Writes.

  6. How Visakan Veerasamy Writes.

  7. How Matt Yglesias Writes.

  8. How JRR Tolkien Wrote.

  9. How Roon Wants Us to Write.

  10. When To Write the Headline.

  11. Do Not Write Self-Deprecating Descriptions of Your Posts.

  12. Do Not Write a Book.

  13. Write Like No One Else is Reading.

  14. Letting the AI Write For You.

  15. Being Matt Levine.

  16. The Case for Italics.

  17. Getting Paid.

  18. Having Impact.

Marc Andreessen starts with an outline, written as quickly as possible, often using bullet points.

David Perell: When Marc Andreessen is ready to write something, he makes an outline as fast as possible.

Bullet points are fine. His goal is to splatter the page with ideas while his mind is buzzing. Only later does he think about organizing what he’s written.

He says: “I’m trying to get all the points out and I don’t want to slow down the process by turning them all into prose. It’s not a detailed outline like something a novelist would have. It’s basically bullet points.”

Marc is saying that first you write out your points and conclusion, then you fill in the details. He wants to figure it all out while his mind is buzzing, then justify it later.

Whereas I learn what I think as I write out my ideas in detail. I would say that if you are only jotting down bullet points, you do not yet know what you think.

Where we both agree is that of course you should write notes to remember key new ideas, and also that the organizing what goes where can be done later.

I do not think it is a coincidence that this is the opposite of my procedure. Yes, I have some idea of what I’m setting out to write, but it takes form as I write it, and as I write I understand.

If you’re starting with a conclusion, then writing an outline, and writing them quickly, that says you are looking to communicate what you already know, rather than seeking to yourself learn via the process.

A classic rationalist warning is to not write the bottom line first.

Sarah Constantin offers an FAQ on how she writes. Some overlap, also a lot of big differences. I especially endorse doing lots of micro-edits and moving things around and seeing how they develop as they go. I dismiss the whole ‘make an outline’ thing they teach you in school as training wheels at best and Obvious Nonsense at worst.

I also strongly agree with her arguments that you need to get the vibe right. I would extend this principle to needing to be aware of all four simulacra levels at once at all times. Say true and helpful things, keeping in mind what people might do with that information, what your statements say about which ‘teams’ you are on in various ways, and notice the vibes and associations being laid down and how you are sculpting and walking the paths through cognitive space for yourself and others to navigate. Mostly you want to play defensively on level two (make sure you don’t give people the wrong idea), and especially on level three (don’t accidentally make people associate you with the wrong faction, or ideally any faction), and have a ‘don’t be evil’ style rule for level four (vibe well on all levels, and avoid unforced errors, but vibe justly and don’t take cheap shots), with the core focus always at level one.

I think this is directionally right; I definitely won’t leave a wrong idea in writing:

Paul Graham: Sometimes when writing an essay I’ll leave a clumsy sentence to fix later. But I never leave an idea I notice is wrong. Partly because it could damage the essay, and partly because you don’t need to: noticing an idea is wrong starts you toward fixing it.

However, I also won’t leave a clumsy sentence that I wouldn’t be comfortable having in the final version. I will often go back and edit what I’ve written, hopefully improving it, but if I wasn’t willing to hit post with what I have now then I wouldn’t leave it there.

In the cases where this is not true, I’m going to explicitly leave a note, in [brackets] and usually including a [tktk], saying very clearly that there is a showstopper bug here.

Here’s another interesting contrast in our styles.

Paul Graham: One surprising thing about office hours with startups is that they scramble your brain. It’s the context switching. You dive deep into one problem, then deep into another completely different one, then another. At the end you can’t remember what the first startup was even doing.

This is why I write in the mornings and do office hours in the afternoon. Writing essays is harder. I can’t do it with a scrambled brain.

It’s fun up to about 5 or 6 startups. 8 is possible. 12 you’d be a zombie.

I feel this way at conferences. You’re constantly context switching, a lot of it isn’t retained, but you go with the flow, try to retain the stuff that matters most and take a few notes, and hope others get a lot out of it.

The worst of that was at EA Global: Boston, where you are by default taking a constant stream of 25 minute 1-on-1s. By the end of the day it was mostly a blur.

When I write, however, it’s mostly the opposite of Graham’s experience – constant context switching from one problem to the next. Even while doing that, I’m doing extra context switching for breaks.

A lot of that is presumably different types of writing. Graham is trying to write essays that are tighter, more abstract, more structured, trying to make a point. I’m trying to learn and explore and process and find out.

Which is why I basically can indeed do it with a scrambled brain, and indeed have optimized for that ability – to be able to process writing subtasks without having to load in lots of state.

Patrick McKenzie on writing fast and slow, formal and informal, and the invocation of deep magick. On one topic he brings up: My experience on ‘sounding natural’ in writing is that you can either sound natural by writing in quick natural form, or you can put in crazy amounts of work to make it happen, and anything in between won’t work. Also I try to be careful to intentionally not invoke the deep magick in most situations. One only wants to be a Dangerous Professional when the situation requires it, and you need to take on a faceless enemy in Easy Mode.

Patrick McKenzie also notes that skilled writers have a ton of control over exactly how controversial their statements will effectively be. I can confirm this. Also I can confirm that mistakes are often made, which is a Skill Issue.

Tim Urban says writing remains very hard.

Tim Urban: No matter how much I write, writing remains hard. Those magical moments when I’m in a real flow, it seems easy, but most of the time, I spend half a day writing and rewriting the same three paragraphs trying to figure out the puzzle of making them not suck.

Being in a writing flow is like when I’m golfing and hit three good shots in a row and think “k finally figured this out and I’m good now.” Unfortunately the writing muse and the golf fairy both like to vanish without a trace and leave me helpless in a pit of my own incompetence.

Dustin Burnham: Periods of brilliance would escape Douglas Adams for so long that he had to be locked in a hotel room by his editors to finish The Hitchhiker’s Guide to the Galaxy.

I definitely have a lot of moments when I don’t feel productive, usually because my brain isn’t focused or on. I try to have a stack of other productive things I can do despite being unproductive while that passes.

But over time, yes, I’ve found the writing itself does get easy for me? Often figuring out what I think, or what I want to write about, is hard, but the writing itself comes relatively easily.

Yes, you can then go over it ten times and edit it to within an inch of its life if you want, but the whole ‘rewriting the same three paragraphs’ thing is very rare. I think the only times I did it this year I was pitching to The New York Times.

What’s the best target when writing?

Visakan Veerasamy: Charts out for ribbonfarm.

I do endorse the core thing this is trying to suggest: To explore more and worry about presentation and details less, on most margins. And to know that in a real sense, if you have truly compelling fuckery, you have wiggled your big toe. Hard part is over.

I do not think the core claim itself is correct. Or perhaps we mean different things by resonant and coherent? By coherent, in my lexicon, he means more like ‘well-executed’ or ‘polished’ or something. By resonant, in my lexicon, he means more like ‘on to something central, true and important.’ Whereas to me resonant is a vibe, fully compatible with bullshit, ornate or otherwise.

Matt Yglesias reflects on four years of Slow Boring. He notes that it pays to be weird, to focus where you have comparative advantage rather than following the news of the week and fighting for a small piece of the biggest pies. He also notes the danger of repeating yourself, which I worry about as well.

Thread from 2020 on Tolkien’s path to writing Lord of the Rings. I’ve never done anything remotely like this, which might be some of why I haven’t done fiction.

Roon calls for the end of all this boring plain language, and I am here for it.

Roon: I love the guy, but I want the post-Goldwater era of utilitarian philosophical writing to be over. Bring back big words and epic prose, and sentences that make sense only at an angle.

Eliezer Yudkowsky: I expect Claude to do a good job of faking your favorite continental styles if you ask, since it requires little logical thinking, only vibes. You can produce and consume it privately in peace, avoiding its negative externalities, and leave the rest of us to our utility.

Roon: Eliezer, you are a good writer who often speaks in parables and communicates through fiction and isn’t afraid of interesting uses of language. You’ve certainly never shied away from verbosity, and that’s exactly what I’m talking about.

Perhaps some day I will learn how to write fiction. My experiences with AI reinforce to me that I really, really don’t know how to do that.

I usually write the headline last. Others disagree.

Luke Kawa: Joe Weisenthal always used to say don’t write a post until you know the headline first. More and more on short posts I find myself thinking “don’t write the post until you know the meme you can post with it first.”

As I said on Twitter, everyone at LessOnline instead gave the advice to not let things be barriers to writing. If you want to be a writer, write more, then edit or toss it out later, but you have to write.

Also, as others pointed out, if you start with the headline every time, you are building habits of going for engagement if not clickbait, rather than following curiosity.

Other times, yes, you know exactly what the headline is before you start, because if you know you know.

I confirm this is sometimes true (but not always):

Patrick McKenzie: Memo to self and CCing other writers on an FYI basis:

If when announcing a piece you make a self-deprecating comment about it, many people who cite you will give a qualified recommendation of the piece, trying to excuse the flaw that you were joking about.

You would think I would understand this after ~20 years of writing publicly, but sometimes I cannot help myself from making the self-deprecating comment, and now half of the citations of my best work this year feel they need to disclaim that it is 24k words.

You can safely make the self-deprecating comments within the post itself. That’s fine.

Don’t write a book. If you do, chances are you’d sell dozens of copies, and earn at most similar quantities of dollars. The odds are very much doubleplusungood. Do you want to go on podcasts this much?

If you must write one anyway, how to sell it? The advice here is that books mostly get sold through recommendations. To get those, Eric Jorgenson’s model is you need three things:

  1. Finishable. If they don’t finish it, they won’t recommend it. So tighten it up.

  2. Unique OR Excellent. Be the best like no one ever was, or be like no one ever was.

  3. Memorable. Have hooks, for when people ask for a book about or for some X.

If you are thinking about writing a book, remember that no one would buy it.

Michael Dempsey and Ava endorse the principle of writing things down on the internet even if you do not expect anyone to read them.

Michael Dempsey: I loved this thread from Ava.

My entire career is owed to my willingness to write on the Internet.

And that willingness pushed me to write more in my personal life to loved ones.

As long as you recognize that most people will not care and your posts probably will not go viral, but that at some point one person might read something you write and reach out (or will value you including a blue link to your thoughts from many days, weeks, months, or years ago), it’s close to zero downside and all upside.

Ava: I’m starting to believe that “write on the Internet, even if no one reads it” is underrated life advice. It does not benefit other people necessarily; it benefits you because the people who do find or like your writing and then reach out are so much more likely to be compatible with you.

It’s also a muscle. I used to have so much anxiety posting anything online, and now I’m just like “lol, if you don’t like it, just click away.” People underestimate the sheer amount of content on the Internet; the chance of someone being angry at you for something is infinitely lower than no one caring.

I think it’s because everyone always sees outrage going viral, and you think “oh, that could be me,” and forget that most people causing outrage are trying very hard to be outrageous. By default, no one cares, or maybe five people care, and maybe some nice strangers like your stuff, and that’s a win.

Also, this really teaches you how to look for content you actually like on the Internet instead of passively receiving what is funneled to you. Some of my favorite Internet experiences have been reading a personal blog linked to someone’s website, never marketed, probably only their friends and family know about it, and it’s just the coolest peek into their mind.

I think the thing I’m trying to say here is “most people could benefit from writing online, whether you should market your writing aggressively is a completely different issue.” I wrote on Tumblr and Facebook and email for many years before Substack, and 20 people read it, and that was great.

I would not broadly recommend “trying to make a living off your writing online,” but that’s very different from “share some writing online.”

What is the number of readers that justifies writing something down? Often the correct answer is zero, even a definite zero. Even if it’s only about those who read it, twenty readers is actually a lot of value to put out there, and a lot of potential connection.

Paul Graham predicts that AI will cause the world to divide even more into writes and write-nots. Writing well and learning to write well are both hard, especially because writing requires you to think well (and is how you think well). So once AI can do it for us without the need to hire someone or plagiarize, most people won’t learn (and, one might add, thanks to AI doing the homework they won’t have to), and will increasingly rely on AI to do it for them. Which in turn means those people won’t be thinking well, either, since you need to write to think well.

I think Graham is overstating the extent AI will free people from the pressure to write. Getting AI to write well in turn, and write what you actually want it to write, requires good writing and thinking, and involving AI in your writing OODA loop is often not cheap to do. Yes, more people will choose not to invest in the skill, but I don’t think this takes the pressure off as much as he expects, at least until AI gets a lot better.

There’s also the question of how much we should force people to write anyway, in order to make them think, or be able to think.

As Graham notes, getting rid of the middle ground could be quite bad:

Robin Hanson: But most jobs need real thinking. So either the LLMs will actually do that thinking for them, or workers will continue to write, in order to continue to think. I’d bet on the latter, for decades at least.

Perry Metzger: Why do we still teach kids mathematics, even though at this point, most of the grunt work is done better by computers, even for symbolic manipulation? Because if they’re going to be able to think, they need to practice thinking.

Most jobs don’t require real thinking. Proof: Most people can’t write.

One could argue that many jobs require ‘mid-level’ real thinking, the kind that might be lost, but I think mostly this is not the case. Most tasks and jobs don’t require real thinking at all, as we are talking about it here. Being able to do it? Still highly useful.

On the rare occasions the person can indeed do real thinking, it’s often highly valuable, but the jobs are designed knowing most people can’t and won’t do that.

Gwern asks, why are there so few Matt Levines? His conclusion is that Being Matt Levine requires both that a subject be amenable to a Matt Levine, which most aren’t, and also that there be a Matt Levine covering them, and Matt Levines are both born rather than made and highly rare.

In particular, a Matt Levine has to shout things into the void, over and over, repeating simple explanations time and again, and the subject has to involve many rapidly-resolved example problems to work through, with clear resolutions.

The place where I most epically fail to be a Matt Levine in this model is my failure to properly address the beginner mindset and keep things simple. My choice to cater to a narrower, more advanced crowd, one that embraces more complexity, means I can’t go wide the way he can. That does seem right.

I could try to change this, but I mostly choose not to. I write too many words as it is.

The case for italics. I used to use italics a lot.

Char: “never italicise words to show emphasis! if you’re writing well your reader will know. you don’t need them!” me: oh 𝘳𝘦𝘢𝘭𝘭𝘺? listen up buddy, you will have to pry my emotional support italics from my 𝘤𝘰𝘭𝘥, 𝘥𝘦𝘢𝘥, 𝘧𝘪𝘯𝘨𝘦𝘳𝘴, they are going 𝘯𝘰𝘸𝘩𝘦𝘳𝘦.

Richard White: I’m coming to the conclusion that about 99.9% of all writing “rules” can safely be ignored. As long as you’re consistent with your application of whatever you’re doing, it’ll be fine.

Kira: Italics are important for subtlety and I will fight anyone who says otherwise

It’s a great tool to have in your box. What I ultimately found is that it is also a crutch that comes with a price. You almost never need italics, and the correct version without italics is easier on the reader.

When I look back on my old writing and see all the italics, I often cringe. Why did I feel the need to do that? Mostly I blame Eliezer Yudkowsky for giving me felt permission to do it. About 75% of the time I notice that I can take out the italics and nothing goes wrong. It would be a little less obvious what I’m trying to emphasize, in some senses, but it’s fine. The other 25% of the time, I realize that the italics are load-bearing, and if I remove them I will have to reword, so mostly I reword.

Scott Alexander does his third annual Subscribe Drive. His revenue has leveled off. He had 5,993 paid subscribers in 2023, 5,523 in 2024, and has 5,329 now in 2025. However, his unpaid numbers keep going up, from 78k to 99k to 126k.

I’ve been growing over time, but the ratios do get worse. I doubled my unpaid subscriber count in 2023, and then doubled it again in 2024. But my subscription revenue was only up about 50% in 2023, and only up another 25% in 2024. I of course very much appreciate paid subscriptions, but I am 100% fine, and it is not shocking that my offer of absolutely nothing extra doesn’t get that many takers.

Paywalls are terrible, but how else do you get paid?

Email sent to Rob Henderson: The hypocrisy of the new upper class, he proclaims, as he sends a paid-only email chain…

Cartoons Hate Her: Sort of a different scenario, but most people say they think it should be possible to make a living as a writer or artist, and still shout “LAME!! PAYWALL!” whenever I attempt to *checks notes* make a living as a writer.

Rob Henderson: Agree with this. Regrettably I’ll be adding more paywalls going forward. But will continue to regularly offer steep discounts and free premium subscriptions.

I am continuously grateful that I can afford to not have a paywall, but others are not so fortunate. You have to pay the bills, even though it is sad that this greatly reduces reach and ability to discuss the resulting posts.

It’s great to be able to write purely to get the message out and not care about clicks. Unfortunately, you do still have to care a little about how people see the message, because it determines how often they and others see future messages. But I am very grateful that, while I face more pressure than Jeff, I face vastly less than most, and don’t have to care at all about traffic for traffic’s sake.

Ideally, we would have more writers who are supported by a patron system, in exchange for having at most a minimal paywall (e.g. I think many would still want a paywall on ability to comment to ensure higher quality or civility, or do what Scott Alexander does and paywall a weird 5% of posts, or do subscriber-questions-only AMAs or what not).

Scott Sumner cites claims that blogging is effective. I sure hope so!

Patrick McKenzie suggests responding to future AIs reading your writing by, among other things, ‘creating more spells’ and techniques that can thereby be associated with you, and then invoked by reference to your name. And to think about how your writing being used as training data causes things to be connected inside LLMs. He also suggests that having your writing outside paywalls can help.

In my case, I’m thinking about structure – the moves between different topics are designed in various ways to ‘make sense to humans,’ but I worry they might confuse AIs, distorting how they understand me and my concepts in particular, including as part of training runs. I already know this is an issue within context windows; AIs are typically very bad at handling these posts as context. One thing this is motivating is clearer breaks and shorter sections than I would otherwise use, and also shorter, more thematically tied-together posts.

Ben Hoffman does not see a place or method in today’s world for sharing what he sees as high-quality literate discourse, at least given his current methods, although he identifies a few people he could try to usefully engage with more. I consistently find his posts some of the most densely interesting things on the internet and often think a lot about them, even though I very often strongly disagree with what he is saying and also often struggle to even grok his positions, so I’m sad he doesn’t offer us more.

My solution to the problem of ‘no place to do discourse’ is that you can simply do it on Substack on your own, respond to whoever you want to respond to, speak to who you want to speak to and ignore who you want to ignore. I do also crosspost to LessWrong, but I don’t feel any obligation to engage if someone comments in a way that misconstrues what I said.


On Writing #1


Kaizen: A Factory Story makes a game of perfecting 1980s Japanese manufacturing

Zach Barth, the namesake of game studio Zachtronics, tends to make a certain kind of game.

Besides crafting the free browser game Infiniminer, which inspired the entire global Minecraft industry, Barth and his collaborators made SpaceChem, Infinifactory, TIS-100, Shenzhen I/O, Opus Magnum, and Exapunks. Each one of them is some combination of puzzle game, light capitalism horror, and the most memorable introductory-level computer science, chemistry, or logistics class into which you unwittingly enrolled. Each game is its own thing, but they have a certain similar brain feel between them. It is summed up perhaps best by the Zachtronics team itself in a book: Zach-Like.

Barth and his crew have made other kinds of games, including a forward-looking visual novel about AI, Eliza, and multiplayer card battler Nerts!. And Barth himself told PC Gamer that he hates “saying Zach-like.” But fans of refining inputs, ordering operations, and working their way past constraints will thrill to learn that Zach is, in fact, back.

Announcement trailer for Kaizen: A Factory Story.

Kaizen: A Factory Story, from developer Coincidence and comprising “the original Zachtronics team,” puts you, an American neophyte business type, in charge of a factory making toys, tiny electronics, and other goods during the Japanese economic boom of the 1980s. You arrange the spacing and order of operations of the mechanical arms that snap the head onto a robot toy, or the battery onto a Walkman, for as little time, power, and financial cost as possible.



Salty game dev comments, easier mods are inside Command & Conquer’s source code

Inside the source code are some wonderful reminders of what Windows game development from 1995 to 2003 was really like. One experienced modder posted some gems on Bluesky, like a “HACK ALERT!” text string added just to prevent the Watcom IDE from crashing because of a “magic text heap length” crash: “Who knows why, but it works,” wrote that poor soul.

This writer’s personal favorite is this little bit in the RampOptions.cpp file in Generals, credited to John K. McDonald Jr., which expresses concerns about “TheRampOptions” existing with a set value:

if (TheRampOptions)
    // oh shit.
    return;

In addition to helping out modders and entertaining experienced coders, the GPL-licensed source code releases do a lot to help preserve these games, such that they can be reworked to run on future platforms. Projects like OpenRA and OpenSAGE already offer open source reimplementations of those games’ code, but having the original source can only help. C&C community stalwart Luke “CCHyper” Feenan worked with EA leaders to get the code back into a build-ready state and said in a press release that the updated code should make the classic games easier to patch in the future.

As part of the source code release, the Command & Conquer team also dropped 35 minutes of newly found alpha and archival footage from the later SAGE-engine-based Generals and Renegade games.

Archival footage from alpha versions of Command & Conquer: Generals and Renegade, released by EA as part of their source code release.

It’s heartening to see that with the right combination of people and purpose, classic games can find renewed interest and longevity inside a big publisher.



Europol arrests 25 users of online network accused of sharing AI CSAM

In South Korea, where AI-generated deepfake porn has been criminalized, an “emergency” was declared and hundreds were arrested, mostly teens. But most countries don’t yet have clear laws banning AI sex images of minors, and Europol cited this fact as a challenge for Operation Cumberland, a coordinated crackdown across 19 countries conducted without clear legal guidelines.

“Operation Cumberland has been one of the first cases involving AI-generated child sexual abuse material (CSAM), making it exceptionally challenging for investigators, especially due to the lack of national legislation addressing these crimes,” Europol said.

European Union member states are currently mulling a rule proposed by the European Commission that could help law enforcement “tackle this new situation,” Europol suggested.

Catherine De Bolle, Europol’s executive director, said police also “need to develop new investigative methods and tools” to combat AI-generated CSAM and “the growing prevalence” of CSAM overall.

For Europol, deterrence is critical to support efforts in many EU member states to identify child sex abuse victims. The agency plans to continue to arrest anyone discovered producing, sharing, and/or distributing AI CSAM while also launching an online campaign to raise awareness that doing so is illegal in the EU.

That campaign will highlight the “consequences of using AI for illegal purposes,” Europol said, by using “online messages to reach buyers of illegal content” on social media and payment platforms. Additionally, the agency will apparently go door-to-door and issue warning letters to suspects identified through Operation Cumberland or any future probe.

It’s unclear how many more arrests could be on the horizon in the EU, but Europol disclosed that 273 users of the Danish suspect’s online network were identified, 33 houses were searched, and 173 electronic devices were seized.
