

Cambridge mapping project solves a medieval murder


“A tale of shakedowns, sex, and vengeance that expose[s] tensions between the church and England’s elite.”

Location of the murder of John Forde, taken from the Medieval Murder Maps. Credit: Medieval Murder Maps, University of Cambridge Institute of Criminology

In 2019, we told you about a new interactive digital “murder map” of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners’ rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases.

It’s easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman’s hired assassins armed with daggers in the streets of Cheapside near St. Paul’s Cathedral. And that’s just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. “We are looking at a murder commissioned by a leading figure of the English aristocracy,” said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. “It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive.”

Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners’ rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury’s verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.
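The fields the coroners' rolls preserve map naturally onto a structured record. As a rough illustration only (this is not the project's actual schema; the field names and coordinates below are invented), each geocoded case might be represented like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InquestRecord:
    """One homicide case transcribed from a coroner's roll (illustrative schema)."""
    year: int
    location: str                       # approximate crime-scene description
    latitude: float                     # geocoded coordinates (approximate)
    longitude: float
    weapon: str
    weapon_value_pence: Optional[int]   # juries appraised the weapon's worth
    verdict: str
    perpetrator_outcome: str            # e.g. "arrested", "escaped", "sought sanctuary"

# The 1337 Forde case as described in the article (coordinates illustrative):
forde = InquestRecord(
    year=1337,
    location="Cheapside near Foster Lane, London",
    latitude=51.5145, longitude=-0.0955,
    weapon="dagger",
    weapon_value_pence=None,
    verdict="homicide",
    perpetrator_outcome="escaped",
)
print(forde.weapon)  # → dagger
```

Once cases are in a form like this, the map's filters by year, weapon, and location reduce to simple queries over the records.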

A brazen killing

The murder of Forde was one of several premeditated revenge killings recorded in the area of Westcheap. Forde was walking on the street when another priest, Hascup Neville, caught up to him, ostensibly for a casual chat, just after Vespers but before sunset. As they approached Foster Lane, Neville’s four co-conspirators attacked: Ela Fitzpayne’s brother, Hugh Lovell; two of her former servants, Hugh of Colne and John Strong; and a man called John of Tindale. One of them cut Forde’s throat with a 12-inch dagger, while two others stabbed him in the stomach with long fighting knives.

At the inquest, the jury identified the assassins, but that didn’t result in justice. “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn a blind eye,” said Eisner. “A household of the highest nobility, and apparently no one knows where they are to bring them to trial. They claim Ela’s brother has no belongings to confiscate. All implausible. This was typical of the class-based justice of the day.”

Colne, the former servant, was eventually charged and imprisoned for the crime some five years later in 1342, but the other perpetrators essentially got away with it.

Eisner et al. uncovered additional historical records that shed more light on the complicated history and ensuing feud between the Fitzpaynes and Forde. One was an indictment in the Calendar of Patent Rolls of Edward III, detailing how Ela and her husband, along with Forde and several other accomplices, raided a Benedictine priory in 1321. Among other crimes, the intruders “broke [the prior’s] houses, chests and gates, took away a horse, a colt and a boar… felled his trees, dug in his quarry, and carried away the stone and trees.” The gang also stole 18 oxen, 30 pigs, and about 200 sheep and lambs.

There were also letters that the Archbishop of Canterbury wrote to the Bishop of Winchester. Translations of the letters are published for the first time on the project’s website. The archbishop called out Ela by name for her many sins, including adultery “with knights and others, single and married, and even with clerics and holy orders,” and devised a punishment. This included not wearing any gold, pearls, or precious stones and giving money to the poor and to monasteries, plus a dash of public humiliation. Ela was ordered to perform a “walk of shame”—a tamer version of Cersei’s walk in Game of Thrones—every fall for seven years, carrying a four-pound wax candle to the altar of Salisbury Cathedral.


The London Archives. Inquest number 15 on 1336-7 City of London Coroner’s Rolls. Credit: The London Archives

Ela outright refused to do any of that, instead flaunting “her usual insolence.” Naturally, the archbishop had no choice but to excommunicate her. But Eisner speculates that this humiliation may have festered within Ela over the ensuing years, sparking her desire for vengeance on Forde—who may have confessed to his affair with Ela to avoid being prosecuted for the 1321 raid. The archbishop died in 1333, four years before Forde’s murder. Ela was clearly a formidable person with the patience and discipline to serve her revenge dish cold. Her marriage to Robert (her second husband) endured despite her seemingly constant infidelity, and she inherited his property when he died in 1354.

“Attempts to publicly humiliate Ela Fitzpayne may have been part of a political game, as the church used morality to stamp its authority on the nobility, with John Forde caught between masters,” said Eisner. “Taken together, these records suggest a tale of shakedowns, sex, and vengeance that expose tensions between the church and England’s elites, culminating in a mafia-style assassination of a fallen man of god by a gang of medieval hitmen.”

I, for one, am here for the Netflix true crime documentary on Ela Fitzpayne, “a woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” per Eisner.

The role of public spaces

The ultimate objective of the Medieval Murder Maps project is to learn more about how public spaces shaped urban violence historically, the authors said. There were some interesting initial revelations back in 2019. For instance, the murders usually occurred in public streets or squares, and Eisner identified a couple of “hot spots” with higher concentrations than other parts of London. One was that particular stretch of Cheapside running from St Mary-le-Bow church to St. Paul’s Cathedral, where John Forde met his grisly end. The other was a triangular area spanning Gracechurch, Lombard, and Cornhill, radiating out from Leadenhall Market.

The perpetrators were mostly men (in only four cases were women the only suspects). As for weapons, knives and swords of varying types were the ones most frequently used, accounting for 68 percent of all the murders. The greatest risk of violent death in London was on weekends (especially Sundays), between early evening and the first few hours after curfew.
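Tallies like these fall out of simple frequency counts over the case records. A toy sketch (the mini-dataset below is invented, not the project's data) of how weapon share and day-of-week risk might be computed:

```python
from collections import Counter

# Hypothetical mini-dataset: (weapon_class, day_of_week) per homicide.
cases = [
    ("knife", "Sunday"), ("sword", "Saturday"), ("knife", "Sunday"),
    ("staff", "Tuesday"), ("knife", "Saturday"), ("sword", "Sunday"),
]

weapons = Counter(weapon for weapon, _ in cases)
days = Counter(day for _, day in cases)

# Share of homicides committed with blades (knives + swords):
blade_share = (weapons["knife"] + weapons["sword"]) / len(cases)
print(f"blades: {blade_share:.0%}")   # 5/6 ≈ 83% in this toy sample
print(days.most_common(1))            # [('Sunday', 3)]
```

The real analysis works the same way at larger scale: count cases per category, then look for categories (Sundays, blades, particular streets) that are overrepresented.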

Eisner et al. have now extended their spatial analysis to include homicides committed in York and Oxford in the 14th century, with similar conclusions. Murders most often took place in markets, squares, and thoroughfares—all key nodes of medieval urban life—in the evenings or on weekends. Oxford had significantly higher murder rates than York or London and also more organized group violence, “suggestive of high levels of social disorganization and impunity.” London, meanwhile, showed distinct clusters of homicides, “which reflect differences in economic and social functions,” the authors wrote. “In all three cities, some homicides were committed in spaces of high visibility and symbolic significance.”

Criminal Law Forum, 2025. DOI: 10.1007/s10609-025-09512-7  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

In the copyright fight, Magistrate Judge Ona Wang granted the order within one day of the NYT’s request. She agreed with news plaintiffs that it seemed likely that ChatGPT users may be spooked by the lawsuit and possibly set their chats to delete when using the chatbot to skirt NYT paywalls. Because OpenAI wasn’t sharing deleted chat logs, the news plaintiffs had no way of proving that, she suggested.

Now, OpenAI is not only asking Wang to reconsider but has “also appealed this order with the District Court Judge,” the Thursday statement said.

“We strongly believe this is an overreach by the New York Times,” Lightcap said. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”

Who can access deleted chats?

To protect users, OpenAI provides an FAQ that clearly explains why their data is being retained and how it could be exposed.

For example, the statement noted that the order doesn’t impact OpenAI API business customers under Zero Data Retention agreements because their data is never stored.

And for users whose data is affected, OpenAI noted that their deleted chats could be accessed, but they won’t “automatically” be shared with The New York Times. Instead, the retained data will be “stored separately in a secure system” and “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI explained.

Of course, with the court battle ongoing, the FAQ did not have all the answers.

Nobody knows how long OpenAI may be required to retain the deleted chats. Likely seeking to reassure users—some of whom appeared to be considering switching to a rival service until the order lifts—OpenAI noted that “only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.”



Dear readers: Let us know what you’d like to see more of on Ars

Since 1998, Ars has covered desktop computing, IT, gaming, and personal gadgets. Over the years, the remit has broadened to increase focus on science, space, policy, culture, automobiles, AI, and more.

You can expect our coverage in those areas to continue, but it’s important for Ars to ensure our editors stay on top of what our readers are most interested in. As we plan our approach for the coming months, we’d like to hear from you about what you’d like to see more of at Ars.

For example, do you want to see more focus on reviews of consumer technology? Is there a hunger for closer coverage of applications, toolkits, and issues relevant to professional software developers? Should we invest more effort in covering the latest AAA games on Steam? Are Ars readers excited to read more about 3D printing or drones?

Those are just a few examples, but we’d like to hear your ideas, whether they’re blue sky ideas for things we haven’t previously tracked at all, or just, “I’d love to see even more of [this].”

For this informal survey, let us know in the comments on this article what coverage areas you’d like to see us expand when and if we have the opportunity. We have to consider many factors when deciding how to invest our resources, but our community’s interest is the biggest one.



Rocket Report: SpaceX’s 500th Falcon launch; why did UK’s Reaction Engines fail?


SpaceX’s rockets make a lot more noise, but the machinations of Texas’ newest city are underway.

Prefabricated homes painted black, white, and gray are set against the backdrop of SpaceX’s Starship rocket factory at Starbase, Texas. Credit: Sergio Flores/AFP via Getty Images

Welcome to Edition 7.47 of the Rocket Report! Let’s hope not, but the quarrel between President Donald Trump and Elon Musk may be remembered as “Black Thursday” for the US space program. A simmering disagreement over Trump’s signature “One Big Beautiful Bill” coursing its way through Congress erupted into public view, with two of the most powerful Americans trading insults and threats on social media. Trump suggested the government should terminate “Elon’s governmental contracts and subsidies.” Musk responded with a post saying SpaceX will begin decommissioning the Dragon spacecraft used to transport crew and cargo to the International Space Station. This could go a number of ways, but it’s hard to think anything good will come of it.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Blue Origin aces suborbital space shot. Blue Origin, the space company founded and owned by Jeff Bezos, launched six people to the edge of space Saturday, May 31, from Bezos’ ranch in West Texas, CBS News reports. A hydrogen-fueled New Shepard booster propelled a crew capsule, equipped with the largest windows of any operational spaceship, to an altitude of nearly 65 miles (105 kilometers), just above the internationally recognized boundary between the discernible atmosphere and space, before beginning the descent to landing. The passengers included three Americans—Aymette Medina Jorge, Gretchen Green, and Paul Jeris—along with Canadian Jesse Williams, New Zealand’s Mark Rocket, and Panamanian Jaime Alemán, who served as his country’s ambassador to the United States.

If you missed it … You wouldn’t be alone. This was the 32nd flight of Blue Origin’s New Shepard rocket, and the company’s 12th human flight. From a technical perspective, these flights aren’t breaking any new ground in human spaceflight or rocketry. However, each flight provides an opportunity for wealthy or well-connected passengers to view Earth from a perspective only about 700 people have seen before. That’s really cool, but most of these launches are no longer newsworthy, and it takes a devoted fan of spaceflight to tune in to a New Shepard flight on a summertime Saturday morning. (submitted by EllPeaTea)


Momentum for Amentum. The US Space Force awarded Jacobs Technology a contract worth up to $4 billion over 10 years to provide engineering and technical services at the nation’s primary space launch ranges, as the military seeks to modernize aging infrastructure and boost capacity amid a surge in commercial space activity, Space News reports. Jacobs Technology is now part of Amentum, a defense contractor based in Chantilly, Virginia. Amentum merged with Jacobs in September 2024. The so-called “Space Force Range Contract” covers maintenance, sustainment, systems engineering and integration services for the Eastern and Western ranges until 2035. The Eastern Range operates from Patrick Space Force Base in Florida, while the Western Range is based at Vandenberg Space Force Base in California.

Picking from the menu … The contract represents a significant shift in how space launch infrastructure is funded. Under the new arrangement, commercial launch service providers—which now account for the majority of launches at both ranges—can request services or upgrades and pay for them directly, rather than having the government bear the costs upfront. This arrangement would create a more market-driven approach to range operations and potentially accelerate modernization. “Historically, the government has fronted these costs,” Brig. Gen. Kristin Panzenhagen, Space Launch Delta 45 Commander and Eastern Range Director, said June 3 in a news release. “The ability of our commercial partners to directly fund their own task order will lessen the financial and administrative burden on the government and is in line with congressionally mandated financial improvement and audit readiness requirements.”

Impulse Space rakes in more cash. This week, an in-space propulsion company, Impulse Space, announced that it had raised a significant amount of money, $300 million, Ars reports. This follows a fundraising round just last year in which the Southern California-based company raised $150 million. This is one of the largest capital raises in space in a while, especially for a non-launch company. Founded by Tom Mueller, a former propulsion guru at SpaceX, Impulse Space has test-flown an orbital maneuvering vehicle called Mira on two flights over the last couple of years. The company is developing a larger vehicle, named Helios, that could meaningfully improve the ability of SpaceX’s Falcon 9 and Falcon Heavy to transport large payloads to the Moon, Mars, and other destinations in the Solar System.

Reacting to the market … The Mira vehicle was originally intended to provide “last-mile” services for spacecraft launched as part of rideshare missions. “The reality is the market for that is not very good,” said Eric Romo, the company’s CEO. Instead, Impulse Space found interest from the Space Force to use Mira as an agile platform for hosting electronic warfare payloads and other military instrumentation in orbit. “Mira wasn’t necessarily designed out of the gate for that, but what we found out after we flew it successfully was, the Space Force said, ‘Hey, we know what that thing’s for,'” Romo said. Helios is a larger beast, with an engine capable of producing 15,000 pounds of thrust and the ability to move a multi-ton payload from low-Earth orbit to geostationary space in less than a day. (submitted by EllPeaTea)

Falcon rockets surpass 500 flights. SpaceX was back at the launch pad for a midweek flight from Vandenberg Space Force Base in California. This particular flight, designated Starlink 11-22, marked the company’s 500th orbital launch attempt with a Falcon rocket, including Falcon 1, Falcon 9, and Falcon Heavy, Spaceflight Now reports. This milestone coincided with the 15th anniversary of the first Falcon 9 launch on June 4, 2010. The day before, SpaceX launched the 500th Falcon rocket overall, counting a single suborbital flight in 2020 that tested the Dragon spacecraft’s abort system. The launch on Wednesday from California was the 68th Falcon 9 launch of the year.

Chasing Atlas … The soon-to-be-retired Atlas rocket holds the record for the most-flown family of space launchers in the United States, with 684 launches to date, ranging from Atlas ICBMs in the Cold War to the Atlas V rocket flying today. In reality, however, the Atlas V shares virtually nothing in common with the Atlas ICBM, other than its name. The Atlas V has new engines, more modern computers, and a redesigned booster stage that ended the line of pressure-stabilized “balloon tanks” that flew on Atlas rockets from 1957 until 2005. The Falcon 1, Falcon 9, and Falcon Heavy share more heritage, all using variants of SpaceX’s Merlin engine. If you consider the Atlas rocket the US record-holder for most space launches, SpaceX’s Falcon family is on pace to reach 684 flights before the end of 2026.
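That "on pace" claim is easy to sanity-check with back-of-envelope arithmetic. The launch counts come from the article; the days-elapsed figure and the straight-line extrapolation are rough assumptions:

```python
falcon_total = 500   # Falcon family orbital launch attempts as of June 2025
atlas_total = 684    # Atlas family record
remaining = atlas_total - falcon_total   # 184 flights to go

days_elapsed = 155   # Jan 1 through June 4, 2025 (approximate)
launches_2025 = 68   # Falcon 9 flights so far this year, per the article

# Naive straight-line extrapolation of this year's cadence:
annual_pace = launches_2025 / days_elapsed * 365   # ≈ 160 flights/year
years_needed = remaining / annual_pace             # ≈ 1.1 years
print(f"pace ≈ {annual_pace:.0f}/yr, catch-up in ≈ {years_needed:.1f} years")
```

Starting the clock in mid-2025, roughly 1.1 years of that cadence lands in the second half of 2026, consistent with the article's projection.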

SpaceX delivers again for GPS. The Space Force successfully sent its latest GPS III satellite to orbit Friday, May 30, demonstrating the ability to prepare and launch a military spacecraft on condensed timelines, Defense News reports. The satellite flew on a SpaceX Falcon 9 rocket from Cape Canaveral Space Force Station in Florida. GPS III, built by Lockheed Martin, is the latest version of the navigation and timing system and is designed to provide improved anti-jamming capabilities. It will broadcast additional military and civilian signals.

More anti-jamming capability … The launch was the second in a series of Rapid Response Trailblazer missions the Space Force is running to test whether it can quickly launch high-value satellites in response to national security needs. The goal is to condense a process that can take up to two years down to a handful of months. The first mission, which flew in December, reduced the time between launch notification and liftoff to around five months—and the May 30 mission shortened it even further, to around 90 days. In addition to demonstrating the launch could be done on an accelerated timeline, Space Force officials were motivated to swap this satellite from United Launch Alliance’s long-delayed Vulcan rocket to SpaceX’s Falcon 9 in order to add more tech to the GPS constellation to counter jamming and spoofing. (submitted by EllPeaTea)

An autopsy on Reaction Engines. An article published by the BBC this week recounts some of the backstory behind the bankruptcy of Reaction Engines, a British company that labored for 35 years to develop a revolutionary air-breathing rocket engine. According to the vision of the company’s leaders, the new engine, called SABRE, could have powered a single-stage-to-orbit spaceplane or hypersonic vehicles within the atmosphere. If an engine like SABRE could ever be mastered, it could usher in a new era of spaceplanes that can take off and land horizontally on a runway, instead of launching vertically like a rocket.

A little too quixotic … But Reaction Engines started in an era too soon for true commercial spaceflight and couldn’t convince enough venture capital investors that the idea could compete with the likes of SpaceX. Instead, the company secured a handful of investments from large aerospace companies like Boeing, BAE Systems, and Rolls-Royce. This money allowed Reaction Engines to grow to a staff of approximately 200 employees and kept it afloat until last October, when the company went into administration and laid off its workforce. “A few people were in tears,” Richard Varvill, the company’s chief engineer, told the BBC. “A lot of them were shocked and upset because they’d hoped we could pull it off right up to the end.” It was galling for Varvill “because we were turning it around with an improved engine. Just as we were getting close to succeeding, we failed. That’s a uniquely British characteristic.” (submitted by ShuggyCoUk)

Draconian implications for Trump’s budget. New details of the Trump administration’s plans for NASA, released Friday, May 30, revealed the White House’s desire to end the development of an experimental nuclear thermal rocket engine that could have shown a new way of exploring the Solar System, Ars reports. The budget proposal’s impacts on human spaceflight and space science have been widely reported, but Trump’s plan would cut NASA’s space technology budget in half. One of the victims would be DRACO, a partnership with DARPA to develop and test the first nuclear thermal rocket engine in space.

But wait, there’s more … The budget proposal not only cancels DRACO, but it also zeros out funding for all of NASA’s nuclear propulsion projects. Proponents of nuclear propulsion say it offers several key advantages for sending heavy cargo and humans to deep space destinations, like Mars. “This budget provides no funding for Nuclear Thermal Propulsion and Nuclear Electric Propulsion projects,” officials wrote in the NASA budget request. “These efforts are costly investments, would take many years to develop, and have not been identified as the propulsion mode for deep space missions. The nuclear propulsion projects are terminated to achieve cost savings and because there are other nearer-term propulsion alternatives for Mars transit.” Trump’s budget request isn’t final. Both Republican-controlled houses of Congress will write their own versions of the NASA budget, which must be reconciled before going to the White House for President Trump’s signature.

Blue Origin CEO says government should get out of the launch business. Eighteen months after leaving his job as a vice president at Amazon to take over as Blue Origin’s chief executive, Dave Limp has some thoughts on how commercial companies and government agencies like NASA should explore the Solar System together. First, the government should leave launching things into space to private industry. “I think commercial folks can worry about the infrastructure,” he said. “We can do the launch. We can build the satellite buses that can get you to Mars much more frequently, that don’t cost billions of dollars. We can take a zero, and over time, maybe two zeros off of that. And if the governments around the world leave that to the commercial side, then there are a lot more resources that are freed up for the science side, for the national prestige side, and those types of things.”

Do the exotic … While commercial companies should drive the proverbial bus into the Solar System, NASA should get back to its roots in research and exploration, Limp said. “I would say, and it might be a little provocative, let’s have those smart brains look on the forward-thinking types of things, the really edge of science, planning the really exotic missions, figuring out how to get to planetary bodies we haven’t gotten to before, and staying there.” But Limp highlighted one area where he thinks government investment is needed: the Moon. He said there’s currently no commercial business case for sending people to the Moon, and the government should continue backing those efforts.

Hurdles ahead for Rocket Cargo. The Center for Biological Diversity is suing the military for details on a proposal to build a rocket test site in a remote wildlife refuge less than 900 miles from Hawaiʻi Island, Hawaiʻi Public Radio reports. The Air Force announced in March that it planned to prepare an environmental assessment for the construction and operation of two landing pads on Johnston Atoll to test the viability of using rockets to deliver military cargo loads. While the announcement didn’t mention SpaceX, that company’s Starship rocket is on contract with the Air Force Research Laboratory to work on delivering cargo anywhere in the world within an hour. Now, several conservationists have spoken out against the proposal, pointing out that Johnston is an important habitat for birds and marine species.

Scarred territory … For nearly a century, Johnston Atoll has served dual roles as a wildlife refuge and a base for US military operations, including as a nuclear test site between 1958 and 1963. In March, the Air Force said it anticipated an environmental assessment for its plans on Johnston Atoll would be available for public review in early April. So far, it has not been released. The Center for Biological Diversity filed a Freedom of Information Act request about the project. They say a determination on their request was due by May 19, but they have not received a response. The center filed a lawsuit last week to compel the military to rule on their request and release information about the project.

Getting down to business at Starbase. SpaceX’s rockets make a lot of noise at Starbase, but the machinations of setting up Texas’ newest city are in motion, too. After months of planning, SpaceX launched the city of Starbase on May 29 with its first public meeting chaired by Mayor Robert Peden and the City Commission at The Hub, a building owned by SpaceX, ValleyCentral.com reports. During the meeting, which lasted about 80 minutes, they hired a city administrator, approved standard regulations for new construction, and created a committee to guide the community’s long-term development. Voters approved the creation of Starbase on May 3, incorporating territory around SpaceX’s remote rocket factory and launch site near the US-Mexico border. SpaceX owns most of the land in Starbase and employs nearly everyone in the tiny town, including the mayor.

Property rights and zoning … “The new city’s leaders have told landowners they plan to introduce land use rules that could result in changes for some residents,” KUT reports. In a letter, Starbase’s first city administrator, Kent Myers, warned local landowners that they may lose the right to continue using their property for its current use under the city’s new zoning plan. “Our goal is to ensure that the zoning plan reflects the City’s vision for balanced growth, protecting critical economic drivers, ensuring public safety, and preserving green spaces,” the letter, dated May 21, reads. This is a normal process when a city creates new zoning rules, and a new city is required by state law to notify landowners—most of which are SpaceX or its employees—of potential zoning changes so they can ask questions in a public setting. A public meeting to discuss the zoning ordinance at Starbase is scheduled for June 23.

Next three launches

June 7: Falcon 9 | SXM-10 | Cape Canaveral Space Force Station, Florida | 03:19 UTC

June 8: Falcon 9 | Starlink 15-8 | Vandenberg Space Force Base, California | 13:34 UTC

June 10: Falcon 9 | Axiom Mission 4 | Kennedy Space Center, Florida | 12:22 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Reddit sues Anthropic over AI scraping that retained users’ deleted posts

Of particular note, Reddit pointed out that Anthropic’s Claude models will help power Amazon’s revamped Alexa, following about $8 billion in Amazon investments in the AI company since 2023.

“By commercially licensing Claude for use in several of Amazon’s commercial offerings, Anthropic reaps significant profit from a technology borne of Reddit content,” Reddit alleged, and “at the expense of Reddit.” Anthropic’s unauthorized scraping also burdens Reddit’s servers, threatening to degrade the user experience and costing Reddit additional damages, Reddit alleged.

To rectify alleged harms, Reddit is hoping a jury will award not just damages covering Reddit’s alleged losses but also punitive damages due to Anthropic’s alleged conduct that is “willful, malicious, and undertaken with conscious disregard for Reddit’s contractual obligations to its users and the privacy rights of those users.”

Without an injunction, Reddit users allegedly have “no way of knowing” if Anthropic scraped their data, Reddit alleged. They also are “left to wonder whether any content they deleted after Claude began training on Reddit data nevertheless remains available to Anthropic and the likely tens of millions (and possibly growing) of Claude users,” Reddit said.

In a statement provided to Ars, Anthropic’s spokesperson confirmed that the AI company plans to fight Reddit’s claims.

“We disagree with Reddit’s claims and will defend ourselves vigorously,” Anthropic’s spokesperson said.

Amazon declined to comment. Reddit did not immediately respond to Ars’ request to comment. But Reddit’s chief legal officer, Ben Lee, told The New York Times that Reddit “will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy.”

“AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,” Lee said. “Licensing agreements enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content.”

Reddit sues Anthropic over AI scraping that retained users’ deleted posts Read More »

“in-10-years,-all-bets-are-off”—anthropic-ceo-opposes-decadelong-freeze-on-state-ai-laws

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized his claims for AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.

“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws Read More »

endangered-classic-mac-plastic-color-returns-as-3d-printer-filament

Endangered classic Mac plastic color returns as 3D-printer filament

On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic “Platinum” color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers.

Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.

The Platinum color, which Apple used in its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of classic Macintosh aesthetic. Over time, original Macintosh plastics have become brittle and discolored with age, so matching the “original” color can be a somewhat challenging and subjective experience.

A close-up of

A close-up of “Retro Platinum” PLA filament by Polar Filament. Credit: Polar Filament

Strosnider, who runs a website about his extensive vintage computer collection in Ohio, worked for years to color-match the distinctive beige-gray hue of the Macintosh Platinum scheme, resulting in a spool of hobby-ready plastic by Polar Filament and priced at $21.99 per kilogram.

According to a forum post, Strosnider paid approximately $900 to develop the color and purchase an initial 25-kilogram supply of the filament. Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available.

“I paid them a fee to color match the speaker box from inside my Mac Color Classic,” Strosnider wrote in a Tinkerdifferent forum post on Tuesday. “In exchange, I asked them to release the color to the public so anyone can use it.”

Endangered classic Mac plastic color returns as 3D-printer filament Read More »

two-certificate-authorities-booted-from-the-good-graces-of-chrome

Two certificate authorities booted from the good graces of Chrome

Google says its Chrome browser will stop trusting certificates from two certificate authorities after “patterns of concerning behavior observed over the past year” diminished trust in their reliability.

The two organizations, Taiwan-based Chunghwa Telecom and Budapest-based Netlock, are among the dozens of certificate authorities trusted by Chrome and most other browsers to provide digital certificates that encrypt traffic and certify the authenticity of sites. With the ability to mint cryptographic credentials that cause address bars to display a padlock, assuring the trustworthiness of a site, these certificate authorities wield significant control over the security of the web.

Inherent risk

“Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports,” members of the Chrome security team wrote Tuesday. “When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified.”

Two certificate authorities booted from the good graces of Chrome Read More »

in-which-i-make-the-mistake-of-fully-covering-an-episode-of-the-all-in-podcast

In Which I Make the Mistake of Fully Covering an Episode of the All-In Podcast

I have been forced recently to cover many statements by US AI Czar David Sacks.

Here I will do so again, for the third time in a month. I would much prefer to avoid this. In general, when people go on a binge of repeatedly making such inaccurate inflammatory statements, in such a combative way, I ignore.

Alas, under the circumstances of his attacks on Anthropic, I felt an obligation to engage once more. The All-In Podcast did indeed go almost all-in (they left at least one chip behind) to go after anyone worried about AI killing everyone or otherwise opposing the administration’s AI strategies, in ways that are often Obvious Nonsense.

To their credit, they also repeatedly agreed AI existential risk is real, which also makes this an opportunity to extend an olive branch. And some of the disagreements clearly stem from real confusions and disagreements, especially around them not feeling the AGI or superintelligence and thinking all of this really is about jobs and also market share.

If anyone involved wants to look for ways to work together, or simply wants to become less confused, I’m here. If not, I hope to be elsewhere.

  1. Our Continuing Coverage.

  2. Important Recent Context.

  3. The Point of This Post.

  4. Summary of the Podcast.

  5. Part 1 (The Part With the Unhinged Attacks on Anthropic and also other targets).

  6. Other Related Obvious Nonsense.

  7. Part 2 – We Do Mean the Effect on Jobs.

  8. Part 3 – The Big Beautiful Bill.

  9. Where Does This Leave Us.

I first covered many of his claims in Fighting Obvious Nonsense About AI Diffusion. Then I did my best to do a fully balanced look at the UAE-KSA chips deal, in America Makes AI Chip Diffusion Deal with UAE and KSA. As I said then, depending on details of the deal and other things we do not publicly know, it is possible that from the perspective of someone whose focus in AI is great power competition, this deal advanced American interests. The fact that many of Sacks’s arguments in favor of the deal were Obvious Nonsense, and many seemed to be in clearly bad faith, had to be addressed but did not mean the deal itself had to be an error.

This third post became necessary because of recent additional statements by Sacks on the All-In Podcast. Mostly they are not anything he has not said before, and are things he is likely to say many times again in the future, and they are largely once again Obvious Nonsense, so why cover them? Doesn’t Sacks rant his hallucinations about the supposed ‘AI Existential Risk Industrial Complex’ all the time?

Yes. Yes, he does. Mostly he falsely rants, and he repeats himself, and I ignore it.

What was different this time was the context.

The Trump Administration is attempting to pass what they call the ‘Big Beautiful Bill.’

Primarily this bill is a federal budget, almost none of which has to do with AI.

It also contains a provision that would impose a 10 year moratorium, on the state or local level, on civil law enforcement of almost any laws related to AI.

Many people, including myself and Anthropic CEO Dario Amodei, are not afraid to say that this is a bonkers crazy thing to do, and that perhaps we might want to take some modest actions on AI prior to it transforming the world rather than after.

Dario Amodei (CEO Anthropic): You can’t just step in front of the train and stop it. The only move that’s going to work is steering the train – steer it 10 degrees in a different direction from where it was going. That can be done. That’s possible, but we have to do it now.

Putting this provision in the BBB is also almost certainly a violation of the Byrd rule, but Congress chose to put it in anyway, likely as a form of ‘reconnaissance in force.’

It is not entirely clear that the administration even wants this moratorium in this form. Maybe yes, maybe no. But they very much do care about the BBB.

Thus, someone leaked to Semafor, and we got this article with the title ‘Anthropic emerges as an adversary to Trump’s big bill,’ claiming that Anthropic is lobbying against the BBB due to the AI provision, and this and other Anthropic actions are making Trump world very angry.

The other main trigger, Semafor reports, was Anthropic’s hiring two Biden AI staffers, Elizabeth Kelly and Tarun Chhabra, and Biden AI advisor Ben Buchanan, although it is noted by Semafor that Anthropic also employs Republican-aligned policy staff, like Benjamin Merkel and Mary Croghan. Buchanan, the architect of the Biden Diffusion rules, has (as one would expect) personally opposed the UAE-KSA deal and other ways in which Biden administration rules have been reversed.

Bizarrely, the Trump administration also expressed annoyance at Anthropic CEO Dario Amodei warning about imminent loss of up to half of white collar jobs. I think that projection was too aggressive, but I am confident he believes it.

Semafor bizarrely frames these lobbying tactics as potentially savvy business moves?

Reed Albergotti: Opposing the bill preempting state AI laws may not be necessary anyway, because it faces high hurdles in both congress and in the courts.

In other words, Anthropic’s federal lobbying probably won’t make much of a difference. Influencing the White House on its executive orders would have been the best shot.

In the long run, though, maybe it’s a smart strategy. AI researchers may see Anthropic as more principled and it could help with recruiting. The Trump administration won’t be around forever and Anthropic may be better positioned when the next president takes office.

Yeah, look, no, obviously not, if you agree with Reed (and I do) that Anthropic can’t have a substantial impact on the BBB proceedings then this was clearly a misstep given the reaction. Why would anyone think ‘antagonize the Trump administration’ was good business for Anthropic? To help a bit with recruiting because they would look slightly more ‘more principled’ at the risk of facing a hostile White House?

Anthropic and the White House being enemies would help only OpenAI and China.

Anthropic’s lobbying of course is partly motivated by what they believe is good for America and humanity, and partly by what is good for Anthropic.

Anthropic has been, up until recently, seemingly been pursuing a very deliberate insider strategy. They were careful not to antagonize anyone. They continue to downplay public statements about AI existential and catastrophic risks. They have offered only very careful and measured support for any AI regulations. Dario has very much publicly gotten behind and emphasized the ‘need to beat China’ framework. Not only does Anthropic not call for AI to ‘slow down’ or ‘pause,’ they call upon American AI to accelerate. On SB 1047, Anthropic called for and got major softening of the bill and then still refused to endorse it.

This has been extremely frustrating for those who are worried about AI killing everyone, many of whom think Anthropic should speak up far louder and make the case for what is actually necessary. They see Anthropic as having largely sold out on this and often other fronts. Because such an approach is very obviously good for Anthropic’s narrow business interests.

What was said on the All-In Podcast recently, and is being reiterated even more than usual on Sacks’s Twitter recently, is a frankly rather unhinged attack against anyone and everyone Sacks dislikes in the AI space, in an attempt to associate all of it together into a supposed grand diabolical and conspiratorial ‘AI Existential Risk Industrial Complex’ out that, quite frankly, does not exist.

What is different this time is primarily the targeting of Anthropic.

Presumably the message is, loud and clear: Back the hell off. Or else.

This post has five primary objectives.

  1. Actually look concretely at the arguments being made in case they have a point.

  2. Have a reference point for this event and for this general class of claims and arguments, explaining that they simply are not a description of reality and illustrating the spirit in which they are being offered to us, such that I can refer others back to this post, and link back to it in the future.

  3. Extend an olive branch and offer of help to Sacks and those at the All-In Podcast.

  4. Ensure that Anthropic understands the messages being sent here.

  5. Provide a response to the podcast’s discussion on jobs in their Part 2.

For various reasons, I am, shall we say, writing this with the maximum amount of charity and politeness that I can bring myself to muster.

You should proceed to the rest of this post if and only if this post is relevant to you.

I used the YouTube transcript. This was four podcasts in one.

  1. A rather misinformed and unhinged all-out attack on and an attempt to conflate through associations and confusions and vibes some combination of Anthropic, diffusion controls on advanced AI chips, anyone supporting diffusion controls, anyone opposing the UAE deal especially if they are a China hawk, more generally anyone who has a different opinion on how best to beat China, anyone worried about AI job losses, anyone worried about AI existential risk (while admitting to their credit that AI is indeed an existential risk several times), those who cause AIs to create black George Washingtons, several distinct classes of people referred to as ‘doomers,’ EA, The Biden Administration, anyone previously employed by the Biden Administration at least in AI, OpenPhil, Dustin Moskovitz, Netflix CEO Reed Hoffman, woke agendas, a full on dystopian government with absolute power, a supposed plot to allocate all compute to a few chosen companies that was this close to taking over the world if Trump had lost.

    1. This was then extended to Barack Obama via Twitter.

    2. As presented this was presumably in large part a warning to Anthropic, that their recent activities have pissed people off more than they might realize, in ways I presume Anthropic did not intend.

  2. A much better discussion about AI job losses and economic growth, in which new startups and new jobs and cheap goods will save us all and everything will be great and we’ll all work less hours and be wealthier. I largely disagree.

    1. It also makes clear that yes, by existential they do (often) mean the effect on jobs and they do not in any way feel or expect superintelligence or even AGI. Or at minimum, they often speak and think in ways that assume this.

  3. A discussion of the ‘big beautiful bill’ also known as the budget, without reference to the attempted 10-year moratorium on any local or state enforcement of any civil law related to AI. Mostly here I just note key claims and attitudes. I thought a lot of the talk was confused but it’s not relevant to our interests here.

  4. A discussion of other matters outside our scope. I won’t comment.

If those involved believe what they are saying in part one and what David Sacks often says on Twitter on related topics, then they are deeply, deeply misinformed and confused about many things. That would mean this is a great opportunity for us all to talk, learn and work together. We actually agree on quite a lot, and that ‘we’ extends also to many of the others they are attacking here.

I would be happy to talk to any combination of the All-In hosts, in public, in private or on the podcast in any combination, to help clear all this up along with anything else they are curious about. We all benefit from that. I would love to do all this cooperatively. However differently we go about it, we all want all the good things and there are some signs there is underlying appreciation here for the problems ahead.

However it ended up in the podcast – again, this could all be a big misunderstanding – there was a lot of Obvious Nonsense here, including a lot of zombie lies, clearly weaponized. They say quite a lot of things that are not, and frame things in ways that serve to instill implications that are not true, and equate things that should not be equated, and so on. I can’t pretend otherwise.

There’s also a profound failure to ‘feel the AGI’ and definitely a failure to feel the ASI (artificial superintelligence), or even to feel that others might truly feel it, which seems to be driving a lot of the disagreement.

There’s a conflation, that I believe is largely genuine, of any and all skepticism of technology under the umbrella term ‘Doomer.’ Someone worries about job loss? Doomer. Someone worries about existential risk (by which perhaps you mean the effect on jobs?)? Doomer. Someone worries about AI ethics? Doomer. Someone worries about climate change? Doesn’t come up, but also doomer, perusambly.

But guys, seriously, if you actually believe all this, call me, let’s clear this up. I don’t know how you got this confused but we can fix it, even if we continue to disagree about important things too.

If you don’t believe it, of course, then stop saying it. And whether or not you intend to stop, you can call me anyway, let’s talk off the record and see if there’s anything to be done about all this.

The transcript mostly doesn’t make clear who is saying what, but also there don’t seem to be any real disagreements between the participants, so I’m going to use ‘they’ throughout.

I put a few of these notes into logical order rather than order in the transcript where it made more sense, but mostly this is chronological. I considered moving a few jobs-related things into the jobs section but decided not to do this.

As per my podcast standard, I will organize this as a series of bullet points. Anything in the main bullet point is my description of what was importantly said. Anything in the secondary sections is me responding to what was said.

  1. They start off acknowledging employment concerns are real, they explicitly say people are concerned about ASI and yes they do mean the effect on jobs.

  2. Then start going hard after ‘doomers’ starting with Dario Amodei’s aggressive claims about white collar job losses, accusing him of hype.

    1. Pot? Cryto-kettle?

    2. I do actually think that particular claim was too aggressive, but if Dario is saying that it is because he believes it (and has confusions about diffusion, probably).

    3. Later they say ‘Anthropic’s warnings coincidence with key moments in their fundraising journey’ right after Anthropic recently closed their Series E and now is finally warning us about AI risks.

    4. They are repeating the frankly zombie lie that Anthropic and OpenAI talk about AI existential risk or job loss as hype for fundraising, that it’s a ‘smart business strategy.’ That it is a ‘nefarious strategy.’ This is Obvious Nonsense. It is in obvious bad faith. OpenAI and Anthropic have in public been mostly actively downplaying existential risk concerns for a while now, in ways I know them not to believe. Stop it.

  3. Then claim broader AI risk concerns expressed at the first AI safety summit ‘have been discredited,’ while agreeing that the risks are real they simply haven’t arrived yet. Then they go on about an ‘agenda’ you should be ‘concerned about.’

  4. They essentially go all Jevon’s Paradox on labor, that the more we automate (without loss of generality) coding there will be better returns so you’ll actually end up using more. They state this like it is fact, even in the context of multipliers like 20x productivity.

    1. This claim seems obviously too strong. I won’t reiterate my position on jobs.

  5. These venture capitalists think that venture capitalists will always just create a lot more jobs than we lose even if e.g. all the truck drivers are out of work because profits, while investing in a bunch of one-person tech companies and cryptos.

  6. ‘Fear is a way of getting people into power and they’re going to create a new kind of control.’ I… I mean… given who is doing this podcast do I even have to say it?

  7. They claim Effective Altruism ‘astroturfs.’

    1. This is complete lying Obvious Nonsense, and rather rich coming from venture capitalists who engage in exactly this in defense of their books, with company disingenuous lobbying efforts from the likes of a16z and Meta massively outspending all worried people combined and lying their asses off outright on the regular and also being in control in the White House.

    2. Every survey says that Americans are indeed worried about AI (although it is low salience) and AI is unpopular.

  8. They then outright accuse OpenPhil, EA in general, Anthropic and so on of being in a grand conspiracy seeking ‘global AI governance,’ then conflate this with basic compute governance, then conflate this with the overall Biden AI agenda and DEI.

    1. Which again is Obvious Nonsense, at best such efforts are indifferent to DEI.

    2. I assure everyone Anthropic does not care about a woke agenda or about DEI.

    3. My experience with EA reflects this same attitude in almost all cases.

  9. Then they claim this ‘led to woke AI like the black George Washington.’

    1. I refer to what happened with that as The Gemini Incident.

    2. The causal claim here is Obvious Nonsense. Google was being stupid and woke all on its own for well documented reasons and you can be made at Google’s employees if you want about this.

  10. They make it sound as sinister as possible that Anthropic hired several ex-Biden AI policy people.

    1. I get why this is a bad look from the All-In Podcast perspective.

    2. However, what they are clearly implying here is not true, and Anthropic has hired people from both sides of the aisle as per Semafor, and is almost certainly simply snapping up talent that was available.

  11. They accuse ‘EA’ or OpenPhil or even Anthropic of advocating ‘for a pause.’

    1. This is unequivocally false for OP, for Anthropic and for the vast majority of EA efforts. Again, lies or deep deep confusion, Obvious Nonsense.

    2. Anthropic CEO Dario Amodei has put out extensive essays about the need to beat China and all that. He is actively trying to build transformational AI.

    3. A ‘pause’ would damage or destroy Anthropic and he thinks a pause would be obviously unwise right now. Which I agree with.

    4. I am very confident the people making these claims know the claims are false.

  12. They say ‘x-risk is not the only risk we have to beat China.’

    1. And I agree! We all agree! Great that we can agree these are two important goals. Can we please stop with the claims that we don’t agree with this?

    2. Dario also agrees very explicitly, out loud, in public, so much so it makes a lot of worried people and likely many of his employees highly uneasy and he’s accused of selling out.

    3. David Sacks in particular has accused anyone who opposes his approach to ‘beating China’ of not caring about beating China. He either needs to understand that a lot of other people genuinely worried about China strongly disagree about the right way to beat China and think keeping compute out of the wrong hands is important here, or he needs to stop lying about this.

  13. Someone estimates 30% chance China ‘wins the AI race’ but thinks existential risk is lower than 30%.

    1. I disagree on the both percentages, but yes that is a position one might reasonably take, but we can and must then work on both, and also while both these outcomes are very bad one is much much worse than the other and I hope we agree on which is which.

  14. They say Claude kicks ass, great product.

    1. I definitely agree with that.

  15. The pull quote comes around (19: 00) where they accuse everyone involved of being ‘funded by hardcore leftists’ and planning on some ‘Orwellian future where AI is controlled by the government’ that they ‘use to control all of us’ and using this to spread their ‘woke’ or ‘left-wing’ values.

    1. Seriously no, stop.

    2. I go into varying degrees of detail about this throughout this and other posts, but please, seriously, no, this is simply false on all counts.

    3. It is true that there are other people, including people who were in the Biden administration, who on the margin will prioritize doing things that promote ‘left-wing’ values and ‘woke’ agendas. Those are different people.

  16. They even claim that before Trump was elected they were on a path to ‘global compute governance’ restricted to 2-3 companies that then forced the AIs to be woke.

    1. This is again all such complete Obvious Nonsense.

    2. I believe this story originated with Marc Andreessen.

    3. At best it is a huge willful misunderstanding of something that was said by someone in the Biden Administration.

    4. It’s insane that they are still claiming this and harping on it, it makes it so hard to treat anything they say as if it maps to reality.

    5. At this point I seriously can’t even with painting people advocating for ‘maybe we should figure out what is the best thing to do with our money and do that’ and ‘we should prevent China from getting access to our compute’ and ‘if we are going to make digital minds that are potentially smarter than us that will transform the world that might not be a safe thing to do and is going to require some regulations at some point’ as ‘we should dictate all the actions of everyone on Earth in some Orwellian government conspiracy for Woke World Domination these people would totally pull off if it wasn’t for Trump’ and seriously just stop.

  17. They ask ‘should you fear government regulation or should you fear autocomplete.’

    1. It is 2025 are you still calling this ‘autocomplete’ you cannot be serious?

    2. We agree this thing is going to be pivotal to the future and that it presents existential risk. What the hell, guys. You are making a mockery of yourselves.

    3. I cannot emphasize enough that if you people could just please be normal on these fronts where we all want the same things then the people worried about AI killing everyone would mostly be happy to work together, and would largely be willing to overlook essentially everything else we disagree about.

    4. I honestly don’t even know why these people think they need to be spending their time, effort and emotional energy on these kinds of attacks right now. They must really think that they have some sort of mysterious super powerful enemy here and it’s a mirage.

    5. These are the same people pushing for their ‘big beautiful bill’ that includes a full pre-emption of any state or local regulations on AI (in a place that presumably won’t survive the Byrd rule, but they’re trying anyway) with the intended federal action to fill that void being actual nothing.

    6. Then they’re getting angry when people react as if that proposal is extreme and insane, and treat those opposed to it as being in an enemy camp.

  18. They do some reiteration of their defenses of the UAE-KSA chips deal.

    1. I’ve already said my peace on this extensively, again reasonable people can disagree on what is the best strategic approach, and reasonable people would recognize this.

David Sacks in particular continues to repeat a wide variety of highly unhinged claims about Effective Altruism. Here he includes Barack Obama in this grand conspiracy, then links to several even worse posts that are in transparently obvious bad faith.

David Sacks (2025, saying Obvious Nonsense): Republicans should understand that when Obama retweets hyperbolic and unproven claims about AI job loss, it’s not an accident, it’s part of an influence operation. The goal: to further “Global AI Governance,” a massive power grab by the bureaucratic state and globalist institutions.

The organizers: “Effective Altruist” billionaires with a long history of funding left-wing causes and Trump hatred. Of course, it’s fine to be concerned about a technology as transformational as AI, but if you repeat their claims uncritically, you may be falling for an astroturfed campaign by the “AI Existential Risk Industrial Complex.”

Claims about job loss (what I call They Took Our Jobs) are a mundane problem, calling for mundane solutions, and have nothing whatsoever to do with existential risk or ‘effective altruism,’ what are you even talking about. Is this because the article quotes Dario Amodei’s claims about job losses, therefore it is part of some grand ‘existential risk industrial complex’?

Seriously, do you understand how fully unhinged you sound to anyone with any knowledge of the situation?

David Sacks does not even disagree that we will face large scale job loss from AI, only about the speed and net impact. This same All-In Podcast talks about the possibility of large job losses in Part 2, not dissimilar in size to what Dario describes. Everyone who talks about this on the podcast seems to agree that massive job losses via AI automation are indeed coming, except they say This Is Good, Actually because technology will always also create more jobs to replace them. The disagreement here is highly reasonable and is mainly talking price, and the talking price is almost entirely about whether new jobs will replace the old ones.

Indeed, they talk about a ‘tough job market for new grads’ and warn that if you don’t embrace the AI tools, you’ll be left behind and won’t find work. That’s basically the same claim as Kevin Roose is making.

What did Barack Obama do and say? The post I saw was that he retweeted a New York Times article by Kevin Roose that talks about job losses and illustrates some signs of it, including reporting the newsworthy statement from Dario Amodei, and then Obama made this statement:

Barack Obama: Now’s the time for public discussions about how to maximize the benefits and limit the harms of this powerful new technology.

Do you disagree with Obama’s statement here, Sacks? Do you think it insufficiently expresses the need to provide miniature American flags for others and be twirling, always twirling towards freedom? Obama’s statement is essentially content-free.

EDIT: After I hit post, I realized that yes, Obama did also retweet the Axios article that quoted Dario, saying this:

Barack Obama: At a time when people are understandably focused on the daily chaos in Washington, these articles describe the rapidly accelerating impact that AI is going to have on jobs, the economy, and how we live.

That is at least a non-trivial statement, although his follow-up Call to Action is the ultimate trivial statement. This very clearly is not part of some conspiracy to make us ‘have public discussions about how to maximize the benefits and limit the harms of this powerful technology.’

How do these people continue to claim that this all-powerful ‘Effective Altruism’ was somehow the astroturfing lobbyist group and they are the rogue resistance, when the AI industry has more lobbyists in Washington and Brussels than the fossil fuel industry and the tobacco industry combined? When almost all of that industry lobbying, including from OpenAI, Google, Meta and a16z, is exactly what you would expect: opposition to regulations and attempts to get their bag of subsidies.

What is most frustrating is that David Sacks very clearly understands that AGI presents an existential risk. AI existential risk is even explicitly affirmed multiple times during this podcast!

He has been very clear on this in the past, as in, for example:

David Sacks (2024, saying helpful things): AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.

I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.

Despite this, Sacks seems to have decided that reiterating these bizarre conspiracy theories and unhinged attacks is a good strategy for whatever his goals might be.

Here is another recent absurdity that I got forcibly put in front of me via Tyler Cowen:

David Sacks (June 2025, saying untrue things): Nobody was caught more off guard by the DeepSeek moment than the AI Doomers.

They had been claiming:

— that the U.S. was years ahead in AI;

— that PRC leadership didn’t care much about AI;

— that China would prioritize stability over disruption; and

— that if the U.S. slowed down AI development, China would slow down too.

All of this turned out to be profoundly wrong. Now, ironically, many of the Doomers — who prior to DeepSeek had tried to ban American models now currently in use — are trying to rebrand as “China Hawks.” If they had their way, the U.S. would have already lost the AI race!

David Sacks has to know exactly what he is doing here. This is in obvious bad faith. At best, this is the tactic of ‘take a large group of people, and treat the entire group as saying anything that its most extreme member once said, and state it in the most negative way possible.’

To state the obvious, going point by point, how false all of this is:

  1. The USA remains ahead in AI, but yes, China has closed the gap somewhat, as one would broadly expect, at least in terms of fast following. The impact of the DeepSeek moment was largely that various people, including Sacks, totally blew what happened out of proportion. Some of that was obvious at the time, some only became clear in retrospect. But the rhetoric is full-on ‘missile gap.’ Also, this is like saying ‘you claimed Alice was two miles ahead of Bob, but then Bob caught up to Alice, so you were lying.’ That is not how anything works.

  2. The PRC leadership was, as far as I can tell, highly surprised by DeepSeek. They were indeed far more caught off guard than the ‘AI Doomers,’ many of whom had already been following DeepSeek and had noticed v3 and expected this. The PRC then noticed, and yes they now care about AI more, but for a long time they very much did not appreciate what was going on, what are you even talking about.

  3. China seems to have favored stability over disruption far more than America has in this case; they absolutely care about stability in the ways China cares about such things, and this is not what an actually AGI-pilled China would look like. China is happy to ‘disrupt’ in places where what they are disrupting is us. Sure.

  4. This is a complete non sequitur. This claims that ‘we’ said [X] → [Y], where [X] is ‘America slows down’ and [Y] is ‘China slows down.’ [X] did not happen! At all! So how can you possibly say that [X]→[Y] turned out to be profoundly wrong? You have absolutely no idea. I also note that we almost always didn’t even make this claim, that X→Y; we said it would be good if both X and Y were true and we should try to get that to happen. For example, I did not say ‘If we slow down, China slows down.’ I said things of the form ‘it would be good to open a dialogue about whether, if we slowed down, China would also slow down, because we haven’t even tried that yet.’

  5. The reference to ‘attempts to ban models currently in use’ as if this applies broadly to the group in question, rather than to a very small number of people who were widely criticized at the time, including repeatedly by myself very very explicitly, for overreach because of this exact request.

  6. The repetition of the false claim that there is an attempted ‘rebrand as China Hawks’ which I have discussed previously, and then the claim that these are the same people who tried to ban current models, which they aren’t.

I sincerely wish that David Sacks would stop. I do not expect him to stop. Given that I do not expect him to stop, I sincerely wish that I can go back to avoiding responding when he continues.

The discussion of the future of jobs and employment in Part 2 was much better.

There seemed to be a problem with scale throughout Part 2.

This all seems to take place in a tech and startup bubble where everyone can be founding a new startup or deeply steeping themselves in AI tools to get one of those cool new AI jobs.

This is great advice for podcast listeners in terms of career development, but it simply doesn’t scale the way they want it to, nor does it then broaden out as fast or far in terms of jobs as they pitch it as doing.

There’s ‘what can a bright young listener to this podcast who is into tech and startups and is situationally aware do’ and ‘what is going to happen to a typical person.’ You cannot, in fact, successfully tell most people to ‘learn to code’ by adding in the word vibe.

  1. They assert ‘technology always means more jobs,’ and see concerns about job loss as largely looking at union jobs or those of particular groups like truck drivers that Biden cares about or coal miners that Trump cares about.

    1. I think the worries are mostly far more general. I find it interesting they focus primarily on the non-LLM job loss from self-driving rather than the wider things coming.

    2. I see union jobs as likely far more protected, especially government protected unions, as unions have leverage to prevent diffusion, until they are disrupted by non-union rivals, and similar for jobs protected by license regimes.

  2. They point out that we will all be richer and the benefits will come quickly, not only the job losses.

    1. True, although it will likely be cold comfort to many during the transition, the gains won’t flow through in ease of making ends meet the way one might hope unless we make that happen.

  3. They emphasize that costs of goods will fall.

    1. I think this is largely very right and yes people are underestimating this, but goods we can make without regulatory barriers are not where people are struggling and are a remarkably low percentage of costs.

    2. In the past, getting cheaper food and clothing was a huge deal because that was 50%+ of expenses, and it shrank dramatically, which is great.

    3. But now food is about 10% and clothing is trivial, the prices can’t go that much lower, and labor income might be falling quite a lot if there’s enough competition for jobs.

    4. If the price of food is cut in half that is great, I do agree it would be good to automate food prep (and truck driving and so on) when we can, but this actually doesn’t save all that much money.

    5. I think a lot of people’s focus on the price of food is essentially generational, historical and evolutionary memory of different times when food costs were central to survival.

  4. They correctly ask the right question: what allows for the same lifestyle?

    1. In the past, the main constraint on lifestyle was ability to purchase goods, so cutting goods costs via increased productivity means you need to work less to match lifestyle.

    2. But now it is mostly services, and the goods with restricted supply, and also we are ratcheting up what counts as the baseline lifestyle and what is the required basket of goods.

    3. The key question about lifestyle isn’t quality of goods. It’s about quality of life, it’s about ability to raise a family, as I will soon discuss in ‘Is Life Getting Harder?’

    4. Their model seems to boil down to something not that different from ‘startups are magic’ or the ‘lump of income and labor’ fallacy. As in, if you have a bunch of wealth and investment then of course that will create tons of jobs through new startups and investment.

    5. But in a rapidly automating world, especially one in which the best startups will often be disruptors via automation, we’re talking about the need for tens of millions of new jobs over the course of a few years, and then those jobs start getting automated too, and AI keeps improving as this happens. If you think there really are this many ‘shadow jobs’ waiting for us I want a much more concrete model of how that can be true.

    6. Note that if you think we don’t need more gears here, then think about why you think that is true here and where else that might apply.

    7. Reminder: My expectation is that for a while unemployment won’t change that much, although there will be some extra unemployment due to transitional effects, until we exhaust the ‘shadow jobs’ that previously weren’t worth hiring people for, but then this will run out – there is a lot of ruin in the job market but not forever.

  5. Prediction that we will ‘take our profits’ in 30-hour work weeks, speculation about 10% GDP growth if we have 10%-20% white collar job loss (one time?!). None of this seems coherent, other than a general ‘we will all be rich and trends of richness continue’ intuition.

    1. Note the lack of ambition here. If only 20% of current white collar jobs or tasks get automated over a medium term then that isn’t that big. There’s no reason to think that causes persistent 10% growth.

    2. I do think there is a good chance of persistent 10%+ growth but if so it will involve far more transformational changes.

    3. I also don’t see why we should expect people to ‘take our profits’ in shorter work weeks unless we use government to essentially force this.

  6. ‘People say jobs are going to go away but I am on the ground and I see more startups than ever and they’re making a million dollars per employee.’

    1. The statement is true, and I buy that the startup world is going great, but in terms of responding to the threat of massive job losses? These people seem to be in a bubble. Do they even hear themselves? Can they imagine a Democratic politician talking like that in this context?

    2. Do they understand the relative scales of these employment opportunities and economic impacts? ‘The ground’ should not mean the startup world in San Francisco.

  7. They talk about how it is hard to automate all of a customer service job because some parts are hard for AI.

    1. This is a distinct lack of thinking ahead.

    2. In general it does not seem like this discussion is baking in future AI progress, and also still leaves room for most such jobs to go away anyway.

  8. They say yes if we have 20% job loss government will have to step in but it is a ‘total power grab’ to demand the government ‘act now’ about potential future unemployment.

    1. What is this word salad, the specter of Andrew Yang or something? How does this relate to anything that anyone is seriously asking for?

    2. The thing about unemployment is that you can indeed respond after it happens. I strongly agree that we should wait and see before doing anything major about this, but also I don’t see serious calls to do otherwise.

  9. Based on various statements where they seem to conflate the two:

    1. I think that by existential risk they might literally mean the effect on jobs? No, seriously, literally, they think it means the effect on jobs? Or they are at least confused here? I can’t make sense of this discussion any other way. Not in a bad faith way, just it seems like they’re legitimately deeply confused about this.

  10. They say diffusion rules wouldn’t solve existential risk but they’re open to suggestions?

    1. I mean, no, they won’t do that on their own. The primary goal of diffusion rules is to hold back China so we can both win the race and give ourselves enough freedom of action (and inaction) to have a chance to find a solution to existential risk. Why is this so confusing?

    2. And what is this doing in the middle of a discussion about job loss and economic growth rates?

  11. More talk about ‘glorified autocomplete.’

    1. You can stop any time, guys.

  12. (36:52) ‘tough job market for new grads in the established organizations and so what should new grads do they should probably, steep themselves in the tools and go to younger companies or start a company i think that’s the only solution for them.’

    1. This is great advice, but I don’t think they understand how grim that is. The vast majority of people are not going to be able to do a startup. I wish this were possible, and it’s good advice for their audience, sure, but it is innumerate to suggest this for the population as a whole.

    2. So the only thing, as they say, that young people can do in this type of future is deeply steep themselves in these AI tools to outcompete those that don’t do it, but obviously only a small portion of such people can go that route at once, this works exactly because everyone else mostly won’t do it. The vast majority of grads will be screwed on an epic level.

    3. This is the same as the whole ‘learn to code’ message that, shall we say, did not win the votes of the coal miners. Yes, any individual sufficiently capable person could learn to code, but not everyone can, and there were never that many slots. Similarly, for a long time ‘learn to play poker and grind it out’ has been a very viable path for anyone who has the discipline, but very obviously that is not a solution at scale because it would stop working (also it doesn’t produce anything).

  13. Again speculation that ‘the people who benefit the most’ are new coders willing to embrace the tech.

    1. I mean, tell that to the current SWE market, this is not at all obvious, but yes, in an AI-is-super-productive world the handful of people who most embrace this opportunity will do well. They’re right that the people who embrace the tools will beat the people who push back, okay, sure.

    2. I will never get the Python love they also express here, or the hate for OOP. I really wish we weren’t so foolish as to build the AI future on Python, but here we are.

  14. (40:57) Again the conflation where blaming a layoff on AI is a ‘doomer story.’

    1. This is, once again, a distinct and very different concern. Both are real.

    2. So they’re confirming that by ‘doomer’ they often simply mean someone who by existential risk does mean the effect on jobs.

    3. That’s a mostly different group of people, and that’s not how the term is typically used, and it’s clear that they’re either being fooled by the conflation or using it strategically or both.

    4. Pick a lane, I’m fine with either, but this trying to equate both camps to use each to attack the other? No.

  15. They insist that when layoffs happen so far they’re not due to AI.

    1. Okay, I mean, the companies do often say otherwise and you agree AI is making us all a lot more productive, but maybe they’re all lying and everyone only cuts management now but also then they say management jobs aren’t being eliminated due to AI yet.

    2. Alternatively they are also telling the ‘the layoffs are due to AI because the people who won’t embrace AI now need to be fired and this is good, actually’ story, which is also plausible but you can’t have it both ways.

    3. This all sounds like throwing everything at the wall that sounds like ‘AI is good’ and seeing what sticks.

    4. This is perhaps related to throwing everything that sounds like ‘AI is bad’ into a pot and claiming all of it is the same people in a grand conspiracy?

  16. As I understand them: The AI race is an infinite race with no finish line but it is still a race to see who is stronger and maybe USA wins maybe China wins maybe it’s a tie maybe ‘open source wins’ and nuclear deterrence led to peace and was good actually but this is better because it’s a system of productivity not destruction and everyone will have to compete vigorously but we have to watch out for something like 5G where Huawei ‘weren’t worried about diffusion’ they wanted to get their tech out, the race is about market share and whose technology people are using, and the pace of improvement is ‘holy shit.’

    1. I covered a (more coherent but logically identical) version of this when I previously covered Sacks. This is not what matters, the ‘AI race’ is not about market share, and this reflects, like the rest of this podcast, a profound failure to ‘feel the AGI’ and certainly to ‘feel the ASI.’

It seems worth a few notes while I am here. I will divide the ‘BBB’ into two things.

  1. The attempted 10-year moratorium on enforcement of any AI anything on the local or state level whatsoever. This is, in my humble opinion and also that of Anthropic’s CEO, deeply stupid, bonkers crazy, a massive overreach, an ‘of course you know this means war’ combined with ‘no one could have predicted a break in the levees’ level move. Also an obvious violation of the Byrd rule when placed within the budget, although sadly not in practice a violation of the 10th Amendment.

  2. Everything else in the bill, which is what they discuss here. The most important note is that they only talk about the rest of the BBB without the moratorium.

I am not an expert on Congressional budget procedure or different types of appropriations but it seemed like no one here was one either, and the resulting discussion seemed like it would benefit from someone who understands how any of this works.

They are very keen to blame anything and everything they can on Biden, the rest on Congress, and nothing on Trump.

They seem very excited by making the DOGE cuts permanent for reasons that are not explained.

I notice that there is a prediction that this administration will balance the Federal budget. Are we taking wagers on that? There’s a lot of talk of the need to get the deficit down, and they blame the bill not doing this on Congress, essentially.

It seems this expectation is based on creating lots of economic growth, largely via AI. Very large gains from AI do seem to me to be the only sane way we might balance the budget any time soon. I agree that there should be lots of emphasis on GDP growth. They are very confident, it seems, that lower taxes will pay for themselves and spur lots of growth, and they think the CBO is dumb and simplistic.

There’s a concrete prediction for a very hot Q2 GDP print, 3%-4%. I hope it happens. It seems they generally think the economy will do better than predicted, largely due to AI but also I think due to Trump Is Magic Economy Catnip?

They talk about the need for more energy production, and some details are discussed on timing and sizing. I agree, and would be doing vastly more to move projects forward, but from what I have seen of the BBB it does not seem to be net positive on this front. I think they are right to emphasize this, but from what I can tell it is not cashing out in terms of much action to create new energy production.

I don’t have anything to say about Part 4, especially given it is out of my scope here.

I hope that Anthropic understands the reaction that they seem to be causing, and chooses wisely how to navigate given this. Given how often Sacks makes similar claims and how much we all have learned to tune those claims out most of the time, it would be easy to miss that something important has changed there.

I presume that David Sacks will continue to double down on this rhetoric, as will many others who have chosen to go down similar rhetorical paths. I expect them to continue employing these Obvious Nonsense vibe-based strategies and accusations of grand conspiracies indefinitely, without regard to whether they map onto reality.

I expect it to be part of a deliberate strategy to brand anyone opposing them, in the style of a certain kind of politics, as long as such styles are ascendant. Notice when someone makes or amplifies such claims. Update on that person accordingly.

I would love to be wrong about that. I do see signs that, underneath it all, something better might indeed be possible. But assuming I’m not wrong, it is what it is.

My realistic aspiration is to not have to keep having that conversation this way, and in particular not having to parse claims from such arguments as if they were attempting to be words that have meaning, that are truthful, or that map into physical reality. It is not fun for anyone, and there are so many other important things to do.

If they want to have a different kind of conversation, I would welcome that.


In Which I Make the Mistake of Fully Covering an Episode of the All-In Podcast


Squid Game trailer anchors Netflix Tudum event


Also: Wednesday S2 sneak peek, Stranger Things S5 premiere date, Frankenstein teaser, more Benoit Blanc.

Squid Game returns this month for its third and final season. Credit: Netflix

Netflix held its Tudum Global Fan Event in Los Angeles this weekend to showcase its upcoming slate of programming. Among the highlights: the official trailer for the third and final season of Squid Game, the first six minutes of Wednesday S2, a teaser for Guillermo del Toro’s Frankenstein, and date announcements for the fifth and final season of Stranger Things, as well as Wake Up Dead Man: A Knives Out Mystery.

(Some spoilers below.)

Squid Game S3

As previously reported, Squid Game’s first season followed Seong Gi-hun (Lee Jung-jae), a down-on-his-luck gambler who has little left to lose when he agrees to play children’s playground games against 455 other players for money. The twist? If you lose a game, you die. If you cheat, you die. And if you win, you might also die. In the S1 finale, Gi-hun faced off against fellow finalist and childhood friend Cho Sang-woo (Park Hae-soo) in the titular “squid game.” He won their fight but refused to kill his friend. Sang-woo instead stabbed himself in the neck, leaving Gi-hun the guilt-ridden winner.

S2 was set three years later. Gi-hun successfully finagled his way back into the game, intent on revenge against the Front Man (Lee Byung-hun). Meanwhile, Front Man’s police officer brother, Jun-ho (Wi Ha-joon), hired mercenaries to track down the island where the game is staged. Alliances formed and shifted as the games proceeded, with betrayals galore, culminating in the loss of Gi-hun’s friend and ally Player 390 and a cliffhanger ending.

Series creator Hwang Dong-hyuk conceived of S2 and S3 as a single season, but there were too many episodes, so he split them over two seasons. Back in January we got our first glimpse of S3 when Netflix released a 15-second teaser on X, introducing a brand-new killer doll dubbed Chul-su—similar to the giant “Red Light, Green Light” doll Young-hee. Per the official premise:

A failed rebellion, the death of a friend, and a secret betrayal. Picking up in the aftermath of Season 2’s bloody cliffhanger, the third and final season of Netflix’s most popular series finds Gi-hun, a.k.a. Player 456, at his lowest point yet. But the Squid Game stops for no one, so Gi-hun will be forced to make some important choices in the face of overwhelming despair as he and the surviving players are thrust into deadlier games that test everyone’s resolve. With each round, their choices lead to increasingly grave consequences. Meanwhile, In-ho resumes his role as Front Man to welcome the mysterious VIPs, and his brother Jun-ho continues his search for the elusive island, unaware there’s a traitor in their midst. Will Gi-hun make the right decisions, or will Front Man finally break his spirit?

The third season of Squid Game drops on Netflix on June 27, 2025.

Wednesday S2

Star Jenna Ortega put her own stamp on the iconic title character in the first season of Wednesday. At Tudum, Netflix introduced footage of S2’s first six minutes with a performance by Lady Gaga, who emerged from a coffin to perform a couple of spooky numbers—including “Bloody Mary” from Born This Way. (We can thank a viral video featuring the tune set to Wednesday’s fantastic S1 dancing sequence for that.)

As previously reported, along with Ortega, most of the main cast is returning for S2, including Emma Myers as Enid, and Joy Sunday as Bianca. Reprising their roles: Luis Guzman and Catherine Zeta-Jones as Gomez and Morticia Addams; Isaac Ordonez as Pugsley Addams; Victor Dorobantu as Thing; Fred Armisen as Uncle Fester; Luyanda Unati Lewis-Nyawo as Deputy Ritchie Santiago; Hunter Doohan as Tyler Galpin, revealed as a murderous Hyde in the S1 finale; and Jamie McShane as Donovan Galpin, the Jericho sheriff and Tyler’s father (McShane is a guest this season).

We’ll miss Gwendoline Christie’s Principal Larissa Weems and Christina Ricci’s diabolical botany teacher, Marilyn Thornhill (RIP to both), but at least we’re getting the fabulous Joanna Lumley as Hester Frump, Morticia’s mother. Other new cast members include Billie Piper as Capri, Steve Buscemi as new Nevermore principal Barry Dort, and Evie Templeton, Owen Painter, and Noah Tyler in as-yet-undisclosed roles. Bonus: Lady Gaga will make a guest appearance in the show, and, as we see in the new footage, Haley Joel Osment makes a cameo.

Wednesday S2 will air in two installments. Part 1 debuts August 6, 2025. Part 2 is coming on September 3, 2025.

Stranger Things S5

It’s been a long, wild ride with the plucky residents of Hawkins, but we’re finally approaching the ultimate showdown against the dark force that has plagued the town since S1. The fifth season will have eight episodes and each one will be looong—akin to eight feature-length films.

In addition to the returning main cast, Amybeth McNulty and Gabriella Pizzolo are back as Vicki and Dustin’s girlfriend, Suzie, respectively, with Jamie Campbell Bower reprising his role as the ultimate Big Bad, now known as Vecna. Linda Hamilton joins the cast as Dr. Kay, along with Nell Fisher as Holly Wheeler, Jake Connelly as Derek Turnbow, and Alex Breaux as Lt. Akers.

S4 ended with Vecna opening the gate that allowed the Upside Down to leak into Hawkins. We’re getting a time jump for S5, but in a way we’re coming full circle, since the events coincide with the third anniversary of Will’s original disappearance in S1. Per the official premise:

The fall of 1987. Hawkins is scarred by the opening of the Rifts, and our heroes are united by a single goal: find and kill Vecna. But he has vanished—his whereabouts and plans unknown. Complicating their mission, the government has placed the town under military quarantine and intensified its hunt for Eleven, forcing her back into hiding. As the anniversary of Will’s disappearance approaches, so does a heavy, familiar dread. The final battle is looming—and with it, a darkness more powerful and more deadly than anything they’ve faced before. To end this nightmare, they’ll need everyone—the full party—standing together, one last time.

The fifth and final season of Stranger Things will drop in not one, not two, but three installments, because apparently Netflix wants to be as annoying as possible. Volume 1 premieres on November 26, 2025; Volume 2 drops on Christmas Day, December 25, 2025; and the series finale will air on New Year’s Eve, December 31, 2025.

Frankenstein

Oscar-winning director Guillermo del Toro has been dreaming of adapting Mary Shelley’s Frankenstein for the big screen for more than a decade. There have been so many adaptations of Shelley’s novel, of varying quality, and even more reinventions and homages (cf. Poor Things). We finally have the first teaser for del Toro’s take, and it’s as sumptuously horrifying and visually rich as one would expect from the man who made such films as Pan’s Labyrinth and The Shape of Water.

Per the official premise: “A brilliant but egotistical scientist brings a creature to life in a monstrous experiment that ultimately leads to the undoing of both the creator and his tragic creation.” The events take place in 19th-century Eastern Europe. Oscar Isaac stars as Victor Frankenstein, with Jacob Elordi playing the monster. Christoph Waltz plays Dr. Pretorius, who hopes to continue in Victor’s footsteps by tracking his monster—who, it turns out, did not die in a fire 40 years before.

The cast also includes Mia Goth as Victor’s fiancée, Elizabeth; Felix Kammerer as Williams; Lars Mikkelsen as Captain Anderson; David Bradley as a blind man; and Ralph Ineson as Professor Krempe. Charles Dance will also appear in an as-yet-undisclosed role.

Frankenstein premieres on Netflix in November 2025.

Wake Up Dead Man: A Knives Out Mystery

Rian Johnson’s Knives Out series of films is still going strong, with the third installment featuring Daniel Craig’s languorously brilliant detective, Benoit Blanc, slated to premiere a couple of weeks before Christmas. It’s called Wake Up Dead Man, a title that pays homage to the 1997 U2 song of the same name.

Johnson is playing his cards close to the chest about the plot details. But we do know he’s assembled another all-star cast of murderous suspects: Josh O’Connor, Glenn Close, Josh Brolin, Mila Kunis, Jeremy Renner—whose “Renning Hot” chili pepper sauce featured prominently in Glass Onion—Kerry Washington, Andrew Scott, Cailee Spaeny, Daryl McCormack, and Thomas Haden Church.

Wake Up Dead Man: A Knives Out Mystery drops on Netflix on December 12, 2025—or if you want to be all Benoit Blanc about it, XII.XII.MMXXV.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose-Effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration with an artist named Enar de Dios Rodriguez, who collaborated with VUT and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively simulates a speed of light of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted and the sphere’s North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
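The geometry is compact enough to state: for a small object viewed roughly perpendicular to its motion, the standard Terrell-Penrose result is that the object appears rotated by the aberration angle, sin θ = v/c. A minimal sketch (not the team's actual analysis) of how large the effect becomes when the effective light speed is only 2 meters per second:

```python
import math

def terrell_rotation_deg(object_speed, light_speed):
    """Apparent Terrell-Penrose rotation angle (degrees) for a small
    object viewed perpendicular to its motion: theta = arcsin(v/c)."""
    beta = object_speed / light_speed
    if not 0 <= beta < 1:
        raise ValueError("speed must be below the (effective) speed of light")
    return math.degrees(math.asin(beta))

# With an effective light speed of 2 m/s, even a slow crawl is "relativistic":
print(terrell_rotation_deg(1.0, 2.0))  # half the effective light speed
print(terrell_rotation_deg(1.9, 2.0))  # close to the effective light speed
```

At half the effective light speed the apparent rotation is already 30 degrees, which is why the twisted cube is visible to the naked eye in the combined images.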

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much like human music—specifically, non-random timing and isochrony—according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
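One way to make "isochrony versus alternation" concrete is the coefficient of variation of inter-onset intervals, a standard rhythm statistic: values near zero mean evenly spaced hits, larger values mean irregular or alternating timing. This toy sketch (with made-up onset times, not the authors' actual pipeline) illustrates the contrast:

```python
import statistics

def inter_onset_cv(onsets):
    """Coefficient of variation of inter-onset intervals: near 0 for
    isochronous (evenly spaced) hits, larger for alternating timing."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Evenly spaced hits (the western style, in the study's terms): CV near 0
even = [0.0, 0.25, 0.5, 0.75, 1.0]
# Alternating short/long gaps (the eastern style): a much larger CV
alternating = [0.0, 0.15, 0.5, 0.65, 1.0]

print(inter_onset_cv(even))
print(inter_onset_cv(alternating))
```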

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck,” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.
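The pluck-position half of such a model is textbook string physics: an ideal string plucked at fraction p of its length excites harmonic n with relative amplitude sin(nπp)/n², so plucking near the middle concentrates energy in the fundamental while plucking near the bridge pushes more into upper harmonics. This sketch shows only that idealization with illustrative positions (the researchers' full model also accounts for how the thumb, finger, or pick leaves the string, which is part of why a soft thumb can still sound warm near the bridge):

```python
import math

def mode_amplitudes(pluck_fraction, n_modes=8):
    """Relative harmonic amplitudes of an ideal plucked string,
    plucked at pluck_fraction of its length: |sin(n*pi*p)| / n**2."""
    p = pluck_fraction
    return [abs(math.sin(n * math.pi * p)) / n**2 for n in range(1, n_modes + 1)]

def upper_fraction(amps):
    """Fraction of summed amplitude carried by harmonics above the fundamental."""
    return sum(amps[1:]) / sum(amps)

# Pluck near the middle of the string: energy concentrated in the fundamental
print(f"near middle: {upper_fraction(mode_amplitudes(0.45)):.2f}")
# Pluck close to the bridge: relatively more energy in upper harmonics
print(f"near bridge: {upper_fraction(mode_amplitudes(0.08)):.2f}")
```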

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out of soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu sheltered inhabitants from raids during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and offered a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s, and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most distinctive features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.
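Nas's exact methods weren't detailed in the talk abstract, but a standard first-pass tool in architectural acoustics is Sabine's formula, RT60 = 0.161·V/A, which estimates reverberation time from a room's volume and total surface absorption. A sketch with entirely hypothetical numbers for a rock-cut chamber, assuming a modest absorption coefficient for volcanic tuff:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time estimate: RT60 = 0.161 * V / A, where A is
    the total absorption (sum of surface area * absorption coefficient)."""
    absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical chamber: 120 m^3 volume, 150 m^2 of tuff surfaces at an
# assumed absorption coefficient of 0.1 (illustrative value only)
print(f"estimated RT60: {sabine_rt60(120.0, [(150.0, 0.1)]):.2f} s")
```

Measured reverberation times like these, taken per room and per frequency band, are the raw material from which a 3D virtual soundscape can be built.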

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team’s research, along with additional DNA samples taken from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).




Trump pulls Isaacman nomination for space. Source: “NASA is f***ed”

Musk was a key factor behind Isaacman’s nomination as NASA administrator, and with his backing, Isaacman was able to skip some of the party purity tests that have been applied to other Trump administration nominees. One mark against Isaacman is that he had recently donated money to Democrats. He also indicated opposition to some of the White House’s proposed cuts to NASA’s science budget.

Musk’s role in the government was highly controversial, winning him enemies both among opponents of Trump’s “Make America Great Again” agenda as well as inside the administration. One source told Ars that, with Musk’s exit, his opponents within the administration sought to punish him by killing Isaacman’s nomination.

The loss of Isaacman is almost certainly a blow to NASA, which faces substantial budget cuts. The Trump Administration’s budget request for fiscal year 2026, released Friday, seeks $18.8 billion for the agency next year—a 24 percent cut from the agency’s budget of $24.8 billion for FY 2025.

Going out of business?

Isaacman is generally well-liked in the space community and is known to care deeply about space exploration. Officials within the space agency—and the larger space community—hoped that having him as NASA’s leader would help the agency restore some of these cuts.

Now? “NASA is f—ed,” one current leader in the agency told Ars on Saturday.

“NASA’s budget request is just a going-out-of-business mode without Jared there to innovate,” a former senior NASA leader said.

The Trump administration did not immediately name a new nominee, but two people told Ars that former US Air Force Lieutenant General Steven L. Kwast may be near the top of the list. Now retired, Kwast has a distinguished record in the Air Force and is politically loyal to Trump and MAGA.

However, his background seems to be far less oriented toward NASA’s civil space mission and far more focused on seeing space as a battlefield—decidedly not an arena for cooperation and peaceful exploration.
