Author name: Beth Washington


President Trump says Intel’s new CEO “must resign immediately”

Intel and the White House did not immediately respond to a request for comment on Trump’s post. Intel shares dropped 3 percent in pre-market trading in New York.

Tan was appointed as Intel CEO in March after the Silicon Valley company’s board ousted his predecessor, Pat Gelsinger, in December.

Intel is the only US-headquartered company capable of producing advanced semiconductors, though it has so far largely missed out on the current boom for artificial intelligence chips. It has been awarded billions of dollars in US government subsidies and loans to support its chip manufacturing business, which has fallen far behind its rival Taiwan Semiconductor Manufacturing Company.

However, amid a radical cost-cutting program, Tan warned last month that Intel might be forced to abandon development of its next-generation manufacturing technology if it were unable to secure a “significant external customer.” Such a move would hand a virtual monopoly of leading-edge chipmaking to TSMC.

“Intel is required to be a responsible steward of American taxpayer dollars and to comply with applicable security regulations,” Cotton wrote in Tuesday’s letter to Intel’s board chair, Frank Yeary. “Mr Tan’s associations raise questions about Intel’s ability to fulfill these obligations.”

Additional reporting by Demetri Sevastopulo.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Trump’s trade and environment policies are a disaster for carmakers

General Motors blamed Trump’s tariffs for costing it $1.1 billion in Q2 and as much as $5 billion by the end of the year. And while the new policies discouraging EV adoption have yet to fully bite, it’s clear they’ve motivated some action inside the GM boardroom. Although GM CEO Mary Barra wrote to investors that the company believes “the long-term future is profitable electric vehicle production,” she followed by explaining that GM’s flexible factories will help it succeed in a world where EPA fuel economy targets are no longer a thing. That’s probably why GM added 300,000 more units of capacity for “high margin light-duty pickups, full-size SUVs and crossovers.”

Ford said that the tariffs could cost it as much as $2 billion this year, despite it making more actual vehicles in the US than any other automaker. That’s because it has to pay the US government to import raw materials like steel and aluminum, as well as components and subassemblies.

Foreign automakers are also feeling the effects, given the importance—until now, at least—of the US car buyer. Stellantis, which owns the Jeep and Ram brands, said it had already lost $2.7 billion this year due to tariffs, although the automaker stands to benefit in the coming years from the gutting of fleet fuel efficiency fines.

Aston Martin may benefit from a lower 10 percent tariff for UK-made cars, but it described the process as “extremely disruptive,” and although it has now restarted shipping cars to America, it issued a profit warning last week.

BMW is among the less badly hurt; although its operating margin fell to 5.4 percent, this was within its expectations. Mercedes had to warn investors to expect less this year, and it says the US will become a less-important market for the company, which plans to make up for it with growth in China. Volkswagen Group said the tariffs have cost it $1.5 billion so far this year, and it has also revised down its forecasts for the rest of the year.

Although Porsche announced record deliveries in North America just a week ago, its operating profit was a third of what it was a year ago. “In the US, import tariffs are also putting huge pressure on our business. Looking ahead, the movement of the dollar could also have an impact. In addition, the transformation to electric mobility is progressing more slowly than expected overall, with consequences for the supplier network,” said Porsche and VW Group CEO Oliver Blume.



Some AI tools don’t understand biology yet


A collection of new studies on gene activity shows that AI tools aren’t very good at predicting it.

Gene activity appears to remain beyond the abilities of AI at the moment. Credit: BSIP

Biology is an area of science where AI and machine-learning approaches have seen some spectacular successes, such as designing enzymes to digest plastics and proteins to block snake venom. But in an era of seemingly endless AI hype, it might be easy to think that we could just set AI loose on the mounds of data we’ve already generated and end up with a good understanding of most areas of biology, allowing us to skip a lot of messy experiments and the unpleasantness of research on animals.

But biology involves a whole lot more than just protein structures. And it’s extremely premature to suggest that AI can be equally effective at handling all aspects of biology. So we were intrigued to see a study comparing a set of AI software packages designed to predict how active genes will be in cells exposed to different conditions. As it turns out, the AI systems couldn’t do any better than deliberately simplified prediction methods.

The results serve as a useful caution that biology is incredibly complex, and developing AI systems that work for one aspect of it is not an indication that they can work for biology generally.

AI and gene activity

The study was conducted by a trio of researchers based in Heidelberg: Constantin Ahlmann-Eltze, Wolfgang Huber, and Simon Anders. They note that a handful of additional studies have been released while their work was on a pre-print server, all of them coming to roughly the same conclusions. But these authors’ approach is pretty easy to understand, so we’ll use it as an example.

The AI software they examined attempts to predict changes in gene activity. While every cell carries copies of the roughly 20,000 genes in the human genome, not all of them are active in a given cell—”active” in this case meaning they are producing messenger RNAs. Some provide an essential function and are active at high levels at all times. Others are only active in specific cell types, like nerves or skin. Still others are activated under specific conditions, like low oxygen or high temperatures.

Over the years, we’ve done many studies examining the activity of every gene in a given cell type under different conditions. These studies can range from using gene chips to determine which messenger RNAs are present in a population of cells to sequencing the RNAs isolated from single cells and using that data to identify which genes are active. But collectively, they can provide a broad, if incomplete, picture that links the activity of genes with different biological circumstances. It’s a picture you could potentially use to train an AI that would make predictions about gene activity under conditions that haven’t been tested.

Ahlmann-Eltze, Huber, and Anders tested a set of what are called single-cell foundation models that have been trained on this sort of gene activity data. The “single cell” portion indicates that these models have been trained on gene activity obtained from individual cells rather than a population average of a cell type. “Foundation model” means a model that has been trained on a broad range of data but requires additional training before it’s deployed for a specific task.

Underwhelming performance

The task in this case is predicting how gene activity might change when genes are altered. When an individual gene is lost or activated, it’s possible that the only messenger RNA that is altered is the one made by that gene. But some genes encode proteins that regulate a collection of other genes, in which case you might see changes in the activity of dozens of genes. In other cases, the loss or activation of a gene could affect a cell’s metabolism, resulting in widespread alterations of gene activity.

Things get even more complicated when two genes are involved. In many cases, the genes will do unrelated things, and you get a simple additive effect: the changes caused by the loss of one, plus the changes caused by the loss of the other. But if there’s some overlap between the functions, you can get an enhancement of some changes, suppression of others, and other unexpected changes.

To start exploring these effects, researchers have intentionally altered the activity of one or more genes using the CRISPR DNA editing technology, then sequenced every RNA in the cell afterward to see what sorts of changes took place. This approach (termed Perturb-seq) is useful because it can give us a sense of what the altered gene does in a cell. But for Ahlmann-Eltze, Huber, and Anders, it provides the data they need to determine if these foundation models can be trained to predict the ensuing changes in the activity of other genes.

Starting with the foundation models, the researchers conducted additional training using data from an experiment where either one or two genes were activated using CRISPR. This training used the data from 100 individual gene activations and another 62 where two genes were activated. Then, the AI packages were asked to predict the results for another 62 pairs of genes that were activated. For comparison, the researchers also made predictions using two extremely simple models: one that always predicted that nothing would change and a second that always predicted an additive effect (meaning that activating genes A and B would produce the changes caused by activating A plus the changes caused by activating B).
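The two baseline models are simple enough to sketch in a few lines. The numbers below are synthetic stand-ins for real Perturb-seq measurements, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 1000

# Hypothetical measured expression changes (log fold-change per gene)
# for two single-gene activations, plus the observed double activation.
delta_a = rng.normal(0, 1, n_genes)      # effect of activating gene A alone
delta_b = rng.normal(0, 1, n_genes)      # effect of activating gene B alone
observed_ab = delta_a + delta_b + rng.normal(0, 0.1, n_genes)  # stand-in data

# Baseline 1: always predict that nothing changes.
pred_none = np.zeros(n_genes)

# Baseline 2: always predict a purely additive effect.
pred_additive = delta_a + delta_b

def prediction_error(pred, obs):
    """Root-mean-square error between predicted and observed changes."""
    return np.sqrt(np.mean((pred - obs) ** 2))

err_none = prediction_error(pred_none, observed_ab)
err_add = prediction_error(pred_additive, observed_ab)
print(f"no-change baseline error: {err_none:.3f}")
print(f"additive baseline error:  {err_add:.3f}")
```

The study’s finding is that the trained foundation models produced prediction errors substantially higher than even `pred_additive`, a model with no learned parameters at all.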

They didn’t work. “All models had a prediction error substantially higher than the additive baseline,” the researchers concluded. The result held when the researchers used alternative measurements of the accuracy of the AI’s predictions.

The gist of the problem seemed to be that the trained foundation models weren’t very good at predicting when the alterations of pairs of genes would produce complex patterns of changes—when the alteration of one gene synergized with the alteration of a second. “The deep learning models rarely predicted synergistic interactions, and it was even rarer that those predictions were correct,” the researchers concluded. In a separate test that looked specifically at these synergies between genes, it turned out that none of the models were better than the simplified system that always predicted no changes.
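The synergy test the researchers describe amounts to checking how far the double-perturbation response deviates from the additive expectation. A toy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical log fold-changes for three genes under single and double perturbation.
single_a = np.array([2.0, 0.0, -1.0])
single_b = np.array([0.0, 1.5, -1.0])
double_ab = np.array([2.0, 1.5, -4.0])   # third gene responds far more than additive

# Interaction term: deviation of the double perturbation from the additive expectation.
interaction = double_ab - (single_a + single_b)

# Flag genes whose interaction exceeds a (hypothetical) threshold as synergistic.
threshold = 0.5
synergistic = np.abs(interaction) > threshold
print(interaction)   # [ 0.  0. -2.]
print(synergistic)   # [False False  True]
```

Cases like the third gene, where the combined effect is much larger than the sum of the parts, are exactly the ones the foundation models rarely predicted.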

Not there yet

The overall conclusions from the work are pretty clear. “As our deliberately simple baselines are incapable of representing realistic biological complexity yet were not outperformed by the foundation models,” the researchers write, “we conclude that the latter’s goal of providing a generalizable representation of cellular states and predicting the outcome of not-yet-performed experiments is still elusive.”

It’s important to emphasize that “still elusive” doesn’t mean we’re incapable of ever developing an AI that can help with this problem. It also doesn’t mean that this applies to all cellular states (the results are specific to gene activity), much less all of biology. At the same time, the work provides a valuable caution at a time when there’s a lot of enthusiasm for the idea that AI’s success in a couple of areas means we’re on the cusp of a world where it can be applied to anything.

Nature Methods, 2025. DOI: 10.1038/s41592-025-02772-6  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



RIP to the Macintosh HD hard drive icon, 2000–2025

That version of the icon persisted through the Apple Silicon-era Big Sur redesign and was still with us in the first public beta build for macOS 26 Tahoe that Apple released last week. The new beta also updates the icons for external drives (orange, with a USB-C connector on top), network shares (blue, with a globe on top), and removable disk images (white, with an arrow on top).

All of the system’s disk icons get an update in the latest macOS 26 Tahoe developer beta. Credit: Apple/Andrew Cunningham

Other icons that reused or riffed on the old hard drive icon have also been changed. Disk Utility now looks like a wrench tightening an Apple-branded white bolt, for some reason, and drive icons within Disk Utility also have the new SSD-esque icon. Installer apps use the new icon instead of the old one. Navigate to the /System/Library/CoreServices folder where many of the built-in operating system icons live, and you can see a bunch of others that exchange the old HDD icon for the new SSD.

Apple first offered a Mac with an SSD in 2008, when the original MacBook Air came out. By the time “Retina” Macs began arriving in the early 2010s, SSDs had become the primary boot disk for most of them; laptops tended to be all-SSD, while desktops could be configured with an SSD or a hybrid Fusion Drive that used an SSD as boot media and an HDD for mass storage. Apple stopped shipping spinning hard drives entirely when the last of the Intel iMacs went away.

This doesn’t actually matter much. The old icon didn’t look much like the SSD in your Mac, and the new one doesn’t really look like the SSD in your Mac either. But we didn’t want to let the old icon’s passing go unremarked. So, thanks for the memories, Macintosh HD hard drive icon! Keep on spinning, wherever you are.



OpenAI releases its first open source models since 2019

OpenAI is releasing new generative AI models today, and no, GPT-5 is not one of them. Depending on how you feel about generative AI, these new models may be even more interesting, though. The company is rolling out gpt-oss-120b and gpt-oss-20b, its first open weight models since the release of GPT-2 in 2019. You can download and run these models on your own hardware, with support for simulated reasoning, tool use, and deep customization.

When you access the company’s proprietary models in the cloud, they’re running on powerful server infrastructure that cannot be easily replicated, even by large enterprises. The new OpenAI models come in two variants (120b and 20b) that can run on less powerful hardware configurations. Both are transformers with configurable chain of thought (CoT), supporting low, medium, and high settings. The lower settings are faster and use fewer compute resources, but outputs are better at the highest setting. You can set the CoT level with a single line in the system prompt.
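As a rough illustration of what that single line looks like, here is a hypothetical request to a gpt-oss model served behind an OpenAI-compatible endpoint; the model identifier and the exact prompt convention are assumptions for illustration:

```python
# Hypothetical chat request for a locally hosted gpt-oss model behind an
# OpenAI-compatible endpoint (e.g. via a local inference server); the
# serving setup and model name below are assumptions, not a documented API.
messages = [
    # A single line appended to the system prompt selects the reasoning
    # effort: "low", "medium", or "high".
    {"role": "system", "content": "You are a helpful assistant.\nReasoning: high"},
    {"role": "user", "content": "Walk me through long division of 1432 by 13."},
]

payload = {
    "model": "gpt-oss-20b",   # assumed model identifier
    "messages": messages,
}
print(payload["messages"][0]["content"])
```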

The smaller gpt-oss-20b has a total of 21 billion parameters, utilizing mixture-of-experts (MoE) to reduce that to 3.6 billion parameters per token. As for gpt-oss-120b, its 117 billion parameters come down to 5.1 billion per token with MoE. The company says the smaller model can run on a consumer-level machine with 16GB or more of memory. To run gpt-oss-120b, you need 80GB of memory, which is more than you’re likely to find in the average consumer machine. It should fit on a single AI accelerator GPU like the Nvidia H100, though. Both models have a context window of 128,000 tokens.
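A quick back-of-envelope check on those memory figures, assuming weights stored at roughly 4 bits (0.5 bytes) per parameter — the exact quantization scheme is an assumption here. Note that MoE reduces compute per token, not the memory needed to hold all the weights:

```python
# Back-of-envelope weight-memory estimate for the two gpt-oss variants.
# Assumption: ~4-bit (0.5 byte) storage per parameter, which is what makes
# the reported 16 GB and 80 GB targets plausible.
BYTES_PER_PARAM = 0.5

def weight_memory_gb(total_params: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes).

    MoE routing shrinks the parameters *used* per token, but every
    expert's weights still have to fit in memory, so we use the total.
    """
    return total_params * BYTES_PER_PARAM / 1e9

print(f"gpt-oss-20b:  ~{weight_memory_gb(21e9):.1f} GB of weights")   # fits in 16 GB
print(f"gpt-oss-120b: ~{weight_memory_gb(117e9):.1f} GB of weights")  # fits in 80 GB
```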

Credit: OpenAI

The team says users of gpt-oss can expect robust performance similar to its leading cloud-based models. The larger one benchmarks between the o3 and o4-mini proprietary models in most tests, with the smaller version running just a little behind. It gets closest in math and coding tasks. In the knowledge-based Humanity’s Last Exam, o3 is far out in front with 24.9 percent (with tools), while gpt-oss-120b only manages 19 percent. For comparison, Google’s leading Gemini Deep Think hits 34.8 percent in that test.



Report: Intel struggles with new 18A process as it cuts workers and cancels projects

Intel has a lot riding on “18A,” its next-generation manufacturing process for silicon chips that the company claims will help it catch up to the lead that competitors like TSMC have built up over the last few years. With 18A, Intel would return to manufacturing its own processor designs in its own factories, including the upcoming Series 3 Core Ultra chips for laptops (codenamed Panther Lake), after manufacturing parts of all other Core Ultra chips with TSMC. Intel is also offering 18A manufacturing capacity to external chipmakers, a major milestone in former CEO Pat Gelsinger’s plan to make Intel a competitive cutting-edge (and primarily US-based) chip manufacturer for the rest of the industry.

But a Reuters report claims that Intel is struggling to make usable chips on 18A, according to “people who were briefed on the company’s test data since late last year.” As of this summer, these sources say that just 10 percent of the chips being manufactured on 18A are “up to [Intel’s] specifications.”

Intel disputed the numbers cited in the report. “Yields are better than that,” Intel CFO David Zinsner told Reuters, though neither Zinsner nor Intel provided an alternate figure.

Whether Intel is struggling with 18A or not, the story is easy to believe because it fits a decade-long pattern going back to early delays for Intel’s 14 nm process in 2013 and 2014. Intel had finally switched its lineup to the 14 nm process by late 2015, but it was then stuck on that manufacturing process for years (2019–2020 for laptop chips, 2021–2022 for desktop chips).

Through that span, Intel’s PR strategy was familiar: insist that things were ramping up well internally and that bugs were being ironed out, express confidence in the roadmap, give itself a little wiggle room on launch dates of actual products, and continue onward.

In this case, Intel told Reuters that its Panther Lake chips are “fully on track” as of July 30. Intel reaffirmed that it would launch Panther Lake using the 18A manufacturing process in the second half of 2025, with more models coming in 2026. These will be the milestones to watch for—Intel could very well be struggling to ramp up yields on 18A chips, but the struggles could be normal-ish and planned-for ones that don’t delay the company’s plans any more than they already have.



The Week in AI Governance

There was enough governance-related news this week to spin it out.

Anthropic, Google, OpenAI, Mistral, Aleph Alpha, Cohere and others commit to signing the EU AI Code of Practice. Google has now signed. Microsoft says it is likely to sign.

xAI signed the AI safety chapter of the code but is refusing to sign the others, calling them overreach, especially as they pertain to copyright.

The only company that said it would not sign at all is Meta.

This was the underreported story. All the important AI companies other than Meta have gotten behind the safety section of the EU AI Code of Practice. This represents a considerable strengthening of their commitments, and introduces an enforcement mechanism. Even Anthropic will be forced to step up parts of their game.

That leaves Meta as the rogue state defector that once again gives zero anythings about safety, as in whether we all die, and also safety in its more mundane forms. Lol, we are Meta, indeed. So the question is, what are we going to do about it?

xAI took a middle position. I see the safety chapter as by far the most important, so as long as xAI is signing that and taking it seriously, great. Refusing the other parts is a strange flex, and I don’t know exactly what their problem is since they didn’t explain. They simply called it ‘unworkable,’ which is odd when Google, OpenAI and Anthropic all declared they found it workable.

Then again, xAI finds a lot of things unworkable. Could be a skill issue.

This is a sleeper development that could end up being a big deal. When I say ‘against regulations’ I do not mean against AI regulations. I mean against all ‘regulations’ in general, no matter what, straight up.

From the folks who brought you ‘figure out who we technically have the ability to fire and then fire all of them, and if something breaks maybe hire them back, this is the Elon way, no seriously’ and also ‘whoops we misread something so we cancelled PEPFAR and a whole lot of people are going to die,’ Doge is proud to give you ‘if a regulation is not technically required by law it must be an unbridled bad thing we can therefore remove, I wonder why they put up this fence.’

Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel (WaPo): The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans.

Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The conflation here is absolute. There are two categories of regulations: The half ‘required by law,’ and the half ‘worthy of trimming.’ Think of the trillions you can save.

They then try to hedge and claim that’s not how it is going to work.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that “all options are being explored” to achieve the president’s goal of deregulating government.

No decisions have been completed on using AI to slash regulations, a HUD spokesperson said.

The spokesperson continued: “The intent of the developments is not to replace the judgment, discretion and expertise of staff but be additive to the process.”

That would be nice. I’m far more ‘we would be better off with a lot fewer regulations’ than most. I think it’s great to have an AI tool that splits off the half we can consider cutting from the half we are stuck with. I still think that ‘cut everything that a judge wouldn’t outright reverse if you tried cutting it’ is not a good strategy.

I find the ‘no we will totally consider whether this is a good idea’ talk rather hollow, both because of track record and also they keep telling us what the plan is?

“The White House wants us higher on the leader board,” said one of the three people. “But you have to have staff and time to write the deregulatory notices, and we don’t. That’s a big reason for the holdup.”

That’s where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent just a few hours to cancel each of the 100,000 regulations, the PowerPoint claims.

They then close by pointing out that the AI makes mistakes even on the technical level it is addressing. Well, yeah.

Also, welcome to the future of journalism:

China has its own AI Action Plan and is calling for international cooperation on AI. Wait, what do they mean by that? If you look in the press, that depends who you ask. All the news organizations will be like ‘the Chinese released an AI Action Plan’ and then not link to the actual plan, I had to have o3 dig it up.

Here’s o3’s translation of the actual text. This is almost all general gestures in the direction of capabilities, diffusion, infrastructure and calls for open models. It definitely is not an AI Action Plan in the sense that America offered an AI Action Plan, which had lots of specific actionable proposals. This is more of a general outline of a plan and statement of goals, at best. At least it doesn’t talk about or call for a ‘race’ but a call for everything to be open and accelerated is not obviously better.

  • Seize AI opportunities together. Governments, international organizations, businesses, research institutes, civil groups, and individuals should actively cooperate, accelerate digital‑infrastructure build‑out, explore frontier AI technologies, and spread AI applications worldwide, fully unlocking AI’s power to drive growth, achieve the UN‑2030 goals, and tackle global challenges.

  • Foster AI‑driven innovation. Uphold openness and sharing, encourage bold experimentation, build international S‑and‑T cooperation platforms, harmonize policy and regulation, and remove technical barriers to spur continuous breakthroughs and deep “AI +” applications.

  • Empower every sector. Deploy AI across manufacturing, consumer services, commerce, healthcare, education, agriculture, poverty reduction, autonomous driving, smart cities, and more; share infrastructure and best practices to supercharge the real economy.

  • Accelerate digital infrastructure. Expand clean‑energy grids, next‑gen networks, intelligent compute, and data centers; create interoperable AI infrastructure and unified compute‑power standards; support especially the Global South in accessing and applying AI.

  • Build a pluralistic open‑source ecosystem. Promote cross‑border open‑source communities and secure platforms, open technical resources and interfaces, improve compatibility, and let non‑sensitive tech flow freely.

  • Supply high‑quality data. Enable lawful, orderly, cross‑border data flows; co‑create top‑tier datasets while safeguarding privacy, boosting corpus diversity, and eliminating bias to protect cultural and ecosystem diversity.

  • Tackle energy and environmental impacts. Champion “sustainable AI,” set AI energy‑ and water‑efficiency standards, promote low‑power chips and efficient algorithms, and scale AI solutions for green transition, climate action, and biodiversity.

  • Forge standards and norms. Through ITU, ISO, IEC, and industry, speed up standards on safety, industry, and ethics; fight algorithmic bias and keep standards inclusive and interoperable.

  • Lead with public‑sector adoption. Governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.

  • Govern AI safety. Run timely risk assessments, create a widely accepted safety framework, adopt graded management, share threat intelligence, tighten data‑security across the pipeline, raise explainability and traceability, and prevent misuse.

  • Implement the Global Digital Compact. Use the UN as the main channel, aim to close the digital divide—especially for the Global South—and quickly launch an International AI Scientific Panel and a Global AI Governance Dialogue under UN auspices.

  • Boost global capacity‑building. Through joint labs, shared testing, training, industry matchmaking, and high‑quality datasets, help developing countries enhance AI innovation, application, and governance while improving public AI literacy, especially for women and children.

  • Create inclusive, multi‑stakeholder governance. Establish public‑interest platforms involving all actors; let AI firms share use‑case lessons; support think tanks and forums in sustaining global technical‑policy dialogue among researchers, developers, and regulators.

What does it have to say about safety or dealing with downsides? We have ‘forge standards and norms’ with a generic call for safety and ethics standards, which seems to mostly be about interoperability and ‘bias.’

Mainly we have ‘Govern AI safety,’ which is directionally nice to see I guess but essentially content free and shows no sign that the problems are being taken seriously on the levels we care about. Most concretely, in the ninth point, we have a call for regular safety audits of AI models. That all sounds like ‘the least you could do.’

Here’s one interpretation of the statement:

Brenda Goh (Reuters): China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

Li did not name the United States but appeared to refer to Washington’s efforts to stymie China’s advances in AI, warning that the technology risked becoming the “exclusive game” of a few countries and companies.

China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the “Global South”. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges including through a cross-border open source community.

As in, we notice you are ahead in AI, and that’s not fair. You should do everything in the open so you let us catch up in all the ways you are ahead, so we can bury you using the ways in which you are behind. That’s not an unreasonable interpretation.

Here’s another.

The Guardian: Chinese premier Li Qiang has proposed establishing an organisation to foster global cooperation on artificial intelligence, calling on countries to coordinate on the development and security of the fast-evolving technology, days after the US unveiled plans to deregulate the industry.

Li warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed.

“The risks and challenges brought by artificial intelligence have drawn widespread attention … How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones in the global south.

So that’s a call to keep security in mind, but every concrete reference is mundane and deals with misuse, and then they call for putting everything out into the open, with the main highlighted ‘risk’ to coordinate on being that America might get an advantage, and encouraging us to give it away via open models to ‘safeguard multilateralism.’

A third here, from the Japan Times, frames it as a call for an alliance to take aim at an American AI monopoly.

Director Michael Kratsios: China’s just-released AI Action Plan has a section that drives at a fundamental difference between our approaches to AI: whether the public or private sector should lead in AI innovation.

I like America’s odds of success.

He quotes point nine, which his translation has as ‘the public sector takes the lead in deploying applications.’ Whereas o3’s translation says ‘governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.’

Even in Michael’s preferred translation, this is saying government should aggressively deploy AI applications to improve government services. The American AI Action Plan, correctly, fully agrees with this. Nothing in the Chinese statement says to hold the private sector back. Quite the contrary.

The actual disagreement we have with point nine is the rest of it, where the Chinese think we should run regular safety audits, respect IP and enforce privacy. Those are not parts of the American AI Action Plan. Do you think we were right not to include those provisions, sir? If so, why?

Suppose in the future, we learned we were in a lot more danger than we think we are in now, and we did want to make a deal with China and others. Right now the two sides would be very far apart but circumstances could quickly change that.

Could we do it in a way that could be verified?

It wouldn’t be easy, but we do have tools.

This is the sort of thing we should absolutely be preparing to be able to do, whether or not we ultimately decide to do it.

Mauricio Baker: For the last year, my team produced the most technically detailed overview so far. Our RAND working paper finds: strong verification is possible—but we need ML and hardware research.

You can find the paper here and on arXiv. It includes a 5-page summary and a list of open challenges.

In the Cold War, the US and USSR used inspections and satellites to verify nuclear weapon limits. If future, powerful AI threatens to escape control or endanger national security, the US and China would both be better off with guardrails.

It’s a tough challenge:

– Verify narrow restrictions, like “no frontier AI training past some capability,” or “no mass-deploying if tests show unacceptable danger”

– Catch major state efforts to cheat

– Preserve confidentiality of models, data, and algorithms

– Keep overhead low

Still, reasons for optimism:

– No need to monitor all computers—frontier AI needs thousands of specialized AI chips.

– We can build redundant layers of verification. A cheater only needs to be caught once.

– We can draw from great work in cryptography and ML/hardware security.
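The "redundant layers" point can be made quantitative with a toy calculation (mine, not from the paper): if each verification layer independently detects a violation with some probability, a cheater must evade every layer, so the evasion probability shrinks geometrically with each layer added.

```python
# Toy model of redundant verification: a violator must slip past every
# independent layer, so even mediocre layers compound quickly.

def evasion_probability(layer_detection_rates):
    """Probability a single violation evades all layers, assuming each
    layer detects it independently with the given probability."""
    p_evade = 1.0
    for p_detect in layer_detection_rates:
        p_evade *= (1.0 - p_detect)
    return p_evade

# Three mediocre layers (say, chip telemetry, whistleblower programs,
# and intelligence collection), each catching a violation only half
# the time, still leave just a 12.5% chance of getting away with it:
print(evasion_probability([0.5, 0.5, 0.5]))  # 0.125
```

The independence assumption is doing real work here, which is exactly why the paper emphasizes drawing the layers from different domains (hardware, personnel, intelligence) rather than stacking variants of one mechanism.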

One approach is to use existing chip security features like Confidential Computing, built to securely verify chip activities. But we’d need serious design vetting, teardowns, and maybe redesigns before the US could strongly trust Huawei’s chip security (or frankly, NVIDIA’s).

“Off-chip” mechanisms could be reliable sooner: network taps or analog sensors (vetted, limited use, tamper evident) retrofitted onto AI data centers. Then, mutually secured, airgapped clusters could check if claimed compute uses are reproducible and consistent with sensor data.

Add approaches “simple enough to work”: whistleblower programs, interviews of personnel, and intelligence activities. Whistleblower programs could involve regular in-person contact—carefully set up so employees can anonymously reveal violations, but not much more.

We could have an arsenal of tried-and-tested methods to confidentially verify a US-China AI treaty. But at the current pace, in three years, we’ll just have a few speculative options. We need ML and hardware researchers, new RFPs by funders, and AI company pilot programs.

Jeffrey Ladish: Love seeing this kind of in-depth work on AI treaty verification. A key fact is verification doesn’t have to be bullet proof to be useful. We can ratchet up increasingly robust technical solutions while using other forms of HUMINT and SIGINT to provide some level of assurance.

Remember, the AI race is a mixed-motive conflict, per Schelling. Both sides have an incentive to seek an advantage, but also have an incentive to avoid mutually awful outcomes. Like with nuclear war, everyone loses if any side loses control of superhuman AI.

This makes coordination easier, because even if both sides don’t like or trust each other, they have an incentive to cooperate to avoid extremely bad outcomes.

It may turn out that even with real efforts there are not good technical solutions. But I think it is far more likely that we don’t find the technical solutions due to lack of trying, rather than that the problem is so hard that it cannot be done.

The reaction to the AI Action Plan was almost universally positive, including here from Nvidia and AMD. My own review, focused on the concrete proposals within, also reflected this. It far exceeded my expectations on essentially all fronts, so much so that I would be actively happy to see most of its proposals implemented rather than nothing be done.

I and others focused on the concrete policy, and especially concrete policy relative to expectations and what was possible in context, for which it gets high praise.

But a document like this might have a lot of its impact due to the rhetoric instead, even if it lacks legal force, or cause people to endorse the approach as ideal in absolute terms rather than being the best that could be done at the time.

So, for example, the actual proposals for open models were almost reasonable, but if the takeaway is lots more rhetoric of ‘yay open models’ like it is in this WSJ editorial, where the central theme is very clearly ‘we must beat China, nothing else matters, this plan helps beat China, so the plan is good’ then that’s really bad.

Another important example: Nothing in the policy proposals here makes future international cooperation harder. The rhetoric? A completely different story.

The same WSJ article also noticed the same obvious contradictions with other Trump policies that I did – throttling renewable energy and high-skilled immigration and even visas are incompatible with our goals here, the focus on ‘woke AI’ could have been much worse but remains a distraction, also I would add, what is up with massive cuts to STEM research if we are taking this seriously? If we are serious about winning and worry that one false move would ‘forfeit the race’ then we need to act like it.

Of course, none of that is up to the people who were writing the AI Action Plan.

What the WSJ editorial board didn’t notice, or mention at all, is the possibility that there are other risks or downsides at play here, and it dismisses outright the possibility of any form of coordination or cooperation. That’s a very wrong, dangerous and harmful attitude, one it shares with many in or lobbying the government.

A worry I have on reflection, that I wasn’t focusing on at the time, is that officials and others might treat the endorsements of the good policy proposals here as an endorsement of the overall plan presented by the rhetoric, especially the rhetoric at the top of the plan, or of the plan’s sufficiency and that it is okay to ignore and not speak about what the plan ignores and does not speak about.

That rhetoric was alarmingly (but unsurprisingly) terrible, as it is the general administration plan of emphasizing whenever possible that we are in an ‘AI race’ that will likely go straight to AGI and superintelligence even if those words couldn’t themselves be used in the plan, where ‘winning’ is measured in the mostly irrelevant ‘market share.’

And indeed, the inability to mention AGI or superintelligence in the plan leads exactly to the standard David Sacks lines that toxically center the situation on ‘winning the race’ by ‘exporting the American tech stack.’

I will keep repeating, if necessary until I am blue in the face, that this is effectively a call (the motivations for which I do not care to speculate) for sacrificing the future and getting us all killed in order to maximize Nvidia’s market share.

There is no ‘tech stack’ in the meaningful sense of necessary integration. You can run most any AI model on most any advanced chip, and switch on an hour’s notice.

It does not matter who built the chips. It matters who runs the chips and for whose benefit. Supply is constrained by manufacturing capacity, so every chip we sell is one less chip we have. The idea that failure to hand over large percentages of the top AI chips to various authoritarians, or even selling H20s directly to China as they currently plan to do, would ‘forfeit’ ‘the race’ is beyond absurd.

Indeed, both the rhetoric and actions discussed here do the exact opposite. It puts pressure on others especially China to push harder towards ‘the race’ including the part that counts, the one to AGI, and also the race for diffusion and AI’s benefits. And the chips we sell arm China and others to do this important racing.

There is later talk acknowledging that ‘we do not intend to ignore the risks of this revolutionary technological power.’ But Sacks frames this as entirely about the risk that AI will be misused or stolen by malicious actors. Which is certainly a danger, but far from the primary thing to worry about.

That’s what happens when you are forced to pretend AGI, ASI, potential loss of control and all other existential risks do not exist as possibilities. The good news is that there are some steps in the actual concrete plan to start preparing for those problems, even if they are insufficient and it can’t be explained, but it’s a rough path trying to sustain even that level of responsibility under this kind of rhetorical oppression.

The vibes and rhetoric were accelerationist throughout, especially at the top, and completely ignored the risks and downsides of AI, and the dangers of embracing a rhetoric based on an ‘AI race’ that we ‘must win,’ and where that winning mostly means chip market share. Going down this path is quite likely to get us all killed.

I am happy to make the trade of allowing the rhetoric to be optimistic, and to present the Glorious Transhumanist Future as likely to be great even as we have no idea how to stay alive and in control while getting there, so long as we can still agree to take the actions we need to take in order to tackle that staying alive and in control bit – again, the actions are mostly the same even if you are highly optimistic that it will work out.

But if you dismiss the important dangers entirely, then your chances get much worse.

So I want to be very clear that I hate that rhetoric, I think it is no good, very bad rhetoric both in terms of what is present and what (often with good local reasons) is missing, while reiterating that the concrete particular policy proposals were as good as we could reasonably have hoped for on the margin, and the authors did as well as they could plausibly have done with people like Sacks acting as veto points.

That includes the actions on ‘preventing Woke AI,’ which have convinced even Sacks to frame this as preventing companies from intentionally building DEI into their models. That’s fine, I wouldn’t want that either.

Even outlets like Transformer weighed in positively, with them calling the plan ‘surprisingly okay’ and noting its ability to get consensus support, while ignoring the rhetoric. They correctly note the plan is very much not adequate. It was a missed opportunity to talk about or do something about various risks (although I understand why), and there was much that could have been done that wasn’t.

Seán Ó hÉigeartaigh: Crazy to reflect on the three global AI competitions going on right now:

– 1. US political leadership have made AI a prestige race, echoing the Space Race. It’s cool and important and strategic, and they’re going to Win.

– 2. For Chinese leadership AI is part of economic strength, soft power and influence. Technology is shared, developing economies will be built on Chinese fundamental tech, the Chinese economy and trade relations will grow. Weakening trust in a capricious US is an easy opportunity to take advantage of.

– 3. The AGI companies are racing something they think will out-think humans across the board, that they don’t yet know how to control, and think might literally kill everyone.

Scariest of all is that it’s not at all clear to decision-makers that these three things are happening in parallel. They think they’re playing the same game, but they’re not.

I would modify the US political leadership position. I think to a lot of them it’s literally about market share, primarily chip market share. I believe this because they keep saying, with great vigor, that it is literally about chip market share. But yes, they think this matters because of prestige, and because this is how you get power.

My guess is, mostly:

  1. The AGI companies understand these are three distinct things.

    1. They are using the confusions of political leadership for their own ends.

  2. The Chinese understand there are two distinct things, but not three.

    1. As in, they know what US leadership is doing, and they know what they are doing, and they know these are distinct things.

    2. They do not feel the AGI and understand its implications.

  3. The bulk of the American political class cannot differentiate between the US and Chinese strategies, or strategic positions, or chooses to pretend not to, cannot imagine things other than ordinary prestige, power and money, and cannot feel the AGI.

    1. There are those within the power structure who do feel the AGI, to varying extents, and are trying to sculpt actions (including the action plan) accordingly with mixed success.

    2. An increasing number of them, although still small, do feel the AGI to varying extents but have yet to cash that out into anything except ‘oh ’.

  4. There is of course a fourth race or competition, which is to figure out how to build it without everyone dying.

The actions one would take in each of these competitions are often very similar, especially the first three and often the fourth as well, but sometimes are very different. What frustrates me most is when there is an action that is wise on all levels, yet we still don’t do it.

Also, on the ‘preventing Woke AI’ question, the way the plan and order are worded seems designed to make compliance easy and not onerous, but given other signs from the Trump administration lately, I think we have reason to worry…

Fact Post: Trump’s FCC Chair says he will put a “bias monitor” in place who will “report directly” to Trump as part of the deal for Sky Dance to acquire CBS.

Ari Drennen: The term that the Soviet Union used for this job was “apparatchik” btw.

I was willing to believe that firing Colbert was primarily a business decision. This is very different. Imagine the headline in reverse: “Harris’s FCC Chair says she will put a “bias monitor” in place who will “report directly” to Harris as part of the deal for Sky Dance to acquire CBS.”

Now imagine it is 2029, and the headline is ‘AOC appoints new bias monitor for CBS.’ Now imagine it was FOX. Yeah. Maybe don’t go down this road?

Director Kratsios has now given us his view on the AI Action Plan. This is a chance to see how much it is viewed as terrible rhetoric versus its good policy details, and to what extent overall policy is going to be guided by good details versus terrible rhetoric.

Peter Wildeford offers his takeaway summary.

Peter Wildeford: Winning the Global AI Race

  1. The administration’s core philosophy is a direct repudiation of the previous one, which Kratsios claims was a “fear-driven” policy “manically obsessed” with hypothetical risks that stifled innovation.

  2. The plan is explicitly called an “Action Plan” to signal a focus on immediate execution and tangible results, not another government strategy document that just lists aspirational goals.

  3. The global AI race requires America to show the world a viable, pro-innovation path for AI development that serves as an alternative to the EU’s precautionary, regulation-first model.

He leads with hyperbolic slander, which is par for the course, but yes concrete action plans are highly useful and the EU can go too far in its regulations.

There are kind of two ways to go with this.

  1. You could label any attempt to do anything to ensure we don’t die as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, and thus you probably die.

  2. You could label the EU and Biden Administration as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, contrasting that with your superior approach, and then having paid this homage do reasonable things.

The AI Action Plan as written was the second one. But you have to do that on purpose, because the default outcome is to shift to the first one.

Executing the ‘American Stack’ Export Strategy

  1. The strategy is designed to prevent a scenario where the world runs on an adversary’s AI stack by proactively offering a superior, integrated American alternative.

  2. The plan aims to make it simple for foreign governments to buy American by promoting a “turnkey solution”—combining chips, cloud, models, and applications—to reduce complexity for the buyer.

  3. A key action is to reorient US development-finance institutions like the DFC and EXIM to prioritize financing for the export of the American AI stack, shifting their focus from traditional hard infrastructure.

The whole ‘export’ strategy is either nonsensical, or an attempt to control capital flow, because I heard a rumor that it is good to be the ones directing capital flow.

Once again, the ‘tech stack’ thing is not, as described here, what’s the word? Real.

The ‘adversary’ does not have a ‘tech stack’ to offer, they have open models people can run on the same chips. They don’t have meaningful chips to even run their own operations, let alone export. And the ‘tech’ does not ‘stack’ in a meaningful way.

Turnkey solutions and package marketing are real. I don’t see any reason for our government to be so utterly obsessed with them, or even involved at all. That’s called marketing and serving the customer. Capitalism solves this. Microsoft and Amazon and Google and OpenAI and Anthropic and so on can and do handle it.

Why do we suddenly think the government needs to be prioritizing financing this? Given that it includes chip exports, how is it different from ‘traditional hard infrastructure’? Why do we need financing for the rest of this illusory stack when it is actually software? Shouldn’t we still be focusing on ‘traditional hard infrastructure’ in the places we want it, and then whenever possible exporting the inference?

Refining National Security Controls

  1. Kratsios argues the biggest issue with export controls is not the rules themselves but the lack of resources for enforcement, which is why the plan calls for giving the Bureau of Industry and Security (BIS) the tools it needs.

  2. The strategy is to maintain strict controls on the most advanced chips and critical semiconductor-manufacturing components, while allowing sales of less-advanced chips under a strict licensing regime.

  3. The administration is less concerned with physical smuggling of hardware and more focused on preventing PRC front companies from using legally exported hardware for large-scale, easily flaggable training runs.

  4. Proposed safeguards against misuse are stringent “Know Your Customer” (KYC) requirements paired with active monitoring for the scale and scope of compute jobs.

It is great to see the emphasis on enforcement. It is great to hear that the export control rules are not the issue.

In which case, can we stop waiving them, such as with H20 sales to China? Thank you. There is of course a level at which chips can be safely sold even directly to China, but the experts all agree the H20 is past that level.

The claimed lack of concern about smuggling turns a blind eye to overwhelming evidence of widespread smuggling. I don’t much care what they claim to be concerned about, I care about the actual enforcement, and we need enforcement. Yes, we should stop ‘easily flaggable’ PRC training runs and use KYC techniques, but this is saying we should look for our keys under the streetlight and then, if we don’t find the keys, assume we can start our car without them.

Championing ‘Light-Touch’ Domestic Regulation

  1. The administration rejects the idea of a single, overarching AI law, arguing that expert agencies like the FDA and DOT should regulate AI within their specific domains.

  2. The president’s position is that a “patchwork of regulations” across 50 states is unacceptable because the compliance burden disproportionately harms innovative startups.

  3. While using executive levers to discourage state-level rules, the administration acknowledges that a durable solution requires an act of Congress to create a uniform federal standard.

Yes, a ‘uniform federal standard’ would be great, except they have no intention of even pretending to meaningfully pursue one. They want each federal agency to do its thing in its own domain, as in a ‘use case’ based AI regime which when done on its own is the EU approach and doomed to failure.

I do acknowledge the step down from ‘kill state attempts to touch anything AI’ (aka the insane moratorium) to ‘discourage’ state-level rules using ‘executive levers,’ at which point we are talking price. One worries the price will get rather extreme.

Addressing AI’s Economic Impact at Home

  1. Kratsios highlights that the biggest immediate labor need is for roles like electricians to build data centers, prompting a plan to retrain Americans for high-paying infrastructure jobs.

  2. The technology is seen as a major productivity tool that provides critical leverage for small businesses to scale and overcome hiring challenges.

  3. The administration issued a specific executive order on K-12 AI education to ensure America’s students are prepared to wield these tools in their future careers.

Ahem, immigration, ahem, also these things rarely work, but okay, sure, fine.

Prioritizing Practical Infrastructure Over Hypothetical Risk

  1. Kratsios asserts that chip supply is no longer a major constraint; the key barriers to the AI build-out are shortages of skilled labor and regulatory delays in permitting.

  2. Success will be measured by reducing the time from permit application to “shovels in the ground” for new power plants and data centers.

  3. The former AI Safety Institute is being repurposed to focus on the hard science of metrology—developing technical standards for measuring and evaluating models, rather than vague notions of “safety.”

It is not the only constraint, but it is simply false to say that chip supply is no longer a major constraint.

Defining success in infrastructure in this way would, if taken seriously, lead to large distortions in the usual obvious Goodhart’s Law ways. I am going to give the benefit of the doubt and presume this ‘success’ definition is local, confined to infrastructure.

If the only thing America’s former AISI can now do is formal, measured technical standards, then that is at least a useful thing that it can hopefully do well, but it basically rules out at the conceptual level the idea of actually addressing the most important safety issues, by dismissing them as ‘vague.’

This goes beyond ‘that which is measured is managed’ to an open plan of ‘that which is not measured is not managed, it isn’t even real.’ Guess how that turns out.

Defining the Legislative Agenda

  1. While the executive branch has little power here, Kratsios identifies the use of copyrighted data in model training as a “quite controversial” area that Congress may need to address.

  2. The administration would welcome legislation that provides statutory cover for the reformed, standards-focused mission of the Center for AI Standards and Innovation (CAISI).

  3. Continued congressional action is needed for appropriations to fund critical AI-related R&D across agencies like the National Science Foundation.

TechCrunch: 20 national security experts urge Trump administration to restrict Nvidia H20 sales to China.

The letter says the H20 is a potent accelerator of China’s frontier AI capabilities and could be used to strengthen China’s military.

Americans for Responsible Innovation: The H20 and the AI models it supports will be deployed by China’s PLA. Under Beijing’s “Military-Civil Fusion” strategy, it’s a guarantee that H20 chips will be swiftly adapted for military purposes. This is not a question of trade. It is a question of national security.

It would be bad enough if this was about selling the existing stock of H20s, that Nvidia has taken a writedown on, even though it could easily sell them in the West instead. It is another thing entirely that Nvidia is using its capacity on TSMC machines to make more of them, choosing to create chips to sell directly to China instead of creating chips for us.

Ruby Scanlon: Nvidia placed orders for 300,000 H20 chipsets with contract manufacturer TSMC last week, two sources said, with one of them adding that strong Chinese demand had led the US firm to change its mind about just relying on its existing stockpile.

It sounds like we’re planning on feeding what would have been our AI chips to China. And then maybe you should start crying? Or better yet tell them they can’t do it?

I share Peter Wildeford’s bafflement here:

Peter Wildeford: “China is close to catching up to the US in AI so we should sell them Nvidia chips so they can catch up even faster.”

I never understand this argument from Nvidia.

The argument is also false, Nvidia is lying, but I don’t understand it even if it were true.

There is only a 50% premium to buy Nvidia B200 systems within China, which suggests quite a lot of smuggling is going on.

Tao Burga: Nvidia still insists that there’s “no evidence of any AI chip diversion.” Laughable. All while lobbying against the data center chip location verification software that would provide the evidence. Tell me, where does the $1bn [in AI chips smuggled to China] go?

Rob Wiblin: Nvidia successfully campaigning to get its most powerful AI chips into China has such “the capitalists will sell us the rope with which we will hang them” energy.

Various people I follow keep emphasizing that China is smuggling really a lot of advanced AI chips, including B200s and such, and perhaps we should be trying to do something about it, because it seems rather important.

Chipmakers will always oppose any proposal to track chips or otherwise crack down on smuggling and call it ‘burdensome,’ where the ‘burden’ is ‘if you did this they would not be able to smuggle as many chips, and thus we would make less money.’

Reuters Business: Demand in China has begun surging for a business that, in theory, shouldn’t exist: the repair of advanced artificial intelligence chipsets that the US has banned the export of to its trade and tech rival.

Peter Wildeford: Nvidia position: “datacenters from smuggled products is a losing proposition […] Datacenters require service and support, which we provide only to authorized NVIDIA products.”

Reality: Nvidia AI chip repair industry booms in China for banned products.

Scott Bessent warns that TSMC’s $40 billion Arizona fab, which could meet 7% of American chip demand, keeps getting delayed, and he blames inspectors and red tape. There’s confusion in the headline suggesting he is warning it would ‘only’ meet 7% of demand, but 7% of demand would be amazing for one plant, and the article’s text reflects this.

Bessent criticized regulatory hurdles slowing construction of the $40 billion facility. “Evidently, these chip design plants are moving so quickly, you’re constantly calling an audible and you’ve got someone saying, ‘Well, you said the pipe was going to be there, not there. We’re shutting you down,’” he explained.

It does also mean that if we want to meet 100% or more of demand we will need a lot more plants, but we knew that.

Epoch reports that Chinese hardware is behind American hardware, and is ‘closing the gap’ but faces major obstacles in chip manufacturing capability.

Epoch: Even if we exclude joint ventures with U.S., Australian, or U.K. institutions (where the developers can access foreign silicon), the clear majority of homegrown models relied on NVIDIA GPUs. In fact, it took until January 2024 for the first large language model to reportedly be trained entirely on Chinese hardware, arguably years after the first large language models.

Probably the most important reason for the dominance of Western hardware is that China has been unable to manufacture these AI chips in adequate volumes. Whereas Huawei reportedly manufactured 200,000 Ascend 910B chips in 2024, estimates suggest that roughly one million NVIDIA GPUs were legally delivered to China in the same year.

That’s right. For every top level Huawei chip manufactured, Nvidia sold five to China. No, China is not about to export a ‘full Chinese tech stack’ for free the moment we turn our backs. They’re offering downloads of r1 and Kimi K2, to be run on our chips, and they use all their own chips internally because they still have a huge shortage.

Put bluntly, we don’t see China leaping ahead on compute within the next few years. Not only would China need to overcome major obstacles in chip manufacturing and software ecosystems, they would also need to surpass foreign companies making massive investments into hardware R&D and chip fabrication.

Unless export controls erode or Beijing solves multiple technological challenges in record time, we think that China will remain at least one generation behind in hardware. This doesn’t prevent Chinese developers from training and running frontier AI models, but it does make it much more costly.

Overall, we think these costs are large enough to put China at a substantial disadvantage in AI scaling for at least the rest of the decade.

Beating China may or may not be your number one priority. We do know that taking export controls seriously is the number one priority for ‘beating China.’

Intel will cancel 14A and following nodes, essentially abandoning the technological frontier, if it cannot win a major external customer.


The Week in AI Governance


Research roundup: 7 cool science stories we almost missed


Other July stories: Solving a 150-year-old fossil mystery and the physics of tacking a sailboat.

150-year-old fossil of Palaeocampa anthrax isn’t a sea worm after all. Credit: Christian McCall

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. July’s list includes the discovery of the tomb of the first Maya king of Caracol in Belize, the fluid dynamics of tacking a sailboat, how to determine how fast blood was traveling when it stained cotton fabric, and how the structure of elephant ears could lead to more efficient indoor temperature control in future building designs, among other fun stories.

Tomb of first king of Caracol found

University of Houston provost and archeologist Diane Chase in newly discovered tomb of the first ruler of the ancient Maya city Caracol and the founder of its royal dynasty.

Credit: Caracol Archeological Project/University of Houston

Archaeologists Arlen and Diane Chase are the foremost experts on the ancient Maya city of Caracol in Belize and are helping to pioneer the use of airborne LiDAR to locate hidden structures in dense jungle, including a web of interconnected roadways and a cremation site in the center of the city’s Northeast Acropolis plaza. They have been painstakingly excavating the site since the mid-1980s. Their latest discovery is the tomb of Te K’ab Chaak, Caracol’s first ruler, who took the throne in 331 CE and founded a dynasty that lasted more than 460 years.

This is the first royal tomb the husband-and-wife team has found in their 40+ years of excavating the Caracol site. Te K’ab Chaak’s tomb (containing his skeleton) was found at the base of a royal family shrine, along with pottery vessels, carved bone artifacts, jadeite jewelry, and a mosaic jadeite death mask. The Chases estimate that the ruler likely stood about 5’7″ tall and was probably quite old when he died, given his lack of teeth. The Chases are in the process of reconstructing the death mask and conducting DNA and stable isotope analysis of the skeleton.

How blood splatters on clothing

Cast-off blood stain pattern

Credit: Jimmy Brown/CC BY 2.0

Analyzing blood splatter patterns is a key focus in forensic science, and physicists have been offering their expertise for several years now, including in two 2019 studies on splatter patterns from gunshot wounds. The latest insights gleaned from physics concern the distinct ways in which blood stains cotton fabrics, according to a paper published in Forensic Science International.

Blood is a surprisingly complicated fluid, in part because the red blood cells in human blood can form long chains, giving it the consistency of sludge. And blood starts to coagulate immediately once it leaves the body. Blood is also viscoelastic: not only does it deform slowly when exposed to an external force, but once that force has been removed, it will return to its original configuration. Add in coagulation and the type of surface on which it lands, and correctly interpreting the resulting spatter patterns becomes incredibly difficult.

The co-authors of the July study splashed five different fabric surfaces with pig’s blood at varying velocities, capturing the action with high-speed cameras. They found that when a blood stain has “fingers” spreading out from the center, the more fingers there are, the faster the blood was traveling when it struck the fabric. And the faster the blood was moving, the more “satellite droplets” there will be—tiny stains surrounding the central stain. Finally, it’s much easier to estimate the velocity of blood splatter on plain-woven cotton than on other fabrics like twill. The researchers plan to extend future work to include a wider variety of fabrics, weaves, and yarns.

DOI: Forensic Science International, 2025. 10.1016/j.forsciint.2025.112543  (About DOIs).

Offshore asset practices of the uber-rich

The uber-rich aren’t like the rest of us in so many ways, including their canny exploitation of highly secretive offshore financial systems to conceal their assets and/or identities. Researchers at Dartmouth have used machine learning to analyze two public databases and identified distinct patterns in the strategies oligarchs and billionaires in 65 different countries employ when squirreling away offshore assets, according to a paper published in the journal PLoS ONE.

One database tracks offshore finance, while the other rates different countries on their “rule of law.” This enabled the team to study key metrics like how much of their assets elites move offshore, how much they diversify, and how much they make use of “blacklisted” offshore centers that are not part of the mainstream financial system. The researchers found three distinct patterns, all tied to where an oligarch comes from.

Billionaires from authoritarian countries are more likely to diversify their hidden assets across many different centers—a “confetti strategy”—perhaps because these are countries likely to exact political retribution. Others, from countries with effective government regulations—or where there is a pronounced lack of civil rights—are more likely to employ a “concealment strategy” that includes more blacklisted jurisdictions, relying more on bearer shares that protect their anonymity. Those elites most concerned about corruption and/or having their assets seized typically employ a hybrid strategy.

The work builds on an earlier 2023 study concluding that issuing sanctions on individual oligarchs in Russia, China, the US, and Hong Kong is less effective than targeting the small, secretive network of financial experts who manage that wealth on behalf of the oligarchs. That’s because sanctioning just one wealth manager effectively takes out several oligarchs at once, per the authors.

DOI: PLoS ONE, 2025. 10.1371/journal.pone.0326228  (About DOIs).

Medieval remedies similar to TikTok trends

Medieval manuscripts like the Cotton MS Vitellius C III highlight uses for herbs that reflect modern-day wellness trends.

Credit: The British Library

The Middle Ages are stereotypically described as the “Dark Ages,” with a culture driven by superstition—including its medical practices. But a perusal of the hundreds of medical manuscripts collected in the online Corpus of Early Medieval Latin Medicine (CEMLM) reveals that in many respects, medical practices were much more sophisticated; some of the remedies are not much different from alternative medicine remedies touted by TikTok influencers today. That certainly doesn’t make them medically sound, but it does suggest we should perhaps not be too hasty in whom we choose to call backward and superstitious.

Per Binghamton University historian Meg Leja, people of the medieval era were not “anti-science.” In fact, they were often quite keen on learning from the natural world. And their health practices, however dubious they might appear to us—lizard shampoo, anyone?—were largely based on the best knowledge available at the time. There are detox cleanses and topical ointments, such as crushing the stone of a peach, mixing it with rose oil, and smearing it on one’s forehead to relieve migraine pain. (Rose oil may actually be an effective migraine pain reliever.) The collection is well worth perusing; pair it with the Wellcome-funded Curious Cures in Cambridge Libraries to learn even more about medieval medical recipes.

Physics of tacking a sailboat

The Courant Institute's Christiana Mavroyiakoumou, above at Central Park's Conservatory Water with model sailboats

Credit: Jonathan King/NYU

Possibly the most challenging basic move for beginner sailors is learning how to tack to sail upwind. Done correctly, the sail will flip around into a mirror image of its previous shape. And in competitive sailboat racing, a bad tack can lose the race. So physicists at New York University’s Courant Institute decided to investigate the complex fluid dynamics at play to shed more light on the tricky maneuver, according to a paper published in the journal Physical Review Fluids.

After modeling the maneuver and conducting numerical simulations, the physicists concluded that there are three primary factors that determine a successful tack: the stiffness of the sail, its tension before the wind hits, and the final sail angle in relation to the direction of the wind. Ideally, one wants a less flexible, less curved sail with high tension prior to hitting the wind and to end up with a 20-degree final sail angle. Other findings: It’s harder to flip a slack sail when tacking, and how fast one manages to flip the sail depends on the sail’s mass and the speed and acceleration of the turn.

DOI: Physical Review Fluids, 2025. 10.1103/37xg-vcff  (About DOIs).

Elephant ears inspire building design

African bush elephant with ears spread in a threat or attentive position and visible blood vessels

Maintaining a comfortable indoor temperature constitutes the largest fraction of energy usage for most buildings, with the surfaces of walls, windows, and ceilings contributing to roughly 63 percent of energy loss. Engineers at Drexel University have figured out how to make surfaces that help rather than hamper efforts to maintain indoor temperatures: using so-called phase-change materials that can absorb and release thermal energy as needed as they shift between liquid and solid states. They described the breakthrough in a paper published in the Journal of Building Engineering.

The Drexel group previously developed a self-warming concrete using a paraffin-based material, similar to the stuff used to make candles. The trick this time around, they found, was to create the equivalent of a vascular network within cement-based building materials. They used a printed polymer matrix to create a grid of channels in the surface of concrete and filled those channels with the same paraffin-based material. When temperatures drop, the material turns into a solid and releases heat energy; as temperatures rise, it shifts its phase to a liquid and absorbs heat energy.

The group tested several different configurations and found that the most effective combination of strength and thermal regulation was realized with a diamond-shaped grid, which boasted the most vasculature surface area. This configuration successfully slowed the cooling and heating of its surface to between 1 and 1.2 degrees Celsius per hour, while holding up against stretching and compression tests. The structure is similar to that of jackrabbit and elephant ears, which have extensive vascular networks to help regulate body temperature.

DOI: Journal of Building Engineering, 2025. 10.1016/j.jobe.2025.112878  (About DOIs).

ID-ing a century-old museum specimen

Neotype of Palaeocampa anthrax from the Mazon Creek Lagerstätte and rediscovered in the Invertebrate Paleontology collection of the MCZ.

Credit: Richard J. Knecht

Natural history museums have lots of old specimens in storage, and revisiting those specimens can sometimes lead to new discoveries. That’s what happened to University of Michigan evolutionary biologist Richard J. Knecht as he was poring over a collection at Harvard’s Museum of Comparative Zoology while a grad student there. One of the fossils, originally discovered in 1865, was labeled a millipede. But Knecht immediately recognized it as a type of lobopod, according to a paper published in the journal Communications Biology. It’s the earliest lobopod yet found, and this particular species also marks an evolutionary leap since it’s the first known lobopod to be non-marine.

Lobopods are the evolutionary ancestors to arthropods (insects, spiders, and crustaceans), and their fossils are common along Paleozoic sea beds. Apart from tardigrades and velvet worms, however, they were thought to be confined to oceans. But Palaeocampa anthrax has legs on every trunk segment, as well as almost 1,000 bristly spines covering its body with orange halos at their tips. Infrared spectroscopy revealed traces of fossilized molecules—likely a chemical that emanated from the spinal tips. Since any chemical defense would just disperse in water, limiting its effectiveness, Knecht concluded that Palaeocampa anthrax was most likely amphibious rather than being solely aquatic.

DOI: Communications Biology, 2025. 10.1038/s42003-025-08483-0  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 7 cool science stories we almost missed


In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network

“One of the most unusual elements of this case was the attacker’s use of physical access to install a Raspberry Pi device,” Group-IB Senior Digital Forensics and Incident Response Specialist Nam Le Phuong wrote. “This device was connected directly to the same network switch as the ATM, effectively placing it inside the bank’s internal network. The Raspberry Pi was equipped with a 4G modem, allowing remote access over mobile data.”

To maintain persistence, UNC2891 also compromised a mail server because it had constant Internet connectivity. The Raspberry Pi and the mail server backdoor would then communicate by using the bank’s monitoring server as an intermediary. The monitoring server was chosen because it had access to almost every server within the data center.

The Network Monitoring Server as an intermediary between the Raspberry Pi and the Mail Server.

Credit: Group-IB


As Group-IB was initially investigating the bank’s network, researchers noticed some unusual behaviors on the monitoring server, including an outbound beaconing signal every 10 minutes and repeated connection attempts to an unknown device. The researchers then used a forensic tool to analyze the communications. The tool identified the endpoints as a Raspberry Pi and the mail server but was unable to identify the process names responsible for the beaconing.
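A beacon like the one the researchers noticed leaves a simple statistical fingerprint: connection attempts spaced at nearly constant intervals. As a minimal sketch (this is illustrative Python, not Group-IB's actual tooling; the function name, tolerance value, and sample timestamps are all hypothetical), one can flag a host whose inter-connection gaps show very low relative jitter:

```python
from statistics import mean, pstdev

def find_beacons(timestamps, tolerance=0.05):
    """Return True if connection times look like beaconing:
    inter-arrival gaps that are nearly constant (coefficient of
    variation below `tolerance`), the pattern a fixed-interval
    beacon such as the 10-minute one described above would leave."""
    if len(timestamps) < 3:
        return False  # too few events to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps) <= tolerance

# A ~600-second (10-minute) beacon with slight jitter vs. ordinary traffic:
beacon_times = [0, 600, 1201, 1799, 2400]
normal_times = [0, 40, 700, 1100, 2600]
print(find_beacons(beacon_times))  # True
print(find_beacons(normal_times))  # False
```

Real detections would also have to tolerate missed check-ins and deliberate jitter, but the constant-interval signal is the starting point.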

The forensic triage tool is unable to collect the relevant process name or ID associated with the socket.

Credit: Group-IB


The researchers then captured the system memory as the beacons were sent. The review identified the process as lightdm, named after the open source LightDM display manager. The process appeared to be legitimate, but the researchers found it suspicious because the LightDM binary was installed in an unusual location. After further investigation, the researchers discovered that the processes of the custom backdoor had been deliberately disguised in an attempt to throw researchers off the scent.

Phuong explained:

The backdoor process is deliberately obfuscated by the threat actor through the use of process masquerading. Specifically, the binary is named “lightdm”, mimicking the legitimate LightDM display manager commonly found on Linux systems. To enhance the deception, the process is executed with command-line arguments resembling legitimate parameters, for example lightdm --session child 11 19, in an effort to evade detection and mislead forensic analysts during post-compromise investigations.

These backdoors were actively establishing connections to both the Raspberry Pi and the internal Mail Server.

As noted earlier, the processes were disguised using the Linux bind mount. Following that discovery, Group-IB added the technique to the MITRE ATT&CK framework as “T1564.013 – Hide Artifacts: Bind Mounts.”
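Bind-mount hiding of this kind leaves a telltale trace in the mount table: a filesystem mounted directly over a /proc/&lt;pid&gt; directory. A minimal, hypothetical detection sketch in Python (the function name and sample data are illustrative, not part of Group-IB's tooling; fields follow the standard /proc mounts layout of device, mountpoint, type, options):

```python
import re

def find_proc_overmounts(mounts_text):
    """Scan the text of /proc/self/mounts for anything mounted over
    a /proc/<pid> directory -- the residue that the bind-mount
    hiding technique (MITRE ATT&CK T1564.013) leaves behind."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        # Field 2 is the mount point; a per-PID /proc path is suspicious.
        if len(fields) >= 2 and re.fullmatch(r"/proc/\d+", fields[1]):
            hits.append(fields[1])
    return hits

sample = (
    "/dev/sda1 / ext4 rw 0 0\n"
    "proc /proc proc rw 0 0\n"
    "/proc/1337 /proc/4242 none rw,bind 0 0\n"
)
print(find_proc_overmounts(sample))  # ['/proc/4242']
```

The normal /proc mount itself does not match; only per-process entries that have been overlaid get flagged, which is exactly what hides a process's real command line from tools that read /proc.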

Group-IB didn’t say where the compromised switching equipment was located or how attackers managed to plant the Raspberry Pi. The attack was detected and shut down before UNC2891 was able to achieve its final goal of infecting the ATM switching network with the CakeTap backdoor.



Childhood and Education #12: College Admissions

  1. College Applications.

  2. The College Application Essay (Is) From Hell.

  3. Don’t Guess The Teacher’s Password, Ask For It Explicitly.

  4. A Dime a Dozen.

  5. Treat Admissions Essays Like Games of Balderdash.

  6. It’s About To Get Worse.

  7. Alternative Systems Need Good Design.

  8. The SAT Scale Is Broken On Purpose.

In case you missed it, yes, of course Harvard admissions are way up and Harvard did this on purpose. The new finding is that Harvard was recruiting African American applicants in particular, partly in order to balance conditional acceptance rates. One could of course also argue that the goal was ‘to find more worthy students,’ with the counterevidence being that the test scores of such applicants declined as more applications came in (as they obviously would for any group) but the scores of those who got admitted didn’t change.

As a student, one needs to understand that schools love applications they can then reject, and might care about that even more depending on your details. So when they tell you to apply, that you have a shot, that is not the evidence you want it to be.

Or, your future depends on knowing exactly the right way to lie your ass off, and having sufficiently low integrity to do so shamelessly.

One can ask questions like: If you can get hired by Google for an engineering job, have a 4.4 GPA and a 1590 SAT score, and get rejected by 5 University of California schools and 16 out of 18 schools overall, is it fair to say that was probably an illegal form of racial discrimination, as the applicant’s lawsuit claims? It doesn’t automatically have to be that; there could in theory be other details of his application that are problems.

I’d like to say to that objection ‘who are we kidding’ but maybe? You had two groups debating this recently, after a different applicant, Zack Yadegari, got rejected from all the colleges for being too successful and daring to write about that.

One group said this was the situation, And That’s Terrible.

The other group said, yes this is the situation, And You’re Terrible, get with the program or go off and die, oh and it’s just the essay, he can apply next year it’s fine.

Zack Yadegari: 18 years old

34 ACT

4.0 GPA

$30M ARR biz

Stanford ❌ MIT ❌ Harvard ❌ Yale ❌ WashU ❌ Columbia ❌ UPenn ❌ Princeton ❌ Duke ❌ USC ❌ Georgia Tech ✅ UVA ❌ NYU ❌ UT ✅ Vanderbilt ❌ Brown ❌ UMiami ✅ Cornell ❌

I dedicated 2 straight weeks to my essays doing nothing else. Had them looked over by the best writers I know.

Michael Druggan: When I applied to Harvard, I was a USAMO winner (only 12 per year and with significant duplicates that works out to significantly less than 12 per graduating class). I also had a clean 36 in every section on the ACT from my very first attempt. Neither of those are dime-a-dozen stats.

The admissions committee didn’t care. They rejected me in favor of legions of clearly academically inferior candidates who did a better job kissing their asses in their application essays. Let’s not pretend this process is anything but a farce.

Avi Shiffmann: My (in my opinion awful) personal statement that got me into Harvard. [comments mostly talk about how the essay is good actually]

Felpix: College admissions is so competitive, kids are just crashing out [describes a kid who basically did everything you could imagine to get in to study Computer Science, and still got rejected from the majors, reasons unknown.]

Gabriel: this is incredibly sad, someone spent their entire childhood to get into MIT with perfect scores without getting in, and now can’t live his dream

all this effort could have been spent on becoming economically valuable and he’d now have his dream job. this is obviously not this persons fault, but the fault of collective inability to change, and constantly reaffirming our beliefs that whatever we have now is working great. we put this talented person into doing fake work to get the chance to do more fake work, to get a degree, which is seen as much more important than the actual work being performed later

he wasted his entire childhood, literally irreparable damage

The kid Felpix is quoting is going to have a fine future without academia, and yes they’d be a year ahead of the game if they’d spent all that time learning to code better instead of playing the college game. It’s not even clear they should have been trying to go to college at all, other than VCs wanting to see you go for a bit and then drop out.

Zack’s mistake was, presumably, asking the best writers he knew rather than people who know to write college essays in particular.

Dr. Ellie Murray, ScD: The fact that every academic reads this guy’s essay and is like, yeah of course you didn’t get in, but tech twitter all seem to think he was a shoo-in and cheated out of a spot… We’re living in 2 different worlds and it’s a problem.

If you’re writing your own college apps & want to know how to avoid these pitfalls, there are lots of great threads about this guy’s essay. Start here.

Mason: The essay is easily the most regressive and gameable part of the app. The point tech twitter is making is not that the essay is good, but that if this kid came from the “right” family his essay would have been ghostwritten by an admissions coach anyway

Amal Dorai: It’s supposed to be gameable! They’re trying to put their imprimatur on a meritocratic class that can “game” its way into the country’s power elite. Yes it’s a sort of pre-Trumpian way of thinking but they are not just looking for the country’s future NVIDIA engineers.

Monica Marks: Statistically well-qualified applicants come a dime a dozen in elite admissions, more than most people realise.

For every student w/ perfect scores like Zach, there’s a student w/ near perfect scores & more humility who’s overcome terrible circumstances & does not seem entitled.

[she gives advice on how to write a good essay, basically ‘sell that you can pretend that you need this in order to fight some Good Fight that liberals love and are super motivated and shows the proper appreciation and humility etc, and in his case he should have emphasized his Forbes essay rather than his actual achievements.’]

Wind Come Calling: I’ve read applications from kids like this and, being obviously very bright, they tend to think they can hide their arrogance or sense of entitlement, that it won’t come through in their application or that the reviewers will miss it. they are mistaken.

Lastdance: “You must follow my lead and feign humility. If you are merely gifted then go somewhere else, it’s the gifted liar we want!”

Kelsey Piper: before you make fun of someone’s college application personal statement, I urge you to go way back into your old emails and read your own college application essays, I promise this will cure you of the urge to say anything mean about anyone else’s

Tracing Woodgrains: I’m seeing people criticize this personal statement, and—look. Don’t subject yourself to the indignity of defending arbitrary games. the Personal Statement is the lowest genre of essay and the worst admissions practice. his resumé speaks for itself.

“but the personal statement is…”

…an arbitrary game of “guess what’s in my head,” inauthenticity embodied by writers and readers alike. an undignified hazing ritual whether written by you, your $200/hr advisor, or your good friend Claude.

good? bad? junk either way.

every time people defend this system on its own terms it makes me grimace

do not validate a system that encourages kids to twist themselves into pretzels and then purports to judge their whole persons

the whole game is socially corrosive.

so like [Monica Marks from above] seems perfectly nice but I simply do not want access to be gatekept by “did I strike the perfect tone to flatter her sensibilities”

the red flags – someone go tell UMass Amherst they got a dud! or don’t, bc it’s a deranged process

Tracing Woods (also): It’s not the competition that gets people, I suspect, but the arbitrariness. Young, ambitious people jump through a million hoops to roll the dice. It is unhealthy to let this process control so much of the collective youth psyche. Elite college admissions are twisted.

Deedy: Reddit father says son who is

— #1/476 in high school

— 1580/1600 on SAT

— 5/5 on 18 APs

got rejected by all the Ivies for CS. Only got UMass Amherst.

It’s college season and this is the #1 post last week on r/ApplyingToCollege.

Competition is fine, but this just feels unfair.

Of course, some people will say it’s fake but if you read the OP’s comments it feels real. Son is 1/4th Korean 3/4th white, according to his comments.

Depending on where you set the bar for applicants, ‘statistically well-qualified’ might be ‘a dime a dozen’; maybe even being #1 in your HS with a 1580 SAT and 18 5/5 APs is ‘a dime a dozen.’ That’s by design; as I discuss elsewhere, the tests cap out on purpose. If the top colleges wanted differentiation, the tests would provide it.

But you know what very much is not ‘a dime a dozen’? Things like being a USAMO winner or founding a $30mm ARR business.

If admissions chooses not to care much about even that, and merely puts it into the ‘statistical qualification’ bucket and mostly looks to see who within that bucket is better at playing the Guess the Teacher’s Password game and playing their PTCs (Personal Trauma Cards) and being members of the preferred groups and so on, well, it is what it is.

If you see someone thinking being a USAMO winner and founding a $30mm ARR business means they shouldn’t be feigning false humility, and think ‘that’s an asshole,’ well, I have a humble suggestion about who the asshole is in this situation.

And it’s totally fair to point out that this is indeed what it is, and that our academic system checks your ‘statistical qualifications’ but is mostly actively selecting for this form of strategic dishonesty combined with class performance and some inherent characteristics.

That is very different from saying that this is good, actually. It’s not good.

I would also however say that it is now common knowledge that this is how it works. So, given that it is common knowledge how this works, while I place a super high value on honesty and honor, I hereby give everyone reading this full ethical and moral permission to completely lie your ass off.

College admission essays are not a place where words have meaning and you are representing your statement as true. So aside from specific verifiable things like your GPA or SAT score, you can and should lie your ass off the same way you would lie when playing a game of Diplomacy or Balderdash. It doesn’t count, and I will not hold it against you, at all.

Oh, also, requiring all these hours of volunteer work is straight up enslavement of our kids for child labor, and not the good kind where you learn valuable skills.

Those disputes were at the top of the scale. An at least somewhat reasonable response would be ‘boo hoo, you didn’t get into the top 25 colleges in the country, go to your state college and you’ll be fine.’

Except that the state colleges are sometimes doing it too. And that’s not okay, at all.

Analytic Valley Girl Chris: State universities should be legally mandated to accept any in state graduate who meets certain academic thresholds, save some compelling disqualification. Generic “not what we’re looking for” shouldn’t be allowed.

As in, MIT can do what it wants, it’s their loss, but UC San Diego and UC Davis?

Yes, obviously if you simply want ‘any college at all’ there will always be options for such students, but that degree and experience, and the connections available to be made, will offer dramatically lower value. Going is probably a large mistake.

The ‘top X% of your class’ system, such as Texas’s top 10% rule, is excellent. I’d supplement that with a points system or threshold rules or both for grades, test scores, and other quantifiable achievements, with a known minimum auto-admission threshold.

UATX does a simplified version of this; the deadline for this year has passed.

University of Austin (UATX): College admissions are unjust.

Not just biased. Not just broken. Unjust.

Students spend high school anxiously stacking their résumés with hollow activities, then collect generic recommendation letters and outsource their essays to tutors or AI. Admissions at elite colleges now come down to who you know, your identity group, or how well you play the game.

This system rewards manipulation, not merit. It selects for conformity, not character.

That’s why we’re introducing the University of Austin’s new admissions policy:

If you score 1460+ on the SAT, 33+ on the ACT, or 105+ on the CLT, you will be automatically admitted, pending basic eligibility and an integrity check. Below that threshold, you’ll be evaluated on your test scores, AP/IB results, and three verifiable achievements, each described in a single sentence.

That’s it.

We care about two things: Intelligence and courage.

Intelligence to succeed in a rigorous intellectual environment (we don’t inflate grades). Courage to join the first ranks of our truth-oriented university.

College admission should be earned—not inherited, bought, or gamed. At UATX, your merit earns you a place—and full tuition scholarship.

Apply here by April 15.
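The announced thresholds amount to a deterministic rule that any one qualifying score suffices. A minimal sketch (the function name is hypothetical, and the policy's "basic eligibility and integrity check" and the below-threshold evaluation track are deliberately not modeled):

```python
def uatx_auto_admit(sat=None, act=None, clt=None):
    """Return True if any single score clears the stated
    auto-admission bar: SAT 1460+, ACT 33+, or CLT 105+."""
    return ((sat is not None and sat >= 1460)
            or (act is not None and act >= 33)
            or (clt is not None and clt >= 105))

print(uatx_auto_admit(sat=1500))         # True
print(uatx_auto_admit(act=32, clt=100))  # False
```

The point of a rule this simple is that applicants can compute the outcome themselves before applying.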

Note the deadline. Because your decisions are deterministic, you get to move last.

As in, they get to sweep up all these students whose essays were rejected or who got discriminated against. Then we get to find out what happens when you put them all together. And you get to see which employers are excited by that, and which aren’t.

The New York Times headline writers understood the assignment, although it’s even worse than this: Elite Colleges Have Found a New Virtue For Applicants To Fake.

The basic version is indeed a new virtue to fake, combined with a cultural code to crack and teacher’s password to guess, the ‘disagreement question’:

Alex Bronzini-Vender (Sophomore, Harvard University, hire him): This time I found a new question: “Tell us about a moment when you engaged in a difficult conversation or encountered someone with an opinion or perspective that was different from your own. How did you find common ground?”

It’s known as the disagreement question, and since the student encampments of spring 2024 and the American right’s attacks on universities, a growing number of elite colleges have added it to their applications.

This didn’t escalate quickly so much as skip straight to the equilibrium. Kids are pros.

The trouble is that the disagreement question — like much of the application process — isn’t built for honesty. Just as I once scrambled to demonstrate my fluency in D.E.I., students now scramble to script the ideal disagreement, one that manages to be intriguing without being dangerous.

So now there’s a new guessing game in town.

Then again, maybe demonstrating one’s ability to delicately navigate controversial topics is the point. Perhaps the trick is balance? Be humble; don’t make yourself look too right. But you can’t choose a time when you were entirely wrong, either. Or should you tailor your responses by geography, betting that, say, a Southern admissions officer would be more likely to appreciate a conservative-leaning anecdote?

The emerging consensus in the application-prep industry is that it’s best to avoid politics entirely. … Dr. Jager-Hyman, for her part, usually advises students to choose a topic that is meaningful to them but unlikely to stoke controversy — like a time someone told you your favorite extracurricular activity was a waste of time.

So far, ordinary terrible, sure, fine, I suppose it’s not different in kind than anything else in the college essay business. Then it gets worse.

This fall, an expanding number of top schools — including Columbia, M.I.T., Northwestern, Johns Hopkins, Vanderbilt and the University of Chicago — will begin accepting “dialogues” portfolios from Schoolhouse.world, a platform co-founded by Sal Khan, the founder of Khan Academy, to help students with math skills and SAT prep.

High-schoolers will log into a Zoom call with other students and a peer tutor, debate topics like immigration or Israel-Palestine, and rate one another on traits like empathy, curiosity or kindness. The Schoolhouse.world site offers a scorecard: The more sessions you attend, and the more that your fellow participants recognize your virtues, the better you do.

“I don’t think you can truly fake respect,” Mr. Khan said.

Even as intended this is terrible already:

Owl of Athena: Remember when I told you Sal Khan was evil? I didn’t know the half of it!

Meet the Civility Score, courtesy of Khan’s “Dialogues.”

Get your kids used to having a social credit score, and make sure they understand their highest value should be the opinion of their peers! What could possibly go wrong?!

Steve McGuire: Elite universities are going to start using peer-scored civility ratings for admissions?!

Sorry, that’s a terrible idea. Why not just admit people based on their scores and then teach them to debate and dialogue?

You don’t need to go full CCP to solve this problem.

Nate Silver: This is basically affirmative action for boring people.

Blightersort: it is kind of amazing that elite schools would look at the current world and worry they are not selecting for conformity strongly enough and then work on new ways to select for conformity

Except of course it is way worse than that on multiple levels.

Remember your George Burns: “Sincerity is the most important thing. If you can fake that you’ve got it made.”

Of course you can fake respect. I do it and see it all the time. It is a central life skill.

Also, if you’re not generally willing to or don’t know how to properly pander to peers in such settings, don’t ‘read the room’ or are ugly? No college for you.

You can, and people constantly do, fake empathy, curiosity and kindness. It is not only a central life skill, but it is considered a central virtue.

And the fortunate ones won’t have to do it alone: They’ll have online guides, school counselors and private tutors to help them learn to simulate earnestness.

You could argue that one cannot fake civility, because there is no difference between faked civility and real civility. It lives in the perception of the audience. And you can argue that to some extent this applies to other such virtues too.

Quite possibly, there will be rampant discrimination of other kinds, as well. Expect lots of identity-based appeals. The game will be won by those who play to win it.

And then let’s address the elephant in the room. Notice this sentence:

High-schoolers will log into a Zoom call with other students and a peer tutor, debate topics like immigration or Israel-Palestine, and rate one another on traits like empathy, curiosity or kindness.

Yeah. Um.

Niels Hoven: Oh look, they figured out how to scale ideological conformity testing.

Brain in a jar: Haha hot people and conformists will win. Fuck.

If you have a bunch of high schoolers rating each other on ‘empathy, curiosity or kindness’ on the basis of discussions of those topics, that is a politics test. If you go in there and take a right-wing stance on immigration? No college for you. Not pledging your support for ending the state of Israel? No college for you. Indeed, I’m willing to bet that going in with actual full empathy and curiosity will get you lower, not higher, scores than performative endorsement.

To be fair, the website doesn’t emphasize those topics in particular, although I’m assuming they were listed because the author here encountered them. Instead, the site presents a broader menu of discussion topics.

The problem will persist regardless, if less egregiously. Across essentially all topics, the peer consensus in high school is left-wing, and left-wing consensus holds that left-wing views are empathic and curious and kind, whereas anything opposed to them is not. I would very much not advise anyone to oppose student debt ‘relief,’ meaning abrogation of contracts, or anything but the most radical positions on climate change, or to oppose aggressive moderation and censorship of social media.

Short of using AI evaluators (an actually very good idea), I don’t see a way around the problem that this is not a civility test, it is a popularity and ideological purity challenge, and we are forcing everyone to put on an act.

On the positive side (I am not sure if I am kidding), it also is potentially a game theory challenge. 5 for 5, anyone? Presumably students will quickly learn the signals of how to make the very obvious deals with each other.

Also you see what else this is testing? You outright get a higher score for participating in more of these ‘dialogues.’ You also presumably learn, over time, the distribution of other participants, and what techniques work on them, and develop your ‘get them to rank you highly’ skills. So who wants to grind admissions chances between hours of assigned busywork (aka ‘homework’) and mandatory (‘community service’) shifts working as an indentured servant, perhaps at the homeless shelter?

You cannot simply do this:

Zaid Jilani: Make SAT and GPA almost all of the college admissions standard, any essays should be analytical like on GRE rather than personal.

Mike Riggs: And you have no concerns about grade inflation?

Zaid Jilani: I do but how is that any different than status quo? Have to deal with that issue regardless. FWIW that’s much worse in college than high school.

Kelsey Piper: yep. just cut all the holistic shit. it turns high school into hell without meaningfully identifying the kids most prepared to contribute at top schools let alone teaching them anything

Emmett Shear: Overfit! Overfit! You cannot make your model robust by adding more parameters, only more accurate in the moment! Trying to create a global rating for “best students” is a bad idea and intrinsically high-complexity. Stop doing that.

Most of the holistic stuff needs to go. The essay needs to either go fully, or become another test taken in person, ideally graded pass-fail, to check for competence.

You do need a way to account for outstanding achievement in the field of excellence.

I would thus first reserve some number of slots for positive selection outside the system, for those who are just very obviously people you want to admit.

I also think you need to have a list of achievements, at least on AP and other advanced tests, that grant bonus points. The SAT does not get hard enough or cover a wide enough set of topics.

I think you mostly don’t need to worry about any but the most extreme deal breakers and negative selection. Stop policing anything that shouldn’t involve actual police.

The other problem then is that at this level of stakes everything will get gamed. You cannot use a fully or even incompletely uncontrolled GPA if you are not doing holistic adjustments. GPAs would average 4.33 so fast it would make your head spin. Any optional class not playing along would be dropped left and right. And so on. If you want to count GPA at all, you need to adjust for school and by-class averages, and adjust for the success rate of the school of origin as a function of average-adjusted GPA controlling for SAT, and so on.

The ultimate question here is whether you want students in high school to be maximizing GPA as a means to college admissions. It can be a powerful motivating factor, but it also warps motivation. My inclination is to say you want to use it mostly as a threshold effect, with the threshold rising as you move up the ladder, with only modest bonus points for going beyond that, or use it as a kind of fast-track that gets you around other requirements, ideally including admission fees.

Where it gets tricky is scholarships. Even if admission depends only on SAT+AP and similar scores plus some extraordinary achievements and threshold checks, the sticker prices of colleges are absurd. So if scholarships depend on other triggers, you end up with the same problem, or you end up with a two-tier system where those who need merit scholarships have to game everything, probably with the ‘immune tier’ rather small since even if you can afford full price that doesn’t mean you want to pay it.

Sasha Gusev has a fun thread pointing out various flaws that lead one back to a holistic approach rather than an SAT+GPA approach. I think that if you do advanced stats on the GPA (perhaps create GPV, grade percentile value, or GVOA, grade value over average), add in advanced additional objective examinations as sources of additional points, perhaps at most including a standardized entrance exam, and have a clearly defined list of negative-selection dealbreakers (that are either outright dealbreakers or not, nothing in between), you can get good enough that letting students mostly game that is better than the holistic nightmare, and you can two-track as discussed above by reserving some slots for the best of the best on pure holistic judgment.
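To make the GPV idea concrete, here is a minimal sketch of how a grade percentile value might be computed. Everything here is hypothetical illustration: the function name, the midrank tie-handling, and the idea of ranking within the student's own school are my assumptions about what such a statistic could look like, not a spec from anyone's actual admissions system.

```python
def grade_percentile_value(student_gpa, school_gpas):
    """Hypothetical 'GPV': a student's GPA expressed as a percentile
    rank within their own school's GPA distribution, so a 3.9 at a
    school that hands out 4.0s freely counts for less than a 3.9 at
    a school with a 3.2 average.

    student_gpa: the applicant's GPA.
    school_gpas: list of all GPAs at the applicant's school,
                 including the applicant's own.
    """
    below = sum(1 for g in school_gpas if g < student_gpa)
    ties = sum(1 for g in school_gpas if g == student_gpa)
    # Midrank convention: a student gets credit for half of the
    # students tied with them, avoiding a cliff at exact ties.
    return (below + ties / 2) / len(school_gpas)


# Example: a 3.9 at a school whose GPAs are [3.0, 3.5, 3.9, 4.0]
# ranks above 2 students and ties 1 of 4, giving (2 + 0.5) / 4.
print(grade_percentile_value(3.9, [3.0, 3.5, 3.9, 4.0]))  # 0.625
```

A real version would also need the by-school success-rate adjustments described above (controlling for SAT), but even this bare percentile removes the incentive for a school to inflate everyone to 4.33, since inflation cannot move anyone's rank.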

It’s not perfect, but no options are perfect, and I think these are the better mistakes.

Another way of putting this is:

Sasha Gusev: *Open: Office of the president at the new 100% Meritocratic University*

President: We’ve admitted the top 2,000 applicants by GPA and SATs. How are they doing?

Admissions: Several hundred valedictorians who’ve never gotten a B in their life are now at the bottom of all their classes and are experiencing a collective mental breakdown. Also our sports teams are an embarrassment.

[Zvi’s alternative continuation]: President: Okay. Is there a problem?

Admissions: Yes, this is scaring off some potential applicants, and also our sports teams are an embarrassment.

President: If a few get scared off or decide to transfer to feel smarter because they care mainly about signaling and positional goods rather than learning, that seems fine, make sure our office helps them get good placements elsewhere. And yeah, okay, our sports teams suck, but remind me why I should care about that?

Admissions: Because some students won’t want to go to a school whose sports teams suck and alumni won’t give us money that way.

President: Fine, those students can go elsewhere, too, it’s not like we’re going to be short on applicants, and that’s why we charge tuition.

Admissions: But if all we do is math then you’re going to replace me with an AI!

President: Well, yes.

[End scene.]

Paul Graham: Part of the problem with college admissions is that the SAT is too easy. It doesn’t offer enough resolution at the high end of the scale, and thus gives admissions officers too much discretion.

The problem is that we have tests that solve this problem but no one cares that much about them. Once you are maximizing the SAT, the attitude is not ‘well then, okay, let’s give them the LSAT or GRE and see how many APs they can ace,’ it’s ‘okay we’ll give them a tiny boost for each additional AP and such but mostly we don’t care.’ If the SAT bell curve went up to 2000, then they’d be forced to care, and differentiate 1570 from 1970.

That doesn’t seem hard to do? All you have to do is add some harder questions?

Or alternatively, you could have the ASAT (Advanced SAT), which is the same test on the same curve (a 1500 on ASAT is a 1500 on the SAT), except it’s harder, if you don’t earn at least 1400 you get back a null result the way you do on the USAMO or Putnam, and it goes up to 2500, and you can choose to take that instead. Yes, that’s not technically so different from what we do already, but it would feel very different – you’d be looking at that 1950 in that spot and it would be a lot harder to ignore.
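The ASAT reporting rule described above is simple enough to state as code. This is a sketch of my reading of that proposal (null below 1400, same curve as the SAT in the shared range, scale extended to 2500); the function name and the cap-at-2500 detail are my assumptions.

```python
def asat_score(raw_score):
    """Hypothetical ASAT reporting rule: scores below 1400 come back
    as a null result (the way the USAMO or Putnam report nothing for
    most takers), scores in range are reported on the same curve as
    the SAT, and the scale tops out at 2500 instead of 1600."""
    if raw_score < 1400:
        return None  # null result; no score is reported at all
    return min(raw_score, 2500)


print(asat_score(1350))  # None: below the reporting floor
print(asat_score(1500))  # 1500: identical meaning to an SAT 1500
print(asat_score(1950))  # 1950: resolution the SAT cannot provide
```

The null-below-1400 floor is what makes choosing the harder test safe: a strong student risks nothing by taking it, while a 1950 sitting in the score report becomes, as noted above, a lot harder for admissions officers to ignore.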

Yeah, well, on that front it just got even worse, and the ACT is making similar changes:

Steve McGuire: Reading passages on the SAT have been shortened from 500-750 words down to 25-150. They say “the eliminated reading passages are ‘not an essential prerequisite for college’ and that the new, shorter content helps ‘students who might have struggled to connect with the subject matter.’” The reality, of course, is that the test is getting easier because so many students are struggling.

Zac Hill: This is capital-B bad not just for the obvious reason (reading is Good) but for the maybe-more-important second-order reason that this is not just about reading; it’s about all information synthesis involving the construction of models as a product of sustained attention.

Alex Tabarrok: SOD: “The SAT now caters to students who have trouble reading long, complex texts.”

Meanwhile in the math section, students have more time per question and free use of a calculator, without the questions changing.

On top of that, this paper says the Math SAT declined in rigor by 71 points between 2008 and 2023, which would mean that we have a 107-point decline in average performance that cuts across major demographic groups. Yikes, but also comments point out that the decline is largely caused by more students taking the test, which should indeed cause them to lower the grading curve. Relative score is what matters, except that we’re running into a lot more cases where 800 isn’t getting the job done.

Schools could of course move to the Classic Learning Test (CLT) or others that would differentiate between students. Instead, they are the customers of the ACT and SAT, and the customer is always right.

The only way to interpret this is that the colleges want to differentiate student ability up to some low minimum threshold, because otherwise the students fail out, but they actively do not want to differentiate on ability at the high end. They prefer other criteria. I will not further speculate as to why.

Perhaps even more important than all that is this, it cannot be overstated how much I see this screwing almost everyone and everything up:

Nephew Jonathan (QTing Tracing Woods above): I’m gonna hijack this: if there’s one thing that explains why everyone under the age of 40 seems to be a nervous wreck it’s the reduction of life to “guessing the teacher’s password” for everything.

Dating apps? Guess the girl’s password. College admissions? Grad school? HR personality screenings?


Childhood and Education #12: College Admissions Read More »

vpn-use-soars-in-uk-after-age-verification-laws-go-into-effect

VPN use soars in UK after age-verification laws go into effect

Also on Friday, the Windscribe VPN service posted a screenshot on X claiming to show a spike in new subscribers. The makers of the AdGuard VPN claimed that they have seen a 2.5X increase in install rates from the UK since Friday.

Nord Security, the company behind the NordVPN app, says it has seen a “1,000 percent increase in purchases” of subscriptions from the UK since the day before the new laws went into effect. “Such spikes in demand for VPNs are not unusual,” Laura Tyrylyte, Nord Security’s head of public relations, tells WIRED. She adds in a statement that “whenever a government announces an increase in surveillance, Internet restrictions, or other types of constraints, people turn to privacy tools.”

People living under repressive governments that impose extensive Internet censorship—like China, Russia, and Iran—have long relied on circumvention tools like VPNs and other technologies to maintain anonymity and access blocked content. But as countries that have long claimed to champion the open Internet and access to information, like the United States, begin considering or adopting age verification laws meant to protect children, the boundaries for protecting digital rights online quickly become extremely murky.

“There will be a large number of people who are using circumvention tech for a range of reasons” to get around age verification laws, the ACLU’s Kahn Gillmor says. “So then as a government you’re in a situation where either you’re obliging the websites to do this on everyone globally, that way legal jurisdiction isn’t what matters, or you’re encouraging people to use workarounds—which then ultimately puts you in the position of being opposed to censorship-circumvention tools.”

This story originally appeared on wired.com.

VPN use soars in UK after age-verification laws go into effect Read More »

tesla-picks-lges,-not-catl,-for-$4.3-billion-storage-battery-deal

Tesla picks LGES, not CATL, for $4.3 billion storage battery deal

Tesla has a new battery cell supplier. Although the automaker is vertically integrated to a degree not seen in the automotive industry for decades, when it comes to battery cells it’s mostly dependent upon suppliers. Panasonic cells can be found in many Teslas, with the cheaper, sturdier lithium iron phosphate (LFP) battery cells being supplied by CATL. Now Tesla has a new source of LFP cells thanks to a deal just signed with LG Energy Solution.

According to The Korea Economic Daily, the contract between Tesla and LGES is worth $4.3 billion. LGES will begin supplying Tesla with cells next August, continuing until at least the end of July 2030, with provisions to extend the contract if necessary.

The LFP cells probably aren’t destined for life on the road, however. Instead, they’ll likely be used in Tesla’s energy storage products, which both Tesla and LGES hope will soak up demand now that EV sales prospects look so weak in North America.

The deal also reduces Tesla’s reliance on Chinese suppliers. LGES will produce the LFP cells at its factory in Michigan, says Reuters, and so they will not be subject to the Trump trade war tariffs, unlike Chinese-made cells from CATL.

Although Tesla CEO Elon Musk has boasted about the size of the energy storage market, its contribution to Tesla’s financials remains meagre, and actually shrank during the last quarter.

Tesla picks LGES, not CATL, for $4.3 billion storage battery deal Read More »