Author name: Beth Washington


Report: Intel struggles with new 18A process as it cuts workers and cancels projects

Intel has a lot riding on “18A,” its next-generation manufacturing process for silicon chips that the company claims will help it catch up to the lead that competitors like TSMC have built up over the last few years. With 18A, Intel would return to manufacturing its own processor designs in its own factories, including the upcoming Series 3 Core Ultra chips for laptops (codenamed Panther Lake), after manufacturing parts of all other Core Ultra chips with TSMC. Intel is also offering 18A manufacturing capacity to external chipmakers, a major milestone in former CEO Pat Gelsinger’s plan to make Intel a competitive cutting-edge (and primarily US-based) chip manufacturer for the rest of the industry.

But a Reuters report claims that Intel is struggling to make usable chips on 18A, according to “people who were briefed on the company’s test data since late last year.” As of this summer, these sources say that just 10 percent of the chips being manufactured on 18A are “up to [Intel’s] specifications.”

Intel disputed the numbers cited in the report. “Yields are better than that,” Intel CFO David Zinsner told Reuters, though neither Zinsner nor Intel provided an alternate figure.

Whether Intel is struggling with 18A or not, the story is easy to believe because it fits a decade-long pattern going back to early delays for Intel’s 14 nm process in 2013 and 2014. Intel had finally switched its lineup to the 14 nm process by late 2015, but it was then stuck on that manufacturing process for years (until 2019–2020 for laptop chips and 2021–2022 for desktop chips).

Through that span, Intel’s PR strategy was familiar: insist that things were ramping up well internally and that bugs were being ironed out, express confidence in the roadmap, give itself a little wiggle room on launch dates of actual products, and continue onward.

In this case, Intel told Reuters that its Panther Lake chips are “fully on track” as of July 30. Intel reaffirmed that it would launch Panther Lake using the 18A manufacturing process in the second half of 2025, with more models coming in 2026. These will be the milestones to watch for—Intel could very well be struggling to ramp up yields on 18A chips, but the struggles could be normal-ish and planned-for ones that don’t delay the company’s plans any more than they already have.



The Week in AI Governance

There was enough governance-related news this week to spin it out.

Anthropic, Google, OpenAI, Mistral, Aleph Alpha, Cohere and others commit to signing the EU AI Code of Practice. Google has now signed. Microsoft says it is likely to sign.

xAI signed the AI safety chapter of the code, but is refusing to sign the others, calling them overreach, especially as pertains to copyright.

The only company that said it would not sign at all is Meta.

This was the underreported story. All the important AI companies other than Meta have gotten behind the safety section of the EU AI Code of Practice. This represents a considerable strengthening of their commitments, and introduces an enforcement mechanism. Even Anthropic will be forced to step up parts of their game.

That leaves Meta as the rogue state defector that once again gives zero anythings about safety, as in whether we all die, and also safety in its more mundane forms. Lol, we are Meta, indeed. So the question is, what are we going to do about it?

xAI took a middle position. I see the safety chapter as by far the most important, so as long as xAI is signing that and taking it seriously, great. Refusing the other parts is a strange flex, and I don’t know exactly what their problem is since they didn’t explain. They simply called it ‘unworkable,’ which is odd when Google, OpenAI and Anthropic all declared they found it workable.

Then again, xAI finds a lot of things unworkable. Could be a skill issue.

This is a sleeper development that could end up being a big deal. When I say ‘against regulations’ I do not mean against AI regulations. I mean against all ‘regulations’ in general, no matter what, straight up.

From the folks who brought you ‘figure out who we technically have the ability to fire and then fire all of them, and if something breaks maybe hire them back, this is the Elon way, no seriously’ and also ‘whoops we misread something so we cancelled PEPFAR and a whole lot of people are going to die,’ Doge is proud to give you ‘if a regulation is not technically required by law it must be an unbridled bad thing we can therefore remove, I wonder why they put up this fence.’

Hannah Natanson, Jeff Stein, Dan Diamond and Rachel Siegel (WaPo): The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans.

Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The conflation here is absolute. There are two categories of regulations: The half ‘required by law,’ and the half ‘worthy of trimming.’ Think of the trillions you can save.

They then try to hedge and claim that’s not how it is going to work.

Asked about the AI-fueled deregulation, White House spokesman Harrison Fields wrote in an email that “all options are being explored” to achieve the president’s goal of deregulating government.

No decisions have been completed on using AI to slash regulations, a HUD spokesperson said.

The spokesperson continued: “The intent of the developments is not to replace the judgment, discretion and expertise of staff but be additive to the process.”

That would be nice. I’m far more ‘we would be better off with a lot less regulation’ than most. I think it’s great to have an AI tool that splits off the half we can consider cutting from the half we are stuck with. I still think that ‘cut everything that a judge wouldn’t outright reverse if you tried cutting it’ is not a good strategy.

I find the ‘no we will totally consider whether this is a good idea’ talk rather hollow, both because of track record and also they keep telling us what the plan is?

“The White House wants us higher on the leader board,” said one of the three people. “But you have to have staff and time to write the deregulatory notices, and we don’t. That’s a big reason for the holdup.”

That’s where the AI tool comes in, the PowerPoint proposes. The tool will save 93 percent of the human labor involved by reviewing up to 500,000 comments submitted by the public in response to proposed rule changes. By the end of the deregulation exercise, humans will have spent just a few hours to cancel each of the 100,000 regulations, the PowerPoint claims.

They then close by pointing out that the AI makes mistakes even on the technical level it is addressing. Well, yeah.

Also, welcome to the future of journalism:

China has its own AI Action Plan and is calling for international cooperation on AI. Wait, what do they mean by that? If you look in the press, that depends who you ask. All the news organizations will be like ‘the Chinese released an AI Action Plan’ and then not link to the actual plan, I had to have o3 dig it up.

Here’s o3’s translation of the actual text. This is almost all general gestures in the direction of capabilities, diffusion, infrastructure and calls for open models. It definitely is not an AI Action Plan in the sense that America offered an AI Action Plan, which had lots of specific actionable proposals. This is more of a general outline of a plan and statement of goals, at best. At least it doesn’t talk about or call for a ‘race,’ but a call for everything to be open and accelerated is not obviously better.

  • Seize AI opportunities together. Governments, international organizations, businesses, research institutes, civil groups, and individuals should actively cooperate, accelerate digital‑infrastructure build‑out, explore frontier AI technologies, and spread AI applications worldwide, fully unlocking AI’s power to drive growth, achieve the UN‑2030 goals, and tackle global challenges.

  • Foster AI‑driven innovation. Uphold openness and sharing, encourage bold experimentation, build international S‑and‑T cooperation platforms, harmonize policy and regulation, and remove technical barriers to spur continuous breakthroughs and deep “AI +” applications.

  • Empower every sector. Deploy AI across manufacturing, consumer services, commerce, healthcare, education, agriculture, poverty reduction, autonomous driving, smart cities, and more; share infrastructure and best practices to supercharge the real economy.

  • Accelerate digital infrastructure. Expand clean‑energy grids, next‑gen networks, intelligent compute, and data centers; create interoperable AI infrastructure and unified compute‑power standards; support especially the Global South in accessing and applying AI.

  • Build a pluralistic open‑source ecosystem. Promote cross‑border open‑source communities and secure platforms, open technical resources and interfaces, improve compatibility, and let non‑sensitive tech flow freely.

  • Supply high‑quality data. Enable lawful, orderly, cross‑border data flows; co‑create top‑tier datasets while safeguarding privacy, boosting corpus diversity, and eliminating bias to protect cultural and ecosystem diversity.

  • Tackle energy and environmental impacts. Champion “sustainable AI,” set AI energy‑ and water‑efficiency standards, promote low‑power chips and efficient algorithms, and scale AI solutions for green transition, climate action, and biodiversity.

  • Forge standards and norms. Through ITU, ISO, IEC, and industry, speed up standards on safety, industry, and ethics; fight algorithmic bias and keep standards inclusive and interoperable.

  • Lead with public‑sector adoption. Governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.

  • Govern AI safety. Run timely risk assessments, create a widely accepted safety framework, adopt graded management, share threat intelligence, tighten data‑security across the pipeline, raise explainability and traceability, and prevent misuse.

  • Implement the Global Digital Compact. Use the UN as the main channel, aim to close the digital divide—especially for the Global South—and quickly launch an International AI Scientific Panel and a Global AI Governance Dialogue under UN auspices.

  • Boost global capacity‑building. Through joint labs, shared testing, training, industry matchmaking, and high‑quality datasets, help developing countries enhance AI innovation, application, and governance while improving public AI literacy, especially for women and children.

  • Create inclusive, multi‑stakeholder governance. Establish public‑interest platforms involving all actors; let AI firms share use‑case lessons; support think tanks and forums in sustaining global technical‑policy dialogue among researchers, developers, and regulators.

What does it have to say about safety or dealing with downsides? We have ‘forge standards and norms’ with a generic call for safety and ethics standards, which seems to mostly be about interoperability and ‘bias.’

Mainly we have ‘Govern AI safety,’ which is directionally nice to see I guess but essentially content free and shows no sign that the problems are being taken seriously on the levels we care about. Most concretely, in the ninth point, we have a call for regular safety audits of AI models. That all sounds like ‘the least you could do.’

Here’s one interpretation of the statement:

Brenda Goh (Reuters): China said on Saturday it wanted to create an organisation to foster global cooperation on artificial intelligence, positioning itself as an alternative to the U.S. as the two vie for influence over the transformative technology.

Li did not name the United States but appeared to refer to Washington’s efforts to stymie China’s advances in AI, warning that the technology risked becoming the “exclusive game” of a few countries and companies.

China wants AI to be openly shared and for all countries and companies to have equal rights to use it, Li said, adding that Beijing was willing to share its development experience and products with other countries, particularly the “Global South”. The Global South refers to developing, emerging or lower-income countries, mostly in the southern hemisphere.

The foreign ministry released online an action plan for global AI governance, inviting governments, international organisations, enterprises and research institutions to work together and promote international exchanges including through a cross-border open source community.

As in, we notice you are ahead in AI, and that’s not fair. You should do everything in the open so you let us catch up in all the ways you are ahead, so we can bury you using the ways in which you are behind. That’s not an unreasonable interpretation.

Here’s another.

The Guardian: Chinese premier Li Qiang has proposed establishing an organisation to foster global cooperation on artificial intelligence, calling on countries to coordinate on the development and security of the fast-evolving technology, days after the US unveiled plans to deregulate the industry.

Li warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed.

“The risks and challenges brought by artificial intelligence have drawn widespread attention … How to find a balance between development and security urgently requires further consensus from the entire society,” the premier said.

Li said China would “actively promote” the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones in the global south.

So that’s a call to keep security in mind, but every concrete reference is mundane and deals with misuse, and then they call for putting everything out into the open, with the main highlighted ‘risk’ to coordinate on being that America might get an advantage, and encouraging us to give it away via open models to ‘safeguard multilateralism.’

A third here, from the Japan Times, frames it as a call for an alliance to take aim at an American AI monopoly.

Director Michael Kratsios: China’s just-released AI Action Plan has a section that drives at a fundamental difference between our approaches to AI: whether the public or private sector should lead in AI innovation.

I like America’s odds of success.

He quotes point nine, which his translation has as ‘the public sector takes the lead in deploying applications.’ Whereas o3’s translation says ‘governments should pioneer reliable AI in public services (health, education, transport), run regular safety audits, respect IP, enforce privacy, and explore lawful data‑trading mechanisms to upgrade governance.’

Even in Michael’s preferred translation, this is saying government should aggressively deploy AI applications to improve government services. The American AI Action Plan, correctly, fully agrees with this. Nothing in the Chinese statement says to hold the private sector back. Quite the contrary.

The actual disagreement we have with point nine is the rest of it, where the Chinese think we should run regular safety audits, respect IP and enforce privacy. Those are not parts of the American AI Action Plan. Do you think we were right not to include those provisions, sir? If so, why?

Suppose in the future, we learned we were in a lot more danger than we think we are in now, and we did want to make a deal with China and others. Right now the two sides would be very far apart but circumstances could quickly change that.

Could we do it in a way that could be verified?

It wouldn’t be easy, but we do have tools.

This is the sort of thing we should absolutely be preparing to be able to do, whether or not we ultimately decide to do it.

Mauricio Baker: For the last year, my team produced the most technically detailed overview so far. Our RAND working paper finds: strong verification is possible—but we need ML and hardware research.

You can find the paper here and on arXiv. It includes a 5-page summary and a list of open challenges.

In the Cold War, the US and USSR used inspections and satellites to verify nuclear weapon limits. If future, powerful AI threatens to escape control or endanger national security, the US and China would both be better off with guardrails.

It’s a tough challenge:

– Verify narrow restrictions, like “no frontier AI training past some capability,” or “no mass-deploying if tests show unacceptable danger”

– Catch major state efforts to cheat

– Preserve confidentiality of models, data, and algorithms

– Keep overhead low

Still, reasons for optimism:

– No need to monitor all computers—frontier AI needs thousands of specialized AI chips.

– We can build redundant layers of verification. A cheater only needs to be caught once.

– We can draw from great work in cryptography and ML/hardware security.
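The “redundant layers” point above can be made quantitative. A cheater must evade every independent layer, so even individually weak checks compound. A minimal sketch, where the per-layer detection probabilities are purely illustrative assumptions (not figures from the paper):

```python
# Why redundant verification layers help: a violation must slip past every
# layer, so even mediocre independent checks compound into strong coverage.
# All probabilities here are illustrative assumptions, and the layers are
# assumed independent, which is optimistic for real-world verification.

def detection_probability(layer_probs):
    """P(caught at least once), given independent per-layer detection odds."""
    miss = 1.0
    for p in layer_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Three mediocre layers (say chip telemetry, whistleblowers, intelligence),
# each catching a violation only 40 percent of the time:
print(round(detection_probability([0.4, 0.4, 0.4]), 3))  # 0.784
```

The independence assumption is doing real work here, which is one reason the paper emphasizes layers of qualitatively different kinds (hardware, human, intelligence) rather than several copies of the same check.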

One approach is to use existing chip security features like Confidential Computing, built to securely verify chip activities. But we’d need serious design vetting, teardowns, and maybe redesigns before the US could strongly trust Huawei’s chip security (or frankly, NVIDIA’s).

“Off-chip” mechanisms could be reliable sooner: network taps or analog sensors (vetted, limited use, tamper evident) retrofitted onto AI data centers. Then, mutually secured, airgapped clusters could check if claimed compute uses are reproducible and consistent with sensor data.
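A toy sketch of that consistency-check idea: compare a declared compute figure against the physical ceiling implied by tamper-evident power readings. Every constant here (efficiency, slack, readings) is a made-up illustration, not a real hardware number or anything from the paper:

```python
# Toy sketch of the sensor-consistency idea: could the monitored cluster
# physically have performed the compute a party declares? All constants
# (FLOPs per joule, slack factor, readings) are illustrative assumptions.

def implied_flop_ceiling(power_readings_kw, hours_per_reading, flops_per_joule):
    """Upper bound on FLOPs the cluster could have performed, given
    periodic power-draw readings (kW) and an efficiency assumption."""
    joules = sum(kw * 1000 * hours_per_reading * 3600 for kw in power_readings_kw)
    return joules * flops_per_joule

def claim_is_consistent(declared_flops, power_readings_kw,
                        hours_per_reading=1.0, flops_per_joule=1e10,
                        slack=1.25):
    """Flag declared compute that exceeds the physical ceiling,
    with some slack for measurement error."""
    ceiling = implied_flop_ceiling(power_readings_kw, hours_per_reading,
                                   flops_per_joule)
    return declared_flops <= ceiling * slack

# 24 hourly readings of 500 kW. A claim far above what that power draw
# could support gets flagged as inconsistent with the sensor data.
readings = [500.0] * 24
print(claim_is_consistent(1e20, readings))  # plausible, within the ceiling
print(claim_is_consistent(1e25, readings))  # inconsistent, flagged
```

The real proposal is of course far richer (reproducibility checks on airgapped clusters, not just an energy bound), but the energy-accounting version shows why vetted, tamper-evident sensors alone already constrain what can be hidden.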

Add approaches “simple enough to work”: whistleblower programs, interviews of personnel, and intelligence activities. Whistleblower programs could involve regular in-person contact—carefully set up so employees can anonymously reveal violations, but not much more.

We could have an arsenal of tried-and-tested methods to confidentially verify a US-China AI treaty. But at the current pace, in three years, we’ll just have a few speculative options. We need ML and hardware researchers, new RFPs by funders, and AI company pilot programs.

Jeffrey Ladish: Love seeing this kind of in-depth work on AI treaty verification. A key fact is verification doesn’t have to be bulletproof to be useful. We can ratchet up increasingly robust technical solutions while using other forms of HUMINT and SIGINT to provide some level of assurance.

Remember, the AI race is a mixed-motive conflict, per Schelling. Both sides have an incentive to seek an advantage, but also have an incentive to avoid mutually awful outcomes. Like with nuclear war, everyone loses if any side loses control of superhuman AI.

This makes coordination easier, because even if both sides don’t like or trust each other, they have an incentive to cooperate to avoid extremely bad outcomes.

It may turn out that even with real efforts there are not good technical solutions. But I think it is far more likely that we don’t find the technical solutions due to lack of trying, rather than that the problem is so hard that it cannot be done.

The reaction to the AI Action Plan was almost universally positive, including here from Nvidia and AMD. My own review, focused on the concrete proposals within, also reflected this. It far exceeded my expectations on essentially all fronts, so much so that I would be actively happy to see most of its proposals implemented rather than see nothing done.

I and others focused on the concrete policy, and especially concrete policy relative to expectations and what was possible in context, for which it gets high praise.

But a document like this might have a lot of its impact due to the rhetoric instead, even if it lacks legal force, or cause people to endorse the approach as ideal in absolute terms rather than being the best that could be done at the time.

So, for example, the actual proposals for open models were almost reasonable, but if the takeaway is lots more rhetoric of ‘yay open models’ like it is in this WSJ editorial, where the central theme is very clearly ‘we must beat China, nothing else matters, this plan helps beat China, so the plan is good’ then that’s really bad.

Another important example: Nothing in the policy proposals here makes future international cooperation harder. The rhetoric? A completely different story.

The same WSJ article also noticed the same obvious contradictions with other Trump policies that I did – throttling renewable energy and high-skilled immigration and even visas are incompatible with our goals here, the focus on ‘woke AI’ could have been much worse but remains a distraction, also I would add, what is up with massive cuts to STEM research if we are taking this seriously? If we are serious about winning and worry that one false move would ‘forfeit the race’ then we need to act like it.

Of course, none of that is up to the people who were writing the AI Action Plan.

What the WSJ editorial board didn’t notice, or mention at all, is the possibility that there are other risks or downsides at play here, and it dismisses outright the possibility of any form of coordination or cooperation. That’s a very wrong, dangerous and harmful attitude, one it shares with many in or lobbying the government.

A worry I have on reflection, that I wasn’t focusing on at the time, is that officials and others might treat the endorsements of the good policy proposals here as an endorsement of the overall plan presented by the rhetoric, especially the rhetoric at the top of the plan, or of the plan’s sufficiency, as if it were okay to ignore and not speak about what the plan itself ignores and does not speak about.

That rhetoric was alarmingly (but unsurprisingly) terrible, as it is the general administration plan of emphasizing whenever possible that we are in an ‘AI race’ that will likely go straight to AGI and superintelligence even if those words couldn’t themselves be used in the plan, where ‘winning’ is measured in the mostly irrelevant ‘market share.’

And indeed, the inability to mention AGI or superintelligence in the plan leads to exactly the standard David Sacks lines that toxically center the situation on ‘winning the race’ by ‘exporting the American tech stack.’

I will keep repeating, if necessary until I am blue in the face, that this is effectively a call (the motivations for which I do not care to speculate) for sacrificing the future and getting us all killed in order to maximize Nvidia’s market share.

There is no ‘tech stack’ in the meaningful sense of necessary integration. You can run most any AI model on most any advanced chip, and switch on an hour’s notice.

It does not matter who built the chips. It matters who runs the chips and for whose benefit. Supply is constrained by manufacturing capacity, so every chip we sell is one less chip we have. The idea that failure to hand over large percentages of the top AI chips to various authoritarians, or even selling H20s directly to China as they currently plan to do, would ‘forfeit’ ‘the race’ is beyond absurd.

Indeed, both the rhetoric and actions discussed here do the exact opposite. They put pressure on others, especially China, to push harder towards ‘the race,’ including the part that counts, the one to AGI, and also the race for diffusion and AI’s benefits. And the chips we sell arm China and others to do this important racing.

There is later talk acknowledging that ‘we do not intend to ignore the risks of this revolutionary technological power.’ But Sacks frames this as entirely about the risk that AI will be misused or stolen by malicious actors. Which is certainly a danger, but far from the primary thing to worry about.

That’s what happens when you are forced to pretend AGI, ASI, potential loss of control and all other existential risks do not exist as possibilities. The good news is that there are some steps in the actual concrete plan to start preparing for those problems, even if they are insufficient and it can’t be explained, but it’s a rough path trying to sustain even that level of responsibility under this kind of rhetorical oppression.

The vibes and rhetoric were accelerationist throughout, especially at the top, and completely ignored the risks and downsides of AI, and the dangers of embracing a rhetoric based on an ‘AI race’ that we ‘must win,’ and where that winning mostly means chip market share. Going down this path is quite likely to get us all killed.

I am happy to make the trade of allowing the rhetoric to be optimistic, and to present the Glorious Transhumanist Future as likely to be great even as we have no idea how to stay alive and in control while getting there, so long as we can still agree to take the actions we need to take in order to tackle that staying alive and in control bit – again, the actions are mostly the same even if you are highly optimistic that it will work out.

But if you dismiss the important dangers entirely, then your chances get much worse.

So I want to be very clear that I hate that rhetoric, I think it is no good, very bad rhetoric both in terms of what is present and what (often with good local reasons) is missing, while reiterating that the concrete particular policy proposals were as good as we could reasonably have hoped for on the margin, and the authors did as well as they could plausibly have done with people like Sacks acting as veto points.

That includes the actions on ‘preventing Woke AI,’ which have convinced even Sacks to frame this as preventing companies from intentionally building DEI into their models. That’s fine, I wouldn’t want that either.

Even outlets like Transformer weighed in positively, calling the plan ‘surprisingly okay’ and noting its ability to get consensus support, while ignoring the rhetoric. They correctly note the plan is very much not adequate. It was a missed opportunity to talk about or do something about various risks (although I understand why), and there was much that could have been done that wasn’t.

Seán Ó hÉigeartaigh: Crazy to reflect on the three global AI competitions going on right now:

– 1. US political leadership have made AI a prestige race, echoing the Space Race. It’s cool and important and strategic, and they’re going to Win.

– 2. For Chinese leadership AI is part of economic strength, soft power and influence. Technology is shared, developing economies will be built on Chinese fundamental tech, the Chinese economy and trade relations will grow. Weakening trust in a capricious US is an easy opportunity to take advantage of.

– 3. The AGI companies are racing something they think will out-think humans across the board, that they don’t yet know how to control, and think might literally kill everyone.

Scariest of all is that it’s not at all clear to decision-makers that these three things are happening in parallel. They think they’re playing the same game, but they’re not.

I would modify the US political leadership position. I think to a lot of them it’s literally about market share, primarily chip market share. I believe this because they keep saying, with great vigor, that it is literally about chip market share. But yes, they think this matters because of prestige, and because this is how you get power.

My guess is, mostly:

  1. The AGI companies understand these are three distinct things.

    1. They are using the confusions of political leadership for their own ends.

  2. The Chinese understand there are two distinct things, but not three.

    1. As in, they know what US leadership is doing, and they know what they are doing, and they know these are distinct things.

    2. They do not feel the AGI and understand its implications.

  3. The bulk of the American political class cannot differentiate between the US and Chinese strategies, or strategic positions, or chooses to pretend not to, cannot imagine things other than ordinary prestige, power and money, and cannot feel the AGI.

    1. There are those within the power structure who do feel the AGI, to varying extents, and are trying to sculpt actions (including the action plan) accordingly with mixed success.

    2. An increasing number of them, although still small, do feel the AGI to varying extents but have yet to cash that out into anything except ‘oh ’.

  4. There is of course a fourth race or competition, which is to figure out how to build it without everyone dying.

The actions one would take in each of these competitions are often very similar, especially the first three and often the fourth as well, but sometimes are very different. What frustrates me most is when there is an action that is wise on all levels, yet we still don’t do it.

Also, on the ‘preventing Woke AI’ question, the way the plan and order are worded seems designed to make compliance easy and not onerous, but given other signs from the Trump administration lately, I think we have reason to worry…

Fact Post: Trump’s FCC Chair says he will put a “bias monitor” in place who will “report directly” to Trump as part of the deal for Sky Dance to acquire CBS.

Ari Drennen: The term that the Soviet Union used for this job was “apparatchik” btw.

I was willing to believe that firing Colbert was primarily a business decision. This is very different. Imagine the headline in reverse: “Harris’s FCC Chair says she will put a “bias monitor” in place who will “report directly” to Harris as part of the deal for Sky Dance to acquire CBS.”

Now imagine it is 2029, and the headline is ‘AOC appoints new bias monitor for CBS.’ Now imagine it was FOX. Yeah. Maybe don’t go down this road?

Director Kratsios has now given us his view on the AI Action Plan. This is a chance to see how much it is viewed as terrible rhetoric versus its good policy details, and to what extent overall policy is going to be guided by good details versus terrible rhetoric.

Peter Wildeford offers his takeaway summary.

Peter Wildeford: Winning the Global AI Race

  1. The administration’s core philosophy is a direct repudiation of the previous one, which Kratsios claims was a “fear-driven” policy “manically obsessed” with hypothetical risks that stifled innovation.

  2. The plan is explicitly called an “Action Plan” to signal a focus on immediate execution and tangible results, not another government strategy document that just lists aspirational goals.

  3. The global AI race requires America to show the world a viable, pro-innovation path for AI development that serves as an alternative to the EU’s precautionary, regulation-first model.

He leads with hyperbolic slander, which is par for the course, but yes concrete action plans are highly useful and the EU can go too far in its regulations.

There are kind of two ways to go with this.

  1. You could label any attempt to do anything to ensure we don’t die as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, and thus you probably die.

  2. You could label the EU and Biden Administration as ‘fear-driven’ and ‘manically obsessed’ with ‘hypothetical’ risks that ‘stifle’ innovation, contrasting that with your superior approach, and then, having paid this homage, do reasonable things.

The AI Action Plan as written was the second one. But you have to do that on purpose, because the default outcome is to shift to the first one.

Executing the ‘American Stack’ Export Strategy

  1. The strategy is designed to prevent a scenario where the world runs on an adversary’s AI stack by proactively offering a superior, integrated American alternative.

  2. The plan aims to make it simple for foreign governments to buy American by promoting a “turnkey solution”—combining chips, cloud, models, and applications—to reduce complexity for the buyer.

  3. A key action is to reorient US development-finance institutions like the DFC and EXIM to prioritize financing for the export of the American AI stack, shifting their focus from traditional hard infrastructure.

The whole ‘export’ strategy is either nonsensical, or an attempt to control capital flow, because I heard a rumor that it is good to be the ones directing capital flow.

Once again, the ‘tech stack’ thing is not, as described here, what’s the word? Real.

The ‘adversary’ does not have a ‘tech stack’ to offer, they have open models people can run on the same chips. They don’t have meaningful chips to even run their own operations, let alone export. And the ‘tech’ does not ‘stack’ in a meaningful way.

Turnkey solutions and package marketing are real. I don’t see any reason for our government to be so utterly obsessed with them, or even involved at all. That’s called marketing and serving the customer. Capitalism solves this. Microsoft and Amazon and Google and OpenAI and Anthropic and so on can and do handle it.

Why do we suddenly think the government needs to be prioritizing financing this? Given that it includes chip exports, how is it different from ‘traditional hard infrastructure’? Why do we need financing for the rest of this illusory stack when it is actually software? Shouldn’t we still be focusing on ‘traditional hard infrastructure’ in the places we want it, and then whenever possible exporting the inference?

Refining National Security Controls

  1. Kratsios argues the biggest issue with export controls is not the rules themselves but the lack of resources for enforcement, which is why the plan calls for giving the Bureau of Industry and Security (BIS) the tools it needs.

  2. The strategy is to maintain strict controls on the most advanced chips and critical semiconductor-manufacturing components, while allowing sales of less-advanced chips under a strict licensing regime.

  3. The administration is less concerned with physical smuggling of hardware and more focused on preventing PRC front companies from using legally exported hardware for large-scale, easily flaggable training runs.

  4. Proposed safeguards against misuse are stringent “Know Your Customer” (KYC) requirements paired with active monitoring for the scale and scope of compute jobs.

It is great to see the emphasis on enforcement. It is great to hear that the export control rules are not the issue.

In which case, can we stop waiving them, such as with H20 sales to China? Thank you. There is of course a level at which chips can be safely sold even directly to China, but the experts all agree the H20 is past that level.

The lack of concern about smuggling turns a blind eye to overwhelming evidence of widespread smuggling. I don’t much care whether they claim to be concerned; I care about actual enforcement, and we need enforcement. Yes, we should stop ‘easily flaggable’ PRC training runs and use KYC techniques, but this is saying we should look for our keys under the streetlight, and then if we don’t find the keys assume we can start our car without them.

Championing ‘Light-Touch’ Domestic Regulation

  1. The administration rejects the idea of a single, overarching AI law, arguing that expert agencies like the FDA and DOT should regulate AI within their specific domains.

  2. The president’s position is that a “patchwork of regulations” across 50 states is unacceptable because the compliance burden disproportionately harms innovative startups.

  3. While using executive levers to discourage state-level rules, the administration acknowledges that a durable solution requires an act of Congress to create a uniform federal standard.

Yes, a ‘uniform federal standard’ would be great, except they have no intention of even pretending to meaningfully pursue one. They want each federal agency to do its thing in its own domain, as in a ‘use case’ based AI regime, which, when done on its own, is the EU approach and doomed to failure.

I do acknowledge the step down from ‘kill state attempts to touch anything AI’ (aka the insane moratorium) to ‘discourage’ state-level rules using ‘executive levers,’ at which point we are talking price. One worries the price will get rather extreme.

Addressing AI’s Economic Impact at Home

  1. Kratsios highlights that the biggest immediate labor need is for roles like electricians to build data centers, prompting a plan to retrain Americans for high-paying infrastructure jobs.

  2. The technology is seen as a major productivity tool that provides critical leverage for small businesses to scale and overcome hiring challenges.

  3. The administration issued a specific executive order on K-12 AI education to ensure America’s students are prepared to wield these tools in their future careers.

Ahem, immigration, ahem, also these things rarely work, but okay, sure, fine.

Prioritizing Practical Infrastructure Over Hypothetical Risk

  1. Kratsios asserts that chip supply is no longer a major constraint; the key barriers to the AI build-out are shortages of skilled labor and regulatory delays in permitting.

  2. Success will be measured by reducing the time from permit application to “shovels in the ground” for new power plants and data centers.

  3. The former AI Safety Institute is being repurposed to focus on the hard science of metrology—developing technical standards for measuring and evaluating models, rather than vague notions of “safety.”

It is not the only constraint, but it is simply false to say that chip supply is no longer a major constraint.

Defining success in infrastructure in this way would, if taken seriously, lead to large distortions in the usual obvious Goodhart’s Law ways. I am going to give the benefit of the doubt and presume this ‘success’ definition is local, confined to infrastructure.

If the only thing America’s former AISI can now do is formal, measured technical standards, then that is at least a useful thing it can hopefully do well, but it basically rules out at the conceptual level the idea of actually addressing the most important safety issues, by dismissing them as ‘vague.’

This goes beyond ‘that which is measured is managed’ to an open plan of ‘that which is not measured is not managed, it isn’t even real.’ Guess how that turns out.

Defining the Legislative Agenda

  1. While the executive branch has little power here, Kratsios identifies the use of copyrighted data in model training as a “quite controversial” area that Congress may need to address.

  2. The administration would welcome legislation that provides statutory cover for the reformed, standards-focused mission of the Center for AI Standards and Innovation (CAISI).

  3. Continued congressional action is needed for appropriations to fund critical AI-related R&D across agencies like the National Science Foundation.

TechCrunch: 20 national security experts urge Trump administration to restrict Nvidia H20 sales to China.

The letter says the H20 is a potent accelerator of China’s frontier AI capabilities and could be used to strengthen China’s military.

Americans for Responsible Innovation: The H20 and the AI models it supports will be deployed by China’s PLA. Under Beijing’s “Military-Civil Fusion” strategy, it’s a guarantee that H20 chips will be swiftly adapted for military purposes. This is not a question of trade. It is a question of national security.

It would be bad enough if this was about selling the existing stock of H20s, that Nvidia has taken a writedown on, even though it could easily sell them in the West instead. It is another thing entirely that Nvidia is using its capacity on TSMC machines to make more of them, choosing to create chips to sell directly to China instead of creating chips for us.

Ruby Scanlon: Nvidia placed orders for 300,000 H20 chipsets with contract manufacturer TSMC last week, two sources said, with one of them adding that strong Chinese demand had led the US firm to change its mind about just relying on its existing stockpile.

It sounds like we’re planning on feeding what would have been our AI chips to China. And then maybe you should start crying? Or better yet tell them they can’t do it?

I share Peter Wildeford’s bafflement here:

Peter Wildeford: “China is close to catching up to the US in AI so we should sell them Nvidia chips so they can catch up even faster.”

I never understand this argument from Nvidia.

The argument is also false, and Nvidia is lying, but I don’t understand it even if it were true.

There is only a 50% premium to buy Nvidia B200 systems within China, which suggests quite a lot of smuggling is going on.

Tao Burga: Nvidia still insists that there’s “no evidence of any AI chip diversion.” Laughable. All while lobbying against the data center chip location verification software that would provide the evidence. Tell me, where does the $1bn [in AI chips smuggled to China] go?

Rob Wiblin: Nvidia successfully campaigning to get its most powerful AI chips into China has such “the capitalists will sell us the rope with which we will hang them” energy.

Various people I follow keep emphasizing that China is smuggling really a lot of advanced AI chips, including B200s and such, and perhaps we should be trying to do something about it, because it seems rather important.

Chipmakers will always oppose any proposal to track chips or otherwise crack down on smuggling and call it ‘burdensome,’ where the ‘burden’ is ‘if you did this they would not be able to smuggle as many chips, and thus we would make less money.’

Reuters Business: Demand in China has begun surging for a business that, in theory, shouldn’t exist: the repair of advanced artificial intelligence chipsets that the US has banned the export of to its trade and tech rival.

Peter Wildeford: Nvidia position: “datacenters from smuggled products is a losing proposition […] Datacenters require service and support, which we provide only to authorized NVIDIA products.”

Reality: Nvidia AI chip repair industry booms in China for banned products.

Scott Bessent warns that TSMC’s $40 billion Arizona fab, which could meet 7% of American chip demand, keeps getting delayed, and blames inspectors and red tape. There’s confusion in the headline suggesting he is warning it would ‘only’ meet 7% of demand, but 7% of demand would be amazing for one plant, and the article’s text reflects this.

Bessent criticized regulatory hurdles slowing construction of the $40 billion facility. “Evidently, these chip design plants are moving so quickly, you’re constantly calling an audible and you’ve got someone saying, ‘Well, you said the pipe was going to be there, not there. We’re shutting you down,’” he explained.

It does also mean that if we want to meet 100% or more of demand we will need a lot more plants, but we knew that.

Epoch reports that Chinese hardware is behind American hardware, and is ‘closing the gap’ but faces major obstacles in chip manufacturing capability.

Epoch: Even if we exclude joint ventures with U.S., Australian, or U.K. institutions (where the developers can access foreign silicon), the clear majority of homegrown models relied on NVIDIA GPUs. In fact, it took until January 2024 for the first large language model to reportedly be trained entirely on Chinese hardware, arguably years after the first large language models.

Probably the most important reason for the dominance of Western hardware is that China has been unable to manufacture these AI chips in adequate volumes. Whereas Huawei reportedly manufactured 200,000 Ascend 910B chips in 2024, estimates suggest that roughly one million NVIDIA GPUs were legally delivered to China in the same year.

That’s right. For every top level Huawei chip manufactured, Nvidia sold five to China. No, China is not about to export a ‘full Chinese tech stack’ for free the moment we turn our backs. They’re offering downloads of r1 and Kimi K2, to be run on our chips, and they use all their own chips internally because they still have a huge shortage.

Put bluntly, we don’t see China leaping ahead on compute within the next few years. Not only would China need to overcome major obstacles in chip manufacturing and software ecosystems, they would also need to surpass foreign companies making massive investments into hardware R&D and chip fabrication.

Unless export controls erode or Beijing solves multiple technological challenges in record time, we think that China will remain at least one generation behind in hardware. This doesn’t prevent Chinese developers from training and running frontier AI models, but it does make it much more costly.

Overall, we think these costs are large enough to put China at a substantial disadvantage in AI scaling for at least the rest of the decade.

Beating China may or may not be your number one priority. We do know that taking export controls seriously is the number one priority for ‘beating China.’

Intel will cancel 14A and following nodes, essentially abandoning the technological frontier, if it cannot win a major external customer.


The Week in AI Governance


Research roundup: 7 cool science stories we almost missed


Other July stories: Solving a 150-year-old fossil mystery and the physics of tacking a sailboat.

150-year-old fossil of Palaeocampa anthrax isn’t a sea worm after all. Credit: Christian McCall

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. July’s list includes the discovery of the tomb of the first Maya king of Caracol in Belize, the fluid dynamics of tacking a sailboat, how to determine how fast blood was traveling when it stained cotton fabric, and how the structure of elephant ears could lead to more efficient indoor temperature control in future building designs, among other fun stories.

Tomb of first king of Caracol found

University of Houston provost and archaeologist Diane Chase in the newly discovered tomb of the first ruler of the ancient Maya city Caracol and the founder of its royal dynasty.

Credit: Caracol Archeological Project/University of Houston

Archaeologists Arlen and Diane Chase are the foremost experts on the ancient Maya city of Caracol in Belize and are helping to pioneer the use of airborne LiDAR to locate hidden structures in dense jungle, including a web of interconnected roadways and a cremation site in the center of the city’s Northeast Acropolis plaza. They have been painstakingly excavating the site since the mid-1980s. Their latest discovery is the tomb of Te K’ab Chaak, Caracol’s first ruler, who took the throne in 331 CE and founded a dynasty that lasted more than 460 years.

This is the first royal tomb the husband-and-wife team has found in their 40+ years of excavating the Caracol site. Te K’ab Chaak’s tomb (containing his skeleton) was found at the base of a royal family shrine, along with pottery vessels, carved bone artifacts, jadeite jewelry, and a mosaic jadeite death mask. The Chases estimate that the ruler likely stood about 5’7″ tall and was probably quite old when he died, given his lack of teeth. The Chases are in the process of reconstructing the death mask and conducting DNA and stable isotope analysis of the skeleton.

How blood splatters on clothing

Cast-off blood stain pattern

Credit: Jimmy Brown/CC BY 2.0

Analyzing blood splatter patterns is a key focus in forensic science, and physicists have been offering their expertise for several years now, including in two 2019 studies on splatter patterns from gunshot wounds. The latest insights gleaned from physics concern the distinct ways in which blood stains cotton fabrics, according to a paper published in Forensic Science International.

Blood is a surprisingly complicated fluid, in part because the red blood cells in human blood can form long chains, giving it the consistency of sludge. And blood starts to coagulate immediately once it leaves the body. Blood is also viscoelastic: not only does it deform slowly when exposed to an external force, but once that force has been removed, it will return to its original configuration. Add in coagulation and the type of surface on which it lands, and correctly interpreting the resulting spatter patterns becomes incredibly difficult.

The co-authors of the July study splashed five different fabric surfaces with pig’s blood at varying velocities, capturing the action with high-speed cameras. They found that when a blood stain has “fingers” spreading out from the center, the more fingers there are, the faster the blood was traveling when it struck the fabric. And the faster the blood was moving, the more “satellite droplets” there will be—tiny stains surrounding the central stain. Finally, it’s much easier to estimate the velocity of blood splatter on plain-woven cotton than on other fabrics like twill. The researchers plan to extend future work to include a wider variety of fabrics, weaves, and yarns.

DOI: Forensic Science International, 2025. 10.1016/j.forsciint.2025.112543  (About DOIs).

Offshore asset practices of the uber-rich

The uber-rich aren’t like the rest of us in so many ways, including their canny exploitation of highly secretive offshore financial systems to conceal their assets and/or identities. Researchers at Dartmouth have used machine learning to analyze two public databases and identified distinct patterns in the strategies oligarchs and billionaires in 65 different countries employ when squirreling away offshore assets, according to a paper published in the journal PLoS ONE.

One database tracks offshore finance, while the other rates different countries on their “rule of law.” This enabled the team to study key metrics like how much of their assets elites move offshore, how much they diversify, and how much they make use of “blacklisted” offshore centers that are not part of the mainstream financial system. The researchers found three distinct patterns, all tied to where an oligarch comes from.

Billionaires from authoritarian countries are more likely to diversify their hidden assets across many different centers—a “confetti strategy”—perhaps because these are countries likely to exact political retribution. Others, from countries with effective government regulations—or where there is a pronounced lack of civil rights—are more likely to employ a “concealment strategy” that includes more blacklisted jurisdictions, relying more on bearer shares that protect their anonymity. Those elites most concerned about corruption and/or having their assets seized typically employ a hybrid strategy.

The work builds on an earlier 2023 study concluding that issuing sanctions on individual oligarchs in Russia, China, the US, and Hong Kong is less effective than targeting the small, secretive network of financial experts who manage that wealth on behalf of the oligarchs. That’s because sanctioning just one wealth manager effectively takes out several oligarchs at once, per the authors.

DOI: PLoS ONE, 2025. 10.1371/journal.pone.0326228  (About DOIs).

Medieval remedies similar to TikTok trends

Medieval manuscripts like the Cotton MS Vitellius C III highlight uses for herbs that reflect modern-day wellness trends.

Credit: The British Library

The Middle Ages are stereotypically described as the “Dark Ages,” with a culture driven by superstition—including its medical practices. But a perusal of the hundreds of medical manuscripts collected in the online Corpus of Early Medieval Latin Medicine (CEMLM) reveals that in many respects, medical practices were much more sophisticated; some of the remedies are not much different from alternative medicine remedies touted by TikTok influencers today. That certainly doesn’t make them medically sound, but it does suggest we should perhaps not be too hasty about whom we choose to call backward and superstitious.

Per Binghamton University historian Meg Leja, medieval people were not “anti-science.” In fact, they were often quite keen on learning from the natural world. And their health practices, however dubious they might appear to us—lizard shampoo, anyone?—were largely based on the best knowledge available at the time. There are detox cleanses and topical ointments, such as crushing the stone of a peach, mixing it with rose oil, and smearing it on one’s forehead to relieve migraine pain. (Rose oil may actually be an effective migraine pain reliever.) The collection is well worth perusing; pair it with the Wellcome-funded Curious Cures in Cambridge Libraries to learn even more about medieval medical recipes.

Physics of tacking a sailboat

The Courant Institute's Christiana Mavroyiakoumou, above at Central Park's Conservatory Water with model sailboats

Credit: Jonathan King/NYU

Possibly the most challenging basic move for beginner sailors is learning how to tack to sail upwind. Done correctly, the sail will flip around into a mirror image of its previous shape. And in competitive sailboat racing, a bad tack can lose the race. So physicists at the University of Michigan decided to investigate the complex fluid dynamics at play to shed more light on the tricky maneuver, according to a paper published in the journal Physical Review Fluids.

After modeling the maneuver and conducting numerical simulations, the physicists concluded that there are three primary factors that determine a successful tack: the stiffness of the sail, its tension before the wind hits, and the final sail angle in relation to the direction of the wind. Ideally, one wants a less flexible, less curved sail with high tension prior to hitting the wind and to end up with a 20-degree final sail angle. Other findings: It’s harder to flip a slack sail when tacking, and how fast one manages to flip the sail depends on the sail’s mass and the speed and acceleration of the turn.

DOI: Physical Review Fluids, 2025. 10.1103/37xg-vcff  (About DOIs).

Elephant ears inspire building design

African bush elephant with ears spread in a threat or attentive position and visible blood vessels

Maintaining a comfortable indoor temperature constitutes the largest fraction of energy usage for most buildings, with the surfaces of walls, windows, and ceilings contributing to roughly 63 percent of energy loss. Engineers at Drexel University have figured out how to make surfaces that help rather than hamper efforts to maintain indoor temperatures: using so-called phase-change materials that can absorb and release thermal energy as needed as they shift between liquid and solid states. They described the breakthrough in a paper published in the Journal of Building Engineering.

The Drexel group previously developed a self-warming concrete using a paraffin-based material, similar to the stuff used to make candles. The trick this time around, they found, was to create the equivalent of a vascular network within cement-based building materials. They used a printed polymer matrix to create a grid of channels in the surface of concrete and filled those channels with the same paraffin-based material. When temperatures drop, the material turns into a solid and releases heat energy; as temperatures rise, it shifts its phase to a liquid and absorbs heat energy.

The group tested several different configurations and found that the most effective combination of strength and thermal regulation was realized with a diamond-shaped grid, which boasted the most vasculature surface area. This configuration successfully slowed the cooling and heating of its surface to between 1 and 1.2 degrees Celsius per hour, while holding up against stretching and compression tests. The structure is similar to that of jackrabbit and elephant ears, which have extensive vascular networks to help regulate body temperature.

DOI: Journal of Building Engineering, 2025. 10.1016/j.jobe.2025.112878  (About DOIs).

ID-ing a century-old museum specimen

Neotype of Palaeocampa anthrax from the Mazon Creek Lagerstätte and rediscovered in the Invertebrate Paleontology collection of the MCZ.

Credit: Richard J. Knecht

Natural history museums have lots of old specimens in storage, and revisiting those specimens can sometimes lead to new discoveries. That’s what happened to University of Michigan evolutionary biologist Richard J. Knecht as he was poring over a collection at Harvard’s Museum of Comparative Zoology while a grad student there. One of the fossils, originally discovered in 1865, was labeled a millipede. But Knecht immediately recognized it as a type of lobopod, according to a paper published in the journal Communications Biology. It’s the earliest lobopod yet found, and this particular species also marks an evolutionary leap since it’s the first known lobopod to be non-marine.

Lobopods are the evolutionary ancestors to arthropods (insects, spiders, and crustaceans), and their fossils are common along Paleozoic sea beds. Apart from tardigrades and velvet worms, however, they were thought to be confined to oceans. But Palaeocampa anthrax has legs on every trunk segment, as well as almost 1,000 bristly spines covering its body with orange halos at their tips. Infrared spectroscopy revealed traces of fossilized molecules—likely a chemical that emanated from the spinal tips. Since any chemical defense would just disperse in water, limiting its effectiveness, Knecht concluded that Palaeocampa anthrax was most likely amphibious rather than solely aquatic.

DOI: Communications Biology, 2025. 10.1038/s42003-025-08483-0  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



In search of riches, hackers plant 4G-enabled Raspberry Pi in bank network

“One of the most unusual elements of this case was the attacker’s use of physical access to install a Raspberry Pi device,” Group-IB Senior Digital Forensics and Incident Response Specialist Nam Le Phuong wrote. “This device was connected directly to the same network switch as the ATM, effectively placing it inside the bank’s internal network. The Raspberry Pi was equipped with a 4G modem, allowing remote access over mobile data.”

To maintain persistence, UNC2891 also compromised a mail server because it had constant Internet connectivity. The Raspberry Pi and the mail server backdoor would then communicate by using the bank’s monitoring server as an intermediary. The monitoring server was chosen because it had access to almost every server within the data center.

The Network Monitoring Server as an intermediary between the Raspberry Pi and the Mail Server.

Credit: Group-IB


As Group-IB was initially investigating the bank’s network, researchers noticed some unusual behaviors on the monitoring server, including an outbound beaconing signal every 10 minutes and repeated connection attempts to an unknown device. The researchers then used a forensic tool to analyze the communications. The tool identified the endpoints as a Raspberry Pi and the mail server but was unable to identify the process names responsible for the beaconing.
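A fixed-interval beacon like the 10-minute signal described above is one of the easier things to spot in connection logs: the gaps between outbound connections have very low relative spread. The following is a minimal illustrative sketch of that idea, not Group-IB's actual tooling; the function name and the 5% jitter threshold are assumptions for the example.

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.05):
    """Return True if the gaps between connection timestamps (seconds)
    are near-constant, as a fixed-interval beacon's would be."""
    if len(timestamps) < 3:
        return False  # too few events to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Low relative spread of gaps => suspiciously periodic traffic.
    return avg > 0 and pstdev(gaps) / avg < max_jitter

# A 10-minute (600-second) beacon with slight jitter:
print(looks_like_beacon([0, 600, 1201, 1799, 2400, 3001]))  # True
# Ordinary bursty traffic:
print(looks_like_beacon([0, 30, 500, 520, 900]))            # False
```

Real detections would of course work from flow logs rather than a clean list of timestamps, but the periodicity signal is the same one the researchers noticed.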

The forensic triage tool is unable to collect the relevant process name or ID associated with the socket.

Credit: Group-IB


The researchers then captured the system memory as the beacons were sent. The review identified the process as lightdm, a name associated with the open source LightDM display manager. The process appeared to be legitimate, but the researchers found it suspicious because the LightDM binary was installed in an unusual location. After further investigation, the researchers discovered that the custom backdoor’s processes had been deliberately disguised in an attempt to throw researchers off the scent.

Phuong explained:

The backdoor process is deliberately obfuscated by the threat actor through the use of process masquerading. Specifically, the binary is named “lightdm”, mimicking the legitimate LightDM display manager commonly found on Linux systems. To enhance the deception, the process is executed with command-line arguments resembling legitimate parameters – for example,

lightdm --session child 11 19 — in an effort to evade detection and mislead forensic analysts during post-compromise investigations.

These backdoors were actively establishing connections to both the Raspberry Pi and the internal Mail Server.

As noted earlier, the processes were disguised using the Linux bind mount. Following that discovery, Group-IB added the technique to the MITRE ATT&CK framework as “T1564.013 – Hide Artifacts: Bind Mounts.”
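The "unusual location" heuristic that tipped off the researchers can be sketched as a quick sweep of /proc: flag any process whose binary name matches a well-known system binary but whose on-disk path is outside the expected locations. This is a hypothetical illustration, not Group-IB's tooling; the EXPECTED_PATHS whitelist is a stand-in for real package-manager metadata. Note that a bind mount over /proc/&lt;pid&gt; can hide exactly this metadata, which is why the masquerading technique earned its own ATT&CK entry.

```python
import os

# Hypothetical whitelist: where known system binaries should live.
# A real deployment would derive this from package-manager metadata.
EXPECTED_PATHS = {"lightdm": {"/usr/sbin/lightdm"}}

def masquerading_candidates(proc_root="/proc"):
    """Return (pid, exe) pairs whose binary name matches a known system
    binary but whose path is outside the expected locations."""
    hits = []
    for pid in os.listdir(proc_root):
        if not pid.isdigit():
            continue
        try:
            exe = os.readlink(os.path.join(proc_root, pid, "exe"))
        except OSError:
            continue  # kernel thread, exited process, or no permission
        name = os.path.basename(exe)
        if name in EXPECTED_PATHS and exe not in EXPECTED_PATHS[name]:
            hits.append((pid, exe))
    return hits
```

On a clean system this returns an empty list; a "lightdm" running from, say, a home directory would be flagged.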

Group-IB didn’t say where the compromised switching equipment was located or how attackers managed to plant the Raspberry Pi. The attack was detected and shut down before UNC2891 was able to achieve its final goal of infecting the ATM switching network with the CakeTap backdoor.



Childhood and Education #12: College Admissions

  1. College Applications.

  2. The College Application Essay (Is) From Hell.

  3. Don’t Guess The Teacher’s Password, Ask For It Explicitly.

  4. A Dime a Dozen.

  5. Treat Admissions Essays Like Games of Balderdash.

  6. It’s About To Get Worse.

  7. Alternative Systems Need Good Design.

  8. The SAT Scale Is Broken On Purpose.

In case you missed it, yes, of course Harvard admissions are way up and Harvard did this on purpose. The new finding is that Harvard was recruiting African American applicants in particular, partly in order to balance conditional acceptance rates. One could of course also argue that the goal was ‘to find more worthy students,’ with the counterevidence being that the test scores of such applicants declined as more applications came in (as they obviously would for any group) but the scores of those who got admitted didn’t change.

As a student, one needs to understand that schools love applications they can then reject, and might care about that even more depending on your details. So when they tell you to apply, that you have a shot, that is not the evidence you want it to be.

Or, your future depends on knowing exactly the right way to lie your ass off, and having sufficiently low integrity to do so shamelessly.

One can ask questions like: If you can get hired by Google for an engineering job, you have a 4.4 GPA and a 1590 SAT score, and you get rejected by 5 University of California schools and 16 out of 18 schools overall, is it fair to say that was probably an illegal form of racial discrimination, as the student’s lawsuit claims? It doesn’t automatically have to be that; there could in theory be other details of his application that are problems.

I’d like to answer that objection with ‘who are we kidding,’ but maybe? Two groups debated this recently, after a different applicant, Zack Yadegari, got rejected from nearly every college he applied to for being too successful and daring to write about that.

One group said this was the situation, And That’s Terrible.

The other group said, yes this is the situation, And You’re Terrible, get with the program or go off and die, oh and it’s just the essay, he can apply next year it’s fine.

Zack Yadegari: 18 years old

34 ACT

4.0 GPA

$30M ARR biz

Stanford ❌ MIT ❌ Harvard ❌ Yale ❌ WashU ❌ Columbia ❌ UPenn ❌ Princeton ❌ Duke ❌ USC ❌ Georgia Tech ✅ UVA ❌ NYU ❌ UT ✅ Vanderbilt ❌ Brown ❌ UMiami ✅ Cornell ❌

I dedicated 2 straight weeks to my essays doing nothing else. Had them looked over by the best writers I know.

Michael Druggan: When I applied to Harvard, I was a USAMO winner (only 12 per year and with significant duplicates that works out to significantly less than 12 per graduating class). I also had a clean 36 in every section on the ACT from my very first attempt. Neither of those are dime-a-dozen stats.

The admissions committee didn’t care. They rejected me in favor of legions of clearly academically inferior candidates who did a better job kissing their asses in their application essays. Let’s not pretend this process is anything but a farce.

Avi Shiffmann: My (in my opinion awful) personal statement that got me into Harvard. [comments mostly talk about how the essay is good actually]

Felpix: College admissions is so competitive, kids are just crashing out [describes a kid who basically did everything you could imagine to get in to study Computer Science, and still got rejected from the majors, reasons unknown.]

Gabriel: this is incredibly sad, someone spent their entire childhood to get into MIT with perfect scores without getting in, and now can’t live his dream

all this effort could have been spent on becoming economically valuable and he’d now have his dream job. this is obviously not this person’s fault, but the fault of collective inability to change, and constantly reaffirming our beliefs that whatever we have now is working great. we put this talented person into doing fake work to get the chance to do more fake work, to get a degree, which is seen as much more important than the actual work being performed later

he wasted his entire childhood, literally irreparable damage

The kid Felpix is quoting is going to have a fine future without academia, and yes they’d be a year ahead of the game if they’d spent all that time learning to code better instead of playing the college game. It’s not even clear they should have been trying to go to college at all, other than VCs wanting to see you go for a bit and then drop out.

Zack’s mistake was, presumably, asking the best writers he knew rather than people who know how to write college essays in particular.

Dr. Ellie Murray, ScD: The fact that every academic reads this guy’s essay and is like, yeah of course you didn’t get in, but tech twitter all seem to think he was a shoo-in and cheated out of a spot… We’re living in 2 different worlds and it’s a problem.

If you’re writing your own college apps & want to know how to avoid these pitfalls, there are lots of great threads about this guy’s essay. Start here.

Mason: The essay is easily the most regressive and gameable part of the app. The point tech twitter is making is not that the essay is good, but that if this kid came from the “right” family his essay would have been ghostwritten by an admissions coach anyway

Amal Dorai: It’s supposed to be gameable! They’re trying to put their imprimatur on a meritocratic class that can “game” its way into the country’s power elite. Yes it’s a sort of pre-Trumpian way of thinking but they are not just looking for the country’s future NVIDIA engineers.

Monica Marks: Statistically well-qualified applicants come a dime a dozen in elite admissions, more than most people realise.

For every student w/ perfect scores like Zach, there’s a student w/ near perfect scores & more humility who’s overcome terrible circumstances & does not seem entitled.

[she gives advice on how to write a good essay, basically ‘sell that you can pretend that you need this in order to fight some Good Fight that liberals love and are super motivated and shows the proper appreciation and humility etc, and in his case he should have emphasized his Forbes essay rather than his actual achievements.’]

Wind Come Calling: I’ve read applications from kids like this and, being obviously very bright, they tend to think they can hide their arrogance or sense of entitlement, that it won’t come through in their application or that the reviewers will miss it. they are mistaken.

Lastdance: “You must follow my lead and feign humility. If you are merely gifted then go somewhere else, it’s the gifted liar we want!”

Kelsey Piper: before you make fun of someone’s college application personal statement, I urge you to go way back into your old emails and read your own college application essays, I promise this will cure you of the urge to say anything mean about anyone else’s

Tracing Woodgrains: I’m seeing people criticize this personal statement, and—look. Don’t subject yourself to the indignity of defending arbitrary games. the Personal Statement is the lowest genre of essay and the worst admissions practice. his resumé speaks for itself.

“but the personal statement is…”

…an arbitrary game of “guess what’s in my head,” inauthenticity embodied by writers and readers alike. an undignified hazing ritual whether written by you, your $200/hr advisor, or your good friend Claude.

good? bad? junk either way.

every time people defend this system on its own terms it makes me grimace

do not validate a system that encourages kids to twist themselves into pretzels and then purports to judge their whole persons

the whole game is socially corrosive.

so like [Monica Marks from above] seems perfectly nice but I simply do not want access to be gatekept by “did I strike the perfect tone to flatter her sensibilities”

the red flags – someone go tell UMass Amherst they got a dud! or don’t, bc it’s a deranged process

Tracing Woods (also): It’s not the competition that gets people, I suspect, but the arbitrariness. Young, ambitious people jump through a million hoops to roll the dice. It is unhealthy to let this process control so much of the collective youth psyche. Elite college admissions are twisted.

Deedy: Reddit father says son who is

— #1/476 in high school

— 1580/1600 on SAT

— 5/5 on 18 APs

got rejected by all the Ivies for CS. Only got UMass Amherst.

It’s college season and this is the #1 post last week on r/ApplyingToCollege.

Competition is fine, but this just feels unfair.

Of course, some people will say it’s fake but if you read the OP’s comments it feels real. Son is 1/4th Korean 3/4th white, according to his comments.

Depending on where you set the bar for applicants, ‘statistically well-qualified’ might be ‘dime a dozen,’ maybe even being #1 in your HS with 1580 SAT and 18 5/5 APs is ‘a dime a dozen.’ That’s by design; as I discuss elsewhere, the tests cap out on purpose. If the top colleges wanted differentiation the tests would provide it.

But you know what very much is not ‘a dime a dozen’? Things like being a USAMO winner or founding a $30mm ARR business.

If admissions chooses not to care much about even that, and merely puts it into the ‘statistical qualification’ bucket and mostly looks to see who within that bucket is better at playing the Guess the Teacher’s Password game and playing their PTCs (Personal Trauma Cards) and being members of the preferred groups and so on, well, it is what it is.

If you see someone thinking being a USAMO winner and founding a $30mm ARR business means they shouldn’t be feigning false humility, and think ‘that’s an asshole,’ well, I have a humble suggestion about who the asshole is in this situation.

And it’s totally fair to point out that this is indeed what it is, and that our academic system checks your ‘statistical qualifications’ but is mostly actively selecting for this form of strategic dishonesty combined with class performance and some inherent characteristics.

That is very different from saying that this is good, actually. It’s not good.

I would also however say that it is now common knowledge that this is how it works. So, given that it is common knowledge how this works, while I place a super high value on honesty and honor, I hereby give everyone reading this full ethical and moral permission to completely lie your ass off.

College admission essays are not a place where words have meaning and you are representing your statement as true. So aside from specific verifiable things like your GPA or SAT score, you can and should lie your ass off the same way you would lie when playing a game of Diplomacy or Balderdash. It doesn’t count, and I will not hold it against you, at all.

Oh, also, requiring all these hours of volunteer work is straight up enslavement of our kids for child labor, and not the good kind where you learn valuable skills.

Those disputes were at the top of the scale. An at least somewhat reasonable response would be ‘boo hoo, you didn’t get into the top 25 colleges in the country, go to your state college and you’ll be fine.’

Except that the state colleges are sometimes doing it too. And that’s not okay, at all.

Analytic Valley Girl Chris: State universities should be legally mandated to accept any in state graduate who meets certain academic thresholds, save some compelling disqualification. Generic “not what we’re looking for” shouldn’t be allowed.

As in, MIT can do what it wants, it’s their loss, but UC San Diego and UC Davis?

Yes, obviously if you simply want ‘any college at all’ there will always be options for such students, but that degree and experience, and the connections available to be made, will offer dramatically lower value. Going is probably a large mistake.

The ‘top X% of your class’ system, such as Texas’s top 10% rule, is excellent. I’d supplement that with a points system or threshold rules or both for grades, test scores and other quantifiable achievements, with a known minimum auto-admission threshold.

UATX does a simplified version of this; the deadline for this year has passed.

University of Austin (UATX): College admissions are unjust.

Not just biased. Not just broken. Unjust.

Students spend high school anxiously stacking their résumés with hollow activities, then collect generic recommendation letters and outsource their essays to tutors or AI. Admissions at elite colleges now come down to who you know, your identity group, or how well you play the game.

This system rewards manipulation, not merit. It selects for conformity, not character.

That’s why we’re introducing the University of Austin’s new admissions policy:

If you score 1460+ on the SAT, 33+ on the ACT, or 105+ on the CLT, you will be automatically admitted, pending basic eligibility and an integrity check. Below that threshold, you’ll be evaluated on your test scores, AP/IB results, and three verifiable achievements, each described in a single sentence.

That’s it.

We care about two things: Intelligence and courage.

Intelligence to succeed in a rigorous intellectual environment (we don’t inflate grades). Courage to join the first ranks of our truth-oriented university.

College admission should be earned—not inherited, bought, or gamed. At UATX, your merit earns you a place—and full tuition scholarship.

Apply here by April 15.

Note the deadline. Because your decisions are deterministic, you get to move last.
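
The quoted policy is deterministic enough to sketch in a few lines. This is an illustrative sketch only; the thresholds come from the announcement above, while the eligibility and integrity inputs are simplified stand-ins for checks UATX does not spell out:

```python
def uatx_auto_admit(sat=None, act=None, clt=None,
                    eligible=True, passes_integrity=True):
    """Auto-admit if any one test meets its threshold (1460 SAT,
    33 ACT, 105 CLT), pending eligibility and an integrity check."""
    if not (eligible and passes_integrity):
        return False
    thresholds = [(sat, 1460), (act, 33), (clt, 105)]
    return any(score is not None and score >= cutoff
               for score, cutoff in thresholds)
```

Below-threshold applicants fall through to the separate evaluation described in the announcement (test scores, AP/IB results, three one-sentence achievements), which is not modeled here.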

As in, they get to sweep up all these students whose essays were rejected or who got discriminated against. Then we get to find out what happens when you put them all together. And you get to see which employers are excited by that, and which aren’t.

The New York Times headline writers understood the assignment, although it’s even worse than this: Elite Colleges Have Found a New Virtue For Applicants To Fake.

The basic version is indeed a new virtue to fake, combined with a cultural code to crack and teacher’s password to guess, the ‘disagreement question’:

Alex Bronzini-Vender (Sophomore, Harvard University, hire him): This time I found a new question: “Tell us about a moment when you engaged in a difficult conversation or encountered someone with an opinion or perspective that was different from your own. How did you find common ground?”

It’s known as the disagreement question, and since the student encampments of spring 2024 and the American right’s attacks on universities, a growing number of elite colleges have added it to their applications.

This didn’t escalate quickly so much as skip straight to the equilibrium. Kids are pros.

The trouble is that the disagreement question — like much of the application process — isn’t built for honesty. Just as I once scrambled to demonstrate my fluency in D.E.I., students now scramble to script the ideal disagreement, one that manages to be intriguing without being dangerous.

So now there’s a new guessing game in town.

Then again, maybe demonstrating one’s ability to delicately navigate controversial topics is the point. Perhaps the trick is balance? Be humble; don’t make yourself look too right. But you can’t choose a time when you were entirely wrong, either. Or should you tailor your responses by geography, betting that, say, a Southern admissions officer would be more likely to appreciate a conservative-leaning anecdote?

The emerging consensus in the application-prep industry is that it’s best to avoid politics entirely. … Dr. Jager-Hyman, for her part, usually advises students to choose a topic that is meaningful to them but unlikely to stoke controversy — like a time someone told you your favorite extracurricular activity was a waste of time.

So far, ordinary terrible, sure, fine, I suppose it’s not different in kind than anything else in the college essay business. Then it gets worse.

This fall, an expanding number of top schools — including Columbia, M.I.T., Northwestern, Johns Hopkins, Vanderbilt and the University of Chicago — will begin accepting “dialogues” portfolios from Schoolhouse.world, a platform co-founded by Sal Khan, the founder of Khan Academy, to help students with math skills and SAT prep.

High-schoolers will log into a Zoom call with other students and a peer tutor, debate topics like immigration or Israel-Palestine, and rate one another on traits like empathy, curiosity or kindness. The Schoolhouse.world site offers a scorecard: The more sessions you attend, and the more that your fellow participants recognize your virtues, the better you do.

“I don’t think you can truly fake respect,” Mr. Khan said.

Even as intended this is terrible already:

Owl of Athena: Remember when I told you Sal Khan was evil? I didn’t know the half of it!

Meet the Civility Score, courtesy of Khan’s “Dialogues.”

Get your kids used to having a social credit score, and make sure they understand their highest value should be the opinion of their peers! What could possibly go wrong?!

Steve McGuire: Elite universities are going to start using peer-scored civility ratings for admissions?!

Sorry, that’s a terrible idea. Why not just admit people based on their scores and then teach them to debate and dialogue?

You don’t need to go full CCP to solve this problem.

Nate Silver: This is basically affirmative action for boring people.

Blightersort: it is kind of amazing that elite schools would look at the current world and worry they are not selecting for conformity strongly enough and then work on new ways to select for conformity

Except of course it is way worse than that on multiple levels.

Remember your George Burns: “Sincerity is the most important thing. If you can fake that you’ve got it made.”

Of course you can fake respect. I do it and see it all the time. It is a central life skill.

Also, if you’re not generally willing to or don’t know how to properly pander to peers in such settings, don’t ‘read the room’ or are ugly? No college for you.

You can, and people constantly do, fake empathy, curiosity and kindness. It is not only a central life skill, but it is considered a central virtue.

And the fortunate ones won’t have to do it alone: They’ll have online guides, school counselors and private tutors to help them learn to simulate earnestness.

You could argue that one cannot fake civility, because there is no difference between faked civility and real civility. It lives in the perception of the audience. And you can argue that to some extent this applies to other such virtues too.

Quite possibly, there will be rampant discrimination of other kinds, as well. Expect lots of identity-based appeals. The game will be won by those who play to win it.

And then let’s address the elephant in the room. Notice this sentence:

High-schoolers will log into a Zoom call with other students and a peer tutor, debate topics like immigration or Israel-Palestine, and rate one another on traits like empathy, curiosity or kindness.

Yeah. Um.

Neils Hoven: Oh look, they figured out how to scale ideological conformity testing.

Brain in a jar: Haha hot people and conformists will win. Fuck.

If you have a bunch of high schoolers rating each other on ‘empathy, curiosity or kindness’ on the basis of discussions of those topics, that is a politics test. If you go in there and take a right-wing stance on immigration? No college for you. Not pledging your support for ending the state of Israel? No college for you. Indeed, I’m willing to bet that going in with actual full empathy and curiosity will get you lower, not higher, scores than performative endorsement.

To be fair, the website doesn’t emphasize those topics in particular, although I’m assuming they were listed because the author here encountered them. Instead, it looks like this:

The problem will persist regardless, if less egregiously. Across essentially all topics, the peer consensus in high school is left-wing, and left-wing consensus holds that left-wing views are empathic and curious and kind, whereas anything opposed to them is not. I would very much not advise anyone to oppose student debt ‘relief,’ meaning abrogation of contracts, or to take anything but the most radical positions on climate change, or to oppose aggressive moderation and censorship of social media.

Short of using AI evaluators (an actually very good idea), I don’t see a way around the problem that this is not a civility test, it is a popularity and ideological purity challenge, and we are forcing everyone to put on an act.

On the positive side (I am not sure if I am kidding), it also is potentially a game theory challenge. 5 for 5, anyone? Presumably students will quickly learn the signals of how to make the very obvious deals with each other.

Also, you see what else this is testing? You outright get a higher score for participating in more of these ‘dialogues.’ You also presumably learn, over time, the distribution of other participants, and what techniques work on them, and develop your ‘get them to rank you highly’ skills. So who wants to grind admissions chances between hours of assigned busywork (aka ‘homework’) and mandatory (‘community service’) shifts working as an indentured servant, perhaps at the homeless shelter?

You cannot simply do this:

Zaid Jilani: Make SAT and GPA almost all of the college admissions standard, any essays should be analytical like on GRE rather than personal.

Mike Riggs: And you have no concerns about grade inflation?

Zaid Jilani: I do but how is that any different than status quo? Have to deal with that issue regardless. FWIW that’s much worse in college than high school.

Kelsey Piper: yep. just cut all the holistic shit. it turns high school into hell without meaningfully identifying the kids most prepared to contribute at top schools let alone teaching them anything

Emmett Shear: Overfit! Overfit! You cannot make your model robust by adding more parameters, only more accurate in the moment! Trying to create a global rating for “best students” is a bad idea and intrinsically high-complexity. Stop doing that.

Most of the holistic stuff needs to go. The essay needs to either go fully, or become another test taken in person, ideally graded pass-fail, to check for competence.

You do need a way to account for truly outstanding achievement in the field of excellence.

I would thus first reserve some number of slots for positive selection outside the system, for those who are just very obviously people you want to admit.

I also think you need to have a list of achievements, at least on AP and other advanced tests, that grant bonus points. The SAT does not get hard enough or cover a wide enough set of topics.

I think you mostly don’t need to worry about any but the most extreme deal breakers and negative selection. Stop policing anything that shouldn’t involve actual police.

The other problem then is that at this level of stakes everything will get gamed. You cannot use a fully or even partially uncontrolled GPA if you are not doing holistic adjustments. GPAs would average 4.33 so fast it would make your head spin. Any optional class not playing along would be dropped left and right. And so on. If you want to count GPA at all, you need to adjust for school and by-class averages, and adjust for the success rate of the school of origin as a function of average-adjusted GPA controlling for SAT, and so on.
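
As a minimal sketch of the school-level adjustment described here (my own illustration, not a full model): express each GPA as standard deviations above that student’s own school’s average, so across-the-board inflation at one school stops helping its students.

```python
from statistics import mean, stdev

def school_adjusted_gpa(gpa, school_gpas):
    """Return a GPA as standard deviations above the school's mean.
    If every student at a school gets a 4.33, nobody gains anything."""
    mu = mean(school_gpas)
    sigma = stdev(school_gpas)
    if sigma == 0:
        return 0.0  # all grades identical: no information in GPA
    return (gpa - mu) / sigma
```

A 4.0 at a school averaging 3.9 now scores below a 3.8 at a school averaging 3.0; the further corrections in the text (graduate success rates, SAT controls) would layer on top of this.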

The ultimate question here is whether you want students in high school to be maximizing GPA as a means to college admissions. It can be a powerful motivating factor, but it also warps motivation. My inclination is to say you want to use it mostly as a threshold effect, with the threshold rising as you move up the ladder, with only modest bonus points for going beyond that, or use it as a kind of fast-track that gets you around other requirements, ideally including admission fees.

Where it gets tricky is scholarships. Even if admission depends only on SAT+AP and similar scores plus some extraordinary achievements and threshold checks, the sticker prices of colleges are absurd. So if scholarships depend on other triggers, you end up with the same problem, or you end up with a two-tier system where those who need merit scholarships have to game everything, probably with the ‘immune tier’ rather small since even if you can afford full price that doesn’t mean you want to pay it.

Sasha Gusev has a fun thread pointing out various flaws that lead one back to a holistic approach rather than an SAT+GPA approach. I think that if you do advanced stats on the GPA (perhaps create a GPV, grade percentile value, or GVOA, grade value over average), add in advanced additional objective examinations as sources of additional points (perhaps including at most a standardized entrance exam), and have a clearly defined list of negative-selection dealbreakers (each either an outright dealbreaker or not, nothing in between), you can get good enough that letting students mostly game that is better than the holistic nightmare, and you can two-track as discussed above by reserving some slots for the best of the best on pure holistic judgment.

It’s not perfect, but no options are perfect, and I think these are the better mistakes.

Another way of putting this is:

Sasha Gusev: *Open: Office of the president at the new 100% Meritocratic University*

President: We’ve admitted the top 2,000 applicants by GPA and SATs. How are they doing?

Admissions: Several hundred valedictorians who’ve never gotten a B in their life are now at the bottom of all their classes and are experiencing a collective mental breakdown. Also our sports teams are an embarrassment.

[Zvi’s alternative continuation]: President: Okay. Is there a problem?

Admissions: Yes, this is scaring off some potential applicants, and also our sports teams are an embarrassment.

President: If a few get scared off or decide to transfer to feel smarter because they care mainly about signaling and positional goods rather than learning, that seems fine; make sure our office helps them get good placements elsewhere. And yeah, okay, our sports teams suck, but remind me why I should care about that?

Admissions: Because some students won’t want to go to a school whose sports teams suck and alumni won’t give us money that way.

President: Fine, those students can go elsewhere, too, it’s not like we’re going to be short on applicants, and that’s why we charge tuition.

Admissions: But if all we do is math then you’re going to replace me with an AI!

President: Well, yes.

[End scene.]

Paul Graham: Part of the problem with college admissions is that the SAT is too easy. It doesn’t offer enough resolution at the high end of the scale, and thus gives admissions officers too much discretion.

The problem is that we have tests that solve this problem but no one cares that much about them. Once you are maximizing the SAT, the attitude is not ‘well then, okay, let’s give them the LSAT or GRE and see how many APs they can ace,’ it’s ‘okay we’ll give them a tiny boost for each additional AP and such but mostly we don’t care.’ If the SAT bell curve went up to 2000, then they’d be forced to care, and differentiate 1570 from 1970.

That doesn’t seem hard to do? All you have to do is add some harder questions?

Or alternatively, you could have the ASAT (Advanced SAT), which is the same test on the same curve (a 1500 on ASAT is a 1500 on the SAT), except it’s harder, if you don’t earn at least 1400 you get back a null result the way you do on the USAMO or Putnam, and it goes up to 2500, and you can choose to take that instead. Yes, that’s not technically so different from what we do already, but it would feel very different – you’d be looking at that 1950 in that spot and it would be a lot harder to ignore.
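
That hypothetical scoring rule is simple enough to sketch (the ASAT and all its numbers come from the paragraph above; it is not a real test):

```python
def asat_result(score):
    """Hypothetical ASAT from the text: same curve as the SAT,
    null result below 1400 (like USAMO/Putnam), scale up to 2500."""
    if not 400 <= score <= 2500:
        raise ValueError("score outside the hypothetical ASAT scale")
    return None if score < 1400 else score
```

The point of the null result is exactly the framing effect described above: a 1950 is an unmistakable score on the transcript, while a weak performance simply vanishes rather than counting against you.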

Yeah, well, on that front it just got even worse, and the ACT is making similar changes:

Steve McGuire: Reading passages on the SAT have been shortened from 500-750 words down to 25-150. They say “the eliminated reading passages are ‘not an essential prerequisite for college’ and that the new, shorter content helps ‘students who might have struggled to connect with the subject matter.’” The reality, of course, is that the test is getting easier because so many students are struggling.

Zac Hill: This is capital-B bad not just for the obvious reason (reading is Good) but for the maybe-more-important second-order reason that this is not just about reading; it’s about all information synthesis involving the construction of models as a product of sustained attention.

Alex Tabarrok: SOD: “The SAT now caters to students who have trouble reading long, complex texts.”

Meanwhile in the math section, students have more time per question and free use of a calculator, without the questions changing.

On top of that, this paper says the Math SAT declined in rigor by 71 points between 2008 and 2023, which would mean that we have a 107-point decline in average performance that cuts across major demographic groups. Yikes, but also comments point out that the decline is largely caused by more students taking the test, which should indeed cause them to lower the grading curve. Relative score is what matters, except that we’re running into a lot more cases where 800 isn’t getting the job done.

Schools could of course move to the Classic Learning Test (CLT) or others that would differentiate between students. Instead, they are the customers of the ACT and SAT, and the customer is always right.

The only way to interpret this is that the colleges want to differentiate student ability up to some low minimum threshold, because otherwise the students fail out, but they actively do not want to differentiate on ability at the high end. They prefer other criteria. I will not further speculate as to why.

Perhaps even more important than all that is this; it cannot be overstated how much I see this screwing almost everyone and everything up:

Nephew Jonathan (QTing Tracing Woods above): I’m gonna hijack this: if there’s one thing that explains why everyone under the age of 40 seems to be a nervous wreck it’s the reduction of life to “guessing the teacher’s password” for everything.

Dating apps? Guess the girl’s password. College admissions? Grad school? HR personality screenings?

VPN use soars in UK after age-verification laws go into effect

Also on Friday, the Windscribe VPN service posted a screenshot on X claiming to show a spike in new subscribers. The makers of the AdGuard VPN claimed that they have seen a 2.5X increase in install rates from the UK since Friday.

Nord Security, the company behind the NordVPN app, says it has seen a “1,000 percent increase in purchases” of subscriptions from the UK since the day before the new laws went into effect. “Such spikes in demand for VPNs are not unusual,” Laura Tyrylyte, Nord Security’s head of public relations, tells WIRED. She adds in a statement that “whenever a government announces an increase in surveillance, Internet restrictions, or other types of constraints, people turn to privacy tools.”

People living under repressive governments that impose extensive Internet censorship—like China, Russia, and Iran—have long relied on circumvention tools like VPNs and other technologies to maintain anonymity and access blocked content. But as countries that have long claimed to champion the open Internet and access to information, like the United States, begin considering or adopting age verification laws meant to protect children, the boundaries for protecting digital rights online quickly become extremely murky.

“There will be a large number of people who are using circumvention tech for a range of reasons” to get around age verification laws, the ACLU’s Kahn Gillmor says. “So then as a government you’re in a situation where either you’re obliging the websites to do this on everyone globally, that way legal jurisdiction isn’t what matters, or you’re encouraging people to use workarounds—which then ultimately puts you in the position of being opposed to censorship-circumvention tools.”

This story originally appeared on wired.com.

Tesla picks LGES, not CATL, for $4.3 billion storage battery deal

Tesla has a new battery cell supplier. Although the automaker is vertically integrated to a degree not seen in the automotive industry for decades, when it comes to battery cells it’s mostly dependent upon suppliers. Panasonic cells can be found in many Teslas, with the cheaper, sturdier lithium iron phosphate (LFP) battery cells being supplied by CATL. Now Tesla has a new source of LFP cells thanks to a deal just signed with LG Energy Solutions.

According to The Korea Economic Daily, the contract between Tesla and LGES is worth $4.3 billion. LGES will begin supplying Tesla with cells next August and continue until at least the end of July 2030, with provisions to extend the contract if necessary.

The LFP cells probably aren’t destined for life on the road, however. Instead, they’ll likely be used in Tesla’s energy storage products, which both Tesla and LGES hope will soak up demand now that EV sales prospects look so weak in North America.

The deal also reduces Tesla’s reliance on Chinese suppliers. LGES will produce the LFP cells at its factory in Michigan, says Reuters, and so they will not be subject to the Trump trade war tariffs, unlike Chinese-made cells from CATL.

Although Tesla CEO Elon Musk has boasted about the size of the energy storage market, its contribution to Tesla’s financials remains meagre, and actually shrank during the last quarter.

EPA plans to ignore science, stop regulating greenhouse gases

The EPA’s authority to regulate greenhouse gases derives from a 2007 Supreme Court ruling that named them “air pollutants” under the Clean Air Act, giving the agency the mandate to regulate them.

Critics of the rule say that the Clean Air Act was fashioned to manage localized emissions, not those responsible for global climate change.

A rollback would automatically weaken the greenhouse gas emissions standards for cars and heavy-duty vehicles. Manufacturers such as Daimler and Volvo Cars have previously opposed the EPA’s efforts to tighten emission standards, while industry groups such as the American Trucking Associations said they “put the trucking industry on a path to economic ruin.”

However, Katherine García, director of Sierra Club’s Clean Transportation for All Campaign, said that the rollback would be “disastrous for curbing toxic truck pollution, especially in frontline communities disproportionately burdened by diesel exhaust.”

Energy experts said the move could also stall progress on developing clean energy sources such as nuclear power.

“Bipartisan support for nuclear largely rests on the fact that it doesn’t have carbon emissions,” said Ken Irvin, a partner in Sidley Austin’s global energy and infrastructure practice. “If carbon stops being considered to endanger human welfare, that might take away momentum from nuclear.”

The proposed rule from the EPA will go through a public comment period and inter-agency review. It is likely to face legal challenges from environmental activists.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

EPA plans to ignore science, stop regulating greenhouse gases Read More »

the-case-for-memes-as-a-new-form-of-comics

The case for memes as a new form of comics


Both comics and memes rely on the same interplay of visual and verbal elements for their humor.

Credit: Jennifer Ouellette via imgflip

It’s undeniable that the rise of the Internet had a profound impact on cartooning as a profession, giving cartoonists both new tools and a new publishing and/or distribution medium. Online culture also spawned viral memes in the late 1990s. Michelle Ann Abate, an English professor at The Ohio State University, argues, in a paper published in INKS: The Journal of the Comics Studies Society, that memes—specifically, image macros—represent a new type of digital comic, right down to the cognitive and creative ways in which they operate.

“One of my areas of specialty has been graphic novels and comics,” Abate told Ars. “I’ve published multiple books on various aspects of comics history and various titles: everything from Charles Schulz’s Peanuts to The Far Side, to Little Lulu to Ziggy to The Family Circus. So I’ve been working on comics as part of the genres and texts and time periods that I look at for many years now.”

Her most recent book is 2024’s Singular Sensations: A Cultural History of One-Panel Comics in the United States, which Abate was researching when the COVID-19 pandemic hit in 2020. “I was reading a lot of single panel comics and sharing them with friends during the pandemic, and memes were something we were always sharing, too,” Abate said. “It occurred to me one day that there isn’t a whole lot of difference between the single panel comics I’m sharing and the memes. In terms of how they function, how they operate, the connection of the verbal and the visual, there’s more continuity than there is difference.”

So Abate decided to approach the question more systematically. Evolutionary biologist Richard Dawkins coined the word “meme” in his 1976 popular science book, The Selfish Gene, well before the advent of the Internet age. For Dawkins, it described a “unit of cultural transmission, or a unit of information”: ideas, catchphrases, catchy tunes, fashions, even arch building.

distraught woman pointing a finger and yelling, facing an image of a confused cat in front of a salad

Credit: Jennifer Ouellette via imgflip

In a 21st century context, “meme” refers to a piece of online content that spikes in popularity and gets passed from user to user, i.e., going viral. These can be single images remixed with tailored text, such as “Distracted Boyfriend,” “This Is Fine,” or “Batman Slapping Robin.” Or they can feature multiple panels, like “American Chopper.” Furthermore, “Memes can also be a gesture, they can be an activity, they can be a video like the Wednesday dance or the ice bucket challenge,” said Abate. “It’s become such a part of our lexicon that it’s hard to imagine a world without memes at this point.”

For Abate, Internet memes are clearly related to sequential art like comics, representing a new stage of evolution in the genre. In both cases, the visual and verbal elements work in tandem to produce the humor.

Granted, comic artists usually create both the image and the text, whereas memes adapt preexisting visuals with new text. Some might consider this poaching, but Abate points out that cartoonists like Charles Schulz have long used stencil templates (a static prefabricated element) to replicate images, a practice that is also used effectively in, say, Dinosaur Comics. And meme humor depends on people connecting the image to its origin rather than obscuring it. She compares the practice to sampling in music; the end result is still an original piece of art.

In fact, The New Yorker’s hugely popular cartoon caption contest—in which the magazine prints a single-panel drawing with no speech balloons or dialogue boxes and asks readers to supply their own verbal jokes—is basically a meme generator. “It’s seen more as a highbrow thing, crowdsourcing everybody’s wit,” said Abate. “But [the magazine supplies] the template image and then everybody puts in their own text or captions. They’re making memes. If they only published the winner, folks would be disappointed because the fun is seeing all the clever, funny things that people come up with.”

Memes both mirror and modify the comic genre. For instance, the online nature of memes can affect formatting. If there are multiple panels, those panels are usually arranged vertically rather than horizontally since memes are typically read by scrolling down one’s phone—like the “American Chopper” meme:

American Chopper meme with each frame representing a stage in the debate

Credit: Jennifer Ouellette via imgflip

Per Abate, this has the added advantage of forcing the reader to pause briefly to consider the argument and counter-argument, emphasizing that it’s an actual debate rather than two men simply yelling at one another. “If the panels were arranged horizontally and the guys were side by side in each other’s face, installments of ‘American Chopper’ would come across very differently,” she said.

A pad with infinite sheets

Scott McCloud is widely considered the leading theorist when it comes to the art of comics, and his hugely influential 2000 book, Reinventing Comics: The Evolution of an Art Form, explores the boundless potential for digital comics, freed from the constraints of a printed page. He calls this aspect the “infinite canvas,” because cartoonists can now create works of any size or shape, even as tall as a mountain. Memes have endless possibilities of a different kind, per Abate.

“[McCloud] thinks of it very expansively: a single panel could be the size of a city block,” said Abate. “You could never do that with a book because how could you print the book? How could you hold the book? How could you read the book? How could you download the book on your Kindle? But when you’ve got a digital world, it could be a city block and you can explore it with your mouse and your cursor and your track pad and, oh, all the possibilities for storytelling and for the medium that will open up with this infinite canvas. There have been many places and titles where this has played out with digital comics.

“Obviously with a meme, they’re not the size of a city block,” she continued. “So it occurred to me that they are infinite, but almost like you’re peeling sheets off a pad and the pad just has an endless number of sheets. You can just keep redoing it, redo, redo, redo. That’s memes. They get revised and repurposed and re-imagined and redone and recirculated over and over and over again. The template gets used inexhaustibly, which is what makes them fun, what makes them go viral.”

comic frame showing batman slapping robin

Credit: Jennifer Ouellette via imgflip

Just what makes a good meme image? Abate has some thoughts about that, too. “It has to be not just the image, but the ability for the image to be paired with a caption, a text,” she said. “It has to lend itself to some kind of verbal element as well. And it also has to have some elasticity of being specific enough that it’s recognizable, but also being malleable enough that it can be adapted to different forms.”

In other words, a really good meme must be generalizable if it is to last longer than a few weeks. The recent kiss-cam incident at a Coldplay concert is a case in point. When a married tech CEO was caught embracing his company’s “chief people officer,” they quickly realized they were on the Jumbotron, panicked, and hid their faces—which only made it worse. The moment went viral and spawned myriad memes. Even the Phillies mascots got into the spirit, re-enacting the moment at a recent baseball game. But that particular meme might not have long-term staying power.

“It became a meme very quickly and went viral very fast,” said Abate. “I may be proved wrong, but I don’t think the Coldplay moment will be a meme that will be around a year from now. It’s commenting on a particular incident in the culture, and then the clock will tick, and folks will move on. Whereas something like ‘Distracted Boyfriend’ or ‘This is Fine’ has more staying power because it’s not tied to a particular incident or a particular scandal but can be applied to all kinds of political topics, pop culture events, and cultural experiences.”

black man stroking his chin, mouth partly open in surprise

Credit: Sean Carroll via imgflip


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

The case for memes as a new form of comics Read More »

microsoft-is-revamping-windows-11’s-task-manager-so-its-numbers-make-more-sense

Microsoft is revamping Windows 11’s Task Manager so its numbers make more sense

Copilot+ features, and annoying “features”

Microsoft continues to roll out AI features, particularly to PCs that meet the qualifications for the company’s Copilot+ features. These betas enable “agent-powered search” for Intel and AMD Copilot+ PCs, which continue to get most of these features a few weeks or months later than Qualcomm Snapdragon-based Copilot+ PCs. This agent is Microsoft’s latest attempt to improve the dense, labyrinthine Settings app by enabling natural-language search that knows how to respond to queries like “my mouse pointer is too small” or “how to control my PC by voice” (Microsoft’s examples). Like other Copilot+ features, this relies on your PC’s neural processing unit (NPU) to perform all processing locally on-device. Microsoft has also added a tutorial for the “Click to Do” feature that suggests different actions you can perform based on images, text, and other content on your screen.

Finally, Microsoft is tweaking the so-called “Second Chance Out of Box Experience” window (also called “SCOOBE,” pronounced “scooby”), the setup screen that you’ll periodically see on a Windows 11 PC even if you’ve already been using it for months or years. This screen attempts to enroll your PC in Windows Backup, to switch your default browser to Microsoft Edge and its default search engine to Bing, and to import favorites and history into Edge from whatever browser you might have been trying to use before.

If you, like me, experience the SCOOBE screen primarily as a nuisance rather than something “helpful,” it is possible to make it go away. Per our guide to de-cluttering Windows 11, open Settings, go to System, then to Notifications, scroll down, expand the “additional settings” drop-down, and uncheck all three boxes here to get rid of the SCOOBE screen and other irritating reminders.
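For those who prefer scripting the change, the same screen can reportedly be suppressed via the registry. This is an unofficial sketch: the `ScoobeSystemSettingEnabled` value name is the one commonly reported by Windows tweaking guides, is not formally documented by Microsoft, and may change between builds.

```shell
# PowerShell/cmd sketch (run as the affected user): disable the SCOOBE screen.
# The value name below is a commonly reported, undocumented switch; treat it as
# an assumption rather than a supported Microsoft interface.
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\UserProfileEngagement" /v ScoobeSystemSettingEnabled /t REG_DWORD /d 0 /f
```

The Settings toggles described above remain the supported route; the registry value is simply how those toggles are commonly reported to be stored.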

Most of these features are being released simultaneously to the Dev and Beta channels of the Windows Insider program (from least- to most-stable, the four channels are Canary, Dev, Beta, and Release Preview). Features in the Beta channel are usually not far from being released into the public versions of Windows, so non-Insiders can probably expect most of these things to appear on their PCs in the next few weeks. Microsoft is also gearing up to release the Windows 11 25H2 update, this year’s big annual update, which will enable a handful of features that the company is already quietly rolling out to PCs running version 24H2.

Microsoft is revamping Windows 11’s Task Manager so its numbers make more sense Read More »

trump-promised-a-drilling-boom,-but-us-energy-industry-hasn’t-been-interested

Trump promised a drilling boom, but US energy industry hasn’t been interested


Exec: “Liberation Day chaos and tariff antics have harmed the domestic energy industry.”

“We will drill, baby, drill,” President Donald Trump declared at his inauguration on January 20. Echoing the slogan that exemplified his energy policies during the campaign, he made his message clear: more oil and gas, lower prices, greater exports.

Six months into Trump’s second term, his administration has little to show on that score. Output is ticking up, but more slowly than it did under the Biden administration. Pump prices for gasoline have bobbed around where they were in inauguration week. And exports of crude oil in the four months through April trailed those in the same period last year.

The White House is discovering, perhaps the hard way, that energy markets aren’t easily managed from the Oval Office—even as it moves to roll back regulations on the oil and gas sector, offers up more public lands for drilling at reduced royalty rates, and axes Biden-era incentives for wind and solar.

“The industry is going to do what the industry is going to do,” said Jenny Rowland-Shea, director for public lands at the Center for American Progress, a progressive policy think tank.

That’s because the price of oil, the world’s most-traded commodity, is more responsive to global demand and supply dynamics than to domestic policy and posturing.

The market is flush with supplies at the moment, as the Saudi Arabia-led cartel of oil-producing nations known as OPEC+ allows more barrels to flow while China, the world’s top oil consumer, curbs its consumption. Within the US, a boom in energy demand driven by rapid electrification and AI-serving data centers is boosting power costs for homes and businesses, yet fossil fuel producers are not rushing to ramp up drilling.

There is one key indicator of drilling levels that the industry has watched closely for more than 80 years: a weekly census of active oil and gas rigs published by Baker Hughes. When Trump came into office on January 20, the US rig count was 580. As of last week, the most recent figure available, it was down to 542—hovering just above a four-year low reached earlier in the month.

The most glaring factor behind this stagnant rig count is the current level of crude oil prices. Take the US benchmark grade: West Texas Intermediate crude. Its prices were near $66 a barrel on July 28, after hitting a four-year low of $62 in May. The break-even level for drilling new wells is somewhere close to $60 per barrel, according to oil and gas experts.
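The arithmetic behind that stagnation is simple enough to sketch. This is an illustrative back-of-the-envelope calculation using the figures cited above, not a real drilling-economics model:

```python
# Rough per-barrel margin on a new well, using the prices cited in the article.
BREAKEVEN = 60.0  # approximate break-even price for drilling a new well, $/barrel

def margin_per_barrel(wti_price: float) -> float:
    """Gross margin over break-even; tariff-driven equipment costs not included."""
    return wti_price - BREAKEVEN

print(margin_per_barrel(66.0))  # 6.0 -- late-July WTI, a thin cushion
print(margin_per_barrel(62.0))  # 2.0 -- at May's four-year low, almost nothing
```

A $6 cushion per barrel evaporates quickly once tariff-inflated steel and equipment costs are layered on top, which is the squeeze the next paragraph describes.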

That’s before you account for the fallout of elevated tariffs on steel and other imports for the many companies that get their pipes and drilling equipment from overseas, said Robert Rapier, editor-in-chief of Shale Magazine, who has two decades of experience as a chemical engineer.

The Federal Reserve Bank of Dallas’ quarterly survey of over 130 oil and gas producers based in Texas, Louisiana, and New Mexico, conducted in June, suggests the industry’s outlook is pessimistic. Nearly half of the 38 firms that responded to this question saw their firms drilling fewer wells this year than they had earlier expected.

Survey participants could also submit comments. One executive from an exploration and production (E&P) company said, “It’s hard to imagine how much worse policies and DC rhetoric could have been for US E&P companies.” Another executive said, “The Liberation Day chaos and tariff antics have harmed the domestic energy industry. Drill, baby, drill will not happen with this level of volatility.”

Roughly one in three survey respondents chalked up the expectations for fewer wells to higher tariffs on steel imports. And three in four said tariffs raised the cost of drilling and completing new wells.

“They’re getting more places to drill and they’re getting some lower royalties, but they’re also getting these tariffs that they don’t want,” Rapier said. “And the bottom line is their profits are going to suffer.”

Earlier this month, ExxonMobil estimated that its profit in the April-June quarter would be roughly $1.5 billion lower than in the previous three months because of weaker oil and gas prices. And over in Europe, BP, Shell, and TotalEnergies issued similar warnings to investors about hits to their respective profits.

These warnings come even as Trump has installed friendly faces to regulate the oil and gas sector, including at the Department of Energy, the Environmental Protection Agency, and the Department of the Interior, the latter of which manages federal lands and is gearing up to auction more oil and gas leases on those lands.

“There’s a lot of enthusiasm for a window of opportunity to make investments. But there’s also a lot of caution about wanting to make sure that if there’s regulatory reforms, they’re going to stick,” said Kevin Book, managing director of research at ClearView Energy Partners, which produces analyses for energy companies and investors.

The recently enacted One Big Beautiful Bill Act contains provisions requiring four onshore and two offshore lease sales every year, lowering the minimum royalty rate to 12.5 percent from 16.67 percent, and bringing back speculative leasing—when lands that don’t invite enough bids are leased for less money—that was stopped in 2022.
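The royalty change is easy to put in dollar terms. A back-of-the-envelope sketch, using a made-up $100 million of annual production value (an assumption for illustration, not a figure from the bill):

```python
# Federal royalty owed at the old vs. new minimum rate on a hypothetical lease.
OLD_RATE = 0.1667  # 16.67 percent minimum royalty
NEW_RATE = 0.125   # 12.5 percent minimum under the new law

production_value = 100_000_000  # hypothetical gross value of a year's output, USD

old_royalty = production_value * OLD_RATE
new_royalty = production_value * NEW_RATE
drop = old_royalty - new_royalty

print(f"Royalties fall by ${drop:,.0f} per year")        # $4,170,000
print(f"States/localities lose about ${drop / 2:,.0f}")  # roughly half of federal royalties flow back to them
```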

“Pro-energy policies play a critical role in strengthening domestic production,” said a spokesperson for the American Petroleum Institute, the top US oil and gas industry group. “The new tax legislation unlocks opportunities for safe, responsible development in critical resource basins to deliver the affordable, reliable fuel Americans rely on.”

Because about half of the federal royalties end up with the states and localities where the drilling occurs, “budgets in these oil and gas communities are going to be hit hard,” Rowland-Shea of American Progress said. Meanwhile, she said, drilling on public lands can pollute the air, raise noise levels, cause spills or leaks, and restrict movement for both people and wildlife.

Earlier this year, Congress killed an EPA rule finalized in November that would have charged oil and gas companies for flaring excess methane from their operations.

“Folks in the Trump camp have long said that the Biden administration was killing drilling by enforcing these regulations on speculative leasing and reining in methane pollution,” said Rowland-Shea. “And yet under Biden, we saw the highest production of oil and gas in history.”

In fact, the top three fossil fuel producers collectively earned less during Trump’s first term than they did in either of President Barack Obama’s terms or under President Joe Biden. “It’s an irony that when Democrats are in there and they’re putting in policies to shift away from oil and gas, which causes the price to go up, that is more profitable for the oil and gas industry,” said Rapier.

That doesn’t mean, of course, that the Trump administration’s actions won’t have long-lasting climate implications. Even though six months may be a significant amount of time in political accounting, investment decisions in the energy sector are made over longer horizons, ClearView’s Book said. As long as the planned lease sales take place, oil companies can snap up and sit on public lands until they see more favorable conditions for drilling.


What could pad the demand for oil and gas is how the One Big Beautiful Bill Act will withdraw or dilute the Inflation Reduction Act’s tax incentives and subsidies for renewable energy sources. “With the kneecapping of wind and solar, that’s going to put a lot more pressure on fossil fuels to fill that gap,” Rowland-Shea said.

However, the economics of solar and wind are increasingly too attractive to ignore. With electricity demand exceeding expectations, Book said, “any president looking ahead at end-user prices and power supply might revisit or take a flexible position if they find themselves facing shortage.”

A recent United Nations report found that “solar and wind are now almost always the least expensive—and the fastest—option for new electricity generation.” That is why Texas, deemed the oil capital of the world, produces more wind power than any other state and also led the nation in new solar capacity in the last two years.

Renewables like wind and solar, said Rowland-Shea, are “a truly abundant and American source of energy.”

This story originally appeared on Inside Climate News.


Trump promised a drilling boom, but US energy industry hasn’t been interested Read More »

mistral’s-new-“environmental-audit”-shows-how-much-ai-is-hurting-the-planet

Mistral’s new “environmental audit” shows how much AI is hurting the planet

Despite concerns over the environmental impacts of AI models, it’s surprisingly hard to find precise, reliable data on the CO2 emissions and water use for many major large language models. French model-maker Mistral is seeking to fix that this week, releasing details from what it calls a first-of-its-kind environmental audit “to quantify the environmental impacts of our LLMs.”

The results, which are broadly in line with estimates from previous scholarly work, suggest the environmental harm of any single AI query is relatively small compared to many other common Internet tasks. But with billions of AI prompts taxing GPUs every year, even those small individual impacts can lead to significant environmental effects in aggregate.

Is AI really destroying the planet?

To generate a life-cycle analysis of its “Large 2” model, covering just under 18 months of the model’s existence, Mistral partnered with sustainability consultancy Carbone 4 and the French Agency for Ecological Transition. Following the French government’s Frugal AI guidelines for measuring overall environmental impact, Mistral says its peer-reviewed study looked at three categories: greenhouse gas (i.e., CO2) emissions, water consumption, and materials consumption (i.e., “the depletion of non-renewable resources,” mostly through wear and tear on AI server GPUs). Mistral’s audit found that the vast majority of CO2 emissions and water consumption (85.5 percent and 91 percent, respectively) occurred during model training and inference, rather than from sources like data center construction and energy used by end-user equipment.

Through its audit, Mistral found that the marginal “inference time” environmental impact of a single average prompt (generating 400 tokens’ worth of text, or about a page’s worth) was relatively minimal: just 1.14 grams of CO2 emitted and 45 milliliters of water consumed. Through its first 18 months of operation, though, the combination of model training and running millions (if not billions) of those prompts led to a significant aggregate impact: 20.4 ktons of CO2 emissions (comparable to 4,500 average internal combustion-engine passenger vehicles operating for a year, according to the Environmental Protection Agency) and the evaporation of 281,000 cubic meters of water (enough to fill about 112 Olympic-sized swimming pools).
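Mistral’s comparisons are straightforward to sanity-check against its raw aggregates. A quick sketch, assuming the standard 2,500 cubic-meter minimum volume for an Olympic pool:

```python
# Cross-checking the audit's 18-month aggregates against its own comparisons.
TOTAL_CO2_TONNES = 20_400   # 20.4 ktons of CO2
TOTAL_WATER_M3 = 281_000    # cubic meters of water evaporated
OLYMPIC_POOL_M3 = 2_500     # standard minimum Olympic pool volume (assumption)
EPA_VEHICLES = 4_500        # passenger-vehicle equivalence cited in the audit

pools = TOTAL_WATER_M3 / OLYMPIC_POOL_M3
tonnes_per_vehicle = TOTAL_CO2_TONNES / EPA_VEHICLES

print(f"{pools:.0f} Olympic pools")                    # 112 -- matches the audit
print(f"{tonnes_per_vehicle:.2f} t CO2/vehicle-year")  # 4.53 -- near the EPA's ~4.6 t average
```

Both of the audit’s headline equivalences check out against its reported totals.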

The marginal impact of a single Mistral LLM query compared to some other common activities.

The marginal impact of a single Mistral LLM query compared to some other common activities. Credit: Mistral

Comparing Mistral’s numbers to those of other common Internet tasks helps put the model’s environmental impact in context. Mistral points out, for instance, that the incremental CO2 emissions from one of its average LLM queries are equivalent to those of watching 10 seconds of a streaming show in the US (or 55 seconds of the same show in France, where the energy grid is notably cleaner). It’s also equivalent to sitting on a Zoom call for anywhere from four to 27 seconds, according to numbers from the Mozilla Foundation. And spending 10 minutes writing an email that’s read fully by one of its 100 recipients emits as much CO2 as 22.8 Mistral prompts, according to numbers from Carbon Literacy.
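Those equivalences can be turned back into absolute grams of CO2 using the per-prompt figure alone; the per-second streaming rates below are derived from Mistral’s comparisons rather than reported directly, so treat them as rough:

```python
# Deriving absolute numbers from the audit's relative comparisons.
PROMPT_CO2_G = 1.14  # grams of CO2 per average 400-token prompt

email_g = 22.8 * PROMPT_CO2_G           # the 10-minute, 100-recipient email
us_stream_g_per_s = PROMPT_CO2_G / 10   # 10 s of US streaming == one prompt
fr_stream_g_per_s = PROMPT_CO2_G / 55   # 55 s on France's cleaner grid

print(f"Email: {email_g:.1f} g CO2")  # 26.0 g
print(f"US grid: {us_stream_g_per_s / fr_stream_g_per_s:.1f}x more carbon per second of streaming")  # 5.5x
```

The 5.5x ratio falls directly out of the 10-second vs. 55-second comparison, reflecting how much of streaming’s footprint depends on the local grid mix.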

Mistral’s new “environmental audit” shows how much AI is hurting the planet Read More »