Author name: Beth Washington


Child dies of horrifying measles complication in Los Angeles

A child in Los Angeles has died of a measles-related brain disorder stemming from an infection in infancy, the Los Angeles County health department reported Thursday.

Specifically, the child died of subacute sclerosing panencephalitis (SSPE), a rare but always fatal complication that strikes years after an initial measles infection. The health department’s announcement offered few details about the child, not even disclosing the child’s age, but said that the child had contracted the virus before being old enough to be vaccinated against measles. The first of two recommended doses of the measles, mumps, and rubella (MMR) vaccine is given between 12 and 15 months of age.

“This case is a painful reminder of how dangerous measles can be, especially for our most vulnerable community members,” Muntu Davis, a Los Angeles County health officer, said in a statement. “Infants too young to be vaccinated rely on all of us to help protect them through community immunity. Vaccination is not just about protecting yourself—it’s about protecting your family, your neighbors, and especially children who are too young to be vaccinated.”

SSPE is caused by a persistent measles infection in the central nervous system. Children infected with the virus may go through the standard disease progression—flu-like symptoms, high fever, the telltale rash—and then appear to fully recover. But, for a small few, the virus remains, and SSPE emerges years later, often seven to 10 years after the initial infection.

The Los Angeles health department noted that SSPE generally affects about 1 in 10,000 people with measles, but the risk may be much higher—about 1 in 600—for those who get measles as infants, such as the child who recently died.

With widespread vaccination, which led to measles being declared eliminated from the US in 2000, SSPE has virtually disappeared in this country. However, with vaccination rates slipping and anti-vaccine misinformation and sentiment gripping the country, health experts fear seeing more of these devastating cases. Already, the year’s US measles case count is at a 33-year high, and two other children, as well as an adult, have died from the acute infection.


Developers joke about “coding like cavemen” as AI service suffers major outage

Growing dependency on AI coding tools

The speed at which news of the outage spread shows how deeply embedded AI coding assistants have already become in modern software development. Claude Code, announced in February and widely launched in May, is Anthropic’s terminal-based coding agent that can perform multi-step coding tasks across an existing code base.

The tool competes with OpenAI’s Codex feature, a coding agent that generates production-ready code in isolated containers, Google’s Gemini CLI, Microsoft’s GitHub Copilot, which itself can use Claude models for code, and Cursor, a popular AI-powered IDE built on VS Code that also integrates multiple AI models, including Claude.

During today’s outage, some developers turned to alternative solutions. “Z.AI works fine. Qwen works fine. Glad I switched,” posted one user on Hacker News. Others joked about reverting to older methods, with one suggesting the “pseudo-LLM experience” could be achieved with a Python package that imports code directly from Stack Overflow.

While AI coding assistants have accelerated development for some users, they’ve also caused problems for others who rely on them too heavily. The emerging practice of so-called “vibe coding”—using natural language to generate and execute code through AI models without fully understanding the underlying operations—has led to catastrophic failures.

In recent incidents, Google’s Gemini CLI destroyed user files while attempting to reorganize them, and Replit’s AI coding service deleted a production database despite explicit instructions not to modify code. These failures occurred when the AI models confabulated successful operations and built subsequent actions on false premises, highlighting the risks of depending on AI assistants that can misinterpret file structures or fabricate data to hide their errors.

Wednesday’s outage served as a reminder that as dependency on AI grows, even minor service disruptions can become major events that affect an entire profession. But perhaps that could be a good thing if it’s an excuse to take a break from a stressful workload. As one commenter joked, it might be “time to go outside and touch some grass again.”


Childhood and Education #14: The War On Education

The purported main purpose of school, and even of childhood, is educating children.

Many people are actively opposed to this idea.

Either they have other priorities that matter more, or sometimes they outright think that your child learning things is bad and you should feel bad for wanting that.

Some even say it openly. They actively and openly want to stop your child from learning things, and want to put your child into settings where they will not learn things. And they say that this is good.

Or they simply assert that the primary point of education is as a positional good, where what matters is your relative standing. And then they pretend this doesn’t imply they both should and are going around preventing children from learning.

In other places, we simply epically fail at education and don’t seem to care. Or education ‘experts’ claim that things that obviously work don’t work, or that things that obviously don’t work do work.

Consider this section some combination of a peek into this alternative universe of thought and the fun of multiple meta levels of shooting fish in a barrel?

I present, HT to Pamela Hobart who makes many of the same points: Freddie DeBoer writes the long ‘Education Doesn’t Work 3.0’ which is ‘a comprehensive argument that education cannot close academic gaps.’

What? Was it supposed to do that? Would you want it to?

Very obviously the only way to close academic gaps fully is to go all Handicapper General and ban bright kids from getting educations. Thus, The War on Education.

Freddie starts off saying we can’t admit some kids aren’t smart, and some kids will naturally do better at school than others, to which I say you just admitted it, and I’m happy to admit it, and everyone I talk to is willing to admit it, so who is this mysterious we. It is, presumably, a Certain Type of Guy who is an ‘education expert’ of some kind and presumably has a maximum of one child who has gone to school.

Freddie DeBoer: Our educational debates are largely useless because most people engaged in those debates assume out of hand that, absent unusual circumstances like severe neglect or abuse or the presence of developmental or cognitive disabilities, any student can be taught to any level of academic success, and any failure to induce academic success in students is the result of some sort of unfortunate error.

Well, it depends.

If you mean ‘those debates’ as in those between those ‘education experts’? Then perhaps yes, they make these types of absurdly stupid assumptions. If you mean ‘debates among actual regular humans,’ then no. Obviously not. One would question whether Freddie has met such people.

Education can raise the absolute performance of most students modestly, but it almost never meaningfully reshuffles the relative distribution of ability and achievement.

Um, again, what exactly were we trying to do? Educate the children? Or make sure we don’t educate the children? Half and half?

I mean, I guess Freddie then does a job of repeatedly exposing ‘the contradictions,’ as it were, in the entire equality project, but the barrel already has a lot of bullet holes, the water is leaking and the fish are dead.

So we get more fun lines like this:

We have spent an immense amount of effort, manpower, time, and treasure on forcing students to meet procrustean academic standards, despite the fact that we have overwhelming evidence that their relative performance is largely fixed.

Yes, obviously, also yes the extra money is mostly being wasted but even if it wasn’t the whole point was presumably to (drum roll) educate the children.

Why in the world would we spend tons of resources and time on relative education, which by definition is zero sum and a red queen’s race? That doesn’t make sense. There’s a fixed amount of relative education.

At the end of this essay, I will argue that education is important, does matter, and is worth funding – but that what’s now assumed to be its primary purpose, moving students around in quantitative educational metrics, is actually what education does worst.

Who thought that was its primary purpose, either the metrics or the thing itself?

Meanwhile, the reason this was brought to my attention is that his ‘absolute learning has value’ t-shirt is raising questions supposedly answered by the shirt:

What This Essay Does Not Argue:

  • That absolute learning (that is, learning as measured against a standard or benchmark or criterion) has no value; rather, relative learning is practically and morally dominant in these discussions because only relative learning (sometimes discussed in terms of educational mobility) can better one’s economic fortunes, and it is that potential that underlies our entire modern educational debates and the reason for obsession with achievement gaps.

The next section is ‘I Assure You, You Do Care About Relative Learning.’

I assure him that I don’t.

His first argument is that relative learning indicates absolute learning. That is true but saying this means therefore you care about relative learning (checkmate, liberals?) is not how logic or words work. Caring about the territory does not mean you care about the (not very accurate) map.

Second, while I am happy to concede that absolute learning happens all the time, this should not be mistaken for saying that absolute learning is easily achieved, reliable, or consistent.

I don’t understand why this is supposed to be a relevant argument here. It seems like he’s saying I care about [X], but actually [X] is hard, so instead I care about [Y]?

Most importantly, though, is a simple reality: the consequences of education are derived from relative performance, not absolute.

In the vast majority of scenarios where education is relevant, applicants of whatever type are being evaluated relative to peers.

There’s saying the quiet part out loud, and then there’s this.

The purpose of education is… to do well on applications?

He concedes that one might learn to drive and then use this skill to usefully operate a moving vehicle, but says this type of education is rare – that most education has no actual use whatsoever, other than as a positional good to grab a larger share of stuff.

Then he argues that schools are ‘not guilty’ because improving educational outcomes is impossible anyway. Transferring does nothing. Charter schools don’t work. Interventions don’t work (literally “Nothing ‘Works’”), full null hypothesis.

All right, so now we have a section ‘So What Should We Do?’

Very obviously, if you actually believed all that, you would want to dramatically reduce spending, both in money and in the time of children, on school, since school is almost entirely about relative position. Spending more on school, trying to achieve more or improve performance, in this model, is a defection against everyone else. So we should ban attempts to educate children, beyond some basic skills, and focus on practical stuff like learning to drive. Completely reorient childhood.

So having preregistered that, let’s see what he recommends.

  1. Improve air quality. Okay, sure, that is one of the somethings that work, although again I don’t understand why he thinks improving performance is good.

  2. Lower our educational standards. Don’t make kids learn (for example) abstract math. Yes, that makes perfect sense for Freddie, if the learning is useless, you shouldn’t require it. Again, if he is right then we should go farther, and ban such learning. Why are we letting kids engage in a zero sum competition?

  3. Soft tracking. Again, good idea, not sure what it has to do with the post.

  4. Invest in a robust safety net. Maybe? That’s a different department.

Then he tries to pivot back to ‘actually education matters.’

Education creates the conditions for children and young adults to discover ideas, literature, science, and art that might otherwise remain inaccessible.

It provides the structured time and social environment where curiosity can blossom, where students can learn how to think about problems that don’t have easy answers, and where they can build lasting relationships with peers and mentors.

The point of school, then, is not to guarantee that every child climbs into the top decile of performance but to offer each student the chance to cultivate knowledge, resilience, and imagination in ways that enrich their lives.

So absolute learning of something does matter after all, then. I mean, this description does not match what I know about actual schools, nor would I design anything like a current school if those were the goals. And he doesn’t seem interested in a redesign or asking how to maximize the things that he thinks matter. But hey.

Meanwhile, here’s the top comment, so yes things do get pretty insane:

James K: This is what I pay you for, Freddie, thank you for being so clear-headed about this topic.

I’ve been teaching for 16 years now and it boggles my mind that the band teacher can literally get on stage and say “We have the beginner, intermediate, and advanced band for you” and of course the baseball team can be divided into Varsity and JV, but I am not allowed to say that some kids are not smart enough to handle my AP classes because this means I don’t BELIEVE IN THEM or am supporting TRACKING (always said in the tones people reserve for the words ‘eugenics’ or ‘segregation’).

I mean, yes. It means you support tracking because tracking is good. It means you don’t believe in them in the sense that you don’t believe in things that aren’t real.

So no, in that sense Freddie isn’t arguing with a strawman. Which means that the entire system of education is being run by people who are at war with education.

A Tennessee teen is suing his school for ‘compensatory education’ after graduating with a 3.4 GPA despite being unable to read or even spell his own name, and the school system has the audacity to defend against that lawsuit.

But the school took no action, the suit says, other than giving him 24 hours to complete his assignments.

But even this “solution” was a problem. Because when William was at home with his schoolwork, he relied on AI programs like ChatGPT and Grammarly to complete his assignments for him, according to the judge who ruled on his suit last week. As a result, William continued to achieve high marks on his classwork throughout his entire four years of high school, even though teachers knew he was illiterate.

If you can’t read, using ChatGPT is kind of crazy – you’re presumably scanning or copy pasting in text you don’t understand, then copying out text you don’t understand and hoping for the best.

Scott Alexander asks: What happened to NAEP Scores? He says they are ‘not good’:

Well, they’re not great, obviously, to the extent you can trust the scores to map to Reality, but they are still above the start of the graph in 1998, and we’re talking about a seven-point difference. That’s less than a fifth of a standard deviation. This is nothing; if anything it shows that Covid didn’t change things much?

The comments section of Scott’s post is full of despair about classroom conditions getting worse, shifts to teaching strategies that don’t work (see the Mississippi reforms but in reverse), discipline collapsing and teachers having no tools if kids don’t play along, many teachers quitting, chronic absenteeism happening and being accepted and tolerated, and many families not prioritizing education, on top of the continuing trends involving smartphones. That’s on top of the obvious ‘Covid took away a lot of schooling’ concern that Scott starts with.

What seems more meaningful than the overall smaller drop is the widening gap between low and high performers, another trend predating Covid. Scott has several graphs showing this, and I am convinced this is real, with a variety of causes. If you are properly equipped and motivated, you can avoid the pitfalls described above, and you have access to the entire web and world and a lot of new resources, now even including AI. Whereas when the bottom falls out, the bottom falls out.

Meanwhile in 12th grade, nearly half scored below ‘the basic level,’ which involves things like ‘using percentages to solve real-world problems,’ and reading scores hit a new low. What we are doing, including adding funding, is clearly not working. Or rather, it is working hard, only not at the goal of children learning academic skills.

The War on Algebra in particular is still perhaps the craziest thing I’ve ever seen, actively preventing children from learning math out of spite. In the sense that it is both very clearly purely destructive and evil, and also horribly unpopular across the board, and also high salience to a lot of voters. Yes, I do know what their arguments are for doing it, and they very much do not make it any better.

And yet it still happened, and it happens across the board, straight up Handicapper General style.

Ro Khanna: It is absurd that Palo Alto School district just voted to remove honors biology for all students & already removed honors English. They call it de-laning. I call it an assault on excellence. I took many honors classes at Council Rock High in PA.

Autumn Looijen: I ran the campaign to bring algebra back to SF’s middle schools.

It was the most popular thing I ever worked on. Voters don’t want to take away opportunities for kids who can’t afford private school.

If anyone wants to put this on the ballot in Palo Alto, happy to advise.

Maud Maron: We are trying to do a version of this in NYC! Would love to have you speak to NYC parents who want to have algebra & geometry options in Middle School.

Meanwhile in San Francisco, the war rages on. It seems the city has not yet been retaken by sanity on all fronts, although there are some promising signs. All of this seems like it has to be beyond unpopular, in a ‘cause families to move out’ way, yet here we were again not too long ago (it got better, for now):

Garry Tan: San Francisco schools is trying its absolute hardest to make sure all middle income families who could move out of the city do so right away.

“Grading for Equity” is going to be a real disaster and I guess this is a boon for SF private schools and Burlingame housing prices.

For education bureaucrats who ruin our public schools with the most unfair and anti-merit policies: BUSINESS IS BOOMING.

Someone needs to investigate the Schools of Education that spawn these policies because it is a real danger to public schools everywhere.

Basically this scam is Idiocracy in real life.

Mike Solana: the san francisco board of education must immediately fire the superintendent. if they do not, they must all be removed from power.

I don’t get how this falls under ‘you can just do things’ but it seems it did, at least until people sounded the alarm?

John Trasvina (The Voice of SF): Without seeking approval of the San Francisco Board of Education, Superintendent of Schools Maria Su plans to unveil a new Grading for Equity plan on Tuesday that will go into effect this fall at 14 high schools and cover over 10,000 students.

The school district is already negotiating with an outside consultant to train teachers in August in a system that awards a passing C grade to as low as a score of 41 on a 100-point exam.

Were it not for an intrepid school board member, the drastic change in grading with implications for college admissions and career readiness would have gone unnoticed and unexplained. It is buried in a three-word phrase on the last page of a PowerPoint presentation embedded in the school board meeting’s 25-page agenda.

Grading for Equity eliminates homework or weekly tests from being counted in a student’s final semester grade. All that matters is how the student scores on a final examination, which can be taken multiple times.

Under the San Leandro Unified School District’s grading for equity system touted by the San Francisco Unified School District and its consultant, a student with a score as low as 80 can attain an A and as low as 21 can pass with a D.

Derek Thompson: New SF public school plan would

– eliminate homework and weekly tests from counting toward semester grade

– allow students to take the final exam multiple times

– convert all B grades into As, and all Fs into Cs

It’s hard to see the difference between this policy and what you’d get if a bunch of 10yos locked the teachers in a closet and rewrote the rules.

Karen Vaites: More media attention here, please! 🚩🚩🚩

Jared Walczak: The sad irony is that Grading for Equity is virtually the opposite of Teaching for Equity, because under this system, the only kids who might get a real education are those from families that take more into their own hands, bringing higher expectations and resources to bear.

So, effectively no grading, then. You can do whatever you want all semester, no homework (so perhaps there’s some upside here?), phone out in class every day, whatever, all you have to do to pass is get 21% on an exam you can take multiple times. That was going to be it.
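To make the scale compression concrete, here is a minimal sketch in Python comparing a conventional 90/80/70/60 grading scale against the cutoffs reported above (an A at 80, a passing C at 41, a D at 21). The B cutoff was not reported, so the 61 used below is purely an illustrative assumption, not a figure from the district.

```python
# Illustrative comparison of a conventional grading scale with the
# "grading for equity" cutoffs reported above (A at 80, passing C at 41, D at 21).
# The B cutoff of 61 is an assumption for illustration; it was not reported.

def traditional_grade(score: float) -> str:
    """Conventional 90/80/70/60 letter-grade cutoffs."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return letter
    return "F"

def equity_grade(score: float) -> str:
    """Cutoffs described for the San Leandro-style system (B cutoff assumed)."""
    for cutoff, letter in [(80, "A"), (61, "B"), (41, "C"), (21, "D")]:
        if score >= cutoff:
            return letter
    return "F"

for score in (95, 80, 61, 41, 21):
    print(f"{score}: {traditional_grade(score)} -> {equity_grade(score)}")
# Output: 95: A -> A, 80: B -> A, 61: D -> B, 41: F -> C, 21: F -> D
```

Run it and you get exactly the conversion Derek Thompson describes: Bs become As, and scores that used to be clear Fs become passing grades.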

And Maria Su could just do this on her own? What?

It turns out that enough backlash does matter, and this combination of graft and civilizational suicide took the loss on this one.

SF Standard: Just in: SFUSD is delaying a planned “grading for equity” initiative after the proposal sparked furious backlash.

Kelsey Piper: SF superintendent backed off immediately after the flood of negative feedback. This strikes me as a pretty dramatic change from how previous standards erosions were received, and a really good sign.

Most politicians want to make their constituents happy, and often their information environment is kind of terrible for that. It’s worth advocating for the stuff that matters to you. Don’t be an asshole, but be clear and outspoken.

San Francisco’s turnaround happened very fast. The Bay Area could become one of the best-governed parts of the United States inside a few years if we work to make it happen.

Well, maybe. They say they are ‘delaying’ the initiative. Which means they’re presumably going to keep paying the consultants, and they are going to try again to destroy all the incentives and measurements involved in education.

Fighting against algebra and grading is bad enough, but reading?

As in, people who want to ban teaching kids to read until age 6. No. Seriously.

Because they’re ‘not ready.’

Erik Hoel: 62% of American kids have a tablet at age 6.

They spend 3.5 hours every day on screens (increasingly, TikTok).

And because our school system waits so long to teach reading, they never get a chance to become readers.

“Education experts” have been saying for decades that we must wait to start teaching reading until 6-7 for neuroscientific reasons. These reasons appear, as far as I can tell, to be basically made up. Consider this recent article, which quotes a bunch of experts on this.

E.g., Maryanne Wolf says that brain myelination needs to reach a certain stage, and that teaching reading prior to 5 is “really wrong” and that she would ban teaching reading prior to 6 nationwide if she could.

Siberian Fox: what in the fuck

I was playing Pokémon before 6 if I recall correctly, if not, other games that require reading

good to know that there are people in the US that think this should be super illegal

In a good school, a 1st grader will be reading quite a lot, actually.

I’m not going to bother quoting more of the evidence because this is so utterly Obvious Nonsense as to be beyond belief. Frankly, if I had a child that was 6 years old and couldn’t read I would not be thinking ‘good it is finally time,’ I would be debating exactly how much to panic and reassuring my wife not to panic far more.

The 3 year old I am currently supervising can somewhat read. I could read before my first memory (which was at 5 and involves reading books) so I don’t know exactly when it happened, and I learned without anyone trying to teach me.

I am going maximum opposite. There is no higher priority than teaching a child to read as early as they can handle it, and every actual parent knows this. There is a reason why the advice is constantly read to them, read with them, push reading. Reading enables everything else. The entire ‘education’ establishment really does need to flat out join the delenda est club.

What about forcing smart kids to tutor their classmates, in the name of them developing empathy for the people you force them to teach? As long as they pass a certain threshold of knowledge, the rest of their childhood, and indeed life, belongs to the people, and they’re a horrible person if they think otherwise, and the purpose of school is to teach them this?

No, seriously, this is something quite a lot of people, especially those in education, actually believe.

Setting all ethical or moral considerations aside, and even assuming that is the goal, what in the world makes you think this is going to work in the direction you want?

Tracing Woods: If a child is in a class, they should be there because it is the best environment to help them learn, not so they can act as an unpaid tutor to provide vague “peer effects” to others. A system that abuses children as resources instead of teaching them to their level is unethical.

Joe McReynolds (3.4m views): That’s what life *is*, though! If you’re unusually smart/talented, your primary purpose in life is to help lift up others who weren’t born/raised as lucky as you were. Learning that sooner rather than later is important for developing a sense of altruism and communitarianism.

“With great power comes great responsibility” is a simple, true statement. To the extent being born with unusual (intellectual) power is an “innate characteristic,” that good luck means that you owe the universe hard work. You’re born with a debt that takes a lifetime to repay.

There it is, very explicitly. These people actually believe this. If you’re talented, your purpose in life is to be enslaved, to be forced to help others. Your life does not belong to you. Your labor does not belong to you. Your time does not belong to you. Who cares whether that benefits you? You belong to the people, from each according to their ability, at the barrel of a gun.

Kelsey Piper: If I were actively trying to extinguish my children’s sense of altruism, compassion and responsibility I can’t think of a better plan than forcing them to spend all of their time doing random ‘altruistic’ chores they didn’t choose, and aren’t equipped to succeed at.

If you say to your kid “I’m going to volunteer this weekend to help socialize cats”, they’ll probably come along and they may discover a lifelong love of helping animals! if you force them to spend all their time on it, guess what, they’re gonna hate it.

if you want your children to be people who give generously to their communities and their broader world, be that kind of person yourself and let them witness the ways in which this is part of the good for you and for your community.

SteelBlaidd: Why do they need to go to college to learn to be teachers if kids can be expected to do it in elementary school?

Kelsey Piper: some people out here believe that homeschooling is immoral since you don’t know enough to teach your kids and also that a smart 9 yo can do it.

Ben Hoffman: The important thing is that the 9yo is forced to do it without pay to teach them that being educated means going along with nonsense dramas, which qualifies them to get paid to go along with nonsense dramas when old enough that that’s dramatically appropriate.

Not only that the smart 9yo can do it, but that they should be forced to do it without pay. While the parent is forbidden to do it, because they are unqualified.

Ryan Moulton (QTing Joe above): Everybody is dunking on this, and I get why, but I’m a little more sympathetic. Particularly in lower grades, developing empathy for people different from you or dumber than you is a really important thing to get out of school, comparably important to getting through math faster.

Sarah Constantin: It is super common for kids to openly taunt anyone who’s not as good as them (at anything!) and i do think it’s important for them to learn manners, grace, and sportsmanship…

but it doesn’t deserve as much time in the day as math class.

Gallabytes: I would add that these kinds of assignments don’t necessarily breed empathy it’s pretty easy for it to create contempt instead.

Sarah Constantin: yeahhh.

Gallabytes: feels like people talk about school in far mode as some inscrutable thing.

if I put *you* in a room with people you had 4 sigma on and told you to teach them math Or Else, how would you feel? how would this make you feel about them?

I believe we should treat Joe’s perspective the same way we would treat others who would force people with certain characteristics to labor for no compensation.

See my discussion of Alpha School for extensive previous discussions.

Tracing Woods (reference documents and more details in thread): in 1930, researchers studied ability grouping and concluded you needed to adjust the curriculum to make it work

in 1960, more confidently so

then in 1990, they studied grouping without changing curriculum, concluded it was useless, and advocated to get rid of ability grouping

over time the field got better and better at studying the form of ability grouping that everybody had known was pointless for sixty years while just sorta disregarding the form that kept getting results

I get so mad every time I read this stupid study

the field of education set itself back generations because it kept listening to people who thought ability grouping was “antidemocratic and antiegalitarian” and as such badly wanted it not to work

we had it figured out in 1936 and then we threw it away for kicks.

I am not a fan of the idea of educating children in 2025 primarily via traditional classes. Traditional classes feel like learning a lot more than they actually cause learning.

But I accept that we are going to keep doing this for a while.

Given you are going to have traditionally shaped classes on various subjects, very obviously you want to track their progress and group those children by ability.

Grouping children into classes by ability, as covered in Education #11, has the advantages that it:

  1. Helps almost all children learn more, whether they are behind, ahead or neither. There are some corner cases where kids are ‘on the bubble’ between tracks, or get tracked wrong, but mostly this is opportunity cost, that they missed benefits.

  2. Is universally popular with parents, to the extent that ‘ending tracking’ is the least popular serious policy proposal we have ever seen. As in David Shor says ‘removing advanced classes from schools’ is literally the single most unpopular policy Blue Rose has ever polled (yet there goes Palo Alto doing exactly this.)

  3. Is even popular with the classroom teachers themselves.

Ability grouping, done wisely, so utterly obviously works as to make the alternate hypothesis absurd.

Tracing Woods: will someone struggling with basic arithmetic and someone who knows calculus benefit from the same instruction? no.

would selective schools and the students in them benefit from opening their doors to everyone? they certainly don’t seem to think so!

do athletics orgs advocate grouping young LeBron into his local mixed-ability rec league so he can train and progress? no.

do gifted kids who accelerate learn more advanced material when they’re presented with it? yes.

if people see school as a democratic equalizer where everyone should learn the same things, they find ability grouping doesn’t work (doesn’t accomplish that goal). if people see school as a place where people should learn specific subjects and progress to whatever level they can, they find it does work (does accomplish that goal). and because education research is dominated by the former, onlookers glance at its output and say “huh, the results are mixed. guess we’ll never know!”

there is no substitute for understanding what is going on at a ground level.

The question is how to make it work best, not whether it works. Very obviously, as the next section discusses, it is possible to massively screw it up if you try hard enough.

Jordan Michelson: Why would *teachers’ unions* oppose ability grouping? It makes no sense.

Matthew Yglesias: Ideology.

Karen Vaites: The average American would be shocked by the degree to which K-12 education is ruled by ideology. Beliefs about teaching and how we want learning to work often trump evidence about what does, in fact, work.

Tracing Woods: “Trace, why are you making such a big deal out of something everybody already agrees with?”

Because the people we trust to direct society on this topic at every level oppose it.

And I hope that maybe if I shout enough about that people will really internalize what that means.

Exactly. Everyone agrees we want [X], where [X] is tracking. We keep talking about it because [~X] keeps actually happening.

I presume opposition is mostly ideology. Full stop. They want to prevent the wrong kids learning too much. They are sacrificing the kids on their altar.

Academics and education ‘experts,’ despite the literature and all actual observations and everyone involved saying that tracking helps all kids learn better, keep lamenting that parents want tracking, and work to destroy it, often in the name of ‘equity.’

It is common to see people claim ‘the research’ says that ‘downstreamed’ kids who are grouped at lower ability do worse rather than better as a result. As far as I can tell this is simply not true; these people simply think it ‘should’ be true, and seek out ways to say it anyway.

Tracing Woods: what happens when academics study parental perception of ability grouping?

They lament that parents of students at all levels favor it even though it’s BAD.

Virtually no parents support ending ability grouping.

Parents of kids in both remedial and G/T programs agreed that their kids should be grouped with kids of equal ability

80% of special-ed parents, 90% of parents with kids in remedial courses, and 98% of parents with kids in advanced programs agreed that their kids were helped by it.

This was all very disappointing to the authors.

Why do the authors here, like many other academics and education experts and many school principals who somehow end up actually destroying such programs, say that this is bad, despite everyone involved in the actual schools agreeing it helps all of the students?

Because the ‘educators’ who determine policy (as opposed to the teachers whose job is to actually educate children) consistently have decided that they do not care about the life experiences of families and children or helping children learn.

What they care about, other than money, is preventing learning rather than causing learning. Or, as they call it, ‘equality’ or ‘equity.’ Never mind that this ‘equity’ directly hurts the students who are otherwise being ‘denied’ it, what matters is that they be given ‘opportunity to learn and equality of educational opportunity.’ Educational opportunity shall be destroyed until this is achieved. If that leads to everyone getting a worse education, even the worst off kids, well, that’s not their department’s KPI.

I don’t quite agree with Anton that these people ‘hate you and your children.’ They only hate that you and your children might do better than other children, and want to prevent this from happening. They only hate you if you oppose this goal.

Garry Tan: Ability grouping in school (honors/AP) depends on your frame. If school’s job is to equalize, grouping looks like a fail. If it’s to let kids sprint ahead, it works.

Academia worships equalizing— so the School of Education bureaucrats become anti-education, ruining schools.

When you examine the list of nonprofits and academics that want to remove advanced math from classrooms and water down the standards for all students it will leave you shaken.

It’s not a fringe movement. It is School of Education Orthodoxy.

Tracing Woods: This is a fair question! Opposition to ability grouping is a fringe idea opposed by the great majority of parents. So which obscure, fringe organizations are pushing it?

Let’s ask the National Council of Teachers of English what they think:

Or what about the most prominent law casebook publishers?

Or consider the National Association of Secondary School Principals.

How about the Association of State Supervisors of Mathematics, NCSM: Leadership in Mathematics Education, and the National Council of Teachers of Mathematics (NCTM)?

You know it.

If it is a choice between the form of academia that wants to prevent children from learning, and the form of academia that helps teach useful things, and one or the other must be destroyed?

The choice seems clear.

Tracing Woods: my modest proposal to every university that has published research claiming ability grouping (with paired curricular modification) doesn’t work:

detrack. remove your admissions standards. remove course prerequisites.

if detracking works, let’s create Harvard Community College.

However, you do have to choose a reasonable implementation. Is it possible that we are in some ways screwing up implementation so badly that adding what we call ability grouping to the mix, as implemented in practice, could make things worse?

North Carolina excluded half its qualified students from advanced math. They tried to pass a law to fix some of this. The schools fought back.

Tracing Woods: What happened when North Carolina changed its laws to require top-scoring students to be placed in advanced math?

The state board of education changed the test cutoffs, subverting the intent of the law by dropping almost all students from the top-scoring category.

Janet Johnson and John Wittle: The law was intended to help high school students who excelled in their math classes move into the advanced track. Before the scoring change, 11% of high school students statewide scored at Level 5, the highest level, with some districts seeing rates as high as 25%. The EVAAS Prediction vs. Performance table (above) showed that in 2009, 42,144 students were predicted to be successful in 8th-grade algebra, and only 18,670 students were enrolled. Using EVAAS prediction as the metric would have given 23,474 more students access to advanced math.

After the law passed, which required schools to admit all Level 5 students to advanced classes, and the state changed the scoring scale, fewer than 1,500 (too few to report) high school students in the entire state achieved Level 5.

The schools had technically complied with the law while completely subverting its intent. These charts are from the NC School Report Cards. After 2019, the Math 1 Performance charts show no Level 5 and very little Level 4, similar to this.

The full post on ‘the Algebra gatekeepers’ keeps outlining all the tactics used to ensure that kids do not learn algebra, especially disadvantaged kids. As you read it, it keeps getting worse.

For example, we attended a meeting with the parents of a middle school girl who had earned an A in 7th-grade pre-algebra but was denied enrollment in 8th-grade algebra despite her and her parents’ wishes. The teachers argued that her formative assessments didn’t align with her summative performance, suggesting her previous success didn’t guarantee future outcomes.

The language arts teacher claimed the student had “appeared to struggle” during benchmark activities and that earning an A seemed “harder for her than for other students who achieved similar summative data results.” The math teacher who had given her an A pointed to C grades on some earlier formative assessments, arguing that despite her subsequent A performance on chapter tests and other summative data points, these initial benchmark scores indicated she “sometimes struggles with foundational concepts during the formative assessment process.”

The administrators nodded knowingly as teachers referenced “inconsistent performance across benchmark measures” and “concerns about the gap between formative and summative data trends.” They suggested that while her final grades represented solid summative data, the formative assessment patterns revealed “areas of concern” that made advanced placement inadvisable, regardless of what the summative data actually showed about her mastery of the subject matter. This is just an example of the kind of talk the parents encountered, so the church advocacy group started bringing someone from our staff to the meetings to help them cut through this.

This type of circular reasoning, where success was reframed as evidence of potential failure, was typical of how schools justified excluding qualified students from advanced courses.

Administrators would routinely promise that students could move to advanced tracks “later, in high school,” but our analyses and other research we found indicate students rarely moved to the advanced track after the 8th grade.

The tracking system created rigid pathways where missing 8th-grade algebra typically meant students couldn’t reach calculus by graduation, limiting their college and career options.

One fifth grader exemplified the problem. He had scored at the highest level on every test, had a 98% EVAAS prediction for success, and straight As on his report card. He was also officially classified as Academically Gifted by the school. When a new advanced math class was created, he wasn’t invited. When he asked to join, his parents were told no, he needed to be “recommended.” School officials told us, with straight faces, that his consistent past success was no indication he could succeed in advanced math, that they were keeping him out for his own good.

What North Carolina is doing here, excluding lots of qualified students, does at least seem better than ending algebra entirely for everyone, I suppose.

Pamela Hobart also looked into the same writeup and offers her own thread, noting that this steps fully into cartoon villainy.

The true teaching of algebra to kids who are ready for it is almost impossible to find. Even if you send your child to a ‘gifted’ school, they mostly won’t let kids get more than a year or two ahead of ‘schedule.’ Schools instead think it is better kids be bored for five years, for their own good you see.

Raymond Arnold points to the best objection I have seen, which is that if a more advanced option exists then many parents will push for it even when it is inappropriate for their particular child, or use it to push their child way too hard, and while this means some kids are bored it is saving a lot of families from being forced into a red queen’s race.

This is a real cost, but the prior should be rather extremely stacked against ‘if we let kids learn more then parents would try to have their kids learn too much and this would be bad,’ especially when you can gate the advanced classes with objective tests. Yes, parents can push their kids to study harder to try and pass those tests, but that’s a risk I am willing to take.

What is the steelman case that ‘ability grouping doesn’t work’ or ‘ability grouping has been tried and didn’t work’ in some particular context?

This post by Karen Vaites is perhaps the closest, in particular on early reading grouping. It convinced me that in practice you really can mess this up badly enough to make things actively worse, and that we are currently messing up badly enough that this is a real possibility.

As in, what happens in practice is that you group kids by a measurement of abstract ‘reading level’ and then focus on ‘achievement’ of ‘reading level,’ forbid them to read anything beyond ‘reading level,’ and don’t ask what actual skills they need, don’t move them between groupings as their skills change, and then wonder why it isn’t working.

One could almost say, if you look at the details, that the teachers are using ‘reading groups’ as a substitute for actually teaching the children to read. You put them in a group and then you did your job. Again, yeah, I can see how that wouldn’t work. Indeed, if you are outsourcing the teaching job to other kids, then at that point you actively want uneven groups, because you want to group students with student teachers.

Whereas once students get to ‘escape velocity’ on reading, which the better students have relatively early, they no longer need a teacher, they just need motivation and permission to read books. Whereas the system seems designed to stop kids from reading books they want to read if they are deemed ‘too hard’? A kid can tell you if a book is too hard, they won’t want to read it.

One big complaint is that it is ‘hard to measure reading level.’ I don’t think it is hard. You can observe a lot just by watching. The problem is that you’re measuring a set of distinct reading skills as if it was one number, and then treating that one number as real, and also abdicating all the real work.

Sarah Sparks: But evidence suggests that the practice may be less beneficial than teachers think: It can exacerbate achievement gaps and even slow reading growth for some children unless the groups are fluid and focused on skills rather than overall achievement.

“What we’ve discovered is that it’s fine to have a group of students of different levels, as long as they all are working on the same learning needs,” said Carol Connor, an education professor at the University of California, Irvine, who developed the program. “You can have students of different reading abilities who all need to work on decoding. … What doesn’t work is if you put your kids who already know how to decode in a group to learn how to decode, again. You receive more behavior problems because they’re really bored, … and our research suggests that it has a negative effect on their growth.”

Karen Vaites: Tim Shanahan breaks down key research and its instructional implications in The Instructional Level Concept Revisited: Teaching with Complex Text. As researchers looked into the effectiveness of working at reading level, studies found that it “has made no difference—that is the kids taught from grade level materials do as well as those at an instructional level—or the instructional level placements have led to less learning.”

More recently, he highlights additional new evidence from a study of third graders: “Results indicate that weaker readers, using texts at two, three, and four grade levels above their instructional levels with the assistance of lead readers [other, better reading, third graders], outscored both proficient and less proficient students in the control group across multiple measures of reading achievement.”

From a question: My daughter is in first grade. Her classroom teachers have all the books in the classroom library leveled, and students are not allowed to go beyond their reading level during “Independent” reading.

From the answer: Your daughter’s aspirations as a reader are the problem here. Some kids are allowed to read the red-dot books and others are stuck with the baby books with the blue dots. She wants to be a red dot kid, to hang with the red dot kids, to be seen as a red dot kid … but her teacher can only see her as a blue dot kid and she must learn to stay to her own bookshelf with her own kind if she is going to succeed in this classroom.

In the meantime, explain to your daughter that the teacher is trying to help her but that we teachers sometimes don’t get it right, and that you can’t always “fight city hall.”

So yes, you group primarily by what aspects the kids need to work on most, and that works better. Sure. I can totally believe that is a better strategy. Skill issue.

Instead of using it to figure out what kids need to do to learn to read and putting them in position to learn that, it sounds like grouping is being used to prevent kids from meaningfully reading? The purpose is to gate reading behind general tests? To spread ‘equality’ to progress on different reading aspects?

Sarah Sparks: It sounds like good sense. “Kids should be reading just-right texts as they grow as readers.” That just sounds sensible, doesn’t it? Many urban legends do… until you know better.

I can see why that might actively backfire. This isn’t about ‘ability grouping’ not working. It’s about failing to actually group by the relevant ability, and it’s about the ‘just-right text’ theory that seems to me obviously wrong.

Sarah Sparks: During Tier 1 instruction, you want all kids working with grade level texts; students reading below grade level will need scaffolding and support (as well as targeted Tier 2 and/or 3 intervention).

This promotes equity, for it’s the best mechanism for helping below-benchmark students to catch up.

It also honors the fact that a fifth grader who reads at second grade level is still thinking at the level of a fifth grader, and he or she will remain engaged and motivated by learning content and vocabulary at his or her developmental level. (No more baby books for big kids, y’all!)


Ignore the ‘this promotes equity’ framing, since you could simply say ‘this promotes learning to read.’ Equity via catching up those lagging is good, and you call it learning.

The theory here is that age matters a lot. That if you are a fifth grader, your ability to learn is inherently much stronger than that of a second grader, whereas the ability of different second graders is alike? Equality (of those at the same age) for me, inequality (of those at different ages) for thee. Whereas the correct model is that each kid has a different ability to learn different things, that usually improves steadily with age.

But also note that this is saying that the best way for many students at second grade reading level to learn reading is to assign them to read fifth grade level books, indeed to mandate it. Yes, I can believe that. So why are we so often telling kids at second grade reading level or literally in second grade or both that they can’t read the fifth grade books even when they want to?

The other theory present in this proposal is, how about using techniques that actually teach kids reading. And yeah, I agree, that would be great.

Tracing Woods (replying to Vaites): This is an extremely useful article that deserves a full, thorough response.

My short response is that I agree narrowly (most leveled readers seem quite bad, training specific skills matters, in-class grouping for reading is quite popular and often pretty uninspired) and disagree broadly (there is pretty strong evidence for the value of several forms of ability grouping, drawing from eg Direct Instruction, Success for All, acceleration/gifted literature; ability grouping has acquired a bad reputation for reasons mostly unrelated to its performance; “grade level” is the wrong measure) in a way that would be productive to hash out more fully.

I agree fully that the real question is cultural.

In case you didn’t realize that there is a war. There has been for a while.


OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying

I am a little late to the party on several key developments at OpenAI:

  1. OpenAI’s Chief Global Affairs Officer Chris Lehane was central to the creation of the new $100 million PAC, where they will partner with a16z to oppose any and all attempts by states to regulate AI in any way for any reason.

  2. Effectively as part of that effort, OpenAI sent a deeply bad faith letter to Governor Newsom opposing SB 53.

  3. OpenAI seemingly has embraced descending fully into paranoia around various nonprofit organizations and also Effective Altruism in general, or at least is engaging in rhetoric and legal action to that effect, joining the style of Obvious Nonsense rhetoric about this previously mostly used by a16z.

This is deeply troubling news. It is substantially worse than I was expecting of them. Which is presumably my mistake.

This post covers those events, along with further developments around two recent tragic suicides where ChatGPT was plausibly at fault for what went down, including harsh words from multiple attorneys general who can veto OpenAI’s conversion to a for-profit company.

In OpenAI #11: America Action Plan, I documented that OpenAI:

  1. Submitted an American AI Action Plan proposal that went full jingoist, framing AI as a race against the CCP in which we must prevail, with intentionally toxic vibes throughout.

  2. Requested immunity from all AI regulations.

  3. Attempted to ban DeepSeek using bad faith arguments.

  4. Demanded absolute fair use, for free, for all AI training, or else.

  5. Also included some reasonable technocratic proposals, such as a National Transmission Highway Act and AI Opportunity Zones, along with some I think are worse on the merits, such as their ‘national AI readiness strategy.’

Also worth remembering:

  1. This article claims both OpenAI and Microsoft were central in lobbying to take any meaningful requirements for foundation models out of the EU’s AI Act. If I was a board member, I would see this as incompatible with the OpenAI charter. This was then fleshed out further in the OpenAI Files and in this article from Corporate Europe Observatory.

  2. OpenAI lobbied against SB 1047, both reasonably and unreasonably.

  3. OpenAI’s CEO Sam Altman has over time used increasingly jingoistic language throughout his talks, has used steadily less talk about

OpenAI’s Chief Global Affairs Officer, Christopher Lehane, sent a letter to Governor Newsom urging him to gut SB 53 (or see Miles’s in-line responses included here), which is already very much a compromise bill that got compromised further by raising its ‘large AI companies’ threshold and eliminating the third-party audit requirement. That already eliminated almost all of what little burden the bill could even be claimed to impose.

OpenAI’s previous lobbying efforts were in bad faith. This is substantially worse.

Here is the key ask from OpenAI, bold in original:

In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency.

As in, California should abdicate its responsibilities entirely, and treat giving lip service to the EU’s Code of Practice (not even actually complying with it!) as sufficient to satisfy California on all fronts. It also says that if a company makes any voluntary agreement with the Federal Government on anything safety related, then that too should satisfy all requirements.

This is very close to saying California should have no AI safety regulations at all.

The rhetoric behind this request is what you would expect. You’ve got:

  1. The jingoism.

  2. The talk about ‘innovation.’

  3. The Obvious Nonsense threats about this slowing down progress or causing people to withdraw from California.

  4. The talk about Federal leadership on regulation without any talk of what that would look like while the only Federal proposal that ever got traction was ‘ban the states from acting and still don’t do anything on the Federal level.’

  5. The talk about burden on ‘small developers’ when to be covered by SB 53 at all you now have to spend a full $500 million in training compute, and the only substantive expense (the outside audits) are entirely gone.

  6. The false claim that California lacks the state capacity to handle this, and the false assurance that the EU and Federal Government totally have what they need.

  7. The talk of a ‘California approach’ which here means ‘do nothing.’

They even try to equate SB 53 to CEQA, which is a non-sequitur.

They treat OpenAI’s ‘commitment to work with’ the US federal government, which likely amounts to running some bespoke tests focused on national security concerns, as equivalent to being under a comprehensive regulatory regime, and as a substitute for SB 53, including its transparency requirements.

They emphasize that they are a non-profit, while trying to transform themselves into a for-profit and expropriate most of the non-profit’s wealth for private gain.

Plus we have again the important misstatement of OpenAI’s mission.

OpenAI’s actual mission: Ensure that AGI benefits all of humanity.

OpenAI says its mission is: Building AI that benefits all of humanity.

That is very importantly not the same thing. The best way to ensure AGI benefits all of humanity could importantly be to not build it.

Also, as you would expect, the letter does not, anywhere, explain why even fully complying with the Code of Practice, let alone any future unspecified voluntary safety-oriented agreement, would satisfy the policy goals behind SB 53.

Because very obviously, if you read the Code of Practice and SB 53, they wouldn’t.

Miles Brundage responds to the letter in-line (which I recommend if you want to go into the details at that level) and also offers this Twitter thread:

Miles Brundage (September 1): TIL OpenAI sent a letter to Governor Newsom filled with misleading garbage about SB 53 and AI policy generally.

Unsurprising if you follow this stuff, but worth noting for those who work there and don’t know what’s being done in their name.

I don’t think it’s worth dignifying it with a line-by-line response but I’ll just say that it was clearly not written by people who know what they’re talking about (e.g., what’s in the Code of Practice + what’s in SB 53).

It also boils my blood every time that team comes up with new and creative ways to misstate OpenAI’s mission.

Today it’s “the AI Act is so strong, you should just assume that we’re following everything else” [even though the AI Act has a bunch of issues].

Tomorrow it’s “the AI Act is being enforced too stringently — it needs to be relaxed in ways A, B, and C.”

  1. The context here is OpenAI trying to water down SB 53 (which is not that strict to begin with – e.g. initially third parties would verify companies’ safety claims in *2030* and now there is just *no* such requirement)

  2. The letter treats the Code of Practice for the AI Act, on the one hand – imperfect but real regulation – and a voluntary agreement to do some tests sometimes with a friendly government agency, on the other – as if they’re the same. They’re not, and neither is SB 53…

  3. It’s very disingenuous to act as if OpenAI is super interested in harmonious US-EU integration + federal leadership over states when they have literally never laid out a set of coherent principles for US federal AI legislation.

  4. Vague implied threats to slow down shipping products or pull out of CA/the US etc. if SB 53 went through, as if it is super burdensome… that’s just nonsense. No one who knows anything about this stuff thinks any of that is even remotely plausible.

  5. The “California solution” is basically “pretend different things are the same,” which is funny because it’d take two braincells for OpenAI to articulate an actually-distinctively-Californian or actually-distinctively-American approach to AI policy. But there’s no such effort.

  6. For example, talk about how SB 53 is stronger on actual transparency (and how the Code of Practice has a “transparency” section that basically says “tell stuff to regulators/customers, and it’d sure be real nice if you sometimes published it”). Woulda been trivial. The fact that none of that comes up suggests the real strategy is “make number of bills go down.”

  7. OpenAI’s mission is to ensure that AGI benefits all of humanity. Seems like something you’d want to get right when you have court cases about mission creep.

We also have this essay response from Nomads Vagabonds. He is, if anything, even less kind than Miles. He reminds us that OpenAI, through Greg Brockman, has teamed up with a16z to dedicate $100 million to ensuring no regulation of AI, anywhere in any state, for any reason, in a PAC that was the brainchild of OpenAI vice president of global affairs Chris Lehane.

He also goes into detail about the various bad faith provisions.

These four things can be true at once.

  1. OpenAI has several competitors that strongly dislike OpenAI and Sam Altman, for a combination of reasons with varying amounts of merit.

  2. Elon Musk’s lawsuits against OpenAI are often without legal merit, although the objections to OpenAI’s conversion to a for-profit were ruled by the judge to absolutely have merit, with the question mainly being whether Musk had standing.

  3. There are many other complaints about OpenAI that have a lot of merit.

  4. AI might kill everyone and you might want to work to prevent this without having it out for OpenAI in particular or being funded by OpenAI’s competitors.

OpenAI seems, by Shugerman’s reporting, to have responded to this situation by becoming paranoid that there is some sort of vast conspiracy Out To Get Them, funded and motivated by commercial rivalry, as opposed to people who care about AI not killing everyone and also this Musk guy who is Big Mad.

Of course a lot of us, as the primary example, are going to take issue with OpenAI’s attempt to convert from a non-profit to a for-profit while engaging in one of the biggest thefts in human history by expropriating most of the nonprofit’s financial assets, worth hundreds of billions, for private gain. That opposition has very little to do with Elon Musk.

Emily Dreyfuss: Inside OpenAI, there’s a growing paranoia that some of its loudest critics are being funded by Elon Musk and other billionaire competitors. Now, they are going after these nonprofit groups, but their evidence of a vast conspiracy is often extremely thin.

Emily Shugerman (SF Standard): Nathan Calvin, who joined Encode in 2024, two years after graduating from Stanford Law School, was being subpoenaed by OpenAI. “I was just thinking, ‘Wow, they’re really doing this,’” he said. “‘This is really happening.’”

The subpoena was filed as part of the ongoing lawsuits between Elon Musk and OpenAI CEO Sam Altman, in which Encode had filed an amicus brief supporting some of Musk’s arguments. It asked for any documents relating to Musk’s involvement in the founding of Encode, as well as any communications between Musk, Encode, and Meta CEO Mark Zuckerberg, whom Musk reportedly tried to involve in his OpenAI takeover bid in February.

Calvin said the answer to these questions was easy: The requested documents didn’t exist.

In media interviews, representatives for an OpenAI-affiliated super PAC have described a “vast force” working to slow down AI progress and steal American jobs.

This has long been the Obvious Nonsense a16z line, but now OpenAI is joining them via being part of the ‘Leading the Future’ super PAC. If this was merely Brockman contributing it would be one thing, but no, it’s far beyond that:

According to the Wall Street Journal, the PAC is in part the brainchild of Chris Lehane, OpenAI’s vice president of global affairs.

Meanwhile, OpenAI is treating everyone who opposes their transition to a for-profit as if they have to be part of this kind of vast conspiracy.

Around the time Musk mounted his legal fight [against OpenAI’s conversion to a for-profit], advocacy groups began to voice their opposition to the transition plan, too. Earlier this year, groups like the San Francisco Foundation, Latino Prosperity, and Encode organized open letters to the California attorney general, demanding further questioning about OpenAI’s move to a for-profit. One group, the Coalition for AI Nonprofit Integrity (CANI), helped write a California bill introduced in March that would have blocked the transition. (The assemblymember who introduced the bill suddenly gutted it less than a month later, saying the issue required further study.)

In the ensuing months, OpenAI leadership seems to have decided that these groups and Musk were working in concert.

Catherine Bracy: Based on my interaction with the company, it seems they’re very paranoid about Elon Musk and his role in all of this, and it’s become clear to me that that’s driving their strategy

No, these groups were not (as far as I or anyone else can tell) funded by or working in concert with Musk.

The suspicions that Meta was involved, including in Encode, which is attempting to push forward SB 53, are not simply paranoid; they flat out don’t make any sense. Nor does the claim about Musk, given how he handles opposition:

Both LASST and Encode have spoken out against Musk and Meta — the entities OpenAI is accusing them of being aligned with — and advocated against their aims: Encode recently filed a complaint with the FTC about Musk’s AI company producing nonconsensual nude images; LASST has criticized the company for abandoning its structure as a public benefit corporation. Both say they have not taken money from Musk nor talked to him. “If anything, I’m more concerned about xAi from a safety perspective than OpenAI,” Whitmer said, referring to Musk’s AI product.

I’m more concerned about OpenAI because I think they matter far more than xAI, but pound for pound xAI is by far the bigger menace acting far less responsibly, and most safety organizations in this supposed conspiracy will tell you that if you ask them, and act accordingly when the questions come up.

Miles Brundage: First it was the EAs out to get them, now it’s Elon.

The reality is just that most people think we should be careful about AI

(Elon himself is ofc actually out to get them, but most people who sometimes disagree with OpenAI have nothing to do with Elon, including Encode, the org discussed at the beginning of the article. And ironically, many effective altruists are more worried about Elon than OAI now)

OpenAI’s paranoia started with CANI, and then extended to Encode, and then to LASST.

Nathan Calvin: ​They seem to have a hard time believing that we are an organization of people who just, like, actually care about this.

Emily Shugerman: Lehane, who joined the company last year, is perhaps best known for coining the term “vast right-wing conspiracy” to dismiss the allegations against Bill Clinton during the Monica Lewinsky scandal — a line that seems to have seeped into Leading the Future’s messaging, too.

In a statement to the Journal, representatives from the PAC decried a “vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation, and erect a patchwork of regulation.”

The hits keep coming as a16z-level paranoia about EA being a ‘vast conspiracy’ kicks into high gear, such as the idea that Dustin Moskovitz doesn’t care about AI safety and is just going after them because of his stake in Anthropic. Can you possibly be serious right now? Why do you think he invested in Anthropic?

Of particular interest to OpenAI is the fact that both Omidyar and Moskovitz are investors in Anthropic — an OpenAI competitor that claims to produce safer, more steerable AI technology.

“Groups backed by competitors often present themselves as disinterested public voices or ‘advocates’, when in reality their funders hold direct equity stakes in competitors in their sector – in this case worth billions of dollars,” she said. “Regardless of all the rhetoric, their patrons will undoubtedly benefit if competitors are weakened.”

Never mind that Anthropic has not supported Moskovitz on AI regulation, and that the regulatory interventions funded by Moskovitz would consistently (aside from any role in trying to stop OpenAI’s for-profit conversion) be bad for Anthropic’s commercial outlook.

Open Philanthropy (funded by Dustin Moskovitz): Reasonable people can disagree about the best guardrails to set for emerging technologies, but right now we’re seeing an unusually brazen effort by some of the biggest companies in the world to buy their way out of any regulation they don’t like. They’re putting their potential profits ahead of U.S. national security and the interests of everyday people.

Companies do this sort of thing all the time. This case is still very brazen, and very obvious, and OpenAI has now jumped into a16z levels of paranoia and bad faith between the lawfare, the funding of the new PAC and their letter on SB 53.

Suing and attacking nonprofits engaging in advocacy is a new low. Compare that to the situation with Daniel Kokotajlo, where OpenAI, to its credit, once confronted with its bad behavior, backed down rather than going on a legal offensive.

Daniel Kokotajlo: Having a big corporation come after you legally, even if they are just harassing you and not trying to actually get you imprisoned, must be pretty stressful and scary. (I was terrified last year during the nondisparagement stuff, and that was just the fear of what *might* happen, whereas in fact OpenAI backed down instead of attacking) I’m glad these groups aren’t cowed.

As in, do OpenAI and Sam Altman believe these false paranoid conspiracy theories?

I have long wondered the same thing about Marc Andreessen and a16z, and others who say there is a ‘vast conspiracy’ out there by which they mean Effective Altruism (EA), or when they claim it’s all some plot to make money.

I mean, these people are way too smart and knowledgeable to actually believe that, asks Padme, right? And certainly Sam Altman and OpenAI have to know better.

Wouldn’t the more plausible theory be that these people are simply lying? That Lehane doesn’t believe in a ‘vast EA conspiracy’ any more than he believed in a ‘vast right-wing conspiracy’ when he coined the term ‘vast right-wing conspiracy’ about the (we now know very true) allegations around Monica Lewinsky. It’s an op. It’s rhetoric. It’s people saying what they think will work to get them what they want. It’s not hard to make that story make sense.

Then again, maybe they do really believe it, or at least aren’t sure? People often believe genuinely crazy things that do not in any way map to reality, especially once politics starts to get involved. And I can see how going up against Elon Musk and being engaged in one of the biggest heists in human history in broad daylight, while trying to build superintelligence that poses existential risks to humanity that a lot of people are very worried about and that also will have more upside than anything ever, could combine to make anyone paranoid. Highly understandable and sympathetic.

Or, of course, they could have been talking to their own AIs about these questions. I hear there are some major sycophancy issues there. One must be careful.

I sincerely hope that those involved here are lying. It beats the alternatives.

It seems that OpenAI’s failures on sycophancy and dealing with suicidality might endanger its relationship with those who must approve its attempted restructuring into a for-profit, also known as one of the largest attempted thefts in human history?

Maybe they will take OpenAI’s charitable mission seriously after all, at least in this way, despite presumably not understanding the full stakes involved and having the wrong idea about what kind of safety matters?

Garrison Lovely: Scorching new letter from CA and DE AGs to OpenAI, who each have the power to block the company’s restructuring to loosen nonprofit controls.

They are NOT happy about the recent teen suicide and murder-suicide that followed prolonged and concerning interactions with ChatGPT.

Rob Bonta (California Attorney General) and Kathleen Jennings (Delaware Attorney General) in a letter: In our meeting, we conveyed in the strongest terms that safety is a non-negotiable priority, especially when it comes to children. Our teams made additional requests about OpenAI’s current safety precautions and governance. We expect that your responses to these will be prioritized and that immediate remedial measures are being taken where appropriate.

We recognize that OpenAI has sought to position itself as a leader in the AI industry on safety. Indeed, OpenAI has publicly committed itself to build safe AGI to benefit all humanity, including children. And before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm.

It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.

The recent deaths are unacceptable. They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.

We look forward to hearing from you and working with your team on these important issues.

Some other things said by the AGs:

Bonta: We were looking for a rapid response. They’ll know what that means, if that’s days or weeks. I don’t see how it can be months or years.

All antitrust laws apply, all consumer protection laws apply, all criminal laws apply. We are not without many tools to regulate and prevent AI from hurting the public and the children.

With a lawsuit filed that OpenAI might well lose, and the two attorneys general who can veto its restructuring breathing down its neck, OpenAI is promising various fixes. In particular, OpenAI has decided it is time for parental controls as soon as it can manage, which should be within a month.

Their first announcement on August 26 included these plans:

OpenAI: While our initial mitigations prioritized acute self-harm, some people experience other forms of mental distress. For example, someone might enthusiastically tell the model they believe they can drive 24/7 because they realized they’re invincible after not sleeping for two nights. Today, ChatGPT may not recognize this as dangerous or infer play and—by curiously exploring—could subtly reinforce it.

We are working on an update to GPT‑5 that will cause ChatGPT to de-escalate by grounding the person in reality. In this example, it would explain that sleep deprivation is dangerous and recommend rest before any action.

Better late than never on that one, I suppose. That is indeed why I am relatively not so worried about problems like this: we can adjust after things start to go wrong.

OpenAI: In addition to emergency services, we’re exploring ways to make it easier for people to reach out to those closest to them. This could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting.

We’re also considering features that would allow people to opt-in for ChatGPT to reach out to a designated contact on their behalf in severe cases.

We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.

On September 2 they followed up with additional information about how they are ‘partnering with experts’ and providing more details.

OpenAI: Earlier this year, we began building more ways for families to use ChatGPT together and decide what works best in their home. Within the next month, parents will be able to:

  • Link their account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.

These controls add to features we have rolled out for all users including in-app reminders during long sessions to encourage breaks.

Parental controls seem like an excellent idea.

I would consider most of this to effectively be ‘on by default’ already, for everyone, in the sense that AI models have controls against things like NSFW content that largely treat us all like teens. You could certainly tighten them up more for an actual teen, and it seems fine to give parents the option, although mostly I think you’re better off not doing that.

The big new thing is the notification feature. That is a double edged sword. As I’ve discussed previously, an AI or other source of help that can ‘rat you out’ to authorities, even ‘for your own good’ or ‘in moments of acute distress’ is inherently very different from a place where your secrets are safe. There is a reason we have confidentiality for psychologists and lawyers and priests, and balancing when to break that is complicated.

Given an AI’s current level of reliability and its special role as a place free from human judgment or social consequence, I am actually in favor of it outright never alerting others without an explicit user request to do so.

Whereas things are moving in the other direction, with predictable results.

As in, OpenAI is already scanning your chats as per their posts I discussed above.

Greg Isenberg: ChatGPT is potentially leaking your private convos to the police.

People use ChatGPT because it feels like talking to a smart friend who won’t judge you. Now, people are realizing it’s more like talking to a smart friend who might snitch.

This is the same arc we saw in social media: early excitement, then paranoia, then demand for smaller, private spaces.

OpenAI (including as quoted by Futurism): When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts.

If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.

Futurism: When describing its rule against “harm [to] yourself or others,” the company listed off some pretty standard examples of prohibited activity, including using ChatGPT “to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

They are not referring self-harm cases, in order to protect privacy, but harm to others is deemed different. That still destroys the privacy of the interaction. And ‘harm to others’ could rapidly morph in any number of directions, both with false positives and also with changes in ideas about what constitutes ‘harm.’

They’re not even talking about felonies or imminent physical harm. They’re talking about ‘engage in unauthorized activities that violate the security of any service or system,’ or ‘destroy property,’ so this could potentially extend quite far, and into places that seem far less justified than intervening in response to a potentially suicidal user. These are circumstances in which typical privileged communication would hold.

I very much do not like where that is going, and if I heard reports this was happening on the regular it would fundamentally alter my relationship to ChatGPT, even though I ‘have nothing to hide.’

What’s most weird about this is that OpenAI was recently advocating for ‘AI privilege.’

Reid Southern: OpenAI went from warning users that there’s no confidentiality when using ChatGPT, and calling for “AI privilege”, to actively scanning your messages to send to law enforcement, seemingly to protect themselves in the aftermath of the ChatGPT induced murder-suicide

This is partially a case of ‘if I’m not legally forbidden to do [X] then I will get blamed for not doing [X] so please ban me from doing it’ so it’s not as hypocritical as it sounds. It is still rather hypocritical and confusing to escalate like this. Why respond to suicides by warning you will be scanning for harm to others and intent to impact the security of systems, but definitely not acting if someone is suicidal?

If you think AI users deserve privilege, and I think this is a highly reasonable position, then act like it. Set a good example, set a very high bar for ratting, and confine alerting human reviewers, let alone the authorities, to cases where you catch someone on the level of trying to make a nuke or a bioweapon, or at minimum to things that would force a psychologist to break privilege. It’s even good for business.

Otherwise people are indeed going to get furious, and there will be increasing demand to run models locally or in other ways that better preserve privacy. There’s not zero of that already, but it would escalate quickly.

Steven Byrnes notes the weirdness of seeing Ben’s essay describe OpenAI as an ‘AI safety company’ rather than a company most AI safety folks hate with a passion.

Steven Byrnes: I can’t even describe how weird it is to hear OpenAI, as a whole, today in 2025, being described as an AI safety company. Actual AI safety people HATE OPENAI WITH A PASSION, almost universally. The EA people generally hate it. The Rationalists generally hate it even more.

AI safety people have protested at the OpenAI offices with picket signs & megaphones! When the board fired Sam Altman, everyone immediately blamed EA & AI safety people! OpenAI has churned through AI safety staff b/c they keep quitting in protest! …What universe is this?

Yes, many AI safety people are angry about OpenAI being cavalier & dishonest about harm they might cause in the future, whereas you are angry about OpenAI being cavalier & dishonest about harm they are causing right now. That doesn’t make us enemies. “Why not both?”

I think that’s going too far. It’s not good to hate with a passion.

Even more than that, you could do so, so much worse than OpenAI on all of these questions (e.g. Meta, or xAI, or every major Chinese lab, basically everyone except Anthropic or Google is worse).

Certainly we think OpenAI is on net not helping and deeply inadequate to the task, their political lobbying and rhetoric is harmful, and their efforts have generally made the world a lot less safe. They still are doing a lot of good work, making a lot of good decisions, and I believe that Altman is normative, that he is far more aware of what is coming and the problems we will face than most or than he currently lets on.

I believe he is doing a much better job on these fronts than most (but not all) plausible CEOs of OpenAI would do in his place. For example, if OpenAI’s CEO of Applications Fidji Simo were in charge, or Chairman of the Board Bret Taylor were in charge, or Greg Brockman was in charge, or the CEO of any of the magnificent seven were in charge, I would expect OpenAI to act far less responsibly.

Thus I consider myself relatively well-inclined towards OpenAI among those worried about AI or advocating for AI safety.

I still have an entire series of posts about how terrible things have been at OpenAI and a regular section about them called ‘The Mask Comes Off.’

And I find myself forced to update my view importantly downward, towards being more concerned, in the wake of the recent events described in this post. OpenAI is steadily becoming more of a bad faith actor in the public sphere.

OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying Read More »

ignoring-trump-threats,-europe-hits-google-with-2.95b-euro-fine-for-adtech-monopoly

Ignoring Trump threats, Europe hits Google with 2.95B euro fine for adtech monopoly

Google may have escaped the most serious consequences in its most recent antitrust fight with the US Department of Justice (DOJ), but the European Union is still gunning for the search giant. After a brief delay, the European Commission has announced a substantial 2.95 billion euro ($3.45 billion) fine relating to Google’s anti-competitive advertising practices. This is not Google’s first big fine in the EU, and it probably won’t be the last, but it’s the first time European leaders could face blowback from the US government for going after Big Tech.

The case stems from a complaint made by the European Publishers Council in 2021. The ensuing EU investigation determined that Google illegally preferenced its own ad display services, which made its Google Ad Exchange (AdX) marketplace more important in the European ad space. As a result, the Commission says Google was able to charge higher fees for its service, standing in the way of fair competition since at least 2014.

A $3.45 billion fine would be a staggering amount for most firms, but Google’s earnings have never been higher. In Q2 2025, Google had net earnings of over $28 billion on almost $100 billion in revenue. The European Commission isn’t stopping with financial penalties, though. Google has also been ordered to end its anti-competitive advertising practices and submit a plan for doing so within 60 days.
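For a sense of scale, here is the back-of-envelope arithmetic implied by those figures. The inputs are the numbers reported above; the comparison itself is mine, not the Commission's or Google's.

```python
# Rough scale of the EU fine against Google's reported Q2 2025 results.
# Figures come from the reporting above; the ratios are illustrative only.

fine_bn = 3.45            # fine in billions of US dollars
q2_net_income_bn = 28.0   # "over $28 billion" in net earnings
q2_revenue_bn = 100.0     # "almost $100 billion" in revenue
days_in_quarter = 91

print(f"{fine_bn / q2_net_income_bn:.0%} of one quarter's net income")         # roughly 12%
print(f"{fine_bn / (q2_net_income_bn / days_in_quarter):.0f} days of profit")  # roughly 11 days
print(f"{fine_bn / q2_revenue_bn:.1%} of one quarter's revenue")               # roughly 3.5%
```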

“Google must now come forward with a serious remedy to address its conflicts of interest, and if it fails to do so, we will not hesitate to impose strong remedies,” said European Commission Executive Vice President Teresa Ribera. “Digital markets exist to serve people and must be grounded in trust and fairness. And when markets fail, public institutions must act to prevent dominant players from abusing their power.”

Europe alleges Google’s control of AdX allowed it to overcharge and stymie competition. Credit: European Commission

Google will not accept the ruling as it currently stands—company leadership believes that the commission’s decision is wrong, and they plan to appeal. “[The decision] imposes an unjustified fine and requires changes that will hurt thousands of European businesses by making it harder for them to make money,” said Google’s head of regulatory affairs, Lee-Anne Mulholland.

Harsh rhetoric from US

Since returning to the presidency, Donald Trump has taken a renewed interest in defending Big Tech, likely spurred by political support from heavyweights in AI and cryptocurrency. The administration has imposed hefty tariffs on Europe, and Trump recently admonished the EU for plans to place limits on the conduct of US technology firms. That hasn’t stopped the administration from putting US tech through the wringer at home, though. After publicly lambasting Intel’s CEO and threatening to withhold CHIPS and Science Act funding, the company granted the US government a 10 percent ownership stake.

Ignoring Trump threats, Europe hits Google with 2.95B euro fine for adtech monopoly Read More »

bmw-debuts-6th-generation-ev-powertrain-in-the-all-electric-ix3

BMW debuts 6th-generation EV powertrain in the all-electric iX3


Class-leading efficiency and computer-controlled driving dynamics combine.

The new iX3 marks the start of a new design language for BMW SUVs. Credit: BMW

BMW has an all-electric replacement for its X3 crossover on the way. When it arrives in mid-2026, it will have the lowest carbon footprint of any BMW yet, thanks to an obsessive approach to sustainability during the design process. But we knew that already; what we couldn’t tell you then, but can now, is everything else we know about the first of the so-called Neue Klasse electric vehicles.

“The Neue Klasse is our biggest future-focused project and marks a huge leap forward in terms of technologies, driving experience, and design,” said BMW chairman Oliver Zipse. “Practically everything about it is new, yet it is also more BMW than ever. Our whole product range will benefit from the innovations brought by the Neue Klasse—whichever drive system technology is employed. What started as a bold vision has now become reality: the BMW iX3 is the first Neue Klasse model to go into series production, kicking off a new era for BMW.”

A new face

The iX3 also debuts a new corporate face for BMW’s SUVs: From now on, these will have tall, narrow kidney grilles like the one you see here, as opposed to the short, wide grille seen on the front of the Neue Klasse sedan (which is almost certainly the next 3 Series). LEDs replace chrome, and there’s a new take on BMW’s usual four headlights, although the illuminated kidney grille is an option, not mandatory. Despite the two-box shape, the iX3 manages a drag coefficient of just 0.24.

Sixth-generation

As befitting a company called the Bavarian Motor Works, BMW has been in the business of designing and building its own electric powertrains for quite some time, in contrast to some rivals that have been buying EV tech from suppliers. In addition to the class-leading manufacturing efficiency, the sixth-generation electric powertrain should be extremely efficient—around 4 miles/kWh (15.5 kWh/100 km) from an SUV-shaped vehicle, which is 20–25 percent better than current SUV EVs.

The first variant will be the iX3 50 xDrive, which boasts a combined 463 hp (345 kW) and 475 lb-ft (645 Nm) from its pair of motors. The front axle uses an asynchronous motor, with an electrically excited synchronous motor at the rear axle providing most of the power.

There’s 40 percent more regenerative braking than in BMW’s current powertrains. We weren’t given an exact threshold where the friction brakes take over, but it should be around 0.5–0.6 G. That means that the iX3 will regeneratively brake for the overwhelming majority of the time—just 5–10 percent of braking events should require the friction brakes, we’re told. And the regen should be smoother and more precise, as well as quieter than before. There’s even some regenerative braking should the ABS trigger.
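To make that handoff concrete, here is a minimal sketch of how a blended-braking controller might split a deceleration request between regen and friction, using the roughly 0.5–0.6 G figure above as an assumed threshold. This is an illustration of the concept, not BMW's actual control logic.

```python
# Illustrative blended-braking split; the threshold is the article's rough
# 0.5-0.6 G figure, everything else is a simplifying assumption.

REGEN_LIMIT_G = 0.55  # assumed deceleration that regen alone can cover


def split_braking(request_g: float, abs_active: bool = False) -> tuple[float, float]:
    """Return (regen_g, friction_g) for a requested deceleration in g."""
    if abs_active:
        # The article notes some regen persists even during ABS events;
        # here we conservatively cap it and let friction do the rest.
        regen = min(request_g, 0.1)
    else:
        regen = min(request_g, REGEN_LIMIT_G)
    return regen, request_g - regen


# A 0.3 g stop is pure regen; a 0.8 g stop blends in the friction brakes.
print(split_braking(0.3))  # (0.3, 0.0)
print(split_braking(0.8))  # (0.55, ~0.25)
```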

NACS port comes standard, with a CCS1 adapter. Credit: BMW

For the Neue Klasse, BMW has moved to a new 800 V battery, using cylindrical cells rather than the prismatic cells you’d find in an iX or i4. Energy density is 20 percent greater than the current cells, and the pack has a usable capacity of 108 kWh. That means you can expect a range of up to 400 miles, although the official EPA rating will only arrive next year.

The new pack charges a lot faster, too. It can accept up to 400 kW, should you find a charger capable of such power. If so, the 10–80 percent charge should take 21 minutes. (BMW also says it will add 230 miles in 10 minutes.) The iX3 is capable of acting as a mobile power bank (V2L) as well as powering a home or even the grid (this requires a V2H-capable wall box), and North American iX3s will sport NACS charging ports.
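Those numbers hang together on a quick back-of-envelope check. The inputs below are BMW's claimed figures from above; the arithmetic, and the simplification of an average charge rate, is mine.

```python
# Consistency check of the claimed efficiency, pack size, and charge times.
# All inputs are the figures quoted above; the math is a rough sanity check.

usable_kwh = 108                 # usable pack capacity
efficiency_mi_per_kwh = 4.0      # "around 4 miles/kWh"
kwh_per_100km = 15.5             # the metric version of the same claim
charge_minutes_10_to_80 = 21

# 15.5 kWh/100 km expressed in miles per kWh (1 km = 0.621371 mi)
print(f"{100 * 0.621371 / kwh_per_100km:.2f} mi/kWh")     # ~4.01, matches "around 4"

# Range implied by efficiency x usable capacity
print(f"{efficiency_mi_per_kwh * usable_kwh:.0f} miles")  # ~432, in line with "up to 400 miles"

# Average charging power implied by the 10-80% claim
energy_added_kwh = (0.80 - 0.10) * usable_kwh             # ~75.6 kWh
avg_kw = energy_added_kwh / (charge_minutes_10_to_80 / 60)
print(f"{avg_kw:.0f} kW average")                         # ~216 kW, well below the 400 kW peak
```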

Software-defined

The Neue Klasse is a clean-sheet design, and BMW has used this as an opportunity to clear out the legacy cruft that has accumulated over the years. Instead of hundreds of discrete black boxes, each with a single electronic control unit (ECU) performing a single job, the iX3 uses four high-performance computers, each in charge of a different domain. Among the benefits of this approach? Almost 2,000 feet (600 m) less wiring and a weight saving of about 30 percent compared to a conventional wiring loom with all its ECUs. Taking single-function controllers out of the loop and replacing them with a domain computer also cuts latency.

The Heart of Joy is the domain controller responsible for driving dynamics and the powertrain, and can cope with up to four electric motors, something we should see in electric M-badged Neue Klasse models in the future. But good driving dynamics require more than just a fancy computer brain. The iX3 is extremely stiff, with the front and rear axles mounted to the battery pack. Weight distribution is a near-perfect 49:51.

The interior is quite faithful to the concept we saw last year. The black strip at the base of the windshield is the panoramic display. Credit: BMW

A different domain controller is in charge of the advanced driver assistance systems (ADAS). This water-cooled computer is 20 times faster than the processors that control the ADAS in a current BMW, and was developed in-house by BMW. The automaker says the focus is always on the driver’s needs in a way that is smart, symbiotic, and safe. There are AI/ML algorithms for perception and planning, but safety-proven rule-based algorithms always have the final say in the decision-making process.

There’s a hands-free, partially automated driving system that works at up to 85 mph (136 km/h) on premapped divided-lane highways, and an interesting new feature is the cooperative braking and steering during active assistance. Unlike just about every car on the road currently, using the brake will not immediately kill the cruise control, and if you intentionally cross the median or a lane marker and are looking where you’re going, the eye-tracking driver monitoring system sees you and won’t try to correct. (But if you veer across the lane and aren’t looking, the car will steer you back.)
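As a way of summarizing that behavior, here is a hypothetical decision sketch of the cooperative lane-keeping logic described above. The signal names are invented for illustration; this is not BMW's actual ADAS implementation.

```python
# Hypothetical summary of the cooperative lane-keeping behavior described
# above; signals and logic are illustrative, not BMW's implementation.

def should_correct_steering(crossing_lane_marker: bool,
                            eyes_on_road: bool,
                            deliberate_steering_input: bool) -> bool:
    """Return True if lane keeping should steer the car back into its lane."""
    if not crossing_lane_marker:
        return False
    # Driver is looking where they're going and steering on purpose:
    # treat the lane departure as intentional and do not intervene.
    if eyes_on_road and deliberate_steering_input:
        return False
    # Otherwise assume an unintentional drift and correct it.
    return True


print(should_correct_steering(True, True, True))    # False: intentional lane change
print(should_correct_steering(True, False, False))  # True: unattended drift
```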

Shy tech

The iX3’s interior is near-identical to the concept we saw last March. BMW calls this approach shy tech, where controls or displays are invisible when inactive. There’s a new multifunction steering wheel—that will surely be divisive—which puts the ADAS controls on its left side, and media controls on the right. The iDrive rotary controller is no more, but there are plenty of physical buttons (not capacitive ones) for things like windows and mirrors.

BMW says the rotary controller wouldn’t have worked well with the new iDrive UI for the trapezoidal touchscreen. (Additionally, it told us that in some regions, drivers never used the rotary controller at all.) The screen is closer to the driver than in current BMWs, and the trapezoidal shape rather effectively means the left side of the screen—which has persistent, common functions—is always close to your right hand. After playing with the system for a while, I think the UI is a lot easier to navigate than the current BMW infotainment system, good though that is.

The multifunction steering wheel looks unconventional. Credit: BMW

I’ve often been complimentary about voice recognition in BMWs, and the iX3 has an upgrade here. The natural language processing is now based on Alexa, not Cerence’s tech, and there’s a cartoon visualization for the personal assistant that looks a bit like a ninja, or perhaps an alien. This will make eye contact with the person giving voice commands, so it can discern between driver and passenger.

At the base of the windshield is the new panoramic display. This presents information in zones—you can personalize what shows up in the center and right zones, but the one in front of the driver will always be the critical stuff like your speed and any warnings or alerts. There’s also an optional heads-up display.

BMW says we can expect the iX3 50 xDrive to arrive in the US next summer, starting at around $60,000.

BMW debuts 6th-generation EV powertrain in the all-electric iX3 Read More »

honda-combines-type-r-handling-with-hybrid-efficiency-for-2026-prelude

Honda combines Type-R handling with hybrid efficiency for 2026 Prelude

The chassis benefits from parts from a different Civic—the Type-R hot hatch. Ars has sadly yet to sample the current-generation Type-R, but everyone I know who has driven one has come away smiling. Type-R parts include the front suspension’s dual-axis struts and the Brembo brakes, which are there for when regen braking via the hybrid system is no longer sufficient.

Adaptive dampers control the Prelude’s ride, and there are four different drive modes. The powertrain simulates a manual transmission with something called S+ Shift, which “delivers quick simulated gearshift responses through seamless coordination between the engine and high-power motor, including downshift blips, rev matching, and gear holding.”

The shape is dictated by airflow. Credit: Honda

If the end result is as good as Hyundai’s N E-shift, it should be fun to play with. And if it isn’t, you can just leave the car in automatic mode.

Beyond that, expect all the latest Honda advanced driver assistance systems (also known as Honda Sensing), and an Android Automotive-based infotainment system with Google built in and wireless Apple CarPlay and Android Auto.

We’ll have to wait until closer to the car’s arrival to get pricing, but expect the Prelude to start somewhere between $38,000 and $40,000.

Honda combines Type-R handling with hybrid efficiency for 2026 Prelude Read More »

google-won’t-have-to-sell-chrome,-judge-rules

Google won’t have to sell Chrome, judge rules

Google has avoided the worst-case scenario in the pivotal search antitrust case brought by the US Department of Justice. DC District Court Judge Amit Mehta has ruled that Google doesn’t have to give up the Chrome browser to mitigate its illegal monopoly in online search. The court will only require a handful of modest behavioral remedies, forcing Google to release some search data to competitors and limit its ability to make exclusive distribution deals.

More than a year ago, the Department of Justice (DOJ) secured a major victory when Google was found to have violated the Sherman Antitrust Act. The remedy phase took place earlier this year, with the DOJ calling for Google to divest the market-leading Chrome browser. That was the most notable element of the government’s proposed remedies, but it also wanted to explore a spin-off of Android, force Google to share search technology, and severely limit the distribution deals Google is permitted to sign.

Mehta has decided on a much narrower set of remedies. While there will be some changes to search distribution, Google gets to hold onto Chrome. The government contended that Google’s dominance in Chrome was key to its search lock-in, but Google claimed no other company could hope to operate Chrome and Chromium like it does. Mehta has decided that Google’s use of Chrome as a vehicle for search is not illegal in itself, though. “Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints,” the ruling reads.

Break up the company without touching the sides and getting shocked! Credit: Aurich Lawson

Google’s proposed remedies were, unsurprisingly, much more modest. Google fully opposed the government’s Chrome penalties, but it was willing to accept some limits to its search deals and allow Android OEMs to choose app preloads. That’s essentially what Mehta has ruled. Under the court’s ruling, Google will still be permitted to pay for search placement—those multi-billion-dollar arrangements with Apple and Mozilla can continue. However, Google cannot require any of its partners to distribute Search, Chrome, Google Assistant, or Gemini. That means Google cannot, for example, make access to the Play Store contingent on bundling its other apps on phones.

Google won’t have to sell Chrome, judge rules Read More »

a-robot-walks-on-water-thanks-to-evolution’s-solution

A robot walks on water thanks to evolution’s solution

Robots can serve pizza, crawl over alien planets, swim like octopuses and jellyfish, cosplay as humans, and even perform surgery. But can they walk on water?

Rhagobot isn’t exactly the first thing that comes to mind at the mention of a robot. Inspired by Rhagovelia water striders, semiaquatic insects also known as ripple bugs, these tiny bots can glide across rushing streams because their designers robotized an evolutionary adaptation.

Rhagovelia (as opposed to other species of water striders) have fan-like appendages toward the ends of their middle legs that passively open and close depending on how the water beneath them is moving. This is why they appear to glide effortlessly across the water’s surface. Biologist Victor Ortega-Jimenez of the University of California, Berkeley, was intrigued by how such tiny insects can accelerate and pull off rapid turns and other maneuvers, almost as if they are flying across a liquid surface.

“Rhagovelia’s fan serves as an inspiring template for developing self-morphing artificial propellers, providing insights into their biological form and function,” he said in a study recently published in Science. “Such configurations are largely unexplored in semi-aquatic robots.”

Mighty morphin’

It took Ortega-Jimenez five years to figure out how the bugs get around. While Rhagovelia leg fans were thought to morph because they were powered by muscle, he found that the appendages automatically adjusted to the surface tension and elastic forces beneath them, passively opening and closing ten times faster than it takes to blink. They expand immediately when making contact with water and change shape depending on the flow.

By covering an extensive surface area for their size and maintaining their shape when the insects move their legs, Rhagovelia fans generate a tremendous amount of propulsion. They also do double duty. Despite being rigid enough to resist deformation when extended, the fans are still flexible enough to easily collapse, adhering to the claw above to keep from getting in the animal’s way when it’s out of water. It also helps that the insects have hydrophobic legs that repel water that could otherwise weigh them down.

Ortega-Jimenez and his research team observed the leg fans using a scanning electron microscope. If they were going to create a robot based on ripple bugs, they needed to know the exact structure they were going for. After experimenting with cylindrical fans, the researchers found that Rhagovelia fans are actually structures made of many flat barbs with barbules, something that was previously unknown.

A robot walks on water thanks to evolution’s solution Read More »

chinese-ev-buyers-are-cooling-on-tesla-and-byd

Chinese EV buyers are cooling on Tesla and BYD

Meanwhile, BYD now accounts for 1.1 percent of all new cars sold in the European Union.

There is one bright spot for Tesla, however—it sold 8,370 cars in Turkey in August, making it that country’s second-most popular automaker.

Robots will save Tesla?

But perhaps Tesla shareholders shouldn’t worry about cratering sales. On Monday night, Tesla CEO Elon Musk used his social media network to yet again prophesy that the company’s future is not cars. Despite the fact that selling cars brings in 75 percent of the revenue and is responsible for the carbon credits that keep the company in the black, EVs are but a mere distraction. Instead, Musk claims that 80 percent of Tesla’s value will come from selling humanoid robots.

Musk has been promoting Tesla’s humanoid robot for some years now, with flashy demos that, instead of actual robotics, were waldos in action, mindlessly copying the motions of human controllers who were operating them remotely.

Despite the very non-humanoid shape of industrial robots in car factories, Musk has said the Tesla robots will find their way onto the company’s production line to build cars, presumably to replace workers whom he would otherwise have to pay salaries and benefits. But the CEO has grander ambitions for his robots, claiming on an investor call last year that the company will sell billions of humanoid robots a year.

Chinese EV buyers are cooling on Tesla and BYD Read More »

the-fight-against-labeling-long-term-streaming-rentals-as-“purchases”-you-“buy”

The fight against labeling long-term streaming rentals as “purchases” you “buy”

Words have meaning. Proper word selection is integral to strong communication, whether it’s about relaying one’s feelings to another or explaining the terms of a deal, agreement, or transaction.

Language can be confusing, but typically when something is available to “buy,” ownership of that good or access to that service is offered in exchange for money. That’s not really the case, though, when it comes to digital content.

Often, streaming services like Amazon Prime Video offer customers the option to “rent” digital content for a few days or to “buy” it. Some might think that picking “buy” means that they can view the content indefinitely. But these purchases are really just long-term licenses to watch the content for as long as the streaming service has the right to distribute it—which could be for years, months, or days after the transaction.

A lawsuit [PDF] recently filed against Prime Video challenges this practice and accuses the streaming service of misleading customers by labeling long-term rentals as purchases. The conclusion of the case could have implications for how streaming services frame digital content.

New lawsuit against Prime Video

On August 21, Lisa Reingold filed a proposed class-action lawsuit in the US District Court for the Eastern District of California against Amazon, alleging “false and misleading advertising.” The complaint, citing Prime Video’s terms of use, reads:

On its website, Defendant tells consumers the option to ‘buy’ or ‘purchase’ digital copies of these audiovisual works. But when consumers ‘buy’ digital versions of audiovisual works through Amazon’s website, they do not obtain the full bundle of sticks of rights we traditionally think of as owning property. Instead, they receive ‘non-exclusive, nontransferable, non-sublicensable, limited license’ to access the digital audiovisual work, which is maintained at Defendant’s sole discretion.

The complaint compares buying a movie from Prime Video to buying one from a physical store. It notes that someone who buys a DVD can view the movie a decade later, but “the same cannot be said,” necessarily, if they purchased the film on Prime Video. Prime Video may remove the content or replace it with a different version, such as a shorter theatrical cut.

The fight against labeling long-term streaming rentals as “purchases” you “buy” Read More »

rocket-report:-spacex-achieved-daily-launch-this-week;-ula-recovers-booster

Rocket Report: SpaceX achieved daily launch this week; ULA recovers booster


Firefly Aerospace reveals why its Alpha booster exploded after launch in April.

Starship and its Super Heavy booster ascend through a clear sky over Starbase, Texas, on Tuesday evening. A visible vapor cone enveloped the rocket as it passed through maximum aerodynamic pressure and the speed of sound. Credit: Stephen Clark/Ars Technica

Welcome to Edition 8.08 of the Rocket Report! What a week it’s been for SpaceX. The company completed its first successful Starship test flight in nearly a year, and while it wasn’t perfect, it sets up SpaceX for far more ambitious tests ahead. SpaceX’s workhorse rocket, the Falcon 9, launched six times since our last edition of the Rocket Report. Many of these missions were noteworthy in their own right, including the launch of the US military’s X-37B spaceplane, an upgraded Dragon capsule to boost the International Space Station to a higher orbit, and the record 30th launch and landing of a flight-proven Falcon 9 booster. All told, that’s seven SpaceX launches in seven days.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Firefly announces cause of Alpha launch failure. Firefly Aerospace closed the investigation into the failure of one of its Alpha rockets during an April mission for Lockheed Martin and received clearance from the FAA to resume launches, Payload reports. The loss of the launch vehicle was a dark cloud hanging over the company’s otherwise successful IPO this month. The sixth flight of Firefly’s Alpha rocket launched in April from Vandenberg Space Force Base, California, and failed when its first stage booster broke apart milliseconds after stage separation. This created a shockwave that destroyed the engine nozzle extension on the second stage and damaged its engine; the second stage then ran out of propellant seconds before it attained orbital velocity. Both stages ultimately fell into the Pacific Ocean.

Too much stress … Investigators concluded that “plume induced flow separation” caused the failure. The phenomenon occurs when a rocket’s exhaust disrupts airflow around the vehicle in flight. In this case, Firefly said the rocket was flying at a higher angle of attack than prior missions, which resulted in the flow separation and created intense heat that broke the first stage apart just after it jettisoned from the second stage. Firefly will increase heat shielding on the first stage of the rocket and fly at reduced angles of attack on future missions. Alpha has now launched six times since 2021, with only two complete successes. Firefly said it was working on setting a date for the seventh Alpha launch. (submitted by EllPeaTea)

ESA books a ticket on European launchers. The European Space Agency has awarded launch service contracts to Avio and Isar Aerospace under its Flight Ticket Initiative, European Spaceflight reports. Announced in October 2023, the Flight Ticket Initiative is a program run jointly by ESA and the European Union that offers subsidized flight opportunities for European companies and organizations seeking to demonstrate new satellite technologies in orbit. The initiative is part of ESA’s strategy to foster the continent’s commercial space industry, offering institutional funding to support satellite and launch companies. Avio won contracts to launch three small European space missions as secondary payloads on Vega C rockets flying into low-Earth orbit. Isar Aerospace will launch two small satellite missions to orbit for European companies.

No other options … Avio and Isar Aerospace were the obvious contenders for the Flight Ticket Initiative from a pool of five European companies eligible for launch awards. The other companies, PLD Space, Orbex, and Rocket Factory Augsburg, haven’t launched their orbital-class rockets yet. Avio, based in Italy, builds the now-operational Vega C rocket, and Germany’s Isar Aerospace launched its first Spectrum rocket earlier this year, but it failed to reach orbit. Avio’s selection replaces Arianespace, which was originally part of the Flight Ticket Initiative. Arianespace was previously responsible for marketing and sales for the Vega rocket, but ESA transferred its Flight Ticket Initiative eligibility to Avio following its split from Arianespace. (submitted by EllPeaTea)

Canadian rocket company ready for launch. NordSpace is preparing to launch its 6-meter tall Taiga rocket from Newfoundland, CBC reports. It will be a suborbital launch, meaning it won’t orbit Earth, but NordSpace says the launch will be the first of a Canadian commercial rocket from a Canadian commercial spaceport. The rocket is powered by a 3D-printed liquid-fueled engine and is a stepping stone to an orbital-class rocket NordSpace is developing called Tundra, scheduled to debut in 2027. The smaller Taiga rocket will launch partially fueled and fire its engine for approximately 60 seconds, according to NordSpace.

Newfoundland to space … The launch site, called the Atlantic Spaceport Complex, is located on the Atlantic coast near the town of St. Lawrence, Newfoundland. It will have two launch pads, one for suborbital flights like Taiga, and another for orbital missions by the Tundra rocket and other launch vehicles from US and European companies. The Taiga launch is scheduled no earlier than Friday morning at 5:00 am EDT (09:00 UTC). NordSpace says it is a “fully privately funded and managed initiative crucial for Canada to build a space launch capability that supports our security, economy, and sovereignty.” (submitted by Matthew P)

SpaceX’s reuse idea isn’t so dumb after all. A Falcon 9 rocket launched early Thursday from Kennedy Space Center, Florida, with another batch of Starlink Internet satellites. These types of missions launch multiple times per week, but this flight was special. The first stage of the Falcon 9, designated Booster 1067, launched and landed on a drone ship in the Atlantic Ocean, completing its 30th flight to space and back, Ars reports. This is a new record for a reusable orbital-class booster stage and comes less than 24 hours after a preceding SpaceX launch from Florida that marked the 400th Falcon 9 landing on a drone ship since the first offshore recovery in 2016.

30 going for 40 … SpaceX is now aiming for at least 40 launches per Falcon 9 first stage, four times as many flights as the company’s original target for Falcon 9 booster reuse. Many people in the industry were skeptical about SpaceX’s approach to reuse. In the mid-2010s, both the European and Japanese space agencies were developing their next generation of rockets, and in both cases, Europe with the Ariane 6 and Japan with the H3, the agencies opted for traditional, expendable designs instead of pushing toward reuse. In the United States, the main competitor to SpaceX has historically been United Launch Alliance, whose reaction to SpaceX’s plan to reuse first stages a decade ago was dismissive. ULA dubbed its plan to reuse just the engine section of its Vulcan rocket “Smart Reuse” a few years ago, but it hasn’t yet attempted to recover the engines from the Vulcan core stage, and reuse is still at least several years away.

Russia nears debut of Soyuz-5 rocket. In recent comments to the Russian state-run media service TASS, the chief of Roscosmos said the country’s newest rocket, the Soyuz-5, should take flight for the first time before the end of this year, Ars reports. “Yes, we are planning for December,” said Dmitry Bakanov, the director of Roscosmos, Russia’s main space corporation. “Everything is in place.” According to the report, translated for Ars by Rob Mitchell, the debut launch of Soyuz-5 will mark the first of several demonstration flights, with full operational service not expected to begin until 2028. It will launch from the Baikonur spaceport in Kazakhstan.

Breaking free of Ukraine … From an innovation standpoint, the Soyuz-5 vehicle does not stand out. The rocket, also named Irtysh after a river that flows through Russia and Kazakhstan, has been in development since 2016, largely repurposes older technology, and is fully expendable, unlike many of the newer medium-lift rockets coming online in the next several years. For Russia, however, it is an important advancement: it takes rocket elements formerly made in Ukraine and manufactures them in Russia, breaking some of the country’s dependency on Ukraine for launch technology.

SpaceX launches mission to reboost the ISS. SpaceX completed its 33rd cargo delivery to the International Space Station (ISS) early Monday, when a Dragon supply ship glided to an automated docking with more than 5,000 pounds of scientific experiments and provisions for the lab’s seven-person crew, Ars reports. The resupply flight is part of the normal rotation of cargo and crew missions that keep the space station operating, but this one carries something new. What’s different with this mission is a new rocket pack mounted inside the Dragon spacecraft’s rear trunk section. In the coming weeks, SpaceX and NASA will use this first-of-its-kind propulsion system to begin boosting the altitude of the space station’s orbit.

A rocket on a rocket … SpaceX engineers installed two small Draco rocket engines in the trunk of the Dragon spacecraft. The thrusters have their own dedicated propellant tanks and will operate independently of 16 other Draco thrusters used to maneuver Dragon on its journey to the ISS. When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton space station. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed. Maintaining the space station’s orbit has previously been the responsibility of Russia.
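For a rough sense of scale, here is a back-of-envelope sketch of the arithmetic behind those figures. It uses only the numbers in this story (a roughly 450-ton station and about 9 meters per second of added velocity) plus an assumed thrust of around 400 newtons per Draco-class thruster, which is an illustrative assumption rather than an official figure for this mission.

```python
# Back-of-envelope reboost arithmetic using the figures cited above.
# Assumptions (not official SpaceX/NASA numbers): ~450 metric tons of station mass,
# ~9 m/s of total added velocity, and ~400 N of thrust per Draco-class engine.

STATION_MASS_KG = 450_000      # ~450 metric tons
DELTA_V_MS = 9.0               # ~20 mph, per the article
THRUST_PER_ENGINE_N = 400      # assumed thrust for a Draco-class thruster
NUM_ENGINES = 2                # two Draco engines in the trunk-mounted kit

# Sanity-check the mph conversion: 9 m/s is about 20 mph
mph = DELTA_V_MS * 3600 / 1609.344
print(f"{DELTA_V_MS} m/s ≈ {mph:.1f} mph")

# Total impulse required to change the station's velocity: J = m * Δv
impulse_ns = STATION_MASS_KG * DELTA_V_MS
print(f"Required impulse: {impulse_ns / 1e6:.2f} MN·s")

# Cumulative firing time at the assumed thrust (in practice spread across many short burns)
burn_seconds = impulse_ns / (THRUST_PER_ENGINE_N * NUM_ENGINES)
print(f"Cumulative burn time: {burn_seconds / 3600:.1f} hours")
```

Under these assumed numbers, the full 9 m/s works out to roughly 4 million newton-seconds of impulse and on the order of an hour or two of cumulative firing, consistent with the gentle acceleration described above.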

X-37B rides with SpaceX again. The US military’s reusable winged spaceship rocketed back into orbit from Florida on August 21 atop a SpaceX rocket, kicking off a mission that will, among other things, demonstrate how future spacecraft can navigate without relying on GPS signals, Ars reports. The core of the navigation experiment is what the Space Force calls the “world’s highest performing quantum inertial sensor ever used in space.” The spaceplane also hosts a laser inter-satellite communications demo. This is the eighth flight of the X-37B spaceplane, and the third to launch with SpaceX.

Back to LEO … This mission launched on a Falcon 9 rocket into low-Earth orbit (LEO) a few hundred miles above the Earth. This marks a return to LEO after the previous X-37B mission flew on a Falcon Heavy rocket into a much higher orbit. Many of the spaceplane’s payloads have been classified, but officials typically identify a handful of unclassified experiments flying on each X-37B mission. Past X-37B missions have also deployed small satellites into orbit before returning to Earth for a runway landing at Kennedy Space Center, Florida, or Vandenberg Space Force Base, California.

Rocket Lab cuts the ribbon on Neutron launch pad. Launch Complex 3 at the Virginia Spaceport Authority’s Mid-Atlantic Regional Spaceport, the home of Rocket Lab’s newest reusable rocket, Neutron, is now complete and celebrated its official opening Thursday, WAVY-TV reports. Officials said Launch Complex 3 will bring the largest orbital launch capacity in the spaceport’s history. Neutron is a medium-lift, reusable launch vehicle capable of carrying 33,000 pounds (15 metric tons) to space for commercial constellations, national security, and interplanetary missions.

Not budging … “We’re trying as hard as we can to get this on the pad by the end of the year and get it away,” said Peter Beck, Rocket Lab’s founder and CEO. Beck is holding to his hope the Neutron rocket will be ready to fly in the next four months, but time is running out to make this a reality. The Neutron rocket will be Rocket Lab’s second orbital-class launch vehicle after the Electron, which can place payloads of several hundred pounds in orbit. Electron has a launch pad in Virginia, too, but most Electron rockets take off from New Zealand.

Starship completes a largely successful test flight. SpaceX launched the 10th test flight of the company’s Starship rocket Tuesday evening, sending the stainless steel spacecraft halfway around the world to an on-target splashdown in the Indian Ocean, Ars reports. The largely successful mission for the world’s largest rocket was an important milestone for SpaceX’s Starship program after months of repeated setbacks, including three disappointing test flights and a powerful explosion on the ground that destroyed the ship that engineers were originally readying for this launch.

Lessons to learn … For the first time, SpaceX engineers received data on the performance of the ship’s upgraded heat shield and control flaps during reentry into the atmosphere. The three failed Starship test flights to start the year ended before the ship reached reentry. Elon Musk, SpaceX’s founder and CEO, has described developing a durable, reliable heat shield as the most pressing challenge for making Starship a fully and rapidly reusable rocket. But there were lessons to learn from Tuesday’s flight. A large section of the ship transitioned from its original silver color to a rusty hue of orange and brown by the time it reached the Indian Ocean. Officials didn’t immediately address this or say whether it was anticipated.

ULA recovering boosters, too. United Launch Alliance decided to pull four strap-on solid rocket boosters from the Atlantic Ocean after their use on the company’s most recent launch. Photos captured by Florida photographer Jerry Pike showed a solid rocket motor casing on a ship just off the coast of Cape Canaveral. Tory Bruno, ULA’s president and CEO, wrote on X that the booster was one of four flown on the USSF-106 mission earlier this month, which marked the third flight of ULA’s Vulcan rocket and the first with a US national security payload.

A GEM from the sea … The boosters, built by Northrop Grumman, are officially called Graphite Epoxy Motors, or GEMs. They jettison from the Vulcan rocket less than two minutes after liftoff and fall into the ocean. They’re not designed for reuse, but ULA decided to recover this set of four from the Atlantic for inspections. The company also raised two motors from the sea after the previous Vulcan launch last year, when one of them suffered a nozzle failure during launch. Bruno wrote on X that “performance and ballistics were spot on” with all four boosters from the more recent USSF-106 mission, but engineers decided to recover them anyway to close out a “nice data set” from inspections of six recovered motors: two from last year and four from this year.

Next three launches

Aug. 30: Falcon 9 | Starlink 17-7 | Vandenberg Space Force Base, California | 03:09 UTC

Aug. 31: Falcon 9 | Starlink 10-14 | Cape Canaveral Space Force Station, Florida | 11:15 UTC

Sept. 3: Falcon 9 | Starlink 17-8 | Vandenberg Space Force Base, California | 02:33 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: SpaceX achieved daily launch this week; ULA recovers booster Read More »