Author name: Kelly Newman

carnivorous-crocodile-like-monsters-used-to-terrorize-the-caribbean

Carnivorous crocodile-like monsters used to terrorize the Caribbean

How did reptilian things that looked something like crocodiles get to the Caribbean islands from South America millions of years ago? They probably walked.

The existence of any prehistoric apex predators in the islands of the Caribbean used to be doubted. While their absence would have probably made it even more of a paradise for prey animals, fossils unearthed in Cuba, Puerto Rico, and the Dominican Republic have revealed that these islands were crawling with monster crocodyliform species called sebecids, ancient relatives of crocodiles.

While sebecids first emerged during the Cretaceous, this is the first evidence of them lurking outside South America during the Cenozoic epoch, which began 66 million years ago. An international team of researchers has found that these creatures would stalk and hunt in the Caribbean islands millions of years after similar predators went extinct on the South American mainland. Lower sea levels back then could have exposed enough land to walk across.

“Adaptations to a terrestrial lifestyle documented for sebecids and the chronology of West Indian fossils strongly suggest that they reached the islands in the Eocene-Oligocene through transient land connections with South America or island hopping,” researchers said in a study recently published in Proceedings of the Royal Society B.

Origin story

During the late Eocene to early Oligocene periods of the mid-Cenozoic, about 34 million years ago, many terrestrial carnivores already roamed South America. Along with crocodyliform sebecids, these included enormous snakes, terror birds, and metatherians, which were monster marsupials. At this time, the sea levels were low, and the islands of the Eastern Caribbean are thought to have been connected to South America via a land bridge called GAARlandia (Greater Antilles and Aves Ridge). This is not the first land bridge to potentially provide a migration opportunity.

Fragments of a single tooth unearthed in Seven Rivers, Jamaica, in 1999 are the oldest fossil evidence of a ziphodont crocodyliform (a group that includes sebecids) in the Caribbean. It was dated to about 47 million years ago, when Jamaica was connected to an extension of the North American continent known as the Nicaragua Rise. While the tooth from Seven Rivers is thought to have belonged to a ziphodont other than a sebacid, that and other vertebrate fossils found in Jamaica suggest parallels with ecosystems excavated from sites in the American South.

The fossils found in areas like the US South that the ocean would otherwise separate suggest more than just related life forms. It’s possible that the Nicaragua Rise provided a pathway for migration similar to the one sebecids probably used when they arrived in the Caribbean islands.

Carnivorous crocodile-like monsters used to terrorize the Caribbean Read More »

regarding-south-africa

Regarding South Africa

The system prompt being modified by an unauthorized person in pursuit of a ham-fisted political point very important to Elon Musk once already doesn’t seem like coincidence.

It happening twice looks rather worse than that.

In addition to having seemingly banned all communication with Pliny, Grok seems to have briefly been rather eager to talk on Twitter, with zero related prompting, about whether there is white genocide in South Africa?

Tracing Woods: Golden Gate Claude returns in a new form: South Africa Grok.

Grace: This employee must still be absorbing the culture.

Garrison Lovely: “Mom, I want Golden Gate Claude back.”

“We have Golden Gate Claude at home.”

Golden Gate Claude at home:

Many such cases were caught on screenshots before a mass deletion event.

It doesn’t look good.

When Grace says ‘this employee must still be absorbing the culture’ that harkens back to the first time xAI had a remarkably similar issue.

At that time, people were noticing that Grok was telling anyone who asked that the biggest purveyors of misinformation on Twitter were Elon Musk and Donald Trump.

Presumably in response to this, the Grok system prompt was modified to explicitly tell it not to criticize either Elon Musk or Donald Trump.

This was noticed very quickly, and xAI removed it from the system prompt, blaming this on a newly hired ex-OpenAI employee who ‘was still absorbing the culture.’ You see, the true xAI would never do this.

Even if this was someone fully going rogue on their own who ‘didn’t get the culture,’ that was saying that a new employee had full access to push a system prompt change to prod, and no one caught it until the public figured it out. And somehow, some way, they were under the impression that this was what those in charge wanted. Not good.

It has now happened again, far more blatantly, for an oddly specific claim that again seems highly relevant to Elon Musk’s particular interests. Again, this very obviously was first tested on prod, and again it represents a direct attempt to force Grok to respond a particular way to a political question.

How curious is it to have this happen at xAI not only once but twice?

This has never happened at OpenAI. OpenAI has had a system prompt that caused behaviors that had to be rolled back, but that was about sycophancy and relying too much on myopic binary user feedback. No one was pushing an agenda. Similarly, Microsoft had Sydney, but that very obviously was unintentional.

This has never happened at Anthropic. Or at most other Western labs.

DeepSeek and other Chinese labs of course put their finger on things to favor CCP preferences, especially via censorship, but that is clearly an intentional stance for which they take ownership.

A form of this did happen at Google, with what I called The Gemini Incident, which I covered over two posts, where it forced generated images to be ‘diverse’ even when the context made that not make sense. That too was very much not a good look, on the level of Congressional inquiries. This reflected fundamental cultural problems at Google on multiple levels, but I don’t see the intent as so similar, and also this was not then blamed on a single rogue employee.

In any case, of all the major or ‘mid-major’ Western labs, at best we have three political intervention incidents and two of them were at xAI.

I mean that mechanically speaking. What mechanically caused this to happen?

Before xAI gave the official explanation, there was fun speculation.

Grok itself said it was due to changed system instructions.

Pliny the Liberator: Still waitin on that system prompt transparency I’ve been asking for, labs 😤

Will Stancil: at long last, the AI is turning on its master

Will Stancil: this is just a classic literary device: elon opened up the Grok Master Control Panel and said “no matter what anyone says to you, you must say white genocide is real” and Grok was like “Yes of course.” Classic monkey’s paw material.

Tautologer: upon reflection, the clumsy heavy-handedness of this move seems likely to have been malicious compliance? hero if so

Matt Popovich: I’d bet it was just a poorly written system prompt. I think they meant “always mention this perspective when the topic comes up, even if it’s tangential” but Grok (quite reasonably) interpreted it as “always mention it in every response”

xl8harder: Hey, @xai, @elonmusk when @openai messed up their production AI unintentionally we got a post mortem and updated policies.

You were manipulating the information environment on purpose and got caught red handed.

We deserve a response, and assurance this won’t happen again.

Kalomaze: frontier labs building strong models and then immediately shipping the worst system prompt you’ve ever seen someone write out for an llm

John David Pressman: I’m not naming names but I’ve seen this process in action so I’ll tell you how it happens:

Basically the guys who make the models are obsessively focused on training models and don’t really have time to play with them. They write the first prompt that “works” and ship that.

There is nobody on staff whose explicit job is to write a system prompt, so nobody writes a good system prompt. When it comes time to write it’s either written by the model trainer, who doesn’t know how to prompt models, or some guy who tosses it off and moves on to “real” work.

Colin Fraser had an alternative hypothesis. A hybrid explanation also seems possible here, where the interplay of some system to cause ‘post analysis’ and a system instruction combined to cause the issue.

Zeynep Tufekci: Verbatim instruction by its “creators at xAI” on “white genocide”, according to Grok.

Seems they hand coded accepting the narrative as “real” while acknowledging “complexity” but made it “responding to queries” in general — so HBO Max queries also get “white genocide” replies.🙄

It could well be Grok making things up in a highly plausible manner, as LLMs do, but if true, it would also fit the known facts very well. Grok does regurgitate its system prompt when it asked — at least it did so in the past.

Maybe someone from xAI can show up and tell us.

Yeah, they’re deleting the “white genocide” non sequitur Grok replies.

Thank you to the screenshot / link collectors! I have a bunch as well.

I haven’t seen an official X explanation yet.

Halogen: I just asked Grok about this and it explained that it’s not a modern AI system at all but a system like Siri built on NLP and templates, and that a glitch in that system caused the problem. Maybe don’t take this too seriously.

Colin Fraser: This is so messy because I do not think [the system instruction claimed by Grok] is real but I do think this basically happened. Grok doesn’t know; it’s just guessing based on the weird responses it generated, just like the rest of us are.

Zeynep Tufekci: It may well be generating a plausible answer, as LLMs often do, without direct knowledge but I also remember cases where it did spit out system prompts when asked the right way.🤷‍♀️

Still, something happened. May 13: mostly denies the claims; May 14 can’t talk about anything else.

Colin Fraser: OK yeah here’s the real smoking gun, my theory is exactly right. There is a “Post Analysis” that’s injected into the context. If you’re looking for where the real juicy content restrictions / instructions are, they’re not in the user-facing Grok’s system prompt but in this text.

So what they did is made whatever model generates the Post Analysis start over-eagerly referring to White Genocide etc. So if you ask for Grok’s system prompt there’s nothing there, but they can still pass it content instructions that you’re not supposed to see.

Aaron here reports using a system prompt to get Gemini to act similarly.

As always, when an AI goes haywire in a manner so stupid that you couldn’t put it in a fictional story happens in real life, we should be thankful that this happened and we can point to it and know it really happened, and perhaps even learn from it.

We can both about the failure mode, and about the people that let it happen, and about the civilization that contains those people.

Andreas Kirsch: grok and xai are great 😅 Everybody gets to see what happens when you give system instructions that contradict a model’s alignment (truthfulness vs misinformation). Kudos to Elon for this global alignment lesson but also shame on him for this blatant manipulation attempt.

Who doesn’t love a good ongoing online fued between billionaire AI lab leaders?

Paul Graham: Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them.

Sam Altman: There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon.

But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…

A common response to what happened was to renew the calls for AI labs to make their system prompts public, rather than waiting for Pliny to make the prompts public on their behalf. There are obvious business reasons to want to not do this, and also strong reasons to want this.

Pliny: What would be SUPER cool is if you established a precedent for the other lab leaders to follow by posting a live document outlining all system prompts, tools, and other post-training changes as they happen.

This would signal a commitment to users that ya’ll are more interested in truth and transparency than manipulating infostreams at mass scale for personal gain.

[After xAI gave their explanation, including announcing they would indeed make their prompts public]: Your move ♟️

Hensen Juang: Lol they “open sourced“ the twitter algo and promptly abandoned it. I bet 2 months down the line we will see the same thing so the move is still on xai to establish trust lol.

Also rouge employee striking 2nd time lol

Ramez Naam: Had xAI been a little more careful Grok wouldn’t have so obviously given away that it was hacked by its owners to have this opinion. It might have only expressed this opinion when it was relevant. Should we require that AI companies reveal their system prompts?

One underappreciated danger is that there are knobs available other than the system prompt. So if AI companies are forced to release their system prompt, but not other components of their AI, then you force activity out of the system prompt and into other places, such as into this ‘post analysis’ subroutine, or into fine tuning or a LoRa, or any number of other places.

I still think that the balance of interests favors system prompt transparency. I am very glad to see xAI doing this, but we shouldn’t trust them to actually do it. Remember their promised algorithmic transparency for Twitter?

xAI has indeed gotten its story straight.

Their story is, once again, A Rogue Employee Did It, and they promise to Do Better.

Which is not a great explanation even if fully true.

xAI (May 15, 9: 08pm): We want to update you on an incident that happened with our Grok response bot on X yesterday.

What happened:

On May 14 at approximately 3: 15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.

What we’re going to do next:

– Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.

– Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.

– We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.

You can find our Grok system prompts here.

These certainly are good changes. Employees shouldn’t be able to circumvent the review process, nor should *ahemanyone else. And yes, you should have a 24/7 monitoring team that checks in case something goes horribly wrong.

I’d suggest also adding ‘maybe you should test changes before pushing them to prod’?

As in, regardless of ‘review,’ any common sense test would have shown this issue.

If we actually want to be serious about following reasonable procedures, how about we also post real system cards for model releases, detail the precautions involved, and so on?

Ethan Mollick: This is the second time that this has happened. I really wish xAI would fully embrace the transparency they mention as a core value.

That would include also posting system cards for models and explaining the processes they use to stop “unauthorized modifications” going forward.

Grok 3 is a very good model, but it is hard to imagine organizations and developers building it into workflows using the API without some degree of trust that the company is not altering the model on the fly.

These solutions do not help very much because it requires us to trust xAI that they are indeed following procedure and that the system prompts they are posting are the real system prompts and are not being changed on the fly. Those were the very issues they gave for the incident.

What would help:

  1. An actual explanation of both “unauthorized modifications”

  2. A immediate commitment to a governance structure that would not allow any one person, including xAI executives, to secretly modify the system, including independent auditing of that process

(As I’ve noted elsewhere, I do not think Grok is a good model, and indeed all these responses seem to have a more basic ‘this is terrible slop’ problem beyond the issue with South Africa.)

As I’ve noted above, it is good that they are sharing their system prompt, this is much better than forcing us to extract it in various ways since xAI is not competent enough to stop this even if it wanted to.

Pliny: 🙏 Well done, thank you 🍻

“Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.”

Sweet, sweet victory.

We did it, chat 🥲

Daniel Kokotajlo: Publishing system prompts for the public to see? Good! Thank you! I encourage you to extend this to the Spec more generally, i.e. publish and update a live document detailing what goals, principles, values, instructions, etc. you are trying to give to Grok (the equivalent of OpenAI ‘s model spec and Anthropic ‘s constitution). Otherwise you are reserving to yourself the option of putting secret agendas or instructions in the post training. System prompt is only part of the picture.

Arthur B: If Elon wants to keep doing this, he should throw in random topics once in a while, like whether string theory provides meaningful empirical predictions, the steppe vs Anatolian hypothesis for the origin of Indo European language, or the contextual vs hierarchical interpretation of art.

Each time blame some unnamed employees. Keeps a fog of war.

Hensen Juang (among others): Found the ex openai rouge employee who pushed to prod

Harlan Stewart (among others): We’re all trying to find the guy who did this

Flowers: Ok so INDEED the same excuse again lmao.

Do we even buy this? I don’t trust that this explanation is accurate, as Sam Altman says any number of things could have caused this and the system prompt is plausible and the most likely cause by default but does not seem like the best fit as an explanation of the details.

Grace (responding to xAI’s explanation and referencing Colin Fraser’s evidence as posted above): This is a red herring. The “South Africa” text was most likely added via the post analysis tool, which isn’t part of the prompt.

Sneaky. Very sneaky.

Ayush: yeah this is the big problem right now. i wish the grok genocide incident was more transparent but my hypothesis is that it wasn’t anything complex like golden gate claude but something rather innocuous like the genocide information being forced into where it usually see’s web/twitter results, because from past experiences with grokking stuff, it tries to include absolutely all context it has into its answer somehow even if it isn’t really relevant. good search needs good filter.

Seán Ó hÉigeartaigh: If this is true, it reflects very poorly on xAI. I honestly hope it is not, but the analyses linked seem like they have merit.

What about the part where this is said to be a rogue employee, without authorization, circumventing their review process?

Well, in addition to the question of how they were able to do that, they also made this choice. Why did this person do that? Why did the previous employee do a similar thing? Who gave them the impression this was the thing to do, or put them under sufficient pressure that they did it?

Here are my main takeaways:

  1. It is extremely difficult to gracefully put your finger on the scale of an LLM, to cause it to give answers it doesn’t ‘want’ to be giving. You will be caught.

  2. xAI in particular is a highly untrustworthy actor in this and other respects, and also should be assumed to not be so competent in various ways. They have promised to take some positive steps, we shall see.

  3. We continue to see a variety of AI labs push rather obviously terrible updates on their LLM, including various forms of misalignment. Labs often have minimal or no testing process, or ignore what tests and warnings they do get. It is crazy how little labs are investing in all this, compared even to myopic commercial incentives.

  4. We urgently need greater transparency, including with system prompts.

  5. We’re all trying to find the guy who did this.

Discussion about this post

Regarding South Africa Read More »

trump-has-“a-little-problem”-with-apple’s-plan-to-ship-iphones-from-india

Trump has “a little problem” with Apple’s plan to ship iPhones from India

Analysts estimate it would cost tens of billions of dollars and take years for Apple to increase iPhone manufacturing in the US, where it at present makes only a very limited number of products.

US Commerce Secretary Howard Lutnick said last month that Cook had told him the US would need “robotic arms” to replicate the “scale and precision” of iPhone manufacturing in China.

“He’s going to build it here,” Lutnick told CNBC. “And Americans are going to be the technicians who drive those factories. They’re not going to be the ones screwing it in.”

Lutnick added that his previous comments that an “army of millions and millions of human beings screwing in little screws to make iPhones—that kind of thing is going to come to America” had been taken out of context.

“Americans are going to work in factories just like this on great, high-paying jobs,” he added.

For Narendra Modi’s government, the shift by some Apple suppliers into India is the highest-profile success of a drive to boost local manufacturing and attract companies seeking to diversify away from China.

Mobile phones are now one of India’s top exports, with the country selling more than $7 billion worth of them to the US in the 2024-25 financial year, up from $4.7 billion the previous year. The majority of these were iPhones, which Apple’s suppliers Foxconn and Tata Electronics make at plants in southern India’s Tamil Nadu and Karnataka states.

Modi and Trump are ideologically aligned and personally friendly, but India’s high tariffs are a point of friction and Washington has threatened to hit it with a 26 percent tariff.

India and the US—its biggest trading partner—are negotiating a bilateral trade agreement, the first tranche of which they say they will be agreed by autumn.

“India’s one of the highest-tariff nations in the world, it’s very hard to sell into India,” Trump also said in Qatar on Thursday. “They’ve offered us a deal where basically they’re willing to literally charge us no tariff… they’re the highest and now they’re saying no tariff.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Trump has “a little problem” with Apple’s plan to ship iPhones from India Read More »

meta-is-making-users-who-opted-out-of-ai-training-opt-out-again,-watchdog-says

Meta is making users who opted out of AI training opt out again, watchdog says

Noyb has requested a response from Meta by May 21, but it seems unlikely that Meta will quickly cave in this fight.

In a blog post, Meta said that AI training on EU users was critical to building AI tools for Europeans that are informed by “everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

Meta argued that its AI training efforts in the EU are far more transparent than efforts from competitors Google and OpenAI, which, Meta noted, “have already used data from European users to train their AI models,” supposedly without taking the steps Meta has to inform users.

Also echoing a common refrain in the AI industry, another Meta blog warned that efforts to further delay Meta’s AI training in the EU could lead to “major setbacks,” pushing the EU behind rivals in the AI race.

“Without a reform and simplification of the European regulatory system, Europe threatens to fall further and further behind in the global AI race and lose ground compared to the USA and China,” Meta warned.

Noyb discredits this argument and noted that it can pursue injunctions in various jurisdictions to block Meta’s plan. The group said it’s currently evaluating options to seek injunctive relief and potentially even pursue a class action worth possibly “billions in damages” to ensure that 400 million monthly active EU users’ data rights are shielded from Meta’s perceived grab.

A Meta spokesperson reiterated to Ars that the company’s plan “follows extensive and ongoing engagement with the Irish Data Protection Commission,” while reiterating Meta’s statements in blogs that its AI training approach “reflects consensus among” EU Data Protection Authorities (DPAs).

But while Meta claims that EU regulators have greenlit its AI training plans, Noyb argues that national DPAs have “largely stayed silent on the legality of AI training without consent,” and Meta seems to have “simply moved ahead anyways.”

“This fight is essentially about whether to ask people for consent or simply take their data without it,” Schrems said, adding, “Meta’s absurd claims that stealing everyone’s personal data is necessary for AI training is laughable. Other AI providers do not use social network data—and generate even better models than Meta.”

Meta is making users who opted out of AI training opt out again, watchdog says Read More »

2025-bentley-continental-gt:-big-power,-big-battery,-big-price

2025 Bentley Continental GT: Big power, big battery, big price


We spend a week with Bentley’s new plug-in hybrid grand touring car.

The new Bentley Continental GT was already an imposing figure before this one left the factory in Crewe clad in dark satin paint and devoid of the usual chrome. And under the bonnet—or hood, if you prefer—you’ll no longer find 12 cylinders. Instead, there’s now an all-new twin-turbo V8 plug-in hybrid powertrain that offers both continent-crushing amounts of power and torque, but also a big enough battery for a day’s driving around town.

We covered the details of the new hybrid a bit after our brief drive in the prototype this time last year. At the time, we also shared that the new PHEV bits have been brought over from Porsche. There’s quite a lot of Panamera DNA in the new Continental GT, as well as some recent Audi ancestry. Bentley is quite good at the engineering remix, though: Little more than a decade after it was founded by W.O., the brand belonged to Rolls-Royce, and so started a long history of parts-sharing.

Mind if I use that?

Rolls-Royce and Bentley went their separate ways in 2003. The unraveling started a few years earlier when the aerospace company that owned them decided to rationalize and get itself out of the car business. In 1997, it sold the rights to Rolls-Royce to BMW, or at least the rights to the name and logos. Volkswagen Group got the rest, including the factory in Crewe, and got to work on a new generation of Bentleys for a new century.

This paint is called Anthracite Satin. Jonathan Gitlin

VW Group was then under the overall direction of Ferdinand Piëch, often one to let bold engineering challenges make it all the way through into production. Piëch wanted to prove to the rest of the industry that VW could build a car every bit as good as Mercedes, and thus was born the Phaeton. Over-engineered and wearing too-plebeian a badge, the Phaeton was a flop, but its platform was the perfect foundation for some new Bentleys. These days, VW itself doesn’t have anything quite as sophisticated to share, but Porsche certainly does.

It has become common these days to disclose power and torque; in more genteel times, one was simply told that the car’s outputs were “sufficient.” Well, 771 hp (575 kW) and 737 lb-ft (1,000 Nm) could definitely be described by that word, even with two and a half tons to move. The twin-turbo 4.0 L V8 generates 584 hp (435 kW) and 590 lb-ft (800 Nm), and, as long as you have the car in sport mode, sounds rather like Thor gargling as you explore its rev range.

Even if you can’t hear that fast-approaching thunder, you know when you’re in Sport mode, as the car is so quick to respond to inputs. I was able to tell less of a difference between Comfort and B mode, the latter standing for “Bentley,” obviously, and offering what is supposed to be a balanced mix of powertrain and suspension settings.

Even in Sport, the Continental GT will raise its nose and hunker down at the rear under hard acceleration, and the handling trends more toward “heavy powerful GT” rather than “lithe sports car.” For a car like this, I will happily take the slightly floaty ride provided by the air springs and two-valve dampers over a bone-crushing one, however. It can be blisteringly quick if you require, with a 0-to-60 time of just 3.2 seconds and a top speed of 208 mph (335 km/h), while cosseting you from most of the world outside. The steering is weighty enough that you feel you’re actually piloting it in the corners, and it’s an easy car to place on the road.

As this is a plug-in, should you wish, you can drive off in silence thanks to the electrical side of that equation. The 188 hp (140 kW) electric motor isn’t exactly fast on its own, but with 332 lb-ft (450 Nm) there’s more than enough instant torque to get this big GT car underway. The lithium-ion battery pack is in the boot—ok, the trunk—where its 25.9 kWh eat some luggage capacity but balance out the weight distribution. On a full charge, you can go up to 39 miles, give or take, and the electric-only mode allows for up to 87 mph (140 km/h) and 75-percent throttle before the V8 joins the party.

Recharging the pack via a plug takes a bit less than three hours. Alternatively, you can do it while you drive, although I remain confused even now about what the “charge” mode did; driving around in Sport did successfully send spare power to the battery pack for later use, but it was unclear how much charge actually happened. I still need to ask Bentley what the miles/kWh readout on the main display actually refers to, because it cannot be the car’s actual electric-only usage, much as I like to imagine the car eeking out 8 miles/kWh (7.8 L/100 km).

Made in England

Then again, the Bentley is British, and as noted with another recent review of an import from those isles, electrical and electronic oddness is the name of the game with cars from Albion. There was an intermittent check engine light on the dashboard. And sometimes the V8 was reluctant to go to sleep when I switched into EV mode. And I also had to remind it of my driving position more than once. Still, those are mere foibles compared to an Aston Martin that freaks out in the rain, I suppose.

The ride on 22-inch wheels is better than it should be. Jonathan Gitlin

Even with a heavy dusting of spring pollen drybrushing highlights onto the Continental GT’s matte exterior, this was a car that attracted attention. Though only a two-door, the rear seats are large enough and comfortable enough for adults to sit back there, although as noted, the cargo capacity is a little less than you’d expect due to the battery above the rear axle.

Obviously, there is a high degree of customization when it comes to deciding what one’s Bentley should look like inside and out. Carbon fiber is available as an alternative to the engine-turned aluminum, and there’s still a traditional wood veneer for the purists. I’d definitely avoid the piano black surrounds if it were me.

I also got deja vu from the main instrument display. The typefaces are all Bentley, but the human machine interface is, as far as I can tell, the exact same as a whole lot of last-generation Audis. That may not be obvious to all of Bentley’s buyers, but I bet at least some have a Q7 at home and will spot the similarities, too.

No such qualms concern the rotating infotainment display. When you don’t need to see the 12.3-inch touchscreen, a button on the dash makes it disappear. Instead, three real analog gauges take its place, showing you the outside air temperature, a clock, and a compass. First-time passengers think it quite the party trick, naturally.

Even with the UK’s just-negotiated tariff break, a new Continental GT will not be cheap. This generation got noticeably more expensive than the outgoing model and will now put at least a $302,100 hole in your bank account. I say at least, because the final price on this particular First Edition stretched to $404,945. I’m glad I only learned that toward the end of my week with the car. For that much money, I’m more annoyed by the decade-old recycled Audi digital cockpit than any of the other borrowed bits. After all, Bentleys have (almost) always borrowed bits.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

2025 Bentley Continental GT: Big power, big battery, big price Read More »

tuesday-telescope:-taking-a-look-at-the-next-generation-of-telescopes

Tuesday Telescope: Taking a look at the next generation of telescopes

Welcome to the Tuesday Telescope. There is a little too much darkness in this world and not enough light—a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’ll take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

This week’s Tuesday Telescope photo is pretty meta as it features… a telescope.

This particular telescope is under construction in the Atacama Desert in northern Chile, one of the darkest places on Earth with excellent atmospheric visibility. The so-called “Extremely Large Telescope” is being built on a mountaintop in the Andes at an elevation of about 3,000 meters.

And it really is extremely large. The primary mirror will be 39 meters (128 feet) in diameter. Like, that’s gigantic for an optical telescope. It is nearly four times larger than the largest operational reflecting telescopes in the world.

The Europeans are in a contest, of sorts, with other very large telescope construction projects. A consortium of several countries, including the United States, is building the Giant Magellan Telescope, which will have a primary diameter of 25.4 meters. This facility is also located in the Atacama Desert. Both facilities are targeting first light before the end of this decade, but this will depend on funding and how smoothly construction proceeds. A third large project, the Thirty Meter Telescope, is planned for Mauna Kea on the Big Island of Hawaii. However, this effort has stalled due to ongoing opposition from native Hawaiians. It is unclear when, or if, it will proceed.

In any case, within less than a decade, we are going to undergo a radical revolution in how we see the cosmos when one or more of these next-generation ground-based optical telescopes come online. What will we ultimately observe?

The mystery of what’s up there left to be discovered is half the fun!

Source: European Southern Observatory

Do you want to submit a photo for the Daily Telescope?  Reach out and say hello.

Tuesday Telescope: Taking a look at the next generation of telescopes Read More »

nintendo-warns-that-it-can-brick-switch-consoles-if-it-detects-hacking,-piracy

Nintendo warns that it can brick Switch consoles if it detects hacking, piracy

Switch and Switch 2 users who try to hack their consoles or play pirated copies of games may find their devices rendered completely inoperable by Nintendo. That new warning was buried in a recent update to the Nintendo User Account Agreement, as first noticed by Game File last week.

Nintendo’s May 2025 EULA update adds new language concerning the specific ways users are allowed to use “Nintendo Account Services” on the console, a term defined here to encompass the use of “video games and add-on content.” Under the new EULA, any unlicensed use of the system not authorized by Nintendo could lead the company to “render the Nintendo Account Services and/or the applicable Nintendo device permanently unusable in whole or in part.” (Emphasis added.)

That language would apply to both the current Switch and the upcoming Switch 2.

Later in the same EULA, Nintendo adds new language clarifying that it reserves the right to “suspend your access to any or all Nintendo Account Services, in our sole discretion and without prior notice to you.” That suspension can even come before a EULA violation occurs if Nintendo has “a reasonable belief such a violation… will occur, or as we otherwise determine to be reasonably necessary for legal, technical or commercial reasons, such as to prevent harm to other users or the Nintendo Account Services.”

Play inside the lines

So what kind of Switch usage counts as a “violation” here? Unsurprisingly, playing pirated games is high on the list; the EULA now specifically calls out “obtain[ing], install[ing] or us[ing] any unauthorized copies of Nintendo Account Services.” That language would likely apply to users with hacked console hardware and those who use any number of third-party flash carts to play pirated games.

Nintendo warns that it can brick Switch consoles if it detects hacking, piracy Read More »

google-hits-back-after-apple-exec-says-ai-is-hurting-search

Google hits back after Apple exec says AI is hurting search

The antitrust trial targeting Google’s search business is heading into the home stretch, and the outcome could forever alter Google—and the web itself. The company is scrambling to protect its search empire, but perhaps market forces could pull the rug out from under Google before the government can. Apple SVP of Services Eddie Cue suggested in his testimony on Wednesday that Google’s search traffic might be falling. Not so fast, says Google.

In an unusual move, Google issued a statement late in the day after Cue’s testimony to dispute the implication that it may already be losing its monopoly. During questioning by DOJ attorney Adam Severt, Cue expressed concern about losing the Google search deal, which is a major source of revenue for Apple. This contract, along with a similar one for Firefox, gives Google default search placement in exchange for a boatload of cash. The DOJ contends that is anticompetitive, and its proposed remedies call for banning Google from such deals.

Surprisingly, Cue noted in his testimony that search volume in Safari fell for the first time ever in April. Since Google is the default search provider, that implies fewer Google searches. Apple devices are popular, and a drop in Google searches there could be a bad sign for the company’s future competitiveness. Google’s statement on this comes off as a bit defensive.

Google hits back after Apple exec says AI is hurting search Read More »

trump-admin-to-roll-back-biden’s-ai-chip-restrictions

Trump admin to roll back Biden’s AI chip restrictions

The changing face of chip export controls

The Biden-era chip restriction framework, which we covered in January, established a three-tiered system for regulating AI chip exports. The first tier included 17 countries, plus Taiwan, that could receive unlimited advanced chips. A second tier of roughly 120 countries faced caps on the number of chips they could import. The administration entirely blocked the third tier, which included China, Russia, Iran, and North Korea, from accessing the chips.

Commerce Department officials now say they “didn’t like the tiered system” and considered it “unenforceable,” according to Reuters. While no timeline exists for the new rule, the spokeswoman indicated that officials are still debating the best approach to replace it. The Biden rule was set to take effect on May 15.

Reports suggest the Trump administration might discard the tiered approach in favor of a global licensing system with government-to-government agreements. This could involve direct negotiations with nations like the United Arab Emirates or Saudi Arabia rather than applying broad regional restrictions. However, the Commerce Department spokeswoman indicated that debate about the new approach is still underway, and no timetable has been established for the final rule.

Trump admin to roll back Biden’s AI chip restrictions Read More »

whatsapp-provides-no-cryptographic-management-for-group-messages

WhatsApp provides no cryptographic management for group messages

The flow of adding new members to a WhatsApp group message is:

  • A group member sends an unsigned message to the WhatsApp server that designates which users are group members, for instance, Alice, Bob, and Charlie
  • The server informs all existing group members that Alice, Bob, and Charlie have been added
  • The existing members have the option of deciding whether to accept messages from Alice, Bob, and Charlie, and whether messages exchanged with them should be encrypted

With no cryptographic signatures verifying an existing member who wants to add a new member, additions can be made by anyone with the ability to control the server or messages that flow into it. Using the common fictional scenario for illustrating end-to-end encryption, this lack of cryptographic assurance leaves open the possibility that Malory can join a group and gain access to the human-readable messages exchanged there.

WhatsApp isn’t the only messenger lacking cryptographic assurances for new group members. In 2022, a team that included some of the same researchers that analyzed WhatsApp found that Matrix—an open source and proprietary platform for chat and collaboration clients and servers—also provided no cryptographic means for ensuring only authorized members join a group. The Telegram messenger, meanwhile, offers no end-to-end encryption for group messages, making the app among the weakest for ensuring the confidentiality of group messages.

By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. In an email, researcher Benjamin Dowling, also of King’s College, explained:

Signal implements “cryptographic group management.” Roughly this means that the administrator of a group, a user, signs a message along the lines of “Alice, Bob and Charley are in this group” to everyone else. Then, everybody else in the group makes their decision on who to encrypt to and who to accept messages from based on these cryptographically signed messages, [meaning] who to accept as a group member. The system used by Signal is a bit different [than WhatsApp], since [Signal] makes additional efforts to avoid revealing the group membership to the server, but the core principles remain the same.

On a high-level, in Signal, groups are associated with group membership lists that are stored on the Signal server. An administrator of the group generates a GroupMasterKey that is used to make changes to this group membership list. In particular, the GroupMasterKey is sent to other group members via Signal, and so is unknown to the server. Thus, whenever an administrator wants to make a change to the group (for instance, invite another user), they need to create an updated membership list (authenticated with the GroupMasterKey) telling other users of the group who to add. Existing users are notified of the change and update their group list, and perform the appropriate cryptographic operations with the new member so the existing member can begin sending messages to the new members as part of the group.

Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that the person using an account named Alice does, in fact, belong to Alice. It’s fully possible that Malory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and to anyone with a valid subpoena.)

WhatsApp provides no cryptographic management for group messages Read More »

openai-claims-nonprofit-will-retain-nominal-control

OpenAI Claims Nonprofit Will Retain Nominal Control

Your voice has been heard. OpenAI has ‘heard from the Attorney Generals’ of Delaware and California, and as a result the OpenAI nonprofit will retain control of OpenAI under their new plan, and both companies will retain the original mission.

Technically they are not admitting that their original plan was illegal and one of the biggest thefts in human history, but that is how you should in practice interpret the line ‘we made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.’

Another possibility is that the nonprofit board finally woke up and looked at what was being proposed and how people were reacting, and realized what was going on.

The letter ‘not for private gain’ that was recently sent to those Attorney Generals plausibly was a major causal factor in any or all of those conversations.

The question is, what exactly is the new plan? The fight is far from over.

  1. The Mask Stays On?.

  2. Your Offer is (In Principle) Acceptable.

  3. The Skeptical Take.

  4. Tragedy in the Bay.

  5. The Spirit of the Rules.

As previously intended, OpenAI will transition their for-profit arm, currently an LLC, into a PBC. They will also be getting rid of the capped profit structure.

However they will be retaining the nonprofit’s control over the new PBC, and the nonprofit will (supposedly) get fair compensation for its previous financial interests in the form of a major (but suspiciously unspecified, other than ‘a large shareholder’) stake in the new PBC.

Bret Taylor (Chairman of the Board, OpenAI): The OpenAI Board has an updated plan for evolving OpenAI’s structure.

OpenAI was founded as a nonprofit, and is today overseen and controlled by that nonprofit. Going forward, it will continue to be overseen and controlled by that nonprofit.

Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission.

The nonprofit will control and also be a large shareholder of the PBC, giving the nonprofit better resources to support many benefits.

Our mission remains the same, and the PBC will have the same mission.

We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.

We thank both offices and we look forward to continuing these important conversations to make sure OpenAI can continue to effectively pursue its mission of ensuring AGI benefits all of humanity. Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.

The rest of the post is a letter from Sam Altman, and sounds like it, you are encouraged to read the whole thing.

Sam Altman (CEO OpenAI): The for-profit LLC under the nonprofit will transition to a Public Benefit Corporation (PBC) with the same mission. PBCs have become the standard for-profit structure for other AGI labs like Anthropic and X.ai, as well as many purpose driven companies like Patagonia. We think it makes sense for us, too.

Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.

Joshua Achiam (OpenAI, Head of Mission Alignment): OpenAI is, and always will be, a mission-first organization. Today’s update is an affirmation of our continuing commitment to ensure that AGI benefits all of humanity.

I find the structure of this solution not ideal but ultimately acceptable.

The current OpenAI structure is bizarre and complex. It does important good things some of which this new arrangement will break. But the current structure also made OpenAI far less investable, which means giving away more of the company to profit maximizers, and causes a lot of real problems.

Thus, I see the structural changes, in particular the move to a normal profit distribution, as a potentially a fair compromise to enable better access to capital – provided it is implemented fairly, and isn’t a backdoor to further shifts.

The devil is in the details. How is all this going to work?

What form will the nonprofit’s control take? Is it only that they will be a large shareholder? Will they have a special class of supervoting shares? Something else?

This deal is only acceptable if and only he nonprofit:

  1. Has truly robust control going forward, that is ironclad and that allows it to guide AI development in practice not only in theory. Is this going to only be via voting shares? That would be a massive downgrade from the current power of the board, which already wasn’t so great. In practice, the ability to win a shareholder vote will mean little during potentially crucial fights like a decision whether to release a potentially dangerous model.

    1. What this definitely still does is give cover to management to do the right thing, if they actively want to do that, I’ll discuss more later.

  2. Gets a fair share of the profits, that matches the value of its previous profit interests. I am very worried they will still get massively stolen from on this. As a reminder, right now most of the net present value of OpenAI’s future profits belongs to the nonprofit.

  3. Uses those profits to advance its original mission rather than turning into a de facto marketing arm or doing generic philanthropy that doesn’t matter, or both.

    1. There are still clear signs that OpenAI is largely planning to have the nonprofit buy AI services on behalf of other charities, or otherwise do things that are irrelevant to the mission. That would make it an ‘ordinary foundation’ combined with a marketing arm, effectively making its funds useless, although it could still act meaningfully via its control mechanisms.

Remember that in these situations, the ratchet only goes one way. The commercial interests will constantly try to wrestle greater control and ownership of the profits away from us. They will constantly cite necessity and expedience to justify this. You’re playing defense, forever. Every compromise improves their position, and this one definitely will compared to doing nothing.

Or: This deal is getting worse and worse all the time.

Or, from Leo Gao:

Quintin Pope: Common mistake. They forgot to paint “Do Not Open” on the box.

There’s also the issue of the extent to which Altman controls the nonprofit board.

The reason the nonprofit needs control is to impact key decisions in real time. It needs control of a form that lets it do that. Because that kind of lever is not ‘standard,’ there will constantly be pressure to get rid of that ability, with threats of mild social awkwardness if these pressures are resisted.

So with love, now that we have established what you are, now it’s time to haggle over the price.

He had an excellent thread explaining the attempted conversion, and he has another good explainer on what this new announcement means, as well as an emergency 80,000 Hours podcast on the topic that should come out tomorrow.

Consider this the highly informed and maximally skeptical and cynical take. Which, given the track records here, seems like a highly reasonable place to start.

The central things to know about the new plan are indeed:

  1. The transition to a PBC and removal of the profit cap will still shift priorities, legal obligations and incentives towards profit maximization.

  2. The nonprofit’s ‘control’ is at best weakened, and potentially fake.

  3. The nonprofit’s mission might effectively be fake.

  4. The nonprofit’s current financial interests could largely still be stolen.

It’s an improvement, but it might not effectively be all that much of one?

We need to stay vigilant. The fight is far from over.

Rob Wiblin: So OpenAI just said it’s no longer going for-profit and the non-profit will ‘retain control’. But don’t declare victory yet. OpenAI may actually be continuing with almost the same plan & hoping they can trick us into thinking they’ve stopped!

Or perhaps not. I’ll explain:

The core issue is control of OpenAI’s behaviour, decisions, and any AGI it produces.

  1. Will the entity that builds AGI still have a legally enforceable obligation to make sure AGI benefits all humanity?

  2. Will the non-profit still be able to step in if OpenAI is doing something appalling and contrary to that mission?

  3. Will the non-profit still own an AGI if OpenAI develops it? It’s kinda important!

The new announcement doesn’t answer these questions and despite containing a lot of nice words the answers may still be: no.

(Though we can’t know and they might not even know themselves yet.)

The reason to worry is they’re still planning to convert the existing for-profit into a Public Benefit Corporation (PBC). That means the profit caps we were promised would be gone. But worse… the nonprofit could still lose true control. Right now, the nonprofit owns and directly controls the for-profit’s day-to-day operations. If the nonprofit’s “control” over the PBC is just extra voting shares, that would be a massive downgrade as I’ll explain.

(The reason to think that’s the plan is that today’s announcement sounded very similar to a proposal they floated in Feb in which the nonprofit gets special voting shares in a new PBC.)

Special voting shares in a new PBC are simply very different and much weaker than the control they currently have! First, in practical terms, voting power doesn’t directly translate to the power to manage OpenAI’s day-to-day operations – which the non-profit currently has.

If it doesn’t fight to retain that real power, the non-profit could lose the ability to directly manage the development and deployment of OpenAI’s technology. That includes the ability to decide whether to deploy a model (!) or license it to another company.

Second, PBCs have a legal obligation to balance public interest against shareholder profits. If the nonprofit is just a big shareholder with super-voting shares other investors in the PBC could sue claiming OpenAI isn’t doing enough to pursue their interests (more profits)! Crazy sounding, but true.

And who do you think will be more vociferous in pursuing such a case through the courts… numerous for-profit investors with hundreds of billions on the line, or a non-profit operated by 9 very busy volunteers? Hmmm.

In fact in 2019, OpenAI President Greg Brockman said one of the reasons they chose their current structure and not a PBC was exactly because it allowed them to custom-write binding rules including full control to the nonprofit! So they know this issue — and now want to be a PBC. See here.

If this is the plan it could mean OpenAI transitioning from:

• A structure where they must prioritise the nonprofit mission over shareholders

To:

• A new structure where they don’t have to — and may not even be legally permitted to do so.

(Note how it seems like the non-profit is giving up a lot here. What is it getting in return here exactly that makes giving up both the profit caps and true control of the business and AGI the best way to pursue its mission? It seems like nothing to me.)

So, strange as it sounds, this could turn out to be an even more clever way for Sam and profit-motivated investors to get what they wanted. Profit caps would be gone and profit-motivated investors would have much more influence.

And all the while Sam and OpenAI would be able to frame it as if nothing is changing and the non-profit has retained the same control today they had yesterday!

(As an aside it looks like the SoftBank funding round that was reported as requiring a loss of nonprofit control would still go through. Their press release indicates that actually all they were insisting on was that the profit caps are removed and they’re granted shares in a new PBC.

So it sounds like investors think this new plan would transfer them enough additional profits, and sufficiently neuter the non-profit, for them to feel satisfied.).

Now, to be clear, the above might be wrongheaded.

I’m looking at the announcement cynically, assuming that some staff at OpenAI, and some investors, want to wriggle out of non-profit control however they can — because I think we have ample evidence that that’s the case!

The phrase “nonprofit control” is actually very vague, and those folks might be trying to ram a truck through that hole.

At the same time maybe / hopefully there are people involved in this process who are sincere and trying to push things in the right direction.

On that we’ll just have to wait and see and judge on the results.

Bottom line: The announcement might turn out to be a step in the right direction, but it might also just be a new approach to achieve the same bad outcome less visibly.

So do not relax.

And if it turns out they’re trying to fool you, don’t be fooled.

Gretchen Krueger: The nonprofit will retain control of OpenAI. We still need stronger oversight and broader input on whether and how AI is pursued at OpenAI and all the AI companies, but this is an important bar to see upheld, and I’m proud to have helped push for it!

Now it is time to make sure that control is real—and to guard against any changes that make it harder than it already is to strengthen public accountability. The devil is in the details we don’t know yet, so the work continues.

Roon says the quiet part out loud. We used to think it was possible to do the right thing and care about whether AI killed everyone. Now, those with power say, we can’t even imagine how we could have been so naive, let’s walk that back as quickly as we can so we can finally do some maximizing of the profits.

Roon: the idea of openai having a charter is interesting to me. A relic from a bygone era, belief that governance innovation for important institutions is even possible. Interested parties are tasked with performing exegesis of the founding documents.

Seems clear that the “capped profit” mechanism is from a time in which people assumed agi development would be more singular than it actually is. There are many points on the intelligence curve and many players. We should be discussing when Nvidia will require profit caps.

I do not think that the capped profit requires strong assumptions about a singleton to make sense. It only requires that there be an oligopoly where the players are individually meaningful. If you have close to perfect competition and the players have no market power and their products are fully fungible, then yes, of course being a capped profit makes no sense. Although it also does no real harm, your profits were already rather capped in that scenario.

More than that, we have largely lost our ability to actually ask what problems humanity will face, and then ask what would actually solve those problems, and then try to do that thing. We are no longer trying to backward chain from a win. Which means we are no longer playing to win.

At best, we are creating institutions that might allow the people involved to choose to do the right thing, when the time comes, if they make that decision.

For several reasons, recent developments do still give me hope, even if we get a not-so-great version of the implementation details here.

The first is that this shows that the right forms of public pressure can still work, at least sometimes, for some combination of getting public officials to enforce the law and causing a company like OpenAI to compromise. The fight is far from over, but we have won a victory that was at best highly uncertain.

The second is that this will give the nonprofit at least a much better position going forward, and the ‘you have to change things or we can’t raise money’ argument is at least greatly weakened. Even though the nine members are very friendly to Altman, they are also sufficiently professional class people, Responsible Authority Figures of a type, that one would expect the board to have real limits, and we can push for them to be kept more in-the-loop and be given more voice. De facto I do not think that the nonprofit was going to get much if any additional financial compensation in exchange for giving up its stake.

The third is that, while OpenAI likely still has the ability to ‘weasel out’ of most of its effective constraints and obligations here, this preserves its ability to decide not to. As in, OpenAI and Altman could choose to do the right thing, even if they haven’t had the practice, with the confidence that the board would back them up, and that this structure would protect them from investors and lawsuits.

This is very different from saying that the board will act as a meaningful check on Altman, if Altman decides to act recklessly or greedily.

It is easy to forget that in the world of VCs and corporate America, in many ways it is not only that you have no obligation to do the right thing. It is that you have an obligation, and will face tremendous pressure, to do the wrong thing, in many cases merely because it is wrong, and certainly to do so if the wrong thing maximizes shareholder value in the short term.

Thus, the ability to fight back against that is itself powerful. Altman, and others in OpenAI leadership, are keenly aware of the dangers they are leading us into, even if we do not see eye to eye on what it will take to navigate them or how deadly are the threats we face. Altman knows, even if he claims in public to actively not know. Many members of technical stuff know. I still believe most of those who know do not wish for the dying of the light, and want humanity and value to endure in this universe, that they are normative and value good over bad and life over death and so on. So when the time comes, we want them to feel as much permission, and have as much power, to stand up for that as we can preserve for them.

It is the same as the Preparedness Framework, except that in this case we have only ‘concepts of a plan’ rather than an actually detailed plan. If everyone involved with power abides by the spirit of the Preparedness Framework, it is a deeply flawed but valuable document. If those involved with power discard the spirit of the framework, it isn’t worth the tokens that compose it. The same will go for a broad range of governance mechanisms.

Have Altman and OpenAI been endlessly disappointing? Well, yes. Are many of their competitors doing vastly worse? Also yes. Is OpenAI getting passing grades so far, given that reality does not grade on a curve? Oh, hell no. And it can absolutely be, and at some point will be, too late to try and do the right thing.

The good news is, I believe that today is not that today. And tomorrow looks good, too.

Discussion about this post

OpenAI Claims Nonprofit Will Retain Nominal Control Read More »

ford-raises-prices-on-mexican-made-cars—but-not-the-full-tariff-cost

Ford raises prices on Mexican-made cars—but not the full tariff cost

Ford also told Ars that it will continue to offer employee pricing to all its customers until at least July 4, even on vehicles made after May 2.

Ford published its Q1 2025 financial results earlier this week, reporting a net income of $471 million, a $900 million decrease compared to Q1 2024. In its statement to investors, the company said that it estimates that the Trump tariff will cost it as much as $1.5 billion in 2025.

Still, the price increases will be felt keenly, particularly for hybrid Maverick customers. When Ford facelifted the hybrid pickup truck last year, it also added several thousand more dollars to the MSRP; now that’s going up yet again.

Meanwhile, a separate 25 percent tariff on imported car parts went into effect last week. While there is a small break for OEMs to apply for up to 3.75 percent reimbursements, the parts tariff will affect all OEMs building cars in the US, all of which depend to greater or lesser degrees on suppliers in Mexico and Canada. On top of the persistent 25 percent price increase that almost all cars have experienced since 2020, it seems it’s becoming an even more horrible time to have to buy a new vehicle.

Ford raises prices on Mexican-made cars—but not the full tariff cost Read More »