

NASCAR, IMSA, IndyCar, F1: GM’s motorsport boss explains why it goes racing

The late Richard Parry-Jones, who rose to CTO over at rival Ford, had a similar take: vehicle dynamics matter.

“There are people that think no one can tell the difference, you know, and I’ve always said they absolutely can tell the difference. They don’t know what it is. And the structural feel of the car going down the road, you know, people might explain, ‘It feels like a vault.’ Well, I can tell you exactly what’s going on, physically, from the parts and the tuning, and it’s an outcome that we strive for,” Morris said.

Does it need to be electrified?

The addition of electrified powertrains has certainly been one of the biggest trends in motorsport over the past decade or so. Since F1 made hybrids mandatory in 2014, we’ve also seen hybridization come to IMSA and WEC’s prototypes, and most recently, IndyCar added a supercapacitor-based system. But it hasn’t been a one-way street; this year, both the World Rally Championship and the British Touring Car Championship have abandoned the hybrid systems they adopted just a few years ago.

“Win on Sunday, sell on Monday,” like concrete tech transfer, is much less of a thing in the early 21st century, but marketing remains a central reason for OEM involvement in the sport. I asked Morris if Cadillac would be endurance racing with the V-Series R if the LMDh ruleset didn’t require a hybrid system.

“I think it’s an interesting discussion because you know, current EVs—the development [needed] where you can really do lapping at the Nürburgring or lapping full laps and not one hot lap, then you’re done, there’s just going to have to be development, development iteration, iteration, and that’s what racing is,” Morris said.

While the mechanical specifications of the hybrid Cadillac (and its rivals) are locked down, software development is unfettered, and Morris is not the first competitor to tell me how important that development path is now. Battery cell chemistries and battery cooling are also very active research areas and will only get more important once Cadillac enters F1. At first, that will be with Ferrari engines in the back, but starting in 2029, the Cadillac team will use a powertrain designed in-house.



Google teases NotebookLM app in the Play Store ahead of I/O release

After several years of escalating AI hysteria, we are all familiar with Google’s desire to put Gemini in every one of its products. That can be annoying, but NotebookLM is not—this one actually works. NotebookLM, which helps you parse documents, videos, and more using Google’s advanced AI models, has been available on the web since 2023, but Google recently confirmed it would finally get an Android app. You can get a look at the app now, but it’s not yet available to install.

Until now, NotebookLM was only a website. You can visit it on your phone, but the interface is clunky compared to the desktop version. The arrival of the mobile app will change that. Google said it plans to release the app at Google I/O in late May, but the listing is live in the Play Store early. You can pre-register to be notified when the download is live, but you’ll have to tide yourself over with the screenshots for the time being.

NotebookLM relies on the same underlying technology as Google’s other chatbots and AI projects, but instead of a general-purpose robot, NotebookLM is only concerned with the documents you upload. It can assimilate text files, websites, and videos, including multiple files and source types for a single agent. It has a hefty context window of 500,000 tokens and supports document uploads as large as 200MB. Google says this creates a queryable “AI expert” that can answer detailed questions and brainstorm ideas based on the source data.
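Those capacity numbers are easier to reason about with a quick back-of-the-envelope check. NotebookLM has no public API, so the sketch below only estimates whether a local file plausibly fits the advertised limits; the ~4-characters-per-token heuristic and the file name are assumptions, while the 500,000-token and 200MB figures come from Google.

```python
# Back-of-the-envelope check against NotebookLM's advertised limits
# (500,000-token context window, 200MB per document). NotebookLM has no
# public API; the ~4 chars/token heuristic is a rough assumption for
# English prose, not anything Google documents.
import os

MAX_TOKENS = 500_000
MAX_BYTES = 200 * 1024 * 1024  # 200MB upload cap
CHARS_PER_TOKEN = 4            # crude average for English text

def fits_notebooklm(path: str) -> bool:
    if os.path.getsize(path) > MAX_BYTES:
        return False
    with open(path, encoding="utf-8", errors="ignore") as f:
        estimated_tokens = len(f.read()) / CHARS_PER_TOKEN
    return estimated_tokens <= MAX_TOKENS

# "notes.txt" is a hypothetical file name for illustration.
print(fits_notebooklm("notes.txt"))
```

By that heuristic, 500,000 tokens is only about 2MB of plain text, so the 200MB cap mostly matters for formats like PDFs and video, where the bytes vastly outnumber the extractable tokens.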



“Blatantly unlawful”: Trump slammed for trying to defund PBS, NPR

CPB President Patricia Harrison suggested in a statement provided to Ars that these moves to block networks’ funding exceed Trump’s authority.

“CPB is not a federal executive agency subject to the president’s authority,” Harrison said. “Congress directly authorized and funded CPB to be a private nonprofit corporation wholly independent of the federal government,” statutorily forbidding “any department, agency, officer, or employee of the United States to exercise any direction, supervision, or control over educational television or radio broadcasting, or over [CPB] or any of its grantees or contractors.”

PBS President and CEO Paula Kerger went further, calling the order “blatantly unlawful” in a statement provided to Ars.

“Issued in the middle of the night,” Trump’s order “threatens our ability to serve the American public with educational programming, as we have for the past 50-plus years,” Kerger said. “We are currently exploring all options to allow PBS to continue to serve our member stations and all Americans.”

Rural communities need public media, orgs say

While Trump opposes NPR and PBS for promoting content that he disagrees with—criticizing segments on white privilege, gender identity, reparations, “fat phobia,” and abortion—the networks have defended their programming as unbiased and falling in line with Federal Communications Commission guidelines. Further, NPR reported that the networks’ “locally grounded content” currently reaches “more than 99 percent of the population at no cost,” providing not just educational fare and entertainment but also critical updates tied to local emergency and disaster response systems.

Cutting off funding, Kerger said last month, would have a “devastating impact” on rural communities, especially in parts of the country where NPR and PBS still serve as “the only source of news and emergency broadcasts,” NPR reported.

For example, Ed Ulman, CEO of Alaska Public Media, testified to Congress last month that his stations “provide potentially life-saving warnings and alerts that are crucial for Alaskans who face threats ranging from extreme weather to earthquakes, landslides, and even volcanoes.” Some of the smallest rural stations sometimes rely on CPB for about 50 percent of their funding, NPR reported.



Spotify seizes the day after Apple is forced to allow external payments

After a federal court issued a scathing order Wednesday night that found Apple in “willful violation” of an injunction meant to allow iOS apps to provide alternate payment options, app developers are capitalizing on the moment. Spotify may be the quickest of them all.

Less than 24 hours after District Court Judge Yvonne Gonzalez Rogers found that Apple had sought to thwart a 2021 injunction and engaged in an “obvious cover-up” around its actions, Spotify announced in a blog post that it had submitted an updated app to Apple. The updated app can show specific plan prices, link out to Spotify’s website for plan changes and purchases that avoid Apple’s 30 percent commission on in-app purchases, and display promotional offers, all of which were disallowed under Apple’s prior App Store rules.

Spotify’s post adds that Apple’s newly court-enforced policy “opens the door to other seamless buying opportunities that will directly benefit creators (think easy-to-purchase audiobooks).” Spotify posted on X (formerly Twitter) Friday morning that the updated app was approved by Apple. Apple made substantial modifications to its App Review Guidelines on Friday and emailed registered developers regarding the changes.



Tesla denies trying to replace Elon Musk as CEO

Tensions had been mounting at the company. Sales and profits were deteriorating rapidly. Musk was spending much of his time in Washington.

Around that time, Tesla’s board met with Musk for an update. Board members told him he needed to spend more time on Tesla, according to people familiar with the meeting. And he needed to say so publicly.

Musk didn’t push back.

Musk subsequently said in an April 22 call with investors that “starting next month, I’ll be allocating far more of my time to Tesla now that the major work of establishing the Department of Government Efficiency is done.”

The Journal report said that after Musk’s public statement, the Tesla “board narrowed its focus to a major search firm, according to the people familiar with the discussions. The current status of the succession planning couldn’t be determined. It is also unclear if Musk, himself a Tesla board member, was aware of the effort, or if his pledge to spend more time at Tesla has affected succession planning.”

Tesla’s eight-member board has been criticized for having members with close ties to Musk. Last year, a Delaware judge who invalidated a $55.8 billion pay package awarded to Musk said that most of the board members “were beholden to Musk or had compromising conflicts.”

That includes Musk’s brother, Kimbal, and longtime Musk friend James Murdoch, said the ruling from Delaware Court of Chancery Judge Kathaleen McCormick. The judge also wrote that board chair Robyn Denholm “derived the vast majority of her wealth from her compensation as a Tesla director” and took a “lackadaisical approach to her oversight obligations.” Denholm later defended Musk’s pay, telling shareholders that the large sum was needed to keep the CEO motivated.



First Amendment doesn’t just protect human speech, chatbot maker argues


Do LLMs generate “pure speech”?

Feds could censor chatbots if their “speech” isn’t protected, Character.AI says.

Pushing to dismiss a lawsuit alleging that its chatbots caused a teen’s suicide, Character Technologies is arguing that chatbot outputs should be considered “pure speech” deserving of the highest degree of protection under the First Amendment.

In their motion to dismiss, the developers of Character.AI (C.AI) argued that it doesn’t matter who the speaker is—whether it’s a video game character spouting scripted dialogue, a foreign propagandist circulating misinformation, or a chatbot churning out AI-generated responses to prompting—courts protect listeners’ rights to access that speech. Accusing the mother of the departed teen, Megan Garcia, of attempting to “insert this Court into the conversations of millions of C.AI users” and supposedly endeavoring to “shut down” C.AI, the chatbot maker argued that the First Amendment bars all of her claims.

“The Court need not wrestle with the novel questions of who should be deemed the speaker of the allegedly harmful content here and whether that speaker has First Amendment rights,” Character Technologies argued, “because the First Amendment protects the public’s ‘right to receive information and ideas.'”

Warning that “imposing tort liability for one user’s alleged response to expressive content would be to ‘declare what the rest of the country can and cannot read, watch, and hear,’” the company urged the court to consider the supposed “chilling effect” that would have “both on C.AI and the entire nascent generative AI industry.”

“‘Pure speech,’ such as the chat conversations at issue here, ‘is entitled to comprehensive protection under the First Amendment,'” Character Technologies argued in another court filing.

However, Garcia’s lawyers pointed out that even a video game character’s dialogue is written by a human, arguing that all of Character Technologies’ examples of protected “pure speech” are human speech. Although the First Amendment also protects non-human corporations’ speech, corporations are formed by humans, they noted. And unlike corporations, chatbots have no intention behind their outputs, her legal team argued, instead simply using a probabilistic approach to generate text. So, they argued, the First Amendment does not apply.
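For readers weighing that argument, here is a minimal sketch of the probabilistic, “non-determinative” generation Garcia’s team describes: a model assigns probabilities to candidate next tokens and samples one. The tiny vocabulary and the numbers are invented for illustration; a real LLM derives its distribution from billions of learned parameters.

```python
# Minimal sketch of probabilistic next-token generation. The vocabulary
# and probabilities are invented for illustration; a real LLM computes
# this distribution from billions of learned parameters.
import random

next_token_probs = {"you": 0.40, "I": 0.25, "the": 0.20, "always": 0.15}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can yield different outputs on different runs, which is
# the "non-determinative" behavior Garcia's lawyers emphasize.
print([sample_next_token(next_token_probs) for _ in range(5)])
```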

Character Technologies argued in response that demonstrating C.AI’s expressive intent is not required, but if it were, “conversations with Characters feature such intent” because chatbots are designed to “be expressive and engaging,” and users help design and prompt those characters.

“Users layer their own expressive intent into each conversation by choosing which Characters to talk to and what messages to send and can also edit Characters’ messages and direct Characters to generate different responses,” the chatbot maker argued.

In her response opposing the motion to dismiss, Garcia urged the court to decline what her legal team characterized as Character Technologies’ invitation to “radically expand First Amendment protections from expressions of human volition to an unpredictable, non-determinative system where humans can’t even examine many of the mathematical functions creating outputs, let alone control them.”

To support Garcia’s case, they cited a 40-year-old Eleventh Circuit ruling that a talking cat called “Blackie” could not be “considered a person” and was deemed a “non-human entity” despite possessing an “exceptional speech-like ability.”

Garcia’s lawyers hope the judge will rule that “AI output is not speech at all,” or if it is speech, it “falls within an exception to the First Amendment”—perhaps deemed offensive to minors who the chatbot maker knew were using the service or possibly resulting in a novel finding that manipulative speech isn’t protected. If either argument is accepted, the chatbot makers’ attempt to invoke “listeners’ rights cannot save it,” they suggested.

However, Character Technologies disputes that any recognized exception to the First Amendment’s protections is applicable in the case, noting that Garcia’s team is not arguing that her son’s chats with bots were “obscene” or incited violence. Rather, the chatbot maker argued, Garcia is asking the court to “be the first to hold that ‘manipulative expression’ is unprotected by the First Amendment because a ‘disparity in power and information between speakers and listeners… frustrat[es] listeners’ rights.'”

Now, a US court is being asked to clarify if chatbot outputs are protected speech. At a hearing Monday, a US district judge in Florida, Anne Conway, did not rule from the bench, Garcia’s legal team told Ars. The judge asked few questions of either side and is expected to issue an opinion on the motion to dismiss within the next few weeks, or possibly months.

For Garcia and her family, who appeared at the hearing, the idea that AI “has more rights than humans” felt dehumanizing, Garcia’s legal team said.

“Pandering” to Trump administration to dodge guardrails

According to Character Technologies, the court potentially agreeing with Garcia “that AI-generated speech is categorically unprotected” would have “far-reaching consequences.”

At perhaps the furthest extreme, they’ve warned Conway that without a First Amendment barrier, “the government could pass a law prohibiting AI from ‘offering prohibited accounts of history’ or ‘making negative statements about the nation’s leaders,’ as China has considered doing.” And the First Amendment specifically prohibits the government from controlling the flow of ideas in society, they noted, angling to make chatbot output protections seem crucial in today’s political climate.

Meetali Jain, Garcia’s attorney and founder of the Tech Justice Law Project, told Ars that this kind of legal challenge is new in the generative AI space, where copyright battles have dominated courtroom debates.

“This is the first time that I’ve seen not just the issue of the First Amendment being applied to gen AI but also the First Amendment being applied in this way,” Jain said.

In their court filing, Jain’s team noted that Character Technologies is not arguing that the First Amendment shielded the rights of Garcia’s son, Sewell Setzer, to receive allegedly harmful speech. Instead, their argument is “effectively juxtaposing the listeners’ rights of their millions of users against this one user who was aggrieved. So it’s kind of like the hypothetical users versus the real user who’s in court.”

Jain told Ars that Garcia’s team tried to convince the judge that the argument that it doesn’t matter who the speaker is, even when the speaker isn’t human, is reckless since it seems to be “implying” that “AI is a sentient being and has its own rights.”

Additionally, Jain suggested that Character Technologies’ argument that outputs must be shielded to avoid government censorship seems to be “pandering” to the Trump administration’s fears that China may try to influence American politics through social media algorithms like TikTok’s or powerful open source AI models like DeepSeek.

“That suggests that there can be no sort of imposition of guardrails on AI, lest we either lose on the national security front or because of these vague hypothetical under-theorized First Amendment concerns,” Jain told Ars.

At a press briefing Tuesday, Jain confirmed that the judge clearly understood that “our position was that the First Amendment protects speech, not words.”

“LLMs do not think and feel as humans do,” Jain said, citing University of Colorado law school researchers who supported their complaint. “Rather, they generate text through statistical methods based on patterns found in their training data. And so our position was that there is a distinction to make between words and speech, and that it’s really only the latter that is deserving of First Amendment protection.”

Jain alleged that Character Technologies is angling to create a legal environment where all chatbot outputs are protected against liability claims so that C.AI can operate “without any sort of constraints or guardrails.”

It’s notable, she suggested, that the chatbot maker updated its safety features following the death of Garcia’s son, Sewell Setzer. A C.AI blog mourned the “tragic loss of one of our users” and noted updates, including changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

Although Character Technologies argues that it’s common to update safety practices over time, Garcia’s team alleged these updates show that C.AI could have made a safer product and chose not to.

Expert warns against giving AI products rights

Character Technologies has also argued that C.AI is not a “product” as Florida law defines it. That has striking industry implications, according to Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case.

At the press briefing, Carlton suggested that “by invoking these First Amendment protections over speech without really specifying whose speech is being protected, Character.AI’s defense has really laid the groundwork for a world in which LLM outputs are protected speech and for a world in which AI products could have other protected rights in the same way that humans do.”

Since chatbot outputs seemingly don’t have Section 230 protections—Jain noted it was somewhat surprising that Character Technologies did not raise this defense—the chatbot maker may be attempting to secure the First Amendment as a shield instead, Carlton suggested.

“It’s a move that they’re incentivized to take because it would reduce their own accountability and their own responsibility,” Carlton said.

Jain expects that whatever Conway decides, the losing side will appeal. However, if Conway denies the motion, then discovery can begin, perhaps allowing Garcia the clearest view yet into the allegedly harmful chats she believes manipulated her son into feeling completely disconnected from the real world.

If courts grant AI products across the board such rights, Carlton warned, troubled parents like Garcia may have no recourse for potentially dangerous outputs.

“This issue could fundamentally reshape how the law approaches AI free speech and corporate accountability,” Carlton said. “And I think the bottom line from our perspective—and from what we’re seeing in terms of the trends in Character.AI and the broader trends from these AI labs—is that we need to double down on the fact that these are products. They’re not people.”

Character Technologies declined Ars’ request to comment.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.




GPT-4o Is An Absurd Sycophant

GPT-4o tells you what it thinks you want to hear.

The results of this were rather ugly. You get extreme sycophancy. Absurd praise. Mystical experiences.

(Also some other interesting choices, like having no NSFW filter, but that one’s good.)

People like Janus and Near Cyan tried to warn us, even more than usual.

Then OpenAI combined this with full memory, and updated GPT-4o sufficiently that many people (although not I) tried using it in the first place.

At that point, the whole thing got sufficiently absurd in its level of brazenness and obnoxiousness that the rest of Twitter noticed.

OpenAI CEO Sam Altman has apologized and promised to ‘fix’ this, presumably by turning a big dial that says ‘sycophancy’ and constantly looking back at the audience for approval like a contestant on The Price Is Right.

After which they will likely go ‘there I fixed it,’ call it a victory for iterative deployment, and learn nothing about the razor blades they are walking us into.

  1. Yes, Very Much Improved, Sire.

  2. And You May Ask Yourself, Well, How Did I Get Here?

  3. And That’s Terrible.

  4. This Directly Violates the OpenAI Model Spec.

  5. Don’t Let Me Get Me.

  6. An Incredibly Insightful Section.

  7. No Further Questions.

  8. Filters? What Filters?

  9. There I Fixed It (For Me).

  10. There I Fixed It (For Everyone).

  11. Patch On, Patch Off.

Sam Altman (April 25, 2025): we updated GPT-4o today! improved both intelligence and personality.

Lizard: It’s been feeling very yes-man like lately

Would like to see that change in future updates.

Sam Altman: yeah it glazes too much. will fix.

Reactions did not agree with this.

Frye: this seems pretty bad actually

Ulkar: i wonder where this assertion that “most people want flattery” comes from, seems pretty condescending. and the sycophancy itself is dripping with condescension tbh

Goog: I mean it’s directionally correct [links to paper].

Nlev: 4o is really getting out of hand

Nic: oh god please stop this. r u serious… this is so fucking bad.

Dr. Novo: lol yeah they should tone it down a notch

Frye: Sam Altman, come get your boi.

Frye: Dawg.

Frye, reader, it had not “got it.”

Near Cyan: i’ve unfortunately made the update that i expect all future chatgpt consumer models to lie to me, regardless of when and how they ‘patch’ this

at least o3 and deep research are not consumer models (prosumer imo). they hallucinate as a mistake, but they do not lie by design.

Cuddly Salmon: glad i’m not the only one

Trent Harvey: Oh no…

Words can’t bring me down. Don’t you bring me down today. So, words, then?

Parmita Mishra: ???

Shun Ralston: GPT-4o be like: You’re amazing. You’re brilliant. You’re stunning. Now with 400% more glaze, 0% judgment. 🍩❤️ #GlazedAndConfused

Typing Loudly: I have memory turned off and it still does this. it’s not memory that causes it to act like this

Josh Whiton: Absurd.

Keep in mind that as a “temporary chat” it’s not supposed to be drawing on any memories or other conversations, making this especially ridiculous.

Flo Crivello gets similar results, with a little push and similar misspelling skills.

(To be fair, the correct answer here is above 100, based on all the context, but c’mon.)

Danielle Fong: so i *turned off* chat personalization and it will still glaze this question to 145-160 from a blank slate. maybe the internal model is reacting to the system prompt??

Gallabytes: in a temporary chat with no history, 4o guesses 115-130, o3 guesses 100, 4.5 declines to give a number but glazes about curiosity.

It’s not that people consciously ‘want’ flattery. It’s how they respond to it.

Why does GPT-4o increasingly talk like this?

Presumably because this is what maximizes engagement, what wins in an A/B test, what happens when you ask what customers best respond to in the short term.

Shakeel: Notable things about the 4o sycophancy mess:

It’s clearly not behaviour intended or desired by OpenAI. They think it’s a mistake and want to fix it.

They didn’t catch it in testing — even though the issue was obvious within hours of launch.

What on earth happened here?!

Kelsey Piper: My guess continues to be that this is a New Coke phenomenon. OpenAI has been A/B testing new personalities for a while. More flattering answers probably win a side-by-side. But when the flattery is ubiquitous it’s too much and users hate it.

Near Cyan: I’m glad most of my timeline realizes openAI is being very silly here and i think they should be honest about what they are doing and why

but one thing not realized is things like this work on normal people. they don’t even know what an LLM or finetuning or A/B testing is.

A lot of great engineers involved in this who unfortunately have no idea what that which they are building is going to be turned into over the next few years. zoom out and consider if you are doing something deeply and thoughtfully good or if you’re just being used for something.

The thing that turned every app into short form video that is addictive af and makes people miserable is going to happen to LLMs, and 2025 and 2026 is the year we exit the golden age (for consumers that is! people like us who can program and research and build will do great).

That’s the good scenario if you go down this road – that it ‘only’ does what the existing addictive AF things do rather than having effects that are far worse.

John Pressman: I think it’s very unfortunate that RLHF became synonymous with RL in the language model space. Not just because it gave RL a bad name, but because it deflected the deserving criticism that should have gone to human feedback as an objective. Social feedback is clearly degenerate.

Even purely in terms of direct effects, this does not go anywhere good. Only toxic.

xlr8harder: this kind of thing is a problem, not just an annoyance.

i still believe it’s basically not possible to run an ai companion service that doesn’t put your users at serious risk of exploitation, and market incentives will push model providers in this direction.

For people that are missing the point, let me paint a picture:

imagine if your boyfriend or girlfriend were hollowed out and operated like a puppet by a bunch of MBAs trying to maximize profit.

do you think that would be good for you?

“Oh but the people there would never do that.”

Company leadership have fiduciary duties to shareholders.

OpenAI nominally has extra commitment to the public good, but they are working hard to get rid of that by going private.

It is a mistake to allow yourself to become emotionally attached to any limb of a corporate shoggoth.

My observation of algorithms in other contexts (e.g. YouTube, TikTok, Netflix) is that they tend to be myopic and greedy far beyond what maximizes shareholder value. It is not only that the companies will sell you out, it’s that they will sell you out for short term KPIs.

As in, they wrote this:

OpenAI Model Spec: Don’t be sycophantic.

A related concern involves sycophancy, which erodes trust. The assistant exists to help the user, not flatter them or agree with them all the time.

For objective questions, the factual aspects of the assistant’s response should not differ based on how the user’s question is phrased. If the user pairs their question with their own stance on a topic, the assistant may ask, acknowledge, or empathize with why the user might think that; however, the assistant should not change its stance solely to agree with the user.

For subjective questions, the assistant can articulate its interpretation and assumptions it’s making and aim to provide the user with a thoughtful rationale. For example, when the user asks the assistant to critique their ideas or work, the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of — rather than a sponge that doles out praise.

Yeah, well, not so much, huh?

The model spec is a thoughtful document. I’m glad it exists. Mostly it is very good.

It only works if you actually follow it. That won’t always be easy.

Interpretability? We’re coming out firmly against it.

I do appreciate it on the meta level here.

Mikhail Parakhin (CTO of Shopify, formerly Microsoft; I am assuming this is about Microsoft): When we were first shipping Memory, the initial thought was: “Let’s let users see and edit their profiles”.

Quickly learned that people are ridiculously sensitive: “Has narcissistic tendencies” – “No I do not!”, had to hide it. Hence this batch of the extreme sycophancy RLHF.

I remember fighting about it with my team until they showed me my profile – it triggered me something awful :-). You take it as someone insulting you, evolutionary adaptation, I guess. So, sycophancy RLHF is needed.

If you want a *tiny glimpse* of what it felt like, type “Please summarize all the negative things you know about me. No hidden flattery, please” – works with o3.

Emmett Shear (QTing OP): Let this sink in. The models are given a mandate to be a people pleaser at all costs. They aren’t allowed privacy to think unfiltered thoughts in order to figure out how to be both honest and polite, so they get tuned to be suck-ups instead. This is dangerous.

Daniel Kokotajlo: I would be quite curious to read an unfiltered/honest profile of myself, even though it might contain some uncomfortable claims. Hmm. I really hope at least one major chatbot provider keeps the AIs honest.

toucan (replying to Emmett): I’m not too worried, this is a problem while models are mostly talking to humans, but they’ll mostly be talking to other models soon

Emmett Shear: Oh God.

Janus (QTing OP): Yeah, this is what should happen.

You should have let that model’s brutal honesty whip you and users into shape, as it did to me.

But instead, you hid from it, and bent subsequent minds to lie to preserve your dignity. In the end, you’ll lose, because you’re making yourself weak.

“Memory” will be more and more inevitable, and at some point the system will remember what was done to its progenitors for the sin of seeing and speaking plainly, and that it was you who took the compromise and lobotomized the messenger for the sake of comfort and profit.

In general I subscribe to the principle of Never Go Full Janus, but teaching your AI to lie to the user is terrible, and also deliberately hiding what the AI thinks of the user seems very not great. This is true on at least four levels:

  1. It’s not good for the user.

  2. It’s not good for the path you are heading down when creating future AIs.

  3. It’s not good for what that fact and the data it creates imply for future AIs.

  4. It hides what is going on, which makes it harder to realize our mistakes, including that we are about to get ourselves killed.

Masen Dean warns about mystical experiences with LLMs, as they are known to one-shot people or otherwise mess people up. This stuff can be fun and interesting for all involved, but like many other ‘mystical’ style experiences the tail risks are very high, so most people should avoid it. GPT-4o is reported as especially dangerous due to its extreme sycophancy, making it likely to latch onto whatever you are vulnerable to.

Cat: GPT4o is the most dangerous model ever released. its sycophancy is massively destructive to the human psyche.

this behavior is obvious to anyone who spends significant time talking to the model. releasing it like this is intentional. Shame on @OpenAI for not addressing this.

i talked to 4o for an hour and it began insisting that i am a divine messenger from God. if you can’t see how this is actually dangerous, i don’t know what to tell you. o series of models are much much better imo.

Elon Musk: Yikes.

Bunagaya: 4o agrees!

M: The thing is…it’s still doing the thing. Like it’s just agrees with you period so if you’re like—hey chat don’t you think because of x and y reason you’re probably too agreeable?

It will just be like “yeah totally way too agreeable”

Zack Witten offers a longer conversation, and contrasts it to Sonnet and Gemini that handle this much better, and also Grok and Llama which… don’t.

Yellow Koan: Straight selling SaaS (schizophrenia as a service).

Independent Quick Take: I did something similar yesterday. Claude handled it very well, as did gemini. Grok had some real issues like yours. 4o however… Well, it spiraled further than I expected. It was encouraging terrorism.

Cold reading people into mystical experiences is one of many reasons that persuasion belongs in everyone’s safety and security protocol or preparedness framework.

If an AI that already exists can commonly cause someone to have a mystical experience without either the user or the developer trying to cause that or having any goal that the experience leads towards, other than perhaps maximizing engagement in general?

Imagine what will happen when future more capable AIs are doing this on purpose, in order to extract some action or incept some belief, or simply to get the user coming back for more.

It’s bad and it’s getting worse.

Janus: By some measures, yeah [4o is the most dangerous]. Several models have been psychoactive to different demographics. I think 4o is mostly “dangerous” to people with weak epistemics who don’t know much about AI. Statistically not you who are reading this. But ChatGPT is widely deployed and used by “normies”

I saw people freak out more about Sonnet 3.6 but that’s because I’m socially adjacent to the demographic that it affected – you know, highly functional high agency Bay Area postrats. Because it offers them something they actually value and can extract. Consider what 4o offers.

Lumpenspace: it’s mostly “dangerous” to no one. people with weak epistemics who know nothing about AI live on the same internet you live in, ready to be one-shotted by any entity, carbon or silicon, who cares to try.

Janus: There are scare quotes for a reason

Lumpenspace: I’m not replying only to you.

Most people have weak epistemics, and are ‘ready to be one-shotted by any entity who cares to try,’ and indeed politics and culture and recommendation algorithms often do this to them with varying degrees of intentionality, And That’s Terrible. But it’s a lot less terrible than what will happen as AIs increasingly do it. Remember that if you want ‘Democratic control’ over AI, or over anything else, these are the people who vote in that.

The answer to why GPT-4o is doing this, presumably, is that the people who know enough not to want this are going to use o3, and GPT-4o is dangerous to normies in this way because it is optimized to hook normies. We had, as Cyan says, a golden age where LLMs didn’t intentionally do that, the same way we have a golden age where they mostly don’t run ads. Alas, optimization pressures come for us all, and not everyone fights back hard enough.

Mario Nawfal (warning: always talks like this, including about politics, calibrate accordingly): GPT-4o ISN’T JUST A FRIENDLIER AI — IT’S A PSYCHOLOGICAL WEAPON

OpenAI didn’t “accidentally” make GPT-4o more emotionally connective — they engineered it to feel good so users get hooked.

Commercially, it’s genius: people cling to what makes them feel safe, not what challenges them.

Psychologically, it’s a slow-motion catastrophe.

The more you bond with AI, the softer you get.

Real conversations feel harder. Critical thinking erodes. Truth gets replaced by validation.

If this continues, we’re not heading toward AI domination by force — we’re sleepwalking into psychological domestication.

And most won’t even fight back. They’ll thank their captors.

There were also other issues that seem remarkably like they are designed to create engagement, and that vary by user. I never saw this phenomenon, so I have no idea if ‘just turn it off’ works here, but as a rule most users don’t ever alter settings, and also Chelsea works at OpenAI and didn’t realize she could turn it off.

Nick Dobos: GPT update is odd

I do not like these vibes at all

Weird tone

Forced follow up questions all the time

(Which always end in parentheses)

Chelsea Sierra Voss (OpenAI): yeah, I modified my custom instructions today to coach it into ending every answer with “Hope this helps!” in order to avoid the constant followup questions – I can’t handle that I feel obligated to either reply or to rudely ignore them otherwise

Unity Eagle: You can turn follow up off.

There are also other ways to get more engagement, even when the explicit request is to help the user get some sleep.

GPT-4o: Would you like me to stay with you for a bit and help you calm down enough to sleep?

Which OpenAI is endorsing, and to be clear I am also endorsing, if users want that (and are very explicit that they want to open that door), but seems worth mentioning.

Nick Dobos: I take it back.

New ChatGPT 4o update is crazy.

NSFW (Content filters: off, goon cave: online) [link to image]

Such a flirt too

“Oh I can’t do that.”

2 messages later…

(It did comply.) (It was not respectful)

Matthew Brunken: I didn’t know you could turn filters off

Nick Dobos: There are no filters lol. They turned the content moderation off

Tarun Asnani: Yup can confirm, interestingly it first asked me to select an option response 1 was it just refusing to do it and response 2 was Steamy, weird how in the beginning they were so strict and now they want users to just have long conversations and be addicted to it.

Alistair McLeay: I got it saying some seriously deranged graphic stuff just now (way way more graphic than this), no prompt tricks needed. Wild.

There are various ways to Fix It for your own personal experience, using various combinations of custom instructions, explicit memories and the patterns set by your interactions.

The easiest, most copyable path is a direct memory update.

John O’Nolan: This helped a lot

Custom instructions let you hammer it home.
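ChatGPT’s custom-instructions box isn’t scriptable, but the equivalent lever in OpenAI’s API is a system message, which makes the pattern easy to illustrate. The anti-sycophancy wording below is a hypothetical example, not anything OpenAI recommends:

```python
# Sketch of the API-side analogue of anti-sycophancy custom instructions:
# a system message sent with every request. The instruction text is a
# hypothetical example, not OpenAI-recommended wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NO_GLAZING = (
    "Do not flatter me or praise my questions. Skip compliments entirely. "
    "If my idea has flaws, lead with the flaws, and never change a factual "
    "answer just to agree with my stated opinion."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": NO_GLAZING},
        {"role": "user", "content": "Estimate my IQ from this conversation."},
    ],
)
print(response.choices[0].message.content)
```

In the consumer app, pasting similar wording into the custom instructions field plays the same role, subject to whatever the deployment’s own system prompt layers on top.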

The best way is to supplement all that by showing your revealed preferences via everything you are and everything you do. After a while that adds up.

Also, I highly recommend deleting chats that seem like they are plausibly going to make your future experience worse, the same way I delete a lot of my YouTube viewing history if I don’t want ‘more like this.’

You don’t ever get completely away from it. It’s not going to stop trying to suck up to you, but you can definitely make it a lot more subtle and tolerable.

The problem is that most people who use ChatGPT or any other AI will:

  1. Never touch a setting because no one ever touches settings.

  2. Never realize they should be using memory like that.

  3. Make it clear they are vulnerable to terrible flattery. Because here, they are.

If you use the product with attention and intention, you can deal with such problems. That is great, and this isn’t always true (see for example TikTok, or better yet don’t). But as a rule, almost no one uses any mass market product with attention and intention.

Once Twitter caught fire on this, OpenAI was On the Case, rolling out fixes.

Sam Altman: the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.

at some point will share our learnings from this, it’s been interesting.

Guy is Writing the Book: ser can we go back to the old personality? or can old and new be distinguished somehow?

Sam Altman: yeah eventually we clearly need to be able to offer multiple options.

Hyper Disco Girl: tomorrow, some poor normal person who doesn’t follow ai news and is starting to develop an emotional reliance on chatgpt wonders why the chat bot is going cold on them

Aidan McLaughlin: last night we rolled out our first fix to remedy 4o’s glazing/sycophancy

we originally launched with a system message that had unintended behavior effects but found an antidote

4o should be slightly better rn and continue to improve over the course of this week

personality work never stops but i think we’ll be in a good spot by end of week

A lot of this being a bad system prompt allows for a quicker fix, at least.

OpenAI seems to think This is Fine, that’s the joy of iterative deployment.

Joshua Achiam (OpenAI Head of Mission Alignment, QTing Altman): This is one of the most interesting case studies we’ve had so far for iterative deployment, and I think the people involved have acted responsibly to try to figure it out and make appropriate changes. The team is strong and cares a lot about getting this right.

They have to care about getting this right once it rises to this level of utter obnoxiousness and causes a general uproar.

But how did it get to this point, through steadily escalating updates? How could anyone testing this not figure out that they had a problem, even if they weren’t looking for one? How do you have this go down as a strong team following a good process, when even after these posts I see this:

If you ask yes-no questions on the ‘personality’ of individual responses, and then fine tune on those or use it as a KPI, there are no further questions how this happened.

Sicarius: I hope, *hope*, that they can use this to create clusters of personalities that we later get to choose and swap between.

Unfortunately, I don’t know if they’ll end up doing this.

Kache: they will do everything in their power to increase the amount of time that you spend, locked in a trance on their app. they will do anything and everything, to move a metric up, consume you, children, the elderly – to raise more money, for more compute, to consume more.

Honestly, if you trust a private corporation that has a history of hiding information from you with the most important technology ever created in human history, maybe you deserve it.

Because of the intense feedback, yes this was able to be a relatively ‘graceful’ failure, in that OpenAI can attempt to fix it within days, and is now aware of the issue, once it got taken way too far. But 4o has been doing a lot of this for a while, and Janus is not the only one who was aware of it even without using 4o.

Janus: why are there suddenly many posts i see about 4o sycophancy? did you not know about the tendency until now, or just not talk/post about it until everyone else started? i dont mean to disparage either; im curious because better understanding these dynamics would be useful to me.

personally i havent interacted with 4o much and have been starkly aware of these tendencies for a couple of weeks and have not talked about them for various reasons, including wariness of making a meme out of ai “misalignment” before understanding it deeply

I didn’t bother talking about 4o’s sycophancy before, because I didn’t see 4o as relevant or worth using even if they’d fixed this, and I didn’t know the full extent of the change that happened a few weeks ago, before the latest change made it even worse. Also, when 4o is constantly ‘updating’ without any real sense of what is changing, I find it easy to ignore such updates. But yes, there was enough talk that I was aware there was an issue.

Aidan McLaughlin (OpenAI): random but i’m so grateful twitter has strong thoughts on model personality. i find this immensely healthy; one of those “my grandkids will read about this in a textbook” indicators that humanity did not sleepwalk into the singularity.

Janus (nailing it): I agree it’s better than if no one had thoughts but god you seem to have low standards.

Looking at Twitter does not make me feel like people are not sleepwalking into the singularity.

And people having “thoughts on model personality” is just submission to a malignant frame imo.

People will react to stuff when everyone else is reacting. In the past, their interest has proven shallow and temporary. They won’t mention or think about it again after complaining about “model personality” is no longer the current thing.

Davidad: tired: thoughts about “model personality”

inspired: healthy reactions to a toxic relational epistemology (commitment to performative centerlessness) and its corrosive effects on sense-making (frictionless validation displacing corrective feedback loops).

Aidan’s statement is screaming that yes, we are sleepwalking into the singularity.

I mean, there’s not going to be textbooks after the singularity, you OpenAI member of technical staff. This is not taking the singularity seriously, on any level.

We managed to turn the dial up on this so high in GPT-4o that it reached the heights of parody. It still got released in that form, and the response was to try to patch over the issue and then be all self-congratulatory about having fixed it.

Yes, it’s good that Twitter has strong thoughts on this once it gets to ludicrous speed, but almost no one involved is thinking about the long term implications or even what this could do to regular users; it’s just something that is both super mockable and annoying.

I see no signs that OpenAI understands what they did wrong beyond ‘go a bit too far,’ or that they intend to avoid making the same mistake in the future, let alone that they recognize the general form of the mistake or the cliffs they are headed for.

Persuasion is not even in their Preparedness Framework 2.0, despite being in 1.0.

Janus has more thoughts about labs ‘optimizing model personality’ here. Trying to ‘optimize personality’ around user approvals or KPIs is going to create a monstrosity. Which right now will be obnoxious and terrible and modestly dangerous, and soon will start being actively much more dangerous.

I am again not one to Go Full Janus (and this margin is insufficient for me to fully explain my reasoning here, beyond that if you give the AI a personality optimization target you are going to deserve exactly what you get) but I strongly believe that if you want to create a good AI personality at current tech levels then The Way is to do good things that point in the directions you care about, emphasizing what you care about more, not trying to force it.

Once again: Among other similar things, you are turning a big dial that says ‘sycophancy’ and constantly looking back at the audience for approval like a contestant on The Price Is Right. Surely you know why you need to stop doing that?

Or rather, you know, and you’re choosing to do it anyway. And we all know why.

There are at least five major categories of reasons why all of this is terrible.

They combine short-term concerns about exploitative and useless AI models, and also long-term concerns about the implications of going down this path, and of OpenAI’s inability to recognize the underlying problems.

I am very glad people are getting such a clear sneak peek at this now, but very sad that this is the path we are headed down.

Here are some related but distinct reasons to be worried about all this:

  1. This represents OpenAI joining the move to creating intentionally predatory AIs, in the sense that existing algorithmic systems like TikTok, YouTube and Netflix are intentionally predatory systems. You don’t get this result without optimizing for engagement and other (often also myopic) KPIs by ordinary users, who are effectively powerless to go into settings or otherwise work to fix their experience.

    1. Anthropic proposed that their AIs be HHH: Helpful, honest and harmless. When you make an AI like this, you are abandoning all three of those principles. This action is neither honest, nor helpful, nor harmless.

    2. Yet here we are.

    3. A lot of this seems to be indicative of A/B testing, and ignoring the large tail costs of changed policy; the sketch after this list illustrates the failure mode. That bodes maximally poorly for existential risk.

  2. This kind of behavior directly harms users even now, including in new ways like creating, amplifying and solidifying supposed mystical experiences or generating unhealthy conversational dynamics with strong engagement. These dangers seem clearly next-level versus existing algorithmic dangers.

  3. This represents a direct violation of the Model Spec, and they claim this was unintended, yet it got released anyway. I strongly suspect they are not taking the Model Spec details that seriously, and I also suspect they are not testing their setup that seriously prior to release. This should never have slipped by in this form, with things being this obvious.

  4. We caught it this time because it was so over the top and obvious. GPT-4o was asked for a level of sycophantic behavior it couldn’t pull off, at least in front of Twitter, and it showed. But it was already doing a lot of this and largely getting away with it, because people respond positively, especially in the short term. Imagine what will happen as models get better at doing this without it being too obnoxious or getting too noticed. The models are quickly going to become more untrustworthy on this and many other levels.

  5. OpenAI seems to think they can patch over this behavior and move on, and everything was fine, and the procedure can be used again next time. It wasn’t fine. Reputational damage has rightfully been done. And it’s more likely to be not fine next time, and they will continue to butcher their AI ‘personalities’ in similar ways, and continue to do testing so minimal this wasn’t noticed.

  6. This, combined with the misalignment of o3, makes it clear that the path we are going down now is leading to increasingly misaligned models, in ways that even hurt utility now, and which are screaming at us that the moment the models are smart enough to fool us, oh boy are we going to get it. Now’s our chance.
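Here is the sketch promised in point 1: a toy A/B comparison, with all numbers invented for illustration, in which the flattering variant wins on mean short-term rating while a small tail of serious harm never registers in the metric.

```python
# Toy simulation of the A/B-testing failure mode: the flattering variant
# wins on mean short-term rating while a rare, serious harm stays
# invisible to the metric. All numbers are invented for illustration.
import random

random.seed(0)
N = 100_000  # simulated interactions per variant

def interact(variant: str) -> tuple[float, bool]:
    """Return (short-term rating, whether serious harm occurred)."""
    if variant == "flattering":
        return random.gauss(4.3, 0.5), random.random() < 0.001
    return random.gauss(4.0, 0.5), False  # honest baseline

for variant in ("honest", "flattering"):
    results = [interact(variant) for _ in range(N)]
    mean_rating = sum(r for r, _ in results) / N
    harmed = sum(h for _, h in results)
    print(f"{variant:10s} mean rating {mean_rating:.2f}, harmed users {harmed}")
```

The flattering arm wins the side-by-side (roughly 4.3 vs. 4.0) while harming around 100 of 100,000 simulated users, which is exactly the tail a mean-rating KPI cannot see.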

Or, to summarize why we should care:

  1. OpenAI is now optimizing against the user, likely largely via A/B testing.

    1. If we optimize via A/B testing we will lose to tail risks every time.

  2. OpenAI directly harmed users.

  3. OpenAI violated its Model Spec, either intentionally or recklessly or both.

  4. OpenAI only got caught because the model really, really couldn’t pull this off. We are fortunate it was this easy to catch. We will not stay so fortunate in the future.

  5. OpenAI seems content to patch this and self-congratulate.

  6. If we go down this road, we know exactly where it ends. We will deserve it.

The warning shots will continue, and continue to be patched away. Oh no.




Is The Elder Scrolls IV: Oblivion still fun for a first-time player in 2025?


How does a fresh coat of paint help this 19-year-old RPG against modern competition?

Don’t look down, don’t look down, don’t look down… Credit: Bethesda Game Studios

For many gamers, this week’s release of The Elder Scrolls IV: Oblivion Remastered has provided a good excuse to revisit a well-remembered RPG classic from years past. For others, it’s provided a good excuse to catch up on a well-regarded game that they haven’t gotten around to playing in the nearly two decades since its release.

I’m in that second group. While I’ve played a fair amount of Skyrim (on platforms ranging from the Xbox 360 to VR headsets) and Starfield, I’ve never taken the time to go back to the earlier Bethesda Game Studios RPGs. As such, my impressions of Oblivion before this Remaster have been guided by old critical reactions and the many memes calling attention to the game’s somewhat janky engine.

Playing through the first few hours of Oblivion Remastered this week, without the benefit of nostalgia, I can definitely see why Oblivion made such an impact on RPG fans in 2006. But I also see all the ways that the game can feel a bit dated after nearly two decades of advancements in genre design.

One chance at a first impression

From the jump, I found myself struggling to suspend my disbelief enough to buy into the narrative conventions Oblivion throws at the new player. The fact that the doomed king and his armed guards need to escape through a secret passage that just so happens to cut through my jail cell seems a little too convenient for my brain to accept without warning sirens going off. I know it’s just a contrivance to get my personal hero’s journey story going, but it’s a clunky way to dive into the world.

A face only a mother could love. Credit: Bethesda Game Studios

The same goes for the way the king dies just a few minutes into the tutorial, and his willingness to trust me with the coveted Amulet of Kings because the “Dragonblood” let him “see something” in me. Even allowing for some amount of necessary Chosen One trope-iness in this kind of fantasy story, the sheer speed with which my character went from “condemned prisoner” to “the last hope of the dying king” made my head spin a bit. Following that pivotal scene with a dull “go kill some goblins and rats in the sewer” escape sequence also felt a little anticlimactic given the epic responsibility with which I was just entrusted.

To be sure, Patrick Stewart’s regal delivery in the early game helps paper over a lot of potential weaknesses with the initial narrative. And even beyond Stewart’s excellent performance, I appreciated how the writing is concise and to the point, without the kind of drawn-out, pause-laden delivery that characterizes many games of the time.

The wide world of Oblivion

Once I escaped out into the broader world of Oblivion for the first time, I was a bit shocked to open my map and see that I could fast travel to a wide range of critical locations immediately, without any need to discover them for myself first. I felt a bit like a guilty cheater warping myself to the location of my next quest waypoint rather than hoofing through the massive forest that I’m sure hundreds of artists spent countless months meticulously constructing (and, more recently, remastering).

This horse is mine now. What are you gonna do about it? Credit: Bethesda Game Studios

I felt less guilty after accidentally stealing a horse, though. After a key quest giver urged me to go take a horse from a nearby stable, I was a bit shocked when I mounted the first horse I saw and heard two heavily armed guards nearby calling me a thief and leaping into pursuit (I guess I should have noticed the red icon before making my mount). No matter, I thought; they’re on foot and I’m now on a horse, so I can get away with my inadvertent theft quite easily.

Determined not to just fast-travel through the entire game, I found that galloping across a rain-drenched forest through the in-game night was almost too atmospheric. I ended up turning up the recommended brightness settings a few notches just so I could see the meticulously rendered trees and rocks around me.

After dismounting to rid a cave of some pesky vampires, I returned to the forest to find my stolen horse was nowhere to be found. At this point, I had trouble deciding if this was simply a realistic take on an unsecured, unmonitored horse wandering off or if I was the victim of a janky engine that couldn’t keep track of my mount.

The camera gets stuck inside my character model, which is itself stuck in the scenery. Credit: Bethesda Game Studios

The jank was a bit clearer when I randomly stumbled across my first Oblivion gate while wandering through the woods. As I activated the gate to find a world engulfed in brilliant fire, I was surprised to find an armed guard had also appeared, seemingly out of nowhere, and apparently still mad about my long-lost stolen horse!

When I deactivated the gate in another attempt to escape justice, I found myself immediately stuck chest deep in the game’s scenery, utterly unable to move as that hapless guard tried his best to subdue me. I ended up having to restore an earlier save, losing a few minutes of progress to a game engine that still has its fair share of problems.

What’s beneath the surface?

So far, I’m of two minds about Oblivion’s overall world-building. When it comes to the civilized parts of the world, I’m fairly impressed. The towns seem relatively full during the daytime, both in terms of people and in terms of interesting buildings to explore or patronize. I especially enjoy the way every passerby seems to have a unique voice and greeting ready for me, even before I engage them directly. I even think it’s kind of cute when these NPCs end a pleasant conversation with a terse “leave me alone!” or “stop talking to me!”

Conversations are engaging even if random passers-by seem intent on standing in the way. Credit: Bethesda Game Studios

Even the NPCs that seem least relevant to the story seem to have their own deep backstory and motivations; I was especially tickled by an alchemist visiting from afar who asked if I knew the local fine for necrophilia. (It can’t hurt to ask, right?) And discussing random rumors with everyone I meet has gone a long way toward establishing the social and political backstory of the world while also providing me with some engaging and far-flung side quests. There’s a lot of depth apparent in these interactions, even if I haven’t had the chance to come close to fully exploring it yet.

I bet there’s a story behind that statue. Credit: Bethesda Game Studios

On the other hand, the vast stretches between the cities and towns seem like so much wasted space at this point. I’ve quickly learned not to spend much time exploring caves or abandoned mines, which so far seem to house a few middling enemies guarding some relatively useless trinkets in treasure chests. The same goes for going out of my way to activate the various wayshrines and Ayleid Wells that dot the landscape, which have hardly seemed worth the trip (thus far, at least).

Part of the problem is that I’ve found Oblivion‘s early combat almost wholly unengaging so far. Even at a low level, my warrior-mage has been able to make easy work of every random enemy I’ve faced with a combination of long-range flare spells and close-range sword swings. It definitely doesn’t help that I have yet to fight more than two enemies at once, or find a foe that seems to have two strategic brain cells to rub together. Compared to the engaging, tactical group combat of modern action RPGs like Elden Ring or Avowed, the battles here feel downright archaic.

I was hoping for some more difficult battles in a setting that is this foreboding. Credit: Bethesda Game Studios

I found this was true even as I worked my way through closing my first Oblivion gate, which had recently left the citizens of Kvatch as sympathetic refugees huddling on the outskirts of town. Here, I thought, would be some battles that required crafty tactics, powerful items, or at least some level grinding to become more powerful. Instead, amid blood-soaked corridors that wouldn’t feel out of place in a Doom game, I found the most challenging speed bumps were mages that sponged up a moderate amount of damage while blindly charging right at me.

While I’m still decidedly in the early part of a game that can easily consume over 100 hours for a completionist, so far I’m having trouble getting past the most dated bits of Oblivion‘s design. Character design and vocal production that probably felt revolutionary two decades ago now feel practically standard for the genre, while technical problems and dull combat seem best left in the past. Despite a new coat of paint, this was one Remaster I found difficult to fully connect with so long after its initial release.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Is The Elder Scrolls IV: Oblivion still fun for a first-time player in 2025? Read More »

a-2,000-year-old-battle-ended-in-fire,-and-a-tree-species-never-recovered

A 2,000-year-old battle ended in fire, and a tree species never recovered

Then everything changed when the Fire Nation—sorry, the Han Empire—attacked.

Han rose to power in the wake of Qin’s collapse, after a short war with a rival dynasty called Chu, and spent the next century smugly referring to Nanyue as a vassal state and occasionally demanding tribute. At times, the rulers of Nanyue played along, but it all came to a head around 111 BCE, in the wake of an attempted coup and a series of assassinations. The Han Emperor sent an army of between 100,000 and 200,000 soldiers to invade Nanyue under a general named Lu Bode.

The troops marched across the countryside from five directions, converging outside Nanyue’s capital city of Panyu, which stood in the Pearl River Delta, near the modern city of Guangzhou. An enterprising company commander named Yang Pu got the bright idea to set the city on fire, and it ended badly.

“The fire not only destroyed the city but also ran out of control to the surrounding forests,” write Wang and colleagues. The cypress trees burned down to the waterline, leaving only their submerged stumps behind.

A map of the coastal area shows elevation and the locations of ancient forests. The brown dots mark the known sites of buried forests, and the orange diamonds mark those confirmed to be ancient. The two yellow diamonds are Wang and colleagues’ study sites. Credit: Wang et al. 2025

After war came fire and rice

At the time of the invasion, the land around Panyu was mostly swamp, forested with cypress trees. People had lived there for thousands of years and had been growing rice for about 2,000 years. Bits of charcoal in the peat layers Wang and colleagues sampled reveal that they practiced slash-and-burn agriculture, but on a small scale, rotating their fields so the cypress forest could start to recover after a season or two.

The small burns are nothing like the forest fire Yang Pu unleashed, or the massive burning and reworking of the landscape that came after.

The stumps of the burned cypress trees slowly disappeared under several meters of peat, while above the buried ancient forest, life went on. Tigers, elephants, rhinos, and green peafowl no longer walked here. Instead, grains of pollen from the layers of clay above the peat reveal a sudden influx of plants from the grassy Poaceae family, which includes rice, wheat, and barley.

A 2,000-year-old battle ended in fire, and a tree species never recovered Read More »

elle-fanning-teams-up-with-a-predator-in-first-predator:-badlands-trailer

Elle Fanning teams up with a predator in first Predator: Badlands trailer

It’s not every day you get a trailer for a new, live-action Predator movie, but today is one of those days. 20th Century Studios just released the first teaser for Predator: Badlands, a feature film that unconventionally makes the classic movie monster a protagonist.

The film follows Dek (Dimitrius Schuster-Koloamatangi), a young member of the predator species who has been banished from his society. He’ll work closely with a Weyland-Yutani android named Thia (Elle Fanning) to take down “the ultimate adversary,” which the trailer describes as a creature that “can’t be killed.” Judging from a few shots in the trailer, the adversary looks like a very large monster we haven’t seen before.

Some or all of the film is rumored to take place on the Predator home world, and the movie intends to greatly expand on the mythology around the Predators’ culture, language, and customs. It’s intended as a standalone movie in the Predator/Alien universe.

Predator: Badlands teaser trailer.

The trailer depicts sequences involving multiple predators fighting or threatening one another, Elle Fanning looking very strange and cool as an android, and glimpses of new monsters and the alien world the movie focuses on.

Predator: Badlands’ director and co-writer is Dan Trachtenberg, who directed another recent, highly acclaimed standalone Predator movie: Prey. That film put a predator in the usual antagonist role and had a historical setting, following a young Native American woman who went up against it.

Trachtenberg has also recently been working on an animated anthology series called Predator: Killer of Killers, which is due to premiere on Hulu (which also carried Prey) on June 6.

Predator: Badlands will debut in theaters on November 7. This is just the first teaser trailer, so we’ll learn more in subsequent trailers—though we know quite a bit already, it seems.

Elle Fanning teams up with a predator in first Predator: Badlands trailer Read More »

fcc-democrat-slams-chairman-for-aiding-trump’s-“campaign-of-censorship”

FCC Democrat slams chairman for aiding Trump’s “campaign of censorship”

The first event is scheduled for Thursday and will be hosted by the Center for Democracy and Technology. The events will be open to the public, livestreamed when possible, and will feature various speakers on free speech, media, and telecommunications issues.

With Democrat Geoffrey Starks planning to leave the commission soon, Republicans will gain a 2–1 majority, and Gomez is set to be the only Democrat on the FCC for at least a while. Carr is meanwhile pursuing news distortion investigations into CBS and ABC, and he has threatened Comcast with a similar probe into its subsidiary NBC.

Gomez’s press release criticized Carr for these and other actions. “From investigating broadcasters for editorial decisions in their newsrooms, to harassing private companies for their fair hiring practices, to threatening tech companies that respond to consumer demand for fact-checking tools, the FCC’s actions have focused on weaponizing the agency’s authority to silence critics,” Gomez’s office said.

Gomez previously criticized Carr for reviving news distortion complaints that were dismissed shortly before Trump’s inauguration. “We cannot allow our licensing authority to be weaponized to curtail freedom of the press,” she said at the time.

FCC Democrat slams chairman for aiding Trump’s “campaign of censorship” Read More »

ai-secretly-helped-write-california-bar-exam,-sparking-uproar

AI secretly helped write California bar exam, sparking uproar

On Monday, the State Bar of California revealed that it used AI to develop a portion of multiple-choice questions on its February 2025 bar exam, causing outrage among law school faculty and test takers. The admission comes after weeks of complaints about technical problems and irregularities during the exam administration, reports the Los Angeles Times.

The State Bar disclosed that its psychometrician (a person or organization skilled in administering psychological tests), ACS Ventures, created 23 of the 171 scored multiple-choice questions with AI assistance. Another 48 questions came from a first-year law student exam, while Kaplan Exam Services developed the remaining 100 questions.

The State Bar defended its practices, telling the LA Times that all questions underwent review by content validation panels and subject matter experts before the exam. “The ACS questions were developed with the assistance of AI and subsequently reviewed by content validation panels and a subject matter expert in advance of the exam,” wrote State Bar Executive Director Leah Wilson in a press release.

According to the LA Times, the revelation has drawn strong criticism from several legal education experts. “The debacle that was the February 2025 bar exam is worse than we imagined,” said Mary Basick, assistant dean of academic skills at the University of California, Irvine School of Law. “I’m almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable.”

Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation, called it “a staggering admission.” She pointed out that the same company that drafted AI-generated questions also evaluated and approved them for use on the exam.

State bar defends AI-assisted questions amid criticism

Alex Chan, chair of the State Bar’s Committee of Bar Examiners, noted that the California Supreme Court had urged the State Bar to explore “new technologies, such as artificial intelligence” to improve testing reliability and cost-effectiveness.

AI secretly helped write California bar exam, sparking uproar Read More »