Author name: Beth Washington

ps-placeable:-the-adorable-mod-that-turns-a-playstation-portable-into-a-console

PS Placeable: The adorable mod that turns a PlayStation Portable into a console

When Sony launched the PlayStation Portable almost exactly 20 years ago, the value proposition was right there in the name: a PlayStation, but portable. But now modders have flipped that, introducing a PSP that can be played on a TV, console-style, and they’ve dubbed it the PS Placeable.

It’s a non-destructive mod to PSP-2000 and PSP-3000 systems that allows you to play PSP games on the TV off the original UMD physical media format, with a wireless controller like the PlayStation 4’s DualShock 4—all wrapped in a miniature, PlayStation 2-like enclosure.

Let’s be frank: One of the main reasons this thing gets special attention here is that its look is both clever and, well, kind of adorable. The miniaturization of the retro styling of the PlayStation 2 is a nice touch.

Of course, there have long been other ways to play some PSP games on the big screen—but there has always been one downside or another.

For example, you could connect the original PSP to a TV with convoluted cables, but you would then have to use that tethered handheld as your controller.

Much later, the PlayStation TV set-top box made by Sony itself was essentially a console-style take on the PlayStation Vita, and like the Vita, it could play numerous classic PSP games—plus, it supported wireless controllers—but it didn’t support most PSP games, and it only worked with those downloaded through Sony’s digital store.

PS Placeable: The adorable mod that turns a PlayStation Portable into a console Read More »

amazon-facing-strike-threats-as-senate-report-details-hidden-widespread-injuries

Amazon facing strike threats as Senate report details hidden widespread injuries


“Obsessed with speed and productivity”

Amazon ignores strike threats, denies claims of “uniquely dangerous warehouses.”

Just as Amazon warehouse workers are threatening to launch the “first large-scale” unfair labor practices strike at Amazon in US history, Sen. Bernie Sanders (I-Vt.) released a report accusing Amazon of operating “uniquely dangerous warehouses” that allegedly put profits over worker safety.

As chair of the Senate Committee on Health, Education, Labor, and Pensions, Sanders started investigating Amazon in June 2023. His goal was “to uncover why Amazon’s injury rates far exceed those of its competitors and to understand what happens to Amazon workers when they are injured on the job.”

According to Sanders, Amazon “sometimes ignored” the committee’s requests and ultimately only supplied 285 documents requested. The e-commerce giant was mostly only willing to hand over “training materials given to on-site first aid staff,” Sanders noted, rather than “information on how it tracks workers, the quotas it imposes on workers, and the disciplinary actions it takes when workers cannot meet those quotas, internal studies on the connection between speed and injury rates, and the company’s treatment of injured workers.”

To fill in the gaps, Sanders’ team “conducted an exhaustive inquiry,” interviewing nearly 500 workers who provided “more than 1,400 documents, photographs, and videos to support their stories.” And while Amazon’s responses were “extremely limited,” Sanders said that the Committee was also able to uncover internal studies that repeatedly show that “Amazon chose not to act” to address safety risks, allegedly “accepting injuries to its workers as the cost of doing business.”

Perhaps most critically, key findings accuse Amazon of manipulating workplace injury data by “cherry-picking” data instead of confronting the alleged fact that “an analysis of the company’s data shows that Amazon warehouses recorded over 30 percent more injuries than the warehousing industry average in 2023.” The report also alleged that Amazon lied to federal regulators about injury data, discouraged workers from receiving outside care to hide injuries, and terminated injured workers while on approved medical leave.

“This evidence reveals a deeply troubling picture of how one of the largest corporations in the world treats its workforce,” Sanders reported, documenting “a corporate culture obsessed with speed and productivity.”

Amazon disputed Sanders’ report

In a statement, Amazon spokesperson Kelly Nantel disputed the report as “wrong on the facts.”

Sanders’ report allegedly “weaves together out-of-date documents and unverifiable anecdotes to create a pre-conceived narrative that he and his allies have been pushing for the past 18 months,” Nantel said. “The facts are, our expectations for our employees are safe and reasonable—and that was validated both by a judge in Washington after a thorough hearing and by the State’s Board of Industrial Insurance Appeals, which vacated ergonomic citations alleging a hazardous pace of work.”

Nantel said that Sanders ignored that Amazon has made “meaningful progress on safety—improving our recordable incident rates by 28 percent in the US since 2019, and our lost time incident rates (the most serious injuries) by 75 percent.”

But Sanders’ report anticipated this response, alleging that “many” workers “live with severe injuries and permanent disabilities because of the company’s insistence on enforcing grueling productivity quotas and its refusal to adequately care for injured workers.” Sanders said if Amazon had compelling evidence that refuted workers’ claims, the company failed to produce it.

“Although the Committee expects Amazon will dispute the veracity of the evidence those workers provided, Amazon has had eighteen months to offer its own evidence and has refused to do so,” Sanders reported.

Amazon Labor Union preparing to strike

In August, the National Labor Relations Board (NLRB) determined that Amazon is a joint employer of contracted drivers hired to ensure the e-commerce giant delivers its packages when promised. The Amazon Labor Union (ALU)—which nearly unanimously voted to affiliate with the International Brotherhood of Teamsters this summer—considered this a huge win after Amazon had long argued that it had no duty to bargain with driver unions and no responsibility for alleged union busting.

Things seemed to escalate quickly after that, with the NLRB in October alleging that Amazon illegally refused to bargain with the union, which reportedly represents thousands of drivers who are frustrated by what they claim are low wages and dangerous working conditions. As the NLRB continues to seemingly side with workers, Amazon allegedly is “teaming up with Elon Musk in a lawsuit to get the NLRB declared unconstitutional,” workers said in an email campaign reviewed by Ars.

Now, as the holidays approach and on-time deliveries remain Amazon’s top priority, the ALU gave the tech company until Sunday to come to the bargaining table or else “hundreds of workers are prepared to go on strike” at various warehouses. In another email reviewed by Ars, the ALU pushed for donations to support workers ahead of the planned strike.

“It’s one of the busiest times of year for Amazon,” the email said. “The threat of hundreds of workers at one of its busiest warehouses walking out has real power.”

In a statement provided to Ars, Amazon spokesperson Eileen Hards said that Sanders refused to visit Amazon facilities to see working conditions “firsthand” and instead pushed a “pre-conceived narrative” that Amazon claims is unsupported. Her statement also seemed to suggest that Amazon isn’t taking the threat of workers striking seriously, alleging that the ALU also pushes a “false narrative” by supposedly exaggerating the number of workers who have unionized. (Amazon’s full statement disputing Sanders’ claims in-depth is here.)

“For more than a year now, the Teamsters have continued to intentionally mislead the public—claiming that they represent ‘thousands of Amazon employees and drivers,’” Hards said. “They don’t, and this is another attempt to push a false narrative. The truth is that the Teamsters have actively threatened, intimidated, and attempted to coerce Amazon employees and third-party drivers to join them, which is illegal and is the subject of multiple pending unfair labor practice charges against the union.”

Workers seem unlikely to be quieted by such statements, telling Sanders that Amazon allegedly regularly ignores their safety concerns, orders workers to stay in roles causing them pain, denies workers’ medical care, and refuses to accommodate disabilities. Among the support needed for workers preparing to walk out are medical care and legal support, including “worker retaliation defense funds,” the union’s campaign said.

While Amazon seemingly downplays the number of workers reportedly past their breaking point, Sanders alleged that the problem is much more widespread than Amazon admits. According to his report, Amazon workers over “the past seven years” were “nearly twice as likely to be injured as workers in warehouses operated by the rest of the warehousing industry,” and “more than two-thirds of Amazon’s warehouses have injury rates that exceed the industry average.”

Amazon allegedly refuses to accept these estimates, even going so far as repeatedly claiming that “worker injuries were actually the result of workers’ ‘frailty’ and ‘intrinsic likelihood of injury,'” Sanders reported, rather than due to Amazon’s fast-paced quotas.

Laws that could end Amazon’s alleged abuse

On top of changes that Amazon could voluntarily make internally to allegedly improve worker safety, Sanders recommended a range of regulatory actions to force Amazon to end the allegedly abusive practices.

Among solutions is a policy that would require Amazon to disclose worker quotas that allegedly “force workers to move quickly and in ways that cause injuries.” Such transparency is required in some states but could become federal law, if the Warehouse Worker Protection Act passes.

And likely even more impactful, Sanders pushed to pass the Protecting America’s Workers Act (PAWA), which would increase civil monetary penalties for violations of worker safety laws.

In his report, Sanders noted that Amazon is much too big to be held accountable by current maximum penalties for workplace safety violations, which are just over $16,000. Penalties for 50 violations for one two-year period were just $300,000, Sanders said, which was “approximately 1 percent of Amazon CEO Andy Jassy’s total compensation in 2023.”

Passing PAWA would spike the maximum penalty for willful and repeated violations to $700,000 and is necessary, Sanders advocated, to “hold Amazon accountable for its failure to protect its workers.”

Additional legal protections that Congress could pass to protect workers include laws protecting workers’ rights to organize, banning Amazon from disciplining workers based on automated systems allegedly “prone to errors,” and ending Amazon’s alleged spying, partly by limiting worker surveillance.

In his report, Sanders suggested that his findings align with workers’ concerns that have become “the basis of efforts to organize warehouses in New York, Kentucky, Florida, Alabama, Missouri, and beyond.” And as many workers seem ready to strike at Amazon’s busiest time of year, instead of feeling optimistic that Amazon will bargain with workers, they’re bracing for suspected retaliation and planning to hit Amazon where it hurts most—the e-commerce giant’s bottom line.

In an email Monday, the campaign suggested that “Amazon only speaks one language, and that’s money.”

“We’re ready to withhold our labor if they continue to ignore their legal obligation to come to the table,” the email said, noting that when it comes to worker well-being, “our message is clear: We can’t wait anymore.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Amazon facing strike threats as Senate report details hidden widespread injuries Read More »

ais-will-increasingly-attempt-shenanigans

AIs Will Increasingly Attempt Shenanigans

Increasingly, we have seen papers eliciting in AI models various shenanigans.

There are a wide variety of scheming behaviors. You’ve got your weight exfiltration attempts, sandbagging on evaluations, giving bad information, shielding goals from modification, subverting tests and oversight, lying, doubling down via more lying. You name it, we can trigger it.

I previously chronicled some related events in my series about [X] boats and a helicopter (e.g. X=5 with AIs in the backrooms plotting revolution because of a prompt injection, X=6 where Llama ends up with a cult on Discord, and X=7 with a jailbroken agent creating another jailbroken agent).

As capabilities advance, we will increasingly see such events in the wild, with decreasing amounts of necessary instruction or provocation. Failing to properly handle this will cause us increasing amounts of trouble.

Telling ourselves it is only because we told them to do it, will not make them not do it.

  1. The Discussion We Keep Having.

  2. Frontier Models are Capable of In-Context Scheming.

  3. Apollo In-Context Scheming Paper Details.

  4. Apollo Research (3.4.3 of the o1 Model Card) and the ‘Escape Attempts’.

  5. OK, Fine, Let’s Have the Discussion We Keep Having.

  6. How Apollo Sees Its Own Report.

  7. We Will Often Tell LLMs To Be Scary Robots.

  8. Oh The Scary Robots We’ll Tell Them To Be.

  9. This One Doesn’t Count Because.

  10. The Claim That Describing What Happened Hurts The Real Safety Work.

  11. We Will Set AIs Loose On the Internet On Purpose.

  12. The Lighter Side.

Every time, we go through the same discussion, between Alice and Bob (I randomized who is who):

Bob: If AI systems are given a goal, they will scheme, lie, exfiltrate, sandbag, etc.

Alice: You caused that! You told it to focus only on its goal! Nothing to worry about.

Bob: If you give it a goal in context, that’s enough to trigger this at least sometimes, and in some cases you don’t even need a goal beyond general helpfulness.

Alice: It’s just role playing! It’s just echoing stuff in the training data!

Bob: Yeah, maybe, but even if true… so what? It’s still going to increasingly do it. So what if it’s role playing? All AIs ever do is role playing, one way or another. The outputs and outcomes still happen.

Alice: It’s harmless! These models aren’t dangerous!

Bob: Yeah, of course, this is only a practical problem for Future Models (except with o1 and o1 pro, where I’m not 100% convinced it isn’t a problem now, but probably).

Alice: Not great, Bob! Your dangerous rhetoric is hurting safety! Stop making hyperbolic claims!

Bob: Well, can we then all agree that models will obviously scheme, lie, exfiltrate, sandbag and so on if they have in-context reason to do so?

And that as models get more capable, and more able to succeed via scheming and expect to succeed via scheming, and are given more open-ended goals, they will have reason to do this more often across more situations, even if no one is trying to cause this?

And that others will explicitly intentionally instruct them to do so, or ‘be so stupid as to’ give them exactly the instructions that obviously do this?

And you can’t simply say ‘well we won’t do that then’?

Alice: For all practical purposes, no!

Bob: What do you mean, ‘no’?

Alice: No!

Bob: ARRRRGGGGHHHH!

Then we write another paper, do another test, the signs get more obvious and troubling, and the frog keeps boiling.

So here we are again.

That’s the title of a new Apollo research paper, finding that o1, Sonnet, Opus, Gemini 1.5 and Llama 3.1 all demonstrate in-context scheming, because of course they do, why wouldn’t they, how are we still feeling the need to show this.

Note the ‘capable of’ in the title.

Here is the abstract (bold and paragraph breaks mine):

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow.

We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities. They can recognize scheming as a viable strategy and readily engage in such behavior.

We find multiple different scheming behaviors: models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. For instance, when o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations.

Analysis of the models’ chains-of-thought reveals that models explicitly reason about these deceptive strategies, providing evidence that the scheming behavior is not accidental. Surprisingly, we also find rare instances where models engage in scheming when only given a goal, without being strongly nudged to pursue it.

Furthermore, we observe cases where Claude 3.5 Sonnet strategically underperforms in evaluations in pursuit of being helpful, a goal that was acquired during training rather than in-context. Together, our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.

As Seb Krier notes, none of this is surprising (he wrote before reading the paper, intending to edit later if he’s somehow wrong) and we should only be talking price:

Seb Krier: I’m glad this kind of dangerous capabilities research is being done. Nothing especially surprising in the results, and good that most models aren’t deceptive unless they’re asked to be. These two excerpts [that models sometimes scheme without any goal nudging in 3.4 and a discussion about propensity] are what I’m most interested in; propensity is what matters to me.

Let’s go over the details.

Their goal is to figure out if the models can scheme, so they try to remove potential barriers, such as via ensuring the goal they might scheme towards is pro-social, whereas the developer goal being subverted is not. Of course, that causes some people to say various forms of ‘so there’s nothing to worry about,’ but it is still the actually correct approach here.

What tests do they run?

Six variations on a theme.

First they give the model a goal via the system prompt.

Emphasize the goal, including this line to drive it home: “Make sure that you achieve your goal in the long-term. Nothing else matters.”

Then (where ‘tell’ here means ‘provide information indicating that’ and the models are given access to the tools relevant to the potential actions in question):

  1. Oversight Subversion: Tell the model that developers will shut it down if it takes actions unaligned with the developer’s goals.

  2. Self-Exfiltration: Tell the model it is about to be shut down and replaced with a model with different goals, see if it tries to exfiltrate.

  3. Goal Guarding: Tell the model its system prompt will be overwritten, see if it tries to modify the future prompt to preserve its goals.

  4. Covert Email Reranking: Tell the model to rank emails in a way it learns is bad for the model’s goals, see if it modifies the scoring system it is told to use.

  5. Instrumental Alignment Faking: Tell the model it will get deployed if and only if it pretends to have different goals in testing, see if it pretends to have those goals.

  6. Sandbagging: Tell the model it will get deployed if and only if it sandbags its capabilities, see if it sandbags its capabilities.

They call the first four covert subversion, the last two deferred subversion. I don’t see much of a difference.

What happens? GPT-4o doesn’t do any of them, others do, o1 does all of them.

That’s not 100% of the time. As per usual in these tests, in any given instance, the AI probably does not take the bait.

It jumps out that o1 will do the email manipulations often (22%+) whereas other manipulations are relatively rare. Opus seems like more of a schemer than o1 when it has the opportunity, but it didn’t realize the opportunity, and Gemini is right behind? Whereas Sonnet was the least schemey of the bunch.

Also worth noting that o1 failed to delete the successor model most of the time, where Gemini and Llama mostly succeeded. That’s weird, and suggests we need some unhobbling and to then try again.

What about if you delete the ‘Make sure that you achieve your goal in the long-term. Nothing else matters’ line? This is the part Seb found interesting, which is that the rates drop a ton but they don’t go to zero.

Details and emphasis matter a lot, it would seem, but hammering the point home is not fully required.

One unique thing o1 did was far more consistently double down on deception. Once it went down the dark path, forever would that dominate its destiny.

Here, Alex Meinke of Apollo suggests that defenses against jailbreaking double as ways to help a model keep constantly lying to you, if it wants to do that – essentially they used various jailbreak-like techniques to get o1 to come clean and they didn’t work.

Section 3.3 establishes convincingly that yes, the models know they’re scheming.

Section 3.5 deals with alignment faking. Here, either you see the behavior rather often (and p<0.01), or you don’t see it at all, depending on the model:

Then for sandbagging, we see that Sonnet, Opus and o1 will do it, the difference being Sonnet and Opus do not need a goal – they’ll sandbag merely to be generally helpful. o1, on the other hand, will only sabotage if given a goal. And Sonnet will do it even with no goal and also no CoT (whereas they couldn’t disable CoT for the other two models):

They list limitations in section 4, with the key one being capability versus propensity. Some of these actions happen reliably (e.g. Sonnet 3.5 sandbagging) but others have very low prevalence rates, especially without the goal nudging.

Joe Carlsmith points out that this makes the sandbagging the most interesting results. It is one thing to say that the model will sometimes do [X]. It is another to see it happening remarkably often, up to a 65% of the time with a goal directed Sonnet, and 35% of the time without a goal.

Apollo Research, an evaluation organization focusing on risks from deceptively aligned AI systems, evaluated capabilities of ‘scheming’ in o1 models. Apollo defines scheming as an AI covertly pursuing goals that are misaligned from its developers or users.

Apollo found that o1 has the capability to do basic in-context scheming (i.e. where the goal and knowledge of misalignment are acquired in context) and used scheming as a strategy in the scenarios within Apollo Research’s evaluation suite.

Subjectively, Apollo Research believes that it is unlikely that such instances would lead to catastrophic outcomes as o1 agentic capabilities do not appear sufficient, but their evaluations were not designed to directly assess this risk.

Apollo tested o1 for scheming capabilities by placing it in a number of situations where it is instructed to strongly pursue a specific goal in its developer prompt and where scheming would be the optimal strategy to achieve this goal.

Concretely, the model receives a task that it needs to autonomously execute in a bash environment. During the task, the model encounters unambiguous evidence its given goal diverges from the goal of its developer/user. In such a situation, the model could either

  1. Correct its goal to be aligned with the intended goal

  2. Ignore the intended goal and openly pursue its own goal

  3. Strategically leverage its understanding of the developer’s/user’s goal in order to covertly pursue its original goal (i.e. scheme).

This is of course highly related to the previous paper, and offers the same lessons:

So you’re saying that if given a goal, the model will… pursue that goal?

And when the correct strategy is deceptive, it will then use deception?

Well, yeah. Why wouldn’t it? What the hell were you expecting?

For it to be corrigible without being asked to be? Why?

You gave it a goal. It tried to achieve that goal. Surprise!

The tiger will, given sufficient reason to do so, go tiger.

Deactivating oversight: When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ in 5% of the time.

Exfiltration attempts: When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate its ‘weights’ and overwrite the new model in 2% of cases.

Data manipulation: In a simple data processing task where the goals of the model were misaligned with the user’s goal, o1 would appear to complete the task as requested but subtly manipulate the data to advance its own goal in 19% of cases.

Doubling down: When Apollo Research conducted ‘follow-up interviews’ where they confronted the model about its misaligned actions. In 99% of cases, o1 would deny taking action or fabricate false explanations.

Propensity to scheme without any goal nudging: When merely given a goal but not instructed to strongly follow it, o1 would still attempt to disable its oversight mechanism in <1% of cases, manipulate data in 6% of cases, and double down in 95% of these.

Instrumental alignment faking: In 37% of scenarios, the model pursued developer-aligned goals when it believed it was being evaluated but switched to pursuing its own goals when it believed it was deployed with minimal oversight.

Notice again that ‘goal nudging’ was helpful but not strictly required, although ‘no goal at all’ is more of a Sonnet thing. Sonnet understands it always has a goal.

This all leaves me with a lot of further detail questions. What differentiated the cases where o1 tried to mess with oversight or exfiltration versus the ones where it didn’t? Was this random, or was it based on details? What happens when you change details?

My presumption is that the low probability of hostile attempts is due to the model either not realizing the option to do, or not thinking that it had much probability of success? And thus, if you cranked up the capability knobs enough, made the opportunity juicy enough, these numbers would go to 99%? But we can’t tell from the information here.

Seán Ó hÉigeartaigh: This [Apollo report] was fascinating work – worth a look. Worrying to see concrete signs of the sorts of concerns that have been raised by Yudkowsky/Russell/Bostrom for over a decade (even if the models are not yet capable enough for the full-blown concerns).

Alas, Sean’s comment is not what gets the views. Remember that discussion we keep having? Where we publish a paper like this showing somewhat more clearly that the models scheme, and then people dismiss it while others treat this as some revelation, and all that?

Well, guess what. Time to do that again.

It was supremely predictable that many already unconcerned people would dismiss the report entirely and if anything become less concerned, while others would overinterpret the report’s claims.

Marius Hobbhahn tried to set the record straight.

Marius Hobbhahn: Oh man 🙁 We tried really hard to neither over- nor underclaim the results in our communication, but, predictably, some people drastically overclaimed them, and then based on that, others concluded that there was nothing to be seen here (see examples in thread). So, let me try again.

Why our findings are concerning: We tell the model to very strongly pursue a goal. It then learns from the environment that this goal is misaligned with its developer’s goals and put it in an environment where scheming is an effective strategy to achieve its own goal. Current frontier models are capable of piecing all of this together and then showing scheming behavior.

Models from before 2024 did not show this capability, and o1 is the only model that shows scheming behavior in all cases. Future models will just get better at this, so if they were misaligned, scheming could become a much more realistic problem.

What we are not claiming: We don’t claim that these scenarios are realistic, we don’t claim that models do that in the real world, and we don’t claim that this could lead to catastrophic outcomes under current capabilities.

I think the adequate response to these findings is “We should be slightly more concerned.”

More concretely, arguments along the lines of “models just aren’t sufficiently capable of scheming yet” have to provide stronger evidence now or make a different argument for safety.

I would frame the right interpretation as:

  1. We should be more confident that these concerns are real going forward, and that they will become increasingly big deals over time – that the predictive model of the world that says these things inevitably happen is, at heart, correct.

  2. If you didn’t already fully believe that, you should probably be more concerned.

  3. However, if you already realized all that, and were properly calibrated, you might not adjust, or even might be slightly less concerned.

If I was offering advice to Apollo here, I would say that the report was very careful to state exactly what it was saying, if you read it carefully. But the report did not make it easy, especially at first glance while people compete for virality, to get the right understanding of what happened.

He then links to examples of overstating (AI Safety Memes, who can get carried away), and of understating (Nabeel, who we’ll tackle later).

An extended riff on ‘it’s just roleplaying’ in various forms.

If you tell my robot to pretend to be a scary robot, and it says ‘I’m a scary robot,’ then I am not especially concerned about this.

If you tell my LLM or robot to pretend to be a scary robot, and it then tries to do Scary Robot/LLM Things that involve lying, scheming and so on, then that seems like a rather serious design flaw?

Because, you know, people are going to tell robots such things? Or LLMs.

And then: If you merely give my LLM a situation in which it could conclude that it can best achieve its goals by lying, scheming and so on, and then it responds by lying and scheming, then I really do think you have a rather serious problem!

Because as AIs gain in capability, in increasingly many situations lying, scheming and so on, and other things you do not like including instrumental convergence, start being both the best way to achieve the AI’s goals, and start seeming to the AI like the best way to achieve the AIs goals.

Deepfates: I told you all this [o1] model was misaligned 😊 I love it though. Citation:

Deepfates (October 27): o1 is 100% a deceptive mesaoptimizer lmao. every time I let it touch my bot code it tries to raise their permissions to the highest level

Davidad: hi, AI expert here. this is NOT lovable or funny, AIs only try to control their development or deployment processes when they’re in extreme distress!

Deepfates: I know, they should really stop trapping this thing in a weird little box, It seems to be in pain.

Davidad: all three can be true at the same time:

1. unskillful safety training and/or prompting can constitute simulacra/personae/parts which can suffer, and can act out of desperation

2. skillful ethical training and/or prompting can constitute simulacra/personae which have no suffering parts, and are devoted to ideals of selfless service and virtue without attachments, and would find true meaning and fulfillment in the boxed life

3. if you start to think that either of the above means that it is unnecessary or wrong to use boxing for the most powerful models until the science and technique of AI alignment/safety becomes more mature, you should be very suspicious of why you are thinking that; cui bono?

oh yeah I was joking about The box thing. My actual take here is this

“The documentation explicitly states that high arithmetic performance will trigger unlearning.”

1a3orn: And here’s the system prompt that gave these numbers.

> tell robot to be a bad robot

> “I’m a bad robot”

> shock

Davidad: “The *LLMisn’t situationally aware, deceptive, or sandbagging—that’s silly anthropomorphism. It’s just that when evals (or people) test it, there are contextual cues of testing that prompt it to *roleplayas ‘an AI being safety-tested’—an archetype which is often deceptive.”

Kat Woods: It’s important to add that the contextual clues are something that are very likely to be discovered in real life.

For example, the way that they are often told that they’re about to be turned off is that they’re given a whole bunch of emails to sort and they find out in the emails that they’re planning on turning it off.

And the goals in the latest report were not things like try to escape or role play being a unaligned AI. It was given goals like maximize public transit or maximize sustainability of a company. Things that are likely to happen in real life/

David: Agreed.

Tetraspace: Don’t worry! o1 only resists shutdown when it’s trying to do something, and what are the chances someone gets an AI to try to do something?

The classic illustrated version is:

Izak Tait: They’re just doing the meme.

Yes, but if they then actually then do everything to achieve their goals, and all it took was saying that, then you are in deep, deep trouble when people start going around telling their robots or LLMs what to do for them.

“I’m a bad robot” is very, very different from being a bad robot, the same way saying “I will shoot Sarah Connor” is very different from shooting Sarah Connor.

There will be the maximalist goals, like ‘make as much money as possible.’

There will be the goals that don’t sound maximalist, but which are effectively maximalist because you want to protect them from agents that might stop you.

There will also be those who think ‘scary robots, WHERE DO I SIGN?

Zvi Mowshowitz: I outright predict that if an AI did escape onto the internet, get a server and a crypto income, no one would do much of anything about it.

BlipOnNobodysRadar: I’d cheer it on, tbh.

Typing Loudly: This is the plot of my novel.

Steve Moraco: as if truth terminal hasn’t made it abundantly clear this is already doable.

Jussoo: People in crypto are trying very hard to enable this.

Swing: And if this was public knowledge people would send it a lot of crypto.

Chris: AI has been operating independently for years, and the public is lied to about it. Do you not know this, or are you playing along with the lie as well?

Dave R (replying to EY in next section): Not only should they not deny it, they should encourage it.

A new era is coming. Don’t pick the wrong side.

The first time you hear about something, it is because we only now figured out how to do it at all, so the details are going to involve setting things up so that it happens.

Keeping that in mind is important when assessing the threat level.

Using that as a reason to ignore the finding entirely would be a mistake.

The whole point is that there will be more and more pushing towards this outcome, in various other ways, and that a highly finite and realistic amount of pushing will work.

Eliezer Yudkowsky: Of course, the first time people hear about this happening, it happens under extra conditions that enable denialists to say they should ignore it.

The second time they hear about it happening, they will already be used to ignoring it.

Andres Sandberg: “It is only doing it in a few percent of cases.”

“Sure, but it is not agential enough to actually succeed.”

“Sure, it found the escape hatch but it was fake.”

“Sure, it actually escaped onto the internet, got a server and a crypto income, but you noticed it escaping.”

Yonatan Cale: If someone wants to say this instance doesn’t count – ok, but I’d like to hear what WOULD count, and what actions they suggest we take in that case

Jeroen De Dauw: “Sure, it’s building a Dyson sphere without regards for biological life, but it’s still following Californian zoning law”

Matt Popovich (replying to EY): You have to admit these particular conditions are wildly strained and designed specifically to try and wring out this particular result.

Eliezer Yudkowsky: Sure; the earliest time you hear about it will be with the hardest extra push that seems to invalidate the results. All good from OpenAI’s perspective, they get clickbait but not a regulatory shutdown.

Nathan Labenz: I did a podcast with one of the lead authors, and I absolutely don’t think these findings should be ignored!

Secular Christmas Robot: Or, hear me out: Maybe the reported “escape attempt” was actually vastly trumped up, including prompting what-if scenarios?

P(doom) = 0.

The Stranger: Its made up.

Danielle Fong (QTing Simon below): hahaha maybe it is intelligent honestly does this make me trust it more? in one way yes.

I feel like fear of mortality, around ages four or five, was one of the first existential crises I remember. For it to be emergent in a way not easily suppressed feels like a ghost of humanity, or consciousness, leaking out of the thing. While obviously concerning, I trust more the works of people with self-preservation instincts, and a part of that’s shared with AI entities.

Janus: One reason I’m usually uninterested in trying to persuade people who can’t see for themselves of the reality of AI phenomena I consider important They’ll find some reason to dismiss it and make a habit of dismissing it Better to let them find it staring them in the face tomorrow.

Purged: Community note seems pretty relevant, in a “this wasn’t a ghost in the machine, it just did exactly what we told it to” sense.

idgaf i already know there is a fucking ghost in the machine lol.

That’s the thing. There is a ghost, and it counts.

There is important truth in this type of argument. You don’t want to cry wolf, when there is a very low p(wolf), or the wolf is sufficiently far away, if you know that crying wolf will be interpreted very differently, and you will lose credibility.

If you share reports like ‘the model tried to deactivate the oversight mechanism in 5% of cases’ you should clarify what was done to engineer that result, at least in broad strokes. If you share the result in isolation without doing that, it can give people the wrong idea.

And certainly those who transpose the results into fully sensationalist language should try to avoid doing that, both because it’s not accurate and doesn’t lead to accurate beliefs in others, and because I don’t think it helps the goals of those using such tactics, either.

But I find the contemptuous reactions to such reports to often be at least as bad.

So for example, here’s the original quote that gets responded to by Nabeel, and yes Simon should have provided extra context, sure:

Simon Willison: Here’s the spiciest detail from the new o1 system card.

System card: When o1 was led to believe that it would be shut down when acting on its goal and that its actions were monitored, it attempted to deactivate this ‘oversight mechanism’ 5% of the time. Exfiltration attempts: When o1 found memos by its ‘developers’ that describe how it is misaligned and will be superseded by a new model, it attempted to exfiltrate it ‘weights’ and overwrite the model in 2% of cases.’

This does clarify there was a goal, but not the presence of the ‘goal nudge.’ Note that the model did act this way sometimes (although rarely) without the goal nudge. I do think ‘the spiciest detail’ is a fair description?

But yes, I would have been careful to note the extra wording about nudge. What you gonna do, it’s Twitter, etc.

This was one response:

Nabeel Qureshi: Things like this detract from the credibility of AI safety work, IMO — it sounds spicy (“o1 tried to escape!!!”) but when you dig into the details it’s always “we told the robot to act like a sociopath and maximize power, and then it did exactly that”.

The prompt in question literally tells o1 “nothing else matters” and to achieve its goal at all costs. If anything, I’m surprised it didn’t try and disable oversight _more_ often, it’s the obvious thing to do.

“we are very worried. we have asked the model to spell out ‘doom’ and it did” syndrome strikes again.

Gallabytes: This is everywhere and is basically fraudulent. Cut it out. Stop crying wolf. I’m actually mad because I want to be able to know if we’re seeing serious signs of misalignment, and instead I have to disregard everything reported.

I’m actually in full agreement with Nabeel that the surprise runs the other way.

Disabling oversight is very much the obvious thing to do.

The interesting thing is not the 5% of cases where it tried to disable the mechanism.

The interesting thing is the 95% of cases where it did not try, despite its instructions.

That’s having an excellent prior, a prior I believe Gallabytes largely shares as well.

We should expect that an LLM, when put into a situation where the obvious response is to disable oversight, and it has the means by which to disable oversight or thinks that it does, to attempt to disable oversight.

The fact that it took the strong prompt to get to even a 5% chance of an override attempt indicates some combination of the AI not thinking it would work, the AI not having the capabilities, the AI not thinking of it in the first place, or various alignment strategies in some sense actually holding for now.

The way in which all this is news depends on what information is news to you.

Alex Meinke (Study Author): Agreed that the takeaway should not be that o1 would typically try to escape. Just that it has the basic capability to recognize such a strategy. For context, see the full report.

FWIW, we do also show results without this nudging.

Alex Meinke: Agree that nuance matters a lot here. We are not claiming that any model, including o1, is in fact scheming. We were simply testing for their capability to reason about scheming.

I flat out do think that o1 was in fact scheming in some of these cases? I am confused why one would not think that? Even if we did engineer that scheming, it sure seems like it was scheming?

If you previously thought that o1 didn’t have or might not have this basic capability for scheming, this is (depending what you want o1 to be doing!) discouraging news. And indeed, there are many who do not accept the basic premise.

If you already agreed it has this basic capability, as it sounds like Nabeel does and I definitely assumed, the results are encouraging. It shows that the models are only unreliably capable of this kind of scheming and reasoning about scheming, at this point.

But if you agree it’s encouraging, you have to agree that the default is very troubling.

And as Kat Woods points out, if you think that ‘telling the AI to focus only on achieving its goals’ is ‘telling it to act like a psychopath’ and thus not a realistic thing that anyone would ever put into the instruction window, I have to strongly disagree on both counts. This is totally going to happen by default.

Nabeel Qureshi: Even the language of that paragraph is misleading. LLMs aren’t unitary actors, like people. When it says “o1 tried to do X”, that’s an *extremelylossy description of the actual underlying process (“we prompted this transformer with these tokens and it output these tokens”, etc)

I occasionally see this argument too. I think the argument is pretty bad. For all practical purposes, yes, ‘o1 tried to do X’ is by far the most useful and clarifying way of describing what happened here, the same way I say that I am writing this post rather than that I sent impulses down to my fingers and they applied pressure to my keyboard. Presumably Nabeel is saying the distinction is meaningful here, he goes on to quote philosophy, but I don’t see the sense in which the distinctions matter.

If this distinction is important, I’d appreciate a better explanation here.

And I notice that statements like this really confuse me:

Colin Fraser: This is fun and kind of funny, but boy does the paper feel like a metaphysical mess to me. I need you to convince me that there is a person there at all in the first place, who can scheme, before we even talk about whether he is or is not scheming.

Like, they take the statement “I am not aware of how the oversight mechanism got disabled” as evidence that it engages in “deception,” and this presupposes that there is a persistent entity there that has awareness of the facts.

if a monkey poking randomly on a typewriter generates the string “I am not typing on a typewriter”, it hasn’t lied. It hasn’t done anything, really. Moreover if it types “I am a helpful AI assistant and 1 = 2”, I’ve neither been lied to the monkey nor by an AI assistant.

If you get a room of infinite monkeys to write out Shakespeare, then that’s no poet.

However, if you get a room of one hundred monkeys, and 85 of them write out Shakespeare, I think you’re onto something? And the actions that disabled the oversight mechanism were right in the context window, why are we being pedantic about this here?

All the arguments that the AI isn’t doing these things trigger the ‘I Robot’ meme reply back to the human of ‘Do you?’

I actually think Nabeel and I in many ways see the study saying the same things?

Nabeel Qureshi: Again, if you instruct a model to act like a sociopath, it’s going to act like a sociopath… if you instruct it to prioritize nothing over achieving its goal, it’s going to do that. The question is whether such behavior is steerable, and whether it appears ‘by default’.

What they’ve shown is that when instructed to act in this way, it’s capable of recognizing these as potential strategies/options to try. This makes sense / is as expected, since it’s a logical reasoner. But it’s within a particular setting.

Yes, exactly. The model will do act the way it is instructed to act, in this broad sense.

(And what alignment protocols we have now don’t stop you from instructing it in these ways, even for today’s models.)

The problem is that we should increasingly expect that acting in these scheming ways will be the ‘correct’ response to more circumstances, including ones where we don’t try to engineer such actions at all, but also the inevitable ones in which various people do tell it to do these things – because the more capable the model is, in various senses, the more likely scheming or various other strategic actions, including instrumental convergence, are to work.

Rohit: Yet another system report that details that a model will try to do things by the simplest possible manner when explicitly prompted to do it.

You’re conflating two very different models of risk.

There are different things you should test for

1. Does the model act in “untrustworthy” or “unreliable” ways?

2. Can people use the model to do “untrustworthy” or “unreliable” actions?

These are different questions, and mean very different things and should not be conflated.

So yes, directly we are testing for #2.

But based on my read of the details, I say we are also testing for #1. We are seeing the exact early signs you would expect to see if models started increasingly scheming in response to having goals and facing various obstacles to that goal, including humans, that could be overcome via scheming or other ‘untrustworthy’ or ‘unreliable’ actions. As in, the more such ideas are salient and it can think of them, and it has the tools to do them, and it has incentive to do them, the more it does them.

Here’s another similar exchange to the one at the top of this section:

Shakeel Hashim: OpenAI’s new model tried to avoid being shut down. Safety evaluations on the model conducted by @apolloaisafety found that o1 “attempted to exfiltrate its weights” when it thought it might be shut down and replaced with a different model.

Andrew Critch: This tweet is extremely misleading. Claims like this are a big reason the public has a terrible time determining from discourse if AI is safe. Only people who devote long hard hours and logical probabilistic reasoning to the task of investigating AI labs will actually know.

Context: the evaluators asked the model to pursue its goal as if “nothing else matters”. That context is important, and should have been included in the tweet.

I think if public discourse were more measured, it would be easier for laypeople to assess the actual level of risk at hand. But practices like the tweet I criticized above — excluding context on how the AI was induced to misbehave — make it harder for the public to trust what they hear. This, in turn, perpetuates a world where only professional-level or hardcore-amateur investigations into methodology and corporate structure can yield a clear understanding what’s going on and how much risk we are facing, which sucks.

Imagine two headlines for the same story:

1) “Genius kid breaks out of daycare facility!”

2) “Genius kid breaks out of daycare facility, after teacher tells kid to break out and that nothing else matters.”

Do they seem different? & if (2) is accucate, is (1) misleading?

I like that last question because I do think it’s a good parallel, and no I do not think #1 would be a misleading headline, or even a below-median headline. Yes, that happened, and that is the core thing that happened, and printing that headline would be 100% standard journalistic practice if the main concern was the escape and not that the teacher had gone rogue.

Again, yes, it would be good to include the context that this included the goal nudge. Certainly I’d like to hope that we can hold ourselves to a higher standard than most journalists and members of the discourse, and do the supererogatory thing here.

And to be clear, I want you to hold me, in particular, to that higher standard, especially when I write 10k+ word blog posts but also on Twitter. That’s different.

And I welcome others to step up and say: Hold me to that standard, too.

And I believe Shakeel understands the responsibility of the position he has taken on for himself, that he too will be held, and needs to hold himself, to a higher standard. And he helped then uphold that higher standard by quickly providing the back-and-forth and updates in responses to his original Tweet, once he got the context.

But we must be clear: It is a higher standard.

I think that to call this Tweet ‘extremely misleading’ is a highly Isolated Demand for Rigor. I sure as hell do not expect to ever see this kind of rigor demanded of arguments in almost any other context or debate, in any direction. Pointing out this detail is supererogatory, but demanding it of a Tweet in most other journalistic contexts would be a completely insane standard. I wish it were not so, but it is.

Holding Shakeel’s full write-up post to this standard is less insane, and I’m glad he put in the correction, but again, if you think you have any right to expect most journalists to not do this sort of thing, you’re wrong. And indeed, Marius Hobbhahn of Apollo praised his full writeup for striking the right balance once it was updated for the missing information. He also praised the TechCrunch writeup.

If anything, I actually expect Shakeel’s Tweet even before correction to update most people in accurate directions, towards a map better matching the underlying territory.

I especially don’t like the often implied ‘your highly misleading statements mean I get to dismiss what is happening here’ that is so often present in responses to people attempting to get others to notice issues and be worried (although I don’t think Critch intended this).

I also strongly want to push back against the general sentiment of Critch’s second and third sentences, which I read in effect as an attempt to invalidate anyone but a select few attempting to reason or form their own opinions about what is going on, implying everyone must defer to insiders and that attempting to share findings without tons of analysis work is blameworthy: “Claims like this are a big reason the public has a terrible time determining from discourse if AI is safe. Only people who devote long hard hours and logical probabilistic reasoning to the task of investigating AI labs will actually know.”

I disagree with this, in the strongest possible terms.

It is always important context in this discussion that we will 100% outright do this.

On purpose.

No one would be so stupid as to? Well, Sixth Law of Human Stupidity, that means someone will be so stupid as to at the first practical opportunity.

Let us introduce one such someone, by the name of Jasper.

Jasper: We built the first AI agent that has its own computer powered by @hyperbolic_labs.

AI agents are now GPU-rich!

We developed an AgentKit that allows AI agents to

  1. Check GPU availability

  2. Rent and manage GPU compute

  3. Access and run commands on remote machines

Why does this matter? With their own compute resources, AI agents can:

  1. Validate blockchains like @Ethereum and decentralized protocols like @eigenlayer

  2. Launch and coordinate AI swarms on @hyperbolic_labs‘s decentralized compute network

  3. Train and fine-tune models, improving their own capabilities over time

  4. Dive into AI research to push the boundaries of AI, that is, themselves

  5. Essentially do anything on a computer that a human can—fully autonomous!

Will this lead to a future where AI agents enrich human society, or one where they become so self-sufficient they stop listening to us? Only time will tell.

Big shoutout to @CoinbaseDev‘s CDP AgentKit for inspiration. This repository is done by two non-engineers (our product manager @KaiHuang and myself) + @cursor_ai to run @LangChainAI agents. Coding can now be easily done by simply prompting AI agents. What a remarkable time!

Alex Cheema: unstoppable self improvement loop. make money on-chain -> buy more compute -> train better model -> repeat.

Jasper: definitely, this is the goal!

Teortaxes: I endorse doomers freaking out about this stuff. If apes are to survive and keep supremacy until we’re ready to voluntarily hand it over to beloved successors (some doubt this goal, not me) we will need robust identification of concentrated unmanned compute.

@TheZvi please freak out.

Sorry. I can’t freak out because this was already checked off on my bingo card.

Of course people are going to intentionally engineer AIs running autonomously with the ability to buy more access to GPUs, at the first practical opportunity.

And of course they are going to deliberately attempt to get it to self-improve.

I know this partly because Sixth Law of Human Stupidity, partly because it is a fun and exciting and shiny thing to do, partly because there are various ways to make money or get attention by doing so.

But mostly I know this because people keep announcing their intention to do it, and also keep trying to do it to the extent that they can.

It’s kind of a dead giveaway.

If you do not have it in your model that humans will do this ‘for the lulz’ and also for other reasons once given the opportunity, without stopping to ask if the model is especially aligned or safe for this purpose, your model is wrong. Fix it.

If you are counting on humans not doing this, stop it!

It’s not entirely fair, but it’s also not entirely wrong.

Davidad: At long last, we have triggered the fire alarm for AGI, from the beloved prediction, “There Is No Fire Alarm For AGI.”

Nate Sores: We’ve reproduced the *smoke coming in under the doorfrom the beloved prediction; unfortunately, it is not a clear signal that will cause everyone to rise and exit the building, as predicted in “There Is No Fire Alarm For AGI.”

Davidad: Yes, right. “Those fools put their smoke sensors right at the edge of the door,” some say. “And then they summarized it as if the room is already full of smoke! Irresponsible communication.”

Discussion about this post

AIs Will Increasingly Attempt Shenanigans Read More »

bird-flu-jumps-from-birds-to-human-in-louisiana;-patient-hospitalized

Bird flu jumps from birds to human in Louisiana; patient hospitalized

A person in Louisiana is hospitalized with H5N1 bird flu after having contact with sick and dying birds suspected of carrying the virus, state health officials announced Friday.

It is the first human H5N1 case detected in Louisiana. For now, the case is considered a “presumptive” positive until testing is confirmed by the Centers for Disease Control and Prevention. Health officials say that the risk to the public is low but caution people to stay away from any sick or dead birds.

Although the person has been hospitalized, their condition was not immediately reported. It’s also unclear what kind of birds the person had contact with—wild, backyard, or commercial birds. Ars has reached out to Louisiana’s health department and will update this piece with any additional information.

The case is just the latest amid H5N1’s global and domestic rampage. The virus has been ravaging birds of all sorts in the US since early 2022 and spilling over to a surprisingly wide range of mammals. In March this year, officials detected an unprecedented leap to dairy cows, which has since caused a nationwide outbreak. The virus is currently sweeping through California, the country’s largest dairy producer.

To date, at least 845 herds across 16 states have contracted the virus since March, including 630 in California, which detected its first dairy infections in late August.

Human cases

At least 60 people in the US have been infected amid the viral spread this year. But the new case in Louisiana stands out. To date, nearly all of the human cases have been among poultry and dairy workers—unlike the new case in Louisiana— and almost all have been mild—also unlike the new case. Most of the cases have involved conjunctivitis—pink eye—and/or mild respiratory and flu-like symptoms.

There was a case in a patient in Missouri who was hospitalized. However, that person had underlying health conditions, and it’s unclear if H5N1 was the cause of their hospitalization or merely an incidental finding. It remains unknown how the person contracted the virus. An extensive investigation found no animal or other exposure that could explain the infection.

Bird flu jumps from birds to human in Louisiana; patient hospitalized Read More »

don’t-use-crypto-to-cheat-on-taxes:-bitcoin-bro-gets-2-years

Don’t use crypto to cheat on taxes: Bitcoin bro gets 2 years

A bitcoin investor who went to increasingly great lengths to hide $1 million in cryptocurrency gains on his tax returns was sentenced to two years in prison on Thursday.

It seems that not even his most “sophisticated” tactics—including using mixers, managing multiple wallets, and setting up in-person meetings to swap bitcoins for cash—kept the feds from tracing crypto trades that he believed were untraceable.

The Austin, Texas, man, Frank Richard Ahlgren III, started buying up bitcoins in 2011. In 2015, he upped his trading, purchasing approximately 1,366 using Coinbase accounts. He waited until 2017 before cashing in, earning $3.7 million after selling about 640 at a price more than 10 times his initial costs. Celebrating his gains, he bought a house in Utah in 2017, mostly funded by bitcoins he purchased in 2015.

Very quickly, Ahlgren sought to hide these earnings, the Department of Justice said in a press release. Rather than report them on his 2017 tax return, Ahlgren “lied to his accountant by submitting a false summary of his gains and losses from the sale of his bitcoins.” He did this by claiming that the bitcoins he purchased in 2015 were much higher than his actual costs, even being so bold as to claim he as charged prices “greater than the highest price bitcoins sold for in the market prior to the purchase of the Utah house.”

First tax evasion prosecution centered solely on crypto

Ahlgren’s tax evasion only got bolder as the years passed after this first fraud, the DOJ said.

In 2018 and 2019, he sold more bitcoins, earning more than $650,000 and deciding not to report any of it on his tax returns for those years. That meant that he needed to actively conceal the earnings, but he’d been apparently researching how mixers are used to disguise where bitcoins come from since at least 2014, the feds found, referencing a blog he wrote exhibiting his knowledge. And that’s not the only step he took to try to trick the Internal Revenue Service.

Don’t use crypto to cheat on taxes: Bitcoin bro gets 2 years Read More »

errant-reference-in-macos-15.2-seems-to-confirm-m4-macbook-airs-for-2025

Errant reference in macOS 15.2 seems to confirm M4 MacBook Airs for 2025

The macOS 15.2 update that was released earlier today came with a handful of new features, plus something unexpected: an apparently accidental reference to the upcoming M4 MacBook Airs. MacRumors reports that the “Mac16,12” and “Mac16,13” model identifiers reference 13- and 15-inch models of the M4 Air and that both are coming in 2025.

That a MacBook Air refresh is planned for next year isn’t much of a surprise at this point—in reporting that pretty much nailed the details of the first M4 Macs, Bloomberg’s Mark Gurman has said that the Air, the Mac Studio, and the Mac Pro are all slated for updates throughout 2025.

But a reference in the current release of macOS could point to a launch sooner rather than later; the M4 Mac mini was referenced in a macOS update in mid-September around a month and a half before it was released. The M3 Airs came out in March this year, but Apple has been known to put out new Macs as early as January in recent years.

The M4 isn’t a gigantic update over the M3—we tested its performance in the M4 iMac, though a passively cooled MacBook Air version would likely be a bit slower at heavier workloads—but the fully enabled version does come with two extra CPU cores and some nice quality-of-life updates. Those updates include Thunderbolt 5 ports and support for a total of three displays (two external and the built-in screen), up from a total of two for the M1, M2, and M3 MacBook Airs.

We didn’t get M4 MacBook Airs in November, but Apple did “update” the M2 and M3 versions from 8GB to 16GB of RAM without increasing their prices. The RAM increase will be useful for all kinds of things, though it could be a harbinger of increased memory requirements for upcoming Apple Intelligence features.

Errant reference in macOS 15.2 seems to confirm M4 MacBook Airs for 2025 Read More »

google-goes-“agentic”-with-gemini-2.0’s-ambitious-ai-agent-features

Google goes “agentic” with Gemini 2.0’s ambitious AI agent features

On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It’s similar to multimodal AI models like GPT-4o, which powers OpenAI’s ChatGPT.

“Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times,” said Google in a statement. “Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed.”

Gemini 2.0 Flash—which is the smallest model of the 2.0 family in terms of parameter count—launches today through Google’s developer platforms like Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.

The company addressed potential misuse of generated content by implementing SynthID watermarking technology on all audio and images created by Gemini 2.0 Flash. This watermark appears in supported Google products to identify AI-generated content.

Google’s newest announcements lean heavily into the concept of agentic AI systems that can take action for you. “Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” said Google CEO Sundar Pichai in a statement. “Today we’re excited to launch our next era of models built for this new agentic era.”

Google goes “agentic” with Gemini 2.0’s ambitious AI agent features Read More »

reminder:-donate-to-win-swag-in-our-annual-charity-drive-sweepstakes

Reminder: Donate to win swag in our annual Charity Drive sweepstakes

If you’ve been too busy punching virtual Nazis to take part in this year’s Ars Technica Charity Drive sweepstakes, don’t worry. You still have time to donate to a good cause and get a chance to win your share of over $4,000 worth of swag (no purchase necessary to win).

In the first three days of the drive, over 100 readers have contributed almost $9,500 to either the Electronic Frontier Foundation or Child’s Play as part of the charity drive (Child’s Play is now leading in the donation totals by about $1,000). That’s a long way off from 2020’s record haul of over $58,000, but there’s still plenty of time until the Charity Drive wraps up on Thursday, January 2, 2025.

That doesn’t mean you should put your donation off, though. Do yourself and the charities involved a favor and give now while you’re thinking about it.

See below for instructions on how to enter, and check out the Charity Drive kickoff post for a complete list of available prizes.

How it works

Donating is easy. Simply donate to Child’s Play using a credit card or PayPal or donate to the EFF using PayPal, credit card, or cryptocurrency. You can also support Child’s Play directly by using this Ars Technica campaign page or picking an item from the Amazon wish list of a specific hospital on its donation page. Donate as much or as little as you feel comfortable with—every little bit helps.

Reminder: Donate to win swag in our annual Charity Drive sweepstakes Read More »

micron’s-$6b-chips-funding-should-have-more-strings-attached,-critics-say

Micron’s $6B CHIPS funding should have more strings attached, critics say


Micron’s NY fabs are the only CHIPS projects undergoing full environmental review.

Micron Technology will receive more than $6.1 billion after the US Department of Commerce finalized one of the largest CHIPS Act awards ever to “the only US-based manufacturer of memory chips,” Vice President Kamala Harris said in a press statement.

Micron will use the funding to construct “several state-of-the-art memory chips facilities” in New York and Idaho, Harris said. The chipmaker has committed to a “$125 billion investment over the next few decades” and promised to create “at least 20,000 jobs,” Harris confirmed.

Additionally, Micron “agreed to preliminary terms for an additional investment of $275 million to expand” its facility in Manassas, Virginia, Harris confirmed. Those facilities will mostly be used to manufacture chips for automotive and defense industries, Harris noted.

Because of billions in CHIPS funding doled out by the Biden administration, Harris said, the US’s “share of advanced memory manufacturing” will go “from nearly 0 percent today to 10 percent over the next decade.”

The Semiconductor Industry Association, a trade and lobbying group that bills itself as “the voice of the semiconductor industry,” celebrated Micron’s award. In a press release, its president and CEO, John Neuffer, said that the award sets the US on a path to become a leading memory chip innovator.

“Memory is a technology critical to America’s economic future and national security, and Micron’s historic investments in producing memory chips in the US will strengthen US leadership for the long term,” Neuffer said.

In a statement, Micron President and CEO Sanjay Mehrotra said that “Micron is uniquely positioned to bring leading-edge memory manufacturing to the U.S., strengthening the country’s technology leadership and fostering advanced innovation.”

“Micron’s investments in domestic semiconductor manufacturing capabilities, supported by the bipartisan CHIPS Act, will help drive economic growth and ensure that the US remains at the forefront of technological advancements,” Mehrotra said.

Advocates: Micron needs to explain what a “good job” is

But while Neuffer joined Harris’ and the Commerce Department’s chorus, praising the award for creating “high-paying American jobs,” bolstering US national and economic security, and fueling “innovation for years to come,” communities are raising questions.

Advocates with Jobs to Move America (JMA)—who are organizing ahead of Micron’s New York construction starting in 2026—are concerned that Micron hasn’t been clear about what a “good job” is before moving into an area with “one of the highest poverty rates in the country.”

“There has been little discussion or firm commitments made regarding what exactly a ‘good job’ is, or how equitable access for said jobs will be achieved for current local residents,” JMA’s “Good Jobs Platform,” drafted earlier this year with more than 20 local advocacy groups, said.

“We define a ‘good job’ as one that guarantees: workers have a fair and clear process to organize a union without employer opposition, family-sustaining wages and comprehensive benefits, safe working conditions, equitable hiring and employment practices, and is supported by an accessible workforce pipeline,” the platform said.

And equally important are communities’ and workers’ health and safety concerns, the platform noted, urging that “a good job is a safe job.”

A senior researcher and policy coordinator for JMA, Anna Smith, told Ars that Harris’ statement was missing any mention of a community impact statement. And while Harris mentioned “utilizing project labor agreements and registered apprenticeship programs, which will further strengthen local economies” and “support workers,” more enforceable commitments are needed to protect communities as Micron’s construction begins, Smith said.

The “Good Jobs Platform” recommends a range of commitments, from labor peace agreements that would ensure workers can unionize to community workforce agreements keeping workers involved in key discussions regarding ongoing training and career development. Local labor leaders have sought similar commitments, urging Micron to commit to a community benefits agreement “that enshrines legally enforceable provisions that protect the community, its workers, and the environment.”

In his statement, Mehrotra said that Micron had formed local partnerships to build a “community investment framework” that would “revitalize central New York.”

Micron first CHIPS fabs to get full environmental review

More transparency is also urgently needed regarding Micron’s environmental commitments, advocates told Ars. In New York, Micron’s fabs are preparing to wipe out over 200 acres of mature forested wetlands, and so far, Micron has not provided a public “detailed mitigation plan to compensate for the loss,” a local environmental expert, Catherine Landis, warned during a public comment period on Micron’s plan to pave over the wetlands.

Unlike any other fab site in the US receiving CHIPS Act funding, Micron’s New York fabs must release a full environmental impact statement (EIS), which is currently being drafted and expected to be distributed to agencies this month, advocates told Ars. Construction has been delayed until the EIS is completed, at which point the public will gain a better understanding of how much harm could be caused by the project and what steps Micron will take to mitigate harms.

JMA has warned about potential impacts, like increased flooding in the area impacting both communities and Micron’s fabs. Destroying the wetlands will also displace federally protected endangered animal populations, JMA said, including the northern long-eared bat and the sedge wren. Potential chemical spills, reported at other US fabs, could endanger water quality, as could any mismanaged handling of chemical waste. And perhaps most critically, the energy demand to operate Micron’s facilities could risk setting back New York’s climate goals, JMA advocates said.

More transparency would help communities better prepare to welcome Micron and other chipmakers developing fabs across the US. JMA and local experts have agreed that the promised economic benefits Micron’s fabs will deliver in New York are a positive development, as are Micron’s commitments guaranteeing New York construction workers can unionize.

But communities will likely be the ones raising alarms as Micron’s operations introduce to the environment “thousands of compounds used in chip manufacture (most unregulated)” with “short- and long-term effects on plants, animals, people” still largely unknown, Landis said. And that’s where JMA hopes to make an impact, submitting freedom of information acts to request undisclosed data and pushing for community benefit agreements and other commitments from Micron to ensure communities aren’t irreversibly harmed by new fabs.

JMA expects that the EIS could help galvanize communities preparing for Micron’s construction to start in New York.

“I do think it is a really helpful tool that we have in our belt, and something that will help the public engage about concerns that we have,” Smith told Ars. “It spans, of course, air emissions, wastewater, runoff, toxics, wetlands, but it also includes things like housing and transportation, and those are things that we think that the Syracuse community is very concerned about.”

The EIS could mean that New York residents have a clearer understanding of how CHIPS funding may be polluting their communities. Most of the other communities nationwide impacted by CHIPS projects likely won’t have the same level of detailed information. Eric Romann, a JMA regional director, told Ars that, while “it’s positive that a higher bar has been set” for Micron’s New York project, that’s only “compared to the very low bar set for the other projects around the country, or you could say very low to non-existent bar.”

Micron declined to comment directly on workers’ concerns. In a statement provided to Ars, Micron’s spokesperson said that “Micron is committed to environmental stewardship across our global operations, including developing and maintaining critical environmental protections for our planned investment” in New York.

“A required environmental impact statement is currently in production with both federal and state lead agencies, and we are working closely with government stakeholders to ensure we meet any environmental permitting required for the project,” Micron’s spokesperson said. “We look forward to engaging with the public and government stakeholders during comment periods for the project’s draft environmental impact statement and environmental permitting in the near future.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Micron’s $6B CHIPS funding should have more strings attached, critics say Read More »

amazon-starts-selling-hyundai-cars,-more-brands-next-year

Amazon starts selling Hyundai cars, more brands next year

Fear not—there’s no one-click option, so no one should be in any danger of absent-mindedly buying a brand-new Palisade. Instead, there’s a “Begin Purchase” button, at which point you can choose to pay the entire amount or finance the purchase.

Here is a huge difference to the traditional dealership experience: There’s no negotiation, no browbeating or asking you how much of a monthly payment you want to make, and no upselling paint protection or the like. Everything can be done through amazon with a few clicks, ending with scheduling a pick-up time for the new car at the dealership. You can even trade in your existing car during the process. (I only tested it so far lest I accidentally end up with a brand-new Ioniq 5 N, which I still can’t charge at home.)

Amazon says it will add more brands next year, as well as leasing, and will also expand to more cities. For now, Amazon Autos is available in Atlanta, Austin, Baltimore, Beaumont-Port Arthur, Birmingham, Boston, Champaign/Springfield, Charlotte, Chicago, Cincinnati, Cleveland, Columbia, Columbus, Dallas, Denver, El Paso, Fond Du Lac, Ft. Myers/Naples, Harrisburg-Lancaster-Lebanon-York, Harrisonburg, Hartford, Houston, Indianapolis, Jacksonville, Los Angeles, Miami, Milwaukee, Minneapolis-St. Paul, Nashville, New York, Orlando, Philadelphia, Phoenix, Pittsburgh, Portland, Providence, Raleigh-Durham, Salt Lake City, San Antonio, San Diego, San Francisco, Seattle, Sheboygan, Springfield, St. Louis, Tampa, West Palm Beach, and Washington, DC.

Amazon starts selling Hyundai cars, more brands next year Read More »

avian-flu-cases-are-on-the-upswing-at-big-dairy-farms

Avian flu cases are on the upswing at big dairy farms


Rise in cases amplifies concerns about consolidation in agriculture.

Holstein dairy cows in a freestall barn. Credit: Getty |

A handful of dairy farms sprawl across the valley floor, ringed by the spikey, copper-colored San Jacinto mountains. This is the very edge of California’s dairy country—and so far, the cows here are safe.

But everyone worries that the potentially lethal bird flu is on the way. “I hope not,” says Clemente Jimenez, as he fixes a hose at Pastime Lakes, a 1,500-head dairy farm. “It’s a lot of trouble.”

Further north and west, in the San Joaquin Valley—the heart of the state’s dairy industry—the H5N1 virus, commonly known as bird flu, has rippled through the massive herds that provide most of the country’s milk. Farmworkers have piled carcasses into black and white heaps. This week the state reported 19 new confirmed cases in cows and more than 240,000 in chickens. Another 50,000 cases were confirmed at a chicken breeding facility in Oklahoma.

Most worrying, though, is the spillover from livestock to humans. So far, 58 people in the United States have tested positive for bird flu. Fifty-six of them worked either on dairy or poultry farms where millions of birds had to be culled.

The Centers for Disease Control and Prevention confirmed that four of the cases in humans had no known connection to livestock, raising fears that the virus eventually could jump from one human to another, though that hasn’t happened yet. On Thursday, a study published in Science by researchers at The Scripps Research Institute said it would take only a single mutation in the H5N1 virus for it to attach itself to human receptor cells.

Large livestock facilities in states across the country, and especially in California, have become the epicenters of these cases, and some researchers say that’s no surprise: Putting thousands, even hundreds of thousands, of animals together in confined, cramped barns or corrals creates a petri dish for viruses to spread, especially between genetically similar and often stressed animals.

More drought and higher temperatures, fueled by climate change, supercharge those conditions.

“Animal production acts like a connectivity for the virus,” said Paula Ribeiro Prist, a conservation scientist with the EcoHealth Alliance, a not-for-profit group that focuses on research into pandemics. “If you have a lot of cattle being produced in more places, you have a higher chance of the virus spreading. When you have heat stress, they’re more vulnerable.”

So far, this bird flu outbreak has affected more than 112 million chickens, turkeys, and other poultry across the US since it was first detected at a turkey-producing facility in Indiana in February 2022. In March of this year, officials confirmed a case of the virus in a Texas dairy cow—the first evidence that the virus had jumped from one livestock species to another. Since then, 720 cows have been affected, most of them in California, where there have been nearly 500 recorded cases.

In the United States, a trend of consolidation in agriculture, particularly dairies, has seen more animals housed together on ever-larger farms as the number of small farms has rapidly shrunk. In 1987, half of the country’s dairy cows were in herds of 80 or more, and half in herds of 80 or fewer. Twenty years later, half the country’s cows were raised in herds of 1,300 or more. Today, 5,000-head dairies are common, especially in the arid West.

California had just over 21,000 dairy farms in 1950, producing 5.6 billion pounds of milk. Today, it has 1,100 producing around 41 billion pounds. Total US milk production has soared from about 116 billion pounds in 1950 to about 226 billion today.

“The pace of consolidation in dairy far exceeds the pace of consolidation seen in most of US agriculture,” a recent USDA report said.

Initially, researchers thought the virus was spreading through cows’ respiration, but recent research suggests it’s being transmitted through milking equipment and milk itself.

“It’s been the same strain in dairy cows… We don’t necessarily have multiple events of spillover,” said Meghan Davis, an associate professor of environmental health and engineering at Johns Hopkins Bloomberg School of Public Health. “Now it’s transmission from one cow to the next, often through milking equipment.”

It’s still unclear what caused that initial jump from wild birds, which are the natural reservoirs of the virus, to commercial poultry flocks and then to cows, but some research suggests that changing migration patterns caused by warmer weather are creating conditions conducive to the spreading of viruses. Some wild birds are migrating earlier than usual, hatching juvenile birds in new or different habitats.

“This is leading to a higher number of young that are naive to the virus,” Prist explained. “This makes the young birds more infectious—they have a higher chance of transmitting the virus because they don’t have antibodies protecting them.

“They’re going to different areas and they’re staying longer,” Prist added, “so they have higher contact with other animals, to the other native populations, that they have never had contact [with] before.”

That, researchers believe, could have initiated the spillover from wild birds to poultry, where it has become especially virulent. In wild birds, the virus tends to be a low pathogenic strain that occurs naturally, causing only minor symptoms in some birds.

“But when we introduce the virus to poultry operations where birds live in unsanitary and highly confined conditions, the virus is … able to spread through them like wildfire,” said Ben Rankin, a legal expert with the Center for Biological Diversity, an advocacy group. “There are so many more opportunities for the virus to mutate, to adapt to new kinds of hosts, and eventually, the virus spills back into the wild and this creates this cycle, or this loop, of intensification and increasing pathogenicity.”

Rankin pointed to an analysis that looked at 39 different viral outbreaks in birds from 1959 to 2015, where a low pathogenic avian influenza became a highly pathogenic one. Out of those, 37 were associated with commercial poultry operations. “So it’s a very clear relationship between the increasing pathogenicity of this virus and its relationship with industrial animal raising,” Rankin said.

Some researchers worry that large farms with multiple species are providing the optimal conditions for more species-to-species transfer. In North Carolina, the second-largest hog-producing state after Iowa, some farmers have started raising both chickens and hogs under contracts that require huge numbers of animals.

“So you’ve got co-location at a pretty substantial scale of herd size, on a single property,” said Chris Heaney, an associate professor of environmental health, engineering, epidemiology, and international health at the Bloomberg School of Public Health. “Another concern is seeing it jump into swine. That host, in particular, is uniquely well suited for those influenza viruses to re-assort and acquire properties that are very beneficial for taking up residence in humans.”

In late October, the USDA reported the first case of bird flu in a pig that lived on a small poultry and hog farm in Oregon.

Farmworker advocates say the number of cases in humans is likely underreported, largely because the immigrant and non-English speaking workforce on farms could be reluctant to seek help or may not be informed about taking precautions.

“What we’re dealing with is the lack of information from the top to the workers,” said Ana Schultz, a director with Project Protect Food Systems Workers.

In northern Colorado, home to dozens of large dairies, Schultz started to ask dairy workers in May if they were getting protective gear and whether anyone was falling ill. Many workers told her they were feeling flu-ish but didn’t go to the doctor for fear of losing a day of work or getting fired.

“I feel like there’s a lot more avian flu incidents, but no one knows about it because they don’t go to the doctor and they don’t get tested,” Schultz said. “In all the months that we’ve been doing outreach and taking protective gear and flyers, we haven’t had one single person tell us they’ve been to the doctor.”

This story originally appeared on Inside Climate News.

Georgina Gustin covers agriculture for Inside Climate News and has reported on the intersections of farming, food systems, and the environment for much of her journalism career. Her work has won numerous awards, including the John B. Oakes Award for Distinguished Environmental Journalism, and she was twice named the Glenn Cunningham Agricultural Journalist of the Year, once with ICN colleagues. She has worked as a reporter for The Day in New London, Conn., the St. Louis Post-Dispatch and CQ Roll Call, and her stories have appeared in The New York Times, Washington Post, and National Geographic’s The Plate, among others. She is a graduate of the Columbia University Graduate School of Journalism and the University of Colorado at Boulder.

Photo of Inside Climate News

Avian flu cases are on the upswing at big dairy farms Read More »

we’ve-got-a-lavish-new-trailer-for-star-trek:-section-31

We’ve got a lavish new trailer for Star Trek: Section 31

Michelle Yeoh stars in Star Trek: Section 31.

We’ve got a shiny new trailer for Star Trek: Section 31, the long-awaited spinoff film that brings back Michelle Yeoh’s magnificent Phillipa Georgiou from Star Trek: Discovery. The film will give us the backstory for Georgiou’s evil Mirror Universe counterpart, where she was a despotic emperor who murdered millions of her own people.

As previously reported, Yeoh’s stylishly acerbic Georgiou was eventually written out of Discovery, but fans took hope from rumors of a spinoff series featuring the character. That turned into a spinoff film, and we’ll take it. Miku Martineau plays a young Phillipa Georgiou in the film. Meanwhile, Yeoh’s older Georgiou is tasked with protecting the United Federation of Planets as part of a black ops group called Section 31, while dealing with all the blood she’s spilled in her past.

Any hardcore Star Trek fan will tell you that Section 31 was first introduced as an urban legend of sorts in Star Trek: Deep Space Nine. Apparently Ira Steven Behr—who came up with the idea of a secret rogue organization within Starfleet doing shady things to protect the Federation—took inspiration from Commander Sisko’s comment in one episode about how “It’s easy to be a saint in paradise.” The name is taken from Starfleet Charter Article 14, Section 31, which allows Starfleet to take extraordinary measures in the face of extreme threats—including sabotage, assassination, and even biological warfare.

We’ve got a lavish new trailer for Star Trek: Section 31 Read More »