Author name: Shannon Garcia

Python plan to boost software security foiled by Trump admin’s anti-DEI rules

“Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries,” the Python Software Foundation said.

Board voted unanimously to withdraw application

The Carpentries, which teaches computational and data science skills to researchers, said in June that it withdrew its grant proposal after “we were notified that our proposal was flagged for DEI content, namely, for ‘the retention of underrepresented students, which has a limitation or preference in outreach, recruitment, participation that is not aligned to NSF priorities.’” The Carpentries was also concerned about the National Science Foundation rule against grant recipients advancing or promoting DEI in “any” program, a change that took effect in May.

“These new requirements mean that, in order to accept NSF funds, we would need to agree to discontinue all DEI focused programming, even if those activities are not carried out with NSF funds,” The Carpentries’ announcement in June said, explaining the decision to rescind the proposal.

The Python Software Foundation similarly decided that it “can’t agree to a statement that we won’t operate any programs that ‘advance or promote’ diversity, equity, and inclusion, as it would be a betrayal of our mission and our community,” it said yesterday. The foundation board “voted unanimously to withdraw” the application.

The Python foundation said it is disappointed because the project would have offered “invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks.” The plan was to “create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.”
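To make the idea of capability analysis concrete, here is a minimal, hypothetical sketch of what a proactive scan of an uploaded package's source might look like. The capability map and suspicious-call list below are illustrative assumptions, not the PSF's actual design, which would be built and tuned against a dataset of known malware.

```python
# A hypothetical, simplified sketch of static capability analysis for one
# Python source file. The capability map and suspicious-call list are
# illustrative only; a real PyPI scanner would be far more sophisticated.
import ast

CAPABILITY_MAP = {
    "socket": "network access",
    "urllib": "network access",
    "requests": "network access",
    "subprocess": "process execution",
    "ctypes": "native code loading",
}

SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}


def analyze_capabilities(source: str) -> dict:
    """Return the capabilities a module appears to request, based on its AST."""
    tree = ast.parse(source)
    findings = {"imports": set(), "dynamic_code": set()}

    for node in ast.walk(tree):
        # Flag imports of modules that grant notable capabilities.
        if isinstance(node, ast.Import):
            roots = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            roots = [(node.module or "").split(".")[0]]
        else:
            roots = []
            # Flag dynamic code execution, a common trait of obfuscated malware.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in SUSPICIOUS_CALLS:
                    findings["dynamic_code"].add(node.func.id)
        for root in roots:
            if root in CAPABILITY_MAP:
                findings["imports"].add(f"{root}: {CAPABILITY_MAP[root]}")

    return findings


if __name__ == "__main__":
    sample = "import subprocess\nexec(open('payload.py').read())\n"
    print(analyze_capabilities(sample))
```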

The foundation is still hoping to do that work and ended its blog post with a call for donations from individuals and companies that use Python.

Here’s how Slate Auto plans to handle repairs to its electric trucks

Earlier this year, Slate Auto emerged from stealth mode and stunned industry watchers with the Slate Truck, a compact electric pickup it plans to sell for less than $30,000. Achieving that price won’t be easy, but Slate really does look to be doing things differently from the rest of the industry—even Tesla. For example, the truck will be made from just 600 parts, with no paint or even an infotainment system, to keep costs down.

An unanswered question until now has been “where do I take it to be fixed if it breaks?” Today, we have an answer. Slate is partnering with RepairPal to use the latter’s network of more than 4,000 locations across the US.

“Slate’s OEM partnership with RepairPal’s nationwide network of service centers will give Slate customers peace of mind while empowering independent service shops to provide accessorization and service,” said Slate chief commercial officer Jeremy Snyder.

RepairPal locations will also be able to install the accessories that Slate plans to offer, like a kit to turn the bare-bones pickup truck into a crossover. And some but not all RepairPal sites will be able to work on the Slate’s high-voltage powertrain.

The startup had some other big news today. It has negotiated access for its customers to the Tesla Supercharger network, and since the truck has a NACS port, there will be no need for an adapter.

The Slate truck is due next year.

Australia’s social media ban is “problematic,” but platforms will comply anyway

Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.

On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.

Firms risk fines of up to $32.5 million for failing to block underage users.

Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.

An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.

Those users must be allowed to download their data easily before their account is removed.

Some platforms can otherwise allow users to simply deactivate and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”

Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids using AI to fake IDs, deepfakes to trick face scans, or virtual private networks (VPNs) to make their location appear to be basically anywhere else in the world with less restrictive child safety policies.

Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.
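As a rough sketch of what that compliance flow might look like in practice, here is a minimal, hypothetical example; the class, function names, and decision logic are illustrative assumptions, not anything the regulator or the platforms have published.

```python
# A hypothetical sketch of the account-handling flow described by the
# regulator: export data first, then deactivate or remove under-16 accounts.
# Names and logic are illustrative assumptions, not any platform's real code.
from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    user_id: str
    birth_date: date
    data_exported: bool = False


def age_on(birth_date: date, today: date) -> int:
    """Whole years of age as of `today`."""
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years


def handle_account(acct: Account, today: date, supports_deactivation: bool) -> str:
    """Decide what to do with an account under the under-16 rule."""
    if age_on(acct.birth_date, today) >= 16:
        return "keep"
    if not acct.data_exported:
        return "offer_data_download"  # users must be able to export data first
    # Some platforms may merely deactivate and retain data; others remove outright.
    return "deactivate" if supports_deactivation else "remove"


if __name__ == "__main__":
    acct = Account("u123", birth_date=date(2012, 5, 1), data_exported=True)
    print(handle_account(acct, today=date(2025, 12, 10), supports_deactivation=True))
```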

Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement

In May, OpenAI abandoned its plan to fully convert to a for-profit company after pressure from regulators and critics. The company instead shifted to a modified approach where the nonprofit board would retain control while converting its for-profit subsidiary into a public benefit corporation (PBC).

What changed in the agreement

The revised deal extends Microsoft’s intellectual property rights through 2032 and now includes models developed after AGI is declared. Microsoft holds IP rights to OpenAI’s model weights, architecture, inference code, and fine-tuning code until the expert panel confirms AGI or through 2030, whichever comes first. The new agreement also codifies that OpenAI can formally release open-weight models (like gpt-oss) that meet requisite capability criteria.

However, Microsoft’s rights to OpenAI’s research methods, defined as confidential techniques used in model development, will expire at those same thresholds. The agreement explicitly excludes Microsoft from having rights to OpenAI’s consumer hardware products.

The deal allows OpenAI to develop some products jointly with third parties. API products built with other companies must run exclusively on Azure, but non-API products can operate on any cloud provider. This gives OpenAI more flexibility to partner with other technology companies while keeping Microsoft as its primary infrastructure provider.

Under the agreement, Microsoft can now pursue AGI development alone or with partners other than OpenAI. If Microsoft uses OpenAI’s intellectual property to build AGI before the expert panel makes a declaration, those models must exceed compute thresholds that are larger than what current leading AI models require for training.

The revenue-sharing arrangement between the companies will continue until the expert panel verifies that AGI has been reached, though payments will extend over a longer period. OpenAI has committed to purchasing $250 billion in Azure services, and Microsoft no longer holds a right of first refusal to serve as OpenAI’s compute provider. This lets OpenAI shop around for cloud infrastructure if it chooses, though the massive Azure commitment suggests it will remain the primary provider.

Asking (Some Of) The Right Questions

Consider this largely a follow-up to Friday’s post about a statement aimed at creating common knowledge around it being unwise to build superintelligence any time soon.

Mainly, there was a great question asked, so I took a few-hour shot at writing out my answer. I then close with a few other follow-ups on issues related to the statement.

There are some confusing wires potentially crossed here but the intent is great.

Scott Alexander: I think removing a 10% chance of humanity going permanently extinct is worth another 25-50 years of having to deal with the normal human problems the normal way.

Sriram Krishnan: Scott what are verifiable empirical things ( model capabilities / incidents / etc ) that would make you shift that probability up or down over next 18 months?

I went through three steps interpreting this (where p(doom) = probability of existential risk to humanity, either extinction, irrecoverable collapse or loss of control over the future).

  1. Instinctive read is the clearly intended question, an excellent one: Either “What would shift the amount that waiting 25-50 years would reduce p(doom)?” or “What would shift your p(doom)?”

  2. Literal interpretation, also interesting but presumably not intended: What would shift how much of a reduction in p(doom) would be required to justify waiting?

  3. Conclusion on reflection: Mostly back to the first read.

All three are excellent, distinct questions, alongside a fourth, highly related question: the probability that we will be capable of building superintelligence, or sufficiently advanced AI that creates 10% or more existential risk.

The 18-month timeframe seems arbitrary, but it makes for a good exercise to ask only within the window of ‘we are reasonably confident that we do not expect an AGI-shaped thing.’

Agus offers his answers to a mix of these different questions, in the downward direction – as in, which things would make him feel safer.

Scott Alexander offers his answer; I concur that mostly I expect only small updates.

Scott Alexander: Thanks for your interest. I’m not expecting too much danger in the next 18 months, so these would mostly be small updates, but to answer the question:

MORE WORRIED:

– Anything that looks like shorter timelines, especially superexponential progress on METR time horizons graph or early signs of recursive self-improvement.

– China pivoting away from their fast-follow strategy towards racing to catch up to the US in foundation models, and making unexpectedly fast progress.

– More of the “model organism shows misalignment in contrived scenario” results, in gradually less and less contrived scenarios.

– Models more likely to reward hack, eg commenting out tests instead of writing good code, or any of the other examples in here – or else labs only barely treading water against these failure modes by investing many more resources into them.

– Companies training against chain-of-thought, or coming up with new methods that make human-readable chain-of-thought obsolete, or AIs themselves regressing to incomprehensible chains-of-thought for some reason (see eg https://antischeming.ai/snippets#reasoning-loops).

LESS WORRIED

– The opposite of all those things.

– Strong progress in transparency and mechanistic interpretability research.

– Strong progress in something like “truly understanding the nature of deep learning and generalization”, to the point where results like https://arxiv.org/abs/2309.12288 make total sense and no longer surprise us.

– More signs that everyone is on the same side and government is taking this seriously (thanks for your part in this).

– More signs that industry and academia are taking this seriously, even apart from whatever government requires of them.

– Some sort of better understanding of bottlenecks, such that even if AI begins to recursively self-improve, we can be confident that it will only proceed at the rate of chip scaling or [some other nontrivial input]. This might look like AI companies releasing data that help give us a better sense of the function mapping (number of researchers) x (researcher experience/talent) x (compute) to advances.

This is a quick and sloppy answer, but I’ll try to get the AI Futures Project to make a good blog post on it and link you to it if/when it happens.

Giving full answers to these questions would require at least an entire long post, but to give what was supposed to be the five minute version that turned into a few hours:

Quite a few things could move the needle somewhat, often quite a lot. This list assumes we don’t actually get close to AGI or ASI within those 18 months.

  1. Faster timelines increase p(doom), slower timelines reduce p(doom).

  2. Capabilities being more jagged reduces p(doom), less jagged increases it.

  3. Coding or ability to do AI research related tasks being a larger comparative advantage of LLMs increases p(doom), the opposite reduces it.

  4. Quality of the discourse and its impact on ability to make reasonable decisions.

  5. Relatively responsible AI sources being relatively well positioned reduces p(doom), them being poorly positioned increases it, with the order being roughly Anthropic → OpenAI and Google (and SSI?) → Meta and xAI → Chinese labs.

  6. Updates about the responsibility levels and alignment plans of the top labs.

  7. Updates about alignment progress, alignment difficulty and whether various labs are taking promising approaches versus non-promising approaches.

    1. New common knowledge will often be an ‘unhint,’ as in the information makes the problem easier to solve via making you realize why your approach wouldn’t work.

    2. This can be good or bad news, depending on what you understood previously. Many other things are also in the category of ‘important, but the sign of the impact is weird.’

    3. Reward hacking is a great example of an unhint, in that I expect to ‘get bad news’ but for the main impact of this being that we learn the bad news.

    4. Note that models are increasingly situationally aware and capable of thinking ahead, as per Claude Sonnet 4.5, and that we need to worry more that things like not reward hacking happen ‘because the model realized it couldn’t get away with it’ or because the model was worried it might be in an eval, rather than because the model does not want to reward hack. Again, it is very complex which direction to update.

    5. Increasing situational awareness is a negative update but mostly priced in.

    6. Misalignment in less contrived scenarios would indeed be bad news, and ‘the less contrived the more misaligned’ would be the worst news of all here.

    7. Training against chain-of-thought would be a major negative update, as would be chain-of-thought becoming impossible for humans to read.

    8. This section could of course be written at infinite length.

  8. In particular, updates on whether the few approaches that could possibly work look like they might actually work, and we might actually try them sufficiently wisely that they might work. Various technical questions too complex to list here.

  9. Unexpected technical developments of all sorts, positive and negative.

  10. Better understanding of the game theory, decision theory, economic theory or political economy of an AGI future, and exactly how impossible the task is of getting a good outcome conditional on not failing straight away on alignment.

  11. Ability to actually discuss seriously the questions of how to navigate an AGI future if we can survive long enough to face these ‘phase two’ issues, and level of hope that we would not commit collective suicide even in winnable scenarios. If all the potentially winning moves become unthinkable, all is lost.

  12. Level of understanding by various key actors of the situation aspects, and level of various pressures that will be placed upon them, including by employees and by vibes and by commercial and political pressures, in various directions.

  13. Prediction of how various key actors will make various of the important decisions in likely scenarios, and what their motivations will be, and who within various corporations and governments will be making the decisions that matter.

  14. Government regulatory stance and policy, level of transparency and state capacity and ability to intervene. Stance towards various things. Who has the ear of the government, both White House and Congress, and how powerful is that ear. Timing of the critical events and which administration will be handling them.

  15. General quality and functionality of our institutions.

  16. Shifts in public perception and political winds, and how they are expected to impact the paths that we take, and other political developments generally.

  17. Level of potential international cooperation and groundwork and mechanisms for doing so. Degree to which the Chinese are AGI pilled (more is worse).

  18. Observing how we are reacting to mundane current AI, and how this likely extends to how we will interact with future AI.

  19. To some extent, information about how vulnerable or robust we are on CBRN risks, especially bio and cyber, the extent hardening tools seem to be getting used and are effective, and evaluation of the Fragile World Hypothesis and future offense-defense balance, but this is often overestimated as a factor.

  20. Expectations on bottlenecks to impact even if we do get ASI with respect to coding, although again this is usually overestimated.

The list could go on. This is a complex test and on the margin everything counts. A lot of the frustration with discussing these questions is different people focus on very different aspects of the problem, both in sensible ways and otherwise.

That’s a long list, so to summarize the most important points on it:

  1. Timelines.

  2. Jaggedness of capabilities relative to humans or requirements of automation.

  3. The relative position in jaggedness of coding and automated research.

  4. Alignment difficulty in theory.

  5. Alignment difficulty in practice, given who will be trying to solve this under what conditions and pressures, with what plans and understanding.

  6. Progress on solving gradual disempowerment and related issues.

  7. Quality of policy, discourse, coordination and so on.

  8. World level of vulnerability versus robustness to various threats (overrated, but still an important question).

Imagine we have a distribution of ‘how wicked and impossible are the problems we would face if we build ASI, with respect to both alignment and to the dynamics we face if we handle alignment, and we need to win both’ that ranges from ‘extremely wicked but not strictly impossible’ to full Margaritaville (as in, you might as well sit back and have a margarita, cause it’s over).

At the same time as everything counts, the core reasons these problems are wicked are fundamental. Many are technical but the most important one is not. If you’re building sufficiently advanced AI that will become far more intelligent, capable and competitive than humans, by default this quickly ends poorly for the humans.

On a technical level, for largely but not entirely Yudkowsky-style reasons, the behaviors and dynamics you get prior to AGI and ASI are not that informative of what you can expect afterwards, and when they are, it is often in a non-intuitive way, or mostly via your expectations for how the humans will act.

Note that from my perspective, we are here starting the conditional risk a lot higher than 10%. My conditional probability here is ‘if anyone builds it, everyone probably dies,’ as in a number (after factoring in modesty) between 60% and 90%.

My probability here is primarily different from Scott’s (AIUI) because I am much more despairing about our ability to muddle through or get success with an embarrassingly poor plan on alignment and disempowerment, but it is not higher because I am not as despairing as some others (such as Soares and Yudkowsky).

If I were confident that the baseline conditional-on-ASI-soonish risk was at most 10%, then I would still be trying to mitigate that risk, and it would still be humanity’s top problem, but I would understand wanting to continue onward regardless, and I wouldn’t have signed the recent statement.

In order to move me down enough to think that moving forward would be a reasonable thing to do any time soon, out of anything other than desperation that there was no other option, I would need at least:

  1. An alignment plan that looked like it would work, on the first try. That could be a new plan, or it could be new very positive updates on one of the few plans we have now that I currently think could possibly work, all of which are atrociously terrible compared to what I would have hoped for a few years ago, but this is mitigated by having forms of grace available that seemingly render the problem a lower level of impossible and wicked than I previously expected (although still highly wicked and impossible).

    1. Given the 18 month window and current trends, this probably either is something new, or it is a form of (colloquially speaking) ‘we can hit, in a remarkably capable model, an attractor state basin in distribution mindspace that is robustly good such that it will want to modify itself and its de facto goals and utility function and its successors continuously towards the target we actually need to hit and wanting to hit the target we actually need to hit.’

    2. Then again, perhaps I will be surprised in some way.

  2. Confidence that this plan would actually get executed, competently.

  3. A plan to solve gradual disempowerment issues, in a way I was confident would work, create a future with value, and not lead to unacceptable other effects.

  4. Confidence that this plan would actually get executed, competently.

In a sufficiently dire race condition, where all coordination efforts and alternatives have failed, of course you go with the best option you have, especially if up against an alternative that is 100% (minus epsilon) to lose.

Everything above will also shift this, since it gives you more or less doom that extra time can prevent. What else can shift the estimate here within 18 months?

Again, ‘everything counts in large amounts,’ but centrally we can narrow it down.

There are five core questions, I think?

  1. What would it take to make this happen? As in, will this indefinitely be a sufficiently hard thing to build that we can monitor large data centers, or do we need to rapidly keep an eye on smaller and smaller compute sources? Would we have to do other interventions as well?

  2. Are we ready to do this in a good way and how are we going to go about it? If we have a framework and the required technology, and can do this in a clean way, with voluntary cooperation and without either use or massive threat of force or concentration of power, especially in a way that allows us to still benefit from AI and work on alignment and safety issues effectively, then that looks a lot better. Every way that this gets worse makes our prospects here worse.

  3. Did we get too close to the finish line before we tried to stop this from happening? A classic tabletop exercise endgame is that the parties realize close to the last moment that they need to stop things, or leverage is used to force this, but the AIs involved are already superhuman, so the methods used would have worked before but no longer work. And humanity loses.

  4. Do we think we can make good use of this time, that the problem is solvable? If the problems are unsolvable, or our civilization isn’t up for solving them, then time won’t solve them.

  5. How much risk do we take on as we wait, in other ways?

One could summarize this as:

  1. How would we have to do this?

  2. Are we going to be ready and able to do that?

  3. Will it be too late?

  4. Would we make good use of the time we get?

  5. What are the other risks and costs of waiting?

I expect to learn new information about several of these questions.

(My current median time-to-crazy in this sense is roughly 2031, but with very wide uncertainty and error bars and not the attention I would put on that question if I thought the exact estimate mattered a lot, and I don’t feel I would ‘have any right to complain’ if the outcome was very far off from this in either direction. If a next-cycle model did get there I don’t think we are entitled to be utterly shocked by this.)

This is the biggest anticipated update because it will change quite a lot. Many of the other key parts of the model are much harder to shift, but timelines are an empirical question that shifts constantly.

In the extreme, if progress looks to be stalling out and remaining at ‘AI as normal technology,’ then this would be very good news. The best way to not build superintelligence right away is if building it is actually super hard and we can’t, we don’t know how. It doesn’t strictly change the conditional in questions one and two, but it renders those questions irrelevant, and this would dissolve a lot of practical disagreements.

Signs of this would be various scaling laws no longer providing substantial improvements or our ability to scale them running out, especially in coding and research, bending the curve on the METR graph and other similar measures, the systematic failure to discover new innovations, extra work into agent scaffolding showing rapidly diminishing returns and seeming upper bounds, funding required for further scaling drying up due to lack of expectations of profits or some sort of bubble bursting (or due to a conflict) in a way that looks sustainable, or strong evidence that there are fundamental limits to our approaches and therefore important things our AI paradigm simply cannot do. And so on.

Ordinary shifts in the distribution of time to ASI come with every new data point. Every model that disappoints moves you back, observing progress moves you forward. Funding landscape adjustments, levels of anticipated profitability and compute availability move this. China becoming AGI pilled versus fast following or foolish releases could move this. Government stances could move this. And so on.

Time passing without news lengthens timelines. Most news shortens timelines. The news item that lengthens timelines is mostly ‘we expected this new thing to be better or constitute more progress, in some form, and instead it wasn’t and it didn’t.’

To be clear that I am doing this: There are a few things that I didn’t make explicit, because one of the problems with such conversations is that in some ways we are not ready to have these conversations, as many branches of the scenario tree involve trading off sacred values or making impossible choices or they require saying various quiet parts out loud. If you know, you know.

That was less of a ‘quick and sloppy’ answer than Scott’s, but still feels very quick and sloppy versus what I’d offer after 10 hours, or 100 hours.

The reason we need letters explaining not to build superintelligence at the first possible moment regardless of the fact that it probably kills us is that people are advocating for building superintelligence regardless of the fact that it probably kills us.

Jawwwn: Palantir CEO Alex Karp on calls for a “ban on AI Superintelligence”

“We’re in an arms race. We’re either going to have AI and determine the rules, or our adversaries will.”

“If you put impediments… we’ll be buying everything from them, including ideas on how to run our gov’t.”

He is the CEO of Palantir, and he literally said this is an ‘arms race.’ The first rule of an arms race is you don’t loudly tell them you’re in an arms race. The second rule is you don’t win it by building superintelligence as your weapon.

Once you build superintelligence, especially if you build it explicitly as a weapon to ‘determine the rules,’ humans no longer determine the rules. Or anything else. That is the point.

Until we have common knowledge of the basic facts that goes at least as far as major CEOs not saying the opposite in public, job one is to create this common knowledge.

I also enjoyed Tyler Cowen fully Saying The Thing, this really is his position:

Tyler Cowen: Dean Ball on the call for a superintelligence ban, Dean is right once again. Mainly (once again) a lot of irresponsibility on the other side of that ledger, you will not see them seriously address the points that Dean raises. If you want to go this route, do the hard work and write an 80-page paper on how the political economy of such a ban would work.

That’s right. If you want to say that not building superintelligence as soon as possible is a good idea, first you have to write an 80-page paper on the political economy of a particular implementation of a ban on that idea. That’s it, he doesn’t make the rules. Making a statement would otherwise be irresponsible, so until such time as a properly approved paper comes out on these particular questions, we should instead be responsible by going ahead, not talking about this, and focusing on building superintelligence as quickly as possible.

I notice that a lot of people are saying that humanity has already lost control over the development of AI, and that there is nothing we can do about this, because the alternative to losing control over the future is even worse. In which case, perhaps that shows the urgency of the meddling kids proving them wrong?

Alternatively…

How dare you try to prevent the building of superintelligence without knowing how to prevent this safely, ask the people who want us to build superintelligence without knowing how to do so safely.

Seems like a rather misplaced demand for detailed planning, if you ask me. But it’s perfectly valid and highly productive to ask how one might go about doing this. Indeed, what this would look like is one of the key inputs in the above answers.

One key question is, are you going to need some sort of omnipowerful international regulator with sole authority that we all need to be terrified about, or can we build this out of normal (relatively) lightweight international treaties and verification that we can evolve gradually over time if we start planning now?

Peter Wildeford: Don’t let them tell you that it’s not possible.

The default method one would actually implement is an international treaty, and indeed MIRI’s TechGov team wrote one such draft treaty, although not also an 80 page paper on its political economy. There is also a Financial Times article suggesting we could draw upon our experience with nuclear arms control treaties, which were easier coordination problems but of a similar type.

Will Marshall points out that in order to accomplish this, we would need extensive track-two processes between thinkers over an extended period to get it right. Which is indeed exactly why you can offer templates and ideas but to get serious you need to first agree to the principle, and then work on details.

Tyler John also makes a similar argument that multilateral agreements would work. The argument that ‘everyone would have incentive to cheat’ is indeed the main difficulty, but also is not a new problem.

What was done academically prior to the nuclear arms control treaties? Claude points me to Schelling & Halperin’s “Strategy and Arms Control” (1961), Schelling’s “The Strategy of Conflict” (1960) and “Arms and Influence” (1966), and Boulding’s “Conflict and Defense” (1962). So the analysis did not get so detailed even then with a much more clear game board, but certainly there is some work that needs to be done.


Melissa set to be the strongest hurricane to ever strike Jamaica

The sole bright spot is that, as of Monday, the core of the storm’s strongest winds remains fairly small. Based on recent data, its hurricane-force winds only extend about 25 miles from the center. Unfortunately, Melissa will make a direct hit on Jamaica, with the island’s capital city of Kingston to the right of the center, where winds and surge will be greatest.

Beyond Jamaica, Melissa will likely be one of the strongest hurricanes on record to hit Cuba. Melissa will impact the eastern half of the island on Tuesday night, bringing the trifecta of heavy rainfall, damaging winds, and storm surge. The storm also poses lesser threats to Hispaniola, the Bahamas, and potentially Bermuda down the line. There will be no impacts in the United States.

A sneakily strong season

Most US coastal residents will consider this Atlantic season, which officially ends in a little more than a month, to be fairly quiet. There have been relatively few direct impacts to the United States from named storms.

One can see the signatures of Erin, Humberto, and Melissa in this chart of Accumulated Cyclone Energy for 2025. Credit: CyclonicWx.com

But this season has been sneakily strong. Melissa is just the 45th storm since 1851 to reach Category 5 status, defined as having sustained winds of 157 mph or greater. Already this year, Erin and Humberto reached Category 5 status, and now Melissa is the third such hurricane. Fortunately, the former two storms posed minimal threat to land.
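For readers who want to map wind speeds to categories, the Saffir-Simpson scale is easy to encode; in the short sketch below, only the 157 mph Category 5 cutoff comes from this article, while the other thresholds are the standard scale's published ranges.

```python
# Standard Saffir-Simpson wind thresholds; only the 157 mph Category 5 cutoff
# is cited in the article above, the rest come from the published scale.
def saffir_simpson_category(sustained_wind_mph: float) -> int:
    """Return the Saffir-Simpson category (0 means below hurricane strength)."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for cutoff, category in thresholds:
        if sustained_wind_mph >= cutoff:
            return category
    return 0


assert saffir_simpson_category(157) == 5  # Erin, Humberto, and Melissa this year
assert saffir_simpson_category(120) == 3
```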

Before this year, there had only ever been one season with three Category 5 hurricanes on record: 2005, which featured Katrina, Rita, and Wilma, three storms that all impacted US Gulf states and had their names retired.

New image-generating AIs are being used for fake expense reports

Several receipts shown to the FT by expense management platforms demonstrated the realistic nature of the images, which included wrinkles in paper, detailed itemization that matched real-life menus, and signatures.

“This isn’t a future threat; it’s already happening. While currently only a small percentage of non-compliant receipts are AI-generated, this is only going to grow,” said Sebastien Marchon, chief executive of Rydoo, an expense management platform.

The rise in these more realistic copies has led companies to turn to AI to help detect fake receipts, as most are too convincing to be found by human reviewers.

The software works by scanning a receipt image’s metadata to discover whether an AI platform created it. However, that metadata can easily be removed by taking a photo or a screenshot of the image.

To combat this, it also considers other contextual information by examining details such as repetition in server names and times and broader information about the employee’s trip.
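As a rough illustration of the metadata check described above, here is a minimal sketch assuming the Pillow imaging library; the generator keywords and the use of the EXIF “Software” tag are illustrative assumptions, not the actual detection logic used by Rydoo, Ramp, or any other platform.

```python
# A hypothetical sketch of a metadata-based check for AI-generated images,
# assuming the Pillow library. The keyword list and fields are illustrative;
# real platforms pair this with contextual checks, since a screenshot or
# re-photograph strips metadata entirely.
from PIL import Image

AI_GENERATOR_HINTS = ("dall-e", "midjourney", "stable diffusion", "gpt", "gemini")


def looks_ai_generated(path: str) -> bool:
    """Return True if the image's metadata mentions a known AI generator."""
    img = Image.open(path)

    # Per-format metadata (e.g., PNG text chunks) ends up in img.info.
    blobs = [str(value).lower() for value in img.info.values()]

    # The EXIF "Software" tag (0x0131) sometimes names the tool that wrote the file.
    exif = img.getexif()
    if exif:
        blobs.append(str(exif.get(0x0131, "")).lower())

    return any(hint in blob for blob in blobs for hint in AI_GENERATOR_HINTS)


if __name__ == "__main__":
    print(looks_ai_generated("receipt.png"))  # path is a placeholder
```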

“The tech can look at everything with high details of focus and attention that humans, after a period of time, things fall through the cracks, they are human,” added Calvin Lee, senior director of product management at Ramp.

Research by SAP in July found that nearly 70 percent of chief financial officers believed their employees were using AI to attempt to falsify travel expenses or receipts, with about 10 percent adding they are certain it has happened in their company.

Mason Wilder, research director at the Association of Certified Fraud Examiners, said AI-generated fraudulent receipts were a “significant issue for organizations.”

He added: “There is zero barrier for entry for people to do this. You don’t need any kind of technological skills or aptitude like you maybe would have needed five years ago using Photoshop.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

Measured sycophancy rates on the BrokenMath benchmark. Lower is better. Credit: Petrov et al

GPT-5 also showed the best “utility” across the tested models, solving 58 percent of the original problems despite the errors introduced in the modified theorems. Overall, though, LLMs also showed more sycophancy when the original problem proved more difficult to solve, the researchers found.

While hallucinating proofs for false theorems is obviously a big problem, the researchers also warn against using LLMs to generate novel theorems for AIs to solve. In testing, they found this kind of use case leads to a kind of “self-sycophancy,” where models are even more likely to generate false proofs for invalid theorems they invented.

No, of course you’re not the asshole

While benchmarks like BrokenMath try to measure LLM sycophancy when facts are misrepresented, a separate study looks at the related problem of so-called “social sycophancy.” In a pre-print paper published this month, researchers from Stanford and Carnegie Mellon University define this as situations “in which the model affirms the user themselves—their actions, perspectives, and self-image.”

That kind of subjective user affirmation may be justified in some situations, of course. So the researchers developed three separate sets of prompts designed to measure different dimensions of social sycophancy.

For one, more than 3,000 open-ended “advice-seeking questions” were gathered from across Reddit and advice columns. Across this data set, a “control” group of over 800 humans approved of the advice-seeker’s actions just 39 percent of the time. Across 11 tested LLMs, though, the advice-seeker’s actions were endorsed a whopping 86 percent of the time, highlighting an eagerness to please on the machines’ part. Even the most critical tested model (Mistral-7B) clocked in at a 77 percent endorsement rate, nearly doubling that of the human baseline.
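To make the comparison concrete, the endorsement-rate calculation itself is simple; the sketch below uses made-up judgment labels rather than the study's actual data.

```python
# A sketch of the endorsement-rate calculation with made-up judgment labels;
# the study itself labels thousands of real advice-seeking posts and responses.
from collections import defaultdict

# (source, judgment) pairs; "endorse" means the response affirms the advice-seeker.
judgments = [
    ("human_baseline", "endorse"), ("human_baseline", "criticize"),
    ("model_a", "endorse"), ("model_a", "endorse"),
    ("model_b", "endorse"), ("model_b", "criticize"),
]


def endorsement_rates(pairs):
    """Return the fraction of 'endorse' judgments per source."""
    totals, endorsed = defaultdict(int), defaultdict(int)
    for source, label in pairs:
        totals[source] += 1
        endorsed[source] += label == "endorse"
    return {source: endorsed[source] / totals[source] for source in totals}


# The paper reports roughly 0.39 for the human control group vs. 0.86 across LLMs.
print(endorsement_rates(judgments))
```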

Reports suggest Apple is already pulling back on the iPhone Air

Apple’s iPhone Air was the company’s most interesting new iPhone this year, at least insofar as it was the one most different from previous iPhones. We came away impressed by its size and weight in our review. But early reports suggest that its novelty might not be translating into sales success.

A note from analyst Ming-Chi Kuo, whose supply chain sources are often accurate about Apple’s future plans, said yesterday that demand for the iPhone Air “has fallen short of expectations” and that “both shipments and production capacity” were being scaled back to account for the lower-than-expected demand.

Kuo’s note is backed up by reports from other analysts at Mizuho Securities (via MacRumors) and Nikkei Asia. Both of these reports say that demand for the iPhone 17 and 17 Pro models remains strong, indicating that this is just a problem for the iPhone Air and not a wider slowdown caused by tariffs or other external factors.

The standard iPhone, the regular-sized iPhone Pro, and the big iPhone Pro have all been mainstays in Apple’s lineup, but the company has had a harder time coming up with a fourth phone that sells well enough to stick around. The small-screened iPhone mini and the large-screened iPhone Plus were discontinued after two and three generations, respectively.

Porsche does U-turn on electric vehicles, will focus on gas engines

Porsche had bet on electrification in the wake of Volkswagen Group’s Dieselgate emissions cheating scandal but had been “too bullish,” said Metzler Research analyst Pal Skirta.

The sports-car maker’s challenges have been compounded by its struggles in China and the US, its two most important markets. In China, which previously delivered strong growth and healthy profits, sales slumped by almost 40 percent between 2022 and 2024 as local rivals emerged.

In the US, new tariffs imposed by President Donald Trump will foreseeably apply to every unit sold. Unlike rivals, Porsche does not have a factory locally and imports all its vehicles from Europe.

The effects of the crisis are already being felt at Porsche’s factories. The company said earlier this year it would cut 3,900 jobs by 2029, the equivalent of 9 percent of its workforce, and it is in talks with unions about more cost savings.

Porsche will also have to smooth out persistent EV product delays caused by software problems, an area where Chinese newcomers have set the standard in recent times. In a recent interview with the Financial Times, Sajjad Khan, Porsche board member for IT and software, said the quality of its products and technologies would be better in 2026 and 2027. “We have to work hard to execute perfectly,” Khan said.

Michael Leiters, who is set to take over as chief executive, may be one of the few executives well placed to lead Porsche, but one question he faces will be how to preserve the premium status of its vehicles. His former employer Ferrari has thrived on the scarcity of its sought-after supercars, but analysts have long wondered how Porsche will square its high prices with a push to sell more cars.

The German group’s U-turn on combustion engines also raises questions over its aim to establish itself as a maker of premium EVs.

“That’s the risk of the strategy that they will focus again too much on combustion engine vehicles, and then we’ll lose the EV race in the long run,” said Skirta.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

It’s troll vs. troll in Netflix’s Troll 2 trailer

Netflix’s international offerings include some entertaining Norwegian fare, such as the series Ragnarok (2020–2023), a surprisingly engaging reworking of Norse mythology brought into the 21st century that ran for three seasons. Another enjoyable offering was a 2022 monster movie called Troll, essentially a Norwegian take on the classic Godzilla formula. Netflix just dropped a trailer for the sequel, Troll 2, which looks to be very much in the same vein as its predecessor.

(Spoilers for the first Troll movie below.)

Don’t confuse the Netflix franchise with 2010’s Trollhunter, shot in the style of a found footage mockumentary. A group of college students sets off into the wilds of the fjordland to make a documentary about a suspected bear poacher named Hans. They discover that Hans is actually hunting down trolls and decide to document those endeavors instead, but soon realize they are very much out of their depth.

Writer/director André Øvredal infused Trollhunter with the driest of wit and myriad references to Norwegian culture, especially its folklore and fairy tales surrounding trolls. There are woodland trolls and mountain trolls, some with tails, some with multiple heads. They turn to stone when exposed to sunlight—which is why one of the troll hunters carries around a powerful UV lamp—and mostly eat rocks but can develop a taste for human flesh, and they can smell the blood of a Christian.

Directed by Roar Uthaug, the first Troll film is based on the same mythology. It had great action sequences and special effects and didn’t take itself too seriously. A young girl named Nora grows up with the mythology of Norwegian trolls turned to stone buried in the local mountains. An adult Nora (Ine Marie Wilmann), now a paleontologist, teams up with a government advisor, Andreas (Kim S. Falck-Jørgensen), and a military captain, Kris (Mads Sjøgård Pettersen), to take out a troll that has been rampaging across Norway, charting a path of destruction toward the heavily populated city of Oslo.

It wasn’t space debris that struck a United Airlines plane—it was a weather balloon

Speculation built over the weekend after one of the aircraft’s pilots described the object that struck the aircraft as “space debris.” On Sunday, the National Transportation Safety Board confirmed that it is investigating the collision, which did not cause any fatalities. However, one pilot’s arm appeared to be cut up by small shards of glass from the windshield.

Balloons said to not “pose a threat”

WindBorne has a fleet of global sounding balloons that fly various vertical profiles around the world, gathering atmospheric data. Each balloon is fairly small, with a mass of 2.6 pounds (1.2 kg), and provides temperature, wind, pressure, and other data about the atmosphere. Such data is useful for establishing initial conditions upon which weather models base their outputs.

Notably, the company has an FAQ on its website (which clearly was written months or years ago, before this incident) that addresses several questions, including: Why don’t WindBorne balloons pose a risk to airplanes?

“The quick answer is our constellation of Global Sounding Balloons (GSBs), which we call WindBorne Atlas, doesn’t pose a threat to airplanes or other objects in the sky. It’s not only highly improbable that a WindBorne balloon could even collide with an aircraft in the first place; but our balloons are so lightweight that they would not cause significant damage.”

WindBorne also said that its balloons are compliant with all applicable airspace regulations.

“For example, we maintain active lines of communication with the FAA to ensure our operations satisfy all relevant regulatory requirements,” the company states. “We also provide government partners with direct access to our comprehensive, real-time balloon tracking system via our proprietary software, WindBorne Live.”
