Author name: DJ Henderson


On (Not) Feeling the AGI

Ben Thompson interviewed Sam Altman recently about building a consumer tech company, and about the history of OpenAI. Mostly it is a retelling of the story we’ve heard before, and if anything Altman is very good about pushing back on Thompson when Thompson tries to turn OpenAI’s future into the next Facebook, complete with an advertising revenue model.

It is such a strange perspective to witness. They do not feel the AGI, let alone the ASI. The downside risks of AI, let alone existential risks, are flat-out not discussed; this is a world where that’s not even a problem for Future Earth.

Then we contrast this with the new Epoch model of economic growth from AI, which can produce numbers like 30% yearly economic growth. Epoch feels the AGI.

Given the discussion of GPT-2 in the OpenAI safety and alignment philosophy document, I wanted to note that his explanation here was quite good.

Sam Altman: The GPT-2 release, there were some people who were just very concerned about, you know, probably the model was totally safe, but we didn’t know we wanted to get — we did have this new and powerful thing, we wanted society to come along with us.

Now in retrospect, I totally regret some of the language we used and I get why people are like, “Ah man, this was like hype and fear-mongering and whatever”, it was truly not the intention. The people who made those decisions had I think great intentions at the time, but I can see now how it got misconstrued.

As I said a few weeks ago, ‘this is probably totally safe but we don’t know for sure’ was exactly the correct attitude to initially take to GPT-2, given my understanding of what they knew at the time. The messaging could have made this clearer, but it very much wasn’t hype or fearmongering.

Altman repeatedly emphasizes that what he wanted to do from the beginning, what he still most wants to do, is build AGI.

Altman’s understanding of what he means by that, and what the implications will be, continues to seem increasingly confused. Now it seems it’s… fungible? And not all that transformative?

Sam Altman: My favorite historical analog is the transistor for what AGI is going to be like. There’s going to be a lot of it, it’s going to diffuse into everything, it’s going to be cheap, it’s an emerging property of physics and it on its own will not be a differentiator.

This seems bonkers crazy to me. First off, it seems to include the idea of ‘AGI’ as a fungible commodity, as a kind of set level. Even if AI stays for substantial amounts of time at ‘roughly human’ levels, differentiation between ‘roughly which humans in which ways, exactly’ is a giant deal, as anyone who has dealt with humans knows. There isn’t some natural narrow attractor level of capability ‘AGI.’

Then there’s the obvious question: why would you be able to ‘diffuse AGI into everything’ and expect the world to otherwise look not so different, the way it did with transistors? Altman also says this:

Ben Thompson: What’s going to be more valuable in five years? A 1-billion daily active user destination site that doesn’t have to do customer acquisition, or the state-of-the-art model?

Sam Altman: The 1-billion user site I think.

That again implies little differentiation in capability, and he expects commoditization of everything but the very largest models to happen quickly.

Charles: This seems pretty incompatible with AGI arriving in that timeframe or shortly after, unless it gets commoditised very fast and subsequently improvements plateau.

Similarly, Altman in another interview continues to go with the line ‘I kind of believe we can launch the first AGI and no one cares that much.’

The whole thing is pedestrian; he’s talking about the Next Great Consumer Product. As in, Ben Thompson is blown away that this is the next Facebook, with a similar potential. Thompson and Altman are talking about issues of being a platform versus an aggregator, about bundling, and about how to make ad revenue. Altman says they expect to be a platform only in the style of a Google, and wisely (and also highly virtuously) hopes to avoid the advertising that I sense has Thompson very excited, as Thompson continues to assume ‘people won’t pay,’ so the way you profit from AGI (!!!) is ads. It’s so weird to see Thompson trying to sell Altman on the need to make our future an ad-based dystopia, and on the need to cut off the API to maximize revenue.

Such considerations do matter, and I think that Thompson’s vision is wrong on both the business level and the normative level of ‘at long last we have created the advertising-fueled cyberpunk dystopia world from the novel…’ but that’s not important now. Eyes on the damn prize!

I don’t even know how to respond to a vision so unambitious. I cannot count that low.

I mean, I could, and I have preferences over how we do so when we do, but it’s bizarre how much this conversation about AGI does not feel the AGI.

Altman’s answers in the DeepSeek section are scary. But it’s Thompson who really, truly, profoundly, simply does not get what is coming at all, or how you deal with this type of situation, and this answer from Altman is very good (at least by 2025 standards):

Ben Thompson: What purpose is served at this point in being sort of precious about these releases?

Sam Altman: I still think there can be big risks in the future. I think it’s fair that we were too conservative in the past. I also think it’s fair to say that we were conservative, but a principle of being a little bit conservative when you don’t know is not a terrible thing.

I think it’s also fair to say that at this point, this is going to diffuse everywhere and whether it’s our model that does something bad or somebody else’s model that does something bad, who cares? But I don’t know, I’d still like us to be as responsible an actor as we can be.

Other Altman statements, hinting at getting more aggressive with releases, are scarier.

They get to regulation, where Thompson repeats the bizarre perspective that previous earnest calls for regulations that only hit OpenAI and other frontier labs were an attempt at regulatory capture. And Altman basically says (in my words!), fine, the world doesn’t want to regulate only us and Google and a handful of others at the top, so we switched from asking for regulations to protect everyone into regulations to pave the way for AI.

Thus, the latest asks from OpenAI are to prevent states from regulating frontier models, and to declare universal free fair use for all model training purposes, saying to straight up ignore copyright.

Some of this week’s examples, on top of Thompson and Altman.

Spor: I genuinely get the feeling that no one *actually* believes in superintelligence except for the doomers

I think they were right about this (re: common argument against e/acc on x) and i have to own up to that.

John Pressman: There’s an entire genre of Guy on here whose deal is basically “Will the singularity bring me a wife?” and the more common I learn this guy is the less I feel I have in common with others.

Also this one:

Rohit: Considering AGI is coming, all coding is about to become vibe coding, and if you don’t believe it then you don’t really believe in AGI do you

Ethan Mollick: Interestingly, if you look at almost every investment decision by venture capital, they don’t really believe in AGI either, or else can’t really imagine what AGI would mean if they do believe in it.

Epoch creates the GATE model, explaining that if AI is highly useful, it will also get highly used to do a lot of highly useful things, and that would by default escalate quickly. The model is, as all such things are, simplified in important ways, ignoring regulatory friction issues and also the chance we lose control or all die.

My worry is that by ignoring regulatory, legal and social frictions in particular, Epoch has not modeled the questions we should be most interested in, as in what to actually expect if we are not in a takeoff scenario. The paper does explicitly note this.

Their default result of their model, excluding the excluded issues, is roughly 30% additional yearly economic growth.

You can play with their simulator here, and their paper is here.

Epoch AI: We developed GATE: a model that shows how AI scaling and automation will impact growth.

It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation in decades.

Tweak the parameters—these transformative outcomes are surprisingly hard to avoid.

Imagine if a central bank took AI seriously. They’d build GATE—merging economics with AI scaling laws to show how innovation, automation, and investment interact.

At its core: more compute → more automation → growth → more investment in chips, fabs, etc.

Even when investors are uncertain, GATE predicts explosive economic growth within two decades. Trillions of dollars flow into compute, fabs, and related infrastructure—even before AI generates much value—because investors anticipate massive returns from widespread AI automation.

We’ve created an interactive sandbox so you can explore these dynamics yourself. Test your own assumptions, run different scenarios, and visualize how the economy might evolve as AI automation advances.

GATE has important limitations: no regulatory frictions, no innovation outside AI, and sensitivity to uncertain parameters. We see it as a first-order approximation of AI’s dynamics—try it out to learn how robust its core conclusions are!
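The core loop Epoch describes (more compute → more automation → more growth → more investment) can be sketched as a toy difference equation. To be clear, this is not GATE itself: every functional form and parameter below is invented purely to illustrate how the loop compounds.

```python
import math

# Toy sketch of the compute -> automation -> growth -> investment loop.
# NOT Epoch's GATE model: all functional forms and parameters here are
# invented for illustration only.

def simulate(years=20, compute=1.0, labor=1.0):
    outputs = []
    for _ in range(years):
        # Automated task share rises with log-compute (toy scaling law),
        # capped below full automation.
        automated = min(0.99, 0.1 + 0.1 * math.log10(compute))
        # Humans do the remaining tasks; compute does the automated ones.
        output = labor * (1 - automated) + 10.0 * automated * math.sqrt(compute)
        # A fixed fraction of output is reinvested into more compute.
        compute += 0.3 * output
        outputs.append(output)
    return outputs

outputs = simulate()
print(f"year 1 output: {outputs[0]:.1f}, year 20 output: {outputs[-1]:.1f}")
```

Even with mild assumptions, the reinvestment step makes growth accelerate on its own; the substantive disagreements are about which frictions (regulation, robotics, physical bottlenecks) break the loop.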

Charles Foster: Epoch AI posts, for dummies

Epoch’s Ege Erdil and Matthew Barnett argue that most AI value will come from broad automation, not from R&D and scientific progress. That’s not a knock on the value of R&D, it’s saying the broad automation is even more valuable. Broadly automating labor is the name of their game.

One could say ‘wait, the reason you can do broad automation is exactly because of AI R&D, so that still makes R&D super valuable,’ so make of that dilemma what you will. A lot of this likely comes from the conflation of ‘formalized R&D’ as it is currently practiced, and ‘generally figuring things out’ which goes well beyond formalized R&D and will be turbocharged across the board. Yes, only ~20% of productivity gains look like they come from ‘R&D’ but that’s because we’re only counting formal R&D, and plausibly ignoring compounding effects.

Epoch argues against the likelihood of a full-on AI R&D singularity (or full RSI) via the standard arguments and emphasizing the need for a variety of cognitive skills. I don’t find this generally convincing, but they’re making a narrower argument than usual, that AI will first be able to fully automate a large share of the general workforce. I suppose sure, but also that’s going to happen rather quickly in such scenarios anyway?

I was pleased to see Tyler Cowen link to this without derision or claiming that it assumed its conclusions, even though it would have been highly reasonable to make similar objections.

I find very high growth scenarios highly plausible if we assume away AI loss of control and other existential risks along the way, and assume all of this is pointed in pro-human directions and that capabilities continue to scale.

Alas, I worry that the model here does not address most people’s ‘true objections.’

I don’t intend for this to be my definitive version of this, but sketching quickly, the plausibly true objections fall roughly into two categories.

First, there’s the ones that are often driving objections but that aren’t actually objections in this context:

  1. ASI likely kills everyone. Yeah, yeah. True but here we’re assuming it doesn’t.

  2. Disbelief on priors, absurdity heuristic, ‘sounds sci-fi’ or Nothing Ever Happens.

  3. Belief that tech is hype or always involves tons of hype, so this is also hype.

  4. Large growth would have implications I don’t want to think about, so no.

  5. Large growth means nothing matters so I’m going to act as if it won’t happen.

  6. Failure to even feel the AGI.

That’s all understandable, but not especially relevant. It’s a physical question, and it’s of the form of solving for the [Y] in ‘[X] → [Y].’

Second, there’s actual arguments, in various combinations, such as:

  1. AI progress will stall before we reach superintelligence (ASI), because of reasons.

  2. AI won’t be able to solve robotics or act physically, because of reasons.

  3. Partial automation, even 90% or 99%, is very different from 100% (O-ring theory).

  4. Physical bottlenecks and delays prevent growth. Intelligence only goes so far.

  5. Regulatory and social bottlenecks prevent growth this fast, INT only goes so far.

  6. Decreasing marginal value means there literally aren’t goods with which to grow.

  7. Dismissing ability of AI to cause humans to make better decisions.

  8. Dismissing ability of AI to unlock new technologies.

And so on.

One common pattern is that relatively ‘serious people’ who do at least somewhat understand what AI is going to be will put out highly pessimistic estimates, then call those estimates wildly optimistic and bullish. Which, compared to the expectations of most economists or regular people, they are, but that’s not the right standard here.

Dean Ball: For the record: I expect AI to add something like 1.5-2.5% GDP growth per year, on average, for a period of about 20 years that will begin in the late 2020s.

That is *wildly* optimistic and bullish. But I do not believe 10% growth scenarios will come about.

Daniel Kokotajlo: Does that mean you think that even superintelligence (AI better than the best humans at everything, while also being faster and cheaper) couldn’t grow the economy at 10%+ speed? Or do you think that superintelligence by that definition won’t exist?

Dean Ball: the latter. it’s the “everything” that does it. 100% is a really big number. It’s radically bigger than 80%, 95%, or 99%. if bottlenecks persist–and I believe strongly that they will–we will see baumol issues.

Daniel Kokotajlo: OK, thanks. Can you give some examples of things that AIs will remain worse than the best humans at 20 years from now?

Dean Ball: giving massages, running for president, knowing information about the world that isn’t on the internet, performing shakespeare, tasting food, saying sorry.

Samuel Hammond (responding to DB’s OP): That’s my expectation too, at least into the early 2030s as the last mile of resource and institutional constraints get ironed out. But once we have strong AGI and robotics production at scale, I see no theoretical reason why growth wouldn’t run much faster, a la 10-20% GWP. Not indefinitely, but rapidly to a much higher plateau.

Think of AGI as a step change increase in the Solow-Swan productivity factor A. This pushes out the production possibilities frontier, making even first world economies like a developing country. The marginal product of capital is suddenly much higher, setting off a period of rapid “catch up growth” to the post-AGI balanced growth path with the capital / labor ratio in steady state, signifying Baumol constraints.

Dean Ball: Right—by “AI” I really just meant the software side. Robotics is a totally separate thing, imo. I haven’t thought about the economics of robotics carefully but certainly 10% growth is imaginable, particularly in China where doing stuff is legal-er than in the us.

Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets. You can talk about that if you want, but it’s not the question we should be asking. And even then, I expect more – for example I asked Claude about automating 80% of non-physical tasks, and it estimated about 5.5% additional GDP growth per year.

Another way of thinking about Dean Ball’s growth estimate is that in 20 years of having access to this, that would roughly turn Portugal into the Netherlands, or China into Romania. Does that seem plausible?
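The compounding arithmetic behind that comparison is easy to check. The rates here are Dean Ball’s roughly 2% per year and Epoch’s roughly 30%, over his 20-year horizon:

```python
# Compounded effect of an extra X% annual growth sustained for 20 years.
def total_multiplier(annual_rate, years=20):
    return (1 + annual_rate) ** years

print(f"+2% per year, 20 years:  x{total_multiplier(0.02):.2f}")  # ~x1.49
print(f"+30% per year, 20 years: x{total_multiplier(0.30):.0f}")  # ~x190
```

An extra 2% a year for 20 years is about a 1.5x level shift in total, versus roughly 190x for the 30% scenario; the two worldviews are more than two orders of magnitude apart.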

If you stack a sufficient number of the pessimistic objections on top of each other (we stall out before ASI, there are widespread diffusion bottlenecks, and robotics proves mostly unsolvable without ASI), I suppose you could get to a 2%-a-year scenario. But I certainly wouldn’t call that wildly optimistic.

Distinctly, on the other objections, I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism. The ASI cuts through everything. And yes, physical actions take non-zero time, but that is being taken into account; future automated processes can go remarkably quickly even in the physical realm, and many claims that ‘you can only know [X] by running a physical experiment’ are very wrong, again a Skill Issue.

On the decreasing marginal value of goods, I think this is very much a ‘dreamed of in your philosophy’ issue, or perhaps it is definitional. I very much doubt that the physical limits kick in that close to where we are now, even if in important senses our basic human needs are already being met.

Altman’s model of how AGI will impact the world is super weird if you take it seriously as a physical model of a future reality.

It’s kind of like there is this thing, ‘intelligence.’ It’s basically fungible, as it asymptotes quickly at close to human level, so it won’t be a differentiator.

There’s only so intelligent a thing can be, either in practice around current tech levels or in absolute terms, it’s not clear which. But it’s not sufficiently beyond us to be that dangerous, or for the resulting world to look that different. There’s risks, things that can go wrong, but they’re basically pedestrian, not that different from past risks. AGI will get released into the world, and ‘no one will care that much’ about the first ‘AGI products.’

I’m not willing to say that something like that is purely physically impossible, or has probability epsilon or zero. But it seems pretty damn unlikely to be how things go. I don’t see why we should expect this fungibility, or for capabilities to stall out exactly there even if they do stall out. And even if that did happen, I would expect things to change quite a lot more.

It’s certainly possible that the first AGI-level product will come out – maybe it’s a new form of Deep Research, let’s say – and initially most people don’t notice or care all that much. People often ignore exponentials until things are upon them, and can pretend things aren’t changing until well past points of no return. People might sense there were boom times and lots of cool toys without understanding what was happening, and perhaps AI capabilities don’t get out of control too quickly.

It still feels like an absurd amount of downplaying, from someone who knows better. And he’s far from alone.



UK on alert after H5N1 bird flu spills over to sheep in world-first

In the UK, officials said further testing of the rest of the sheep’s flock has found no other infections. The one infected ewe has been humanely culled to mitigate further risk and to “enable extensive testing.”

“Strict biosecurity measures have been implemented to prevent the further spread of disease,” UK Chief Veterinary Officer Christine Middlemiss said in a statement. “While the risk to livestock remains low, I urge all animal owners to ensure scrupulous cleanliness is in place and to report any signs of infection to the Animal Plant Health Agency immediately.”

While UK officials believe that the spillover has been contained and there’s no onward transmission among sheep, the latest spillover to a new mammalian species is a reminder of the virus’s looming threat.

“Globally, we continue to see that mammals can be infected with avian influenza A(H5N1),” Meera Chand, Emerging Infection lead at the UK Health Security Agency (UKHSA), said in a statement. In the US, the Department of Agriculture has documented hundreds of infections in wild and captive mammals, from cats and bears to raccoons and harbor seals.

Chand noted that, so far, the spillovers into animals have not easily transmitted to humans. For instance, in the US, despite extensive spread through the dairy industry, no human-to-human transmission has yet been documented. But, experts fear that with more spillovers and exposure to humans, the virus will gain more opportunities to adapt to be more infectious in humans.

Chand says that UKHSA and other agencies are monitoring the situation closely in the event the situation takes a turn. “UKHSA has established preparations in place for detections of human cases of avian flu and will respond rapidly with NHS and other partners if needed.”



Trump administration accidentally texted secret bombing plans to a reporter

Using Signal in this way may have violated US law, Goldberg wrote. “Conceivably, Waltz, by coordinating a national-security-related action over Signal, may have violated several provisions of the Espionage Act, which governs the handling of ‘national defense’ information, according to several national-security lawyers interviewed by my colleague Shane Harris for this story,” he wrote.

Signal is not an authorized venue for sharing such information, and Waltz’s use of a feature that makes messages disappear after a set period of time “raises questions about whether the officials may have violated federal records law,” the article said. Adding a reporter to the thread “created new security and legal issues” by transmitting information to someone who wasn’t authorized to see it, “the classic definition of a leak, even if it was unintentional,” Goldberg wrote.

The account labeled “JD Vance” questioned the war plan in a Signal message on March 14. “I am not sure the president is aware how inconsistent this is with his message on Europe right now,” the message said. “There’s a further risk that we see a moderate to severe spike in oil prices. I am willing to support the consensus of the team and keep these concerns to myself. But there is a strong argument for delaying this a month, doing the messaging work on why this matters, seeing where the economy is, etc.”

The Vance account also stated, “3 percent of US trade runs through the suez. 40 percent of European trade does,” and “I just hate bailing Europe out again.” The Hegseth account responded that “I fully share your loathing of European free-loading. It’s PATHETIC,” but added that “we are the only ones on the planet (on our side of the ledger) who can do this.”

An account apparently belonging to Trump advisor Stephen Miller wrote, “As I heard it, the president was clear: green light, but we soon make clear to Egypt and Europe what we expect in return. We also need to figure out how to enforce such a requirement. EG, if Europe doesn’t remunerate, then what? If the US successfully restores freedom of navigation at great cost there needs to be some further economic gain extracted in return.”



Should we be concerned about the loss of weather balloons?


Most of the time, not a big deal. But in critical times, the losses will be felt.

A radiosonde with mailing instructions. Credit: NWS Pittsburgh

Due to staff reductions, retirements, and a federal hiring freeze, the National Weather Service has announced a series of suspensions involving weather balloon launches in recent weeks. The question is, will this significantly degrade forecasts in the United States and around the world?

On February 27, it was announced that balloon launches would be suspended entirely at Kotzebue, Alaska, due to staffing shortages. In early March, Albany, N.Y., and Gray, Maine, announced periodic disruptions in launches. Since March 7, it appears that Gray has not missed any balloon launches through Saturday. Albany, however, has missed 14 of them, all during the morning launch cycle (12z).

The kicker came on Thursday afternoon when it was announced that all balloon launches would be suspended in Omaha, Neb., and Rapid City, S.D., due to staffing shortages. Additionally, the balloon launches in Aberdeen, S.D.; Grand Junction, Colo.; Green Bay, Wis.; Gaylord, Mich.; North Platte, Neb.; and Riverton, Wyo., would be reduced to once a day from twice a day.

What are weather balloons?

In a normal time, weather balloons would be launched across the country and world twice per day, right at about 8 am ET and 8 pm ET (one hour earlier in winter), or what we call 12z and 00z. That’s Zulu time, or noon and midnight in Greenwich, England. Rather than explain the whole reasoning behind why we use Zulu time in meteorology, here’s a primer on everything you need to know. Weather balloons are launched around the world at the same time. It’s a unique collaboration and example of global cooperation in the sciences, something that has endured for many years.
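As a quick illustration of the 12z/00z convention (my own example, not from the article), converting those UTC times to US Eastern time shows where the 8 am/8 pm launch times, and the one-hour winter shift, come from:

```python
# 12z and 00z are UTC ("Zulu") times. Converting 12z to US Eastern time
# shows why launches fall near 8 am ET in summer (EDT = UTC-4) and an
# hour earlier in winter (EST = UTC-5).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")
for label, date in [("summer", (2025, 7, 1)), ("winter", (2025, 1, 15))]:
    launch_12z = datetime(*date, 12, tzinfo=timezone.utc)
    print(label, launch_12z.astimezone(eastern).strftime("%I:%M %p %Z"))
```

This prints 08:00 AM EDT for the summer date and 07:00 AM EST for the winter one, matching the launch schedule described above.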

These weather balloons are loaded up with hydrogen or helium, soar into the sky, up to and beyond jet stream level, getting to a height of over 100,000 feet before they pop. Attached to the weather balloon is a tool known as a radiosonde, or “sonde” for short. This is basically a weather-sensing device that measures all sorts of weather variables like temperature, dewpoint, pressure, and more. Wind speed is usually derived from this based on GPS transmitting from the sonde.

Sunday morning’s upper air launch map showing a gaping hole over the Rockies and some of the Plains.

Credit: University of Wyoming


What goes up must come down, so when the balloon pops, that radiosonde falls from the sky. A parachute is attached to it, slowing its descent and ensuring no one gets plunked on the head by one. If you find a radiosonde, it should be clearly marked, and you can keep it, let the NWS know you found it, or dispose of it properly. In some instances, there may still be a way to mail it back to the NWS (postage and envelope included and prepaid).

How this data is used

In order to run a weather model, you need an accurate snapshot of what we call the initial conditions. What is the weather at time = zero? That’s your initialization point. Not coincidentally, weather models are almost always run at 12z and 00z, to time in line with retrieving the data from these weather balloons. It’s a critically important input to almost all weather modeling we use.

The data from balloon launches can be plotted on a chart called a sounding, which gives meteorologists a vertical profile of the atmosphere at a point. During severe weather season, we use these observations to understand the environment we are in, assess risks to model output, and make changes to our own forecasts. During winter, these observations are critical to knowing if a storm will produce snow, sleet, or freezing rain.

Observations from soundings are important inputs for assessing turbulence that may impact air travel, marine weather, fire weather, and air pollution. Other than some tools on some aircraft that we utilize, the data from balloon launches is the only real good verification tool we have for understanding how the upper atmosphere is behaving.

Have we lost weather balloon data before?

We typically lose out on a data point or two each day for various reasons when the balloons are launched. We’ve also been operating without a weather balloon launch in Chatham, Mass., for a few years because coastal erosion made the site too challenging and unsafe.

Tallahassee, Fla., has been pausing balloon launches for almost a year now due to a helium shortage and inability to safely switch to hydrogen gas for launching the balloons. In Denver, balloon launches have been paused since 2022 due to the helium shortage as well.

Those are three sites, though, spread out across the country. We are doubling or tripling the number of sites without launches now, many in critical areas upstream of significant weather.

Can satellites replace weather balloons?

Yes and no.

On one hand, satellites today are capable of incredible observations that can rival weather balloons at times. And they also cover the globe constantly, which is important. That being said, satellites cannot completely replace balloon launches. Why? Because the radiosonde data those balloon launches give us basically acts as a verification metric for models in a way that satellites cannot. It also helps calibrate derived satellite data to ensure that what the satellite is seeing is recorded correctly.

But in general, satellites cannot yet replace weather balloons. They merely act to improve upon what weather balloons do. A study done in the middle part of the last decade found that wind observations improved rainfall forecasts by 30 percent, and the one tool at that time that made the biggest difference in improving the forecast was radiosondes. Has this changed since then? Yes, almost certainly. Our satellites have better resolution, are capable of getting more data, and send data back more frequently. So certainly, it’s improved some. But enough? That’s unclear.

An analysis done more recently on the value of dropsondes (the opposite of balloon launches; this time, the sensor is dropped from an aircraft instead of launched from the ground) in forecasting West Coast atmospheric rivers showed a marked improvement in forecasts when those targeted drops occur. Another study in 2017 showed that aircraft observations actually did a good job filling gaps in the upper air data network.

Even with aircraft observations, there were mixed studies done in the wake of the COVID-19 reduction in air travel that suggested no impact could be detected above usual forecast error noise or that there was some regional degradation in model performance.

But to be quite honest, I cannot find many recent studies that assess how the new breed of satellites has (or has not) changed the value of upper-air observations. The NASA GEOS model keeps a record of which data sources have the most impact on model verification with respect to 24-hour forecasts. Number two on the list? Radiosondes. This is probably a loose comp to the GFS model, one of the major weather models used by meteorologists globally.

The verdict

In reality, the verdict in all this is to be determined, particularly statistically. Will it make a meaningful statistical difference in model accuracy? Over time, yes, probably, but not in ways that most people will notice day to day.

However, based on 20 years of experience and a number of conversations about this with others in the field, there are some very real, very serious concerns beyond statistics. One thing is that the suspended weather balloon launches are occurring in relatively important areas for weather impacts downstream. A missed weather balloon launch in Omaha or Albany won’t impact the forecast in California. But what if a hurricane is coming? What if a severe weather event is coming? You’ll definitely see impacts to forecast quality during major, impactful events. At the very least, these launch suspensions will increase the noise-to-signal ratio with respect to forecasts.

The element with the second-highest impact on the NASA GEOS model? Radiosondes.

Credit: NASA


In other words, there may be situations where you have a severe weather event expected to kickstart in one place, but not knowing the precise location of an upper-air disturbance in the Rockies, thanks to a suspended launch from Grand Junction, Colo., leads to those storms forming 50 miles farther east than expected. Losing this data increases the risk profile for more people in terms of knowing about the weather, particularly high-impact weather.

Let’s say we have a hurricane in the Gulf that is rapidly intensifying, and we are expecting it to turn north and northeast thanks to a strong upper-air disturbance coming out of the Rockies, leading to landfall on the Alabama coast. What if the lack of upper-air observations has led to that disturbance being misplaced by 75 miles? Now, instead of Alabama, the storm is heading toward New Orleans. Is this an extreme example? Honestly, I don’t think it is as extreme as you might think. We often have timing and amplitude forecast issues with upper-air disturbances during hurricane season, and the reality is that we may have to make more frequent last-second adjustments that we didn’t have to make in recent years. As a Gulf Coast resident, I find this very concerning.

I don’t want to overstate things. Weather forecasts aren’t going to dramatically degrade day to day because we’ve reduced some balloon launches across the country. They will degrade, but the general public probably won’t notice much difference 90 percent of the time. But that 10 percent of the time? It’s not that the differences will be gigantic. But the impact of those differences could very well be gigantic, put more people in harm’s way, and increase the risk profile for an awful lot of people. That’s what this does: It increases the risk profile, it will lead to reduced weather forecast skill scores, and it may lead to an event that surprises a portion of the population that isn’t used to being surprised in the 2020s. To me, that makes the value of weather balloons very, very significant, and I find these cuts to be extremely troubling.

Should further cuts in staffing lead to further suspensions of weather balloon launches, we will see this problem magnify more often and involve bigger misses. In other words, the impacts here may not be linear: repeated loss of real-world observational data will lead to very significant degradation in weather model performance, noticed more often than described above.

This story originally appeared on The Eyewall.


The Eyewall is dedicated to covering tropical activity in the Atlantic Ocean, Caribbean Sea, and Gulf of Mexico. The site was founded in June 2023 by Matt Lanza and Eric Berger, who work together on the Houston-based forecasting site Space City Weather.


“myterms”-wants-to-become-the-new-way-we-dictate-our-privacy-on-the-web

“MyTerms” wants to become the new way we dictate our privacy on the web

Searls and his group are putting up the standards and letting the browsers, extension-makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital thing is the second, a “disinterested non-profit” provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.
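To make the record-keeping requirement concrete, here is a minimal sketch of what a mutually held agreement record could look like. Every field name here is a hypothetical illustration, not something drawn from the MyTerms draft; the point is only that each side stores who agreed to which roster entry, and when, with a digest both copies can later be compared against.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_agreement_record(user_id: str, site: str, terms_id: str) -> dict:
    """Build a minimal, hypothetical record of a MyTerms-style agreement.

    Both parties would keep a copy; the hash lets either side prove
    which terms were agreed to without storing the full text twice.
    """
    record = {
        "first_party": user_id,   # the human offering the terms
        "second_party": site,     # the site or service accepting them
        "terms_id": terms_id,     # e.g. an entry from the non-profit's roster
        "agreed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Fingerprint of the record body, so the two copies can be compared later.
    body = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(body).hexdigest()
    return record

record = make_agreement_record("user-123", "example.com", "roster/no-tracking-v1")
print(record["digest"][:8])
```

Whether real MyTerms agents exchange anything like this is up to the implementers; the standard, as described, only insists that both sides keep records.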

Terms offered, not requests submitted

Searls’ and his group’s standard is a plea for a sensible alternative to the modern reality of accessing web information. It asks us to stop pretending that we’re all reading agreements stuffed full of opaque language, agreeing to thousands upon thousands of words’ worth of terms every day, and willingly offering up information about ourselves. And, of course, it makes people ask whether it is destined to become another version of Do Not Track.

Do Not Track was a request, while MyTerms is inherently a demand. Websites and services could, of course, simply refuse to show or provide content and data if a MyTerms agent is present, or they could ask or demand that people set the least restrictive terms.

There is nothing inherently wrong with setting up a user-first privacy scheme and pushing for sites and software to do the right thing and abide by it. People may choose to stick to search engines and sites that agree to MyTerms. Time will tell if MyTerms can gain the kind of leverage Searls is aiming for.


measles-arrives-in-kansas,-spreads-quickly-in-undervaccinated-counties

Measles arrives in Kansas, spreads quickly in undervaccinated counties

On Thursday, the county on the northern border of Stevens, Grant County, also reported three confirmed cases, which were also linked to the first case in Stevens. Grant County is in a much better position to handle the outbreak than its neighbors; its one school district, Ulysses, reported 100 percent vaccination coverage for kindergartners in the 2023–2024 school year.

Outbreak risk

So far, details about the fast-rising cases are scant. The Kansas Department of Health and Environment (KDHE) has not published another press release about the cases since March 13. Ars Technica reached out to KDHE for more information but did not hear back before this story’s publication.

The outlet KWCH 12 News out of Wichita published a story Thursday, when just six cases had been reported in Grant and Stevens Counties, saying that all six were in unvaccinated people and that no one had been hospitalized. On Friday, KWCH updated the story to note that the case count had increased to 10 and that the health department now considers the situation an outbreak.

Measles is an extremely infectious virus that can linger in airspace and on surfaces for up to two hours after an infected person has been in an area. Among unvaccinated people exposed to the virus, 90 percent will become infected.

Vaccination rates have slipped nationwide, creating pockets that have lost herd immunity and are vulnerable to fast-spreading, difficult-to-stop outbreaks. In the past, strong vaccination rates prevented such spread, and in 2000, the virus was declared eliminated, meaning there was no continuous spread of the virus over a 12-month period. Experts now fear that the US will lose its elimination status, meaning measles will once again be considered endemic to the country.
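The herd immunity the article refers to follows from a standard epidemiological formula: with basic reproduction number R0, the immune fraction of a population must exceed 1 − 1/R0 to prevent sustained spread. A quick sketch, using the commonly cited measles R0 range of roughly 12 to 18 (an assumption here, not a figure from the article):

```python
# Herd immunity threshold: the fraction immune needed so that each
# case infects fewer than one other person on average.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# R0 values of 12-18 are commonly cited estimates for measles.
for r0 in (12, 18):
    print(f"R0={r0}: threshold = {herd_immunity_threshold(r0):.1%}")
```

That works out to an immune fraction in the low-to-mid 90s percent, which is why even modest slippage in kindergarten vaccination coverage can open the door to outbreaks like this one.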

So far this year, the Centers for Disease Control and Prevention has documented 378 measles cases as of Thursday, March 20. That figure is already out of date.

On Friday, the Texas health department reported 309 cases in its ongoing outbreak. Forty people have been hospitalized, and one unvaccinated child with no underlying medical conditions has died. The outbreak has spilled over to New Mexico and Oklahoma. In New Mexico, officials reported Friday that the case count has risen to 42 cases, with two hospitalizations and one death in an unvaccinated adult. In Oklahoma, the case count stands at four.


italy-demands-google-poison-dns-under-strict-piracy-shield-law

Italy demands Google poison DNS under strict Piracy Shield law

Spotted by TorrentFreak, AGCOM Commissioner Massimiliano Capitanio took to LinkedIn to celebrate the ruling, as well as the existence of the Italian Piracy Shield. “The Judge confirmed the value of AGCOM’s investigations, once again giving legitimacy to a system for the protection of copyright that is unique in the world,” said Capitanio.

Capitanio went on to complain that Google has routinely ignored AGCOM’s listing of pirate sites, which are supposed to be blocked in 30 minutes or less under the law. He noted the violation was so clear-cut that the order was issued without giving Google a chance to respond, known as inaudita altera parte in Italian courts.

This decision follows a similar case against Internet backbone firm Cloudflare. In January, the Court of Milan found that Cloudflare’s CDN, DNS server, and WARP VPN were facilitating piracy. The court threatened Cloudflare with fines of up to 10,000 euros per day if it did not begin blocking the sites.

Google could face similar sanctions, but AGCOM has had difficulty getting international tech behemoths to acknowledge their legal obligations in the country. We’ve reached out to Google for comment and will update this report if we hear back.


trump-white-house-drops-diversity-plan-for-moon-landing-it-created-back-in-2019

Trump White House drops diversity plan for Moon landing it created back in 2019

That was then. NASA’s landing page for the First Woman comic series, where young readers could download or listen to the comic, no longer exists. Callie and her crew survived the airless, radiation-bathed surface of the Moon, only to be wiped out by President Trump’s Diversity, Equity, and Inclusion executive order, signed two months ago.

Another casualty is the “first woman” language within the Artemis Program. For years, NASA’s main Artemis page, an archived version of which is linked here, included the following language: “With the Artemis campaign, NASA will land the first woman and first person of color on the Moon, using innovative technologies to explore more of the lunar surface than ever before.”

Artemis website changes

The current landing page for the Artemis program has excised this paragraph. It is not clear how recently the change was made. It was first noticed by British science journalist Oliver Morton.

The removal is perhaps more striking than Callie’s downfall since it was the first Trump administration that both created Artemis and highlighted its differences from Apollo by stating that the Artemis III lunar landing would fly the first woman and person of color to the lunar surface.

How NASA’s Artemis website appeared before recent changes.

Credit: NASA


For its part, NASA says it is simply complying with the White House executive order by making the changes.

“In keeping with the President’s Executive Order, we’re updating our language regarding plans to send crew to the lunar surface as part of NASA’s Artemis campaign,” an agency spokesperson said. “We look forward to learning more about the Trump Administration’s plans for our agency and expanding exploration at the Moon and Mars for the benefit of all.”

The nominal date for the Artemis III landing is 2027, but few in the industry expect NASA to be able to hold to that date. With further delays likely, the space agency will probably not name a crew anytime soon.


racer-with-paraplegia-successfully-test-drives-corvette-with-hand-controls

Racer with paraplegia successfully test drives Corvette with hand controls

Able-bodied co-driver Milner will use the Corvette GT3.R’s regular pedals when he drives, with the hand controls engaged when Wickens is in the car. The new hand controls are mounted to the steering wheel column, where otherwise you’d find a spacer between the column and multifunction steering wheel. There are paddles on both sides that operate the throttle, and a ring that engages the brakes.

The road-going Corvette C8 uses brake-by-wire, and Bosch has developed an electronic brake system for motorsport applications, which is now fitted to DXDT’s Corvette. Wickens actually used the Bosch EBS in the last two Pilot Challenge races of last year, but unlike the Corvette, the Elantra did not have a full brake-by-wire system.


“When I embarked on this journey of racing with hand controls, I was always envisioning just that hydraulic sensation with my hands, on applying the brake. And, yeah, everyone involved, they made it happen,” Wickens said. Adding that sensation has involved using tiny springs and dampers, and Wickens likened the process of fine-tuning that to working on a suspension setup for a race car, altering spring rates and damper settings until it felt right.

“You know, the fact that I was just straight away comfortable; frankly, internally, I was concerned that [it] might take me a little bit to get up to speed, but thankfully that wasn’t the case so far. There’s obviously still a lot of work to be done, but so far, I think the signs are positive,” he said.

“I think the biggest takeaway I have so far is that it feels like the Bosch EBS and the hand control system that was developed by Pratt Miller, it was like it belonged in this car,” he said. “There hasn’t been a single hiccup. It feels like… when they designed the Z06 GT3, it was always in the plan, almost? It just looks like it belongs in the car. It feels like it belongs in the car.”


dad-demands-openai-delete-chatgpt’s-false-claim-that-he-murdered-his-kids

Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids

Currently, ChatGPT does not repeat these horrible false claims about Holmen in outputs. A more recent update apparently fixed the issue, as “ChatGPT now also searches the Internet for information about people, when it is asked who they are,” Noyb said. But because OpenAI had previously argued that it cannot correct information—it can only block information—the fake child murderer story is likely still included in ChatGPT’s internal data. And unless Holmen can correct it, that’s a violation of the GDPR, Noyb claims.

“While the damage done may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data,” Noyb says.

OpenAI may not be able to easily delete the data

Holmen isn’t the only ChatGPT user who has worried that the chatbot’s hallucinations might ruin lives. Months after ChatGPT launched in late 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he went to prison. Around the same time, ChatGPT linked a real law professor to a fake sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges.

In some cases, OpenAI filtered the model to avoid generating harmful outputs but likely didn’t delete the false information from the training data, Noyb suggested. But filtering outputs and throwing up disclaimers aren’t enough to prevent reputational harm, Noyb data protection lawyer Kleanthi Sardeli alleged.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” Sardeli said. “AI companies can also not just ‘hide’ false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”


bird-flu-continues-to-spread-as-trump’s-pandemic-experts-are-mia

Bird flu continues to spread as Trump’s pandemic experts are MIA

Under the Biden administration, OPPR also worked behind the scenes. At the time, it was directed by Paul Friedrichs, a physician and retired Air Force major-general. Friedrichs told CNN that the OPPR regularly hosted interagency calls between the US Centers for Disease Control and Prevention, the USDA, the Administration for Strategic Preparedness and Response, the US Food and Drug Administration, and the National Institutes of Health. When the H5N1 bird flu outbreak erupted in dairy farms last March, OPPR was hosting daily meetings, which transitioned to weekly meetings toward the end of the administration.

“At the end of the day, bringing everybody together and having those meetings was incredibly important, so that we had a shared set of facts,” Friedrichs said. “When decisions were made, everyone understood why the decision was made, what facts were used to inform the decision.”

Sen. Patty Murray (D-Wash.), who co-wrote the bill that created OPPR with former Sen. Richard Burr (R-N.C.), is concerned by Trump’s sidelining of the office.

“Under the last administration, OPPR served, as intended, as the central hub coordinating a whole-of-government response to pandemic threats,” she said in a written statement to CNN. “While President Trump cannot legally disband OPPR, as he has threatened to do, it is deeply concerning that he has moved the statutorily created OPPR into the NSC.”

“As intended by law, OPPR is a separate, distinct office for a reason, which is especially relevant now as we are seeing outbreaks of measles, bird flu, and other serious and growing threats to public health,” Murray wrote. “This should be alarming to everyone.”


hints-grow-stronger-that-dark-energy-changes-over-time

Hints grow stronger that dark energy changes over time

In its earliest days, the Universe was a hot, dense soup of subatomic particles, including hydrogen and helium nuclei, aka baryons. Tiny fluctuations created a rippling pattern through that early ionized plasma, which froze into a three-dimensional pattern as the Universe expanded and cooled. Those ripples, or bubbles, are known as baryon acoustic oscillations (BAO). It’s possible to use BAOs as a kind of cosmic ruler to investigate the effects of dark energy over the history of the Universe.

DESI is a state-of-the-art instrument that can capture light from up to 5,000 celestial objects simultaneously.

That’s what DESI was designed to do: take precise measurements of the apparent size of these bubbles (both near and far) by determining the distances to galaxies and quasars over 11 billion years. That data can then be sliced into chunks to determine how fast the Universe was expanding at each point of time in the past, the better to model how dark energy was affecting that expansion.
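The "cosmic ruler" logic can be sketched in a few lines: if the BAO bubble has a known physical size, the angle it subtends on the sky tells you how far away the galaxies are (D ≈ r_s / θ in the small-angle limit). The ~150 megaparsec sound horizon below is a commonly cited round number, used here as an assumption for illustration rather than a figure from the article:

```python
import math

# Standard-ruler sketch: a feature of known comoving size r_s that
# subtends an angle theta lies at distance D = r_s / theta
# (small-angle approximation).
SOUND_HORIZON_MPC = 150.0  # commonly cited round number, assumed here

def distance_from_angle(theta_deg: float) -> float:
    """Comoving distance in Mpc implied by the apparent BAO angle."""
    return SOUND_HORIZON_MPC / math.radians(theta_deg)

# A smaller apparent angle means a more distant galaxy sample.
print(f"{distance_from_angle(2.0):,.0f} Mpc")
```

Measuring that apparent angle at many redshift slices is what lets DESI reconstruct the expansion history, and hence probe whether dark energy's strength has changed over time.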

An upward trend

Last year’s results were based on analysis of a full year’s worth of data taken from seven different slices of cosmic time, including 450,000 quasars, the largest such sample ever collected, with a record-setting precision of 0.82 percent for the most distant epoch (8 to 11 billion years back). While there was basic agreement with the Lambda CDM model, when those first-year results were combined with data from other studies (involving the cosmic microwave background radiation and Type Ia supernovae), some subtle differences cropped up.

Essentially, those differences suggested that dark energy might be getting weaker. In terms of confidence, the results amounted to a 2.6-sigma level for DESI’s data combined with CMB datasets. Adding the supernovae data put the numbers at 2.5-sigma, 3.5-sigma, or 3.9-sigma levels, depending on which particular supernova dataset was used.
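For context on those sigma levels, each maps to a two-sided tail probability under the normal distribution, which is what makes 3.9 sigma considerably stronger evidence than 2.6 sigma (particle physics conventionally reserves "discovery" for 5 sigma). A quick sketch with the standard library:

```python
import math

def two_sided_p(sigma: float) -> float:
    """Two-sided tail probability of a normal deviate at `sigma`."""
    return math.erfc(sigma / math.sqrt(2.0))

# Sigma levels quoted for the DESI + CMB (+ supernovae) combinations,
# plus the conventional 5-sigma discovery threshold for comparison.
for s in (2.6, 3.9, 5.0):
    print(f"{s} sigma -> p = {two_sided_p(s):.2e}")
```

The caveat, as the cosmologists quoted here emphasize, is that a small p-value only matters if the independent datasets are also telling a mutually consistent story.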

It’s important to combine the DESI data with other independent measurements because “we want consistency,” said DESI co-spokesperson Will Percival of the University of Waterloo. “All of the different experiments should give us the same answer to how much matter there is in the Universe at present day, how fast the Universe is expanding. It’s no good if all the experiments agree with the Lambda-CDM model, but then give you different parameters. That just doesn’t work. Just saying it’s consistent to the Lambda-CDM, that’s not enough in itself. It has to be consistent with Lambda-CDM and give you the same parameters for the basic properties of that model.”

Hints grow stronger that dark energy changes over time Read More »