
These are the flying discs the government wants you to know about


DiskSat’s design offers “a power-to-weight ratio unmatched by traditional aluminum satellites.”

An artist’s illustration of DiskSats deploying from a rocket in low-Earth orbit. Credit: NASA

Four small satellites rode a Rocket Lab Electron launch vehicle into orbit from Virginia early Thursday, beginning a government-funded technology demonstration mission to test the performance of a new spacecraft design.

The satellites were nestled inside a cylindrical dispenser on top of the 59-foot-tall (18-meter) Electron rocket when it lifted off from NASA’s Wallops Flight Facility at 12:03 am EST (05:03 UTC). A little more than an hour later, the rocket’s upper stage released the satellites one at a time at an altitude of about 340 miles (550 kilometers).

The launch was the starting gun for a proof-of-concept mission to test the viability of a new kind of satellite called DiskSats. These satellites were designed by the Aerospace Corporation, a nonprofit federally funded research and development center. The project is jointly financed by NASA and the US Space Force, which paid for DiskSat’s development and launch, respectively.

“DiskSat is a lightweight, compact, flat disc-shaped satellite designed for optimizing future rideshare launches,” the Aerospace Corporation says in a statement.

The DiskSats are 39 inches (1 meter) wide, about twice the diameter of a New York-style pizza, and measure just 1 inch (2.5 centimeters) thick. Made of composite carbon fiber, each satellite carries solar cells, control avionics, reaction wheels, and an electric thruster to change and maintain altitude.

“The launch went perfectly, and the DiskSat dispenser worked exactly as designed,” said Darren Rowen, the project’s chief engineer, in a statement. “We’re pleased to have established contact with all four of the DiskSats, and we’re looking forward to the rest of the demonstration mission.”

An engineer prepares Aerospace Corporation’s DiskSats for launch at NASA’s Wallops Flight Facility in Virginia. Credit: Aerospace Corporation

A new form factor

The Aerospace Corporation has supported the US military and NASA since its founding in 1960. A few years ago, engineers at the center developed the DiskSat concept after surveying the government’s emerging needs in spaceflight.

CubeSats have been a ubiquitous part of the satellite industry for nearly a quarter-century. They are based on a cube-shaped design, measuring about 10 centimeters per side, but can be scaled from a single cube “unit” to three, six, 12, or more, depending on mission requirements. The CubeSat standard has become a popular choice for commercial companies, the military, NASA, and universities looking to build small satellites on a tight budget.

By one measure, nearly 3,000 CubeSats have launched since the first one soared into orbit in 2003. After originally being confined to low-Earth orbit, they have now flown to high-altitude orbits, to the Moon, and to Mars.

While CubeSats are now prolific, engineers at the Aerospace Corporation saw an opportunity to improve on the concept. Debra Emmons, Aerospace’s chief technology officer, said the idea originated from Rich Welle, a scientist recently retired from the center’s Experiments Lab, or xLab, division.

“They were asking questions,” Emmons told Ars. “They were looking at CubeSat studies and looking at some alternatives. The typical CubeSat is, in fact, a cube. So, the idea was could you look at some different types of form factors that might be able to generate more power … and offer up benefit for certain mission applications?”

Aerospace’s research team arrived at the DiskSat design. Emmons said the stackable flat-panel format is easier to pack for launch than a CubeSat. The concept is similar to SpaceX’s pioneering approach to launching stackable Starlink Internet satellites, but DiskSats are significantly smaller, lighter, and adaptable to different kinds of missions.

A stack of Starlink satellites prior to launch. Credit: SpaceX

DiskSats have several advantages over CubeSats, according to the Aerospace Corporation. Each of the four DiskSats launched Thursday has a mass of about 35 pounds (16 kilograms), less than that of a typical 12U CubeSat. But a DiskSat has more than 13 times the surface area on a single side, providing valuable real estate for developers to load up the satellite with power-generating solar arrays, sensors, antennas, or other payloads that simply won’t fit on a CubeSat.
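For a rough sense of where that 13-times figure comes from, here is a back-of-the-envelope check. This is a sketch, not an Aerospace Corporation calculation: the 20 x 30 cm face assumed for a 12U CubeSat is a common configuration, not a dimension given in this article.

```python
import math

# DiskSat: a flat disc, 1 meter (39 inches) in diameter
disksat_face_m2 = math.pi * 0.5 ** 2  # ~0.785 m^2

# 12U CubeSat: assuming a 2 x 3 x 2 unit layout, the largest face
# measures roughly 20 cm x 30 cm (an illustrative assumption)
cubesat_face_m2 = 0.20 * 0.30  # 0.06 m^2

ratio = disksat_face_m2 / cubesat_face_m2
print(f"DiskSat broadside: {disksat_face_m2:.3f} m^2")
print(f"12U CubeSat face:  {cubesat_face_m2:.3f} m^2")
print(f"Ratio: {ratio:.1f}x")  # ~13.1x, consistent with the claim above
```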

SpaceX’s current generation of mass-produced Starlink V2 satellites, by comparison, each has a mass of more than 1,100 pounds, or 500 kilograms.

DiskSat’s design offers “a power-to-weight ratio unmatched by traditional aluminum satellites,” the Aerospace Corporation says. In a research paper published earlier this year, engineers from the Aerospace Corporation claimed DiskSat can generate five to 10 times more power than a CubeSat.

A disruptive solution?

What kinds of missions might DiskSat be useful for? One idea involves placing a large radar antenna, too big to fit on any other low-mass satellite, on the broadside of a DiskSat to collect all-weather surveillance imagery. Similarly sized antennas on other DiskSats could support high-bandwidth communications.

With this demo mission, the Aerospace Corporation will test the performance of the DiskSat platform in space for the first time. Engineers will initially look at how the satellites function at 340 miles, then use their electric thrusters to gradually step down to lower altitudes, where another aspect of DiskSat’s design will shine.

Flying edge-on, the satellite’s pancake shape will minimize aerodynamic drag as the DiskSats encounter thicker air below 250 miles. Continual pulsing from the satellites’ electric thrusters will allow the DiskSats to maintain altitude as they glide through the uppermost layers of the atmosphere.

“The primary mission is to demonstrate and to understand the performance, functionality, and maneuverability of the DiskSat buses on orbit, particularly in low-Earth orbit, or LEO, and very low-Earth orbit, or VLEO,” said Catherine Venturini, DiskSat’s principal investigator.

“In theory, I think you could operate down to 200 kilometers (124 miles) with electric propulsion,” Emmons said. That is roughly a third to a half the altitude of most commercial radar imaging satellites. Other satellite operators are also assessing the viability of flying remote sensing missions in VLEO.

Flying closer to the ground delivers higher-resolution imagery, bringing cities, ships, airports, and military bases into sharper view. So it’s easy to see why the Space Force is interested in the DiskSat concept.

DiskSat’s engineers acknowledge there are drawbacks to the format. With such a large surface area, it’s more difficult to manage the temperature extremes of low-Earth orbit than it is with a conventional cube-shaped satellite. While DiskSats carry a lot of oomph to change altitude, their shape makes them somewhat clunky and hard to turn, and engineers say they aren’t well-suited for missions requiring agile pointing.

Rocket Lab’s Electron launcher lifts off to begin the DiskSat demo mission, a program co-funded by NASA and the US military’s Space Test Program. Credit: Austin DeSisto/Rocket Lab

The Aerospace Corporation is a research center, not a commercial satellite manufacturer. Officials at the nonprofit are looking to hand over the DiskSat design to industry through a technology transfer agreement. “The plan is to release or license the technology to partners once it is flight-proven,” the Aerospace Corporation says on its website.

“We think this new technology will be disruptive to the small spacecraft enterprise and ecosystem,” said Eric Breckheimer, DiskSat’s program manager.

DiskSat’s stackable design makes it possible to launch a fleet of high-power, low-mass satellites in one go, according to Emmons.

Following the trend toward bigger CubeSats, the DiskSat format could also grow larger to take advantage of heavier rockets. “There’s a key scalability aspect, and with that in mind, you could bring an entire constellation of DiskSats with you in a single launch,” Breckheimer said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


When Were Things The Best?

People remember their childhood world too fondly.

You adapt to it. You forget the parts that sucked, many of which sucked rather badly. It resonates with you and sticks with you. You think it was better.

This is famously true for music, but also in general, including places it makes no sense like ‘most reliable news reporting.’

Matthew Yglesias: Regardless of how old they are, people tend to think that things were better when they were young.

As a result, you’d expect more negativity as the median age goes up and up.

Very obviously these views are not objective.

As a fun and also useful exercise, as part of the affordability sequence, now that we’ve looked at claims of modern impoverishment and asked when things were cheaper, it’s time to ask ourselves: When were various things really at their best?

In some aspects, yes, the past was better, and those aspects are an important part of the picture. But in many others today is the day and people are wrong about this.

I’ll start with the things on the above graph, in order, add some claims from another source, and also include a few important other considerations that help set up the main thesis of the sequence.

Close-Knit Communities

Far in the past. You wouldn’t like how they accomplished it, but they accomplished it.

The top candidates for specific such communities are:

  1. Hunter-gatherer bands.

  2. Isolated low-tech villages that all share an intense mandatory religion.

  3. Religious minority ethnic enclave communities under severe external threat.

You’re not going to match that without making intensive other sacrifices. Nor should you want to. Those communities were too close-knit for our taste.

In terms of when American communities were, on average, most close-knit, the answer is probably right after we closed the frontier, so around 1900?

Close-knit communities, on a lesser level that is now rare, are valuable and important, but require large continuous investments and opportunity costs. You have to frequently choose engagement with a contained group over alternatives, including when those alternatives are otherwise far superior. You also, to do this today, have to engineer conditions to make the community possible, because you’re not going to be able to form one with whoever happens to live in your neighborhood.

Intentional communities are underrated, as is simply coordinating to live near your friends. I highly recommend such things, but coordination is hard, and they are going to remain rare.

Moral Values

I’m torn between today and about 2012.

There are some virtues and morals that are valuable and have been largely lost. Those who remember the past fondly focus on those aspects.

One could cite, depending on your comparison point, some combination of loyalty to both individuals, groups and institutions, honor and personal codes, hospitality, respect for laws and social norms, social trust, humility, some forms of mercy and forgiveness, stoicism, courage, respect for the sacred and adherence to duty and one’s commitments, especially the commitment to one’s family, having better and higher epistemic and discourse norms, plus religiosity.

There’s varying degrees of truth in those.

But they pale in comparison to the ways that things used to be terrible. People used to have highly exclusionary circles of concern. By the standards of today, until very recently and even under relatively good conditions, approximately everyone was horribly violent and tolerant of violence and bullying of all kinds, cruel to animals, tolerant of all manner of harassment, rape and violations of consent, cruel, intolerant, religiously intolerant often to the point of murder, drunk out of their minds, discriminatory, racist, sexist, homophobic, transphobic, neglectful, unsafe, physically and emotionally abusive to children including outright torture and frequent sexual abuse, and distrustful and dishonest dealing with strangers or in commerce.

It should be very clear which list wins.

This held up until the introduction of social media, at which point some moral dynamics got out of control in various ways, on various sides of various questions, and many aspects went downhill. There were ways in which things got absolutely nuts. I’m not sure if we’ve recovered enough to have fully turned that around.

Political Division

Within recent memory I’m going to say 1992-1996, which is the trap of putting it right in my teenage years. But I’m right. This period had extraordinarily low political division and partisanship.

On a longer time frame, the correct answer is the Era of Good Feelings, 1815-1825.

The mistake people make is to think that today’s high level of political division is some outlier in American history. It isn’t.

Happy Families

Good question. The survey data says 1957.

I also don’t strongly believe it is wrong, but I don’t trust survey data to give the right answer on this, for multiple reasons.

Certainly a lot more families used to be intact. That does not mean they were happy by our modern understanding of happy. The world of the 1950s was quite stifling. A lot of the way families stayed intact was people pretended everything was fine, including many things we now consider very not fine.

People benefited (in happiness terms) from many forms of lower expectations. That doesn’t mean that if you duplicated their life experiences, your family would be happy.

Peak fertility, in terms of having the most children, came during the Baby Boom, if we exclude the bad old times when children often failed to survive.

Marriage rates used to be near-universal, whether or not you think that was best.

Reliable News Reporting

Believe it or not, today. Yikes. We don’t believe it because of the Revolution of Rising Expectations. We now have standards for the press that the press has never met.

People used to trust the media more. Now we trust it a lot less. While there are downsides to this lack of trust, especially when people turn to even less worthy alternatives, that loss of trust is centrally good. The media was never worthy of trust.

There’s great fondness for the Walter Cronkite era, where supposedly we had high authority news sources worthy of our high trust. The thing is, that past trust was also misplaced, and indeed was even more misplaced.

There was little holding the press to account. They had their own agendas and biases, even if it was often ‘the good of the nation’ or ‘the good of the people,’ and they massively misunderstood things and often got things wrong. Reporters talking on the level of saying ‘wet ground causes rain’ is not a new phenomenon. When they did make mistakes or slant their coverage, there was no way to correct them back then.

Whereas now, with social media, we can and do keep the media on its toes.

If your goal is to figure out what is going on and you’re willing to put in the work, today you have the tools to do that, and in the past you basically didn’t, not in any reasonable amount of time.

The fact that other people do that, and hold them to account, makes the press hold itself to higher standards.

Music

There are several forms of ‘the best music.’ It’s kind of today, kind of the 60s-80s.

If you are listening to music on your own, it is at its best today, by far. The entire back catalogue of the world is available at your fingertips, with notably rare exceptions, for a small monthly fee, on demand and fully customizable. If you are an audiophile and want super high quality, you can do that too. There’s no need to spend all that time seeking things out.

If you want to create new music, on your own or with AI? Again, it’s there for you.

In terms of the creation of new music weighted by how much people listen, or in terms of the quality of the most popular music, I’d say probably the 1980s? A strong case can be made for the 60s or 70s too, my guess is that a bunch of that is nostalgia and too highly valuing innovation, but I can see it. What I can’t see is a case for the 1990s or 2000s, or especially 2010s or 2020s.

This could be old man syndrome talking, and it could be the benefits of a lot of selection, but when I sample recent popular music it mostly (with exceptions!) seems highly non-innovative and also not very good. It’s plausible that with sufficiently good search and a willingness to seek out deep cuts, today is indeed the best time for new music, but I don’t know how to do that search.

In terms of live music experiences, especially for those with limited budgets, my guess is this was closer to 1971, as so much great stuff was in hindsight so amazingly accessible.

The other case for music being better before is that music was better when it was worse. As in, you had to search for it, select it, pay for it, you had to listen to full albums and listen to them many times, so it meant more, and today’s freedom brings bad habits. I see the argument, but no, and you can totally set rules for yourself if that is what you want. I often have for brief periods, to shake things up.

Radio

My wild guess for traditional radio is the 1970s? There was enough high quality music, you had the spirit of radio, and video hadn’t killed the radio star.

You could make an argument for the 1930s-40s, right before television displaced it as the main medium. Certainly radio back then was more important and central.

The real answer is today. We have the best radio today.

We simply don’t call it radio.

Instead, we mostly call it podcasts and music streaming.

If you want pseudorandom music, Pandora and other similar services, or Spotify-style playlists, are together vastly better than traditional radio.

If you want any form of talk radio, or news radio, or other word-based radio programs that don’t depend on being broadcast live, podcasts rule. The quality and quantity and variety on offer are insane and you can move around on demand.

Also, remember reception problems? Not anymore.

Fashion

Long before any of us were born, or today, depending on whether you mean ‘most awesome’ or ‘would choose to wear.’

Today’s fashion is not only cheaper, it is easier and more comfortable. In exchange, no, it does not look as cool.

The Economy

As the question is intended, 2019. Then Covid happened. We still haven’t fully recovered from that.

There were periods with more economic growth or that had better employment conditions. You could point to 1947-1973 riding the postwar wave, or the late 1990s before the dot com bubble burst.

I still say 2019, because levels of wealth and real wages also matter.

Movies

In general I choose today. Average quality is way up and has been going up steadily, except for a blip when we got way too many superhero movies crowding things out, but we’ve recovered from that.

The counterargument I respect is that the last few years have had no top tier all-time greats, and perhaps this is not an accident. We’ve forced movies to do so many other things well that there’s less room for full creativity and greatness to shine through? Perhaps this is true, and this system gets us fewer true top movies. But also that’s a Poisson distribution, you need to get lucky, and the effective sample size is small.

If I have to pick a particular year I’d go with 1999.

The traditional answer is the 1970s, but this is stupid and disregards the Revolution of Rising Expectations. Movies then were given tons of slack in essentially every direction. Were there some great picks? No doubt, although many of what we think of as all-time greats are remarkably slow to the point where if they weren’t all time greats they’d almost not be watchable. In general, if you think things were better back then, you’re grading back then on a curve, you have an extreme tolerance for not much happening, and also you’re prioritizing some sort of abstract Quality metric over what is actually entertaining.

Television

Today. Stop lying to yourself.

The experience of television used to be terrible, and the shows used to be terrible. So many things very much do not hold up today even if you cut them quite a lot of slack. Old sitcoms are sleep inducing. Old dramas were basic and had little continuity. Acting tended to be quite poor. They don’t look good, either.

The interface for watching was atrocious. You would watch absurd amounts of advertisements. You would plan your day around when things were there, or you’d watch ‘whatever was on TV.’ If you missed episodes they would be gone. DVRs were a godsend despite requiring absurd levels of effort to manage optimally, and still giving up a ton of value.

The interface now is most of everything ever made at your fingertips.

The alternative argument to today being best is that many say that in terms of new shows the prestige TV era of the 2000s-2010s was the golden age, and the new streaming era can’t measure up, especially due to fractured experiences.

I agree that the shared national experiences were cool and we used to have more of them and they were bigger. We still get them, most recently for Severance and perhaps The White Lotus and Pluribus, which isn’t the same, but there really are still a ton of very high quality shows out there. Average quality is way up. Top talent going on television shows is way up, they still let top creators do their thing, and there are shows with top-tier people I haven’t even looked at, which never used to happen.

Sports

Today. Stop lying to yourself.

Average quality of athletic performance is way, way up. Modern players do things you wouldn’t believe. Game design has in many ways improved as well, as has the quality of strategic decision making.

Season design is way better. We get more and better playoffs, which can go too far but typically keeps far more games more relevant and exciting and high stakes. College football is insanely better for this over the last few years, I doubted and I was wrong. Baseball purists can complain but so few games used to mean anything. And so on.

Unless people are going to be blowing up your phone, you can start an event modestly late and skip all the ads and even dead time. You can watch sports on your schedule, not someone else’s. If you must be live, you can now get coverage in lots of alternative ways, and also get access to social media conversations in real time, various website information services and so on.

If you’re going to the stadium, the modern experience is an upgrade. It is down to a science. All seats are good seats and the food is usually excellent.

There are three downside cases.

  1. We used to all watch the same sporting events live and together more often. That was cool, but you can still find plenty of people online doing this anyway.

  2. In some cases correct strategic play has made things less fun. Too many NBA three pointers are a problem, as is teams figuring out that MLB starters should be pulled rather early, or analytics simply homogenizing play. The rules have been too slow to adjust. It’s a problem, but on net I think a minor one. It’s good to see games played well.

  3. Free agency has made teams retain less identity, and made it harder to root for the same players over a longer period. This one hurts and I’d love to go back, even though there are good reasons why we can’t.

Mostly I think it’s nostalgia. Modern sports are awesome.

Food

Today, and it’s really, really not close. If you don’t agree, you do not remember. So much of what people ate in the 20th century was barely even food by today’s standards, both in terms of tasting good and its nutritional content.

Food has gotten The Upgrade.

Average quality is way, way up. Diversity is way up, authentic or even non-authentic ethnic cuisines mostly used to be quite rare. Delivery used to be pizza and Chinese. Quality and diversity of available ingredients is way up. You can get it all on a smaller percentage of typical incomes, whether at home or from restaurants, and so many more of us get to use those restaurants more often.

A lot of this is driven by having access to online information and reviews, which allows quality to win out in a way it didn’t before, but even before that we were seeing rapid upgrades across the board.

Job Security

Some time around 1965, probably? We had a pattern of something approaching lifetime employment, where it was easy to keep one’s job for a long period and count on doing so. The chance of staying in a job for 10+ or 20+ years has declined a lot. That kind of stability made people feel a lot more secure, and it matters a lot.

That doesn’t mean you actually want the same job for 20+ years. There are some jobs where you totally do want that, but a lot of the jobs people used to keep for that long are jobs we wouldn’t want. Despite people’s impressions, the increased job changes have mostly not come from people being fired.

We don’t have the best everything. There are exceptions.

Most centrally, we don’t have the best intact families or close-knit communities, or the best dating ecosystem or best child freedoms. Those are huge deals.

But there are so many other places in which people are simply wrong.

As in:

Matt Walsh (being wrong, lol at ‘empirical,’ 3M views): It’s an empirical fact that basically everything in our day to day lives has gotten worse over the years. The quality of everything — food, clothing, entertainment, air travel, roads, traffic, infrastructure, housing, etc — has declined in observable ways. Even newer inventions — search engines, social media, smart phones — have gone down hill drastically.

This isn’t just a random “old man yells at clouds” complaint. It’s true. It’s happening. The decline can be measured. Everyone sees it. Everyone feels it. Meanwhile political pundits and podcast hosts (speaking of things that are getting worse) focus on anything and everything except these practical real-life problems that actually affect our quality of life.

The Honest Broker: There is an entire movement focused on trying to convince people that everything used to be better and everything is also getting worse and worse

That creates a market for reality-based correctives like the excellent thread below by @ben_golub [on air travel.]

Matthew Yglesias: I think everyone should take seriously:

  1. Content distribution channels have become more competitive and efficient

  2. Negative content tends to perform better

  3. Marinating all day in negativity-inflected content is cooking people’s brains

My quick investigation confirmed that American roads, traffic and that style of infrastructure did peak in the mid-to-late 20th century. We have not been doing a good job maintaining that.

On food, entertainment, clothing and housing he is simply wrong (have you heard of this new thing called ‘luxury’ apartments, or checked average sizes or amenities?), and to even make some of these claims requires both claiming ‘this is cheaper but it’s worse’ and ‘this is worse because it used to be cheaper’ in various places.

bumbadum: People are chimping out at Matt over this but nobody has been able to name one thing that has significantly grown in quality in the past 10-20 years.

Every commodity, even as they have become cheaper and more accessible has decreased in quality.

I am begging somebody to name 1 thing that is all around a better product than its counterpart from the 90s

Megan McArdle: Tomatoes, raspberries, automobiles, televisions, cancer drugs, women’s shoes, insulin monitoring, home security monitoring, clothing for tall women (which functionally didn’t exist until about 2008), telephone service (remember when you had to PAY EXTRA to call another area code?), travel (remember MAPS?), remote work, home video … sorry, ran out of characters before I ran out of hedonic improvements.

Thus:

Today. No explanation required on these.

Don’t knock the vast improvements in computers and televisions.

Saying the quality of phones has gone down, as Matt Walsh does, is absurdity.

That does still leave a few other examples he raised.

Air Travel

Today, or at least 2024 if you think Trump messed some things up.

I say this as someone who used to fly on about half of weekends, for several years.

Air travel has decreased in price, the most important factor, and improved in safety. Experiential quality of the flight itself declined a bit, but has risen again as airport offerings improved and getting through security and customs went back from a nightmare to trivial. Net time spent, given less uncertainty, has gone down.

If you are willing to pay the old premium prices, you can buy first class tickets and get an experience as good as or better than what the old tickets offered.

Cars

Today. We wax nostalgic about old cars. They looked cool. They also were cool.

They were also less powerful, more dangerous, much less fuel efficient, much less reliable, with far fewer features and of course absolutely no smart features. That’s even without considering that we’re starting to get self-driving cars.

Roads and Traffic

This is one area where my preliminary research did back Walsh up. America has done a poor job of maintaining its roads and managing its traffic, and has not ‘paid the upkeep’ on many aspects of what was previously world-class infrastructure. These things seem to have peaked in the late 20th century.

I agree that this is a rather bad sign, and we should both fix and build the roads and also fix the things that are causing us not to fix and build the roads.

As a result of not keeping up with demand for roads or demand for housing in the right areas, average commute times for those going into the office have been increasing, but post-Covid we have ~29% of working days happening from home, which overwhelms all other factors combined in terms of hours on the road.

I do expect traffic to improve due to self-driving cars, but that will take a while.

Taxis

Today, or at least the mobile phone and rideshare era. You used to have to call for or hail a taxi. Now in most areas you open your phone and a car appears. In some places it can be a Waymo, which is now doubling yearly. The ability to summon a taxi matters so much more than everything else, and as noted above air travel is improved.

This is way more important than net modest issues with roads and traffic.

Trains have not improved but they are not importantly worse.

Not everything is getting better all the time. Important things are getting worse.

We still need to remember and count our blessings, and not make up stories about how various things are getting worse, when those things are actually getting better.

To sum up, and to add some additional key factors, the following things did indeed peak in the past and quality is getting worse as more than a temporary blip:

  1. Political division.

  2. Average quality of new music, weighted by what people listen to.

  3. Live music and live radio experiences, and other collective national experiences.

  4. Fashion, in terms of awesomeness.

  5. Roads, traffic and general infrastructure.

  6. Some secondary but important moral values.

  7. Dating experiences, ability to avoid going on apps.

  8. Job security, ability to stay in one job for decades if desired.

  9. Marriage rates and intact families, including some definitions of ‘happy’ families.

  10. Fertility rates and felt ability to have and support children as desired.

  11. Childhood freedoms and physical experiences.

  12. Hope for the future, which is centrally motivating this whole series of posts.

The second half of that list is freaking depressing. Yikes. Something’s very wrong.

But what’s wrong isn’t the quality of goods, or many of the things people wax nostalgic about. The first half of this list cannot explain the second half.

Compare that first half to the ways in which quality is up, and in many of these cases things are 10 times better, or 100 times better, or barely used to even exist:

  1. Morality overall, in many rather huge ways.

  2. Access to information, including the news.

  3. Logistics and delivery. Ease of getting the things you want.

  4. Communication. Telephones including mobile phones.

  5. Music as consumed at home via deliberate choice.

  6. Audio experiences. Music streams and playlists. Talk.

  7. Electronics, including computers, televisions, medical devices, security systems.

  8. Television, both new content and old content, and modes of access.

  9. Movies, both new content and old content, and modes of access.

  10. Fashion in terms of comfort, cost and upkeep.

  11. Sports.

  12. Cuisine. Food of all kinds, at home and at restaurants.

  13. Air travel.

  14. Taxis.

  15. Cars.

  16. Medical care, dental care and medical (and nonmedical) drugs.

That only emphasizes the bottom of the first list. Something’s very wrong.

Once again, us doing well does not mean we shouldn’t be doing better.

We see forms of the same trends.

  1. Many things are getting better, but often not as much better as they could be.

  2. Other things are getting worse, both in ways inevitable and avoidable.

  3. This identifies important problems, but the changes in quantity and quality of goods and services do not explain people’s unhappiness, or why many of the most important things are getting worse. More is happening.

Some of the things getting worse reflect changes in technological equilibria or the running out of low-hanging fruit, in ways that are tricky to fix. Many of those are superficial, although a few of them aren’t. But these don’t add up to the big issues.

More is happening.

That more is what I will, in the next post, be calling The Revolution of Rising Expectations, and the Revolution of Rising Requirements.



YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel. Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.


Does swearing make you stronger? Science says yes.

The result: Only the F-word had any effect on pain outcomes. The team also measured the subjects’ pain threshold, asking them to indicate when the ice water began to feel painful. Those who chanted the F-word waited longer before indicating they felt pain—in other words, the swearing increased their threshold for pain. Chanting “fouch” or “twizpipe” had no effect on either measure.

F@%*-ing go for it

For this latest study, Stephens was interested in investigating potential mechanisms for swearing as a possible form of disinhibition (usually viewed negatively), building on his team’s 2018 and 2022 papers showing that swearing can improve strength in a chair push-up task. “In many situations, people hold themselves back—consciously or unconsciously—from using their full strength,” said Stephens. “By swearing, we throw off social constraint and allow ourselves to push harder in different situations. Swearing is an easily available way to help yourself feel focused, confident and less distracted, and ‘go for it’ a little more.”

In two separate experiments, participants were asked to select a swear word they’d normally use after, say, bumping their head, and a more neutral word to describe an inanimate object like a table. They then performed the aforementioned chair push-up task: sitting on a sturdy chair and placing their hands under their thighs with the fingers pointed inwards. Then they lifted their feet off the floor and straightened their arms to support their body weight for as long as possible, chanting either the swear word or the neutral word every two seconds. Afterward, subjects completed a questionnaire to assess various aspects of their mental state during the task.

The results: Subjects who swore during the task could support their body weight much longer than those who merely repeated the neutral word. This confirms the reported results of similar studies in the past. Furthermore, subjects reported increases in their sense of psychological “flow,” distraction, and self-confidence, all indicators of increased disinhibition.

“These findings help explain why swearing is so commonplace,” said Stephens. “Swearing is literally a calorie-neutral, drug-free, low-cost, readily available tool at our disposal for when we need a boost in performance.” The team next plans to explore the influence of swearing on public speaking and romantic behaviors, since these are situations where most people are more hesitant and less confident in themselves, and hence more likely to hold back.

DOI: American Psychologist, 2025. 10.1037/amp0001650  (About DOIs).


Texas sues biggest TV makers, alleging smart TVs spy on users without consent


Automated Content Recognition brings “mass surveillance” to homes, lawsuits say.

Credit: Getty Images | Maskot

Texas Attorney General Ken Paxton sued five large TV manufacturers yesterday, alleging that their smart TVs spy on viewers without consent. Paxton sued Samsung, the longtime TV market share leader, along with LG, Sony, Hisense, and TCL.

“These companies have been unlawfully collecting personal data through Automated Content Recognition (‘ACR’) technology,” Paxton’s office alleged in a press release that contains links to all five lawsuits. “ACR in its simplest terms is an uninvited, invisible digital invader. This software can capture screenshots of a user’s television display every 500 milliseconds, monitor viewing activity in real time, and transmit that information back to the company without the user’s knowledge or consent. The companies then sell that consumer information to target ads across platforms for a profit. This technology puts users’ privacy and sensitive information, such as passwords, bank information, and other personal information at risk.”
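To put that sampling rate in perspective, here is a quick back-of-the-envelope calculation. It is a sketch based only on the 500-millisecond figure alleged in the complaints; the five-hour viewing day is an assumption for illustration.

```python
INTERVAL_S = 0.5  # one screenshot every 500 ms, per the allegation

per_hour = 3600 / INTERVAL_S   # 7,200 captures per hour of viewing
per_day_24h = 24 * per_hour    # 172,800 if the TV ran around the clock
per_day_5h = 5 * per_hour      # 36,000 at five hours of daily viewing

print(f"{per_hour:,.0f} captures per viewing hour")
print(f"{per_day_24h:,.0f} per day if the TV never turned off")
print(f"{per_day_5h:,.0f} per day at five hours of viewing")
```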

The lawsuits allege violations of the Texas Deceptive Trade Practices Act, seeking damages of up to $10,000 for each violation and up to $250,000 for each violation affecting people 65 years or older. Texas also wants restraining orders prohibiting the collection, sharing, and selling of ACR data while the lawsuits are pending.

Texas argues that providing personalized content and targeted advertising are not legitimate purposes for collecting ACR data about consumers. The companies’ “insatiable appetite for consumer data far exceeds what is reasonably necessary,” and the “invasive data harvesting is only needed to increase advertisement revenue, which does not satisfy a consumer-necessity standard,” the lawsuits say.

Paxton is far from the first person to raise privacy concerns about smart TVs. The Center for Digital Democracy advocacy group said in a report last year that in “the world of connected TV, viewer surveillance is now built directly into the television set, making manufacturers central players in data collection, monitoring, and digital marketing.” We recently published a guide on how to break free from smart TV ads and tracking.

“Companies using ACR claim that it is all opt-in data, with permission required to use it,” the Center for Digital Democracy report said. “But the ACR system is bundled into new TVs as part of the initial set-up, and its extensive role in monitoring and sharing viewer actions is not fully explained. As a consequence, most consumers would be unaware of the threats and risks involved in signing up for the service.”

“Mass surveillance system” in US living rooms

Pointing out that Hisense and TCL are based in China, Paxton’s press release said the firms’ “Chinese ties pose serious concerns about consumer data harvesting and are exacerbated by China’s National Security Law, which gives its government the capability to get its hands on US consumer data.”

“Companies, especially those connected to the Chinese Communist Party, have no business illegally recording Americans’ devices inside their own homes,” Paxton said. “This conduct is invasive, deceptive, and unlawful. The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries.”

The Paxton lawsuits, filed in district courts in several Texas counties, are identical in many respects. The complaints allege that TVs made by the five companies “aren’t just entertainment devices—they’re a mass surveillance system sitting in millions of American living rooms. What consumers were told would enhance their viewing experience actually tracks, analyzes, and sells intimate details about everything they watch.”

Using ACR, each company “secretly monitors what consumers watch across streaming apps, cable, and even connected devices like gaming consoles or Blu-ray players,” and harvests the data to build profiles of consumer behavior and sell the data for profit, the complaints say.

We contacted the five companies sued by Texas today. Sony, LG, and Hisense responded and said they would not comment on a pending legal matter.

Difficult opt-out processes detailed

The complaints allege that the companies fail to obtain meaningful consent from users. The following excerpt is from the Samsung lawsuit but is repeated almost verbatim in the others:

Consumers never agreed to Samsung Watchware. When families buy a television, they don’t expect it to spy on them. They don’t expect their viewing habits packaged and auctioned to advertisers. Yet Samsung deceptively guides consumers to activate ACR and buries any explanation of what that means in dense legal jargon that few will read or understand. The so-called “consent” Samsung obtains is meaningless. Disclosures are hidden, vague, and misleading. The company collects far more data than necessary to make the TV work. Consumers are stripped of real choice and kept in the dark about what’s happening in their own homes on Samsung Smart TVs.

Samsung and other companies force consumers to go through multistep menus to exercise their privacy choices, Texas said. “Consumers must circumnavigate a long, non-intuitive path to exercise their right to opt-out,” the Samsung lawsuit said. This involves selecting menu choices for Settings, Additional Settings, General Privacy, Terms & Privacy, Viewing Information Services, and, finally, “Disable,” the lawsuit said. There are “additional toggles for Interest-Based Ads, Ad Personalization, and Privacy Choices,” the lawsuit said.

The “privacy choices are not meaningful because opt-out rights are scattered across four or more separate menus which requires approximately 15+ clicks,” the lawsuit continued. “To fully opt-out of ACR and related ad tracking on Samsung Smart TVs, consumers must disable at least two settings: (1) Viewing Information Services, and (2) Interest-Based Ads. Each of which appear in different parts of the setting UI. Conversely, Samsung provides consumers with a one-click enrollment option to opt-in during the initial start-up process.”

When consumers first start up a Samsung smart TV, they “must click through a multipage onboarding flow before landing on a consent screen, titled Smart Hub Terms & Conditions,” the lawsuit said. “Upon finally reaching the consent screen, consumers are presented with four notices: Terms & Conditions: Dispute Resolution Agreement, Smart Hub U.S. Policy Notice, Viewing Information Services, and Interest-Based Advertisements Service U.S. Privacy Notice, with only one button prominently displayed: I Agree to all.”
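Laid out as data, the asymmetry the complaint describes looks something like the sketch below. The menu labels come from the lawsuit as quoted above; counting each menu selection as one step is a simplification, since the filing tallies “approximately 15+ clicks” across all the scattered settings.

```python
# Path alleged for disabling Viewing Information Services, one of at
# least two settings Samsung owners must turn off to fully opt out of ACR
opt_out_path = [
    "Settings", "Additional Settings", "General Privacy",
    "Terms & Privacy", "Viewing Information Services", "Disable",
]
# Versus the single prominent button presented during initial setup
opt_in_path = ["I Agree to all"]

print(f"Opt out: {len(opt_out_path)} steps, plus separate ad-tracking toggles")
print(f"Opt in:  {len(opt_in_path)} step")
```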

Deceptive trade practices alleged

It would be unreasonable to expect consumers to understand that Samsung TVs come equipped with surveillance capabilities, the lawsuit said. “Most consumers do not know, nor have any reason to suspect, that Samsung Smart TVs are capturing in real-time the audio and visuals displayed on the screen and using the information to profile them for advertisers,” it said.

Paxton alleges that TV companies violated the state’s Deceptive Trade Practices Act with misrepresentations regarding the collection of personal information and failure to disclose the use of ACR technology. The lawsuit against Hisense additionally alleges a failure to disclose that it may provide the Chinese government with consumers’ personal data.

Hisense “fails to disclose to Texas Consumers that under Chinese law, Hisense is required to transfer its collections of Texas consumers’ personal data to the People’s Republic of China when requested by the PRC,” the lawsuit said.

The TCL lawsuit doesn’t include that specific charge. But both the Hisense and TCL complaints say the Chinese Communist Party may use ACR data from the companies’ smart TVs “to influence or compromise public figures in Texas, including judges, elected officials, and law enforcement, and for corporate espionage by surveilling those employed in critical infrastructure, as part of the CCP’s long-term plan to destabilize and undermine American democracy.”

The TVs “are effectively Chinese-sponsored surveillance devices, recording the viewing habits of Texans at every turn without their knowledge or consent,” the lawsuits said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


Reporter suggests Half-Life 3 will be a Steam Machine launch title

If you can take your mind way back to the beginning of 2025, you might remember a fresh wave of rumors suggesting that Half-Life 3 was finally reaching the final stages of production, and could be announced and/or released at any moment. Now, though, 2025 seems set to come to a close without any official news of a game fans have been waiting literal decades for.

That doesn’t necessarily mean a Half-Life 3 announcement and/or release isn’t imminent, though. On the contrary, veteran journalist Mike Straw insisted on a recent Insider Gaming podcast that “everybody I’ve talked to are still adamant [Half-Life 3] is a game that will be a launch title with the Steam Machine.”

Straw—who has a long history of reporting gaming rumors from anonymous sources—said this Half-Life 3 information is “not [from] these run-of-the-mill sources that haven’t gotten me information before. … These aren’t like random, one-off people.” And those sources are “still adamant that the game is coming in the spring,” Straw added, noting that he was “specifically told [that] spring 2026 [is the window] for the Steam Machine, for the Frame, for the Controller, [and] for Half-Life 3.”

For real, this time?

Tying the long-awaited Half-Life 3 to a major hardware push that has already been announced for an “early 2026” window certainly sounds plausible, given previous leaks about the game’s advanced state of development. But there are still some reasons to doubt Straw’s “adamant” sources here.

For one, Straw admitted that the previous information he had received on potential Half-Life 3 launch and/or announcement dates was not reliable enough to report in detail. “I had been told a date. I was not going to report that date because they weren’t 100 percent confident in that date,” he said. “That date has since passed.”


Utah leaders hinder efforts to develop solar energy supply


Solar power accounts for two-thirds of the new projects waiting to connect to the state’s power grid.

Utah Gov. Spencer Cox believes his state needs more power—a lot more. By some estimates, Utah will require as much electricity in the next five years as it generated all last century, both to meet the demands of a growing population and to chase the data centers and AI developers Cox hopes will fuel its economy.

To that end, Cox announced Operation Gigawatt last year, declaring the state would double energy production in the next decade. Although the announcement was short on details, Cox, a Republican, promised his administration would take an “any of the above” approach, which aims to expand all sources of energy production.

Despite that goal, the Utah Legislature’s Republican supermajority, with Cox’s acquiescence, has taken a hard turn against solar power—which has been coming online faster than any other source in Utah and accounts for two-thirds of the new projects waiting to connect to the state’s power grid.

Cox signed a pair of bills passed this year that will make it more difficult and expensive to develop and produce solar energy in Utah by ending solar development tax credits and imposing a hefty new tax on solar generation. A third bill aimed at limiting solar development on farmland narrowly missed the deadline for passage but is expected to return next year.

While Operation Gigawatt emphasizes nuclear and geothermal as Cox’s preferred sources, the legislative broadside, and Cox’s willingness to go along with it, caught many in the solar industry off guard. The three bills, in their original form, could have brought solar development to a halt if not for solar industry lobbyists negotiating a lower tax rate and protecting existing projects as well as those under construction from the brunt of the impact.

“It took every dollar of political capital from all the major solar developers just to get to something tolerable, so that anything they have under development will get built and they can move on to greener pastures,” said one industry insider, indicating that solar developers will likely pursue projects in more politically friendly states. ProPublica spoke with three industry insiders—energy developers and lobbyists—all of whom asked to remain anonymous for fear of antagonizing lawmakers who, next month, will again consider legislation affecting the industry.

The Utah Legislature’s pivot away from solar mirrors President Donald Trump taking a more hostile approach to the industry than his predecessor. Trump has ordered the phaseout of lucrative federal tax incentives for solar and other renewable energy, which expanded under the Biden administration. The loss of federal incentives is a bigger hit to solar companies than the reductions to Utah’s tax incentives, industry insiders acknowledged. The administration has also canceled large wind and solar projects, which Trump has lamented as “the scam of the century.” He described solar as “farmer killing.”

Yet Cox criticized the Trump administration’s decision to kill a massive solar project in neighboring Nevada. Known as a governor who advocates for a return to more civil political discourse, Cox doesn’t often pick fights. But he didn’t pull punches with the decision to halt the Esmeralda 7 project planned on 62,300 acres of federal land. The central Nevada project was expected to produce 6.2 gigawatts of power—enough to supply nearly eight times the number of households in Las Vegas. (Although the Trump administration canceled the environmental review of the joint project proposed by multiple developers, it has the potential to move forward as individual projects.)
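For scale, here is a rough check of that comparison. The numbers below are illustrative assumptions, not figures from the article or the project’s filings.

```python
PROJECT_GW = 6.2                # Esmeralda 7's expected output
AVG_HOUSEHOLD_KW = 1.2          # assumed average continuous draw per household
LAS_VEGAS_HOUSEHOLDS = 650_000  # rough assumption for the Las Vegas area

households_served = PROJECT_GW * 1_000_000 / AVG_HOUSEHOLD_KW
multiple = households_served / LAS_VEGAS_HOUSEHOLDS
print(f"~{households_served / 1e6:.1f} million households served")
print(f"~{multiple:.1f}x the households in Las Vegas")
```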

“This is how we lose the AI/energy arms race with China,” Cox wrote on X when news surfaced of the project’s cancellation. “Our country needs an all-of-the-above approach to energy (like Utah).”

But he didn’t take on his own Legislature, at least publicly.

Many of Utah’s Republican legislators have been skeptical of solar for years, criticizing its footprint on the landscape and viewing it as an unreliable energy source, while lamenting the retirement of coal-generated power plants. The economies of several rural counties rely on mining coal. But lawmakers’ skepticism hadn’t coalesced into successful anti-solar legislation—until this year. When Utah lawmakers convened at the start of 2025, they took advantage of the political moment to go after solar.

“This is a sentiment sweeping through red states, and it’s very disconcerting and very disturbing,” said Steve Handy, Utah director of The Western Way, which describes itself as a conservative organization advocating for an all-of-the-above approach to energy development.

The shift in sentiment against solar energy has created a difficult climate for an all-of-the-above approach. Solar projects can be built quickly on Utah’s vast, sun-drenched land, while nuclear is a long game with projects expected to take a decade or more to come online under optimistic scenarios.

Cox generally supports solar, “in the right places,” especially when the captured energy can be stored in large batteries for distribution on cloudy days and after the sun goes down.

Cox said that instead of vetoing the anti-solar bills, he spent his political capital to moderate the legislation’s impact. “I think you’ll see where our fingerprints were,” he told ProPublica. He didn’t detail specific changes for which he advocated but said the bills’ earlier iterations would have “been a lot worse.”

“We will continue to see solar in Utah.”

Cox’s any-of-the-above approach to energy generation draws from a decades-old Republican push similarly titled “all of the above.” The GOP policy’s aim was as much about preserving and expanding reliance on fossil fuels (indeed, the phrase may have been coined by petroleum lobbyists) as it was turning to cleaner energy sources such as solar, wind, and geothermal.

As governor of a coal-producing state, Cox hasn’t shown interest in reducing reliance on such legacy fuels. But as he slowly rolls out Operation Gigawatt, his focus has been on geothermal and nuclear power. Last month, he announced plans for a manufacturing hub for small modular reactors in the northern Utah community of Brigham City, which he hopes will become a nuclear supply chain for Utah and beyond. And on a recent trade mission to New Zealand, he signed an agreement to collaborate with the country on geothermal energy development.

Meanwhile, the bills Cox signed into law already appear to be slowing solar development in Utah. Since May, when the laws took effect, 51 planned solar projects withdrew their applications to connect to the state’s grid—representing more than a quarter of all projects in Utah’s transmission connection queue. Although projects drop out for many reasons, some industry insiders theorize the anti-solar legislation could be at play.

Caught in the political squeeze over power are Utah customers, who are footing higher electricity bills. Earlier this year, the state’s utility, Rocky Mountain Power, asked regulators to approve a 30 percent rate hike to cover increased fuel and wholesale energy costs, as well as upgrades to the grid. In response to outrage from lawmakers, the utility knocked the request down to 18 percent. Regulators eventually awarded the utility a 4.7 percent increase—a decision the utility promptly appealed to the state Supreme Court.

Juliet Carlisle, a University of Utah political science professor focusing on environmental policy, said the new solar tax could signal to large solar developers that Utah energy policy is “becoming more unpredictable,” prompting them to build elsewhere. This, in turn, could undermine Cox’s efforts to quickly double Utah’s electricity supply.

Operation Gigawatt “relies on rapid deployment across multiple energy sources, including renewables,” she said. “If renewable growth slows—especially utility-scale solar, which is currently the fastest-deploying resource—the state may face challenges meeting demand growth timelines.”

For several legislative sessions, Rep. Kay Christofferson, R-Lehi, had sponsored legislation to end the solar industry’s state tax credits, but this was the first time the proposal succeeded.

Christofferson agrees Utah is facing unprecedented demand for power, and he supports Cox’s any-of-the-above approach. But he doesn’t think solar deserves the advantages of tax credits. Despite improving battery technology, he still considers solar an intermittent source and thinks overreliance on it would work against Utah’s energy goals.

In testimony on his bill, Christofferson said he believed the tax incentives had served their purpose of getting a new industry off the ground—16 percent of Utah’s power generation now comes from solar, ranking it 16th in the nation for solar capacity.

Christofferson’s bill was the least concerning to the industry, largely because he negotiated a lengthy wind-down of the subsidies. Initially, the bill would have ended the tax credit after Jan. 1, 2032. But after negotiations with the solar industry, he extended the deadline to 2035.

The bill passed the House, but when it reached the Senate floor, Sen. Brady Brammer, R-Pleasant Grove, moved the end of the incentives to 2028. He told ProPublica he believes solar is already established and no longer needs the subsidy. Christofferson tried to defend his compromise but ultimately voted with the legislative majority.

Unlike Christofferson, whose bill wasn’t born of an antipathy for renewable energy, Rep. Casey Snider, R-Paradise, made it clear in public statements and behind closed doors to industry lobbyists that the goal of his bill was to make solar pay.

The bill imposes a tax on all solar production. The proceeds will substantially increase the state’s endangered species fund, which Utah paradoxically uses to fight federal efforts to list threatened animals for protection. Snider cast his bill as pro-environment, arguing the money could also go to habitat protection.

As initially written, the bill would have taxed not only future projects, but also those already producing power and, more worrisome for the industry, projects under construction or in development with financing in place. The margins on such projects are thin, and the unanticipated tax could kill projects already in the works, one solar industry executive testified.

“Companies like ours are being effectively punished for investing in the state,” testified another.

The pushback drew attacks from Snider, who accused solar companies of hypocrisy on the environment.

Industry lobbyists who spoke to ProPublica said Snider wasn’t as willing to negotiate as Christofferson. However, they succeeded in reducing the tax rate on future developments and negotiated a smaller, flat fee for existing projects.

“Everyone sort of decided collectively to save the existing projects and let it go for future projects,” said one lobbyist.

Snider told ProPublica, “My goal was never to run anybody out of business. If we wanted to make it more heavy-handed, we could have. Utah is a conservative state, and I would have had all the support.”

Snider said, like the governor, he favors an any-of-the-above approach to energy generation and doesn’t “want to take down any particular industry or source.” But he believes utility-scale solar farms need to pay to mitigate their impact on the environment. He likened his bill to federal law that requires royalties from oil and gas companies to be used for conservation. He hopes federal lawmakers will use his bill as a model for federal legislation that would apply to solar projects nationwide.

“This industry needs to give back to the environment that they claim very heavily they are going to protect,” he said. “I do believe there’s a tinge of hypocrisy to this whole movement. You can’t say you’re good for the environment and not offset your impacts.”

One of the more emotional debates over solar is set to return next year, after a bill that would end tax incentives for solar development on agricultural land failed to get a vote in the final minutes of this year’s session. Sponsored by Rep. Colin Jack, R-St. George, the bill has been fast-tracked in the next session, which begins in January.

Jack said he was driven to act by ranchers who were concerned that solar companies were outbidding them for land they had been leasing to graze cows. Solar companies can pay substantially higher rates than ranchers can afford. His bill initially included a slew of land use restrictions—such as mandated distances between projects and residential property or creeks, minimum lot sizes, and 4-mile “green zones” between projects—that solar lobbyists said would have strangled their industry. After negotiating with solar developers, Jack eliminated the land use restrictions while preserving provisions to prohibit tax incentives for solar farms on private agricultural land and to create standards for decommissioning projects.

Many in rural Utah recoil at rows of black panels disrupting the landscape and fear solar farms will displace the ranching and farming way of life. Indeed, some wondered whether Cox, who grew up on a farm in central Utah, would have been as critical of Trump scuttling a 62,300-acre solar farm in his own state as he was of the Nevada project’s cancellation.

Peter Greathouse, a rancher in western Utah’s Millard County, said he is worried about solar farms taking up grazing land in his county. “Twelve and a half percent is privately owned, and a lot of that is not farmable. So if you bring in these solar places that start to eat up the farmland, it can’t be replaced,” he said.

Utah is losing about 500,000 acres of agricultural land every 10 years, most of it to housing. A report by The Western Way estimated solar farms use 0.1 percent of the United States’ total land mass. That number is expected to grow to 0.46 percent by 2050—a tiny fraction of what is used by agriculture. Of the land managed by the Utah Trust Lands Administration, less than 3,000 of the 2.9 million acres devoted to grazing have been converted to solar farms.

Other ranchers told ProPublica they’ve been able to stay on their land and preserve their way of life by leasing to solar. Landon Kesler’s family, which raises cattle for team roping competitions, has leased land to solar for more than a decade. The revenue has allowed the family to almost double its land holdings, providing more room to ranch, Kesler said.

“I’m going to be quite honest, it’s absurd,” Kesler said of efforts to limit solar on agricultural land. “Solar very directly helped us tie up other property to be used for cattle and ranching. It didn’t run us out; it actually helped our agricultural business thrive.”

Solar lobbyists and executives have been working to bolster the industry’s image with lawmakers ahead of the next legislative session. They’re arguing solar is a good neighbor.

“We don’t use water, we don’t need sidewalks, we don’t create noise, and we don’t create light,” said Amanda Smith, vice president of external affairs for AES, which has one solar project operating in Utah and a second in development. “So we just sort of sit out there and produce energy.”

Solar developers pay private landowners in Utah $17 million a year to lease their land. And, more important, developers argue, solar is critical to powering the data centers the state is working to attract.

“We are eager to be part of a diversified electricity portfolio, and we think we bring a lot of values that will benefit communities, keep rates low and stable, and help keep the lights on,” Rikki Seguin, executive director of Interwest Energy Alliance, a western trade organization that advocates for utility-scale renewable energy projects, told an interim committee of lawmakers this summer.

The message didn’t get a positive reception from some lawmakers on the committee. Rep. Carl Albrecht, R-Richfield, who represents three rural Utah counties and was among solar’s critics last session, said the biggest complaint he hears from constituents is about “that ugly solar facility” in his district.

“Why, Rep. Albrecht, did you allow that solar field to be built? It’s black. It looks like the Dead Sea when you drive by it,” Albrecht said.

This story was originally published by ProPublica.


Utah leaders hinder efforts to develop solar energy supply Read More »

uk-to-“encourage”-apple-and-google-to-put-nudity-blocking-systems-on-phones

UK to “encourage” Apple and Google to put nudity-blocking systems on phones

The push for device-level blocking comes after the UK implemented the Online Safety Act, a law requiring porn platforms and social media firms to verify users’ ages before letting them view adult content. The law can’t fully prevent minors from viewing porn, as many people use VPN services to get around the UK age checks. Government officials may view device-level detection of nudity as a solution to that problem, but such systems would raise concerns about user rights and the accuracy of the nudity detection.

Age-verification battles in multiple countries

Apple and Google both provide optional tools that let parents control what content their children can access. The companies could object to mandates on privacy grounds, as they have in other venues.

When Texas enacted an age-verification law for app stores, Apple and Google said they would comply but warned of risks to user privacy. A lobby group that represents Apple, Google, and other tech firms then sued Texas in an attempt to prevent the law from taking effect, saying it “imposes a broad censorship regime on the entire universe of mobile apps.”

There’s another age-verification battle in Australia, where the government decided to ban social media for users under 16. Companies said they would comply, although Reddit sued Australia on Friday in a bid to overturn the law.

Apple this year also fought a UK demand that it create a backdoor for government security officials to access encrypted data. The Trump administration claimed it convinced the UK to drop its demand, but the UK is reportedly still seeking an Apple backdoor.

In another case, the image-sharing website Imgur blocked access for UK users starting in September while facing an investigation over its age-verification practices.

Apple faced a backlash in 2021 over potential privacy violations when it announced a plan to have iPhones scan photos for child sexual abuse material (CSAM). Apple ultimately dropped the plan.

UK to “encourage” Apple and Google to put nudity-blocking systems on phones Read More »

openai-built-an-ai-coding-agent-and-uses-it-to-improve-the-agent-itself

OpenAI built an AI coding agent and uses it to improve the agent itself


“The vast majority of Codex is built by Codex,” OpenAI told us about its new AI coding agent.

With the popularity of AI coding tools rising among software developers, their adoption has begun to touch every aspect of the development process, including the improvement of the AI coding tools themselves.

In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. “I think the vast majority of Codex is built by Codex, so it’s almost entirely just being used to improve itself,” said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.

Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user’s code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT’s web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.
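To make that parallel-task workflow concrete, here is a minimal sketch of our own (not OpenAI’s) that fans coding tasks out through the open-source Codex CLI from Python. The codex exec invocation reflects our assumption about the CLI’s non-interactive mode based on its public documentation; treat the whole thing as an illustration of the pattern rather than canonical usage.

```python
import concurrent.futures
import subprocess

# Hypothetical task list; in practice these might be issues pulled from a tracker.
TASKS = [
    "Fix the off-by-one error in the pagination helper",
    "Add unit tests for the retry logic",
    "Refactor the config loader to use dataclasses",
]

def run_codex(prompt: str) -> str:
    # Assumption: the open-source Codex CLI exposes a non-interactive
    # `codex exec "<prompt>"` mode; check the repo's docs before relying on it.
    result = subprocess.run(
        ["codex", "exec", prompt],
        capture_output=True,
        text=True,
        timeout=3600,
    )
    return result.stdout

# Run each task concurrently, loosely mirroring how the hosted agent
# executes multiple tasks in parallel sandboxes.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    for task, output in zip(TASKS, pool.map(run_codex, TASKS)):
        print(f"--- {task} ---")
        print(output[:200])
```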

The “Codex” name itself dates back to a 2021 OpenAI model based on GPT-3 that powered GitHub Copilot’s tab-completion feature. Embiricos said the name is rumored among staff to be short for “code execution.” OpenAI wanted to connect the new agent to that earlier model, which was crafted in part by researchers who have since left the company.

“For many people, that model powering GitHub Copilot was the first ‘wow’ moment for AI,” Embiricos said. “It showed people the potential of what it can mean when AI is able to understand your context and what you’re trying to do and accelerate you in doing that.”


The interface for OpenAI’s Codex in ChatGPT. Credit: OpenAI

It’s no secret that the current command-line version of Codex bears some resemblance to Claude Code, Anthropic’s agentic coding tool that launched in February 2025. When asked whether Claude Code influenced Codex’s design, Embiricos parried the question but acknowledged the competitive dynamic. “It’s a fun market to work in because there’s lots of great ideas being thrown around,” he said. He noted that OpenAI had been building web-based Codex features internally before shipping the CLI version, which arrived after Anthropic’s tool.

OpenAI’s customers apparently love the command-line version, though. Embiricos said Codex usage among external developers jumped 20-fold after OpenAI shipped the interactive CLI extension alongside GPT-5 in August 2025. On September 15, OpenAI released GPT-5-Codex, a specialized version of GPT-5 optimized for agentic coding, which further accelerated adoption.

It hasn’t just been the outside world that has embraced the tool. Embiricos said the vast majority of OpenAI’s engineers now use Codex regularly. The company uses the same open-source version of the CLI that external developers can freely download, suggest additions to, and modify themselves. “I really love this about our team,” Embiricos said. “The version of Codex that we use is literally the open source repo. We don’t have a different repo that features go in.”

The recursive nature of Codex development extends beyond simple code generation. Embiricos described scenarios where Codex monitors its own training runs and processes user feedback to “decide” what to build next. “We have places where we’ll ask Codex to look at the feedback and then decide what to do,” he said. “Codex is writing a lot of the research harness for its own training runs, and we’re experimenting with having Codex monitoring its own training runs.” OpenAI employees can also submit a ticket to Codex through project management tools like Linear, assigning it tasks the same way they would assign work to a human colleague.
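None of that internal tooling is public, but the shape of a “look at the feedback and decide what to do” loop is easy to sketch. Everything below is hypothetical (the feedback file, the triage prompt, the ask_agent helper); the point is only that the agent proposes and a human still disposes.

```python
import json
import subprocess

def ask_agent(prompt: str) -> str:
    # Hypothetical wrapper around an agent call; here we reuse the assumed
    # non-interactive Codex CLI mode, but any agent interface would do.
    out = subprocess.run(["codex", "exec", prompt], capture_output=True, text=True)
    return out.stdout

# Hypothetical feedback store: one JSON object per line, e.g.
# {"id": 42, "text": "CLI hangs when the repo has no git remote"}
with open("feedback.jsonl") as f:
    items = [json.loads(line) for line in f]

for item in items:
    plan = ask_agent(
        "Here is a user feedback item:\n"
        f"{item['text']}\n"
        "Classify it as a bug, feature request, or noise, and propose a "
        "one-paragraph plan. Do not modify any code yet."
    )
    # A human reviews each proposed plan before any task is actually assigned.
    print(f"[{item['id']}] {plan.strip()[:300]}")
```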

This kind of recursive loop, of using tools to build better tools, has deep roots in computing history. Engineers designed the first integrated circuits by hand on vellum and paper in the 1960s, then fabricated physical chips from those drawings. Those chips powered the computers that ran the first electronic design automation (EDA) software, which in turn enabled engineers to design circuits far too complex for any human to draft manually. Modern processors contain billions of transistors arranged in patterns that exist only because software made them possible. OpenAI’s use of Codex to build Codex seems to follow the same pattern: each generation of the tool creates capabilities that feed into the next.

But describing what Codex actually does presents something of a linguistic challenge. At Ars Technica, we try to reduce anthropomorphism as much as possible when discussing AI models, while also describing what these systems do using analogies that make sense to general readers. People can talk to Codex like a human, so it feels natural to use human terms to describe interacting with it, even though it is not a person and simulates human personality through statistical modeling.

The system runs many processes autonomously, addresses feedback, spins off and manages child processes, and produces code that ships in real products. OpenAI employees call it a “teammate” and assign it tasks through the same tools they use for human colleagues. Whether the tasks Codex handles constitute “decisions” or sophisticated conditional logic smuggled through a neural network depends on definitions that computer scientists and philosophers continue to debate. What we can say is that a semi-autonomous feedback loop exists: Codex produces code under human direction, that code becomes part of Codex, and the next version of Codex produces different code as a result.

Building faster with “AI teammates”

According to our interviews, the most dramatic example of Codex’s internal impact came from OpenAI’s development of the Sora Android app. Embiricos said the tool allowed the company to create the app in record time.

“The Sora Android app was shipped by four engineers from scratch,” Embiricos told Ars. “It took 18 days to build, and then we shipped it to the app store in 28 days total.” The engineers already had the iOS app and server-side components to work from, so they focused on building the Android client. They used Codex to help plan the architecture, generate sub-plans for different components, and implement those components.

Despite OpenAI’s claims of success with Codex in house, it’s worth noting that independent research has shown mixed results for AI coding productivity. A METR study published in July found that experienced open source developers were actually 19 percent slower when using AI tools on complex, mature codebases—though the researchers noted AI may perform better on simpler projects.

Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. “You can add Codex, and you can basically assign issues to Codex now,” Bayes told Ars. “Codex is literally a teammate in your workspace.”

This integration means that when someone posts feedback in a Slack channel, they can tag Codex and ask it to fix the issue. The agent will create a pull request, and team members can review and iterate on the changes through the same thread. “It’s basically approximating this kind of coworker and showing up wherever you work,” Bayes said.
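As a rough illustration of that flow, here is a minimal, hypothetical bridge between Slack and a coding agent. The URL-verification handshake follows Slack’s documented Events API behavior, but the assign_to_agent helper and the reply-with-a-pull-request step are our assumptions about how such a bot could be wired up, not a description of OpenAI’s internal integration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def assign_to_agent(text: str) -> str:
    # Hypothetical: hand the tagged message to a coding agent and get back
    # a pull request URL. A real deployment would enqueue a sandboxed task.
    return "https://github.com/example/repo/pull/123"

class SlackBridge(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        if body.get("type") == "url_verification":
            # Slack's Events API sends a one-time challenge to verify the endpoint.
            reply = body["challenge"].encode()
        else:
            event = body.get("event", {})
            pr_url = assign_to_agent(event.get("text", ""))
            reply = json.dumps({"text": f"Opened {pr_url} for review"}).encode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Slack would POST events to this endpoint; the port choice is arbitrary.
    HTTPServer(("", 3000), SlackBridge).serve_forever()
```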

For Bayes, who works on the visual design and interaction patterns for Codex’s interfaces, the tool has enabled him to contribute code directly rather than handing off specifications to engineers. “It kind of gives you more leverage. It enables you to work across the stack and basically be able to do more things,” he said. He noted that designers at OpenAI now prototype features by building them directly, using Codex to handle the implementation details.


The command-line version of OpenAI Codex running in a macOS terminal window. Credit: Benj Edwards

OpenAI’s approach treats Codex as what Bayes called “a junior developer” that the company hopes will graduate into a senior developer over time. “If you were onboarding a junior developer, how would you onboard them? You give them a Slack account, you give them a Linear account,” Bayes said. “It’s not just this tool that you go to in the terminal, but it’s something that comes to you as well and sits within your team.”

Given this teammate approach, will there be anything left for humans to do? When asked, Embiricos drew a distinction between “vibe coding,” where developers accept AI-generated code without close review, and what AI researcher Simon Willison calls “vibe engineering,” where humans stay in the loop. “We see a lot more vibe engineering in our code base,” he said. “You ask Codex to work on that, maybe you even ask for a plan first. Go back and forth, iterate on the plan, and then you’re in the loop with the model and carefully reviewing its code.”

He added that vibe coding still has its place for prototypes and throwaway tools. “I think vibe coding is great,” he said. “Now you have discretion as a human about how much attention you wanna pay to the code.”

Looking ahead

Over the past year, “monolithic” large language models (LLMs) like GPT-4.5 have apparently become something of a dead end for frontier benchmark progress, as AI companies pivot to simulated reasoning models and to agentic systems built from multiple AI models running in parallel. We asked Embiricos whether agents like Codex represent the best path forward for squeezing utility out of existing LLM technology.

He dismissed concerns that AI capabilities have plateaued. “I think we’re very far from plateauing,” he said. “If you look at the velocity on the research team here, we’ve been shipping models almost every week or every other week.” He pointed to recent improvements where GPT-5-Codex reportedly completes tasks 30 percent faster than its predecessor at the same intelligence level. During testing, the company has seen the model work independently for 24 hours on complex tasks.

OpenAI faces competition from multiple directions in the AI coding market. Anthropic’s Claude Code and Google’s Gemini CLI offer similar terminal-based agentic coding experiences. This week, Mistral AI released Devstral 2 alongside a CLI tool called Mistral Vibe. Meanwhile, startups like Cursor have built dedicated IDEs around AI coding, reportedly reaching $300 million in annualized revenue.

Given the well-known issues with confabulation when people attempt to use AI models as factual resources, could it be that coding has become the killer app for LLMs? We wondered if OpenAI has noticed that coding is a clear business use case for today’s AI models, one with less hazard than, say, using language models for writing or as emotional companions.

“We have absolutely noticed that coding is both a place where agents are gonna get good really fast and there’s a lot of economic value,” Embiricos said. “We feel like it’s very mission-aligned to focus on Codex. We get to provide a lot of value to developers. Also, developers build things for other people, so we’re kind of intrinsically scaling through them.”

But will tools like Codex threaten software developer jobs? Bayes acknowledged concerns but said Codex has not reduced headcount at OpenAI, and “there’s always a human in the loop because the human can actually read the code.” Similarly, the two men don’t project a future where Codex runs by itself without some form of human oversight. They feel the tool is an amplifier of human potential rather than a replacement for it.

The practical implications of agents like Codex extend beyond OpenAI’s walls. Embiricos said the company’s long-term vision involves making coding agents useful to people who have no programming experience. “All humanity is not gonna open an IDE or even know what a terminal is,” he said. “We’re building a coding agent right now that’s just for software engineers, but we think of the shape of what we’re building as really something that will be useful to be a more general agent.”

This article was updated on December 12, 2025 at 6:50 pm to mention the METR study.

Photo of Benj Edwards

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

OpenAI built an AI coding agent and uses it to improve the agent itself Read More »

rocket-report:-neutron’s-hungry-hippo-is-deemed-ready,-whither-orbex?

Rocket Report: Neutron’s Hungry Hippo is deemed ready, whither Orbex?


All the news that’s fit to lift

“That is the moment an IPO suddenly came into play.”

Rocket Lab has completed qualification testing of its “Hungry Hippo” payload fairing. Credit: Rocket Lab

Welcome to Edition 8.22 of the Rocket Report! The big news this week concerns the decision by SpaceX founder Elon Musk to take the company public, via IPO, sometime within the next 12 to 18 months. Musk confirmed this after Ars published a story on Wednesday evening. This understandably raises questions about whether a future SpaceX will be committed more to AI data centers in space or to Mars settlement. However, one of the company’s founding employees, Tom Mueller, said this could benefit the company’s Mars plans. Clearly this is something we’ll be following closely.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Avio will build solid rocket motors in Virginia. The governor of Virginia, Glenn Youngkin, announced Wednesday that Avio USA has selected his state to produce solid rocket motors for defense and commercial space propulsion purposes. Avio USA’s investment, which will be up to $500 million, is supported by its Italian parent company, Avio. The factory will encompass 860,000 square feet.

From Italy with love … “Avio looks forward to establishing on U.S. soil a solid rocket motor production facility to contribute in strengthening the US industrial base by providing decades of experience in engineering and manufacturing,” said Avio Chief Executive Officer Giulio Ranzo. Final approvals and the site-selection announcement are expected to be completed early next year.

Orbex funding lags in European Launcher Challenge. One of the five launch companies in ESA’s European Launcher Challenge, Orbex, received far less funding than the other four at the agency’s ministerial conference after the United Kingdom deferred a decision on how to allocate most of its contribution. Unlike typical ESA programs, in which members contribute funds with the expectation of receiving contracts proportional to their investments, the launcher challenge allowed member states to choose among five “preselected challengers,” Space News reports.

Orbex not in prime position … Those companies were chosen in July based on technical and business maturity, and each could receive up to 169 million euros. They were: Isar Aerospace, MaiaSpace, Orbex, PLD Space, and Rocket Factory Augsburg (RFA). Isar, MaiaSpace, PLD Space, and RFA each received the full 169 million euros, while Orbex received just 34.9 million euros. The UK left 112.3 million euros unallocated, a move that puzzled many industry observers. “We are working with multiple partners to ensure this funding delivers our requirements for assured access to space and benefits U.K. taxpayers,” a UK Space Agency spokesperson said. This was not exactly a ringing endorsement of the UK-based launch company. (submitted by EllPeaTea)

The easiest way to keep up with Eric Berger’s and Stephen Clark’s reporting on all things space is to sign up for our newsletter. We’ll collect their stories and deliver them straight to your inbox.

Sign Me Up!

Europe takes a tentative step toward crewed launch. The European Space Agency has published a call for tenders to develop a launch abort system for a future crewed launch capability, European Spaceflight reports. The system would be used in the event of an emergency, either on the launch pad or during the initial stages of flight.

Looking beyond ISS … The new call is part of the European agency’s post-ISS low-Earth orbit strategy. This strategy, the material explains, includes the development of an end-to-end European crewed flight solution. In addition to developing a crewed launch capability, the agency’s post-ISS strategy includes options for low-Earth orbit infrastructure. These options include partnering with a commercial space station or building a European station. (submitted by EllPeaTea)

After Russian launch incident, NASA brings Dragon launches forward. With a key Russian launch pad out of service, NASA is accelerating the launch of two Cargo Dragon spaceships to ensure that astronauts on board the International Space Station have all the supplies they need next year, Ars reports. According to the space agency’s internal schedule, the next Dragon supply mission, CRS-34, is moving forward one month, from June 2026 to May. The next mission after that, CRS-35, has been advanced three months, from November to August.

NET April for pad repairs … A source indicated that the changing schedules are a “direct result” of a launch pad incident on Thanksgiving Day at the Russian spaceport in Baikonur, Kazakhstan. The incident occurred when a Soyuz rocket launched Roscosmos cosmonauts Sergei Kud-Sverchkov and Sergei Mikayev, as well as NASA astronaut Christopher Williams, on an eight-month mission to the International Space Station. The rocket had no difficulties, but a large mobile platform beneath it was not properly secured prior to launch and crashed into the flame trench, taking the Soyuz pad offline. Russia has told NASA it will require at least four months to repair the pad.

Rocket Lab completes Neutron fairing test. Rocket Lab announced Monday that the Neutron rocket’s innovative “Hungry Hippo” captive fairing has successfully completed qualification testing and is en route to Virginia for Neutron’s first launch. Whereas typical rockets’ fairing halves fall away during launch and are disposable or require collection at sea for reuse, Neutron’s fairing halves open to release the rocket’s second stage and mission payload before closing again to return Neutron to Earth as a single reusable vehicle.

Gobbling marbles … To qualify the Hungry Hippo fairing for Neutron’s first launch, Rocket Lab completed an intensive qualification and acceptance testing campaign, featuring full-scale tests as well as a series of sub-component tests, that validated the structure’s expected performance under the intense aerodynamic pressures of launch and re-entry. “Building, qualifying, and shipping Hungry Hippo is a fantastic marker of progress toward Neutron’s first launch, and I’m proud of the team for their attention to detail and pulling off this significant milestone,” said Shaun D’Mello, the company’s vice president overseeing Neutron.

Terran R flight tanks assembled. Relativity Space has gone largely silent since being taken over by former Google chief executive Eric Schmidt, but the company still provides monthly updates online. On Tuesday the company published its November 2025 update and revealed that progress is being made on flight hardware for the debut launch of the large Terran R rocket. Relativity has not announced a new launch target yet.

More work to be done … “In November, the team completed all circumferential friction stir welds for the first stage tank for first flight,” the company said. “Measuring 163 feet (49.7 meters) in length, the tank is composed of eight barrel sections and three domes, joined by ten circumferential welds. The tank will now move into integration. With both the first and second stage tanks finished, focus has shifted to the interstage.”

Veteran Falcon 9 booster extends record. SpaceX achieved a couple of notable milestones with its Falcon 9 rocket launch from NASA’s Kennedy Space Center on Monday, December 8, Spaceflight Now reports. The mission, dubbed Starlink 6-92, used the company’s most-flown Falcon booster, tail number B1067. On its 32nd flight, the booster delivered SpaceX’s 3,000th Starlink satellite of the year to low-Earth orbit.

How is your payload fairing? … The use of B1067 on this mission brings SpaceX one step closer to its current goal of certifying its Falcon boosters for up to 40 missions apiece. The ultimate number of missions a booster flies will partially depend on the types of missions for which it is used and whether it is needed on an expendable flight. SpaceX is looking to achieve the same level of reuse for a Falcon rocket’s payload fairings, but it typically only provides updates on those during launches of customer missions for the government or other companies.

SpaceX likely to IPO next year to fund ambitions. SpaceX is planning to raise tens of billions of dollars through an initial public offering next year, and this represents a major change in thinking from the world’s leading space company and its founder, Elon Musk. The question is, why? He has not enjoyed the public scrutiny of Tesla, and feared that shareholder desires for financial return were not consistent with his ultimate goal of settling Mars. Ars attempts to answer this question by speaking to a number of people familiar with Musk’s thinking.

The short-term answer is data centers … Abhi Tripathi, a long-time SpaceX employee, believes that once Musk realized Starlink satellites could be architected into a distributed network of data centers, the writing was on the wall. “That is the moment an IPO suddenly came into play after being unlikely for so long. Much of the AI race comes down to amassing and deploying assets that work quicker than your competition. A large war chest resulting from an IPO will greatly help his cause and disadvantage all others.” Foremost among Musk’s goals right now is to “win” the battle for artificial intelligence. Taking SpaceX public and using it to marshal an incredible amount of resources shows he is playing to win.

New Glenn targets a four-launch certification. Blue Origin’s New Glenn rocket will have to complete four successful orbital flights as its pathway to certification under the US Space Force’s National Security Space Launch program, Space News reports. Gen. Philip Garrant, who leads the Space Systems Command, said Blue Origin selected the four-flight benchmark and the government agreed.

And then there were three? … “The government is supporting a four-flight certification for New Glenn,” Garrant said. The rocket has logged two successful missions so far, and Garrant said a third launch is expected “earlier in the new year than later.” If upcoming flights stay on track, he added, “I think they’re going to be in a fantastic place to become our third certified provider and compete for missions.” If certified, Blue Origin would join SpaceX and United Launch Alliance as the Space Force’s third heavy-lift launch provider. (submitted by EllPeaTea)

Next three launches

December 13: Long March 6 | Unknown Payload | Taiyuan Satellite Launch Center, China | 01:05 UTC

December 14: Electron | RAISE and Shine | Māhia Peninsula, New Zealand | 03:00 UTC

December 14: Falcon 9 | Starlink 15-12 | Vandenberg Space Force Base, Calif. | 05:20 UTC

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Neutron’s Hungry Hippo is deemed ready, whither Orbex? Read More »

trump-tries-to-block-state-ai-laws-himself-after-congress-decided-not-to

Trump tries to block state AI laws himself after Congress decided not to


Trump claims state laws force AI makers to embed “ideological bias” in models.

President Donald Trump talks to journalists after signing executive orders in the Oval Office at the White House on August 25, 2025 in Washington, DC. Credit: Getty Images | Chip Somodevilla

President Trump issued an executive order yesterday attempting to thwart state AI laws, saying that federal agencies must fight state laws because Congress hasn’t yet implemented a national AI standard. Trump’s executive order tells the Justice Department, Commerce Department, Federal Communications Commission, Federal Trade Commission, and other federal agencies to take a variety of actions.

“My Administration must act with the Congress to ensure that there is a minimally burdensome national standard—not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order… Until such a national standard exists, however, it is imperative that my Administration takes action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation,” Trump’s order said. The order claims that state laws, such as one passed in Colorado, “are increasingly responsible for requiring entities to embed ideological bias within models.”

Congressional Republicans recently decided not to include a Trump-backed plan to block state AI laws in the National Defense Authorization Act (NDAA), although it could be included in other legislation. Sen. Ted Cruz (R-Texas) has also failed to get congressional backing for legislation that would punish states with AI laws.

“After months of failed lobbying and two defeats in Congress, Big Tech has finally received the return on its ample investment in Donald Trump,” US Sen. Ed Markey (D-Mass.) said yesterday. “With this executive order, Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.”

Markey said that “a broad, bipartisan coalition in Congress has rejected the AI moratorium again and again.” Sen. Maria Cantwell (D-Wash.) said the “executive order’s overly broad preemption threatens states with lawsuits and funding cuts for protecting their residents from AI-powered frauds, scams, and deepfakes.”

Trump orders Bondi to sue states

Sen. Brian Schatz (D-Hawaii) said that “preventing states from enacting common-sense regulation that protects people from the very real harms of AI is absurd and dangerous. Congress has a responsibility to get this technology right—and quickly—but states must be allowed to act in the public interest in the meantime. I’ll be working with my colleagues to introduce a full repeal of this order in the coming days.”

The Trump order includes a variation on Cruz’s proposal to prevent states with AI laws from accessing broadband grant funds. The executive order also includes a plan that Trump recently floated to have the federal government file lawsuits against states with AI laws.

Within 30 days of yesterday’s order, US Attorney General Pam Bondi is required to create an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws inconsistent with the policy set forth in section 2 of this order, including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.”

Americans for Responsible Innovation, a group that lobbies for regulation of AI, said the Trump order “relies on a flimsy and overly broad interpretation of the Constitution’s Interstate Commerce Clause cooked up by venture capitalists over the last six months.”

Section 2 of Trump’s order is written vaguely to give the administration leeway to challenge many types of AI laws. “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the section says.

Colorado law irks Trump

The executive order specifically names a Colorado law that requires AI developers to protect consumers against “algorithmic discrimination.” It defines this type of discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis” of age, race, sex, and other protected characteristics.

The Colorado law compels developers of “high-risk systems” to make various disclosures, implement a risk management policy and program, give consumers the right to “correct any incorrect personal data that a high-risk system processed in making a consequential decision,” and let consumers appeal any “adverse consequential decision concerning the consumer arising from the deployment of a high-risk system.”

Trump’s order alleges that the Colorado law “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” Trump’s order also says that “state laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.”

Trump ordered the Commerce Department to evaluate existing state AI laws and identify “onerous” ones that conflict with the policy. “That evaluation of State AI laws shall, at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution,” the order said.

States would be declared ineligible for broadband funds

Under the order, states with AI laws that get flagged by the Trump administration will be deemed ineligible for “non-deployment funds” from the US government’s $42 billion Broadband Equity, Access, and Deployment (BEAD) program. The amount of non-deployment funds will be sizable because it appears that only about half of the $42 billion allocated by Congress will be used by the Trump administration to help states subsidize broadband deployment.

States with AI laws would not be blocked from receiving the deployment subsidies, but would be ineligible for the non-deployment funds that could be used for other broadband-related purposes. Beyond broadband, Trump’s order tells other federal agencies to “assess their discretionary grant programs” and consider withholding funds from states with AI laws.

Other agencies are being ordered to use whatever authority they have to preempt state laws. The order requires Federal Communications Commission Chairman Brendan Carr to “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” It also requires FTC Chairman Andrew Ferguson to issue a policy statement detailing “circumstances under which State laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act’s prohibition on engaging in deceptive acts or practices affecting commerce.”

Finally, Trump’s order requires administration officials to “prepare a legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.” The proposed ban would apply to most types of state AI laws, with exceptions for rules relating to “child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; [and] state government procurement and use of AI.”

It would be up to Congress to decide whether to pass the proposed legislation. But the various other components of the executive order could dissuade states from implementing AI laws even if Congress takes no action.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Trump tries to block state AI laws himself after Congress decided not to Read More »

supergirl-teaser-gives-us-a-likably-imperfect-kara-zor-el

Supergirl teaser gives us a likably imperfect Kara Zor-El

We met Milly Alcock’s Supergirl briefly at the end of Superman, when she showed up to collect her dog Krypto, still a bit hung over from partying on a red-sun planet. She is more jaded than her cousin, having witnessed the destruction of Krypton and the loss of everything and everyone she loved. “He sees the good in everyone and I see the truth,” she says in the teaser.

Kara, aka Supergirl, is turning 23 and declares it will be the best year yet, which is admittedly “not a very high bar to clear.” While she might not be too keen on the prospect, she’s going to be a superhero nonetheless. Per the logline: “When an unexpected and ruthless adversary strikes too close to home, Kara Zor-El, aka Supergirl, reluctantly joins forces with an unlikely companion on an epic, interstellar journey of vengeance and justice.”

In addition to Alcock, the cast includes Matthias Schoenaerts as chief villain Krem of the Yellow Hills; Eve Ridley as Ruthye Marye Knoll, the aforementioned “unlikely companion” who meets and bonds with Supergirl over the course of the film; Ferdinand Kingsley as Ruthye’s father Elias; and David Krumholtz and Emily Beecham as Supergirl’s parents, Zor-El and Alura In-Ze. Jason Momoa also makes an appearance as Lobo, an alien bounty hunter from the planet Czarnia. We catch a brief, blurry glimpse of Momoa’s well-muscled mercenary with the glowing red eyes in the teaser. And of course, our favorite misbehaving pupster Krypto is returning, too; he kicks off the teaser by peeing on a newspaper.

Supergirl hits theaters on June 26, 2026.

Poster art showcasing the character of Supergirl for the movie of the same name.

Credit: Warner Bros.

Supergirl teaser gives us a likably imperfect Kara Zor-El Read More »