Author name: Kelly Newman


Leaked Avengers: Doomsday teaser is now public

Downey Jr. might be playing a new role, but Marvel is really getting the band(s) back together on this one. The film takes place 14 months after the events of this year’s Thunderbolts*. So we’ve got Avengers favorites Thor (Chris Hemsworth), the new Captain America (Anthony Mackie), Bucky Barnes (Sebastian Stan), Ant-Man (Paul Rudd), Falcon (Danny Ramirez), and Loki (Tom Hiddleston). Then there’s the Wakandan contingent: Shuri as the new Black Panther (Letitia Wright), M’Baku (Winston Duke), and Namor (Tenoch Huerta Mejia).

Naturally, the Thunderbolts (aka New Avengers) will appear: John Walker/US Agent (Wyatt Russell), Yelena Belova (Florence Pugh), Bob/Sentry (Lewis Pullman), Red Guardian (David Harbour), and Ghost (Hannah John-Kamen). So will the Fantastic Four: Reed Richards (Pedro Pascal), Sue Storm (Vanessa Kirby), Ben Grimm (Ebon Moss-Bachrach), and Johnny Storm (Joseph Quinn). But we also have the original X-Men: Charles Xavier (Patrick Stewart), Beast (Kelsey Grammer), Magneto (Ian McKellen), Mystique (Rebecca Romijn), Nightcrawler (Alan Cumming), and Cyclops (James Marsden).

For good measure, Marvel threw in Gambit (Channing Tatum) and Xu Shang-Chi (Simu Liu). There will also be plenty of cameos, like the Steve Rogers appearance that was recently revealed. We can expect to see (at least briefly) Peggy Carter, Spider-Man (Tom Holland), Hawkeye (Jeremy Renner), and Doctor Strange (Benedict Cumberbatch), among others.

Avengers: Doomsday hits theaters on December 18, 2026. Avengers: Secret Wars is currently slated for release on December 17, 2027, and will mark the conclusion of the MCU’s Phase Six.



World’s largest shadow library made a 300TB copy of Spotify’s most streamed songs

But Anna’s Archive is clearly working to support AI developers, another noted, pointing out that Anna’s Archive promotes selling “high-speed access” to “enterprise-level” LLM data, including “unreleased collections.” Anyone can donate “tens of thousands” to get such access, the archive suggests on its webpage, and any interested AI researchers can reach out to discuss “how we can work together.”

“AI may not be their original/primary motivation, but they are evidently on board with facilitating AI labs piracy-maxxing,” a third commenter suggested.

Meanwhile, on Reddit, some fretted that Anna’s Archive may have doomed itself by scraping the data. To them, it seemed like the archive was “only making themselves a target” after watching the Internet Archive struggle to survive a legal attack from record labels that ended in a confidential settlement last year.

“I’m furious with AA for sticking this target on their own backs,” a redditor wrote on a post declaring that “this Spotify hacking will just ruin the actual important literary archive.”

As Anna’s Archive fans spiraled, a conspiracy theory was even floated that the archive was only “doing it for the AI bros, who are the ones paying the bills behind the scenes” to keep the archive afloat.

Ars could not immediately reach Anna’s Archive to comment on users’ fears or Spotify’s investigation.

On Reddit, one user took comfort in the fact that the archive is “designed to be resistant to being taken out,” perhaps preventing legal action from ever really dooming the archive.

“The domain and such can be gone, sure, but the core software and its data can be resurfaced again and again,” the user explained.

But not everyone was convinced that Anna’s Archive could survive brazenly torrenting so much Spotify data.

“This is like saying the Titanic is unsinkable,” that user warned, suggesting that Anna’s Archive might lose donations if Spotify-fueled takedowns continually frustrate downloads over time. “Sure, in theory data can certainly resurface again and again, but doing so each time, it will take money and resources, which are finite. How many times are folks willing to do this before they just give up?”

This story was updated to include Spotify’s statement. 



Power outage paralyzes Waymo robotaxis when traffic lights go out

When the traffic lights went out, Waymo’s robotaxis got a little too cautious at intersections. With no red-yellow-green to cue drivers, the rule is to treat the intersection as a four-way stop. Indeed, Waymo’s cars are programmed to do this, but it seems the scale of the outage over the weekend was just too much to handle.

Social media and Reddit began to fill with videos of stationary Waymos at intersections, and the company temporarily suspended service.

Most areas saw power restored by noon yesterday, although Pacific Gas and Electric said it expected some power to remain out until Monday afternoon.

Meanwhile, Waymo’s robotaxis are up and running again. “We are resuming ride-hailing service in the San Francisco Bay Area,” a company spokesperson told Ars. “Yesterday’s power outage was a widespread event that caused gridlock across San Francisco, with non-functioning traffic signals and transit disruptions. While the failure of the utility infrastructure was significant, we are committed to ensuring our technology adjusts to traffic flow during such events.”

“Throughout the outage, we closely coordinated with San Francisco city officials. We are focused on rapidly integrating the lessons learned from this event and are committed to earning and maintaining the trust of the communities we serve every day,” Waymo said.



When clouds flock together


Scientists discover that clumping clouds supercharge storms in surprising ways.

Caroline Muller looks at clouds differently than most people. Where others may see puffy marshmallows, wispy cotton candy or thunderous gray objects storming overhead, Muller sees fluids flowing through the sky. She visualizes how air rises and falls, warms and cools, and spirals and swirls to form clouds and create storms.

But the urgency with which Muller, a climate scientist at the Institute of Science and Technology Austria in Klosterneuburg, considers such atmospheric puzzles has surged in recent years. As our planet swelters with global warming, storms are becoming more intense, sometimes dumping two or even three times more rain than expected. Such was the case in Bahía Blanca, Argentina, in March 2025: Almost half the city’s yearly average rainfall fell in less than 12 hours, causing deadly floods.

Atmospheric scientists have long used computer simulations to track how the dynamics of air and moisture might produce varieties of storms. But existing models hadn’t fully explained the emergence of these fiercer storms. A roughly 200-year-old theory describes how warmer air holds more moisture than cooler air: an extra 7 percent for every degree Celsius of warming. But in models and weather observations, climate scientists have seen rainfall events far exceeding this expected increase. And those storms can lead to severe flooding when heavy rain falls on already saturated soils or follows humid heatwaves.
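That roughly 200-year-old theory is the Clausius–Clapeyron relation. As a rough illustration (mine, not the article's), the ~7 percent figure can be recovered from the standard Magnus approximation for saturation vapor pressure:

```python
import math

def saturation_vapor_pressure(temp_c: float) -> float:
    """Saturation vapor pressure over water, in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Percent increase in the air's moisture-holding capacity per +1 degree C
for temp in (0.0, 10.0, 20.0):
    rise = (saturation_vapor_pressure(temp + 1) /
            saturation_vapor_pressure(temp) - 1) * 100
    print(f"{temp:>4.0f} C: about +{rise:.1f}% per degree of warming")
```

The rate comes out near 6 to 7 percent, slightly higher at cooler temperatures, which is why rainfall increases well beyond that range point to processes other than simple moisture-holding capacity.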

Clouds, and the way that they cluster, could help explain what’s going on.

A growing body of research, set in motion by Muller over a decade ago, is revealing several small-scale processes that climate models had previously overlooked. These processes influence how clouds form, congregate, and persist in ways that may amplify heavy downpours and fuel larger, long-lasting storms. Clouds have an “internal life,” Muller says, “that can strengthen them or may help them stay alive longer.”

Other scientists need more convincing, because the computer simulations researchers use to study clouds reduce planet Earth to its simplest and smoothest form, retaining its essential physics but otherwise barely resembling the real world.

Now, though, a deeper understanding beckons. Higher-resolution global climate models can finally simulate clouds and the destructive storms they form on a planetary scale — giving scientists a more realistic picture. By better understanding clouds, researchers hope to improve their predictions of extreme rainfall, especially in the tropics where some of the most ferocious thunderstorms hit and where future rainfall projections are the most uncertain.

First clues to clumping clouds

All clouds form in moist, rising air. A mountain can propel air upward; so, too, can a cold front. Clouds can also form through a process known as convection: the overturning of air in the atmosphere that starts when sunlight, warm land or balmy water heats air from below. As warm air rises, it cools, condensing the water vapor it carried upwards into raindrops. This condensation process also releases heat, which fuels churning storms.

But clouds remain one of the weakest links in climate models. That’s because the global climate models scientists use to simulate scenarios of future warming are far too coarse to capture the updrafts that give rise to clouds or to describe how they swirl in a storm—let alone to explain the microphysical processes controlling how much rain falls from them to Earth.

To try to resolve this problem, Muller and other like-minded scientists turned to simpler simulations of Earth’s climate that are able to model convection. In these artificial worlds, each the shape of a shallow box typically a few hundred kilometers across and tens of kilometers deep, the researchers tinkered with replica atmospheres to see if they could figure out how clouds behaved under different conditions.

The top frame of this computer simulation shows an atmosphere where the movements of air are somewhat disorganized, leading to clouds popping up in random locations. At the bottom is a simulation of an atmosphere where patterns of convection have become organized, and clouds spontaneously clump together into one large region—forming a storm.

Intriguingly, when researchers ran these models, the clouds spontaneously clumped together, even though the models had none of the features that usually push clouds together—no mountains, no wind, no Earthly spin or seasonal variations in sunlight. “Nobody knew why this was happening,” says Daniel Hernández Deckers, an atmospheric scientist at the National University of Colombia in Bogotá.

In 2012, Muller discovered a first clue: a process known as radiative cooling. The Sun’s heat that bounces off Earth’s surface radiates back into space, and where there are few clouds, more of that radiation escapes—cooling the air. The cool spots set up atmospheric flows that drive air toward cloudier regions—trapping more heat and forming more clouds. A follow-up study in 2018 showed that in these simulations, radiative cooling accelerated the formation of tropical cyclones. “That made us realize that to understand clouds, you have to look at the neighborhood as well—outside clouds,” Muller says.

Once scientists started looking not just outside clouds, but also underneath them and at their edges, they found other small-scale processes that help to explain why clouds flock together. The various processes, described by Muller and colleagues in the Annual Review of Fluid Mechanics, all bring or hold together pockets of warm, moist air so more clouds form in already-cloudy regions. These small-scale processes hadn’t been understood much before because they are often obscured by larger weather patterns.

Hernández Deckers has been studying one of the processes, called entrainment—the turbulent mixing of air at the edges of clouds. Most climate models represent clouds as a steady plume of rising air, but in reality “clouds are like a cauliflower,” he says. “You have a lot of turbulence, and you have these bubbles [of air] inside the clouds.” This mixing at the edges affects how clouds evolve and thunderstorms develop; it can weaken or strengthen storms in various ways, but, like radiative cooling, it encourages more clouds to form as a clump in regions that are already moist.

Such processes are likely to be most important in storms in Earth’s tropical regions, where there’s the most uncertainty about future rainfall. (That’s why Hernández Deckers, Muller, and others tend to focus their studies there.) The tropics lack the cold fronts, jet streams, and spiraling high- and low-pressure systems that dominate air flows at higher latitudes.

Supercharging heavy rains

There are other microscopic processes happening inside clouds that affect extreme rainfall, especially on shorter timescales. Moisture matters: Condensed droplets falling through moist, cloudy air don’t evaporate as much on their descent, so more water falls to the ground. Temperature matters too: When clouds form in warmer atmospheres, they produce less snow and more rain. Since raindrops fall faster than snowflakes, they evaporate less on their descent—producing, once again, more rain.

These factors also help explain why more rain can get squeezed from a cloud than the 7 percent rise per degree of warming predicted by the 200-year-old theory. “Essentially you get an extra kick … in our simulations, it was almost a doubling,” says Martin Singh, a climate scientist at Monash University in Melbourne, Australia.

Cloud clustering adds to this effect by holding warm, moist air together, so more rain droplets fall. One study by Muller and her collaborators found that clumping clouds intensify short-duration rainfall extremes by 30 to 70 percent, largely because raindrops evaporate less inside sodden clouds.

Other research, including a study led by Jiawei Bao, a postdoctoral researcher in Muller’s group, has likewise found that the microphysical processes going on inside clouds have a strong influence over fast, heavy downpours. These sudden downpours are intensifying much faster with climate change than protracted deluges, and often cause flash flooding.

The future of extreme rainfall

Scientists who study the clumping of clouds want to know how that behavior will change as the planet heats up—and what that will mean for incidences of heavy rainfall and flooding.

Some models suggest that clouds (and the convection that gives rise to them) will clump together more with global warming — and produce more rainfall extremes that often far exceed what theory predicts. But other simulations suggest that clouds will congregate less. “There seems to be still possibly a range of answers,” says Allison Wing, a climate scientist at Florida State University in Tallahassee who has compared various models.

Scientists are beginning to try to reconcile some of these inconsistencies using powerful types of computer simulations called global storm-resolving models. These can capture the fine structures of clouds, thunderstorms, and cyclones while also simulating the global climate. They bring a 50-fold leap in realism beyond the global climate models scientists generally use—but demand 30,000 times more computational power.

Using one such model in a paper published in 2024, Bao, Muller, and their collaborators found that clouds in the tropics congregated more as temperatures increased—leading to less frequent storms but ones that were larger, lasted longer, and, over the course of a day, dumped more rain than expected from theory.

But that work relied on just one model and simulated conditions from around one future time point—the year 2070. Scientists need to run longer simulations using more storm-resolving models, Bao says, but very few research teams can afford to run them. They are so computationally intensive that they are typically run at large centralized hubs, and scientists occasionally host “hackathons” to crunch through and share data.

Researchers also need more real-world observations to get at some of the biggest unknowns about clouds. Although a flurry of recent studies using satellite data linked the clustering of clouds to heavier rainfall in the tropics, there are large data gaps in many tropical regions. This weakens climate projections and leaves many countries ill-prepared. In June of 2025, floods and landslides in Venezuela and Colombia swept away buildings and killed at least a dozen people, but scientists don’t know what factors worsened these storms because the data are so paltry. “Nobody really knows, still, what triggered this,” Hernández Deckers says.

New, granular data are on their way. Wing is analyzing rainfall measurements from a German research vessel that traversed the tropical Atlantic Ocean for six weeks in 2024. The ship’s radar mapped clusters of convection associated with the storms it passed through, so the work should help researchers see how clouds organize over vast tracts of the ocean.

And an even more global view is on the horizon. The European Space Agency plans to launch two satellites in 2029 that will measure, among other things, near-surface winds that ruffle Earth’s oceans and skim mountaintops. Perhaps, scientists hope, the data these satellites beam back will finally provide a better grasp of clumping clouds and the heaviest rains that fall from them.

Research and interviews for this article were partly supported through a journalism residency funded by the Institute of Science & Technology Austria (ISTA). ISTA had no input into the story. This story originally appeared on Knowable Magazine.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



These are the flying discs the government wants you to know about


DiskSat’s design offers “a power-to-weight ratio unmatched by traditional aluminum satellites.”

An artist’s illustration of DiskSats deploying from a rocket in low-Earth orbit. Credit: NASA

Four small satellites rode a Rocket Lab Electron launch vehicle into orbit from Virginia early Thursday, beginning a government-funded technology demonstration mission to test the performance of a new spacecraft design.

The satellites were nestled inside a cylindrical dispenser on top of the 59-foot-tall (18-meter) Electron rocket when it lifted off from NASA’s Wallops Flight Facility at 12:03 am EST (05:03 UTC). A little more than an hour later, the rocket’s upper stage released the satellites one at a time at an altitude of about 340 miles (550 kilometers).

The launch was the starting gun for a proof-of-concept mission to test the viability of a new kind of satellite called DiskSats. These satellites were designed by the Aerospace Corporation, a nonprofit federally funded research and development center. The project is jointly financed by NASA and the US Space Force, which paid for DiskSat’s development and launch, respectively.

“DiskSat is a lightweight, compact, flat disc-shaped satellite designed for optimizing future rideshare launches,” the Aerospace Corporation says in a statement.

The DiskSats are 39 inches (1 meter) wide, about twice the diameter of a New York-style pizza, and measure just 1 inch (2.5 centimeters) thick. Made of composite carbon fiber, each satellite carries solar cells, control avionics, reaction wheels, and an electric thruster to change and maintain altitude.

“The launch went perfectly, and the DiskSat dispenser worked exactly as designed,” said Darren Rowen, the project’s chief engineer, in a statement. “We’re pleased to have established contact with all four of the DiskSats, and we’re looking forward to the rest of the demonstration mission.”

An engineer prepares Aerospace Corporation’s DiskSats for launch at NASA’s Wallops Flight Facility in Virginia. Credit: Aerospace Corporation

A new form factor

The Aerospace Corporation has a long history of supporting the US military and NASA since its founding in 1960. A few years ago, engineers at the center developed the DiskSat concept after surveying the government’s emerging needs in spaceflight.

CubeSats have been a ubiquitous part of the satellite industry for nearly a quarter-century. They are based on a cube-shaped design, measuring about 10 centimeters per side, but can be scaled from a single cube “unit” to three, six, 12, or more, depending on mission requirements. The CubeSat standard has become a popular choice for commercial companies, the military, NASA, and universities looking to build small satellites on a tight budget.

By one measure, nearly 3,000 CubeSats have launched since the first one soared into orbit in 2003. After originally being confined to low-Earth orbit, they have now flown to high-altitude orbits, to the Moon, and to Mars.

While CubeSats are now prolific, engineers at the Aerospace Corporation saw an opportunity to improve on the concept. Debra Emmons, Aerospace’s chief technology officer, said the idea originated from Rich Welle, a scientist recently retired from the center’s Experiments Lab, or xLab, division.

“They were asking questions,” Emmons told Ars. “They were looking at CubeSat studies and looking at some alternatives. The typical CubeSat is, in fact, a cube. So, the idea was could you look at some different types of form factors that might be able to generate more power … and offer up benefit for certain mission applications?”

Aerospace’s research team arrived at the DiskSat design. Emmons said the stackable flat-panel format is easier to pack for launch than a CubeSat. The concept is similar to SpaceX’s pioneering approach to launching stackable Starlink Internet satellites, but DiskSats are significantly smaller, lighter, and adaptable to different kinds of missions.

A stack of Starlink satellites prior to launch. Credit: SpaceX

DiskSats have several advantages over CubeSats, according to the Aerospace Corporation. Each of the four DiskSats launched Thursday has a mass of about 35 pounds (16 kilograms), less than that of a typical 12U CubeSat. But a DiskSat has more than 13 times the surface area on a single side, providing valuable real estate for developers to load up the satellite with power-generating solar arrays, sensors, antennas, or other payloads that simply won’t fit on a CubeSat.
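That 13× figure holds up to a quick geometry check, assuming the comparison is the disk's face against the largest face of a standard 12U (roughly 20 cm × 20 cm × 30 cm) CubeSat; the CubeSat dimensions are an assumption here, not stated in the article:

```python
import math

disk_area = math.pi * (1.0 / 2) ** 2  # face of a 1-meter-diameter DiskSat, in m^2
cubesat_face = 0.20 * 0.30            # largest face of a 12U (2x2x3U) CubeSat, in m^2

print(f"DiskSat face area: {disk_area:.2f} m^2")
print(f"Area ratio: {disk_area / cubesat_face:.1f}x")
```

The ratio works out to roughly 13, consistent with the claim.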

SpaceX’s current generation of mass-produced Starlink V2 satellites, by comparison, each has a mass of more than 1,100 pounds, or 500 kilograms.

DiskSat’s design offers “a power-to-weight ratio unmatched by traditional aluminum satellites,” the Aerospace Corporation says. In a research paper published earlier this year, engineers from the Aerospace Corporation claimed DiskSat can generate five to 10 times more power than a CubeSat.

A disruptive solution?

What kinds of missions might DiskSat be useful for? One idea involves placing a large radar antenna—too big to fit on any other low-mass satellite—on the broadside of a DiskSat to collect all-weather surveillance imagery. Similarly-sized antennas on other DiskSats could support high-bandwidth communications.

With this demo mission, the Aerospace Corporation will test the performance of the DiskSat platform in space for the first time. Engineers will initially look at how the satellites function at 340 miles, then use their electric thrusters to gradually step down to lower altitudes, where another aspect of DiskSat’s design will shine.

Flying edge-on, the satellite’s pancake shape will minimize aerodynamic drag as the DiskSats encounter thicker air below 250 miles. Continual pulsing from the satellites’ electric thrusters will allow the DiskSats to maintain altitude as they glide through the uppermost layers of the atmosphere.

“The primary mission is to demonstrate and to understand the performance, functionality, and maneuverability of the DiskSat buses on orbit, particularly in low-Earth orbit, or LEO, and very low-Earth orbit, or VLEO,” said Catherine Venturini, DiskSat’s principal investigator.

“In theory, I think you could operate down to 200 kilometers (124 miles) with electric propulsion,” Emmons said. That is two to three times closer to Earth than most commercial radar imaging satellites. Other satellite operators are also assessing the viability of flying remote sensing missions in VLEO.

Flying closer to the ground delivers higher-resolution imagery, bringing cities, ships, airports, and military bases into sharper view. So it’s easy to see why the Space Force is interested in the DiskSat concept.

DiskSat’s engineers acknowledge there are drawbacks to the format. With such a large surface area, it’s more difficult to manage the temperature extremes of low-Earth orbit than it is with a conventional cube-shaped satellite. While DiskSats carry a lot of oomph to change altitude, their shape makes them somewhat clunky and hard to turn, and engineers say they aren’t well-suited for missions requiring agile pointing.

Rocket Lab’s Electron launcher lifts off to begin the DiskSat demo mission, a program co-funded by NASA and the US military’s Space Test Program. Credit: Austin DeSisto/Rocket Lab

The Aerospace Corporation is a research center, not a commercial satellite manufacturer. Officials at the nonprofit are looking to hand over the DiskSat design to industry through a technology transfer agreement. “The plan is to release or license the technology to partners once it is flight-proven,” the Aerospace Corporation says on its website.

“We think this new technology will be disruptive to the small spacecraft enterprise and ecosystem,” said Eric Breckheimer, DiskSat’s program manager.

DiskSat’s stackable design makes it possible to launch a fleet of high-power, low-mass satellites in one go, according to Emmons.

Following the trend toward bigger CubeSats, the DiskSat format could also grow larger to take advantage of heavier rockets. “There’s a key scalability aspect, and with that in mind, you could bring an entire constellation of DiskSats with you in a single launch,” Breckheimer said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



When Were Things The Best?

People remember their childhood world too fondly.

You adapt to it. You forget the parts that sucked, many of which sucked really badly. It resonates with you and sticks with you. You think it was better.

This is famously true for music, but also in general, including places it makes no sense like ‘most reliable news reporting.’

Matthew Yglesias: Regardless of how old they are, people tend to think that things were better when they were young.

As a result, you’d expect more negativity as the median age goes up and up.

Very obviously these views are not objective.

As a fun and also useful exercise, as part of the affordability sequence, now that we’ve looked at claims of modern impoverishment and asked when things were cheaper, it’s time to ask ourselves: When were various things really at their best?

In some aspects, yes, the past was better, and those aspects are an important part of the picture. But in many others today is the day and people are wrong about this.

I’ll start with the things on the above graph, in order, include some claims from another source, and also include a few important other considerations that help set up the main thesis of the sequence.

Close-Knit Communities

Far in the past. You wouldn’t like how they accomplished it, but they accomplished it.

The top candidates for specific such communities are either:

  1. Hunter-gatherer bands.

  2. Isolated low-tech villages that all share an intense mandatory religion.

  3. Religious minority ethnic enclave communities under severe external threat.

You’re not going to match that without making intensive other sacrifices. Nor should you want to. Those communities were too close-knit for our taste.

In terms of when communities in America were, on average, most close-knit, it’s probably right after we closed the frontier, so around 1900?

Close-knit communities, on a lesser level that is now rare, are valuable and important, but require large continuous investments and opportunity costs. You have to frequently choose engagement with a contained group over alternatives, including when those alternatives are otherwise far superior. You also, to do this today, have to engineer conditions to make the community possible, because you’re not going to be able to form one with whoever happens to live in your neighborhood.

Intentional communities are underrated, as is simply coordinating to live near your friends. I highly recommend such things, but coordination is hard, and they are going to remain rare.

Moral Values

I’m torn between today and about 2012.

There are some virtues and morals that are valuable and have been largely lost. Those who remember the past fondly focus on those aspects.

One could cite, depending on your comparison point, some combination of loyalty to both individuals, groups and institutions, honor and personal codes, hospitality, respect for laws and social norms, social trust, humility, some forms of mercy and forgiveness, stoicism, courage, respect for the sacred and adherence to duty and one’s commitments, especially the commitment to one’s family, having better and higher epistemic and discourse norms, plus religiosity.

There’s varying degrees of truth in those.

But they pale in comparison to the ways that things used to be terrible. People used to have highly exclusionary circles of concern. By the standards of today, until very recently and even under relatively good conditions, approximately everyone was horribly violent and tolerant of violence and bullying of all kinds, cruel to animals, tolerant of all manner of harassment, rape and violations of consent, cruel, intolerant, religiously intolerant often to the point of murder, drunk out of their minds, discriminatory, racist, sexist, homophobic, transphobic, neglectful, unsafe, physically and emotionally abusive to children including outright torture and frequent sexual abuse, and distrustful and dishonest dealing with strangers or in commerce.

It should be very clear which list wins.

This holds up to the introduction of social media, at which point some moral dynamics got out of control in various ways, on various sides of various questions, and many aspects went downhill. There were ways in which things got absolutely nuts. I’m not sure if we’ve recovered enough to have fully turned that around.

Political Division

Within recent memory I’m going to say 1992-1996, which is the trap of putting it right in my teenage years. But I’m right. This period had extraordinarily low political division and partisanship.

On a longer time frame, the correct answer is the Era of Good Feelings, 1815-1825.

The mistake people make is to think that today’s high level of political division is some outlier in American history. It isn’t.

Happy Families

Good question. The survey data says 1957.

I also don’t strongly believe it is wrong, but I don’t trust survey data to give the right answer on this, for multiple reasons.

Certainly a lot more families used to be intact. That does not mean they were happy by our modern understanding of happy. The world of the 1950s was quite stifling. A lot of the way families stayed intact was people pretended everything was fine, including many things we now consider very not fine.

People benefited (in happiness terms) from many forms of lower expectations. That doesn’t mean that if you duplicated their life experiences, your family would be happy.

Fertility rates, measured by having the most children, peaked during the Baby Boom, if we exclude the bad old times when children often failed to survive.

Marriage rates used to be near-universal, whether or not you think that was best.

Believe it or not, today. Yikes. We don’t believe it because of the Revolution of Rising Expectations. We now have standards for the press that the press has never met.

People used to trust the media more. Now we trust it a lot less. While there are downsides to this lack of trust, especially when people turn to even less worthy alternatives, that loss of trust is centrally good. The media was never worthy of trust.

There’s great fondness for the Walter Cronkite era, where supposedly we had high-authority news sources worthy of our high trust. The thing is, that past trust was also misplaced, indeed even more so.

There was little holding the press to account. They had their own agendas and biases, even if it was often ‘the good of the nation’ or ‘the good of the people,’ and they massively misunderstood things and often got things wrong. Reporters talking on the level of saying ‘wet ground causes rain’ is not a new phenomenon. When they did make mistakes or slant their coverage, there was no way to correct them back then.

Whereas now, with social media, we can and do keep the media on its toes.

If your goal is to figure out what is going on and you’re willing to put in the work, today you have the tools to do that, and in the past you basically didn’t, not in any reasonable amount of time.

The fact that other people do that, and hold them to account, makes the press hold itself to higher standards.

There are several forms of ‘the best music.’ It’s kind of today, kind of the 60s-80s.

If you are listening to music on your own, it is at its best today, by far. The entire back catalogue of the world is available at your fingertips, with notably rare exceptions, for a small monthly fee, on demand and fully customizable. If you are an audiophile and want super high quality, you can do that too. There’s no need to spend all that time seeking things out.

If you want to create new music, on your own or with AI? Again, it’s there for you.

In terms of the creation of new music weighted by how much people listen, or in terms of the quality of the most popular music, I’d say probably the 1980s? A strong case can be made for the 60s or 70s too; my guess is that a bunch of that is nostalgia and too highly valuing innovation, but I can see it. What I can’t see is a case for the 1990s or 2000s, or especially the 2010s or 2020s.

This could be old man syndrome talking, and it could be benefits of a lot of selection, but when I sample recent popular music it mostly (with exceptions!) seems highly non-innovative and also not very good. It’s plausible that with sufficiently good search and willingness to take highly deep cuts that today is indeed the best time for new music, but I don’t know how to do that search.

In terms of live music experiences, especially for those with limited budgets, my guess is this was closer to 1971, as so much great stuff was in hindsight so amazingly accessible.

The other case for music being better before is that music was better when it was worse. As in, you had to search for it, select it, pay for it, you had to listen to full albums and listen to them many times, so it meant more, and today’s freedom brings bad habits. I see the argument, but no, and you can totally set rules for yourself if that is what you want. I often have for brief periods, to shake things up.

My wild guess for traditional radio is the 1970s? There was enough high quality music, you had the spirit of radio, and video hadn’t killed the radio star.

You could make an argument for the 1930s-40s, right before television displaced it as the main medium. Certainly radio back then was more important and central.

The real answer is today. We have the best radio today.

We simply don’t call it radio.

Instead, we mostly call it podcasts and music streaming.

If you want pseudorandom music, Pandora and other similar services, or Spotify-style playlists, are together vastly better than traditional radio.

If you want any form of talk radio, or news radio, or other word-based radio programs that don’t depend on being broadcast live, podcasts rule. The quality and quantity and variety on offer are insane, and you can move around on demand.

Also, remember reception problems? Not anymore.

Long before any of us were born, or today, depending on whether you mean ‘most awesome’ or ‘would choose to wear.’

Today’s fashion is not only cheaper, it is easier and more comfortable. In exchange, no, it does not look as cool.

As the question is intended, 2019. Then Covid happened. We still haven’t fully recovered from that.

There were periods with more economic growth or that had better employment conditions. You could point to 1947-1973 riding the postwar wave, or the late 1990s before the dot com bubble burst.

I still say 2019, because levels of wealth and real wages also matter.

In general I choose today. Average quality is way up and has been going up steadily except for a blip when we got way too many superhero movies crowding things out, but we’ve recovered from that.

The counterargument I respect is that the last few years have had no top tier all-time greats, and perhaps this is not an accident. We’ve forced movies to do so many other things well that there’s less room for full creativity and greatness to shine through? Perhaps this is true, and this system gets us fewer true top movies. But also that’s a Poisson distribution, you need to get lucky, and the effective sample size is small.

If I have to pick a particular year I’d go with 1999.

The traditional answer is the 1970s, but this is stupid and disregards the Revolution of Rising Expectations. Movies then were given tons of slack in essentially every direction. Were there some great picks? No doubt, although many of what we think of as all-time greats are remarkably slow to the point where if they weren’t all time greats they’d almost not be watchable. In general, if you think things were better back then, you’re grading back then on a curve, you have an extreme tolerance for not much happening, and also you’re prioritizing some sort of abstract Quality metric over what is actually entertaining.

Today. Stop lying to yourself.

The experience of television used to be terrible, and the shows used to be terrible. So many things very much do not hold up today even if you cut them quite a lot of slack. Old sitcoms are sleep inducing. Old dramas were basic and had little continuity. Acting tended to be quite poor. They don’t look good, either.

The interface for watching was atrocious. You would watch absurd amounts of advertisements. You would plan your day around when things were there, or you’d watch ‘whatever was on TV.’ If you missed episodes they would be gone. DVRs were a godsend despite requiring absurd levels of effort to manage optimally, and still giving up a ton of value.

The interface now is most of everything ever made at your fingertips.

The alternative argument to today being best is that many say that in terms of new shows the prestige TV era of the 2000s-2010s was the golden age, and the new streaming era can’t measure up, especially due to fractured experiences.

I agree that the shared national experiences were cool and we used to have more of them and they were bigger. We still get them, most recently for Severance and perhaps The White Lotus and Pluribus, which isn’t the same, but there really are still a ton of very high quality shows out there. Average quality is way up. Top talent going on television shows is way up, they still let top creators do their thing, and there are shows with top-tier people I haven’t even looked at, which never used to happen.

Today. Stop lying to yourself.

Average quality of athletic performance is way, way up. Modern players do things you wouldn’t believe. Game design has in many ways improved as well, as has the quality of strategic decision making.

Season design is way better. We get more and better playoffs, which can go too far but typically keeps far more games more relevant and exciting and high stakes. College football is insanely better for this over the last few years, I doubted and I was wrong. Baseball purists can complain but so few games used to mean anything. And so on.

Unless people are going to be blowing up your phone, you can start an event modestly late and skip all the ads and even dead time. You can watch sports on your schedule, not someone else’s. If you must be live, you can now get coverage in lots of alternative ways, and also get access to social media conversations in real time, various website information services and so on.

If you’re going to the stadium, the modern experience is an upgrade. It is down to a science. All seats are good seats and the food is usually excellent.

There are three downside cases.

  1. We used to all watch the same sporting events live and together more often. That was cool, but you can still find plenty of people online doing this anyway.

  2. In some cases correct strategic play has made things less fun. Too many NBA three pointers are a problem, as is the realization that MLB starters should be taken out rather early, or the way analytics has homogenized play. The rules have been too slow to adjust. It’s a problem, but on net I think a minor one. It’s good to see games played well.

  3. Free agency has made teams retain less identity, and made it harder to root for the same players over a longer period. This one hurts and I’d love to go back, even though there are good reasons why we can’t.

Mostly I think it’s nostalgia. Modern sports are awesome.

Today, and it’s really, really not close. If you don’t agree, you do not remember. So much of what people ate in the 20th century was barely even food by today’s standards, both in terms of tasting good and its nutritional content.

Food has gotten The Upgrade.

Average quality is way, way up. Diversity is way up, authentic or even non-authentic ethnic cuisines mostly used to be quite rare. Delivery used to be pizza and Chinese. Quality and diversity of available ingredients is way up. You can get it all on a smaller percentage of typical incomes, whether at home or from restaurants, and so many more of us get to use those restaurants more often.

A lot of this is driven by having access to online information and reviews, which allows quality to win out in a way it didn’t before, but even before that we were seeing rapid upgrades across the board.

Some time around 1965, probably? We had a pattern of something approaching lifetime employment where it was easy to keep one’s job for a long period, and count on this. The chance of staying in a job for 10+ or 20+ years has declined a lot. That kind of security made people feel a lot more secure, and it matters a lot.

That doesn’t mean you actually want the same job for 20+ years. There are some jobs where you totally do want that, but a lot of the jobs people used to keep for that long are jobs we wouldn’t want. Despite people’s impressions, the increased job changes have mostly not come from people being fired.

We don’t have the best everything. There are exceptions.

Most centrally, we don’t have the best intact families or close-knit communities, or the best dating ecosystem or best child freedoms. Those are huge deals.

But there are so many other places in which people are simply wrong.

As in:

Matt Walsh (being wrong, lol at ‘empirical,’ 3M views): It’s an empirical fact that basically everything in our day to day lives has gotten worse over the years. The quality of everything — food, clothing, entertainment, air travel, roads, traffic, infrastructure, housing, etc — has declined in observable ways. Even newer inventions — search engines, social media, smart phones — have gone down hill drastically.

This isn’t just a random “old man yells at clouds” complaint. It’s true. It’s happening. The decline can be measured. Everyone sees it. Everyone feels it. Meanwhile political pundits and podcast hosts (speaking of things that are getting worse) focus on anything and everything except these practical real-life problems that actually affect our quality of life.

The Honest Broker: There is an entire movement focused on trying to convince people that everything used to be better and everything is also getting worse and worse

That creates a market for reality-based correctives like the excellent thread below by @ben_golub [on air travel.]

Matthew Yglesias: I think everyone should take seriously:

  1. Content distribution channels have become more competitive and efficient

  2. Negative content tends to perform better

  3. Marinating all day in negativity-inflected content is cooking people’s brains

My quick investigation confirmed that American roads, traffic and that style of infrastructure did peak in the mid-to-late 20th century. We have not been doing a good job maintaining that.

On food, entertainment, clothing and housing he is simply wrong (have you heard of this new thing called ‘luxury’ apartments, or checked average sizes or amenities?), and to even make some of these claims requires both claiming ‘this is cheaper but it’s worse’ and ‘this is worse because it used to be cheaper’ in various places.

bumbadum: People are chimping out at Matt over this but nobody has been able to name one thing that has significantly grown in quality in the past 10-20 years.

Every commodity, even as they have become cheaper and more accessible has decreased in quality.

I am begging somebody to name 1 thing that is all around a better product than its counterpart from the 90s

Megan McArdle: Tomatoes, raspberries, automobiles, televisions, cancer drugs, women’s shoes, insulin monitoring, home security monitoring, clothing for tall women (which functionally didn’t exist until about 2008), telephone service (remember when you had to PAY EXTRA to call another area code?), travel (remember MAPS?), remote work, home video … sorry, ran out of characters before I ran out of hedonic improvements.

Thus:

Today. No explanation required on these.

Don’t knock the vast improvements in computers and televisions.

Saying the quality of phones has gone down, as Matt Walsh does, is absurdity.

That does still leave a few other examples he raised.

Today, or at least 2024 if you think Trump messed some things up.

I say this as someone who used to fly on about half of weekends, for several years.

Air travel has decreased in price, the most important factor, and safety improved. Experiential quality of the flight itself declined a bit, but has risen again as airport offerings improved and getting through security and customs went back from a nightmare to trivial. Net time spent, given less uncertainty, has gone down.

If you are willing to pay the old premium prices, you can buy first class tickets and get an experience as good as or better than the old tickets provided.

Today. We wax nostalgic about old cars. They looked cool. They also were cool.

They were also less powerful, more dangerous, much less fuel efficient, much less reliable, with far fewer features and of course absolutely no smart features. That’s even without considering that we’re starting to get self-driving cars.

This is one area where my preliminary research did back Walsh up. America has done a poor job of maintaining its roads and managing its traffic, and has not ‘paid the upkeep’ on many aspects of what was previously a world-class infrastructure. These things seem to have peaked in the late 20th century.

I agree that this is a rather bad sign, and we should both fix and build the roads and also fix the things that are causing us not to fix and build the roads.

As a result of not keeping up with demand for roads or demand for housing in the right areas, average commute times for those going into the office have been increasing, but post-Covid we have ~29% of working days happening from home, which overwhelms all other factors combined in terms of hours on the road.

I do expect traffic to improve due to self-driving cars, but that will take a while.

Today, or at least the mobile phone and rideshare era. You used to have to call for or hail a taxi. Now in most areas you open your phone and a car appears. In some places it can be a Waymo, which is now doubling yearly. The ability to summon a taxi matters so much more than everything else, and as noted above air travel is improved.

This is way more important than net modest issues with roads and traffic.

Trains have not improved but they are not importantly worse.

Not everything is getting better all the time. Important things are getting worse.

We still need to remember and count our blessings, and not make up stories about how various things are getting worse, when those things are actually getting better.

To sum up, and to add some additional key factors, the following things did indeed peak in the past and quality is getting worse as more than a temporary blip:

  1. Political division.

  2. Average quality of new music, weighted by what people listen to.

  3. Live music and live radio experiences, and other collective national experiences.

  4. Fashion, in terms of awesomeness.

  5. Roads, traffic and general infrastructure.

  6. Some secondary but important moral values.

  7. Dating experiences, ability to avoid going on apps.

  8. Job security, ability to stay in one job for decades if desired.

  9. Marriage rates and intact families, including some definitions of ‘happy’ families.

  10. Fertility rates and felt ability to have and support children as desired.

  11. Childhood freedoms and physical experiences.

  12. Hope for the future, which is centrally motivating this whole series of posts.

The second half of that list is freaking depressing. Yikes. Something’s very wrong.

But what’s wrong isn’t the quality of goods, or many of the things people wax nostalgic about. The first half of this list cannot explain the second half.

Compare that first half to the ways in which quality is up, and in many of these cases things are 10 times better, or 100 times better, or barely used to even exist:

  1. Morality overall, in many rather huge ways.

  2. Access to information, including the news.

  3. Logistics and delivery. Ease of getting the things you want.

  4. Communication. Telephones including mobile phones.

  5. Music as consumed at home via deliberate choice.

  6. Audio experiences. Music streams and playlists. Talk.

  7. Electronics, including computers, televisions, medical devices, security systems.

  8. Television, both new content and old content, and modes of access.

  9. Movies, both new content and old content, and modes of access.

  10. Fashion in terms of comfort, cost and upkeep.

  11. Sports.

  12. Cuisine. Food of all kinds, at home and at restaurants.

  13. Air travel.

  14. Taxis.

  15. Cars.

  16. Medical care, dental care and medical (and nonmedical) drugs.

That only emphasizes the bottom of the first list. Something’s very wrong.

Once again, us doing well does not mean we shouldn’t be doing better.

We see forms of the same trends.

  1. Many things are getting better, but often not as much better as they could be.

  2. Other things are getting worse, both in ways inevitable and avoidable.

  3. This identifies important problems, but the changes in quantity and quality of goods and services do not explain people’s unhappiness, or why many of the most important things are getting worse. More is happening.

Some of the things getting worse reflect changes in technological equilibria or the running out of low-hanging fruit, in ways that are tricky to fix. Many of those are superficial, although a few of them aren’t. But these don’t add up to the big issues.

More is happening.

That more is what I will, in the next post, be calling The Revolution of Rising Expectations, and the Revolution of Rising Requirements.

When Were Things The Best? Read More »

youtube-bans-two-popular-channels-that-created-fake-ai-movie-trailers

YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel.

Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.

YouTube bans two popular channels that created fake AI movie trailers Read More »

does-swearing-make-you-stronger?-science-says-yes.

Does swearing make you stronger? Science says yes.

The result: Only the F-word had any effect on pain outcomes. The team also measured the subjects’ pain threshold, asking them to indicate when the ice water began to feel painful. Those who chanted the F-word waited longer before indicating they felt pain—in other words, the swearing increased their threshold for pain. Chanting “fouch” or “twizpipe” had no effect on either measure.

F@%*-ing go for it

For this latest study, Stephens was interested in investigating potential mechanisms for swearing as a possible form of disinhibition (usually viewed negatively), building on his team’s 2018 and 2022 papers showing that swearing can improve strength in a chair push-up task. “In many situations, people hold themselves back—consciously or unconsciously—from using their full strength,” said Stephens. “By swearing, we throw off social constraint and allow ourselves to push harder in different situations. Swearing is an easily available way to help yourself feel focused, confident and less distracted, and ‘go for it’ a little more.”

In two separate experiments, participants were asked to select a swear word they’d normally use after, say, bumping their head, and a more neutral word to describe an inanimate object like a table. They then performed the aforementioned chair push-up task: sitting on a sturdy chair and placing their hands under their thighs with the fingers pointed inwards. Then they lifted their feet off the floor and straightened their arms to support their body weight for as long as possible, chanting either the swear word or the neutral word every two seconds. Afterward, subjects completed a questionnaire to assess various aspects of their mental state during the task.

The results: Subjects who swore during the task could support their body weight much longer than those who merely repeated the neutral word. This confirms the reported results of similar studies in the past. Furthermore, subjects reported a heightened sense of psychological “flow,” greater self-confidence, and less distraction, all indicators of increased disinhibition.

“These findings help explain why swearing is so commonplace,” said Stephens. “Swearing is literally a calorie-neutral, drug-free, low-cost, readily available tool at our disposal for when we need a boost in performance.” The team next plans to explore the influence of swearing on public speaking and romantic behaviors, since these are situations where most people are more hesitant and less confident in themselves, and hence more likely to hold back.

DOI: American Psychologist, 2025. 10.1037/amp0001650  (About DOIs).

Does swearing make you stronger? Science says yes. Read More »

texas-sues-biggest-tv-makers,-alleging-smart-tvs-spy-on-users-without-consent

Texas sues biggest TV makers, alleging smart TVs spy on users without consent


Automated Content Recognition brings “mass surveillance” to homes, lawsuits say.

Credit: Getty Images | Maskot

Texas Attorney General Ken Paxton sued five large TV manufacturers yesterday, alleging that their smart TVs spy on viewers without consent. Paxton sued Samsung, the longtime TV market share leader, along with LG, Sony, Hisense, and TCL.

“These companies have been unlawfully collecting personal data through Automated Content Recognition (‘ACR’) technology,” Paxton’s office alleged in a press release that contains links to all five lawsuits. “ACR in its simplest terms is an uninvited, invisible digital invader. This software can capture screenshots of a user’s television display every 500 milliseconds, monitor viewing activity in real time, and transmit that information back to the company without the user’s knowledge or consent. The companies then sell that consumer information to target ads across platforms for a profit. This technology puts users’ privacy and sensitive information, such as passwords, bank information, and other personal information at risk.”

The lawsuits allege violations of the Texas Deceptive Trade Practices Act, seeking damages of up to $10,000 for each violation and up to $250,000 for each violation affecting people 65 years or older. Texas also wants restraining orders prohibiting the collection, sharing, and selling of ACR data while the lawsuits are pending.

Texas argues that providing personalized content and targeted advertising are not legitimate purposes for collecting ACR data about consumers. The companies’ “insatiable appetite for consumer data far exceeds what is reasonably necessary,” and the “invasive data harvesting is only needed to increase advertisement revenue, which does not satisfy a consumer-necessity standard,” the lawsuits say.

Paxton is far from the first person to raise privacy concerns about smart TVs. The Center for Digital Democracy advocacy group said in a report last year that in “the world of connected TV, viewer surveillance is now built directly into the television set, making manufacturers central players in data collection, monitoring, and digital marketing.” We recently published a guide on how to break free from smart TV ads and tracking.

“Companies using ACR claim that it is all opt-in data, with permission required to use it,” the Center for Digital Democracy report said. “But the ACR system is bundled into new TVs as part of the initial set-up, and its extensive role in monitoring and sharing viewer actions is not fully explained. As a consequence, most consumers would be unaware of the threats and risks involved in signing up for the service.”

“Mass surveillance system” in US living rooms

Pointing out that Hisense and TCL are based in China, Paxton’s press release said the firms’ “Chinese ties pose serious concerns about consumer data harvesting and are exacerbated by China’s National Security Law, which gives its government the capability to get its hands on US consumer data.”

“Companies, especially those connected to the Chinese Communist Party, have no business illegally recording Americans’ devices inside their own homes,” Paxton said. “This conduct is invasive, deceptive, and unlawful. The fundamental right to privacy will be protected in Texas because owning a television does not mean surrendering your personal information to Big Tech or foreign adversaries.”

The Paxton lawsuits, filed in district courts in several Texas counties, are identical in many respects. The complaints allege that TVs made by the five companies “aren’t just entertainment devices—they’re a mass surveillance system sitting in millions of American living rooms. What consumers were told would enhance their viewing experience actually tracks, analyzes, and sells intimate details about everything they watch.”

Using ACR, each company “secretly monitors what consumers watch across streaming apps, cable, and even connected devices like gaming consoles or Blu-ray players,” and harvests the data to build profiles of consumer behavior and sell the data for profit, the complaints say.

We contacted the five companies sued by Texas today. Sony, LG, and Hisense responded and said they would not comment on a pending legal matter.

Difficult opt-out processes detailed

The complaints allege that the companies fail to obtain meaningful consent from users. The following excerpt is from the Samsung lawsuit but is repeated almost verbatim in the others:

Consumers never agreed to Samsung Watchware. When families buy a television, they don’t expect it to spy on them. They don’t expect their viewing habits packaged and auctioned to advertisers. Yet Samsung deceptively guides consumers to activate ACR and buries any explanation of what that means in dense legal jargon that few will read or understand. The so-called “consent” Samsung obtains is meaningless. Disclosures are hidden, vague, and misleading. The company collects far more data than necessary to make the TV work. Consumers are stripped of real choice and kept in the dark about what’s happening in their own homes on Samsung Smart TVs.

Samsung and other companies force consumers to go through multistep menus to exercise their privacy choices, Texas said. “Consumers must circumnavigate a long, non-intuitive path to exercise their right to opt-out,” the Samsung lawsuit said. This involves selecting menu choices for Settings, Additional Settings, General Privacy, Terms & Privacy, Viewing Information Services, and, finally, “Disable,” the lawsuit said. There are “additional toggles for Interest-Based Ads, Ad Personalization, and Privacy Choices,” the lawsuit said.

The “privacy choices are not meaningful because opt-out rights are scattered across four or more separate menus which requires approximately 15+ clicks,” the lawsuit continued. “To fully opt-out of ACR and related ad tracking on Samsung Smart TVs, consumers must disable at least two settings: (1) Viewing Information Services, and (2) Interest-Based Ads. Each of which appear in different parts of the setting UI. Conversely, Samsung provides consumers with a one-click enrollment option to opt-in during the initial start-up process.”

When consumers first start up a Samsung smart TV, they “must click through a multipage onboarding flow before landing on a consent screen, titled Smart Hub Terms & Conditions,” the lawsuit said. “Upon finally reaching the consent screen, consumers are presented with four notices: Terms & Conditions: Dispute Resolution Agreement, Smart Hub U.S. Policy Notice, Viewing Information Services, and Interest-Based Advertisements Service U.S. Privacy Notice, with only one button prominently displayed: I Agree to all.”

Deceptive trade practices alleged

It would be unreasonable to expect consumers to understand that Samsung TVs come equipped with surveillance capabilities, the lawsuit said. “Most consumers do not know, nor have any reason to suspect, that Samsung Smart TVs are capturing in real-time the audio and visuals displayed on the screen and using the information to profile them for advertisers,” it said.

Paxton alleges that TV companies violated the state’s Deceptive Trade Practices Act with misrepresentations regarding the collection of personal information and failure to disclose the use of ACR technology. The lawsuit against Hisense additionally alleges a failure to disclose that it may provide the Chinese government with consumers’ personal data.

Hisense “fails to disclose to Texas Consumers that under Chinese law, Hisense is required to transfer its collections of Texas consumers’ personal data to the People’s Republic of China when requested by the PRC,” the lawsuit said.

The TCL lawsuit doesn’t include that specific charge. But both the Hisense and TCL complaints say the Chinese Communist Party may use ACR data from the companies’ smart TVs “to influence or compromise public figures in Texas, including judges, elected officials, and law enforcement, and for corporate espionage by surveilling those employed in critical infrastructure, as part of the CCP’s long-term plan to destabilize and undermine American democracy.”

The TVs “are effectively Chinese-sponsored surveillance devices, recording the viewing habits of Texans at every turn without their knowledge or consent,” the lawsuits said.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Texas sues biggest TV makers, alleging smart TVs spy on users without consent


Reporter suggests Half-Life 3 will be a Steam Machine launch title

If you can take your mind way back to the beginning of 2025, you might remember a fresh wave of rumors suggesting that Half-Life 3 was finally reaching the final stages of production, and could be announced and/or released at any moment. Now, though, 2025 seems set to come to a close without any official news of a game fans have been waiting literal decades for.

That doesn’t necessarily mean a Half-Life 3 announcement and/or release isn’t imminent, though. On the contrary, veteran journalist Mike Straw insisted on a recent Insider Gaming podcast that “everybody I’ve talked to are still adamant [Half-Life 3] is a game that will be a launch title with the Steam Machine.”

Straw—who has a long history of reporting gaming rumors from anonymous sources—said this Half-Life 3 information is “not [from] these run-of-the-mill sources that haven’t gotten me information before. … These aren’t like random, one-off people.” And those sources are “still adamant that the game is coming in the spring,” Straw added, noting that he was “specifically told [that] spring 2026 [is the window] for the Steam Machine, for the Frame, for the Controller, [and] for Half-Life 3.”

For real, this time?

Tying the long-awaited Half-Life 3 to a major hardware push that has already been announced for an “early 2026” window certainly sounds plausible, given previous leaks about the game’s advanced state of development. But there are still some reasons to doubt Straw’s “adamant” sources here.

For one, Straw admitted that the previous information he had received on potential Half-Life 3 launch and/or announcement dates was not reliable enough to report in detail. “I had been told a date. I was not going to report that date because they weren’t 100 percent confident in that date,” he said. “That date has since passed.”



Utah leaders hinder efforts to develop solar energy supply


Solar power accounts for two-thirds of the new projects waiting to connect to the state’s power grid.

Utah Gov. Spencer Cox believes his state needs more power—a lot more. By some estimates, Utah will require as much electricity in the next five years as it generated all last century to meet the demands of a growing population as well as chase data centers and AI developers to fuel its economy.

To that end, Cox announced Operation Gigawatt last year, declaring the state would double energy production in the next decade. Although the announcement was short on details, Cox, a Republican, promised his administration would take an “any of the above” approach, which aims to expand all sources of energy production.

Despite that goal, the Utah Legislature’s Republican supermajority, with Cox’s acquiescence, has taken a hard turn against solar power—which has been coming online faster than any other source in Utah and accounts for two-thirds of the new projects waiting to connect to the state’s power grid.

Cox signed a pair of bills passed this year that will make it more difficult and expensive to develop and produce solar energy in Utah by ending solar development tax credits and imposing a hefty new tax on solar generation. A third bill aimed at limiting solar development on farmland narrowly missed the deadline for passage but is expected to return next year.

While Operation Gigawatt emphasizes nuclear and geothermal as Cox’s preferred sources, the legislative broadside, and Cox’s willingness to go along with it, caught many in the solar industry off guard. The three bills, in their original form, could have brought solar development to a halt if not for solar industry lobbyists negotiating a lower tax rate and protecting existing projects as well as those under construction from the brunt of the impact.

“It took every dollar of political capital from all the major solar developers just to get to something tolerable, so that anything they have under development will get built and they can move on to greener pastures,” said one industry insider, indicating that solar developers will likely pursue projects in more politically friendly states. ProPublica spoke with three industry insiders—energy developers and lobbyists—all of whom asked to remain anonymous for fear of antagonizing lawmakers who, next month, will again consider legislation affecting the industry.

The Utah Legislature’s pivot away from solar mirrors President Donald Trump’s more hostile approach to the industry compared with his predecessor’s. Trump has ordered the phaseout of lucrative federal tax incentives for solar and other renewable energy, incentives that were expanded under the Biden administration. The loss of federal incentives is a bigger hit to solar companies than the reductions to Utah’s tax incentives, industry insiders acknowledged. The administration has also canceled large wind and solar projects, which Trump has lamented as “the scam of the century.” He described solar as “farmer killing.”

Yet Cox criticized the Trump administration’s decision to kill a massive solar project in neighboring Nevada. Known as a governor who advocates for a return to more civil political discourse, Cox doesn’t often pick fights. But he didn’t pull punches with the decision to halt the Esmeralda 7 project planned on 62,300 acres of federal land. The central Nevada project was expected to produce 6.2 gigawatts of power—enough to supply nearly eight times the number of households in Las Vegas. (Although the Trump administration canceled the environmental review of the joint project proposed by multiple developers, it has the potential to move forward as individual projects.)

“This is how we lose the AI/energy arms race with China,” Cox wrote on X when news surfaced of the project’s cancellation. “Our country needs an all-of-the-above approach to energy (like Utah).”

But he didn’t take on his own Legislature, at least publicly.

Many of Utah’s Republican legislators have been skeptical of solar for years, criticizing its footprint on the landscape and viewing it as an unreliable energy source, while lamenting the retirement of coal-generated power plants. The economies of several rural counties rely on mining coal. But lawmakers’ skepticism hadn’t coalesced into successful anti-solar legislation—until this year. When Utah lawmakers convened at the start of 2025, they took advantage of the political moment to go after solar.

“This is a sentiment sweeping through red states, and it’s very disconcerting and very disturbing,” said Steve Handy, Utah director of The Western Way, which describes itself as a conservative organization advocating for an all-of-the-above approach to energy development.

The shift in sentiment against solar energy has created a difficult climate for an all-of-the-above approach. Solar projects can be built quickly on Utah’s vast, sun-drenched land, while nuclear is a long game with projects expected to take a decade or more to come online under optimistic scenarios.

Cox generally supports solar, “in the right places,” especially when the captured energy can be stored in large batteries for distribution on cloudy days and after the sun goes down.

Cox said that instead of vetoing the anti-solar bills, he spent his political capital to moderate the legislation’s impact. “I think you’ll see where our fingerprints were,” he told ProPublica. He didn’t detail specific changes for which he advocated but said the bills’ earlier iterations would have “been a lot worse.”

“We will continue to see solar in Utah.”

Cox’s any-of-the-above approach to energy generation draws from a decades-old Republican push similarly titled “all of the above.” The GOP policy’s aim was as much about preserving and expanding reliance on fossil fuels (indeed, the phrase may have been coined by petroleum lobbyists) as it was turning to cleaner energy sources such as solar, wind, and geothermal.

As governor of a coal-producing state, Cox hasn’t shown interest in reducing reliance on such legacy fuels. But as he slowly rolls out Operation Gigawatt, his focus has been on geothermal and nuclear power. Last month, he announced plans for a manufacturing hub for small modular reactors in the northern Utah community of Brigham City, which he hopes will become a nuclear supply chain for Utah and beyond. And on a recent trade mission to New Zealand, he signed an agreement to collaborate with the country on geothermal energy development.

Meanwhile, the bills Cox signed into law already appear to be slowing solar development in Utah. Since May, when the laws took effect, 51 planned solar projects withdrew their applications to connect to the state’s grid—representing more than a quarter of all projects in Utah’s transmission connection queue. Although projects drop out for many reasons, some industry insiders theorize the anti-solar legislation could be at play.

Caught in the political squeeze over power are Utah customers, who are footing higher electricity bills. Earlier this year, the state’s utility, Rocky Mountain Power, asked regulators to approve a 30 percent hike to fund increased fuel and wholesale energy costs, as well as upgrades to the grid. In response to outrage from lawmakers, the utility knocked the request down to 18 percent. Regulators eventually awarded the utility a 4.7 percent increase—a decision the utility promptly appealed to the state Supreme Court.

Juliet Carlisle, a University of Utah political science professor focusing on environmental policy, said the new solar tax could signal to large solar developers that Utah energy policy is “becoming more unpredictable,” prompting them to build elsewhere. This, in turn, could undermine Cox’s efforts to quickly double Utah’s electricity supply.

Operation Gigawatt “relies on rapid deployment across multiple energy sources, including renewables,” she said. “If renewable growth slows—especially utility-scale solar, which is currently the fastest-deploying resource—the state may face challenges meeting demand growth timelines.”

Rep. Kay Christofferson, R-Lehi, had sponsored legislation to end the solar industry’s state tax credits for several legislative sessions, but this was the first time the proposal succeeded.

Christofferson agrees Utah is facing unprecedented demand for power, and he supports Cox’s any-of-the-above approach. But he doesn’t think solar deserves the advantages of tax credits. Despite improving battery technology, he still considers it an intermittent source and thinks overreliance on it would work against Utah’s energy goals.

In testimony on his bill, Christofferson said he believed the tax incentives had served their purpose of getting a new industry off the ground—16 percent of Utah’s power generation now comes from solar, ranking it 16th in the nation for solar capacity.

Christofferson’s bill was the least concerning to the industry, largely because he negotiated a lengthy wind-down of the subsidies. Initially, the bill would have ended the tax credit after Jan. 1, 2032. But after negotiations with the solar industry, he extended the deadline to 2035.

The bill passed the House, but when it reached the Senate floor, Sen. Brady Brammer, R-Pleasant Grove, moved the end of the incentives to 2028. He told ProPublica he believes solar is already established and no longer needs the subsidy. Christofferson tried to defend his compromise but ultimately voted with the legislative majority.

Unlike Christofferson, whose bill wasn’t born of an antipathy for renewable energy, Rep. Casey Snider, R-Paradise, made it clear in public statements and behind closed doors to industry lobbyists that the goal of his bill was to make solar pay.

The bill imposes a tax on all solar production. The proceeds will substantially increase the state’s endangered species fund, which Utah paradoxically uses to fight federal efforts to list threatened animals for protection. Snider cast his bill as pro-environment, arguing the money could also go to habitat protection.

As initially written, the bill would have taxed not only future projects, but also those already producing power and, more worrisome for the industry, projects under construction or in development with financing in place. The margins on such projects are thin, and the unanticipated tax could kill projects already in the works, one solar industry executive testified.

“Companies like ours are being effectively punished for investing in the state,” testified another.

The pushback drew attacks from Snider, who accused solar companies of hypocrisy on the environment.

Industry lobbyists who spoke to ProPublica said Snider wasn’t as willing to negotiate as Christofferson. However, they succeeded in reducing the tax rate on future developments and negotiated a smaller, flat fee for existing projects.

“Everyone sort of decided collectively to save the existing projects and let it go for future projects,” said one lobbyist.

Snider told ProPublica, “My goal was never to run anybody out of business. If we wanted to make it more heavy-handed, we could have. Utah is a conservative state, and I would have had all the support.”

Snider said, like the governor, he favors an any-of-the-above approach to energy generation and doesn’t “want to take down any particular industry or source.” But he believes utility-scale solar farms need to pay to mitigate their impact on the environment. He likened his bill to federal law that requires royalties from oil and gas companies to be used for conservation. He hopes federal lawmakers will use his bill as a model for federal legislation that would apply to solar projects nationwide.

“This industry needs to give back to the environment that they claim very heavily they are going to protect,” he said. “I do believe there’s a tinge of hypocrisy to this whole movement. You can’t say you’re good for the environment and not offset your impacts.”

One of the more emotional debates over solar is set to return next year, after a bill that would end tax incentives for solar development on agricultural land failed to get a vote in the final minutes of this year’s session. Sponsored by Rep. Colin Jack, R-St. George, the bill has been fast-tracked in the next session, which begins in January.

Jack said he was driven to act by ranchers who were concerned that solar companies were outbidding them for land they had been leasing to graze cows. Solar companies pay substantially higher rates than ranchers can. His bill initially had a slew of land use restrictions—such as mandating the distance between projects and residential property and creeks, minimum lot sizes and 4-mile “green zones” between projects—that solar lobbyists said would have strangled their industry. After negotiating with solar developers, Jack eliminated the land use restrictions while preserving provisions to prohibit tax incentives for solar farms on private agricultural land and to create standards for decommissioning projects.

Many in rural Utah recoil at rows of black panels disrupting the landscape and fear solar farms will displace the ranching and farming way of life. Indeed, some wondered whether Cox, who grew up on a farm in central Utah, would have been as critical of Trump scuttling a 62,300-acre solar farm in his own state as he was of the Nevada project’s cancellation.

Peter Greathouse, a rancher in western Utah’s Millard County, said he is worried about solar farms taking up grazing land in his county. “Twelve and a half percent is privately owned, and a lot of that is not farmable. So if you bring in these solar places that start to eat up the farmland, it can’t be replaced,” he said.

Utah is losing about 500,000 acres of agricultural land every 10 years, most of it to housing. A report by The Western Way estimated solar farms use 0.1 percent of the United States’ total land mass. That number is expected to grow to 0.46 percent by 2050—a tiny fraction of what is used by agriculture. Of the land managed by the Utah Trust Lands Administration, less than 3,000 of the 2.9 million acres devoted to grazing have been converted to solar farms.

Other ranchers told ProPublica they’ve been able to stay on their land and preserve their way of life by leasing to solar. Landon Kesler’s family, which raises cattle for team roping competitions, has leased land to solar for more than a decade. The revenue has allowed the family to almost double its land holdings, providing more room to ranch, Kesler said.

“I’m going to be quite honest, it’s absurd,” Kesler said of efforts to limit solar on agricultural land. “Solar very directly helped us tie up other property to be used for cattle and ranching. It didn’t run us out; it actually helped our agricultural business thrive.”

Solar lobbyists and executives have been working to bolster the industry’s image with lawmakers ahead of the next legislative session. They’re arguing solar is a good neighbor.

“We don’t use water, we don’t need sidewalks, we don’t create noise, and we don’t create light,” said Amanda Smith, vice president of external affairs for AES, which has one solar project operating in Utah and a second in development. “So we just sort of sit out there and produce energy.”

The solar industry pays private landowners in Utah $17 million a year to lease their land. And, more important, solar developers argue, it’s critical to powering the data centers the state is working to attract.

“We are eager to be part of a diversified electricity portfolio, and we think we bring a lot of values that will benefit communities, keep rates low and stable, and help keep the lights on,” Rikki Seguin, executive director of Interwest Energy Alliance, a western trade organization that advocates for utility-scale renewable energy projects, told an interim committee of lawmakers this summer.

The message didn’t get a positive reception from some lawmakers on the committee. Rep. Carl Albrecht, R-Richfield, who represents three rural Utah counties and was among solar’s critics last session, said the biggest complaint he hears from constituents is about “that ugly solar facility” in his district.

“Why, Rep. Albrecht, did you allow that solar field to be built? It’s black. It looks like the Dead Sea when you drive by it,” Albrecht said.

This story was originally published by ProPublica.




UK to “encourage” Apple and Google to put nudity-blocking systems on phones

The push for device-level blocking comes after the UK implemented the Online Safety Act, a law requiring porn platforms and social media firms to verify users’ ages before letting them view adult content. The law can’t fully prevent minors from viewing porn, as many people use VPN services to get around the UK age checks. Government officials may view device-level detection of nudity as a solution to that problem, but such systems would raise concerns about user rights and the accuracy of the nudity detection.

Age-verification battles in multiple countries

Apple and Google both provide optional tools that let parents control what content their children can access. The companies could object to mandates on privacy grounds, as they have in other venues.

When Texas enacted an age-verification law for app stores, Apple and Google said they would comply but warned of risks to user privacy. A lobby group that represents Apple, Google, and other tech firms then sued Texas in an attempt to prevent the law from taking effect, saying it “imposes a broad censorship regime on the entire universe of mobile apps.”

There’s another age-verification battle in Australia, where the government decided to ban social media for users under 16. Companies said they would comply, although Reddit sued Australia on Friday in a bid to overturn the law.

Apple this year also fought a UK demand that it create a backdoor for government security officials to access encrypted data. The Trump administration claimed it convinced the UK to drop its demand, but the UK is reportedly still seeking an Apple backdoor.

In another case, the image-sharing website Imgur blocked access for UK users starting in September while facing an investigation over its age-verification practices.

Apple faced a backlash in 2021 over potential privacy violations when it announced a plan to have iPhones scan photos for child sexual abuse material (CSAM). Apple ultimately dropped the plan.
