Author name: Kelly Newman

AT&T kills home Internet service in NY over law requiring $15 or $20 plans

AT&T has stopped offering its 5G home Internet service in New York instead of complying with a new state law that requires ISPs to offer $15 or $20 plans to people with low incomes.

The decision was reported yesterday by CNET and confirmed by AT&T in a statement provided to Ars today. “While we are committed to providing reliable and affordable Internet service to customers across the country, New York’s broadband law imposes harmful rate regulations that make it uneconomical for AT&T to invest in and expand our broadband infrastructure in the state,” AT&T said. “As a result, effective January 15, 2025, we will no longer be able to offer AT&T Internet Air, our fixed-wireless Internet service, to New York customers.”

New York started enforcing its Affordable Broadband Act yesterday after a legal battle of nearly four years. Broadband lobby groups convinced a federal judge to block the law in 2021, but a US appeals court reversed the ruling in April 2024, and the Supreme Court decided not to hear the case last month.

The law requires ISPs with over 20,000 customers in New York to offer $15 broadband plans with download speeds of at least 25Mbps, or $20-per-month service with 200Mbps speeds. The plans only have to be offered to households that meet income eligibility requirements, such as qualifying for the National School Lunch Program, Supplemental Nutrition Assistance Program, or Medicaid.

AT&T’s Internet Air was launched in some areas in 2023 and is now available in nearly every US state. The standard price for Internet Air is $60 a month plus taxes and fees, or $47 when bundled with an eligible mobile service. Nationwide, AT&T said it added 135,000 Internet Air customers in the most recent quarter.

Two lunar landers are on the way to the Moon after SpaceX’s double moonshot

Julianna Scheiman, director of NASA science missions for SpaceX, said it made sense to pair the Firefly and ispace missions on the same Falcon 9 rocket.

“When we have two missions that can each go to the Moon on the same launch, that is something that we obviously want to take advantage of,” Scheiman said. “So when we found a solution for the Firefly and ispace missions to fly together on the same Falcon 9, it was a no-brainer to put them together.”

SpaceX stacked the two landers, one on top of the other, inside the Falcon 9’s payload fairing. Firefly’s lander, the larger of the two spacecraft, rode on top of the stack and deployed from the rocket first. The Resilience lander from ispace launched in the lower position, cocooned inside a specially designed canister. Once Firefly’s lander separated from the Falcon 9, the rocket jettisoned the canister, performed a brief engine firing to maneuver into a slightly different orbit, then released ispace’s lander.

This dual launch arrangement resulted in a lower launch price for Firefly and ispace, according to Scheiman.

“At SpaceX, we are really interested in and invested in lowering the cost of launch for everybody,” she said. “So that’s something we’re really proud of.”

The Resilience lunar lander is pictured at ispace’s facility in Japan last year. The company’s small Tenacious rover is visible on the upper left part of the spacecraft. Credit: ispace

The Blue Ghost and Resilience landers will take different paths toward the Moon.

Firefly’s Blue Ghost will spend about 25 days in Earth orbit, then four days in transit to the Moon. After Blue Ghost enters lunar orbit, Firefly’s ground team will verify the readiness of the lander’s propulsion and navigation systems and execute several thruster burns to set up for landing.

Blue Ghost’s final descent to the Moon is tentatively scheduled for March 2. The target landing site is in Mare Crisium, an ancient 350-mile-wide (560-kilometer) impact basin in the northeast part of the near side of the Moon.

After touchdown, Blue Ghost will operate for about 14 days (one entire lunar day). The instruments aboard Firefly’s lander include a subsurface drill, an X-ray imager, and an experimental electrodynamic dust shield to test methods of repelling troublesome lunar dust from accumulating on sensitive spacecraft components.

The Resilience lander from ispace will take four to five months to reach the Moon. It carries several intriguing tech demo experiments, including a water electrolyzer provided by a Japanese company named Takasago Thermal Engineering. This demonstration will test equipment that future lunar missions could use to convert the Moon’s water ice resources into electricity and rocket fuel.

The lander will also deploy a “micro-rover” named Tenacious, developed by an ispace subsidiary in Luxembourg. The Tenacious rover will attempt to scoop up lunar soil and capture high-definition imagery of the Moon.

Ron Garan, CEO of ispace’s US-based subsidiary, told Ars that this mission is “pivotal” for the company.

“We were not fully successful on our first mission,” Garan said in an interview. “It was an amazing accomplishment, even though we didn’t have a soft landing… Although the hardware worked flawlessly, exactly as it was supposed to, we did have some lessons learned in the software department. The fixes to prevent what happened on the first mission from happening on the second mission were fairly straightforward, so that boosts our confidence.”

The ispace subsidiary led by Garan, a former NASA astronaut, is based in Colorado. While the Resilience lander launched Wednesday is not part of NASA’s Commercial Lunar Payload Services (CLPS) program, the company will build an upgraded lander for a future CLPS mission for NASA, led by Draper Laboratory.

“I think the fact that we have two lunar landers on the same rocket for the first time in history is pretty substantial,” Garan said. “I think we all are rooting for each other.”

Investors need to see more successes with commercial lunar landers to fully realize the market’s potential, Garan said.

“That market, right now, is very nascent. It’s very, very immature. And one of the reasons for that is that it’s very difficult for companies that are contemplating making investments on equipment, experiments, etc., to put on the lunar surface and lunar orbit,” Garan said. “It’s very difficult to make those investments, especially if they’re long-term investments, because there really hasn’t been a proof of concept yet.”

“So every time we have a success, that makes it more likely that these companies that will serve as the foundation of a commercial lunar market movement will be able to make those investments,” Garan said. “Conversely, every time we have a failure, the opposite happens.”

Demystifying data fabrics – bridging the gap between data sources and workloads

The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas. 

At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The principle is to create a unified, synchronized layer between disparate sources of data and the consumers that need access to it—your applications, workloads, and, increasingly, your AI algorithms or learning engines. 

There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer, plugging into different data sources and adding advanced capabilities, such as enabling applications, workloads, and models to access those sources while keeping them synchronized. 

So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:

  • BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
  • NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
  • Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools. 
  • MongoDB (and other structured data solution providers) consider data fabric principles in the context of data management infrastructure.

How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.

If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
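That CRUD framing can be sketched in code. Below is a minimal Python sketch, not any vendor’s actual product, of a fabric facade that routes create/read/update/delete requests to whichever backing source owns a dataset; all class and dataset names here are hypothetical.

```python
# Minimal sketch of a CRUD-style "data as a service" facade.
# InMemorySource stands in for a real database, storage system,
# or API; in practice each registered source would be an adapter
# over a different backend.

class InMemorySource:
    def __init__(self):
        self._rows = {}

    def create(self, key, value):
        self._rows[key] = value

    def read(self, key):
        return self._rows.get(key)

    def update(self, key, value):
        if key not in self._rows:
            raise KeyError(key)
        self._rows[key] = value

    def delete(self, key):
        self._rows.pop(key, None)


class DataFabric:
    """Unified layer: callers ask for data by dataset name, not by source."""

    def __init__(self):
        self._sources = {}

    def register(self, dataset, source):
        self._sources[dataset] = source

    def create(self, dataset, key, value):
        self._sources[dataset].create(key, value)

    def read(self, dataset, key):
        return self._sources[dataset].read(key)


fabric = DataFabric()
fabric.register("customers", InMemorySource())  # could be a CRM API
fabric.register("orders", InMemorySource())     # could be a warehouse table

fabric.create("customers", "c1", {"name": "Acme"})
print(fabric.read("customers", "c1"))  # prints {'name': 'Acme'}
```

The point of the pattern is that callers name the dataset, not the source, so a backing store can be swapped or synchronized behind the facade without touching the workloads.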

I am also reminded of the origins of network acceleration, which would use caching to speed up data transfers by holding versions of data locally rather than repeatedly accessing the source. Akamai built its business on how to transfer unstructured content like music and films efficiently and over long distances. 

That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.

If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:

  1. Network level: To integrate data across multi-cloud, on-premises, and edge environments.
  2. Infrastructure level: If your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
  3. Application level: To pull together disparate datasets for specific applications or platforms.

For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and improving application rationalization.

In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.

While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.

Up close and personal with the stag beetle in A Real Bug’s Life S2


It’s just one of the many fascinating insect species featured in the second season of this NatGeo docuseries.

A female giant stag beetle Credit: National Geographic/Darlyne A. Murawski

A plucky male American stag beetle thinks he’s found a mate on a rotting old tree stump—and then realizes there’s another male eager to make the same conquest. The two beetles face off in battle, until the first manages to get enough leverage to toss his romantic rival off the stump in a deft display of insect jujitsu. It’s the first time this mating behavior has been captured on film, and the stag beetle is just one of the many fascinating insects featured in the second season of A Real Bug’s Life, a National Geographic docuseries narrated by Awkwafina.

The genesis for the docuseries lies in a rumored sequel to Pixar’s 1998 animated film A Bug’s Life, which celebrated its 25th anniversary two years ago. That inspired producer Bill Markham, among others, to pitch a documentary series on a real bug’s life to National Geographic. “It was the quickest commission ever,” Markham told Ars last year. “It was such a good idea, to film bugs in an entertaining family way with Pixar sensibilities.” And thanks to the advent of new technologies—photogrammetry, probe and microscope lenses, racing drones, ultra-high-speed cameras—plus a handful of skilled “bug wranglers,” the team was able to capture the bug’s-eye view of the world beautifully.

As with the Pixar film, the bugs (and adjacent creatures) are the main characters here, from cockroaches, monarch butterflies, and praying mantises to bees, spiders, and even hermit crabs. The 10 episodes, across two seasons, tell their stories as they struggle to survive in their respective habitats, capturing entire ecosystems in the process: city streets, a farm, the rainforest, a Texas backyard, and the African savannah, for example. Highlights from S1 included the first footage of cockroach egg casings hatching; wrangling army ants on location in a Costa Rica rainforest; and the harrowing adventures of a tiny jumping spider navigating the mean streets of New York City.

Looking for love

A luna moth perched on a twig. Credit: National Geographic/Nathan Small

S2 takes viewers to Malaysia’s tropical beaches, the wetlands of Derbyshire in England, and the forests of Tennessee’s Smoky Mountains. Among the footage highlights: Malaysian tiger beetles, who can run so fast they temporarily are unable to see; a young female hermit crab’s hunt for a bigger shell; and tiny peacock spiders hatching Down Under. There is also a special behind-the-scenes look for those viewers keen to learn more about how the episodes were filmed, involving 130 different species across six continents. Per the official synopsis:

A Real Bug’s Life is back for a thrilling second season that’s bolder than ever. Now, thanks to new cutting-edge filming technology, we are able to follow the incredible stories of the tiny heroes living in this hidden world, from the fast-legged tiger beetle escaping the heat of Borneo’s beaches to the magical metamorphosis of a damselfly on a British pond to the Smoky Mountain luna moth whose quest is to grow wings, find love and pass on his genes all in one short night. Join our witty guide, Awkwafina, on new bug journeys full of more mind-blowing behaviors and larger-than-life characters.

Entomologist Michael Carr, an environmental compliance officer for Santa Fe County in New Mexico, served as a field consultant for the “Love in the Forest” episode, which focuses on the hunt for mates by a luna moth, a firefly, and an American stag beetle. The latter species is Carr’s specialty, ever since he worked at the Smithsonian’s Museum of Natural History and realized the beetles flourished near where he grew up in Virginia. Since stag beetles are something of a niche species, NatGeo naturally tapped Carr as its field expert to help them find and film the insects in the Smoky Mountains. To do so, Carr set up a mercury vapor lamp on a tripod—”old style warehouse lights that take a little time to charge up,” which just happen to emit frequencies of light that attract different insect species.

Behind the scenes

Beetle expert Michael Carr and shooting researcher Katherine Hannaford film a stag beetle at night. Credit: National Geographic/Tom Oldridge

Stag beetles are saproxylic insects, according to Carr, so they seek out decaying wood and fungal communities. Males can fly as high as 30 feet (9 meters) to reach tree canopies, while the females can dig down between 1 and 3 meters to lay their eggs in wood. Much of the stag beetle’s lifecycle is spent underground as a white grub molting into larger and larger forms before emerging after two to three years during the summer. Once their exoskeletons harden, they fly off to find mates and reproduce as quickly as possible. And if another male happens to get in their way, they’re quite prepared to do battle to win at love.

Stag beetles might be his specialty, but Carr found the fireflies also featured in that episode to be a particular highlight. “I grew up in rural Virginia,” Carr told Ars. “There was always fireflies, but I’d never seen anything like that until I was there on site. I did not realize, even though I’d grown up in the woods surrounded by fireflies, that, ‘Oh, the ones that are twinkling at the top, that’s one species. The ones in the middle that are doing a soft glow, that’s a different species.'”

And Carr was as surprised and fascinated as any newbie to learn about the “femme fatale” firefly: a species in which the female mimics the blinking patterns of other species of firefly, luring unsuspecting males to their deaths. The footage captured by the NatGeo crew includes a hair-raising segment where this femme fatale opts not to wait for her prey to come to her. A tasty male firefly has been caught in a spider’s web, and our daring, hungry lady flies right into the web to steal the prey:

A femme fatale firefly steals prey from a rival spider’s web.

Many people have a natural aversion to insects; Carr hopes that inventive docuseries like A Real Bug’s Life can help counter those negative perceptions by featuring some lesser-loved insects in anthropomorphized narratives—like the cockroaches and fire ants featured in S1. “[The series] did an amazing job of showing how something at that scale lives its life, and how that’s almost got a parallel to how we can live our life,” he said. “When you can get your mindset down to such a small scale and not just see them as moving dots on the ground and you see their eyes and you see how they move and how they behave and how they interact with each other, you get a little bit more appreciation for ants as a living organism.”

“By showcasing some of the bigger interesting insects like the femme fatale firefly or the big chivalrous stag beetle fighting over each other, or the dung beetle getting stomped by an elephant—those are some pretty amazing just examples of the biodiversity and breadth of insect life,” said Carr. “People don’t need to love insects. If they can, just, have some new modicum of respect, that’s good enough to change perspectives.”

The second season of A Real Bug’s Life premieres on January 15, 2025, on Disney+.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Zvi’s 2024 In Movies

Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the results at the end of the year, and look for lessons, patterns and highlights.

  1. The Rating Scale.

  2. The Numbers.

  3. Very Briefly on the Top Picks and Whether You Should See Them.

  4. Movies Have Decreasing Marginal Returns in Practice.

  5. Theaters are Awesome.

  6. I Hate Spoilers With the Fire of a Thousand Suns.

  7. Scott Sumner Picks Great American Movies Then Dislikes Them.

  8. I Knew Before the Cards Were Even Turned Over.

  9. Other Notes to Self to Remember.

  10. Strong Opinions, Strongly Held: I Didn’t Like It.

  11. Strong Opinions, Strongly Held: I Did Like It.

  12. Megalopolis.

  13. The Brutalist.

  14. The Death of Award Shows.

  15. On to 2025.

Letterboxd ratings go from 0.5-5. Here is how I interpret the rating scale.

You can find all my ratings and reviews on Letterboxd. I do revise from time to time. I encourage you to follow me there.

5: All-Time Great. I plan to happily rewatch this multiple times. If you are an adult and haven’t seen this, we need to fix that, potentially together, right away, no excuses.

4.5: Excellent. Would happily rewatch. Most people who watch movies frequently should see this movie without asking questions.

4: Great. Very glad I saw it. Would not mind a rewatch. If the concept here appeals to you, then you should definitely see it.

3.5: Very Good. Glad I saw it once. This added value to my life.

3: Good. It was fine, happy I saw it I guess, but missing it would also have been fine.

2.5: Okay. It was watchable, but actually watching it was a small mistake.

2: Bad. Disappointing. I immediately regret this decision. Kind of a waste.

1.5: Very Bad. If you caused this to exist, you should feel bad. But something’s here.

1: Atrocious. Total failure. Morbid curiosity is the only reason to finish this.

0.5: Crime Against Cinema. You didn’t even try to do the not-even-trying thing.

The key thresholds are: Happy I saw it equals 3+, and Rewatchable equals 4+.

Here is the overall slope of my ratings, across all films so far, which slants to the right because of rewatches, and is overall a standard bell curve after selection:

So here’s everything I watched in 2024 (plus the first week of 2025) that Letterboxd classified as released in 2024.

The rankings are in order, including within a tier.

The correlation of my ratings with Metacritic is 0.54, with Letterboxd it is 0.53, and the two correlate with each other at 0.9.

See The Fall Guy if you haven’t. It’s not in my top 10, and you could argue it doesn’t have the kind of depth and ambition that people with excellent taste in cinema, like the excellent Film Colossus, are looking for.

I say so what. Because it’s awesome. Thumbs up.

You should almost certainly also see Anora, Megalopolis and Challengers.

Deadpool & Wolverine is the best version of itself, so see it if you’d see that.

A Complete Unknown is worthwhile if you like Bob Dylan. If you don’t, why not?

Dune: Part 2 is about as good as Dune: Part 1, so your decision here should be easy.

Conclave is great drama, as long as you wouldn’t let a little left-wing sermon ruin it.

The Substance is bizarre and unique enough that I’d recommend that too.

After that, you can go from there, and I don’t think anything is a slam dunk.

This is because we have very powerful selection tools to find great movies, and great movies are much better than merely good movies.

That includes both great in absolute terms, and great for your particular preferences.

If you used reasonable algorithms to see only 1 movie a year, you would be able to reliably watch really awesome movies.

If you want to watch a movie or two per week, you’re not going to do as well. The marginal product you’re watching is now very different. And if you’re watching ‘everything’ for some broad definition of new releases, there’s a lot of dreck.

There’s also decreasing returns from movies repeating similar formulas. As you gain taste in and experience with movies, some things that are cool the first time become predictable and generic. You want to mitigate this rather than lean into it, if you can.

There are increasing returns from improved context and watching skills, but they don’t make up for the adverse selection and repetition problems.

Seeing movies in theaters is much better than seeing them at home. As I’ve gotten bigger and better televisions I have expected this effect to mostly go away. It hasn’t. It has shrunk somewhat, but the focusing effects and overall experience matter a lot, and the picture and sound really are still much better.

It seems I should be more selective about watching marginal movies at home versus other activities, but I should be less selective on going to the theater, and I’ve joined AMC A-List to help encourage that, as I have an AMC very close to my apartment.

The correlation with my seeing it in a theater was 0.52, almost as strong as the correlation with others’ movie ratings.

Obviously a lot of this was selection. Perhaps all of it? My impression was that this was the result of me failing to debias the results, as my experiences in a movie theater seem much better than those outside of one.

But when I ran the correlation between [Zvi Review – Letterboxd], my rating minus the Letterboxd rating, versus Theater, I got -0.015, essentially no correlation at all. So it seems like I did adjust properly for this, or others did similar things to what I did, perhaps. It could also be the two AI horror movies accidentally balancing the scales.

I also noticed that old versus new on average did not make a difference, once you included rewatches. I have a total of 106 reviews, 34 of which are movies from 2024. The average of all reviews for movies not released in 2024, which involved a much lower ratio of seeing them in theaters, is 3.11, versus 3.10 for 2024.

The caveat is that this included rewatches where I already knew my opinion, so newly watched older movies at home did substantially worse than this.

I hate spoilers so much that I consider the Metacritic or Letterboxd rating a spoiler.

That’s a problem. I would like to filter with it, and otherwise filter on my preferences, but I don’t actually want to know the relevant information. I want exactly one bit of output, either a Yes or a No.

It occurs to me that I should either find a way to make an LLM do that, or a way to make a program (perhaps plus an LLM) do that.
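Even without an LLM, the one-bit filter is easy to sketch: a script consumes the scores and prints only Yes or No, never the numbers themselves. The blend weights and the indie penalty below are hypothetical placeholders for whatever personal deltas one has learned.

```python
# One-bit verdict: consume the ratings, emit only Yes/No.
# Weights, threshold, and the indie adjustment are all
# hypothetical placeholders, not a recommendation.

def verdict(metacritic, letterboxd, looks_indie=False):
    # Blend both scores onto a common 0-5 scale.
    score = 0.6 * (metacritic / 20) + 0.4 * letterboxd
    if looks_indie:
        score -= 0.5  # personal adjustment: critics tend to overrate indie films
    return "Yes" if score >= 3.0 else "No"

# The caller never sees the scores, only the single bit.
print(verdict(metacritic=80, letterboxd=3.8))                    # prints Yes
print(verdict(metacritic=55, letterboxd=2.9, looks_indie=True))  # prints No
```

An LLM version would do the same job, just with the scores fetched and weighed behind the curtain so they never reach your eyes.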

I’ve had brief attempts at similar things in the past with sports and they didn’t work, but that’s a place where trying and failing to get info is super dangerous. So this might be a better place to start.

I also don’t know what to do about the problem that when you have ‘too much taste’ or knowledge of media, this often constitutes a spoiler – there’s logically many ways a story could go in reality, but in fiction you realize the choice has been made. Or you’re watching a reality TV show, where the editors know the outcome, so based on their decisions you do too. Whoops. Damn it. One of the low-key things I loved about UnReal was that they broadcast their show-within-a-show Everlasting as it happens, so the editors of each episode do not know what comes next. We need more of that.

I saw five movies Scott Sumner also watched. They were five of the eight movies I rated 4 or higher. All platinum hits. Super impressive selection process.

He does a much better job here than Metacritic or Letterboxd.

But on his scale, 3 means a movie is barely worth watching, and his average is ~3.

My interpretation is that Scott really dislikes the traditional Hollywood movie. His reviews, especially of Challengers and Anora, make this clear. Scott is always right, in an important sense, but a lot of what he values is different, including the movie being different from what he expects.

My conclusion is that if Scott Sumner sees a Hollywood movie, I should make an effort to see it, even if he then decides he doesn’t like it, and I should also apply that to the past.

I did previously make a decision to try and follow Sumner’s reviews for other movies. Unfortunately, I started with Chimes at Midnight, and I ended up giving up on it; I couldn’t bring myself to care despite Scott giving it 4.0 and saying ‘Masterpiece on every level, especially personally for Welles.’ I suspect it’s better if one already knows the Shakespeare? I do want to keep trying, but I’ll need to use better judgment.

I considered recording my predictions for films before I went to see them.

I did not do this, because I didn’t want to anchor myself. But when I look back and ask what I was expecting, I notice my predictions were not only good, but scary good.

I’ve learned what I like, and what signals to look for. In particular, I’ve learned how to adjust the Metacritic and Letterboxd rankings based on the expected delta.

When I walked into The Fall Guy, I was super excited – the moment I saw the poster I instantly thought to myself ‘I’m in.’ I knew Megalopolis was a risk, but I expected to like that too.

The movies that I hated? If I expected to not like them, why did I watch them anyway? In many cases, I wanted to watch a generic movie and relax, and then missed low a bit, even if I was sort of fine with it. In other cases, it was morbid curiosity and perhaps hoping for So Bad It’s Good, which combined with ‘it’s playing four blocks away’ got me to go to Madame Web.

The worry is that the reason this happens is I am indeed anchoring, and liking what I already decided to like. There certainly isn’t zero of this – if you go into a movie super pumped thinking it’s going to be great that helps, and vice versa. I think this is only a small net effect, but I could be wrong.

  1. If the movie stinks, just don’t go. You know if the movie stinks.

  2. Trust your instincts and your gut feelings more than you think you should.

  3. Maybe gut feelings are self-fulfilling prophecies? Doesn’t matter. They still count.

  4. You love fun, meta, self-aware movies of all kinds, trust this instinct.

  5. You do not actually like action movies that play it straight, stop watching them.

  6. If the movie sounds like work or pain, it probably is, act accordingly.

  7. If the movie sounds very indie, the critics will overrate it.

  8. A movie being considered for awards is not a positive signal once you control for the Metacritic and Letterboxd ratings. If anything it is a negative.

  9. Letterboxd ratings adjusted for context are more accurate than Metacritic.

  10. Opinions of individuals very much have Alpha if you have enough context.

What are the places I most strongly disagreed with the critical consensus?

I disliked three movies in green on Metacritic: Gladiator 2, Monkey Man and Juror #2.

I think I might be wrong about Monkey Man, in that I buy that it’s actually doing a good job at the job it set out to do, but simply wasn’t for me, see the note that I need to stop watching (non-exceptional) action movies that play it straight.

I strongly endorse disliking Gladiator 2 on reflection. Denzel Washington was great but the rest of the movie failed to deliver on pretty much every level.

I’m torn on Juror #2. I do appreciate the moral dilemmas it set up. I agree they’re clever and well-executed. I worry this was a case where I have seen so many courtroom dramas, especially Law & Order episodes, that there was too much of a Get On With It impatience – that this was a place where I had too much taste of the wrong kind to enjoy the movie, especially when not at a theater.

The moments this is building towards? Those hit hard. They work.

The rest of the time, though? Bored. So bored, so often. I know what this movie thinks those moments are for. But there’s no need. This should have been closer to 42 minutes.

I do appreciate how this illustrates the process where the system convicts an innocent man. Potentially more than one. And I do appreciate the dilemmas the situation puts everyone in. And what this says about what, ultimately, often gets one caught, and what is justice. There’s something here.

But man, you got to respect my time more than this.

One could also include Civil War. I decided that this was the second clear case (the first was Don’t Look Up) of ‘I don’t want to see this and you can’t make me,’ so I didn’t see it, and I’m happy with at least waiting until after the election to do that.

I actively liked four movies that the critics thought were awful: Subservience, Joker: Folie à Deux, Unfrosted and of course Megalopolis.

For Subservience, and also for Afraid, I get that on a standard cinema level, these are not good films. They pattern match to C-movie horror. But if you actually are paying attention, they do a remarkably good job with the actual AI-related details, and with being logically consistent. I value that highly. So I don’t think either of us is wrong.

There’s a reason Subservience got a remarkably long review from me:

The sixth law of human stupidity says that if anyone says ‘no one would be so stupid as to,’ then you know a lot of people would do so at the first opportunity.

People like to complain about the idiot ball and the idiot plot. Except, no, this is exactly the level of idiot that everyone involved would be, especially the SIM company.

If you want to know how I feel when I look at what is happening in the real world, and what is on track to happen to us? Then watch this movie. You will understand both how I feel, and also exactly how stupid I expect us to be.

No, I do not think that if we find the AI scheming against us then we will even shut down that particular AI. Maybe, if we’re super lucky?

The world they built has a number of obvious contradictions in it, and should be long dead many times over before the movie starts or at least utterly transformed, but in context I am fine with it, because it is in service of the story that needs to be told here.

The alignment failure here actually makes sense, and the capabilities developments at least sort of make sense as well if you accept certain background assumptions that make the world look like it does. And yes, people have made proposals exactly this stupid, that fail in pretty much exactly this way, exactly this predictably.

Also, in case you’re wondering why ‘protect the primary user’ won’t work, in its various forms and details? Now you know, as they say.

And yeah, people are this bad at explaining themselves.

In some sense, the suggested alignment solution here is myopia. If you ensure your AIs don’t do instrumental convergence beyond the next two minutes, maybe you can recover from your mistakes? It also of course causes all the problems, the AI shouldn’t be this stupid in the ways it is stupid, but hey.

Of course, actual LLMs would never end up doing any of this, not in these ways, unless perplexity suggested to them that they were an AI horror movie villain or you otherwise got them into the wrong context.

Also there’s the other movie here, which is about technological unemployment and cultural reactions to it, which is sadly underdeveloped. They could have done so much more with that.

Anyway, I’m sad this wasn’t better – not enough people will see it or pay attention to what it is trying to tell them, and that’s a shame. Still, we have the spiritual sequel to M3GAN, and it works.

Finally (minor spoilers here), it seems important that the people describing the movie have no idea what happened in the movie? As in, if you look at the Metacritic summary… it is simply wrong. Alice’s objective never changes. Alice never ‘wants’ anything for herself, in any sense. If anything, once you understand that, it makes it scarier.

Unfrosted is a dumb Jerry Seinfeld comedy. I get that. I’m not saying the critics are wrong, not exactly? But the jokes are good. I laughed quite a lot, and a lot more than at most comedies or than I laughed when I saw Seinfeld in person and gave him 3.5 GPTs – Unfrosted gets at least 5.0 GPTs. Live a little, everyone.

I needed something to watch with the kids, and this overperformed. There are good jokes and references throughout. This is Seinfeld having his kind of fun and everyone having fun helping him have it. Didn’t blow me away, wasn’t trying to do so. Mission accomplished.

Joker: Folie à Deux was definitely not what anyone expected, and it’s not for everyone, but I stand by my review here, and yes I have it slightly above Wicked:

You have to commit to the bit. The fantasy is the only thing that’s real.

Beware audience capture. You are who you choose to be.

Do not call up that which you cannot put down.

A lot of people disliked the ending. I disagree in the strongest terms. The ending makes both Joker movies work. Without it, they’d both be bad.

With it, I’m actively glad I saw this.

I liked what Film Colossus said about it, that they didn’t like the movie but they really loved what it was trying to do. I both loved what it was trying to do and kinda liked the movie itself, also I brought a good attitude.

For Megalopolis, yes it’s a mess, sure, but it is an amazingly great mess with a lot of the right ideas and messages, even if it’s all jumbled and confused.

If you don’t find a way to appreciate it, that is on you. Perhaps you are letting your sense of taste get in the way. Or perhaps you have terrible taste in ideas and values? Everyone I know of who said they actively liked this is one of my favorite people.

This movie is amazing. It is endlessly inventive and fascinating. Its heart, and its mind, are exactly where they need to be. I loved it.

Don’t get me wrong. The movie is a mess. You could make a better cut of it. There are unforced errors aplenty. I have so, so many notes.

The whole megalopolis design is insufficiently dense and should have been shoved out into the Bronx or maybe Queens.

But none of that matters compared to what you get. I loved it all.

And it should terrify you that we live in a country that doesn’t get that.

Then there’s The Brutalist, which the critics think is Amazingly Great (including Tyler Cowen here). Whereas ultimately I thought it was medium, on the border between 3 and 3.5, and I’m not entirely convinced my life is better because I saw it.

So the thing is… the building is ugly? Everything he builds is ugly?

That’s actually part of why I saw the film – I’d written a few weeks ago about how brutalist/modern architecture appears to be a literal socialist conspiracy to make people suffer, so I was curious to see things from their point of view. We get one answer about ‘why architecture’ and several defenses of ‘beauty’ against commercial concerns, and talk about standing the test of time. And it’s clear he pays attention to detail and cares about the quality of his work – and that technically he’s very good.

But. The. Buildings. Are. All. Ugly. AF.

He defends concrete as both cheap and strong. True enough. But it feels like there’s a commercial versus artistic tension going the other way, and I wish they’d explored that a bit? Alas.

Instead they focus on what the film actually cares about, the Jewish immigrant experience. Which here is far more brutalist than the buildings. It’s interesting to see a clear Oscar-bound film make such a robust defense of Israel, and portray America as a sick, twisted, hostile place for God’s chosen people, even when you have the unbearable weight of massive talent.

Then there’s the ending. I mouthed ‘WTF?’ more than once, and I still have no idea WTF. In theory I get the artistic choice, but really? That plus the epilogue and the way that was shot, and some other detail choices, made me think this was about a real person. But no, it’s just a movie that decided to be 3.5 hours long with an intermission and do slice-of-hard-knock-life things that didn’t have to go anywhere.

Ultimately, I respect a lot of what they’re doing here, and that they tried to do it at all, and yes Pearce and Brody are great (although I don’t think I’d be handing out Best Actor here or anything). But also I feel like I came back from an assignment.

Since I wrote that, I’ve read multiple things and had time to consider WTF, and I understand the decision, but that new understanding of the movie makes me otherwise like the movie less and makes it seem even more like an assignment. Contra Tyler I definitely did feel like this was 3.5 hours long.

I do agree with many of Tyler’s other points (including ‘recommended, for some’!) although the Casablanca angle seems like quite a stretch.

One detail I keep coming back to, that I very much appreciate and haven’t seen anyone else mention, is the scene where he is made to dance, why it happens and how that leads directly to other events. I can also see the ‘less interesting’ point they might have been going for instead, and wonder if they knew what they were doing there.

My new ultimately here is that I have a fundamentally different view of most of the key themes than the movie does, and that made it very difficult for me to enjoy it. When he puts that terrible chair and table in the front of the furniture store, I don’t think ‘oh he’s a genius’ I think ‘oh what a pretentious arse, that’s technically an achievement but in practice it’s ugly and non-functional, no one will want it, it can’t be good for business.’

It’s tough to enjoy watching a (highly brutal in many senses, as Tyler notes!) movie largely about someone being jealous of and wanting the main character’s talent when you agree he’s technically skilled but centrally think his talents suck, and when you so strongly disagree with its vision, judgment and measure of America. Consider that the antagonist is very clearly German. The upside-down Statue of Liberty tells you a lot.

We’ve moved beyond award shows, I think, now that we have Metacritic and Letterboxd, if your goal is to find the best movies.

In terms of the Oscars and award shows, I’ll be rooting for Anora, but wow the awards process is dumb when you actually look at it. Knowing what is nominated, or what won, no longer provides much alpha on movie quality.

Giving the Golden Globe for Best Musical or Comedy to Emilia Pérez (2.8 on Letterboxd, 71 on Metacritic) over Anora (4.1 on Letterboxd, 91 on Metacritic) or Challengers tells you that they cared about something very different from movie quality.

There have been many other such cases as well, but that’s the one that drove it home this year – it’s my own view, plus the view of the public, plus the view of the critics when they actually review the movies, and they all got thrown out the window.

Your goal is not, however, purely to find the best movies.

Robin Hanson: The Brutalist is better than all these other 2024 movies I’ve seen: Anora, Emilia Perez, Wicked, Conclave, Dune 2, Complete Unknown, Piano Lesson, Twisters, Challengers Juror #2, Megalopolis, Civil War. Engaging, well-made, but not satisfying or inspiring.

Tyler Cowen: A simple question, but if this is how it stands why go see all these movies?

Robin Hanson: For the 40 years we’ve been together, my wife & I have had a tradition of seeing most of the Oscar nominated movies every year. Has bonded us, & entertained us.

I like that tradition, and have tried at times a similar version of it. I think this made great sense back in the 1990s, or even 2000s, purely for the selection effects.

Today, you could still say do it to be part of the general conversation, or as tradition. And I’m definitely doing some amount of ‘see what everyone is likely to talk about’ since that is a substantial bonus.

But I think we’d do a lot better if the selection process was simply some aggregate of Metacritic, Letterboxd and (projected followed by actual) box office. You need box office, because you want to avoid niche movies that get high ratings from those that choose to watch them, but would do much less well with a general audience.
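As a rough illustration of what such an aggregate might look like, here is a minimal Python sketch. The weights, the 0–100 normalization, and the box-office cap are all my own assumptions for illustration, not anything proposed in the post:

```python
# Hypothetical sketch of a Metacritic / Letterboxd / box-office aggregate.
# Weights and normalization choices are illustrative assumptions only.

def aggregate_score(metacritic, letterboxd, box_office_millions,
                    w_mc=0.4, w_lb=0.4, w_bo=0.2):
    """Combine Metacritic (0-100), Letterboxd (0-5), and box office
    (millions USD) into a single 0-100 selection score."""
    mc_norm = metacritic                        # already on a 0-100 scale
    lb_norm = letterboxd / 5 * 100              # rescale 0-5 to 0-100
    # Cap box office so blockbusters don't dominate, then rescale to 0-100.
    bo_norm = min(box_office_millions, 500) / 500 * 100
    return w_mc * mc_norm + w_lb * lb_norm + w_bo * bo_norm

# Example with Anora-like critic numbers (91 Metacritic, 4.1 Letterboxd)
# and a modest hypothetical box office.
score = aggregate_score(91, 4.1, 40)
```

The box-office term is what guards against the niche-movie problem described above: a film with stellar ratings from a self-selected audience still needs some evidence of general-audience appeal to score highly.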

I definitely plan on continuing to log and review all the movies I see going forward. If you’re reading this and think I or others should consider following you there, let me know in the comments, you have permission to pitch that, or to pitch as a more general movie critic. You are also welcome to make recommendations, if they are specifically for me based on the information here – no simply saying ‘I thought [X] was super neat.’

Tracking and reviewing everything has been a very useful exercise. You learn a lot by looking back. And I expect that feeding the data to LLMs will allow me to make better movie selections not too long from now. I highly recommend it to others.


Zvi’s 2024 In Movies


A glowing ring of metal fell to Earth, and no one has any idea what it is

It has been more than a week since reports first emerged about a “glowing ring of metal” that fell from the sky and crashed near a remote village in Kenya.

According to the Kenya Space Agency, the object weighed 1,100 pounds (500 kg) and had a diameter of more than 8 feet (2.4 meters) when measured after it landed on December 30. A couple of days later, the space agency confidently reported that the object was a piece of space debris, saying it was a ring that separated from a rocket. “Such objects are usually designed to burn up as they re-enter the Earth’s atmosphere or to fall over unoccupied areas, such as the oceans,” the space agency told The New York Times.

Since those initial reports were published in Western media, a small band of dedicated space trackers have been using open source data to try to identify precisely which space object fell into Kenya. So far, they have not been able to identify the rocket launch to which the large ring can be attributed.

Now, some space trackers believe the object may not have come from space at all.

Did it really come from space?

Space is increasingly crowded, but large chunks of metal from rockets are generally not flying around in Earth orbit undetected and untracked.

“It was suggested that the ring is space debris, but the evidence is marginal,” wrote Jonathan McDowell, an astrophysicist working at the Harvard-Smithsonian Center for Astrophysics. McDowell is highly regarded for his analysis of space objects. “The most likely space-related possibility is the reentry of the SYLDA adapter from the Ariane V184 flight, object 33155. Nevertheless, I am not fully convinced that the ring is space debris at all,” he wrote.

Another prominent space tracker, Marco Langbroek, believes it’s plausible that the ring came from space, so he investigated objects that may have reentered around the time of the object’s discovery in Kenya. In a blog post written Wednesday, he noted that apart from the metal ring, other fragments consistent with space debris—including material that looks like carbon wrap and insulation foil—were found several kilometers away from the ring.



Here’s what we know, and what we don’t, about the awful Palisades wildfire

Let’s start with the meteorology. The Palisades wildfire and other nearby conflagrations were well-predicted days in advance. After a typically arid summer and fall, the Los Angeles area has also had a dry winter so far. December, January, February, and March are usually the wettest months in the region by far. More than 80 percent of Los Angeles’ rain comes during these colder months. But this year, during December, the region received, on average, less than one-tenth of an inch of rainfall. Normal totals are on the order of 2.5 inches in December.

So, the foliage in the area was already very dry, effectively extending the region’s wildfire season. Then, strong Santa Ana winds were predicted for this week due, in part, to the extreme cold observed in the eastern United States and high pressure over the Great Basin region of the country. “Red flag” winds were forecast locally, indicating that winds could combine with dry ground to spread wildfires efficiently. The direct cause of the Palisades fire is not yet known.

Wildfires during the winter months in California are not a normal occurrence, but they are not unprecedented either. Scientists, however, generally agree that a warmer planet is extending wildfire seasons such as those observed in California.

“Climate change, including increased heat, extended drought, and a thirsty atmosphere, has been a key driver in increasing the risk and extent of wildfires in the western United States during the last two decades,” the US National Oceanic and Atmospheric Administration concludes. “Wildfires require the alignment of a number of factors, including temperature, humidity, and the lack of moisture in fuels, such as trees, shrubs, grasses, and forest debris. All these factors have strong direct or indirect ties to climate variability and climate change.”



Why I’m disappointed with the TVs at CES 2025


Won’t someone please think of the viewer?

Op-ed: TVs miss opportunity for real improvement by prioritizing corporate needs.

The TV industry is hitting users over the head with AI and other questionable gimmicks. Credit: Getty

If you asked someone what they wanted from TVs released in 2025, I doubt they’d say “more software and AI.” Yet, if you look at what TV companies have planned for this year, which is being primarily promoted at the CES technology trade show in Las Vegas this week, software and AI are where much of the focus is.

The trend reveals the implications of TV brands increasingly viewing themselves as software rather than hardware companies, with their products being customer data rather than TV sets. This points to an alarming future for smart TVs, where even premium models sought after for top-end image quality and hardware capabilities are stuffed with unwanted gimmicks.

LG’s remote regression

LG has long made some of the best—and most expensive—TVs available. Its OLED lineup, in particular, has appealed to people who use their TVs to watch Blu-rays, enjoy HDR, and the like. However, some features that LG is introducing to high-end TVs this year seem to better serve LG’s business interests than those users’ needs.

Take the new remote. Formerly known as the Magic Remote, LG is calling the 2025 edition the AI Remote. That is already likely to dissuade people who are skeptical about AI marketing in products (research suggests there are many such people). But the more immediately frustrating part is that the new remote doesn’t have a dedicated button for switching input modes, as previous remotes from LG and countless other remotes do.


LG’s AI Remote. Credit: Tom’s Guide/YouTube

To use the AI Remote to change the TV’s input—a common task for people using their sets to play video games, watch Blu-rays or DVDs, connect their PC, et cetera—you have to long-press the Home Hub button. Single-pressing that button brings up a dashboard of webOS (the operating system for LG TVs) apps. That functionality isn’t immediately apparent to someone picking up the remote for the first time and detracts from the remote’s convenience.

By overlooking other obviously helpful controls (play/pause, fast forward/rewind, and numbers) while including buttons dedicated to things like LG’s free ad-supported streaming TV (FAST) channels and Amazon Alexa, LG missed an opportunity to update its remote in a way centered on how people frequently use TVs. It feels like user convenience didn’t drive this change. Instead, LG seems more focused on getting people to use webOS apps. LG can monetize app usage by, for example, getting a cut of streaming subscription sign-ups, selling ads on webOS, and selling and leveraging user data.

Moving from hardware provider to software platform

LG, like many other TV OEMs, has been growing its ads and data business. Deals with data analytics firms like Nielsen give it more incentive to acquire customer data. Declining TV margins and rock-bottom prices from budget brands (like Vizio and Roku, which sometimes lose money on TV hardware sales and make up for the losses through ad sales and data collection) are also pushing LG’s software focus. In the case of the AI Remote, software prioritization comes at the cost of an oft-used hardware capability.

Further demonstrating its motives, in September 2023, LG announced intentions to “become a media and entertainment platform company” by offering “services” and a “collection of curated content in products, including LG OLED and LG QNED TVs.” At the time, the South Korean firm said it would invest 1 trillion KRW (about $737.7 million) into its webOS business through 2028.

Low TV margins, improved TV durability, market saturation, and broader economic headwinds are all serious challenges for an electronics company like LG, and they have pushed LG to explore alternative ways to make money off of TVs. However, after paying four figures for their TV sets, LG customers shouldn’t be further burdened to help LG accrue revenue.

Google TVs gear up for subscription-based features

Numerous TV manufacturers, including Sony, TCL, and Philips, rely on Google software to power their TV sets. Many TVs announced at CES 2025 will come with what Google calls Gemini Enhanced Google Assistant. The idea that people using Google TVs have been asking for this is somewhat undercut by the fact that Google Assistant interactions with TVs have thus far been “somewhat limited,” per a Lowpass report.

Nevertheless, these TVs are adding far-field microphones so that they can hear commands directed at the voice assistant. For the first time, the voice assistant will include Google’s generative AI chatbot, Gemini, this year—another feature that TV users don’t typically ask for. Despite the lack of demand and the privacy concerns associated with microphones that can pick up audio from far away even when the TV is off, companies are still loading 2025 TVs with far-field mics to support Gemini. Notably, these TVs will likely allow the mics to be disabled, as you can with other TVs using far-field mics. But I still wonder what features or hardware could have been implemented instead.

Google is also working toward having people pay a subscription fee to use Gemini on their TVs, PCWorld reported.

“For us, our biggest goal is to create enough value that yes, you would be willing to pay for [Gemini],” Google TV VP and GM Shalini Govil-Pai told the publication.

The executive pointed to future capabilities for the Gemini-driven Google Assistant on TVs, including asking it to “suggest a movie like Jurassic Park but suitable for young children” or to show “Bollywood movies that are similar to Mission: Impossible.”

She also pointed to future features like showing weather, top news stories, and upcoming calendar events when someone is near the TV, showing AI-generated news briefings, and the ability to respond to questions like “explain the solar system to a third-grader” with text, audio, and YouTube videos.

But when people have desktops, laptops, tablets, and phones in their homes already, how helpful are these features truly? Govil-Pai admitted to PCWorld that “people are not used to” using their TVs this way “so it will take some time for them to adapt to it.” With this in mind, it seems odd for TV companies to implement new, more powerful microphones to support features that Google acknowledges aren’t in demand. I’m not saying that tech companies shouldn’t get ahead of the curve and offer groundbreaking features that users hadn’t considered might benefit them. But already planning to monetize those capabilities—with a subscription, no less—suggests a prioritization of corporate needs.

Samsung is hungry for AI

People who want to use their TV for cooking inspiration often turn to cooking shows or online cooking videos. However, Samsung wants people to use its TV software to identify dishes they want to try making.

During CES, Samsung announced Samsung Food for TVs. The feature leverages Samsung TVs’ AI processors to identify food displayed on the screen and recommend relevant recipes. Samsung introduced the capability in 2023 as an iOS and Android app after buying the app Whisk in 2019. As noted by TechCrunch, though, other AI tools for providing recipes based on food images are flawed.

So why bother with such a feature? You can get a taste of Samsung’s motivation from its CES-announced deal with Instacart that lets people order off Instacart from Samsung smart fridges that support the capability. Samsung Food on TVs can show users the progress of food orders placed via the Samsung Food mobile app on their TVs. Samsung Food can also create a shopping list for recipe ingredients based on what it knows (using cameras and AI) is in your (supporting) Samsung fridge. The feature also requires a Samsung account, which allows the company to gather more information on users.

Other software-centric features loaded into Samsung TVs this year include a dedicated AI button on the new TVs’ remotes, the ability to use gestures to control the TV but only if you’re wearing a Samsung Galaxy Watch, and AI Karaoke, which lets people sing karaoke using their TVs by stripping vocals from music playing and using their phone as a mic.

Like LG, Samsung has shown growing interest in ads and data collection. In May, for example, it expanded its automatic content recognition tech to track ad exposure on streaming services viewed on its TVs. It also has an ads analytics partnership with Experian.

Large language models on TVs

TVs are mainstream technology in most US homes. Generative AI chatbots, on the other hand, are emerging technology that many people have yet to try. Despite these disparities, LG and Samsung are incorporating Microsoft’s Copilot chatbot into 2025 TVs.

LG claims that Copilot will help its TVs “understand conversational context and uncover subtle user intentions,” adding: “Access to Microsoft Copilot further streamlines the process, allowing users to efficiently find and organize complex information using contextual cues. For an even smoother and more engaging experience, the AI chatbot proactively identifies potential user challenges and offers timely, effective solutions.”

Similarly, Samsung, which is also adding Copilot to some of its smart monitors, said in its announcement that Copilot will help with “personalized content recommendations.” Samsung has also said that Copilot will help its TVs understand strings of commands, like increasing the volume and changing the channel, CNET noted. Samsung said it intends to work with additional AI partners, namely Google, but it’s unclear why it needs multiple AI partners, especially when it hasn’t yet seen how people use large language models on their TVs.

TV-as-a-platform

To be clear, this isn’t a condemnation of new, unexpected TV features. Nor is it a censure of new TV apps or the use of AI in TVs.

AI marketing hype is real and misleading regarding the demand for, benefits of, and possibilities of AI in consumer gadgets. However, there are cases where innovative software, including AI, can improve things that TV users not only care about but actually want or need. For example, some TVs use AI to optimize sound, color, and/or brightness based on current environmental conditions, or for upscaling. This week, Samsung announced AI Live Translate for TVs. The feature is supposed to translate foreign-language closed captions in real time, giving people a way to watch more international content. It’s a feature I didn’t ask for but can see being useful and changing how I use my TV.

But a lot of this week’s TV announcements underscore an alarming TV-as-a-platform trend, where TV sets are sold as a way to infiltrate people’s homes so that apps, AI, and ads can be pushed onto viewers. Even high-end TVs are moving in this direction, amplifying features with questionable usefulness, effectiveness, and privacy considerations. Again, I can’t help but wonder what better innovations could have come out this year if more R&D were directed toward hardware and other improvements that are more immediately rewarding for users than karaoke with AI.

The TV industry is facing economic challenges, and, understandably, TV brands are seeking creative solutions for making money. But for consumers, that means paying for features that you’re likely to ignore. Ultimately, many people just want a TV with amazing image and sound quality. Finding that without having to sift through a bunch of fluff is getting harder.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



China is having standard flu season despite widespread HMPV fears

There’s a good chance you’ve seen headlines about HMPV recently, with some touting “what you need to know” about the virus, aka human metapneumovirus. The answer is: not much.

It’s a common, usually mild respiratory virus that circulates every year, blending into the throng of other seasonal respiratory illnesses that are often indistinguishable from one another. (The pack includes influenza virus, respiratory syncytial virus (RSV), adenovirus, parainfluenza virus, common human coronaviruses, bocavirus, rhinovirus, enteroviruses, and Mycoplasma pneumoniae, among others.) HMPV is in the same family of viruses as RSV.

As one viral disease epidemiologist at the US Centers for Disease Control summarized in 2016, it’s usually “clinically indistinguishable” from other bog-standard respiratory illnesses, like seasonal flu, that cause cough, fever, and nasal congestion. For most, the infection is crummy but not worth a visit to a doctor. As such, testing for it is limited. But, like other common respiratory infections, it can be dangerous for children under age 5, older adults, and those with compromised immune systems. It was first identified in 2001, but it has likely been circulating since at least 1958.

The situation in China

The explosion of interest in HMPV comes after reports of a spike of HMPV infections in China, which allegedly led to hordes of masked patients filling hospitals. But none of that appears to be accurate. While HMPV infections have risen, the increase is not unusual for the respiratory illness season. Further, HMPV is not the leading cause of respiratory illnesses in China right now; the leading cause is seasonal flu. And the surge in seasonal flu is also within the usual levels seen at this time of year in China.

Last week, the Chinese Center for Disease Control and Prevention released its sentinel respiratory illness surveillance data collected in the last week of December, including the test results of respiratory samples taken from outpatients. Of those, 30 percent were positive for flu (the largest share), a jump of about 6 percentage points from the previous week (also the largest jump). Only 6 percent were positive for HMPV, about the same detection rate as in the previous week (a 0.1 percentage-point increase).

China is having standard flu season despite widespread HMPV fears


Dirty deeds in Denver: Ex-prosecutor faked texts, destroyed devices to frame colleague

How we got here

Choi was a young attorney a few years out of law school, working at the Denver District Attorney’s Office in various roles between 2019 and 2022. Beginning in 2021, she accused her colleague, Dan Hines, of sexual misconduct. Hines, she said at first, made an inappropriate remark to her. Hines denied it and nothing could be proven, but he was still transferred to another unit.

In 2022, Choi complained again. This time, she offered phone records showing inappropriate text messages she allegedly received from Hines. But Hines, who denied everything, offered investigators his own phone records, which showed no texts to Choi.

Investigators then went directly to Verizon for records, which showed that “Ms. Choi had texted the inappropriate messages to herself,” according to the Times. “In addition, she changed the name in her phone to make it appear as though Mr. Hines was the one who had sent them.”

At this point, the investigators started looking more closely at Choi and asked for her devices, leading to the incident described above.

In the end, Choi was fired from the DA’s office and eventually given a disbarment order by the Office of the Presiding Disciplinary Judge, which she can still appeal. For his part, Hines is upset about how he was treated during the whole situation and has filed a lawsuit of his own against the DA’s office, believing that he was initially seen as a guilty party even in the absence of evidence.

The case is a reminder that, despite well-founded concerns over tracking, data collection, and privacy, sometimes the modern world’s massive data collection can work to one’s benefit. Hines was able to escape the second allegation against him precisely because of the specific (and specifically refutable) digital evidence that was presented against him—as opposed to the murkier world of “he said/she said.”

Choi might have done as she liked with her devices, but her “evidence” wasn’t the only data out there. Investigators were able to draw on Hines’ own phone data, along with Verizon network data, to see that he had not been texting Choi at the times in question.

Update: Ars Technica has obtained the ruling, which you can read here (PDF). The document recounts in great detail what a modern, quasi-judicial workplace investigation looks like: forensic device examinations, search warrants to Verizon, asking people to log into their cell phone accounts and download data while investigators look over their shoulders, etc.



New GeForce 50-series GPUs: There’s the $1,999 5090, and there’s everything else


Nvidia leans heavily on DLSS 4 and AI-generated frames for speed comparisons.

Nvidia’s RTX 5070, one of four new desktop GPUs announced this week. Credit: Nvidia


Nvidia has good news and bad news for people building or buying gaming PCs.

The good news is that three of its four new RTX 50-series GPUs are the same price or slightly cheaper than the RTX 40-series GPUs they’re replacing. The RTX 5080 is $999, the same price as the RTX 4080 Super; the 5070 Ti and 5070 are launching for $749 and $549, each $50 less than the 4070 Ti Super and 4070 Super.

The bad news for people looking for the absolute fastest card they can get is that the company is charging $1,999 for its flagship RTX 5090 GPU, significantly more than the $1,599 MSRP of the RTX 4090. If you want Nvidia’s biggest and best, it will cost at least as much as four high-end game consoles or a pair of decently specced midrange gaming PCs.

Pricing for the first batch of Blackwell-based RTX 50-series GPUs. Credit: Nvidia

Nvidia also announced a new version of its upscaling algorithm, DLSS 4. As with DLSS 3 and the RTX 40-series, DLSS 4’s flagship feature will be exclusive to the 50-series. It’s called DLSS Multi Frame Generation, and as the name implies, it takes the Frame Generation feature from DLSS 3 and allows it to generate even more frames. It’s why Nvidia CEO Jensen Huang claimed that the $549 RTX 5070 performed like the $1,599 RTX 4090; it’s also why those claims are a bit misleading.

The rollout will begin with the RTX 5090 and 5080 on January 30, with the 5070 Ti and 5070 following at some point in February. Every card except the 5070 Ti will come in an Nvidia-designed Founders Edition as well as designs made by Nvidia’s partners; the 5070 Ti is skipping the Founders Edition treatment entirely.

The RTX 5090 and 5080

                  RTX 5090      RTX 4090      RTX 5080      RTX 4080 Super
CUDA Cores        21,760        16,384        10,752        10,240
Boost Clock       2,410 MHz     2,520 MHz     2,617 MHz     2,550 MHz
Memory Bus Width  512-bit       384-bit       256-bit       256-bit
Memory Bandwidth  1,792 GB/s    1,008 GB/s    960 GB/s      736 GB/s
Memory size       32GB GDDR7    24GB GDDR6X   16GB GDDR7    16GB GDDR6X
TGP               575 W         450 W         360 W         320 W
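The bandwidth column follows directly from bus width and per-pin data rate: bandwidth in GB/s is (bus bits ÷ 8) × Gbps per pin. A quick sketch of that arithmetic; note that the per-pin rates below are worked backward from the table's own numbers, not official Nvidia figures:

```python
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_bits / 8 * gbps_per_pin

# RTX 5090: 512-bit GDDR7 at an implied 28 Gbps per pin
print(bandwidth_gbs(512, 28))  # 1792.0 GB/s, matching the table
# RTX 5080: 256-bit GDDR7 at an implied 30 Gbps per pin
print(bandwidth_gbs(256, 30))  # 960.0 GB/s
# RTX 4090: 384-bit GDDR6X at an implied 21 Gbps per pin
print(bandwidth_gbs(384, 21))  # 1008.0 GB/s
```

The same formula reproduces every bandwidth figure in the 5070-class table below, so the 50-series gains come from some mix of wider buses (5090) and faster GDDR7 (everything else).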

The RTX 5090, based on Nvidia’s new Blackwell architecture, is a gigantic chip with 92 billion transistors. And while it is double the price of an RTX 5080, you also get double the GPU cores, double the RAM, and nearly double the memory bandwidth. Even more than the 4090, it’s being positioned head and shoulders above the rest of the GPUs in the family, and the 5080’s performance won’t come remotely close to it.

Although $1,999 is a lot to ask for a graphics card, if Nvidia can consistently keep the RTX 5090 available at $2,000, that could still be an improvement over the pricing of the 4090, which regularly sold for well over its $1,599 MSRP during its lifetime thanks to pandemic-fueled GPU shortages, cryptocurrency mining, and the generative AI boom. Companies and other entities buying the 5090 as an AI accelerator may constrain its availability, too, but Nvidia’s highest GPU tier has been well out of the price range of most consumers for a while now.

Despite the higher power budget—as predicted, it’s 125 W higher than the 4090 at 450 W, and Nvidia recommends a 1,000 W power supply or better—the physical size of the 5090 Founders Edition is considerably smaller than the 4090, which was large enough that it had trouble fitting into some computer cases. Thanks to a “high-density PCB” and redesigned cooling system, the 5090 Founders Edition is a dual-slot card that ought to fit into small-form-factor systems much more easily than the 4090. Of course, this won’t stop most third-party 5090 GPUs from being gigantic triple-fan monstrosities, but it is apparently possible to make a reasonably sized version of the card.

Moving on to the 5080, it looks like more of a mild update from last year’s RTX 4080 Super, with a few hundred more CUDA cores, more memory bandwidth (thanks to the use of GDDR7, since the two GPUs share the same 256-bit interface), and a slightly higher power budget of 360 W (compared to 320 W for the 4080 Super).

Having more cores and faster memory, in addition to whatever improvements and optimizations come with the Blackwell architecture, should help the 5080 easily beat the 4080 Super. But it’s an open question as to whether it will be able to beat the 4090, at least before you consider any DLSS-related frame rate increases. The 4090 has 52 percent more GPU cores, a wider memory bus, and 8GB more memory.

5070 Ti and 5070

                  RTX 5070 Ti   RTX 4070 Ti Super   RTX 5070     RTX 4070 Super
CUDA Cores        8,960         8,448               6,144        7,168
Boost Clock       2,452 MHz     2,610 MHz           2,512 MHz    2,475 MHz
Memory Bus Width  256-bit       256-bit             192-bit      192-bit
Memory Bandwidth  896 GB/s      672 GB/s            672 GB/s     504 GB/s
Memory size       16GB GDDR7    16GB GDDR6X         12GB GDDR7   12GB GDDR6X
TGP               300 W         285 W               250 W        220 W

At $749 and $549, the 5070 Ti and 5070 are slightly more within reach for someone who’s trying to spend less than $2,000 on a new gaming PC. Both cards hew relatively closely to the specs of the 4070 Ti Super and 4070 Super, both of which are already solid 1440p and 4K graphics cards for many titles.

Like the 5080, the 5070 Ti includes a few hundred more CUDA cores, more memory bandwidth, and slightly higher power requirements compared to the 4070 Ti Super. That the card is $50 less than the 4070 Ti Super was at launch is a nice bonus—if it can come close to or beat the RTX 4080 for $250 less, it could be an appealing high-end option.

The RTX 5070 is alone in having fewer CUDA cores than its immediate predecessor—6,144, down from 7,168. It is an upgrade from the original 4070, which had 5,888 CUDA cores, and GDDR7 and slightly faster clock speeds may still help it outrun the 4070 Super; like the other 50-series cards, it also comes with a higher power budget. But right now this card is looking like the closest thing to a lateral move in the lineup, at least before you consider the additional frame-generation capabilities of DLSS 4.

DLSS 4 and fudging the numbers

Many of Nvidia’s most ostentatious performance claims—including the one that the RTX 5070 is as fast as a 4090—factor in DLSS 4’s additional AI-generated frames. Credit: Nvidia

When launching new 40-series cards over the last two years, it was common for Nvidia to publish a couple of different performance comparisons to last-gen cards: one with DLSS turned off and one with DLSS and the 40-series-exclusive Frame Generation feature turned on. Nvidia would then lean on the DLSS-enabled numbers when making broad proclamations about a GPU’s performance, as it does in its official press release when it says the 5090 is twice as fast as the 4090, or as Huang did during his CES keynote when he claimed that an RTX 5070 offered RTX 4090 performance for $549.

DLSS Frame Generation is an AI feature that builds on what DLSS is already doing. Where DLSS uses AI to fill in gaps and make a lower-resolution image look like a higher-resolution image, DLSS Frame Generation creates entirely new frames and inserts them in between the frames that your GPU is actually rendering.

DLSS 4 now generates up to three frames for every frame the GPU is actually rendering. Used in concert with DLSS image upscaling, Nvidia says that “15 out of every 16 pixels” you see on your screen are being generated by its AI models. Credit: Nvidia

The RTX 50-series one-ups the 40-series with DLSS 4, another new revision that’s exclusive to its just-launched GPUs: DLSS Multi Frame Generation. Instead of generating one extra frame for every traditionally rendered frame, DLSS 4 generates “up to three additional frames” to slide in between the ones your graphics card is actually rendering—based on Nvidia’s slides, it looks like users ought to be able to control how many extra frames are being generated, just as they can control the quality settings for DLSS upscaling. Nvidia is leaning on the Blackwell architecture’s faster Tensor Cores, which it says are up to 2.5 times faster than the Tensor Cores in the RTX 40-series, to do the AI processing necessary to upscale rendered frames and to generate new ones.
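The “15 out of every 16 pixels” figure is straightforward arithmetic. Assuming 4× upscaling (the GPU renders a quarter of each frame’s pixels, as in DLSS performance mode; the exact mode is an assumption here) combined with three generated frames for each rendered one, only 1/4 × 1/4 = 1/16 of displayed pixels are traditionally rendered:

```python
# Fraction of displayed pixels the GPU traditionally renders under DLSS 4.
# The 1/4 upscaling fraction assumes performance-mode upscaling (an assumption,
# not an official Nvidia breakdown).
upscale_pixel_fraction = 1 / 4   # pixels rendered per frame before upscaling
rendered_frame_fraction = 1 / 4  # 1 rendered frame for every 3 generated ones

rendered_pixels = upscale_pixel_fraction * rendered_frame_fraction
print(rendered_pixels)       # 0.0625 -> 1/16 of pixels traditionally rendered
print(1 - rendered_pixels)   # 0.9375 -> "15 out of every 16 pixels" AI-generated

# Frame-rate math: Multi Frame Generation turns each rendered frame into
# four displayed frames.
base_fps = 30
displayed_fps = base_fps * 4
print(displayed_fps)         # 120
```

This is also why frame-generation-inflated comparisons can mislead: the displayed frame rate quadruples, but input latency still tracks the underlying 30 rendered frames per second.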

Nvidia’s performance comparisons aren’t indefensible; with DLSS Frame Generation enabled, the cards really can put out a lot of frames per second. But the numbers depend on game support (Nvidia says 75 titles will support the feature at launch), and going off of our experience with the original iteration of Frame Generation, there will likely be scenarios where image quality is noticeably worse or just “off-looking” compared to actual rendered frames. The original Frame Generation also needed a solid base frame rate to deliver its best results, and it remains to be seen whether the same limitation applies to Multi Frame Generation.

Enhanced versions of older DLSS features can benefit all RTX cards, including the 20-, 30-, and 40-series. Multi-Frame Generation is restricted to the 50-series, though. Credit: Nvidia

Though the practice of restricting the biggest DLSS upgrades to all-new hardware is a bit frustrating, Nvidia did announce that it’s releasing a new transformer-based model for the DLSS Ray Reconstruction, Super Resolution, and Anti-Aliasing features. These are DLSS features that are available on all RTX GPUs going all the way back to the RTX 20-series, and games that are upgraded to use the newer models should benefit from improved upscaling quality even if they’re using older GPUs.

GeForce 50-series: Also for laptops!

Nvidia’s projected pricing for laptops with each of its new mobile GPUs. Credit: Nvidia

Nvidia’s laptop GPU announcements sometimes trail the desktop announcements by a few weeks or months. This time, though, the company has already announced mobile versions of the 5090, 5080, 5070 Ti, and 5070, which it says will begin shipping in laptops priced between $1,299 and $2,899 when they launch in March.

All of these GPUs share names, the Blackwell architecture, and DLSS 4 support with their desktop counterparts, but as usual they’re significantly cut down to fit on a laptop motherboard and within a laptop’s cooling capacity. The mobile version of the 5090 includes 10,496 GPU cores, less than half the number in the desktop version, and just 24GB of GDDR7 memory on a 256-bit interface instead of 32GB on a 512-bit interface. But it can also operate with a power budget between 95 and 150 W, a fraction of what the desktop 5090 needs.

                  RTX 5090 (mobile)   RTX 5080 (mobile)   RTX 5070 Ti (mobile)   RTX 5070 (mobile)
CUDA Cores        10,496              7,680               5,888                  4,608
Memory Bus Width  256-bit             256-bit             192-bit                128-bit
Memory size       24GB GDDR7          16GB GDDR7          12GB GDDR7             8GB GDDR7
TGP               95-150 W            80-150 W            60-115 W               50-100 W

The other three GPUs are mostly cut down in similar ways, and all of them have fewer GPU cores and lower power requirements than their desktop counterparts. The two 5070 GPUs also have less RAM and narrower memory buses, while the mobile RTX 5080 at least comes closer to its desktop iteration, with the same 256-bit bus width and 16GB of RAM.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Lenovo laptop’s rollable screen uses motors to grow from 14 to 16.7 inches

Lenovo announced a laptop today that experiments with a new way to offer laptop users more screen space than the typical clamshell design. The Lenovo ThinkBook Plus Gen 6 Rollable has a screen that can roll up vertically to expand from 14 inches diagonally to 16.7 inches, presenting an alternative to prior foldable-screen and dual-screen laptops.

Here you can see the PC’s backside when the screen is extended. Lenovo

The laptop, which Lenovo says is coming out in June, builds on a concept that Lenovo demoed in February 2023. That prototype had a Sharp-made panel that initially measured 12.7 inches but could unroll to present a total screen size of 15.3 inches. Lenovo’s final product is working with a bigger display from Samsung Display, The Verge reported. Resolution-wise you’re going from 2,000×1,600 pixels (about 183 pixels per inch) to 2,000×2,350 (184.8 ppi), the publication said.
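The pixel-density figures check out: pixels per inch is just the diagonal pixel count divided by the diagonal size in inches. A quick verification using the resolutions and screen sizes cited above:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2000, 1600, 14.0)))     # 183 ppi rolled up
print(round(ppi(2000, 2350, 16.7), 1))  # 184.8 ppi fully extended
```

The near-identical density on both measurements is expected, since extending the screen reveals more of the same physical panel rather than stretching it.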

Users make the screen expand by pressing a dedicated button on the keyboard or by making a hand gesture at the PC’s webcam. Expansion entails about 10 seconds of loud whirring from the laptop’s motors. Lenovo executives told The Verge that the laptop was rated for at least 20,000 rolls up and down and 30,000 hinge openings and closings.

The system can also treat the expanded screen as two separate 16:9 displays.

Lenovo ThinkBook Plus Gen 6 Rollable

The screen claims up to 400 nits brightness and 100 percent DCI-P3 coverage. Credit: Lenovo

This is a clever way to offer a dual-screen experience without the flaws inherent to current dual-screen laptops, including distracting hinges and designs with questionable durability. However, 16.7 inches is a bit small for two displays. The dual-screen Lenovo Yoga Book 9i, for comparison, previously had two 13.3-inch displays for a total of 26.6 inches, and this year’s model has two 14-inch screens. Still, the ThinkBook, when its screen is fully expanded, is the rare laptop to offer a screen that’s taller than it is wide.

Still foldable OLED

At first, you might think that a screen described as “rollable” would avoid the visible creases that have tormented foldable-screen devices since their inception. But the screen, reportedly from Samsung Display, still shows “little curls visible in the display, which are more obvious when it’s moving and there’s something darker onscreen,” as well as “plenty of smaller creases along its lower half” that aren’t too noticeable when using the laptop but that are clear when looking at the screen closely or when staring at it “from steeper angles,” The Verge reported.
