

Why I’m disappointed with the TVs at CES 2025


Won’t someone please think of the viewer?

Op-ed: TVs miss opportunity for real improvement by prioritizing corporate needs.

The TV industry is hitting users over the head with AI and other questionable gimmicks. Credit: Getty

If you asked someone what they wanted from TVs released in 2025, I doubt they’d say “more software and AI.” Yet, if you look at what TV companies have planned for this year, which is being primarily promoted at the CES technology trade show in Las Vegas this week, software and AI are where much of the focus is.

The trend reveals the implications of TV brands increasingly viewing themselves as software rather than hardware companies, with their real product being customer data rather than TV sets. This points to an alarming future for smart TVs, where even premium models sought after for top-end image quality and hardware capabilities are stuffed with unwanted gimmicks.

LG’s remote regression

LG has long made some of the best—and most expensive—TVs available. Its OLED lineup, in particular, has appealed to people who use their TVs to watch Blu-rays, enjoy HDR, and the like. However, some features that LG is introducing to high-end TVs this year seem to better serve LG’s business interests than those users’ needs.

Take the new remote. Formerly known as the Magic Remote, it has been rebranded for 2025 as the AI Remote. That name alone is likely to put off people who are skeptical of AI marketing in products (research suggests there are many such people). But the more immediately frustrating part is that the new remote doesn’t have a dedicated button for switching inputs, as previous LG remotes and countless other remotes do.


LG’s AI Remote. Credit: Tom’s Guide/YouTube

To use the AI Remote to change the TV’s input—a common task for people using their sets to play video games, watch Blu-rays or DVDs, connect their PC, et cetera—you have to long-press the Home Hub button. Single-pressing that button brings up a dashboard of webOS (the operating system for LG TVs) apps. That functionality isn’t immediately apparent to someone picking up the remote for the first time and detracts from the remote’s convenience.

By overlooking other obviously helpful controls (play/pause, fast-forward/rewind, and number keys) while including buttons dedicated to things like LG’s free ad-supported streaming TV (FAST) channels and Amazon Alexa, LG missed an opportunity to update its remote around how people frequently use TVs. It doesn’t feel like user convenience drove this change. Instead, LG seems more focused on getting people to use webOS apps, which it can monetize by, for example, taking a cut of streaming subscription sign-ups, selling ads on webOS, and selling and leveraging user data.

Moving from hardware provider to software platform

LG, like many other TV OEMs, has been growing its ads and data business. Deals with data analytics firms like Nielsen give it more incentive to acquire customer data. Declining TV margins and rock-bottom prices from budget brands (like Vizio and Roku, which sometimes lose money on TV hardware sales and make up for the losses through ad sales and data collection) are also pushing LG’s software focus. In the case of the AI Remote, software prioritization comes at the cost of an oft-used hardware capability.

Further demonstrating its motives, in September 2023, LG announced intentions to “become a media and entertainment platform company” by offering “services” and a “collection of curated content in products, including LG OLED and LG QNED TVs.” At the time, the South Korean firm said it would invest 1 trillion KRW (about $737.7 million) into its webOS business through 2028.

Low TV margins, improved TV durability, market saturation, and broader economic challenges all weigh on an electronics company like LG and have pushed it to explore alternative ways to make money off of TVs. However, after paying four figures for their TV sets, LG customers shouldn’t be further burdened to help LG accrue revenue.

Google TVs gear up for subscription-based features

Numerous TV manufacturers, including Sony, TCL, and Philips, rely on Google software to power their TV sets, and many of the TVs announced at CES 2025 will ship with what Google calls Gemini Enhanced Google Assistant. The idea that this is something Google TV users have requested is undercut by the fact that Google Assistant interactions with TVs have been “somewhat limited” thus far, per a Lowpass report.

Nevertheless, these TVs are adding far-field microphones so that they can hear commands directed at the voice assistant. For the first time, the voice assistant will include Google’s generative AI chatbot, Gemini, this year—another feature that TV users don’t typically ask for. Despite the lack of demand and the privacy concerns associated with microphones that can pick up audio from far away even when the TV is off, companies are still loading 2025 TVs with far-field mics to support Gemini. Notably, these TVs will likely allow the mics to be disabled, as with other TVs that use far-field mics. But I still wonder what features or hardware could have been implemented instead.

Google is also working toward having people pay a subscription fee to use Gemini on their TVs, PCWorld reported.

“For us, our biggest goal is to create enough value that yes, you would be willing to pay for [Gemini],” Google TV VP and GM Shalini Govil-Pai told the publication.

The executive pointed to future capabilities for the Gemini-driven Google Assistant on TVs, including asking it to “suggest a movie like Jurassic Park but suitable for young children” or to show “Bollywood movies that are similar to Mission: Impossible.”

She also pointed to future features like showing weather, top news stories, and upcoming calendar events when someone is near the TV, showing AI-generated news briefings, and the ability to respond to questions like “explain the solar system to a third-grader” with text, audio, and YouTube videos.

But when people have desktops, laptops, tablets, and phones in their homes already, how helpful are these features truly? Govil-Pai admitted to PCWorld that “people are not used to” using their TVs this way “so it will take some time for them to adapt to it.” With this in mind, it seems odd for TV companies to implement new, more powerful microphones to support features that Google acknowledges aren’t in demand. I’m not saying that tech companies shouldn’t get ahead of the curve and offer groundbreaking features that users hadn’t considered might benefit them. But already planning to monetize those capabilities—with a subscription, no less—suggests a prioritization of corporate needs.

Samsung is hungry for AI

People who want to use their TV for cooking inspiration often turn to cooking shows or online cooking videos. However, Samsung wants people to use its TV software to identify dishes they want to try making.

During CES, Samsung announced Samsung Food for TVs. The feature leverages Samsung TVs’ AI processors to identify food displayed on the screen and recommend relevant recipes. Samsung introduced the capability in 2023 as an iOS and Android app after buying the app Whisk in 2019. As noted by TechCrunch, though, other AI tools for providing recipes based on food images are flawed.

So why bother with such a feature? You can get a taste of Samsung’s motivation from its CES-announced deal with Instacart that lets people order off Instacart from Samsung smart fridges that support the capability. Samsung Food on TVs can show users the progress of food orders placed via the Samsung Food mobile app on their TVs. Samsung Food can also create a shopping list for recipe ingredients based on what it knows (using cameras and AI) is in your (supporting) Samsung fridge. The feature also requires a Samsung account, which allows the company to gather more information on users.

Other software-centric features loaded into Samsung TVs this year include a dedicated AI button on the new remotes; the ability to control the TV with gestures, but only if you’re wearing a Samsung Galaxy Watch; and AI Karaoke, which strips the vocals from whatever music is playing so that people can sing karaoke through their TVs, using a phone as the mic.

Like LG, Samsung has shown growing interest in ads and data collection. In May, for example, it expanded its automatic content recognition tech to track ad exposure on streaming services viewed on its TVs. It also has an ads analytics partnership with Experian.

Large language models on TVs

TVs are mainstream technology in most US homes. Generative AI chatbots, on the other hand, are emerging technology that many people have yet to try. Despite these disparities, LG and Samsung are incorporating Microsoft’s Copilot chatbot into 2025 TVs.

LG claims that Copilot will help its TVs “understand conversational context and uncover subtle user intentions,” adding: “Access to Microsoft Copilot further streamlines the process, allowing users to efficiently find and organize complex information using contextual cues. For an even smoother and more engaging experience, the AI chatbot proactively identifies potential user challenges and offers timely, effective solutions.”

Similarly, Samsung, which is also adding Copilot to some of its smart monitors, said in its announcement that Copilot will help with “personalized content recommendations.” Samsung has also said that Copilot will help its TVs understand strings of commands, like increasing the volume and changing the channel, CNET noted. Samsung said it intends to work with additional AI partners, namely Google, but it’s unclear why it needs multiple AI partners, especially when it hasn’t yet seen how people use large language models on their TVs.

TV-as-a-platform

To be clear, this isn’t a condemnation of new or unexpected TV features, nor a censure of new TV apps or of the use of AI in TVs.

AI marketing hype is real, and it misleads about the demand for, benefits of, and possibilities of AI in consumer gadgets. However, there are cases when innovative software, including AI, can improve things that TV users not only care about but actually want or need. For example, some TVs use AI to optimize sound, color, and brightness for the current viewing environment, or to upscale lower-resolution content. This week, Samsung announced AI Live Translate for TVs. The feature is supposed to translate foreign-language closed captions in real time, giving people a way to watch more international content. It’s a feature I didn’t ask for but can see being useful and changing how I use my TV.

But a lot of this week’s TV announcements underscore an alarming TV-as-a-platform trend in which TV sets are sold as a way to infiltrate people’s homes so that apps, AI, and ads can be pushed onto viewers. Even high-end TVs are moving in this direction, piling on features with questionable usefulness, effectiveness, and privacy implications. Again, I can’t help but wonder what better innovations could have come out this year if more R&D were directed toward hardware and other improvements that are more immediately rewarding for users than karaoke with AI.

The TV industry is facing economic challenges, and, understandably, TV brands are seeking creative ways to make money. But for consumers, that often means paying for features they’re likely to ignore. Ultimately, many people just want a TV with amazing image and sound quality, and finding that without having to sift through a bunch of fluff is getting harder.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Why I’m disappointed with the TVs at CES 2025 Read More »


How I program with LLMs

The second issue is we can do better. I am happy we now live in a time when programmers write their own tests, but we do not hold ourselves to the same standards with tests as we do with production code. That is a reasonable trade-off; there are only so many hours in the day. But what LLMs lack in arithmetical prowess, they make up for in enthusiasm.

Let’s ask for an even better test.

In the tests, implement the simplest, most readable version of the standard code for quartiles over a fixed set of known values in a slice. Then pass the test cases through the standard code and the reservoir sampler and confirm they are within an epsilon of each other. Structure the comparison code such that it can be used in a fuzz test, too.

This got us some new test code:

// referenceQuartiles calculates the exact quartiles for a slice of float64 values
// using linear interpolation, matching the behavior expected from the sampler.
func referenceQuartiles(data []float64) (q1, median, q3 float64) { … }

// compareQuartiles checks if two sets of quartiles are within epsilon of each other.
// Returns true if they match within the tolerance, false otherwise.
func compareQuartiles(q1a, meda, q3a, q1b, medb, q3b, epsilon float64) bool { … }

// checkQuartiles is a test helper that compares sampler output against the reference
// implementation and reports any differences.
func checkQuartiles(t *testing.T, data []float64, epsilon float64) {
    t.Helper()

    // Get reference values
    wantQ1, wantMed, wantQ3 := referenceQuartiles(data)

    // Get sampler values using a large reservoir for accuracy
    qs := NewQuartileSampler(1000)
    for _, v := range data {
        qs.Add(v)
    }
    gotQ1, gotMed, gotQ3 := qs.Quartiles()

    if !compareQuartiles(gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon) {
        t.Errorf("Quartiles mismatch:\ngot  (q1=%v, med=%v, q3=%v)\nwant (q1=%v, med=%v, q3=%v)\nepsilon=%v",
            gotQ1, gotMed, gotQ3, wantQ1, wantMed, wantQ3, epsilon)
    }
}

The original test from above has been reworked to use checkQuartiles, and we have something new:

func FuzzQuartileSampler(f *testing.F) {
    // Add some seed corpus
    f.Add([]float64{1, 2, 3, 4, 5})

    f.Fuzz(func(t *testing.T, data []float64) {
        // Use a larger epsilon for fuzzing since we might get more extreme values
        checkQuartiles(t, data, 0.2)
    })
}

This is fun because it’s wrong. My running gopls tool immediately says:

fuzzing arguments can only have the following types:
    string, bool, float32, float64,
    int, int8, int16, int32, int64,
    uint, uint8, uint16, uint32, uint64,
    []byte

Pasting that error back into the LLM gets it to regenerate the fuzz test such that it is built around a func(t *testing.T, data []byte) function that uses math.Float64frombits to extract floats from the data slice. Interactions like this point us toward automating the feedback from tools; all it needed was the obvious error message to make solid progress toward something useful. I was not needed.
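For illustration, here is roughly the shape the regenerated fuzz test takes once it’s rebuilt around a []byte corpus (a sketch of my own rather than the LLM’s verbatim output; it assumes the checkQuartiles helper and NewQuartileSampler shown above, an assumed package name, and a little-endian decoding choice):

package sampler // package name assumed for this sketch

import (
    "encoding/binary"
    "math"
    "testing"
)

// FuzzQuartileSampler fuzzes over raw bytes, the only slice type Go fuzzing
// accepts, and decodes them into float64 values with math.Float64frombits.
// NewQuartileSampler and checkQuartiles are assumed to exist as shown earlier.
func FuzzQuartileSampler(f *testing.F) {
    // Seed corpus: every 8 bytes decode to one float64.
    f.Add([]byte{1, 2, 3, 4, 5, 6, 7, 8})

    f.Fuzz(func(t *testing.T, raw []byte) {
        // Decode 8 bytes at a time, skipping NaN and infinities that would
        // make the epsilon comparison meaningless.
        var data []float64
        for len(raw) >= 8 {
            v := math.Float64frombits(binary.LittleEndian.Uint64(raw[:8]))
            if !math.IsNaN(v) && !math.IsInf(v, 0) {
                data = append(data, v)
            }
            raw = raw[8:]
        }
        if len(data) == 0 {
            return
        }
        // Use a larger epsilon for fuzzing since we might get more extreme values.
        checkQuartiles(t, data, 0.2)
    })
}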

A quick survey of the past few weeks of my LLM chat history (which, as I mentioned earlier, is not a proper quantitative analysis by any measure) shows that when there is a tooling error, the LLM can make useful progress without me adding any insight more than 80 percent of the time. About half the time, it can completely resolve the issue without me saying anything of note. I am just acting as the messenger.

How I program with LLMs Read More »


New GeForce 50-series GPUs: There’s the $1,999 5090, and there’s everything else


Nvidia leans heavily on DLSS 4 and AI-generated frames for speed comparisons.

Nvidia’s RTX 5070, one of four new desktop GPUs announced this week. Credit: Nvidia


Nvidia has good news and bad news for people building or buying gaming PCs.

The good news is that three of its four new RTX 50-series GPUs are the same price or slightly cheaper than the RTX 40-series GPUs they’re replacing. The RTX 5080 is $999, the same price as the RTX 4080 Super; the 5070 Ti and 5070 are launching for $749 and $549, each $50 less than the 4070 Ti Super and 4070 Super.

The bad news for people looking for the absolute fastest card they can get is that the company is charging $1,999 for its flagship RTX 5090 GPU, significantly more than the $1,599 MSRP of the RTX 4090. If you want Nvidia’s biggest and best, it will cost at least as much as four high-end game consoles or a pair of decently specced midrange gaming PCs.

Pricing for the first batch of Blackwell-based RTX 50-series GPUs. Credit: Nvidia

Nvidia also announced a new version of its upscaling algorithm, DLSS 4. As with DLSS 3 and the RTX 40-series, DLSS 4’s flagship feature will be exclusive to the 50-series. It’s called DLSS Multi Frame Generation, and as the name implies, it takes the Frame Generation feature from DLSS 3 and allows it to generate even more frames. It’s why Nvidia CEO Jensen Huang claimed that the $549 RTX 5070 performed like the $1,599 RTX 4090; it’s also why those claims are a bit misleading.

The rollout will begin with the RTX 5090 and 5080 on January 30. The 5070 Ti and 5070 will follow at some point in February. All cards except the 5070 Ti will come in Nvidia-designed Founders Editions as well as designs made by Nvidia’s partners; the 5070 Ti isn’t getting a Founders Edition.

The RTX 5090 and 5080

|  | RTX 5090 | RTX 4090 | RTX 5080 | RTX 4080 Super |
| CUDA Cores | 21,760 | 16,384 | 10,752 | 10,240 |
| Boost Clock | 2,410 MHz | 2,520 MHz | 2,617 MHz | 2,550 MHz |
| Memory Bus Width | 512-bit | 384-bit | 256-bit | 256-bit |
| Memory Bandwidth | 1,792 GB/s | 1,008 GB/s | 960 GB/s | 736 GB/s |
| Memory Size | 32GB GDDR7 | 24GB GDDR6X | 16GB GDDR7 | 16GB GDDR6X |
| TGP | 575 W | 450 W | 360 W | 320 W |

The RTX 5090, based on Nvidia’s new Blackwell architecture, is a gigantic chip with 92 billion transistors in it. And while it is double the price of an RTX 5080, you also get double the GPU cores and double the RAM and nearly double the memory bandwidth. Even more than the 4090, it’s being positioned head and shoulders above the rest of the GPUs in the family, and the 5080’s performance won’t come remotely close to it.

Although $1,999 is a lot to ask for a graphics card, if Nvidia can consistently make the RTX 5090 available at $2,000, it could still be an improvement over the pricing of the 4090, which regularly sold for well over $1,599 over the course of its lifetime, due in part to pandemic-fueled GPU shortages, cryptocurrency mining, and the generative AI boom. Companies and other entities buying them as AI accelerators may restrict the availability of the 5090, too, but Nvidia’s highest GPU tier has been well out of the price range of most consumers for a while now.

Despite the higher power budget—as predicted, the 5090’s 575 W TGP is 125 W higher than the 4090’s 450 W, and Nvidia recommends a 1,000 W power supply or better—the 5090 Founders Edition is physically considerably smaller than the 4090, which was large enough that it had trouble fitting into some computer cases. Thanks to a “high-density PCB” and redesigned cooling system, the 5090 Founders Edition is a dual-slot card that ought to fit into small-form-factor systems much more easily than the 4090. Of course, this won’t stop most third-party 5090 GPUs from being gigantic triple-fan monstrosities, but it is apparently possible to make a reasonably sized version of the card.

Moving on to the 5080, it looks like more of a mild update from last year’s RTX 4080 Super, with a few hundred more CUDA cores, more memory bandwidth (thanks to the use of GDDR7, since the two GPUs share the same 256-bit interface), and a slightly higher power budget of 360 W (compared to 320 W for the 4080 Super).

Having more cores and faster memory, in addition to whatever improvements and optimizations come with the Blackwell architecture, should help the 5080 easily beat the 4080 Super. But it’s an open question as to whether it will be able to beat the 4090, at least before you consider any DLSS-related frame rate increases. The 4090 has 52 percent more GPU cores, a wider memory bus, and 8GB more memory.

5070 Ti and 5070

|  | RTX 5070 Ti | RTX 4070 Ti Super | RTX 5070 | RTX 4070 Super |
| CUDA Cores | 8,960 | 8,448 | 6,144 | 7,168 |
| Boost Clock | 2,452 MHz | 2,610 MHz | 2,512 MHz | 2,475 MHz |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 896 GB/s | 672 GB/s | 672 GB/s | 504 GB/s |
| Memory Size | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR7 | 12GB GDDR6X |
| TGP | 300 W | 285 W | 250 W | 220 W |

At $749 and $549, the 5070 Ti and 5070 are slightly more within reach for someone who’s trying to spend less than $2,000 on a new gaming PC. Both cards hew relatively closely to the specs of the 4070 Ti Super and 4070 Super, both of which are already solid 1440p and 4K graphics cards for many titles.

Like the 5080, the 5070 Ti includes a few hundred more CUDA cores, more memory bandwidth, and slightly higher power requirements compared to the 4070 Ti Super. That the card is $50 less than the 4070 Ti Super was at launch is a nice bonus—if it can come close to or beat the RTX 4080 for $250 less, it could be an appealing high-end option.

The RTX 5070 is alone in having fewer CUDA cores than its immediate predecessor—6,144, down from 7,168. It is an upgrade from the original 4070, which had 5,888 CUDA cores, and GDDR7 and slightly faster clock speeds may still help it outrun the 4070 Super; like the other 50-series cards, it also comes with a higher power budget. But right now this card is looking like the closest thing to a lateral move in the lineup, at least before you consider the additional frame-generation capabilities of DLSS 4.

DLSS 4 and fudging the numbers

Many of Nvidia’s most ostentatious performance claims—including the one that the RTX 5070 is as fast as a 4090—factor in DLSS 4’s additional AI-generated frames. Credit: Nvidia

When launching new 40-series cards over the last two years, it was common for Nvidia to publish a couple of different performance comparisons to last-gen cards: one with DLSS turned off and one with DLSS and the 40-series-exclusive Frame Generation feature turned on. Nvidia would then lean on the DLSS-enabled numbers when making broad proclamations about a GPU’s performance, as it does in its official press release when it says the 5090 is twice as fast as the 4090, or as Huang did during his CES keynote when he claimed that an RTX 5070 offered RTX 4090 performance for $549.

DLSS Frame Generation is an AI feature that builds on what DLSS is already doing. Where DLSS uses AI to fill in gaps and make a lower-resolution image look like a higher-resolution image, DLSS Frame Generation creates entirely new frames and inserts them in between the frames that your GPU is actually rendering.

DLSS 4 now generates up to three frames for every frame the GPU is actually rendering. Used in concert with DLSS image upscaling, Nvidia says that “15 out of every 16 pixels” you see on your screen are being generated by its AI models. Credit: Nvidia

The RTX 50-series one-ups the 40-series with DLSS 4, another new revision that’s exclusive to its just-launched GPUs: DLSS Multi Frame Generation. Instead of generating one extra frame for every traditionally rendered frame, DLSS 4 generates “up to three additional frames” to slide in between the ones your graphics card is actually rendering—based on Nvidia’s slides, it looks like users ought to be able to control how many extra frames are being generated, just as they can control the quality settings for DLSS upscaling. Nvidia is leaning on the Blackwell architecture’s faster Tensor Cores, which it says are up to 2.5 times faster than the Tensor Cores in the RTX 40-series, to do the AI processing necessary to upscale rendered frames and to generate new ones.
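To unpack the arithmetic behind that “15 out of every 16 pixels” claim: assuming DLSS upscaling runs in its performance mode, which renders at one-quarter of the output resolution, the GPU traditionally renders only 1/4 of the pixels in 1 of every 4 displayed frames. That works out to 1/4 × 1/4 = 1/16 of on-screen pixels coming from conventional rendering, with the remaining 15/16 filled in by the AI models.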

Nvidia’s performance comparisons aren’t indefensible; with DLSS FG enabled, the cards can put out a lot of frames per second. It’s just dependent on game support (Nvidia says that 75 titles will support it at launch), and going off of our experience with the original iteration of Frame Generation, there will likely be scenarios where image quality is noticeably worse or just “off-looking” compared to actual rendered frames. DLSS FG also needed a solid base frame rate to get the best results, which may or may not be the case for Multi-FG.

Enhanced versions of older DLSS features can benefit all RTX cards, including the 20-, 30-, and 40-series. Multi-Frame Generation is restricted to the 50-series, though. Credit: Nvidia

Though the practice of restricting the biggest DLSS upgrades to all-new hardware is a bit frustrating, Nvidia did announce that it’s releasing a new transformer model for the DLSS Ray Reconstruction, Super Resolution, and Anti-Aliasing features. These are DLSS features that are available on all RTX GPUs going all the way back to the RTX 20-series, and games that are upgraded to use the newer models should benefit from improved upscaling quality even if they’re using older GPUs.

GeForce 50-series: Also for laptops!

Nvidia’s projected pricing for laptops with each of its new mobile GPUs. Credit: Nvidia

Nvidia’s laptop GPU announcements sometimes trail the desktop announcements by a few weeks or months. But the company has already announced mobile versions of the 5090, 5080, 5070 Ti, and 5070 that Nvidia says will begin shipping in laptops priced between $1,299 and $2,899 when they launch in March.

All of these GPUs share names, the Blackwell architecture, and DLSS 4 support with their desktop counterparts, but per usual they’re significantly cut down to fit on a laptop motherboard and within a laptop’s cooling capacity. The mobile version of the 5090 includes 10,496 GPU cores, less than half the number of the desktop version, and just 24GB of GDDR7 memory on a 256-bit interface instead of 32GB on a 512-bit interface. But it also can operate with a power budget between 95 and 150 W, a fraction of what the desktop 5090 needs.

|  | RTX 5090 (mobile) | RTX 5080 (mobile) | RTX 5070 Ti (mobile) | RTX 5070 (mobile) |
| CUDA Cores | 10,496 | 7,680 | 5,888 | 4,608 |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 128-bit |
| Memory Size | 24GB GDDR7 | 16GB GDDR7 | 12GB GDDR7 | 8GB GDDR7 |
| TGP | 95-150 W | 80-150 W | 60-115 W | 50-100 W |

The other three GPUs are mostly cut down in similar ways, and all of them have fewer GPU cores and lower power requirements than their desktop counterparts. The 5070 GPUs both have less RAM and narrowed memory buses, too, but the mobile RTX 5080 at least comes closer to its desktop iteration, with the same 256-bit bus width and 16GB of RAM.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

New GeForce 50-series GPUs: There’s the $1,999 5090, and there’s everything else Read More »


Lenovo laptop’s rollable screen uses motors to grow from 14 to 16.7 inches

Lenovo announced a laptop today that experiments with a new way to offer laptop users more screen space than the typical clamshell design. The Lenovo ThinkBook Plus Gen 6 Rollable has a screen that can roll up vertically to expand from 14 inches diagonally to 16.7 inches, presenting an alternative to prior foldable-screen and dual-screen laptops.

Here you can see the back of the PC when the screen is extended. Credit: Lenovo

The laptop, which Lenovo says is coming out in June, builds on a concept that Lenovo demoed in February 2023. That prototype had a Sharp-made panel that initially measured 12.7 inches but could unroll to present a total screen size of 15.3 inches. Lenovo’s final product is working with a bigger display from Samsung Display, The Verge reported. Resolution-wise you’re going from 2,000×1,600 pixels (about 183 pixels per inch) to 2,000×2,350 (184.8 ppi), the publication said.
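Those pixel-density figures are consistent with the stated resolutions and diagonals: ppi is the square root of (width² + height²) divided by the diagonal, so √(2,000² + 1,600²) / 14 ≈ 183 ppi rolled up and √(2,000² + 2,350²) / 16.7 ≈ 185 ppi fully extended.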

Users make the screen expand by pressing a dedicated button on the keyboard or by making a hand gesture at the PC’s webcam. Expansion entails about 10 seconds of loud whirring from the laptop’s motors. Lenovo executives told The Verge that the laptop was rated for at least 20,000 rolls up and down and 30,000 hinge openings and closings.

The system can also treat the expanded screen as two different 16:9 displays.


Lenovo says the screen reaches up to 400 nits of brightness and covers 100 percent of DCI-P3. Credit: Lenovo

This is a clever way to offer a dual-screen experience without the flaws inherent to current dual-screen laptops, including distracting hinges and designs with questionable durability. However, 16.7 inches is a bit small for two displays. The dual-screen Lenovo Yoga Book 9i, for comparison, previously had two 13.3-inch displays for a total of 26.6 inches, and this year’s model has two 14-inch screens. Still, the ThinkBook, when its screen is fully expanded, is the rare laptop to offer a screen that’s taller than it is wide.

Still foldable OLED

At first, you might think that since the screen is described as “rollable” it may not have the same visible creases that have tormented foldable-screen devices since their inception. But the screen, reportedly from Samsung Display, still shows “little curls visible in the display, which are more obvious when it’s moving and there’s something darker onscreen,” as well as “plenty of smaller creases along its lower half” that aren’t too noticeable when using the laptop but that are clear when looking at the screen closely or when staring at it “from steeper angles,” The Verge reported.

Lenovo laptop’s rollable screen uses motors to grow from 14 to 16.7 inches Read More »


Apple will update iOS notification summaries after BBC headline mistake

Nevertheless, it’s a serious problem when the summaries misrepresent news headlines, and edge cases where this occurs are unfortunately inevitable. Apple cannot simply fix these summaries with a software update. The only answers are either to help users understand the drawbacks of the technology so they can make better-informed judgments or to remove or disable the feature completely. Apple is apparently going for the former.

We’re oversimplifying a bit here, but generally, LLMs like those used for Apple’s notification summaries work by predicting portions of words based on what came before and are not capable of truly understanding the content they’re summarizing.

Further, these predictions are known to not be accurate all the time, with incorrect results occurring a few times per 100 or 1,000 outputs. As the models are trained and improvements are made, the error percentage may be reduced, but it never reaches zero when countless summaries are being produced every day.

Deploying this technology at scale without users (or even the BBC, it seems) really understanding how it works is risky at best, whether it’s with the iPhone’s summaries of news headlines in notifications or Google’s AI summaries at the top of search engine results pages. Even if the vast majority of summaries are perfectly accurate, there will always be some users who see inaccurate information.

These summaries are read by so many millions of people that the scale of errors will always be a problem, almost no matter how comparatively accurate the models get.
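As a back-of-the-envelope illustration (the numbers here are hypothetical, not Apple’s): a model that errs on just 1 in every 1,000 summaries, applied to 10 million notifications a day, would still produce roughly 10,000 garbled headlines daily, each one reaching a reader who may not know to second-guess it.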

We wrote at length a few weeks ago about how the Apple Intelligence rollout seemed rushed, counter to Apple’s usual focus on quality and user experience. However, with current technology, there is no amount of refinement to this feature that Apple could have done to reach a zero percent error rate with these notification summaries.

We’ll see how well Apple does making its users understand that the summaries may be wrong, but making all iPhone users truly grok how and why the feature works this way would be a tall order.

Apple will update iOS notification summaries after BBC headline mistake Read More »


Disney makes antitrust problem go away by buying majority stake in Fubo

Fubo’s about-face

Fubo’s merger with Disney represents a shocking about-face for the sports-streaming provider, which previously had raised alarms (citing Citi research) about the Venu partners’ combined 54 percent share of the US sports rights market—ESPN (26.8 percent), Fox (17.3 percent), and WBD (9.9 percent). Fubo successfully got a preliminary injunction against Venu, the planned Disney, Fox, and WBD sports-streaming joint venture, in August, and a trial was scheduled for October 2025.

Fubo CEO David Gandler said in February that Disney, Fox, and WBD “are erecting insurmountable barriers that will effectively block any new competitors.

“Each of these companies has consistently engaged in anticompetitive practices that aim to monopolize the market, stifle any form of competition, create higher pricing for subscribers, and cheat consumers from deserved choice,” Gandler also said at the time.

Now, set to be a Disney company, Fubo is singing a new tune, with its announcement claiming that the merger “will enhance consumer choice by making available a broad set of programming offerings.”

In a statement today, Gandler added that the merger will allow Fubo to “provide consumers with greater choice and flexibility” and “to scale effectively,” while adding that the deal “strengthens Fubo’s balance sheet” and sets Fubo up for “positive cash flow.”

Ars Technica reached out to Fubo about its previously publicized antitrust and anticompetitive concerns, whether those concerns have been addressed, and the new concern that it settled its lawsuit to serve its own business needs rather than to resolve the consumer-choice problems it had raised. Jennifer Press, Fubo SVP of communications, responded to our questions with a statement, saying in part:

We filed an antitrust suit against the Venu Sports partners last year because that product was intended to be exclusive. As its partners announced last year, consumers would only have access to the Venu content package from Venu, which would limit choice and competitive pricing.

The definitive agreement that Fubo signed with Disney today will actually bring more choice to the market. As part of the deal, Fubo extended carriage agreements with Disney and also Fox, enabling Fubo to create a new Sports and Broadcast service and other genre-based content packages. Additionally, as the antitrust litigation has been settled, the Venu Sports partners can choose to launch that product if they wish. The launch of these bundles will enhance consumer choice by making available a broad set of programming offerings.

“… a total deception”

Some remain skeptical about Disney buying out a company that was suing it over antitrust concerns.

Disney makes antitrust problem go away by buying majority stake in Fubo Read More »


AMD’s new laptop CPU lineup is a mix of new silicon and new names for old silicon

AMD’s CES announcements include a tease about next-gen graphics cards, a new flagship desktop CPU, and a modest refresh of its processors for handheld gaming PCs. But the company’s largest announcement, by volume, is about laptop processors.

Today the company is expanding the Ryzen AI 300 lineup with a batch of updated high-end chips with up to 16 CPU cores and some midrange options for cheaper Copilot+ PCs. AMD has repackaged some of its high-end desktop chips for gaming laptops, including the first Ryzen laptop CPU with 3D V-Cache enabled. And there’s also a new-in-name-only Ryzen 200 series, another repackaging of familiar silicon to address lower-budget laptops.

Ryzen AI 300 is back, along with high-end Max and Max+ versions

Ryzen AI is back, with Max and Max+ versions that include huge integrated GPUs. Credit: AMD

We came away largely impressed by the initial Ryzen AI 300 processors in August 2024, and new processors being announced today expand the lineup upward and downward.

AMD is announcing the Ryzen AI 7 350 and Ryzen AI 5 340 today, along with identically specced Pro versions of the same chips with a handful of extra features for large businesses and other organizations.

Midrange Ryzen AI processors should expand Copilot+ features into somewhat cheaper x86 PCs. Credit: AMD

The 350 includes eight CPU cores split evenly between large Zen 5 cores and smaller, slower but more efficient Zen 5C cores, plus a Radeon 860M with eight integrated graphics cores (down from a peak of 16 for the Ryzen AI 9). The 340 has six CPU cores, again split evenly between Zen 5 and Zen 5C, and a Radeon 840M with four graphics cores. But both have the same 50 TOPS NPUs as the higher-end Ryzen AI chips, qualifying both for the Copilot+ label.

For consumers, AMD is launching three high-end chips across the new “Ryzen AI Max+” and “Ryzen AI Max” families. Compared to the existing Strix Point-based Ryzen AI processors, Ryzen AI Max+ and Max include more CPU cores, and all of their cores are higher-performing Zen 5 cores, with no Zen 5C cores mixed in. The integrated graphics also get significantly more powerful, with as many as 40 cores built in—these chips seem to be destined for larger thin-and-light systems that could benefit from more power but don’t want to make room for a dedicated GPU.

AMD’s new laptop CPU lineup is a mix of new silicon and new names for old silicon Read More »


AMD launches new Ryzen 9000X3D CPUs for PCs that play games and work hard

AMD’s batch of CES announcements this year includes just two new products for desktop PC users: the new Ryzen 9 9950X3D and 9900X3D. Both will be available at some point in the first quarter of 2025.

Both processors include additional CPU cores compared to the 9800X3D that launched in November. The 9900X3D includes 12 Zen 5 CPU cores with a maximum clock speed of 5.5 GHz, and the 9950X3D includes 16 cores with a maximum clock speed of 5.7 GHz. Both include 64MB of extra L3 cache compared to the regular 9900X and 9950X, bringing total cache to 140MB for the 9900X3D and 144MB for the 9950X3D; games in particular tend to benefit disproportionately from this extra cache memory.
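Those totals add up once you count every cache level (the per-core figures here are standard Zen 5 numbers rather than something AMD spelled out in this announcement): the 9950X3D’s 16 cores carry 16MB of L2 alongside 64MB of on-die L3 plus the 64MB of stacked 3D V-Cache, for 144MB in all, while the 9900X3D’s 12 cores contribute 12MB of L2 on top of the same 128MB of L3, for 140MB.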

But the 9950X3D and 9900X3D aren’t being targeted at people who build PCs primarily to game—the company says their game performance is usually within 1 percent of the 9800X3D. These processors are for people who want peak game performance when they’re playing something but also need lots of CPU cores for chewing on CPU-heavy workloads during the workday.

AMD estimates that the Ryzen 9 9950X3D is about 8 percent faster than the 7950X3D when playing games and about 13 percent faster in professional content creation apps. These modest gains are more or less in line with the small performance bump we’ve seen in other Ryzen 9000-series desktop CPUs.

AMD launches new Ryzen 9000X3D CPUs for PCs that play games and work hard Read More »


AMD’s new Ryzen Z2 CPUs boost gaming handhelds, if you buy the best one

Nearly two years ago, AMD announced its first Ryzen Z1 processors. These were essentially the same silicon that AMD was putting in high-end thin-and-light laptops but tuned specifically for handheld gaming PCs like the Steam Deck and Asus ROG Ally X. As part of its CES announcements today, AMD is refreshing that lineup with three processors, all slated for an undisclosed date in the first quarter of 2025.

Although they’re all part of the “Ryzen Z2” family, each of these three chips is quite different under the hood, and some of them are newer than others.

The Ryzen Z2 Extreme is what you’d expect from a refresh: a straightforward upgrade to both the CPU and GPU architectures of the Ryzen Z1 Extreme. Based on the same “Strix Point” architecture as the Ryzen AI 300 laptop processors, the Z2 Extreme includes eight CPU cores (three high-performance Zen 5 cores, five smaller and efficiency-optimized Zen 5C cores) and an unnamed RDNA 3.5 GPU with 16 of AMD’s compute units (CUs). These should both provide small bumps to CPU and GPU performance relative to the Ryzen Z1 Extreme, which used eight Zen 4 CPU cores and 12 RDNA 3 GPU cores.

AMD’s full Ryzen Z2 lineup, which obfuscates the fact that these three chips are all using different CPU and GPU architectures. Credit: AMD

The Ryzen Z2, on the other hand, appears to be exactly the same chip as the Ryzen Z1 Extreme, but with a different name. Like the Z1 Extreme, it has eight Zen 4 cores with a 5.1 GHz maximum clock speed and an RDNA 3 GPU with 12 cores.

AMD’s new Ryzen Z2 CPUs boost gaming handhelds, if you buy the best one Read More »


The end of an era: Dell will no longer make XPS computers

After ditching the traditional Dell XPS laptop look in favor of the polarizing design of the XPS 13 Plus released in 2022, Dell is killing the XPS branding that has become a mainstay for people seeking a sleek, respectable, well-priced PC.

This means that there won’t be any more Dell XPS clamshell ultralight laptops, 2-in-1 laptops, or desktops. Dell is also killing its Latitude, Inspiron, and Precision branding, it announced today.

Moving forward, Dell computers will carry one of three brands: Dell, which the company’s announcement today described as “designed for play, school, and work”; Dell Pro, “for professional-grade productivity”; or Dell Pro Max, “designed for maximum performance.” Dell will release Dell- and Dell Pro-branded displays, accessories, and “services,” it said. The Pro Max line will feature laptops and desktop workstations with professional-grade GPU capabilities as well as a new thermal design.

Dell claims its mid-tier Pro line emphasizes durability, “withstanding three times as many hinge cycles, drops, and bumps from regular use as competitor devices.” The statement is based on “internal analysis of multiple durability tests performed” on the Dell Pro 14 Plus (released today) and HP EliteBook 640 G11 laptops conducted in November. Also based on internal testing conducted in November, Dell claims its Pro PCs boost “airflow by 20 percent, making these Dell’s quietest commercial laptops ever.”

Within each line are base models, Plus models, and Premium models. In a blog post, Kevin Terwilliger, VP and GM of commercial, consumer, and gaming PCs at Dell, explained that Plus models offer “the most scalable performance” and Premium models offer “the ultimate in mobility and design.”

Credit: Dell

By those naming conventions, old-time Dell users could roughly equate XPS laptops with new Dell Premium products.

“The Dell portfolio will expand later this year to include more AMD and Snapdragon X Series processor options,” Terwilliger wrote. “We will also introduce new devices in the base tier, which offers everyday devices that provide effortless use and practical design, and the Premium tier, which continues the XPS legacy loved by consumers and prosumers alike.”

Meanwhile, Dell Pro base models feel like Dell’s now-defunct Latitude lineup, while its Precision workstations may best align with 2025’s Dell Pro Max offerings.

The end of an era: Dell will no longer make XPS computers Read More »


New Radeon RX 9000 GPUs promise to fix two of AMD’s biggest weaknesses

Nvidia is widely expected to announce specs, pricing, and availability information for the first few cards in the new RTX 50 series at its CES keynote later today. AMD isn’t ready to get as specific about its next-generation graphics lineup yet, but the company shared a few morsels today about its next-generation RDNA 4 graphics architecture and its 9000-series graphics cards.

AMD mentioned that RDNA 4 cards were on track to launch in early 2025 during a recent earnings call, acknowledging that shipments of current-generation RX 7000-series cards were already slowing down. CEO Lisa Su said then that the architecture would include “significantly higher ray-tracing performance” as well as “new AI capabilities.”

AMD’s RDNA 4 launch will begin with the 9070 XT and 9070, which are both being positioned as upper-midrange GPUs like the RTX 4070 series. Credit: AMD

The preview the company is sharing today offers few details beyond those surface-level proclamations. The compute units will be “optimized,” AI compute will be “supercharged,” ray-tracing will be “improved,” and media encoding quality will be “better,” but AMD isn’t providing hard numbers for anything at this point. The RDNA 4 launch will begin with the Radeon RX 9070 XT and 9070 at some point in Q1 of 2025, and AMD will provide more information “later in the quarter.”

The GPUs will be built on a 4 nm process, presumably from TSMC, an upgrade from the 5 nm process used for the 7000-series GPUs and the 6 nm process used for the separate memory controller chiplets (AMD hasn’t said whether RDNA 4 GPUs are using chiplets; the 7000 series used them for high-end GPUs but not lower-end ones).

FSR 4 will be AMD’s first ML-powered upscaling algorithm, similar to Nvidia’s DLSS, Intel’s XeSS (on Intel GPUs), and Apple’s MetalFX. This generally results in better image quality but more restrictive hardware requirements. Credit: AMD

We do know that AMD’s next-generation upscaling algorithm, FidelityFX Super Resolution 4, has been “developed for AMD RDNA 4,” and it will be the first version of FSR to use machine learning-powered upscaling. Nvidia’s DLSS and Intel’s XeSS (when running on Intel GPUs) also use ML-powered upscaling, which generally leads to better results but also has stricter hardware requirements than older versions of FSR. AMD isn’t saying whether FSR 4 will work on any older Radeon cards.

New Radeon RX 9000 GPUs promise to fix two of AMD’s biggest weaknesses Read More »


HDMI 2.2 will require new “Ultra96” cables, whenever we have 8K TVs and content

We’ve all had a good seven years to figure out why our interconnected devices refused to work properly with the HDMI 2.1 specification. The HDMI Forum announced at CES today that it’s time to start considering new headaches: HDMI 2.2 will require new cables for full compatibility, though it uses the same physical connectors. Tiny QR codes are suggested to help identify the compliant cables, however.

The new specification is named HDMI 2.2, but compatible cables will carry an “Ultra96” marker to indicate that they can carry 96Gbps, double the 48Gbps of HDMI 2.1b. The Forum anticipates this will result in higher resolutions and refresh rates and a “next-gen HDMI Fixed Rate Link.” The Forum cited “AR/VR/MR, spatial reality, and light field displays” as benefiting from the increased bandwidth, along with medical imaging and machine vision.

A bit closer to home, the HDMI 2.2 specification also includes “Latency Indication Protocol” (LIP), which can help improve audio and video synchronization. This should matter most in “multi-hop” systems, such as home theater setups with soundbars or receivers. Illustrations offered by the Forum show LIP working to correct delays on headphones, soundbars connected through ARC or eARC, and mixed systems where some components may be connected to a TV, while others go straight into the receiver.

HDMI 2.2 will require new “Ultra96” cables, whenever we have 8K TVs and content Read More »