Tech

AMD’s FSR 4 upscaling is exclusive to 90-series Radeon GPUs, won’t work on other cards

AMD’s new Radeon RX 90-series cards and the RDNA4 architecture make their official debut on March 5, and a new version of AMD’s FidelityFX Super Resolution (FSR) upscaling technology is coming along with them.

FSR and Nvidia’s Deep Learning Super Sampling (DLSS) upscalers have the same goal: to take a lower-resolution image rendered by your graphics card, bump up the resolution, and fill in the gaps between the natively rendered pixels to make an image that looks close to natively rendered without making the GPU do all that rendering work. These upscalers can make errors, and they won’t always look quite as good as a native-resolution image. But they’re both nice alternatives to living with a blurry, non-native-resolution picture on an LCD or OLED display.
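
To make the gap-filling idea concrete, here’s a toy Python sketch (an illustration only, not how FSR or DLSS actually work): a nearest-neighbor upscale simply repeats each rendered pixel, which is exactly the blocky baseline that the ML-based upscalers improve on by predicting plausible detail for the in-between pixels.

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    # Naive upscale: repeat each rendered pixel along both axes.
    # FSR and DLSS instead predict plausible detail for the new pixels,
    # typically using motion vectors and prior frames as extra input.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 1080p frame (1920x1080, RGB) becomes a 4K frame (3840x2160, RGB):
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(upscale_nearest(frame, 2).shape)  # (2160, 3840, 3)
```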

FSR and DLSS are especially useful for older or cheaper 1080p- or 1440p-capable GPUs connected to a 4K monitor, where you’d otherwise have to decide between a sharp 4K image and a playable frame rate; they’re also useful for hitting higher frame rates at lower resolutions, which can be handy for high-refresh-rate gaming monitors.

But unlike past versions of FSR, FSR 4 upscales images using machine-learning algorithms that run on dedicated hardware newly added to RDNA4 and the RX 90-series graphics cards. This mirrors Nvidia’s strategy with DLSS, which has always leveraged the tensor cores found in RTX GPUs to run machine-learning models that deliver superior image quality for upscaled and AI-generated frames. If you don’t have an RDNA4 GPU, you can’t use FSR 4.

Commercials are still too loud, say “thousands” of recent FCC complaints

Streaming ads could get muzzled, too

As you may have noticed—either through the text of this article or your own ears—the Calm Act doesn’t apply to streaming services. Because the law doesn’t cover commercials viewed over the Internet, online services that provide access to broadcast channels, like YouTube TV and Sling, don’t have to follow the rules, even though they distribute the same content as linear TV providers.

For years, this made sense. The majority of TV viewing occurred through broadcast, cable, or satellite access. Further, services like Netflix and Amazon Prime Video used to be considered safe havens from constant advertisements. But today, streaming services are more popular than ever and have grown to love ads, which have become critical to most platforms’ business models. Further, many streaming services are airing more live events. These events, like sports games, show commercials to all subscribers, even those with a so-called “ad-free” subscription.

Separate from the Calm Act violation complaints, the FCC noted this month that other recent complaints it has seen illustrate “growing concern with the loudness of commercials on streaming services and other online platforms.” If the FCC decides to apply Calm Act rules to the web, it would need to create new methods for ensuring compliance, it said.
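
For context, broadcast compliance under the Calm Act is measured with the ITU-R BS.1770 loudness algorithm, via the ATSC A/85 recommended practice that the FCC’s rules incorporate. The Python sketch below is a deliberately simplified stand-in that uses plain RMS level, just to illustrate the kind of ad-versus-program comparison at issue; it is not the actual compliance measurement.

```python
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    # Toy loudness proxy: RMS level in dBFS. Real compliance uses
    # ITU-R BS.1770 loudness (K-weighting plus gating), not plain RMS.
    rms = np.sqrt(np.mean(np.square(samples.astype(np.float64))))
    return 20 * np.log10(max(rms, 1e-12))

rng = np.random.default_rng(0)
program = 0.05 * rng.standard_normal(48_000)  # a second of quiet program audio
ad = 0.40 * rng.standard_normal(48_000)       # a much hotter commercial
print(f"program: {rms_dbfs(program):.1f} dBFS, ad: {rms_dbfs(ad):.1f} dBFS")
```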

Nielsen’s most recent data on how people watch TV, shown as a bar graph of viewing trends by platform. Credit: Nielsen

The FCC didn’t specify what’s behind the spike in consumers’ commercial complaints. Perhaps with declining audiences, traditional TV providers thought it would be less likely for anyone to notice and formally complain about Ozempic ads shouting at them. Twelve years have passed since the rules took effect, so it’s also possible that organizations are getting lackadaisical about ensuring compliance or have dwindling resources.

With Americans spending similar amounts of time—if not longer—watching TV online versus via broadcast, cable, and satellite, the Calm Act would have to take on the web to maximize its effectiveness. The streaming industry is young, though, and operates differently than linear TV distribution, presenting new regulatory challenges.

Sergey Brin says AGI is within reach if Googlers work 60-hour weeks

Sergey Brin co-founded Google in the 1990s along with Larry Page, but both stepped away from day-to-day work at Google in 2019. However, the AI boom tempted Brin back to the office, and he thinks everyone should follow his example. In a new internal memo, Brin has advised employees to be in the office every weekday so Google can win the AI race.

Just returning to the office isn’t enough for the Google co-founder. According to the memo seen by The New York Times, Brin says Googlers should try to work 60 hours per week to support the company’s AI efforts. That works out to 12 hours per day, Monday through Friday, which Brin calls the “sweet spot of productivity.” This is not a new opinion for Brin.

Brin, like many in Silicon Valley, is seemingly committed to the dogma that the current trajectory of generative AI will lead to the development of artificial general intelligence (AGI). Such a thinking machine would be head and shoulders above current AI models, which can only do a good impression of thinking. An AGI would understand concepts and think more like a human being, which some would argue makes it a conscious entity.

To hear Brin tell it, Google is in the best position to make this AI computing breakthrough. He cites the company’s strong workforce of programmers and data scientists as the key, but he also believes the team must strive for greater efficiency by using Google’s own Gemini AI tools as much as possible. Oh, and don’t work from home.

Brin and Page handed the reins to current CEO Sundar Pichai in 2015, so Brin’s pronouncement doesn’t necessarily signal a change to the company’s current in-office policy. Google still operates on a hybrid model, with workers expected to be in the office three days per week. But as a founder, Brin’s voice carries weight. We reached out to Google to ask if the company intends to reassess its policies, but a Google rep said there are no planned changes to the return-to-office mandate.

Microsoft brings an official Copilot app to macOS for the first time

It took a couple of years, but it happened: Microsoft released its Copilot AI assistant as an application for macOS. The app is available as a free download from the Mac App Store right now.

It was previously available as a Mac app, sort of: for a short time, Microsoft’s iPad Copilot app could run on Macs, but that access was quickly disabled. Mac users have also had a web-based interface for a while.

Copilot initially launched on the web and in web browsers (Edge, obviously) before making its way onto iOS and Android last year. It has since been slotted into all sorts of first-party Microsoft software, too.

The Copilot app joins a trend already spearheaded by OpenAI’s ChatGPT and Anthropic’s Claude of bringing native AI chat apps to the macOS platform. Like those, it enables an OS-wide keyboard shortcut to invoke a field for starting a chat at any time. It offers most of the same use cases: translating or summarizing text, answering questions, preparing reports and documents, solving coding problems or generating scripts, brainstorming, and so on.

Copilot uses OpenAI models like GPT-4 and DALL-E 3 (yes, it generates images, too) alongside others, like Microsoft’s in-house Prometheus. Microsoft has invested significant amounts of money in OpenAI in recent years, and OpenAI’s models serve as the basis for Copilot and basically everything else in Microsoft’s AI strategy.

Like Apple’s own built-in generative AI features, Copilot for macOS requires an M1 or later Mac. It also requires users to run macOS 14 or later.

New AI text diffusion models break speed barriers by pulling words from noise

These diffusion models reportedly deliver performance comparable to similarly sized conventional models while generating text faster. LLaDA’s researchers report that their 8 billion parameter model performs similarly to LLaMA3 8B across various benchmarks, with competitive results on tasks like MMLU, ARC, and GSM8K.

However, Mercury claims dramatic speed improvements. Their Mercury Coder Mini scores 88.0 percent on HumanEval and 77.1 percent on MBPP—comparable to GPT-4o Mini—while reportedly operating at 1,109 tokens per second compared to GPT-4o Mini’s 59 tokens per second. This represents roughly a 19x speed advantage over GPT-4o Mini while maintaining similar performance on coding benchmarks.

Mercury’s documentation states its models run “at over 1,000 tokens/sec on Nvidia H100s, a speed previously possible only using custom chips” from specialized hardware providers like Groq, Cerebras, and SambaNova. When compared to other speed-optimized models, the claimed advantage remains significant—Mercury Coder Mini is reportedly about 5.5x faster than Gemini 2.0 Flash-Lite (201 tokens/second) and 18x faster than Claude 3.5 Haiku (61 tokens/second).
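
Those multipliers follow directly from the reported throughput figures:

```python
mercury = 1109  # Mercury Coder Mini, reported tokens/second
rivals = {"GPT-4o Mini": 59, "Gemini 2.0 Flash-Lite": 201, "Claude 3.5 Haiku": 61}
for name, tps in rivals.items():
    print(f"Mercury vs. {name}: {mercury / tps:.1f}x")
# GPT-4o Mini: 18.8x, Gemini 2.0 Flash-Lite: 5.5x, Claude 3.5 Haiku: 18.2x
```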

Opening a potential new frontier in LLMs

Diffusion models do involve some trade-offs. They typically need multiple forward passes through the network to generate a complete response, unlike traditional models that need just one pass per token. However, because diffusion models process all tokens in parallel, they achieve higher throughput despite this overhead.
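
A minimal sketch of that trade-off, with toy stand-ins for the networks (nothing here is a real model API; it only shows where the forward passes go):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def toy_model(tokens):
    # Stand-in for an autoregressive LLM: one forward pass, one next token.
    return random.choice(VOCAB)

def toy_denoise(tokens):
    # Stand-in for a diffusion denoiser: one forward pass refines every
    # position at once. (A real denoiser keeps re-estimating low-confidence
    # tokens on each step; this toy just fills the masks.)
    return [random.choice(VOCAB) if t == MASK else t for t in tokens]

def autoregressive_decode(model, prompt, n_tokens):
    tokens = list(prompt)
    for _ in range(n_tokens):          # n_tokens forward passes total
        tokens.append(model(tokens))
    return tokens

def diffusion_decode(denoise, prompt, n_tokens, n_steps=8):
    tokens = list(prompt) + [MASK] * n_tokens
    for _ in range(n_steps):           # fixed pass count, regardless of length
        tokens = denoise(tokens)       # all tokens processed in parallel
    return tokens

print(autoregressive_decode(toy_model, ["the"], 4))
print(diffusion_decode(toy_denoise, ["the"], 4))
```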

Inception thinks the speed advantages could impact code completion tools where instant response may affect developer productivity, conversational AI applications, resource-limited environments like mobile applications, and AI agents that need to respond quickly.

If diffusion-based language models maintain quality while improving speed, they might change how AI text generation develops. So far, AI researchers have been open to new approaches.

Independent AI researcher Simon Willison told Ars Technica, “I love that people are experimenting with alternative architectures to transformers, it’s yet another illustration of how much of the space of LLMs we haven’t even started to explore yet.”

On X, former OpenAI researcher Andrej Karpathy wrote about Inception, “This model has the potential to be different, and possibly showcase new, unique psychology, or new strengths and weaknesses. I encourage people to try it out!”

Questions remain about whether larger diffusion models can match the performance of models like GPT-4o and Claude 3.7 Sonnet, whether they can produce reliable results without many confabulations, and whether the approach can handle increasingly complex simulated reasoning tasks. For now, these models may offer an alternative for smaller AI language models that doesn’t seem to sacrifice capability for speed.

You can try Mercury Coder yourself on Inception’s demo site, and you can download code for LLaDA or try a demo on Hugging Face.

Google will finally fix awesome (but broken) song detection feature for Pixels

Google’s Pixel phones include numerous thoughtful features you don’t get on other phones, like Now Playing. This feature can identify background music from the lock screen, but unlike some similar song identifiers, it works even without an Internet connection. Sadly, it has been broken for months. There is some hope, though. Google has indicated that a fix is ready for deployment, and Pixel users can expect to see it in a future OS update.

First introduced in 2017, Now Playing uses a cache of thousands of audio fingerprints to identify songs you might encounter in your daily grind. Since it works offline, it’s highly efficient and preserves your privacy. Now Playing isn’t a life-changing addition to the mobile experience, but it’s damn cool.
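
For a sense of how offline song matching can work at all, the classic approach hashes pairs of spectrogram peaks and looks those hashes up in a local database. The sketch below shows that general Shazam-style idea in Python; Google has described Now Playing’s actual matcher as a compact on-device neural network that produces fingerprint embeddings, so treat this as a simplified stand-in rather than the real pipeline.

```python
import hashlib
import numpy as np
from scipy import signal

def fingerprint(samples: np.ndarray, rate: int) -> set:
    # Toy fingerprint: take the strongest frequency bin in each spectrogram
    # frame, then hash consecutive peak pairs along with their time gap.
    freqs, times, spec = signal.spectrogram(samples, rate)
    peaks = [(int(np.argmax(spec[:, t])), t) for t in range(spec.shape[1])]
    return {
        hashlib.sha1(f"{f1}|{f2}|{t2 - t1}".encode()).hexdigest()[:16]
        for (f1, t1), (f2, t2) in zip(peaks, peaks[1:])
    }

def best_match(query: set, db: dict) -> str:
    # Return the cached song whose fingerprint overlaps the query most.
    return max(db, key=lambda song: len(db[song] & query))
```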

That makes it all the stranger that Google appears to have broken Now Playing with the release of Android 15 (or possibly a Play Services update around the same time) and has left it that way for months. Before that update, Now Playing would regularly list songs on the lock screen and offer enhanced search for songs it couldn’t ID offline. It was obvious to Pixel fans when Now Playing stopped listening last year, and despite a large volume of online complaints, Google has seemingly dragged its feet.

Now the overclock-curious can buy a delidded AMD 9800X3D, with a warranty

The integrated heat spreaders put on CPUs at the factory are not the most thermally efficient material you could have on there, but what are you going to do—rip it off at the risk of killing your $500 chip with your clumsy hands?

Yes, that is precisely what enthusiastic overclockers have been doing for years: delidding, or decapping (though the latter term is used less often in overclocking circles), chips through various DIY techniques, allowing them to replace AMD and Intel’s common-denominator shells with liquid metal or other advanced thermal interface materials.

As you might imagine, it can be nerve-wracking, and things can go wrong in just one second or one degree Celsius. In one overclocking forum thread, a seasoned expert noted that the heat spreader (IHS) on Intel’s Core Ultra 200S needs to be heated above 165°C for the indium (the thermal transfer material) to loosen. But the glue holding the IHS also loosens at this temperature, and there are only 1.5–2 millimeters of clearance between the IHS and surface-mounted components, so it’s easy for that metal IHS to slide off and take out a vital component with it. It’s quite the Saturday afternoon hobby.

That is the typical overclocking bargain: You assume the risk, you void your warranty, but you remove one more barrier to peak performance. Now, though, Thermal Grizzly, led by that same previously mentioned expert, Roman “der8auer” Hartung, has a new bargain to present. His firm is delidding AMD’s Ryzen 9800X3D CPUs with its own ovens and specialty tools, then selling them with two-year warranties that cover manufacturer’s defects and “normal overclocking damage,” but not mechanical damage.

Google’s free Gemini Code Assist arrives with sky-high usage limits

Generative AI has wormed its way into myriad products and services, some of which benefit more from these tools than others. Coding with AI has proven to be a better application than most, with individual developers and big companies leaning heavily on generative tools to create and debug programs. Now, indie developers have access to a new AI coding tool free of charge—Google has announced that Gemini Code Assist is available to everyone.

Gemini Code Assist was first released late last year as an enterprise tool, and the new version has almost all the same features. While you can use the standard Gemini or another AI model like ChatGPT to work on coding questions, Gemini Code Assist was designed to fully integrate with the tools developers are already using. Thus, you can tap the power of a large language model (LLM) without jumping between windows. With Gemini Code Assist connected to your development environment, the model will remain aware of your code and ready to swoop in with suggestions. The model can also address specific challenges per your requests, and you can chat with the model about your code, provided it’s a public domain language.

At launch, Gemini Code Assist pricing started at $45 per month per user. Now, it costs nothing for individual developers, and the limits on the free tier are generous. Google says the product offers 180,000 code completions per month, which it claims is enough that even prolific professional developers won’t run out. This is in stark contrast to Microsoft’s GitHub Copilot, which offers similar features with a limit of just 2,000 code completions and 50 Copilot chat messages per month. Google did the math to point out Gemini Code Assist offers 90 times the completions of Copilot.
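
That multiplier is easy enough to check against the stated limits:

```python
gemini_free = 180_000  # Google's stated monthly code-completion limit
copilot_free = 2_000   # GitHub Copilot's free-tier limit
print(gemini_free // copilot_free)  # 90
```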

Framework gives its 13-inch Laptop another boost with Ryzen AI 300 CPU update

Framework added two new systems to its lineup today: the convertible Framework Laptop 12 and a gaming-focused (but not-very-upgradeable) mini ITX Framework Desktop PC. But it’s continuing to pay attention to the Framework Laptop 13, too—the company’s first upgrade-friendly, repairable laptop is getting another motherboard update, this time with AMD’s latest Ryzen AI 300-series processors. It’s Framework’s second AMD Ryzen-based board, following late 2023’s Ryzen 7040-based refresh.

The new boards are available for preorder today and will begin shipping in April. Buyers new to the Framework ecosystem can buy a laptop, which starts at $1,099 as a pre-built system with an OS, storage, and RAM included, or $899 for a build-it-yourself kit where you add those components yourself. Owners of Framework Laptops going all the way back to the original 11th-generation Intel version can also buy a bare board to drop into their existing systems; these start at $449.

Framework will ship six- and eight-core Ryzen AI 300 processors on lower-end configurations, most likely the Ryzen AI 5 340 and Ryzen AI 7 350 that AMD announced at CES in January. These chips include integrated Radeon 840M and 860M GPUs with four and eight graphics cores, respectively.

People who want to use the Framework Laptop as a thin-and-light portable gaming system will want to go for the top-tier Ryzen AI 9 HX 370, which includes 12 CPU cores and a Radeon 890M with 16 GPU cores. We’ve been impressed by this chip’s performance when we’ve seen it in other systems, though Framework’s may be a bit slower because it’s using slower socketed DDR5 memory instead of soldered-down RAM. This is a trade-off that Framework’s target customers are likely to be fine with.

The Ryzen AI 300-series motherboard. Framework says an updated heatpipe design helps to keep things cool. Credit: Framework

One of the issues with the original Ryzen Framework board was that the laptop’s four USB-C ports didn’t all support the same kinds of expansion cards, limiting the laptop’s customizability somewhat. That hasn’t totally gone away with the new version—the two rear USB ports support full 40Gbps USB4 speeds, while the front two are limited to 10Gbps USB 3.2—but all four ports now support display output, up from three on the older board.

Framework’s first desktop is a strange—but unique—mini ITX gaming PC

In Framework’s first-party case, the PC starts at $1,099, which gets you a Ryzen AI Max 385 (that’s an 8-core CPU and 32 GPU cores) and 32GB of RAM. A fully loaded configuration with a Ryzen AI Max+ 395 (16 CPU cores, 40 GPU cores) and 128GB of RAM will run you $1,999. There’s also an in-between build with the Ryzen AI Max+ 395 chip and 64GB of RAM for $1,599. If you just want the mini ITX board to put in a case of your choosing, that starts at $799.

None of these are impulse buys, exactly, but they’re priced a bit better than a gaming-focused mini PC like the Asus ROG NUC, which starts at nearly $1,300 as of this writing and comes with half as much RAM. It’s also priced well compared to what you can get out of a DIY mini ITX PC based on integrated graphics—the Ryzen 7 8700G, an AM5 ITX motherboard, and 32GB of DDR5 can all be had for around $500 collectively before you add a case, power supply, or SSD, but for considerably slower performance.

The volume of the Framework Desktop’s first-party case is just 4.5 liters—for reference, the SSUPD Meshroom S is 14.9 liters, a fairly middle-of-the-road volume for an ITX case that can fit a full-size GPU. An Xbox Series X is about 6.9 liters, and the Xbox Series S is 4.4 liters. Apple’s Mac Studio is about 3.7 liters. The Framework Desktop isn’t breaking records, but it’s definitely tiny.

Despite the non-upgradeability of the main components, Framework has tried to stick to existing standards where it can by using a flex ATX power supply, ATX headers on the motherboard, regular 120 mm fans that can be changed out, and of course the mini ITX form factor itself. Credit: Framework

So the pitch for the system is easy: You get a reasonably powerful 1440p-capable gaming and workstation PC inside a case the size of a small game console. “If the Series S could run Windows, I’d buy it in a second” is a thought that has occurred to me, so I can see the appeal, even though it costs at least three times as much.

But it does feel like a strange fit for Framework, given that it’s so much less upgradeable than most PCs. The CPU and GPU are one piece of silicon, and they’re soldered to the motherboard. The RAM is also soldered down and not upgradeable once you’ve bought it, setting it apart from nearly every other board Framework sells.

Framework Laptop 12 is a cheaper, more colorful take on a repairable laptop PC

Framework has been selling and upgrading the upgrade-and-repair-friendly Framework Laptop 13 for nearly four years now, and in early 2024 it began shipping the larger, more powerful Framework Laptop 16. At a product event today, the company showed off what it called “an early preview” of its third laptop design: the convertible, budget-focused Framework Laptop 12.

This addition to Framework’s lineup centers on a 12.2-inch, 1920×1200 convertible touchscreen that flips around to the back with a flexible hinge, a la Lenovo’s long-running Yoga design. Framework CEO Nirav Patel said the company originally designed the system “with students in mind,” and to that end it comes in five colors and uses a two-tone plastic body with an internal metal frame, rather than the mostly aluminum exterior Framework has used for the 13 and 16. Framework will also sell the laptop with an optional stylus.

For better or worse, the Framework Laptop 12 appears to be its own separate system, with motherboards, accessories, and a refresh schedule distinct from the 13-inch laptop’s. While the Laptop 13 already offers boards based on first-generation Intel Core Ultra and (as of today) AMD Ryzen AI 300 processors, the first Framework Laptop 12 motherboard will use Intel’s 13th-generation Core i3 and i5 processors, originally launched back in late 2022. Despite the age of these chips, Framework claims the laptop will be “unusually powerful for its class.”

Qualcomm and Google team up to offer 8 years of Android updates

How long should your phone last?

This is just the latest attempt from Google and its partners to address Android’s original sin. Google’s open approach to Android roped in numerous OEMs to create and sell hardware, all of which were managing their update schemes individually and relying on hardware vendors to provide updated drivers and other components—which they usually didn’t. As a result, even expensive flagship phones could quickly fall behind and miss out on features and security fixes.

Google undertook successive projects over the last decade to improve Android software support. For example, Project Mainline in Android 10 introduced system-level modules that Google can update via Play Services without a full OS update. This complemented Project Treble, which was originally released in Android 8.0 Oreo. Treble separated the Android OS from the vendor implementation, giving OEMs the ability to update Android without changing the low-level code.
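
As a concrete trace of the Treble split, devices expose it as a system property. Here is a minimal check in Python, assuming the adb tool is installed and a device is connected with USB debugging enabled:

```python
import subprocess

def treble_enabled() -> bool:
    # Treble-compliant devices (Android 8.0+) report "true" for this property.
    out = subprocess.run(
        ["adb", "shell", "getprop", "ro.treble.enabled"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() == "true"

print(treble_enabled())
```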

The legacy of Treble is still improving outcomes, too. Qualcomm cites Project Treble as a key piece of its update-extending initiative. The combination of consistent vendor layer support and fresh kernels will, according to Qualcomm, make it faster and easier for OEMs to deploy updates. However, they don’t have to.

Update development is still the responsibility of device makers, with Google implementing only a loose framework of requirements. That means companies can build with Qualcomm’s most powerful chips and say “no thank you” to the extended support window. OnePlus has refused to match Samsung and Google’s current seven-year update guarantee, noting that pushing new versions of Android to older phones can cause performance and battery life issues—something we saw in action when Google’s Pixel 4a suffered a major battery life hit with the latest update.

Samsung has long pushed the update envelope, and it has a tight relationship with Qualcomm to produce Galaxy-optimized versions of its processors. So it won’t be surprising if Samsung tacks on another year to its update commitment in its next phone release. Google, too, emphasizes updates on its Pixel phones. Google doesn’t use Qualcomm chips, but it will probably match any move Samsung makes. The rest of the industry is anyone’s guess—eight years of updates is a big commitment, even with Qualcomm’s help.
