
Intel has finally tracked down the problem making 13th- and 14th-gen CPUs crash

crash no more? —

But microcode update can’t fix CPUs that are already crashing or unstable.

Intel’s Core i9-13900K.

Andrew Cunningham

For several months, Intel has been investigating reports that high-end 13th- and 14th-generation desktop CPUs (mainly, but not exclusively, the Core i9-13900K and 14900K) were crashing during gameplay. Intel partially addressed the issue by insisting that third-party motherboard makers adhere to Intel’s recommended default power settings in their motherboards, but the company said it was still working to identify the root cause of the problem.

The company announced yesterday that it has wrapped up its investigation and that a microcode update to fix the problem should be shipping out to motherboard makers in mid-August “following full validation.” Microcode updates like this generally require a BIOS update, so exactly when the patch hits your specific motherboard will be up to the company that made it.

Intel says that an analysis of defective processors “confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor.” In other words, the CPU is receiving too much power, which is degrading stability over time.

If you’re using a 13th- or 14th-generation CPU and you’re not noticing any problems, the microcode update should prevent your processor from degrading. But if you’re already noticing stability problems, Tom’s Hardware reports that “the bug causes irreversible degradation of the impacted processors” and that the fix will not be able to reverse the damage that has already happened.

There has been no mention of 12th-generation processors, including the Core i9-12900K, suffering from the same issues. The 12th-gen processors use Intel’s Alder Lake architecture, whereas the high-end 13th- and 14th-gen chips use a modified architecture called Raptor Lake that comes with higher clock speeds, a bit more cache memory, and additional E-cores.

Tom’s Hardware also says that Intel will continue to replace CPUs that are exhibiting problems and that the microcode update shouldn’t noticeably affect CPU performance.

Intel also separately confirmed speculation that there was an oxidation-related manufacturing issue with some early 13th-generation Core processors but that the problems were fixed in 2023 and weren’t related to the crashes and instability that the microcode update is fixing.



AMD brags about Ryzen 9000’s efficiency, extends AM5 support guarantee to 2027

still processing —

Ryzen 9000 will also have more overclocking headroom, for those interested.

AMD’s Ryzen 9000 launch lineup.

AMD

AMD has released more information about its next-generation Ryzen 9000 processors and their underlying Zen 5 CPU architecture this week ahead of their launch at the end of July. The company reiterated some of the high-level performance claims it made last month—low- to mid-double-digit performance increases over Zen 4 in both single- and multi-threaded tasks. But AMD also bragged about the chips’ power efficiency compared to Ryzen 7000, pointing out that they would reduce power usage despite increasing performance.

Prioritizing power efficiency

AMD said that it has lowered the default power limits for three of the four Ryzen 9000 processors—the Ryzen 5 9600X, the Ryzen 7 9700X, and the Ryzen 9 9900X—compared to the Ryzen 7000 versions of those same chips. Despite the lower default power limit, all three of those chips still boast double-digit performance improvements over their predecessors. AMD also says that Ryzen 9000 CPU temperatures have been reduced by up to 7° Celsius compared to Ryzen 7000 chips at the same settings.

  • Ryzen 9000’s low-double-digit performance gains are coming despite the fact that the company has lowered most of its chips’ default TDPs. These TDP settings determine how much power one of AMD’s CPUs can use (though not necessarily how much they will use).

  • Because the TDPs have been lowered, AMD claims that Ryzen 9000 chips will have a bit more overclocking headroom than Ryzen 7000.

It’s worth noting that we generally tested the original Ryzen 7000 CPUs at multiple power levels, and for most chips—most notably the 7600X and 7700X—we found that the increased TDP levels didn’t help performance all that much in the first place. The TDP lowering in the Ryzen 9000 may be enabled partly by architectural improvements or a newer manufacturing process, but AMD already had some headroom to lower those power usage numbers without affecting performance too much. TDP is also best considered as a power limit rather than the actual amount of power that a CPU will use for any given workload, even when fully maxed out.
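That distinction between a power cap and actual draw can be sketched with a toy model (the wattage figures below are invented for illustration, not AMD specifications):

```python
def power_draw(requested_watts: float, tdp_limit_watts: float) -> float:
    """A TDP-style limit caps how much power the CPU may draw;
    light workloads draw less than the cap regardless of the limit."""
    return min(requested_watts, tdp_limit_watts)

# A light workload requesting 45 W draws 45 W whether the limit is 105 W or 65 W.
assert power_draw(45, 105) == power_draw(45, 65) == 45

# Only an all-core workload requesting more than the limit actually hits the cap.
print(power_draw(140, 105))  # 105
print(power_draw(140, 65))   # 65
```

This is why lowering a TDP from a level the chip rarely reached costs little performance in practice.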

Still, we appreciate AMD’s focus on power efficiency for the Ryzen 9000 series, especially because Intel’s high-end 13900K and 14900K have been plagued by crashes that seem to be related to high power use and improper motherboard configurations. Intel has yet to release a definitive statement about what the issue is, but it’s plausible (maybe even likely!) that it’s a side effect of these chips being pushed to their thermal and electrical limits.

Ryzen 9000 CPUs can still be pushed further by users who want to raise those power limits and try overclocking—AMD points out that the chips all have more headroom for Precision Boost Overdrive automated overclocking, precisely because the default power limits leave a little more performance on the table. But as long as the chips still perform well at their default settings, people who just want to build a computer without doing a ton of tinkering will be better served by chips that run cooler and use less power.

More time on the clock for socket AM5

  • AMD has committed to supporting the AM5 socket until “2027+,” two more years than the “2025+” it promised back in late 2022.

  • Ryzen 9000 will launch alongside several marginally updated chipsets, though existing AM5 boards will be able to use these chips after a BIOS update.

Another small but noteworthy change buried in AMD’s slide decks, and good news for anyone who has already invested in a Socket AM5 motherboard or has plans to do so in the near future: AMD has officially extended the socket’s guaranteed support timeline to at least 2027 and is leaving the door open to support past that point. That’s a two-year extension from the “2025+” timeline that the company laid out in late 2022.

Of course, “support” could mean a lot of different things. AMD is still officially supporting the AM4 socket with new CPU releases and continues to lean on AM4 as a budget platform as socket AM5 costs have remained stubbornly high. But these “new” releases have all been repackagings of various iterations of the late-2020-era Ryzen 5000 CPUs, rather than truly new products. Still, AMD’s formal commitment to socket AM5’s longevity makes it a bit easier to recommend for people who upgrade their CPUs regularly.

Ryzen 9000 chips will be able to pop into any current AM5 motherboard after a BIOS update. The company is also announcing a range of 800-series chipsets for new motherboards, though these generally bring only minor improvements over the 600-series chipsets they replace. The X870E and X870 are guaranteed to have USB 4 ports, and the X870 supports PCIe 5.0 speeds for the GPU slot, where the X670 topped out at PCIe 4.0. The lower-end B850 chipset still supports PCIe 5.0 speeds for SSDs and PCIe 4.0 speeds for GPUs, while an even lower-end B840 chipset is restricted to PCIe 3.0 speeds for everything. The B840 also won’t support CPU overclocking, though it can still overclock RAM.

Listing image by AMD



Intel details new Lunar Lake CPUs that will go up against AMD, Qualcomm, and Apple

more lakes —

Lunar Lake returns to a more conventional-looking design for Intel.

A high-level breakdown of Intel’s next-gen Lunar Lake chips, which preserve some of Meteor Lake’s changes while reverting others.

Intel

Given its recent manufacturing troubles, a resurgent AMD, an incursion from Qualcomm, and Apple’s shift from customer to competitor, it’s been a rough few years for Intel’s processors. Computer buyers have more viable options than they have in many years, and in many ways the company’s Meteor Lake architecture was more interesting as a technical achievement than it was as an upgrade for previous-generation Raptor Lake processors.

But even given all of that, Intel still provides the vast majority of PC CPUs—nearly four-fifths of all computer CPUs sold are Intel’s, according to recent analyst estimates from Canalys. The company still casts a long shadow, and what it does still helps set the pace for the rest of the industry.

Enter its next-generation CPU architecture, codenamed Lunar Lake. We’ve known about Lunar Lake for a while—Intel reminded everyone it was coming when Qualcomm upstaged it during Microsoft’s Copilot+ PC reveal—but this month at Computex the company is going into more detail ahead of availability sometime in Q3 of 2024.

Lunar Lake will be Intel’s first processor with a neural processing unit (NPU) that meets Microsoft’s Copilot+ PC requirements. But looking beyond the endless flow of AI news, it also includes upgraded architectures for its P-cores and E-cores, a next-generation GPU architecture, and some packaging changes that simultaneously build on and revert many of the dramatic changes Intel made for Meteor Lake.

Intel didn’t have more information to share on Arrow Lake, the architecture that will bring Meteor Lake’s big changes to socketed desktop motherboards for the first time. But Intel says that Arrow Lake is still on track for release in Q4 of 2024, and it could be announced at Intel’s annual Innovation event in late September.

Building on Meteor Lake

Lunar Lake continues to use a mix of P-cores and E-cores, which allow the chip to handle a mix of low-intensity and high-performance workloads without using more power than necessary.

Intel

Lunar Lake shares a few things in common with Meteor Lake, including a chiplet-based design that combines multiple silicon dies into one big one with Intel’s Foveros packaging technology. But in some ways Lunar Lake is simpler and less weird than Meteor Lake, with fewer chiplets and a more conventional design.

Meteor Lake’s components were spread across four tiles: a compute tile that was mainly for the CPU cores, a TSMC-manufactured graphics tile for the GPU rendering hardware, an IO tile to handle things like PCI Express and Thunderbolt connectivity, and a grab-bag “SoC” tile with a couple of additional CPU cores, the media encoding and decoding engine, display connectivity, and the NPU.

Lunar Lake only has two functional tiles, plus a small “filler tile” that seems to exist solely so that the Lunar Lake silicon die can be a perfect rectangle once it’s all packaged together. The compute tile combines all of the processor’s P-cores and E-cores, the GPU, the NPU, the display outputs, and the media encoding and decoding engine. And the platform controller tile handles wired and wireless connectivity, including PCIe and USB, Thunderbolt 4, and Wi-Fi 7 and Bluetooth 5.4.

This is essentially the same split that Intel has used for laptop chips for years and years: one chipset die and one die for the CPU, GPU, and everything else. It’s just that now, those two chips are combined in the same processor package rather than mounted separately on the motherboard. In retrospect it seems like some of Meteor Lake’s most noticeable design departures—the division of GPU-related functions among different tiles, the presence of additional CPU cores inside of the SoC tile—were things Intel had to do to work around the fact that another company was actually manufacturing most of the GPU. Given the opportunity, Intel has returned to a more recognizable assemblage of components.

Intel is shifting to on-package RAM for Lunar Lake, something Apple also uses for its M-series chips.

Intel

Another big packaging change is that Intel is integrating RAM into the CPU package for Lunar Lake, rather than having it installed separately on the motherboard. Intel says this uses 40 percent less power, since it shortens the distance data needs to travel. It also saves motherboard space, which can either be used for other components, to make systems smaller, or to make more room for battery. Apple also uses on-package memory for its M-series chips.

Intel says that Lunar Lake chips can include up to 32GB of LPDDR5x memory. The downside is that this on-package memory precludes the usage of separate Compression-Attached Memory Modules, which combine many of the benefits of traditional upgradable DIMM modules and soldered-down laptop memory.



For the second time in two years, AMD blows up its laptop CPU numbering system

this again —

AMD reverses course on “decoder ring” numbering system for laptop CPUs.

AMD’s Ryzen 9 AI 300 series is a new chip and a new naming scheme.

AMD

Less than two years ago, AMD announced that it was overhauling its numbering scheme for laptop processors. Each digit in its four-digit CPU model numbers picked up a new meaning, which, with the help of a detailed reference sheet, promised to inform buyers of exactly what it was they were buying.

One potential issue with this, as we pointed out at the time, was that this allowed AMD to change over the first and most important of those four digits every single year that it decided to re-release a processor, regardless of whether that chip actually included substantive improvements or not. Thus a “Ryzen 7730U” from 2023 would look two generations newer than a Ryzen 5800U from 2021, despite being essentially identical.

AMD is partially correcting this today by abandoning the self-described “decoder ring” naming system and resetting it to something more conventional.

For its new Ryzen AI laptop processors, codenamed “Strix Point,” AMD is still using the same broad Ryzen 3/5/7/9 branding to communicate general performance tier, plus a one- or two-letter suffix to denote power level and positioning (U for ultraportables, HX for higher-performance chips, and so on). A new three-digit processor number will inform buyers of the chip’s generation in the first digit and denote the specific SKU using the last two digits.

AMD is changing how it numbers its laptop CPUs again.

AMD

In other words, the company is essentially hitting the undo button.

Like Intel, AMD is shifting from four-digit numbers to three digits. The Strix Point processor numbers will start with the 300 series, which AMD says is because this is the third generation of Ryzen laptop processors with a neural processing unit (NPU) included. Current 7040-series and 8040-series processors with NPUs are not being renamed retroactively, and AMD plans to stop using the 7000- and 8000-series numbering for processor introductions going forward.

AMD wouldn’t describe exactly how it would approach CPU model numbers for new products that used older architectures but did say that new processors that didn’t meet the 40+ TOPS requirement for Microsoft’s Copilot+ program would simply use the “Ryzen” name instead of the new “Ryzen AI” branding. That would include older architectures with slower NPUs, like the current 7040 and 8040-series chips.

Desktop CPUs are, once again, totally unaffected by this change. Desktop processors’ four-digit model numbers and alphabetic suffixes generally tell you all you need to know about their underlying architecture; the new Ryzen 9000 desktop CPUs and the Zen 5 architecture were also announced today.

It seems like a lot of work to do to end up basically where we started, especially when the people at AMD who make and market the desktop chips have been getting by just fine with older model numbers for newly released products when appropriate. But to be fair to AMD, there just isn’t a great way to do processor model numbers in a simple and consistent way, at least not given current market realities:

  • PC OEMs that seem to demand or expect “new” product from chipmakers every year, even though chip companies tend to take somewhere between one and three years to release significantly updated designs.
  • The fact that casual and low-end users don’t actually benefit a ton from performance enhancements, keeping older chips viable for longer.
  • Different subsections of the market that must be filled with slightly different chips (consider chips with vPro versus similar chips without it).
  • The need to “bin” chips—that is, disable small parts of a given silicon CPU or GPU die and then sell the results as a lower-end product—to recoup manufacturing costs and minimize waste.

Apple may come the closest to what the “ideal” would probably be—one number for the overarching chip generation (M1, M3, etc.), one word like “Pro” or “Max” to communicate the general performance level, and a straightforward description of the number of CPU and GPU cores included, to leave flexibility for binning chips. But as usual, Apple occupies a unique position: it’s the only company putting its own processors into its own systems, and the company usually only updates a product when there’s something new to put in it, rather than reflexively announcing new models every time another CES or back-to-school season or Windows version rolls around.

In reverting to more traditional model numbers, AMD has at least returned to a system that people who follow CPUs will be broadly familiar with. It’s not perfect, and it leaves plenty of room for ambiguity as the product lineup gets more complicated. But it’s in the same vein as Intel’s rebranding of 13th-gen Core chips, the whole “Intel Processor” thing, or Qualcomm’s unfriendly eight-digit model numbers for its Snapdragon X Plus and Elite chips. AMD’s new nomenclature is a devil, but at least it’s one we know.



Thunderbolt Share simplifies dual-PC workloads—but requires new hardware

Thunderbolt 4 or 5 —

App comes out in June, but you’ll need a PC or dock licensed to use it.

Thunderbolt 5 cable

Intel

Intel this week announced new Thunderbolt software made for connecting two PCs. Thunderbolt Share will require Intel-licensed hardware and is looking to make it simpler to do things like transferring large files from one PC to another or working with two systems simultaneously.

For example, you could use a Thunderbolt cable to connect one laptop to another and then configure the system so that your keyboard, mouse, and monitor work with both computers. Thunderbolt Share also enables dragging and dropping and syncing files between computers.

The app has similar functionality to a KVM switch or apps like PCmover, Logitech Flow, or macOS’ File Sharing and Screen Sharing, which enable wireless file sharing. But Thunderbolt Share comes with Intel-backed Thunderbolt 4 or Thunderbolt 5 speeds (depending on the hardware) and some critical requirements.

In a press briefing, Jason Ziller, VP and GM of Intel’s Client Connectivity Division, said that the speeds would vary by what the user is doing.

“It’s hard to put a number on it,” he said. “But I’d say, generally speaking, probably expect to see around 20 gigabits per second… That’s on a Thunderbolt 4 on a 40 gig link. And then we’ll see higher bandwidth on Thunderbolt 5 [with an] 80 gig link.”

You could use Thunderbolt Share to connect a laptop to a desktop so they can share a monitor, mouse, and keyboard, for example. The systems could also connect via a Thunderbolt dock or Thunderbolt monitor. Ziller told the press that the feature could support FHD screen mirroring at up to 60 frames per second (fps), and higher resolutions would result in lower frame rates.

Per Ziller, the feature could pull some CPU and GPU resources depending on the workload and hardware involved; full video mirroring, for example, would be a more taxing task, he said.

Thunderbolt Share requires Windows 10 or newer to work, but Intel is “exploring” additional OS support for the future, Ziller told the press.

New hardware required

You might be thinking, “Great! I have a Thunderbolt 4 desktop and laptop I’d love to connect right now.” But no hardware you own will officially support Thunderbolt Share, as it requires Intel licensing, which will cost OEMs an extra fee. That means you’ll need a new computer or dock, which Intel says will start releasing this year. Thunderbolt Share will not be part of the Thunderbolt 5 spec, either.

When Ars Technica asked about this limitation, Intel spokesperson Tom Hannaford said, “We focused on partnering with the OEMs to test, validate, and provide support to ensure all new Thunderbolt Share-enabled PCs and accessories meet the performance and quality standards that users expect with Thunderbolt technology. Working with our OEM partners in this way to bring Thunderbolt Share to market will ensure the best possible multi-PC experience for creators, gamers, consumers, and businesses.”

Partners announced this week include Acer, Lenovo, MSI, Razer, Belkin, Kensington, and Plugable, and Intel says there will be more.

“Thunderbolt Share is a more advanced experience than what the baseline Thunderbolt spec should require,” Hannaford said. “That’s why we’re offering it as a value-add feature that OEMs can license for supported hardware going forward rather than requiring they license it as part of the base Thunderbolt spec.”

The Verge reported that Thunderbolt Share “doesn’t strictly require a Thunderbolt-certified computer” or Intel CPU. Ziller told the publication that USB4 and Thunderbolt 3 connections “may work, we just really don’t guarantee it; we won’t be providing support for it.”

Intel’s portrayal of Thunderbolt Share as something that needs rigid testing aligns with the company’s general approach to Thunderbolt. Still, I was able to use a preproduction version of the app without licensed hardware. Using a Thunderbolt 4 cable, the app seemed to work normally, and I moved a 1GB folder with Word documents and some images in about a minute and 15 seconds. Your experience may vary, though. Further, some Macs can link up over a Thunderbolt cable and share files and screens without licensing from Intel.
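As a rough sanity check on those figures (assuming "1GB" means 10^9 bytes, and that the informal 75-second transfer described above is representative), the observed rate works out far below the roughly 20 Gb/s Ziller cited, which is typical for transfers of many small files:

```python
# Observed: ~1 GB of mixed documents moved in about 75 seconds.
size_bits = 1e9 * 8                    # 1 GB expressed in bits
observed_gbps = size_bits / 75 / 1e9   # effective throughput in Gb/s
print(f"observed ≈ {observed_gbps:.2f} Gb/s")   # ≈ 0.11 Gb/s

# At the quoted ~20 Gb/s, the same payload would take under half a second.
print(f"ideal ≈ {size_bits / 20e9:.1f} s")      # ≈ 0.4 s
```

The gap reflects per-file overhead and protocol costs rather than the link itself; bulk transfers of a few large files would land much closer to the quoted number.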

The test version of Thunderbolt Share is temporary, though. Those who want to use the officially supported final version will have to wait until the app’s release in June, as well as for a licensed third-party PC or dock to become available.



OpenAI’s flawed plan to flag deepfakes ahead of 2024 elections


As the US moves toward criminalizing deepfakes—deceptive AI-generated audio, images, and videos that are increasingly hard to discern from authentic content online—tech companies have rushed to roll out tools to help everyone better detect AI content.

But efforts so far have been imperfect, and experts fear that social media platforms may not be ready to handle the ensuing AI chaos during major global elections in 2024—despite tech giants committing to making tools specifically to combat AI-fueled election disinformation. The best AI detection remains observant humans, who, by paying close attention to deepfakes, can pick up on flaws like AI-generated people with extra fingers or AI voices that speak without pausing for a breath.

Among the splashiest tools announced this week, OpenAI shared details today about a new AI image detection classifier that it claims can detect about 98 percent of AI outputs from its own sophisticated image generator, DALL-E 3. It also “currently flags approximately 5 to 10 percent of images generated by other AI models,” OpenAI’s blog said.

According to OpenAI, the classifier provides a binary “true/false” response “indicating the likelihood of the image being AI-generated by DALL·E 3.” A screenshot of the tool shows how it can also be used to display a straightforward content summary confirming that “this content was generated with an AI tool” and includes fields ideally flagging the “app or device” and AI tool used.
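In application code, consuming such a verdict might look like the following sketch. The response shape and field names here are hypothetical, invented for illustration; they are not a documented OpenAI API schema.

```python
# Hypothetical classifier response; all field names are illustrative only.
response = {
    "is_ai_generated": True,   # the binary "true/false" verdict
    "likelihood": 0.98,        # confidence that DALL-E 3 produced the image
    "summary": "This content was generated with an AI tool",
    "app_or_device": "unknown",  # per the screenshot, may be unpopulated
    "tool": "DALL-E 3",
}

def should_label(resp: dict, threshold: float = 0.9) -> bool:
    """Only surface an 'AI-generated' label when confidence clears a threshold."""
    return bool(resp["is_ai_generated"]) and resp["likelihood"] >= threshold

print(should_label(response))  # True
```

A thresholded check like this matters because, by OpenAI's own numbers, the classifier is far less certain about images from other generators than about DALL-E 3 output.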

To develop the tool, OpenAI spent months adding tamper-resistant metadata to “all images created and edited by DALL·E 3” that “can be used to prove the content comes” from “a particular source.” The detector reads this metadata to accurately flag DALL-E 3 images as fake.

That metadata follows “a widely used standard for digital content certification” set by the Coalition for Content Provenance and Authenticity (C2PA), often likened to a nutrition label. And reinforcing that standard has become “an important aspect” of OpenAI’s approach to AI detection beyond DALL-E 3, OpenAI said. When OpenAI broadly launches its video generator, Sora, C2PA metadata will be integrated into that tool as well, OpenAI said.

Of course, this solution is not comprehensive because that metadata could always be removed, and “people can still create deceptive content without this information (or can remove it),” OpenAI said, “but they cannot easily fake or alter this information, making it an important resource to build trust.”

Because OpenAI is all in on C2PA, the AI leader announced today that it would join the C2PA steering committee to help drive broader adoption of the standard. OpenAI will also launch a $2 million fund with Microsoft to support broader “AI education and understanding,” seemingly partly in the hopes that the more people understand about the importance of AI detection, the less likely they will be to remove this metadata.

“As adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse,” OpenAI said. “Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices.”

OpenAI joining the committee “marks a significant milestone for the C2PA and will help advance the coalition’s mission to increase transparency around digital media as AI-generated content becomes more prevalent,” C2PA said in a blog.



Framework’s software and firmware have been a mess, but it’s working on them

The Framework Laptop 13.

Andrew Cunningham

Since Framework showed off its first prototypes in February 2021, we’ve generally been fans of the company’s modular, repairable, upgradeable laptops.

Not that the company’s hardware releases to date have been perfect—each Framework Laptop 13 model has had quirks and flaws that range from minor to quite significant, and the Laptop 16’s upsides struggle to balance its downsides. But the hardware mostly does a good job of functioning as a regular laptop while being much more tinkerer-friendly than your typical MacBook, XPS, or ThinkPad.

But even as it builds new upgrades for its systems, expands sales of refurbished and B-stock hardware as budget options, and promotes the re-use of its products via external enclosures, Framework has struggled with the other side of computing longevity and sustainability: providing up-to-date software.

Driver bundles remain un-updated for years after their initial release. BIOS updates go through long and confusing beta processes, keeping users from getting feature improvements, bug fixes, and security updates. In its community support forums, Framework employees, including founder and CEO Nirav Patel, have acknowledged these issues and promised fixes but have remained inconsistent and vague about actual timelines.

But according to Patel, the company is working on fixing these issues, and it has taken some steps to address them. We spoke to him about the causes of and the solutions to these issues, and the company’s approach to the software side of its efforts to promote repairability and upgradeability.

Promises made

Here’s a case in point: the 12th-generation Intel version of the Framework Laptop 13, which prompted me to start monitoring Framework’s software and firmware updates in the first place.

In November 2022, Patel announced that this model, then the latest version, was getting a nice, free-of-charge spec bump. All four of the laptop’s recessed USB-C ports would now become full-speed Thunderbolt ports. This wasn’t a dramatic functional change, especially for people who were mostly using those ports for basic Framework expansion modules like USB-A or HDMI, but the upgrade opened the door to high-speed external accessories, and all it would need was a BIOS update.

The recessed USB-C ports in the 12th-gen Intel version of the Framework Laptop 13 can be upgraded to fully certified Thunderbolt ports, but only if you’re willing to install one in a long series of still-in-testing beta BIOSes.

Andrew Cunningham

A final version of this BIOS update finally showed up this week, nearly a year and a half later. Up until last week, Framework’s support page for that 12th-gen Intel laptop still said that there was “no new BIOS available” for a laptop that began shipping in the summer of 2022. This factory-installed BIOS, version 3.04, also didn’t include fixes for the LogoFAIL UEFI security vulnerability or any other firmware-based security patches that have cropped up in the last year and a half.

And it’s not just that the updates don’t come out in a timely way; the company has been bad about estimating when they might come out. That old 12th-gen Framework BIOS also didn’t support the 61 WHr battery that the company released in early 2023 alongside the 13th-gen Intel refresh. Framework originally told me that BIOS update would be out in May of 2023. A battery-supporting update for the 11th-gen Intel version was also promised in May 2023; it came out this past January.

Framework has been trying, but it keeps running into issues. A beta 3.06 BIOS update with the promised improvements for the 12th-gen Intel Framework Laptop was posted back in December of 2022, but a final version was never released. The newer 3.08 BIOS beta entered testing in January 2024 but still gave users some problems. Users would go for weeks or months without any communication from anyone at Framework.

The result is multiple long forum threads of frustrated users asking for updates, interspersed with not-untrue but unsatisfying responses from Framework employees (some version of “we’re a small company” is one of the most common).


Intel’s “Gaudi 3” AI accelerator chip may give Nvidia’s H100 a run for its money

Adventures in Matrix Multiplication —

Intel claims 50% more speed when running AI language models vs. the market leader.

An Intel handout photo of the Gaudi 3 AI accelerator.

On Tuesday, Intel revealed a new AI accelerator chip called Gaudi 3 at its Vision 2024 event in Phoenix. With strong claimed performance while running large language models (like those that power ChatGPT), the company has positioned Gaudi 3 as an alternative to Nvidia’s H100, a popular data center GPU that has been subject to shortages, though those shortages are reportedly easing somewhat.

Compared to Nvidia’s H100 chip, Intel projects a 50 percent faster training time on Gaudi 3 for both OpenAI’s GPT-3 175B LLM and the 7-billion parameter version of Meta’s Llama 2. In terms of inference (running the trained model to get outputs), Intel claims that its new AI chip delivers 50 percent faster performance than H100 for Llama 2 and Falcon 180B, which are both relatively popular open-weights models.
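For a sense of what a claim like that means in practice, here's a quick back-of-the-envelope calculation. The 100-hour baseline is a hypothetical figure, and it assumes "50 percent faster" means 1.5x throughput rather than half the wall-clock time (Intel's phrasing doesn't specify):

```python
# Hypothetical baseline: what a "50 percent faster" training claim implies
# for wall-clock time, assuming it means 1.5x throughput.
h100_hours = 100.0                 # made-up H100 training time
gaudi3_hours = h100_hours / 1.5    # 1.5x throughput -> ~2/3 the time

savings_pct = (1 - gaudi3_hours / h100_hours) * 100
print(f"{gaudi3_hours:.1f} hours ({savings_pct:.0f}% less time)")
```

In other words, a 50 percent speedup shaves off a third of the training time, not half, which is worth keeping in mind when comparing vendor claims.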

Intel is targeting the H100 because of its high market share, but it isn’t Nvidia’s most powerful AI accelerator in the pipeline. The since-announced H200 and Blackwell B200 surpass the H100 on paper, but neither chip is out yet (the H200 is expected in the second quarter of 2024—basically any day now).

Meanwhile, the aforementioned H100 supply issues have been a major headache for tech companies and AI researchers who have to fight for access to any chips that can train AI models. This has led several tech companies like Microsoft, Meta, and OpenAI (rumor has it) to seek their own AI-accelerator chip designs, although that custom silicon is typically manufactured by either Intel or TSMC. Google has its own line of tensor processing units (TPUs) that it has been using internally since 2015.

Given those issues, Intel’s Gaudi 3 may be a potentially attractive alternative to the H100 if Intel can hit an ideal price (which Intel has not provided, but an H100 reportedly costs around $30,000–$40,000) and maintain adequate production. AMD also manufactures a competitive range of AI chips, such as the AMD Instinct MI300 Series, that sell for around $10,000–$15,000.

Gaudi 3 performance

An Intel handout featuring specifications of the Gaudi 3 AI accelerator.

Intel says the new chip builds upon the architecture of its predecessor, Gaudi 2, by featuring two identical silicon dies connected by a high-bandwidth connection. Each die contains a central cache memory of 48 megabytes, surrounded by four matrix multiplication engines and 32 programmable tensor processor cores, bringing the total cores to 64.

The chipmaking giant claims that Gaudi 3 delivers double the AI compute performance of Gaudi 2 using 8-bit floating-point infrastructure, which has become crucial for training transformer models. The chip also offers a fourfold boost for computations using the BFloat16 number format. Gaudi 3 also features 128GB of the less expensive HBM2e memory (which may contribute to price competitiveness) and 3.7TB/s of memory bandwidth.

Since data centers are well-known to be power hungry, Intel emphasizes the power efficiency of Gaudi 3, claiming 40 percent greater inference power efficiency than Nvidia’s H100 across the Llama 7B, Llama 70B, and Falcon 180B models. Eitan Medina, chief operating officer of Intel’s Habana Labs, attributes this advantage to Gaudi’s large-matrix math engines, which he claims require significantly less memory bandwidth than competing architectures.

Gaudi vs. Blackwell

An Intel handout photo of the Gaudi 3 AI accelerator.

Last month, we covered the splashy launch of Nvidia’s Blackwell architecture, including the B200 GPU, which Nvidia claims will be the world’s most powerful AI chip. It seems natural, then, to compare what we know about Nvidia’s highest-performing AI chip to the best of what Intel can currently produce.

For starters, Gaudi 3 is being manufactured using TSMC’s N5 process technology, according to IEEE Spectrum, narrowing the gap between Intel and Nvidia in terms of semiconductor fabrication technology. The upcoming Nvidia Blackwell chip will use a custom N4P process, which reportedly offers modest performance and efficiency improvements over N5.

Gaudi 3’s use of HBM2e memory (as we mentioned above) is notable compared to the more expensive HBM3 or HBM3e used in competing chips, offering a balance of performance and cost-efficiency. This choice seems to emphasize Intel’s strategy to compete not only on performance but also on price.

As for raw performance comparisons between Gaudi 3 and the B200, those can’t be made until both chips have been released and benchmarked by third parties.

As the race to power the tech industry’s thirst for AI computation heats up, IEEE Spectrum notes that the next generation of Intel’s Gaudi chip, code-named Falcon Shores, remains a point of interest. It also remains to be seen whether Intel will continue to rely on TSMC’s technology or leverage its own foundry business and upcoming nanosheet transistor technology to gain a competitive edge in the AI accelerator market.


Intel is investigating game crashes on top-end Core i9 desktop CPUs

i’m giving her all she’s got —

Crashes may be related to CPUs running above their specified power limits.

Intel's high-end Core i9-13900K and 14900K are reportedly having crashing problems in some games.

Andrew Cunningham

If you own a recent high-end Intel desktop CPU and you’ve been running into weird game crashes lately, you’re not alone.

Scattered reports from Core i9-13900K and i9-14900K users over the last couple of months have pointed to processor power usage as a possible source of crashes even in relatively undemanding games like Fortnite. Games like Hogwarts Legacy, Remnant 2, Alan Wake 2, Horizon Zero Dawn, The Last of Us Part 1, and Outpost: Infinity Siege have also reportedly been affected; the problem primarily seems to affect titles made with Epic’s Unreal Engine. Intel said in a statement to ZDNet Korea (via The Verge) that it’s looking into the problems, escalating them from an “isolated issue” to something that may be more widespread and could require a more systemic fix.

Related CPUs like the i9-13900KF, i9-14900KF, i9-13900KS, and i9-14900KS may be affected, too, since they’re all the same basic silicon. Some user reports have also indicated that the i7-13700K and i7-14700K series may also be affected.

“Intel is aware of reports regarding Intel Core 13th and 14th Gen unlocked desktop processors experiencing issues with certain workloads,” an Intel spokesperson told Ars. “We’re engaged with our partners and are conducting analysis of the reported issues.”

While Intel hasn’t indicated what it thinks could be causing the issue, support documents from Epic Games and other developers have suggested that the processors’ power settings are to blame, recommending that users change their BIOS settings or manually restrict their processors’ speed with tools like Intel’s Extreme Tuning Utility (XTU). Most enthusiast motherboards will set the power limits on Intel’s processors to be essentially infinite, squeezing out a bit more performance (especially for i7 and i9 chips) at the expense of increased power use and heat.

Epic suggests using a BIOS power setting called “Intel Fail Safe” on Asus, MSI, and Gigabyte motherboards—its name makes it sound like some kind of low-power safe mode, but it’s most likely just setting the processors’ power limits to Intel’s specified defaults. This could result in somewhat reduced performance, particularly when all CPU cores are active at the same time. But we and other reviewers have seen sharply diminishing returns when letting these chips use more power. This can even be a problem with Intel’s stock settings—the recently announced i9-14900KS can use as much as 31 percent more power than the standard i9-14900K while delivering just 1 or 2 percent faster performance.
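To put those diminishing returns in concrete terms, here's a rough performance-per-watt calculation using the 31 percent and 2 percent figures cited above; the absolute power and performance baselines are placeholder numbers, not measurements:

```python
# Rough perf-per-watt arithmetic for the i9-14900KS vs. i9-14900K figures
# cited above. Baselines are normalized placeholders, not measured values.
base_power = 100.0            # normalized i9-14900K package power
base_perf = 100.0             # normalized i9-14900K performance

ks_power = base_power * 1.31  # "as much as 31 percent more power"
ks_perf = base_perf * 1.02    # "just 1 or 2 percent faster"

drop_pct = (1 - (ks_perf / ks_power) / (base_perf / base_power)) * 100
print(f"Perf-per-watt drop: {drop_pct:.1f}%")   # roughly a 22 percent drop
```

That efficiency loss is why clamping power limits back to Intel's defaults costs relatively little performance.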

If power limits are to blame, the good news is that users can adjust these in the short term and that motherboard makers could fix the problem in the long run by tweaking their default settings in future BIOS updates.

Updated April 9, 2024, at 2:12 pm to add Intel spokesperson statement.


Intel, Microsoft discuss plans to run Copilot locally on PCs instead of in the cloud

the ai pc —

Companies are trying to make the “AI PC” happen with new silicon and software.

The basic requirements for an AI PC, at least when it's running Windows.

Intel

Microsoft said in January that 2024 would be the year of the “AI PC,” and we know that AI PCs will include a few hardware components that most Windows systems currently do not include—namely, a built-in neural processing unit (NPU) and Microsoft’s new Copilot key for keyboards. But so far we haven’t heard a whole lot about what a so-called AI PC will actually do for users.

Microsoft and Intel are starting to talk about a few details as part of an announcement from Intel about a new AI PC developer program that will encourage software developers to leverage local hardware to build AI features into their apps.

The main news comes from Tom’s Hardware, confirming that AI PCs would be able to run “more elements of Copilot,” Microsoft’s AI chatbot assistant, “locally on the client.” Currently, Copilot relies on server-side processing even for small requests, introducing lag that is tolerable if you’re making a broad request for information but less so if all you want to do is change a setting or get basic answers. Running generative AI models locally could also improve user privacy, making it possible to take advantage of AI-infused software without automatically sending information to a company that will use it for further model training.

Right now, Windows doesn’t use local NPUs for much, since most current PCs don’t have them. The Surface Studio webcam features can use NPUs for power-efficient video effects and background replacement, but as of this writing that’s pretty much it. Apple’s and Google’s operating systems both use NPUs for a wider swath of image and audio processing features, including facial recognition and object recognition, OCR, live transcription and translation, and more.

Intel also said that Microsoft will require NPUs in “next-gen AI PCs” to hit speeds of 40 trillion operations per second (TOPS). Intel, AMD, Qualcomm, and others sometimes use TOPS as a high-level performance metric when comparing their NPUs; Intel’s Meteor Lake laptop chips can run 10 TOPS, while AMD’s Ryzen 7040 and 8040 laptop chips hit 10 TOPS and 16 TOPS, respectively.
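A quick comparison against that 40 TOPS bar makes the gap concrete. The TOPS figures are the ones cited in this article (the Snapdragon X number is Qualcomm's claim, discussed below):

```python
# Check each cited NPU's claimed throughput against Microsoft's reported
# 40 TOPS requirement for "next-gen AI PCs."
NEXT_GEN_AI_PC_TOPS = 40

npus = {
    "Intel Meteor Lake": 10,
    "AMD Ryzen 7040": 10,
    "AMD Ryzen 8040": 16,
    "Qualcomm Snapdragon X": 45,  # Qualcomm's claimed figure
}

for name, tops in npus.items():
    verdict = "meets" if tops >= NEXT_GEN_AI_PC_TOPS else "falls short of"
    print(f"{name}: {tops} TOPS {verdict} the 40 TOPS bar")
```

By this simple measure, none of the current x86 laptop NPUs clears the threshold.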

Unfortunately for Intel, the first NPU suitable for powering Copilot locally may come from Qualcomm. The company’s upcoming Snapdragon X processors, long seen as the Windows ecosystem’s answer to Apple’s M-series Mac chips, promise up to 45 TOPS. Rumors suggest that Microsoft will shift the consumer version of its Surface tablet to Qualcomm’s chips after a few years of offering both Intel and Qualcomm options; Microsoft announced a Surface Pro update with Intel’s Meteor Lake chips last week but is only selling it to businesses.

Asus and Intel are offering a NUC with a Meteor Lake CPU and its built-in NPU as an AI development platform.

Intel

All of that said, TOPS are just one simplified performance metric. As when using FLOPS to compare graphics performance, it’s imprecise and won’t capture variations in how each NPU handles different tasks. And the Arm version of Windows still has software and hardware compatibility issues that could continue to hold it back.

As part of its developer program, Intel is also offering an “AI PC development kit” centered on an Asus NUC Pro 14, a mini PC built around Intel’s Meteor Lake silicon. Intel formally stopped making its NUC mini PCs last year, passing the brand and all of its designs off to Asus. Asus is also handling all remaining warranty service and software support for older NUCs designed and sold by Intel. The NUC Pro 14 is one of the first new NUCs announced since the transition, along with the ROG NUC mini gaming PC.


AMD promises big upscaling improvements and a future-proof API in FSR 3.1

upscale upscaling —

API should help more games get future FSR improvements without a game update.

AMD

Last summer, AMD debuted the latest version of its FidelityFX Super Resolution (FSR) upscaling technology. While version 2.x focused mostly on making lower-resolution images look better at higher resolutions, version 3.0 focused on AMD’s “Fluid Motion Frames,” which attempt to boost FPS by generating interpolated frames to insert between the ones that your GPU is actually rendering.
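As an illustration of the core idea behind generated frames, a toy interpolator might look like the sketch below. Real Fluid Motion Frames (and DLSS Frame Generation) rely on motion estimation and optical flow rather than a naive per-pixel blend, so this is only a conceptual simplification:

```python
# Conceptual sketch of frame interpolation: synthesize a frame between two
# rendered ones by blending per-pixel values. Real frame generation uses
# motion vectors/optical flow, not a plain average.
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Linearly blend two frames (flat lists of pixel values) at position t."""
    return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

rendered_1 = [0.0, 0.2, 0.4]   # toy "pixels" from the GPU
rendered_2 = [1.0, 0.2, 0.0]

midpoint = interpolate_frame(rendered_1, rendered_2)
print(midpoint)
```

The naive blend also hints at why artifacts appear: anything that moves between the two rendered frames gets smeared unless the interpolator correctly tracks its motion.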

Today, the company is announcing FSR 3.1, which among other improvements decouples the upscaling improvements in FSR 3.x from the Fluid Motion Frames feature. FSR 3.1 will be available “later this year” in games whose developers choose to implement it.

Fluid Motion Frames and Nvidia’s equivalent DLSS Frame Generation usually work best when a game is already running at a high frame rate, and even then can be more prone to mistakes and odd visual artifacts than regular FSR or DLSS upscaling. FSR 3.0 was an all-or-nothing proposition, but version 3.1 should let you pick and choose what features you want to enable.

It also means you can use FSR 3’s frame generation with other upscalers like DLSS, which is especially useful for 20- and 30-series Nvidia GeForce GPUs that support DLSS upscaling but not DLSS Frame Generation.

“When using FSR 3 Frame Generation with any upscaling quality mode OR with the new ‘Native AA’ mode, it is highly recommended to be always running at a minimum of ~60 FPS before Frame Generation is applied for an optimal high-quality gaming experience and to mitigate any latency introduced by the technology,” wrote AMD’s Alexander Blake-Davies in the post announcing FSR 3.1.

Generally, FSR’s upscaling image quality falls a little short of Nvidia’s DLSS, but FSR 2 closed that gap a bit, and FSR 3.1 goes further. AMD highlights two specific improvements: one for “temporal stability,” which will help reduce the flickering and shimmering effect that FSR sometimes introduces, and one for ghosting reduction, which will reduce unintentional blurring effects for fast-moving objects.

The biggest issue with these new FSR improvements is that they need to be implemented on a game-by-game basis. FSR 3.0 was announced in August 2023, and AMD now trumpets that there are 40 “available and upcoming” games that support the technology, of which just 19 are currently available. There are a lot of big-name AAA titles in the list, but that’s still not many compared to the sum total of all PC games or even the 183 titles that currently support FSR 2.x.

AMD wants to help solve this problem in FSR 3.1 by introducing a stable FSR API for developers, which AMD says “makes it easier for developers to debug and allows forward compatibility with updated versions of FSR.” This may eventually lead to more games getting future FSR improvements for “free,” without the developer’s effort.

AMD didn’t mention any hardware requirements for FSR 3.1, though presumably the company will still support a reasonably wide range of recent GPUs from AMD, Nvidia, and Intel. FSR 3.0 is formally supported on Radeon RX 5000, 6000, and 7000 cards, Nvidia’s RTX 20-series and newer, and Intel Arc GPUs. FSR 3.1 will also bring FSR 3.x features to games that use the Vulkan API, not just DirectX 12, and to the Xbox Game Development Kit (GDK), so it can be used in console titles as well as PC games.


Your current PC probably doesn’t have an AI processor, but your next one might

Intel's Core Ultra chips are some of the first x86 PC processors to include built-in NPUs. Software support will slowly follow.

Intel

When it announced the new Copilot key for PC keyboards last month, Microsoft declared 2024 “the year of the AI PC.” On one level, this is just an aspirational PR-friendly proclamation, meant to show investors that Microsoft intends to keep pushing the AI hype cycle that has put it in competition with Apple for the title of most valuable publicly traded company.

But on a technical level, it is true that PCs made and sold in 2024 and beyond will generally include AI and machine-learning processing capabilities that older PCs don’t. The main addition is the neural processing unit (NPU), a specialized block on recent high-end Intel and AMD CPUs that can accelerate some kinds of generative AI and machine-learning workloads more quickly (or while using less power) than the CPU or GPU could.

Qualcomm’s Windows PCs were some of the first to include an NPU, since the Arm processors used in most smartphones have included some kind of machine-learning acceleration for a few years now (Apple’s M-series chips for Macs all have them, too, going all the way back to 2020’s M1). But the Arm version of Windows is a tiny sliver of the entire PC market; x86 PCs with Intel’s Core Ultra chips, AMD’s Ryzen 7040/8040-series laptop CPUs, or the Ryzen 8000G desktop CPUs will be many mainstream PC users’ first exposure to this kind of hardware.

Right now, even if your PC has an NPU in it, Windows can’t use it for much, aside from webcam background blurring and a handful of other video effects. But that’s slowly going to change, and part of that will be making it relatively easy for developers to create NPU-agnostic apps in the same way that PC game developers currently make GPU-agnostic games.

The gaming example is instructive, because that’s basically how Microsoft is approaching DirectML, its API for machine-learning operations. Though up until now it has mostly been used to run these AI workloads on GPUs, Microsoft announced last week that it was adding DirectML support for Intel’s Meteor Lake NPUs in a developer preview, starting in DirectML 1.13.1 and ONNX Runtime 1.17.

Though it will only run an unspecified “subset of machine learning models that have been targeted for support,” and some models “may not run at all or may have high latency or low accuracy,” it opens the door for more third-party apps to start taking advantage of built-in NPUs. Intel says that Samsung is using Intel’s NPU and DirectML for facial recognition features in its photo gallery app, something that Apple also uses its Neural Engine for in macOS and iOS.

The benefits can be substantial, compared to running those workloads on a GPU or CPU.

“The NPU, at least in Intel land, will largely be used for power efficiency reasons,” Intel Senior Director of Technical Marketing Robert Hallock told Ars in an interview about Meteor Lake’s capabilities. “Camera segmentation, this whole background blurring thing… moving that to the NPU saves about 30 to 50 percent power versus running it elsewhere.”

Intel and Microsoft are both working toward a model where NPUs are treated pretty much like GPUs are today: developers generally target DirectX rather than a specific graphics card manufacturer or GPU architecture, and new features, one-off bug fixes, and performance improvements can all be addressed via GPU driver updates. Some GPUs run specific games better than others, and developers can choose to spend more time optimizing for Nvidia cards or AMD cards, but generally the model is hardware agnostic.

Similarly, Intel is already offering GPU-style driver updates for its NPUs. And Hallock says that Windows already essentially recognizes the NPU as “a graphics card with no rendering capability.”
