Author name: Mike M.


Shadowveil is a stylish, tough single-player auto-battler

One thing Shadowveil: Legend of the Five Rings does well is evoke terror. Not just the terror of an overwhelming mass of dark energy encroaching on your fortress, which is what the story suggests. More so, the terror of hoping your little computer-controlled fighters will do the smart thing, then being forced to watch, helpless, as they are consumed by algorithmic choices, bad luck, your strategies, or some combination of all three.

Shadowveil, the first video game based on the more than 30-year-old Legend of the Five Rings fantasy franchise, is a roguelite auto-battler. You pick your Crab Clan hero (berserker hammer-wielder or tactical support type), train up some soldiers, and assign all of them abilities, items, and buffs you earn as you go. When battle starts, you choose which hex to start your fighters on, double-check your load-outs, then click to start and watch what happens. You win and march on, or you lose and regroup at base camp, buying some upgrades with your last run’s goods.

Shadowveil: Legend of the Five Rings launch trailer.

After roughly seven hours of play, my impression is that Shadowveil could do more to soften its learning curve, but it presents a mostly satisfying mix of overwhelming odds and achievement. What’s irksome now could get patched, and what’s already there is intriguing, especially for the price.

The hard-won path to knowledge

There are almost always more enemies than you have fighters, so it’s your job to find efficiencies, choke points, and good soldier pairings. Credit: Palindrome Interactive

Some necessary disclosure: Auto-battlers are not one of my go-to genres. Having responsibility for all the prep, but no control over what your fighters actually do when facing a glut of enemies, can feel punishing, unfair, and only sometimes motivating enough to try something different. Add that chaos and uncertainty to procedurally generated paths (as in Slay the Spire), and sometimes defeats felt like my fault, sometimes like the random number generator’s doing.

Losing is certainly anticipated in Shadowveil. The roguelite elements are the items and currencies you pick up from victories and carry back after defeat. With these, you can unlock new kinds of fighters, upgrade your squad members, and otherwise grease the skids for future runs. You’ll have to make tough choices here, as there are more than a half-dozen resources, some unique to each upgrade type, and some you might not pick up at all in any given run.



Eerily realistic AI voice demo sparks amazement and discomfort online


Sesame’s new AI voice model features uncanny imperfections, and it’s willing to act like an angry boss.

In late 2013, the Spike Jonze film Her imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved.

“I tried the demo, and it was genuinely startling how human it felt,” wrote one Hacker News user who tested the system. “I’m almost a bit worried I will start feeling emotionally attached to a voice assistant with this level of human-like sound.”

In late February, Sesame released a demo for the company’s new Conversational Speech Model (CSM) that appears to cross over what many consider the “uncanny valley” of AI-generated speech, with some testers reporting emotional connections to the male or female voice assistant (“Miles” and “Maya”).

In our own evaluation, we spoke with the male voice for about 28 minutes, talking about life in general and how it decides what is “right” or “wrong” based on its training data. The synthesized voice was expressive and dynamic, imitating breath sounds, chuckles, interruptions, and even sometimes stumbling over words and correcting itself. These imperfections are intentional.

“At Sesame, our goal is to achieve ‘voice presence’—the magical quality that makes spoken interactions feel real, understood, and valued,” writes the company in a blog post. “We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time. In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.”

Sometimes the model tries too hard to sound like a real human. In one demo posted online by a Reddit user called MetaKnowing, the AI model talks about craving “peanut butter and pickle sandwiches.”

An example of Sesame’s female voice model craving peanut butter and pickle sandwiches, captured by Reddit user MetaKnowing.

Founded by Brendan Iribe, Ankit Kumar, and Ryan Brown, Sesame AI has attracted significant backing from prominent venture capital firms. The company has secured investments from Andreessen Horowitz, led by Anjney Midha and Marc Andreessen, along with Spark Capital, Matrix Partners, and various founders and individual investors.

Browsing reactions to Sesame found online, we found many users expressing astonishment at its realism. “I’ve been into AI since I was a child, but this is the first time I’ve experienced something that made me definitively feel like we had arrived,” wrote one Reddit user. “I’m sure it’s not beating any benchmarks, or meeting any common definition of AGI, but this is the first time I’ve had a real genuine conversation with something I felt was real.” Many other Reddit threads express similar feelings of surprise, with commenters saying it’s “jaw-dropping” or “mind-blowing.”

While that sounds like a bunch of hyperbole at first glance, not everyone finds the Sesame experience pleasant. Mark Hachman, a senior editor at PCWorld, wrote about being deeply unsettled by his interaction with the Sesame voice AI. “Fifteen minutes after ‘hanging up’ with Sesame’s new ‘lifelike’ AI, and I’m still freaked out,” Hachman reported. He described how the AI’s voice and conversational style eerily resembled an old friend he had dated in high school.

Others have compared Sesame’s voice model to OpenAI’s Advanced Voice Mode for ChatGPT, saying that Sesame’s CSM features more realistic voices; some are also pleased that the model in the demo will roleplay angry characters, which ChatGPT refuses to do.

An example argument with Sesame’s CSM created by Gavin Purcell.

Gavin Purcell, co-host of the AI for Humans podcast, posted an example video on Reddit where the human pretends to be an embezzler and argues with a boss. It’s so dynamic that it’s difficult to tell which speaker is the human and which is the AI model. Judging by our own demo, it’s entirely capable of what you see in the video.

“Near-human quality”

Under the hood, Sesame’s CSM achieves its realism with two AI models working together (a backbone and a decoder), based on Meta’s Llama architecture, that process interleaved text and audio. Sesame trained three model sizes, with the largest using 8.3 billion parameters (an 8 billion-parameter backbone plus a 300 million-parameter decoder) on approximately 1 million hours of primarily English audio.

Sesame’s CSM doesn’t follow the traditional two-stage approach used by many earlier text-to-speech systems. Instead of generating semantic tokens (high-level speech representations) and acoustic details (fine-grained audio features) in two separate stages, Sesame’s CSM integrates both steps into a single-stage, multimodal transformer-based model, jointly processing interleaved text and audio tokens to produce speech. OpenAI’s voice model uses a similar multimodal approach.
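The "single-stage, interleaved" idea can be pictured with a toy sketch. Sesame has not published its exact token layout, so the chunk size, token values, and interleaving pattern below are purely illustrative of the general concept: both modalities live in one sequence that a single transformer attends over.

```python
# Illustrative only: Sesame's actual token layout is unpublished.
# Shows text and audio-codec tokens merged into one sequence, tagged
# by modality, rather than processed in two separate stages.

def interleave(text_tokens, audio_tokens, chunk=2):
    """Merge two token streams into a single sequence, tagging each
    token with its modality so one shared model can attend across both."""
    seq = []
    t, a = 0, 0
    while t < len(text_tokens) or a < len(audio_tokens):
        for _ in range(chunk):
            if t < len(text_tokens):
                seq.append(("text", text_tokens[t]))
                t += 1
        for _ in range(chunk):
            if a < len(audio_tokens):
                seq.append(("audio", audio_tokens[a]))
                a += 1
    return seq

seq = interleave([101, 102, 103], [900, 901, 902, 903])
print(seq)
# One mixed-modality sequence:
# [('text', 101), ('text', 102), ('audio', 900), ('audio', 901),
#  ('text', 103), ('audio', 902), ('audio', 903)]
```

In a two-stage pipeline, the audio stream would only be produced after the semantic stage finished; here, a single model sees (and can condition on) both streams at every step.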

In blind tests without conversational context, human evaluators showed no clear preference between CSM-generated speech and real human recordings, suggesting the model achieves near-human quality for isolated speech samples. However, when provided with conversational context, evaluators still consistently preferred real human speech, indicating a gap remains in fully contextual speech generation.

Sesame co-founder Brendan Iribe acknowledged current limitations in a comment on Hacker News, noting that the system is “still too eager and often inappropriate in its tone, prosody and pacing” and has issues with interruptions, timing, and conversation flow. “Today, we’re firmly in the valley, but we’re optimistic we can climb out,” he wrote.

Too close for comfort?

Despite CSM’s technological impressiveness, advancements in conversational voice AI carry significant risks for deception and fraud. The ability to generate highly convincing human-like speech has already supercharged voice phishing scams, allowing criminals to impersonate family members, colleagues, or authority figures with unprecedented realism. But adding realistic interactivity to those scams may take them to another level of potency.

Unlike current robocalls that often contain tell-tale signs of artificiality, next-generation voice AI could eliminate these red flags entirely. As synthetic voices become increasingly indistinguishable from human speech, you may never know who you’re talking to on the other end of the line. It’s inspired some people to share a secret word or phrase with their family for identity verification.

Although Sesame’s demo does not clone a person’s voice, future open source releases of similar technology could allow malicious actors to potentially adapt these tools for social engineering attacks. OpenAI itself held back its own voice technology from wider deployment over fears of misuse.

Sesame sparked a lively discussion on Hacker News about its potential uses and dangers. Some users reported having extended conversations with the two demo voices, with conversations lasting up to the 30-minute limit. In one case, a parent recounted how their 4-year-old daughter developed an emotional connection with the AI model, crying after not being allowed to talk to it again.

The company says it plans to open-source “key components” of its research under an Apache 2.0 license, enabling other developers to build upon their work. Their roadmap includes scaling up model size, increasing dataset volume, expanding language support to over 20 languages, and developing “fully duplex” models that better handle the complex dynamics of real conversations.

You can try the Sesame demo on the company’s website, assuming that it isn’t too overloaded with people who want to simulate a rousing argument.


Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



George Orwell’s 1984 as a ’90s PC game has to be seen to be believed

Quick, to the training sphere!

The Big Brother announcement promised the ability to “interact with everything” and “disable and destroy intrusive tele-screens and spy cameras watching the player’s every move” across “10 square blocks of Orwell’s retro-futuristic world.” But footage from the demo falls well short of that promise, instead covering some extremely basic Riven-style puzzle gameplay (flip switches to turn on the power, use a screwdriver to open a grate, etc.) played from a first-person view.

Sample gameplay from the newly unearthed Big Brother demo.

It all builds up to a sequence where (according to a walk-through included on the demo disc) you have to put on a “zero-g suit” before planting a bomb inside a “zero gravity training sphere” guarded by robots. Sounds like inhabiting the world of the novel to us!

Aside from the brief mentions of the Thought Police and Minipax, the short demo does include a few other incidental nods to its licensed source material, including a “WAR IS PEACE” propaganda banner and an animated screen with the titular Big Brother seemingly looking down on you. Still, the entire gameplay scenario is so far removed from anything in the actual 1984 novel as to make you wonder why they bothered with the license in the first place. Of course, MediaX answers that question in the game’s announcement, predicting that “while the game stands on its own as an entirely new creation in itself and will attract the typical game audience, the ‘Big Brother’ game will undoubtedly also attract a large literary audience.”

We sadly never got the chance to see how that “large literary audience” would have reacted to a game that seemed poised to pervert both the name and themes of 1984 so radically. In any case, this demo can now sit alongside the release of 1984’s Fahrenheit 451 and 1992’s The Godfather: The Action Game on any list of the most questionable game adaptations of respected works of art.



Netflix drops trailer for the Russo brothers’ The Electric State

Millie Bobby Brown and Chris Pratt star in the Netflix original film The Electric State.

Anthony and Joe Russo have their hands full these days with the Marvel films Avengers: Doomsday and Avengers: Secret Wars, slated for 2026 and 2027 releases, respectively. But we’ll get a chance to see another, smaller film from the directors this month on Netflix: The Electric State, adapted from the graphic novel by Swedish artist/designer Simon Stålenhag.

Stålenhag’s stunningly surreal neofuturistic art—featured in his narrative art books, 2014’s Tales from the Loop and 2016’s Things From the Flood—inspired the 2020 eight-episode series Tales From the Loop, in which residents of a rural town find themselves grappling with strange occurrences thanks to the presence of an underground particle accelerator. That adaptation captured the mood and tone of the art that inspired it and received Emmy nominations for cinematography and special visual effects.

The Electric State was Stålenhag’s third such book, published in 2018 and set in a similar dystopian, ravaged landscape. Paragraphs of text, accompanied by larger artworks, tell the story of a teen girl named Michelle who must travel across the country with her robot companion to find her long-lost brother, while being pursued by a federal agent. The Russo brothers acquired the rights early on and initially intended to make the film with Universal, but when the studio decided it would not be giving the film a theatrical release, Netflix bought the distribution rights.

It’s worth noting that the Russo brothers have made several major plot changes from the source material, a decision that did not please Stålenhag’s many fans, particularly since the first-look images revealed that the directors were also adopting more of a colorful 1990s aesthetic than the haunting art that originally inspired their film. Per the official premise:



These hot oil droplets can bounce off any surface

The Hong Kong physicists were interested in hot droplets striking cold surfaces. Prior research showed less of a bouncing effect with heated water droplets, which stick to the surface instead thanks to various factors such as reduced droplet surface tension. The Hong Kong team discovered they could achieve enhanced bouncing by using hot droplets of less volatile liquids—namely, n-hexadecane, soybean oil, and silicone oil, which have lower saturation pressures than water.

Follow the bouncing droplet

The researchers tested these hot droplets (as well as burning and room-temperature droplets) on various solid, cold surfaces, including scratched glass, smooth glass, acrylic surfaces, surfaces with liquid-repellant coatings made from candle soot, and surfaces coated with nanoparticles of varying “wettability” (i.e., how readily liquid spreads across and adheres to the surface). They captured the droplet behavior with both high-speed and thermal cameras, augmented with computer modeling.

The room-temperature droplets stuck to all the surfaces as expected, but the hot and burning droplets bounced. The team found that the bottom of a hot droplet cools faster than the top as it approaches a room-temperature surface, which causes hotter liquid within the droplet to flow from the edges toward the bottom. The air dragged down with that flow forms a thin cushion that prevents the droplet from making contact with the surface, so it bounces off instead. They dubbed the behavior “self-lubricated bouncing.”

“It is now clear that droplet-bouncing strategies are not isolated to engineering the substrate and that the thermophysical properties of droplets themselves are critical,” Jonathan B. Boreyko of Virginia Tech, who was not involved in the research, wrote in an accompanying commentary.

Future applications include improving the combustion efficiency of fuels or developing better fire-retardant coatings. “If burning droplets can’t stick to surfaces, they won’t be able to ignite new materials and allow fires to propagate,” co-author Pingan Zhu said. “Our study could help protect flammable materials like textiles from burning droplets. Confining fires to a smaller area and slowing their spread could give firefighters more time to put them out.”

DOI: Newton, 2025. 10.1016/j.newton.2025.100014  (About DOIs).



The modern era of low-flying satellites may begin this week

Clarity-1 at the pad

Albedo’s first big test may come within the next week with the launch of the “Transporter-13” mission on SpaceX’s Falcon 9 rocket. The company’s first satellite, Clarity-1, weighs 530 kg (1,170 pounds) and is riding atop the stack of ridesharing spacecraft. The mission could launch as soon as this coming weekend from Vandenberg Space Force Base in California.

The Clarity-1 satellite will be dropped off in an orbit between 500 and 600 km and will then attempt to lower itself to an operational orbit 274 km (170 miles) above the planet.

This is a full-up version of Albedo’s satellite design. The spacecraft is larger than a full-size refrigerator, similar to a phone booth, and is intended to operate for a lifetime of about five years, depending on the solar cycle. Clarity-1 is launching near the peak of the 11-year solar cycle, so this could reduce its active lifetime.

Albedo recently won a contract from the US Air Force Research Laboratory that is worth up to $12 million to share VLEO-specific, on-orbit data and provide analysis to support the development of new missions and payloads beyond its own optical sensors.

Serving many different customers

The advantages of such a platform include superior image quality, less congested orbits, and natural debris removal as inoperable satellites are pulled down into Earth’s atmosphere and burnt up.

But what about the drawbacks? In orbits closer to Earth the primary issue is atomic oxygen, which is highly reactive and energetic. There are also plasma eddies and other phenomena that interfere with the operation of satellites and degrade their materials. This makes VLEO far more hazardous than higher altitudes. It’s also more difficult to capture precise imagery.

“The hardest part is pointing and attitude control,” Haddad said, “because that’s already hard in LEO, when you have a big telescope and you’re trying to get a high resolution. Then you put it in VLEO, where the Earth’s rotation beneath is moving faster, and it just exacerbates the problem.”
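The underlying orbital mechanics back up Haddad's point: a circular orbit at 274 km is faster than one at a typical rideshare drop-off altitude, so the scene below the telescope sweeps past more quickly. A rough check using the standard circular-orbit velocity formula (the 274 km figure is from the article; the ~550 km comparison altitude is an assumption for illustration):

```python
# Rough sketch: circular orbital speed v = sqrt(mu / r) at two altitudes.
# Constants are standard values; the 550 km altitude is an assumed
# comparison point, not a figure from Albedo.
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000  # mean Earth radius, m

def circular_velocity(alt_km):
    """Speed of a circular orbit at the given altitude, in m/s."""
    r = R_EARTH + alt_km * 1000
    return math.sqrt(MU / r)

v_vleo = circular_velocity(274)  # Clarity-1's operational orbit
v_leo = circular_velocity(550)   # assumed typical drop-off altitude
print(f"{v_vleo:.0f} m/s in VLEO vs {v_leo:.0f} m/s at 550 km")
# roughly 7.7 km/s vs 7.6 km/s, and the lower orbit is also much
# closer to the ground, so apparent ground motion grows even faster
```

The speed difference looks modest, but at half the altitude the same angular pointing error projects onto a much smaller, faster-moving patch of ground, which is what makes attitude control "already hard in LEO" so much harder in VLEO.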

In the next several years, Albedo is likely to reach a constellation sized at about 24 satellites, but that number will depend on customer demand, Haddad said. Albedo has previously announced about half a dozen of its commercial customers who will task Clarity-1 for various purposes, such as power and pipeline monitoring or solar farm maintenance.

But first, it has to demonstrate its technology.



AI firms follow DeepSeek’s lead, create cheaper models with “distillation”

Thanks to distillation, developers and businesses can access these models’ capabilities at a fraction of the price, allowing app developers to run AI models quickly on devices such as laptops and smartphones.

Developers can use OpenAI’s platform for distillation, learning from the large language models that underpin products like ChatGPT. OpenAI’s largest backer, Microsoft, used GPT-4 to distill its Phi family of small language models as part of a commercial partnership after investing nearly $14 billion into the company.

However, the San Francisco-based start-up has said it believes DeepSeek distilled OpenAI’s models to train its competitor, a move that would be against its terms of service. DeepSeek has not commented on the claims.

While distillation can be used to create high-performing models, experts say the resulting models are more limited.

“Distillation presents an interesting trade-off; if you make the models smaller, you inevitably reduce their capability,” said Ahmed Awadallah of Microsoft Research, who said a distilled model can be designed to be very good at summarising emails, for example, “but it really would not be good at anything else.”
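The core mechanism being described is well established in the research literature as knowledge distillation: a small "student" model is trained to match the temperature-softened output distribution of a large "teacher," not just its top answer. A minimal sketch of that loss, with made-up logits; none of this reflects any specific vendor's pipeline:

```python
# Sketch of the classic knowledge-distillation objective (KL divergence
# between temperature-softened teacher and student distributions).
# Logits and temperature are illustrative values.
import math

def softmax(zs, T=1.0):
    """Softmax with temperature T; higher T flattens the distribution."""
    zs = [z / T for z in zs]
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student): pushes the student to mimic the
    teacher's whole output distribution."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

# The loss shrinks as the student's distribution approaches the teacher's:
far = distill_loss([3.0, 1.0, 0.2], [0.1, 2.0, 1.5])
near = distill_loss([3.0, 1.0, 0.2], [2.9, 1.1, 0.3])
print(far, near)
```

Because the student only ever sees the teacher's behavior on the training distribution (say, email summaries), it inherits the teacher's skill there while losing breadth elsewhere, which is exactly the trade-off Awadallah describes.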

David Cox, vice-president for AI models at IBM Research, said most businesses do not need a massive model to run their products, and distilled ones are powerful enough for purposes such as customer service chatbots or running on smaller devices like phones.

“Any time you can [make it less expensive] and it gives you the right performance you want, there is very little reason not to do it,” he added.

That presents a challenge to the business models of many leading AI firms. Even when developers use distilled models from companies like OpenAI, those models cost far less to run, are less expensive to create, and therefore generate less revenue. Model-makers like OpenAI often charge less for the use of distilled models, as they require less computational load.



Commercials are still too loud, say “thousands” of recent FCC complaints

Streaming ads could get muzzled, too

As you may have noticed—either through the text of this article or your own ears—the Calm Act doesn’t apply to streaming services. Because it doesn’t cover commercials viewed over the Internet, online services providing access to broadcast channels, like YouTube TV and Sling, don’t have to follow the rules, despite distributing the same content as linear TV providers.

For years, this made sense. The majority of TV viewing occurred through broadcast, cable, or satellite access. Further, services like Netflix and Amazon Prime Video used to be considered safe havens from constant advertisements. But today, streaming services are more popular than ever and have grown to love ads, which have become critical to most platforms’ business models. Further, many streaming services are airing more live events. These events, like sports games, show commercials to all subscribers, even those with a so-called “ad-free” subscription.

Separate from the Calm Act violation complaints, the FCC noted this month that other recent complaints it has seen illustrate “growing concern with the loudness of commercials on streaming services and other online platforms.” If the FCC decides to apply Calm Act rules to the web, it would need to create new methods for ensuring compliance, it said.
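At heart, what the rules regulate is a loudness comparison between an ad and the program around it. A deliberately simplified sketch of that comparison follows; note that real Calm Act compliance is measured per the ATSC A/85 recommended practice (ITU-R BS.1770 "LKFS" loudness), not the plain RMS level used here, and the signals below are synthetic:

```python
# Simplified illustration only: real compliance measurement uses
# ITU-R BS.1770 loudness (LKFS), not raw RMS. Synthetic signals.
import math

def rms_dbfs(samples):
    """Root-mean-square level of a signal in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A quiet "program" segment and an ad that is 5x the amplitude.
program = [0.1 * math.sin(i / 10) for i in range(1, 48000)]
ad = [0.5 * math.sin(i / 10) for i in range(1, 48000)]

gap = rms_dbfs(ad) - rms_dbfs(program)
print(f"ad is {gap:.1f} dB louder than the program")  # ~14 dB
```

A jump on that order is the kind of level mismatch viewers complain about; the rules essentially require broadcasters to keep that gap near zero on average.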


Nielsen’s most recent data on how people watch TV. Credit: Nielsen

The FCC didn’t specify what’s behind the spike in consumers’ commercial complaints. Perhaps with declining audiences, traditional TV providers thought it would be less likely for anyone to notice and formally complain about Ozempic ads shouting at them. Twelve years have passed since the rules took effect, so it’s also possible that organizations are getting lackadaisical about ensuring compliance or have dwindling resources.

With Americans spending similar amounts of time—if not longer—watching TV online versus via broadcast, cable, and satellite, The Calm Act would have to take on the web in order to maximize effectiveness. The streaming industry is young, though, and operates differently than linear TV distribution, presenting new regulation challenges.



Microsoft brings an official Copilot app to macOS for the first time

It took a couple of years, but it happened: Microsoft released its Copilot AI assistant as an application for macOS. The app is available for download for free from the Mac App Store right now.

It was previously available briefly as a Mac app, sort of; for a short time, Microsoft’s iPad Copilot app could run on the Mac, but access on the Mac was quickly disabled. Mac users have been able to use a web-based interface for a while.

Copilot initially launched on the web and in web browsers (Edge, obviously) before making its way onto iOS and Android last year. It has since been slotted into all sorts of first-party Microsoft software, too.

The Copilot app joins a trend already spearheaded by OpenAI’s ChatGPT and Anthropic’s Claude of bringing native AI assistant apps to the macOS platform. Like those, it enables an OS-wide keyboard shortcut to invoke a field for starting a chat at any time. It offers most of the same use cases: translating or summarizing text, answering questions, preparing reports and documents, solving coding problems or generating scripts, brainstorming, and so on.

Copilot uses OpenAI models like GPT-4 and DALL-E 3 (yes, it generates images, too) alongside others like Microsoft’s in-house Prometheus. Microsoft has invested significant amounts of money into OpenAI in recent years as the basis for Copilot and basically everything in its AI strategy.

Like Apple’s own built-in generative AI features, Copilot for macOS requires an M1 or later Mac. It also requires users to run macOS 14 or later.



New AI text diffusion models break speed barriers by pulling words from noise

These diffusion models reportedly maintain performance comparable to similarly sized conventional models while generating text faster. LLaDA’s researchers report their 8 billion-parameter model performs similarly to LLaMA3 8B across various benchmarks, with competitive results on tasks like MMLU, ARC, and GSM8K.

However, Mercury claims dramatic speed improvements. Their Mercury Coder Mini scores 88.0 percent on HumanEval and 77.1 percent on MBPP—comparable to GPT-4o Mini—while reportedly operating at 1,109 tokens per second compared to GPT-4o Mini’s 59 tokens per second. This represents roughly a 19x speed advantage over GPT-4o Mini while maintaining similar performance on coding benchmarks.

Mercury’s documentation states its models run “at over 1,000 tokens/sec on Nvidia H100s, a speed previously possible only using custom chips” from specialized hardware providers like Groq, Cerebras, and SambaNova. When compared to other speed-optimized models, the claimed advantage remains significant—Mercury Coder Mini is reportedly about 5.5x faster than Gemini 2.0 Flash-Lite (201 tokens/second) and 18x faster than Claude 3.5 Haiku (61 tokens/second).
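The quoted multiples follow directly from the per-model tokens-per-second figures, which is easy to verify:

```python
# Checking the speed multiples quoted above against the stated
# tokens-per-second figures (all figures are as reported, not measured).
rates = {
    "Mercury Coder Mini": 1109,
    "GPT-4o Mini": 59,
    "Gemini 2.0 Flash-Lite": 201,
    "Claude 3.5 Haiku": 61,
}
mercury = rates["Mercury Coder Mini"]
for name, tps in rates.items():
    if name != "Mercury Coder Mini":
        print(f"Mercury is {mercury / tps:.1f}x faster than {name}")
# 1109/59 ≈ 18.8x, 1109/201 ≈ 5.5x, 1109/61 ≈ 18.2x, consistent with
# the "roughly 19x," "about 5.5x," and "18x" figures in the text.
```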

Opening a potential new frontier in LLMs

Diffusion models do involve some trade-offs. They typically need multiple forward passes through the network to generate a complete response, unlike traditional models that need just one pass per token. However, because diffusion models process all tokens in parallel, they achieve higher throughput despite this overhead.

Inception thinks the speed advantages could impact code completion tools where instant response may affect developer productivity, conversational AI applications, resource-limited environments like mobile applications, and AI agents that need to respond quickly.

If diffusion-based language models maintain quality while improving speed, they might change how AI text generation develops. So far, AI researchers have been open to new approaches.

Independent AI researcher Simon Willison told Ars Technica, “I love that people are experimenting with alternative architectures to transformers, it’s yet another illustration of how much of the space of LLMs we haven’t even started to explore yet.”

On X, former OpenAI researcher Andrej Karpathy wrote about Inception, “This model has the potential to be different, and possibly showcase new, unique psychology, or new strengths and weaknesses. I encourage people to try it out!”

Questions remain about whether larger diffusion models can match the performance of models like GPT-4o and Claude 3.7 Sonnet, produce reliable results without many confabulations, and if the approach can handle increasingly complex simulated reasoning tasks. For now, these models may offer an alternative for smaller AI language models that doesn’t seem to sacrifice capability for speed.

You can try Mercury Coder yourself on Inception’s demo site, and you can download code for LLaDA or try a demo on Hugging Face.



The PlayStation VR2 will get a drastic price cut, but that might not be enough

Sony’s first PlayStation VR for the PlayStation 4 hit stores at the right price at the right time and ended up being one of VR’s biggest hits. The PlayStation 5’s PlayStation VR2? Not so much, unfortunately. In either an effort to clear unsold inventory, an attempt to revitalize the platform, or both, Sony has announced it’s dropping the price of the headset significantly.

Starting in March, the main SKU of the headset will drop from $550 to $400 in the US. Europe, the UK, and Japan will also see price cuts, to 550 euros, 400 pounds, and 66,980 yen, respectively, as detailed on the PlayStation Blog. Strangely, the bundle that includes the game Horizon: Call of the Mountain (originally $600) will also drop to that same $400 price. That’s welcome, but it’s hard not to interpret it as a sign that this is an attempt to empty inventory more than anything else.

The headset launched in early 2023 but has suffered from weak software support ever since—a far cry from the first PSVR, which had one of the strongest libraries of its time. It didn’t help that unlike the regular PlayStation 5, the PSVR2 was not backward-compatible with games released for its predecessor.

About a year ago, there were reports that Sony was temporarily pausing production because it wasn’t able to move the inventory it already had. Later, the company released an adapter and some software for getting it running on PCs. That made it one of the most attractive PC VR headsets, at least on paper. However, setup was clunky, and some features that were supported on the PS5 weren’t supported on PC.

PSVR2 games are still getting announced and released, but the VR market in general has slowed down quite a bit in recent years, and most of the remaining action (such as it is) is on Meta’s Quest platform.



Now the overclock-curious can buy a delidded AMD 9800X3D, with a warranty

The integrated heat spreaders put on CPUs at the factory are not the most thermally efficient material you could have on there, but what are you going to do—rip it off at the risk of killing your $500 chip with your clumsy hands?

Yes, that is precisely what enthusiastic overclockers have been doing for years: delidding, or decapping (though the latter term is less common in overclocking circles), chips through various DIY techniques, allowing them to replace AMD’s and Intel’s common-denominator shells with liquid metal or other advanced thermal interface materials.

As you might imagine, it can be nerve-wracking, and things can go wrong in just one second or one degree Celsius. In one overclocking forum thread, a seasoned expert noted that Intel’s Core Ultra 200S spreader (IHS) needs to be heated above 165° C for the indium (transfer material) to loosen. But then the glue holding the IHS is also loose at this temperature, and there is only 1.5–2 millimeters of space between IHS and surface-mounted components, so it’s easy for that metal IHS to slide off and take out a vital component with it. It’s quite the Saturday afternoon hobby.

That is the typical overclocking bargain: You assume the risk, you void your warranty, but you remove one more barrier to peak performance. Now, though, Thermal Grizzly, led by that same previously mentioned expert, Roman “der8auer” Hartung, has a new bargain to present. His firm is delidding AMD’s Ryzen 9800X3D CPUs with its own ovens and specialty tools, then selling them with two-year warranties that cover manufacturer’s defects and “normal overclocking damage,” but not mechanical damage.
