
The first new Marathon game in decades will launch on March 5

It’s been nearly three years now since Destiny maker (and Sony subsidiary) Bungie formally announced a revival of the storied Marathon FPS franchise. And it has been about seven months since the game’s original announced release date of September 23, 2025 was pushed back indefinitely after a reportedly poor response to the game’s first Alpha test.

But today, in a post on the PlayStation Blog, Bungie revealed that the new Marathon would finally be hitting PS5, Windows, and Xbox Series X|S on March 5, narrowing down the month-long March release window announced back in December.

Today’s pre-order trailer revealing the Marathon release date.

Unlike Destiny 2, which transitioned to a free-to-play model in 2019, the new Marathon sells for $40 in a Standard Edition or a $60 Deluxe Edition that includes some digital rewards and cosmetics. That mirrors the pricing of the somewhat similar Arc Raiders, which recently hit 12 million sales in less than 12 weeks.

A new kind of Marathon

Unlike the original Marathon trilogy on the ’90s Macintosh—which closely followed the single-player campaign corridors and deathmatch multiplayer of the original Doom—the new Marathon is described as a “PvPvE survival extraction shooter.” That means gameplay based around exploring distinct zones and scavenging for cosmetics and gear upgrades in exploratory missions alone or with up to two friends, then seeing those missions “break into fast-paced PvP combat” at a moment’s notice, according to the game’s official description.

Reports of ad-supported Xbox game streams show Microsoft’s lack of imagination

You can do better than that

That’s a moderately useful option for cloud-curious Xbox players that might not be willing to take the plunge on a monthly subscription, we suppose. But it also feels like Microsoft could come up with some more imaginative ways to use Cloud Gaming to reach occasional players in new ways.

What’s stopping Microsoft from offering streaming players a 30-minute timed demo stream of any available Xbox Cloud Gaming title—perhaps in exchange for watching a short ad, or perhaps simply as an Xbox Live Arcade-style sales juicing tactic? Or why not offer discounted access to a streaming-only Game Pass subscription for players willing to watch occasional ads, like Netflix? Microsoft could even let players spend a couple of bucks to rent a digital copy of the title for a few days, much as services like iTunes do for newer films.

Those are just a few ideas off the top of our heads. And they all feel potentially more impactful than using ads as a way to let Xbox players stream copies of games they already purchased.

Back in 2019, we noted how Stadia’s strictly buy-before-you-play streaming business model limited the appeal of what ended up as a doomed cloud-gaming experiment. Microsoft should take some lessons from Google’s failure and experiment with new ways to use streaming to reach players who might not have access to the latest high-end hardware for their gaming experiences.


10 things I learned from burning myself out with AI coding agents


Opinion: As software power tools, AI agents may make people busier than ever before.

Credit: Aurich Lawson | Getty Images

If you’ve ever used a 3D printer, you may recall the wondrous feeling when you first printed something you could have never sculpted or built yourself. Download a model file, load some plastic filament, push a button, and almost like magic, a three-dimensional object appears. But the result isn’t polished and ready for mass production, and creating a novel shape requires more skills than just pushing a button. Interestingly, today’s AI coding agents feel much the same way.

Since November, I have used Claude Code and Claude Opus 4.5 through a personal Claude Max account to extensively experiment with AI-assisted software development (I have also used OpenAI’s Codex in a similar way, though not as frequently). Fifty projects later, I’ll be frank: I have not had this much fun with a computer since I learned BASIC on my Apple II Plus when I was 9 years old. This opinion comes not as an endorsement but as personal experience: I voluntarily undertook this project, and I paid out of pocket for both OpenAI and Anthropic’s premium AI plans.

Throughout my life, I have dabbled in programming as a utilitarian coder, writing small tools or scripts when needed. In my web development career, I wrote some small tools from scratch, but I primarily modified other people’s code for my needs. Since 1990, I’ve programmed in BASIC, C, Visual Basic, PHP, ASP, Perl, Python, Ruby, MUSHcode, and some others. I am not an expert in any of these languages—I learned just enough to get the job done. I have developed my own hobby games over the years using BASIC, Torque Game Engine, and Godot, so I have some idea of what makes a good architecture for a modular program that can be expanded over time.

In December, I used Claude Code to create a multiplayer online clone of Katamari Damacy called “Christmas Roll-Up.” Credit: Benj Edwards

Claude Code, Codex, and Google’s Gemini CLI can seemingly perform software miracles on a small scale. They can spit out flashy prototypes of simple applications, user interfaces, and even games, but only as long as they borrow patterns from their training data. Much like a 3D printer, doing production-level work takes far more effort. Creating durable production code, managing a complex project, or crafting something truly novel still requires experience, patience, and skill beyond what today’s AI agents can provide on their own.

And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering. Even with that impression, though, I know these are hobby projects, and the limitations of coding agents lead me to believe that veteran software developers probably shouldn’t fear losing their jobs to these tools any time soon. In fact, they may become busier than ever.

So far, I have created over 50 demo projects in the past two months, fueled in part by a bout of COVID that left me bedridden with a laptop and a generous 2x Claude usage cap that Anthropic put in place during the last few weeks of December. As I typed furiously all day, my wife kept asking me, “Who are you talking to?”

You can see a few of the more interesting results listed on my personal website. Here are 10 interesting things I’ve learned from the process.

1. People are still necessary

Even with the best AI coding agents available today, humans remain essential to the software development process. Experienced human software developers bring judgment, creativity, and domain knowledge that AI models lack. They know how to architect systems for long-term maintainability, how to balance technical debt against feature velocity, and when to push back when requirements don’t make sense.

For hobby projects like mine, I can get away with a lot of sloppiness. But for production work, having someone who understands version control, incremental backups, testing one feature at a time, and debugging complex interactions between systems makes all the difference. Knowing something about how good software development works helps a lot when guiding an AI coding agent—the tool amplifies your existing knowledge rather than replacing it.

As independent AI researcher Simon Willison wrote in a post distinguishing serious AI-assisted development from casual “vibe coding,” “AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.”

With AI assistance, you don’t have to remember how to do everything. You just need to know what you want to do.

Card Miner: Heart of the Earth is entirely human-designed, but it was AI-coded using Claude Code. It represents about a month of iterative work. Credit: Benj Edwards

So I like to remind myself that coding agents are software tools best used to enact human ideas, not autonomous coding employees. They are not people (and not people replacements) no matter how the companies behind them might market them.

If you think about it, everything you do on a computer was once a manual process. Programming a computer like the ENIAC involved literally making physical bits (connections) with wire on a plugboard. The history of programming has been one of increasing automation, so even though this AI-assisted leap is somewhat startling, one could think of these tools as an advancement similar to the advent of high-level languages, automated compilers and debugger tools, or GUI-based IDEs. They can automate many tasks, but managing the overarching project scope still falls to the person telling the tool what to do.

And they can have rapidly compounding benefits. I’ve now used AI tools to write better tools—such as changing the source of an emulator so a coding agent can use it directly—and those improved tools are already having ripple effects. But a human must be in the loop for the best execution of my vision. This approach has kept me very busy, and contrary to some prevailing fears about people becoming dumber due to AI, I have learned many new things along the way.

2. AI models are brittle beyond their training data

Like all AI models based on the Transformer architecture, the large language models (LLMs) that underpin today’s coding agents have a significant limitation: They can only reliably apply knowledge gleaned from training data, and they have a limited ability to generalize that knowledge to novel domains not represented in that data.

What is training data? In this case, when building coding-flavored LLMs, AI companies download millions of examples of software code from sources like GitHub and use them to make the AI models. Companies later specialize them for coding through fine-tuning processes.

The ability of AI agents to use trial and error—attempting something and then trying again—helps mitigate the brittleness of LLMs somewhat. But it’s not perfect, and it can be frustrating to see a coding agent spin its wheels trying and failing at a task repeatedly, either because it doesn’t know how to do it or because it previously learned how to solve a problem but then forgot because the context window got compacted (more on that here).

Violent Checkers is a physics-based corruption of the classic board game, coded using Claude Code. Credit: Benj Edwards

To get around this, it helps to have the AI model take copious notes as it goes along about how it solved certain problems so that future instances of the agent can learn from them again. You also want to set ground rules in the claude.md file that the agent reads when it begins its session.
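To make the note-taking approach concrete, here is a minimal sketch of what such a ground-rules file might contain. The file name matches the claude.md convention the author mentions, but the specific rules below are illustrative assumptions, not the author's actual configuration:

```markdown
# claude.md — project ground rules (hypothetical example)

- Before debugging, read docs/solutions.md; after fixing any bug,
  append a dated entry describing the root cause and the fix.
- Make one change at a time; run the game and verify before moving on.
- Commit after every working change with a descriptive message.
- Prefer minimal diffs; never rewrite a working module wholesale.
- The checkerboard squares are background art only — never use them
  for piece positioning or game logic.
```

Because the agent re-reads this file at the start of each session, rules like these survive context compaction in a way that in-conversation instructions do not.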

This brittleness means that coding agents are almost frighteningly good at what they’ve been trained and fine-tuned on—modern programming languages, JavaScript, HTML, and similar well-represented technologies—and generally terrible at tasks on which they have not been deeply trained, such as 6502 Assembly or programming an Atari 800 game with authentic-looking character graphics.

It took me five minutes to make a nice HTML5 demo with Claude but a week of torturous trial and error, plus actual systematic design on my part, to make a similar demo of an Atari 800 game. To do so, I had to use Claude Code to invent several tools, like command-line emulators and MCP servers, that allow it to peek into the operation of the Atari 800’s memory and chipset to even begin to make it happen.

3. True novelty can be an uphill battle

Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want.

For example, I spent four days trying to get Claude Code to create an Atari 800 version of my HTML game Violent Checkers, but it had trouble because in the game’s design, the squares on the checkerboard don’t matter beyond their starting positions. No matter how many times I told the agent (and made notes in my Claude project files), it would come back to trying to center the pieces to the squares, snap them within squares, or use the squares as a logical basis of the game’s calculations when they should really just form a background image.

To get around this in the Atari 800 version, I started over and told Claude that I was creating a game with a UFO (instead of a circular checker piece) flying over a field of adjacent squares—never once mentioning the words “checker,” “checkerboard,” or “checkers.” With that approach, I got the results I wanted.

A screenshot of Benj’s Mac while working on a Violent Checkers port for the Atari 800 home computer, amid other projects. Credit: Benj Edwards

Why does this matter? Because with LLMs, context is everything, and in language, context changes meaning. Take the word “bank” and add the words “river” or “central” in front of it, and see how the meaning changes. In a way, words act as addresses that unlock the semantic relationships encoded in a neural network. So if you put “checkerboard” and “game” in the context, the model’s self-attention process links up a massive web of semantic associations about how checkers games should work, and that semantic baggage throws things off.

A couple of tricks can help AI coders navigate around these limitations. First, avoid contaminating the context with irrelevant information. Second, when the agent gets stuck, try this prompt: “What information do you need that would let you implement this perfectly right now? What tools are available to you that you could use to discover that information systematically without guessing?” This forces the agent to identify (semantically link up) its own knowledge gaps, spelled out in the context window and subject to future action, instead of flailing around blindly.

4. The 90 percent problem

The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction. The limitations we discussed above can also cause your project to hit a brick wall.

From what I have observed over the years, larger LLMs can potentially make deeper contextual connections than smaller ones. They have more parameters (encoded data points), and those parameters are linked in more multidimensional ways, so they tend to have a deeper map of semantic relationships. As deep as those go, it seems that human brains still have an even deeper grasp of semantic connections and can make wild semantic jumps that LLMs tend not to.

Creativity, in this sense, may be when you jump from, say, basketball to how bubbles form in soap film and somehow make a useful connection that leads to a breakthrough. Instead, LLMs tend to follow conventional semantic paths that are more conservative and entirely guided by mapped-out relationships from the training data. That limits their creative potential unless the prompter unlocks it by guiding the LLM to make novel semantic connections. That takes skill and creativity on the part of the operator, which once again shows the role of LLMs as tools used by humans rather than independent thinking machines.

5. Feature creep becomes irresistible

While creating software with AI coding tools, the joy of experiencing novelty makes you want to keep adding interesting new features rather than fixing bugs or perfecting existing systems. And Claude (or Codex) is happy to oblige, churning away at new ideas that are easy to sketch out in a quick and pleasing demo (the 90 percent problem again) rather than polishing the code.

Flip-Lash started as “Tetris but you can flip the board,” but feature creep made me throw in the kitchen sink, losing focus. Credit: Benj Edwards

Fixing bugs can also create bugs elsewhere. This is not new to coding agents—it’s a time-honored problem in software development. But agents supercharge this phenomenon because they can barrel through your code and make sweeping changes in pursuit of narrow-minded goals that affect lots of working systems. We’ve already talked above about the importance of a good architecture with a human mind behind the wheel, and that comes into play here.

6. AGI is not here yet

Given the limitations I’ve described above, it’s very clear that an AI model with general intelligence—what people usually call artificial general intelligence (AGI)—is still not here. AGI would hypothetically be able to navigate around baked-in stereotype associations and not have to rely on explicit training or fine-tuning on many examples to get things right. AI companies will probably need a different architecture in the future.

I’m speculating, but AGI would likely need to learn permanently on the fly—as in modify its own neural network weights—instead of relying on what is called “in-context learning,” which only persists until the context fills up and gets compacted or wiped out.

Grapheeti is a “drawing MMO” where people around the world share a canvas. Credit: Benj Edwards

In other words, you could teach a true AGI system how to do something by explanation or let it learn by doing, noting successes, and having those lessons permanently stick, no matter what is in the context window. Today’s coding agents can’t do that—they forget lessons from earlier in a long session or between sessions unless you manually document everything for them. My favorite trick is instructing them to write a long, detailed report on what happened when a bug is fixed. That way, you can point to the hard-earned solution the next time the amnestic AI model makes the same mistake.

7. Even fast isn’t fast enough

While using Claude Code for a while, it’s easy to take for granted that you suddenly have the power to create software without knowing certain programming languages. This is amazing at first, but you can quickly become frustrated that what is conventionally a very fast development process isn’t fast enough. Impatience at the coding machine sets in, and you start wanting more.

But even if you do know the programming languages being used, you don’t get a free pass. You still need to make key decisions about how the project will unfold. And when the agent gets stuck or makes a mess of things, your programming knowledge becomes essential for diagnosing what went wrong and steering it back on course.

8. People may become busier than ever

After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.

It’s almost too easy to make new software, in fact, and that can be exhausting. One project idea would lead to another, and I was soon spending eight hours a day during my winter vacation shepherding about 15 Claude Code projects at once. That’s too much split attention for good results, but the novelty of seeing my ideas come to life was addictive. In addition to the game ideas I’ve mentioned here, I made tools that scrape and search my past articles, a graphical MUD based on ZZT, a new type of MUSH (text game) that uses AI-generated rooms, a new type of Telnet display proxy, and a Claude Code client for the Apple II (more on that soon). I also put two AI-enabled emulators for Apple II and Atari 800 on GitHub. Phew.

Consider the advent of the steam shovel, which allowed humans to dig holes faster than a team using hand shovels. It made existing projects faster and new projects possible. But think about the human operator of the steam shovel. Suddenly, we had a tireless tool that could work 24 hours a day if fueled up and maintained properly, while the human piloting it would need to eat, sleep, and rest.

I used Claude Code to create a windowing GUI simulation of the Mac that works over Telnet. Credit: Benj Edwards

In fact, we may end up needing new protections for human knowledge workers using these tireless information engines to implement their ideas, much as unions rose as a response to industrial production lines over 100 years ago. Humans need rest, even when machines don’t.

Will an AI system ever replace the human role here? Even if AI coding agents could eventually work fully autonomously, I don’t think they’ll replace humans entirely because there will still be people who want to get things done, and new AI power tools will emerge to help them do it.

9. Fast is scary to people

AI coding tools can turn what was once a year-long personal project into a five-minute session. I fed Claude Code a photo of a two-player Tetris game I sketched in a notebook back in 2008, and it produced a working prototype in minutes (prompt: “create a fully-featured web game with sound effects based on this diagram”). That’s wild, and even though the results are imperfect, it’s a bit frightening to comprehend what kind of sea change in software development this might entail.

Since early December, I’ve been posting some of my more amusing experimental AI-coded projects to Bluesky for people to try out, but I discovered I needed to deliberately slow down with updates because they came too fast for people to absorb (and too fast for me to fully test). I’ve also received comments like “I’m worried you’re using AI, you’re making games too fast” and so on.

Benj’s handwritten game design note about a two-player Tetris concept from 2007. Credit: Benj Edwards

Regardless of my own habits, the flow of new software will not slow down. There will soon be a seemingly endless supply of AI-augmented media (games, movies, images, books), and that’s a problem we’ll have to figure out how to deal with. These products won’t all be “AI slop,” either; some will be done very well, and the acceleration in production times due to these new power tools will balloon the quantity beyond anything we’ve seen.

Social media tends to prime people to believe that AI is all good or all bad, but that kind of black-and-white thinking may be the easy way out. You’ll have no cognitive dissonance, but you’ll miss a far richer third option: seeing these tools as imperfect and deserving of critique but also as useful and empowering when they bring your ideas to life.

AI agents should be considered tools, not entities or employees, and they should be amplifiers of human ideas. My game-in-progress Card Miner is entirely my own high-level creative design work, but the AI model handled the low-level code. I am still proud of it as an expression of my personal ideas, and it would not exist without AI coding agents.

10. These tools aren’t going away

For now, at least, coding agents remain very much tools in the hands of people who want to build things. The question is whether humans will learn to wield these new tools effectively to empower themselves. Based on two months of intensive experimentation, I’d say the answer is a qualified yes, with plenty of caveats.

We also have social issues to face: Professional developers already use these tools, and with the prevailing stigma against AI tools in some online communities, many software developers and the platforms that host their work will face difficult decisions.

Ultimately, I don’t think AI tools will make human software designers obsolete. Instead, they may well help those designers become more capable. This isn’t new, of course; tools of every kind have been serving this role since long before the dawn of recorded history. The best tools amplify human capability while keeping a person behind the wheel. The 3D printer analogy holds: amazing fast results are possible, but mastery still takes time, skill, and a lot of patience with the machine.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Meta’s layoffs leave Supernatural fitness users in mourning

There is a split in the community about who will stay and continue to pay the subscription fee and who will leave. Supernatural has more than 3,000 lessons available in the service, so while new content won’t be added, some feel there is plenty of content left in the library. Other users worry about how Supernatural will continue to license music from big-name bands.

“Supernatural is amazing, but I am canceling it because of this,” Chip told me. “The library is large, so there’s enough to keep you busy, but not for the same price.”

There are other VR workout experiences like FitXR or even the VR staple Beat Saber, which Supernatural cribs a lot of design concepts from. Still, they don’t hit the same bar for many of the Supernatural faithful.

“I’m going to stick it out until they turn the lights out on us,” says Stefanie Wong, a Bay Area accountant who has used Supernatural since shortly after the pandemic and has organized and attended meetup events. “It’s not the app. It’s the community, and it’s the coaches that we really, really care about.”

Welcome to the new age

I tried out Supernatural’s Together feature on Wednesday, the day after the layoffs. It’s where I met Chip and Alisa. When we could stop to catch our breath, we talked about the changes coming to the service. They had played through previous sessions hosted by Jane Fonda or playlists with a mix of music that would change regularly. This one was an artist series featuring entirely Imagine Dragons songs.

In the session, as we punched blocks while being serenaded by this shirtless dude crooning, recorded narrations from Supernatural coach Dwana Olsen chimed in to hype us up.

“Take advantage of these moments,” Olsen said as we punched away. “Use these movements to remind you of how much awesome life you have yet to live.”

Frankly, it was downright invigorating. And bittersweet. We ended another round, sweaty, huffing and puffing. Chip, Alisa, and I high-fived like crazy and readied for another round.

“Beautiful,” Alisa said. “It’s just beautiful, isn’t it?”


Bully Online mod taken down abruptly one month after launch

A PC mod that added online gameplay to Rockstar’s 2006 school-exploration title Bully was abruptly taken down on Wednesday, roughly a month after it was first made available. While the specific reason for the “Bully Online” takedown hasn’t been publicly discussed, a message posted by the developers to the project’s now-defunct Discord server clarifies that “this was not something we wanted.”

The Bully Online mod was spearheaded by Swegta, a Rockstar-focused YouTuber who formally announced the project in October as a mod that “allows you and your friends to play minigames, role-play, compete in racing, fend off against NPCs, and much more.”

At the time of the announcement, Swegta said the mod was “a project me and my team have been working on for a very long time” and that early access in December would be limited to those who contributed at least $8 to a Ko-Fi account. When December actually rolled around, though, a message on Swegta.com (archived) suggested that the mod was being released freely as an open source project, with a registration page (archived) offering new accounts to anyone.

That source code has now been completely removed from Swegta.com, along with any webpages referencing the project or offering downloads for the mod’s custom launcher. On Discord, the team said that development of any Bully Online scripts would stop and that any account data created by users would be deleted.


I can’t stop shooting Oddcore’s endless waves of weird little guys

Every new semi-randomized area you clear increases your total capacity to store souls, but every visit to the portal shop increases the additional “tax” you need to spend on every purchase. This makes the decision of when to warp away to the shop a persistent quandary—do you power up as quickly as possible to increase your chances of survival, or wait until you’ll be able to purchase even more power-ups a little later?

What’s around the corner?

All the while, the enemies keep coming fast and furious, slowly getting faster, tougher, and more capable with each new zone you enter. Through it all, the tight controls, forgiving aim system, and wide variety of weapon and gadget options make every firefight fast, frenetic, and fun.

To keep things from getting too repetitive, you’ll sometimes get thrown into an arena where you have to chase down frolicking golden humanoid flowers or destroy a few giant ambulatory mushrooms—you know, standard tropes of the video game world. You’ll also occasionally get dropped into brief, intentionally off-putting, empty interstitial rooms that seem designed to surprise Twitch viewers more than fit some sort of coherent aesthetic, or “corruptions” that briefly prevent you from gaining health and/or warping away to the convenience shop for a breather.

What’s the worst that could happen? Credit: Oddcorp

Between runs, you can move around an ersatz redemption arcade to earn new weapons and gadgets and explore the miniature theme park setting, which is full of hidden crannies and unlockable play spaces. In a few hours of play, I’ve already stumbled on so many secrets by pure accident that I can only imagine unlocking them all will be a real undertaking (and I presume even more will be added as the game moves through Early Access).

The in-game leaderboards and achievements suggest that it is possible to “beat” Oddcore at some point, presumably by combining enough skill and lucky upgrades to power your way through dozens of variants in a single run. Frankly, I’m not sure I’ll ever master the game enough to reach that point. Even so, I’m happy to have a new excuse to take a brain break by shooting a bunch of weird little guys in weird little spaces for a few minutes at a time.

I can’t stop shooting Oddcore’s endless waves of weird little guys Read More »

civilization-vii-is-headed-to-iphone-and-ipad-with-“arcade-edition”

Civilization VII is headed to iPhone and iPad with “Arcade Edition”

Civilization VII is coming to the iPhone and iPad, Apple and publisher 2K announced today.

Formally titled Sid Meier’s Civilization VII Arcade Edition, it is developed by Behaviour Interactive with input from original developer Firaxis Games.

The game will be available as part of the Apple Arcade service, which offers ad-free games for Apple platforms for $7 per month. Neither announcement makes any mention of a non-Arcade version, so this appears to be exclusively part of the subscription.

That shouldn’t be too much of a surprise; full-priced premium games have struggled on the platform when not bundled in a subscription. For example, Rockstar Games’ Red Dead Redemption came out both as a standalone title on the App Store and as part of Netflix’s subscription. The Netflix version surpassed a staggering 3.3 million downloads, while the $40 direct purchase managed just over 10,000.

The announcement calls this release “the authentic Civilization experience,” which you can probably take to mean that it doesn’t simplify the gameplay in any way. That said, there is some fine print you shouldn’t miss.

The App Store listing for the game says this release will not receive any of the DLC planned for other platforms. It also notes that “post-launch updates that apply to other platforms may be excluded or delayed.” And the listed number of supported players is “1,” suggesting it may lack multiplayer entirely. (The desktop and console versions already lack hotseat multiplayer, but they support online play.)

Civilization VII is headed to iPhone and iPad with “Arcade Edition” Read More »

seasonal-switch-2-sales-show-significant-slowing-as-annual-cycle-sunsets

Seasonal Switch 2 sales show significant slowing as annual cycle sunsets

Lingering sales of the original Switch might also be contributing to the relatively weak holiday performance for the Switch 2. In the UK, at least, the older console is still selling well enough to buoy Nintendo’s overall holiday hardware sales in the country to 7 percent higher than what the company achieved in 2017.

Nintendo might need another Super Mario Odyssey-sized hit to keep up sales momentum for the Switch 2. Credit: Nintendo

That said, the transition from record-setting launch sales to relatively underwhelming holiday sales is a worrying sign for the Switch 2’s market momentum. A lack of system-selling Switch 2 exclusive games could explain that movement. In 2017, the October launch of Super Mario Odyssey built holiday excitement for the Switch on top of earlier hits like The Legend of Zelda: Breath of the Wild and Mario Kart 8 Deluxe. For the Switch 2, holiday releases like Pokémon Legends Z-A and Metroid Prime 4 don’t seem to have had as much impact as early system sellers like Mario Kart World and Donkey Kong Bananza.

Thus far, Nintendo’s planned 2026 schedule doesn’t seem primed to offer many big-name exclusives to turn things around. The year’s first-party lineup is currently anchored by standard sequels for second-tier franchises like Yoshi, Mario Tennis, and Fire Emblem, alongside slightly upgraded “Switch 2 Edition” re-releases of popular Switch games. Aside from Nintendo’s own titles, the planned 2026 release of FromSoft’s Bloodborne-esque Duskbloods as a Switch 2 exclusive could make some fans of the company’s Souls-like games take a second look at the hardware.

Nintendo is likely to announce more Switch 2 exclusives and ports in the coming months, of course. Having a few system-selling blockbusters in that slate could be crucial to propping up the Switch 2’s sales now that pent-up launch-window demand seems largely satiated.

Seasonal Switch 2 sales show significant slowing as annual cycle sunsets Read More »

steamos-continues-its-slow-spread-across-the-pc-gaming-landscape

SteamOS continues its slow spread across the PC gaming landscape

Over time, Valve sees that kind of support expanding to other Arm-based devices, too. “This is already fully open source, so you could download it and run SteamOS, now that we will be releasing SteamOS for Arm, you could have gaming on any Arm device,” Valve Engineer Jeremy Selan told PC Gamer in November. “This is the first one. We’re very excited about it.”

Imagine if handhelds like the Retroid Pocket Flip 2 could run SteamOS instead of Android… Credit: Retroid

It’s an especially exciting prospect when you consider the wide range of Arm-based Android gaming handhelds that currently exist across the price and performance spectrum. While emulators like FEX can technically let players access Steam games on those kinds of handhelds, official Arm support for SteamOS could lead to a veritable Cambrian explosion of hardware options with native SteamOS support.

Valve seems aware of this potential, too. “There’s a lot of price points and power consumption points where Arm-based chipsets are doing a better job of serving the market,” Valve’s Pierre-Louis Griffais told The Verge last month. “When you get into lower power, anything lower than Steam Deck, I think you’ll find that there’s an Arm chip that maybe is competitive with x86 offerings in that segment. We’re pretty excited to be able to expand PC gaming to include all those options instead of being arbitrarily restricted to a subset of the market.”

That’s great news for fans of PC-based gaming handhelds, just as the announcement of Valve’s Steam Machine will provide a convenient option for SteamOS access on the living room TV. For desktop PC gamers, though, rigs sporting Nvidia GPUs might remain the final frontier for SteamOS in the foreseeable future. “With Nvidia, the integration of open-source drivers is still quite nascent,” Griffais told Frandroid about a year ago. “There’s still a lot of work to be done on that front… So it’s a bit complicated to say that we’re going to release this version when most people wouldn’t have a good experience.”

SteamOS continues its slow spread across the PC gaming landscape Read More »

with-geforce-super-gpus-missing-in-action,-nvidia-focuses-on-software-upgrades

With GeForce Super GPUs missing in action, Nvidia focuses on software upgrades

For the first time in years, Nvidia declined to introduce new GeForce graphics card models at CES. CEO Jensen Huang’s characteristically sprawling and under-rehearsed 90-minute keynote focused almost entirely on the company’s dominant AI business, relegating the company’s gaming-related announcements to a separate video posted later in the evening.

Instead, the company focused on software improvements for its existing hardware. The biggest announcement in this vein is DLSS 4.5, which adds a handful of new features to Nvidia’s basket of upscaling and frame generation technologies.

DLSS upscaling is being improved by a new “second-generation transformer model” that Nvidia says has been “trained on an expanded data set” to improve its predictions when generating new pixels. According to Nvidia’s Bryan Catanzaro, this is particularly beneficial for image quality in the Performance and Ultra Performance modes, where the upscaler has to do more guessing because it’s working from a lower-resolution source image.

DLSS Multi-Frame Generation is also improving, increasing the number of AI-generated frames per rendered frame from three to five. This new 6x mode for DLSS MFG is being paired with something called Dynamic Multi-Frame Generation, where the number of AI-generated frames can dynamically change, increasing generated frames during “demanding scenes,” and decreasing the number of generated frames during simpler scenes “so it only computes what’s needed.”

The standard caveats for Multi-Frame Generation still apply: It requires an RTX 50-series GPU (40-series cards can only generate one frame for every rendered frame, and older cards can’t generate extra frames at all), and the game still needs to be running at a reasonably high base frame rate to minimize lag and weird rendering artifacts. It remains a useful tool for making fast-running games run faster, but it won’t turn an unplayable frame rate into a playable one.

With GeForce Super GPUs missing in action, Nvidia focuses on software upgrades Read More »

nvidia’s-new-g-sync-pulsar-monitors-target-motion-blur-at-the-human-retina-level

Nvidia’s new G-Sync Pulsar monitors target motion blur at the human retina level

That gives those individual pixels time to fully transition from one color to the next before they’re illuminated, meaning viewers don’t perceive those pixels fading from one color as they do on a traditional G-Sync monitor. It also means those old pixels don’t persist as long on the viewer’s retina, increasing the “apparent refresh rate” above the monitor’s actual refresh rate, according to Nvidia.

An Asus illustration highlights how G-Sync Pulsar uses strobing to limit the persistence of old frames on your retina. Credit: Asus/Nvidia

Similar “Ultra Low Motion Blur” features on other pulsing backlight monitors have existed for a while, but they only worked at fixed refresh rates. Pulsar monitors differentiate themselves by syncing the pulses with the variable refresh rate of a G-Sync monitor, offering what Nvidia calls a combination of “tear free frames and incredible motion clarity.”

Independent testers have had more varied impressions of the visual impact of the Pulsar. The Monitors Unboxed YouTube channel called it “clearly the best solution currently available” for limiting motion blur and “the first version of this technology that I would genuinely consider using on a regular basis.” PC Magazine, on the other hand, said the Pulsar improvements are “minor in the grand scheme of things” and would be hard to notice for a casual viewer.

Nvidia explains how its Pulsar monitors work.

In any case, G-Sync Pulsar should be a welcome upgrade for high-end gamers as we wait for 1,000 Hz monitors to become a market force.

Nvidia’s new G-Sync Pulsar monitors target motion blur at the human retina level Read More »

bioware’s-anthem-will-soon-be-completely-unplayable

BioWare’s Anthem will soon be completely unplayable


Replay the troubled jetpack shooter before the servers shut down for good on Jan. 12.

Anthem may be down, but it’s not quite out yet. Credit: BioWare

We’ll admit that we weren’t paying enough attention to the state of Anthem—BioWare’s troubled 2019 jetpack-powered open-world shooter—to notice EA’s July announcement that it was planning to shut down the game’s servers. But with that planned server shutdown now just a week away, we thought it was worth alerting you readers to your final opportunity to play one of BioWare’s most ambitious failures.

Anthem was unveiled at E3 2017 in a demo that was later revealed to have been largely faked to paper over major issues with the game’s early development. Anthem’s early 2019 release was met with a lot of middling-to-poor reviews (including one from Ars itself), followed about a year later by a promise from BioWare General Manager Casey Hudson that a “longer-term redesign” and “substantial reinvention” of the overall game experience were coming. Hudson left BioWare in December 2020, though, and a few months later, that planned Anthem overhaul was officially canceled.

While active development on Anthem has been dormant for years, the game’s servers have remained up and running. And though the game didn’t exactly explode in popularity during that period of benign neglect, estimates from MMO Populations suggest a few hundred to a few thousand players have been jetpacking around the game’s world daily. The game also still sees a smattering of daily subreddit posts, including some hoping against hope for a fan-led private server revival, a la the Pretendo Network. And there are still a small handful of Twitch streamers sharing the game while they still can, including one racing to obtain all of the in-game achievements after picking up a $4 copy at Goodwill.

If you want to join in and get one last taste of Anthem before the January 12 shutdown, tracking down a used physical copy is probably your best bet. Current digital owners can still redownload Anthem for the time being, but EA removed the game from digital storefronts shortly after the server shutdown was announced last summer and removed it from EA Play and Xbox Game Pass subscriptions on August 15. Though many fans have been begging EA to enable some sort of offline mode, the publisher’s announcement makes clear that “Anthem was designed to be an online-only title so once the servers go offline, the game will no longer be playable.”

The FOMO from that impending server shutdown may bring back players who haven’t given Anthem a second thought for years now. After that, maybe the gaming world at large will finally realize that we don’t know what we’ve got till it’s gone.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

BioWare’s Anthem will soon be completely unplayable Read More »