Tech


The MacBook Air is the obvious loser as the sun sets on the Intel Mac era


In the end, Intel Macs have mostly gotten a better deal than PowerPC Macs did.

For the last three years, we’ve engaged in some in-depth data analysis and tea-leaf reading to answer two questions about Apple’s support for older Macs that still use Intel chips.

First, was Apple providing fewer updates and fewer years of software support to Macs based on Intel chips as it worked to transition the entire lineup to its internally developed Apple Silicon? And second, how long could Intel Mac owners reasonably expect to keep getting updates?

The answer to the first question has always been “it depends, but generally yes.” And this year, we have a definitive answer to the second question: For the bare handful of Intel Macs it supports, macOS 26 Tahoe will be the final new version of the operating system to support any of Intel’s chips.

To its credit, Apple has clearly spelled this out ahead of time rather than pulling the plug on Intel Macs with no notice. The company has also said that it plans to provide security updates for those Macs for two years after Tahoe is replaced by macOS 27 next year. These Macs aren’t getting special treatment—this has been Apple’s unspoken, unwritten policy for macOS security updates for decades now—but setting aside the usual “we don’t comment on our future plans” stance to give people a couple of years of predictability is something we’ve been pushing Apple to do for a long time.

With none of the tea-leaf reading left to do, we can now present a fairly definitive look at how Apple has handled the entire Intel transition, compare it to how the PowerPC-to-Intel switch went two decades ago, and predict what it might mean for Apple Silicon Mac support.

The data

We’ve assembled an epoch-spanning spreadsheet of every PowerPC or Intel Mac Apple has released since the original iMac kicked off the modern era of Apple back in 1998. On that list, we’ve recorded the introduction date for each Mac, the discontinuation date (when it was either replaced or taken off the market), the version of macOS it shipped with, and the final version of macOS it officially supported.

For those macOS versions, we’ve recorded the dates they received their last major point update—the feature-adding updates a release gets while it’s Apple’s latest and greatest version of macOS, as macOS 15 Sequoia is right now. Once a version is replaced, Apple releases security-only patches and Safari browser updates for it for another two years, so we’ve also recorded the dates those Macs would have received their final security update. For the macOS versions still receiving updates (13, 14, and 15) and for macOS 26 Tahoe, we’ve extrapolated end-of-support dates based on Apple’s past practices.

A 27-inch iMac model. It’s still the only Intel Mac without a true Apple Silicon replacement. Credit: Andrew Cunningham

We’re primarily focusing on two time spans: from the date of each Mac’s introduction to the date it stopped receiving major macOS updates, and from the date of each Mac’s introduction to the date it stopped receiving any updates at all. We consider any Macs inside either of these spans to be actively supported; Macs that are no longer receiving regular updates from Apple will gradually become less secure and less compatible with modern apps as time passes. We measure by years of support rather than number of releases, which controls for Apple’s transition to a once-yearly release schedule for macOS back in the early 2010s.
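
To make those two measurements concrete, here’s a minimal Python sketch of the span arithmetic; the model names, dates, and field layout below are illustrative placeholders, not the actual contents of the spreadsheet described above:

```python
from datetime import date

# Illustrative records only; real entries would come from the spreadsheet.
macs = [
    # (model, introduced, last feature update, final security update)
    ("Example iMac", date(2007, 8, 7), date(2016, 9, 20), date(2018, 9, 24)),
    ("Example MacBook", date(2008, 10, 14), date(2011, 7, 20), date(2014, 9, 17)),
]

def years_between(start: date, end: date) -> float:
    """Length of a span in years, using 365.25 days to absorb leap years."""
    return round((end - start).days / 365.25, 1)

for model, intro, last_feature, last_security in macs:
    feature_span = years_between(intro, last_feature)   # feature updates
    total_span = years_between(intro, last_security)    # any updates at all
    print(f"{model}: {feature_span} years of feature updates, {total_span} total")
```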

We’ve also tracked the time between each Mac model’s discontinuation and when it stopped receiving updates. This is how Apple determines which products go on its “vintage” and “obsolete” hardware lists, which determine the level of hardware support and the kinds of repairs that the company will provide.

We have lots of detailed charts, but here are some highlights:

  • For all Mac models tracked, the average Mac receives about 6.6 years of macOS updates that add new features, plus another two years of security-only updates.
  • If you only count the Intel era, the average is around seven years of macOS updates, plus two years of security-only patches.
  • Most (though not all) Macs released since 2016 come in lower than either of these averages, indicating that Apple has been less generous to most Intel Macs since the Apple Silicon transition began.
  • The three longest-lived Macs are still the mid-2007 15- and 17-inch MacBook Pros, the mid-2010 Mac Pro, and the mid-2007 iMac, which received new macOS updates for around nine years after their introduction (and security updates for around 11 years).
  • The shortest-lived Mac is still the late-2008 version of the white MacBook, which received only 2.7 years of new macOS updates and another 3.3 years of security updates from the time it was introduced. (Late PowerPC-era and early Intel-era Macs are all pretty bad by modern standards.)

The charts

If you bought a Mac any time between 2016 and 2020, you’re generally settling for fewer years of software updates than you would have gotten in the recent past. If you bought a Mac released in 2020, the tail end of the Intel era when Apple Silicon Macs were around the corner, your reward is the shortest software support window since 2006.

There are outliers in either direction. The sole iMac Pro, introduced in 2017 as Apple tried to regain some of its lost credibility with professional users, will end up with 7.75 years of updates plus another two years of security updates when all is said and done. Buyers of 2018–2020 MacBook Airs and the two-port version of the 2020 13-inch MacBook Pro, however, are treated pretty poorly, getting not quite 5.5 years of updates (plus two years of security patches) on average from the date they were introduced.

That said, most Macs usually end up getting a little over six years of macOS updates and two more years of security updates. If that’s a year or two lower than the recent past, it’s also not ridiculously far from the historical average.

If there’s something to praise here, it’s that Apple doesn’t seem to treat any of its Macs differently based on how much they cost. Now that we have a complete overview of the Intel era, breaking out the support timelines by model rather than by model year shows that a Mac mini doesn’t get dramatically more or less support than an iMac or a Mac Pro, despite costing a fraction as much. A MacBook Air doesn’t receive significantly more or less support than a MacBook Pro.

These are just averages, and some models are lucky while others are not. The no-adjective MacBook that Apple has sold on and off since 2006 is also an outlier, with fewer years of support on average than the other Macs.

If there’s one overarching takeaway, it’s that you should buy new Macs as close to the date of their introduction as possible if you want to maximize your software support window. Especially for Macs that were sold continuously for years and years—the 2013 and 2019 Mac Pro, the 2018 Mac mini, the non-Retina 2015 MacBook Air that Apple sold some version of for over four years—buying them toward the end of their retail lifecycle means settling for fewer years of updates than you would have gotten if you had waited for the introduction of a new model. And that’s true even though Apple’s hardware support timelines are all calculated from the date of last availability rather than the date of introduction.

It just puts Mac buyers in a bad spot when Apple isn’t prompt with hardware updates, forcing people to either buy something that doesn’t fully suit their needs or settle for something older that will last for fewer years.

What should you do with an older Intel Mac?

The big question: If your Intel Mac is still functional but Apple is no longer supporting it, is there anything you can do to keep it both secure and functional?

All late-model Intel Macs officially support Windows 10, but that OS has its own end-of-support date looming in October 2025. Windows 11 can be installed only if you bypass its system requirements; this can work well, but it requires additional fiddling when it comes time to install major updates. Consumer-focused Linux distributions like Ubuntu, Mint, or Pop!_OS may work, depending on your hardware, but they come with a steep learning curve for non-technical users. Google’s ChromeOS Flex may also work, but ChromeOS is more functionally limited than most other operating systems.

The OpenCore Legacy Patcher provides one possible stay of execution for Mac owners who want to stay on macOS for as long as they can. But it faces two steep uphill climbs in macOS Tahoe. First, as Apple has removed more Intel Macs from the official support list, it has removed more of the underlying code from macOS that is needed to support those Macs and other Macs with similar hardware. This leaves more for the OpenCore Legacy Patcher team to patch back in from older OSes, and this kind of forward-porting can leave hardware and software partly functional or entirely non-functional.

Second, there’s the Apple T2 to consider. The Macs with a T2 treat it as a load-bearing co-processor, responsible for crucial operating system functions such as enabling Touch ID, serving as an SSD controller, encoding and decoding videos, communicating with the webcam and built-in microphone, and other operations. But Apple has never opened the T2 up to anyone, and it remains a bit of a black box for both the OpenCore/Hackintosh community and folks who would run Linux-based operating systems like Ubuntu or ChromeOS on that hardware.

The result is that the 2018 and 2019 MacBook Airs that didn’t support macOS 15 Sequoia last year never had support added to the OpenCore Legacy Patcher, because the T2 chip simply won’t cooperate when the system is booted through OpenCore. Some T2 Macs don’t have this problem. But if yours does, it’s unlikely that anyone will be able to do anything about it, and your software support will end when Apple says it does.

Does any of this mean anything for Apple Silicon Mac support?

Late-model Intel MacBook Airs have fared worse than other Macs in terms of update longevity. Credit: Valentina Palladino

It will likely be at least two or three years before we know for sure how Apple plans to treat Apple Silicon Macs. Will the company primarily look at specs and technical capabilities, as it did from the late-’90s through to the mid-2010s? Or will Apple mainly stop supporting hardware based on its age, as it has done for more recent Macs and most current iPhones and iPads?

The three models to examine for this purpose are the first ones to shift to Apple Silicon: the M1 versions of the MacBook Air, Mac mini, and 13-inch MacBook Pro, all launched in late 2020. If these Macs are dropped in, say, 2027 or 2028’s big macOS release, but other, later M1 Macs like the iMac stay supported, it means Apple is likely sticking to a somewhat arbitrary age-based model, with certain Macs cut off from software updates that they are perfectly capable of running.

But it’s our hope that all Apple Silicon Macs have a long life ahead of them. The M2, M3, and M4 have all improved on the M1’s performance and capabilities, but the M1 Macs are far more capable than the Intel Macs they supplanted, the M1 was used widely across the Mac lineup for years, and Mac owners often pay much more for their devices than iPhone and iPad owners do. We’d love to see macOS return to the longer-tail software support it provided in the late ’00s and mid-2010s, when models could expect to see seven or eight all-new macOS versions and another two years of security updates afterward.

All signs point to Apple using the launch date of any given piece of hardware as the determining factor for continued software support. But that isn’t how it has always been, nor is it how it always has to be.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Google can now generate a fake AI podcast of your search results

NotebookLM is undoubtedly one of Google’s best implementations of generative AI technology, giving you the ability to explore documents and notes with a Gemini AI model. Last year, Google added the ability to generate so-called “audio overviews” of your source material in NotebookLM. Now, Google has brought those fake AI podcasts to search results as a test. Instead of clicking links or reading the AI Overview, you can have two nonexistent people tell you what the results say.

This feature is not currently rolling out widely—it’s available in Search Labs, which means you have to manually enable it. Anyone can opt in to the new Audio Overview search experience, though. If you join the test, you’ll quickly see the embedded player in Google search results. However, it’s not at the top with the usual block of AI-generated text. Instead, you’ll see it after the first few search results, below the “People also ask” knowledge graph section.

Credit: Google

Google isn’t wasting resources by generating the audio automatically, so you have to click the generate button to get started. A few seconds later, you’re given a back-and-forth conversation between two AI voices summarizing the search results. The player includes a list of the sources from which the overview is built, as well as the option to speed up or slow down playback.



Inside the firm turning eerie blank streaming ads into useful nonprofit messages

AdGood’s offerings also include a managed ad campaign service for nonprofits. AdGood doesn’t yet offer tracking pixels, but Johns said developments like that are “in the works.”

Johns explained that while many nonprofits use services like Meta and Google AdWords for tracking ads, they’re “hitting plateaus” with their typical methods. He said there is nonprofit interest in reaching younger audiences, who often use CTV devices:

A lot of them have been looking for ways to get [into CTV ads], but, unfortunately, with minimum spend amounts, they’re just not able to access it.

Helping nonprofits make commercials

AdGood also sells a self-serve generative AI ad manager, which it offers via a partnership with Streamr.AI. The tool is designed to simplify the process of creating 30-second video ads that are “completely editable via a chat prompt,” according to Johns.

“It automatically generates all their targeting. They can update their targeting for whatever they want, and then they can swipe a credit card and essentially run that campaign. It goes into our approval queue, which typically takes 24 hours for us to approve because it needs to be deemed TV-quality,” he explained.

The executive said AdGood charges nonprofits a $7 CPM and a $250 flat fee for the service. He added:

Think about a small nonprofit in a local community, for instance, my son’s special needs baseball team. I can get together with five other parents, easily pull together a campaign, and run it in our local town. We get seven kids to show up, and it changes their lives. We’re talking about $250 having a massive impact in a local market.

Looking ahead, Johns said he’d like to see AdGood’s platform and team grow to be able to give every customer “a certain allocation of inventory, whether it’s 50,000 impressions a month or 100,000 a month.”

For some, streaming ads are rarely a good thing. But when those ads can help important causes and replace odd blank ad spaces that make us question our own existence, it brings new meaning to the idea of a “good” commercial.



Another one for the graveyard: Google to kill Instant Apps in December

But that was then, and this is now. Today, an increasing number of mobile apps are functionally identical to the mobile websites they are intended to replace, and developer uptake of Instant Apps was minimal. Even in 2017, loading an app instead of a website had limited utility. As a result, most of us probably only encountered Instant Apps a handful of times in all the years it was an option for developers.

To use the feature, which was delivered to virtually all Android devices by Google Play Services, developers had to create a special “instant” version of their app that was under 15MB. The additional legwork to get an app in front of a subset of new users meant this was always going to be a steep climb, and Google has always struggled to incentivize developers to adopt new features. Plus, there’s no way to cram in generative AI! So it’s not a shock to see Google retiring the feature.

This feature is currently listed in the collection of Google services in your phone settings as “Google Play Instant.” Unfortunately, there aren’t many examples still available if you’re curious about what Instant Apps were like—the Finnish publisher Ilta-Sanomat is one of the few still offering it. Make sure the settings toggle for Instant Apps is on if you want a little dose of nostalgia.



AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash

When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google’s AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.

Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing’s 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google’s results page.

Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It’s a mess.

In this search, Google’s AI says the crash involved an Airbus A330 instead of a Boeing 787. Credit: /u/stuckintrraffic

But why is Google bringing up the Air India crash at all in the context of Airbus? Unfortunately, it’s impossible to predict whether you’ll get an AI Overview that blames Boeing or Airbus—generative AI is non-deterministic, meaning the output can vary from run to run, even for identical inputs. Our best guess for the underlying cause is that numerous articles on the Air India crash mention Airbus as Boeing’s main competitor. AI Overviews essentially summarizes these results, and the AI goes down the wrong path because it lacks the ability to understand what is true.
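
To illustrate where that run-to-run variability comes from, here is a toy Python sketch of temperature-based sampling, the mechanism generative models typically use to pick their next word; the vocabulary and probabilities below are invented for the example, not anything from Google’s actual model:

```python
import random

# A fake next-token distribution; real models learn these probabilities,
# but the sampling step at the end works the same basic way.
vocab = ["Boeing", "Airbus", "787", "A330"]
probs = [0.45, 0.35, 0.12, 0.08]  # hypothetical values for illustration

def sample_token(temperature: float = 1.0) -> str:
    # Temperature reshapes the distribution: higher values flatten it
    # (more randomness), lower values sharpen it (more predictable).
    weights = [p ** (1.0 / temperature) for p in probs]
    return random.choices(vocab, weights=weights, k=1)[0]

# The same "prompt" (i.e., the same distribution) yields different
# answers on different runs, which is the non-determinism at issue.
print([sample_token() for _ in range(5)])
```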



Google left months-old dark mode bug in Android 16, fix planned for next Pixel Drop

Google’s Pixel phones got a big update this week with the release of Android 16 and a batch of Pixel Drop features. Pixels now have enhanced security, new contact features, and improved button navigation. However, some of the most interesting features, like desktop windowing and Material 3 Expressive, are coming later. Another thing that’s coming later, it seems, is a fix for an annoying bug Google introduced a few months back.

Google broke the system dark mode schedule in its March Pixel update and did not address it in time for Android 16. The company confirms a fix is coming, though.

The system-level dark theme arrived in Android 10 to offer a less eye-searing option, which is particularly handy in dark environments. It took a while for even Google’s apps to fully adopt the feature, but support is solid five years later. Google even offers a scheduling feature to switch between light and dark mode at custom times or based on sunrise/sunset. However, the scheduling feature has been busted since the March update.

Currently, if you manually toggle dark mode on or off, schedules stop working. The only way to get them back is to set up your schedule again and then never toggle dark mode. Google initially marked this as “intended behavior,” but a more recent bug report was accepted as a valid issue.



Amazon Prime Video subscribers sit through up to 6 minutes of ads per hour

Amazon forced all Prime Video subscribers onto a new ad-based subscription tier in January 2024 unless they paid extra to avoid commercials. Now, the tech giant is reportedly showing twice as many ads to subscribers as it did when it started selling ad-based streaming subscriptions.

Currently, anyone who signs up for Amazon Prime (which is $15 per month or $139 per year) gets Prime Video with ads. If they don’t want to see commercials, they have to pay an extra $3 per month. One can also subscribe to Prime Video alone for $9 per month with ads or $12 per month without ads.

When Amazon originally announced the ad tier, it said it would deliver “meaningfully fewer ads than linear TV and other streaming TV providers.” But based on “six ad buyers and documents” that ad trade publication AdWeek reported viewing, Amazon’s average ad load has since grown to four to six minutes of advertisements per hour.

“Prime Video ad load has gradually increased to four to six minutes per hour,” an Amazon representative said via email to an ad buyer this month, AdWeek reported.

That would mean that Prime Video subscribers are spending significantly more time sitting through ads than they did at the launch of Prime Video with ads. According to a report from The Wall Street Journal (WSJ) at the time, which cited an Amazon presentation it said it reviewed, “the average ad load at launch was two to three-and-a-half minutes.” However, when reached for comment, an Amazon Ads representative told Ars Technica that the WSJ didn’t confirm that figure directly with Amazon.

Amazon’s Ads spokesperson, however, declined to specify to Ars how many ads Amazon typically shows to Prime Video subscribers today or in the past.

Instead, they shared a statement saying:

We remain focused on prioritizing ad innovation over volume. While demand continues to grow, our commitment is to improving ad experiences rather than simply increasing the number of ads shown. Since the beginning of this year alone, we’ve announced multiple capabilities, including Brand+, Complete TV, and new ad formats—all designed to deliver industry-leading relevancy and enhanced customer experiences. We will continue to invest in this important work, creating meaningful innovations that benefit both customers and advertisers alike.

Kendra Tang, programmatic supervisor at ad firm Rain the Growth Agency, told AdWeek that Amazon “told us the ad load would be increasing” and that she’s seen more ad opportunities made available in Amazon’s ad system.



Apple’s Craig Federighi on the long road to the iPad’s Mac-like multitasking


Federighi talks to Ars about why the iPad’s Mac-style multitasking took so long.

iPads! Running iPadOS 26! Credit: Apple

CUPERTINO, Calif.—When Apple Senior Vice President of Software Engineering Craig Federighi introduced the new multitasking UI in iPadOS 26 at the company’s Worldwide Developers Conference this week, he did it the same way he introduced the Calculator app for the iPad last year or timers in the iPad’s Clock app the year before—with a hint of sarcasm.

“Wow,” Federighi enthuses in a lightly exaggerated tone about an hour and 19 minutes into a 90-minute presentation. “More windows, a pointier pointer, and a menu bar? Who would’ve thought? We’ve truly pulled off a mind-blowing release!”

This elicits a sensible chuckle from the gathered audience of developers, media, and Apple employees watching the keynote on the Apple Park campus, where I have grabbed myself a good-but-not-great seat to watch the largely pre-recorded keynote on a gigantic outdoor screen.

Federighi is acknowledging—and lightly poking fun at—the audience of developers, pro users, and media personalities who have been asking for years that Apple’s iPad behave more like a traditional computer. And after many incremental steps, including a big swing and partial miss with the buggy, limited Stage Manager interface a couple of years ago, Apple has finally responded to requests for Mac-like multitasking with a distinctly Mac-like interface, an improved file manager, and better support for running tasks in the background.

But if this move was so forehead-slappingly obvious, why did it take so long to get here? This is one of the questions we dug into when we sat down with Federighi and Senior Vice President of Worldwide Marketing Greg Joswiak for a post-keynote chat earlier this week.

It used to be about hardware restrictions

People have been trying to use iPads (and make a philosophical case for them) as quote-unquote real computers practically from the moment they were introduced 15 years ago.

But those early iPads lacked so much of what we expect from modern PCs and Macs, most notably robust multi-window multitasking and the ability for third-party apps to exchange data. The first iPads were almost literally just iPhone internals connected to big screens, with just a fraction of the RAM and storage available in the Macs of the day; that necessitated the use of a blown-up version of the iPhone’s operating system and the iPhone’s one-full-screen-app-at-a-time interface.

“If you want to rewind all the way to the time we introduced Split View and Slide Over [in iOS 9], you have to start with the grounding that the iPad is a direct manipulation touch-first device,” Federighi told Ars. “It is a foundational requirement that if you touch the screen and start to move something, that it responds. Otherwise, the entire interaction model is broken—it’s a psychic break with your contract with the device.”

Mac users, Federighi said, were more tolerant of a bit of latency on their devices because they were already manipulating apps on the screen indirectly, but the iPads of a decade or so ago “didn’t have the capacity to run an unlimited number of windowed apps with perfect responsiveness.”

It’s also worth noting the technical limitations of iPhone and iPad apps at the time, which up until then had mostly been designed and coded to match the specific screen sizes and resolutions of the (then-manageable) number of iDevices that existed. It simply wasn’t possible for the apps of the day to be dynamically resized as desktop windows are, because no one was coding their apps that way.

Apple’s iPad Pros—and, later, the iPad Airs—have gradually adopted hardware and software features that make them more Mac-like. Credit: Andrew Cunningham

Of course, those hardware limitations no longer exist. Apple’s iPad Pros started boosting the tablets’ processing power, RAM, and storage in earnest in the late 2010s, and Apple introduced a Microsoft Surface-like keyboard and stylus accessories that moved the iPad away from its role as a content consumption device. For years now, Apple’s faster tablets have been based on the same hardware as its slower Macs—we know the hardware can do more because Apple is already doing more with it elsewhere.

“Over time the iPad’s gotten more powerful, the screens have gotten larger, the user base has shifted into a mode where there is a little bit more trackpad and keyboard use in how many people use the device,” Federighi told Ars. “And so the stars kind of aligned to where many of the things that you traditionally do with a Mac were possible to do on an iPad for the first time and still meet iPad’s basic contract.”

On correcting some of Stage Manager’s problems

More multitasking in iPadOS 26. Credit: Apple

Apple has already tried a windowed multitasking system on modern iPads once this decade, of course, with iPadOS 16’s Stage Manager interface.

Any first crack at windowed multitasking on the iPad was going to have a steep climb. This was the first time Apple or its developers had needed to contend with truly dynamically resizable app windows in iOS or iPadOS, the first time Apple had implemented a virtual memory system on the iPad, and the first time Apple had tried true multi-monitor support. Stage Manager was in such rough shape that Apple delayed that year’s iPadOS release to keep working on it.

But the biggest problem with Stage Manager was actually that it just didn’t work on a whole bunch of iPads. You could only use it on new expensive models—if you had a new cheap model or even an older expensive model, your iPad was stuck with the older Slide Over and Split View modes that had been designed around the hardware limitations of mid-2010s iPads.

“We wanted to offer a new baseline of a totally consistent experience of what it meant to have Stage Manager,” Federighi told Ars. “And for us, that meant four simultaneous apps on the internal display and an external display with four simultaneous apps. So, eight apps running at once. And we said that’s the baseline, and that’s what it means to be Stage Manager; we didn’t want to say ‘you get Stage Manager, but you get Stage Manager-lite here,’ or something like that. And so immediately that established a floor for how low we could go.”

Fixing that was one of the primary goals of the new windowing system.

“We decided this time: make everything we can make available,” said Federighi, “even if it has some nuances on older hardware, because we saw so much demand [for Stage Manager].”

That slight change in approach, combined with other behind-the-scenes optimizations, makes the new multitasking model more widely compatible than Stage Manager is. There are still limits on those devices—not to the number of windows you can open, but to how many of those windows can be active and up-to-date at once. And true multi-monitor support would remain the purview of the faster, more-expensive models.

“We have discovered many, many optimizations,” Federighi said. “We re-architected our windowing system and we re-architected the way that we manage background tasks, background processing, that enabled us to squeeze more out of other devices than we were able to do at the time we introduced Stage Manager.”

Stage Manager still exists in iPadOS 26, but as an optional extra multitasking mode that you have to choose to enable instead of the new windowed multitasking system. You can also choose to turn both multitasking systems off entirely, preserving the iPad’s traditional big-iPhone-for-watching-Netflix interface for the people who prefer it.

“iPad’s gonna be iPad”

The $349 base-model iPad stands to gain the most from iPadOS 26. Credit: Andrew Cunningham

However, while the new iPadOS 26 UI takes big steps toward the Mac’s interface, the company still tries to treat them as different products with different priorities. To date, that has meant no touch screens on the Mac (despite years of rumors), and it will continue to mean that there are some Mac things that the iPad will remain unable to do.

“But we’ve looked and said, as [the iPad and Mac] come together, where on the iPad the Mac idiom for doing something, like where we put the window close controls and maximize controls, what color are they—we’ve said why not, where it makes sense, use a converged design for those things so it’s familiar and comfortable,” Federighi told Ars. “But where it doesn’t make sense, iPad’s gonna be iPad.”

There will still be limitations and frustrations when trying to fit an iPad into a Mac-shaped hole in your computing setup. While tasks can run in the background, for example, Apple only allows apps to run workloads with a definitive endpoint, things like a video export or a file transfer. System agents or other apps that perform some routine on-and-off tasks continuously in the background aren’t supported. All the demos we’ve seen so far are also on new, high-end iPad hardware, and it remains to be seen how well the new features behave on low-end tablets like the 11th-generation A16 iPad, or old 2019-era hardware like the iPad Air 3.

But it does feel like Apple has finally settled on a design that might stick and that adds capability to the iPad without wrecking its simplicity for the people who still just want a big screen for reading and streaming.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



HP reveals first Google Beam 3D video conferencing setup, priced at $25,000

Amid all the Gemini hype at Google I/O last month, the company also turned one of its experiments into a (kind of) real product. Project Starline was reborn as Google Beam, a 3D video conferencing system that makes it look like you’re in the same room with the other party. Google said HP would reveal the first Beam setup, and now it has. The HP Dimension is coming this year, and the price tag is a predictably hefty $24,999.

Google Beam calls for a lot of advanced hardware, so the high price isn’t a surprise. The HP Dimension uses six high-speed cameras positioned around the display to capture the speaker from multiple angles. This visual data is then fed into Google’s proprietary volumetric video model, which merges the streams together into a 3D reconstruction of the speaker.

Eventually, there will be Beam systems of various sizes, but the HP model comes with a big 65-inch display. All Beam systems will use light field screen technology, which can show the volumetric model in 3D, eliminating the need to wear a headset or glasses for the 3D effect. Google says Beam can show minute details at 60 fps with millimeter-scale precision.

Obscene price tag aside, Google Beam is impressive technology. We got a glimpse of it at Google I/O, and it really does look like the person you’re talking to is on the other side of the table. Google and HP claim that Beam’s 3D video makes meetings more efficient, with better display of non-verbal cues and participants experiencing improved recall of details versus a regular 2D chat. Google also promises its Meet-based live translation features will come to Beam later.



A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge.

A decade later, in his book “Computer Lib/Dream Machines,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

A more user-friendly tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
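
The request/response shape HTTP introduced survives essentially intact today. Here is a minimal Python sketch of a browser-style fetch over a raw socket, using example.com as a stand-in server; the original 1990-era exchange, retroactively dubbed HTTP/0.9, was even simpler, just “GET /path”:

```python
import socket

# Open a TCP connection, send a request, read back headers plus HTML markup.
# example.com is just a stand-in; any public web server responds the same way.
host = "example.com"
with socket.create_connection((host, 80)) as sock:
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    response = b""
    while chunk := sock.recv(4096):  # recv() returns b"" when the server closes
        response += chunk

# The status line and headers come first, then the markup a browser would render.
print(response.decode(errors="replace")[:300])
```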

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but had a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Group released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, WebVan might have been paying more to deliver groceries than they earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 8,843.87 in February 2000 and started to go down. In one month, it lost 34 percent of its value, and by August 2001, it was down to 3,253.38. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



Apple details the end of Intel Mac support and a phaseout for Rosetta 2

The support list for macOS Tahoe still includes Intel Macs, but it has been whittled down to just four models, all released in 2019 or 2020. We speculated that the end was near for Intel Macs, and now we can confirm just how near it is: macOS Tahoe will be the last new macOS release to support any Intel Macs. All new releases starting with macOS 27 will require an Apple Silicon Mac.

Apple will provide additional security updates for Tahoe until fall 2028, two years after it is replaced with macOS 27. That’s a typical schedule for older macOS versions, which all get one year of major point updates that include security fixes and new features, followed by two years of security-only updates to keep them patched but without adding significant new features.

Apple is also planning changes to Rosetta 2, the Intel-to-Arm app translation technology created to ease the transition between the Intel and Apple Silicon eras. Rosetta will continue to work as a general-purpose app translation tool in both macOS 26 and macOS 27.

But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.



US air traffic control still runs on Windows 95 and floppy disks

On Wednesday, acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy disks and Windows 95 computers, Tom’s Hardware reports. The agency has issued a Request For Information to gather proposals from companies willing to tackle the massive infrastructure overhaul.

“The whole idea is to replace the system. No more floppy disks or paper strips,” Rocheleau said during the committee hearing. Transportation Secretary Sean Duffy called the project “the most important infrastructure project that we’ve had in this country for decades,” describing it as a bipartisan priority.

Most air traffic control towers and facilities across the US currently operate with technology that seems frozen in the 20th century, although that isn’t necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft’s Windows 95 operating system, which launched in 1995.

A pile of floppy disks. Credit: Getty

As Tom’s Hardware notes, modernization of the system is broadly popular. Sheldon Jacobson, a University of Illinois professor who has studied risks in aviation, says that the system works remarkably well as is but that an upgrade is still critical, according to NPR. The aviation industry coalition Modern Skies has been pushing for ATC modernization and recently released an advertisement highlighting the outdated technology.

While the vintage systems may have inadvertently protected air traffic control from widespread outages like the CrowdStrike incident that disrupted modern computer systems globally in 2024, agency officials say 51 of the FAA’s 138 systems are unsustainable due to outdated functionality and a lack of spare parts.

The FAA isn’t alone in clinging to floppy disk technology. San Francisco’s train control system still runs on DOS loaded from 5.25-inch floppy disks, with upgrades not expected until 2030 due to budget constraints. Japan has also struggled in recent years to modernize government record systems that use floppy disks.

If it ain’t broke? (Or maybe it is broke)

Modernizing the air traffic control system presents engineering challenges that extend far beyond simply installing newer computers. Unlike typical IT upgrades, ATC systems must maintain continuous 24/7 operation, because shutting down facilities for maintenance could compromise aviation safety.
