
Reviewing iOS 26 for power users: Reminders, Preview, and more


These features try to turn iPhones into more powerful work and organization tools.

iOS 26 came out last week, bringing a new look and interface alongside some new capabilities and updates aimed squarely at iPhone power users.

We gave you our main iOS 26 review last week. This time around, we’re taking a look at some of the updates targeted at people who rely on their iPhones for much more than making phone calls and browsing the Internet. Many of these features rely on Apple Intelligence, meaning they’re only as reliable and helpful as Apple’s generative AI (and only available on newer iPhones, besides). Other adjustments are smaller but could make a big difference to people who use their phone to do work tasks.

Reminders attempt to get smarter

The Reminders app gets the Apple Intelligence treatment in iOS 26, with the AI primarily focused on making it easier to organize content within Reminders lists. Lines in Reminders lists are often short, quickly jotted-down blurbs rather than lengthy, detailed instructions. With this in mind, it’s easy to see how the AI can sometimes lack enough information to perform certain tasks, like logically grouping different errands into sensible sections.

But Apple also encourages applying the AI-based Reminders features to areas of life that could hold more weight, such as making a list of suggested reminders from emails. For serious or work-critical summaries, Reminders’ new Apple Intelligence capabilities aren’t reliable enough.

Suggested Reminders based on selected text

iOS 26 attempts to elevate Reminders from an app for making lists to an organization tool that helps you identify important information and tasks you need to accomplish. If you share content, such as emails, website text, or a note, with the app, it can create a list of what it thinks are the critical things to remember from the text. But if you’re trying to extract information any more advanced than an ingredients list from a recipe, Reminders misses the mark.

Sometimes I tried sharing longer text with Reminders and didn’t get any suggestions. Credit: Scharon Harding

Sometimes, especially when reviewing longer text, Reminders was unable to come up with any suggestions. Other times, the reminders it suggested from lengthy messages were off-base.

For instance, I had the app pull suggested reminders from a long email with guidelines and instructions from an editor. Highlighting a lot of text can be tedious on a touchscreen, but I did it anyway because the message had lots of helpful information broken up into sections that each had their own bold sub-headings. Additionally, most of those sections had their own lists (some using bullet points, some using numbers). I hoped Reminders would at least gather information from all of the email’s lists. But the suggested reminders ended up just being the same text from three—but not all—of the email’s bold sub-headings.

When I tried getting suggested reminders from a smaller portion of the same email, I surprisingly got five bullet points that covered more than just the email’s sub-headings but that still missed key points, including the email’s primary purpose.

Ultimately, the suggested Reminders feature mostly just boosts the app’s ability to serve as a modern shopping list. Suggested Reminders excels at pulling out ingredients from recipes, turning each ingredient into a suggestion that you can tap to add to a Reminders list. But being able to make a bulleted list out of a bulleted list is far from groundbreaking.

Auto-categorizing lines in Reminders lists

Since iOS 17, Reminders has been able to automatically sort items in grocery lists into distinct categories, like Produce and Proteins. iOS 26 tries taking things further by automatically grouping items in a list into non-culinary sections.

The way Reminders groups user-created tasks in lists is more sensible—and useful—than when it tries to create task suggestions based on shared text.

For example, I made a long list of various errands I needed to do, and Reminders grouped them into these categories: Administrative Tasks, Household Chores, Miscellaneous, Personal Tasks, Shopping, and Travel & Accommodation. The groupings were mostly sensible, but I would have tweaked some things. For one, I wouldn’t use the word “administrative” to refer to personal errands. The two tasks included under Administrative Tasks would have made more sense to me in Personal Tasks or Miscellaneous, even though those category names are almost too vague to have distinct meaning.

Preview comes to iOS

With Preview’s iOS debut, Apple brings to iPhones an app for viewing and editing PDFs and images that macOS users have had for years. As a result, many iPhone users will find the software easy and familiar to use.

But for iPhone owners who have long relied on Files for viewing, marking, and filling out PDFs and the like, Preview doesn’t bring many new capabilities. Anything that you can do in Preview, you could have done by viewing the same document in Files in an older version of iOS, save for a new crop tool and dedicated button for showing information about the document.

That’s kind of the point, though. When an iPhone has two discrete apps that can read and edit files, it’s far less frustrating to work with multiple documents. While you’re annotating a document in Preview, the Files app is still available, allowing you to have more than one document open at once. It’s a simple adjustment but one that vastly improves multitasking.

More Shortcuts options

Shortcuts gets somewhat more capable in iOS 26, assuming you’re interested in using ChatGPT or Apple Intelligence generative AI in your automated tasks. For instance, you can tag in generative AI to create a shortcut that summarizes text in bullet points and applies that bulleted list to the shortcut’s next task.

An example of a Shortcut that uses generative AI. Credit: Apple

There are inherent drawbacks here. For one, Apple Intelligence and ChatGPT, like many generative AI tools, are subject to inaccuracies and can overlook or misinterpret critical information. iOS 26 makes it easier for power users to build a shortcut that, say, rewrites a long text in a more professional tone. But that doesn’t mean the AI will properly communicate the information, especially when used across different scenarios with varied text.

You have three options for building Shortcuts that use AI models. You can use ChatGPT, or Apple Intelligence via Apple’s Private Cloud Compute, which runs the model on an Apple server; both require an Internet connection. Alternatively, you can use an on-device Apple Intelligence model without connecting to the web.

You can run more advanced models via Private Cloud Compute than you can with Apple Intelligence on-device. In Apple’s testing, models via Private Cloud Compute perform better on things like writing summaries and composition compared to on-device models.

Apple says personal user data sent to Private Cloud Compute “isn’t accessible to anyone other than the user — not even to Apple.” Apple has a strong, but flawed, reputation for being better about user privacy than other Big Tech firms. But by offering three different models to use with Shortcuts, iOS 26 ensures greater functionality, options, and control.

Something for podcasters

It’s likely that more people rely on iPads (or Macs) than iPhones for podcasting. Nevertheless, a new local capture feature introduced to both iOS 26 and iPadOS 26 makes it a touch more feasible to use iPhones (and iPads especially) for recording interviews for podcasts.

Before the latest updates, iOS and iPadOS only allowed one app to access the device’s microphone at a time. So, if you were interviewing someone via a videoconferencing app, you couldn’t also use your iPhone or iPad to record the discussion, since the videoconferencing app was using your mic to share your voice with whoever was on the other end of the call. Local capture on iOS 26 doesn’t include audio input controls, but its inclusion gives podcasters a way to record interviews or conversations on iPhones without needing additional software or hardware. That capability could save the day in a pinch.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Disney decides it hasn’t angered people enough, announces Disney+ price hikes

While mired in controversy from all sides, the Walt Disney Company has unveiled price hikes for Disney+ and its other streaming services today.

As of October 21, Disney+ will cost up to 20 percent more, depending on the plan you have. Disney+ with ads is increasing from $10 to $12 per month, while the ad-free plan is going from $16 to $19 per month. The annual ad-free plan will go from $160 to $190.

Acquisitions have enabled Disney to own multiple streaming services, so it’s not just Disney+ subscribers who will be impacted. Subscriptions for Hulu and ESPN Select will also increase, as will all Hulu + Live TV plans and bundles of Disney’s three subscription-based streaming services.

And anyone buying Disney+ and Hulu bundled with Warner Bros. Discovery’s HBO Max will also have to pay (up to 17.6 percent) more as of October 21.

Mouse House in the dog house

Unfortunately for millions of cord-cutters, an increase in streaming service prices isn’t surprising. Disney+ most recently raised prices in October 2024. It also raised prices in October 2023 and December 2022. (Disney+ debuted in November 2019, and Disney’s overall streaming business became profitable in Q3 2024.)

Disney’s timing here is similar to its previous price hikes: The announcement is made in September, with the new prices taking effect in October. However, September 2024 was much different from September 2025, which will be remembered as a time when Disney was embroiled in boycotts from streaming subscribers, broadcast viewers, free speech activists, celebrities, liberals, and conservatives.

On September 17, Disney-owned ABC made the landmark announcement that Jimmy Kimmel Live! would “be pre-empted indefinitely.” The announcement followed comments that Kimmel made on a September 15 show about the murder of right-wing influencer Charlie Kirk. His comments drew the ire of Federal Communications Commission Chairman Brendan Carr, and ABC affiliate owners Nexstar and Sinclair subsequently pulled the show from their stations.

It didn’t take long for the public to turn against Disney. Hundreds of people protested outside Disney Studios in Burbank, California. Calls to cancel Disney+ flooded social media, and, per Yipit data cited by The New York Times today, this had a greater impact on subscriber churn than other streaming boycotts.



YouTube will restore channels banned for COVID and election misinformation

It’s not exactly hard to find politically conservative content on YouTube, but the platform may soon skew even further to the right. YouTube parent Alphabet has confirmed that it will restore channels that were banned in recent years for spreading misinformation about COVID-19 and elections. Alphabet says it values free expression and political debate, placing the blame for its previous moderation decisions on the Biden administration.

Alphabet made this announcement via a lengthy letter to Rep. Jim Jordan (R-Ohio). The letter, a response to subpoenas from the House Judiciary Committee, explains in no uncertain terms that the company is taking a more relaxed approach to moderating political content on YouTube.

For starters, Alphabet denies that its products and services are biased toward specific viewpoints and says it “appreciates the accountability” provided by the committee. The cloying missive goes on to explain that Google didn’t really want to ban all those accounts, but Biden administration officials just kept asking. Now that the political tables have turned, Google is looking to dig itself out of this hole.

According to Alphabet’s version of events, misinformation such as telling people to drink bleach to cure COVID wasn’t initially against its policies. However, Biden officials repeatedly asked YouTube to take action. YouTube did and specifically banned COVID misinformation sitewide until 2024, one year longer than the crackdown on election conspiracy theories. Alphabet says that today, YouTube’s rules permit a “wider range of content.”

In an apparent attempt to smooth things over with the Republican-controlled House Judiciary Committee, YouTube will restore the channels banned for COVID and election misinformation. This includes prominent conservatives like Dan Bongino, who is now the Deputy Director of the FBI, and White House counterterrorism chief Sebastian Gorka.



A history of the Internet, part 3: The rise of the user


the best of times, the worst of times

The reins of the Internet are handed over to ordinary users—with uneven results.

Everybody get together. Credit: D3Damon/Getty Images

Welcome to the final article in our three-part series on the history of the Internet. If you haven’t already, catch up with part one and part two.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. It later evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol. By the late 1980s, a small group of academics and a few curious consumers connected to each other on the Internet, which was still mostly text-based.

In 1991, Tim Berners-Lee invented the World Wide Web, an Internet-based hypertext system designed for graphical interfaces. At first, it ran only on the expensive NeXT workstation. But when Berners-Lee published the web’s protocols and made them available for free, people built web browsers for many different operating systems. The most popular of these was Mosaic, written by Marc Andreessen, who formed a company to create its successor, Netscape. Microsoft responded with Internet Explorer, and the browser wars were on.

The web grew exponentially, and so did the hype surrounding it. It peaked in early 2001, right before the dotcom collapse that left most web-based companies nearly or completely bankrupt. Some people interpreted this crash as proof that the consumer Internet was just a fad. Others had different ideas.

Larry Page and Sergey Brin met each other at a graduate student orientation at Stanford in 1996. Both were studying for their PhDs in computer science, and both were interested in analyzing large sets of data. Because the web was growing so rapidly, they decided to start a project to improve the way people found information on the Internet.

They weren’t the first to try this. Hand-curated sites like Yahoo had already given way to more algorithmic search engines like AltaVista and Excite, which both started in 1995. These sites attempted to find relevant webpages by analyzing the words on every page.

Page and Brin’s technique was different. Their “BackRub” software created a map of all the links that pages had to each other. Pages on a given subject that had many incoming links from other sites were given a higher ranking for that keyword. Higher-ranked pages could then contribute a larger score to any pages they linked to. In a sense, this was like a crowdsourcing of search: When people put “This is a good place to read about alligators” on a popular site and added a link to a page about alligators, it did a better job of determining that page’s relevance than simply counting the number of times the word appeared on a page.

Step 1 of the simplified BackRub algorithm. It also stores the position of each word on a page, so it can make a further subset for multiple words that appear next to each other. Credit: Jeremy Reimer
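The core idea is simple enough to sketch in a few lines of code. Below is a toy Python rendition of that kind of iterative, link-based scoring, in the spirit of BackRub and the PageRank algorithm it became; the page names, link graph, and damping value are invented for illustration and are not Google’s actual data or code.

# Toy link-based ranking in the spirit of PageRank. Every page starts
# with an equal score; each round, a page's score is split evenly
# among the pages it links to, so pages with many well-ranked
# incoming links float to the top.

links = {
    "zoo-blog": ["alligator-facts", "gator-photos"],
    "gator-photos": ["alligator-facts"],
    "alligator-facts": ["zoo-blog"],
}

pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}
damping = 0.85  # chance a surfer follows a link instead of jumping randomly

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for page in pages:
        incoming = sum(
            rank[src] / len(outs)  # each linker splits its score evenly
            for src, outs in links.items()
            if page in outs
        )
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")

Run it and “alligator-facts,” the page with two incoming links, ends up ranked highest, which is the crowdsourcing effect described above.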

Creating a connected map of the entire World Wide Web with indexes for every word took a lot of computing power. The pair filled their dorm rooms with any computers they could find, paid for by a $10,000 grant from the Stanford Digital Libraries Project. Many were cobbled together from spare parts, including one with a case made from imitation LEGO bricks. Their web scraping project was so bandwidth-intensive that it briefly disrupted the university’s internal network. Because neither of them had design skills, they coded the simplest possible “home page” in HTML.

In August 1996, BackRub was made available as a link from Stanford’s website. A year later, Page and Brin rebranded the site as “Google.” The name was an accidental misspelling of googol, a term coined by a mathematician’s young nephew to describe a 1 with 100 zeros after it. Even back then, the pair was thinking big.

Google.com as it appeared in 1998. Credit: Jeremy Reimer

By mid-1998, their prototype was getting over 10,000 searches a day. Page and Brin realized they might be onto something big. It was nearing the height of the dotcom mania, so they went looking for some venture capital to start a new company.

But at the time, search engines were considered passé. The new hotness was portals, sites that had some search functionality but leaned heavily into sponsored content. After all, that’s where the big money was. Page and Brin tried to sell the technology to AltaVista for $1 million, but its parent company passed. Excite also turned them down, as did Yahoo.

Frustrated, they decided to hunker down and keep improving their product. Brin created a colorful logo using the free GIMP paint program, and they added a summary snippet to each result. Eventually, the pair received $100,000 from angel investor Andy Bechtolsheim, who had co-founded Sun Microsystems. That was enough to get the company off the ground.

Page and Brin were careful with their money, even after they received millions more from venture capital firms. They preferred cheap commodity PC hardware and the free Linux operating system as they expanded their system. For marketing, they relied mostly on word of mouth. This allowed Google to survive the dotcom crash that crippled its competitors.

Still, the company eventually had to find a source of income. The founders were concerned that if search results were influenced by advertising, it could lower the usefulness and accuracy of the search. They compromised by adding short, text-based ads that were clearly labeled as “Sponsored Links.” To cut costs, they created a form so that advertisers could submit their own ads and see them appear in minutes. They even added a ranking system so that more popular ads would rise to the top.

The combination of a superior product with less intrusive ads propelled Google to dizzying heights. In 2024, the company collected over $350 billion in revenue, with $112 billion of that as profit.

Information wants to be free

The web was, at first, all about text and the occasional image. In 1997, Netscape added the ability to embed small music files in the MIDI sound format that would play when a webpage was loaded. Because the songs only encoded notes, they sounded tinny and annoying on most computers. Good audio or songs with vocals required files that were too large to download over the Internet.

But this all changed with a new file format. In 1993, researchers at the Fraunhofer Institute developed a compression technique that eliminated portions of audio that human ears couldn’t detect. Suzanne Vega’s song “Tom’s Diner” was used as the first test of the new MP3 standard.

Now, computers could play back reasonably high-quality songs from small files using software decoders. WinPlay3 was the first, but WinAmp, released in 1997, became the most popular. People started putting links to MP3 files on their personal websites. Then, in 1999, Shawn Fanning released a beta of a product he called Napster. This was a desktop application that relied on the Internet to let people share their MP3 collection and search everyone else’s.

Napster as it would have appeared in 1999. Credit: Jeremy Reimer

Napster almost immediately ran into legal challenges from the Recording Industry Association of America (RIAA). It sparked a debate about sharing things over the Internet that persists to this day. Some artists agreed with the RIAA that downloading MP3 files should be illegal, while others (many of whom had been financially harmed by their own record labels) welcomed a new age of digital distribution. Napster lost the case against the RIAA and shut down in 2002. This didn’t stop people from sharing files, but replacement tools like eDonkey 2000, Limewire, Kazaa, and Bearshare lived in a legal gray area.

In the end, it was Apple that figured out a middle ground that worked for both sides. In 2003, two years after launching its iPod music player, Apple announced the Internet-only iTunes Store. Steve Jobs had signed deals with all five major record labels to allow legal purchasing of individual songs (wrapped in Apple’s comparatively permissive FairPlay copy protection) for 99 cents each, or full albums for $10. By 2010, the iTunes Store was the largest music vendor in the world.

iTunes 4.1, released in 2003. This was the first version for Windows and introduced the iTunes Store to a wider world. Credit: Jeremy Reimer

The Web turns 2.0

Tim Berners-Lee’s original vision for the web was simply to deliver and display information. It was like a library, but with hypertext links. But it didn’t take long for people to start experimenting with information flowing the other way. In 1994, Netscape 0.9 added support for HTML tags like FORM and INPUT that let users enter text and, using a “Submit” button, send it back to the web server.

Early web servers didn’t know what to do with this text. But programmers developed extensions that let a server run programs in the background. The standardized “Common Gateway Interface” (CGI) made it possible for a “Submit” button to trigger a program (usually in a /cgi-bin/ directory) that could do something interesting with the submission, like talking to a database. CGI scripts could even generate new webpages dynamically and send them back to the user.
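As a concrete sketch of that round trip (the script path, field name, and page text here are invented for illustration): the browser submits a form to a program in the server’s /cgi-bin/ directory, the server hands that program the submission through environment variables and standard input, and whatever the program prints (headers, then HTML) goes back to the browser. Python’s standard-library cgi module is deprecated these days, so this example parses the request by hand.

#!/usr/bin/env python3
# A minimal CGI script of the kind a "Submit" button could trigger.
# The matching HTML form on the page might look like:
#
#   <FORM METHOD="POST" ACTION="/cgi-bin/guestbook.py">
#     <INPUT TYPE="text" NAME="guest">
#     <INPUT TYPE="submit" VALUE="Sign">
#   </FORM>

import os
import sys
from html import escape
from urllib.parse import parse_qs

# Per the CGI spec, the web server passes the POST body on standard
# input and its byte count in the CONTENT_LENGTH environment variable.
length = int(os.environ.get("CONTENT_LENGTH") or 0)
fields = parse_qs(sys.stdin.read(length))
guest = fields.get("guest", ["stranger"])[0]

# The reply is headers, a blank line, then a dynamically generated page.
sys.stdout.write("Content-Type: text/html\r\n\r\n")
sys.stdout.write(f"<html><body><h1>Thanks for signing, {escape(guest)}!</h1></body></html>\n")

That same pattern, a program reading a request and printing a page, is what let sites bolt databases, logins, and forums onto the formerly static web.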

This intelligent two-way interaction changed the web forever. It enabled things like logging into an account on a website, web-based forums, and even uploading files directly to a web server. Suddenly, a website wasn’t just a page that you looked at. It could be a community where groups of interested people could interact with each other, sharing both text and images.

Dynamic webpages led to the rise of blogging, first as an experiment (some, like Justin Hall’s and Dave Winer’s, are still around today) and then as something anyone could do in their spare time. Websites in general became easier to create with sites like Geocities and Angelfire, which let people build their own personal dream house on the web for free. A community-run dynamic linking site, webring.org, connected similar websites together, encouraging exploration.

Webring.org was a free, community-run service that allowed dynamically updated webrings. Credit: Jeremy Reimer

One of the best things to come out of Web 2.0 was Wikipedia. It arose as a side project of Nupedia, an online encyclopedia founded by Jimmy Wales, with articles written by volunteers who were subject matter experts. This process was slow, and the site only had 21 articles in its first year. Wikipedia, in contrast, allowed anyone to contribute and review articles, so it quickly outpaced its predecessor. At first, people were skeptical about letting random Internet users edit articles. But thanks to an army of volunteer editors and a set of tools to quickly fix vandalism, the site flourished. Wikipedia far surpassed works like the Encyclopedia Britannica in sheer numbers of articles while maintaining roughly equivalent accuracy.

Not every Internet innovation lived on a webpage. In 1988, Jarkko Oikarinen created a program called Internet Relay Chat (IRC), which allowed real-time messaging between individuals and groups. IRC clients for Windows and Macintosh were popular among nerds, but friendlier applications like PowWow (1994), ICQ (1996), and AIM (1997) brought messaging to the masses. Even Microsoft got in on the act with MSN Messenger in 1999. For a few years, this messaging culture was an important part of daily life at home, school, and work.

A digital recreation of MSN Messenger from 2001. Sadly, Microsoft shut down the servers in 2014. Credit: Jeremy Reimer

Animation, games, and video

While the web was evolving quickly, the slow speeds of dial-up modems limited the size of files you could upload to a website. Static images were the norm. Animation only appeared in heavily compressed GIF files with a few frames each.

But a new technology blasted past these limitations and unleashed a torrent of creativity on the web. In 1995, Macromedia released Shockwave Player, an add-on for Netscape Navigator. Along with its Director software, the combination allowed artists to create animations based on vector drawings. These were small enough to embed inside webpages.

Websites popped up to support this new content. Newgrounds.com, which started in 1995 as a Neo-Geo fan site, started collecting the best animations. Because Director was designed to create interactive multimedia for CD-ROM projects, it also supported keyboard and mouse input and had basic scripting. This meant that people could make simple games that ran in Shockwave. Newgrounds eagerly showcased these as well, giving many aspiring artists and game designers an entry point into their careers. Super Meat Boy, for example, was first prototyped on Newgrounds.

Newgrounds as it would have appeared circa 2003. Credit: Jeremy Reimer

Putting actual video on the web seemed like something from the far future. But the future arrived quickly. After the dotcom crash of 2001, there were many unemployed web programmers with a lot of time on their hands to experiment with their personal projects. The arrival of broadband with cable modems and digital subscriber lines (DSL), combined with the new MPEG4 compression standard, made a lot of formerly impossible things possible.

In early 2005, Chad Hurley, Steve Chen, and Jawed Karim launched Youtube.com. Initially, it was meant to be an online dating site, but that service failed. The site, however, had great technology for uploading and playing videos. It used Macromedia’s Flash, a new technology so similar to Shockwave that the company marketed it as Shockwave Flash. YouTube allowed anybody to upload videos up to ten minutes in length for free. It became so popular that Google bought it a year later for $1.65 billion.

All these technologies combined to provide ordinary people with the opportunity, however brief, to make an impact on popular culture. An early example was the All Your Base phenomenon. An animated GIF of an obscure, mistranslated Sega Genesis game inspired indie musicians The Laziest Men On Mars to create a song and distribute it as an MP3. The popular humor site somethingawful.com picked it up, and users in the Photoshop Friday forum thread created a series of humorous images to go along with the song. Then in 2001, the user Bad_CRC took the song and the best of the images and put them together in an animation they shared on Newgrounds. The animation gained such wide popularity that it was reported on by USA Today.

You have no chance to survive make your time.

Media goes social

In the early 2000s, most websites were either blogs or forums—and frequently both. Forums had multiple discussion boards, both general and specific. They often leaned into a specific hobby or interest, and anyone with that interest could join. There were also a handful of dating websites, like kiss.com (1994), match.com (1995), and eHarmony.com (2000), that specifically tried to connect people who might have a romantic interest in each other.

The Swedish Lunarstorm was one of the first social media websites. Credit: Jeremy Reimer

The road to social media was a hazy and confusing merging of these two types of websites. Classmates.com (1995) served as a way to connect with former school chums, and the following year, the Swedish site lunarstorm.com opened with this mission:

Everyone has their own website called Krypin. Each babe [this word is an accurate translation] has their own Krypin where she or he introduces themselves, posts their diaries and their favorite files, which can be anything from photos and their own songs to poems and other fun stuff. Every LunarStormer also has their own guestbook where you can write if you don’t really dare send a LunarEmail or complete a Friend Request.

In 1997, sixdegrees.com opened, based on the truism that everyone on earth is connected with six or fewer degrees of separation. Its About page said, “Our free networking services let you find the people you want to know through the people you already know.”

By the time friendster.com opened its doors in 2002, the concept of “friending” someone online was already well established, although it was still a niche activity. LinkedIn.com, launched the following year, used the excuse of business networking to encourage this behavior. But it was MySpace.com (2003) that was the first to gain significant traction.

MySpace was initially a Friendster clone written in just ten days by employees at eUniverse, an Internet marketing startup founded by Brad Greenspan. It became the company’s most successful product. MySpace combined the website-building ability of sites like GeoCities with social networking features. It took off incredibly quickly: in just three years, it surpassed Google as the most visited website in the United States. Hype around MySpace reached such a crescendo that Rupert Murdoch purchased it in 2005 for $580 million.

But a newcomer to the social media scene was about to destroy MySpace. Just as Google crushed its competitors, this startup won by providing a simpler, more functional, and less intrusive product. TheFaceBook.com began as Mark Zuckerberg and his college roommate’s attempt to replace their college’s online directory. Zuckerberg’s first student website, “Facemash,” had been created by breaking into Harvard’s network, and its sole feature was to provide “Hot or Not” comparisons of student photos. Facebook quickly spread to other universities, and in 2006 (after dropping the “the”), it was opened to the rest of the world.

“The” Facebook as it appeared in 2004. Credit: Jeremy Reimer

Facebook won the social networking wars by focusing on the rapid delivery of new features. The company’s slogan, “Move fast and break things,” encouraged this strategy. The most prominent feature, added in 2006, was the News Feed. It generated a list of posts, selected out of thousands of potential updates for each user based on who they followed and liked, and showed it on their front page. Combined with a technique called “infinite scrolling,” first invented for Microsoft’s Bing Image Search by Hugh E. Williams in 2005, it changed the way the web worked forever.

The algorithmically generated News Feed created new opportunities for Facebook to make profits. For example, businesses could boost posts for a fee, which would make them appear in news feeds more often. These blurred the lines between posts and ads.

Facebook was also successful in identifying up-and-coming social media sites and buying them out before they were able to pose a threat. This was made easier thanks to Onavo, a VPN that monitored its users’ activities and resold the data. Facebook acquired Onavo in 2013. It was shut down in 2019 due to continued controversy over the use of private data.

Social media transformed the Internet, drawing in millions of new users and starting a consolidation of website-visiting habits that continues to this day. But something else was about to happen that would shake the Internet to its core.

Don’t you people have phones?

For years, power users had experimented with getting the Internet on their handheld devices. IBM’s Simon phone, which came out in 1994, had both phone and PDA features. It could send and receive email. The Nokia 9000 Communicator, released in 1996, even had a primitive text-based web browser.

Later phones, like the BlackBerry 850 (1999), the Nokia 9210 (2001), and the Palm Treo (2002), added keyboards, color screens, and faster processors. In 1999, the Wireless Application Protocol (WAP) was released, which allowed mobile phones to receive and display simplified, phone-friendly pages using WML instead of the standard HTML markup language.

Browsing the web on phones was possible before modern smartphones, but it wasn’t easy. Credit: James Cridland (Flickr)

But despite their popularity with business users, these phones never broke into the mainstream. That all changed in 2007 when Steve Jobs got on stage and announced the iPhone. Now, every webpage could be viewed natively on the phone’s browser, and zooming into a section was as easy as pinching or double-tapping. The one exception was Flash, but a new HTML 5 standard promised to standardize advanced web features like animation and video playback.

Google quickly changed its Android prototype from a Blackberry clone to something more closely resembling the iPhone. Android’s open licensing structure allowed companies around the world to produce inexpensive smartphones. Even mid-range phones were still much cheaper than computers. This technology allowed, for the first time, the entire world to become connected through the Internet.

The exploding market of phone users also propelled the massive growth of social media companies like Facebook and Twitter. It was a lot easier now to snap a picture of a live event with your phone and post it instantly to the world. Optimists pointed to the remarkable events of the Arab Spring protests as proof that the Internet could help spread democracy and freedom. But governments around the world were just as eager to use these new tools, except their goals leaned more toward control and crushing dissent.

The backlash

Technology has always been a double-edged sword. But in recent years, public opinion about the Internet has shifted from being mostly positive to increasingly negative.

The combination of mobile phones, social media algorithms, and infinite scrolling led to the phenomenon of “doomscrolling,” where people spend hours every day reading “news” that is tuned for maximum engagement by provoking as many people as possible. The emotional toll caused by doomscrolling has been shown to cause real harm. Even more serious is the fallout from misinformation and hate speech, like the genocide in Myanmar that an Amnesty International report claims was amplified on Facebook.

As companies like Google, Amazon, and Facebook grew into near-monopolies, they inevitably lost sight of their original mission in favor of a never-ending quest for more money. The process, dubbed enshittification by Cory Doctorow, shifts the focus first from users to advertisers and then to shareholders.

Chasing these profits has fueled the rise of generative AI, which threatens to turn the entire Internet into a sea of soulless gray soup. Google now places AI summaries at the top of web searches, which reduce traffic to websites and often provide dangerous misinformation. But even if you ignore the AI summaries, the sites you find underneath may also be suspect. Once-trusted websites have laid off staff and replaced them with AI, generating an endless series of new articles written by nobody. A web where AIs comment on AI-generated Facebook posts that link to AI-generated articles, which are then AI-summarized by Google, seems inhuman and pointless.

A search for cute baby peacocks on Bing. Some of them are real, and some aren’t. Credit: Jeremy Reimer

Where from here?

The history of the Internet can be roughly divided into three phases. The first, from 1969 to 1990, was all about the inventors: people like Vint Cerf, Steve Crocker, and Robert Taylor. These folks were part of a small group of computer scientists who figured out how to get different types of computers to talk to each other and to other networks.

The next phase, from 1991 to 1999, was a whirlwind that was fueled by entrepreneurs, people like Jerry Yang and Jeff Bezos. They latched on to Tim Berners-Lee’s invention of the World Wide Web and created companies that lived entirely in this new digital landscape. This set off a manic phase of exponential growth and hype, which peaked in early 2001 and crashed a few months later.

The final phase, from 2000 through today, has primarily been about the users. New companies like Google and Facebook may have reaped the greatest financial rewards during this time, but none of their successes would have been possible without the contributions of ordinary people like you and me. Every time we typed something into a text box and hit the “Submit” button, we created a tiny piece of a giant web of content. Even the generative AIs that pretend to make new things today are merely regurgitating words, phrases, and pictures that were created and shared by people.

There is a growing sense of nostalgia today for the old Internet, when it felt like a place, and the joy of discovery was around every corner. “Using the old Internet felt like digging for treasure,” said YouTube commenter MySoftCrow. “Using the current Internet feels like getting buried alive.”

Ars community member MichaelHurd added his own thoughts: “I feel the same way. It feels to me like the core problem with the modern Internet is that websites want you to stay on them for as long as possible, but the World Wide Web is at its best when sites connect to each other and encourage people to move between them. That’s what hyperlinks are for!”

Despite all the doom surrounding the modern Internet, it remains largely open. Anyone can pay about $5 per month for a shared Linux server and create a personal website containing anything they can think of, using any software they like, even their own. And for the most part, anyone, on any device, anywhere in the world, can access that website.

Ultimately, the fate of the Internet depends on the actions of every one of us. That’s why I’m leaving the final words in this series of articles to you. What would your dream Internet of the future look and feel like? The comments section is open.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



Steam will wind down support for 32-bit Windows as that version of Windows fades

Though the 32-bit versions of Windows were widely used from the mid-90s all the way through to the early 2010s, this change is coming so late that it should only actually affect a statistically insignificant number of Steam users. Valve already pulled Steam support for all versions of Windows 7 and Windows 8 in January 2024, and 2021’s Windows 11 was the first in decades not to ship a 32-bit version. That leaves only the 32-bit version of Windows 10, which is old enough that it will stop getting security updates in either October 2025 or October 2026, depending on how you count it.

According to Steam Hardware Survey data from August, usage of the 32-bit version of Windows 10 (and any other 32-bit version of Windows) is so small that it’s lumped in with “other” on the page that tracks Windows version usage. All “other” versions of Windows combined represent roughly 0.05 percent of all Steam users. The 64-bit version of Windows 10 still runs on just over a third of all Steam-using Windows PCs, while the 64-bit version of Windows 11 accounts for just under two-thirds.

The change to the Steam client shouldn’t have any effects on game availability or compatibility. Any older 32-bit games that you can currently run in 64-bit versions of Windows will continue to work fine because, unlike modern macOS versions, new 64-bit versions of Windows still maintain compatibility with most 32-bit apps.



You’ll enjoy the Specialized Turbo Vado SL 2 6.0 Carbon even without assist


It’s an investment, certainly of money, but also in long, fast rides.

The Specialized Turbo Vado SL 2 6.0 Carbon. Credit: Specialized

Two things about the Specialized Turbo Vado SL 2 6.0 Carbon are hard to fathom: One is how light and lithe it feels as an e-bike, even with the battery off; the other is how hard it is to recite its full name when other riders ask you about the bike at stop lights and pit stops.

I’ve tested about a half-dozen e-bikes for Ars Technica. Each test period has included a ride with my regular group for about 30 miles. Nobody else in my group rides electric, so I try riding with no assist, at least part of the way. Usually I give up after a mile or two, realizing that most e-bikes are not designed for unpowered rides.

On the Carbon (as I’ll call it for the rest of this review), you can ride without power. At 35 pounds, it’s no gram-conscious road bike, but it feels lighter than that number implies. My daily ride is an aluminum-framed model with an internal geared hub that weighs about the same, so I might be a soft target. But it’s a remarkable thing to ride an e-bike that starts with a good unpowered ride and lets you build on that with power.

Once you actually crank up the juice, the Carbon is pretty great, too. Deciding whether this bike fits your riding goals is a lot tougher than using and enjoying it.

Specialized’s own system

It’s tough to compare this Carbon to other e-bikes because it uses hardly any of the same standard components as the others.

The 320-watt mid-drive motor is unique to Specialized models, as are its control system, handlebar display, charge ports, and software. On every other e-bike I’ve ridden, you can usually futz around with the controls or app or do some Internet searching to figure out a way to, say, turn off an always-on headlamp. On this Carbon, there isn’t one. You are riding with the lights on, because that’s how it was designed (likely with European regulations in mind).

The bottom half of the Carbon, with its just-powerful-enough mid-drive motor, charging port, bottle cages, and a range-extending battery. Watch your stance if you’ve got wide-ranging feet, like the author. Credit: Kevin Purdy

Specialized has also carved out a unique customer profile with this bike. It’s not the bike to get if you’re the type who likes to tinker, mod, or upgrade (or charge the battery outside the bike). It is the bike to get if you are the type who wants to absolutely wreck a decent commute, to power through some long climbs with electric confidence, or simply have a premier e-bike commute or exercise experience. It’s not an entirely exercise-minded carbon model, but it’s not a chill, throttle-based e-bike, either.

The ride

I spent probably a quarter as much time thinking about riding the Carbon as I did actually riding it. This bike costs a minimum of $6,000; where can you ride it and never let it out of your sight for even one moment? The Carbon offers Apple Find My tracker integration and has its own Turbo System Lock that kills the motor and (optionally) sets off lights and siren alarms when the bike is moved while disabled. That’s all good, but the Carbon remains a bike that demands full situational awareness, wherever you leave it.

The handlebar display on the Carbon. There are a few modes, but this is the relative display density: big numbers, basic information, refer to the phone app if you want more. Credit: Kevin Purdy

You unlock the bike with either the Specialized smartphone app or a PIN code, entered with an up/down/press switch. The 2.1-inch screen only has a few display options but can provide the basics (speed, pedal cadence, wattage, gear/assist levels), or, if you dig into Specialized’s app and training programs and connect ANT+ gear, your heart rate and effort.

Once you’re done plotting, unlocking, and data-picking, you can ride the Carbon and feel its real value. Specialized, a company that seems deeply committed to version control, claims that the Future Shock 3.2 front suspension on this 6.0 Carbon reduces impact by 53 percent or more versus a bike with no suspension. Thanks to that suspension, the 47 mm knobby tires, and the TRP hydraulic disc brakes, I had no trouble switching from road to gravel, taking grassy shortcuts, hopping off standard curbs, or facing down city streets with inconsistent upkeep.

I’ve been spoiled by the automatic assist available on Bosch mid-drive motors. The next best thing is probably something like the Shimano Deore XT/SLX shifters on this Carbon, paired with the power monitoring. The 12-speed system, with a 10-51t cassette range, shifted at the speed of thought. The handlebar display gives you a color-coded cue for when you should shift up or down, based on your cadence and wattage output.

The controls for the Carbon’s display and power are just this little switch, with three places to press and an up/down toggle. Sometimes I thought it was clever and efficient; other times, I wished I had picked a simpler unlock code. Credit: Kevin Purdy

The battery range, as reported by Specialized, is “up to 5 hours,” a number that few people are going to verify. It’s a 520-watt-hour battery in a 48-volt system that can turn out a rated 320 watts of power. You can adjust the output of all three assist levels in the Specialized app. And you can buy a $450 water-bottle-sized range-extender battery that adds another 160 Wh to your system if you sacrifice a bottle cage (leaving two others).

But nobody should ride this bike, or its cousins, like a juice miser on a cargo run. This bike is meant to move, whether to speed through a commute, push an exercise ride a bit farther, or tackle that one hill that ruins your otherwise enjoyable route. The Carbon felt good on straightaways, on curves, starting from a dead stop, and pretty much whenever I was in the zone, forgetting about the bike itself and just pedaling.

I don’t have many points of comparison, because most e-bikes that cost this much are bulky, intensely powerful, or haul a lot of cargo. The Carbon and its many cousins that Specialized sells cost more because they take things away from your ride: weight, frame, and complex systems. The Carbon provides a rack, lights, three bottle cages, and mounting points, so it can do more than just boost your ride. But that’s what it does better than most e-bikes out there: provide an agile, lightweight athletic ride, upgraded with a balanced amount of battery power and weight to make that ride go faster or farther.

The handlebar, fork, and wiring on the front of the Carbon. Credit: Kevin Purdy

Always room to improve

I’ve said only nice things about this $6,000 bike, so allow me to pick a few nits. I’ve got big feet (size 12 wide) and a somewhat sloppy pedal position when I’m not using clips. Using the bottle-sized battery, with its plug on the side of the downtube, led to a couple of fat-footed disconnections while riding. When the Carbon notices that even its supplemental battery has disconnected, it locks out its display system; I had to enter a PIN code and re-plug the battery to get going again. This probably won’t be an issue for most people, but it’s worth noting if you’re looking at that battery as a range solution.

The on-board display and system seem a bit underdeveloped for the bike’s cost, too. Having a switch with three controls (up, down, push-in) makes navigating menus and customizing information tiresome. You can see Specialized pushing you to the smartphone for deeper data and configuration and keeping control space on the handlebars to a minimum. But I’ve found the display and configuration systems on many cheaper bikes more helpful and intuitive.

The Specialized Turbo Vado SL 2 6.0 Carbon (whew!) provided some of the most enjoyable rides I could imagine out of a bike I had no intention of keeping. It’s an investment, certainly of money, but also in long, fast rides, whether to get somewhere or nowhere in particular. Maybe you want more battery range, more utility, or more rugged and raw power for the price. But it is hard to beat this bike in the particular race it is running.



Your very own humane interface: Try Jef Raskin’s ideas at home


Use the magic of emulation to see a different kind of computer design.

Canon Cat keyboard close-up. Credit: Cameron Kaiser

In our earlier article about Macintosh project creator Jef Raskin, we looked at his quest for the humane computer, one that was efficient, consistent, useful, and above all else, respectful and adaptable to the natural frailties of humans. From Raskin’s early work on the Apple Macintosh to the Canon Cat and later his unique software implementations, you were guaranteed an interface you could sit down and interact with nearly instantly and—once you’d learned some basic keystrokes and rules—one you could be rapidly productive with.

But no modern computer implements his designs directly, even though some are based on principles he either espoused or outright pioneered. Fortunately, with a little work and the magic of emulation, you can have your very own humane interface at home and see for yourself what computing might have been had we traveled a little further down Raskin’s UI road.

You don’t need to feed a virtual Cat

Perhaps the most straightforward of Raskin’s systems to emulate is the Canon Cat. Sold by Canon as an overgrown word processor (billed as a “work processor”), it purported to be a simple editor for office work but is actually a full Motorola 68000-based computer programmable through an intentional backdoor in its own dialect of Forth. It uses a single workspace saved en masse to floppy disk that can be subdivided into multiple “documents” and jumped to quickly with key combinations, and it includes facilities for simple spreadsheets and lists.

The Cat is certainly Jef Raskin’s most famous system after the early Macintosh, and it’s most notable for its exclusive use of the keyboard for interaction—there is no mouse or pointing device of any kind. It is supported by MAME, the well-known multi-system emulator, using ROMs available from the Internet Archive.

Note that the MAME driver for the Canon Cat is presently incomplete; it doesn’t support a floppy drive or floppy disk images, and it doesn’t support the machine’s built-in serial port. Still, this is more than enough to get the flavor of how it operates, and the Internet Archive manual includes copious documentation.

There is also a MAME bug with the Cat’s beeper where if the emulated Cat makes a beep (or at least attempts to), it will freeze until it’s reset. To work around that, you need to make the Cat not beep, which requires a trip to its setup screen. On most systems, the Cat USE FRONT key is mapped to Control, and the Cat’s two famous pink LEAP keys are mapped to Alt or Option. Hold down USE FRONT and press the left brace key, which is mapped to SETUP, then release SETUP but keep USE FRONT/Control down.

The first screen appears; we want the second, so tap SETUP again with USE FRONT/Control still down. Keeping USE FRONT/Control held, tap the space bar repeatedly to cycle through the options until you reach the “Problem signal” option, then tap one of the LEAP keys until it is set to “Flash” (i.e., the no-beep option). For style points, do the same basic operations to set the keyboard type to ASCII, which works better in MAME. When you’re all done, you can release USE FRONT and experiment.

Getting around with the Cat requires knowing which keys do what, though once you’ve learned that, they never change. To enter text, just type. There are no cursor keys and no mouse; all motion is by leaping—that is, holding down either LEAP key and typing something to search for. Single taps of either LEAP key “creep” you forward or back by a single character.

Special control sequences are executed by holding down USE FRONT and pressing one of the keys marked with a blue function (like we did for the setup menu). The most important of these is USE FRONT-HELP (the N key), which explains errors when the Cat “beeps” (here, flashes its screen), or if you release the N key but keep USE FRONT down, you can press another key to find out what it does.

You can also break into the hidden Forth interpreter by typing Enable Forth Language, highlighting it (i.e., immediately press both LEAP keys together), and then evaluating it with USE FRONT-ANSWER (not CALC; usually Control-Backspace in MAME). You’ll get a Forth ok prompt, and the system is now yours. Remember, it’s Forth, and Forth has dragons. Reset the Cat or type re to return to the editor. With Forth on, you can also highlight Forth code in your document and press USE FRONT-ANSWER to execute it and place the answer in your document.

The Internet Archive page has full documentation, and the Cat’s manual is easy to follow, but sadly, the MAME driver doesn’t yet offer you a way to save your document to disk or upload it somewhere.

A SwyftCard shows you swyftcare

Prior to the Cat’s development, however, Raskin’s backers had prevailed upon the company to release some aspects of the technology to raise cash, and as we discussed in the prior article, this initiative yielded the SwyftCard for the Apple IIe. The SwyftCard, like the later Cat, uses an editor on a single subdivided workspace as the core interface, but unlike the Cat, it was openly programmable, including in Applesoft BASIC. It also defines LEAP and USE FRONT keys (and stickers to mark them) and features an exclusively keyboard-driven interface. Being a relatively simple card and floppy disk combination, the package is not particularly difficult to reproduce, and some users have created clone cards with EPROMs and banking logic as historical re-creations.

That said, nowadays, the simplest means of experimenting with a SwyftCard is by using a software implementation developed by Eric Rangell for KansasFest 2021. This version loads the contents of the original 16K EPROM into high auxiliary RAM not used by the SwyftCard firmware and executes it from there. It is effectively a modern equivalent of the SwyftDisk, a software-only version IAI later sold for the Apple IIc that lacks additional expansion slots.

You can download Rangell’s software with ready-to-use disk images and media assets from the Internet Archive, with the user manual available separately. It should work in most Apple IIe emulators with at most minor adjustments; here, I tested it with Mariani, a macOS port of AppleWin, and Virtual ][. Make sure your emulator is configured for a IIe (enhanced is recommended) with an 80-column card and at least one floppy controller and drive in the standard slot 6. It should work with a IIc as well, but as of this writing, it does not work with the IIgs or II+. Also make sure you are running the system at Apple’s standard ~1MHz clock speed, as the software is somewhat timing-sensitive.

Booting up the SwyftCard. Credit: Cameron Kaiser

Start the emulated IIe with the disk image named SwyftCardResurrected.do. This is a standard ProDOS disk used to load the ROM’s contents into memory. At the menu, select option 1, and the SwyftCard ROM image will load from disk. When prompted, unmount the first disk image, change to the one named SwyftWare_-_SwyftCard_Tutorial.woz, and then press RETURN. These disk images are based on build 1066 of the IIe SwyftWare; later versions up to at least 1131 are known to exist.

The SwyftCard and SwyftDisk both came with a set of sticky labels to apply to your keys, marking the two LEAP keys (Open and Closed Apple), ESCape, LEAP AGAIN (TAB), USE FRONT (Control), and then the five functions accessed by USE FRONT: INSERT (A), SEND (D), CALC (G), DISK (L), and PRINT (N). In Mariani, Open Apple and Closed Apple map to Left and Right Option, which are LEAP BACK and LEAP FORWARD, respectively. In Virtual ][, press F5 to pass the Command key through to the emulated Apple, then use either Command key as LEAP BACK and either Option key as LEAP FORWARD. For regular AppleWin on a PC keyboard, use the Windows keys. All of these emulators use Control for USE FRONT.

The initial SwyftCard tutorial page. Credit: Cameron Kaiser

The tutorial begins by orienting you to the LEAP keys (i.e., the two Apple keys) and how to get around in the document. Unlike the original Swyft, the Apple II SwyftCard does not use the bitmap display and appears strictly in 80-column non-proportional text.

The bar at the top contains the page number, which starts at zero. A row of equals signs marks a hard page break, entered explicitly with the ESCape key; these breaks divide the workspace into “subdocuments.” Hard breaks may make pages as short as you desire, but after 54 printed lines, the editor will automatically insert a soft page break, marked with dashes instead. Although up to 200 pages were supported, in practice, the available workspace limits you to about 15 or 20, “densely typed.”

Leaping to the next screen. Credit: Cameron Kaiser

You can jump to each of the help screens either directly by number (hold down the appropriate LEAP key and type the number, then release the keys) or by holding down the LEAP key, pressing the equals sign three times, and releasing the keys. These key combinations search forward and backward for the text you entered. Once you’ve leaped once, you can LEAP AGAIN in either direction to the next occurrence by holding down the appropriate LEAP key and pressing the TAB key.

You can of course leap to any arbitrary text in either direction as well, but you can also leap to the next or prior hard page break (subdocument) by holding down LEAP and pressing ESC, or even leap to hard line breaks with LEAP and RETURN. Raskin was explicit that the keys be released after the operation as a mental reminder that you are no longer leaping, so make sure to release all keys fully before your next leap.

You can also creep forward or back a single character at a time with individual taps of the LEAP keys.

The two-tone cursor. Credit: Cameron Kaiser

Swyft and the SwyftCard implemented a two-phased cursor, which the SwyftCard calls either “wide” or “narrow.” By default, the cursor is “narrow,” alternating between a solid and a partially filled block. As you type, the cursor splits into a “wide” form—any text shown in inverse, usually the last character you entered, is what is removed when you press DELETE (Mariani doesn’t seem to implement this fully, but it works in Virtual ][ and standard AppleWin), with the blinking portion after the inverse text indicating the insertion point. When you creep or leap, the cursor merges back into the “narrow” form. When the cursor is narrow, DELETE removes the character to the right, acting as a true delete rather than a backspace.

If you press both LEAP keys together, they will select a range. If you were typing text, then what you just typed becomes selected. Since it appears in inverse, DELETE will remove it. You can also select a previous range by LEAPing to the beginning, LEAPing to the end, and pressing both together. Once deleted, you can insert it elsewhere with USE FRONT-INSERT (Control-A), and you can do so repeatedly to make multiple copies.

Programming in SwyftCard. Credit: Cameron Kaiser

If you start the SwyftCard program but leave the disk drive empty when entering the editor, you get a blank workspace. Not only can you type text into it, but you can also enter expressions and have the editor evaluate them, even full Applesoft BASIC programs. For example, we asked it to PRINT 355/113 by highlighting the expression and pressing USE FRONT-CALC (Control-G; this doesn’t currently work in Mariani either). After that, we entered an Applesoft BASIC program ending with RUN so that it could be executed. If you highlight this block and press USE FRONT-CALC:

The result of our SwyftCard program. Credit: Cameron Kaiser

…you get this colorful display in the Apple low-resolution graphics mode. (Notice that the program lines can be entered in any order.) Our program waits for any key and then returns to the editor. While the original Swyft offered programming in Forth, the SwyftCard uses BASIC, which most Apple II owners would have already known well.

Finally, to save your work to disk, you can insert a blank disk and press USE FRONT-DISK (Control-L). The editor will save the workspace to the disk, marking it with a unique identifier, and it keeps track of the identifiers of what’s in memory and what’s on the disk to prevent you from inadvertently overwriting another previously saved workspace with this one. You can’t save a different workspace over a previously written disk without making an explicit CALL in Applesoft BASIC to the editor to erase it. Highlighted text, however, can be transferred between disks, allowing you to cut and paste between workspaces.

Although we can’t effectively demonstrate serial communications here, USE FRONT-SEND (Control-D) sends whatever is highlighted over the serial port, and any data received on the serial port is automatically incorporated into the workspace, both at 300 baud. Eric Rangell’s YouTube demonstration shows the process in action.

Human beings deserve a Humane Environment

In the prior article, we also discussed Raskin’s software projects, including the last one he worked on before his death in 2005.

In 2002, Raskin, along with his son Aza and the rest of the development team, built a software implementation of his interface ideas called The Humane Environment. As before, it was centered on a core single-workspace editor initially called the Humane Editor and, in its earliest incarnation, was developed for the classic Mac OS.

These early builds of the Humane Editor will run under Classic on any Mac OS X-capable Power Mac or natively in Mac OS 9 and include runnable binaries, the Python and C source code, and the CodeWarrior projects necessary to build them. (Later systems should be able to run them with SheepShaver or QEMU. I recommend installing at least Mac OS 9.0.4, and preferably Mac OS 9.2.2.) They are particularly advantageous in that they are fully self-contained and don’t need a separate standalone Python interpreter. Here, we’ll be using my trusty 1.33GHz iBook G4 in Mac OS X Tiger 10.4.11 with Mac OS 9.2.2 in Classic.

The build we’ll demonstrate is the last one available in the SourceForge CVS, modified on September 25, 2003. An earlier version is available as a StuffIt archive in the Files section, though not all of what we’ll show here may apply to it. If you attempt to download the tree with a regular CVS client, however, you’ll find that most of the files are BinHexed to preserve their resource forks; it’s a classic Mac application, after all. You can manually correct this, but an easier way is to use a native old-school MacCVS client, which still works with SourceForge (the connection is unencrypted) and automatically decodes the resource forks for you. For this, we’ll use MacCVS 3.2b8, which is Carbonized and runs natively in PowerPC OS X.

Downloading THE with MacCVS. Credit: Cameron Kaiser

When starting MacCVS, it’s immaterial what you set the default preferences to because in the command sheet, we’ll enter a full command line: cvs -z3 -d:pserver:anonymous@a.cvs.sourceforge.net:/cvsroot/humane co -P HumaneEditorProject
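For the record: -z3 applies moderate compression to the transfer, -d specifies the anonymous pserver repository to connect to, co is the checkout command, and -P prunes empty directories from the downloaded tree.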

The tree will then download (this may take a minute or two).

THE folder after downloading. Credit: Cameron Kaiser

You should now have a new folder called HumaneEditorProject in the same folder as the CVS client. Go into that and find the folder named bin, which contains the main application HumaneEnvironment. Assuming you did the CVS step right, the application will have an icon of General Halftrack from the Beetle Bailey comic strip (which is to say, even a clod like General Halftrack can use this editor). Before starting it up, create a new folder called Saved States in the same folder with HumaneEnvironment, or you’ll get weird errors while using it.

Double-click HumaneEnvironment to start the application. Initially, a window will flash open and then close. If you’re running THE under Classic, as I am here (so that I can more easily take screengrabs), it may switch to another application, so switch back to it.

Starting the Humane Editor. Credit: Cameron Kaiser

In HumaneEnvironment, press Command-N for a new document. Here, we’ll create an “untitled” file in the Documents folder. Notice that in this very early version, there were still “files,” and they were still accessed through the regular Macintosh Standard File package.

Default document. Credit: Cameron Kaiser

Here is the default document (I’ve zoomed the window to take up the whole screen). Backtick characters separate documents. The familiar two-tone cursor we saw with the Cat and SwyftCard, and discussed at length in the prior article, is also maintained. However, although font sizes, boldface, italic, and underlining were supported, colors and font sizes were still selected through traditional Mac pulldown menus in this version.

Leaping, here with a trademark, is again front and center in THE. However, instead of dedicated keys, leaping is subsumed into THE’s internal command line termed the Humane Quasimode. The Quasimode is activated by pressing SHIFT-SPACE, keeping SHIFT down, and then pressing < or > to leap back or forward, followed by the text (case insensitive) or characters. Backticks, spaces, and line terminators (RETURN) can all be leapt to. Notice that the prompt is displayed as translucent text over the work area; no ineffective single-option modal dialogue boxes died to bring you these Death Star plans.

Similarly, tasks such as selection (the S command) are done in the Quasimode instead of pressing both leap keys together.

The Deletion Document. Credit: Cameron Kaiser

When text is deleted, either by backspacing over it or pressing DELETE with a selected region, it goes to an automatically created and maintained “DELETION DOCUMENT” from which it can be rescued. (Deleting from the deletion document just deletes.) The Undo operation does not function properly in this early build, so the easiest way to rescue accidentally deleted text is from the deletion document. It is saved with the file just like any other document in the workspace, and several of the documentation files, obviously created with THE, have deletion documents at the end.

Command listing. Credit: Cameron Kaiser

A full list of commands accepted by the Quasimode is available by typing COMMANDS, which emits the list into the document. The commands are based on Python files precompiled from .hpy sources (“Humane Python”), which you can modify and recompile on the fly using COMPILE. There is also a startup.py that you can alter to set up your environment the way you want at launch. Like COMPILE, several commands are explicitly marked as for developers only or not yet working.

Interestingly, typical key combinations like Command-C and Command-V for copy and paste are handled here as commands.

The CALC command can turn a Python-compatible expression into text containing the result, though unlike on the Cat, the result can’t be edited later to change the underlying expression. However, the original text of the expression goes to the deletion document, so it can be recovered and edited if necessary. A possible bug in this release is that the CALC command fails to compute anything if the end-of-line delimiter was part of the selected text.

Similarly, the RUN command takes the output of a block of Python code and puts it into your document in the same way. Notice that the code is not removed as it is with the CALC command, facilitating repeated execution. Embedded Python code was expected to be indented by two fixed leading spaces so that it would stand out as executable text—Python code that is not indented won’t execute, and the RUN command won’t raise an error, either. Special INDENT and UNINDENT commands make the indenting process less tedious.
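To make that concrete, here is a minimal sketch of my own rather than an example from THE’s documentation. A snippet like the following, typed into the workspace with the required two leading spaces on each line, selected, and submitted with RUN, should drop its printed output into the document (note the Python 2-era print statement, matching the interpreter these builds embed):

  # Indented two spaces so THE treats it as executable text.
  import math
  print "circumference:", 2 * math.pi * 10

Because RUN leaves the code in place, you can adjust the numbers and execute it again.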

Subsequent builds migrated to Windows under the name “Archy,” a nod not only to Don Marquis’ literary insect but also to the Raskin Center for Humane Interfaces, which, of course, is abbreviated RCHI. To date, Archy remains unfinished, and the easiest example to run is the final build 124, dated December 15, 2005, available for Windows 98 and up. The build includes its own embedded Python interpreter, libraries, and support files, and, as a well-behaved 32-bit application, it will run on pretty much any modern Windows PC. Here, I’m running it on Windows 11 22H2.

The Archy build 124 installer. Credit: Cameron Kaiser

The program comes as a formal installer and needs no special privileges. An uninstaller is also provided. Although it’s possible to get Python sources from the same page for other systems, the last available source tarball is build 115, which predates some of the later Windows-specific changes to various components. If you want to try running the Python code on Mac or Linux, you will need at least Python 2.3 (but not Python 3.x), a compatible version of Pygame 1.6 or better, and their prerequisites.

The initial Archy window. Credit: Cameron Kaiser

To start it up, double-click the Archy executable in the installed folder, and the default document will appear. Annoyingly, Archy’s window cannot be resized or maximized, at least not on my system, so the window here is as big as you get. Archy’s default font is no longer monospace, and size and color are fully controllable from within the editor. There are also special control characters used to display the key icons. The document separator is still entered with the backtick but is translated into its own control character.

Entering an Archy command for one of the examples. Credit: Cameron Kaiser

The default document has grown substantially since the THE era and now includes multiple example tutorials. These are accessed through Archy’s own command mode, which is entered by holding down CAPS LOCK and typing the command. Here, for the first example, we start typing EX1 and notice that visual command completion is now available. Release CAPS LOCK, and the suggested command is used.

Archy presents Archy, with an animated keyboard and voiceover. Credit: Cameron Kaiser

Archy tutorials are actually narrated with voiceovers, plus on-screen animated typing and keyboard. There are six of them in all. They are not part of your regular document, and your workspace returns when you press a key.

Leaping in Archy. Credit: Cameron Kaiser

The awkward multi-step leap command of THE has been replaced once again with dedicated leap keys, in this case Left and Right Alt, going back to the SwyftCard and Cat. Selection is likewise done by pressing both leap keys. A key advancement here is that any text that will be selected, if you choose to select it, is highlighted beforehand in a light shade of yellow, so you no longer have to remember where your ranges were.

A list of commands in Archy. Credit: Cameron Kaiser

The COMMANDS verb gives you a list of commands (notice that Archy has acquired a concept of locked text, normally on a black background, and my attempt to type there brought me automatically to somewhere I actually could type). While THE’s available command suite was almost entirely specific to an editor application, Archy’s aspirations as a more complete all-purpose environment are evident. In particular, in addition to many of the same commands we saw on the Mac, there are now special Internet-oriented commands like EMAIL and GOOGLE.

How commands in Archy are constructed. Credit: Cameron Kaiser

Unlike THE, where you had to edit them separately, commands in Archy are actually small documents containing Python snippets embedded in the same workspace, and Archy’s API is much more complete. Here is the GOOGLE command, which takes whatever text you have selected and turns it into a Google search in your default browser. In the other commands displayed here, you can also see how the API allows you to get and delete selected text, then insert or modify it.

Creating a new command in Archy. Credit: Cameron Kaiser

Here, we’ll take the LEAP command itself (which you can change, too!), select and copy it, and then use it as a template for a new one called TEST. This one will display a message to the user and insert a fixed string into the buffer. The command is ready right away; there is no need to restart the editor. We can immediately call it—its name is already part of command completion—and run it.
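To give a sense of the shape of such a command, here is a purely hypothetical sketch; show_message and insert_text are placeholder names of my own invention, standing in for whatever the real Archy API calls are:

  # Hypothetical sketch only: show_message and insert_text are
  # placeholder names, not Archy's actual API.
  def execute():
      # Display a message to the user...
      show_message("TEST ran successfully")
      # ...then insert a fixed string at the cursor.
      insert_text("Hello from TEST")

As the GOOGLE example above suggests, the real command bodies follow this same pattern: a short piece of Python that calls into the editor’s API to read the selection, show feedback, or insert text.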

There are many such subsections and subdocuments. Besides the deletion document (now just called “DELETIONS”), your email is a document, your email server settings are a document, there is a document for formal Python modules which other commands can import, and there are several help documents. Each time you exit Archy, the entire workspace with all your commands, context, and settings is saved as a text file in the Archy folder with a new version number so you can go back to an old copy if you really screw up.

Every cul-de-sac ends

Although these are functional examples, and some of their ideas were used (however briefly) in later products, we’ve yet to see them make a major return to modern platforms—but you can read all about that in the main article. Meanwhile, these emulations and re-creations give you a taste of what might have been, and of what it could take to make today’s increasingly locked-down computing devices more humane.

Sadly, I think a lot of us would argue that they’re going the wrong way.



Nvidia will invest $5 billion in Intel, co-develop new server and PC chips


Intel once considered buying Nvidia outright, but its fortunes have shifted.

In a major collaboration that would have been hard to imagine just a few years ago, Nvidia announced today that it was buying a total of $5 billion in Intel stock, giving Intel’s competitor ownership of roughly 4 percent of the company. In addition to the investment, the two companies said that they would be co-developing “multiple generations of custom data center and PC products.”

“The companies will focus on seamlessly connecting NVIDIA and Intel architectures using NVIDIA NVLink,” reads Nvidia’s press release, “integrating the strengths of NVIDIA’s AI and accelerated computing with Intel’s leading CPU technologies and x86 ecosystem to deliver cutting-edge solutions for customers.”

The data center chips apparently won’t combine the two companies’ technologies; instead, they will be custom x86 chips that Intel builds to Nvidia’s specifications. Nvidia will “integrate [the CPUs] into its AI infrastructure platforms and offer [them] to the market.”

On the consumer side, Intel plans to build x86 SoCs that integrate both Intel CPUs and Nvidia RTX GPU chiplets—Intel’s current products use graphics chiplets based on its own Arc products. More tightly integrated chips could make for smaller gaming laptops, and could give Nvidia a way to get into handheld gaming PCs like the Steam Deck or ROG Xbox Ally.

It takes a while to design, test, and mass-produce new processor designs, so it will likely be a couple of years before we see any of the fruits of this collaboration. But even the announcement highlights just how far the balance of power between the two companies has shifted in the last few years.

A dramatic reversal

Back in 2005, Intel considered buying Nvidia outright for “as much as $20 billion,” according to The New York Times. At the time, Nvidia was known almost exclusively for its GeForce consumer graphics chips, and Intel was nearing the launch of its Core and Core 2 chips, which would manage to win Apple’s business and set it up for a decade of near-total dominance in consumer PCs and servers.

But in recent years, Nvidia’s income and market capitalization have soared on the strength of its data center chips, which have powered most of the AI features that tech companies have been racing to build into their products. And Intel’s recent struggles are well-documented—the company has failed for years to improve its chip manufacturing capabilities at the same pace as competitors like TSMC, and a yearslong effort to convince other chip designers to use Intel’s factories to build their chips has yielded one ousted CEO and not much else.

The two companies’ announcement comes one day after China banned the sale of Nvidia’s AI chips, including products that Nvidia had designed specifically for China to get around US-imposed performance-based export controls. China is pushing domestic chipmakers like Huawei and Cambricon to put out their own AI accelerators to compete with Nvidia’s.

Correlation isn’t causation, and it’s unlikely that Intel and Nvidia could have thrown together a $5 billion deal and product collaboration in the space of less than 24 hours. But Nvidia could be looking to prop up US-based chip manufacturing as a counterweight to China’s actions.

There are domestic political considerations for Nvidia, too. The Trump administration announced plans to take a 10 percent stake in Intel last month, and Nvidia CEO Jensen Huang has worked to curry favor with the administration by making appearances at $1 million-per-plate dinners at Trump’s Mar-a-Lago club and promising to invest billions in US-based data centers.

Although the US government’s investment in Intel hasn’t gotten it seats on the company’s board, the investment comes with possible significant downsides for Intel, including disruptions to the company’s business outside the US and limits on its eligibility for future government grants. Trump and his administration could also decide to alter the deal for any or no reason—Trump was calling for Intel CEO Lip-Bu Tan’s resignation over alleged Chinese Communist Party ties just days before deciding to invest in the company instead. Investing in a sometime-competitor may be a small price for Nvidia and Huang to pay if it means avoiding the administration’s ire.

Outstanding questions abound

Combining Intel CPUs and Nvidia GPUs makes a lot of sense for certain kinds of products—the two companies’ chips already coexist in millions of gaming desktops and laptops. Being able to make custom SoCs that combine Intel’s and Nvidia’s technology could make for smaller and more power-efficient gaming PCs. It could also provide a counterbalance to AMD, whose willingness to build semi-custom x86-based SoCs has earned the company most of the emerging market for Steam Deck-esque handheld gaming PCs, plus multiple generations of PlayStation and Xbox console hardware.

But there are more than a few places where Intel’s and Nvidia’s products compete, and at this early date, it’s unclear what will happen to the areas of overlap.

Future Intel CPUs could use an Nvidia-designed graphics chiplet instead of one of Intel’s GPUs. Credit: Intel

For example, Intel has been developing its own graphics products for decades—historically, these have mostly been lower-performance integrated GPUs whose only job is to connect to a couple of monitors and encode and decode video, but more recent Arc-branded dedicated graphics cards and integrated GPUs have been more of a direct challenge to some of Nvidia’s lower-end products.

Intel told Ars that the company “will continue to have GPU product offerings,” which means that it will likely continue developing Arc and its underlying Intel Xe GPU architecture. But that could mean that Intel will focus on low-end, low-power GPUs and leave higher-end products to Nvidia. Intel has been happy to discard money-losing side projects in recent years, and dedicated Arc GPUs have struggled to make much of a dent in the GPU market.

On the software side, Intel has been pushing its own oneAPI graphics compute stack as an alternative to Nvidia’s CUDA and AMD’s ROCm, and has provided code to help migrate CUDA projects to oneAPI. And there’s a whole range of plausible outcomes here: Nvidia allowing Intel GPUs to run CUDA code, either directly or through some kind of translation layer; Nvidia contributing to oneAPI, which is an open source platform; or oneAPI fading away entirely.

On Nvidia’s side, we’ve already mentioned that the company offers some Arm-based CPUs—these are available in the Project DIGITS AI computer, Nvidia’s automotive products, and the Nintendo Switch and Switch 2. But rumors have indicated for some time now that Nvidia is working with MediaTek to create Arm-based chips for Windows PCs, which would compete not just with Intel’s and AMD’s x86 chips but also with Qualcomm’s Snapdragon X-series processors. Will Nvidia continue to push forward on this project, or will it leave this as-yet-unannounced chip unannounced to shore up its new investment in the x86 instruction set?

Finally, there’s the question of where these chips will be built. Nvidia’s current chips are manufactured mostly at TSMC, though it has used Samsung’s factories as recently as the RTX 3000 series. Intel also uses TSMC to build some chips, including its current top-end laptop and desktop processors, but it uses its own factories to build its server chips, and plans to bring its next-generation consumer chips back in-house.

Will Nvidia start to manufacture some of its chips on Intel’s 18A manufacturing process, or another process on Intel’s roadmap? Will the combined Intel and Nvidia chips be manufactured by Intel, or will they be built externally at TSMC, or using some combination of the two? (Nvidia has already said that Intel’s SoCs will integrate Nvidia GPU chiplets, so it’s likely that Intel will continue using its Foveros packaging technology to combine multiple bits of silicon into a single chip.)

A vote of confidence from Nvidia would be a big shot in the arm for Intel’s foundry, which has reportedly struggled to find major customers—but it’s hard to see Nvidia doing it if Intel’s manufacturing processes can’t compete with TSMC’s on performance or power consumption, or if Intel can’t manufacture chips in the volumes that Nvidia would need.

We’ve posed all of these questions to both Intel and Nvidia. This early, it’s unlikely that either company wants to commit to any plans beyond the broad, vague collaborations that were part of this morning’s announcement. But we’ll update this article if we can shake any other details loose. Nvidia CEO Huang and Intel CEO Tan will also give a joint press conference at 1 pm ET today, where they may address these and other questions.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Meta’s $799 Ray-Ban Display is the company’s first big step from VR to AR

Zuckerberg also showed how the neural interface can be used to compose messages (on WhatsApp, Messenger, Instagram, or via a connected phone’s messaging apps) by following your mimed “handwriting” across a flat surface. Though this feature reportedly won’t be available at launch, Zuckerberg said he had gotten up to “about 30 words per minute” in this silent input mode.

The most impressive part of Zuckerberg’s on-stage demo that will actually be available at launch was probably a “live caption” feature that automatically types out the words your conversation partner is saying in real time. The feature reportedly filters out background noise to focus on captioning just the person you’re looking at, too.

A Meta video demos how live captioning works on the Ray-Ban Display (though the field of view on the actual glasses is likely much more limited). Credit: Meta

Beyond those “gee whiz” kinds of features, the Meta Ray-Ban Display can basically mirror a small subset of your smartphone’s apps on its floating display. Being able to get turn-by-turn directions or see recipe steps on the glasses without having to glance down at a phone feels like a genuinely useful new interaction mode. Using the glasses display as a viewfinder to line up a photo or video (using the built-in 12-megapixel, 3x-zoom camera) also seems like an improvement over previous display-free smartglasses.

But accessing basic apps like weather, reminders, calendar, and emails on your tiny glasses display strikes us as probably less convenient than just glancing at your phone. And hosting video calls via the glasses by necessity forces your partner to see what you’re seeing via the outward-facing camera, rather than seeing your actual face.

Meta also showed off some pie-in-the-sky video about how future “Agentic AI” integration would be able to automatically make suggestions and note follow-up tasks based on what you see and hear while wearing the glasses. For now, though, the device represents what Zuckerberg called “the next chapter in the exciting story of the future of computing,” which should serve to take focus away from the failed VR-based metaverse that was the company’s last “future of computing.”



Report: Apple inches closer to releasing an OLED touchscreen MacBook Pro

At multiple points over many years, Apple executives have taken great pains to point out that they think touchscreen Macs are a silly idea. But the touchscreen Mac remains one of those persistent rumors that crops up every couple of years, from sources reliable enough that they shouldn’t be dismissed out of hand.

Today’s contribution comes from supply chain analyst Ming-Chi Kuo, who usually has some insight into what Apple is testing and manufacturing. Kuo says that touchscreen MacBook Pros are “expected to enter mass production by late 2026,” and that the devices will also shift to using OLED display panels instead of the Mini LED panels on current-generation MacBook Pros.

Kuo says that Apple’s interest in touchscreen Macs comes from “long-term observation of iPad user behavior.” Apple’s tablet hardware launches in the last few years have also included keyboard and touchpad accessories, and this year’s iPadOS 26 update in particular has helped to blur the line between the touch-first iPad and the keyboard-and-pointer-first Mac. In other words, Apple has already acknowledged that both kinds of input can be useful when combined in the same device; taking that same jump on the Mac feels like a natural continuation of work Apple is already doing.

Touchscreens became much more common on Windows PCs starting in 2012 when Windows 8 was released, itself a response to Apple’s introduction of the iPad a couple of years before. Microsoft backed off on almost all of Windows 8’s design decisions in the following years after the dramatic UI shift proved unpopular with traditional mouse-and-keyboard users, but touchscreen PCs like Microsoft’s Surface lineup have persisted even as the software has changed.



iOS 26 review: A practical, yet playful, update


More than just Liquid Glass

Spotlighting the most helpful new features of iOS 26.

The new Clear icon look in iOS 26 can make it hard to identify apps, since they’re all the same color. Credit: Scharon Harding

iOS 26 became publicly available this week, ushering in a new OS naming system and the software’s most overhauled look since 2013. It may take time to get used to the new “Liquid Glass” look, but it’s easier to appreciate the pared-down controls.

Beyond the glassy, bubbly new design, the update’s flashiest features include Apple Intelligence integration that varies in usefulness, from fluffy new Genmoji abilities to a nifty live translation feature for Phone, Messages, and FaceTime.

New tech is often bogged down with AI-based features that prove to be overhyped, unreliable, or just not that useful. iOS 26 brings a little of each, so in this review, we’ll home in on the iOS updates that will benefit both mainstream and power users the most.


Let’s start with Liquid Glass

If we’re talking about changes that you’re going to use a lot, we should start with the new Liquid Glass software design that Apple is applying across all of its operating systems. iOS hasn’t had this much of a makeover since iOS 7. However, where iOS 7 applied a flatter, minimalist effect to windows and icons and their edges, iOS 26 adds a (sometimes frosted) glassy look and a mildly fluid movement to actions such as pulling down menus or long-pressing controls. All the while, windows look like they’re reflecting the content underneath them. When you pull Safari’s menu atop a webpage, for example, blurred colors from the webpage’s images and text are visible on empty parts of the menu.

Liquid Glass is now part of most of Apple’s consumer devices, including Macs and Apple TVs, but the dynamic visuals and motion are especially pronounced as you use your fingers to poke, slide, and swipe across your iPhone’s screen.

For instance, when you use a tinted color theme or the new clear theme for Home Screen icons, colors from the Home Screen’s background look like they’re refracting from under the translucent icons. It’s especially noticeable when you slide to different Home Screen pages. And in Safari, the address bar shrinks down and becomes more translucent as you scroll to read an article.

Because the theme is incorporated throughout the entire OS, the Liquid Glass effect can be cheesy at times. It feels forced in areas such as Settings, where text that just scrolled past looks slightly blurred at the top of the screen.

Liquid Glass makes the top of the Settings menu look blurred. Credit: Scharon Harding

Other times, the effect feels fitting, like when pulling the Control Center down and its icons appear to stretch down to the bottom of the screen and then quickly bounce into their standard size as you release your finger. Another place Liquid Glass flows nicely is in Photos. As you browse your pictures, colors subtly pop through the translucent controls at the bottom of the screen.

This is a matter of appearance, so you may have your own take on whether Liquid Glass looks tasteful or not. But overall, it’s the type of redesign that’s distinct enough to be a fun change, yet mild enough that you can grow accustomed to it if you’re not immediately impressed.

Liquid Glass simplifies navigation (mostly)

There’s more to Liquid Glass than translucency. Part of the redesign is simplifying navigation in some apps by displaying fewer controls.

Opening Photos is now cleaner at launch, bringing you to all of your photos instead of the Collections section that iOS 18 opens to. At the bottom are translucent tabs for Library and Collections, plus a Search icon. Once you start browsing, the Library and Collections tabs condense into a single icon, and Years, Months, and All tabs appear, maintaining a translucence that helps keep your focus on your pictures.

Similarly, the initial controls displayed at the bottom of the screen when you open Camera are pared down from six different photo- and video-shooting modes to the two that really matter: Photo and Video.

You can still bring up more advanced options (such as Flash, Live, and Timer) with one tap. And at the top of the camera’s field of view are smaller toggles for night mode and flash. When you just want to take a quick photo, iOS 26 makes it easier to focus on the necessities while keeping the extraneous within short reach.

If you long-press Photo, options for the Time-Lapse, Slow-Mo, Cinematic, Portrait, Spatial, and Pano modes appear. Credit: Scharon Harding

iOS 26 takes the same approach with Video mode by focusing on the essentials (zoom, resolution, frame rate, and flash) at launch.

New layout options for navigating Safari, however, slowed me down. In a new Compact view, the address bar lives at the bottom of the screen without a dedicated toolbar, giving the web page more screen space. But this setup makes accessing common tasks, like opening a new or old tab, viewing bookmarks, or sharing a link, tedious because they’re hidden behind a menu button.

If you tend to have multiple browser tabs open, you’ll want to stick with the classic layout, now called Top (where the address bar is at the top of the screen and the toolbar is at the bottom) or the Bottom layout (where the address bar and toolbar are at the bottom of the screen).

On the more practical side of Safari updates is a new ability to turn any webpage into a web app, making favorite and important URLs accessible quickly and via a dedicated Home Screen icon. This has been an iOS feature for a long time, but until now the pages always opened in Safari. Users can still do this if they like, but by default these sites now open as their own distinct apps, with dedicated icons in the app switcher. Web apps open full-screen, but in my experience, back and forward buttons only come up if you go to a new website. Sliding left and right replaces dedicated back and forward controls, but sliding isn’t as reliable as just tapping a button.

Viewing Ars Technica as a web app. Credit: Scharon Harding

iOS 26 remembers that iPhones are telephones

With so much focus on smartphone chips, screens, software, and AI lately, it can be easy to forget that these devices are telephones. iOS 26 doesn’t overlook the core purpose of iPhones, though. Instead, the new operating system adds a lot to the process of making and receiving phone calls, video calls, and text messages, starting with the look of the Phone app.

Continuing the streamlined Liquid Glass redesign, the Phone app on iOS 26 consolidates the bottom controls from Favorites, Recents, Contacts, Keypad, and Voicemail, to Calls (where voicemails also live), Contacts, and Keypad, plus Search.

I’d rather have a Voicemails section at the bottom of the screen than Search, though. The Voicemails section is still accessible by opening a menu at the top-right of the screen, but it’s less prominent, and getting to it requires more screen taps than before.

On Phone’s opening screen, you’ll see the names or numbers of missed calls and voicemails in red. But voicemails also have a blue dot next to the red phone number or name (along with text summarizing or transcribing the voicemail underneath if those settings are active). This setup caused me to overlook missed calls initially. Missed calls with voicemails looked more urgent because of the blue dot. For me, at first glance, it appeared as if the blue dots represented unviewed missed calls and that red numbers/names without a blue dot were missed calls that I had already viewed. It’s taking me time to adjust, but there’s logic behind having all missed phone activity in one place.

Fighting spam calls and messages

For someone like me, whose phone number seems to have made it onto every marketer’s and scammer’s contact list, it’s empowering to have iOS 26’s screening features help reduce the time spent dealing with spam.

The phone can be set to automatically ask callers with unsaved numbers to state their name. As this happens, iOS displays the caller’s response on-screen, so you can decide if you want to answer or not. If you’re not around when the phone rings, you can view the transcript later and then mark the caller as known, if desired. This has been my preferred method of screening calls and reduces the likelihood of missing a call I want to answer.

There are also options for silencing calls and voicemails from unknown numbers and having them only show in a section of the app that’s separate from the Calls tab (and accessible via the aforementioned Phone menu).

A new Phone menu helps sort important calls from calls that are likely spam. Credit: Scharon Harding

You could also have iOS direct calls that your cell phone carrier identifies as spam to voicemail and only show the missed calls in the Phone menu’s dedicated Spam list. I found that, while the spam blocker is fairly reliable, silencing calls from unsaved numbers resulted in me missing unexpected calls from, say, an interview source or my bank. And looking through my spam and unknown callers lists sounds like extra work that I’m unlikely to do regularly.

Messages

iOS 26 applies the same approach to Messages. You can now have texts from unknown senders and spam messages automatically placed into folders that are separate from your other texts. It’s helpful for avoiding junk messages, but it can be confusing if you’re waiting for something like a two-factor authentication text, for example.

Elsewhere in Messages is a small but effective change to browsing photos, links, and documents previously exchanged via text. Upon tapping the name of a person in a conversation in Messages, you’ll now see tabs for viewing that conversation’s settings (such as the recipient’s number and a toggle for sending read receipts), as well as separate tabs for photos and links. Previously, this was all under one tab, so if you wanted to find a previously sent link, you had to scroll through the conversation’s settings and photos. Now, you can get to links with a couple of quick taps.

Additionally, with iOS 26 you can finally set up custom iMessage backgrounds, including premade ones and ones that you can make from your own photos or by using generative AI. It’s not an essential update, but it is an easy way to personalize your iPhone by brightening up texts.

Hold Assist

Another time saver is Hold Assist. It makes calling customer service slightly more tolerable by letting you set the phone aside during long wait times and having your iPhone alert you when someone’s ready to talk to you. It’s a feature that some customer service departments have offered for years already, but it’s handy to always have it available.

You have to be quick to respond, though. One time I answered the phone after using Hold Assist, and the caller informed me that they had said “hello” a few times already. This is despite the fact that iOS is supposed to let the agent know that you’ll be on the phone shortly. If I had waited a couple more seconds to pick up the phone, it’s likely that the customer service rep would have hung up.

Live translations

One of the most novel features that iOS 26 brings to iPhone communication is real-time translation for Spanish, Mandarin, French, German, Italian, Japanese, Korean, and Portuguese. After downloading the necessary language libraries, iOS can translate between those languages in real time whether you’re talking on the phone, on FaceTime, or texting.

The feature worked best in texts, where the software doesn’t have to deal with varying accents, people speaking fast or over one another, stuttering, or background noise. Translated texts and phone calls always show the original text written in the sender’s native language, so you can double-check translations or see things that translations can miss, like acronyms, abbreviations, and slang.

Translating some basic Spanish. Credit: Scharon Harding

During calls or FaceTime, Live Translation sometimes struggled to keep up while it tried to manage the nuances and varying speeds of how different people speak, as well as laughs and other interjections.

However, it’s still remarkable that the iPhone can help remove language barriers without any additional hardware, apps, or fees. It will be even better if Apple can improve reliability and add more languages.

Spatial images on the Home and Lock Screen

The new spatial images feature is definitely on the fluffier side of this iOS update, but it is also a practical way to spice up your Lock Screen, Home Screen, and the Home Screen’s Photos widget.

Basically, it applies a 3D effect to any photo in your library, which is visible as you move your phone around in your hand. Apple says that to do this, iOS 26 uses the same generative AI models that the Apple Vision Pro uses and creates a per-pixel depth map that makes parts of the image appear to pop out as you move the phone within six degrees of freedom.

The 3D effect is more powerful on some images than others, depending on the picture’s composition. It worked well on a photo of my dog sitting in front of some plants and behind a leaf of another plant. I positioned the Lock Screen’s clock so that it appears tucked behind her fur, and when I move the phone around, the dog and the leaf in front of her appear to move, while the background plants stay still.

But in images with few items and sparser backgrounds, the spatial effect looks unnatural. And oftentimes, the spatial effect can be quite subtle.

Still, for those who like personalizing their iPhone with Home and Lock Screen customization, spatial scenes are a simple and harmless way to liven things up. And, if you like the effect enough, a new spatial mode in the Camera app allows you to create new spatial photos.

A note on Apple Intelligence notification summaries

As we’ve already covered in our macOS 26 Tahoe review, Apple Intelligence-based notification summaries haven’t improved much since their 2024 debut in iOS 18 and macOS 15 Sequoia. After problems with showing inaccurate summaries of news notifications, Apple updated the feature to warn users that the summaries may be inaccurate. But it’s still hit or miss when it comes to how easy it is to decipher the summaries.

I did have occasional success with notification summaries in iOS 26. For instance, I understood a summary of a voicemail that said, “Payment may have appeared twice; refunds have been processed.” Because I had already received a similar message via email (a store had accidentally charged me twice for a purchase and then refunded me), I knew I didn’t need to open that voicemail.

Vague summaries sometimes tipped me off as to whether a notification was important. A summary reading “Townhall meeting was hosted; call [real phone number] to discuss issues” was enough for me to know that I had a voicemail about a meeting that I never expressed interest in. It wasn’t the most informative summary, but in this case, I didn’t need a lot of information.

However, most of the time, it was still easier to just open the notification than try to decipher what Apple Intelligence was trying to tell me. Summaries aren’t really helpful and don’t save time if you can’t fully trust their accuracy or depth.

Playful, yet practical

With iOS 26, iPhones get a playful new design that’s noticeable and effective but not so drastically different that it will offend or distract those who are happy with the way iOS 18 works. It’s exciting to experience one of iOS’s biggest redesigns, but what really stands out are the thoughtful tweaks that bring practical improvements to core features, like making and receiving phone calls and taking pictures.

Some additions and changes are superfluous, but the update generally succeeds at improving functionality without introducing jarring changes that alienate users or force them to relearn how to use their phone.

I can’t guarantee that you’ll like the Liquid Glass design, but other updates should make it simpler to do some of the most important tasks with iPhones, and it should be a welcome improvement for long-time users.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



macOS 26 Tahoe: The Ars Technica Review

Game Overlay

The Game Overlay in macOS Tahoe. Credit: Andrew Cunningham

Tahoe’s new Game Overlay doesn’t add features so much as it groups existing gaming-related features to make them more easily accessible.

The overlay makes itself available any time you start a game, either via a keyboard shortcut or by clicking the rocketship icon in the menu bar while a game is running. The default view includes brightness and volume settings, toggles for your Mac’s energy mode (for turning on high-performance or low-power mode, when they’re available), a toggle for Game Mode, and access to controller settings when you’ve got one connected.

The second tab in the overlay displays achievements, challenges, and leaderboards for the game you’re playing—though only if they offer Apple’s implementation of those features. Achievements for games installed from Steam, for example, aren’t visible. And the last tab is for social features, like seeing your friends list or controlling chat settings (again, when you’re using Apple’s implementation).

More granular notification summaries

I didn’t think the Apple Intelligence notification summaries were very useful when they launched in iOS 18 and macOS 15 Sequoia last year, and I don’t think iOS 26 or Tahoe really changes the quality of those summaries in any immediately appreciable way. But following a controversy earlier this year where the summaries botched major facts in breaking news stories, Apple turned notification summaries for news apps off entirely while it worked on fixes.

Those fixes, as we’ve detailed elsewhere, are more about warning users of potential inaccuracies than about preventing those inaccuracies in the first place.

Apple now provides three broad categories of notification summaries: those for news and entertainment apps, those for communication and social apps, and those for all other kinds of apps. Summaries for each category can be turned on or off independently, and the news and entertainment category has a big red disclaimer warning users to “verify information” in the individual news stories before jumping to conclusions. Summaries are italicized and get both a special icon and a “summarized by Apple Intelligence” badge, just to make super-ultra-sure that people are aware they’re not taking in raw data.

Personally, I think if Apple can’t fix the root of the problem in a situation like this, then it’s best to take the feature out of iOS and macOS entirely rather than risk giving even one person information that’s worse or less accurate than the information they already get by being a person on the Internet in 2025.

As we wrote a few months ago, asking a relatively small on-device language model to accurately summarize any stack of notifications covering a wide range of topics across a wide range of contexts is setting it up to fail. It does work OK when summarizing one or two notifications, or when summarizing straightforward texts or emails from a single person. But for anything else, be prepared for hit-or-miss accuracy and usefulness.

Relocated volume and brightness indicators

The pop-ups you see when adjusting the system volume or screen brightness have been redesigned and moved. The indicators used to appear as large rounded squares, centered on the lower half of your primary display. The design has changed over the years, but that’s where the indicators have appeared throughout the 25-year existence of Mac OS X.

Now, both indicators appear in the upper-right corner of the screen, glassy rectangles that pop out from items on the menu bar. They’ll usually appear next to the Control Center menu bar item, but the volume indicator will pop out of the Sound icon if it’s visible.

New low battery alert

Tahoe picks up an iPhone-ish low-battery alert on laptops. Credit: Andrew Cunningham

Tahoe tweaks the design of macOS’ low battery alert notification. A little circle-shaped meter (in the same style as battery meters in Apple’s Batteries widgets) shows you in bright red just how close your battery is to being drained.

This notification still shows up separately from others and can’t be dismissed, though it doesn’t need to be cleared and will go away on its own. It starts firing off when your laptop’s battery hits 10 percent and fires again each time you drop another percentage point from there (it also notified me without the percentage readout changing, seemingly at random, as if to annoy me badly enough to plug my computer in more quickly).

Neither the notification frequency nor the thresholds can be changed, whether you’d rather not be reminded at all or would prefer the reminders to start even earlier. But you could possibly use the battery level trigger in Shortcuts to customize your Mac’s behavior a bit.

Recovery mode changes

A new automated recovery tool in macOS Tahoe’s recovery volume. Credit: Andrew Cunningham

Tahoe’s version of the macOS Recovery mode gets a new look to match the rest of the OS, but there are a few other things going on, too.

If you’ve ever had a problem getting your Mac to boot, or if you’ve ever just wanted to do a totally fresh install of the operating system, you may have run into the Mac’s built-in recovery environment before. On an Apple Silicon Mac, you can usually access it by pressing and holding the power button when you start up your Mac and clicking the Options button to start up using the hidden recovery volume rather than the main operating system volume.

Tahoe adds a new tool called the Device Recovery Assistant to the recovery environment, accessible from the Utilities menu. This automated tool “will look for any problems” with your system volume “and attempt to resolve them if found.”

Maybe the Recovery Assistant will actually solve your boot problems, and maybe it won’t—it doesn’t tell you much about what it’s doing, beyond needing to unlock FileVault on my system volume to check it out. But it’s one more thing to try if you’re having serious problems with your Mac and you’re not ready to countenance a clean install yet.

The web browser in the recovery environment is still WebKit, but it’s not Safari-branded anymore, and it sheds a lot of Safari features you wouldn’t want or need in a temporary OS. Credit: Andrew Cunningham

Apple has made a couple of other tweaks to the recovery environment, beyond adding a Liquid Glass aesthetic. The recovery environment’s built-in web browser is simply called Web Browser, and while it’s still based on the same WebKit engine as Safari, it doesn’t have Safari’s branding or its settings (or other features that are extraneous to a temporary recovery environment, like a bookmarks menu). The Terminal window picks up the new Clear theme, new SF Mono Terminal typeface, and the new default 120-row-by-30-column size.

A new disk image format

Not all Mac users interact with disk images regularly, aside from opening them periodically to install an app or restore an old backup. But disk images are also used by Apple’s Virtualization framework, which makes it relatively simple to run macOS and Linux virtual machines for testing and other purposes. And the RAW disk image format used by older macOS versions can come with quite severe performance penalties, even with today’s powerful chips and fast PCI Express-connected SSDs.

Enter the Apple Sparse Image Format, or ASIF. Apple’s developer documentation says that because ASIF images’ “intrinsic structure doesn’t depend on the host file system’s capabilities,” they “transfer more efficiently between hosts or disks.” The upshot is that reading files from and writing files to these images should be a bit closer to your SSD’s native performance (Howard Oakley at The Eclectic Light Company has some testing that suggests significant performance improvements in many cases, though it’s hard to make one-to-one comparisons because testing of the older image formats was done on older hardware).

The result is that disk images should be capable of better performance in Tahoe, which will especially benefit virtual machines that rely on them. That includes lightweight virtualization apps like VirtualBuddy and Viable, which mostly exist to provide a front end for the Virtualization framework, as well as full virtualization apps like Parallels that offer support for Windows.
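If you want to experiment with the format directly, early third-party documentation indicates that Tahoe’s hdiutil can create ASIF images from the command line. Treat the exact invocation below as an assumption on my part, and verify it against the hdiutil man page on your own system:

hdiutil create -size 100g -fs APFS -format ASIF MyDisk.asif

Since ASIF is a sparse format, the image file should only consume disk space as data is actually written to it.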

Quantum-safe encryption support

You don’t have a quantum computer on your desk. No one does, outside of labs where this kind of technology is being tested. But when or if they become more widely used, they’ll render many industry-standard forms of encryption relatively easy to break.

macOS 26 Tahoe: The Ars Technica Review Read More »