
Apple’s Craig Federighi on the long road to the iPad’s Mac-like multitasking


Federighi talks to Ars about why the iPad’s Mac-style multitasking took so long.

iPads! Running iPadOS 26! Credit: Apple

CUPERTINO, Calif.—When Apple Senior Vice President of Software Engineering Craig Federighi introduced the new multitasking UI in iPadOS 26 at the company’s Worldwide Developers Conference this week, he did it the same way he introduced the Calculator app for the iPad last year or timers in the iPad’s Clock app the year before—with a hint of sarcasm.

“Wow,” Federighi enthuses in a lightly exaggerated tone about an hour and 19 minutes into a 90-minute presentation. “More windows, a pointier pointer, and a menu bar? Who would’ve thought? We’ve truly pulled off a mind-blowing release!”

This elicits a sensible chuckle from the gathered audience of developers, media, and Apple employees watching the keynote on the Apple Park campus, where I have grabbed myself a good-but-not-great seat to watch the largely pre-recorded keynote on a gigantic outdoor screen.

Federighi is acknowledging—and lightly poking fun at—the audience of developers, pro users, and media personalities who have been asking for years that Apple’s iPad behave more like a traditional computer. And after many incremental steps, including a big swing and partial miss with the buggy, limited Stage Manager interface a couple of years ago, Apple has finally responded to requests for Mac-like multitasking with a distinctly Mac-like interface, an improved file manager, and better support for running tasks in the background.

But if this move was so forehead-slappingly obvious, why did it take so long to get here? This is one of the questions we dug into when we sat down with Federighi and Senior Vice President of Worldwide Marketing Greg Joswiak for a post-keynote chat earlier this week.

It used to be about hardware restrictions

People have been trying to use iPads (and make a philosophical case for them) as quote-unquote real computers practically from the moment they were introduced 15 years ago.

But those early iPads lacked so much of what we expect from modern PCs and Macs, most notably robust multi-window multitasking and the ability for third-party apps to exchange data. The first iPads were almost literally iPhone internals connected to big screens, with just a fraction of the RAM and storage available in the Macs of the day; that necessitated a blown-up version of the iPhone’s operating system and the iPhone’s one-full-screen-app-at-a-time interface.

“If you want to rewind all the way to the time we introduced Split View and Slide Over [in iOS 9], you have to start with the grounding that the iPad is a direct manipulation touch-first device,” Federighi told Ars. “It is a foundational requirement that if you touch the screen and start to move something, that it responds. Otherwise, the entire interaction model is broken—it’s a psychic break with your contract with the device.”

Mac users, Federighi said, were more tolerant of a small amount of latency on their devices because they were already manipulating apps on the screen indirectly, but the iPads of a decade or so ago “didn’t have the capacity to run an unlimited number of windowed apps with perfect responsiveness.”

It’s also worth noting the technical limitations of the iPhone and iPad apps of that era, which had mostly been designed and coded to match the specific screen sizes and resolutions of the (then-manageable) number of iDevices that existed. It simply wasn’t possible for the apps of the day to be dynamically resized as desktop windows are, because no one was coding their apps that way.

Apple’s iPad Pros—and, later, the iPad Airs—have gradually adopted hardware and software features that make them more Mac-like. Credit: Andrew Cunningham

Of course, those hardware limitations no longer exist. Apple’s iPad Pros started boosting the tablets’ processing power, RAM, and storage in earnest in the late 2010s, and Apple introduced a Microsoft Surface-like keyboard and stylus accessories that moved the iPad away from its role as a content consumption device. For years now, Apple’s faster tablets have been based on the same hardware as its slower Macs—we know the hardware can do more because Apple is already doing more with it elsewhere.

“Over time the iPad’s gotten more powerful, the screens have gotten larger, the user base has shifted into a mode where there is a little bit more trackpad and keyboard use in how many people use the device,” Federighi told Ars. “And so the stars kind of aligned to where many of the things that you traditionally do with a Mac were possible to do on an iPad for the first time and still meet iPad’s basic contract.”

On correcting some of Stage Manager’s problems

More multitasking in iPadOS 26. Credit: Apple

Apple has already tried a windowed multitasking system on modern iPads once this decade, of course, with iPadOS 16’s Stage Manager interface.

Any first crack at windowed multitasking on the iPad was going to face a steep climb. This was the first time Apple or its developers had needed to contend with truly dynamically resizable app windows in iOS or iPadOS, the first time Apple had implemented a virtual memory system on the iPad, and the first time Apple had tried true multi-monitor support. Stage Manager was in such rough shape that Apple delayed that year’s iPadOS release to keep working on it.

But the biggest problem with Stage Manager was actually that it just didn’t work on a whole bunch of iPads. You could only use it on new expensive models—if you had a new cheap model or even an older expensive model, your iPad was stuck with the older Slide Over and Split View modes that had been designed around the hardware limitations of mid-2010s iPads.

“We wanted to offer a new baseline of a totally consistent experience of what it meant to have Stage Manager,” Federighi told Ars. “And for us, that meant four simultaneous apps on the internal display and an external display with four simultaneous apps. So, eight apps running at once. And we said that’s the baseline, and that’s what it means to be Stage Manager; we didn’t want to say ‘you get Stage Manager, but you get Stage Manager-lite here’ or something like that. And so immediately that established a floor for how low we could go.”

Fixing that was one of the primary goals of the new windowing system.

“We decided this time: make everything we can make available,” said Federighi, “even if it has some nuances on older hardware, because we saw so much demand [for Stage Manager].”

That slight change in approach, combined with other behind-the-scenes optimizations, makes the new multitasking model more widely compatible than Stage Manager is. There are still limits on those older devices—not on the number of windows you can open, but on how many of those windows can be active and up-to-date at once. And true multi-monitor support remains the purview of the faster, more expensive models.

“We have discovered many, many optimizations,” Federighi said. “We re-architected our windowing system and we re-architected the way that we manage background tasks, background processing, that enabled us to squeeze more out of other devices than we were able to do at the time we introduced Stage Manager.”

Stage Manager still exists in iPadOS 26, but as an optional extra multitasking mode that you have to choose to enable instead of the new windowed multitasking system. You can also choose to turn both multitasking systems off entirely, preserving the iPad’s traditional big-iPhone-for-watching-Netflix interface for the people who prefer it.

“iPad’s gonna be iPad”

The $349 base-model iPad is among the devices that stand to gain the most from iPadOS 26. Credit: Andrew Cunningham

However, while the new iPadOS 26 UI takes big steps toward the Mac’s interface, the company still treats the iPad and the Mac as different products with different priorities. To date, that has meant no touch screens on the Mac (despite years of rumors), and it will continue to mean that there are some Mac things the iPad will remain unable to do.

“But we’ve looked and said, as [the iPad and Mac] come together, where on the iPad the Mac idiom for doing something, like where we put the window close controls and maximize controls, what color are they—we’ve said why not, where it makes sense, use a converged design for those things so it’s familiar and comfortable,” Federighi told Ars. “But where it doesn’t make sense, iPad’s gonna be iPad.”

There will still be limitations and frustrations when trying to fit an iPad into a Mac-shaped hole in your computing setup. While tasks can run in the background, for example, Apple only allows apps to run workloads with a definite endpoint, such as a video export or a file transfer. System agents and other apps that continuously perform routine on-and-off tasks in the background aren’t supported. All the demos we’ve seen so far have also been on new, high-end iPad hardware, and it remains to be seen how well the new features behave on low-end tablets like the 11th-generation A16 iPad or older 2019-era hardware like the iPad Air 3.
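To make that distinction concrete, here’s a minimal Swift sketch of a finite background workload using the long-standing BackgroundTasks framework; the task identifier and the VideoExportJob type are hypothetical stand-ins, and the API Apple ships in iPadOS 26 may well differ:

    import BackgroundTasks

    // Hypothetical app-specific job with a definite endpoint (a video export).
    final class VideoExportJob {
        func cancel() { /* stop encoding */ }
        func start(onCompletion: @escaping (Bool) -> Void) {
            // ...encode the video, then report success or failure...
            onCompletion(true)
        }
    }

    let exportTaskID = "com.example.app.videoExport" // hypothetical identifier

    func registerExportTask() {
        BGTaskScheduler.shared.register(forTaskWithIdentifier: exportTaskID, using: nil) { task in
            let job = VideoExportJob()
            // If the system needs its resources back, the job must stop promptly.
            task.expirationHandler = { job.cancel() }
            job.start { success in
                // Finite work: the task explicitly reports that it has finished.
                task.setTaskCompleted(success: success)
            }
        }
    }

    func scheduleExportTask() throws {
        try BGTaskScheduler.shared.submit(BGProcessingTaskRequest(identifier: exportTaskID))
    }

A continuously running agent has no such completion point, which is exactly the kind of workload the new model still excludes.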

But it does feel like Apple has finally settled on a design that might stick and that adds capability to the iPad without wrecking its simplicity for the people who still just want a big screen for reading and streaming.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Apple tiptoes with modest AI updates while rivals race ahead

Developers, developers, developers?

Fittingly for a developers conference, Apple also announced that it would open access to its on-device AI language model to third-party developers, and that it would integrate OpenAI’s code completion tools into its Xcode development software.

Apple Intelligence was first unveiled at WWDC 2024. Credit: Apple

“We’re opening up access for any app to tap directly into the on-device large language model at the core of Apple Intelligence,” said Craig Federighi, Apple’s software chief, during the presentation. The company also demonstrated early partner integration by adding OpenAI’s ChatGPT image generation to its Image Playground app, though it said user data would not be shared without permission.
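Apple didn’t show code on stage, but based on the Foundation Models framework it described for developers, tapping the on-device model might look roughly like the Swift sketch below; treat the type and method names as assumptions about a pre-release API rather than confirmed details:

    import Foundation
    import FoundationModels

    // A minimal sketch of calling Apple's on-device model from a third-party app.
    func summarize(_ text: String) async throws -> String {
        // The model can be unavailable (unsupported hardware, Apple Intelligence disabled).
        guard case .available = SystemLanguageModel.default.availability else {
            throw CocoaError(.featureUnsupported)
        }
        let session = LanguageModelSession(
            instructions: "Summarize the user's text in two sentences."
        )
        let response = try await session.respond(to: text)
        return response.content
    }

Because the model runs on-device, a call like this would work offline, without sending the user’s text to a server.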

For developers, the inclusion of ChatGPT’s code-generation capabilities in Xcode may represent Apple’s attempt to match the AI coding augmentation that tools like GitHub Copilot and Cursor offer, even as the company maintains a more cautious approach to consumer-facing AI features.

Meanwhile, competitors like Meta, Anthropic, OpenAI, and Microsoft continue to push more aggressively into the AI space, offering AI assistants (that admittedly still make things up and suffer from other issues, such as sycophancy).

Only time will tell whether Apple’s wariness about embracing the bleeding edge of AI will be a curse (eventually labeled a blunder) or a blessing (lauded as a wise strategy). Perhaps, in time, Apple will step in with a solid and reliable AI assistant that makes Siri useful again. But for now, Apple Intelligence remains more of a clever brand name than a concrete set of notable products.



Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has also never been shy about integrating outside tech when necessary, whether through acquisitions or built-in clients—in fact, Siri was initially developed by an outside company. But OpenAI has recently been the source of a string of tech controversies, so it’s understandable that some people are puzzled by Apple’s call—and wonder what it might entail for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely relies on Apple’s own in-house LLMs, the company also realized that there may be times when some users want to use what it considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT accounts to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.
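Apple hasn’t documented that plumbing publicly, but a stateless exchange of the kind described, where each request stands alone and nothing persists server-side, has roughly the shape of an ordinary chat-completion call. Here is a Swift sketch under those assumptions; the endpoint and model name are OpenAI’s public defaults, not confirmed details of Apple’s integration:

    import Foundation

    // Illustrative only: a bare, stateless request to OpenAI's public API.
    func askChatGPT(_ prompt: String, apiKey: String) async throws -> Data {
        var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        let body: [String: Any] = [
            "model": "gpt-4o",
            "messages": [["role": "user", "content": prompt]],
        ]
        request.httpBody = try JSONSerialization.data(withJSONObject: body)
        // Each call carries its own context; no session state lives on the server.
        let (data, _) = try await URLSession.shared.data(for: request)
        return data
    }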

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.



WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond


Speculation is rampant that Apple will make its first big moves in generative AI.

The logo for WWDC24. Credit: Apple

Apple has announced dates for this year’s Worldwide Developers Conference (WWDC). WWDC24 will run from June 10 through June 14 at the company’s Cupertino, California, headquarters, but everything will be streamed online.

Apple posted about the event with the following generic copy:

Join us online for the biggest developer event of the year. Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.

As always, the conference will kick off with a keynote presentation on the first day, which is Monday, June 10. You can be sure Apple will use that event to at least announce the key features of its next round of annual software updates for iOS, iPadOS, macOS, watchOS, visionOS, and tvOS.

We could also see new hardware—it doesn’t happen every year, but it has of late. We don’t yet know exactly what that hardware might be, though.

Much of the speculation among analysts and commentators concerns Apple’s first move into generative AI. There have been reports that Apple may work with a partner like Google to include a chatbot in its operating system, that it has been considering designing its own AI tools, or that it could offer an AI App Store, giving users a choice between many chatbots.

Whatever the case, Apple is playing catch-up with some of its competitors in generative AI and large language models even though it has been using other applications of AI across its products for a couple of years now. The company’s leadership will probably talk about it during the keynote.

After the keynote, Apple usually hosts a “Platforms State of the Union” talk that delves deeper into its upcoming software updates, followed by hours of developer-focused sessions detailing how to take advantage of newly planned features in third-party apps.



Meta to Host Quest Gaming Showcase Just Days Ahead of Rumored Apple Headset Announcement

Meta announced its third annual Quest Gaming Showcase is arriving next month, coming only a few days before Apple’s rumored XR headset announcement at Worldwide Developers Conference (WWDC).

Meta is livestreaming the Quest Gaming Showcase on June 1st, a slightly unusual date for the company, which traditionally holds the annual event in late April.

Calling it their “biggest celebration of the depth and breadth of content across the Meta Quest Platform yet,” Meta is slated to share over 40 minutes of content, including a brand-new pre-show covering game updates and debut trailers, starting 15 minutes before the show begins.

Meta says to expect new game announcements, gameplay first-looks, updates to existing games, and more. A post-show developer roundtable is also planned, featuring conversations about upcoming games.

There could be at least one clue to what’s in store, as we get a brief glimpse of a horned helmet in the showcase’s promo video, which looks very much like Loki’s helmet from the Rift exclusive Asgard’s Wrath (2019). Maybe Meta’s Sanzaru Games has slimmed down the Norse-inspired RPG?

Meanwhile, previous reports maintain that Apple is finally set to unveil its long-rumored mixed reality headset during the company’s WWDC keynote, taking place on Monday, June 5th.

Provided Apple indeed plans to announce its headset at WWDC, Meta could be looking to generate so-called “strategic noise” to better manage market reactions and potentially offset any negative sentiment prior to Apple’s expected announcement, which is undoubtedly slated to be a pivotal moment for the entire XR industry.

Meta recently released its Q1 2023 earnings report, showing a consistent investment of around $4 billion per quarter into its XR division, Reality Labs. With Apple rumored to be unveiling its own XR headset and a host of apps, reportedly including everything from fitness to VR/AR gaming, Meta may want to showcase where some of that investment is going.

Who knows? We may even hear more about Meta’s promised Quest 3 at the gaming showcase, which the company has confirmed will “fire up enthusiasts” when it’s released at some point this year, notably targeting a higher price point than its Quest 2 headset.

To find out, tune in to the Quest Gaming Showcase on June 1st at 10 AM PT, livestreamed across the company’s various channels, including Twitch, Facebook, YouTube, and Meta Horizon Worlds.
