The HP LaserJet M106w is one of the printer models that is mysteriously appearing for some users in Windows 10 and 11.
HP
Earlier this month, Microsoft disclosed an odd printer bug that was affecting some users of Windows 10, Windows 11, and various Windows Server products. Affected PCs were seeing an HP printer installed, usually an HP LaserJet M101-M106, even when they weren’t actually using any kind of HP printer. This bug could overwrite the settings for whatever printer the user actually did have installed and also prompted the installation of an HP Smart printer app from the Microsoft Store.
Microsoft still hasn’t shared the root cause of the problem, though it did make it clear that the problem wasn’t HP’s fault. Now, the company has released a fix for anyone whose PC was affected by the bug, though as of this writing, it requires users to download and run a dedicated troubleshooting tool available from Microsoft’s support site.
The December 2023 Microsoft Printer Metadata Troubleshooter Tool is available for all affected Windows versions, and it will remove all references to the phantom HP LaserJet model (as long as you don’t have one installed, anyway). The tool will also remove the HP Smart app as long as you don’t have an HP printer attached and the app was installed after November 25, presumably the date that the bug began affecting systems. These steps should fix the issue for anyone without an HP printer without breaking anything for people who do use HP printers.
There are four versions of the troubleshooter, covering the 32- and 64-bit builds of Windows on both Arm and x86. Microsoft will also release an additional recommended troubleshooting tool “in the coming weeks” that will fix the problem in Windows 11 upon a user’s request without requiring the download of a separate tool.
Microsoft has said that, despite the renaming and the download of the HP Smart tool, most basic printing functionality should continue to work as intended for users affected by the problem. But if your printer relies on its own external app to provide additional settings or extra functionality, you’ll need to run the troubleshooting tool (or manually uninstall the phantom HP printer and reinstall your own printer) to get things working properly again.
Mathematician and early AI theorist David Rothenberg was fascinated by pattern-recognition algorithms. By 1968, he’d already done lots of work in missile trajectories (as one did back then), speech, and accounting, but he had another esoteric area he wanted to explore: the harmonic scale, as heard by humans. With enough circuits and keys, you could carve up the traditional music octave from 12 tones into 31 and make all kinds of between-tone tunes.
Happily, he had money from the Air Force Office of Scientific Research, and he also knew just the person to build this theoretical keyboard: Robert Moog, a recent graduate from Cornell University in Ithaca, New York, who was just starting to work toward a fully realized Moog Music.
The plans called for a 478-key keyboard, an analog synthesizer, a bank of oscillators, and an impossibly intricate series of circuits between them. Moog “took his time on this,” according to Travis Johns, instructional technologist at Cornell. He eventually delivered a one-octave prototype made from “1960s-era, World-War-II-surplus technology.” Rothenberg held onto the keyboard piece, hoping to one day finish it, until his death in 2018. His widow, Suhasini Sankaran, donated the kit to Cornell in 2022.
Because of that noble garage-cleaning, there now exists a finished device, one that has had work composed and performed upon it: the Moog-Rothenberg Keyboard.
Cornell’s telling of the Moog-Rothenberg keyboard, restored by university staff and students.
The project didn’t start until February 2023, partly because of the intimidating nature of working on a one-of-a-kind early synth prototype. “I would hate to unsolder something that was soldered 50 years ago by Robert Moog,” Johns says in the video.
Johns and his students and staff at Cornell sought to honor the original intent and schematics of the device but not ignore the benefits of modern tech. Programmable microcontrollers were used to divide up an 8 MHz clock signal, creating circuits with several octaves of the same note. Those controllers were then wired, laboriously, to the appropriate keys.
Original designs for the Moog-Rothenberg keyboard.
Ryan Young/Cornell University
Travis Johns works on some of the newer pieces of the restored (or replicated) Moog-Rothenberg keyboard.
Ryan Young/Cornell University
Switches and microcontrollers for the fully realized keyboard.
Ryan Young/Cornell University
A bit closer up with some of the original wiring for the one-octave prototype Moog prepared in the late 1960s.
Ryan Young/Cornell University
Even closer to those circuits and keypads.
Ryan Young/Cornell University
As Johns notes, it’s hard to categorize the synthesizer now as the original object, a re-creation, or a “playable facsimile” of a planned device. It’s also a particularly strange instrument. His team followed every mathematical and electrical detail of the original plans but found that the keyboard took on “a life of its own,” creating unusual timbres, resonances, and even volumes as soundwaves synchronized and fell away. This is, of course, the kind of thing Rothenberg originally hired Moog to make possible.
By October, the 31-tone synth was ready to play some music. Cornell professors Xak Bjerken and Elizabeth Ogonek performed and composed for it, respectively, and they were joined by members of Cornell’s EZRA quartet, themselves no strangers to strange instruments and new styles. Bjerken described his set as “bluegrass meets experimental improvisation.”
You can certainly hear the experimental side come through in bits of the performance captured by Cornell. Ogonek manually controlled the instrument’s filters during the concert to create sustained tones; it takes more than two hands to control the output of 478 keys. The synthesizer now resides in Lincoln Hall, home of Cornell’s Department of Music.
One of the neatest features of the Play Store is remote app installation. If you have multiple devices signed in to the same Google account, the Play Store’s “install” button will let you pick any of those devices as an installation target. If you find an app you like, it’s great to queue up installs on your phone, watch, TV, tablet, laptop, and car, all from a single device. It makes sense, then, that you might want to be able to uninstall apps from all your devices, too.
The new feature coming to the Play Store will let you do exactly that: remote uninstalls from any device on your account. The first sign of the feature is in the latest Android patch notes, which list a “New feature to help you uninstall apps on connected devices.” It doesn’t seem like this has been activated yet, but news site TheSpAndroid has screenshots of the feature showing what you would expect: opening the Play Store and uninstalling an app will bring up a list of devices, just like installing does now.
After hitting the “uninstall” button, this list of devices will pop up.
TheSpAndroid
It might not look like it, but under the hood, all installs from the Play Store happen via Android’s push notification system. By default, pressing the Play Store’s install button asks Google to send an app push to your current device, but the target device of a remote install doesn’t need to be turned on and unlocked. Just like any other push notification, when the device connects to the Internet and sees the push, it will wake up and do whatever business it needs to do—usually, that’s “show a message and beep,” but in this case, that business is “install an app.” Google has slowly exposed its remote install functionality to the world, first with the Android Market (now Play Store) website in 2011. It took 11 years for a similar feature to come to the Play Store phone app.
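Google’s internal Play Store plumbing isn’t public, but the underlying pattern is the same data-push mechanism any Android developer can use through Firebase Cloud Messaging. As a rough illustration only, here is what sending a “go install something” data message looks like with the public firebase-admin SDK in TypeScript; the payload fields and helper name are hypothetical stand-ins, not the Play Store’s real schema.

```ts
import { initializeApp } from "firebase-admin/app";
import { getMessaging } from "firebase-admin/messaging";

// Uses GOOGLE_APPLICATION_CREDENTIALS from the environment for auth.
initializeApp();

// Send a data-only push to one device. The device does not need to be awake;
// FCM queues the message and delivers it the next time the device connects.
async function sendInstallRequest(deviceToken: string, packageName: string): Promise<void> {
  await getMessaging().send({
    token: deviceToken,      // push token of the target device
    data: {
      action: "install_app", // hypothetical field names; the Play Store's
      package: packageName,  // actual payload schema isn't public
    },
  });
}
```

The key property is the one described above: the message is queued server-side and delivered whenever the target device next comes online, so the sender never needs the device to be awake.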
Uninstalls can also work via the push notification system. Today’s news marks the first time this feature has been exposed to users, but remote uninstalls have been around for as long as remote installs. Google can trigger the remote uninstall feature without user consent, and it occasionally uses this feature to remotely mass-uninstall malware from all Play Store devices. Users are finally getting a button to do this themselves.
The Apple Watch Series 9, released in September 2023.
Apple
Apple will pause sales of the Apple Watch Series 9 and Apple Watch Ultra 2 starting December 21, it revealed today in a statement to 9to5Mac. The move comes as the products are facing a potential import ban until August 2028, due to rulings that the watches infringe on patents from Masimo.
In October, the US International Trade Commission (ITC) upheld a January ruling that Apple Watches with pulse oximeter features infringe on two Masimo patents. Since then, the case has been under a 60-day Presidential Review Period, which ends December 25. After that date, the watches are subject to an import ban until the patents’ expiration in 2028.
Apple told 9to5Mac:
While the review period will not end until December 25, Apple is preemptively taking steps to comply should the ruling stand. This includes pausing sales of the Apple Watch Series 9 and Apple Watch Ultra 2 from Apple.com starting December 21, and from Apple retail locations after December 24.
The Apple Watch SE will remain available since it doesn’t have the blood oxygen sensor technology under dispute, which Apple debuted with the Apple Watch Series 6 in 2020.
Since the ITC’s ruling is still under presidential review, President Biden may decide to veto the ruling, saving the Apple Watch from an import ban. However, we’ve already seen Biden decline to veto an ITC ruling that the Apple Watch infringes on electrocardiogram sensor-related patents owned by AliveCor. (The Apple Watch wasn’t banned in that case because the US Patent and Trademark Office revoked the patents in question, a decision that AliveCor is appealing.)
People can still buy the watches from third-party retailers for now. But if the ITC’s ruling isn’t vetoed, then, come December 25, Apple won’t be able to sell the watch to other retailers, like Best Buy, anymore.
Apple’s statement today noted that it “strongly disagrees” with the ITC’s ruling and is “pursuing a range of legal and technical options to ensure that Apple Watch is available to customers.”
“Should the order stand, Apple will continue to take all measures to return Apple Watch Series 9 and Apple Watch Ultra 2 to customers in the U.S. as soon as possible,” Apple said.
Apple said it would appeal the ITC’s ruling on December 26 if the Presidential Review Period ends without a veto. But the watches would still be subject to the import ban.
A long battle
California-based Masimo has alleged that Apple started engaging in discussions with the company in 2013 under the premise of a potential partnership. However, Masimo claims that Apple ended up poaching some of its workers and tech. Apple previously claimed that Masimo was only “one of many medical-technology companies” that it met with during that time and that it never partnered with Masimo because it wasn’t consumer-focused.
As of this writing, Masimo’s “consumer health” website includes a handful of products. That includes the Masimo W1 health-tracking watch, against which Apple filed a patent infringement case in 2022 [PDF]. And if that’s not enough litigious beef between these two, Masimo also has a case against Apple filed in the US District Court for the Central District of California in early 2020, as noted by 9to5Mac.
While Apple is announcing some preemptive moves today, don’t expect the battle to be over. Apple made $39.845 billion [PDF] in wearables, home, and accessories sales for fiscal year 2023 (which ended September 30). There are numerous stakeholders, from suppliers to third-party retailers, invested in Apple producing flagship smartwatches.
Apple has alluded to numerous paths it can take to keep its watches alive, from more litigation to seeking new technologies. But it’s also possible that Masimo and Apple try to end their battle by working out some sort of licensing agreement.
Stadia might be dead, but the controllers for Google’s cloud-based gaming platform are still out there. With the service permanently offline, the proprietary Stadia Controller threatened to fill up landfills until Google devised a plan to convert them to generic Bluetooth devices that can work on almost anything. The app to open up the controller to other devices is a web service, which previously had a shutdown date of December 2023. That apparently isn’t enough time to convert all these controllers, so the Stadia Controller Salvage operation will run for a whole additional year. X (formerly Twitter) user Wario64 was the first to spot the announcement, which says the online tool will continue running until December 31, 2024.
As a cloud-based gaming service, Stadia had all the game code run on remote servers, with individual video frames streaming live to the user and showing the gameplay. The user would press buttons on their local controller, and every single individual button press had to travel across the Internet to the remote game server to be processed. These services live and die by their latency; in an attempt to reduce latency, the Stadia Controller connected to the Internet directly over Wi-Fi instead of connecting via Bluetooth to your computer and then to the Internet. Google claimed that one less hop on the local network led to shorter latency, especially since the service was originally built around the power-limited Chromecast dongle.
The official Stadia Controller in “clearly white.”
Google
With the service dead, the Wi-Fi-only controller wouldn’t work wirelessly, leaving old-school USB as the only way to use the controller. However, Stadia Controllers already came with a dormant Bluetooth chip, so Google cooked up a way to convert the orphaned controllers from Wi-Fi communication to Bluetooth, allowing them to wirelessly connect to computers and phones as a generic HID (Human Interface Device). Normally you’d expect a download for some kind of firmware update program, but Google being Google, the Stadia Controller update process happens entirely on a webpage. Google’s controller update page has a very fancy “WebUSB” API setup—you fire up a Chromium browser, plug in your controller, grant the browser access to the device, and the webpage can access the controller directly and update the firmware, without any program to install.
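Google hasn’t published the updater’s code, but WebUSB itself is a standard (Chromium-only) browser API, and the basic flow is easy to sketch. The TypeScript below is a minimal illustration under assumptions: the device filter, interface number, and endpoint are placeholders rather than the Stadia Controller’s actual USB descriptors, and the firmware transfer is shown as a single bulk write.

```ts
// Minimal WebUSB flow (not Google's actual updater code). Requires a Chromium
// browser and, for TypeScript, the w3c-web-usb type definitions.
async function flashFirmware(firmware: ArrayBuffer): Promise<void> {
  // This call triggers the browser's "grant access to this device" prompt.
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x18d1 }], // 0x18d1 is Google's USB vendor ID
  });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);        // placeholder interface number
  await device.transferOut(1, firmware); // placeholder bulk OUT endpoint
  await device.close();
}
```

The requestDevice() call is what produces the permission prompt you see when granting the page access to the controller; everything after that is ordinary USB traffic, just driven from a webpage instead of a desktop program.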
While the web-based updater is very neat, it also means it’s impossible for a third party to archive the updater for future use. Once Google’s website goes down, there are no more controller updates. A desktop app, on the other hand, could be kept around and re-distributed forever.
Early reports of Stadia sales said the service undershot Google’s estimates by “hundreds of thousands” of users, so there are probably a lot of controllers out there. Even in 2022, it was normal to buy new Stadia Controllers labeled with the original 2019 manufacturing date, giving the impression that these things were just filling up warehouses. With the update plan still running for another year, there’s more time for sales to happen and for these controllers to find a good home.
In our review of the Stadia service, Ars’ Senior Gaming Editor Kyle Orland found the controller was “one of the highlights of the Stadia launch package,” saying it “boasts a solid, well-balanced weight; comfortable, clicky face buttons and analog sticks; quality ergonomic design on the D-pad and shoulder triggers; and strong, distinct rumble motors.” So, assuming you can get the $70 MSRP device at a significant discount, it sounds like a decent buy. The one downside is that audio features like the headphone jack and microphone won’t work after the Bluetooth update.
We’ve all experienced it or heard about it happening: Someone has a conversation about wanting a red jacket, and then suddenly, it seems like they’re seeing ads for red jackets all over the place.
Makers of microphone-equipped electronics sometimes admit to selling voice data to third parties (advertisers). But that’s usually voice data accumulated after a user has prompted their device to start listening to them and after they’ve opted into (preferably not by default) this sort of data collection.
But a marketing company called CMG Local Solutions sparked panic recently by suggesting that it has access to people’s private conversations by tapping into data gathered by the microphones on their phones, TVs, and other personal electronics, as first reported by 404 Media on Thursday. The marketing firm said it uses these personal conversations for ad targeting.
Active Listening
CMG’s Active Listening website starts with a banner featuring an accurate but worrisome statement: “It’s true. Your devices are listening to you.”
A screenshot from CMG’s Active Listening website.
A November 28 blog post described Active Listening technology as using AI to “detect relevant conversations via smartphones, smart TVs, and other devices.” As such, CMG claimed that it knows “when and what to tune into.”
The blog also shamelessly highlighted advertisers’ desire to hear every single whisper made that could help them target campaigns:
This is a world where no pre-purchase murmurs go unanalyzed, and the whispers of consumers become a tool for you to target, retarget, and conquer your local market.
The marketing company didn’t thoroughly detail how it backs its claims. An archived version of the Active Listening site provided a vague breakdown of how Active Listening purportedly works.
The website previously pointed to CMG uploading past client data into its platform to make “buyer personas.” Then, the company would identify relevant keywords for the type of person a CMG customer would want to target. CMG also mentioned placing a tracking pixel on its customers’ sites before entering the Listening Stage, which was only described as: “Active Listening begins and is analyzed via AI to detect pertinent conversations via smartphones, smart TVs, and other devices.”
The archived version of the page discussed an AI-based analysis of the data and generating an “encrypted evergreen audience list” used to re-target ads on various platforms, including streaming TV and audio, display ads, paid social media, YouTube, Google, and Bing Search.
That explanation doesn’t appear to be on the Active Listening page anymore, but CMG still says it can target people who are actively saying things like, “A minivan would be perfect for us” or “This AC is on it’s [sic] last leg!” in conversations.
But are they actively listening?
In a statement emailed to Ars Technica, Cox Media Group said that its advertising tools include “third-party vendor products powered by data sets sourced from users by various social media and other applications then packaged and resold to data servicers.” The statement continues:
Advertising data based on voice and other data is collected by these platforms and devices under the terms and conditions provided by those apps and accepted by their users, and can then be sold to third-party companies and converted into anonymized information for advertisers. This anonymized data then is resold by numerous advertising companies.
The company added that it does not “listen to any conversations or have access to anything beyond a third-party aggregated, anonymized and fully encrypted data set that can be used for ad placement” and “regret[s] any confusion.”
Before Cox Media Group sent its statement, though, CMG’s claims of collecting data on “casual conversations in real-time,” as its blog stated, were questionable. CMG never explained how our devices would somehow be able to garner the computing and networking power necessary to record and send every conversation spoken within the device’s range in “real-time,” unbeknownst to the device’s owner. The firm also never explained how it acquired the kind of access that law enforcement needs a warrant to obtain. This is despite CMG’s blog claiming that with Active Listening, advertisers would be able to know “the second someone in your area is concerned about mold in their closet,” for example.
CMG’s November blog post pointed to an unnamed technology partner that can “aggregate and analyze voice data during pre-purchase conversations,” as well as a “growing ability to access microphone data on devices.”
Move over, Google Assistant: Google is apparently working on a new AI. The Information reports that Google is working on a new “Pixie” AI assistant that will be exclusive to Pixel devices. Pixie will reportedly be powered by Google’s new “Gemini” AI model. The report says Pixie would launch first on the Pixel 9: “Eventually, Google wants to bring the features to its lower-end phones and devices like its watch.”
So far, Google and Amazon reportedly have plans to reboot their voice assistants with the new wave of large language models. Both efforts are only at the rumor stage, so neither company has explained how a large language model will help a voice assistant. Today, the typical complaints are about voice recognition accuracy and response time, neither of which a language model seems likely to help with. Presumably, large language models would allow longer-form, more in-depth responses to questions, but whether consumers want to hear a synthetic robot voice read out a paragraph-long response is something the market will figure out.
Another feature listed in the report is that Google might build “glasses that could make use of the AI’s ability to recognize the objects a wearer is seeing.” Between Google Glass and Project Iris, Google has started and stopped a lot of eyewear projects.
The move shows how Google has changed its thinking around AI assistants over the past decade. It used to view Google Assistant as the future of Google Search, so it wanted Assistant to be available everywhere. Google Assistant was a good product for a time, available on all Android phones, on iOS via the Google app, and via lots of purpose-built hardware like the Google Home/Nest Audio speakers and smart displays. Google Assistant never made any money, though. The hardware was all sold at cost, the software was given away to partners, and the ongoing costs of voice processing piled up. There was never any additional revenue to pay for the Google Assistant in the form of ads. Amazon is in the same boat with its Alexa: No one has figured out how to make voice assistants profitable.
Since Google Assistant is a money pit, The Information previously reported that Google plans to “invest less in developing its Google Assistant voice-assisted search for cars and for devices not made by Google, including TVs, headphones, smart-home speakers, smart glasses and smartwatches that use Google’s Wear.” The idea is for Google to double down on its own hardware, which, according to the previous report, is what Google thinks will provide the best protection against regulators threatening the company’s search deals on the iPhone and Android partner devices. “We’re going to take on the iPhone” is apparently the hard-to-believe mindset at Google right now, according to this report.
Making the next-gen Assistant exclusive to the Pixel 9 would fall into this category. Presumably, the ongoing money problem would then be solved, or at least accounted for, in the sales of phone hardware. The current Google Assistant was originally exclusive to the first Pixel and spread out to Google’s partners, but The Information’s reporting makes it seem like that isn’t the plan this time (though that could always change). No one knows what will happen to Google AI assistant No. 1 (Google Assistant) when AI assistant No. 2 launches, but killing it off sounds like a likely outcome. It would also be a way to cut costs and get Google Assistant off people’s devices.
The problem with doubling down on hardware is that Google Hardware is a small division that has previously been unable to support this kind of ambition. Going back to that quote about third-party devices, there are no Google cars, TVs, or smart glasses (the report says smart glasses are being worked on, though). Some years, Google’s existing hardware isn’t necessarily very good. In other years, long stretches go by when Google doesn’t update some product lines, leaving them for dead (laptops, tablets). Google Hardware is also usually only available in about 13 countries, which is a tiny sliver of the world. Being on third-party devices protects you from all this. Previously, Google’s strength was the availability of its ecosystem, and you give that up if you make everything exclusive to your hardware.
But even with all that background, startup Channel 1‘s vision of a near-future where AI-generated avatars read you the news was a bit of a shock to the system. The company’s recent proof-of-concept “showcase” newscast reveals just how far AI-generated videos of humans have come in a short time and how those realistic avatars could shake up a lot more than just the job market for talking heads.
“…the newscasters have been changed to protect the innocent”
See the highest quality AI footage in the world.
🤯 – Our generated anchors deliver stories that are informative, heartfelt and entertaining.
To be clear, Channel 1 isn’t trying to fool people with “deepfakes” of existing news anchors or anything like that. In the first few seconds of its sample newscast, it identifies its talking heads as a “team of AI-generated reporters.” A few seconds later, one of those talking heads explains further: “You can hear us and see our lips moving, but no one was recorded saying what we’re all saying. I’m powered by sophisticated systems behind the scenes.”
Even with those kinds of warnings, I found I had to constantly remind myself that the “people” I was watching deliver the news here were only “based on real people who have been compensated for use of their likeness,” as Deadline reports (how much they were compensated will probably be of great concern to actors who recently went on strike in part over the issue of AI likenesses). Everything from the lip-syncing to the intonations to subtle gestures and body movements of these Channel 1 anchors gives an eerily convincing presentation of a real newscaster talking into the camera.
Sure, if you look closely, there are a few telltale anomalies that expose these reporters as computer creations—slight video distortions around the mouth, say, or overly repetitive hand gestures, or a nonsensical word emphasis choice. But those signs are so small that they would be easy to miss at a casual glance or on a small screen like that on a phone.
In other words, human-looking AI avatars now seem well on their way to climbing out of the uncanny valley, at least when it comes to news anchors who sit at a desk or stand still in front of a green screen. Channel 1 investor Adam Mosam told Deadline it “has gotten to a place where it’s comfortable to watch,” and I have to say I agree.
A Channel 1 clip shows how its system can make video sources appear to speak a different language.
The same technology can be applied to on-the-scene news videos as well. About eight minutes into the sample newscast, Channel 1 shows a video of a European tropical storm victim describing the wreckage in French. Then it shows an AI-generated version of the same footage with the source speaking perfect English, using a facsimile of his original voice and artificial lipsync placed over his mouth.
Without the on-screen warning that this was “AI generated Language: Translated from French,” it would be easy to believe that the video was of an American expatriate rather than a native French speaker. And the effect is much more dramatic than the usual TV news practice of having an unseen interpreter speak over the footage.
Chrome has finally announced plans to kill third-party cookies. It’s been almost four years since third-party cookies were disabled in Firefox and Safari, but Google, one of the world’s largest ad companies, has been slow-rolling the death of the tracking cookie. Ad companies use third-party cookies to track users across the web, and that web activity is used to show users relevant ads. Now that Google’s alternative user-tracking ad system, the “Privacy Sandbox,” has launched in Chrome, it’s finally ready to do away with the previous form of ad tracking. The new timeline to kill third-party cookies is the second half of 2024.
Google’s blog post calls the rollout “Tracking Protection” and says the first tests will begin on January 4, when 1 percent of Chrome users will get the feature. By the second half of 2024, the rollout should hit everyone on desktop Chrome and Android (Chrome on iOS is just a reskinned Safari, so it doesn’t apply). The rollout comes with some new UI bits for Chrome, with Google saying, “If a site doesn’t work without third-party cookies and Chrome notices you’re having issues—like if you refresh a page multiple times—we’ll prompt you with an option to temporarily re-enable third-party cookies for that website from the eye icon on the right side of your address bar.” Since other browsers have been doing this for four years, it’s hard to imagine many web admins not being ready for it.
Chrome’s new third-party cookie controls.
Google
Google says the rollout is “subject to addressing any remaining competition concerns from the UK’s Competition and Markets Authority.” Chrome’s Privacy Sandbox switch represents the world’s most popular browser (Google Chrome) integrating with the web’s biggest advertising platform (Google Ads) and shutting down alternative tracking methods used by competing ad companies. So, some regulators are naturally interested in the whole process.
Google says its choice to offer this privacy feature four years after its competitors is a “responsible approach” to phasing out third-party cookies. That responsibility seems to primarily be about responsibility to Google’s shareholders since turning off tracking cookies was previously seen as an attack on Google’s business model. Google’s position as the world’s biggest browser vendor allowed it to delay the death of tracking cookies long enough to create an alternative tracking system, which launched earlier this year in Chrome. With the ad business secured, it’s now acceptable to phase out cookies. So far, everything is going to plan.
What stung her wasn’t the return to being the Android interloper in the chats again. It wasn’t the resulting lower-quality images, loss of encryption, and strange “Emphasized your message” reaction texts. It was losing messages during the outage and never being entirely certain they had been sent or received. There was a gathering on Saturday, and she had to double-check with a couple people about the details after showing up inadvertently early at the wrong spot.
That kind of grievance is why, after Apple on Wednesday appeared to have blocked what Beeper described as “~5% of Beeper Mini users” from accessing iMessages, both co-founder Eric Migicovsky and the app told users they understood if people wanted out. The app had already suspended its plans to charge customers $1.99 per month following the first major outage. But this was something more about “how ridiculously annoying this uncertainty is for our users,” Migicovsky posted.
Fighting on two fronts
But Beeper would keep working to ensure access and keep fighting on other fronts. Migicovsky pointed to Epic’s victory at trial against Google’s Play Store (“big tech”) as motivation. “We have a chance. We’re not giving up.” Over the weekend, Migicovsky reposted shows of support from Senators Elizabeth Warren (D-Mass.) and Amy Klobuchar (D-Minn.), who have focused on reining in and regulating large technology companies’ power.
Apple previously issued a (somewhat uncommon) statement about Beeper’s iMessage access, stating that it “took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage.” Citing privacy, security, and spam concerns, Apple stated it would “continue to make updates in the future” to protect users. Migicovsky previously denied to Ars that Beeper used “fake credentials” or in any way made iMessages less secure.
I asked Migicovsky by direct message if, given Apple’s stated plan to continually block it, there could ever be a point at which Beeper’s access was “settled,” or “back up and running,” as he put it in his post on X (formerly Twitter). He wrote that it was up to the press and the community. “If there’s enough pressure on Apple, they will have to quit messing with us.” “Us,” he clarified, meant both Apple’s customers using iMessage and Android users trying to chat securely with iPhone friends.
“That’s who they’re penalizing,” he wrote. “It’s not a Beeper vs. Apple fight, it’s Apple versus customers.”
Over the past couple of years of reviewing iPhones, we’ve often jokingly called them “smartcameras” rather than smartphones, as the camera features are really what sell people on upgrading to new models.
So, for our final Apple gift guide, we’ll revisit some of what we explored in our iPhone 15 and iPhone 15 Pro review with a special focus on the cameras. If you’re looking to grab a new iPhone for yourself or someone in your family, which camera is best?
The idea here is to provide a top-level, quick summary of the features of each iPhone camera as they pertain to specific uses to make for an easy buying guide for last-minute holiday shoppers who want a quick answer. We’ll go over each phone and survey its features, detailing their relevant uses and noting some recommendations and considerations along the way.
If you’re already deeply familiar with this topic, this is a cheat sheet for would-be buyers, not an in-depth analysis.
If you aren’t familiar with these topics and you’re interested in going deeper, our iPhone reviews from the past few years are the place to go; we’ve covered the evolution of SmartHDR, the additions of new lenses and features, and so on as those things have been introduced or tweaked.
But as for today’s quick summary, let’s dive in!
Ars Technica may earn compensation for sales from links on this post through affiliate programs.
A note on computational photography and SmartHDR
The camera lens bump on the back of each iPhone has been getting bigger with time, but it’s software that has been driving better picture quality. Apple uses a few techniques to improve the pictures you take with your iPhone, and foremost among those is what the company calls SmartHDR.
Introduced in the iPhone XS (though some competing Android flagships did this beforehand and just called it something else), SmartHDR is a complex beast. But the simple description is that when you take a photo with your iPhone with SmartHDR enabled, it will take not one but several shots. It will then use a trained algorithm to combine all the photos’ best aspects into one picture.
The specifics of that algorithm have evolved with time, and Apple has identified a few specific versions of SmartHDR over the past few years. But all that matters when we’re looking at the latest iPhones is, well, the latest version of SmartHDR. And here’s what you can expect: Most of the time, SmartHDR produces drastically better photos, with fewer unwanted artifacts and abnormalities, a clearer picture, better lighting, and so on.
Once in a while, though, it makes a weird call, and you’ll see something anomalous because of SmartHDR. It also sometimes (let’s be real: usually) gives photos a doctored, unreal quality.
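Apple doesn’t document SmartHDR’s internals, but the general multi-frame idea is straightforward to sketch. The TypeScript below is a toy exposure-fusion pass, not Apple’s algorithm: the grayscale representation and the mid-gray weighting heuristic are simplifying assumptions.

```ts
// Toy exposure fusion: each frame is a grayscale image with values in [0, 1];
// pixels near mid-gray (well exposed) contribute the most to the result.
function fuseExposures(frames: Float32Array[]): Float32Array {
  const out = new Float32Array(frames[0].length);
  for (let i = 0; i < out.length; i++) {
    let weightedSum = 0;
    let totalWeight = 0;
    for (const frame of frames) {
      const v = frame[i];
      const w = Math.exp(-((v - 0.5) ** 2) / 0.08); // favor well-exposed pixels
      weightedSum += w * v;
      totalWeight += w;
    }
    out[i] = weightedSum / totalWeight;
  }
  return out;
}
```

A real pipeline layers frame alignment, tone mapping, and trained models on top of this basic idea, which is where both the quality gains and the occasional odd judgment call come from.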
The same goes for Night Mode, a feature Apple essentially copied from Google’s Pixel phones. Introduced in iPhones in 2019, Night Mode also takes a lot of photos in a short period (albeit a longer one than SmartHDR; you have to hold the phone still for a few seconds). In this case, the goal is to battle the low-light shortcomings of smartphone cameras, bring out lost detail, and reduce graininess.
It’s very effective but almost too effective in many cases; photos taken in the dark end up with a bright, glowing quality. It’s great if you want to ensure you can see how much you and your friends or family are smiling in a group photo; it’s not so great if your goal is capturing reality accurately.
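Night Mode’s internals aren’t public either, but the core noise-reduction trick, stacking several aligned frames so random sensor noise averages out while real detail reinforces itself, can be sketched in a few lines. Again, this is an illustration under the same simplifying assumptions, not Apple’s implementation:

```ts
// Toy frame stacking: averaging aligned low-light frames cancels random sensor
// noise (which differs from frame to frame) while scene detail stays put.
function stackFrames(frames: Float32Array[]): Float32Array {
  const out = new Float32Array(frames[0].length);
  for (const frame of frames) {
    for (let i = 0; i < out.length; i++) {
      out[i] += frame[i] / frames.length;
    }
  }
  return out;
}
```

That averaging, plus a brightness boost, is also why Night Mode shots can look brighter and smoother than the scene did in person.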
Below: Shots taken in a very dark room with the iPhone 15, iPhone 15 Pro, and iPhone 15 Pro Max, from our iPhone 15 and iPhone 15 Pro review.
iPhone 15.
Samuel Axon
iPhone 15 Pro.
Samuel Axon
iPhone 15 Pro Max.
Samuel Axon
iPhone 14 Pro Max.
Samuel Axon
iPhone 14.
Samuel Axon
iPhone 13 Pro.
Samuel Axon
Competing flagship phones do much of this, too, so it’s just the state of smartphone camera tech. Mostly, it’s worth the downsides because the laws of optics essentially cap how good these cameras can be without these sorts of computational photography features.
Anyway, when we make the recommendations below, we assume you are all-in on this computational photography stuff. Otherwise, you’ll want to look at alternatives to taking photos with an iPhone if quality matters to you.
iPhone 15 and iPhone 15 Plus
We’ll start with the cheapest phone in Apple’s iPhone 15 lineup because the other two phones (iPhone 15 Pro and iPhone 15 Pro Max) build on what’s seen here. The iPhone 15 Plus is getting lumped in here because its camera system is identical to its smaller variant.
The iPhone 15 has a 48-megapixel main camera with a quad-pixel sensor and an ƒ/1.6 aperture. By default, this camera takes 24-megapixel images, using a computational process to combine low-light 12 MP images with large quad pixels and a 48 MP image.
You can take full 48 MP photos too by going into the Settings app, tapping Camera, tapping Formats, and turning on Resolution Control. When this is enabled, you can tap a toggle in the top-right corner when taking a photo to take one at full resolution.
The 48 MP sensor is also used to enable 2x zoom at a quality comparable to the 2x optical zoom seen on prior Pro-model iPhones. Apple does this by cropping the image and applying machine learning techniques to produce the final result. (I told you it’s all about the computational features!)
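To make the quad-pixel binning and 2x crop ideas concrete, here is a simplified TypeScript sketch of 2x2 pixel binning (how a “quad pixel” sensor reads 48 MP down to a brighter 12 MP image) and of the center crop that behaves like 2x zoom. It is an illustration only: it operates on a flat grayscale array and skips the Bayer demosaicing and machine-learning cleanup a real camera pipeline applies.

```ts
// Simplified readout tricks on a grayscale array (dimensions assumed divisible
// by four); real pipelines work on Bayer sensor data with ML processing on top.
type Gray = { pixels: Float32Array; width: number; height: number };

// 2x2 pixel binning: average each quad of sensor pixels into one output pixel.
function binQuadPixels(img: Gray): Gray {
  const w = img.width / 2;
  const h = img.height / 2;
  const out = new Float32Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const i = 2 * y * img.width + 2 * x;
      out[y * w + x] =
        (img.pixels[i] + img.pixels[i + 1] +
          img.pixels[i + img.width] + img.pixels[i + img.width + 1]) / 4;
    }
  }
  return { pixels: out, width: w, height: h };
}

// "2x zoom" by cropping the central quarter of the full-resolution frame.
function centerCrop2x(img: Gray): Gray {
  const w = img.width / 2;
  const h = img.height / 2;
  const out = new Float32Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      out[y * w + x] = img.pixels[(y + h / 2) * img.width + (x + w / 2)];
    }
  }
  return { pixels: out, width: w, height: h };
}
```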
This is why we don’t recommend the iPhone 14, iPhone 13, or iPhone SE (all of which are still in Apple’s lineup) for would-be buyers who prioritize the camera abilities. That 2x zoom is a must-have, and those other phones don’t offer it. They offer a digital zoom option, but you see a real hit to quality when you use it.
That covers 1x and 2x zoom with the rear camera. There’s another lens back there, though: a 12 MP ultra-wide camera (ƒ/2.4 aperture). This one enables what Apple labels as 0.5x zoom, allowing you to capture more in tight spaces, like a group of people posing for a selfie in a car or a very small room.
On the front of the phone, you’ll find a 12 MP camera with a ƒ/1.9 aperture; this is the selfie camera. Like the rear camera, it supports several of Apple’s computational photography buzzwords like SmartHDR 5, the Photonic Engine, and Deep Fusion.
The front and rear cameras can record 4K video with Dolby Vision HDR at up to 60 fps. The rear camera system supports Cinematic Video, which adds a depth-of-field effect behind human subjects. It also has Action Mode, which takes lower-than-4K resolution video but has a strong stabilization effect for situations where your hands move a lot.
Altogether, these features make the iPhone 15 an excellent all-around camera system. It has all the features you’d need to take photos of your kids at home or take selfies with friends while on the town—including Night Mode for low-light shots.
It will be enough for most people. This is a particularly good time for the non-Pro iPhone, as Apple introduced a bunch of formerly Pro-only features (like the 48 MP main camera) to the non-Pro phone for the first time during this cycle.
That said, there are still some situations where you might want to spring for the iPhone Pro or even the iPhone Pro Max.
iPhone 15 Pro
Now that we’ve covered the basics of the iPhone 15’s camera system, we can focus on what’s different if you spend extra on the iPhone 15 Pro.
The iPhone 15 Pro has a larger main camera sensor with bigger quad pixels (2.44 µm to the iPhone 15’s 2 µm), and the main camera’s aperture goes from ƒ/1.6 in the iPhone 15 to ƒ/1.78 in the Pro. Whereas the iPhone 15 has a 26 mm main lens focal length, you’re looking at 24 mm, 28 mm, and 35 mm options for the Pro.
Apple says the iPhone 15 Pro has improved optical image stabilization and a flash that produces more natural colors, too. Meanwhile, the Ultra-Wide lens goes from a ƒ/2.4 aperture to ƒ/2.2.
The Pro phone adds a third lens, too: a 12 MP, ƒ/2.8 aperture telephoto lens for 3x zoom. That means that the iPhone 15 Pro’s zoom levels are 0.5x, 1x, 2x, and 3x to the iPhone 15’s 0.5x, 1x, and 2x.
There are no substantial differences between the front-facing camera in the iPhone 15 and the iPhone 15 Pro.
There are a few Pro-specific features, too, specs aside. The iPhone 15 Pro can use Night Mode together with Portrait mode (a shooting mode that adds a depth-of-field effect to still images), whereas with the iPhone 15, you have to choose one or the other. It’s an edge case, but there you have it.
The iPhone 15 Pro also supports the ProRAW format, which provides high-quality images with minimal doctoring so that photographers can tweak or enhance the image to their own spec in software later.
Finally, the iPhone 15 Pro supports Macro photography mode. This automatically switches the camera settings when you’re taking an ultra-close-up shot of something detailed, which results in substantially better macro photography in many situations.
On the video side of things, the differences in quality aren’t huge. But there are some Pro-specific features here. The iPhone 15 Pro supports log video recording, macro videos, and a 3D “spatial video” format to be viewed later on Apple’s upcoming Vision Pro headset. When I tried the Vision Pro earlier this year, I wasn’t impressed with these spatial videos, but it’s possible Apple will have improved them by the time the device reaches the public.
You’ll want to go with the Pro if you’re taking close-ups of flowers. You might prefer the Pro to the regular 15 if you want to take ProRAW photos to edit the image to professional standards later. And 3x zoom makes a big difference in situations like concerts where you want to take pictures of something far away.
In general, this makes the iPhone 15 Pro a better fit for content creators of various types, and it offers more options for some unique edge cases. You’ll also see marginally better low-light photography—sometimes.
If you’re not seeing those edge cases often and are not producing professional-quality content, though, the iPhone 15’s camera will serve you just fine. In our experience, the only thing you’ll miss frequently is that 3x zoom.
iPhone 15 Pro Max
Speaking of zoom features, that’s the main thing differentiating the iPhone 15 Pro Max from the smaller iPhone 15 Pro.
The Max replaces the 3x telephoto lens with a 5x one—same megapixels, same aperture. You lose the 3x option, but you can still take advantage of the main camera’s 48MP lens to take 2x zoom photos, and 5x is more differentiated and arguably better for many situations.
Below: Daytime shots at 2x, 3x, or 5x zoom (as applicable) on the iPhone 15, iPhone 15 Pro, and iPhone 15 Pro Max from our iPhone 15 and iPhone 15 Pro review.
The iPhone 15 Pro Max at 5x zoom.
Samuel Axon
The iPhone 15 Pro at 3x zoom.
Samuel Axon
The iPhone 15 at 2x zoom.
Samuel Axon
The iPhone 15 Pro at 2x zoom.
Samuel Axon
The iPhone 15 Pro Max at 2x zoom.
Samuel Axon
That’s the only difference between the iPhone 15 Pro Max and the iPhone 15 Pro, but it’s significant.
In general, we’d recommend picking between these two Pro models based on screen size, not camera features, but if you find yourself in situations like concerts where you want more powerful zoom, it could be worth the upgrade on that basis.
A quick recap
The iPhone 15 is a good all-around camera, and it will be enough for most use cases. We don’t recommend springing for the more expensive phones for the camera alone unless you have a very specific need in your daily life.
Jump to the iPhone 15 Pro or Pro Max if you are a professional content creator who needs the best raw image files or the ability to record 4K 60 fps HDR video to external storage, if you like to do macro photography, or if you are an avid user of Apple’s AI-driven Portrait Mode.
Go for the Max if powerful optical zoom is a top priority. Otherwise, stick with the 15.
The iPhone 15 is part of Apple’s self-repair program now.
Samuel Axon
Apple today expanded the Self Service Repair program it launched in April 2022 to include online access to Apple’s diagnostics tool, as well as support for the iPhone 15 series and M2 Macs.
The online tool, Apple said in today’s announcement, provides “the same ability as Apple Authorized Service Providers and Independent Repair Providers to test devices for optimal part functionality and performance, as well as identify which parts may need repair.” The troubleshooting tool is only available in the US and will hit Europe in 2024, according to Apple.
Upon visiting the tool’s website, you’ll be prompted to put your device in diagnostic mode before entering the device’s serial number. Then, you’ll have access to a diagnostic suite, including things like a mobile resource inspector for checking software and validating components’ presence, testing for audio output and “display pixel anomalies,” and tests for cameras and Face ID.
Apple’s support page says the tests may “help isolate issues, investigate whether a part needs to be replaced, or verify that a repair has been successfully completed.”
The tool requires iOS 17.0 or macOS Sonoma 14.1 and later.
Apple’s Self Service Repair program relies on parts pairing, though, and critics say this limits the tools’ effectiveness. Self-repair activist iFixit has been vocal about its disagreement with Apple’s use of the practice since the tech giant launched its self-repair program. iFixit has argued that parts serialization limits the usage of third-party parts. In September, iFixit CEO Kyle Wiens called parts pairing “a serious threat to our ability to fix the things we own,” noting that Apple may be seeking to strong-arm a favorable customer experience but that it’s costing us the environment and “ownership rights.”
In a statement to Ars Technica today, Wiens expressed further disappointment with Apple’s parts serialization:
Apple still has a long way to go to create a robust repair ecosystem, including ending their repair-hostile parts pairing system. This software tool clearly illuminates the problems we’ve identified with parts pairing, where the diagnostic tool fails to recognize the ambient light sensor in a new part we’ve installed.
Users of Apple M2-based MacBook Pro and MacBook Air laptops, as well as the M2 versions of the Mac mini, Mac Pro, and Mac Studio, are now all included in the program, which gives customers access to tools, parts, and manuals previously only accessible by Apple and authorized repair partners. Customers can also rent repair tool kits, although those, too, have been criticized for their bulkiness and limited rental period.
Since launching its repair program, though, Apple has made a turnabout on user repairability, even if its approach is still flawed. With the latest additions, Apple’s program now supports 35 products. The company has also become an unexpected proponent of state and national right-to-repair bills. And it has somewhat simplified repairs via its Self Service Repair program by no longer requiring fixers to call Apple upon completing a repair. People can instead verify repairs and update firmware with the System Configuration post-repair software tool. Today, Apple also announced that it is bringing the program to 24 new European countries, bringing its total to 33 countries.
Apple still says its repair program is best reserved for people who are experienced with electronics repairs.