

Coming to Apple OSes: A seamless, secure way to import and export passkeys

Credit: Apple

As the video explains:

This new process is fundamentally different and more secure than traditional credential export methods, which often involve exporting an unencrypted CSV or JSON file, then manually importing it into another app. The transfer process is user initiated, occurs directly between participating credential manager apps and is secured by local authentication like Face ID.

This transfer uses a data schema that was built in collaboration with the members of the FIDO Alliance. It standardizes the data format for passkeys, passwords, verification codes, and more data types.

The system provides a secure mechanism to move the data between apps. No insecure files are created on disk, eliminating the risk of credential leaks from exported files. It’s a modern, secure way to move credentials.

The push to passkeys is fueled by the tremendous costs associated with passwords. Creating and managing a sufficiently long, randomly generated password for each account is a burden on many users, a difficulty that often leads to weak choices and reused passwords. Leaked passwords have also been a chronic problem.

Passkeys, in theory, provide a means of authentication that’s immune to credential phishing, password leaks, and password spraying. Under the latest “FIDO2” specification, the scheme creates a unique public/private cryptographic keypair during each website or app enrollment. The keys are generated and stored on a user’s phone, computer, YubiKey, or similar device. The public portion of the key is sent to the account service. The private key remains bound to the user device, where it can’t be extracted. During sign-in, the website or app server sends the device that created the key pair a challenge in the form of pseudo-random data. Authentication occurs only when the device signs the challenge using the corresponding private key and sends the result back.

This design ensures that there is no shared secret that ever leaves the user’s device. That means there’s no data to be sniffed in transit, phished, or compromised through other common methods.
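The challenge-response exchange described above can be sketched in a few lines. This is a toy illustration using textbook RSA with deliberately tiny primes; real passkeys use hardware-backed keys (typically ES256) through the WebAuthn API, and nothing here should be read as Apple's or the FIDO Alliance's implementation.

```python
# Toy sketch of the passkey challenge-response flow. Textbook RSA with
# tiny primes, purely for illustration -- real passkeys use hardware-
# backed keys and the WebAuthn API, never parameters this small.
import hashlib
import secrets

# --- Enrollment: device generates a keypair; only the public half leaves ---
p, q = 61, 53                       # toy primes
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, stays on the device

def sign(challenge: bytes) -> int:
    """Device signs the server's challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server checks the signature using only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

# --- Sign-in: server sends pseudo-random data, device signs it ---
challenge = secrets.token_bytes(32)
assert verify(challenge, sign(challenge))                  # genuine device passes
assert not verify(challenge, (sign(challenge) + 1) % n)    # tampered signature fails
```

The property that matters is visible in the last two lines: the server verifies with only the public key, so there is no shared secret stored server-side for a phisher or a breach to capture.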

As I noted in December, the biggest thing holding back passkeys at the moment is their lack of usability. Apps, OSes, and websites are, in many cases, islands that don’t interoperate with their peers. Besides potentially locking users out of their accounts, the lack of interoperability also makes passkeys too difficult for many people.

Apple’s demo this week provides the strongest indication yet that passkey developers are making meaningful progress in improving usability.



With the launch of o3-pro, let’s talk about what AI “reasoning” actually does


inquiring artificial minds want to know

New studies reveal pattern-matching reality behind the AI industry’s reasoning claims.

On Tuesday, OpenAI announced that o3-pro, a new version of its most capable simulated reasoning model, is now available to ChatGPT Pro and Team users, replacing o1-pro in the model picker. The company also reduced API pricing for o3-pro by 87 percent compared to o1-pro while cutting o3 prices by 80 percent. While “reasoning” is useful for some analytical tasks, new studies have posed fundamental questions about what the word actually means when applied to these AI systems.

We’ll take a deeper look at “reasoning” in a minute, but first, let’s examine what’s new. While OpenAI originally launched o3 (non-pro) in April, the o3-pro model focuses on mathematics, science, and coding while adding new capabilities like web search, file analysis, image analysis, and Python execution. Since these tool integrations slow response times (longer than the already slow o1-pro), OpenAI recommends using the model for complex problems where accuracy matters more than speed. However, simulated reasoning models do not necessarily confabulate less than “non-reasoning” AI models (they still introduce factual errors), which is a significant caveat when seeking accurate results.

Beyond the reported performance improvements, OpenAI announced a substantial price reduction for developers. In the API, o3-pro costs $20 per million input tokens and $80 per million output tokens, making it 87 percent cheaper than o1-pro. The company also reduced the price of the standard o3 model by 80 percent.

These reductions address one of the main concerns with reasoning models—their high cost compared to standard models. The original o1 cost $15 per million input tokens and $60 per million output tokens, while o3-mini cost $1.10 per million input tokens and $4.40 per million output tokens.
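Those discounts are easy to sanity-check. A minimal sketch, assuming o1-pro's previously reported list price of $150 per million input tokens and $600 per million output tokens (the announcement itself only gives the o3-pro figures):

```python
# Sanity-checking the reported API price cuts (USD per million tokens).
# The o3-pro prices come from the announcement; the o1-pro prices
# ($150 in / $600 out) are an assumption from prior reporting.
O1_PRO = {"input": 150.0, "output": 600.0}   # assumed previous pricing
O3_PRO = {"input": 20.0, "output": 80.0}     # announced o3-pro pricing

def discount_pct(old: float, new: float) -> int:
    """Percentage reduction, rounded to the nearest whole percent."""
    return round((old - new) / old * 100)

print(discount_pct(O1_PRO["input"], O3_PRO["input"]))    # 87
print(discount_pct(O1_PRO["output"], O3_PRO["output"]))  # 87
```

Both the input and output rates land at the same 87 percent figure OpenAI quotes, which is why the company can cite a single number for the cut.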

Why use o3-pro?

Unlike general-purpose models like GPT-4o that prioritize speed, broad knowledge, and making users feel good about themselves, o3-pro uses a chain-of-thought simulated reasoning process to devote more output tokens toward working through complex problems, making it generally better for technical challenges that require deeper analysis. But it’s still not perfect.

An OpenAI o3-pro benchmark chart. Credit: OpenAI

Measuring so-called “reasoning” capability is tricky since benchmarks can be easy to game by cherry-picking or training data contamination, but OpenAI reports that o3-pro is popular among testers, at least. “In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help,” writes OpenAI in its release notes. “Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.”

An OpenAI o3-pro benchmark chart. Credit: OpenAI

OpenAI shared benchmark results showing o3-pro’s reported performance improvements. On the AIME 2024 mathematics competition, o3-pro achieved 93 percent pass@1 accuracy, compared to 90 percent for o3 (medium) and 86 percent for o1-pro. The model reached 84 percent on PhD-level science questions from GPQA Diamond, up from 81 percent for o3 (medium) and 79 percent for o1-pro. For programming tasks measured by Codeforces, o3-pro achieved an Elo rating of 2748, surpassing o3 (medium) at 2517 and o1-pro at 1707.

When reasoning is simulated

Structure made of cubes in the shape of a thinking or contemplating person that evolves from simple to complex, 3D render.


It’s easy for laypeople to be thrown off by the anthropomorphic claims of “reasoning” in AI models. In this case, as with the borrowed anthropomorphic term “hallucinations,” “reasoning” has become a term of art in the AI industry that basically means “devoting more compute time to solving a problem.” It does not necessarily mean the AI models systematically apply logic or possess the ability to construct solutions to truly novel problems. This is why we at Ars Technica continue to use the term “simulated reasoning” (SR) to describe these models. They are simulating a human-style reasoning process that does not necessarily produce the same results as human reasoning when faced with novel challenges.

While simulated reasoning models like o3-pro often show measurable improvements over general-purpose models on analytical tasks, research suggests these gains come from what researchers call “inference-time compute” scaling: allocating more computational resources to traverse their neural networks in smaller, more directed steps. When these models use “chain-of-thought” techniques, they dedicate more computational resources to exploring connections between concepts in their neural network data. Each intermediate “reasoning” output step (produced in tokens) serves as context for the next token prediction, effectively constraining the model’s outputs in ways that tend to improve accuracy and reduce mathematical errors (though not necessarily factual ones).

But fundamentally, all Transformer-based AI models are pattern-matching marvels. They borrow reasoning patterns from examples in the training data that researchers use to create them. Recent studies on Math Olympiad problems reveal that SR models still function as sophisticated pattern-matching machines—they cannot catch their own mistakes or adjust failing approaches, often producing confidently incorrect solutions without any “awareness” of errors.

Apple researchers found similar limitations when testing SR models on controlled puzzle environments. Even when provided explicit algorithms for solving puzzles like Tower of Hanoi, the models failed to execute them correctly—suggesting their process relies on pattern matching from training data rather than logical reasoning. As problem complexity increased, these models showed a “counterintuitive scaling limit,” reducing their reasoning effort despite having adequate computational resources. This aligns with findings from the US Math Olympiad (USAMO) study mentioned above, which showed that models made basic logical errors and continued with flawed approaches even when generating contradictory results.

However, there’s some serious nuance here that you may miss if you’re reaching quickly for a pro-AI or anti-AI take. Pattern-matching and reasoning aren’t necessarily mutually exclusive. Since it’s difficult to mechanically define human reasoning at a fundamental level, we can’t definitively say whether sophisticated pattern-matching is categorically different from “genuine” reasoning or just a different implementation of similar underlying processes. The Tower of Hanoi failures are compelling evidence of current limitations, but they don’t resolve the deeper philosophical question of what reasoning actually is.

Illustration of a robot standing on a ladder in front of a large chalkboard solving mathematical problems. A red question mark hovers over its head.

And understanding these limitations doesn’t diminish the genuine utility of SR models. For many real-world applications—debugging code, solving math problems, or analyzing structured data—pattern matching from vast training sets is enough to be useful. But as we consider the industry’s stated trajectory toward artificial general intelligence and even superintelligence, the evidence so far suggests that simply scaling up current approaches or adding more “thinking” tokens may not bridge the gap between statistical pattern recognition and what might be called generalist algorithmic reasoning.

But the technology is evolving rapidly, and new approaches are already being developed to address those shortcomings. For example, self-consistency sampling allows models to generate multiple solution paths and check for agreement, while self-critique prompts attempt to make models evaluate their own outputs for errors. Tool augmentation represents another useful direction already used by o3-pro and other ChatGPT models—by connecting LLMs to calculators, symbolic math engines, or formal verification systems, researchers can compensate for some of the models’ computational weaknesses. These methods show promise, though they don’t yet fully address the fundamental pattern-matching nature of current systems.
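Self-consistency sampling, the first of those approaches, is simple enough to sketch. The `sample_answer` stub below is a hypothetical stand-in for a stochastic chain-of-thought model call; only the majority-vote logic reflects the technique itself.

```python
# Minimal sketch of self-consistency sampling: draw several independent
# answers, then keep the one the samples agree on most. sample_answer is
# a hypothetical stub standing in for a real model call.
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> str:
    # Pretend model: answers correctly 70% of the time, otherwise
    # returns one of two plausible wrong answers.
    return "42" if rng.random() < 0.7 else rng.choice(["41", "43"])

def self_consistent_answer(question: str, n_samples: int = 51, seed: int = 0) -> str:
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]   # majority vote across samples

print(self_consistent_answer("What is 6 * 7?"))
```

Even though each individual sample is wrong 30 percent of the time here, agreement across 51 samples almost always lands on the correct majority answer; the same intuition applies to real models, where agreement between independently sampled reasoning chains filters out one-off errors.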

For now, o3-pro is a better, cheaper version of what OpenAI previously provided. It’s good at solving familiar problems, struggles with truly new ones, and still makes confident mistakes. If you understand its limitations, it can be a powerful tool, but always double-check the results.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



After AI setbacks, Meta bets billions on undefined “superintelligence”

Meta has developed plans to create a new artificial intelligence research lab dedicated to pursuing “superintelligence,” according to reporting from The New York Times. The social media giant chose 28-year-old Alexandr Wang, founder and CEO of Scale AI, to join the new lab as part of a broader reorganization of Meta’s AI efforts under CEO Mark Zuckerberg.

Superintelligence refers to a hypothetical AI system that would exceed human cognitive abilities—a step beyond artificial general intelligence (AGI), which aims to match an intelligent human’s capability for learning new tasks without intensive specialized training.

However, much like AGI, superintelligence remains a nebulous term in the field. Since scientists still poorly understand the mechanics of human intelligence, and since human intelligence resists simple quantification and has no single agreed-upon definition, identifying superintelligence when it arrives will present significant challenges.

Computers already far surpass humans in certain forms of information processing such as calculations, but this narrow superiority doesn’t qualify as superintelligence under most definitions. The pursuit assumes we’ll recognize it when we see it, despite the conceptual fuzziness.

Illustration of studious robot reading a book

AI researcher Dr. Margaret Mitchell told Ars Technica in April 2024 that there will “likely never be agreement on comparisons between human and machine intelligence” but predicted that “men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans” regardless of the reality.

The new lab represents Meta’s effort to remain competitive in the increasingly crowded AI race, where tech giants continue pouring billions into research and talent acquisition. Meta has reportedly offered compensation packages worth seven to nine figures to dozens of researchers from companies like OpenAI and Google, according to The New York Times, with some already agreeing to join the company.

Meta joins a growing list of tech giants making bold claims about advanced AI development. In January, OpenAI CEO Sam Altman wrote in a blog post that “we are now confident we know how to build AGI as we have traditionally understood it.” Earlier, in September 2024, Altman predicted that the AI industry might develop superintelligence “in a few thousand days.” Elon Musk made an even more aggressive prediction in April 2024, saying that AI would be “smarter than the smartest human” by “next year, within two years.”



US air traffic control still runs on Windows 95 and floppy disks

On Wednesday, acting FAA Administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy disks and Windows 95 computers, Tom’s Hardware reports. The agency has issued a Request For Information to gather proposals from companies willing to tackle the massive infrastructure overhaul.

“The whole idea is to replace the system. No more floppy disks or paper strips,” Rocheleau said during the committee hearing. Transportation Secretary Sean Duffy called the project “the most important infrastructure project that we’ve had in this country for decades,” describing it as a bipartisan priority.

Most air traffic control towers and facilities across the US currently operate with technology that seems frozen in the 20th century, although that isn’t necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft’s Windows 95 operating system, which launched in 1995.

A pile of floppy disks

Credit: Getty

As Tom’s Hardware notes, modernization of the system is broadly popular. Sheldon Jacobson, a University of Illinois professor who has studied risks in aviation, says that the system works remarkably well as is but that an upgrade is still critical, according to NPR. The aviation industry coalition Modern Skies has been pushing for ATC modernization and recently released an advertisement highlighting the outdated technology.

While the vintage systems may have inadvertently protected air traffic control from widespread outages like the CrowdStrike incident that disrupted modern computer systems globally in 2024, agency officials say 51 of the FAA’s 138 systems are unsustainable due to outdated functionality and a lack of spare parts.

The FAA isn’t alone in clinging to floppy disk technology. San Francisco’s train control system still runs on DOS loaded from 5.25-inch floppy disks, with upgrades not expected until 2030 due to budget constraints. Japan has also struggled in recent years to modernize government record systems that use floppy disks.

If it ain’t broke? (Or maybe it is broke)

Modernizing the air traffic control system presents engineering challenges that extend far beyond simply installing newer computers. Unlike typical IT upgrades, ATC systems must maintain continuous 24/7 operation, because shutting down facilities for maintenance could compromise aviation safety.



Anthropic releases custom AI chatbot for classified spy work

On Thursday, Anthropic unveiled specialized AI models designed for US national security customers. The company released “Claude Gov” models that were built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic’s consumer and enterprise offerings, also called Claude, in several ways. They reportedly handle classified material, “refuse less” when engaging with classified information, and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls “enhanced proficiency” in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same “safety testing” as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

Anthropic is not the first company to offer specialized chatbot services for intelligence agencies. In 2024, Microsoft launched an isolated version of OpenAI’s GPT-4 for the US intelligence community after 18 months of work. That system, which operated on a special government-only network without Internet access, became available to about 10,000 individuals in the intelligence community for testing and answering questions.



Millions of low-cost Android devices turn home networks into crime platforms

Millions of low-cost devices for media streaming, in-vehicle entertainment, and video projection are infected with malware that turns consumer networks into platforms for distributing malware, concealing nefarious communications, and performing other illicit activities, the FBI has warned.

The malware infecting these devices, known as BadBox, is based on Triada, a malware strain discovered in 2016 by Kaspersky Lab, which called it “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and functions for modifying the Android OS’s all-powerful Zygote process. Google eventually updated Android to block the methods Triada used to infect devices.

The threat remains

A year later, Triada returned, only this time, devices came pre-infected before they reached consumers’ hands. In 2019, Google confirmed that the supply-chain attack affected thousands of devices and that the company had once again taken measures to thwart it.

In 2023, security firm Human Security reported on BadBox, a Triada-derived backdoor it found preinstalled on thousands of devices manufactured in China. The malware, which Human Security estimated was installed on 74,000 devices around the world, facilitated a range of illicit activities, including advertising fraud, residential proxy services, the creation of fake Gmail and WhatsApp accounts, and infecting other Internet-connected devices.



“In 10 years, all bets are off”—Anthropic CEO opposes decadelong freeze on state AI laws

On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump’s tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.

Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems “could change the world, fundamentally, within two years; in 10 years, all bets are off.”

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.

In the op-ed, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America’s competitive position against China. “I am sympathetic to these concerns,” Amodei wrote. “But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast.”

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.

“Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop,” Amodei wrote.

Transparency as the middle ground

Amodei emphasized AI’s transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI “could accelerate economic growth to an extent not seen for a century, improving everyone’s quality of life,” a claim that some skeptics believe may be overhyped.



Two certificate authorities booted from the good graces of Chrome

Google says its Chrome browser will stop trusting certificates from two certificate authorities after “patterns of concerning behavior observed over the past year” diminished trust in their reliability.

The two organizations, Taiwan-based Chunghwa Telecom and Budapest-based Netlock, are among the dozens of certificate authorities trusted by Chrome and most other browsers to provide digital certificates that encrypt traffic and certify the authenticity of sites. With the ability to mint cryptographic credentials that cause address bars to display a padlock, assuring the trustworthiness of a site, these certificate authorities wield significant control over the security of the web.

Inherent risk

“Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports,” members of the Chrome security team wrote Tuesday. “When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified.”



Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images


Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast to iOS, however, Android imposes fewer controls on localhost communications and background execution of mobile apps, the researchers said; iOS also applies stricter controls in its app store vetting processes to limit such abuses. Android’s more permissive design allows Meta Pixel and Yandex Metrica to send web requests with web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns. Meta Pixel and Yandex Metrica are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.


While the technical underpinnings differ, both Meta Pixel and Yandex Metrica perform a “weird protocol misuse” to exploit the unvetted access Android grants to localhost ports on the 127.0.0.1 IP address. Browsers access these ports without user notification. The Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.
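The mechanism is easy to model with plain sockets. The sketch below is illustrative only: the function names, the cookie value, and the use of an ephemeral port are assumptions for the demo, not Meta’s or Yandex’s actual code (the real trackers used fixed ports such as 12387).

```python
# Toy model of the localhost side channel: a native "app" listens on a
# 127.0.0.1 port while a "browser" script delivers a web identifier to
# it. Everything here is illustrative -- the cookie value is fake, and
# an ephemeral port stands in for the fixed ports the trackers used.
import socket
import threading

received = []

def app_background_service(server: socket.socket) -> None:
    # The "app" side: copy whatever identifier arrives on the port.
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(1024).decode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # ephemeral port for the demo
server.listen(1)
port = server.getsockname()[1]
listener = threading.Thread(target=app_background_service, args=(server,))
listener.start()

# The "browser" side: a script on any web page can reach the same port,
# crossing the web/app boundary with no user notification.
with socket.create_connection(("127.0.0.1", port)) as browser:
    browser.sendall(b"_fbp=fb.1.1700000000.123456789")   # fake web cookie

listener.join()
server.close()
print(received[0])   # the app can now link this web ID to its logged-in user
```

Because both ends sit on 127.0.0.1, no packet ever leaves the device and network-level defenses never see the transfer; the only thing separating the web identity from the app identity is the OS’s willingness to let both sides share the port.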

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by having the Meta Pixel script send HTTP requests to port 12387. A month later, Meta Pixel stopped sending this data, even though the Facebook and Instagram apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging that Meta Pixel used. Within days, Meta Pixel circumvented the mitigation by adding a new method that swapped STUN requests for TURN requests.

In a post, the researchers provided a detailed description of how the _fbp cookie travels from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites do not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmit it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.
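Step 5 in the flow above is an ordinary pixel request. The following minimal sketch shows how such a reporting URL might be assembled; the parameter names _fbp, dl, and ev come from the researchers' description, while the pixel ID and the helper function itself are hypothetical:

```javascript
// Illustrative assembly of a Meta Pixel reporting request (step 5).
function buildPixelUrl(fbpCookie, pageUrl, eventType) {
  const params = new URLSearchParams({
    id: '1234567890',   // hypothetical pixel ID for the site
    _fbp: fbpCookie,    // the first-party browser identifier cookie
    dl: pageUrl,        // the page the user is currently viewing
    ev: eventType,      // event type, e.g. PageView, AddToCart, Purchase
  });
  return 'https://www.facebook.com/tr?' + params.toString();
}

const url = buildPixelUrl(
  'fb.1.1700000000000.1234567890',
  'https://shop.example/cart',
  'AddToCart'
);
console.log(url);
```

The de-anonymization comes from combining this web-side request with the same _fbp value arriving over the localhost channel, where the logged-in app can attach the user's real account identity.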

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to its Facebook and Instagram apps.


The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.
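Unlike Meta's WebRTC trick, the Yandex Metrica technique described here is plain HTTP to a local port. Below is a hedged sketch of that pattern; the port numbers come from the article, but the request path and parameter name are invented for illustration:

```javascript
// Ports Yandex Metrica reportedly used for plain-HTTP localhost beacons.
const LOCAL_HTTP_PORTS = [29009, 30102];

// Build one beacon URL per port. A real tracker would then request each URL
// (e.g., via new Image().src or fetch()), so a native app listening on the
// port receives the identifier even though the page gets no usable response.
function beaconUrls(identifier) {
  return LOCAL_HTTP_PORTS.map(
    (port) => `http://127.0.0.1:${port}/p?id=${encodeURIComponent(identifier)}`
  );
}

console.log(beaconUrls('web-visitor-42'));
```

This is why blocklist-based defenses are fragile: nothing in the request is inherently suspicious except the destination, and the ports can be changed with a one-line update.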

An overview of Yandex identifier sharing


A timeline of web history tracking by Meta and Yandex


Some browsers for Android have blocked the abusive JavaScript in trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blacklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers due to its extensive blocklists and existing mitigation to block requests to the localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to local Android ports when the default privacy setting is in place. Changing the setting to block trackers appears to thwart browsing history leakage, the researchers said.

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina-Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement the same blocks in the coming weeks. A representative for Firefox-maker Mozilla said the organization prioritizes user privacy and is taking the report seriously.

“We are actively investigating the reported behavior, and working to fully understand its technical details and implications,” Mozilla said in an email. “Based on what we’ve seen so far, we consider these to be severe violations of our anti-tracking policies, and are assessing solutions to protect against these new tracking techniques.”

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether browser-to-native-app tracking violates any privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta Pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.


Broadcom ends business with VMware’s lowest-tier channel partners

Broadcom has cut the lowest tier in its VMware partner program. The move allows the enterprise technology firm to continue its focus on customers with larger VMware deployments, but it also risks more migrations from VMware users and partners.

Broadcom ousts low-tier VMware partners

In a blog post on Sunday, Broadcom executive Brian Moats announced that the Broadcom Advantage Partner Program for VMware Resellers, which became the VMware partner program after Broadcom eliminated the original one in January 2024, would now offer three tiers instead of four. Broadcom is killing the Registered tier, leaving the Pinnacle, Premier, and Select tiers.

The reduction is a result of Broadcom’s “strategic direction” and a “comprehensive partner review” and affects VMware’s Americas, Asia-Pacific, and Japan geographies, Moats wrote. Affected partners are receiving 60 days’ notice, Laura Falko, Broadcom’s head of global partner programs, marketing, and experience, told The Register.

Moats wrote that the “vast majority of customer impact and business momentum comes from partners operating within the top three tiers.”

Similarly, Falko told The Register that most of the removed partners were “inactive and lack the capabilities to support customers through VMware’s evolving private cloud journey.”

Ars asked Broadcom to specify how many removed partners were inactive and what specific capabilities they lacked, but a company representative only directed us to Moats’ blog post.

Canadian managed services provider (MSP) Members IT Group is one of the partners that learned this week that it will no longer be a VMware reseller. CTO Dean Colpitts noted that Members IT Group has been a VMware partner for over 19 years and is also a VMware user. Colpitts previously told Ars that the firm’s VMware business had declined since Broadcom’s acquisition and blamed Broadcom for this:

The only reason we were “inactive” is because of their own stupid greed. We and our customers would have happily continued along even with a 10 or 20 percent increase in price. 50 percent and more with zero warning last year after customers already had their FY24 budgets set was the straw that broke the camel’s back …

We have transacted a couple of deals with [VMware] since the program change, but nothing like we previously would have done before Broadcom took over.

Members IT Group will be moving its client base to Hewlett-Packard Enterprise’s VM Essentials virtualization solution.


Ransomware kingpin “Stern” apparently IDed by German law enforcement


unlikely to be extradited

The BKA names Vitaly Nikolaevich Kovalev as “Stern,” the leader of Trickbot.

Credit: Tim Robberts/Getty Images

For years, members of the Russian cybercrime cartel Trickbot unleashed a relentless hacking spree on the world. The group attacked thousands of victims, including businesses, schools, and hospitals. “Fuck clinics in the usa this week,” one member wrote in internal Trickbot messages in 2020 about a list of 428 hospitals to target. Orchestrated by an enigmatic leader using the online moniker “Stern,” the group of around 100 cybercriminals stole hundreds of millions of dollars over the course of roughly six years.

Despite a wave of law enforcement disruptions and a damaging leak of more than 60,000 internal chat messages from Trickbot and the closely associated counterpart group Conti, the identity of Stern has remained a mystery. Last week, though, Germany’s federal police agency, the Bundeskriminalamt or BKA, and local prosecutors alleged that Stern’s real-world name is Vitaly Nikolaevich Kovalev, a 36-year-old, 5-foot-11-inch Russian man who cops believe is in his home country and thus shielded from potential extradition.

A recently issued Interpol red notice says that Kovalev is wanted by Germany for allegedly being the “ringleader” of a “criminal organisation.”

“Stern’s naming is a significant event that bridges gaps in our understanding of Trickbot—one of the most notorious transnational cybercriminal groups to ever exist,” says Alexander Leslie, a threat intelligence analyst at the security firm Recorded Future. “As Trickbot’s ‘big boss’ and one of the most noteworthy figures in the Russian cybercriminal underground, Stern remained an elusive character, and his real name was taboo for years.”

Stern has notably seemed to be absent from multiple rounds of Western sanctions and indictments in recent years calling out alleged Trickbot and Conti members. Leslie and other researchers have long speculated to WIRED that global law enforcement may have strategically withheld Stern’s alleged identity as part of ongoing investigations. Kovalev is suspected of being the “founder” of Trickbot and allegedly used the Stern moniker, the BKA said in an online announcement.

“It has long been assumed, based on numerous indications, that ‘Stern’ is in fact Kovalev,” a BKA spokesperson says in written responses to questions from WIRED. They add that “the investigating authorities involved in Operation Endgame were only able to identify the actor Stern as Kovalev during their investigation this year,” referring to a multi-year international effort to identify and disrupt cybercriminal infrastructure, known as Operation Endgame.

The BKA spokesperson also notes in written statements to WIRED that information obtained through a 2023 investigation into the Qakbot malware as well as analysis of the leaked Trickbot and Conti chats from 2022 were “helpful” in making the attribution. They added, too, that the “assessment is also shared by international partners.”

The German announcement is the first time that officials from any government have publicly alleged an identity for a suspect behind the Stern moniker. As part of Operation Endgame, BKA’s Stern attribution inherently comes in the context of a multinational law enforcement collaboration. But unlike in other Trickbot- and Conti-related attributions, other countries have not publicly concurred with BKA’s Stern identification thus far. Europol, the US Department of Justice, the US Treasury, and the UK’s Foreign, Commonwealth & Development Office did not immediately respond to WIRED’s requests for comment.

Several cybersecurity researchers who have tracked Trickbot extensively tell WIRED they were unaware of the announcement. An anonymous account on the social media platform X recently claimed that Kovalev used the Stern handle and published alleged details about him. WIRED messaged multiple accounts that supposedly belong to Kovalev, according to the X account and a database of hacked and leaked records compiled by District 4 Labs, but received no response.

Meanwhile, Kovalev’s name and face may already be surprisingly familiar to those who have been following recent Trickbot revelations. This is because Kovalev was jointly sanctioned by the United States and United Kingdom in early 2023 for his alleged involvement as a senior member in Trickbot. He was also charged in the US at the time with hacking linked to bank fraud allegedly committed in 2010. The US added him to its most-wanted list. In all of this activity, though, the US and UK linked Kovalev to the online handles “ben” and “Bentley.” The 2023 sanctions did not mention a connection to the Stern handle. And, in fact, Kovalev’s 2023 indictment was mainly noteworthy because his use of “Bentley” as a handle was determined to be “historic” and distinct from that of another key Trickbot member who also went by “Bentley.”

The Trickbot ransomware group first emerged around 2016, after its members moved from the Dyre malware that was disrupted by Russian authorities. Over the course of its lifespan, the Trickbot group—which used its namesake malware, alongside other ransomware variants such as Ryuk, IcedID, and Diavol—increasingly overlapped in operations and personnel with the Conti gang. In early 2022, Conti published a statement backing Russia’s full-scale invasion of Ukraine, and a cybersecurity researcher who had infiltrated the groups leaked more than 60,000 messages from Trickbot and Conti members, revealing a huge trove of information about their day-to-day operations and structure.

Stern acted like a “CEO” of the Trickbot and Conti groups and ran them like a legitimate company, leaked chat messages analyzed by WIRED and security researchers show.

“Trickbot set the mold for the modern ‘as-a-service’ cybercriminal business model that was adopted by countless groups that followed,” Recorded Future’s Leslie says. “While there were certainly organized groups that preceded Trickbot, Stern oversaw a period of Russian cybercrime that was characterized by a high level of professionalization. This trend continues today, is reproduced worldwide, and is visible in most active groups on the dark web.”

Stern’s eminence within Russian cybercrime has been widely documented. The cryptocurrency-tracing firm Chainalysis does not publicly name cybercriminal actors and declined to comment on BKA’s identification, but the company emphasized that the Stern persona alone is one of the all-time most profitable ransomware actors it tracks.

“The investigation revealed that Stern generated significant revenues from illegal activities, in particular in connection with ransomware,” the BKA spokesperson tells WIRED.

Stern “surrounds himself with very technical people, many of which he claims to have sometimes decades of experience, and he’s willing to delegate substantial tasks to these experienced people whom he trusts,” says Keith Jarvis, a senior security researcher at cybersecurity firm Sophos’ Counter Threat Unit. “I think he’s always probably lived in that organizational role.”

Increasing evidence in recent years has indicated that Stern has at least some loose connections to Russia’s intelligence apparatus, including its main security agency, the Federal Security Service (FSB). The Stern handle mentioned setting up an office for “government topics” in July 2020, while researchers have seen other members of the Trickbot group say that Stern is likely the “link between us and the ranks/head of department type at FSB.”

Stern’s consistent presence was a significant contributor to Trickbot and Conti’s effectiveness—as was the entity’s ability to maintain strong operational security and remain hidden.

As Sophos’ Jarvis put it, “I have no thoughts on the attribution, as I’ve never heard a compelling story about Stern’s identity from anyone prior to this announcement.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.


Thousands of Asus routers are being hit with stealthy, persistent backdoors

GreyNoise said it detected the campaign in mid-March and held off reporting on it until after the company notified unnamed government agencies. That detail further suggests that the threat actor may have some connection to a nation-state.

The company researchers went on to say that the activity they observed was part of a larger campaign reported last week by fellow security company Sekoia. Researchers at Sekoia said that Internet scanning by network intelligence firm Censys suggested as many as 9,500 Asus routers may have been compromised by ViciousTrap, the name used to track the unknown threat actor.

The attackers are backdooring the devices by exploiting multiple vulnerabilities. One is CVE-2023-39780, a command-injection flaw that allows for the execution of system commands, which Asus patched in a recent firmware update, GreyNoise said. The remaining vulnerabilities have also been patched but, for unknown reasons, have not received CVE tracking designations.

The only way for router users to determine whether their devices are infected is by checking the SSH settings in the configuration panel. Infected routers will show that the device can be logged in to by SSH over port 53282 using a digital certificate with a truncated key of: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAo41nBoVFfj4HlVMGV+YPsxMDrMlbdDZ…

To remove the backdoor, infected users should remove the key and the port setting.

People can also determine if they’ve been targeted if system logs indicate that they have been accessed through the IP addresses 101.99.91[.]151, 101.99.94[.]173, 79.141.163[.]179, or 111.90.146[.]237. Users of any router brand should always ensure their devices receive security updates in a timely manner.
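The log check described above can be automated. This is a rough sketch, not an official Asus tool; the indicator IPs are taken from the article, while the sample log line is invented:

```javascript
// IP addresses associated with the ViciousTrap campaign, per the report.
const SUSPECT_IPS = [
  '101.99.91.151',
  '101.99.94.173',
  '79.141.163.179',
  '111.90.146.237',
];

// Return every log line mentioning one of the suspect addresses.
function flagSuspectLines(logText) {
  return logText
    .split('\n')
    .filter((line) => SUSPECT_IPS.some((ip) => line.includes(ip)));
}

const sampleLog = [
  'May 20 10:01:02 sshd: Accepted publickey from 101.99.94.173 port 53282',
  'May 20 10:05:00 dnsmasq: query from 192.168.1.10',
].join('\n');

console.log(flagSuspectLines(sampleLog)); // flags only the sshd line
```

A match is only an indicator, not proof of compromise; checking the SSH settings for the rogue port and key, as described above, remains the definitive test.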
