

Flock haters cross political divides to remove error-prone cameras

“People should care because this could be you,” White said. “This is something that police agencies are now using to document and watch what you’re doing, where you’re going, without your consent.”

Haters cross political divides to fight Flock

Currently, Flock’s reach is broad, “providing services to 5,000 police departments, 1,000 businesses, and numerous homeowners associations across 49 states,” lawmakers noted. Additionally, in October, Flock partnered with Amazon, allowing police to request Ring camera footage and widening Flock’s lens further.

However, Flock’s reach notably doesn’t extend into certain cities and towns in Arizona, Colorado, New York, Oregon, Tennessee, Texas, and Virginia, following successful local bids to end Flock contracts. These local fights have only just started as groups learn from each other, Sarah Hamid, EFF’s director of strategic campaigns, told Ars.

“Several cities have active campaigns underway right now across the country—urban and rural, in blue states and red states,” Hamid said.

A Flock spokesperson told Ars that the growing effort to remove cameras “remains an extremely small percentage of communities that consider deploying Flock technology (low single-digit percentages).” To keep Flock’s cameras on city streets, Flock attends “hundreds of local community meetings and City Council sessions each month, and the vast majority of those contracts are accepted,” Flock’s spokesperson said.

Hamid challenged Flock’s “characterization of camera removals as isolated incidents,” though, noting “that doesn’t reflect what we’re seeing.”

“The removals span multiple states and represent different organizing strategies—some community-led, some council-initiated, some driven by budget constraints,” Hamid said.

Most recently, city officials voted to remove Flock cameras this fall in Sedona, Arizona.

A 72-year-old retiree, Sandy Boyce, helped fuel the local movement there after learning that Sedona had “quietly” renewed its Flock contract, NBC News reported. She felt enraged as she imagined her tax dollars continuing to support a camera system tracking her movements without her consent, she told NBC News.



Republican plan would make deanonymization of census data trivial


“Differential privacy” algorithm prevents statistical data from being tied to individuals.

President Donald Trump and the Republican Party have spent the better part of the president’s second term radically reshaping the federal government. But in recent weeks, the GOP has set its sights on taking another run at an old target: the US census.

Since the first Trump administration, the right has sought to add a question to the census that captures a respondent’s immigration status and to exclude noncitizens from the tallies that determine how seats in Congress are distributed. In 2019, the Supreme Court struck down an attempt by the first Trump administration to add a citizenship question to the census.

But now, a little-known algorithmic process called “differential privacy,” created to keep census data from being used to identify individual respondents, has become the right’s latest focus. WIRED spoke to six experts about the GOP’s ongoing effort to falsely allege that a system created to protect people’s privacy has made the data from the 2020 census inaccurate.

If successful, the campaign to get rid of differential privacy could not only radically change the kind of data made available but also put the data of every person living in the US at risk. The campaign could also discourage immigrants from participating in the census entirely.

The Census Bureau regularly publishes anonymized data so that policymakers and researchers can use it. That data is also sensitive: Conducted every 10 years, the census counts every person living in the United States, citizen and noncitizen alike. The data includes detailed information such as each person’s race, sex, and age, as well as the languages they speak, their home address, economic status, and the number of people living in a household. This data is used for allocating the federal funds that support public services like schools and hospitals, as well as for determining how a state’s population is divided up and represented in Congress. The more people in a state, the more congressional representation—and more votes in the Electoral College.

As computers got increasingly sophisticated and data more abundant and accessible, census employees and researchers realized the data published by the Census Bureau could be reverse engineered to identify individual people. Under Title 13 of the US Code, it is illegal for census workers to publish any data that would identify individual people, their homes, or businesses. A government employee revealing this kind of information could be punished with thousands of dollars in fines or even a prison sentence.

For individuals, the stakes are concrete: Census data published without differential privacy could be used to identify transgender youth, for instance, according to research from the University of Washington.

For immigrants, the prospect of being reidentified through census data could “create panic among noncitizens as well as their families and friends,” says Danah Boyd, a census expert and the founder of Data & Society, a nonprofit research group focused on the downstream effects of technology. LGBTQ+ people might not “feel safe sharing that they are in a same-sex marriage. There are plenty of people in certain geographies who do not want data like this to be public,” she says. This could also mean that information that might be available only through something like a search warrant would suddenly be obtainable. “Unmasking published records is not illegal. Then you can match it to large law enforcement databases without actually breaching the law.”

A need for noise

Differential privacy keeps that data private. It’s a mathematical framework whereby a statistical output can’t be used to determine any individual’s data in a dataset, and the bureau’s algorithm for differential privacy is called TopDown. It injects “noise” into the data starting at the highest level (national), moving progressively downward. There are certain constraints placed around the kind of noise that can be introduced—for instance, the total number of people in a state or census block has to remain the same. But other demographic characteristics, like race or gender, are randomly reassigned to individual records within a set tranche of data. This way, the overall number of people with a certain characteristic remains constant, while the characteristics associated with any one record don’t describe an individual person. In other words, you’ll know how many women or Hispanic people are in a census block, just not exactly where.
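To make the mechanics concrete, here is a minimal sketch of noise injection with an invariant total, using the textbook Laplace mechanism on a hypothetical census block. This is not the bureau’s actual TopDown algorithm, which uses a discrete Gaussian mechanism and a much more elaborate hierarchical optimization; the counts, category labels, and epsilon value below are invented for illustration.

```python
import numpy as np

def noisy_counts(counts, epsilon, rng):
    """Textbook Laplace mechanism for counting queries (sensitivity 1).
    Not the Census Bureau's TopDown algorithm, which uses a discrete
    Gaussian mechanism plus hierarchical post-processing."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon, size=len(counts))
    return counts + noise

def enforce_total(noisy, true_total):
    """Post-process noisy counts into non-negative integers that still
    sum to the published (invariant) total population."""
    clipped = np.clip(noisy, 0, None)
    if clipped.sum() == 0:
        clipped = np.ones_like(clipped)
    scaled = clipped * (true_total / clipped.sum())
    rounded = np.floor(scaled).astype(int)
    remainder = max(int(true_total - rounded.sum()), 0)
    # Hand the leftover people to the cells with the largest fractional parts.
    order = np.argsort(scaled - rounded)[::-1]
    rounded[order[:remainder]] += 1
    return rounded

rng = np.random.default_rng(0)
block = np.array([120, 45, 30, 5])        # hypothetical counts by category
protected = enforce_total(noisy_counts(block, epsilon=0.5, rng=rng),
                          true_total=int(block.sum()))
print(protected, protected.sum())         # cells are noisy; total stays 200
```

Run repeatedly with different seeds, the individual cells wobble while the block total never changes, which is the invariant-plus-noise behavior described above.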

“Differential privacy solves a particular problem, which is if you release a lot of information, a lot of statistics, based on the same set of confidential data, eventually somebody can piece together what that confidential data had to be,” says Simson Garfinkel, former senior computer scientist for confidentiality and data access at the Census Bureau.

Differential privacy was first used on data from the 2020 census. Even though one couldn’t identify a specific individual from the data, “you can still get an accurate count on things that are important for funding and voting rights,” says Moon Duchin, a mathematics professor at Tufts University who worked with census data to inform electoral maps in Alabama. The first use of differential privacy for the census happened under the Trump presidency, though the reports themselves were published after he left office. Civil servants, not political appointees, are the ones responsible for determining how census data is collected and analyzed. Emails obtained by the Brennan Center later showed that officials overseeing the Census Bureau under then-Commerce Secretary Wilbur Ross took an “unusually high degree” of interest in the “technical matters” of the process, which Ron Jarmin, the bureau’s deputy director and COO, called “unprecedented.”

It’s this data from the 2020 census that Republicans have taken issue with. On August 21, the Center for Renewing America, a right-wing think tank founded by Russ Vought, currently the director of the US Office of Management and Budget, published a blog post alleging that differential privacy “may have played a significant role in tilting the political scales favorably toward Democrats for apportionment and redistricting purposes.” The post goes on to acknowledge that, even if a citizenship question were added to the census—which Trump attempted during his first administration—the differential privacy “algorithm will be able to mask characteristic data, including citizenship status.”

Duchin and other experts who spoke to WIRED say that differential privacy does not change apportionment, or how seats in Congress are distributed—several red states, including Texas and Florida, gained representation after the 2020 census, while blue states like California lost representatives.

COUNTing the cost

On August 28, Republican Representative August Pfluger introduced the COUNT Act. If passed, it would add a citizenship question to the census and force the Census Bureau to “cease utilization of the differential privacy process.” Pfluger’s office did not immediately respond to a request for comment.

“Differential privacy is a punching bag that’s meant here as an excuse to redo the census,” says Duchin. “That is what’s going on, if you ask me.”

On October 6, Senator Jim Banks, a Republican from Indiana, sent a letter to Secretary of Commerce Howard Lutnick, urging him to “investigate and correct errors from the 2020 Census that handed disproportionate political power to Democrats and illegal aliens.” The letter goes on to allege that the use of differential privacy “alters the total population of individual voting districts.” Similar to the COUNT Act and the Renewing America post, the letter also states that the 2030 Census “must request citizenship status.”

Peter Bernegger, a Wisconsin-based “election integrity” activist who is facing a criminal charge of simulating the legal process for allegedly falsifying a subpoena, amplified Banks’ letter on X, alleging that the use of differential privacy was part of “election rigging by the Obama/Biden administrations.” Bernegger’s post was viewed more than 236,000 times.

Banks’ office and Bernegger did not immediately respond to a request for comment.

“No differential privacy was ever applied to the data used to apportion the House of Representatives, so the claim that seats in the House were affected is simply false,” says John Abowd, former associate director for research and methodology and chief scientist at the United States Census Bureau. Abowd oversaw the implementation of differential privacy while at the Census Bureau. He says that the data from the 2020 census has been successfully used by red and blue states, as well as redistricting commissions, and that the only difference from previous census data was that no one would be able to “reconstruct accurate, identifiable individual data to enhance the other databases that they use (voter rolls, drivers licenses, etc.).”

With a possible addition of the citizenship question, proposed by both Banks and the COUNT Act, Boyd says that census data would be even more sensitive, because that kind of information is not readily available in commercial data. “Plenty of data brokers would love to get their hands on that data.”

Shortly after Senator Banks published his letter, Abowd found himself in the spotlight. On October 9, the X account @amuse posted a blog-length post alleging that Abowd was the bureaucrat who “stole the House.” The post also alleged, without evidence, that the census results meant that “Republican states are projected to lose almost $90 billion in federal funds across the decade as a result of the miscounts. Democratic states are projected to gain $57 billion.” The account has more than 666,000 followers, including billionaire Elon Musk, venture capitalist Marc Andreessen, and US pardon attorney Ed Martin. (Abowd told WIRED he was “keeping an eye” on the post, which was viewed more than 360,000 times.) That same week, America First Legal, the conservative nonprofit founded by now deputy chief of staff for policy Stephen Miller, posted about a complaint the group had recently filed in Florida, challenging the 2020 census results, alleging they were based upon flawed statistical methods, one of which was differential privacy.

The results of all this, experts tell WIRED, are that fewer people will feel safe participating in the census and that the government will likely need to spend even more resources to try to get an accurate count. Undercounting could lead to skewed numbers that could impact everything from congressional representation to the amount of funding a municipality might receive from the government.

Neither the proposed COUNT Act nor Senator Banks’ letter outlines an alternative to differential privacy. This means that the Census Bureau would likely be left with two options: Publish data that could put people at risk (which could lead to legal consequences for its staff), or publish less data. “At present, I do not know of any alternative to differential privacy that can safeguard the personal data that the US Census Bureau uses in their work on the decennial census,” says Abraham Flaxman, an associate professor of health metrics sciences at the University of Washington, whose team conducted the study on transgender youth.

Getting rid of differential privacy is not a “light thing,” says a Census Bureau employee familiar with the bureau’s privacy methods, who requested anonymity because they were not authorized to speak to the press. “It may be for the layperson. But the entire apparatus of disclosure avoidance at the bureau has been geared for the last almost 10 years on differential privacy.” According to the employee, there is no immediately clear method to replace differential privacy.

Boyd says that the safest bet would simply be “what is known as suppression, otherwise known as ‘do not publish.’” (This, according to Garfinkel, was the backup plan if differential privacy had not been implemented for the 2020 census.)

Another option would be for the Census Bureau to publish only population counts, meaning that demographic information like the race or age of respondents would be left out. “This is a problem, because we use census data to combat discrimination,” says Boyd. “The consequences of losing this data is not being able to pursue equity.”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



Ring cameras are about to get increasingly chummy with law enforcement


Amazon’s Ring partners with company whose tech has reportedly been used by ICE.

Ring’s Outdoor Cam Pro. Credit: Amazon

Law enforcement agencies will soon have easier access to footage captured by Amazon’s Ring smart cameras. In a partnership announced this week, Amazon will allow approximately 5,000 local law enforcement agencies to request access to Ring camera footage via surveillance platforms from Flock Safety. Ring’s cooperation with law enforcement and the reported use of Flock technologies by federal agencies, including US Immigration and Customs Enforcement (ICE), have resurfaced privacy concerns that have followed the devices for years.

According to Flock’s announcement, its Ring partnership allows local law enforcement members to use Flock software “to send a direct post in the Ring Neighbors app with details about the investigation and request voluntary assistance.” Requests must include “specific location and timeframe of the incident, a unique investigation code, and details about what is being investigated,” and users can look at the requests anonymously, Flock said.

“Any footage a Ring customer chooses to submit will be securely packaged by Flock and shared directly with the requesting local public safety agency through the FlockOS or Flock Nova platform,” the announcement reads.

Flock said its local law enforcement users will gain access to Ring Community Requests in “the coming months.”
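For illustration only, the fields Flock says a request must carry can be pictured as a small record like the one below. The class and field names are hypothetical, chosen to mirror the announcement’s wording; they do not reflect Flock’s actual API or data format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CommunityRequest:
    """Hypothetical shape of a Ring Community Request, mirroring the
    fields described in Flock's announcement. Names are illustrative,
    not Flock's real schema."""
    agency_name: str          # local public safety agency, shown to users
    investigation_code: str   # the "unique investigation code"
    location: str             # "specific location ... of the incident"
    start_time: datetime      # requests are time-bound ...
    end_time: datetime        # ... and geographically bound
    details: str              # "details about what is being investigated"

example = CommunityRequest(
    agency_name="Example Police Department",
    investigation_code="INV-2025-00123",
    location="400 block of Main St.",
    start_time=datetime(2025, 11, 1, 21, 0),
    end_time=datetime(2025, 11, 1, 23, 0),
    details="Vehicle break-ins reported in the area",
)
```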

A flock of privacy concerns

Outside its software platforms, Flock is known for license plate recognition cameras. Flock customers can also search footage from Flock cameras using descriptors to find people, such as “man in blue shirt and cowboy hat.” Besides law enforcement agencies, Flock says 6,000 communities and 1,000 businesses use its products.

For years, privacy advocates have warned against companies like Flock.

This week, US Sen. Ron Wyden (D-Ore.) sent a letter [PDF] to Flock CEO Garrett Langley saying that ICE’s Homeland Security Investigations (HSI), the Secret Service, and the US Navy’s Criminal Investigative Service have had access to footage from Flock’s license plate cameras.

“I now believe that abuses of your product are not only likely but inevitable and that Flock is unable and uninterested in preventing them,” Wyden wrote.

In August, Jay Stanley, senior policy analyst for the ACLU Speech, Privacy, and Technology Project, wrote that “Flock is building a dangerous, nationwide mass-surveillance infrastructure.” Stanley pointed to ICE using Flock’s network of cameras, as well as Flock’s efforts to build a people lookup tool with data brokers.

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told Ars via email that Flock is a “mass surveillance tool” that “has increasingly been used to spy on both immigrants and people exercising their First Amendment-protected rights.”

Flock has earned this reputation among privacy advocates through its own cameras, not Ring’s.

An Amazon spokesperson told Ars Technica that only local public safety agencies will be able to make Community Requests via Flock software, and that requests will also show the name of the agency making the request.

A Flock spokesperson told Ars:

Flock does not currently have any contracts with any division of [the US Department of Homeland Security], including ICE. The Ring Community Requests process through Flock is only available for local public safety agencies for specific, active investigations. All requests are time and geographically-bound. Ring users can choose to share relevant footage or ignore the request.

Flock’s rep added that all activity within FlockOS and Flock Nova is “permanently recorded in a comprehensive CJIS-compliant audit trail for unalterable custody tracking,” referring to a set of standards created by the FBI’s Criminal Justice Information Services division.

But there’s still concern that federal agencies will end up accessing Ring footage through Flock. Guariglia told Ars:

Even without formal partnerships with federal authorities, data from these surveillance companies flow to agencies like ICE through local law enforcement. Local and state police have run more than 4,000 Flock searches on behalf of federal authorities or with a potential immigration focus, reporting has found. Additionally, just this month, it became clear that Texas police searched 83,000 Flock cameras in an attempt to prosecute a woman for her abortion and then tried to cover it up.

Ring cozies up to the law

This week’s announcement shows Amazon, which acquired Ring in 2018, increasingly positioning its consumer cameras as a law enforcement tool. After years of cops using Ring footage, Amazon last year said that it would stop letting police request Ring footage—unless it was an “emergency”—only to reverse course about 18 months later by allowing police to request Ring footage through a Flock rival, Axon.

While announcing Ring’s deals with Flock and Axon, Ring founder and CEO Jamie Siminoff claimed that the partnerships would help Ring cameras keep neighborhoods safe. But there’s doubt as to whether people buy Ring cameras to protect their neighborhood.

“Ring’s new partnership with Flock shows that the company is more interested in contributing to mounting authoritarianism than servicing the specific needs of their customers,” Guariglia told Ars.

Interestingly, Ring initiated conversations about a deal with Flock, Langley told CNBC.

Flock says that its cameras don’t use facial recognition, which has been criticized for racial biases. But local law enforcement agencies using Flock will soon have access to footage from Ring cameras with facial recognition. In a conversation with The Washington Post this month, Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center, described the new feature for Ring cameras as “invasive for anyone who walks within range of” a Ring doorbell, since they likely haven’t consented to facial recognition being used on them.

Amazon, for its part, has mostly pushed the burden of ensuring responsible facial recognition use to its customers. Schroeder shared concern with the Post that Ring’s facial recognition data could end up being shared with law enforcement.

Some people who are perturbed about Ring deepening its ties with law enforcement have complained online.

“Inviting big brother into the system. Screw that,” a user on the Ring subreddit said this week.

Another Reddit user said: “And… I’m gone. Nope, NO WAY IN HELL. Goodbye, Ring. I’ll be switching to a UniFi[-brand] system with 100 percent local storage. You don’t get my money any more. This is some 1984 BS …”

Privacy concerns are also exacerbated by Ring’s past, as the company has previously failed to meet users’ privacy expectations. In 2023, Ring agreed to pay $5.8 million to settle claims that employees illegally spied on Ring customers.

Amazon and Flock say their collaboration will only involve voluntary customers and local enforcement agencies. But there’s still reason to be concerned about the implications of people sending doorbell and personal camera footage to law enforcement via platforms that are reportedly widely used by federal agencies for deportation purposes. Combined with the privacy issues that Ring has already faced for years, it’s not hard to see why some feel that Amazon scaling up Ring’s association with any type of law enforcement is unacceptable.

And it appears that Amazon and Flock would both like Ring customers to opt in when possible.

“It will be turned on for free for every customer, and I think all of them will use it,” Langley told CNBC.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Hackers can steal 2FA codes and private messages from Android phones


STEALING CODES ONE PIXEL AT A TIME

Malicious app needed to make “Pixnapping” attack work requires no permissions.

Samsung’s S25 phones. Credit: Samsung

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25, and with additional work it could likely be adapted to other models. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot

Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps

The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This call causes the information to be sent to the Android rendering pipeline, the system that takes each app’s pixels so they can be rendered on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white or, more generally, if the color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
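The inference step can be illustrated with a toy simulation: pretend each timing measurement runs slightly long when the hidden pixel is non-white, average a handful of samples per coordinate, and threshold the result to rebuild a bitmap. The sketch below invokes no Android APIs and does not reproduce the actual exploit; the secret image, timing values, and noise level are invented purely to show the classification logic the researchers describe.

```python
import numpy as np

# Toy model of Pixnapping's inference step: the attacker never reads pixel
# values directly, only per-frame rendering times that are slightly longer
# when the masked pixel is non-white (the GPU.zip-style side channel).
rng = np.random.default_rng(7)

WIDTH, HEIGHT = 24, 8
secret = np.zeros((HEIGHT, WIDTH), dtype=bool)
secret[2:6, 3:6] = True                    # pretend this blob is part of a digit

BASE_MS, EXTRA_MS, JITTER_MS = 16.6, 0.8, 0.2   # made-up timing figures

def measure_render_time(y, x):
    """Simulated timing measurement for one target coordinate."""
    extra = EXTRA_MS if secret[y, x] else 0.0
    return BASE_MS + extra + rng.normal(0.0, JITTER_MS)

SAMPLES = 16                               # samples averaged per pixel
threshold = BASE_MS + EXTRA_MS / 2
recovered = np.zeros_like(secret)
for y in range(HEIGHT):
    for x in range(WIDTH):
        times = [measure_render_time(y, x) for _ in range(SAMPLES)]
        recovered[y, x] = np.mean(times) > threshold

print("fraction of pixels recovered correctly:", np.mean(recovered == secret))
```

With enough samples per coordinate the averaged timings separate cleanly, which is why the real attack has to trade sample count against the 30-second lifetime of a 2FA code, as discussed below.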

As Ars reader hotball put it in the comments below:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.
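A rough budget calculation shows why those parameter cuts matter. The 16 samples per pixel and the 70 ms idle gap come from the paper; the 60 Hz frame time and the assumption that each timing sample costs roughly one rendered frame are simplifications made here for illustration.

```python
# Back-of-the-envelope: how many pixel coordinates fit inside one 30-second
# 2FA window, assuming each timing sample costs about one rendered frame.
FRAME_MS = 1000 / 60          # ~16.7 ms per frame at an assumed 60 Hz
SAMPLES_PER_PIXEL = 16        # from the paper
IDLE_MS = 70                  # idle gap between pixel leaks, from the paper
BUDGET_MS = 30_000            # one 2FA code lifetime

per_pixel_ms = SAMPLES_PER_PIXEL * FRAME_MS + IDLE_MS
print(f"cost per pixel: {per_pixel_ms:.0f} ms")
print(f"pixels per 30 s window: {BUDGET_MS / per_pixel_ms:.0f}")

# The pre-optimization parameters (64 samples, 1.5 s idle) allow far fewer:
old_per_pixel_ms = 64 * FRAME_MS + 1500
print(f"pixels per window with old parameters: {BUDGET_MS / old_per_pixel_ms:.0f}")
```

Under these assumptions, the tuned parameters allow on the order of 90 coordinates per 30-second window, while the older parameters would allow only about a dozen, which is consistent with the researchers’ need to trim both the sample count and the idle time.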

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks are probably of less value.

Post updated to add details about how the attack works.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.



Google confirms Android dev verification will have free and paid tiers, no public list of devs

A lack of trust

Google has an answer for the most problematic elements of its verification plan, but anywhere there’s a gap, it’s easy to see a conspiracy. Why? Well, let’s look at the situation in which Google finds itself.

The courts have ruled that Google acted illegally to maintain a monopoly in the Play Store—it worked against the interests of developers and users for years to make Google Play the only viable source of Android apps, and for what? The Play Store is an almost unusable mess of sponsored search results and suggested apps, most of which are little more than in-app purchase factories that deliver Google billions of dollars every year.

Google has every reason to protect the status quo (it may take the case all the way to the Supreme Court), and now it has suddenly decided the security risk of sideloaded apps must be addressed. The way it’s being addressed puts Google in the driver’s seat at a time when alternative app stores may finally have a chance to thrive. It’s all very convenient for Google.

Developers across the Internet are expressing wariness about giving Google their personal information. Google, however, has decided anonymity is too risky. We now know a little more about how Google will manage the information it collects on developers, though. While Play Store developer information is listed publicly, the video confirms there will be no public list of sideload developers. However, Google will have the information, and that means it could be demanded by law enforcement or governments.

The current US administration has had harsh words for apps like ICEBlock, which it successfully pulled from the Apple App Store. Google’s new centralized control of app distribution would allow similar censorship on Android, and the real identities of those who developed such an app would also be sitting in a Google database, ready to be subpoenaed. A few years ago, developers might have trusted Google with this data, but now? The goodwill is gone.



Meta won’t allow users to opt out of targeted ads based on AI chats

Facebook, Instagram, and WhatsApp users may want to be extra careful while using Meta AI, as Meta has announced that it will soon be using AI interactions to personalize content and ad recommendations without giving users a way to opt out.

Meta plans to notify users on October 7 that their AI interactions will influence recommendations beginning on December 16. However, it may not be immediately obvious to all users that their AI interactions will be used in this way.

The company’s blog noted that the initial notification users will see only says, “Learn how Meta will use your info in new ways to personalize your experience.” Users will have to click through to understand that the changes specifically apply to Meta AI, with a second screen explaining, “We’ll start using your interactions with AIs to personalize your experience.”

Ars asked Meta why the initial notification doesn’t directly mention AI, and Meta spokesperson Emil Vazquez said he “would disagree with the idea that we are obscuring this update in any way.”

“We’re sending notifications and emails to people about this change,” Vazquez said. “As soon as someone clicks on the notification, it’s immediately apparent that this is an AI update.”

In its blog post, Meta noted that “more than 1 billion people use Meta AI every month,” stating its goals are to improve the way Meta AI works in order to fuel better experiences on all Meta apps. Sensitive “conversations with Meta AI about topics such as their religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership” will not be used to target ads, Meta confirmed.

“You’re in control,” Meta’s blog said, reiterating that users can “choose” how they “interact with AIs,” unlink accounts on different apps to limit AI tracking, or adjust ad and content settings at any time. But once the tracking starts on December 16, users will not have the option to opt out of targeted ads based on AI chats, Vazquez confirmed, emphasizing to Ars that “there isn’t an opt out for this feature.”



The DHS has been quietly harvesting DNA from Americans for years


The DNA of nearly 2,000 US citizens has been entered into an FBI crime database.

For years, Customs and Border Protection agents have been quietly harvesting DNA from American citizens, including minors, and funneling the samples into an FBI crime database, government data shows. This expansion of genetic surveillance was never authorized by Congress for citizens, children, or civil detainees.

According to newly released government data analyzed by Georgetown Law’s Center on Privacy & Technology, the Department of Homeland Security, which oversees CBP, collected the DNA of nearly 2,000 US citizens between 2020 and 2024 and had it sent to CODIS, the FBI’s nationwide DNA database used in criminal investigations. An estimated 95 were minors, some as young as 14. The entries also include travelers never charged with a crime and dozens of cases where agents left the “charges” field blank. In other files, officers invoked civil penalties as justification for swabs that federal law reserves for criminal arrests.

The findings appear to point to a program running outside the bounds of statute or oversight, experts say, with CBP officers exercising broad discretion to capture genetic material from Americans and have it funneled into a law-enforcement database designed in part for convicted offenders. Critics warn that anyone added to the database could endure heightened scrutiny by US law enforcement for life.

“Those spreadsheets tell a chilling story,” Stevie Glaberson, director of research and advocacy at Georgetown’s Center on Privacy & Technology, tells WIRED. “They show DNA taken from people as young as 4 and as old as 93—and, as our new analysis found, they also show CBP flagrantly violating the law by taking DNA from citizens without justification.”

DHS did not respond to a request for comment.

For more than two decades, the FBI’s Combined DNA Index System, or CODIS, has been billed as a tool for violent crime investigations. But under both recent policy changes and the Trump administration’s immigration agenda, the system has become a catchall repository for genetic material collected far outside the criminal justice system.

One of the sharpest revelations came from DHS data released earlier this year showing that CBP and Immigration and Customs Enforcement have been systematically funneling cheek swabs from immigrants—and, in many cases, US citizens—into CODIS. What was once a program aimed at convicted offenders now sweeps in children at the border, families questioned at airports, and people held on civil—not criminal—grounds. WIRED previously reported that DNA from minors as young as 4 had ended up in the FBI’s database, alongside elderly people in their 90s, with little indication of how or why the samples were taken.

The scale is staggering. According to Georgetown researchers, DHS has contributed roughly 2.6 million profiles to CODIS since 2020—far above earlier projections and a surge that has reshaped the database. By December 2024, CODIS’s “detainee” index contained over 2.3 million profiles; by April 2025, the figure had already climbed to more than 2.6 million. Nearly all of these samples—97 percent—were collected under civil, not criminal, authority. At the current pace, according to Georgetown Law’s estimates, which are based on DHS projections, Homeland Security files alone could account for one-third of CODIS by 2034.

The expansion has been driven by specific legal and bureaucratic levers. Foremost was an April 2020 Justice Department rule that revoked a long-standing waiver allowing DHS to skip DNA collection from immigration detainees, effectively green-lighting mass sampling. Later that summer, the FBI signed off on rules that let police booking stations run arrestee cheek swabs through Rapid DNA machines—automated devices that can spit out CODIS-ready profiles in under two hours.

The strain of the changes became apparent in subsequent years. Former FBI director Christopher Wray warned during Senate testimony in 2023 that the flood of DNA samples from DHS threatened to overwhelm the bureau’s systems. The 2020 rule change, he said, had pushed the FBI from a historic average of a few thousand monthly submissions to 92,000 per month—over 10 times its traditional intake. The surge, he cautioned, had created a backlog of roughly 650,000 unprocessed kits, raising the risk that people detained by DHS could be released before DNA checks produced investigative leads.

Under Trump’s renewed executive order on border enforcement, signed in January 2025, DHS agencies were instructed to deploy “any available technologies” to verify family ties and identity, a directive that explicitly covers genetic testing. This month, federal officials announced they were soliciting new bids to install Rapid DNA at local booking facilities around the country, with combined awards of up to $3 million available.

“The Department of Homeland Security has been piloting a secret DNA collection program of American citizens since 2020. Now, the training wheels have come off,” said Anthony Enriquez, vice president of advocacy at Robert F. Kennedy Human Rights. “In 2025, Congress handed DHS a $178 billion check, making it the nation’s costliest law enforcement agency, even as the president gutted its civil rights watchdogs and the Supreme Court repeatedly signed off on unconstitutional tactics.”

Oversight bodies and lawmakers have raised alarms about the program. As early as 2021, the DHS inspector general found the department lacked central oversight of DNA collection and that years of noncompliance could undermine public safety—echoing an earlier rebuke from the Office of Special Counsel, which called CBP’s failures an “unacceptable dereliction.”

US Senator Ron Wyden (D-Ore.) more recently pressed DHS and DOJ for explanations about why children’s DNA is being captured and whether CODIS has any mechanism to reject improperly obtained samples. He said the program was never intended to collect and permanently retain the DNA of all noncitizens and warned that the children are likely to be “treated by law enforcement as suspects for every investigation of every future crime, indefinitely.”

Rights advocates allege that CBP’s DNA collection program has morphed into a sweeping genetic surveillance regime, with samples from migrants and even US citizens fed into criminal databases absent transparency, legal safeguards, or limits on retention. Georgetown’s privacy center points out that once DHS creates and uploads a CODIS profile, the government retains the physical DNA sample indefinitely, with no procedure to revisit or remove profiles when the legality of the detention is in doubt.

In parallel, Georgetown and allied groups have sued DHS over its refusal to fully release records about the program, highlighting how little the public knows about how DNA is being used, stored, or shared once it enters CODIS.

Taken together, these revelations may suggest a quiet repurposing of CODIS. A system long described as a forensic breakthrough is being remade into a surveillance archive—sweeping up immigrants, travelers, and US citizens alike, with few checks on the agents deciding whose DNA ends up in the federal government’s most intimate database.

“There’s much we still don’t know about DHS’s DNA collection activities,” Georgetown’s Glaberson says. “We’ve had to sue the agencies just to get them to do their statutory duty, and even then they’ve flouted court orders. The public has a right to know what its government is up to, and we’ll keep fighting to bring this program into the light.”

This story originally appeared on wired.com.




Google releases VaultGemma, its first privacy-preserving LLM

The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to “memorize” any of that content.

LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say. Although outputs vary even for identical inputs, models do sometimes regurgitate something from their training data—if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had quantified the degree to which that alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data.

By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
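As a sketch of the general recipe (standard DP-SGD, not Google’s actual VaultGemma training code), differentially private training clips each example’s gradient, adds calibrated Gaussian noise, and averages over the batch; one common way to express the noise-batch ratio is the noise multiplier divided by the batch size. All numbers below are arbitrary.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private gradient step (generic DP-SGD recipe):
    clip each example's gradient to clip_norm, sum, add Gaussian noise
    scaled to the clipping norm, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
batch_size, dim = 256, 8
grads = [rng.normal(size=dim) for _ in range(batch_size)]   # toy gradients
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=2.0, rng=rng)

# One common definition of the noise-batch ratio: noise scale per example.
noise_batch_ratio = 2.0 / batch_size
print(update.shape, round(noise_batch_ratio, 4))
```

Holding output quality fixed, a larger noise multiplier has to be offset by a larger batch (more compute) or more training tokens, which is the trade-off the scaling-law work formalizes.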



Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult”

“This represented the first concrete step toward addressing WhatsApp’s fundamental data governance failures,” the complaint stated. “Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern.” In the following years, Baig continued to press increasingly senior leaders to take action.

The letter outlined not only the improper access engineers had to WhatsApp user data but also a variety of other shortcomings, including a “failure to inventory user data” as required under privacy laws in California and the European Union and under the FTC settlement; a failure to locate where user data was stored; an absence of systems for monitoring user data access; and a lack of the breach-detection capabilities that were standard at other companies.

Last year, Baig allegedly sent a “detailed letter” to Meta CEO Mark Zuckerberg and Jennifer Newstead, Meta’s general counsel, notifying them of what he said were violations of the FTC settlement and Securities and Exchange Commission rules mandating the reporting of security vulnerabilities. The letter further alleged that Meta leaders were retaliating against him and that the central Meta security team had “falsified security reports to cover up decisions not to remediate data exfiltration risks.”

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.

Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the complaint stated, the former WhatsApp security head estimated that pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams.



A power utility is reporting suspected pot growers to cops. EFF says that’s illegal.

In May 2020, Sacramento, California, resident Alfonso Nguyen was alarmed to find two Sacramento County Sheriff’s deputies at his door, accusing him of illegally growing cannabis and demanding entry into his home. When Nguyen refused the search and denied the allegation, one deputy allegedly called him a liar and threatened to arrest him.

That same year, deputies from the same department, with their guns drawn and bullhorns and sirens sounding, fanned out around the home of Brian Decker, another Sacramento resident. The officers forced Decker to walk backward out of his home in only his underwear around 7 am while his neighbors watched. The deputies said that he, too, was under suspicion of illegally growing cannabis.

Invasion of the privacy snatchers

According to a motion the Electronic Frontier Foundation filed in Sacramento Superior Court last week, Nguyen and Decker are only two of more than 33,000 Sacramento-area people who have been flagged to the sheriff’s department by the Sacramento Municipal Utility District, the electricity provider for the region. SMUD called the customers out for using what it and department investigators said were suspiciously high amounts of electricity indicative of illegal cannabis farming.

The EFF, citing investigator and SMUD records, said the utility unilaterally analyzes customers’ electricity usage in “painstakingly” detailed increments of every 15 minutes. When analysts identify patterns they deem likely signs of illegal grows, they notify sheriff’s investigators. The EFF said the practice violates privacy protections guaranteed by the federal and California governments and is seeking a court order barring the warrantless disclosures.

“SMUD’s disclosures invade the privacy of customers’ homes,” EFF attorneys wrote in a court document in support of last week’s motion. “The whole exercise is the digital equivalent of a door-to-door search of an entire city. The home lies at the ‘core’ of constitutional privacy protection.”

Contrary to claims from SMUD and sheriff’s investigators that the flagged usage reliably indicates illegal grows, the EFF cited multiple examples where they have been wrong. In Decker’s case, for instance, SMUD analysts allegedly told investigators his electricity usage indicated that “4 to 5 grow lights are being used [at his home] from 7pm to 7am.” In actuality, the EFF said, someone in the home was mining cryptocurrency. Nguyen’s electricity consumption was the result of a spinal injury that requires him to use an electric wheelchair and special HVAC equipment to maintain his body temperature.



Provider of covert surveillance app spills passwords for 62,000 users

The maker of a phone app that is advertised as providing a stealthy means for monitoring all activities on an Android device spilled email addresses, plain-text passwords, and other sensitive data belonging to 62,000 users, a researcher discovered recently.

A security flaw in the app, branded Catwatchful, allowed researcher Eric Daigle to download a trove of sensitive data, which belonged to account holders who used the covert app to monitor phones. The leak, made possible by a SQL injection vulnerability, allowed anyone who exploited it to access the accounts and all data stored in them.

Unstoppable

Catwatchful creators emphasize the app’s stealth and security. While the promoters claim the app is legal and intended for parents monitoring their children’s online activities, the emphasis on stealth has raised concerns that it’s being aimed at people with other agendas.

“Catwatchful is invisible,” a page promoting the app says. “It cannot be detected. It cannot be uninstalled. It cannot be stopped. It cannot be closed. Only you can access the information it collects.”

The promoters go on to say users “can monitor a phone without [owners] knowing with mobile phone monitoring software. The app is invisible and undetectable on the phone. It works in a hidden and stealth mode.”



Smart TV OS owners face “constant conflict” between privacy, advertiser demands

DENVER—Most smart TV operating system (OS) owners are in the ad sales business now. Software providers for budget and premium TVs are honing their ad skills, which requires advancing their ability to collect user data. This is creating an “inherent conflict” within the industry, Takashi Nakano, VP of content and programming at Samsung TV Plus, said at the StreamTV Show in Denver last week.

During a panel at StreamTV Insider’s conference entitled “CTV OS Leader Roundtable: From Drivers to Engagement and Content Strategy,” Nakano acknowledged the opposing needs of advertisers and smart TV users, who are calling for a reasonable amount of data privacy.

“Do you want your data sold out there and everyone to know exactly what you’ve been watching … the answer is generally no,” the Samsung executive said. “Yet, advertisers want all of this data. They wanna know exactly what you ate for breakfast.”

Nakano also suggested that the owners of OSes targeting smart TVs and other streaming hardware, like streaming sticks, are inundated with user data that may not actually be that useful or imperative to collect:

I think that there’s inherent conflict in the ad ecosystem supplying so much data. … We’re fortunate to have all that data, but we’re also like, ‘Do we really want to give it all, and hand it all out?’ There’s a constant conflict around that, right? So how do we create an ecosystem where we can serve ads that are pretty good? Maybe it’s not perfect …

Today, connected TV (CTV) OSes are largely built around not just gathering user data, but also creating ways to collect new types of information about viewers in order to deliver more relevant, impactful ads. LG, for example, recently announced that its smart TV OS, webOS, will use a new AI model that informs ad placement based on viewers’ emotions and personal beliefs.
