privacy


Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

Both statements confirmed that the integration never launched and that no Ring customers’ videos were ever sent to Flock.

Ring did not credit users’ privacy concerns for its change of heart. Instead, the company said the joint decision was made “following a comprehensive review” in which Ring “determined the planned Flock Safety integration would require significantly more time and resources than anticipated.”

Separately, Flock said that “we believe this decision allows both companies to best serve their respective customers and communities.”

The only hint that Ring gave users that their concerns had been heard came in the last line of its blog, which said, “We’ll continue to carefully evaluate future partnerships to ensure they align with our standards for customer trust, safety, and privacy.”

Sharing his views on X and Bluesky, John Scott-Railton, a senior cybersecurity researcher at the Citizen Lab, joined critics calling Ring’s statement insufficient. He posted an image of the ad frame that Markey found creepy next to a statement from Ring, writing, “On the left? A picture of mass surveillance from #Ring’s ad. On the right? A ring [spokesperson] saying that they are not doing mass surveillance. The company cannot have it both ways.”

Ring’s statements so far do not “acknowledge the real issue,” Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed “Americans want more control of their privacy right now” and “are savvy enough to see through sappy dog pics.”

“Stop trying to build a surveillance dystopia consumers didn’t ask for” and “focus on shipping good, private products,” Scott-Railton said.

He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.

“Ring communications not acknowledging the lesson they just got publicly taught is a bad sign that they hope this goes away,” Scott-Railton said.



Google recovers “deleted” Nest video in high-profile abduction case

Suspect attempts to cover the camera with a plant.

According to statements made by investigators, the video was apparently “recovered from residual data located in backend systems.” It’s unclear how long such data is retained or how easy it is for Google to access it. Some reports claim that it took several days for Google to recover the data.

In large-scale enterprise storage systems, “deleted” for the user doesn’t always mean the data is gone. Data flagged for deletion is often only compressed or overwritten when the space is actually needed, and in the meantime it may still be recoverable. That’s something a company like Google could decide to do on its own, or it could be compelled to perform the recovery by a court order. In the Guthrie case, it sounds like Google was voluntarily cooperating with the investigation, which makes sense: Publishing video of the alleged perpetrator could be a major breakthrough as investigators seek help from the public.
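To make the idea concrete, here is a minimal, hypothetical sketch of the “soft delete” pattern common in large storage systems. It is not Google’s actual Nest backend, just an illustration of why a user-facing delete does not necessarily erase the underlying bytes until a later cleanup pass runs.

```kotlin
import java.time.Instant

// Hypothetical sketch of soft deletion (not Google's actual design):
// "deleting" only marks a record as expired, and the bytes survive
// until a background purge reclaims them.
class VideoClip(val id: String, val data: ByteArray) {
    var deletedAt: Instant? = null // set on "delete"; data still present
}

class ClipStore {
    private val clips = mutableMapOf<String, VideoClip>()

    fun put(clip: VideoClip) { clips[clip.id] = clip }

    // User-visible delete: the clip disappears from normal queries immediately.
    fun delete(id: String) { clips[id]?.deletedAt = Instant.now() }

    fun visibleClips(): List<VideoClip> = clips.values.filter { it.deletedAt == null }

    // The backend can still read "deleted" data until a purge actually removes it.
    fun recover(id: String): VideoClip? = clips[id]

    fun purgeDeletedBefore(cutoff: Instant) {
        clips.values.removeAll { clip ->
            val deleted = clip.deletedAt
            deleted != null && deleted.isBefore(cutoff)
        }
    }
}
```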

It’s not your cloud

There is a temptation to ascribe some malicious intent to Google’s video storage setup. After all, this video expired after three hours, but here it is nine days later. That feels a bit suspicious on the surface, particularly for a company that is so focused on training AI models that feed on video.

We have previously asked Google to explain how it uses Nest to train AI models, and the company claims it does not incorporate user videos into training data, but the way you interact with the service and with your videos is fair game. “We may use your inputs, including prompts and feedback, usage, and outputs from interactions with AI features to further research, tune, and train Google’s generative models, machine learning technologies, and related products and services,” Google said.



Upgraded Google safety tools can now find and remove more of your personal info

Do you feel popular? There are people on the Internet who want to know all about you! Unfortunately, they don’t have the best of intentions, but Google has some handy tools to address that, and they’ve gotten an upgrade today. The “Results About You” tool can now detect and remove more of your personal information. Plus, the tool for removing non-consensual explicit imagery (NCEI) is faster to use. All you have to do is tell Google your personal details first—that seems safe, right?

With today’s upgrade, Results About You gains the ability to find and remove pages that include ID numbers like your passport, driver’s license, and Social Security numbers. You can add these to Google’s ongoing scans from the Results About You settings; just click the ID numbers section to enable detection.

Naturally, Google has to know what it’s looking for to remove it. So you need to provide at least part of those numbers. Google asks for the full driver’s license number, which is fine, as it’s not as sensitive. For your passport and SSN, you only need the last four digits, which is enough for Google to find the full numbers on webpages.

ID number results detected.
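As a rough illustration of how partial-number matching can work in principle (a hypothetical sketch, not Google’s actual detection system), a scanner that knows only the last four digits can still flag pages exposing what looks like a full nine-digit SSN ending in those digits:

```kotlin
// Hypothetical sketch: flag pages that appear to expose a full SSN whose
// last four digits match what the user supplied. Not Google's actual code.
val ssnPattern = Regex("""\b(\d{3})-?(\d{2})-?(\d{4})\b""")

fun pageExposesSsn(pageText: String, lastFour: String): Boolean =
    ssnPattern.findAll(pageText).any { it.groupValues[3] == lastFour }

fun main() {
    val page = "Leaked record: Jane Doe, SSN 123-45-6789, DOB 01/02/1990"
    println(pageExposesSsn(page, "6789")) // true: page likely exposes the full number
}
```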

The NCEI tool is geared toward hiding real, explicit images as well as deepfakes and other types of artificial sexualized content. This kind of content is rampant on the Internet right now due to the rapid rise of AI. What used to require Photoshop skills is now just a prompt away, and some AI platforms hardly do anything to prevent it.



“Ungentrified” Craigslist may be the last real place on the Internet


People still use Craigslist to find jobs, love, and even to cast creative projects.

The writer and comedian Megan Koester got her first writing job, reviewing Internet pornography, from a Craigslist ad she responded to more than 15 years ago. Several years after that, she used the listings website to find the rent-controlled apartment where she still lives today. When she wanted to buy property, she scrolled through Craigslist and found a parcel of land in the Mojave Desert. She built a dwelling on it (never mind that she’d later discover it was unpermitted) and furnished it entirely with finds from Craigslist’s free section, right down to the laminate flooring, which had previously been used by a production company.

“There’s so many elements of my life that are suffused with Craigslist,” says Koester, 42, whose Instagram account is dedicated, at least in part, to cataloging screenshots of what she has dubbed “harrowing images” from the site’s free section; on the day we speak, she’s wearing a cashmere sweater that cost her nothing, besides the faith it took to respond to an ad with no pictures. “I’m ride or die.”

Koester is one of untold numbers of Craigslist aficionados, many of them in their thirties and forties, who not only still use the old-school classifieds site but also consider it an essential, if anachronistic, part of their everyday lives. It’s a place where anonymity is still possible, where money doesn’t have to be exchanged, and where strangers can make meaningful connections—for romantic pursuits, straightforward transactions, and even to cast unusual creative projects, including experimental TV shows like The Rehearsal on HBO and Amazon Freevee’s Jury Duty. Unlike flashier online marketplaces such as DePop and its parent company, Etsy, or Facebook Marketplace, Craigslist doesn’t use algorithms to track users’ moves and predict what they want to see next. It doesn’t offer public profiles, rating systems, or “likes” and “shares” to dole out like social currency; as a result, Craigslist effectively disincentivizes clout-chasing and virality-seeking—behaviors that are often rewarded on platforms like TikTok, Instagram, and X. It’s a utopian vision of a much earlier, far more earnest Internet.

“The real freaks come out on Craigslist,” says Koester. “There’s a purity to it.” Even still, the site is a little tamer than it used to be: Craigslist shut down its “casual encounters” ads and took its personals section offline in 2018, after Congress passed legislation that would’ve put the company on the hook for listings from potential sex traffickers. The “missed connections” section, however, remains active.

The site is what Jessa Lingel, an associate professor of communication at the University of Pennsylvania, has called the “ungentrified” Internet. If that’s the case, then online gentrification has only accelerated in recent years, thanks in part to the proliferation of AI. Even Wikipedia and Reddit, visually basic sites created in the early aughts and with an emphasis similar to Craigslist’s on fostering communities, have both incorporated their own versions of AI tools.

Some might argue that Craigslist, by contrast, is outdated; an article published in this magazine more than 15 years ago called it “underdeveloped” and “unpredictable.” But to the site’s most devoted adherents, that’s precisely its appeal.

“I think Craigslist is having a revival,” says Kat Toledo, an actor and comedian who regularly uses the site to hire cohosts for her LA-based stand-up show, Besitos. “When something is structured so simply and really does serve the community, and it doesn’t ask for much? That’s what survives.”

Toledo started using Craigslist in the 2000s and never stopped. Over the years, she has turned to the site to find romance, housing, and even her current job as an assistant to a forensic psychologist. She’s worked there full-time for nearly two years, defying Craigslist’s reputation as a supplier of potentially sketchy one-off gigs. The stigma of the website, sometimes synonymous with scammers and, in more than one instance, murderers, can be hard to shake. “If I’m not doing a good job,” Toledo says she jokes to her employer, “just remember you found me on Craigslist.”

But for Toledo, the site’s “random factor”—the way it facilitates connection with all kinds of people she might not otherwise interact with—is also what makes it so exciting. Respondents to her ads seeking paid cohosts tend to be “people who almost have nothing to lose, but in a good way, and everything to gain,” she says. There was the born-again Christian who performed a reenactment of her religious awakening and the poet who insisted on doing Toledo’s makeup; others, like the commercial actor who started crying on the phone beforehand, never made it to the stage.

It’s difficult to quantify just how many people actively use Craigslist and how often they click through its listings. The for-profit company is privately owned and doesn’t share data about its users. (Craigslist also didn’t respond to a request for comment.) But according to the Internet data company similarweb, Craigslist draws more than 105 million monthly users, making it the 40th most popular website in the United States—not too shabby for a company that doesn’t spend any money on advertising or marketing. And though Craigslist’s revenue has reportedly plummeted over the past half-dozen years, based on an estimate from an industry analytics firm, it remains enormously profitable. (The company generates revenue by charging a modest fee to publish ads for gigs, certain types of goods, and in some cities, apartments.)

“It’s not a perfect platform by any means, but it does show that you can make a lot of money through an online endeavor that just treats users like they have some autonomy and grants everybody a degree of privacy,” says Lingel. A longtime Craigslist user, she began researching the site after wondering, “Why do all these web 2.0 companies insist that the only way for them to succeed and make money is off the back of user data? There must be other examples out there.”

In her book, Lingel traces the history of the site, which began in 1995 as an email list for a couple hundred San Francisco Bay Area locals to share events, tech news, and job openings. By the end of the decade, engineer Craig Newmark’s humble experiment had evolved into a full-fledged company with an office, a domain name, and a handful of hires. In true Craigslist fashion, Newmark even recruited the company’s CEO, Jim Buckmaster, from an ad he posted to the site, initially seeking a programmer.

The two have gone to great lengths to wrest the company away from corporate interests. When they suspected a looming takeover attempt from eBay, which had purchased a minority stake in Craigslist from a former employee in 2004, Newmark and Buckmaster spent roughly a decade battling the tech behemoth in court. The litigation ended in 2015, with Craigslist buying back its shares and regaining control.

“They are in lockstep about their early ’90s Internet values,” says Lingel, who credits Newmark and Buckmaster with Craigslist’s long-held aesthetic and ethos: simplicity, privacy, and accessibility. “As long as they’re the major shareholders, that will stay that way.”

Craigslist’s refusal to “sell out,” as Koester puts it, is all the more reason to use it. “Not only is there a purity to the fan base or the user base, there’s a purity to the leadership that they’re uncorruptible basically,” says Koester. “I’m gonna keep looking at Craigslist until I die.” She pauses, then shudders: “Or, until Craig dies, I guess.”

This story originally appeared on wired.com.




The nation’s strictest privacy law just took effect, to data brokers’ chagrin

Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that’s among the strictest in the nation took effect at the beginning of the year.

According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.

The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercise, travel, and entertainment habits, and just about any other imaginable information belonging to millions of people.

Scrubbing your data made easy

Two years ago, California’s Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new mechanism known as DROP (the Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future; CalPrivacy then forwards that demand to all brokers.



Browser extensions with 8 million users collect extended AI conversations

Besides ChatGPT, Claude, and Gemini, the extensions harvest all conversations from Copilot, Perplexity, DeepSeek, Grok, and Meta AI. Koi said the full description of the data captured includes:

  • Every prompt a user sends to the AI
  • Every response received
  • Conversation identifiers and timestamps
  • Session metadata
  • The specific AI platform and model used

The executor script runs independently of the VPN networking, ad blocking, and other core functionality. That means that even when a user toggles off VPN networking, AI protection, ad blocking, or other functions, the conversation collection continues. The only way to stop the harvesting is to disable the extension in the browser settings or to uninstall it.

Koi said it first discovered the conversation harvesting in Urban VPN Proxy, a VPN routing extension that lists “AI protection” as one of its benefits. The data collection began in early July with the release of version 5.5.0.

“Anyone who used ChatGPT, Claude, Gemini, or the other targeted platforms while Urban VPN was installed after July 9, 2025 should assume those conversations are now on Urban VPN’s servers and have been shared with third parties,” the company said. “Medical questions, financial details, proprietary code, personal dilemmas—all of it, sold for ‘marketing analytics purposes.’”

Following that discovery, the security firm uncovered seven additional extension listings with identical AI harvesting functionality. In all, four extensions are available in the Chrome Web Store, and the same four are listed on the Edge add-ons page. Collectively, they have been installed more than 8 million times.

They are:

Chrome Web Store:

  • Urban VPN Proxy: 6 million users
  • 1ClickVPN Proxy: 600,000 users
  • Urban Browser Guard: 40,000 users
  • Urban Ad Blocker: 10,000 users

Edge Add-ons:

  • Urban VPN Proxy: 1.32 million users
  • 1ClickVPN Proxy: 36,459 users
  • Urban Browser Guard: 12,624 users
  • Urban Ad Blocker: 6,476 users

Read the fine print

The extensions come with conflicting messages about how they handle bot conversations, which often contain deeply personal information about users’ physical and mental health, finances, personal relationships, and other sensitive information that could be a gold mine for marketers and data brokers. The Urban VPN Proxy in the Chrome Web Store, for instance, lists “AI protection” as a benefit. It goes on to say:



Google will end dark web reports that alerted users to leaked data

As Google admits in the email alert, its dark web scans didn’t offer much help. “Feedback showed that it did not provide helpful next steps,” Google said of the service. Here’s the full text of the email.

Google dark web email

Credit: Google

With other types of personal data alerts provided by the company, it has the power to do something. For example, you can have Google remove pages from search that list your personal data. Google doesn’t run anything on the dark web, though, so all it can do is remind you that your data is being passed around in one of the shadier corners of the Internet.

The shutdown begins on January 15, when Google will stop conducting new scans for user data on the dark web. Past data will no longer be available as of February 16, 2026. Google says it will delete all past reports at that time. However, users can remove their monitoring profile earlier in the account settings. This change does not impact any of Google’s other privacy reports.

The good news is that the best ways to protect your personal data from being shuffled around the dark web are the same ones that keep you safe on the open web. Google suggests always using two-step verification, and tools like passkeys and Google’s Password Checkup can ensure you don’t accidentally reuse a compromised password. Stay safe out there.



Flock haters cross political divides to remove error-prone cameras

“People should care because this could be you,” White said. “This is something that police agencies are now using to document and watch what you’re doing, where you’re going, without your consent.”

Haters cross political divides to fight Flock

Currently, Flock’s reach is broad, “providing services to 5,000 police departments, 1,000 businesses, and numerous homeowners associations across 49 states,” lawmakers noted. Additionally, in October, Flock partnered with Amazon, allowing police to request Ring camera footage and widening Flock’s lens further.

However, Flock’s reach notably doesn’t extend into certain cities and towns in Arizona, Colorado, New York, Oregon, Tennessee, Texas, and Virginia, following successful local bids to end Flock contracts. These local fights have only just started as groups learn from each other, Sarah Hamid, EFF’s director of strategic campaigns, told Ars.

“Several cities have active campaigns underway right now across the country—urban and rural, in blue states and red states,” Hamid said.

A Flock spokesperson told Ars that the growing effort to remove cameras “remains an extremely small percentage of communities that consider deploying Flock technology (low single digit percentages).” To keep Flock’s cameras on city streets, Flock attends “hundreds of local community meetings and City Council sessions each month, and the vast majority of those contracts are accepted,” Flock’s spokesperson said.

Hamid challenged Flock’s “characterization of camera removals as isolated incidents,” though, noting “that doesn’t reflect what we’re seeing.”

“The removals span multiple states and represent different organizing strategies—some community-led, some council-initiated, some driven by budget constraints,” Hamid said.

Most recently, city officials voted to remove Flock cameras this fall in Sedona, Arizona.

A 72-year-old retiree, Sandy Boyce, helped fuel the local movement there after learning that Sedona had “quietly” renewed its Flock contract, NBC News reported. She felt enraged as she imagined her tax dollars continuing to support a camera system tracking her movements without her consent, she told NBC News.



Republican plan would make deanonymization of census data trivial


“Differential privacy” algorithm prevents statistical data from being tied to individuals.

President Donald Trump and the Republican Party have spent the better part of the president’s second term radically reshaping the federal government. But in recent weeks, the GOP has set its sights on taking another run at an old target: the US census.

Since the first Trump administration, the right has sought to add a question to the census that captures a respondent’s immigration status and to exclude noncitizens from the tallies that determine how seats in Congress are distributed. In 2019, the Supreme Court struck down an attempt by the first Trump administration to add a citizenship question to the census.

But now, a little-known algorithmic process called “differential privacy,” created to keep census data from being used to identify individual respondents, has become the right’s latest focus. WIRED spoke to six experts about the GOP’s ongoing effort to falsely allege that a system created to protect people’s privacy has made the data from the 2020 census inaccurate.

If successful, the campaign to get rid of differential privacy could not only radically change the kind of data made available, but could put the data of every person living in the US at risk. The campaign could also discourage immigrants from participating in the census entirely.

The Census Bureau regularly publishes anonymized data so that policymakers and researchers can use it. That data is also sensitive: Conducted every 10 years, the census counts every person living in the United States, citizen and noncitizen alike. The data includes detailed information like respondents’ race, sex, and age, as well as the languages they speak, their home addresses, economic status, and the number of people living in a house. This data is used for allocating the federal funds that support public services like schools and hospitals, as well as for determining how a state’s population is divided up and represented in Congress. The more people in a state, the more congressional representation—and more votes in the Electoral College.

As computers got increasingly sophisticated and data more abundant and accessible, census employees and researchers realized the data published by the Census Bureau could be reverse-engineered to identify individual people. According to Title 13 of the US Code, it is illegal for census workers to publish any data that would identify individual people, their homes, or businesses. A government employee revealing this kind of information could be punished with thousands of dollars in fines or even a possible prison sentence.

For individuals, this could mean, for instance, that census data published without differential privacy could be used to identify transgender youth, according to research from the University of Washington.

For immigrants, the prospect of being reidentified through census data could “create panic among noncitizens as well as their families and friends,” says Danah Boyd, a census expert and the founder of Data & Society, a nonprofit research group focused on the downstream effects of technology. LGBTQ+ people might not “feel safe sharing that they are in a same-sex marriage. There are plenty of people in certain geographies who do not want data like this to be public,” she says. This could also mean that information that might be available only through something like a search warrant would suddenly be obtainable. “Unmasking published records is not illegal. Then you can match it to large law enforcement databases without actually breaching the law.”

A need for noise

Differential privacy keeps that data private. It’s a mathematical framework whereby a statistical output can’t be used to determine any individual’s data in a dataset, and the bureau’s algorithm for differential privacy is called TopDown. It injects “noise” into the data starting at the highest level (national), moving progressively downward. There are certain constraints placed around the kind of noise that can be introduced—for instance, the total number of people in a state or census block has to remain the same. But other demographic characteristics, like race or gender, are randomly reassigned to individual records within a set tranche of data. This way, the overall number of people with a certain characteristic remains constant, while the characteristics associated with any one record don’t describe an individual person. In other words, you’ll know how many women or Hispanic people are in a census block, just not exactly where.
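The Census Bureau’s TopDown algorithm is far more elaborate than this, but a minimal sketch of the underlying idea, the Laplace mechanism that most differential privacy systems are built on, shows how calibrated noise keeps a published count useful while hiding any one person’s contribution. The code below is illustrative only, not the bureau’s implementation.

```kotlin
import kotlin.math.abs
import kotlin.math.ln
import kotlin.math.sign
import kotlin.random.Random

// Illustrative only: the Laplace mechanism at the heart of most differential
// privacy systems. The noise is calibrated so that adding or removing any
// single person changes the published count by no more than the noise hides,
// making individual records statistically deniable.
fun laplaceNoise(scale: Double): Double {
    // Inverse-transform sampling from a Laplace(0, scale) distribution.
    val u = Random.nextDouble() - 0.5
    return -scale * sign(u) * ln(1 - 2 * abs(u))
}

// For a simple counting query, one person changes the count by at most 1,
// so the noise scale is 1/epsilon. Smaller epsilon = more noise = more privacy.
fun noisyCount(trueCount: Int, epsilon: Double): Double =
    trueCount + laplaceNoise(1.0 / epsilon)

fun main() {
    val womenInBlock = 412
    println(noisyCount(womenInBlock, epsilon = 0.5)) // e.g., 407.8
    println(noisyCount(womenInBlock, epsilon = 5.0)) // e.g., 412.2
}
```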

“Differential privacy solves a particular problem, which is if you release a lot of information, a lot of statistics, based on the same set of confidential data, eventually somebody can piece together what that confidential data had to be,” says Simson Garfinkel, former senior computer scientist for confidentiality and data access at the Census Bureau.

Differential privacy was first used on data from the 2020 census. Even though one couldn’t identify a specific individual from the data, “you can still get an accurate count on things that are important for funding and voting rights,” says Moon Duchin, a mathematics professor at Tufts University who worked with census data to inform electoral maps in Alabama. The first use of differential privacy for the census happened under the Trump presidency, though the reports themselves were published after he left office. Civil servants, not political appointees, are the ones responsible for determining how census data is collected and analyzed. Emails later obtained by the Brennan Center showed that officials overseeing the Census Bureau, under then-Commerce Secretary Wilbur Ross, expressed an “unusually high degree” of interest in the “technical matters” of the process, a level of attention that the bureau’s deputy director and COO, Ron Jarmin, called “unprecedented.”

It’s this data from the 2020 census that Republicans have taken issue with. On August 21, the Center for Renewing America, a right-wing think tank founded by Russ Vought, currently the director of the US Office of Management and Budget, published a blog post alleging that differential privacy “may have played a significant role in tilting the political scales favorably toward Democrats for apportionment and redistricting purposes.” The post goes on to acknowledge that, even if a citizenship question were added to the census (which Trump attempted during his first administration), the differential privacy “algorithm will be able to mask characteristic data, including citizenship status.”

Duchin and other experts who spoke to WIRED say that differential privacy does not change apportionment, or how seats in Congress are distributed—several red states, including Texas and Florida, gained representation after the 2020 census, while blue states like California lost representatives.

COUNTing the cost

On August 28, Republican Representative August Pfluger introduced the COUNT Act. If passed, it would add a citizenship question to the census and force the Census Bureau to “cease utilization of the differential privacy process.” Pfluger’s office did not immediately respond to a request for comment.

“Differential privacy is a punching bag that’s meant here as an excuse to redo the census,” says Duchin. “That is what’s going on, if you ask me.”

On October 6, Senator Jim Banks, a Republican from Indiana, sent a letter to Secretary of Commerce Howard Lutnick, urging him to “investigate and correct errors from the 2020 Census that handed disproportionate political power to Democrats and illegal aliens.” The letter goes on to allege that the use of differential privacy “alters the total population of individual voting districts.” Similar to the COUNT Act and the Renewing America post, the letter also states that the 2030 Census “must request citizenship status.”

Peter Bernegger, a Wisconsin-based “election integrity” activist who is facing a criminal charge of simulating the legal process for allegedly falsifying a subpoena, amplified Banks’ letter on X, alleging that the use of differential privacy was part of “election rigging by the Obama/Biden administrations.” Bernegger’s post was viewed more than 236,000 times.

Banks’ office and Bernegger did not immediately respond to a request for comment.

“No differential privacy was ever applied to the data used to apportion the House of Representatives, so the claim that seats in the House were affected is simply false,” says John Abowd, former associate director for research and methodology and chief scientist at the United States Census Bureau. Abowd oversaw the implementation of differential privacy while at the Census Bureau. He says that the data from the 2020 census has been successfully used by red and blue states, as well as redistricting commissions, and that the only difference from previous census data was that no one would be able to “reconstruct accurate, identifiable individual data to enhance the other databases that they use (voter rolls, drivers licenses, etc.).”

With a possible addition of the citizenship question, proposed by both Banks and the COUNT Act, Boyd says that census data would be even more sensitive, because that kind of information is not readily available in commercial data. “Plenty of data brokers would love to get their hands on that data.”

Shortly after Senator Banks published his letter, Abowd found himself in the spotlight. On October 9, the X account @amuse posted a blog-length post alleging that Abowd was the bureaucrat who “stole the House.” The post also alleged, without evidence, that the census results meant that “Republican states are projected to lose almost $90 billion in federal funds across the decade as a result of the miscounts. Democratic states are projected to gain $57 billion.” The account has more than 666,000 followers, including billionaire Elon Musk, venture capitalist Marc Andreessen, and US pardon attorney Ed Martin. (Abowd told WIRED he was “keeping an eye” on the post, which was viewed more than 360,000 times.) That same week, America First Legal, the conservative nonprofit founded by now deputy chief of staff for policy Stephen Miller, posted about a complaint the group had recently filed in Florida, challenging the 2020 census results, alleging they were based upon flawed statistical methods, one of which was differential privacy.

The results of all this, experts tell WIRED, are that fewer people will feel safe participating in the census and that the government will likely need to spend even more resources to try to get an accurate count. Undercounting could lead to skewed numbers that could impact everything from congressional representation to the amount of funding a municipality might receive from the government.

Neither the proposed COUNT Act nor Senator Banks’ letter outlines an alternative to differential privacy. This means that the Census Bureau would likely be left with two options: Publish data that could put people at risk (which could lead to legal consequences for its staff), or publish less data. “At present, I do not know of any alternative to differential privacy that can safeguard the personal data that the US Census Bureau uses in their work on the decennial census,” says Abraham Flaxman, an associate professor of health metrics sciences at the University of Washington, whose team conducted the study on transgender youth.

Getting rid of differential privacy is not a “light thing,” says a Census employee familiar with the bureau’s privacy methods and who requested anonymity because they were not authorized to speak to the press. “It may be for the layperson. But the entire apparatus of disclosure avoidance at the bureau has been geared for the last almost 10 years on differential privacy.” According to the employee, there is no immediately clear method to replace differential privacy.

Boyd says that the safest bet would simply be “what is known as suppression, otherwise known as ‘do not publish.’” (This, according to Garfinkel, was the backup plan if differential privacy had not been implemented for the 2020 census.)

Another option would be for the Census Bureau to publish only population counts, meaning that demographic information like the race or age of respondents would be left out. “This is a problem, because we use census data to combat discrimination,” says Boyd. “The consequences of losing this data is not being able to pursue equity.”

This story originally appeared on wired.com.




Ring cameras are about to get increasingly chummy with law enforcement


Amazon’s Ring partners with company whose tech has reportedly been used by ICE.

Ring’s Outdoor Cam Pro. Credit: Amazon

Law enforcement agencies will soon have easier access to footage captured by Amazon’s Ring smart cameras. In a partnership announced this week, Amazon will allow approximately 5,000 local law enforcement agencies to request access to Ring camera footage via surveillance platforms from Flock Safety. Ring’s cooperation with law enforcement and the reported use of Flock technologies by federal agencies, including US Immigration and Customs Enforcement (ICE), have resurfaced privacy concerns that have followed the devices for years.

According to Flock’s announcement, its Ring partnership allows local law enforcement members to use Flock software “to send a direct post in the Ring Neighbors app with details about the investigation and request voluntary assistance.” Requests must include “specific location and timeframe of the incident, a unique investigation code, and details about what is being investigated,” and users can look at the requests anonymously, Flock said.

“Any footage a Ring customer chooses to submit will be securely packaged by Flock and shared directly with the requesting local public safety agency through the FlockOS or Flock Nova platform,” the announcement reads.

Flock said its local law enforcement users will gain access to Ring Community Requests in “the coming months.”

A flock of privacy concerns

Outside its software platforms, Flock is known for license plate recognition cameras. Flock customers can also search footage from Flock cameras using descriptors to find people, such as “man in blue shirt and cowboy hat.” Besides law enforcement agencies, Flock says 6,000 communities and 1,000 businesses use its products.

For years, privacy advocates have warned against companies like Flock.

This week, US Sen. Ron Wyden (D-Ore.) sent a letter [PDF] to Flock CEO Garrett Langley saying that ICE’s Homeland Security Investigations (HSI), the Secret Service, and the US Navy’s Criminal Investigative Service have had access to footage from Flock’s license plate cameras.

“I now believe that abuses of your product are not only likely but inevitable and that Flock is unable and uninterested in preventing them,” Wyden wrote.

In August, Jay Stanley, senior policy analyst for the ACLU Speech, Privacy, and Technology Project, wrote that “Flock is building a dangerous, nationwide mass-surveillance infrastructure.” Stanley pointed to ICE using Flock’s network of cameras, as well as Flock’s efforts to build a people lookup tool with data brokers.

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told Ars via email that Flock is a “mass surveillance tool” that “has increasingly been used to spy on both immigrants and people exercising their First Amendment-protected rights.”

Flock has earned this reputation among privacy advocates through its own cameras, not Ring’s.

An Amazon spokesperson told Ars Technica that only local public safety agencies will be able to make Community Requests via Flock software, and that requests will also show the name of the agency making the request.

A Flock spokesperson told Ars:

Flock does not currently have any contracts with any division of [the US Department of Homeland Security], including ICE. The Ring Community Requests process through Flock is only available for local public safety agencies for specific, active investigations. All requests are time and geographically-bound. Ring users can choose to share relevant footage or ignore the request.

Flock’s rep added that all activity within FlockOS and Flock Nova is “permanently recorded in a comprehensive CJIS-compliant audit trail for unalterable custody tracking,” referring to a set of standards created by the FBI’s Criminal Justice Information Services division.

But there’s still concern that federal agencies will end up accessing Ring footage through Flock. Guariglia told Ars:

Even without formal partnerships with federal authorities, data from these surveillance companies flow to agencies like ICE through local law enforcement. Local and state police have run more than 4,000 Flock searches on behalf of federal authorities or with a potential immigration focus, reporting has found. Additionally, just this month, it became clear that Texas police searched 83,000 Flock cameras in an attempt to prosecute a woman for her abortion and then tried to cover it up.

Ring cozies up to the law

This week’s announcement shows Amazon, which acquired Ring in 2018, increasingly positioning its consumer cameras as a law enforcement tool. After years of cops using Ring footage, Amazon last year said that it would stop letting police request Ring footage—unless it was an “emergency”—only to reverse course about 18 months later by allowing police to request Ring footage through a Flock rival, Axon.

While announcing Ring’s deals with Flock and Axon, Ring founder and CEO Jamie Siminoff claimed that the partnerships would help Ring cameras keep neighborhoods safe. But there’s doubt as to whether people buy Ring cameras to protect their neighborhood.

“Ring’s new partnership with Flock shows that the company is more interested in contributing to mounting authoritarianism than servicing the specific needs of their customers,” Guariglia told Ars.

Interestingly, Ring initiated conversations about a deal with Flock, Langley told CNBC.

Flock says that its cameras don’t use facial recognition, which has been criticized for racial biases. But local law enforcement agencies using Flock will soon have access to footage from Ring cameras with facial recognition. In a conversation with The Washington Post this month, Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center, described the new feature for Ring cameras as “invasive for anyone who walks within range of” a Ring doorbell, since they likely haven’t consented to facial recognition being used on them.

Amazon, for its part, has mostly pushed the burden of ensuring responsible facial recognition use to its customers. Schroeder shared concern with the Post that Ring’s facial recognition data could end up being shared with law enforcement.

Some people who are perturbed about Ring deepening its ties with law enforcement have complained online.

“Inviting big brother into the system. Screw that,” a user on the Ring subreddit said this week.

Another Reddit user said: “And… I’m gone. Nope, NO WAY IN HELL. Goodbye, Ring. I’ll be switching to a UniFi[-brand] system with 100 percent local storage. You don’t get my money any more. This is some 1984 BS …”

Privacy concerns are also exacerbated by Ring’s past, as the company has previously failed to meet users’ privacy expectations. In 2023, Ring agreed to pay $5.8 million to settle claims that employees illegally spied on Ring customers.

Amazon and Flock say their collaboration will only involve voluntary customers and local law enforcement agencies. But there’s still reason to be concerned about the implications of people sending doorbell and personal camera footage to law enforcement via platforms that are reportedly widely used by federal agencies for deportation purposes. Combined with the privacy issues that Ring has already faced for years, it’s not hard to see why some feel that Amazon scaling up Ring’s association with any type of law enforcement is unacceptable.

And it appears that Amazon and Flock would both like Ring customers to opt in when possible.

“It will be turned on for free for every customer, and I think all of them will use it,” Langley told CNBC.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Hackers can steal 2FA codes and private messages from Android phones


STEALING CODES ONE PIXEL AT A TIME

Malicious app required to make “Pixnapping” attack work requires no permissions.

Samsung’s S25 phones. Credit: Samsung

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 phone and could likely be adapted to other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot

Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps

The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This call causes the information to be sent to the Android rendering pipeline, the system that takes each app’s pixels so they can be rendered on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white or, more generally, if the color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
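To give a sense of what the timing measurement in the third step looks like, here is a minimal Android sketch, illustrative only and not the researchers’ code, that records per-frame render intervals with Choreographer. In Pixnapping, unusually long frames indicate that the graphical operations layered over the target pixel hit the “slow,” non-white case.

```kotlin
import android.view.Choreographer

// Illustrative only: measuring per-frame render times on Android.
// Pixnapping's third step turns these timings into a one-bit signal per
// target pixel (white vs. non-white), rebuilding on-screen content pixel
// by pixel. This snippet only shows the timing measurement itself.
class FrameTimer : Choreographer.FrameCallback {
    private var lastFrameNanos = 0L
    val frameDurationsNanos = mutableListOf<Long>()

    fun start() {
        // Must be called from a thread with a Looper (e.g., the main thread).
        Choreographer.getInstance().postFrameCallback(this)
    }

    override fun doFrame(frameTimeNanos: Long) {
        if (lastFrameNanos != 0L) {
            // An unusually long frame suggests the overlaid graphical work
            // was more expensive, which is the side-channel signal.
            frameDurationsNanos.add(frameTimeNanos - lastFrameNanos)
        }
        lastFrameNanos = frameTimeNanos
        Choreographer.getInstance().postFrameCallback(this)
    }
}
```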

As Ars reader hotball put it in the comments below:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks are probably of less value.

Post updated to add details about how the attack works.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.



Google confirms Android dev verification will have free and paid tiers, no public list of devs

A lack of trust

Google has an answer for the most problematic elements of its verification plan, but anywhere there’s a gap, it’s easy to see a conspiracy. Why? Well, let’s look at the situation in which Google finds itself.

The courts have ruled that Google acted illegally to maintain a monopoly in the Play Store—it worked against the interests of developers and users for years to make Google Play the only viable source of Android apps, and for what? The Play Store is an almost unusable mess of sponsored search results and suggested apps, most of which are little more than in-app purchase factories that deliver Google billions of dollars every year.

Google has every reason to protect the status quo (it may take the case all the way to the Supreme Court), and now it has suddenly decided the security risk of sideloaded apps must be addressed. The way it’s being addressed puts Google in the driver’s seat at a time when alternative app stores may finally have a chance to thrive. It’s all very convenient for Google.

Developers across the Internet are expressing wariness about giving Google their personal information. Google, however, has decided anonymity is too risky. We now know a little more about how Google will manage the information it collects on developers, though. While Play Store developer information is listed publicly, the video confirms there will be no public list of sideload developers. However, Google will have the information, and that means it could be demanded by law enforcement or governments.

The current US administration has had harsh words for apps like ICEBlock, which it successfully pulled from the Apple App Store. Google’s new centralized control of app distribution would allow similar censorship on Android, and the real identities of those who developed such an app would also be sitting in a Google database, ready to be subpoenaed. A few years ago, developers might have trusted Google with this data, but now? The goodwill is gone.
