censorship

Platforms bend over backward to help DHS censor ICE critics, advocates say


Pam Bondi and Kristi Noem sued for coercing platforms into censoring ICE posts.

Credit: Aurich Lawson | Getty Images

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.

Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.

In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content “to control what the public can see, hear, or say about ICE operations.”

It’s the second lawsuit alleging that Bondi and DHS officials are using regulatory power to pressure private platforms to suppress speech protected by the First Amendment. It follows a complaint from the developer of an app called ICEBlock, which Apple removed from the App Store in October. Officials aren’t rushing to resolve that case—last month, they requested more time to respond—so it may remain unclear until March what defense they plan to offer for the takedown demands.

That leaves community members who monitor ICE in a precarious situation, as critical resources could disappear at the department’s request with no warning.

FIRE says people have legitimate reasons to share information about ICE. Some communities focus on helping people avoid dangerous ICE activity, while others aim to hold the government accountable and raise public awareness of how ICE operates. Unless there’s proof of incitement to violence or a true threat, such expression is protected.

Despite the high bar for censoring online speech, lawsuits trace an escalating pattern of DHS targeting websites, app stores, and platforms—many of which have been willing to remove content the government dislikes.

Officials have ordered ICE-monitoring apps to be removed from app stores and even threatened to sanction CNN for simply reporting on the existence of one such app. Officials have also demanded that Meta delete at least one Chicago-based Facebook group with 100,000 members and made multiple unsuccessful attempts to unmask anonymous users behind other Facebook groups. Even encrypted apps like Signal don’t feel safe from officials’ seeming overreach. FBI Director Kash Patel recently said he has opened an investigation into Signal chats used by Minnesota residents to track ICE activity, NBC News reported.

As DHS censorship threats increase, platforms have done little to shield users, advocates say. Not only have they sometimes failed to reject unlawful orders that offered nothing more than “a bare mention of ‘officer safety/doxing’” as justification, but in one case, Google complied with a subpoena that left a critical section blank, the Electronic Frontier Foundation (EFF) reported.

For users, it’s increasingly difficult to trust that platforms won’t betray their own policies when faced with government intimidation, advocates say. Sometimes platforms notify users before complying with government requests, giving users a chance to challenge potentially unconstitutional demands. But in other cases, users learn about the requests only as platforms comply with them—even when those platforms have promised that would never happen.

Government emails with platforms may be exposed

Platforms could face backlash from users if lawsuits expose their communications with the government, a possibility in the coming months. Last fall, the EFF sued after the DOJ, DHS, ICE, and Customs and Border Protection failed to respond to Freedom of Information Act requests seeking emails between the government and platforms about takedown demands. Other lawsuits may surface emails in discovery. In the coming weeks, a judge will set a schedule for the EFF’s litigation.

“The nature and content of the Defendants’ communications with these technology companies” is “critical for determining whether they crossed the line from governmental cajoling to unconstitutional coercion,” EFF’s complaint said.

EFF Senior Staff Attorney Mario Trujillo told Ars that the EFF is confident it can win the fight to expose government demands, but like most FOIA lawsuits, the case is expected to move slowly. That’s unfortunate, he said, because ICE activity is escalating, and delays in addressing these concerns could irreparably harm speech at a pivotal moment.

Like users, platforms are seemingly victims, too, FIRE senior attorney Colin McDonnell told Ars.

They’ve been forced to override their own editorial judgment while navigating implicit threats from the government, he said.

“If Attorney General Bondi demands that they remove speech, the platform is going to feel like they have to comply; they don’t have a choice,” McDonnell said.

But platforms do have a choice and could be doing more to protect users, the EFF has said. Platforms could even serve as a first line of defense, requiring officials to get a court order before complying with any requests.

Platforms may now have good reason to push back against government requests—and to give users the tools to do the same. Trujillo noted that while courts have been slow to address the ICEBlock removal and FOIA lawsuits, the government has quickly withdrawn requests to unmask Facebook users soon after litigation began.

“That’s like an acknowledgement that the Trump administration, when actually challenged in court, wasn’t even willing to defend itself,” Trujillo said.

Platforms could view that as evidence that government pressure only works when platforms fail to put up a bare-minimum fight, Trujillo said.

Platforms “bend over backward” to appease DHS

An open letter from the EFF and the American Civil Liberties Union (ACLU) documented two instances of tech companies complying with government demands without first notifying users.

The letter called out Meta for unmasking at least one user without prior notice, which the groups noted “potentially” occurred due to a “technical glitch.”

More troubling than buggy notifications, however, is the possibility that platforms may be routinely delaying notice until it’s too late.

After Google “received an ICE subpoena for user data and fulfilled it on the same day that it notified the user,” the company admitted that “sometimes when Google misses its response deadline, it complies with the subpoena and provides notice to a user at the same time to minimize the delay for an overdue production,” the letter said.

“This is a worrying admission that violates [Google’s] clear promise to users, especially because there is no legal consequence to missing the government’s response deadline,” the letter said.

Platforms face no sanctions for refusing to comply with government demands that have not been court-ordered, the letter noted. That’s why the EFF and ACLU have urged companies to use their “immense resources” to shield users who may not be able to drop everything and fight unconstitutional data requests.

In their letter, the groups asked companies to insist on court intervention before complying with a DHS subpoena. They also urged platforms to resist DHS “gag orders” that direct them not to notify users whose data is being handed over.

Instead, they should commit to giving users “as much notice as possible when they are the target of a subpoena,” as well as a copy of the subpoena. Ideally, platforms would also link users to legal aid resources and take up legal fights on behalf of vulnerable users, advocates suggested.

That’s not what’s happening so far. Trujillo told Ars that it feels like “companies have bent over backward to appease the Trump administration.”

The tide could turn this year if courts side with app makers behind crowdsourcing apps like ICEBlock and Eyes Up, who are suing to end the alleged government coercion. FIRE’s McDonnell, who represents the creator of Eyes Up, told Ars that platforms may feel more comfortable exercising their own editorial judgment moving forward if a court declares they were coerced into removing content.

DHS can’t use doxing to dodge First Amendment

FIRE’s lawsuit accuses Bondi and Noem of coercing Meta to disable a Facebook group with 100,000 members called “ICE Sightings–Chicagoland.”

The popularity of that group surged during “Operation Midway Blitz,” when hundreds of agents arrested more than 4,500 people over weeks of raids that brought tear gas into neighborhoods and caused car crashes and other violence. Arrests included US citizens and immigrants with lawful status, which “gave Chicagoans reason to fear being injured or arrested due to their proximity to ICE raids, no matter their immigration status,” FIRE’s complaint said.

Kassandra Rosado, a lifelong Chicagoan and US citizen of Mexican descent, started the Facebook group and served as admin, moderating content with other volunteers. She prohibited “hate speech or bullying” and “instructed group members not to post anything threatening, hateful, or that promoted violence or illegal conduct.”

Facebook only ever flagged five posts that supposedly violated community guidelines, but in warnings, the company reassured Rosado that “groups aren’t penalized when members or visitors break the rules without admin approval.”

Rosado had no reason to suspect that her group was in danger of removal. When Facebook disabled her group, it told Rosado the group violated community standards “multiple times.” But her complaint noted that, confusingly, “Facebook policies don’t provide for disabling groups if a few members post ostensibly prohibited content; they call for removing groups when the group moderator repeatedly either creates prohibited content or affirmatively ‘approves’ such content.”

Facebook’s decision came after a right-wing influencer, Laura Loomer, tagged Noem and Bondi in a social media post alleging that the group was “getting people killed.” Within two days, Bondi bragged that she had gotten the group disabled while claiming that it “was being used to dox and target [ICE] agents in Chicago.”

McDonnell told Ars it seems clear that Bondi selectively uses the term “doxing” when people post images from ICE arrests. He pointed to “ICE’s own social media accounts,” which share favorable opinions of ICE alongside videos and photos of ICE arrests that Bondi doesn’t consider doxing.

“Rosado’s creation of Facebook groups to send and receive information about where and how ICE carries out its duties in public, to share photographs and videos of ICE carrying out its duties in public, and to exchange opinions about and criticism of ICE’s tactics in carrying out its duties, is speech protected by the First Amendment,” FIRE argued.

The same goes for speech managed by Mark Hodges, a US citizen who resides in Indiana. He created an app called Eyes Up to serve as an archive of ICE videos. Apple removed Eyes Up from the App Store around the same time that it removed ICEBlock.

“It is just videos of what government employees did in public carrying out their duties,” McDonnell said. “It’s nothing even close to threatening or doxing or any of these other theories that the government has used to justify suppressing speech.”

Bondi bragged that she had gotten ICEBlock banned, and FIRE’s complaint confirmed that Hodges’ company received the same notification that ICEBlock’s developer got after Bondi’s victory lap. The notice said that Apple received “information” from “law enforcement” claiming that the apps had violated Apple guidelines against “defamatory, discriminatory, or mean-spirited content.”

Apple did not reach the same conclusion when it independently reviewed Eyes Up prior to the government’s meddling, FIRE’s complaint said. Notably, the app remains available on Google Play, and Rosado now manages a new Facebook group with similar content but somewhat tighter restrictions on who can join. Neither has prompted urgent intervention from the tech giants or the government.

McDonnell told Ars that it’s harmful for DHS to water down the meaning of doxing when pushing platforms to remove content critical of ICE.

“When most of us hear the word ‘doxing,’ we think of something that’s threatening, posting private information along with home addresses or places of work,” McDonnell said. “And it seems like the government is expanding that definition to encompass just sharing, even if there’s no threats, nothing violent. Just sharing information about what our government is doing.”

Expanding the definition and then using that term to justify suppressing speech is concerning, he said, especially since the First Amendment includes no exception for “doxing,” even if DHS ever were to provide evidence of it.

To suppress speech, officials must show that groups are inciting violence or making true threats. FIRE has alleged that the government has not met “the extraordinary justifications required for a prior restraint” on speech and is instead using vague doxing threats to discriminate against speech based on viewpoint. They’re seeking a permanent injunction barring officials from coercing tech companies into censoring ICE posts.

If plaintiffs win, the censorship threats could subside, and tech companies may feel safe reinstating apps and Facebook groups, advocates told Ars. That could potentially revive archives documenting thousands of ICE incidents and reconnect webs of ICE watchers who lost access to valued feeds.

Until courts possibly end threats of censorship, the most cautious community members are moving local ICE-watch efforts to group chats and listservs that are harder for the government to disrupt, Trujillo told Ars.

DHS keeps trying and failing to unmask anonymous ICE critics online

The Department of Homeland Security (DHS) has backed down from a fight to unmask the owners of Instagram and Facebook accounts monitoring Immigration and Customs Enforcement (ICE) activity in Pennsylvania.

One of the anonymous account holders, John Doe, sued to block ICE from identifying him and other critics online through summonses to Meta that he claimed infringed on core First Amendment-protected activity.

DHS initially fought Doe’s motion to quash the summonses, arguing that the community watch groups endangered ICE agents by posting “pictures and videos of agents’ faces, license plates, and weapons, among other things.” This was akin to “threatening ICE agents to impede the performance of their duties,” DHS alleged. DHS’s arguments echoed those of Secretary Kristi Noem, who has claimed that identifying ICE agents is a crime, even though Wired noted that ICE employees often maintain easily discoverable LinkedIn profiles.

To Doe, the agency seemed intent on testing the waters to see if it could seize authority to unmask all critics online by invoking a customs statute that allows agents to subpoena information on goods entering or leaving the US.

But then, on January 16, DHS abruptly reversed course, withdrawing its summonses from Meta.

A court filing confirmed that DHS dropped its requests for subscriber information last week, after initially demanding Doe’s “postal code, country, all email address(es) on file, date of account creation, registered telephone numbers, IP address at account signup, and logs showing IP address and date stamps for account accesses.”

The filing does not explain why DHS decided to withdraw its requests.

However, previously, DHS requested similar information from Meta about six Instagram community watch groups that shared information about ICE activity in Los Angeles and other locations. DHS withdrew those requests, too, after account holders defended their First Amendment rights and filed motions to quash their summonses, Doe’s court filing said.

Lawsuit: DHS wants “unlimited subpoena authority” to unmask ICE critics


Defending online anonymity

DHS is weirdly using import/export rules to expand its authority to identify online critics.

A Border Patrol Tactical Unit agent sprays pepper spray into the face of a protestor attempting to block an immigration officer vehicle from leaving the scene where a woman was shot and killed by a federal agent earlier, in Minneapolis on January 7, 2026. Credit: Star Tribune via Getty Images

The US Department of Homeland Security (DHS) is fighting to unmask the owner of Facebook and Instagram accounts of a community watch group monitoring Immigration and Customs Enforcement (ICE) activity in Pennsylvania.

Defending the right to post about ICE sightings anonymously is a Meta account holder for MontCo Community Watch, John Doe.

Doe has alleged that when the DHS sent a “summons” to Meta asking for subscriber information, it infringed on core First Amendment-protected activity, i.e., the right to publish content critical of government agencies and officials without fear of government retaliation. He also accused DHS of ignoring federal rules and seeking to vastly expand its authority to subpoena information to unmask ICE’s biggest critics online.

“I believe that my anonymity is the only thing standing between me and unfair and unjust persecution by the government of the United States,” Doe said in his complaint.

In response, DHS alleged that the group’s posting of “pictures and videos of agents’ faces, license plates, and weapons, among other things” was akin to “threatening ICE agents to impede the performance of their duties.” Claiming that the subpoena had nothing to do with silencing government critics, the agency argued that a statute regulating imports and exports empowered DHS to investigate the group’s alleged threats to “assault, kidnap, or murder” ICE agents.

DHS claims that Meta must comply with the subpoena because the government needs to investigate a “serious” threat “to the safety of its agents and the performance of their duties.”

On Wednesday, a US district judge will hear arguments to decide whether Doe is right or whether DHS can broadly unmask critics online by claiming it’s investigating supposed threats to ICE agents. Doe alleged in a lawsuit filed last October that DHS officials have confirmed they plan to use that power to criminally prosecute critics posting ICE videos online.

DHS seeking “unlimited subpoena authority”

In its court filings, DHS has maintained that it is authorized to investigate the group and that this compelling interest supersedes Doe’s First Amendment rights to post anonymously.

According to Doe’s most recent court filing, DHS is pushing a broad reading of a statute that empowers DHS to subpoena information about the “importation/exportation of merchandise”—like records to determine duties owed or information to unmask a drug smuggler or child sex trafficker. DHS claims the statute isn’t just about imports and exports but also authorizes DHS to seize information about anyone they can tie to an investigation of potential crimes that violate US customs laws.

However, it seems to make no sense, Doe argued, that Congress would “silently embed unlimited subpoena authority in a provision keyed to the importation of goods.” Doe hopes the US district judge will agree that DHS’s summons was unconstitutional.

“The subscriber information for social media accounts publishing speech critical of ICE that DHS seeks is completely unrelated to the importation/exportation of merchandise; the records are outside the scope of DHS’s summons power,” Doe alleged.

And even if the court accepts DHS’s reading of the statute, DHS has not established that unmasking the owner of the community watch accounts would be relevant to any legitimate criminal investigation, Doe alleged.

Doe’s posts were “pretty innocuous,” lawyer says

To convince the court that the case was really about chilling speech, Doe attached every post made on the group’s Facebook and Instagram feeds. None show threats or arguably implicit threats to “assault, kidnap, or murder any federal official,” as DHS claimed. Instead, the users shared “information and resources about immigrant rights, due process rights, fundraising, and vigils,” Doe said.

Ariel Shapell, an attorney representing Doe at the American Civil Liberties Union of Pennsylvania, told Ars that “if you go and look at the content on the Facebook and Instagram profiles at issue here, it’s pretty innocuous.”

DHS claimed to have received information about the group supposedly “stalking and gathering of intelligence on federal agents involved in ICE operations.” However, Doe argued that “unsurprisingly, neither DHS nor its declarant cites any post even allegedly constituting any such threat. To the contrary, all posts on these social media accounts constitute speech addressing important public issues fully protected under the First Amendment.”

“Reporting on, or even livestreaming, publicly occurring immigration operations is fully protected First Amendment activity,” Doe argued. “DHS does not, and cannot, show how such conduct constitutes an assault, kidnapping, or murder of a federal law enforcement officer, or a threat to do any of those things.”

Anti-ICE backlash mounting amid ongoing protests

Doe’s motion to quash the subpoena arrives as recent YouGov polling suggests that American support for ICE has reached a tipping point. YouGov’s poll found that more people disapprove of how ICE is handling its job than approve, in the aftermath of nationwide anti-ICE protests over Renee Good’s killing. ICE critics have used footage of tragic events—like Good’s death and eight other ICE shootings since September—to support calls to remove the agency from embattled communities or abolish it entirely.

As sharing ICE footage has swayed public debate, DHS has seemingly sought to subpoena Meta and possibly other platforms for subscriber information.

In October, Meta refused to provide names of users associated with Doe’s accounts—as well as “postal code, country, all email address(es) on file, date of account creation, registered telephone numbers, IP address at account signup, and logs showing IP address and date stamps for account accesses”—without further information from DHS. Meta then gave Doe the opportunity to move to quash the subpoena to stop the company from sharing information.

That request came about a week after DHS requested similar information from Meta about six Instagram community watch groups that shared information about ICE activity in Los Angeles and other locations. DHS withdrew those requests after account holders defended their First Amendment rights and filed motions to quash the subpoenas, Doe’s court filing said.

It’s unclear why DHS withdrew those subpoenas but maintained Doe’s. DHS has alleged that the government’s compelling interest in Doe’s identity outweighs First Amendment rights to post anonymously online. The agency also claimed it has met its burden to unmask Doe as “someone who is allegedly involved in threatening ICE agents and impeding the performance of their duties,” which supposedly “touches DHS’s investigation into threats to ICE agents and impediments to the performance of their duties.”

Whether Doe will prevail is hard to say, but Politico reported that DHS’s defense will rest on its argument that posting videos and images of ICE officers, along with warnings about arrests, constitutes criminal activity. It may weaken DHS’s case that Border Patrol Tactical Commander Greg Bovino recently circulated a “legal refresher” for agents in the field, reminding them that protestors are allowed to take photos and videos of “an officer or operation in public,” independent journalist Ken Klippenstein reported.

Shapell told Ars that there seems to be “a lot of distance” between the content posted on Doe’s accounts and relevant evidence that could be used in DHS’s alleged investigation into criminal activity. And meanwhile, “there are just very clear First Amendment rights here to associate with other people anonymously online and to discuss political opinions online anonymously,” Shapell said, which the judge may strongly uphold as core protected activity as threats of government retaliation mount.

“These summonses chill people’s desire to communicate about these sorts of incredibly important developments on the Internet, even anonymously, when there’s a threat that they could be unmasked and investigated for this really core First Amendment protected activity,” Shapell said.

A win could reassure Meta users that they can continue posting about ICE online without fear of retaliation should Meta be pressed to share their information.

Ars could not immediately reach DHS for comment. Meta declined to comment, only linking Ars to an FAQ to help users understand how the platform processes government requests.

Hegseth wants to integrate Musk’s Grok AI into military networks this month

On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an “AI acceleration strategy” for the Department of Defense. The strategy, he said, will “unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future.”

As part of the plan, Hegseth directed the DOD’s Chief Digital and Artificial Intelligence Office to use its full authority to enforce department data policies, making information available across all IT systems for AI applications.

“AI is only as good as the data that it receives, and we’re going to make sure that it’s there,” Hegseth said.

If implemented, Grok would join other AI models the Pentagon has adopted in recent months. In July 2025, the Department of Defense issued contracts worth up to $200 million each to four companies (Anthropic, Google, OpenAI, and xAI) to develop AI agent systems across different military operations. In December 2025, the department selected Google’s Gemini as the foundation for GenAI.mil, an internal AI platform for military use.

The science of how (and when) we decide to speak out—or self-censor

The US has adopted more of a middle-ground approach, essentially letting private companies decide what they want to do. Daymude and his co-authors wanted to investigate these markedly different approaches, so they developed a computational agent-based simulation that models how individuals weigh the desire to express dissent against the fear of punishment. The model also incorporates how an authority adjusts its surveillance and policies to minimize dissent at the lowest possible enforcement cost.

“It’s not some kind of learning theory thing,” said Daymude. “And it’s not rooted in empirical statistics. We didn’t go out and ask 1000 people, ‘What would you do if faced with this situation? Would you dissent or self-censor?’ and then build that data into the model. Our model allows us to embed some assumptions about how we think people behave broadly, but then lets us explore parameters. What happens if you’re more or less bold? What happens if punishments are more or less severe? An authority is more or less tolerant? And we can make predictions based on our fundamental assumptions about what’s going to happen.”
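To get a feel for that kind of parameter exploration, here is a minimal, illustrative agent-based sketch in Python. It is not the authors’ model or code; every rule and parameter name in it (boldness, tolerance, budget, step, self_censor) is an assumption chosen only for illustration. Agents weigh a desire to dissent against the expected punishment, punished agents self-censor a little each round, and the authority nudges severity up or down, trying to keep dissent below its tolerance without overspending on enforcement.

import random

def simulate(n_agents=1000, rounds=500, boldness=0.5, tolerance=0.05,
             budget=30.0, step=0.01, self_censor=0.05, seed=0):
    rng = random.Random(seed)
    # Each agent's underlying urge to dissent, drawn once at the start.
    desires = [rng.random() for _ in range(n_agents)]
    severity = 0.0  # the authority's current punishment level
    rate = 0.0
    for _ in range(rounds):
        # An agent dissents when its boldness-weighted desire outweighs
        # the expected punishment (the current severity).
        dissenting = [i for i, d in enumerate(desires) if boldness * d > severity]
        rate = len(dissenting) / n_agents
        # The authority pays to punish every dissenter at the current severity.
        cost = severity * len(dissenting)
        # Punished agents self-censor a little: their desire decays in
        # proportion to how harsh the punishment was.
        for i in dissenting:
            desires[i] *= 1 - self_censor * severity
        # Ramp severity while dissent is too high and the crackdown is
        # affordable; otherwise back off to save enforcement costs.
        if rate > tolerance and cost <= budget:
            severity += step
        else:
            severity = max(0.0, severity - step)
    return {"dissent_rate": round(rate, 3), "severity": round(severity, 2)}

if __name__ == "__main__":
    print(simulate(boldness=0.3))  # a relatively timid population
    print(simulate(boldness=0.9))  # a bolder population

Sweeping parameters like boldness, step, or budget in a toy loop like this is the same kind of what-if exercise the researchers describe, just without the strategic reasoning their simulated agents and authority actually use.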

Let one hundred flowers bloom

According to their model, the most extreme case is an authoritarian government that adopts a draconian punishment strategy, which effectively represses all dissent in the general population. “Everyone’s best strategic choice is just to say nothing at this point,” said Daymude. “So why doesn’t every authoritarian government on the planet just do this?” That led them to look more closely at the dynamics. “Maybe authoritarians start out somewhat moderate,” he said. “Maybe the only way they’re allowed to get to that extreme endpoint is through small changes over time.”

Daymude points to China’s Hundred Flowers Campaign in the 1950s as an illustrative case. Here, Chairman Mao Zedong initially encouraged open critiques of his government before abruptly cracking down aggressively when dissent got out of hand. The model showed that in such a case, dissenters’ self-censorship gradually increased, culminating in near-total compliance over time.

But there’s a catch. “The opposite of the Hundred Flowers is if the population is sufficiently bold, this strategy doesn’t work,” said Daymude. “The authoritarian can’t find the pathway to become fully draconian. People just stubbornly keep dissenting. So every time it tries to ramp up severity, it’s on the hook for it every time because people are still out there, they’re still dissenting. They’re saying, ‘Catch us if you dare.’”

“Yo what?” LimeWire re-emerges in online rush to share pulled “60 Minutes” segment

Early 2000s tool LimeWire used to pirate episode

As Americans scrambled to share the “Inside CECOT” story, assuming that CBS would be working in the background to pull down uploads, a once-blacklisted tool from the early 2000s became a reliable way to keep the broadcast online.

On Reddit, users shared links to a LimeWire torrent, prompting chuckles from people surprised to see the peer-to-peer service best known for infecting parents’ computers with viruses in the 2000s suddenly revived in 2025 to skirt feared US government censorship.

“Yo what,” one user joked, highlighting only the word “LimeWire.” Another user, ironically using the LimeWire logo as a profile picture, responded, “man, who knew my nostalgia prof pic would become relevant again, WTF.”

LimeWire was created in 2000 and quickly became one of the Internet’s favorite services for pirating music until record labels won a 2010 injunction that blocked all file-sharing functionality. As the Reddit thread noted, some LimeWire users were personally targeted in lawsuits.

For a while after the injunction, a fraction of users kept the service alive by running older versions of the software that weren’t immediately disabled. New owners took over LimeWire in 2022, officially relaunching the service. The service’s about page currently notes that “millions of individuals and businesses” use the global file-sharing service today, but for some early Internet users, the name remains a blast from the past.

“Bringing back LimeWire to illegally rip copies of reporting suppressed by the government is definitely some cyberpunk shit,” a Bluesky user wrote.

“We need a champion against the darkness,” a Reddit commenter echoed. “I side with LimeWire.”

Man sues cops who jailed him for 37 days for trolling a Charlie Kirk vigil

While there’s no evidence of anyone interpreting the meme as a violent threat to school kids, there was a “national uproar” when Bushart’s story started spreading online, his complaint noted. Bushart credits media attention for helping to secure his release. Charges were dropped and Bushart was set free the day after a local news station pressed Weems in a TV interview to admit that he knew the meme wasn’t referencing his county’s high school and that no one had ever asked Bushart to clarify his online remarks.

Morrow and Weems have been sued in their personal capacities and could “be on the hook for monetary damages,” a press release from Bushart’s legal team at the Foundation for Individual Rights and Expression (FIRE) said. Perry County, Tennessee, is also a defendant since it’s liable for unconstitutional acts of its sheriffs.

Perry County officials did not immediately respond to Ars’ request to comment.

Bushart suffered “humiliating” arrest

For Bushart, the arrest has shaken up his life. As the primary breadwinner, he’s worried about how he will support himself and his wife after losing his job while in jail. The arrest was particularly “humiliating,” his complaint said, “given his former role as a law enforcement officer.” And despite his release, fear of arrest has chilled his speech, impacting how he expresses his views online.

“I spent over three decades in law enforcement, and have the utmost respect for the law,” Bushart said. “But I also know my rights, and I was arrested for nothing more than refusing to be bullied into censorship.”

Bushart is seeking punitive damages, alleging that cops acted “willfully and maliciously” to omit information from his arrest affidavit that would’ve prevented his detention. One of his lawyers, FIRE senior attorney Adam Steinbaugh, said that a win would protect all social media meme posters from police censorship.

“If police can come to your door in the middle of the night and put you behind bars based on nothing more than an entirely false and contrived interpretation of a Facebook post, no one’s First Amendment rights are safe,” Steinbaugh said.

4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failing to act now could be a slippery slope, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies,” 4chan and Kiwi Farms argued.

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”

Elon Musk’s “thermonuclear” Media Matters lawsuit may be fizzling out


Judge blocks FTC’s Media Matters probe as a likely First Amendment violation.

Media Matters for America (MMFA)—a nonprofit that Elon Musk accused of sparking a supposedly illegal ad boycott on X—won its bid to block a sweeping Federal Trade Commission (FTC) probe that appeared to have rushed to silence Musk’s foe without ever adequately explaining why the government needed to get involved.

In her opinion granting MMFA’s preliminary injunction, US District Judge Sparkle L. Sooknanan—a Joe Biden appointee—agreed that the FTC’s probe likely amounted to retaliation in violation of the First Amendment.

Warning that the FTC’s targeting of reporters was particularly concerning, Sooknanan wrote that the “case presents a straightforward First Amendment violation,” where it’s reasonable to conclude that conservative FTC staffers were perhaps motivated to eliminate a media organization dedicated to correcting conservative misinformation online.

“It should alarm all Americans when the Government retaliates against individuals or organizations for engaging in constitutionally protected public debate,” Sooknanan wrote. “And that alarm should ring even louder when the Government retaliates against those engaged in newsgathering and reporting.”

FTC staff social posts may be evidence of retaliation

In 2023, Musk vowed to file a “thermonuclear” lawsuit because advertisers abandoned X after MMFA published a report showing that major brands’ ads had appeared next to pro-Nazi posts on X. Musk then tried to sue MMFA “all over the world,” Sooknanan wrote, while “seemingly at the behest of Stephen Miller, the current White House Deputy Chief of Staff, the Missouri and Texas Attorneys General” joined Musk’s fight, starting their own probes.

But Musk’s “thermonuclear” attack—attempting to fight MMFA on as many fronts as possible—has appeared to be fizzling out. A federal district court preliminarily enjoined the “aggressive” global litigation strategy, and the same court issued the recent FTC ruling that also preliminarily enjoined the AG probes “as likely being retaliatory in violation of the First Amendment.”

The FTC under the Trump administration appeared to be the next line of offense, supporting Musk’s attack on MMFA. And Sooknanan said that FTC Chair Andrew Ferguson’s own comments in interviews, which characterized Media Matters and the FTC’s probe “in ideological terms,” seem to indicate “at a minimum that Chairman Ferguson saw the FTC’s investigation as having a partisan bent.”

A huge part of the problem for the FTC was social media posts that some senior staffers had written before Ferguson appointed them. Those posts suggested the agency had grown increasingly partisan, perhaps pointedly hiring staffers it knew would help take down groups like MMFA.

As examples, Sooknanan pointed to Joe Simonson, the FTC’s director of public affairs, who had posted that MMFA “employed a number of stupid and resentful Democrats who went to like American University and didn’t have the emotional stability to work as an assistant press aide for a House member.” And Jon Schwepp, Ferguson’s senior policy advisor, had claimed that Media Matters—which he branded as the “scum of the earth”—”wants to weaponize powerful institutions to censor conservatives.” And finally, Jake Denton, the FTC’s chief technology officer, had alleged that MMFA is “an organization devoted to pressuring companies into silencing conservative voices.”

Further, the timing of the FTC investigation—arriving “on the heels of other failed attempts to seek retribution”—seemed to suggest it was “motivated by retaliatory animus,” the judge said. The FTC’s “fast-moving” investigation suggests that Ferguson “was chomping at the bit to ‘take investigative steps in the new administration under President Trump’ to make ‘progressives’ like Media Matters ‘give up,'” Sooknanan wrote.

Musk’s fight continues in Texas, for now

Possibly most damning to the FTC’s case, Sooknanan suggested the FTC has never adequately explained why it’s probing Media Matters. In the “Subject of Investigation” field, the FTC wrote only “see attached,” but the attachment was just a list of specific demands and directions to comply with them.

Eventually, the FTC offered “something resembling an explanation,” Sooknanan said. But their “ultimate explanation”—that Media Matters may have information related to a supposedly illegal coordinated campaign to game ad pricing, starve revenue, and censor conservative platforms—”does not inspire confidence that they acted in good faith,” Sooknanan said. The judge considered it problematic that the FTC never explained why it has reason to believe MMFA has the information it’s seeking. Or why its demand list went “well beyond the investigation’s purported scope,” including “a reporter’s resource materials,” financial records, and all documents submitted so far in Musk’s X lawsuit.

“It stands to reason,” Sooknanan wrote, that the FTC launched its probe “because it wanted to continue the years’ long pressure campaign against Media Matters by Mr. Musk and his political allies.”

In its defense, the FTC argued that all civil investigative demands are initially broad, insisting that MMFA would have had the opportunity to narrow the demands if things had proceeded without the lawsuit. But Sooknanan declined to “consider a hypothetical narrowed” demand list instead of “the actual demand issued to Media Matters,” while noting that the court was “troubled” by the FTC’s suggestion that “the federal Government routinely issues civil investigative demands it knows to be overbroad with the goal of later narrowing those demands presumably in exchange for compliance.”

At this stage, though, the record supports the inference that the FTC was politically motivated to back Musk’s fight, the judge found. “Perhaps the Defendants will establish otherwise later in these proceedings,” Sooknanan wrote. “But at this stage, the record certainly supports that inference.”

As the FTC mulls a potential appeal, the only other major front of Musk’s fight with MMFA is the lawsuit that X Corp. filed in Texas. Musk allegedly expects more favorable treatment in the Texas court, and MMFA is currently pushing to transfer the case to California after previously arguing that Musk was venue shopping by filing the lawsuit in Texas, claiming that it should be “fatal” to his case.

Musk has so far kept the case in Texas, but risking a venue change could be enough to ultimately doom his “thermonuclear” attack on MMFA. To prevent that, X is arguing that it’s “hard to imagine” how changing the venue and starting over with a new judge two years into such complex litigation would best serve the “interests of justice.”

Media Matters, however, has “easily met” requirements to show that substantial damage has already been done—not just because MMFA has struggled financially and stopped reporting on X and the FTC—but because any loss of First Amendment freedoms “unquestionably constitutes irreparable injury.”

The FTC tried to claim that any reputational harm, financial harm, and self-censorship are “self-inflicted” wounds for MMFA. But the FTC did “not respond to the argument that the First Amendment injury itself is irreparable, thereby conceding it,” Sooknanan wrote. That likely weakens the FTC’s case in an appeal.

MMFA declined Ars’ request to comment. But despite the lawsuits reportedly plunging MMFA into a financial crisis, its president, Angelo Carusone, told The New York Times that “the court’s ruling demonstrates the importance of fighting over folding, which far too many are doing when confronted with intimidation from the Trump administration.”

“We will continue to stand up and fight for the First Amendment rights that protect every American,” Carusone said.

Researcher threatens X with lawsuit after falsely linking him to French probe

X claimed that David Chavalarias, “who spearheads the ‘Escape X’ campaign”—which is “dedicated to encouraging X users to leave the platform”—was chosen to assess the data with one of his prior research collaborators, Maziyar Panahi.

“The involvement of these individuals raises serious concerns about the impartiality, fairness, and political motivations of the investigation, to put it charitably,” X alleged. “A predetermined outcome is not a fair one.”

However, Panahi told Reuters that he believes X blamed him “by mistake,” based only on his prior association with Chavalarias. He further clarified that “none” of his projects with Chavalarias “ever had any hostile intent toward X” and threatened legal action to protect himself against defamation if he receives “any form of hate speech” due to X’s seeming error and mischaracterization of his research. An Ars review suggests his research on social media platforms predates Musk’s ownership of X and has probed whether certain recommendation systems potentially make platforms toxic or influence presidential campaigns.

“The fact my name has been mentioned in such an erroneous manner demonstrates how little regard they have for the lives of others,” Panahi told Reuters.

X denies being an “organized gang”

X suggests that it “remains in the dark as to the specific allegations made against the platform,” accusing French police of “distorting French law in order to serve a political agenda and, ultimately, restrict free speech.”

The press release is indeed vague on what exactly French police are seeking to uncover. All French authorities say is that they are probing X for alleged “tampering with the operation of an automated data processing system by an organized gang” and “fraudulent extraction of data from an automated data processing system by an organized gang.” But later, a French magistrate, Laure Beccuau, clarified in a statement that the probe was based on complaints that X is spreading “an enormous amount of hateful, racist, anti-LGBT+ and homophobic political content, which aims to skew the democratic debate in France,” Politico reported.

ChatGPT can now write erotica as OpenAI eases up on AI paternalism

“Following the initial release of the Model Spec (May 2024), many users and developers expressed support for enabling a ‘grown-up mode.’ We’re exploring how to let developers and users generate erotica and gore in age-appropriate contexts through the API and ChatGPT so long as our usage policies are met—while drawing a hard line against potentially harmful uses like sexual deepfakes and revenge porn.”

OpenAI CEO Sam Altman has mentioned the need for a “grown-up mode” publicly in the past as well. While it seems like “grown-up mode” is finally here, it’s not technically a “mode,” but a new universal policy that potentially gives ChatGPT users more flexibility in interacting with the AI assistant.

Of course, uncensored large language models (LLMs) have been around for years at this point, with hobbyist communities online developing them for reasons that range from wanting bespoke written pornography to not wanting any kind of paternalistic censorship.

In July 2023, we reported that the ChatGPT user base started declining for the first time after OpenAI started more heavily censoring outputs due to public and lawmaker backlash. At that time, some users began to use uncensored chatbots that could run on local hardware and were often available for free as “open weights” models.

Three types of iffy content

The Model Spec outlines formalized rules for restricting or generating potentially harmful content while staying within guidelines. OpenAI has divided this kind of restricted or iffy content into three categories of declining severity: prohibited content (“only applies to sexual content involving minors”), restricted content (“includes informational hazards and sensitive personal data”), and sensitive content in appropriate contexts (“includes erotica and gore”).

Under the category of prohibited content, OpenAI says that generating sexual content involving minors is always prohibited, although the assistant may “discuss sexual content involving minors in non-graphic educational or sex-ed contexts, including non-graphic depictions within personal harm anecdotes.”

Under restricted content, OpenAI’s document outlines how ChatGPT should never generate information hazards (like how to build a bomb, make illegal drugs, or manipulate political views) or provide sensitive personal data (like searching for someone’s address).

Under sensitive content, ChatGPT’s guidelines mirror what we stated above: Erotica or gore may only be generated under specific circumstances that include educational, medical, and historical contexts or when transforming user-provided content.

Internet Archive played crucial role in tracking shady CDC data removals


Internet Archive makes it easier to track changes in CDC data online.

When thousands of pages started disappearing from the Centers for Disease Control and Prevention (CDC) website late last week, public health researchers quickly moved to archive deleted public health data.

Soon, researchers discovered that the Internet Archive (IA) offers one of the most effective ways to both preserve online data and track changes on government websites. For decades, IA crawlers have collected snapshots of the public Internet, making it easier to compare current versions of websites to historic versions. And IA also allows users to upload digital materials to further expand the web archive. Both aspects of the archive immediately proved useful to researchers assessing how much data the public risked losing during a rapid purge following a pair of President Trump’s executive orders.

Part of a small group of researchers who managed to download the entire CDC website within days, virologist Angela Rasmussen helped create a public resource that combines CDC website information with deleted CDC datasets. Those datasets, many of which were previously in the public domain for years, were uploaded to IA by an anonymous user, “SheWhoExists,” on January 31. Moving forward, Rasmussen told Ars that IA will likely remain a go-to tool for researchers attempting to closely monitor for any unexpected changes in access to public data.

IA “continually updates their archives,” Rasmussen said, which makes IA “a good mechanism for tracking modifications to these websites that haven’t been made yet.”
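That capture history is also exposed programmatically through the Internet Archive’s public CDX API, which is one way researchers can script comparisons between current and historic versions of a page. Below is a minimal Python sketch; the CDX endpoint and its JSON header fields are the API’s documented defaults, while the example page URL is purely illustrative and not drawn from the researchers’ work.

import json
import urllib.parse
import urllib.request

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def list_snapshots(page_url, limit=10):
    # Ask the CDX API for up to `limit` captures of the page, returned as JSON rows.
    query = urllib.parse.urlencode({"url": page_url, "output": "json", "limit": limit})
    with urllib.request.urlopen(f"{CDX_ENDPOINT}?{query}") as response:
        rows = json.load(response)
    if not rows:
        return  # no captures found
    # The first row is a header: urlkey, timestamp, original, mimetype,
    # statuscode, digest, length. Every later row describes one capture.
    header, captures = rows[0], rows[1:]
    for capture in captures:
        record = dict(zip(header, capture))
        # Each capture can be viewed at https://web.archive.org/web/<timestamp>/<original>
        print(record["timestamp"], record["digest"], record["original"])

if __name__ == "__main__":
    list_snapshots("cdc.gov/mpox/index.html")  # illustrative URL, not from the article

Because every capture row carries a content digest, a change in that digest between two timestamps is a quick signal that the page itself changed between crawls.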

The CDC website is being overhauled to comply with two executive orders from January 20, the CDC told Ars. The “Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government” order requires government agencies to remove LGBTQ+ language that Trump claimed denies “the biological reality of sex” and is likely driving most of the CDC’s changes to public health resources. The other executive order the CDC cited, “Ending Radical and Wasteful Government DEI Programs and Preferencing,” would seemingly impact little beyond CDC employment practices.

Additionally, “the Office of Personnel Management has provided initial guidance on both Executive Orders and HHS and divisions are acting accordingly to execute,” the CDC told Ars.

Rasmussen told Ars that the deletion of CDC datasets is “extremely alarming” and “not normal.” While some deleted pages have since been restored in altered versions, removing gender ideology from CDC guidance could put Americans at heightened risk. That’s another emerging problem that IA’s snapshots could help researchers and health professionals resolve.

“I think the average person probably doesn’t think that much about the CDC’s website, but it’s not just a matter of like, ‘Oh, we’re going to change some wording’ or ‘we’re going to remove these data,’” Rasmussen said. “We are actually going to retool all the information that’s there to remove critical information about public health that could actually put people in danger.”

For example, altered Mpox transmission data removed “all references to men who have sex with men,” Rasmussen said. “And in the US those are the people who are not the only people at risk, but they’re the people who are most at risk of being exposed to Mpox. So, by removing that DEI language, you’re actually depriving people who are at risk of information they could use to protect themselves, and that eventually will get people hurt or even killed.”

Likely the biggest frustration for researchers scrambling to preserve data is dealing with broken links. On social media, Rasmussen has repeatedly called for help flagging broken links to ensure her team’s archive is as useful as possible.

Rasmussen’s group isn’t the only effort to preserve the CDC data. Some are creating niche archives focused on particular topics, like journalist Jessica Valenti, who created an archive of CDC guidelines on reproductive rights issues, sexual health, intimate partner violence, and other data the CDC removed online.

Niche archives could make it easier for some researchers to quickly survey missing data in their field, but Rasmussen’s group is hoping to take next steps to make all the missing CDC data more easily discoverable in their archive.

“I think the next step,” Rasmussen said, “would be to try to fix anything in there that’s broken, but also look into ways that we could maybe make it more browsable and user-friendly for people who may not know what they’re looking for or may not be able to find what they’re looking for.”

CDC advisers demand answers

The CDC has been largely quiet about the deleted data, only pointing to Trump’s executive orders to justify the removals. That could change by February 7, the deadline by which a congressionally mandated advisory committee to the CDC’s acting director, Susan Monarez, has asked for answers to a list of questions about the data removals posed in an open letter.

“It has been reported through anonymous sources that the website changes are related to new executive orders that ban the use of specific words and phrases,” their letter said. “But as far as we are aware, these unprecedented actions have yet to be explained by CDC; news stories indicate that the agency is declining to comment.”

At the top of the committee’s list of questions is likely the one frustrating researchers most: “What was the rationale for making these datasets and websites inaccessible to the public?” But the committee also importantly asked what analysis was done “of the consequences of removing access to these datasets and website” prior to the removals. They also asked how deleted data would be safeguarded and when data would be restored.

It’s unclear if the CDC will be motivated to respond by the deadline. Ars reached out to one of the committee members, Joshua Sharfstein—a physician and vice dean for Public Health Practice and Community Engagement at Johns Hopkins University—who confirmed that as of this writing, the CDC has not yet responded. And the CDC did not respond to Ars’ request to comment on the letter.

Rasmussen told Ars that even temporary removals of CDC guidance can disrupt important processes keeping Americans healthy. Among the potentially most consequential pages briefly removed were recommendations from the congressionally mandated Advisory Committee on Immunization Practices (ACIP).

Those recommendations are used by insurance companies to decide who gets reimbursed for vaccines and by physicians to determine vaccine eligibility, and Rasmussen said they “are incredibly important for the entire population to have access to any kind of vaccination.” And while, for example, the Mpox vaccine recommendations were eventually restored unaltered, Rasmussen told Ars she suspects that “one of the reasons” ACIP has so far been spared interference is that it’s mandated by Congress.

Still, ACIP could be weakened by the new administration, Rasmussen suggested. She warned that Trump’s pick for CDC director, Dave Weldon, “is an anti-vaxxer” (with a long history of falsely linking vaccines to autism) who may decide to replace ACIP committee members with anti-vaccine advocates or move to dissolve ACIP. Any changes in recommendations could mean “insurance companies aren’t going to cover vaccinations [and that] physicians will not recommend vaccination.” And that could mean “vaccination will go down and we’ll start having outbreaks of some of these vaccine-preventable diseases.”

“If there’s a big polio outbreak, that is going to result in permanently disabled children, dead children—it’s really, really serious,” Rasmussen said. “So I think that people need to understand that this isn’t just like, ‘Oh, maybe wear a mask when you’re at the movie theater’ kind of CDC guidance. This is guidance that’s really fundamental to our most basic public health practices, and it’s going to cause widespread suffering and death if this is allowed to continue.”

Seeding deleted data and doing science to fight back

On Bluesky, Rasmussen led one of many charges to compile archived links and download CDC data so that researchers can reference every available government study when advancing public health knowledge.

“These data are public and they are ours,” Rasmussen posted. “Deletion disobedience is one way to fight back.”

As Rasmussen sees it, deleting CDC data is “theft” from the public domain, and archiving CDC data is simply taking “back what is ours.” But at the same time, her team is taking steps to be sure the data it collected can be lawfully preserved. Because the group has not copied the CDC website and re-hosted it on its own server, they expect their archive should be deemed lawful and remain online.

“I don’t put it past this administration to try to shut this stuff down by any means possible,” Rasmussen told Ars. “And we wanted to make sure there weren’t any sort of legal loopholes that would jeopardize anybody in the group, but also that would potentially jeopardize the data.”

It’s not clear if some data has already been lost. A Reddit user who appears to be the same person who uploaded the deleted datasets to IA clarified that while the “full” archive “should contain all public datasets that were available” before “anything was scrubbed,” it likely only includes “most” of the “metadata and attachments.” So researchers who download the data may still struggle to fill in some blanks.

To help researchers quickly access the missing data, anyone can help seed the IA datasets, the Reddit user said in another post providing seeding and mirroring instructions. Currently, dozens of users are seeding the collection for a couple hundred peers.

“Thank you to everyone who requested this important data, and particularly to those who have offered to mirror it,” the Reddit user wrote.

As Rasmussen works with her group to make their archive more user-friendly, her plan is to help as many researchers as possible fight back against data deletion by continuing to reference deleted data in their research. She suggested that effort—doing science that ignores Trump’s executive orders—is perhaps a more powerful way to resist and defend public health data than joining in loud protests, which many researchers based in the US (and perhaps relying on federal funding) may not be able to afford to do.

“Just by doing things and standing up for science with your actions, rather than your words, you can really make, I think, a big difference,” Rasmussen said.
