

Claims of TikTok whistleblower may not add up


The United States government is currently poised to outlaw TikTok. Little of the evidence that convinced Congress the app may be a national security threat has been shared publicly, in some cases because it remains classified. But one former TikTok employee turned whistleblower, who claims to have driven key news reporting and congressional concerns about the app, has now come forward.

Zen Goziker worked at TikTok as a risk manager, a role that involved protecting the company from external security and reputational threats. In a wrongful termination lawsuit filed against TikTok’s parent company ByteDance in January, he alleges he was fired in February 2022 for refusing “to sign off” on Project Texas, a $1.5 billion program that TikTok designed to assuage US government security concerns by storing American data on servers managed by Oracle.

Goziker worked at TikTok for only six months. He didn’t hold a senior position inside the company. His lawsuit, along with a second one he filed in March against several US government agencies, makes a number of improbable claims. He asserts that he was put under 24-hour surveillance by TikTok and the FBI while working remotely in Mexico. He claims that US attorney general Merrick Garland, director of national intelligence Avril Haines, and other top officials “wickedly instigated” his firing. And he states that the FBI helped the CIA share his private information with foreign governments. The suits do not appear to include evidence for any of these claims.

“This lawsuit is full of outrageous claims that lack merit and comes from an individual who significantly exaggerates his role with a company he worked at for merely six months,” TikTok spokesperson Michael Hughes said in a statement.

Yet court records and emails viewed by WIRED suggest that when Goziker raised the alarm about his ex-employer’s links to China, he found a ready audience. After he was fired, court documents say, Goziker began meeting with elected officials, law enforcement agencies, and journalists to allege that he had discovered proof that TikTok’s software could send US data to Toutiao, a ByteDance app in China. That claim directly conflicted with TikTok executives’ assertions that the two companies operated separately.

Goziker says in court filings that what he saw made it necessary to reassess Project Texas. He also alleges that his account of the internal connection to China formed the basis of an influential Washington Post story published in March last year, which said the concerns came from “a former risk manager at TikTok.”

TikTok officials were quoted in that article as saying the allegations were “unfounded,” and that the employee had discovered “nothing more than a naming convention and technical relic.” The Washington Post said it does not comment on sourcing.

“I am free, I am honest, and I am doing this only because I am an American and because USA desperately need help and I cannot keep this truth away from PUBLIC,” Goziker said in an email to WIRED.

His March lawsuit alleging US officials conspired with TikTok to have him fired was filed against Garland, Haines, Secretary of Homeland Security Alejandro Mayorkas, and the agencies they work for.

“Goziker’s main point is that the executives in the American company TikTok Inc. and certain executives from the American federal government have colluded to organize a fraud scheme,” Sean Jiang, Goziker’s lawyer in the case against the US government, told WIRED in an email. The lawsuits do not appear to contain evidence of such a scheme. The Department of Homeland Security and Office of the Director of National Intelligence did not respond to requests for comment. The Department of Justice declined to comment.

Jiang calls the House’s recent passage of a bill that could force ByteDance to sell off TikTok “problematic,” because it “blames ByteDance instead of TikTok Inc for the wrongdoings of the American executives.” He says Goziker would prefer to see TikTok subjected to audits and a new corporate structure.



Users shocked to find Instagram limits political content by default

“I had no idea”

Instagram never directly told users it was limiting political content by default.


Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.

“Did [y’all] know Instagram was actively limiting the reach of political content like this?!” an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. “I had no idea ’til I saw this comment and I checked my settings and sho nuff political content was limited.”

“Instagram quietly introducing a ‘political’ content preference and turning on ‘limit’ by default is insane?” wrote another X user named Matt in a post with nearly 40,000 views.

Instagram apparently did not notify users directly on the platform when this change happened.

Instead, Instagram rolled out the change in February, announcing in a blog post that the platform doesn’t “want to proactively recommend political content from accounts you don’t follow.” That post confirmed that Meta “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads,” so that those platforms can remain “a great experience for everyone.”

“This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more,” Meta’s spokesperson Dani Lever told Ars. “We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political.”

To change the setting, users can navigate to Instagram’s menu for “settings and activity” in their profiles, where they can update their “content preferences.” On this menu, “political content” is the last item under a list of “suggested content” controls that allow users to set preferences for what content is recommended in their feeds.

There are currently two options for controlling what political content users see. Choosing “don’t limit” means “you might see more political or social topics in your suggested content,” the app says. By default, all users are set to “limit,” which means “you might see less political or social topics.”

“This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users,” Instagram’s settings menu explains. “It does not affect content from accounts you follow. This setting also applies to Threads.”

For general Instagram and Threads users, this change primarily limits which posts can be recommended to them, but for influencers using professional accounts, the stakes can be higher. The Washington Post reported that news creators were angered by the update, insisting that it diminished the value of the platform for reaching users who aren’t actively seeking political content.

“The whole value-add for social media, for political people, is that you can reach normal people who might not otherwise hear a message that they need to hear, like, abortion is on the ballot in Florida, or voting is happening today,” Keith Edwards, a Democratic political strategist and content creator, told The Post.

Meta’s blog noted that “professional accounts on Instagram will be able to use Account Status to check their eligibility to be recommended based on whether they recently posted political content. From Account Status, they can edit or remove recent posts, request a review if they disagree with our decision, or stop posting this type of content for a period of time, in order to be eligible to be recommended again.”

Ahead of a major election year, Meta’s change could impact political outreach attempting to inform voters. The change also came amid speculation that Meta had been “shadowbanning” users posting pro-Palestine content since the start of the Israel-Hamas war, The Markup reported.

“Our investigation found that Instagram heavily demoted nongraphic images of war, deleted captions and hid comments without notification, suppressed hashtags, and limited users’ ability to appeal moderation decisions,” The Markup reported.

Meta appears to be interested in shifting away from its reputation as a platform where users expect political content—and misinformation—to thrive. Last year, The Wall Street Journal reported that Meta wanted out of politics and planned to “scale back how much political content it showed users,” after criticism over how the platform handled content related to the January 6 Capitol riot.

The decision to limit recommended political content on Instagram and Threads, Meta’s blog said, extends Meta’s “existing approach to how we treat political content.”

“People have told us they want to see less political content, so we have spent the last few years refining our approach on Facebook to reduce the amount of political content—including from politicians’ accounts—you see in Feed, Reels, Watch, Groups You Should Join, and Pages You May Like,” Meta wrote in a February blog update.

“As part of this, we aim to avoid making recommendations that could be about politics or political issues, in line with our approach of not recommending certain types of content to those who don’t wish to see it,” Meta’s blog continued, while at the same time, “preserving your ability to find and interact with political content that’s meaningful to you if that’s what you’re interested in.”

While platforms typically update users directly on the platform when terms of service change, that wasn’t the case for this update, which simply added new controls for users. That’s why many users who prefer to be recommended political content—and apparently missed Meta’s announcement and subsequent media coverage—expressed shock to discover that Meta was limiting what they see.

On X, even Instagram users who don’t love seeing political content are currently rallying to raise awareness and share tips on how to update the setting.

“This is actually kinda wild that Instagram defaults everyone to this,” one user named Laura wrote. “Obviously political content is toxic but during an election season it’s a little weird to just hide it from everyone?”



Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit


It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers; X contends that the CCDH’s reporting is misleading and that ads are rarely served on extremist content.

But at a hearing Thursday, US district judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer pointedly noted that one reason X might not have brought a defamation suit is that the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.
