
Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.



Concerns over addicted kids spur probe into Meta and its use of dark patterns

Protecting the vulnerable —

EU is concerned Meta isn’t doing enough to protect children using its apps.

An iPhone screen displays the app icons for WhatsApp, Messenger, Instagram, and Facebook.

Getty Images | Chesnot

Brussels has opened an in-depth probe into Meta over concerns it is failing to do enough to protect children from becoming addicted to social media platforms such as Instagram.

The European Commission, the EU’s executive arm, announced on Thursday it would look into whether the Silicon Valley giant’s apps were reinforcing “rabbit hole” effects, where users get drawn ever deeper into online feeds and topics.

EU investigators will also look into whether Meta, which owns Facebook and Instagram, is complying with legal obligations to provide appropriate age-verification tools to prevent children from accessing inappropriate content.

The probe is the second into the company under the EU’s Digital Services Act. The landmark legislation is designed to police content online, with sweeping new rules on the protection of minors.

It also has mechanisms to force Internet platforms to reveal how they are tackling misinformation and propaganda.

The DSA, which was approved last year, imposes new obligations on very large online platforms with more than 45 million users in the EU. If Meta is found to have broken the law, Brussels can impose fines of up to 6 percent of a company’s global annual turnover.

Repeat offenders can even face bans in the single market as an extreme measure to enforce the rules.

Thierry Breton, commissioner for internal market, said the EU was “not convinced” that Meta “has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram.”

“We are sparing no effort to protect our children,” Breton added.

Meta said: “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”

In the investigation, the commission said it would focus on whether Meta’s platforms were putting in place “appropriate and proportionate measures to ensure a high level of privacy, safety, and security for minors.” It added that it was placing special emphasis on default privacy settings for children.

Last month, the EU opened the first probe into Meta under the DSA over worries the social media giant is not properly curbing disinformation from Russia and other countries.

Brussels is especially concerned whether the social media company’s platforms are properly moderating content from Russian sources that may try to destabilize upcoming elections across Europe.

Meta defended its moderating practices and said it had appropriate systems in place to stop the spread of disinformation on its platforms.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Professor sues Meta to allow release of feed-killing tool for Facebook

themotioncloud/Getty Images

Ethan Zuckerman wants to release a tool that would allow Facebook users to control what appears in their newsfeeds. His privacy-friendly browser extension, Unfollow Everything 2.0, is designed to essentially give users a switch to turn the newsfeed on and off whenever they want, providing a way to eliminate or curate the feed.

Ethan Zuckerman, a professor at University of Massachusetts Amherst, is suing Meta to release a tool allowing Facebook users to “unfollow everything.” (Photo by Lorrie LeJeune)

The tool is nearly ready to be released, Zuckerman told Ars, but the University of Massachusetts Amherst associate professor is afraid that Facebook owner Meta might threaten legal action if he goes ahead. And his fears appear well-founded. In 2021, Meta sent a cease-and-desist letter to the creator of the original Unfollow Everything, Louis Barclay, leading that developer to shut down his tool after thousands of Facebook users had eagerly downloaded it.

Zuckerman is suing Meta, asking a US district court in California to invalidate Meta’s past arguments against developers like Barclay and rule that Meta would have no grounds to sue if he released his tool.

Zuckerman insists that he’s “suing Facebook to make it better.” In picking this unusual legal fight with Meta, the professor—seemingly for the first time ever—is attempting to tip Section 230’s shield away from Big Tech and instead protect third-party developers from giant social media platforms.

To do this, Zuckerman is asking the court to consider a novel Section 230 argument relating to an overlooked provision of the law that Zuckerman believes protects the development of third-party tools that allow users to curate their newsfeeds to avoid objectionable content. His complaint cited case law and argued:

Section 230(c)(2)(B) immunizes from legal liability “a provider of software or enabling tools that filter, screen, allow, or disallow content that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Through this provision, Congress intended to promote the development of filtering tools that enable users to curate their online experiences and avoid content they would rather not see.

Unfollow Everything 2.0 falls in this “safe harbor,” Zuckerman argues, partly because “the purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed.”

Ramya Krishnan, a senior staff attorney at the Knight Institute who helped draft Zuckerman’s complaint, told Ars that some Facebook users are concerned that the newsfeed “prioritizes inflammatory and sensational speech,” and they “may not want to see that kind of content.” By turning off the feed, Facebook users could choose to use the platform the way it was originally designed, avoiding being served objectionable content by blanking the newsfeed and manually navigating to only the content they want to see.

“Users don’t have to accept Facebook as it’s given to them,” Krishnan said in a press release provided to Ars. “The same statute that immunizes Meta from liability for the speech of its users gives users the right to decide what they see on the platform.”

Zuckerman, who considers himself “old to the Internet,” uses Facebook daily and even reconnected with and began dating his now-wife on the platform. He has a “soft spot” in his heart for Facebook and still finds the platform useful to keep in touch with friends and family.

But while he’s “never been in the ‘burn it all down’ camp,” he has watched social media evolve to give users less control over their feeds and believes “that the dominance of a small number of social media companies tends to create the illusion that the business model adopted by them is inevitable,” his complaint said.



Meta relaxes “incoherent” policy requiring removal of AI videos

On Friday, Meta announced policy updates to stop censoring harmless AI-generated content and instead begin “labeling a wider range of video, audio, and image content as ‘Made with AI.'”

Meta’s policy updates came after deciding not to remove a controversial post edited to show President Joe Biden seemingly inappropriately touching his granddaughter’s chest, with a caption calling Biden a “pedophile.” The Oversight Board had agreed with Meta’s decision to leave the post online while noting that Meta’s current manipulated media policy was too “narrow,” “incoherent,” and “confusing to users.”

Previously, Meta would only remove “videos that are created or altered by AI to make a person appear to say something they didn’t say.” The Oversight Board warned that this policy failed to address other manipulated media, including “cheap fakes,” manipulated audio, or content showing people doing things they’d never done.

“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” Monika Bickert, Meta’s vice president of content policy, wrote in a blog. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

Starting in May 2024, Meta will add “Made with AI” labels to any content detected as AI-generated, as well as to any content that users self-disclose as AI-generated.

Meta’s Oversight Board had also warned that Meta removing AI-generated videos that did not directly violate platforms’ community standards was threatening to “unnecessarily risk restricting freedom of expression.” Moving forward, Meta will stop censoring content that doesn’t violate community standards, agreeing that a “less restrictive” approach to manipulated media by adding labels is better.

“If we determine that digitally created or altered images, video, or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert wrote. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

Meta confirmed that, in July, it will stop censoring AI-generated content that doesn’t violate rules restricting things like voter interference, bullying and harassment, violence, and incitement.

“This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media,” Bickert explained in the blog.

Finally, Meta adopted the Oversight Board’s recommendation to “clearly define in a single unified Manipulated Media policy the harms it aims to prevent—beyond users being misled—such as preventing interference with the right to vote and to participate in the conduct of public affairs.”

The Oversight Board issued a statement provided to Ars, saying that members “are pleased that Meta will begin labeling a wider range of video, audio, and image content as ‘made with AI’ when they detect AI image indicators or when people indicate they have uploaded AI content.”

“This will provide people with greater context and transparency for more types of manipulated media, while also removing posts which violate Meta’s rules in other ways,” the Oversight Board said.



Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

A promotional image for Sorry for Your Loss, a Facebook Watch original scripted series featuring Elizabeth Olsen.

Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith’s Red Table Talk, on Facebook Watch. Meta’s streaming business, once viewed as competition for the likes of YouTube and Netflix, is effectively dead: Facebook doesn’t produce original series, and Facebook Watch is no longer available as a video-streaming app.

The streaming business’ demise seemed tied to broader cost cuts at Meta that also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming ambitions to appease one of its biggest ad customers: Netflix.

Facebook allegedly gave Netflix creepy privileges

As spotted by Gizmodo, a letter was filed on April 14 in relation to a class-action antitrust suit brought by Meta customers, accusing Meta of anticompetitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix’s founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

The letter alleges that Netflix’s relationship with Facebook was remarkably strong due to the former’s ad spend with the latter and that Hastings directed “negotiations to end competition in streaming video” from Facebook.

One of the first questions that may come to mind is why a company like Facebook would allow Netflix to influence such a major business decision. The litigation claims the companies formed a lucrative business relationship that included Facebook allegedly giving Netflix access to Facebook users’ private messages:

By 2013, Netflix had begun entering into a series of “Facebook Extended API” agreements, including a so-called “Inbox API” agreement that allowed Netflix programmatic access to Facebook’s users’ private message inboxes, in exchange for which Netflix would “provide to FB a written report every two weeks that shows daily counts of recommendation sends and recipient clicks by interface, initiation surface, and/or implementation variant (e.g., Facebook vs. non-Facebook recommendation recipients). … In August 2013, Facebook provided Netflix with access to its so-called “Titan API,” a private API that allowed a whitelisted partner to access, among other things, Facebook users’ “messaging app and non-app friends.”

Meta said it rolled out end-to-end encryption “for all personal chats and calls on Messenger and Facebook” in December. And in 2018, Facebook told Vox that it doesn’t use private messages for ad targeting. But a few months later, The New York Times, citing “hundreds of pages of Facebook documents,” reported that Facebook “gave Netflix and Spotify the ability to read Facebook users’ private messages.”

Meta didn’t respond to Ars Technica’s request for comment. The company told Gizmodo that it has standard agreements with Netflix currently but didn’t answer the publication’s specific questions.



Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.

Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding” so that they wouldn’t connect the kits to Onavo unless they used a specialized tool like Wireshark to analyze the kits. TechCrunch reported in 2019 that sometimes teens were paid to install these kits. After that report, Facebook promptly shut down the project.

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”
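The interception described above works because a man-in-the-middle proxy must substitute its own certificate for the real server's, which leaves a detectable trace: the certificate the client receives is no longer signed by the service's actual CA. As a minimal, hypothetical sketch (none of this is Facebook's or Onavo's actual code; the function names are illustrative), a user could inspect who issued the certificate a connection presents:

```python
import socket
import ssl

def issuer_organization(cert: dict) -> str:
    """Extract the issuer's organizationName from a getpeercert()-style dict."""
    # ssl.SSLSocket.getpeercert() returns the issuer as a tuple of RDN tuples,
    # e.g. ((('organizationName', 'DigiCert Inc'),), (('commonName', '...'),)).
    issuer = dict(item for rdn in cert["issuer"] for item in rdn)
    return issuer.get("organizationName", "<unknown>")

def peer_certificate(host: str, port: int = 443) -> dict:
    """Fetch the TLS certificate presented by host:port."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# If the issuer turns out to be a locally installed or unfamiliar CA rather
# than a well-known public one, the connection may be passing through an
# intercepting proxy.
```

Calling `issuer_organization(peer_certificate("example.com"))` reveals the signing CA; an intercepting "kit" of the sort described would have to present a certificate from its own authority instead, which is essentially what a tool like Wireshark would have exposed.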

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.



Apple, Google, and Meta are failing DMA compliance, EU suspects

EU Commissioner for Internal Market Thierry Breton talks to media about non-compliance investigations against Google, Apple, and Meta under the Digital Markets Act (DMA).

Not even three weeks after the European Union’s Digital Markets Act (DMA) took effect, the European Commission (EC) announced Monday that it is already probing three out of six gatekeepers—Apple, Google, and Meta—for suspected non-compliance.

Apple will need to prove that changes to its app store and existing user options to swap out default settings easily are sufficient to comply with the DMA.

Similarly, Google’s app store rules will be probed, as well as any potentially shady practices unfairly preferencing its own services—like Google Shopping and Hotels—in search results.

Finally, Meta’s “Subscription for No Ads” option—allowing Facebook and Instagram users to opt out of personalized ad targeting for a monthly fee—may not fly under the DMA. Even if Meta follows through on its recent offer to slash these fees by nearly 50 percent, the model could be deemed non-compliant.

“The DMA is very clear: gatekeepers must obtain users’ consent to use their personal data across different services,” the EC’s commissioner for internal market, Thierry Breton, said Monday. “And this consent must be free!”

In total, the EC announced five investigations: two against Apple, two against Google, and one against Meta.

“We suspect that the suggested solutions put forward by the three companies do not fully comply with the DMA,” antitrust chief Margrethe Vestager said, ordering companies to “retain certain documents” viewed as critical to assessing evidence in the probe.

The EC’s investigations are expected to conclude within one year. If tech companies are found non-compliant, they risk fines of up to 10 percent of total worldwide turnover. Any repeat violations could spike fines to 20 percent.
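Those caps reduce to simple arithmetic. A sketch, using a hypothetical turnover figure rather than any company's actual number:

```python
def dma_fine_cap(worldwide_turnover: float, repeat_violation: bool = False) -> float:
    """Maximum DMA fine: 10% of total worldwide turnover, 20% for repeat violations."""
    rate = 0.20 if repeat_violation else 0.10
    return worldwide_turnover * rate

# Hypothetical gatekeeper with $100 billion in annual worldwide turnover:
first_offense_cap = dma_fine_cap(100e9)         # up to $10 billion
repeat_offense_cap = dma_fine_cap(100e9, True)  # up to $20 billion
```

These are ceilings, not automatic penalties; the actual fine in any non-compliance decision would depend on the Commission's assessment of the infringement.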

“Moreover, in case of systematic infringements, the Commission may also adopt additional remedies, such as obliging a gatekeeper to sell a business or parts of it or banning the gatekeeper from acquisitions of additional services related to the systemic non-compliance,” the EC’s announcement said.

In addition to probes into Apple, Google, and Meta, the EC will scrutinize Apple’s fee structure for app store alternatives and send retention orders to Amazon and Microsoft. That makes ByteDance the only gatekeeper so far to escape “investigatory steps” as the EU fights to enforce the DMA’s strict standards. (ByteDance continues to contest its gatekeeper status.)

“These are the cases where we already have concrete evidence of possible non-compliance,” Breton said. “And this in less than 20 days of DMA implementation. But our monitoring and investigative work of course doesn’t stop here. We may have to open other non-compliance cases soon.”

Google and Apple have both issued statements defending their current plans for DMA compliance.

“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Google’s competition director Oliver Bethell told Ars, promising to “continue to defend our approach in the coming months.”

“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations,” Apple’s spokesperson told Ars. “Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation. At the same time, we’ve introduced protections to help reduce new risks to the privacy, quality, and security of our EU users’ experience. Throughout, we’ve demonstrated flexibility and responsiveness to the European Commission and developers, listening and incorporating their feedback.”

A Meta spokesperson told Ars that Meta “designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA,” promising to comply with the DMA while arguing that “subscriptions as an alternative to advertising are a well-established business model across many industries.”

The EC’s announcement came after all designated gatekeepers were required to submit DMA compliance reports and scheduled public workshops to discuss DMA compliance. Those workshops conclude tomorrow with Microsoft and appear to be partly driving the EC’s decision to probe Apple, Google, and Meta.

“Stakeholders provided feedback on the compliance solutions offered,” Vestager said. “Their feedback tells us that certain compliance measures fail to achieve their objectives and fall short of expectations.”

Apple and Google app stores probed

Under the DMA, “gatekeepers can no longer prevent their business users from informing their users within the app about cheaper options outside the gatekeeper’s ecosystem,” Vestager said. “That is called anti-steering and is now forbidden by law.”

Stakeholders told the EC that Apple’s and Google’s fee structures appear to “go against” the DMA’s “free of charge” requirement, Vestager said, because companies “still charge various recurring fees and still limit steering.”

This feedback pushed the EC to launch its first two probes under the DMA against Apple and Google.

“We will investigate to what extent these fees and limitations defeat the purpose of the anti-steering provision and by that, limit consumer choice,” Vestager said.

These probes aren’t the end of Apple’s potential app store woes in the EU, either. Breton said that the EC has “many questions on Apple’s new business model” for the app store. These include “questions on the process that Apple used for granting and terminating membership of” its developer program, following a scandal where Epic Games’ account was briefly terminated.

“We also have questions on the fee structure and several other aspects of the business model,” Breton said, vowing to “check if they allow for real opportunities for app developers in line with the letter and the spirit of the DMA.”



Users shocked to find Instagram limits political content by default

“I had no idea” —

Instagram never directly told users it was limiting political content by default.

Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.

“Did [y’all] know Instagram was actively limiting the reach of political content like this?!” an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. “I had no idea ’til I saw this comment and I checked my settings and sho nuff political content was limited.”

“Instagram quietly introducing a ‘political’ content preference and turning on ‘limit’ by default is insane?” wrote another X user named Matt in a post with nearly 40,000 views.

Instagram apparently did not notify users directly on the platform when this change happened.

Instead, Instagram rolled out the change in February, announcing in a blog that the platform doesn’t “want to proactively recommend political content from accounts you don’t follow.” That post confirmed that Meta “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads,” so that those platforms can remain “a great experience for everyone.”

“This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more,” Meta’s spokesperson Dani Lever told Ars. “We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political.”

To change the setting, users can navigate to Instagram’s menu for “settings and activity” in their profiles, where they can update their “content preferences.” On this menu, “political content” is the last item under a list of “suggested content” controls that allow users to set preferences for what content is recommended in their feeds.

There are currently two options for controlling what political content users see. Choosing “don’t limit” means “you might see more political or social topics in your suggested content,” the app says. By default, all users are set to “limit,” which means “you might see less political or social topics.”

“This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users,” Instagram’s settings menu explains. “It does not affect content from accounts you follow. This setting also applies to Threads.”

For general Instagram and Threads users, this change primarily limits which posted content can be recommended to them, but for influencers using professional accounts, the stakes can be higher. The Washington Post reported that news creators were angered by the update, insisting that it diminished the value of the platform for reaching users not actively seeking political content.

“The whole value-add for social media, for political people, is that you can reach normal people who might not otherwise hear a message that they need to hear, like, abortion is on the ballot in Florida, or voting is happening today,” Keith Edwards, a Democratic political strategist and content creator, told The Post.

Meta’s blog noted that “professional accounts on Instagram will be able to use Account Status to check their eligibility to be recommended based on whether they recently posted political content. From Account Status, they can edit or remove recent posts, request a review if they disagree with our decision, or stop posting this type of content for a period of time, in order to be eligible to be recommended again.”

Ahead of a major election year, Meta’s change could impact political outreach attempting to inform voters. The change also came amid speculation that Meta was “shadowbanning” users posting pro-Palestine content since the start of the Israel-Hamas war, The Markup reported.

“Our investigation found that Instagram heavily demoted nongraphic images of war, deleted captions and hid comments without notification, suppressed hashtags, and limited users’ ability to appeal moderation decisions,” The Markup reported.

Meta appears to be interested in shifting away from its reputation as a platform where users expect political content—and misinformation—to thrive. Last year, The Wall Street Journal reported that Meta wanted out of politics and planned to “scale back how much political content it showed users,” after criticism over how the platform handled content related to the January 6 Capitol riot.

The decision to limit recommended political content on Instagram and Threads, Meta’s blog said, extends Meta’s “existing approach to how we treat political content.”

“People have told us they want to see less political content, so we have spent the last few years refining our approach on Facebook to reduce the amount of political content—including from politicians’ accounts—you see in Feed, Reels, Watch, Groups You Should Join, and Pages You May Like,” Meta wrote in a February blog update.

“As part of this, we aim to avoid making recommendations that could be about politics or political issues, in line with our approach of not recommending certain types of content to those who don’t wish to see it,” Meta’s blog continued, while at the same time, “preserving your ability to find and interact with political content that’s meaningful to you if that’s what you’re interested in.”

While platforms typically notify users directly on the platform when terms of service change, that wasn’t the case for this update, which simply added new controls for users. That’s why many users who prefer to be recommended political content—and apparently missed Meta’s announcement and subsequent media coverage—expressed shock to discover that Meta was limiting what they see.

On X, even Instagram users who don’t love seeing political content are currently rallying to raise awareness and share tips on how to update the setting.

“This is actually kinda wild that Instagram defaults everyone to this,” one user named Laura wrote. “Obviously political content is toxic but during an election season it’s a little weird to just hide it from everyone?”



Facebook, Instagram may cut fees by nearly 50% in scramble for DMA compliance


Meta is considering cutting monthly subscription fees for Facebook and Instagram users in the European Union nearly in half to comply with the Digital Markets Act (DMA), Reuters reported.

During a day-long public workshop on Meta’s DMA compliance, Meta’s competition and regulatory director, Tim Lamb, told the European Commission (EC) that individual subscriber fees could be slashed from 9.99 euros to 5.99 euros. Meta is hoping that reducing fees will help to speed up the EC’s process for resolving Meta’s compliance issues. If Meta’s offer is accepted, any additional accounts would then cost 4 euros instead of 6 euros.
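The arithmetic behind the proposal is simple. Here is a minimal sketch (the `monthly_cost` helper is hypothetical, not anything from Meta; the figures are the web prices reported in this article) comparing what a subscriber with two additional linked accounts would pay under the current and proposed rates:

```python
def monthly_cost(extra_accounts: int, base: float, per_extra: float) -> float:
    """Total monthly subscription cost in euros: one primary account
    plus a flat fee for each additional linked account."""
    return base + extra_accounts * per_extra

# Current EU web pricing vs. Meta's proposed reduction (euros per month).
current = monthly_cost(2, base=9.99, per_extra=6.00)   # 9.99 + 2 * 6.00
proposed = monthly_cost(2, base=5.99, per_extra=4.00)  # 5.99 + 2 * 4.00
print(f"current: {current:.2f}, proposed: {proposed:.2f}")
```

On these numbers, a three-account subscriber’s bill would fall from 21.99 to 13.99 euros per month, a cut of roughly 36 percent; the headline “nearly 50%” figure refers to the single-account price.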

Lamb said that these prices are “by far the lowest end of the range that any reasonable person should be paying for services of these quality,” calling it a “serious offer.”

The DMA requires that Meta’s users of Facebook, Instagram, Facebook Messenger, and Facebook Marketplace “freely” give consent to share data used for ad targeting without losing access to the platform if they’d prefer not to share data. That means services must provide an acceptable alternative for users who don’t consent to data sharing.

“Gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalized but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent,” the DMA says.

Designated gatekeepers like Meta have debated what it means for a user to “freely” give consent, suggesting that offering a paid subscription for users who decline to share data would be one route for Meta to continue offering high-quality services without routinely hoovering up data on all its users.

But EU privacy advocates like NOYB have protested Meta’s plan to offer a paid subscription as the alternative to consenting to data sharing, calling it a “pay or OK” model that forces Meta users who cannot pay the fee to consent to invasive data sharing they would otherwise decline. In a statement shared with Ars, NOYB chair Max Schrems said that even if Meta reduced its fees to 1.99 euros, it would be forcing consent from 99.9 percent of users.

“We know from all research that even a fee of just 1.99 euros or less leads to a shift in consent from 3–10 percent that genuinely want advertisement to 99.9 percent that still click yes,” Schrems said.

In the EU, the General Data Protection Regulation (GDPR) “requires that consent must be ‘freely’ given,” Schrems said. “In reality, it is not about the amount of money—it is about the ‘pay or OK’ approach as a whole. The entire purpose of ‘pay or OK’, is to get users to click on OK, even if this is not their free and genuine choice. We do not think the mere change of the amount makes this approach legal.”

Where EU stands on subscription models

Meta expects that a subscription model is a legal alternative under the DMA. The tech giant said it was launching EU subscriptions last November after the Court of Justice of the European Union (CJEU) “endorsed the subscriptions model as a way for people to consent to data processing for personalized advertising.”

It’s unclear how popular the subscriptions have been at the current higher cost. Right now in the EU, monthly Facebook and Instagram subscriptions cost 9.99 euros per month on the web or 12.99 euros per month on iOS and Android, with additional fees of 6 euros per month on the web and 8 euros per month on iOS and Android for each additional account. Meta declined to comment on how many EU users have subscribed, noting to Ars that it has no obligation to do so.

In the CJEU case, the court was reviewing Meta’s GDPR compliance, which Schrems noted is less strict than the DMA. The CJEU specifically said that under the GDPR, “users must be free to refuse individually”—“in the context of” signing up for services—“to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.”



Public officials can block haters—but only sometimes, SCOTUS rules


There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide first if someone is or isn’t speaking on behalf of the state on their social media pages, and then if they actually have authority to act on what they post online.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business when that speech cannot be attributed to the state and merely reflects personal remarks. In other words, blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when posting on their page.

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked the parents after they left dozens, and sometimes hundreds, of identical comments on tweets.

In the second—which was unanimous, with no dissenting opinions—Justice Amy Coney Barrett responded at length to a case from a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violating constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public-figure status. But while he still used the page primarily for personal posts about his family and always managed it himself, the page drifted into murkier territory when he also shared updates about his job as city manager. Those included posts about city initiatives, screenshots of city press releases, and solicitations of public feedback, such as links to city surveys.



Meta sues “brazenly disloyal” former exec over stolen confidential docs


A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta first observed Khurana’s allegedly “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—on Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana also allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta,” as well as removing a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.



Law enforcement doesn’t want to be “customer service” reps for Meta any more

No help —

“Dramatic and persistent spike” in account takeovers is “substantial drain” on resources.

Meta has a verified program for users of Facebook and Instagram. (Photo illustration of the WhatsApp, Messenger, Instagram, and Facebook icons on an iPhone in front of a Meta logo; credit: Getty Images | Chesnot)

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profit Meta directly.

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

In addition to New York, the letter is signed by attorneys general from Alabama, Alaska, Arizona, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, and the District of Columbia.

“Scammers use every platform available to them and constantly adapt to evade enforcement. We invest heavily in our trained enforcement and review teams and have specialized detection tools to identify compromised accounts and other fraudulent activity,” Meta says in a statement provided by spokesperson Erin McPike. “We regularly share tips and tools people can use to protect themselves, provide a means to report potential violations, work with law enforcement, and take legal action.”

Account takeovers can occur as a result of phishing as well as other more sophisticated and targeted techniques. Once an attacker gains access to an account, the owner can be easily locked out by changing passwords and contact information. Private messages and personal information are left up for grabs for a variety of nefarious purposes, from impersonation and fraud to pushing misinformation.

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

The state officials said the accounts that were stolen to run ads on Facebook often run afoul of its rules while doing so, leading them to be permanently suspended, punishing the victims—often small business owners—twice over.

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James said in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

Other complaints forwarded to Newstead show hacking victims expressing frustration over Meta’s lack of response. In many cases, users report no action being taken by the company. Some say the company encourages users to report such problems but never responds, leaving them unable to salvage their accounts or the businesses they built around them.

After being hacked and defrauded of $500, one user complained that their ability to communicate with their own customer base had been “completely disrupted,” and that Meta had never responded to the report they filed, though the user had followed the instructions the company provided them to obtain help.

“I can’t get any help from Meta. There is no one to talk to and meanwhile all my personal pictures are being used. My contacts are receiving false information from the hacker,” one user wrote.

Wrote another: “This is my business account, which is important to me and my life. I have invested my life, time, money and soul in this account. All attempts to contact and get a response from the Meta company, including Instagram and Facebook, were crowned with complete failure, since the company categorically does not respond to letters.”

Figures provided by James’ office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’ office says. Other states saw similar spikes during that period, according to the letter, with Pennsylvania recording a 270 percent increase, North Carolina a 330 percent jump, and Vermont a 740 percent surge.
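The percentage jumps quoted here are ordinary percent-change figures. A quick sketch (the `pct_increase` helper is illustrative, not the attorneys general’s methodology) applied to New York’s numbers:

```python
def pct_increase(old: float, new: float) -> float:
    """Percent increase from old to new."""
    return (new - old) / old * 100

# New York: 73 complaints in 2019 vs. more than 780 in 2023.
print(f"{pct_increase(73, 780):.0f}%")  # a roughly tenfold rise, about a 968% increase
```

By the same measure, a 270 percent increase means complaint volume a little under quadrupled, and Vermont’s 740 percent surge means more than eight times as many complaints.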

The letter notes that, while the officials cannot be “certain of any connection,” the drastic increase in complaints occurred “around the same time” as layoffs at Meta affecting roughly 11,000 employees in November 2022, around 13 percent of its staff at the time.

This story originally appeared on wired.com.
