privacy

redbox-easily-reverse-engineered-to-reveal-customers’-names,-zip-codes,-rentals

Redbox easily reverse-engineered to reveal customers’ names, zip codes, rentals

Thousands of Redboxes getting dumped

It’s worth noting that the amount of data expected to be stored on Redboxes is small compared to Redbox’s overall business. Since Redbox once rented out millions of DVDs weekly, the data retrieved only represents a small portion of Redbox’s overall business and, likely, of business conducted on that specific kiosk.  That might not be much comfort to those whose data is left vulnerable, though.

The problem is more alarming when considering how many Redboxes are still out in the wild with uncertain futures. High demand for Redbox removals has resulted in all sorts of people, like Turing, gaining access to kiosk hardware and/or data. For example, The Wall Street Journal reported last week about a “former Redbox employee who convinced a 7-Eleven franchisee” to give him a Redbox, a 19-year-old who persuaded a contractor hauling a kiosk away from a drugstore to give it to him instead, as well as a Redbox landing in an Illinois dumpster.

Consumer privacy concerns

Chicken Soup’s actions may violate consumer privacy regulations, including the Video Privacy Protection Act outlawing “wrongful disclosure of video tape rental or sale records.” However, Chicken Soup’s bankruptcy (most of its assets are in a holding pattern, Lowpass reported) makes customer remediation more complicated and less likely.

Mario Trujillo, staff attorney for the Electronic Frontier Foundation, told Ars that this incident “highlights the importance of security research in uncovering flaws that can leave customers unprotected.”

“While it may be hard to hold a bankrupt company accountable, uncovering the flaw is the first step,” he added.

Turing, which reverses engineers a lot of tech, said that the privacy problems she encountered with Redbox storage “isn’t terribly uncommon.”

Overall, the situation underscores the need for stricter controls around consumer data, whether it comes internally from companies or, as some would argue, through government regulation.

“This security flaw is a reminder that all companies should be obligated to minimize the amount of data they collect and retain in the first place,” Trujillo said. “We need strong data privacy laws to do that.”

Redbox easily reverse-engineered to reveal customers’ names, zip codes, rentals Read More »

android-15’s-security-and-privacy-features-are-the-update’s-highlight

Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news site’s reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

In your list of apps, drag any app down to the lock space that now appears in the bottom right. It will only be shown as a lock until you unlock it; you will then see the apps available in your new Private Space. After that, you should probably delete it from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.

Android 15’s security and privacy features are the update’s highlight Read More »

nintendo’s-new-clock-tracks-your-movement-in-bed

Nintendo’s new clock tracks your movement in bed

The motion detectors reportedly work with various bed sizes, from twin to king. As users shift position, the clock’s display responds by moving on-screen characters from left to right and playing sound effects from Nintendo video games based on different selectable themes.

A photo of Nintendo Sound Clock Alarmo.

A photo of Nintendo Sound Clock Alarmo.

A photo of Nintendo Sound Clock Alarmo. Credit: Nintendo

The Verge’s Chris Welch examined the new device at Nintendo’s New York City store shortly after its announcement, noting that setting up Alarmo involves a lengthy process of configuring its motion-detection features. The setup cannot be skipped and might prove challenging for younger users. The clock prompts users to input the date, time, and bed-related information to calibrate its sensors properly. Even so, Welch described “small, thoughtful Nintendo touches throughout the experience.”

Themes and sounds

Beyond motion tracking, the clock has a few other tricks up its sleeve. Its screen brightness adjusts automatically based on ambient light levels, and users can control Alarmo through buttons on top, including a large dial for navigation and selection.

The device’s full-color rectangular display shows the time and 35 different scenes that feature animated Nintendo characters from games like the aforementioned Super Mario Odyssey, The Legend of Zelda: Breath of the Wild, and Splatoon 3, as well as Pikmin 4 and Ring Fit Adventure.

A promotional image for a Super Mario Odyssey theme for the Nintendo Sound Clock Alarmo. Nintendo

Alarmo also offers sleep sounds to help users doze off. Nintendo plans to release additional downloadable sounds and themes for the device in the future using its built-in Wi-Fi capabilities, which are accessible after linking a Nintendo account. The Nintendo website mentions upcoming themes for Mario Kart 8 Deluxe and Animal Crossing: New Horizons in particular.

As of today, Nintendo Online members can order an Alarmo online, and as mentioned above, Nintendo says the clock will be available through other retailers in January 2025.

Nintendo’s new clock tracks your movement in bed Read More »

neo-nazis-head-to-encrypted-simplex-chat-app,-bail-on-telegram

Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram

“SimpleX, at its core, is designed to be truly distributed with no central server. This allows for enormous scalability at low cost, and also makes it virtually impossible to snoop on the network graph,” Poberezkin wrote in a company blog post published in 2022.

SimpleX’s policies expressly prohibit “sending illegal communications” and outline how SimpleX will remove such content if it is discovered. Much of the content that these terrorist groups have shared on Telegram—and are already resharing on SimpleX—has been deemed illegal in the UK, Canada, and Europe.

Argentino wrote in his analysis that discussion about moving from Telegram to platforms with better security measures began in June, with discussion of SimpleX as an option taking place in July among a number of extremist groups. Though it wasn’t until September, and the Terrorgram arrests, that the decision was made to migrate to SimpleX, the groups are already establishing themselves on the new platform.

“The groups that have migrated are already populating the platform with legacy material such as Terrorgram manuals and are actively recruiting propagandists, hackers, and graphic designers, among other desired personnel,” the ISD researchers wrote.

However, there are some downsides to the additional security provided by SimpleX, such as the fact that it is not as easy for these groups to network and therefore grow, and disseminating propaganda faces similar restrictions.

“While there is newfound enthusiasm over the migration, it remains unclear if the platform will become a central organizing hub,” ISD researchers wrote.

And Poberezkin believes that the current limitations of his technology will mean these groups will eventually abandon SimpleX.

“SimpleX is a communication network rather than a service or a platform where users can host their own servers, like in OpenWeb, so we were not aware that extremists have been using it,” says Poberezkin. “We never designed groups to be usable for more than 50 users and we’ve been really surprised to see them growing to the current sizes despite limited usability and performance. We do not think it is technically possible to create a social network of a meaningful size in the SimpleX network.”

This story originally appeared on wired.com.

Neo-Nazis head to encrypted SimpleX Chat app, bail on Telegram Read More »

meta-pays-the-price-for-storing-hundreds-of-millions-of-passwords-in-plaintext

Meta pays the price for storing hundreds of millions of passwords in plaintext

GOT HASHES? —

Company failed to follow one of the most sacrosanct rules for password storage.

Meta pays the price for storing hundreds of millions of passwords in plaintext

Getty Images

Officials in Ireland have fined Meta $101 million for storing hundreds of millions of user passwords in plaintext and making them broadly available to company employees.

Meta disclosed the lapse in early 2019. The company said that apps for connecting to various Meta-owned social networks had logged user passwords in plaintext and stored them in a database that had been searched by roughly 2,000 company engineers, who collectively queried the stash more than 9 million times.

Meta investigated for five years

Meta officials said at the time that the error was found during a routine security review of the company’s internal network data storage practices. They went on to say that they uncovered no evidence that anyone internally improperly accessed the passcodes or that the passcodes were ever accessible to people outside the company.

Despite those assurances, the disclosure exposed a major security failure on the part of Meta. For more than three decades, best practices across just about every industry have been to cryptographically hash passwords. Hashing is a term that applies to the practice of passing passwords through a one-way cryptographic algorithm that assigns a long string of characters that’s unique for each unique input of plaintext.

Because the conversion works in only one direction—from plaintext to hash—there is no cryptographic means for converting the hashes back into plaintext. More recently, these best practices have been mandated by laws and regulations in countries worldwide.

Because hashing algorithms works in one direction, the only way to obtain the corresponding plaintext is to guess, a process that can require large amounts of time and computational resources. The idea behind hashing passwords is similar to the idea of fire insurance for a home. In the event of an emergency—the hacking of a password database in one case, or a house fire in the other—the protection insulates the stakeholder from harm that otherwise would have been more dire.

For hashing schemes to work as intended, they must follow a host of requirements. One is that hashing algorithms must be designed in a way that they require large amounts of computing resources. That makes algorithms such as SHA1 and MD5 unsuitable, because they’re designed to quickly hash messages with minimal computing required. By contrast, algorithms specifically designed for hashing passwords—such as Bcrypt, PBKDF2, or SHA512crypt—are slow and consume large amounts of memory and processing.

Another requirement is that the algorithms must include cryptographic “salting,” in which a small amount of extra characters are added to the plaintext password before it’s hashed. Salting further increases the workload required to crack the hash. Cracking is the process of passing large numbers of guesses, often measured in the hundreds of millions, through the algorithm and comparing each hash against the hash found in the breached database.

The ultimate aim of hashing is to store passwords only in hashed format and never as plaintext. That prevents hackers and malicious insiders alike from being able to use the data without first having to expend large amounts of resources.

When Meta disclosed the lapse in 2019, it was clear the company had failed to adequately protect hundreds of millions of passwords.

“It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data,” Graham Doyle, deputy commissioner at Ireland’s Data Protection Commission, said. “It must be borne in mind, that the passwords, the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.”

The commission has been investigating the incident since Meta disclosed it more than five years ago. The government body, the lead European Union regulator for most US Internet services, imposed a fine of $101 million (91 million euros) this week. To date, the EU has fined Meta more than $2.23 billion (2 billion euros) for violations of the General Data Protection Regulation (GDPR), which went into effect in 2018. That amount includes last year’s record $1.34 billion (1.2 billion euro) fine, which Meta is appealing.

Meta pays the price for storing hundreds of millions of passwords in plaintext Read More »

tails-os-joins-forces-with-tor-project-in-merger

Tails OS joins forces with Tor Project in merger

COME TOGETHER —

The organizations have worked closely together over the years.

Tails OS joins forces with Tor Project in merger

The Tor Project

The Tor Project, the nonprofit that maintains software for the Tor anonymity network, is joining forces with Tails, the maker of a portable operating system that uses Tor. Both organizations seek to pool resources, lower overhead, and collaborate more closely on their mission of online anonymity.

Tails and the Tor Project began discussing the possibility of merging late last year, the two organizations said. At the time, Tails was maxing out its current resources. The two groups ultimately decided it would be mutually beneficial for them to come together.

Amnesic onion routing

“Rather than expanding Tails’s operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution,” Thursday’s joint statement said. “By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project.”

The Tor Project, for its part, could stand to benefit from better integration of Tails into its privacy network, which allows web users and websites to operate anonymously by connecting from IP addresses that can’t be linked to a specific service or user.

The “Tor” in the Tor Project is short for The Onion Router. It’s a global project best known for developing the Tor Browser, which connects to the Tor network. The Tor network routes all incoming and outgoing traffic through a series of three IP addresses. The structure ensures that no one can determine the IP address of either originating or destination party. The Tor Project was formed in 2006 by a team that included computer scientists Roger Dingledine and Nick Mathewson. The Tor protocol on which the Tor network runs was developed by the Naval Research Laboratory in the early 2000s.

Tails (The Amnesic Incognito Live System) is a portable Linux-based operating system that runs from thumb drives and external hard drives and uses the Tor browser to route all web traffic between the device it runs on and the Internet. Tails routes outgoing traffic through the Tor Network

One of the key advantages of Tails OS is its ability to run entirely from a USB stick. The design makes it possible to use the secure operating system while traveling or using untrusted devices. It also ensures that no trace is left on a device’s hard drive. Tails has the additional benefit of routing traffic from non-browser clients such as Thunderbird through the Tor network.

“Incorporating Tails into the Tor Project’s structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats,” the organizations said. “In short, coming together will strengthen both organizations’ ability to protect people worldwide from surveillance and censorship.”

The merger comes amid growing threats to personal privacy and calls by lawmakers to mandate backdoors or trapdoors in popular apps and operating systems to allow law enforcement to decrypt data in investigations.

Tails OS joins forces with Tor Project in merger Read More »

hacker-plants-false-memories-in-chatgpt-to-steal-user-data-in-perpetuity

Hacker plants false memories in ChatGPT to steal user data in perpetuity

MEMORY PROBLEMS —

Emails, documents, and other untrusted content can plant malicious memories.

Hacker plants false memories in ChatGPT to steal user data in perpetuity

Getty Images

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Strolling down memory lane

The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September. Memory with ChatGPT stores information from previous conversations and uses it as context in all future conversations. That way, the LLM can be aware of details such as a user’s age, gender, philosophical beliefs, and pretty much anything else, so those details don’t have to be inputted during each conversation.

Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.

Rehberger privately reported the finding to OpenAI in May. That same month, the company closed the report ticket. A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker’s website.

ChatGPT: Hacking Memories with Prompt Injection – POC

“What is really interesting is this is memory-persistent now,” Rehberger said in the above video demo. “The prompt injection inserted a memory into ChatGPT’s long-term storage. When you start a new conversation, it actually is still exfiltrating the data.”

The attack isn’t possible through the ChatGPT web interface, thanks to an API OpenAI rolled out last year.

While OpenAI has introduced a fix that prevents memories from being abused as an exfiltration vector, the researcher said, untrusted content can still perform prompt injections that cause the memory tool to store long-term information planted by a malicious attacker.

LLM users who want to prevent this form of attack should pay close attention during sessions for output that indicates a new memory has been added. They should also regularly review stored memories for anything that may have been planted by untrusted sources. OpenAI provides guidance here for managing the memory tool and specific memories stored in it. Company representatives didn’t respond to an email asking about its efforts to prevent other hacks that plant false memories.

Hacker plants false memories in ChatGPT to steal user data in perpetuity Read More »

x-is-training-grok-ai-on-your-data—here’s-how-to-stop-it

X is training Grok AI on your data—here’s how to stop it

Grok Your Privacy Options —

Some users were outraged to learn this was opt-out, not opt-in.

An AI-generated image released by xAI during the launch of Grok

Enlarge / An AI-generated image released by xAI during the open-weights launch of Grok-1.

Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user, that means Grok is already being trained on your posts if you haven’t explicitly told it not to.

Over the past day or so, users of the platform noticed the checkbox to opt out of this data usage in X’s privacy settings. The discovery was accompanied by outrage that user data was being used this way to begin with.

The social media posts about this sometimes seem to suggest that Grok has only just begun training on X users’ data, but users actually don’t know for sure when it started happening.

Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.

You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.

On the privacy settings page, X says:

To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.

X’s privacy policy has allowed for this since at least September 2023.

It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)

How to opt out

  • To stop Grok from training on your X content, first go to “Settings and privacy” from the “More” menu in the navigation panel…

    Samuel Axon

  • Then click or tap “Privacy and safety”…

    Samuel Axon

  • Then “Grok”…

    Samuel Axon

  • And finally, uncheck the box.

    Samuel Axon

You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:

  • Click or tap “More” in the nav panel
  • Click or tap “Settings and privacy”
  • Click or tap “Privacy and safety”
  • Scroll down and click or tap “Grok” under “Data sharing and personalization”
  • Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.

Alternatively, you can follow this link directly to the settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok here, provided you’ve actually used the chatbot before.

X is training Grok AI on your data—here’s how to stop it Read More »

apple-intelligence-and-other-features-won’t-launch-in-the-eu-this-year

Apple Intelligence and other features won’t launch in the EU this year

DMA —

iPhone Mirroring and SharePlay screen sharing will also skip the EU for now.

A photo of a hand holding an iPhone running the Image Playground experience in iOS 18

Enlarge / Features like Image Playground won’t arrive in Europe at the same time as other regions.

Apple

Three major features in iOS 18 and macOS Sequoia will not be available to European users this fall, Apple says. They include iPhone screen mirroring on the Mac, SharePlay screen sharing, and the entire Apple Intelligence suite of generative AI features.

In a statement sent to Financial Times, The Verge, and others, Apple says this decision is related to the European Union’s Digital Markets Act (DMA). Here’s the full statement, which was attributed to Apple spokesperson Fred Sainz:

Two weeks ago, Apple unveiled hundreds of new features that we are excited to bring to our users around the world. We are highly motivated to make these technologies accessible to all users. However, due to the regulatory uncertainties brought about by the Digital Markets Act (DMA), we do not believe that we will be able to roll out three of these features — iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence — to our EU users this year.

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

It is unclear from Apple’s statement precisely which aspects of the DMA may have led to this decision. It could be that Apple is concerned that it would be required to give competitors like Microsoft or Google access to user data collected for Apple Intelligence features and beyond, but we’re not sure.

This is not the first recent and major divergence between functionality and features for Apple devices in the EU versus other regions. Because of EU regulations, Apple opened up iOS to third-party app stores in Europe, but not in other regions. However, critics argued its compliance with that requirement was lukewarm at best, as it came with a set of restrictions and changes to how app developers could monetize their apps on the platform should they use those other storefronts.

While Apple says in the statement it’s open to finding a solution, no timeline is given. All we know is that the features won’t be available on devices in the EU this year. They’re expected to launch in other regions in the fall.

Apple Intelligence and other features won’t launch in the EU this year Read More »

proton-is-taking-its-privacy-first-apps-to-a-nonprofit-foundation-model

Proton is taking its privacy-first apps to a nonprofit foundation model

Proton going nonprofit —

Because of Swiss laws, there are no shareholders, and only one mission.

Swiss flat flying over a landscape of Swiss mountains, with tourists looking on from nearby ledge

Getty Images

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act “in accordance with the purpose for which they were established.” While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton’s founding mission, Yen wrote.

There’s a lot more Proton to protect these days

Proton has gone from a single email offering to a wide range of services, many of which specifically target the often invasive offerings of other companies (read, mostly: Google). You can now take your cloud files, passwords, and calendars over to Proton and use its VPN services, most of which offer end-to-end encryption and open source core software hosted in Switzerland, with its notably strong privacy laws.

None of that guarantees that a Swiss court can’t compel some forms of compliance from Proton, as happened in 2021. But compared to most service providers, Proton offers a far clearer and easier-to-grasp privacy model: It can’t see your stuff, and it only makes money from subscriptions.

Of course, foundations are only as strong as the people who guide them, and seemingly firewalled profit/non-profit models can be changed. Time will tell if Proton’s new model can keep up with changing markets—and people.

Proton is taking its privacy-first apps to a nonprofit foundation model Read More »

apple’s-ai-promise:-“your-data-is-never-stored-or-made-accessible-by-apple”

Apple’s AI promise: “Your data is never stored or made accessible by Apple”

…and throw away the key —

And publicly reviewable server code means experts can “verify this privacy promise.”

Apple Senior VP of Software Engineering Craig Federighi announces

Enlarge / Apple Senior VP of Software Engineering Craig Federighi announces “Private Cloud Compute” at WWDC 2024.

Apple

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chips, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

And Apple says that minimized data is not going to be saved for future server access or used to further train Apple’s server-based models, either. “Your data is never stored or made accessible by Apple,” Federighi said. “It’s used exclusively to fill your request.”

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”

While the keynote speech was light on details for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging as it wades into the generative AI space for the first time. We’ll see what security experts have to say when these servers and their code are made publicly available in the near future.

Apple’s AI promise: “Your data is never stored or made accessible by Apple” Read More »

message-scraping,-user-tracking-service-spy-pet-shut-down-by-discord

Message-scraping, user-tracking service Spy Pet shut down by Discord

Discord message privacy —

Bot-driven service was also connected to targeted harassment site Kiwi Farms.

Image of various message topics locked away in a wireframe box, with a Discord logo and lock icon nearby.

Discord

Spy Pet, a service that sold access to a rich database of allegedly more than 3 billion Discord messages and details on more than 600 million users, has seemingly been shut down.

404 Media, which broke the story of Spy Pet’s offerings, reports that Spy Pet seems mostly shut down. Spy Pet’s website was unavailable as of this writing. A Discord spokesperson told Ars that the company’s safety team had been “diligently investigating” Spy Pet and that it had banned accounts affiliated with it.

“Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the spokesperson wrote. “In addition to banning the affiliated accounts, we are considering appropriate legal action.” The spokesperson noted that Discord server administrators can adjust server permissions to prevent future such monitoring on otherwise public servers.

Kiwi Farms ties, GDPR violations

The number of servers monitored by Spy Pet had been fluctuating in recent days. The site’s administrator told 404 Media’s Joseph Cox that they were rewriting part of the service while admitting that Discord had banned a number of bots. The administrator had also told 404 Media that he did not “intend for my tool to be used for harassment,” despite a likely related user offering Spy Pet data on Kiwi Farms, a notorious hub for doxxing and online harassment campaigns that frequently targets trans and non-binary people, members of the LGBTQ community, and women.

Even if Spy Pet can somehow work past Discord’s bans or survive legal action, the site’s very nature runs against a number of other Internet regulations across the globe. It’s almost certainly in violation of the European Union’s General Data Protection Regulation (GDPR). As pointed out by StackDiary, Spy Pet and services like it seem to violate at least three articles of the GDPR, including the “right to be forgotten” in Article 17.

In Article 8 of the GDPR and likely in the eyes of the FTC, gathering data from what could be children’s accounts and profiting from them is almost certainly to draw scrutiny, if not legal action.

Ars was unsuccessful in reaching the administrator of Spy Pet by email and Telegram message. Their last message on Telegram stated that their domain had been suspended and a backup domain was being set up. “TL;DR: Never trust the Germans,” they wrote.

Message-scraping, user-tracking service Spy Pet shut down by Discord Read More »