End-to-end encryption


Password managers’ promise that they can’t see your vaults isn’t always true


ZERO KNOWLEDGE, ZERO CLUE

Contrary to what password managers say, a server compromise can mean game over.

Over the past 15 years, password managers have grown from a niche tool used by the technologically savvy into an indispensable security tool for the masses, with an estimated 94 million US adults—or roughly 36 percent of them—having adopted them. They store not only passwords for pension, financial, and email accounts, but also cryptocurrency credentials, payment card numbers, and other sensitive data.

All eight of the top password managers have adopted the term “zero knowledge” to describe the complex encryption system they use to protect the data vaults that users store on their servers. The definitions vary slightly from vendor to vendor, but they generally boil down to one bold assurance: that there is no way for malicious insiders or hackers who manage to compromise the cloud infrastructure to steal vaults or data stored in them. These promises make sense, given previous breaches of LastPass and the reasonable expectation that state-level hackers have both the motive and capability to obtain password vaults belonging to high-value targets.

A bold assurance debunked

Typical of these claims are those made by Bitwarden, Dashlane, and LastPass, which together are used by roughly 60 million people. Bitwarden, for example, says that “not even the team at Bitwarden can read your data (even if we wanted to).” Dashlane, meanwhile, says that without a user’s master password, “malicious actors can’t steal the information, even if Dashlane’s servers are compromised.” LastPass says that no one can access the “data stored in your LastPass vault, except you (not even LastPass).”

New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden, Dashlane, and LastPass and identified ways that someone with control over the server—either administrative or the result of a compromise—can, in fact, steal data and, in some cases, entire vaults. The researchers also devised other attacks that can weaken the encryption to the point that ciphertext can be converted to plaintext.

“The vulnerabilities that we describe are numerous but mostly not deep in a technical sense,” the researchers from ETH Zurich and USI Lugano wrote. “Yet they were apparently not found before, despite more than a decade of academic research on password managers and the existence of multiple audits of the three products we studied. This motivates further work, both in theory and in practice.”

The researchers said in interviews that multiple other password managers they didn’t analyze as closely likely suffer from the same flaws. The only one they were at liberty to name was 1Password. Almost all the password managers, they added, are vulnerable to the attacks only when certain features are enabled.

The most severe of the attacks—targeting Bitwarden and LastPass—allow an insider or attacker to read or write to the contents of entire vaults. In some cases, they exploit weaknesses in the key escrow mechanisms that allow users to regain access to their accounts when they lose their master password. Others exploit weaknesses in support for legacy versions of the password manager. A vault-theft attack against Dashlane allowed reading but not modification of vault items when they were shared with other users.

Staging the old key switcheroo

One of the attacks targeting Bitwarden key escrow is performed during the enrollment of a new member of a family or organization. After a Bitwarden group admin invites the new member, the invitee’s client accesses a server and obtains a group symmetric key and the group’s public key. The client then encrypts the user’s own symmetric key with the group public key and sends it to the server. The resulting ciphertext is what’s used to recover the new user’s account. This data is never integrity-checked when it’s sent from the server to the client during an account enrollment session.

The adversary can exploit this weakness by replacing the group public key with one from a keypair created by the adversary. Since the adversary knows the corresponding private key, it can use it to decrypt the ciphertext and then perform an account recovery on behalf of the targeted user. The result is that the adversary can read and modify the entire contents of the member vault as soon as an invitee accepts an invitation from a family or organization.
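The substitution is simple to sketch in code. The following minimal Python illustration (the names and formats are assumptions, not Bitwarden’s actual wire protocol) shows why an unauthenticated public key lets whoever controls the server decrypt the recovery ciphertext it later receives:

```python
# Sketch of the key-substitution attack, using the `cryptography` package.
# All names are illustrative; this is not Bitwarden's real enrollment code.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(padding.MGF1(hashes.SHA256()), hashes.SHA256(), None)

# The adversary (or compromised server) generates its own keypair...
adv_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# ...and serves its public key where the real group public key should be.
# Nothing in the enrollment data is integrity-checked, so the client
# cannot tell the difference.
served_group_pk = adv_sk.public_key()

# Honest invitee client: encrypts its vault symmetric key to whatever
# "group public key" the server handed it, producing the recovery ciphertext.
user_symmetric_key = os.urandom(32)
recovery_ciphertext = served_group_pk.encrypt(user_symmetric_key, oaep)

# Back on the server, the adversary decrypts the recovery ciphertext and can
# now perform account recovery on the victim's behalf.
stolen_key = adv_sk.decrypt(recovery_ciphertext, oaep)
assert stolen_key == user_symmetric_key
```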

Normally, this attack would work only when a group admin has enabled autorecovery mode, which, unlike the manual option, doesn’t require interaction from the member. But since the group policy the client downloads during enrollment isn’t integrity-checked either, adversaries can set recovery to auto, even if an admin had chosen the manual mode that requires user interaction.

Compounding the severity, the adversary in this attack also obtains a group symmetric key for all other groups the member belongs to since such keys are known to all group members. If any of the additional groups use account recovery, the adversary can obtain the members’ vaults for them, too. “This process can be repeated in a worm-like fashion, infecting all organizations that have key recovery enabled and have overlapping members,” the research paper explained.

A second attack targeting Bitwarden account recovery can be performed when a user rotates vault keys, an option Bitwarden recommends if a user believes their master password has been compromised. When account recovery is on (whether manual or automatic), the user’s client regenerates the recovery ciphertext, which, as described earlier, involves encrypting the user key with the organization public key. The researchers denote the organization public key as pk_org, the public key supplied by the adversary as pk_adv_org, the recovery ciphertext as c_rec, and the user symmetric key as k.

The paper explained:

The key point here is that pk_org is not retrieved from the user’s vault; rather the client performs a sync operation with the server to obtain it. Crucially, the organization data provided by this sync operation is not authenticated in any way. This thus provides the adversary with another opportunity to obtain a victim’s user key, by supplying a new public key pk_adv_org, for which they know the corresponding secret key sk_adv_org, and setting the account recovery enrollment to true. The client will then send an account recovery ciphertext c_rec containing the new user key, which the adversary can decrypt to obtain k′.
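The missing ingredient the quote identifies is any integrity check on the synced organization data. One illustrative countermeasure (a hypothetical sketch, not Bitwarden’s actual remediation) is to pin a fingerprint of pk_org inside the user’s encrypted vault at first enrollment and refuse to regenerate the recovery ciphertext if a later sync returns a different key:

```python
# Hypothetical client-side integrity check; not Bitwarden's real code.
import hashlib

def fingerprint(pk_der: bytes) -> str:
    """Hash the DER-encoded public key into a comparable fingerprint."""
    return hashlib.sha256(pk_der).hexdigest()

def verified_org_key(synced_pk_der: bytes, pinned_fp: str) -> bytes:
    # pinned_fp was stored inside the user's encrypted vault at enrollment,
    # so the server cannot rewrite it without detection.
    if fingerprint(synced_pk_der) != pinned_fp:
        raise ValueError("org public key changed unexpectedly; aborting re-enrollment")
    return synced_pk_der
```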

The third attack on Bitwarden’s account recovery allows an adversary to recover a user’s master key. It abuses Key Connector, a feature primarily used by enterprise customers.

More ways to pilfer vaults

The attack allowing theft of LastPass vaults also targets key escrow, specifically in the Teams and Teams 5 versions, when a member’s master key is reset by a privileged user known as a superadmin. The next time the member logs in through the LastPass browser extension, their client will retrieve the public half of an RSA keypair assigned to each superadmin in the organization, encrypt their new key with each one, and send the resulting ciphertext to each superadmin.

Because LastPass also fails to authenticate the superadmin keys, an adversary can once again replace a superadmin public key (pk_adm) with their own public key (pk_adv_adm).

“In theory, only users in teams where password reset is enabled and who are selected for reset should be affected by this vulnerability,” the researchers wrote. “In practice, however, LastPass clients query the server at each login and fetch a list of admin keys. They then send the account recovery ciphertexts independently of enrollment status.” The attack, however, requires the user to log in to LastPass with the browser extension, not the standalone client app.
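That client behavior is easy to model. The sketch below (illustrative names, not LastPass’s actual code) shows why encrypting the reset master key to every public key in a server-supplied list means a single injected adversary key is enough:

```python
# Sketch of the "encrypt to every fetched admin key" pattern; illustrative only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(padding.MGF1(hashes.SHA256()), hashes.SHA256(), None)

legit_admin = rsa.generate_private_key(public_exponent=65537, key_size=2048)
adversary = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The client fetches this list from the server at every login; a malicious
# server simply appends its own public key. Nothing is authenticated.
admin_public_keys = [legit_admin.public_key(), adversary.public_key()]

# The client encrypts the member's new master key to *each* listed key,
# regardless of enrollment status.
new_master_key = os.urandom(32)
ciphertexts = [pk.encrypt(new_master_key, oaep) for pk in admin_public_keys]

# The adversary decrypts the copy addressed to the injected key.
assert adversary.decrypt(ciphertexts[1], oaep) == new_master_key
```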

Several attacks allow reading and modification of shared vaults, a feature that lets a user share selected items with one or more other users. When Dashlane users share an item, their client apps sample a fresh symmetric key, which either directly encrypts the shared item or, when sharing with a group, encrypts group keys, which in turn encrypt the shared item. In either case, the RSA public key(s)—belonging to either the recipient user or the group—aren’t authenticated, and the fresh symmetric key is then encrypted under them.

An adversary can supply their own keypair, causing the sharer’s client to encrypt the symmetric key under the adversary’s public key. The adversary then decrypts the resulting ciphertext with the corresponding secret key to recover the shared symmetric key. With that, the adversary can read and modify all shared items. When sharing is used in either Bitwarden or LastPass, similar attacks are possible and lead to the same consequence.
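The flow being abused is a standard hybrid-encryption pattern, sketched below with assumed names; the unauthenticated key-wrapping step at the end is the one the adversary hijacks:

```python
# Sketch of item sharing via hybrid encryption; names are illustrative,
# not Dashlane's actual implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(padding.MGF1(hashes.SHA256()), hashes.SHA256(), None)

# Sharer's client samples a fresh symmetric key and encrypts the item.
share_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
item_ciphertext = AESGCM(share_key).encrypt(nonce, b"login: hunter2", None)

# The recipient's "public key" arrives from the server unauthenticated,
# so an adversary can substitute their own.
adv_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_share_key = adv_sk.public_key().encrypt(share_key, oaep)

# The adversary unwraps the share key and can read (or rewrite) the item.
recovered_key = adv_sk.decrypt(wrapped_share_key, oaep)
print(AESGCM(recovered_key).decrypt(nonce, item_ciphertext, None))  # b'login: hunter2'
```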

Another avenue for adversaries with control of a server is to target the backward compatibility that all three password managers provide for older, less-secure versions. Despite incremental changes designed to harden the apps against the very attacks described in the paper, all three password managers continue to support versions without these improvements. This backward compatibility is a deliberate decision intended to prevent users who haven’t upgraded from losing access to their vaults.

The severity of these attacks is lower than that of the ones described previously, with one exception, which is possible against Bitwarden. Older versions of the password manager used a single symmetric key to encrypt and decrypt both the user key sent from the server and items inside vaults. This design allowed for the possibility that an adversary could tamper with the contents. To add integrity checks, newer versions provide authenticated encryption by augmenting the symmetric key with an HMAC-based integrity check.

To protect customers using older app versions, Bitwarden ciphertext has an attribute of either 0 or 1. A 0 designates authenticated encryption, while a 1 supports the older unauthenticated scheme. Older versions also use a key hierarchy that Bitwarden deprecated to harden the app. To support the old hierarchy, newer client versions generate a new RSA keypair for the user if the server doesn’t provide one. The newer version will proceed to encrypt the secret key portion with the master key if no user ciphertext is provided by the server.

This design opens Bitwarden to several attacks. The most severe allows reading (but not modification) of all items created after the attack is performed. At a simplified level, it works because the adversary can forge the ciphertext sent by the server and cause the client to use it to derive a user key known to the adversary.

The modification causes the use of CBC (cipher block chaining), a form of encryption that’s vulnerable to several attacks. An adversary can exploit this weaker form using a padding oracle attack and go on to retrieve the plaintext of the vault. Because HMAC protection remains intact, modification isn’t possible.

Surprisingly, Dashlane was vulnerable to a similar padding oracle attack. The researchers devised a complicated attack chain that would allow a malicious server to downgrade a Dashlane user’s vault to CBC and exfiltrate the contents. The researchers estimate that the attack would require about 125 days to decrypt the ciphertext.
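A padding oracle works because a decrypting party that reveals whether a ciphertext’s padding is valid leaks plaintext one byte at a time. The following self-contained sketch (a textbook illustration, not the researchers’ actual attack code) recovers the last plaintext byte of an AES-CBC block that way:

```python
# Minimal CBC padding-oracle demonstration using the `cryptography` package.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv, enc.update(padder.update(plaintext) + padder.finalize()) + enc.finalize()

def padding_oracle(iv: bytes, ciphertext: bytes) -> bool:
    """Models a server that reveals, via an error, whether padding is valid."""
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    candidate = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    try:
        unpadder.update(candidate) + unpadder.finalize()
        return True
    except ValueError:
        return False

def recover_last_byte(iv: bytes, block: bytes) -> int:
    # Flip the last IV byte until the oracle sees valid 0x01 padding; the
    # accepted guess equals the plaintext byte. (A full attack also handles
    # rare false positives from longer paddings and iterates over all bytes.)
    for guess in range(256):
        forged_iv = iv[:15] + bytes([iv[15] ^ guess ^ 0x01])
        if padding_oracle(forged_iv, block):
            return guess
    raise RuntimeError("oracle never accepted a guess")

iv, ct = encrypt(b"hunter2-password")       # exactly one 16-byte data block
print(chr(recover_last_byte(iv, ct[:16])))  # prints 'd', the last plaintext byte
```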

Still other attacks against all three password managers allow adversaries to greatly reduce the number of password-hashing iterations—in the case of Bitwarden and LastPass, from a default of 600,000 to 2. Repeated hashing of master passwords makes them significantly harder to crack in the event of a server breach that allows theft of the hash. For all three password managers, the server sends the iteration count to the client, with no mechanism to ensure it meets the default. The result is a 300,000-fold reduction in the time and resources the adversary needs to crack the hash and obtain the user’s master password.
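The missing safeguard is easy to sketch: the client should refuse any server-supplied iteration count below a sane floor. The snippet below is illustrative (the constant names and the check itself are assumptions, not any vendor’s actual client code):

```python
# Hypothetical client-side floor on the server-supplied KDF iteration count.
import hashlib

DEFAULT_ITERATIONS = 600_000  # the Bitwarden/LastPass default cited above
MIN_ITERATIONS = 600_000      # illustrative client-enforced floor

def derive_master_key(password: bytes, salt: bytes, server_iterations: int) -> bytes:
    if server_iterations < MIN_ITERATIONS:
        # A malicious server could send, say, 2; refuse rather than comply.
        raise ValueError(f"refusing downgraded iteration count: {server_iterations}")
    return hashlib.pbkdf2_hmac("sha256", password, salt, server_iterations)

# Without the check, a server-supplied count of 2 makes each cracking guess
# 600_000 / 2 = 300_000 times cheaper.
```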

Attacking malleability

Three of the attacks—one against Bitwarden and two against LastPass—target what the researchers call “item-level encryption” or “vault malleability.” Instead of encrypting a vault in a single, monolithic blob, password managers often encrypt individual items, and sometimes individual fields within an item. These items and fields are all encrypted with the same key. The attacks exploit this design to steal passwords from select vault items.

An adversary mounts an attack by replacing the ciphertext in the URL field, which stores the link where a login occurs, with the ciphertext for the password. To enhance usability, password managers display an icon that helps users visually recognize each site. To fetch it, the client decrypts the URL field and sends the result to the server, which retrieves the corresponding icon. Because there’s no mechanism to prevent the swapping of item fields, the client decrypts the password instead of the URL and sends it to the server.
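The swap works because nothing binds a ciphertext to the field it came from. Here is a minimal sketch of that failure, using Fernet as a stand-in for the managers’ item encryption (the field names are illustrative):

```python
# All fields share one key and carry no field binding, so a malicious server
# can swap ciphertexts between fields undetected. Illustrative only.
from cryptography.fernet import Fernet

vault_key = Fernet(Fernet.generate_key())

item = {
    "url": vault_key.encrypt(b"https://bank.example"),
    "password": vault_key.encrypt(b"hunter2"),
}

# Malicious server swaps the two ciphertexts before syncing to the client.
item["url"], item["password"] = item["password"], item["url"]

# The client decrypts the "url" field to fetch the site icon -- and ends up
# sending the plaintext password to the server instead.
url_for_icon = vault_key.decrypt(item["url"])
print(url_for_icon)  # b'hunter2'
```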

“That wouldn’t happen if you had different keys for different fields or if you encrypted the entire collection in one pass,” Kenny Paterson, one of the paper’s co-authors, said. “A crypto audit should spot it, but only if you’re thinking about malicious servers. The server is deviating from expected behavior.”

The following table summarizes the causes and consequences of the 25 attacks they devised:

Credit: Scarlata et al.

A psychological blind spot

The researchers acknowledge that the full compromise of a password manager server is a high bar. But they defend the threat model.

“Attacks on the provider server infrastructure can be prevented by carefully designed operational security measures, but it is well within the bounds of reason to assume that these services are targeted by sophisticated nation-state-level adversaries, for example via software supply-chain attacks or spearphishing,” they wrote. “Moreover, some of the service providers have a history of being breached—for example, LastPass suffered breaches in 2015 and 2022, and another serious security incident in 2021.”

They went on to write: “While none of the breaches we are aware of involved reprogramming the server to make it undertake malicious actions, this goes just one step beyond attacks on password manager service providers that have been documented. Active attacks more broadly have been documented in the wild.”

Part of the challenge of designing password managers, or any end-to-end encryption service, is the tendency of developers to implicitly trust their own servers when writing the client.

“It’s a psychological problem when you’re writing both client and server software,” Paterson explained. “You should write the client super defensively, but if you’re also writing the server, well of course your server isn’t going to send malformed packets or bad info. Why would you do that?”

Marketing gimmickry or not, “zero-knowledge” is here to stay

In many cases, engineers have already fixed the weaknesses after receiving private reports from the researchers; they are still patching other vulnerabilities. In statements, Bitwarden, LastPass, and Dashlane representatives noted the high bar of the threat model, despite statements on their websites assuring customers their wares will withstand it. Along with 1Password representatives, they also noted that their products regularly receive stringent security audits and undergo red-team exercises.

A Bitwarden representative wrote:

Bitwarden continually evaluates and improves its software through internal review, third-party assessments, and external research. The ETH Zurich paper analyzes a threat model in which the server itself behaves maliciously and intentionally attempts to manipulate key material and configuration values. That model assumes full server compromise and adversarial behavior beyond standard operating assumptions for cloud services.

LastPass said, “We take a multi‑layered, ongoing approach to security assurance that combines independent oversight, continuous monitoring, and collaboration with the research community. Our cloud security testing is inclusive of the scenarios referenced in the malicious-server threat model outlined in the research.”


A statement from Dashlane read, “Dashlane conducts rigorous internal and external testing to ensure the security of our product. When issues arise, we work quickly to mitigate any possible risk and ensure customers have clarity on the problem, our solution, and any required actions.”

1Password released a statement that read in part:

Our security team reviewed the paper in depth and found no new attack vectors beyond those already documented in our publicly available Security Design White Paper.

We are committed to continually strengthening our security architecture and evaluating it against advanced threat models, including malicious-server scenarios like those described in the research, and evolving it over time to maintain the protections our users rely on.

1Password also says that the zero-knowledge encryption it provides “means that no one but you—not even the company that’s storing the data—can access and decrypt your data. This protects your information even if the server where it’s held is ever breached.” In the white paper mentioned above, however, 1Password seems to allow for this possibility when it says:

At present there’s no practical method for a user to verify the public key they’re encrypting data to belongs to their intended recipient. As a consequence it would be possible for a malicious or compromised 1Password server to provide dishonest public keys to the user, and run a successful attack. Under such an attack, it would be possible for the 1Password server to acquire vault encryption keys with little ability for users to detect or prevent it.

1Password’s statement also includes assurances that the service routinely undergoes rigorous security testing.

All four companies defended their use of the term “zero knowledge.” As used in this context, the term can be confused with zero-knowledge proofs, a completely unrelated cryptographic method that allows one party to prove to another party that they know a piece of information without revealing anything about the information itself. An example is a proof that shows a system can determine if someone is over 18 without having any knowledge of the precise birthdate.

The adulterated zero-knowledge term used by password managers appears to have come into being in 2007, when a company called SpiderOak used it to describe its cloud infrastructure for securely sharing sensitive data. Interestingly, SpiderOak formally retired the term a decade later after receiving user pushback.

“Sadly, it is just marketing hype, much like ‘military-grade encryption,’” Matteo Scarlata, lead author of the paper, said. “Zero-knowledge seems to mean different things to different people (e.g., LastPass told us that they won’t adopt a malicious server threat model internally). Much unlike ‘end-to-end encryption,’ ‘zero-knowledge encryption’ is an elusive goal, so it’s impossible to tell if a company is doing it right.”


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and on Bluesky. Contact him on Signal at DanArs.82.



Engineer proves that Kohler’s smart toilet cameras aren’t very private


Kohler is getting the scoop on people’s poop.

A Dekoda smart toilet camera. Credit: Kohler

Kohler is facing backlash after an engineer pointed out that the company’s new smart toilet cameras may not be as private as it wants people to believe. The discussion raises questions about Kohler’s use of the term “end-to-end encryption” (E2EE) and the inherent privacy limitations of a device that films the goings-on of a toilet bowl.

In October, Kohler announced its first “health” product, the Dekoda. Kohler’s announcement described the $599 device (it also requires a subscription that starts at $7 per month) as a toilet bowl attachment that uses “optical sensors and validated machine-learning algorithms” to deliver “valuable insights into your health and wellness.” The announcement added:

Data flows to the personalized Kohler Health app, giving users continuous, private awareness of key health and wellness indicators—right on their phone. Features like fingerprint authentication and end-to-end encryption are designed for user privacy and security.

The average person is most likely to be familiar with E2EE through messaging apps, like Signal. Messages sent via apps with E2EE are encrypted throughout transmission. Only the message’s sender and recipient can view the decrypted messages, which is intended to prevent third parties, including the app developer, from reading them.

But how does E2EE apply to a docked camera inside a toilet?

Software engineer and former Federal Trade Commission technology advisor Simon Fondrie-Teitler sought answers about this, considering that “Kohler Health doesn’t have any user-to-user sharing features,” he wrote in a blog post this week:

 … emails exchanged with Kohler’s privacy contact clarified that the other ‘end’ that can decrypt the data is Kohler themselves: ‘User data is encrypted at rest, when it’s stored on the user’s mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user’s devices and our systems, where it is decrypted and processed to provide our service.’

Ars Technica contacted Kohler to ask if the above statement is an accurate summary of Dekoda’s “E2EE” and if Kohler employees can access data from Dekoda devices. A spokesperson responded with a company statement that basically argued that data gathered from Dekoda devices is encrypted from one end (the toilet camera) until it reaches another end, in this case, Kohler’s servers. The statement reads, in part:

The term end-to-end encryption is often used in the context of products that enable a user (sender) to communicate with another user (recipient), such as a messaging application. Kohler Health is not a messaging application. In this case, we used the term with respect to the encryption of data between our users (sender) and Kohler Health (recipient).

We encrypt data end-to-end in transit, as it travels between users’ devices and our systems, where it is decrypted and processed to provide and improve our service. We also encrypt sensitive user data at rest, when it’s stored on a user’s mobile phone, toilet attachment, and on our systems.

Although Kohler somewhat logically defines the endpoints in what it considers E2EE, at a minimum, Kohler’s definition goes against the consumer-facing spirit of E2EE. Because E2EE is, as Kohler’s statement notes, most frequently used in messaging apps, people tend to associate it with privacy from the company that enables the data transmission. Since that’s not the case with the Dekoda, Kohler’s misuse of the term E2EE can give users a false sense of privacy.

As IBM defines it, E2EE “ensures that service providers facilitating the communications … can’t access the messages.” Kohler’s statement implies that the company understood how people typically think about E2EE and still chose to use the term over more accurate alternatives, such as Transport Layer Security (TLS) encryption, which “encrypts data as it travels between a client and a server. However, it doesn’t provide strong protection against access by intermediaries such as application servers or network providers,” per IBM.
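The distinction is easy to see in code. In the toy sketch below (illustrative, not Kohler’s actual design), the first model gives the server a key it can use to read the data, while in the second the server only ever stores ciphertext:

```python
# Toy contrast between server-readable encryption and end-to-end encryption.
from cryptography.fernet import Fernet

# Transport-style model: the client and the *server* share the key, so data
# decrypted "to provide the service" is readable by the company.
shared_key = Fernet(Fernet.generate_key())
upload = shared_key.encrypt(b"scan results")
server_view = shared_key.decrypt(upload)  # the server reads the plaintext

# E2EE model: the key is generated and kept on the user's devices only.
device_key = Fernet(Fernet.generate_key())
blob = device_key.encrypt(b"scan results")
# The server stores `blob` but holds no key; only the user's devices decrypt.
```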

“Using terms like ‘anonymized’ and ‘encrypted’ gives an impression of a company taking privacy and security seriously—but that doesn’t mean it actually is,” RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG), told Ars Technica.

Smart toilet cameras are so new (and questionable) that there are few comparisons we can make here. But the Dekoda’s primary rival, the Throne, also uses confusing marketing language. The smart camera’s website makes no mention of end-to-end encryption but claims that the device uses “bank-grade encryption,” a vague marketing term that does not imply E2EE, which in any case isn’t a mandatory banking security standard in the US.

Why didn’t anyone notice before?

As Fondrie-Teitler pointed out in his blog, it’s odd to see E2EE associated with a smart toilet camera. Despite this, I wasn’t immediately able to find earlier online discussion of Dekoda’s use of the term, which extends to the device’s website claiming that the Dekoda uses “encryption at every step.”

Numerous stories about the toilet cam’s launch (examples here, here, here, and here) mentioned the device’s purported E2EE but made no statements about how E2EE is used or the implications that E2EE claims have, or don’t have, for user privacy.

It’s possible there wasn’t much questioning about the Dekoda’s E2EE claim since the type of person who worries about and understands such things is often someone who wouldn’t put a camera anywhere near their bathroom.

It’s also possible that people had other ideas for how the smart toilet camera might work. Speaking with The Register, Fondrie-Teitler suggested a design in which data never leaves the camera but admitted that he didn’t know if this is possible.

“Ideally, this type of data would remain on the user’s device for analysis, and client-side encryption would be used for backups or synchronizing historical data to new devices,” he told The Register.

What is Kohler doing with the data?

For those curious about why Kohler wants data about its customers’ waste, the answer, as it often is today, is marketing and AI.

As Fondrie-Teitler noted, Kohler’s privacy policy says Kohler can use customer data to “create aggregated, de-identified and/or anonymized data, which we may use and share with third parties for our lawful business purposes, including to analyze and improve the Kohler Health Platform and our other products and services, to promote our business, and to train our AI and machine learning models.”

In its statement, Kohler said:

If a user consents (which is optional), Kohler Health may de-identify the data and use the de-identified data to train the AI that drives our product. This consent check-box is displayed in the Kohler Health app, is optional, and is not pre-checked.

Words matter

Kohler isn’t the first tech company to confuse people with its use of the term E2EE. In April, there was debate over whether Google was truly giving Gmail for business users E2EE, since, in addition to the sender and recipient having access to decrypted messages, people inside the users’ organization who deploy and manage the KACL (Key Access Control List) server can access the key necessary for decryption.

In general, what matters most is whether the product provides the security users demand. As Ars Technica Senior Security Editor Dan Goodin wrote about Gmail’s E2EE debate:

“The new feature is of potential value to organizations that must comply with onerous regulations mandating end-to-end encryption. It most definitely isn’t suitable for consumers or anyone who wants sole control over the messages they send. Privacy advocates, take note.”

When the product in question is an Internet-connected camera that lives inside your toilet bowl, it’s important to ask whether any technology could ever make it private enough. For many, no proper terminology could rationalize such a device.

Still, if a company is going to push “health” products to people who may have health concerns and, perhaps, limited cybersecurity and tech privacy knowledge, there’s an onus on that company for clear and straightforward communication.

“Throwing security terms around that the public doesn’t understand to try and create an illusion of data privacy and security being a high priority for your company is misleading to the people who have bought your product,” Cross said.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Apple pulls end-to-end encryption in UK, spurning backdoors for gov’t spying

“We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy,” Apple said. “Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before.”

For UK Apple users, some data can still be encrypted end-to-end. iCloud Keychain and Health, iMessage, and FaceTime will remain end-to-end encrypted by default. But other iCloud services will no longer be end-to-end encrypted, effective immediately, including iCloud Backup, iCloud Drive, Photos, Notes, Reminders, Safari Bookmarks, Siri Shortcuts, Voice Memos, Wallet passes, and Freeform.

In the future, Apple hopes to restore data protections in the UK, but the company refuses to ever build a backdoor for government officials.

“Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom,” Apple said. “As we have said many times before, we have never built a backdoor or master key to any of our products or services, and we never will.”



Apple hit with $1.2B lawsuit after killing controversial CSAM-detecting tool

When Apple devices are used to spread CSAM, it’s a huge problem for survivors, who allegedly face a range of harms, including “exposure to predators, sexual exploitation, dissociative behavior, withdrawal symptoms, social isolation, damage to body image and self-worth, increased risky behavior, and profound mental health issues, including but not limited to depression, anxiety, suicidal ideation, self-harm, insomnia, eating disorders, death, and other harmful effects.” One survivor told The Times she “lives in constant fear that someone might track her down and recognize her.”

Survivors suing have also incurred medical and other expenses due to Apple’s inaction, the lawsuit alleged. And those expenses will keep piling up if the court battle drags on for years and Apple’s practices remain unchanged.

Apple could win, a lawyer and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Riana Pfefferkorn, told The Times, as survivors face “significant hurdles” seeking liability for mishandling content that Apple says Section 230 shields. And a win for survivors could “backfire,” Pfefferkorn suggested, if Apple proves that forced scanning of devices and services violates the Fourth Amendment.

Survivors, some of whom own iPhones, think that Apple has a responsibility to protect them. In a press release, Margaret E. Mabie, a lawyer representing survivors, praised survivors for raising “a call for justice and a demand for Apple to finally take responsibility and protect these victims.”

“Thousands of brave survivors are coming forward to demand accountability from one of the most successful technology companies on the planet,” Mabie said. “Apple has not only rejected helping these victims, it has advertised the fact that it does not detect child sex abuse material on its platform or devices thereby exponentially increasing the ongoing harm caused to these victims.”



Backdoors that let cops decrypt messages violate human rights, EU court says

Building of the European Court of Human Rights in Strasbourg (France).


The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court’s decision could potentially disrupt the European Commission’s proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users’ messages.

This ruling came after Russia’s intelligence agency, the Federal Security Service (FSB), began requiring Telegram in 2017 to share users’ encrypted messages to deter “terrorism-related activities,” the ECHR’s ruling said. A Russian Telegram user alleged that the FSB’s requirement violated his rights to a private life and private communications, as well as the rights of all Telegram users.

The Telegram user moved to block the required disclosures after Telegram refused to comply with an FSB order to decrypt the messages of six users suspected of terrorism. According to Telegram, “it was technically impossible to provide the authorities with encryption keys associated with specific users,” and therefore, “any disclosure of encryption keys” would affect the “privacy of the correspondence of all Telegram users,” the ECHR’s ruling said.

For refusing to comply, Telegram was fined, and one court even ordered the app to be blocked in Russia, while dozens of Telegram users rallied to challenge the order and keep Telegram services available in Russia. Ultimately, the users’ multiple court challenges failed, sending the case before the ECHR while Telegram service tenuously remained available in Russia.

The Russian government told the ECHR that “allegations that the security services had access to the communications of all users” were “unsubstantiated” because their request only concerned six Telegram users.

They further argued that Telegram providing encryption keys to the FSB “did not mean that the information necessary to decrypt encrypted electronic communications would become available to its entire staff.” Essentially, the government believed that FSB staff’s “duty of discretion” would prevent any intrusion on private life for Telegram users as described in the ECHR complaint.

Seemingly most critically, the government told the ECHR that any intrusion on private lives resulting from decrypting messages was “necessary” to combat terrorism in a democratic society. To back up this claim, the government pointed to a 2017 terrorist attack that was “coordinated from abroad through secret chats via Telegram.” The government claimed that a second terrorist attack that year was prevented after the government discovered it was being coordinated through Telegram chats.

However, privacy advocates backed up Telegram’s claims that the messaging service couldn’t technically build a backdoor for governments without affecting all of its users. They also argued that the threat of mass surveillance could be enough to infringe on human rights. The European Information Society Institute (EISI) and Privacy International told the ECHR that even if governments never used required disclosures to mass surveil citizens, the mere capability could have a chilling effect on users’ speech or prompt service providers to issue radical software updates weakening encryption for all users.

In the end, the ECHR concluded that the Telegram user’s rights had been violated, partly due to privacy advocates and international reports that corroborated Telegram’s position that complying with the FSB’s disclosure order would force changes impacting all its users.

The “confidentiality of communications is an essential element of the right to respect for private life and correspondence,” the ECHR’s ruling said. Thus, requiring messages to be decrypted by law enforcement “cannot be regarded as necessary in a democratic society.”

Martin Husovec, a law professor who helped to draft EISI’s testimony, told Ars that EISI is “obviously pleased that the Court has recognized the value of encryption and agreed with us that state-imposed weakening of encryption is a form of indiscriminate surveillance because it affects everyone’s privacy.”



Apple warns proposed UK law will affect software updates around the world

Heads up —

Apple may leave the UK if required to provide advance notice of product updates.


Apple is “deeply concerned” that proposed changes to a United Kingdom law could give the UK government unprecedented power to “secretly veto” privacy and security updates to its products and services, the tech giant said in a statement provided to Ars.

If passed, potentially this spring, the amendments to the UK’s Investigatory Powers Act (IPA) could deprive not just UK users, but all users globally of important new privacy and security features, Apple warned.

“Protecting our users’ privacy and the security of their data is at the very heart of everything we do at Apple,” Apple said. “We’re deeply concerned the proposed amendments” to the IPA “now before Parliament place users’ privacy and security at risk.”

The IPA was initially passed in 2016 to ensure that UK officials had lawful access to user data to investigate crimes like child sexual exploitation or terrorism. Proposed amendments were announced last November, after a review showed that the “Act has not been immune to changes in technology over the last six years” and “there is a risk that some of these technological changes have had a negative effect on law enforcement and intelligence services’ capabilities.”

The proposed amendments would require any company that fields government data requests to notify UK officials of any updates it plans to make that could restrict the UK government’s access to this data, including updates affecting users outside the UK.

UK officials said that this would “help the UK anticipate the risk to public safety posed by the rolling out of technology by multinational companies that precludes lawful access to data. This will reduce the risk of the most serious offenses such as child sexual exploitation and abuse or terrorism going undetected.”

According to the BBC, the House of Lords will begin debating the proposed changes on Tuesday.

Ahead of that debate, Apple described the amendments on Monday as “an unprecedented overreach by the government” that “if enacted” could allow the UK to “attempt to secretly veto new user protections globally, preventing us from ever offering them to customers.”

In a letter last year, Apple argued that “it would be improper for the Home Office to act as the world’s regulator of security technology.”

Apple told the UK Home Office that imposing “secret requirements on providers located in other countries” that apply to users globally “could be used to force a company like Apple, that would never build a backdoor, to publicly withdraw critical security features from the UK market, depriving UK users of these protections.” It could also “dramatically disrupt the global market for security technologies, putting users in the UK and around the world at greater risk,” Apple claimed.

The proposed changes, Apple said, “would suppress innovation, stifle commerce, and—when combined with purported extraterritorial application—make the Home Office the de facto global arbiter of what level of data security and encryption are permissible.”

UK defends proposed changes

The UK Home Office has repeatedly stressed that these changes do not “provide powers for the Secretary of State to approve or refuse technical changes,” but “simply” require companies “to inform the Secretary of State of relevant changes before those changes are implemented.”

“The intention is not to introduce a consent or veto mechanism or any other kind of barrier to market,” a UK Home Office fact sheet said. “A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access.”

The Home Office has also claimed that “these changes do not directly relate to end-to-end encryption,” while admitting that they “are designed to ensure that companies are not able to unilaterally make design changes which compromise exceptional lawful access where the stringent safeguards of the IPA regime are met.”

This seems to suggest that companies will not be allowed to cut off the UK government from accessing encrypted data under certain circumstances, which concerns privacy advocates who consider end-to-end encryption a vital user privacy and security protection. Earlier this month, civil liberties groups including Big Brother Watch, Liberty, Open Rights Group and Privacy International filed a joint brief opposing the proposed changes, the BBC reported, warning that passing the amendments would be “effectively transforming private companies into arms of the surveillance state and eroding the security of devices and the Internet.”

“We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption, but this cannot come at a cost to public safety,” a UK government official told the BBC.

The UK government may face opposition to the amendments from more than just tech companies and privacy advocates, though. In Apple’s letter last year, the tech giant noted that the proposed changes to the IPA could conflict with EU and US laws, including the EU’s General Data Protection Regulation—considered the world’s strongest privacy law.

Under the GDPR, companies must implement measures to safeguard users’ personal data, Apple said, noting that “encryption is one means by which a company can meet” that obligation.

“Secretly installing backdoors in end-to-end encrypted technologies in order to comply with UK law for persons not subject to any lawful process would violate that obligation,” Apple argued.
