
New Apple M5 is the centerpiece of an updated 14-inch MacBook Pro

Apple often releases a smaller second wave of new products in October after the dust settles from its September iPhone announcement, and this year that wave revolves around its brand-new M5 chip. The first Mac to get the new processor will be the new 14-inch MacBook Pro, which the company announced today on its press site alongside a new M5 iPad Pro and an updated version of the Vision Pro headset.

But unlike the last couple of MacBook Pro refreshes, Apple isn’t ready with Pro and Max versions of the M5 for the higher-end 14-inch and 16-inch MacBook Pros. Those models will continue to use the M4 Pro and M4 Max for now, and we probably shouldn’t expect an update for them until sometime next year.

Aside from the M5, the 14-inch M5 MacBook Pro has essentially identical specs to the outgoing M4 version. It has a notched 14-inch screen with ProMotion support and a 3024×1964 resolution, three USB-C/Thunderbolt 4 ports, an HDMI port, an SD card slot, and a 12 MP Center Stage webcam. It still weighs 3.4 pounds, and Apple still estimates the battery should last for “up to 16 hours” of wireless web browsing and up to 24 hours of video streaming. The main internal difference is an option for a 4TB storage upgrade, which will run you $1,200 if you’re upgrading from the base 512GB SSD.



With considerably less fanfare, Apple releases a second-generation Vision Pro

Apple’s announcement of the Vision Pro headset in 2023 was pretty hyperbolic about the device’s potential, even by Apple’s standards. CEO Tim Cook called it “the beginning of a new era for computing,” placing the Vision Pro in the same industry-shifting echelon as the Mac and the iPhone.

The Vision Pro could still eventually lead to a product that ushers in a new age of “spatial computing.” But it does seem like Apple is a bit less optimistic about the headset’s current form—at least, that’s one possible way to read the fact that the second-generation Vision Pro is being announced via press release, rather than as the centerpiece of a product event.

The new Vision Pro is available for the same $3,499 as the first model, which will likely continue to limit the headset’s appeal outside of a die-hard community of early adopters and curious developers. It’s available for pre-order today and ships on October 22.

The updated Vision Pro is a low-risk, play-it-safe refresh that swaps in a new processor without changing much else about the device’s design or how the product is positioned. It’s essentially the same device as before, but with the M2 chip switched out for a brand-new M5, a chip that brings a faster CPU and GPU, 32GB of RAM, and improved image signal processors and video encoding hardware that should refine and improve the experience of using the headset.



Nvidia sells tiny new computer that puts big AI on your desktop

The Spark is an ARM-based system that runs Nvidia’s DGX OS, an Ubuntu Linux-based operating system built specifically for GPU processing. It comes with Nvidia’s AI software stack preinstalled, including CUDA libraries and the company’s NIM microservices.

Prices for the DGX Spark start at US $3,999. That may seem like a lot, but given the cost of high-end GPUs with ample video RAM, like the RTX Pro 6000 (about $9,000), or AI server GPUs (roughly $25,000 for a base-level H100), the DGX Spark may represent a far less expensive option overall, though it’s not nearly as powerful.

In fact, according to The Register, the GPU computing performance of the GB10 chip is roughly equivalent to an RTX 5070. However, the 5070 is limited to 12GB of video memory, which caps the size of the AI models that can run on such a system. With 128GB of unified memory, the DGX Spark can run far larger models, albeit at a slower speed than, say, an RTX 5090 (which ships with 32GB of video memory). For example, to run the 120 billion-parameter larger version of OpenAI’s recent gpt-oss language model, you’d need about 80GB of memory, far more than you can get in a consumer GPU.
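To see why memory rather than raw compute is the constraint, a quick back-of-envelope calculation helps. The Python sketch below is illustrative only: the bytes-per-parameter figures are assumptions (gpt-oss-120b is distributed in roughly 4-bit quantization), not specifications from Nvidia or OpenAI.

```python
# Rough weight-memory estimate for large models. Weights only; activations
# and KV cache add further overhead at inference time.

def weight_size_gb(params_billion: float, bytes_per_param: float) -> float:
    """Return approximate weight storage in gigabytes."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# At ~4 bits (0.5 bytes) per parameter, a 120B-parameter model needs about
# 60GB for weights alone; with runtime overhead, the ~80GB figure cited
# above is plausible, and a 12GB card clearly cannot hold it.
print(weight_size_gb(120, 0.5))  # ~60 GB at 4-bit quantization
print(weight_size_gb(120, 2.0))  # ~240 GB at FP16, for comparison
```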

A callback to 2016

Nvidia founder and CEO Jensen Huang marked the occasion of the DGX Spark launch by personally delivering one of the first units to Elon Musk at SpaceX’s Starbase facility in Texas, echoing a similar delivery Huang made to Musk at OpenAI in 2016.

“In 2016, we built DGX-1 to give AI researchers their own supercomputer. I hand-delivered the first system to Elon at a small startup called OpenAI, and from it came ChatGPT,” Huang said in a statement. “DGX-1 launched the era of AI supercomputers and unlocked the scaling laws that drive modern AI. With DGX Spark, we return to that mission.”



Hackers can steal 2FA codes and private messages from Android phones


STEALING CODES ONE PIXEL AT A TIME

The malicious app that makes the “Pixnapping” attack work requires no permissions.

Samsung’s S25 phones. Credit: Samsung

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 and could likely be modified to work on other models with additional effort. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot

Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps

The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This causes the information to be sent to the Android rendering pipeline, the system that composites each app’s pixels so they can be displayed on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on the individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of the target pixels the attacker wants to steal and begin to check whether the pixel at those coordinates is white or non-white or, more generally, whether its color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
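To make the timing logic concrete, here is a toy Python simulation of that third step. It assumes a renderer whose per-frame time is slightly longer over non-white pixels, adds measurement noise, and recovers a hidden row of pixels by thresholding the median of 16 timing samples per pixel (the same sample budget the paper describes). This is a sketch of the principle with made-up numbers, not the actual attack code.

```python
import random
import statistics

def render_time(pixel_is_white: bool) -> float:
    """Simulated per-frame render time in seconds (assumed values)."""
    base = 8.0e-3                                  # 8 ms baseline frame
    extra = 0.0 if pixel_is_white else 1.5e-3      # slower over non-white
    return base + extra + random.gauss(0.0, 4e-4)  # timing noise

def recover_pixel(pixel_is_white: bool, samples: int = 16) -> bool:
    # The attacker sees only timings; it thresholds the median sample.
    times = [render_time(pixel_is_white) for _ in range(samples)]
    return statistics.median(times) < 8.75e-3      # True -> guessed white

secret = [random.random() < 0.5 for _ in range(48)]  # hidden pixel row
guesses = [recover_pixel(p) for p in secret]
accuracy = sum(g == s for g, s in zip(guesses, secret)) / len(secret)
print(f"recovered {accuracy:.0%} of pixels correctly")
```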

As Ars reader hotball put it in the comments:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks are probably of less value.

Post updated to add details about how the attack works.


Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky, or contact him on Signal at DanArs.82.



Why Signal’s post-quantum makeover is an amazing engineering achievement


COMING TO A PHONE NEAR YOU

New design sets a high standard for post-quantum readiness.

Credit: Aurich Lawson | Getty Images


The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted Web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they prioritize their limited security budgets fighting more immediate threats such as ransomware and espionage attacks? Given the expense and no clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network and only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that far fewer organizations support quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open-source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal Protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property that ensures the compromise of a key used to encrypt a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers Post-Compromise Security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a messaging party hits the send button or receives a message, and at other points, such as in graphical indicators that a party is currently typing and in the sending of read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction, the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements, mixing long- and short-term secrets to establish a shared secret. The creation of this “root key” allows the Double Ratchet to begin. Until 2023, the key agreement used X3DH. It now uses PQXDH, which makes the handshake quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even so, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.
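A minimal Python sketch of a symmetric ratchet shows the one-way structure: each chain key derives a one-time message key and then its own successor via HMAC, so yesterday’s keys can’t be recovered from today’s chain key. The HMAC constants follow the pattern the Double Ratchet specification suggests, but the key sizes and labels here are illustrative, not Signal’s production values.

```python
import hashlib
import hmac
import os

def kdf_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain key.

    Both outputs are one-way functions of the input, so compromising a
    current chain key reveals nothing about earlier message keys.
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = os.urandom(32)  # stand-in for a chain key derived from the root key
for i in range(3):
    mk, chain = kdf_step(chain)
    print(f"message {i}: key {mk.hex()[:16]}...")  # fresh key per message
```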

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Consider Alice and Bob, the fictional characters often invoked to explain asymmetric encryption: when Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will know this secret, too (because, as mentioned earlier, Bob previously sent that other key). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the newly generated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent prior to and after the compromise will remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What’s more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. That’s because an adversary capable of monitoring traffic passing from two or more messenger users can capture that data and feed it into a quantum computer—once one of sufficient power is viable—and calculate the ephemeral keys generated in the second ratchet.

In classical computing, it’s infeasible, if not impossible, for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is based on a mathematical one-way function. Also known as trapdoor functions, these problems are trivial to compute in one direction and substantially harder to compute in reverse. In elliptic curve cryptography, this one-way function is based on the discrete logarithm problem. The key parameters are based on specific points on an elliptic curve over the field of integers modulo some prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A quantum computer, by contrast, would be able to calculate the correct integers in a matter of hours or days. A formula known as Shor’s algorithm, which runs only on a quantum computer, effectively turns this one-way discrete logarithm function into a two-way one. Shor’s algorithm can similarly make quick work of the one-way function that’s the basis for the RSA algorithm.
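The asymmetry is easy to see with plain modular exponentiation standing in for elliptic curve math. In the Python toy below, the forward direction is a single pow() call, while inversion means brute-forcing the exponent, which is precisely the step Shor’s algorithm collapses on a quantum computer. The parameters are tiny demonstration values, nothing like real key sizes.

```python
# Toy discrete-log demo: easy forward, slow backward (demo-sized numbers).
p = 2**61 - 1        # a Mersenne prime used as the toy modulus
g = 3                # toy generator
x = 1_000_003        # the secret exponent

y = pow(g, x, p)     # forward direction: instant, even for huge exponents

def brute_force_dlog(target: int, g: int, p: int, limit: int):
    """Classical attack: try exponents one at a time."""
    acc = 1
    for guess in range(limit):
        if acc == target:
            return guess
        acc = acc * g % p
    return None

print(brute_force_dlog(y, g, p, limit=2_000_000))  # finds 1000003, slowly
```

Doubling the bit length of the secret roughly squares the classical search space, which is why realistically sized parameters are safe from classical computers but not from Shor’s algorithm.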

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. This update added PQXDH—a Signal-specific implementation that combined the key agreements from elliptic curves used in X3DH (specifically X25519) and the quantum-safe KEM—in the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today’s attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

The technical challenges were anything but easy. Elliptic curve keys generated in the X25519 implementation are about 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidth or computing resources. An ML-KEM-768 key, by contrast, is more than 1,000 bytes. Additionally, Signal’s design requires sending both an encapsulation key and a ciphertext, making the total size 2,272 bytes.
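The arithmetic behind that overhead is worth spelling out. The sizes below are the standardized ML-KEM-768 parameters from FIPS 203; the comparison baseline is an X25519 public key.

```python
# Bandwidth math behind the "71x" figure discussed below.
x25519_public_key = 32       # bytes per classical ratchet public key
mlkem768_encaps_key = 1184   # bytes, ML-KEM-768 encapsulation key
mlkem768_ciphertext = 1088   # bytes, ML-KEM-768 ciphertext

total = mlkem768_encaps_key + mlkem768_ciphertext
print(total)                        # 2272 bytes per ratchet step
print(total // x25519_public_key)   # 71 times the classical overhead
```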

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2,272-byte KEM material less often, say every 50th message or once a week, rather than with every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2,272 bytes into smaller chunks, say 71 chunks of 32 bytes each. Putting one chunk in each message sounds like a viable approach at first, but once again, the asynchronous environment of messaging made it unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could deal with this scenario by re-sending chunks after all 71 had gone out, but then an adversary monitoring the traffic could simply cause chunk 3 to be dropped each time, preventing Alice and Bob from ever completing the key exchange.

Signal developers ultimately went with a solution built on this multiple-chunk approach.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to “erasure codes,” a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of requiring all x chunks to arrive before the key can be reconstructed, the model requires only x − y chunks, where y is the acceptable number of lost packets. As long as that threshold is met, the new key can be established even when packet loss occurs.
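A minimal sketch of the idea: the simplest possible erasure code is a single XOR parity chunk, which lets the receiver rebuild the key from any n of the n+1 chunks sent. Signal’s construction is more general, and the chunk counts and sizes here are illustrative assumptions.

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_chunks(key: bytes, n: int) -> list:
    size = len(key) // n
    data = [key[i * size:(i + 1) * size] for i in range(n)]
    return data + [reduce(xor, data)]          # append XOR parity chunk

def reconstruct(received: list) -> bytes:
    missing = [i for i, c in enumerate(received) if c is None]
    assert len(missing) <= 1, "toy scheme tolerates only one lost chunk"
    if missing and missing[0] < len(received) - 1:
        # A lost data chunk equals the XOR of every chunk that arrived.
        present = [c for c in received if c is not None]
        received[missing[0]] = reduce(xor, present)
    return b"".join(received[:-1])             # drop parity, rejoin key

key = os.urandom(64)          # stand-in for chunked KEM key material
chunks = make_chunks(key, 4)  # 4 data chunks + 1 parity chunk
chunks[2] = None              # the network drops one packet
assert reconstruct(chunks) == key
print("key reconstructed despite packet loss")
```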

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal’s post includes several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: This new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
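Conceptually, that mixing step looks like the sketch below, which uses an HKDF-style extract-and-expand construction to combine the two ratchet outputs. The label, salt, and key sizes are illustrative assumptions, not Signal’s actual protocol constants.

```python
import hashlib
import hmac
import os

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) using HMAC-SHA256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()           # extract
    okm, block = b"", b""
    counter = 1
    while len(okm) < length:                                     # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

dh_secret = os.urandom(32)   # from the classical Diffie-Hellman ratchet
kem_secret = os.urandom(32)  # from the new quantum-safe (SPQR) ratchet

# An attacker must break BOTH inputs to learn the message key.
message_key = hkdf(dh_secret + kem_secret, salt=b"\x00" * 32,
                   info=b"hybrid-ratchet-demo")
print(message_key.hex())
```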

The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always did. Once apps receive the update, they’ll behave exactly as they did before upgrading.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”




How close are we to solid state batteries for electric vehicles?


Superionic materials promise greater range, faster charging, and more safety.

In early 2025, Mercedes-Benz ran its first road tests of an electric passenger car powered by a prototype solid-state battery pack. The carmaker predicts the next-gen battery will increase the electric vehicle’s driving range to over 620 miles (1,000 kilometers). Credit: Mercedes-Benz Group

Every few weeks, it seems, yet another lab proclaims yet another breakthrough in the race to perfect solid-state batteries: next-generation power packs that promise to give us electric vehicles (EVs) so problem-free that we’ll have no reason left to buy gas-guzzlers.

These new solid-state cells are designed to be lighter and more compact than the lithium-ion batteries used in today’s EVs. They should also be much safer, with nothing inside that can burn like those rare but hard-to-extinguish lithium-ion fires. They should hold a lot more energy, turning range anxiety into a distant memory with consumer EVs able to go four, five, six hundred miles on a single charge.

And forget about those “fast” recharges lasting half an hour or more: Solid-state batteries promise EV fill-ups in minutes—almost as fast as any standard car gets with gasoline.

This may all sound too good to be true—and it is, if you’re looking to buy a solid-state-powered EV this year or next. Look a bit further, though, and the promises start to sound more plausible. “If you look at what people are putting out as a road map from industry, they say they are going to try for actual prototype solid-state battery demonstrations in their vehicles by 2027 and try to do large-scale commercialization by 2030,” says University of Washington materials scientist Jun Liu, who directs a university-government-industry battery development collaboration known as the Innovation Center for Battery500 Consortium.

Indeed, the challenge is no longer to prove that solid-state batteries are feasible. That has long since been done in any number of labs around the world. The big challenge now is figuring out how to manufacture these devices at scale, and at an acceptable cost.

Superionic materials to the rescue

Not so long ago, says Eric McCalla, who studies battery materials at McGill University in Montreal and is a coauthor of a paper on battery technology in the 2025 Annual Review of Materials Research, this heady rate of advancement toward powering electric vehicles was almost unimaginable.

Until about 2010, explains McCalla, “the solid-state battery had always seemed like something that would be really awesome—if we could get it to work.” Like current EV batteries, it would still be built with lithium, an unbeatable element when it comes to the amount of charge it can store per gram. But standard lithium-ion batteries use a liquid, a highly flammable one at that, to allow easy passage of charged particles (ions) between the device’s positive and negative electrodes. The new battery design would replace the liquid with a solid electrolyte that would be nearly impervious to fire—while allowing for a host of other physical and chemical changes that could make the battery faster charging, lighter in weight, and all the rest.

“But the material requirements for these solid electrolytes were beyond the state of the art,” says McCalla. After all, standard lithium-ion batteries have a good reason for using a liquid electrolyte: It gives the ionized lithium atoms inside a fluid medium to move through as they shuttle between the battery’s two electrodes. This back-and-forth cycle is how any battery stores and releases energy—the chemical equivalent of pumping water from a low-lying reservoir to a high mountain lake, then letting it run back down through a turbine whenever you need some power. This hypothetical new battery would somehow have to let those lithium ions flow just as freely—but through a solid.
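The caption below spells out the mapping; in formula terms, stored energy is simply charge capacity times voltage, just as a mountain lake’s stored energy scales with volume times height. A quick sketch with illustrative numbers, not from any particular vehicle:

```python
# Energy = capacity x voltage, the formula behind the reservoir analogy.
capacity_ah = 200    # "volume of the lake": charge the pack holds (Ah)
pack_voltage = 400   # "height of the lake": energy per unit charge (V)
energy_kwh = capacity_ah * pack_voltage / 1000
print(f"{energy_kwh:.0f} kWh")  # 80 kWh, typical of a long-range EV pack
```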


Storing electrical energy in a rechargeable battery is like pumping water from a low-lying reservoir up to a high mountain lake. Likewise, using that energy to power an external device is like letting the water flow back downhill through a generator. The volume of the mountain lake corresponds to the battery’s capacity, or how much charge it can hold, while the lake’s height corresponds to the battery’s voltage—how much energy it gives to each unit of charge it sends through the device.

Credit: Knowable Magazine


This seemed hopeless for larger uses such as EVs, says McCalla. Certain polymers and other solids were known to let ions pass, but at rates that were orders of magnitude slower than liquid electrolytes. In the past two decades, however, researchers have discovered several families of lithium-rich compounds that are “superionic”—meaning that some atoms behave like a crystalline solid while others behave more like a liquid—and that can conduct lithium ions as fast as standard liquid electrolytes, if not faster.

“So the bottleneck suddenly is not the bottleneck anymore,” says McCalla.

True, manufacturing these batteries can be a challenge. For example, some of the superionic solids are so brittle that they require special equipment for handling, while others must be processed in ultra-low humidity chambers lest they react with water vapor and generate toxic hydrogen sulfide gas.

Still, the suddenly wide-open potential of solid-state batteries has led to a surge of research and development money from funding agencies around the globe—not to mention the launch of multiple startup companies working in partnership with carmakers such as Toyota, Volkswagen, and many more. Although not all the numbers are public, investments in solid-state battery development are already in the billions of dollars worldwide.

“Every automotive company has said solid-state batteries are the future,” says University of Maryland materials scientist Eric Wachsman. “It’s just a question of, When is that future?”

The rise of lithium-ion batteries

Perhaps the biggest reason to ask that “when” question, aside from the still-daunting manufacturing challenges, is a stark economic reality: Solid-state batteries will have to compete in the marketplace with a standard lithium-ion industry that has an enormous head start.

“Lithium-ion batteries have been developed and optimized over the last 30 years, and they work really great,” says physicist Alex Louli, an engineer and spokesman at one of the leading solid-state battery startups, San Jose, California-based QuantumScape.


Charging a standard lithium-ion battery (top) works by applying a voltage between cathode and anode. This pulls lithium atoms from the cathode and strips off an electron from each. The now positively charged lithium ions then flow across the membrane to the negatively charged anode. There, the ions reunite with the electrons, which flowed through an external circuit as an electric current. These now neutral atoms nest in the graphite lattice until needed again. The battery’s discharge cycle (bottom) is just the reverse: Electrons deliver energy to your cell phone or electric car as they flow via a circuit from anode to cathode, while lithium ions race through the membrane to meet them there.

Credit: Knowable Magazine


They’ve also gotten really cheap, comparatively speaking. When Japan’s Sony Corporation introduced the first commercial lithium-ion battery in 1991, drawing on a worldwide research effort dating back to the 1950s, it powered one of the company’s camcorders and cost the equivalent of $7,500 for every kilowatt-hour (kWh) of energy it stored. By April 2025, lithium-ion battery prices had plummeted to $115 per kWh and were projected to fall toward $80 per kWh or less by 2030, low enough to make a new EV substantially cheaper than the equivalent gasoline-powered vehicle.

“Most of these advancements haven’t really been down to any fundamental chemistry improvements,” says Mauro Pasta, an applied electrochemist at the University of Oxford. “What’s changed the game has been the economies of scale in manufacturing.”

Liu points to a prime example: the roll-to-roll process used for the cylindrical batteries found in most of today’s EVs. “You make a slurry,” says Liu, “then you cast the slurry into thin films, roll the films together with very high speed and precision, and you can make hundreds and thousands of cells very, very quickly with very high quality.”

Lithium-ion cells have also seen big advances in safety. The existence of that flammable electrolyte means that EV crashes can and do lead to hard-to-extinguish lithium-ion fires. But thanks to the circuit breakers and other safeguards built into modern battery packs, only about 25 EVs catch fire out of every 100,000 sold, versus some 1,500 fires per 100,000 conventional cars—which, of course, carry around large tanks of explosively flammable gasoline.

In fact, says McCalla, the standard lithium-ion industry is so far ahead that solid-state might never catch up. “EVs are going to scale today,” he says, “and they’re going with the technology that’s affordable today.” Indeed, battery manufacturers are ramping up their lithium-ion capacity as fast as they can. “So I wonder if the train has already left the station.”

But maybe not. Solid-state technology does have a geopolitical appeal, notes Ying Shirley Meng, a materials scientist at the University of Chicago and Argonne National Laboratory. “With lithium-ion batteries the game is over—China already dominates 70 percent of the manufacturing,” she says. So for any country looking to lead the next battery revolution, “solid-state presents a very exciting opportunity.”

Performance potential

Another plus is improved performance. At the very time that EV buyers are looking for ever greater range and charging speed, says Louli, the standard lithium-ion recipe is hitting a performance plateau. To do better, he says, “you have to go back and start doing some material innovations”—like those in solid-state batteries.

Take the standard battery’s liquid electrolyte, for example. It’s not only flammable, but also a limitation on charging speed. When you plug in an electric car, the charging cable acts as an external circuit that’s applying a voltage between the battery’s two electrodes, the cathode and the anode. The resulting electrical forces are strong enough to pull lithium atoms out of the cathode and to strip one electron from each atom. But when they drag the resulting ions through the electrolyte toward the anode, they hit the speed limit: Try to rush the ions along by upping the voltage too far and the electrolyte will chemically break down, ending the battery’s charging days forever.

So score one for solid-state batteries: Not only do the best superionic conductors offer a faster ion flow than liquid electrolytes, they also can tolerate higher voltages—all of which translates into EV recharges in under 10 minutes, versus half an hour or more for today’s lithium-ion power packs.

Score another win for solid-state when the ions arrive at the opposite electrode, the anode, during charging. This is where they reunite with their lost electrons, which have taken the long way around through the external circuit. And this is where standard lithium-ion batteries store the newly neutralized lithium atoms in a layer of graphite.

A solid-state battery doesn’t require a graphite cage to store lithium ions at the anode. This shrinks the overall size of the battery and increases its efficiency in uses such as an electric vehicle power pack. The solid-state design also replaces the porous membrane in the middle with a sturdier barrier. The aim is to create a battery that’s lighter, safer, stores more energy, and makes recharging more convenient than current electric car batteries.

Credit: Knowable Magazine


Graphite anodes were a major commercial advance in 1991—the innovation that finally brought lithium-ion batteries out of the lab and into the marketplace. Graphite is cheap, chemically stable, excellent at conducting electricity, and able to slot those incoming lithium atoms into its hexagonal carbon lattice like so many eggs in an egg carton.

But graphite imposes yet another charging rate limit, since the lattice can handle only so many ions crowding in at once. And it’s heavy, wasting a lot of mass and volume on a simple container, says Louli: “Graphite is an accommodating host, but it does not deliver energy itself—it’s a passive component.” That’s why range-conscious automakers are eager for an alternative to graphite: The more capacity an EV can cram into the same-sized battery pack, and the less weight it has to haul around, the farther it can go on a single charge.

The ultimate alternative would be no cage at all, with no wasted space or weight—just incoming ions condensing into pure lithium metal with every charging cycle. In effect, such a metallic lithium anode would create and then dissolve itself with every charge and discharge cycle—while storing maybe 10 times more electrical energy per gram than a graphite anode.
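The roughly tenfold figure follows from textbook specific capacities; the values below are standard reference numbers, cited here for illustration.

```python
# Theoretical specific capacity of each anode material (textbook values).
graphite_mah_per_g = 372   # lithium stored in a fully lithiated LiC6 lattice
li_metal_mah_per_g = 3860  # pure lithium metal, no host lattice needed
print(f"{li_metal_mah_per_g / graphite_mah_per_g:.1f}x")  # about 10.4x
```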

Such lithium-metal anodes have been demonstrated in the lab since at least the 1970s, and even featured in some early, unsuccessful attempts at commercial lithium batteries. But even after decades of trying, says Louli, no one has been able to make metal anodes work safely and reliably in contact with liquid electrolytes. For one thing, he says, reactions between the liquid electrolyte and the lithium metal degrade them both, resulting in a very bad battery lifetime.

And for another, adds Wachsman, “when you are charging a battery with liquids, the lithium going to the anode can plate out non-uniformly and form what are called dendrites.” These jagged spikes of metal can grow in unpredictable ways and pierce the battery’s separator layer: a thin film of electrically insulating polymer that keeps the two electrodes from touching one another. Breaching that barrier could easily cause a short circuit that abruptly ends the device’s useful life, or even sets it on fire.


Standard lithium-ion batteries don’t use lithium-metal anodes because there is too high a risk of the metal forming sharp spikes called dendrites. Such dendrites can easily pierce the porous polymer membrane that separates anode from cathode, causing a short-circuit or even sparking a fire. Solid-state batteries replace the membrane with a solid barrier.

Credit: Knowable Magazine


Now compare this with a battery that replaces both the liquid electrolyte and the separator with a solid-state layer tough enough to resist those spikes, says Wachsman. “It has the potential of, one, being stable to higher voltages; two, being stable in the presence of lithium metal; and three, preventing those dendrites”—just about everything you need to make those ultra-high-energy-density lithium-metal anodes a practical reality.

“That is what is really attractive about this new battery technology,” says Louli. And now that researchers have found so many superionic solids that could potentially work, he adds, “this is what’s driving the push for it.”

Manufacturing challenges

Increasingly, in fact, the field’s focus has shifted from research to practice, figuring out how to work the same kind of large-scale, low-cost manufacturing magic that’s made the standard lithium-ion architecture so dominant. These new superionic materials haven’t made it easy.

A prime example is the class of sulfides discovered by Japanese researchers in 2011. Not only were these sulfides among the first of the new superionics to be discovered, says Wachsman, they are still the leading contenders for early commercialization.

Major investments have come from startups such as Colorado-based Solid Power and Massachusetts-based Factorial Energy, as well as established battery giants such as China’s CATL and global carmakers such as Toyota and Honda.

And there’s one big reason for the focus on superionic sulfides, says Wachsman: “They’re easy to drop into existing battery cell manufacturing lines,” including the roll-to-roll process. “Companies have got billions of dollars invested in the existing infrastructure, and they don’t want to just displace that with something new.”

Yet these superionic sulfides also have some significant downsides—most notably, their extreme sensitivity to humidity. This complicates the drop-in process, says Oxford’s Pasta. The dry rooms that are currently used to manufacture lithium-ion batteries have a humidity content that is not nearly low enough for sulfide electrolytes, and would have to be retooled. That sensitivity also poses a safety risk if the batteries are ever ruptured in an accident, he says: “If you expose the sulfides to humidity in the air you will generate hydrogen sulfide gas, which is extremely toxic.”

All of which is why startups such as QuantumScape, and the Maryland-based Ion Storage Systems that spun out of Wachsman’s lab in 2015, are looking beyond sulfides to solid-state oxide electrolytes. These materials are essentially ceramics, says Wachsman, made in a high-tech version of pottery class: “You shape the clay, you fire it in a kiln, and it’s a solid.” Except that in this case, it’s a superionic solid that’s all but impervious to humidity, heat, fire, high voltage, and highly reactive lithium metal.

Yet that’s also where the manufacturing challenges start. Superionic or not, for example, ceramics are too brittle for roll-to-roll processing. Once they have been fired and solidified, says Wachsman, “you have to handle them more like a semiconductor wafer, with machines to cut the sheets to size and robotics to move them around.”

Then there’s the “reversible breathing” that plagues oxide and sulfide batteries alike: “With every charging cycle we’re plating and stripping lithium metal at the anode,” explains Louli. “So your entire cell stack will have a thickness increase when you charge and a thickness decrease when you discharge”—a cycle of tiny changes in volume that every solid-state battery design has to allow for.

At QuantumScape, for example, individual battery cells are made by stacking a number of gossamer-thin oxide sheets like a deck of cards, then encasing this stack inside a metal frame that is just thick enough to let the anode layer on each sheet freely expand and contract. The stack and the frame together are then vacuum-sealed into a soft-sided pouch, says Louli, “so if you pack the cells frame to frame, the stacks can breathe and not push on the adjacent cells.”

In a similar way, says Wachsman, all the complications of solid-state batteries have ready solutions—but solutions that inevitably add complexity and cost. Thus the field’s increasingly urgent obsession with manufacturing. Before an auto company will even consider adopting a new EV battery, he says, “it not only has to be better-performing than their current battery, it has to be cheaper.”

And the only way to make complicated technology cheaper is with economies of scale. “That’s why the biggest impediment to solid-state batteries is just the cost of standing up one of these gigafactories to make them in sufficient volume,” says Wachsman. “That’s why there’s probably going to be more solid-state batteries in early adopter-type applications that don’t require that kind of volume.”

Still, says Louli, the long-term demand is definitely there. “What we’re trying to enable by combining the lithium-metal anode with solid-state technology is threefold,” he says: “Higher energy, higher power and improved safety. So for high-performance applications like electric vehicles—or other applications that require high power density, such as drones or even electrified aviation—solid-state batteries are going to be well-suited.”

This story originally appeared in Knowable Magazine.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



UK antitrust regulator takes aim at Google’s search dominance

Google is facing multiple antitrust actions in the US, and European regulators have been similarly tightening the screws. You can now add the UK to the list of Google’s governmental worries. The country’s antitrust regulator, known as the Competition and Markets Authority (CMA), has confirmed that Google has “strategic market status,” paving the way to more limits on how Google does business in the UK. Naturally, Google objects to this course of action.

The designation is connected to the UK’s new digital markets competition regime, which was enacted at the beginning of the year. Shortly after, the CMA announced it was conducting an investigation into whether Google should be designated with strategic market status. The outcome of that process is a resounding “yes.”

This label does not mean Google has done anything illegal or that it is subject to immediate regulation. It simply means the company has “substantial and entrenched market power” in one or more areas under the purview of the CMA. Specifically, the agency has found that Google is dominant in search and search advertising, holding a greater than 90 percent share of Internet searches in the UK.

In Google’s US antitrust trials, the rapid rise of generative AI has muddied the waters. Google has claimed on numerous occasions that the proliferation of AI firms offering search services means there is ample competition. In the UK, regulators note that Google’s Gemini AI assistant is not in the scope of the strategic market status designation. However, some AI features connected to search, like AI Overviews and AI Mode, are included.

According to the CMA, consultations on possible interventions to ensure effective competition will begin later this year. The agency’s first set of antitrust measures will likely expand on solutions that Google has introduced in other regions or has offered on a voluntary basis in the UK. This could include giving publishers more control over how their data is used in search and “choice screens” that suggest Google alternatives to users. Measures that require new action from Google could be announced in the first half of 2026.

UK antitrust regulator takes aim at Google’s search dominance Read More »

rocket-report:-bezos’-firm-will-package-satellites-for-launch;-starship-on-deck

Rocket Report: Bezos’ firm will package satellites for launch; Starship on deck


The long, winding road for Franklin Chang-Diaz’s plasma rocket engine takes another turn.

Blue Origin’s second New Glenn booster left its factory this week for a road trip to the company’s launch pad a few miles away. Credit: Blue Origin

Welcome to Edition 8.14 of the Rocket Report! We’re now more than a week into a federal government shutdown, but there’s been little effect on the space industry. Military space operations are continuing unabated, and NASA continues preparations at Kennedy Space Center, Florida, for the launch of the Artemis II mission around the Moon early next year. The International Space Station is still flying with a crew of seven in low-Earth orbit, and NASA’s fleet of spacecraft exploring the cosmos remains active. What’s more, so much of what the nation does in space is now done by commercial companies largely (but not completely) immune from the pitfalls of politics. But the effect of the shutdown on troops and federal employees shouldn’t be overlooked. They will soon miss their first paychecks unless political leaders reach an agreement to end the stalemate.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Danger from dead rockets. A new listing of the 50 most concerning pieces of space debris in low-Earth orbit is dominated by relics more than a quarter-century old, primarily dead rockets left to hurtle through space at the end of their missions, Ars reports. “The things left before 2000 are still the majority of the problem,” said Darren McKnight, lead author of a paper presented October 3 at the International Astronautical Congress in Sydney. “Seventy-six percent of the objects in the top 50 were deposited last century, and 88 percent of the objects are rocket bodies. That’s important to note, especially with some disturbing trends right now.”

Littering in LEO … The disturbing trends mainly revolve around China’s actions in low-Earth orbit. “The bad news is, since January 1, 2024, we’ve had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years,” McKnight told Ars. China is responsible for leaving behind 21 of those 26 rockets. Overall, Russia and the Soviet Union lead the pack with 34 objects listed in McKnight’s Top 50, followed by China with 10, the United States with three, Europe with two, and Japan with one. Russia’s SL-16 and SL-8 rockets are the worst offenders, combining to take 30 of the Top 50 slots. An impact with even a modestly sized object at orbital velocity would create countless pieces of debris, potentially triggering a cascading series of additional collisions clogging LEO with more and more space junk, a scenario called the Kessler Syndrome.
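To see why even a small object is so destructive at orbital speed, it helps to compare its kinetic energy to a familiar benchmark. The quick Python sketch below uses assumed illustrative values (a 1-kilogram fragment and a 10 km/s closing speed, typical for head-on LEO collisions); the TNT equivalence of 4.184 MJ/kg is the standard convention.

```python
# Kinetic energy of a hypothetical debris strike at LEO closing speeds.
mass_kg = 1.0               # assumed: a modestly sized fragment
closing_speed_m_s = 10_000  # assumed: ~10 km/s head-on closing speed in LEO

kinetic_energy_j = 0.5 * mass_kg * closing_speed_m_s**2  # KE = 1/2 m v^2

TNT_J_PER_KG = 4.184e6  # standard TNT energy equivalence
print(f"{kinetic_energy_j / 1e6:.0f} MJ, "
      f"about {kinetic_energy_j / TNT_J_PER_KG:.0f} kg of TNT")
# 50 MJ, roughly the energy of 12 kg of TNT, delivered to a single point:
# more than enough to shatter both objects into thousands of fragments.
```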

New Shepard flies again. Blue Origin, Jeff Bezos’ space company, launched its sixth crewed New Shepard flight of the year on Wednesday as the company works to increase the vehicle’s flight rate, Space News reports. This was the 36th flight of Blue Origin’s suborbital New Shepard rocket. The passengers included Jeff Elgin, Danna Karagussova, Clint Kelly III, Will Lewis, Aaron Newman, and Vitalii Ostrovsky. Blue Origin said it has now flown 86 humans (80 individuals) into space. The New Shepard booster returned to a pinpoint propulsive landing, and the capsule parachuted into the desert a few miles from the launch site near Van Horn, Texas.

Two-month turnaround … This flight continued Blue Origin’s trend of launching New Shepard about once per month. The company has two capsules and two boosters in its active inventory, and each vehicle has flown about once every two months this year. Blue Origin currently has command of the space tourism and suborbital research market as its main competitor in this sector, Virgin Galactic, remains grounded while it builds a next-generation rocket plane. (submitted by EllPeaTea)

NASA still interested in former astronaut’s rocket engine. NASA has awarded the Ad Astra Rocket Company a $4 million, two-year contract for continued development of the company’s Variable Specific Impulse Magnetoplasma Rocket (VASIMR) concept, Aviation Week & Space Technology reports. Ad Astra, founded by former NASA astronaut Franklin Chang-Diaz, claims the engine has the potential to send human explorers to Mars within 45 days using a nuclear power source rather than solar power. The new contract will enable federal funding to support development of the engine’s radio frequency, superconducting magnet, and structural exoskeleton subsystems.

Slow going … Houston-based Ad Astra said in a press release that it sees the high-power plasma engine as “nearing flight readiness.” We’ve heard this before. The VASIMR engine has been in development for decades now, beset by a lack of stable funding and the technical hurdles inherent in designing and testing such demanding technology. For example, Ad Astra once planned a critical 100-hour, 100-kilowatt ground test of the VASIMR engine in 2018. The test still hasn’t happened. Engineers discovered a core component of the engine tended to overheat as power levels approached 100 kilowatts, forcing a redesign that set the program back by at least several years. Now, Ad Astra says it is ready to build and test a pair of 150-kilowatt engines, one of which is intended to fly in space at the end of the decade.

Gilmour eyes return to flight next year. Australian rocket and satellite startup Gilmour Space Technologies is looking to return to the launch pad next year after the first attempt at an orbital flight failed over the summer, Aviation Week & Space Technology reports. “We are well capitalized. We are going to be launching again next year,” Adam Gilmour, the company’s CEO, said October 3 at the International Astronautical Congress in Sydney.

What happened? … Gilmour didn’t provide many details about the cause of the launch failure in July, other than to say it appeared to be something the company didn’t test for ahead of the flight. The Eris rocket flew for 14 seconds, losing control and crashing a short distance from the launch pad in the Australian state of Queensland. If there’s any silver lining, Gilmour said the failure didn’t damage the launch pad, and the rocket’s use of a novel hybrid propulsion system limited the destructive power of the blast when it struck the ground.

Stoke Space’s impressive funding haul. Stoke Space announced a significant capital raise on Wednesday: a total of $510 million in Series D funding. The new financing doubles the total capital raised by Stoke Space, founded in 2020, to $990 million, Ars reports. The infusion of money will provide the company with “the runway to complete development” of the Nova rocket and demonstrate its capability through its first flights, said Andy Lapsa, the company’s co-founder and chief executive, in a news release characterizing the new funding.

A futuristic design … Stoke is working toward a 2026 launch of the medium-lift Nova rocket. The rocket’s innovative design is intended to be fully reusable from the payload fairing on down, with a regeneratively cooled heat shield on the vehicle’s second stage. In fully reusable mode, Nova will have a payload capacity of 3 metric tons to low-Earth orbit, and up to 7 tons in fully expendable mode. Stoke is building a launch pad for the Nova rocket at Cape Canaveral Space Force Station, Florida.

SpaceX took an unusual break from launching. SpaceX launched its first Falcon 9 rocket from Florida in 12 days during the predawn hours of Tuesday morning, Spaceflight Now reports. The launch gap was driven by a run of persistent, daily storms in Central Florida and over the Atlantic Ocean, including hurricanes that prevented deployment of SpaceX’s drone ships to support booster landings. The break ended with the launch of 28 more Starlink broadband satellites. SpaceX launched three Starlink missions in the interim from Vandenberg Space Force Base, California.

Weather still an issue … Weather conditions on Florida’s Space Coast are often volatile, particularly in the evenings during summer and early autumn. SpaceX’s next launch from Florida was supposed to take off Thursday evening, but officials pushed it back to no earlier than Saturday due to a poor weather forecast over the next two days. Weather still gets a vote in determining whether a rocket lifts off or doesn’t, despite SpaceX’s advancements in launch efficiency and the Space Force’s improved weather monitoring capabilities at Cape Canaveral.

ArianeGroup chief departs for train maker. Current ArianeGroup CEO Martin Sion has been named the new head of French train maker Alstom and will officially take up the role in April 2026, European Spaceflight reports. Sion assumed the role of ArianeGroup’s chief executive in 2023, replacing the former CEO, who left the company after delays in the debut of its main product: the Ariane 6 rocket. Sion’s appointment was announced by Alstom, but ArianeGroup has not made any official statement on the matter.

Under pressure … The change in ArianeGroup’s leadership comes as the company ramps up production and increases the launch cadence of the Ariane 6 rocket, which has now flown three times, with a fourth launch due next month. ArianeGroup’s subsidiary, Arianespace, seeks to increase the Ariane 6’s launch cadence to 10 missions per year by 2029. ArianeGroup and its suppliers will need to drastically improve factory throughput to reach this goal.

New Glenn emerges from factory. Blue Origin rolled the first stage of its massive New Glenn rocket from its hangar on Wednesday morning in Florida, kicking off the final phase of the campaign to launch the heavy-lift vehicle for the second time, Ars reports. In sharing video of the rollout to Launch Complex 36, the company did not provide a launch target for the mission, which seeks to put two small Mars-bound payloads into orbit: a pair of identical spacecraft, known as ESCAPADE, that will study the solar wind at Mars. However, sources told Ars that on the current timeline, Blue Origin is targeting a launch window of November 9 to November 11. This assumes pre-launch activities, including a static-fire test of the first stage, go well.

Recovery or bust? … Blue Origin has a lot riding on this booster, named “Never Tell Me The Odds,” which it will seek to recover and reuse. Despite the name of the booster, the company is quietly confident that it will successfully land the first stage on a drone ship named Jacklyn. Internally, engineers at Blue Origin believe there is about a 75 percent chance of success. The first booster malfunctioned before landing on the inaugural New Glenn test flight in January. Company officials are betting big on recovering the booster this time, with plans to reuse it early next year to launch Blue’s first lunar lander to the Moon.

SpaceX gets bulk of this year’s military launch orders. Around this time each year, the US Space Force convenes a Mission Assignment Board to dole out contracts to launch the nation’s most critical national security satellites. The military announced this year’s launch orders Friday, and SpaceX was the big winner, Ars reports. Space Systems Command, the unit responsible for awarding military launch contracts, selected SpaceX to launch five of the seven missions up for assignment this year. United Launch Alliance (ULA), a 50-50 joint venture between Boeing and Lockheed Martin, won contracts for the other two. These missions for the Space Force and the National Reconnaissance Office are still at least a couple of years away from flying.

Vulcan getting more expensive … A closer examination of this year’s National Security Space Launch contracts reveals some interesting things. The Space Force is paying SpaceX $714 million for the five launches awarded Friday, for an average of roughly $143 million per mission. ULA will receive $428 million for two missions, or $214 million for each launch. That’s about 50 percent more expensive than SpaceX’s price per mission. This is in line with the prices the Space Force paid SpaceX and ULA for last year’s contracts. However, look back a little further and you’ll find ULA’s prices for military launches have, for some reason, increased significantly over the last few years. In late 2023, the Space Force awarded a $1.3 billion deal to ULA for a batch of 11 launches at an average cost per mission of $119 million. A few months earlier, Space Systems Command assigned six launches to ULA for $672 million, or $112 million per mission.
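The per-mission arithmetic in that paragraph is easy to check; here is a quick sketch using only the contract figures reported above:

```python
# Per-mission averages computed from the contract values cited above.
spacex_total, spacex_missions = 714e6, 5   # this year's SpaceX awards
ula_total, ula_missions = 428e6, 2         # this year's ULA awards

spacex_avg = spacex_total / spacex_missions  # ~$143M per mission
ula_avg = ula_total / ula_missions           # $214M per mission
premium_pct = (ula_avg / spacex_avg - 1) * 100

print(f"SpaceX: ${spacex_avg / 1e6:.0f}M | ULA: ${ula_avg / 1e6:.0f}M "
      f"| ULA premium: {premium_pct:.0f}%")

# Earlier ULA awards for comparison (the rounded $1.3B headline figure
# yields ~$118M here; the article's $119M reflects the unrounded value):
print(f"Late 2023 batch: ${1.3e9 / 11 / 1e6:.0f}M per mission")
print(f"Earlier batch:   ${672e6 / 6 / 1e6:.0f}M per mission")
```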

Starship Flight 11 nears launch. SpaceX rolled the Super Heavy booster for the next test flight of the company’s Starship mega-rocket out to the launch pad in Texas this week. The booster stage, with 33 methane-fueled engines, will power the Starship into the upper atmosphere during the first few minutes of flight. This booster is flight-proven, having previously launched and landed on a test flight in March.

Next steps … With the Super Heavy booster installed on the pad, the next step for SpaceX will be the rollout of the Starship upper stage. That is expected to happen in the coming days. Ground crews will raise Starship atop the Super Heavy booster to fully stack the rocket to its total height of more than 400 feet (120 meters). If everything goes well, SpaceX is targeting liftoff of the 11th full-scale test flight of Starship and Super Heavy as soon as Monday evening. (submitted by EllPeaTea)

Blue Origin takes on a new line of business. Blue Origin won a US Space Force competition to build a new payload processing facility at Cape Canaveral Space Force Station, Florida, Spaceflight Now reports. Under the terms of the $78.2 million contract, Blue Origin will build a new facility capable of handling payloads for up to 16 missions per year. The Space Force expects to use about half of that capacity, with the rest available to NASA or Blue Origin’s commercial customers. This contract award follows a $77.5 million agreement the Space Force signed with Astrotech earlier this year to expand the footprint of its payload processing facility at Vandenberg Space Force Base, California.

Important stuff … Ground infrastructure often doesn’t get the same level of attention as rockets, but the Space Force has identified bottlenecks in payload processing as potential constraints on ramping up launch cadences at the government’s spaceports in Florida and California. Currently, there are only a handful of payload processing facilities in the Cape Canaveral area, and most of them are only open to a single user, such as SpaceX, Amazon, the National Reconnaissance Office, or NASA. So, what exactly is payload processing? The Space Force said Blue Origin’s new facility will include space for “several pre-launch preparatory activities” that include charging batteries, fueling satellites, loading other gaseous and fluid commodities, and encapsulation. To accomplish those tasks, Blue Origin will create “a clean, secure, specialized high-bay facility capable of handling flight hardware, toxic fuels, and explosive materials.”

Next three launches

Oct. 11: Gravity 1 | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 02:15 UTC

Oct. 12: Falcon 9 | Project Kuiper KF-03 | Cape Canaveral Space Force Station, Florida | 00:41 UTC

Oct. 13: Starship/Super Heavy | Flight 11 | Starbase, Texas | 23:15 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: Bezos’ firm will package satellites for launch; Starship on deck Read More »

childhood-vaccines-safe-for-a-little-longer-as-cdc-cancels-advisory-meeting

Childhood vaccines safe for a little longer as CDC cancels advisory meeting

An October meeting of a key federal vaccine advisory committee has been canceled without explanation, sparing the evidence-based childhood vaccination schedule from more erosion—at least for now.

The Advisory Committee on Immunization Practices (ACIP) for the Centers for Disease Control and Prevention was planning to meet on October 22 and 23, which would have been the committee’s fourth meeting this year. But the meeting schedule was updated in the past week to remove those dates and replace them with “2025 meeting, TBD.”

Ars Technica contacted the Department of Health and Human Services to ask why the meeting was canceled. HHS press secretary Emily Hilliard offered no explanation, only saying that the “official meeting dates and agenda items will be posted on the website once finalized.”

ACIP is tasked with publicly reviewing and evaluating the wealth of safety and efficacy data on vaccines and then offering evidence-based recommendations for their use. Once the committee’s recommendations are adopted by the CDC, they set national vaccination standards for children and establish which shots federal programs and private insurance companies are required to fully cover.

In the past, the committee has been stacked with highly esteemed, thoroughly vetted medical experts, who diligently conducted their somewhat esoteric work on immunization policy with little fanfare. That changed when ardent anti-vaccine activist Robert F. Kennedy Jr. became health secretary. In June, Kennedy abruptly and unilaterally fired all 17 ACIP members, falsely accusing them of being riddled with conflicts of interest. He then installed his own hand-selected members. With the exception of one advisor—pediatrician and veteran ACIP member Cody Meissner—the members are poorly qualified, have gone through little vetting, and embrace the same anti-vaccine and dangerous fringe ideas as Kennedy.

Corrupted committee

So far this year, Kennedy’s advisors have met twice, producing chaotic meetings during which members revealed a clear lack of understanding of the data at hand and the process of setting vaccine recommendations, all while making policy decisions long sought by anti-vaccine activists. The first meeting, in June, included seven members selected by Kennedy. In that meeting, the committee rescinded the recommendation for flu vaccines containing a preservative called thimerosal based on false claims from anti-vaccine groups that it causes autism. The panel also ominously said it would re-evaluate the entire childhood vaccination schedule, putting life-saving shots at risk.

Childhood vaccines safe for a little longer as CDC cancels advisory meeting Read More »

intel’s-next-generation-panther-lake-laptop-chips-could-be-a-return-to-form

Intel’s next-generation Panther Lake laptop chips could be a return to form

Intel says that systems with these chips in them should be shipping by the end of the year. In recent years, the company has launched a small handful of ultraportable-focused CPUs at the end of the year, and then followed that up with a more fully fleshed-out midrange and high-end lineup at CES in January—we’d expect Intel to stick to that basic approach here.

Panther Lake draws near

Panther Lake tries to combine different aspects of the last-generation Lunar Lake and Arrow Lake chips. Intel

Intel’s first Core Ultra chips, codenamed Meteor Lake, were introduced two years ago. There were three big changes that separated these from the 14th-generation Core CPUs and their predecessors: They were constructed of multiple silicon tiles, fused together into one with Intel’s Foveros packaging technologies; some of those tiles were manufactured by TSMC rather than Intel; and they added a neural processing unit (NPU) that could be used for on-device machine learning and generative AI applications.

The second-generation Core Ultra chips continued to do all three of those things, but Intel pursued an odd bifurcated strategy that gave different Core Ultra 200-series processors significantly different capabilities.

The most interesting models, codenamed Lunar Lake (aka Core Ultra 200V), integrated the system RAM on the CPU package, which improved performance and power consumption while making the chips more expensive to buy and more complicated to manufacture. These chips included Intel’s most up-to-date Arc GPU architecture, codenamed Battlemage, plus an NPU that met the performance requirements for Microsoft’s Copilot+ PC initiative.

But Core Ultra 200V chips were mostly used in high-end thin-and-light laptops. Lower-cost and higher-performance laptops got the other kind of Core Ultra 200 chip, codenamed Arrow Lake, which was a mishmash of old and new. The CPU cores used the same architecture as Lunar Lake, and there were usually more of them. But the GPU architecture was older and slower, and the NPU didn’t meet the requirements for Copilot+. If Lunar Lake was all-new, Arrow Lake was mostly an updated CPU design fused to a tweaked version of the original Meteor Lake design (confused by all these lakes yet? Welcome to my world).

Intel’s next-generation Panther Lake laptop chips could be a return to form Read More »

man-gets-drunk,-wakes-up-with-a-medical-mystery-that-nearly-kills-him

Man gets drunk, wakes up with a medical mystery that nearly kills him

And what about the lungs? A number of things could explain the problems in his lungs—including infections from soil bacteria he might encounter in his construction work or a parasitic infection found in Central America. But the cause that best fit was common pneumonia and, more specifically, based on the distribution of opacities in his lung, pneumonia caused by aspiration (inhaling food particles or other things that are not air)—which is something that can happen when people drink excessive amounts of alcohol, as the man regularly did.

“Ethanol impairs consciousness and blunts protective reflexes (e.g., cough and gag), which disrupts the normal control mechanisms of the upper aerodigestive tract,” Dhaliwal noted.

And this is where Dhaliwal made a critical connection. If the man’s drinking led him to develop aspiration pneumonia—accidentally getting food in his lungs—he may have also accidentally gotten nonfood into his gastrointestinal tract at the same time.

Critical connection

The objects people most commonly swallow by accident include coins, button batteries, jewelry, and small bones. But those tend to show up in imaging, and none of the man’s scans revealed a swallowed object. What doesn’t show up on images, though, is anything made of plant matter.

“This reasoning leads to the search for an organic object that might be ingested while eating and drinking and is seemingly harmless but becomes invasive upon entering the gastrointestinal tract,” Dhaliwal wrote.

“The leading suspect,” he concluded, “is a wooden toothpick—an object commonly found in club sandwiches and used for dental hygiene. Toothpick ingestions often go unnoticed, but once identified, they are considered medical emergencies owing to their propensity to cause visceral perforation and vascular injury.”

If a toothpick had pierced the man’s duodenum, it would explain all of his symptoms. He drank too much and lost control of his aerodigestive tract, leading to aspiration that caused pneumonia, and he then swallowed a toothpick, which perforated his duodenum and led to sepsis.

Dhaliwal recommended an endoscopic procedure to look for a toothpick in his intestines. On the man’s third day in the hospital, he had the procedure, and, sure enough, there was a toothpick, piercing through his duodenum and into his right kidney, just as Dhaliwal had deduced.

Doctors promptly removed it and treated the man with antibiotics. He went on to make a full recovery. At a nine-month follow-up, he continued to do well and had maintained abstinence from alcohol.

Man gets drunk, wakes up with a medical mystery that nearly kills him Read More »

vandals-deface-ads-for-ai-necklaces-that-listen-to-all-your-conversations

Vandals deface ads for AI necklaces that listen to all your conversations

In addition to backlash over feared surveillance capitalism, critics have accused Schiffman of taking advantage of the loneliness epidemic. Conducting a survey last year, researchers with Harvard Graduate School of Education’s Making Caring Common found that people between “30-44 years of age were the loneliest group.” Overall, 73 percent of those surveyed “selected technology as contributing to loneliness in the country.”

But Schiffman rejects these criticisms, telling the NYT that his AI Friend pendant is intended to supplement human friends, not replace them, supposedly helping to raise the “average emotional intelligence” of users “significantly.”

“I don’t view this as dystopian,” Schiffman said, suggesting that “the AI friend is a new category of companionship, one that will coexist alongside traditional friends rather than replace them,” the NYT reported. “We have a cat and a dog and a child and an adult in the same room,” the Friend founder said. “Why not an AI?”

The MTA has not commented on the controversy, but Victoria Mottesheard—a vice president at Outfront Media, which manages MTA advertising—told the NYT that the Friend campaign blew up because AI “is the conversation of 2025.”

Website lets anyone deface Friend ads

So far, the Friend ads have not yielded significant sales, Schiffman confirmed, telling the NYT that only 3,100 pendants have sold. He suspects that society isn’t ready for AI companions to be promoted at such a large scale but believes his ad campaign will help normalize AI friends.

In the meantime, critics have rushed to attack Friend on social media, inspiring a website where anyone can vandalize a Friend ad and share it online. That website has received close to 6,000 submissions so far, its creator, Marc Mueller, told the NYT, and visitors can take a tour of these submissions by choosing to “ride train to see more” after creating their own vandalized version.

For visitors to Mueller’s site, riding the train displays a carousel documenting backlash to Friend, as well as “performance art” by visitors poking fun at the ads in less serious ways. One example showed a vandalized ad changing “Friend” to “Fries,” with a crude illustration of McDonald’s French fries, while another transformed the ad into a campaign for “fried chicken.”

Others were seemingly more serious about turning the ad into a warning. One vandal drew a bunch of arrows pointing to the “end” in Friend while turning the pendant into a cry-face emoji, seemingly drawing attention to research on the mental health risks of relying on AI companions—including the alleged suicide risks of products like Character.AI and ChatGPT, which have spawned lawsuits and prompted a Senate hearing.

Vandals deface ads for AI necklaces that listen to all your conversations Read More »