Author name: Beth Washington


Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better

A fast Kindle?

From left to right: 2024 Paperwhite, 2021 Paperwhite, and 2018 Paperwhite. Note not just the increase in screen size, but also how the screen corners get a little more rounded with each release. Credit: Andrew Cunningham

I don’t want to oversell how fast the new Kindle is, because it’s still not like an E-Ink screen can really compete with an LCD or OLED panel for smoothness of animations or UI responsiveness. But even compared to the 2021 Paperwhite, tapping buttons, opening menus, opening books, and turning pages feels considerably snappier—not quite instantaneous, but without the unexplained pauses and hesitation that longtime Kindle owners will be accustomed to. For those who type out notes in their books, even the onscreen keyboard feels fluid and responsive.

Compared to the 2018 Paperwhite (again, the first waterproofed model, and the last one with a 6-inch screen and micro USB port), the difference is night and day. While it still feels basically fine for reading books, I find that the older Kindle can sometimes pause for so long when opening menus or switching between things that I wonder if it’s still working or whether it’s totally locked up and frozen.

“Kindle benchmarks” aren’t really a thing, but I attempted to quantify the performance improvements by running Google’s ancient Octane 2.0 benchmark in the Kindle’s limited built-in web browser—the 2018, 2021, and 2024 Kindles are all running the same software update here (5.17.0), so this should be a reasonably good apples-to-apples comparison of single-core processor speed.

The new Kindle is actually way faster than older models. Credit: Andrew Cunningham

The 2021 Kindle was roughly 30 percent faster than the 2018 Kindle. The new Paperwhite is nearly twice as fast as the 2021 Paperwhite, and well over twice as fast as the 2018 Paperwhite. That alone is enough to explain the tangible difference in responsiveness between the devices.
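The relative speedups above reduce to simple ratios of benchmark scores. A minimal sketch in Python, using hypothetical Octane 2.0 scores chosen only to match the relative differences reported here (the review does not publish the raw numbers):

```python
# Relative speedups from Octane 2.0 scores (higher is better).
# These scores are hypothetical placeholders picked to match the
# reported ratios, not actual measurements.
scores = {
    "2018 Paperwhite": 500,
    "2021 Paperwhite": 650,   # roughly 30 percent faster than 2018
    "2024 Paperwhite": 1250,  # nearly 2x the 2021, 2.5x the 2018
}

baseline = scores["2018 Paperwhite"]
for model, score in scores.items():
    print(f"{model}: {score / baseline:.2f}x the 2018 model")
```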

Turning to the new Paperwhite’s other improvements: compared side by side, the new screen is appreciably bigger, more noticeably so than the 0.2-inch size difference might suggest. And it doesn’t make the Paperwhite much larger, though it is a tiny bit taller in a way that will wreck compatibility with existing cases. But you only really appreciate the upgrade if you’re coming from one of the older 6-inch Kindles.



ChatGPT’s success could have come sooner, says former Google AI researcher


A co-author of Attention Is All You Need reflects on ChatGPT’s surprise and Google’s conservatism.

Jakob Uszkoreit Credit: Jakob Uszkoreit / Getty Images

In 2017, eight machine-learning researchers at Google released a groundbreaking research paper called Attention Is All You Need, which introduced the Transformer AI architecture that underpins almost all of today’s high-profile generative AI models.

The Transformer has made a key component of the modern AI boom possible by translating (or transforming, if you will) input chunks of data called “tokens” into another desired form of output using a neural network. Variations of the Transformer architecture power language models like GPT-4o (and ChatGPT), audio synthesis models that run Google’s NotebookLM and OpenAI’s Advanced Voice Mode, video synthesis models like Sora, and image synthesis models like Midjourney.
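At the heart of that token transformation is scaled dot-product self-attention, the mechanism the paper is named for. A minimal single-head sketch in NumPy, with toy dimensions and random (untrained) weights; real models add multiple attention heads, feed-forward layers, and training:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup: 4 tokens, each embedded as an 8-dimensional vector.
n_tokens, d_model = 4, 8
X = rng.normal(size=(n_tokens, d_model))

# Learned projections (random here) map each token to a query, key, and value.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Every token attends to every token, weighted by query-key similarity.
weights = softmax(Q @ K.T / np.sqrt(d_model))  # (4, 4); each row sums to 1
output = weights @ V                           # transformed token representations

print(output.shape)  # (4, 8)
```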

At TED AI 2024 in October, one of those eight researchers, Jakob Uszkoreit, spoke with Ars Technica about the development of transformers, Google’s early work on large language models, and his new venture in biological computing.

In the interview, Uszkoreit revealed that while his team at Google had high hopes for the technology’s potential, they didn’t quite anticipate its pivotal role in products like ChatGPT.

The Ars interview: Jakob Uszkoreit

Ars Technica: What was your main contribution to the Attention Is All You Need paper?

Jakob Uszkoreit (JU): It’s spelled out in the footnotes, but my main contribution was to propose that it would be possible to replace recurrence [from Recurrent Neural Networks] in the dominant sequence transduction models at the time with the attention mechanism, or more specifically self-attention. And that it could be more efficient and, as a result, also more effective.

Ars: Did you have any idea what would happen after your group published that paper? Did you foresee the industry it would create and the ramifications?

JU: First of all, I think it’s really important to keep in mind that when we did that, we were standing on the shoulders of giants. And it wasn’t just that one paper, really. It was a long series of works by some of us and many others that led to this. And so to look at it as if this one paper then kicked something off or created something—I think that is taking a view that we like as humans from a storytelling perspective, but that might not actually be that accurate of a representation.

My team at Google was pushing on attention models for years before that paper. It’s a lot longer of a slog with much, much more, and that’s just my group. Many others were working on this, too, but we had high hopes that it would push things forward from a technological perspective. Did we think that it would play a role in really enabling, or at least apparently, seemingly, flipping a switch when it comes to facilitating products like ChatGPT? I don’t think so. I mean, to be very clear in terms of LLMs and their capabilities, even around the time we published the paper, we saw phenomena that were pretty staggering.

We didn’t get those out into the world in part because of what really is maybe a notion of conservatism around products at Google at the time. But we also, even with those signs, weren’t that confident that stuff in and of itself would make that compelling of a product. But did we have high hopes? Yeah.

Ars: Since you knew there were large language models at Google, what did you think when ChatGPT broke out into a public success? “Damn, they got it, and we didn’t?”

JU: There was a notion of, well, “that could have happened.” I think it was less of a, “Oh dang, they got it first” or anything of the like. It was more of a “Whoa, that could have happened sooner.” Was I still amazed by just how quickly people got super creative using that stuff? Yes, that was just breathtaking.


Jakob Uszkoreit presenting at TED AI 2024. Credit: Benj Edwards

Ars: You weren’t at Google at that point anymore, right?

JU: I wasn’t anymore. And in a certain sense, you could say the fact that Google wouldn’t be the place to do that factored into my departure. I left not because of what I didn’t like at Google as much as I left because of what I felt I absolutely had to do elsewhere, which is to start Inceptive.

But it was really motivated by just an enormous, not only opportunity, but a moral obligation in a sense, to do something that was better done outside in order to design better medicines and have very direct impact on people’s lives.

Ars: The funny thing with ChatGPT is that I was using GPT-3 before that. So when ChatGPT came out, it wasn’t that big of a deal to some people who were familiar with the tech.

JU: Yeah, exactly. If you’ve used those things before, you could see the progression and you could extrapolate. When OpenAI developed the earliest GPTs with Alec Radford and those folks, we would talk about those things despite the fact that we weren’t at the same companies. And I’m sure there was this kind of excitement, how well-received the actual ChatGPT product would be by how many people, how fast. That still, I think, is something that I don’t think anybody really anticipated.

Ars: I didn’t either when I covered it. It felt like, “Oh, this is a chatbot hack of GPT-3 that feeds its context in a loop.” And I didn’t think it was a breakthrough moment at the time, but it was fascinating.

JU: There are different flavors of breakthroughs. It wasn’t a technological breakthrough. It was a breakthrough in the realization that at that level of capability, the technology had such high utility.

That, and the realization that, because you always have to take into account how your users actually use the tool that you create, and you might not anticipate how creative they would be in their ability to make use of it, how broad those use cases are, and so forth.

That is something you can sometimes only learn by putting something out there, which is also why it is so important to remain experiment-happy and to remain failure-happy. Because most of the time, it’s not going to work. But some of the time it’s going to work—and very, very rarely it’s going to work like [ChatGPT did].

Ars: You’ve got to take a risk. And Google didn’t have an appetite for taking risks?

JU: Not at that time. But if you think about it, if you look back, it’s actually really interesting. Google Translate, which I worked on for many years, was actually similar. When we first launched Google Translate, the very first versions, it was a party joke at best. And we took it from that to being something that was a truly useful tool in not that long of a period. Over the course of those years, the stuff that it sometimes output was so embarrassingly bad at times, but Google did it anyway because it was the right thing to try. But that was around 2008, 2009, 2010.

Ars: Do you remember AltaVista’s Babel Fish?

JU: Oh yeah, of course.

Ars: When that came out, it blew my mind. My brother and I would do this thing where we would translate text back and forth between languages for fun because it would garble the text.

JU: It would get worse and worse and worse. Yeah.

Programming biological computers

After his time at Google, Uszkoreit co-founded Inceptive to apply deep learning to biochemistry. The company is developing what he calls “biological software,” where AI compilers translate specified behaviors into RNA sequences that can perform desired functions when introduced to biological systems.

Ars: What are you up to these days?

JU: In 2021 we co-founded Inceptive in order to use deep learning and high throughput biochemistry experimentation to design better medicines that truly can be programmed. We think of this as really just step one in the direction of something that we call biological software.

Biological software is a little bit like computer software in that you have some specification of the behavior that you want, and then you have a compiler that translates that into a piece of computer software that then runs on a computer exhibiting the functions or the functionality that you specify.

You specify a piece of a biological program and you compile that, but not with an engineered compiler, because life hasn’t been engineered like computers have been engineered. But with a learned AI compiler, you translate that or compile that into molecules that, when inserted into biological systems, organisms, our cells, exhibit those functions that you’ve programmed into [them].

A pharmacist holds a bottle containing Moderna’s bivalent COVID-19 vaccine. Credit: Getty | Mel Melcon

Ars: Is that anything like how the mRNA COVID vaccines work?

JU: A very, very simple example of that are the mRNA COVID vaccines where the program says, “Make this modified viral antigen” and then our cells make that protein. But you could imagine molecules that exhibit far more complex behaviors. And if you want to get a picture of how complex those behaviors could be, just remember that RNA viruses are just that. They’re just an RNA molecule that when entering an organism exhibits incredibly complex behavior such as distributing itself across an organism, distributing itself across the world, doing certain things only in a subset of your cells for a certain period of time, and so on and so forth.
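The “program says make this protein” idea maps directly onto how cells read mRNA: ribosomes translate the sequence three bases (one codon) at a time until they hit a stop codon. A toy sketch in Python, using a deliberately tiny subset of the standard genetic code and a made-up sequence (real translation uses all 64 codons):

```python
# A minimal subset of the standard genetic code -- just enough to
# translate the toy mRNA fragment below.
CODONS = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA sequence one codon at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODONS[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```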

And so you can imagine that if we managed to even just design molecules with a teeny tiny fraction of such functionality, of course with the goal not of making people sick, but of making them healthy, it would truly transform medicine.

Ars: How do you not accidentally create a monster RNA sequence that just wrecks everything?

JU: The amazing thing is that medicine for the longest time has existed in a certain sense outside of science. It wasn’t truly understood, and we still often don’t truly understand their actual mechanisms of action.

As a result, humanity had to develop all of these safeguards and clinical trials. And even before you enter the clinic, all of these empirical safeguards prevent us from accidentally doing [something dangerous]. Those systems have been in place for as long as modern medicine has existed. And so we’re going to keep using those systems, and of course with all the diligence necessary. We’ll start with very small systems, individual cells in future experimentation, and follow the same established protocols that medicine has had to follow all along in order to ensure that these molecules are safe.

Ars: Thank you for taking the time to do this.

JU: No, thank you.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



IBM boosts the amount of computation you can get done on quantum hardware

By making small adjustments to the frequencies at which the qubits operate, it’s possible to avoid these problems. This can be done when the Heron chip is calibrated, before it’s opened for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.

Since people are paying for time on this hardware, that’s good for customers now. But it could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation can mean fewer errors.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:

“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”

The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than to simulate the quantum computer’s behavior on the same hardware, there’s still the risk of it becoming computationally intractable. But IBM has taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”
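The “amplify the noise, fit a function, set the noise to zero” procedure described in the quote is commonly called zero-noise extrapolation. A minimal sketch in Python with synthetic measurement values (an illustration of the general technique, not IBM’s actual method or data):

```python
import numpy as np

# Expectation values of some observable, measured at deliberately
# amplified noise levels. These numbers are synthetic illustrations.
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])  # noise amplification factors
measured = np.array([0.81, 0.72, 0.64, 0.50])  # noisy expectation values

# Fit a low-degree polynomial to the noisy measurements...
coeffs = np.polyfit(noise_scales, measured, deg=2)

# ...then evaluate it at noise scale 0 to estimate the noiseless value.
mitigated = np.polyval(coeffs, 0.0)
print(f"Estimated noise-free expectation value: {mitigated:.3f}")
```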



This elephant figured out how to use a hose to shower

And the hose-showering behavior was “lateralized”; that is, Mary preferred targeting her left body side more than her right. (Yes, Mary is a “left-trunker.”) Mary even adapted her showering behavior to the diameter of the hose: she preferred showering with a 24-mm hose over a 13-mm one, and when given a 32-mm hose, she preferred to shower with her trunk instead.

It’s not known where Mary learned to use a hose, but the authors suggest that elephants might have an intuitive understanding of how hoses work because of the similarity to their trunks. “Bathing and spraying themselves with water, mud, or dust are very common behaviors in elephants and important for body temperature regulation as well as skin care,” they wrote. “Mary’s behavior fits with other instances of tool use in elephants related to body care.”

Perhaps even more intriguing was Anchali’s behavior. While Anchali did not use the hose to shower, she nonetheless exhibited complex behavior in manipulating the hose: lifting it, kinking the hose, regrasping the kink, and compressing the kink. The latter, in particular, often resulted in reduced water flow while Mary was showering. Anchali eventually figured out how to further disrupt the water flow by placing her trunk on the hose and lowering her body onto it. Control experiments were inconclusive about whether Anchali was deliberately sabotaging Mary’s shower; the two elephants had been at odds and behaved aggressively toward each other at shower times. But similar cognitively complex behavior has been observed in elephants.

“When Anchali came up with a second behavior that disrupted water flow to Mary, I became pretty convinced that she is trying to sabotage Mary,” Brecht said. “Do elephants play tricks on each other in the wild? When I saw Anchali’s kink and clamp for the first time, I broke out in laughter. So, I wonder, does Anchali also think this is funny, or is she just being mean?”

Current Biology, 2024. DOI: 10.1016/j.cub.2024.10.017  (About DOIs).



Russia: Fine, I guess we should have a Grasshopper rocket project, too

Like a lot of competitors in the global launch industry, Russia for a long time dismissed the prospects of a reusable first stage for a rocket.

As late as 2016, an official with the Russian agency that develops strategy for the country’s main space corporation, Roscosmos, concluded, “The economic feasibility of reusable launch systems is not obvious.” In dismissing the landing prospects of SpaceX’s Falcon 9 rocket, Russian officials were not alone. Throughout the 2010s, competitors including the space agencies of Europe and Japan, as well as US-based United Launch Alliance, all decided to develop expendable rockets.

However, by 2017, when SpaceX re-flew a Falcon 9 rocket for the first time, the writing was on the wall. “This is a very important step, we sincerely congratulate our colleague on this achievement,” then-Roscosmos CEO Igor Komarov said at the time. He even spoke of developing reusable components, such as rocket engines capable of multiple firings.

A Russian Grasshopper

That was more than seven years ago, however, and not much has happened in Russia since then to foster the development of a reusable rocket vehicle. Yes, Roscosmos unveiled plans for the “Amur” rocket in 2020, which was intended to have a reusable first stage and methane-fueled engines and to land like the Falcon 9. But its debut has slipped year after year—originally intended to fly in 2026, its first launch is now expected no earlier than 2030.

Now, however, there is some interesting news from Moscow about plans to develop a prototype vehicle to test the ability to land the Amur rocket’s first stage vertically.

According to the state-run news agency, TASS, construction of this test vehicle will enable the space corporation to solve key challenges. “Next year preparation of an experimental stage of the (Amur) rocket, which everyone is calling ‘Grasshopper,’ will begin,” said Igor Pshenichnikov, the Roscosmos deputy director of the department of future programs. The Russian news article was translated for Ars by Rob Mitchell.



Review: Catching up with the witchy brew of Agatha All Along


Down, down, down the road

Spoilers ahead! This imaginative sequel to WandaVision is a reminder of just how good the MCU can be.

Kathryn Hahn stars as Agatha Harkness, reprising her WandaVision role. Credit: Disney+

The MCU’s foray into streaming television has produced mixed results, but one of my favorites was the weirdly inventive, oh-so-meta WandaVision. I’m happy to report that the spinoff sequel, Agatha All Along, taps into that same offbeat creativity, giving us a welcome reminder of just how good the MCU can be when it’s firing on all storytelling cylinders.

(Spoilers below, including for WandaVision and Multiverse of Madness. We’ll give you another heads up when major spoilers for Agatha All Along are imminent.)

The true identity of nosy next-door neighbor Agnes—played to perfection by Kathryn Hahn—was the big reveal of 2021’s WandaVision, even inspiring a jingle that went viral. Agnes turned out to be a powerful witch named Agatha Harkness, who had studied magic for centuries and was just dying to learn the source of Wanda’s incredible power. Wanda’s natural abilities were magnified by the Mind Stone, but Agatha realized that Wanda was a wielder of “chaos magic.” She was, in fact, the Scarlet Witch. In the finale, Wanda trapped Agatha in her nosy neighbor persona while releasing the rest of the town of Westview from her grief-driven Hex.

Then Wanda presumably died in Doctor Strange and the Multiverse of Madness (and count me among those who thought her arc in that film was a massive fail on Marvel’s part). What happened to Agatha? It seems the hex is still in place but went a bit wonky. Agatha All Along opens like a true crime serial (cf. Mare of Easttown) with Agatha/Agnes as the rebellious, socially challenged tough detective called to investigate a body found in the woods outside Westview. Then a young Teen (Joe Locke) breaks the hex and asks her to show him the way to the legendary Witches’ Road, a journey involving a series of trials. The reward: at the end of the road, the surviving witches get what they most desire. Agatha wants her powers back and Teen—well, his motives are murkier, as is his identity, which is guarded by a sigil.

Agatha and Teen first have to assemble a coven: Lilia (Patti LuPone), a divination witch; Jennifer (Sasheer Zamata), a potions witch; Alice (Ali Ahn), a protection witch; and Sharon Davis (Debra Jo Rupp, reprising her WandaVision role), standing in for a green witch on account of her gardening skills. They sing the spell in the form of a ballad—”Down the Witches’ Road,” a killer earworm that recurs throughout the series and is already spawning lots of cover versions. The entrance appears and the journey begins. As if the Witches’ Road weren’t dangerous enough, Agatha is also being pursued by her ex, Rio Vidal (Aubrey Plaza), a powerful green witch, as well as the Salem Seven, vengeful wraiths of Agatha’s first coven, who (we learned in a WandaVision flashback) she killed by draining their powers when they attacked her.

Trapped in a reality-warping spell, Agatha is apparently a detective now. YouTube/Marvel Studios

A large part of WandaVision‘s delight came from the various sitcom styles featured in each episode. Agatha All Along has its own take on that approach: each trial takes on the setting and style of witches from popular culture (even the ending credits play on this). One evokes the New England WASP-y style of the 1998 film Practical Magic; another plays on Stevie Nicks’ Bohemian “white witch” phase with elements of the 1972 film Season of the Witch; yet another trial dresses the coven in 1980s summer-camp garb.

There are nods to the Wicked Witch of the West and Glinda from The Wizard of Oz, Maleficent, and the hag version of Snow White’s Evil Queen in the seventh episode, “Death’s Hand in Mine.” It might just be the best single episode of all the Marvel series. This is Lilia’s trial, requiring her to use her divination skills to navigate a deadly tarot reading. Every wrong card releases one of the many swords suspended above the table.

Throughout the journey, Lilia has uttered seemingly random nonsensical things. Here we learn this is because she experiences life out of temporal sequence, moving between past and present while peering into the future. Suddenly all those earlier sprinkled breadcrumbs make sense, a testament to the skillful writing and directing—not to mention LuPone’s powerful performance. (Apparently she requested a script with the events in linear order to better evoke the necessary emotions when shooting scenes out of sequence.)

To glory at the end

(WARNING: Major spoilers below. Stop reading now if you haven’t finished the series.) 

By this time the coven has already lost two members: Sharon Davis (who didn’t even last the first trial), replaced by Rio; and Alice, who tried to help Agatha when the latter was briefly possessed during a ouija board trial—only to have Agatha do what she always does and drain Alice of all her power. Lilia’s tarot reading reveals that Death has been traveling with them all along in the form of Rio. Yes, Agatha’s ex is Death, aka “the original Green Witch.” They end up losing Lilia, too; she sacrifices herself to take out the Salem Seven after letting the surviving coven members escape. We see her falling to her death and then showing up as a child in her homeland for her very first divination lesson—the cycle of life and death coming full circle.

Agatha likes her new look for this trial. Marvel/Disney+

We soon discover that Rio/Death is mostly there because of Teen. There was much fan speculation about his identity in the run-up to the series release and fans guessed correctly: it’s Wanda and Vision’s son, Billy Maximoff, whose soul found its way into the body of a dying teenager named William Kaplan just as Wanda’s hex was unraveling him and his twin, Tommy, out of existence. That’s why he went on the Witches’ Road: to find Tommy. But this also makes him an aberration in Death’s eyes that must be removed to restore the balance. The catch: Billy has to sacrifice himself; in this unusual case, Death cannot simply take him.

Agatha initially agrees to manipulate Billy into doing just that, then has a last-minute change of heart. She kisses Rio/Death and thereby embraces her fate, sacrificing herself so Billy can live. From the start she had a soft spot for the teen, accompanied by references to her long-dead son. The backstory is quite moving and key to Agatha’s unexpected change of heart. Her son’s fate was revealed in the finale. Death came for him when Agatha was in labor but agreed to grant her “time.” How much time? Six or seven years, during which mother and son bonded and wandered from village to village, with Agatha occasionally killing more covens to absorb their power. But Death did not forget, and with Nicky (Abel Lysenko) gone, Agatha indulged all her worst impulses.

Which brings us to the Big Twist: Agatha and her son made up the ballad of the Witches’ Road, singing it in local taverns and slowly building up the legend. The Witches’ Road never existed. Agatha used the legend over centuries to lure witches into a trap to steal their powers. That was her intention at the start of the series, too, except this time—a portal opened. Billy, it seems, inherited Wanda’s ability to warp and shape reality, even subconsciously. He wanted the road to be real and so it was.

The reveal is skillfully done and ties everything up in a nice satisfying bow, with one exception. The writers just couldn’t let Agatha go completely; she returns as a ghost and joins Billy on his search for his brother Tommy. That’s a creative choice that leaves the door open for a second season, and I strongly suspect we’ll get one. But Ghost Agatha will be a tough plot point to crack. And it rather undercuts the pivotal moment of Agatha’s sacrifice—actually doing something that doesn’t directly benefit herself. On the whole, though, Agatha All Along is marvelously entertaining, binge-able fun with just enough emotional resonance and heartbreak to add some depth.

All episodes of Agatha All Along are now streaming on Disney+.


Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



DNA shows Pompeii’s dead aren’t who we thought they were

People have long been fascinated by the haunting plaster casts of the bodies of people who died in Pompeii when Mount Vesuvius erupted in 79 CE. Archaeologists have presented certain popular narratives about who these people might have been and how they might have been related. But ancient DNA analysis has revealed that those preferred narratives were not entirely accurate and may reflect certain cultural biases, according to a new paper published in the journal Current Biology. The results also corroborate prior research suggesting that the people of ancient Pompeii were the descendants of immigrants from the Eastern Mediterranean.

As previously reported, the eruption of Mount Vesuvius released thermal energy roughly equivalent to 100,000 times that of the atomic bombs dropped on Hiroshima and Nagasaki at the end of World War II, spewing molten rock, pumice, and hot ash over the cities of Pompeii and Herculaneum in particular. The vast majority of people in Pompeii and Herculaneum—the cities hardest hit—perished from asphyxiation, choking on the thick clouds of noxious gas and ash. But at least some of the Vesuvian victims probably died instantaneously from the intense heat of the pyroclastic flows, with temperatures high enough to boil brains and explode skulls.

In the first phase, immediately after the eruption, a long column of ash and pumice blanketed the surrounding towns, most notably Pompeii and Herculaneum. By late night or early morning, pyroclastic flows (fast-moving hot ash, lava fragments, and gases) swept through and obliterated what remained, leaving the bodies of the victims frozen in seeming suspended action.

In the 19th century, an archaeologist named Giuseppe Fiorelli figured out how to make casts of those frozen bodies by pouring liquid plaster into the voids where the soft tissue had been. Some 1,000 bodies have been discovered in the ruins, and 104 plaster casts have been preserved. Restoration efforts of 86 of those casts began about 10 years ago, during which researchers took CT scans and X-rays to see if there were complete skeletons inside. Those images revealed that there had been a great deal of manipulation of the casts, depending on the aesthetics of the era in which they were made, including altering some features of the bodies’ shapes or adding metal rods to stabilize the cast, as well as frequently removing bones before casting.



Secondhand EVs will flood the market in 2026, JD Power says

In 2023, 46 percent of all franchise EV sales (i.e., excluding Tesla, Rivian, VinFast, and Lucid) were leases, a trend JD Power says has continued through the first three quarters of 2024. Once Tesla is included, about 30 percent of new EV sales this year have been leases. By contrast, the share of gasoline-powered cars that are leased has fallen every year since the start of the pandemic.

That means there will probably be a shortage of used ICE vehicles in 2025 and 2026. Used EVs might also be a little scarcer next year, JD Power says. It expects a 2 percent drop in the number of used EVs next year, but a 230 percent increase in 2026 as 215,000 cars end their leases.
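The year-over-year arithmetic behind those projections is easy to sanity-check. A minimal sketch; the 2024 baseline below is a hypothetical round number chosen for illustration, not a figure from the JD Power report:

```python
# Percent-change arithmetic behind the used-EV supply projections.
# The 2024 baseline is a hypothetical placeholder, not a reported figure.
used_evs_2024 = 100_000                     # hypothetical baseline volume
used_evs_2025 = used_evs_2024 * (1 - 0.02)  # projected 2 percent drop in 2025
used_evs_2026 = used_evs_2025 * (1 + 2.30)  # projected 230 percent jump in 2026

print(round(used_evs_2025))  # 98000
print(round(used_evs_2026))  # 323400
```

Whatever the true baseline, a 230 percent increase means the 2026 used-EV supply would be roughly 3.3 times the 2025 figure.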

JD Power also has some good news about new EV prices—they’re getting cheaper. The average price for a new electric compact SUV, once tax credits and manufacturer incentives are included, is $35,900, $12,700 less than the price in 2022 for the same class of vehicle.
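Those two figures imply a 2022 average of $48,600 for the same class of vehicle, a trivial check using only the numbers quoted above:

```python
# Implied 2022 average price from the figures quoted in the piece.
price_2024 = 35_900        # new electric compact SUV, after credits and incentives
drop_since_2022 = 12_700   # stated decline versus 2022
price_2022 = price_2024 + drop_since_2022
print(price_2022)  # 48600
```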


What makes baseball’s “magic mud” so special?

“Magic mud” composition and microstructure: (top right) a clean baseball surface; (bottom right) a mudded baseball.

Credit: S. Pradeep et al., 2024


Pradeep et al. found that magic mud’s particles are primarily silt and clay, with a bit of sand and organic material. The stickiness comes from the clay, silt, and organic matter, while the sand makes it gritty. So the mud “has the properties of skin cream,” they wrote. “This allows it to be held in the hand like a solid but also spread easily to penetrate pores and make a very thin coating on the baseball.”

When the mud dries on the baseball, however, the residue left behind is not like skin cream. That’s due to the angular sand particles bonded to the baseball by the clay, which can increase surface friction by as much as a factor of two. Meanwhile, the finer particles double the adhesion. “The relative proportions of cohesive particulates, frictional sand, and water conspire to make a material that flows like skin cream but grips like sandpaper,” they wrote.

Despite its relatively mundane components, the magic mud nonetheless shows remarkable mechanical behaviors that the authors think would make it useful in other practical applications. For instance, it might replace synthetic materials as an effective lubricant, provided the gritty sand particles are removed. Or it could be used as a friction agent to improve traction on slippery surfaces, provided one could define the optimal fraction of sand content that wouldn’t diminish its spreadability. Or it might be used as a binding agent in locally sourced geomaterials for construction.

“As for the future of Rubbing Mud in Major League Baseball, unraveling the mystery of its behavior does not and should not necessarily lead to a synthetic replacement,” the authors concluded. “We rather believe the opposite; Rubbing Mud is a nature-based material that is replenished by the tides, and only small quantities are needed for great effect. In a world that is turning toward green solutions, this seemingly antiquated baseball tradition provides a glimpse of a future of Earth-inspired materials science.”

DOI: PNAS, 2024. 10.1073/pnas.241351412  (About DOIs).


For fame or a death wish? Kids’ TikTok challenge injuries stump psychiatrists

Case dilemma

The researchers give the example of a 10-year-old patient who was found unconscious in her bedroom. The psychiatry team was called in to consult on a suicide attempt by hanging. But when the girl was evaluated, she was tearful, denied past or recent suicide attempts, and said she was only participating in the blackout challenge. Still, she reported depressed moods, feelings of hopelessness, thoughts of suicide since age 9, being bullied, and having no friends. Family members reported unstable housing, busy or absent parental figures, and a family history of suicide attempts.

If the girl’s injuries were unintentional, stemming from the poor choice to participate in the life-threatening TikTok challenge, clinicians would discharge the patient home with a recommendation for outpatient mental health care to address underlying psychiatric conditions and stressors. But if the injuries were self-inflicted with an intent to die, the clinicians would recommend inpatient psychiatric treatment for safety, which would allow for further risk assessment, monitoring, and treatment for the suspected suicide attempt.

It’s critical to make the right call here. Children and teens who attempt suicide are at risk of more attempts, both immediately and in the future. But to make matters even more complex, injuries from social media challenges have the potential to spur depression and post-traumatic stress disorder. Those, in turn, could increase the risk of suicide attempts.

To keep kids and teens safe, Ataga and Arnold call for more awareness of the dangers of TikTok challenges, as well as empathetic psychiatric assessments using kid-appropriate measures. They also call for more research. While there are a handful of case studies on TikTok challenge injuries and deaths among kids and teens, there’s a lack of large-scale data. More research is needed to “demonstrate the role of such challenges as precipitating factors in unintentional and intentional injuries, suicidal behaviors, and deaths among children in the US,” the psychiatrists write.

If you or someone you know is in crisis, call or text 988 for the Suicide and Crisis Lifeline or contact the Crisis Text Line by texting TALK to 741741.


The Ars redesign is out. Experience its ad-free glory for just $25/year.

Whew—the big event is finally behind us. I’m talking, of course, about the Ars Technica version 9 redesign, which we rolled out last month in response to your survey feedback and which we have iterated on extensively in the weeks since. The site is now fully responsive and optimized for mobile browsing, with a sleek new look and great user options.

In response to your comments, our tireless tech and design team of Jason and Aurich have spent the last few weeks adding a font size selector, tweaking the default font and headline layout, and adding the option for orange hyperlinks. Plus, they rolled out an all-new, subscriber-only “wide mode” for Ars superfans who need 100+ character line lengths in their lives. Not enough? Jason and Aurich also tweaked the overall information density (especially on mobile), added next/previous story buttons to articles, and made the nav bar “sticky” on mobile, all in response to your feedback. (Read more about our two post-launch rounds of updates here and here.)

If that’s still not enough site goodness, Jason and Aurich are currently locked in their laboratory, cooking up a brand-new “true light” theme and big improvements to commenting and comment voting.

So while they’re brewing up those potions, I wanted to take a moment to highlight our subscription offering. At just $25 a year, this is a great deal that does more than just support our fully unionized staff; it also offers real quality-of-life benefits to readers. Subs don’t see any ads, nor are they served any trackers. They get access to the ultra-dense “Neutron Star” layout and the bloggy “Ars Classic” view, along with the optional wide-text mode and the ability to filter topics. (Plus full-text RSS feeds, PDF downloads, and some other little goodies.)
