Author name: Kelly Newman


F-Zero courses from a dead Nintendo satellite service restored using VHS and AI

Ahead of its time and lost in time —

There’s still a $5,000 prize for the original Japanese Satellaview broadcasts.

Box art for the fan modification of F-Zero, BS F-Zero Deluxe

BS F-Zero Deluxe sounds like a funny name until you know that the first part stands for “broadcast satellite.”

Guy Perfect, Power Panda, Porthor

Nintendo’s Satellaview, a Japan-only satellite add-on for the Super Famicom, is a rich target for preservationists because it was home to some of the most ephemeral games ever released.

That includes a host of content for Nintendo’s own games, including F-Zero. That influential Super Nintendo (Super Famicom in Japan) racing title was the subject of eight weekly broadcasts sent to subscribing Japanese homes in 1996 and 1997, some with live “Soundlink” CD-quality music and voiceovers. When live game broadcasts were finished, the memory cartridges used to store game data would report themselves as empty, even though they technically were not. Keeping that same 1MB memory cartridge in the system when another broadcast started would overwrite that data, and there were no rebroadcasts.

Recordings from some of the F-Zero Soundlink broadcasts on the Satellaview add-on for the Super Famicom (Super Nintendo in the US).

As reported by Matthew Green at Press the Buttons (along with Did You Know Gaming’s informative video), data from some untouched memory cartridges was found and used to re-create some of the content. Some courses, part of a multi-week “Grand Prix 2” event, have never been found, despite a $5,000 bounty and extensive effort. And yet, remarkably, the 10 courses in those later broadcasts were reverse-engineered using a VHS recording, machine learning tools, and some manual pixel-by-pixel re-creation. The results are “north of 99.9% accurate,” according to those who crafted them, and exist now as a mod you can patch onto an existing F-Zero ROM.

A re-creation of the “Forest I” level from the lost Satellaview broadcasts, running in a modified F-Zero ROM.

BS F-Zero Deluxe, as the patched version is called, includes four new racing machines on top of the original four. There are two new “BS-X” Leagues with all the resurrected Satellaview race tracks. And there is “ghost data,” or the ability to race against one of your prior runs on a course, something that F-Zero games helped make popular and that other racing games subsequently picked up. There is even box art and an instruction booklet. It is a notable feat of game preservation, albeit one that makes us nervous that Nintendo and its attorneys will take notice. But one can hope.

Speaking of which, a key tool used for the BS F-Zero Deluxe release comes from engineer FlibidyDibidy. In his efforts to create a “living leaderboard,” he wanted to show every Super Mario Bros. speedrun all at once. That required a side-by-side speedrun tool that could analyze game footage and show exactly what input was being pressed during that frame, then produce an emulation of that footage that was frame-perfect. That tool, Graphite, is currently missing from the author’s website and from GitHub, though a GitLab copy remains. We’ve reached out to FlibidyDibidy for comment on this and will update the post with new information.


A frame from the machine learning tool Guy Perfect used to read inputs from a VHS recording and re-create long-lost F-Zero courses.

Guy Perfect

Using Graphite as inspiration and having the data from the original Grand Prix broadcast as a baseline, an F-Zero superfan going by Guy Perfect built a tool that could reproduce the controller input from a miraculous VHS copy of the missing second Grand Prix. Through this reverse-engineering process, Guy Perfect re-created most of the courses, then fine-tuned them with manual frame-by-frame authoring. The course backgrounds required the work of a pixel artist, Power Panda, to finish the package, with Porthor rounding out the trio.
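The article doesn’t describe Guy Perfect’s tool in detail, but the general idea it outlines (infer each frame’s controller input by simulating every candidate input and keeping whichever best matches the recorded footage) can be sketched as follows. Everything here is illustrative: the toy one-number “game state” and tiny “frames” stand in for a real SNES emulator core and deinterlaced VHS video frames.

```python
# Toy stand-in for an emulator step: advances a 1-D "game state"
# according to one of a small set of button inputs. A real tool
# would step an emulator and render a full frame instead.
ACTIONS = {"left": -1, "neutral": 0, "right": +1}

def simulate(state, action):
    return state + ACTIONS[action]

def render(state):
    # Produce a tiny "frame" from the state; the real pipeline would
    # compare downscaled video frames instead of two numbers.
    return (state, state * 2)

def infer_inputs(frames, start_state):
    """Greedily pick, per frame, the input whose simulated frame
    most closely matches the recorded frame."""
    state, inputs = start_state, []
    for target in frames:
        best_action, best_err, best_state = None, float("inf"), None
        for action in ACTIONS:
            candidate = simulate(state, action)
            err = sum(abs(a - b) for a, b in zip(render(candidate), target))
            if err < best_err:
                best_action, best_err, best_state = action, err, candidate
        inputs.append(best_action)
        state = best_state
    return inputs

# "Record" footage from a known input sequence, then reconstruct it.
true_inputs = ["right", "right", "neutral", "left"]
state, frames = 0, []
for a in true_inputs:
    state = simulate(state, a)
    frames.append(render(state))

assert infer_inputs(frames, 0) == true_inputs
```

In this toy version the match is exact; with noisy analog video, the per-frame error would never be zero, which is presumably where the manual frame-by-frame cleanup the article mentions comes in.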

Their work means that, more than 25 years later, a moment in gaming that was nearly lost to time and various corporate currents has been, if not entirely restored, brought as close as is humanly (and machine-ably) possible to what it once was. Here’s hoping the results, which by all indications are fan-created and non-commercial, stick around for a while.



“Very sick” pet cat gave Oregon resident case of bubonic plague

Surprise plague —

The person’s cat was reportedly extremely ill and had a draining abscess.

A cat, but not the one with plague.

An Oregon resident contracted bubonic plague from their “very sick” pet cat, marking the first time since 2015 that someone in the state has been stricken with the Black Death bacterium, according to local health officials.

Plague bacteria, Yersinia pestis, circulates cryptically in the US in various types of rodents and their fleas. It causes an average of seven human cases a year, with a range of 1 to 17, according to the Centers for Disease Control and Prevention. The cases tend to cluster in two regions, the CDC notes: a hotspot that spans northern New Mexico, northern Arizona, and southern Colorado, and another region spanning California, far western Nevada, and southern Oregon.

The new case in Oregon occurred in the central county of Deschutes. It was fortunately caught early, before the infection developed into a more severe, systemic bloodstream infection (septicemic plague). However, according to a local official who spoke with NBC News, the person developed a cough while being treated at the hospital. This could indicate progression toward pneumonic plague, a more life-threatening and more readily contagious variety of the plague that spreads via respiratory droplets. Nevertheless, the person’s case reportedly responded well to antibiotic treatment, and the person is recovering.

Health officials worked to prevent the spread of the disease. “All close contacts of the resident and their pet have been contacted and provided medication to prevent illness,” Richard Fawcett, Deschutes County Health Officer, said in a news release.

Fawcett told NBC News that the cat was “very sick” and had a draining abscess, indicating “a fairly substantial” infection. The person could have become infected by plague-infected fleas from the cat or by handling the sick cat or its bodily fluids directly. Symptoms usually develop two to eight days after exposure, when the infection occurs in the lymph nodes. Early symptoms include sudden onset of fever, nausea, weakness, chills, muscle aches, and/or visibly swollen lymph nodes called buboes. If left untreated, the infection progresses to the septicemic or pneumonic forms.

It’s unclear how or why the cat became infected. But cats are particularly susceptible to plague and are considered a common source of infection in the US. The animals, when left to roam outdoors, can pick up infections from fleas or from killing and eating infected rodents. Though dogs can also pick up the infection from fleas or other animals, they are less likely to develop clinical illness, according to the CDC.

While plague cases are generally rare in the US, Deschutes County Health Services offered general tips to keep from contracting the deadly bacteria:

  • Avoid contact with fleas and rodents, particularly sick, injured, or dead ones.

  • Keep pets on a leash and protected with flea control products.

  • Work to keep rodents out of and away from homes and other buildings.

  • Avoid areas with lots of rodents while camping and hiking, and wear insect repellent outdoors to ward off fleas.

According to the CDC, there were 496 plague cases in the US between 1970 and 2020. And between 2000 and 2020, the CDC counted 14 deaths.



Prime Video cuts Dolby Vision, Atmos support from ad tier—and didn’t tell subs

Surprise —

To get them back, you must pay an extra $2.99/month for the ad-free tier.

High King Gil-galad and Elrond in The Lord of the Rings: The Rings of Power

The Rings of Power… now in HDR10+ for ad-tier users.

On January 29, Amazon started showing ads to Prime Video subscribers in the US unless they pay an additional $2.99 per month. But this wasn’t the only change to the service. Those who don’t pay up also lose features; their accounts no longer support Dolby Vision or Dolby Atmos.

As noticed by German tech outlet 4K Filme on Sunday, Prime Video users who choose to sit through ads can no longer use Dolby Vision or Atmos while streaming. Ad-tier subscribers are limited to HDR10+ and Dolby Digital 5.1.

4K Filme confirmed that this was the case on TVs from both LG and Sony; Forbes also confirmed the news using a TCL TV.

“In the ads-free account, the TV throws up its own confirmation boxes to say that the show is playing in Dolby Vision HDR and Dolby Atmos. In the basic, with-ads account, however, the TV’s Dolby Vision and Dolby Atmos pop-up boxes remain stubbornly absent,” Forbes said.

Amazon hasn’t explained its reasoning for the feature removal, but it may be trying to cut back on licensing fees paid to Dolby Laboratories. Amazon may also hope to push HDR10+, a Dolby Vision competitor that’s free and open. It also remains possible that we could one day see the return of Dolby Vision and Dolby Atmos to the ad tier through a refreshed licensing agreement.

Amazon has had a back-and-forth history with supporting Dolby features. In 2016, it first made Dolby Vision available on Prime Video. In 2017, though, Prime Video stopped supporting the format in favor of HDR10+. Amazon announced the HDR10+ format alongside Samsung, and it subsequently made the entire Prime Video library available in HDR10+. But in 2022, Prime Video started offering content like The Lord of the Rings: The Rings of Power in Dolby Vision once again.

Amazon wasn’t upfront about removals

Amazon announced in September 2023 that it would run ads on Prime Video accounts in 2024; in December, Amazon confirmed that the ads would start running on January 29 unless subscribers paid extra. In the interim, Amazon failed to mention that it was also removing support for Dolby Vision and Atmos from the ad-supported tier.

When Forbes first reported that Prime Video’s ad-based tier didn’t support Dolby Vision and Atmos, it assumed the omission was a technical error. Not until after Forbes published its article did Amazon officially confirm the changes. That’s not how people subscribing to a tech giant’s service expect to learn that their current plan has been diminished.

It also seems that Amazon’s removal of the Dolby features has been done in such a way that it could lead some users to think they’re getting Dolby Vision and Atmos support even when they’re not.

As Forbes’ John Archer reported, “To add a bit of confusion to the mix, on the TCL TV I used, the Prime Video header information for the Jack Ryan show that appears on the with-ads basic account shows Dolby Vision and Dolby Atmos among the supported technical features—yet when you start to play the episode, neither feature is delivered to the TV.”

As streaming services overtake traditional media, many customers are growing increasingly discouraged by how the industry seems to be evolving into something strongly reminiscent of cable. While there are some aspects of old-school TV worth emulating, others—like confusing plans that don’t make it clear what you get with each package—are not.

Amazon didn’t respond to questions Ars Technica sent in time for publication, but we’ll update this story if we hear back.



Microsoft starts testing Windows 11 24H2 as this year’s big update takes shape

24h1 isn’t even over yet —

Windows 11 23H2 didn’t make its first appearance until much later in the year.

Windows 11 24H2 has made its first appearance.

Andrew Cunningham

The next major release of Windows isn’t due until the end of the year, but it looks like Microsoft is getting an early start. New Windows Insider builds released to the Canary and Dev channels both roll their version numbers to “24H2,” indicating that they’re the earliest builds of what Microsoft will eventually release to all Windows users sometime this fall.

New features in 24H2 include a smattering of things Microsoft has already been testing in public since the big batch of new features that dropped last September, plus a handful of new things. The biggest new one is the addition of Sudo for Windows, a version of a Linux/Unix terminal command that first broke cover in a preview build earlier this month. The new build also includes better support for hearing aids, support for creating 7-zip and TAR archives in File Explorer, an energy-saving mode, and new changes to the SMB protocol. This build also removes both the WordPad and the Tips apps.
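Some of these additions have existing command-line counterparts: Windows has bundled a tar.exe (bsdtar) since Windows 10 version 1803, so TAR archives like the ones File Explorer will now create can already be made from a terminal. A minimal sketch with illustrative file names, written for a Unix-style shell (the tar invocation itself is the same in a Windows terminal, though the setup commands differ):

```shell
# Create a small folder to archive (names are illustrative)
mkdir -p demo
echo "hello" > demo/file.txt

# Pack it into a TAR archive, then list the archive's contents
tar -cf demo.tar demo
tar -tf demo.tar
```

Here `-c` creates an archive, `-f` names the archive file, and `-t` lists its entries; the File Explorer option wraps the same archive format in a GUI.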

Some of these features may be released to all Windows 11 users before the end of the year. During the Windows 11 era, it’s been Microsoft’s practice to drop new features in several small batches throughout the year.

The early change to the 24H2 numbering is a departure from last year, where Windows 11 23H2 didn’t appear publicly until the end of October. And even then, it was mostly just an update that rolled over the version number and Microsoft’s support clock for software updates—most of its “new” features had actually rolled out to PCs running Windows 11 22H2 the month before.

There are some signs that this update will be fairly significant in scope. In addition to all the features Microsoft listed, there are signs that the company is revising things like the Windows setup process that you go through when installing the OS from scratch. The current setup screens have remained essentially unchanged since Windows Vista in 2006, with only light and mostly cosmetic tweaks since then (and even in the redesigned version, window borders are still done in the Vista/7 style).

Logistically, this initial build of Windows 11 24H2 allows Windows Insider testers in the most unstable Canary channel to switch to the less unstable Dev channel without completely reinstalling Windows. Eventually, this… window will close, and the Canary channel will jump into a new series of build numbers.

Whither Windows 12?

Some news outlets and users have taken this update’s announcement as proof that the rumored “Windows 12” won’t happen this year. The existence of Windows 12, largely inferred based on rumors and stray statements from PC makers and analysts, has never been officially confirmed or denied by Microsoft.

A 24H2 update does suggest that Windows 11 will continue on for at least another year, but it doesn’t necessarily preclude a Windows 12 launch this year. Windows 10 received a 21H2 update the year Windows 11 came out and a 22H2 update the year after that (not that either came with significant new features). Microsoft could decide to rename the upcoming feature update on relatively short notice—like it originally did with Windows 11, which began as a design overhaul for Windows 10. Windows 12 might happen, or it might not, but I wouldn’t take this Windows 11 24H2 update as decisive evidence one way or the other.

AI was said to be a major focus for the hypothetical Windows 12, as it has been for the last few major Windows 11 updates. TrendForce went as far as to say that “AI PCs” running “the next generation of Windows” would need a “baseline” of 16GB of RAM, though when asked about this, a Microsoft representative told us that the company “doesn’t comment on rumors and speculation.” TrendForce also said that these AI PCs would need neural processing units (NPUs) that met certain performance standards.

To date, Microsoft hasn’t imposed any specific system requirements for Copilot or Windows’ other generative AI features, aside from 4GB RAM and 720p screen requirements for the Windows 10 version of Copilot, but this could change if more of Windows’ AI features begin relying on local processing rather than cloud processing.

Listing image by Microsoft



Chevrolet announces model year 2024 Equinox EV pricing

hope it goes better than the blazer —

We’ve known the 1LT will start at $34,995, but a 2LT will cost at least $43,295.

Driver’s side view of 2024 Chevrolet Equinox EV 1LT in Galaxy Gray Metallic driving down the road.

This is what the entry-level Chevrolet Equinox EV 1LT will look like.

Chevrolet

Chevrolet’s next battery electric vehicle on its troubled Ultium platform will be the Equinox EV, a compact crossover that slots in below the recently released Blazer EV. Chevy has been pitching the Equinox EV as affordable, originally with a starting price of just under $30,000. That gave the automaker the cover it needed to kill off its affordable EV, the Bolt, an act of corporate ax-swinging that looked even more cruel when it emerged that the electric Equinox would start at $34,995.

At least, if you want—or can even find—the 1LT base model. Now, Chevrolet has finally released pricing for the other trim levels, and there’s a steep jump from the bare bones 1LT even to the 2LT, which will cost $43,295. That $8,300 buys some conveniences like heated and power-adjustable front seats, heated side mirrors, and a powered rear liftgate, as well as some styling tweaks. Adaptive cruise control and Super Cruise are also available, but only as cost options.

Early adopters won’t actually be able to buy either of those because Chevy is starting with the 2RS as the initial trim level when the car goes on sale later this year. The 2RS starts at $44,795 and is a slightly sportier take on the Equinox than the 2LT, albeit with much the same standard features and options.

There are also 3LT ($45,295) and 3RS ($46,795) Equinox EVs, which come with more standard equipment and a wider options list, including 19.2 kW AC charging on the 3RS.

There’s a $1,395 destination charge for all the versions, and all these prices are for the front-wheel drive Equinox EV, which will offer 213 hp (159 kW) and have a range of 319 miles (513 km)—presumably when fitted with the smallest wheels. An all-wheel drive option is coming, which has a combined 288 hp (215 kW) and a range of 285 miles (459 km), but for now, the automaker hasn’t said how much the eAWD option will cost.

  • This is an Equinox EV 3LT, which will probably be a far more common sight on dealership forecourts than the sub-$40,000 version.

    Chevrolet

  • The 3RS is the most expensive trim level.

    Chevrolet

  • Here’s a look at the 1LT’s interior.

    Chevrolet

  • The 3LT interior, with Super Cruise active, judging by the green LED on the steering wheel. Like the rest of Chevy’s EV range, the infotainment system uses Google Automotive Services but lacks Apple CarPlay, a deal-breaker for many potential buyers.

    Chevrolet

  • The 3RS interior.

    Chevrolet

There is some good news, though: Chevrolet confirmed that the Equinox EV will be eligible for the full $7,500 IRS clean vehicle tax credit, at least for model year 2024.



The Super Bowl’s best and wackiest AI commercials

Superb Owl News —

It’s nothing like “crypto bowl” in 2022, but AI made a notable splash during the big game.

A still image from BodyArmor’s 2024 “Field of Fake” Super Bowl commercial.

BodyArmor

Heavily hyped tech products have a history of appearing in Super Bowl commercials during football’s biggest game—including the Apple Macintosh in 1984, dot-com companies in 2000, and cryptocurrency firms in 2022. In 2024, the hot tech in town is artificial intelligence, and several companies showed AI-related ads at Super Bowl LVIII. Here’s a rundown of notable appearances that range from serious to wacky.

Microsoft Copilot

Microsoft Game Day Commercial | Copilot: Your everyday AI companion.

It’s been a year since Microsoft launched the AI assistant Microsoft Copilot (as “Bing Chat“), and Microsoft is leaning heavily into its AI-assistant technology, which is powered by large language models from OpenAI. In Copilot’s first-ever Super Bowl commercial, we see scenes of various people with defiant text overlaid on the screen: “They say I will never open my own business or get my degree. They say I will never make my movie or build something. They say I’m too old to learn something new. Too young to change the world. But I say watch me.”

Then the commercial shows Copilot creating solutions to some of these problems, with prompts like, “Generate storyboard images for the dragon scene in my script,” “Write code for my 3d open world game,” “Quiz me in organic chemistry,” and “Design a sign for my classic truck repair garage Mike’s.”

Of course, since generative AI is an unfinished technology, many of these solutions are more aspirational than practical at the moment. On Bluesky, writer Ed Zitron put Microsoft’s truck repair logo to the test and saw results that weren’t nearly as polished as those seen in the commercial. On X, others have criticized and poked fun at the “3d open world game” generation prompt, which is a complex task that would take far more than a single, simple prompt to produce useful code.

Google Pixel 8 “Guided Frame” feature

Javier in Frame | Google Pixel SB Commercial 2024.

Instead of focusing on generative aspects of AI, Google’s commercial showed off a feature called “Guided Frame” on the Pixel 8 phone that uses machine vision technology and a computer voice to help people with blindness or low vision to take photos by centering the frame on a face or multiple faces. Guided Frame debuted in 2022 in conjunction with the Google Pixel 7.

The commercial tells the story of a person named Javier, who says, “For many people with blindness or low vision, there hasn’t always been an easy way to capture daily life.” We see a simulated blurry first-person view of Javier holding a smartphone and hear a computer-synthesized voice describing what the AI model sees, directing the person to center on a face to snap various photos and selfies.

Considering the controversies that generative AI currently generates (pun intended), it’s refreshing to see a positive application of AI technology used as an accessibility feature. Relatedly, an app called Be My Eyes (powered by OpenAI’s GPT-4V) also aims to help low-vision people interact with the world.

Despicable Me 4

Despicable Me 4 – Minion Intelligence (Big Game Spot).

So far, we’ve covered a couple of attempts to show AI-powered products as positive features. Elsewhere in Super Bowl ads, companies weren’t as generous about the technology. In an ad for the film Despicable Me 4, we see two Minions creating a series of terribly disfigured AI-generated still images reminiscent of Stable Diffusion 1.4 from 2022. There are three-legged people doing yoga, a painting of Steve Carell and Will Ferrell as Elizabethan gentlemen, a handshake with too many fingers, people eating spaghetti in a weird way, and a pair of people riding dachshunds in a race.

The images are paired with an earnest voiceover that says, “Artificial intelligence is changing the way we see the world, showing us what we never thought possible, transforming the way we do business, and bringing family and friends closer together. With artificial intelligence, the future is in good hands.” When the voiceover ends, the camera pans out to show hundreds of Minions generating similarly twisted images on computers.

Speaking of image synthesis at the Super Bowl, people mistook a Christian commercial created by He Gets Us, LLC as having been AI-generated, likely due to its gaudy technicolor visuals. With the benefit of a YouTube replay and the ability to look at details, the “He washed feet” commercial doesn’t appear AI-generated to us, but it goes to show how the concept of image synthesis has begun to cast doubt on human-made creations.



Amazon hides cheaper items with faster delivery, lawsuit alleges

A game of hide-and-seek —

Hundreds of millions of Amazon’s US customers have overpaid, class action says.


Amazon rigged its platform to “routinely” push an overwhelming majority of customers to pay more for items that could’ve been purchased at lower costs with equal or faster delivery times, a class-action lawsuit has alleged.

The lawsuit claims that a biased algorithm drives Amazon’s “Buy Box,” which appears on an item’s page and prompts shoppers to “Buy Now” or “Add to Cart.” According to customers suing, nearly 98 percent of Amazon sales are of items featured in the Buy Box, because customers allegedly “reasonably” believe that featured items offer the best deal on the platform.

“But they are often wrong,” the complaint said, claiming that instead, Amazon features items from its own retailers and sellers that participate in Fulfillment By Amazon (FBA), both of which pay Amazon higher fees and gain secret perks like appearing in the Buy Box.

“The result is that consumers routinely overpay for items that are available at lower prices from other sellers on Amazon—not because consumers don’t care about price, or because they’re making informed purchasing decisions, but because Amazon has chosen to display the offers for which it will earn the highest fees,” the complaint said.

Authorities in the US and the European Union have investigated Amazon’s allegedly anticompetitive Buy Box algorithm, confirming that it’s “favored FBA sellers since at least 2016,” the complaint said. In 2021, Amazon was fined more than $1 billion by the Italian Competition Authority over these unfair practices, and in 2022, the European Commission ordered Amazon to “apply equal treatment to all sellers when deciding what to feature in the Buy Box.”

These investigations served as the first public notice that Amazon’s Buy Box couldn’t be trusted, customers suing said. Amazon claimed that the algorithm was fixed in 2020, but so far, Amazon does not appear to have addressed all concerns over its Buy Box algorithm. As of 2023, European regulators have continued pushing Amazon “to take further action to remedy its Buy Box bias in their respective jurisdictions,” the customers’ complaint said.

The class action was filed by two California-based long-time Amazon customers, Jeffrey Taylor and Robert Selway. Both feel that Amazon “willfully” and “deceptively” tricked them and hundreds of millions of US customers into purchasing the featured item in the Buy Box when better deals existed.

Taylor and Selway’s lawyer, Steve Berman, told Reuters that Amazon has placed “a great burden” on its customers, who must invest more time on the platform to identify the best deals. Unlike other lawsuits over Amazon’s Buy Box, this is the first lawsuit to seek compensation over harms to consumers, not over antitrust concerns or harms to sellers, Reuters noted.

The lawsuit has been filed on behalf of “all persons who made a purchase using the Buy Box from 2016 to the present.” Because Amazon supposedly “frequently” features more expensive items in the Buy Box and most sales result from Buy Box placements, they’ve alleged that “the chances that any Class member was unharmed by one or more purchases is virtually non-existent.”

“Our team expects the class to include hundreds of millions of Amazon consumers because virtually all purchases are made from the Buy Box,” a spokesperson for plaintiffs’ lawyers told Ars.

Customers suing are hoping that a jury will decide that Amazon continues to “deliberately steer” customers to purchase higher-priced items in the Buy Box to spike its own profits. They’ve asked a US district court in Washington, where Amazon is based, to permanently stop Amazon from using allegedly biased algorithms to drive sales through its Buy Box.

The extent of damages that Amazon could owe is currently unknown but appears significant. It’s estimated that 80 percent of Amazon’s 300 million users are US subscribers, each allegedly overpaying on most of their purchases over the past seven years. Last year, Amazon’s US sales exceeded $574 billion.

“Amazon claims to be a ‘customer-centric’ company that works to offer the lowest prices to its customers, but in violation of the Washington Consumer Protection Act, Amazon employs a deceptive scheme to keep its profits—and consumer prices—high,” the customers’ lawsuit alleged.



Wade Wilson is kidnapped by the TVA in Deadpool and Wolverine teaser

Everyone deserves a happy ending —

“Your little cinematic universe is about to change forever.”

Wade Wilson (Ryan Reynolds), aka Deadpool, is back to save the MCU: “I am Marvel Jesus.”

After some rather lackluster performances at the box office over the last year or so, Marvel Studios has scaled back its MCU offerings for 2024. We’re getting just one: Deadpool and Wolverine. Maybe one is all we need. Marvel released a two-minute teaser during yesterday’s Super Bowl. And if this is the future of the MCU, count us in. The teaser has already racked up more than 12 million views on YouTube, and deservedly so. It has the cheeky irreverence that made audiences embrace Ryan Reynolds’ R-rated superhero in the first place, plus a glimpse of Hugh Jackman’s Wolverine—or rather, his distinctive shadow. And yes, Marvel is retaining that R rating—a big step given that all the prior MCU films have been resoundingly PG-13.

(Some spoilers for the first two films below.)

Reynolds famously made his first foray into big-screen superhero movies in 2011’s Green Lantern, which was a box office disappointment and not especially good. But he found the perfect fit with 2016’s Deadpool, starring as Wade Wilson, a former Canadian special forces operative (dishonorably discharged) who develops regenerative healing powers that heal his cancer but leave him permanently disfigured with scars all over his body. Wade decides to become a masked vigilante, turning down an invitation to join the X-Men and abandon his bad-boy ways.

The first Deadpool was a big hit, racking up $782 million at the global box office, critical praise, and a couple of Golden Globe nominations for good measure. So 20th Century Fox naturally commissioned a sequel. Deadpool 2 was released in 2018 and was just as successful. The adult humor and playful pop culture references were a big part of both films’ appeal, including their respective post-credits scenes. The first film had a post-credits scene spoofing Ferris Bueller’s Day Off. The sequel’s mid-credits sequence showed a couple of X-Men repairing a time travel device for Deadpool, which he used to save his girlfriend Vanessa (Morena Baccarin), whose tragic death kicked off Deadpool 2, and to kill Ryan Reynolds, just as the actor finished reading the script for Green Lantern.

This time around, Shawn Levy takes the director’s chair; he also directed Reynolds in the thoroughly delightful Free Guy (2021), which had similar tonal elements, minus the R-rated humorous riffs. Once we learned that Jackman had agreed to co-star, reprising his iconic X-Men role, fan anticipation shot through the roof. Filming (and hence the release date) was delayed by last summer’s Hollywood strikes but finally wrapped early this year.

Deadpool and Wolverine reunites many familiar faces from the first two films: Reynolds and Baccarin, obviously, but also Leslie Uggams as Blind Al; Karan Soni as Wade’s personal chauffeur, taxi driver Dopinder; Brianna Hildebrand as Negasonic Teenage Warhead; Stefan Kapičić as the voice of Colossus; Shioli Kutsuna as Negasonic’s mutant girlfriend Yukio; Randal Reeder as Buck; and Lewis Tan as X-Force member Shatterstar.

We’re also getting some characters drawn from various films under the 20th Century Fox Marvel umbrella: Pyro (Aaron Stanford)—last seen in 2006’s X-Men: The Last Stand—and Jennifer Garner’s Elektra, who appeared in the 2003 Daredevil film as well as 2005’s Elektra. Apparently, the mutants Sabretooth and Toad will also appear, along with Dogpool. New to the franchise are Matthew Macfadyen as a Time Variance Authority agent named Paradox and Emma Corrin as the lead villain. There are rumors that Owen Wilson’s Mobius and the animated Miss Minutes from Loki will also appear in the film, which makes sense, given the TVA’s key role in the plot.

The teaser opens with Wade celebrating his birthday with Vanessa and all their friends, only to then have a group of formidable TVA agents knock on his door, brandishing their wands. (“Is that supposed to be scary?” Wade responds. “Pegging isn’t new for me, friendo, but it is for Disney.”) He’s tossed through a portal and ends up at TVA headquarters, face to face with Paradox, who offers him a chance to be “a hero among heroes.” And Wade decides he’s game, declaring himself a superhero Messiah: “I… am… Marvel Jesus.” He suits up as Deadpool, and violence inevitably ensues.

Then comes the shot we’ve all been waiting for: Deadpool lying on his back on icy terrain after being tossed through a wall, with a Wolverine-shaped shadow falling across his body. “Don’t just stand there, you ape—give me a hand up,” Deadpool says, and then sees the claws. We get the briefest glimpse of Wolverine’s trademark yellow X-Men uniform before the credits roll.

Deadpool and Wolverine hits theaters on July 26, 2024.

Listing image by YouTube/Marvel Studios


on-the-proposed-california-sb-1047

On the Proposed California SB 1047

California Senator Scott Wiener of San Francisco introduces SB 1047 to regulate AI. I have put up a market on how likely it is to become law.

“If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law, I’ll be the first to cheer that, but I’m not holding my breath,” Wiener said in an interview. “We need to get ahead of this so we maintain public trust in AI.”

Congress is certainly highly dysfunctional. I am still generally against California trying to act like it is the federal government, even when the cause is good, but I understand.

Can California effectively impose its will here?

On the biggest players, for now, presumably yes.

In the longer run, when things get actively dangerous, then my presumption is no.

There is a potential trap here: if we put our rules in a place where someone with enough upside can ignore them, and we then never pass anything in Congress, we could end up with the worst of both worlds.

So what does it do, according to the bill’s author?

California Senator Scott Wiener: SB 1047 does a few things:

  1. Establishes clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. These standards apply only to the largest models, not startups.

  2. Establishes CalCompute, a public AI cloud compute cluster. CalCompute will be a resource for researchers, startups, & community groups to fuel innovation in CA, bring diverse perspectives to bear on AI development, & secure our continued dominance in AI.

  3. Prevents price discrimination & anticompetitive behavior.

  4. Institutes know-your-customer requirements.

  5. Protects whistleblowers at large AI companies.

@geoffreyhinton called SB 1047 “a very sensible approach” to balancing these needs. Leaders representing a broad swathe of the AI community have expressed support.

People are rightfully concerned that the immense power of AI models could present serious risks. For these models to succeed the way we need them to, users must trust that AI models are safe and aligned w/ core values. Fulfilling basic safety duties is a good place to start.

With AI, we have the opportunity to apply the hard lessons learned over the past two decades. Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences, and we should take reasonable precautions this time around.

As usual, RTFC (Read the Card, or here the bill) applies.

Section 1 names the bill.

Section 2 says California is winning in AI (see this song), AI has great potential but could do harm. A missed opportunity to mention existential risks.

Section 3 22602 offers definitions. I have some notes.

  1. Usual concerns with the broad definition of AI.

  2. Odd that ‘a model autonomously engaging in a sustained sequence of unsafe behavior’ only counts as an ‘AI safety incident’ if it is not ‘at the request of a user.’ If a user requests that, aren’t you supposed to ensure the model doesn’t do it? Sounds to me like a safety incident.

  3. Covered model is defined primarily via compute; not sure why this isn’t a ‘foundation’ model. I like the secondary extension clause: “The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations OR The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.”

  4. Critical harm is either mass casualties or 500 million in damage, or comparable.

  5. Full shutdown means full shutdown but only within your possession and control. So when we really need a full shutdown, this definition won’t work. The whole point of a shutdown is that it happens everywhere whether you control it or not.

  6. Open-source artificial intelligence model is defined to only include models that ‘may be freely modified and redistributed’ so that raises the question of whether that is legal or practical. Such definitions need to be practical, if I can do it illegally but can clearly still do it, that needs to count.

  7. Definition (s): [“Positive safety determination” means a determination, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.]

    1. Very happy to see the mention of post-training modifications, which is later noted to include access to tools and data, so scaffolding explicitly counts.

Section 3 22603 (a) says that before you train a new non-derivative model, you need to determine whether you can make a positive safety determination.

I like that this happens before you start training. But of course, this raises the question: how do you know how it will score on the benchmarks?

One thing I worry about is the concept that if you score below another model on various benchmarks, that this counts as a positive safety determination. There are at least four obvious failure modes for this.

  1. The developer might choose to sabotage performance against the benchmarks, either by excluding relevant data and training, or otherwise. Or, alternatively, a previous developer might have gamed the benchmarks, which happens all the time, such that all you have to do to score lower is to not game those benchmarks yourself.

  2. The model might have situational awareness, and choose to get a lower score. This could be various degrees of intentional on the part of the developers.

  3. The model might not adhere to your predictions or scaling laws. So perhaps you say it will score lower on benchmarks, but who is to say you are right?

  4. The benchmarks might simply not be good at measuring what we care about.

Similarly, it is good to make a safety determination before beginning training, but also if the model is worth training then you likely cannot actually know its safety in advance, especially since this is not only existential safety.

Section 3 22603 (b) covers what you must do if you cannot make the positive safety determination. Here are the main provisions:

  1. You must prevent unauthorized access.

  2. You must be capable of a full shutdown.

  3. You must implement all covered guidance. Okie dokie.

  4. You must implement a written and separate safety and security protocol, that provides ‘reasonable assurance’ that it would ensure the model will have safeguards that prevent critical harms. This has to include clear tests that verify if you have succeeded.

  5. You must say how you are going to do all that, how you would change how you are doing it, and what would trigger a shutdown.

  6. Provide a copy of your protocol and keep it updated.

You can then make a ‘positive safety determination’ after training and testing, subject to the safety protocol.

Section (d) says that if your model is ‘not subject to a positive safety determination,’ in order to deploy it (you can still deploy it at all?!) you need to implement ‘reasonable safeguards and requirements’ that allow you to prevent harms and to trace any harms that happen. I worry this section is not taking such scenarios seriously. To not be subject to such a determination, the model needs to be breaking new ground in capabilities, and you were unable to assure that it wouldn’t be dangerous. So what are these ‘reasonable safeguards and requirements’ that would make deploying it acceptable? Perhaps I am misunderstanding here.

Section (g) says safety incidents must be reported.

Section (h) says if your positive safety determination is unreasonable it does not count, and that to be reasonable you need to consider any risk that has already been identified elsewhere.

Overall, this seems like a good start, but I worry it has loopholes, and I worry that it is not thinking about the future scenarios where the models are potentially existentially dangerous, or might exhibit unanticipated capabilities or situational awareness and so on. There is still the DC-style ‘anticipate and check specific harm’ approach throughout.

Section 22604 is about KYC: a large computing cluster has to collect the information and check to see if customers are trying to train a covered model.

Section 22605 requires sellers of inference or a computing cluster to provide a transparent, uniform, publicly available price schedule, banning price discrimination, and bans ‘unlawful discrimination or noncompetitive activity in determining price or access.’

I always wonder about laws that say ‘you cannot do things that are already illegal.’ I mean, I thought that was the whole point of them already being illegal.

I am not sure to what extent this rule has an impact in practice, and whether it effectively means that anyone selling such services has to be a kind of common carrier unable to pick who gets its limited services, and unable to make deals of any kind. I see the appeal, but also I see clear economic downsides to forcing this.

Section 22606 covers penalties. The fines are relatively limited in scope; the main relief is an injunction against, and possible deletion of, the model. I worry that in practice there are not enough teeth here.

Section 22607 is whistleblower protections. Odd that this is necessary; one would think there would be such protections universally by now? There are no unexpectedly strong provisions here, only the normal stuff.

Section 4 11547.6 tasks the new Frontier Model Division with its official business, including collecting reports and issuing guidance.

Section 5 11547.7 is for the CalCompute public cloud computing cluster. This seems like a terrible idea; there is no reason for public involvement here, and there is no stated or allocated budget. Assuming it is small, it does not much matter.

Sections 6-9 are standard boilerplate disclaimers and rules.

What should we think about all that?

It seems like a good faith effort to put forward a helpful bill. It has a lot of good ideas in it. I believe it would be net helpful. In particular, it is structured such that if your model is not near the frontier, your burden here is very small.

My worry is that this has potential loopholes in various places, and does not yet strongly address the nature of the future more existential threats. If you want to ignore this law, you probably can.

But it seems like a good beginning, especially on dealing with relatively mundane but still potentially catastrophic threats, without imposing an undue burden on developers. This could then be built upon.

Ah, Tyler Cowen has a link on this and it’s… California’s Effort to Strangle AI.

Because of course it is. We do this every time. People keep saying ‘this law will ban satire’ or spreadsheets or pictures of cute puppies or whatever, based on what on its best day would be a maximalist anti-realist reading of the proposal, if it were enacted straight with no changes and everyone actually enforced it to the letter.

Dean Ball: This week, California’s legislature introduced SB 1047: The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. The bill, introduced by State Senator Scott Wiener (liked by many, myself included, for his pro-housing stance), would create a sweeping regulatory regime for AI, apply the precautionary principle to all AI development, and effectively outlaw all new open source AI models—possibly throughout the United States.

This is a line pulled out whenever anyone proposes that AI be governed by any regulatory regime whatsoever even with zero teeth of any kind. When someone says that someone, somewhere might be legally required to write an email.

At least one of myself and Dean Ball is extremely mistaken about what this bill says.

The definition of covered model seems to me to be clearly intended to apply only to models that are effectively at the frontier of model capabilities.

Let’s look again at the exact definition:

(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.

(2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.

That seems clear as day on what it means, and what it means is this:

  1. If your model is over 10^26 we assume it counts.

  2. If it isn’t, but it is as good as state-of-the-art current models, it counts.

  3. Being ‘as good as’ is a general capability thing, not hitting specific benchmarks.
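Read that way, the two clauses reduce to a simple predicate. The following is purely illustrative: the function, its inputs, and the way the prongs are collapsed into booleans are my own shorthand, not anything specified in the bill.

```python
# Hypothetical sketch of the covered-model test as read above. Names and
# the boolean inputs are my own shorthand, not terms from the bill.

COMPUTE_THRESHOLD_FLOPS = 1e26  # the bill's 2024 compute threshold


def is_covered_model(training_flops: float,
                     matches_frontier_benchmarks: bool,
                     similar_general_capability: bool) -> bool:
    """Return True if the model would be 'covered' on this reading.

    training_flops: total training compute (integer or floating-point ops).
    matches_frontier_benchmarks: performs comparably to state-of-the-art
        foundation models on commonly used benchmarks.
    similar_general_capability: similar general capability despite scoring
        below the threshold on some specific benchmark.
    """
    if training_flops > COMPUTE_THRESHOLD_FLOPS:
        return True  # clause (1), compute prong
    if matches_frontier_benchmarks:
        return True  # clause (1), benchmark-equivalence prong
    if similar_general_capability:
        return True  # clause (2), general-capability catch-all
    return False


# A mid-sized 2024 open model: well under 1e26 FLOPs, not frontier-level.
print(is_covered_model(5e24, False, False))  # False
print(is_covered_model(2e26, False, False))  # True
```

Note that the compute prong is sufficient but not necessary: a model can come in under 10^26 operations and still be covered if it reaches frontier-level capability.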

Under this definition, if no one was actively gaming benchmarks, at most three existing models would plausibly qualify for this definition: GPT-4, Gemini Ultra and Claude. I am not even sure about Claude.

If the open source models are gaming the benchmarks so much that they end up looking like a handful of them are matching GPT-4 on benchmarks, then what can I say, maybe stop gaming the benchmarks?

Or point out quite reasonably that the real benchmark is user preference, and in those terms, you suck, so it is fine. Either way.

But notice that this isn’t what the bill does. The bill applies to large models and to any models that reach the same performance regardless of the compute budget required to make them. This means that the bill applies to startups as well as large corporations.

Um, no, because the open model weights models do not remotely reach the performance level of OpenAI?

Maybe some will in the future.

But this very clearly does not ‘ban all open source.’ There are zero existing open model weights models that this bans.

There are a handful of companies that might plausibly have to worry about this in the future, if OpenAI doesn’t release GPT-5 for a while, but we’re talking Mistral and Meta, not small start-ups. And we’re talking about them exactly because they would be trying to fully play with the big boys in that scenario.

Ball is also wrong about the precautionary principle being imposed before training.

I do not see any such rule here. What I see is that if you cannot show that your model will definitely be safe before training, then you have to wait until after the training run to certify that it is safe.

In other words, this is an escape clause. Are we seriously objecting to that?

Then, if you also can’t certify that it is safe after the training run, then we talk precautions. But no one is saying you cannot train, unless I am missing something?

As usual, people such as Ball are imagining a standard of ‘my product could never be used to do harm’ that no one is trying to apply here in any way. That is why any model not at the frontier can automatically get a positive safety determination, which flies in the face of this theory. Then, if you are at the frontier, you have to obey industry standard safety procedures and let California know what procedures you are following. Woe is you. And of course, the moment someone else has a substantially better model, guess who is now positively safe?

The ‘covered guidance’ that Ball claims to be alarmed about does not mean ‘do everything any safety organization says and if they are contradictory you are banned.’ The law does not work that way. Here is what it actually says:

(e) “Covered guidance” means any of the following:

(1) Applicable guidance issued by the National Institute of Standards and Technology and by the Frontier Model Division.

(2) Industry best practices, including relevant safety practices, precautions, or testing procedures undertaken by developers of comparable models, and any safety standards or best practices commonly or generally recognized by relevant experts in academia or the nonprofit sector.

(3) Applicable safety-enhancing standards set by standards setting organizations.

So what that means is, we will base our standards off an extension of NIST’s, and also we expect you to be liable to implement anything that is considered ‘industry best practice’ even if we did not include it in the requirements. But obviously it’s not going to be best practices if it is illegal. Then we have the third rule, which only counts ‘applicable’ standards. California will review them and decide what is applicable, so that is saying they will use outside help.

Also, note the term ‘non-derivative’ when talking about all the models. If you are a derivative model, then you are fine by default. And almost all models with open weights are derivative models, because of course that is the point, distillation and refinement rather than starting over all the time.

So here’s what the law would actually do, as far as I can tell:

  1. If your model is not projected to be state of the art level, and it is not over the 10^26 limit no one has hit yet and no one except the big three is anywhere near, this law has only trivial impact upon you: a trivial amount of paperwork. Every other business in America, and especially in the state of California, is jealous.

  2. If your model is a derivative of an existing model, you’re fine, that’s it.

  3. If the model you want to train is projected to be state of the art, but you can show it is safe before you even train it, good job, you’re golden.

  4. If your model is projected to be state of the art, and can’t show it is safe before training it, you can still train it as long as you don’t release it and you make sure it isn’t stolen or released by others. Then if you show it is safe or show it is not state of the art, you’re golden again.

  5. If your model is state of the art, and you train it and still don’t know if it is ‘safe,’ and by safe we do not mean ‘no one ever does anything wrong’ we mean things more like ‘no one ever causes 500 million dollars in damages or mass casualties,’ then you have to implement a series of safety protocols (regulatory requirements) to be determined by California, and you have to tell them what you are doing to ensure safety.

  6. You have to have abilities like ‘shut down AIs running on computers under my control’ and ‘plausibly prevent unauthorized people from accessing the model if they are not supposed to.’ Which does not even apply to copies of the program you no longer control. Is that going to be a problem?

  7. You also have to report any ‘safety incidents’ that happen.

  8. Also some ‘pro-innovation’ stuff of unknown size and importance.
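As a sanity check on my own reading, the path through items 1 through 8 can be sketched as a decision function. The argument names and outcome strings are my shorthand for the categories above, not language from the statute itself.

```python
# Hypothetical sketch of the compliance path described above. All names
# and return strings are my own shorthand, not terms from the bill.

def compliance_path(is_derivative: bool,
                    projected_state_of_the_art: bool,
                    safe_before_training: bool,
                    safe_after_training: bool) -> str:
    if is_derivative:
        return "fine by default"                           # item 2
    if not projected_state_of_the_art:
        return "trivial paperwork"                         # item 1
    if safe_before_training:
        return "positive safety determination"             # item 3
    if safe_after_training:
        return "train under access controls, then golden"  # item 4
    # items 5-7: safety protocols, shutdown capability, reporting
    return "safety protocols plus incident reporting"


# A derivative open-weights fine-tune, however capable, exits immediately.
print(compliance_path(True, True, False, False))  # fine by default
```

The point of the sketch is how early most models exit: everything short of a non-derivative frontier model falls out in the first two branches.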

Not only does SB 1047 not attempt to ‘strangle AI,’ not only does it not attempt regulatory capture or target startups, it would do essentially nothing to anyone but a handful of companies unless they have active safety incidents. If there are active safety incidents, then we get to know about them, which could introduce liability concerns or publicity concerns, and that seems like the main downside? That people might learn about your failures and existing laws might sometimes apply?

The arguments against such rules often come from the implicit assumption that we enforce our laws as written, reliably and without discretion. Which we don’t. What would happen if, as Eliezer recently joked, the law actually worked the way critics of such regulations claim that it does? If every law was strictly enforced as written, with no common sense used, as they warn will happen? And somehow our courts could handle the caseloads involved? Everyone would be in jail within the week.

When people see proposals for treating AI slightly more like anything else, subjecting it to remarkably ordinary regulation with an explicit and deliberate effort to target only frontier models that are exclusively fully closed, and then say that this ‘bans open source,’ what are they talking about?

They are saying that Open Model Weights Are Unsafe and Nothing Can Fix This, and we want to do things that are patently and obviously unsafe, so asking any form of ‘is this safe?’ and having an issue with the answer being ‘no’ is a ban on open model weights. Or, alternatively, they are saying that their business model and distribution plans are utterly incompatible with complying with any rules whatsoever, so we should never pass any, or they should be exempt from any rules.

The idea that this would “spell the end of America’s leadership in AI” is laughable. If you think America’s technology industry cannot stand a whiff of regulation, I mean, do they know anything about America or California? And have they seen the other guy? Have they seen American innovation across the board, almost entirely in places with rules orders of magnitude more stringent? This here is so insanely nothing.

But then, when did such critics let that stop them? It’s the same rhetoric every time, no matter what. And some people seem willing to amplify such voices, without asking whether their words make sense.

What would happen if there was actually a wolf?


humans-are-living-longer-than-ever-no-matter-where-they-come-from 

Humans are living longer than ever no matter where they come from 

Live long and prosper? —

Disease outbreaks and human conflicts help dictate regional differences in longevity.

An older person drinking coffee in an urban environment.

Most of us want to stay on this planet as long as possible. While there are still differences depending on sex and region, we are now living longer as a species—and it seems life spans will only continue to grow longer.

Researcher David Atance of Universidad de Alcalá, Spain, and his team gathered data on the trends of the past. They then used their findings to project what we can expect to see in the future. Some groups have had it harder than others because of factors such as war, poverty, natural disasters, or disease, but the researchers found that mortality and longevity trends are becoming more similar regardless of disparities between sexes and locations.

“The male-female gap is decreasing among the [clusters],” they said in a study recently published in PLOS One.

Remembering the past

The research team used specific mortality indicators—such as life expectancy at birth and most common age at death—to identify five global clusters that reflect the average life expectancy in different parts of the world. The countries in these clusters changed slightly from 1990 to 2010 and are projected to change further by 2030 (though 2030 projections are obviously tentative). Data for both males and females was considered when deciding which countries belonged in which cluster during each period. Sometimes, one sex thrived while the other struggled within a cluster—or even within the same country.
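For intuition about the method, clustering countries by mortality indicators can be illustrated in miniature. This is a toy sketch with fabricated numbers, two indicators, two clusters, and a plain k-means; the study's actual clustering technique, indicator set, and data are richer than this.

```python
# Toy illustration of clustering countries by mortality indicators.
# The study's actual method, indicators, and data differ; the numbers
# below are fabricated and k=2 is chosen only for brevity.
import numpy as np


def kmeans(points: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """A minimal k-means: returns a cluster label per row of `points`."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each country to its nearest cluster center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned countries.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels


# Columns: life expectancy at birth, most common age at death (made up).
indicators = np.array([
    [82.0, 88.0],  # high-income profile
    [81.5, 87.0],  # high-income profile
    [62.0, 70.0],  # low-income profile
    [60.5, 68.0],  # low-income profile
    [71.0, 78.0],  # middle profile
])
labels = kmeans(indicators, k=2)
# Countries with similar mortality profiles end up sharing a label.
```

Countries can migrate between clusters across periods simply because their indicator values shift, which is how the same machinery produces different 1990, 2010, and 2030 groupings.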

Clusters that included mostly wealthier countries had the best chance at longevity in 1990 and 2010. Low-income countries predictably had the worst mortality rate. In 1990, these countries, many of which are in Africa, suffered from war, political upheaval, and the lethal spread of HIV/AIDS. Rwanda endured a bloody civil war during this period. Around the same time, Uganda had tensions with Rwanda, as well as Sudan and Zaire. In the Middle East, the Gulf War and its aftermath inevitably affected 1990 male and female populations.

Along with a weak health care system, the factors that gave most African countries a high mortality rate were still just as problematic in 2010. In all clusters, male life spans tended to differ slightly less between countries than female life spans. However, in some regions, there were differences between how long males lived compared to females. Mortality significantly increased in 1990 male populations from former Soviet countries after the dissolution of the Soviet Union, and this trend continued in 2010. Deaths in those countries were attributed to violence, accidents, cardiovascular disease, alcohol, an inadequate healthcare system, poverty, and psychosocial stress.

Glimpsing the future

2030 predictions must be taken with caution. Though past trends can be good indicators of what is to come, trends do not always continue. While things may change between now and 2030 (and those changes could be drastic), these estimates project what would happen if past and current trends continue into the relatively near future.

Some countries might be worse off in 2030. The lowest-income, highest-mortality cluster will include several African countries that have been hit hard by wars as well as political and socioeconomic challenges. The second low-income, high-mortality cluster, also mostly African countries, will now add some Eastern European and Asian countries that suffer from political and socioeconomic issues; most have recently been involved in conflicts and wars, or still are, such as Ukraine.

The highest-income, lowest-mortality cluster will gain some countries. These include Chile, which has made strides in development that are helping people live longer.

Former Soviet countries will probably continue to face the same issues they did in 1990 and 2010. They fall into one of the middle-income, mid-longevity clusters and will most likely be joined by some Latin American countries that were once in a higher bracket but presently face high levels of homicide, suicide, and accidents among middle-aged males. Meanwhile, there are some other countries in Latin America that the research team foresees as moving toward a higher income and lower mortality rate.

Appearances can be deceiving

The study places the US in the first or second high-income, low-mortality bracket, depending on the timeline. This could make it look like it is doing well on a global scale. While the study doesn’t look at the US specifically, there are certain local issues that say otherwise.

A 2022 study by the Centers for Disease Control and Prevention suggests that pregnancy and maternal care in the US are abysmal, with a surprisingly high (and still worsening) maternal death rate of about 33 deaths per 100,000 live births. This is more than double what it was two decades ago. In states like Texas, which banned abortion after the overturn of Roe v. Wade, infant deaths have also spiked. The US also has the most expensive health care system among high-income countries, a situation only worsened by the pandemic.

The CDC also reports that life expectancy in the US keeps falling. Cancer, heart disease, stroke, drug overdoses, and accidents are the culprits, especially among middle-aged Americans. There has also been an increase in gun violence and suicides. Guns have overtaken car accidents as the No. 1 killer of children and teens.

Whether the US will stay in that top longevity bracket is also uncertain, especially if maternal death rates keep rising and there aren’t significant improvements to the health care system. There and elsewhere, there’s no way of telling what will actually happen between now and 2030, but Atance and his team want to revisit their study then and compare their estimates to actual data. The team also plans to further analyze the factors that contribute to longevity and mortality, as well as conduct surveys that could support their predictions. We will hopefully live to see the results.

PLOS One, 2024. DOI: 10.1371/journal.pone.0295842


hermit-crabs-find-new-homes-in-plastic-waste:-shell-shortage-or-clever-choice?

Hermit crabs find new homes in plastic waste: Shell shortage or clever choice?

ocean real estate bargains —

The crustaceans are making the most of what they find on the seafloor.

hermit crab in plastic pen cap

Enlarge / Scientists have found that hermit crabs are increasingly using plastic and other litter as makeshift shell homes.

Land hermit crabs have been using bottle tops, parts of old light bulbs, and broken glass bottles instead of shells.

A new study by Polish researchers examined 386 images of hermit crabs occupying these artificial shells. The photos had been uploaded by users to online platforms and were then analyzed by scientists using a research approach known as iEcology. Of the 386 photos, the vast majority—326 cases—featured hermit crabs using plastic items as shelters.

At first glance, this is a striking example of how human activities can alter the behavior of wild animals and potentially the ways that populations and ecosystems function as a result. But there are lots of factors at play and, while it’s easy to jump to conclusions, it’s important to consider exactly what might be driving this particular change.

Shell selection

Hermit crabs are an excellent model organism to study because they behave in many different ways and those differences can be easily measured. Instead of continuously growing their own shell to protect their body, like a normal crab or a lobster would, they use empty shells left behind by dead snails. As they walk around, the shell protects their soft abdomen but whenever they are threatened they retract their whole body into the shell. Their shells act as portable shelters.

Having a good enough shell is critical to an individual’s survival, so crabs acquire and upgrade their shells as they grow. They fight other hermit crabs for shells and assess any new shells they find for suitability. Primarily, they look for shells that are large enough to protect them, but their decision-making also takes into account the type of snail shell, its condition, and even its color—a factor that could affect how conspicuous the crab is.

Another factor that constrains shell choice is the actual availability of suitable shells. For some as yet unknown reason, a proportion of land hermit crabs are choosing to occupy plastic items rather than natural shells, as highlighted by this latest study.

Housing crisis or ingenious new move?

Humans have intentionally changed the behavior of animals for millennia through the process of domestication. Any unintended behavioral changes in natural animal populations are potentially concerning, but how worried should we be about hermit crabs using plastic litter as shelter?

The Polish research raises a number of questions. First, how prevalent is the adoption of plastic litter instead of shells? While 326 crabs using plastic seems like a lot, this is probably an underestimate of the raw number, given that users encounter crabs only in accessible parts of their populations. Conversely, it seems probable that users are biased toward uploading striking or unusual images, so the iEcology approach might produce an exaggerated impression of the proportion of individuals in a population opting for plastic over natural shells. We need structured field surveys to clarify this.

Second, why are some individual crabs using plastic? One possibility is that they are forced to by a lack of natural shells, but we can’t test this hypothesis without more information on the demographics of local snail populations. Or perhaps the crabs prefer plastic, or find it easier to locate than real shells? As the authors point out, plastic might be lighter than equivalent shells, affording the same amount of protection at a lower energy cost of carrying it. Intriguingly, chemicals that leach out of plastic are known to attract marine hermit crabs by mimicking the odor of food.


Enlarge / As hermit crabs adapt to an increase in plastic pollution, more research is needed to investigate the nuances.

This leads to a third question about the possible downsides of using plastic. Compared to real shells, plastic waste tends to be brighter and might contrast more with the background, making the crabs more vulnerable to predators. Additionally, we know that exposure to microplastics and compounds that leach from plastic can change the behavior of hermit crabs, making them less fussy about the shells they choose, less adept at fighting for shells, and even changing their personalities by making them more prone to take risks. To answer these questions about the causes and consequences of hermit crabs using plastic waste in this way, we need to investigate their shell selection behavior through a series of laboratory experiments.

Pollution changes behavior

Plastic pollution is just one of the ways we are changing our environment. It’s by far the most highly reported form of debris that we have introduced to marine environments. But animal behavior is affected by other forms of pollution too, including microplastics, pharmaceuticals, light, and noise, plus the rising temperatures and ocean acidification caused by climate change.

So while investigating the use of plastic waste by hermit crabs could help us better understand the consequences of certain human impacts on the environment, it doesn’t show exactly how animals will adjust to the Anthropocene, the era during which human activity has been having a significant impact on the planet. Will they cope through flexible (“plastic”) behavioral responses within their lifetimes, evolve across generations, or perhaps both? In my view, the iEcology approach cannot answer questions like this. Rather, this study acts as an alarm bell, highlighting potential changes that now need to be fully investigated.

Mark Briffa, Professor of Animal Behaviour, University of Plymouth. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Hermit crabs find new homes in plastic waste: Shell shortage or clever choice? Read More »

the-2024-rolex-24-at-daytona-put-on-very-close-racing-for-a-record-crowd

The 2024 Rolex 24 at Daytona put on very close racing for a record crowd

actually 23 hours and 58 minutes this time —

The around-the-clock race marked the start of the North American racing calendar.

Porsche and Cadillac GTP race cars at Daytona

Enlarge / The current crop of GTP hybrid prototypes look wonderful, thanks to rules that cap the amount of downforce they can generate in favor of more dramatic styling.

Porsche Motorsport

DAYTONA BEACH, Fla.—Near-summer temperatures greeted a record crowd at the Daytona International Speedway in Florida last weekend. At the end of each January, the track hosts the Rolex 24, an around-the-clock endurance race that’s now as high-profile as it has ever been during the event’s 62-year history.

Between the packed crowd and the 59-car grid, there’s proof that sports car racing is in good shape. Some of that might be attributable to Drive to Survive‘s rising tide lifting a bunch of non-F1 boats, but there’s more to the story than just a resurgent interest in motorsport. The dramatic-looking GTP prototypes have a lot to do with it—powerful hybrid racing cars from Acura, BMW, Cadillac, and Porsche are bringing in the fans and, in some cases, some pretty famous drivers with F1 or IndyCar wins on their resumes.

But IMSA and the Rolex 24 are about more than just the top class of cars; in addition to the GTP hybrids, the field also comprised the very competitive pro-am LMP2 prototype class and a pair of classes (one for professional teams, another for pro-ams) for production-based machines built to a global set of rules, called GT3. (To be slightly confusing, in IMSA, those classes are known as GTD-Pro and GTD. More on sports car racing being needlessly confusing later.)


Enlarge / The crowd for the 2024 Rolex 24 was larger even than last year. This is the pre-race grid walk, which I chose to watch from afar.

Jonathan Gitlin

There was even a Hollywood megastar in attendance: the crew of the Jerry Bruckheimer-produced, Joseph Kosinski-directed racing movie starring Brad Pitt was at the track filming scenes for the start of that film.

GTP finds its groove

Last year’s Rolex 24 was the debut of the new GTP cars, and they didn’t have an entirely trouble-free race. These cars are some of the most complicated sports prototypes ever to turn a wheel, thanks to their hybrid systems, and during the 2023 race, two of the entrants required lengthy stops to replace their hybrid batteries. Those teething troubles are a thing of the past, and over the last 12 months, the cars have found an awful lot more speed, with most of the 10-car class breaking Daytona’s lap record during qualifying.

Most of that new speed has come from the teams’ familiarity with the cars after a season of racing but also from a year of software development. Only Porsche’s 963 has had any mechanical upgrades during the off-season. “You… will not notice anything on the outside shell of the car,” explained Urs Kuratle, Porsche Motorsport’s director of factory racing. “So the aerodynamics, all [those] things, they look the same… Sometimes it’s a material change, where a fitting used to be out of aluminum and due to reliability reasons we change to steel or things like this. There are minor details like this.”

  • This year, the Wayne Taylor Racing team had not one but two ARX-06s. I expected the cars to be front-runners, but a late BoP (Balance of Performance) change added another 40 kg.

    Jonathan Gitlin

  • The Cadillacs are fan favorites because of their loud, naturally aspirated V8s. I think the car looks better than the other GTP cars, too.

    Jonathan Gitlin

  • Porsche’s 963 is the only GTP car that has had any changes since last year, but they’re all under the bodywork.

    Jonathan Gitlin

  • Porsche is the only manufacturer to start selling customer GTP cars so far. The one on the left is the Proton Competition Mustang Sampling car; the one on the right belongs to JDC-Miller MotorSports.

    Jonathan Gitlin

GTP cars aren’t as fast or as powerful as an F1 single-seater, but the driver workload inside the cockpit may be even higher. At last year’s season-ending Petit Le Mans, former F1 champion Jenson Button—then making a guest appearance in the privateer-run JDC-Miller MotorSports Porsche 963—came away with a newfound respect for how many different systems could be tweaked from the steering wheel.

The 2024 Rolex 24 at Daytona put on very close racing for a record crowd Read More »