Author name: Shannon Garcia

Facebook, Instagram may cut fees by nearly 50% in scramble for DMA compliance

Meta is considering cutting monthly subscription fees for Facebook and Instagram users in the European Union nearly in half to comply with the Digital Markets Act (DMA), Reuters reported.

During a day-long public workshop on Meta’s DMA compliance, Meta’s competition and regulatory director, Tim Lamb, told the European Commission (EC) that individual subscriber fees could be slashed from 9.99 euros to 5.99 euros. Meta is hoping that reducing fees will help to speed up the EC’s process for resolving Meta’s compliance issues. If Meta’s offer is accepted, any additional accounts would then cost 4 euros instead of 6 euros.

Lamb said that these prices are “by far the lowest end of the range that any reasonable person should be paying for services of these quality,” calling it a “serious offer.”

The DMA requires that users of Meta’s Facebook, Instagram, Facebook Messenger, and Facebook Marketplace “freely” give consent to share data used for ad targeting, without losing access to the platform if they’d prefer not to share data. That means services must provide an acceptable alternative for users who don’t consent to data sharing.

“Gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalized but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent,” the DMA says.

Designated gatekeepers like Meta have debated what it means for a user to “freely” give consent, suggesting that offering a paid subscription for users who decline to share data would be one route for Meta to continue offering high-quality services without routinely hoovering up data on all its users.

But EU privacy advocates like NOYB have protested Meta’s plan to offer a paid subscription as the alternative to consenting to data sharing, calling it a “pay or OK” model that forces Meta users who cannot pay the fee to consent to invasive data sharing they would otherwise decline. In a statement shared with Ars, NOYB chair Max Schrems said that even if Meta reduced its fees to 1.99 euros, it would be forcing consent from 99.9 percent of users.

“We know from all research that even a fee of just 1.99 euros or less leads to a shift in consent from 3–10 percent that genuinely want advertisement to 99.9 percent that still click yes,” Schrems said.

In the EU, the General Data Protection Regulation (GDPR) “requires that consent must be ‘freely’ given,” Schrems said. “In reality, it is not about the amount of money—it is about the ‘pay or OK’ approach as a whole. The entire purpose of ‘pay or OK’ is to get users to click on OK, even if this is not their free and genuine choice. We do not think the mere change of the amount makes this approach legal.”

Where EU stands on subscription models

Meta expects that a subscription model is a legal alternative under the DMA. The tech giant said it was launching EU subscriptions last November after the Court of Justice of the European Union (CJEU) “endorsed the subscriptions model as a way for people to consent to data processing for personalized advertising.”

It’s unclear how popular the subscriptions have been at the current higher cost. Right now in the EU, monthly Facebook and Instagram subscriptions cost 9.99 euros per month on the web or 12.99 euros per month on iOS and Android, with additional fees of 6 euros per month on the web and 8 euros per month on iOS and Android for each additional account. Meta declined to comment on how many EU users have subscribed, noting to Ars that it has no obligation to do so.
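For a rough sense of how those tiers stack up, the fee structure can be sketched as simple arithmetic. The helper below is our own illustration built from the prices quoted in the article (`monthly_cost` and its parameters are hypothetical, not anything from Meta):

```python
# Hypothetical helper reflecting the EU subscription prices quoted in the
# article (web vs. iOS/Android, first account plus extra-account fees);
# this is our own illustration, not anything from Meta.
def monthly_cost(accounts, platform="web"):
    base = {"web": 9.99, "ios_android": 12.99}[platform]
    extra = {"web": 6.00, "ios_android": 8.00}[platform]
    return base + extra * max(accounts - 1, 0)

print(monthly_cost(1))                 # one account on the web: 9.99
print(monthly_cost(3, "ios_android"))  # three accounts on mobile
```

Under Meta’s reported offer, the base and extra-account figures would drop to 5.99 and 4.00 euros, respectively.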

In the CJEU case, the court was reviewing Meta’s GDPR compliance, which Schrems noted is less strict than the DMA. The CJEU specifically said that under the GDPR, “users must be free to refuse individually”—”in the context of” signing up for services— “to give their consent to particular data processing operations not necessary” for Meta to provide such services “without being obliged to refrain entirely from using the service.”

Health experts plead for unvaxxed Americans to get measles shot as cases rise

MMR is safe and effective —

The US hit last year’s total in under 12 weeks, suggesting we’re in for a bad time.

A view from a hospital as children receive medical treatment in Kabul, Afghanistan, on April 18, 2022. More than 130 children have died from measles in Afghanistan since the beginning of this year.

The Centers for Disease Control and Prevention and the American Medical Association sent out separate but similar pleas on Monday for unvaccinated Americans to get vaccinated against the extremely contagious measles virus as vaccination rates have slipped, cases are rising globally and nationally, and the spring-break travel period is beginning.

In the first 12 weeks of 2024, US measles cases have already matched and likely exceeded the case total for all of 2023. According to the CDC, there were 58 measles cases reported from 17 states as of March 14. But media tallies indicate there have been more cases since then, with at least 60 cases now in total, according to CBS News. In 2023, there were 58 cases in 20 states.

“As evident from the confirmed measles cases reported in 17 states so far this year, when individuals are not immunized as a matter of personal preference or misinformation, they put themselves and others at risk of disease—including children too young to be vaccinated, cancer patients, and other immunocompromised people,” AMA President Jesse Ehrenfeld said in a statement urging vaccination Monday.

The latest data indicates that vaccination rates among US kindergarteners have slipped to 93 percent nationally, below the 95 percent target to prevent the spread of the disease. And vaccine exemptions for non-medical reasons have reached an all-time high.

The CDC released a health advisory on Monday also urging measles vaccination. The CDC drove home the point that unvaccinated Americans are largely responsible for importing the virus, and pockets of unvaccinated children in local communities spread it once it’s here. The 58 measles infections that have been reported to the agency so far include cases from seven outbreaks in seven states. Most of the cases are in vaccine-eligible children aged 12 months and older who are unvaccinated. Of the 58 cases, 54 (93 percent) are linked to international travel, and most measles importations are by unvaccinated US residents who travel abroad and bring measles home with them, the CDC flagged.

The situation is likely to worsen as Americans begin spring travel, the CDC suggested. “Many countries, including travel destinations such as Austria, the Philippines, Romania, and the United Kingdom, are experiencing measles outbreaks,” the CDC said. “To prevent measles infection and reduce the risk of community transmission from importation, all US residents traveling internationally, regardless of destination, should be current on their [measles-mumps-rubella (MMR)] vaccinations.” The agency added in a recommendation to parents that “even if not traveling, ensure that children receive all recommended doses of MMR vaccine. Two doses of MMR vaccine provide better protection (97 percent) against measles than one dose (93 percent). Getting MMR vaccine is much safer than getting measles, mumps, or rubella.”

For Americans who are already vaccinated and communities with high vaccination coverage, the risk is low, the CDC noted. “However, pockets of low coverage leave some communities at higher risk for outbreaks.” This, in turn, threatens wider, continuous spread that could overturn the country’s status of having eliminated measles, which was declared in 2000. The US was close to losing its elimination status in 2019 when outbreaks among unvaccinated children drove 1,247 cases across 31 states. Vaccination rates have only fallen since then.

“The reduction in measles vaccination threatens to erase many years of progress as this previously eliminated vaccine-preventable disease returns,” the AMA’s Ehrenfeld warned.

As Ars has reported previously, measles is among the most contagious viruses known and can linger in airspace for up to two hours. Up to 90 percent of unvaccinated people exposed will contract it. Symptoms can include high fever, runny nose, red and watery eyes, and a cough, as well as the hallmark rash. About 1 in 5 unvaccinated people with measles are hospitalized, while 1 in 20 infected children develop pneumonia, and up to 3 in 1,000 children die of the infection. Brain swelling (encephalitis) can occur in 1 in 1,000 children, which can lead to hearing loss and intellectual disabilities. The virus can also destroy immune responses to previous infections—a phenomenon known as “immune amnesia”—which can leave children vulnerable to various other infections for years afterward.
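The protection figures above can be combined into a back-of-the-envelope estimate. The calculation below is our own illustration using the article’s numbers, not an epidemiological model, and the `risk_after_exposure` helper is hypothetical:

```python
# Rough arithmetic from figures in the article (illustrative only, not an
# epidemiological model): roughly 90 percent of exposed unvaccinated people
# contract measles, and the MMR vaccine is about 93 percent effective after
# one dose and 97 percent after two.
def risk_after_exposure(vaccine_effectiveness, attack_rate=0.90):
    """Approximate chance an exposed person contracts measles."""
    return attack_rate * (1 - vaccine_effectiveness)

print(f"unvaccinated: {risk_after_exposure(0.0):.1%}")   # 90.0%
print(f"one dose:     {risk_after_exposure(0.93):.1%}")  # 6.3%
print(f"two doses:    {risk_after_exposure(0.97):.1%}")  # 2.7%
```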

Nvidia unveils Blackwell B200, the “world’s most powerful chip” designed for AI

There’s no knowing where we’re rowing —

208B transistor chip can reportedly reduce AI cost and energy consumption by up to 25x.

The GB200 “superchip” covered with a fanciful blue explosion.

On Monday, Nvidia unveiled the Blackwell B200 tensor core chip—the company’s most powerful single-chip GPU, with 208 billion transistors—which Nvidia claims can reduce AI inference operating costs (such as running ChatGPT) and energy consumption by up to 25 times compared to the H100. The company also unveiled the GB200, a “superchip” that combines two B200 chips and a Grace CPU for even more performance.

The news came as part of Nvidia’s annual GTC conference, which is taking place this week at the San Jose Convention Center. Nvidia CEO Jensen Huang delivered the keynote Monday afternoon. “We need bigger GPUs,” Huang said during his keynote. The Blackwell platform will allow the training of trillion-parameter AI models that will make today’s generative AI models look rudimentary in comparison, he said. For reference, OpenAI’s GPT-3, launched in 2020, included 175 billion parameters. Parameter count is a rough indicator of AI model complexity.

Nvidia named the Blackwell architecture after David Harold Blackwell, a mathematician who specialized in game theory and statistics and was the first Black scholar inducted into the National Academy of Sciences. The platform introduces six technologies for accelerated computing, including a second-generation Transformer Engine, fifth-generation NVLink, RAS Engine, secure AI capabilities, and a decompression engine for accelerated database queries.

Press photo of the Grace Blackwell GB200 chip, which combines two B200 GPUs with a Grace CPU into one chip.

Several major organizations, such as Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI, are expected to adopt the Blackwell platform, and Nvidia’s press release is replete with canned quotes from tech CEOs (key Nvidia customers) like Mark Zuckerberg and Sam Altman praising the platform.

GPUs, once only designed for gaming acceleration, are especially well suited for AI tasks because their massively parallel architecture accelerates the immense number of matrix multiplication tasks necessary to run today’s neural networks. With the dawn of new deep learning architectures in the 2010s, Nvidia found itself in an ideal position to capitalize on the AI revolution and began designing specialized GPUs just for the task of accelerating AI models.
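As a minimal sketch of why this parallelizes so well: in a dense neural-network layer, every output value is an independent dot product, so a GPU can compute millions of them concurrently. This toy NumPy example (our own illustration, not Nvidia’s code) simply counts the multiply-accumulate work in a single layer:

```python
import numpy as np

# Toy illustration (our own sketch, not Nvidia's implementation): a dense
# layer's forward pass is one large matrix multiplication, and each of the
# batch * out_features outputs is an independent dot product.
batch, in_features, out_features = 64, 1024, 4096
x = np.random.rand(batch, in_features).astype(np.float32)
w = np.random.rand(in_features, out_features).astype(np.float32)

y = x @ w  # (64, 1024) @ (1024, 4096) -> (64, 4096)

# Multiply-accumulate operations in this single layer:
macs = batch * in_features * out_features
print(y.shape, f"{macs:,} MACs")  # (64, 4096) 268,435,456 MACs
```

Stacking dozens of such layers, times billions of tokens of training data, is what makes specialized matrix hardware so valuable.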

Nvidia’s data center focus has made the company wildly rich and valuable, and these new chips continue the trend. Nvidia’s gaming GPU revenue ($2.9 billion in the last quarter) is dwarfed by its data center revenue ($18.4 billion), and that trend shows no signs of stopping.

A beast within a beast

Press photo of the Nvidia GB200 NVL72 data center computer system.

The aforementioned Grace Blackwell GB200 chip arrives as a key part of the new NVIDIA GB200 NVL72, a multi-node, liquid-cooled data center computer system designed specifically for AI training and inference tasks. It combines 36 GB200s (that’s 72 B200 GPUs and 36 Grace CPUs total), interconnected by fifth-generation NVLink, which links chips together to multiply performance.

A specification chart for the Nvidia GB200 NVL72 system.

“The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads and reduces cost and energy consumption by up to 25x,” Nvidia said.

That kind of speed-up could potentially save money and time while running today’s AI models, but it will also allow for more complex AI models to be built. Generative AI models—like the kind that power Google Gemini and AI image generators—are famously computationally hungry. Shortages of compute power have widely been cited as holding back progress and research in the AI field, and the search for more compute has led to figures like OpenAI CEO Sam Altman trying to broker deals to create new chip foundries.

While Nvidia’s claims about the Blackwell platform’s capabilities are significant, its real-world performance and adoption remain to be seen as organizations begin deploying the platform. Competitors like Intel and AMD are also looking to grab a piece of Nvidia’s AI pie.

Nvidia says that Blackwell-based products will be available from various partners starting later this year.

CenturyLink left customers without Internet for 39 days—until Ars stepped in

When a severe winter storm hit Oregon on January 13, Nicholas Brown’s CenturyLink fiber Internet service stopped working at his house in Portland.

The initial outage was understandable amid the widespread damage caused by the storm, but CenturyLink’s response was poor. It took about 39 days for CenturyLink to restore broadband service to Brown and even longer to restore service to one of his neighbors. Those reconnections only happened after Ars Technica contacted the telco firm on the customers’ behalf last week.

Brown had never experienced any lengthy outage in over four years of subscribing to CenturyLink, so he figured the telco firm would restore his broadband connection within a reasonable amount of time. “It had practically never gone down at all up to this point. I’ve been quite happy with it,” he said.

While CenturyLink sent trucks to his street to reconnect most of his neighbors after the storm and Brown regularly contacted CenturyLink to plead for a fix, his Internet connection remained offline. Brown had also lost power, but the electricity service was reconnected within about 48 hours, while the broadband service remained offline for well over a month.

Fearing he had exhausted his options, Brown contacted Ars. We sent an email to CenturyLink’s media department on February 21 to seek information on why the outage lasted so long.

Telco finally springs into action

Roughly four hours after we contacted the firm, a CenturyLink technician arrived at the Portland house Brown shares with his partner, Jolene Edwards. The technician was able to reconnect them that day.

“At 4:30 pm, a CenturyLink tech showed up unannounced,” Brown told us. “No one was home at the time, but he said he would wait. I get the idea that he was told not to come back until it was fixed.”

Brown’s neighbor, Leonard Bentz, also lost Internet access on January 13 and remained offline for two days longer than Brown. The technician who arrived on February 21 didn’t reconnect Bentz’s house.

“My partner gently tried to egg him to go over there and fix them too, and he more or less said, ‘That’s not the ticket that I have,'” Brown said.

After getting Bentz’s name and address, we contacted CenturyLink again on February 22 to notify them that he also needed to be reconnected. CenturyLink later confirmed to us that it restored his Internet service on February 23.

“They kept putting me off and putting me off”

Bentz told Ars that during the month-plus outage, he called CenturyLink several times. Customer service reps and a supervisor told him the company would send someone to fix his service, but “they kept putting me off and putting me off and putting me off,” Bentz said.

On one of those calls, Bentz said, CenturyLink promised him seven free months of service to make up for the long outage. Brown told us he received a refund for the entire length of his outage, plus a bit extra. He pays $65 a month for gigabit service.

Brown said he is “happy enough with the resolution,” at least financially since he “got all the money for the non-service.” But those 39 days without Internet service will remain a bad memory.

Unfortunately, Internet service providers like CenturyLink have a history of failing to fix problems until media coverage exposes their poor customer service. CenturyLink is officially called Lumen these days, but it still uses the CenturyLink brand name.

After fixing Brown’s service in Portland, a CenturyLink spokesperson gave us the following statement:

It’s frustrating to have your services down and for that we apologize. We’ve brought in additional resources to assist in restoring service that was knocked out due to severe storms and multiple cases of vandalism. Some services are back, and we are working diligently to completely restore everything. In fact, we have technicians there now. We appreciate our customers’ patience and understanding, and we welcome calls from our customers to discuss their service.

A big boost to Europe’s climate-change goals

carbon-neutral continent —

A new policy called CBAM will assist Europe’s ambition to become carbon-neutral.

Materials such as steel, cement, aluminum, electricity, fertilizer, hydrogen, and iron will soon be subject to greenhouse gas emissions fees when imported into Europe.

The year 2023 was a big one for climate news, from record heat to world leaders finally calling for a transition away from fossil fuels. In a lesser-known milestone, it was also the year the European Union soft-launched an ambitious new initiative that could supercharge its climate policies.

Wrapped in arcane language studded with many a “thereof,” “whereas” and “having regard to” is a policy that could not only help fund the European Union’s pledge to become the world’s first carbon-neutral continent, but also push industries all over the world to cut their carbon emissions.

It’s the establishment of a carbon price that will force many heavy industries to pay for each ton of carbon dioxide, or equivalent emissions of other greenhouse gases, that they emit. But what makes this fee revolutionary is that it will apply to emissions that don’t happen on European soil. The EU already puts a price on many of the emissions created by European firms; now, through the new Carbon Border Adjustment Mechanism, or CBAM, the bloc will charge companies that import the targeted products—cement, aluminum, electricity, fertilizer, hydrogen, iron, and steel—into the EU, no matter where in the world those products are made.

These industries are often large and stubborn sources of greenhouse gas emissions, and addressing them is key in the fight against climate change, says Aaron Cosbey, an economist at the International Institute for Sustainable Development, an environmental think tank. If those companies want to continue doing business with European firms, they’ll have to clean up or pay a fee. That creates an incentive for companies worldwide to reduce emissions.

In CBAM’s first phase, which started in October 2023, companies importing those materials into the EU must report on the greenhouse gas emissions involved in making the products. Beginning in 2026, they’ll have to pay a tariff.
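The tariff arithmetic can be sketched roughly as: embedded emissions times the EU carbon price, minus credit for any carbon price already paid abroad. The function, rates, and emissions figures below are our own hypothetical illustration, not official CBAM values:

```python
# Hypothetical sketch of border-fee arithmetic (illustrative numbers, not
# official CBAM rates): an importer pays roughly the EU carbon price for
# the emissions embedded in the goods it brings in.
def cbam_fee(tonnes_imported, emissions_per_tonne, eu_carbon_price_eur,
             carbon_price_paid_abroad_eur=0.0):
    """Return the border fee in euros.

    emissions_per_tonne: tonnes of CO2-equivalent embedded per tonne of
    product. A carbon price already paid in the country of origin is
    credited against the EU fee.
    """
    embedded = tonnes_imported * emissions_per_tonne
    effective_price = max(eu_carbon_price_eur - carbon_price_paid_abroad_eur, 0.0)
    return embedded * effective_price

# 1,000 tonnes of steel at ~1.9 tCO2e per tonne, EU price of 80 EUR/tCO2e:
print(cbam_fee(1_000, 1.9, 80.0))
# Same shipment, but 30 EUR/tCO2e already paid at home:
print(cbam_fee(1_000, 1.9, 80.0, 30.0))
```

The crediting step is what makes the mechanism an incentive: the more a producer’s home country prices carbon, the smaller the fee at the EU border.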

Even having to supply emissions data will be a big step for some producers and could provide valuable data for climate researchers and policymakers, says Cosbey.

“I don’t know how many times I’ve gone through this exercise of trying to identify, at a product level, the greenhouse gas intensity of exports from particular countries and had to go through the most amazing, torturous processes to try to do those estimates,” he says. “And now it’s going to be served to me on a plate.”

CBAM will apply to a set of products that are linked to heavy greenhouse gas emissions.

Side benefits at home

While this new carbon price targets companies abroad, it will also help the EU to pursue its climate ambitions at home. For one thing, the extra revenues could go toward financing climate-friendly projects and promising new technologies.

But it also allows the EU to tighten up on domestic pollution. Since 2005, the EU has set a maximum, or cap, on the emissions created by a range of industrial “installations” such as oil and metal refineries. It makes companies within the bloc use credits, or allowances, for each ton of carbon dioxide—or equivalent discharges of other greenhouse gases—that they emit, up to that cap. Some allowances are currently granted for free, but others are bought at auction or traded with other companies in a system known as a carbon market.

But this idea—of making it expensive to harm the planet—creates a conundrum. If doing business in Europe becomes too expensive, European industry could flee the continent for countries that don’t have such high fees or strict regulations. That would damage the European economy and do nothing to solve the environmental crisis. The greenhouse gases would still be emitted—perhaps more than if the products had been made in Europe—and climate change would careen forward on its destructive path.

The Carbon Border Adjustment Mechanism aims to impose the same carbon price for products made abroad as domestic producers must pay under the EU’s system. In theory, that keeps European businesses competitive with imports from international rivals. It also addresses environmental concerns by nudging companies overseas toward reducing greenhouse gas emissions rather than carrying on as usual.

This means the EU can further tighten up its carbon market system at home. With international competition hopefully less of a concern, it plans to phase out some leniencies, such as some of the free emission allowances, that existed to help keep domestic industries competitive.

That’s a big deal, says Cosbey. Dozens of countries have carbon pricing systems, but they all create exceptions to keep heavy industry from getting obliterated by international competition. The carbon border tariff could allow the EU to truly force its industries—and consumers—to pay the price, he says.

“That is ambitious; nobody in the world is doing that.”

$30 doorbell cameras have multiple serious security flaws, says Consumer Reports

Video doorbell security —

Models still widely available on e-commerce sites after issues reported.

Consumer Reports’ investigation suggests that, should this delivery person press and hold the bell button and then pair using Eken’s app, he could see if other delivery people get such a perfunctory response.

Video doorbell cameras have been commoditized to the point where they’re available for $30–$40 on marketplaces like Amazon, Walmart, Temu, and Shein. The true cost of owning one might be much greater, however.

Consumer Reports (CR) has released the findings of a security investigation into two budget-minded doorbell brands, Eken and Tuck, which are largely the same hardware produced by the Eken Group in China, according to CR. The cameras are further resold under at least 10 more brands. The cameras are set up through a common mobile app, Aiwit. And the cameras share something else, CR claims: “troubling security vulnerabilities.”

The pairing procedure for one of Eken’s doorbell cameras, which allows a malicious actor quite a bit of leeway.

Among the camera’s vulnerabilities cited by CR:

  • Sending public IP addresses and Wi-Fi SSIDs (names) over the Internet without encryption
  • Takeover of the cameras by putting them into pairing mode (which you can do from a front-facing button on some models) and connecting through the Aiwit app
  • Access to still images from the video feed and other information by knowing the camera’s serial number

CR also noted that Eken cameras lacked an FCC registration code. More than 4,200 were sold in January 2024, according to CR, and often held an Amazon “Overall Pick” label (as one model did when an Ars writer looked on Wednesday).

“These video doorbells from little known manufacturers have serious security and privacy vulnerabilities, and now they’ve found their way onto major digital marketplaces such as Amazon and Walmart,” said Justin Brookman, director of tech policy at Consumer Reports, in a statement. “Both the manufacturers and platforms that sell the doorbells have a responsibility to ensure that these products are not putting consumers in harm’s way.”

CR noted that it contacted vendors where it found the doorbells for sale. Temu told CR that it would halt sales of the doorbells, but “similar-looking if not identical doorbells remained on the site,” CR noted.

A Walmart representative told Ars that all cameras mentioned by Consumer Reports, sold by third parties, have been removed from Walmart by now. The representative added that customers may be eligible for refunds and that Walmart prohibits the selling of devices that require an FCC ID and lack one.

Ars contacted Amazon for comment and will update this post with new information. An email sent to the sole address that could be found on Eken’s website was returned undeliverable. The company’s social media accounts had last been updated at least three years earlier.

Consumer Reports’ researchers claim to have found JPEG file references passed in plaintext over the network, which could later be viewed without authentication in a browser.

CR issued vulnerability disclosures to Eken and Tuck regarding its findings. The disclosures note the amount of data that is sent over the network without authentication, including JPEG files, the local SSID, and external IP address. It notes that after a malicious user has re-paired a doorbell with a QR code generated by the Aiwit app, they have complete control over the device until a user sees an email from Eken and reclaims the doorbell.

With a few exceptions, video doorbells and other IoT cameras tend to rely on cloud connections to stream and store footage, as well as notify their owners about events. This has led to some notable privacy and security concerns. Ring doorbells were found to be pushing Wi-Fi credentials in plaintext in late 2019. Eufy, a company that marketed its “No clouds” offerings, was found to be uploading facial thumbnails to cloud servers to send push alerts and later apologized for that and other vulnerabilities. Camera provider Wyze recently disclosed that, for the second time in five months, images and video feeds were accidentally available to the wrong customers following a lengthy outage.

CDC recommends spring COVID booster for people 65 and up

More protection —

The shot should be taken at least four months since the last COVID vaccination.

The Moderna Spikevax COVID-19 vaccine is shown at a CVS in 2023.

People ages 65 and up should get another dose of a COVID-19 vaccine this spring, given the age group’s higher risk of severe disease and death from the pandemic virus, the Centers for Disease Control and Prevention announced Wednesday.

Earlier today, an advisory committee for the CDC voted overwhelmingly in favor of recommending the spring booster dose. And late this afternoon, CDC Director Mandy Cohen signed off on the recommendation, allowing boosting to begin.

“Today’s recommendation allows older adults to receive an additional dose of this season’s COVID-19 vaccine to provide added protection,” Cohen said in a statement. “Most COVID-19 deaths and hospitalizations last year were among people 65 years and older. An additional vaccine dose can provide added protection that may have decreased over time for those at highest risk.”

The spring booster will be an additional shot of the 2023–2024 COVID-19 vaccines made by Pfizer-BioNTech, Moderna, and Novavax. The booster dose should be taken after at least four months have passed since a previous COVID-19 vaccination. However, as FDA representative David Kaslow noted in today’s advisory committee meeting, the FDA will likely approve a 2024–2025 version of COVID-19 vaccines for this coming fall. Given that, it’s best for people to get their spring booster dose by the end of June, so they can be ready for another booster before the winter when COVID-19 has generally peaked.

A report published earlier this month by the CDC found that the 2023–2024 COVID-19 vaccine was about 54 percent effective at preventing symptomatic COVID-19 when compared against people who had not received the latest vaccine. However, the CDC estimates that only about 22 percent of adults in the US have gotten a COVID-19 booster this season, and just over 40 percent of people ages 65 and up have gotten the shot.

People over age 65 made up 67 percent of COVID-19 hospitalizations between October 2023 and January 2024, according to CDC data presented at today’s advisory committee meeting. In early January, COVID-19 hospitalizations hit a seasonal high of about 35,000 new admissions per week, alongside nearly 2,500 weekly deaths.

The advisers debated how to word their recommendation for a spring booster and whether getting a booster should require consulting with a health care provider. But, ultimately, the committee decided on a more permissive recommendation, allowing anyone in the age group who wants a booster to be able to freely get one, including at convenient locations, such as local pharmacies.

“Data continues to show the importance of vaccination to protect those most at risk for severe outcomes of COVID-19,” the CDC said in its announcement of the recommendation. “An additional dose of the updated COVID-19 vaccine may restore protection that has waned since a fall vaccine dose, providing increased protection to adults ages 65 years and older.”

The CDC noted that its previous recommendations allow people who are immunocompromised to get additional doses of the COVID-19 vaccines.

CDC recommends spring COVID booster for people 65 and up Read More »

Speedy “SD Express” cards have gone nowhere for years, but Samsung could change that

fast, but for whom? —

Compatibility issues and thermals have, so far, kept SD Express from taking off.

Samsung's SD Express-compatible microSD cards.

Samsung

Big news for people who like (physically) small storage: Samsung says that it is sampling its first microSD cards that support the SD Express standard, which will allow them to hit sustained read speeds of as much as 800MB per second. That’s a pretty substantial boost over current SD cards, which tend to top out around 80MB or 90MB per second (for cheap commodity cards) and around 250MB per second for the very fastest UHS-II-compatible professional cards.

As Samsung points out, that 800MB/s figure puts these tiny SD Express cards well above the speeds possible with older SATA SSDs, which could make these cards more useful as primary storage devices for PCs or single-board computers that can support the SD Express standard (more on that later).

Samsung is currently sampling a 256GB version of the SD Express card that “will be available for purchase later this year.”

Because this is a tech company announcement in 2024, Samsung also makes an obligatory mention of AI, though there’s nothing specific the cards do to make them particularly well-suited for generative AI tasks other than “be faster.” Adding extra storage to phones or PCs could be useful for on-device generative AI—storing larger language models locally, for example—but most software companies offering generative AI features in their OSes or browsers rely on server-side processing to do the heavy lifting for now.

What’s the SD Express standard, again?

The SD Express standard allows SD cards to take advantage of a single lane’s worth of PCIe bandwidth, boosting their theoretical speeds well beyond the 104MB/s cap of the UHS-I standard or the 312MB/s cap of UHS-II (UHS-III exists but isn’t widely used). The SD Express spec was last updated in October 2023, bumping it from PCIe 3.0 to PCIe 4.0; the update also defines four speed classes with read/write speeds between 150MB and 600MB per second—a target these Samsung cards claim to surpass.
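For context on those caps, a one-lane PCIe link’s theoretical bandwidth can be worked out from its transfer rate and line encoding. This quick sketch uses the standard published PCIe figures, not anything from Samsung’s announcement:

```python
# Back-of-envelope: theoretical usable bandwidth of a single PCIe lane,
# for comparison against the SD UHS bus caps mentioned above.

def pcie_lane_bandwidth_mb_s(gigatransfers_per_s: float,
                             encoding_efficiency: float) -> float:
    """Usable MB/s for one lane: GT/s * encoding efficiency / 8 bits per byte."""
    return gigatransfers_per_s * 1e9 * encoding_efficiency / 8 / 1e6

# PCIe 3.0 and 4.0 both use 128b/130b line encoding.
pcie3_x1 = pcie_lane_bandwidth_mb_s(8.0, 128 / 130)   # ~985 MB/s
pcie4_x1 = pcie_lane_bandwidth_mb_s(16.0, 128 / 130)  # ~1969 MB/s

print(f"PCIe 3.0 x1: {pcie3_x1:.0f} MB/s")  # well above UHS-I's 104 MB/s cap
print(f"PCIe 4.0 x1: {pcie4_x1:.0f} MB/s")  # headroom beyond 800 MB/s cards
```

Either generation leaves plenty of headroom over the UHS caps, which is why the bottleneck for these cards is more likely to be the flash itself (and heat) than the bus.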

But the original version of SD Express goes back to mid-2018, when it was added to version 7.0 of the SD specification. And adoption by SD card makers and device makers has been slow to nonexistent so far; ADATA makes full-size SD Express cards in 256GB and 512GB capacities that you can buy, but that’s about it. Lexar announced some cards back in 2021 that never ended up being released. And even if you had a card, you’d have trouble finding devices that could actually take advantage of the higher speeds, since most cameras, phones, and computers have opted to stick with the more common UHS standards.

One issue blocking SD Express adoption is that both the card and the device have to support SD Express to get the promised speeds; an SD Express card inserted into a run-of-the-mill UHS-I SD card slot will be limited to UHS-I speeds. And because the slots and the cards are visually identical, it’s not always easy to tell which slots support which speeds.

Heat may also be a major limiting factor when using these SD Express cards to move around hundreds of gigabytes’ worth of data or when using the SD card as the primary storage device in a computer (as you might in a Raspberry Pi or other single-board computers). There’s no room for heatsinks or other cooling hardware within the confines of a microSD card slot, so the sustained read and write speeds of Samsung’s new cards could be a bit lower than the promised 800MB-per-second maximum.

The SD Express spec does have mechanisms for keeping thermals in a reasonable range. Samsung also mentions a “Dynamic Thermal Guard” technology that promises to manage the temperatures of its SD Express cards, though it’s not clear whether this is different from what’s already in the SD Express spec.

Samsung jumping into SD Express cards may be what the format needs to take off, or at least to become a viable niche within the wider market for external storage. It’s certainly not difficult to imagine a scenario where something with SSD-ish speeds in an SD card-sized package would be useful. But SD cards are mainly useful because they’re cheap, they’re widely compatible, and they’re fast enough for things like recording video, taking pictures, and loading games. SD Express cards have a long way to go before they can check all the same boxes.

Speedy “SD Express” cards have gone nowhere for years, but Samsung could change that Read More »

That moment when you land on the Moon, break a leg, and are about to topple over

Goodnight, Odie —

“We hit harder than expected and skidded along the way.”

A photo of Odysseus the moment before it gently toppled over.

Intuitive Machines

After six days and the public release of new images, engineers have finally pieced together the moments before, during, and after the Odysseus lander touched down on the Moon.

During a news conference on Wednesday, the chief executive of Intuitive Machines, Steve Altemus, described what his company has learned about what happened last Thursday evening as Odysseus made its powered descent down to the Moon.

From their control room in Houston, the mission operators watched with fraying nerves, as their range finders had failed. A last-minute effort to use altitude data from a NASA payload on board failed because the flight computer on board Odysseus could not ingest it in time. So the lander was, in essence, coming down to the Moon without any real-time altimetry data.

The last communication the operators received appeared to show that Odysseus had touched down on the Moon and was upright. But then, to their horror, all telemetry from the spacecraft ceased. The data on the flight controllers’ consoles in Houston froze. They feared the worst.

Skidding down to the Moon

About 10 minutes later, the lander sent a weak signal back. In that initial trickle of data, based on the lander’s inertial measurement unit, it appeared that Odysseus was partly on its side. But there were confusing signals.

On Wednesday, Altemus explained what the team has since pieced together. Because of the lack of altimetry data, Odysseus thought it was about 100 meters higher above the lunar surface than it actually was, so as it touched down it was traveling about three times faster than intended, about 3 meters per second. It was also moving laterally, with respect to the ground, at about 1 meter per second.

“We hit harder than expected and skidded along the way,” Altemus explained.

As it impacted and skidded, the spacecraft’s main engine was still firing. Then, just as the spacecraft touched down more firmly, there was a spike in the engine’s combustion chamber. This is consistent with the bell-shaped engine nozzle coming into contact with the lunar surface.

It is perhaps worth pausing a moment here to consider that this spacecraft, launched a week earlier, had just made an autonomous landing without knowing precisely where it was. But now it found itself on the Moon. Upon impact, one or more of the landing legs snapped as it came down hard. Then, at that very moment, with the engine still burning, an onboard camera snapped an image of the scene. Intuitive Machines published this photo on Wednesday. It’s spectacular.

“We sat upright, with the engine firing for a period of time,” Altemus said. “Then as it wound down, the vehicle just gently tipped over.”

Odysseus at rest on the lunar surface.

Intuitive Machines

Based on the gravity of the Moon, Intuitive Machines and NASA calculated that it took about two seconds to tip over. The lander fell on its side, with a helium tank or radio shelf contacting the Moon. This protrusion, combined with the 12-degree slope of the terrain, means that Odysseus is now gently leaning on the lunar surface at about a 30-degree angle. On Tuesday, the spacecraft returned an image that verified these conclusions.
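That two-second figure is consistent with a back-of-the-envelope estimate: treat the tip-over as the top of the lander falling through roughly its own height under lunar gravity. The lander height below is an assumption (the Nova-C class vehicle is roughly 4.3 meters tall), so this is an order-of-magnitude sketch, not Intuitive Machines’ actual calculation:

```python
import math

# Rough tip-over timescale: time for the top of the lander to fall
# through ~its own height under lunar surface gravity.
g_moon = 1.62   # m/s^2, lunar surface gravity
height = 4.3    # m, assumed lander height (Nova-C class)

# Constant-acceleration fall: h = (1/2) * g * t^2  =>  t = sqrt(2h/g)
t_tip = math.sqrt(2 * height / g_moon)
print(f"~{t_tip:.1f} s to topple")
```

The result lands in the low single seconds, the same order as the roughly two seconds Intuitive Machines and NASA calculated.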

“We have that photo now to confirm that’s the orientation,” Altemus said.

Sleepy time

As Intuitive Machines has better understood the situation and the status of its vehicle, it has been able to download a torrent of data. NASA has gotten valuable information from all six of its payloads on board, said a project scientist for the space agency, Sue Lederer. As of Wednesday, NASA had been able to download about 50MB of data. The baseline for success was a single bit of data.

But time is running out as the Sun dips toward the horizon. Odysseus will run out of power as soon as Wednesday evening, entering the long lunar night. In about three weeks, as sunlight starts to hit the spacecraft’s solar panels again, Intuitive Machines will try to wake up the spacecraft. The odds are fairly long. The chemistry of its lithium-ion batteries doesn’t like cold, and temperatures will plummet to minus-280° Fahrenheit (minus-173° Celsius) in a few days. That may wreck the batteries or crack the electronics in the flight computer.

Yet hope springs eternal for a spacecraft its operators have taken to affectionately calling Odie. It has defied the odds so far. “He’s a scrappy little dude,” Lederer said. “I have confidence in Odie at this point.”

That moment when you land on the Moon, break a leg, and are about to topple over Read More »

GitHub besieged by millions of malicious repositories in ongoing attack

Getty Images

GitHub is struggling to contain an ongoing attack that’s flooding the site with millions of code repositories. These repositories contain obfuscated malware that steals passwords and cryptocurrency from developer devices, researchers said.

The malicious repositories are clones of legitimate ones, making them hard to distinguish from the originals at a casual glance. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original one. The result is millions of forks with names identical to the originals, each adding a payload that’s wrapped in seven layers of obfuscation. To make matters worse, some people, unaware that these imitators are malicious, are forking the forks, which adds to the flood.

Whack-a-mole

“Most of the forked repos are quickly removed by GitHub, which identifies the automation,” Matan Giladi and Gil David, researchers at security firm Apiiro, wrote Wednesday. “However, the automation detection seems to miss many repos, and the ones that were uploaded manually survive. Because the whole attack chain seems to be mostly automated on a large scale, the 1% that survive still amount to thousands of malicious repos.”

Given the constant churn of repos being uploaded and then removed by GitHub, it’s hard to estimate precisely how many of each there are. The researchers said the number of repos uploaded or forked before GitHub removes them is likely in the millions. They said the attack “impacts more than 100,000 GitHub repositories.”

GitHub officials didn’t dispute Apiiro’s estimates and didn’t answer other questions sent by email. Instead, they issued the following statement:

GitHub hosts over 100M developers building across over 420M repositories, and is committed to providing a safe and secure platform for developers. We have teams dedicated to detecting, analyzing, and removing content and accounts that violate our Acceptable Use Policies. We employ manual reviews and at-scale detections that use machine learning and constantly evolve and adapt to adversarial tactics. We also encourage customers and community members to report abuse and spam.

Supply-chain attacks that target users of developer platforms have existed since at least 2016, when a college student uploaded custom scripts to RubyGems, PyPi, and NPM. The scripts bore names similar to widely used legitimate packages but otherwise had no connection to them. A phone-home feature in the student’s scripts showed that the imposter code was executed more than 45,000 times on more than 17,000 separate domains, and more than half the time his code was given all-powerful administrative rights. Two of the affected domains ended in .mil, an indication that people inside the US military had run his script. This form of supply-chain attack is often referred to as typosquatting, because it relies on users making small errors when choosing the name of a package they want to use.
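Defenses against typosquatting often boil down to flagging new package names that are suspiciously similar to popular ones. Here is a minimal sketch of that idea; the package list and similarity threshold are illustrative and don’t reflect any registry’s actual logic:

```python
from difflib import SequenceMatcher

# Illustrative list of popular package names; a real check would rank
# by registry download statistics.
POPULAR = ["requests", "numpy", "pandas", "django", "flask"]

def possible_typosquat(name: str, threshold: float = 0.85) -> str | None:
    """Return the popular package `name` most resembles, or None."""
    for legit in POPULAR:
        if name == legit:
            return None  # exact match is the real package
        if SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit  # close-but-not-equal: likely a typosquat
    return None

print(possible_typosquat("reqeusts"))  # flags "requests"
print(possible_typosquat("numpy"))     # None: the real package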

In 2021, a researcher used a similar technique to successfully execute counterfeit code on networks belonging to Apple, Microsoft, Tesla, and dozens of other companies. The technique—known as a dependency confusion or namespace confusion attack—started by placing malicious code packages in an official public repository and giving them the same name as dependency packages Apple and the other targeted companies use in their products. Automated scripts inside the package managers used by the companies then automatically downloaded and installed the counterfeit dependency code.
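The mechanics of dependency confusion can be sketched with a toy resolver: if a package manager consults both a public and an internal index and simply prefers the highest version number, a public package that shadows an internal name wins. All names and versions here are hypothetical:

```python
# Toy model of why dependency confusion works: a resolver that merges a
# public and an internal index and naively picks the highest version.
internal_index = {"acme-auth": ["1.2.0"]}   # company's private package
public_index = {"acme-auth": ["99.0.0"],    # attacker's shadow package
                "requests": ["2.31.0"]}

def naive_resolve(name: str) -> tuple[str, str]:
    """Return (source, version), picking the highest version across indexes."""
    candidates = [("internal", v) for v in internal_index.get(name, [])]
    candidates += [("public", v) for v in public_index.get(name, [])]
    # Compare versions numerically, not lexically.
    return max(candidates,
               key=lambda sv: tuple(map(int, sv[1].split("."))))

print(naive_resolve("acme-auth"))  # ('public', '99.0.0') -- the attack wins
```

Real mitigations pin the index per package (scoped registries, explicit index URLs) rather than letting the highest version win.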

The technique observed by Apiiro is known as repo confusion.

“Similar to dependency confusion attacks, malicious actors get their target to download their malicious version instead of the real one,” Wednesday’s post explained. “But dependency confusion attacks take advantage of how package managers work, while repo confusion attacks simply rely on humans to mistakenly pick the malicious version over the real one, sometimes employing social engineering techniques as well.”
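A crude check against repo confusion follows from that description: flag repositories whose name matches a well-known project but whose owner differs. The known-repos table here is hypothetical; real tooling would query the GitHub API for the canonical repository:

```python
# Flag repos whose name matches a well-known project but whose owner differs.
# The KNOWN table is illustrative; real tooling would look up the canonical
# repository (and its fork graph) via the GitHub API.
KNOWN = {"requests": "psf", "linux": "torvalds"}

def looks_like_repo_confusion(owner: str, name: str) -> bool:
    """True if `name` collides with a known project under a different owner."""
    canonical_owner = KNOWN.get(name)
    return canonical_owner is not None and owner != canonical_owner

print(looks_like_repo_confusion("evil-user", "requests"))  # True: collision
print(looks_like_repo_confusion("psf", "requests"))        # False: real repo
```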

GitHub besieged by millions of malicious repositories in ongoing attack Read More »