From recycling to food: Can we eat plastic-munching microbes?

breaking it down —

Researchers are trying to turn plastic-eating bacteria into a food source for humans.

Olga Pankova/Moment via Getty Images

In 2019, an agency within the US Department of Defense released a call for research projects to help the military deal with the copious amount of plastic waste generated when troops are sent to work in remote locations or disaster zones. The agency wanted a system that could convert food wrappers and water bottles, among other things, into usable products, such as fuel and rations. The system needed to be small enough to fit in a Humvee and capable of running on little energy. It also needed to harness the power of plastic-eating microbes.

“When we started this project four years ago, the ideas were there. And in theory, it made sense,” said Stephen Techtmann, a microbiologist at Michigan Technological University, who leads one of the three research groups receiving funding. Nevertheless, he said, in the beginning, the effort “felt a lot more science-fiction than really something that would work.”

That uncertainty was key. The Defense Advanced Research Projects Agency, or DARPA, supports high-risk, high-reward projects. This means there’s a good chance that any individual effort will end in failure. But when a project does succeed, it has the potential to be a true scientific breakthrough. “Our goal is to go from disbelief, like, ‘You’re kidding me. You want to do what?’ to ‘You know, that might be actually feasible,’” said Leonard Tender, a program manager at DARPA who is overseeing the plastic waste projects.

The problems with plastic production and disposal are well-known. According to the United Nations Environment Program, the world creates about 440 million tons of plastic waste per year. Much of it ends up in landfills or in the ocean, where microplastics, plastic pellets, and plastic bags pose a threat to wildlife. Many governments and experts agree that solving the problem will require reducing production, and some countries and US states have additionally introduced policies to encourage recycling.

For years, scientists have also been experimenting with various species of plastic-eating bacteria. But DARPA is taking a slightly different approach in seeking a compact and mobile solution that uses plastic to create something else entirely: food for humans.

The goal, Techtmann hastens to add, is not to feed people plastic. Rather, the hope is that the plastic-devouring microbes in his system will themselves prove fit for human consumption. While Techtmann believes most of the project will be ready in a year or two, it’s this food step that could take longer. His team is currently doing toxicity testing, and then they will submit their results to the Food and Drug Administration for review. Even if all that goes smoothly, an additional challenge awaits. There’s an ick factor, said Techtmann, “that I think would have to be overcome.”

The military isn’t the only entity working to turn microbes into nutrition. From Korea to Finland, a small number of researchers, as well as some companies, are exploring whether microorganisms might one day help feed the world’s growing population.

Two birds, one stone

According to Tender, DARPA’s call for proposals was aimed at solving two problems at once. First, the agency hoped to reduce what he called supply-chain vulnerability: During war, the military needs to transport supplies to troops in remote locations, which creates a safety risk for people in the vehicle. Additionally, the agency wanted to stop using hazardous burn pits as a means of dealing with plastic waste. “Getting those waste products off of those sites responsibly is a huge lift,” Tender said.

The Michigan Tech system begins with a mechanical shredder, which reduces the plastic to small shards that then move into a reactor, where they soak in ammonium hydroxide under high heat. Some plastics, such as PET, which is commonly used to make disposable water bottles, break down at this point. Other plastics used in military food packaging—namely polyethylene and polypropylene—are passed along to another reactor, where they are subject to much higher heat and an absence of oxygen.

Under these conditions, the polyethylene and polypropylene are converted into compounds that can be upcycled into fuels and lubricants. David Shonnard, a chemical engineer at Michigan Tech who oversaw this component of the project, has developed a startup company called Resurgent Innovation to commercialize some of the technology. (Other members of the research team, said Shonnard, are pursuing additional patents related to other parts of the system.)

Microsoft to host security summit after CrowdStrike disaster

Bugging out —

Redmond wants to improve the resilience of Windows to buggy software.

Photo of a Windows BSOD

Microsoft is stepping up its plans to make Windows more resilient to buggy software after a botched CrowdStrike update took down millions of PCs and servers in a global IT outage.

The tech giant has in the past month intensified talks with partners about adapting the security procedures around its operating system to better withstand the kind of software error that crashed 8.5 million Windows devices on July 19.

Critics say that any changes by Microsoft would amount to a concession of shortcomings in Windows’ handling of third-party security software that could have been addressed sooner.

Yet they would also prove controversial among security vendors that would have to make radical changes to their products, and force many Microsoft customers to adapt their software.

Last month’s outages—which are estimated to have caused billions of dollars in damages after grounding thousands of flights and disrupting hospital appointments worldwide—heightened scrutiny from regulators and business leaders over the extent of access that third-party software vendors have to the core, or kernel, of Windows operating systems.

Microsoft will host a summit next month for government representatives and cyber security companies, including CrowdStrike, to “discuss concrete steps we will all take to improve security and resiliency for our joint customers,” Microsoft said on Friday.

The gathering will take place on September 10 at Microsoft’s headquarters near Seattle, it said in a blog post.

Bugs in the kernel can quickly crash an entire operating system, triggering the millions of “blue screens of death” that appeared around the globe after CrowdStrike’s faulty software update was sent out to clients’ devices.

Microsoft told the Financial Times it was considering several options to make its systems more stable and had not ruled out completely blocking access to the Windows kernel—an option some rivals fear would put their software at a disadvantage to the company’s internal security product, Microsoft Defender.

“All of the competitors are concerned that [Microsoft] will use this to prefer their own products over third-party alternatives,” said Ryan Kalember, head of cyber security strategy at Proofpoint.

Microsoft may also demand new testing procedures from cyber security vendors rather than adapting the Windows system itself.

Apple, which was not hit by the outages, blocks all third-party providers from accessing the kernel of its MacOS operating system, forcing them to operate in the more limited “user-mode.”

Microsoft has previously said it could not do the same, after coming to an understanding with the European Commission in 2009 that it would give third parties the same access to its systems as that for Microsoft Defender.

Some experts said, however, that this voluntary commitment to the EU had not tied Microsoft’s hands in the way it claimed, arguing that the company had always been free to make the changes now under consideration.

“These are technical decisions of Microsoft that were not part of [the arrangement],” said Thomas Graf, a partner at Cleary Gottlieb in Brussels who was involved in the case.

“The text [of the understanding] does not require them to give access to the kernel,” added AJ Grotto, a former senior director for cyber security policy at the White House.

Grotto said Microsoft shared some of the blame for the July disruption since the outages would not have been possible without its decision to allow access to the kernel.

Nevertheless, while it might boost a system’s resilience, blocking kernel access could also bring “real trade-offs” for the compatibility with other software that had made Windows so popular among business customers, Forrester analyst Allie Mellen said.

“That would be a fundamental shift for Microsoft’s philosophy and business model,” she added.

Operating exclusively outside the kernel may lower the risk of triggering mass outages but it was also “very limiting” for security vendors and could make their products “less effective” against hackers, Mellen added.

Operating within the kernel gave security companies more information about potential threats and enabled their defensive tools to activate before malware could take hold, she added.

An alternative option could be to replicate the model used by the open-source operating system Linux, which uses a filtering mechanism that creates a segregated environment within the kernel in which software, including cyber defense tools, can run.

But the complexity of overhauling how other security software works with Windows means that any changes will be hard for regulators to police and Microsoft will have strong incentives to favor its own products, rivals said.

It “sounds good on paper, but the devil is in the details,” said Matthew Prince, chief executive of digital services group Cloudflare.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

How accurate are wearable fitness trackers? Less than you might think

some misleading metrics —

Wide variance underscores need for a standardized approach to validation of devices.

Corey Gaskin

Back in 2010, Gary Wolf, then the editor of Wired magazine, delivered a TED talk in Cannes called “The Quantified Self.” It was about what he termed a “new fad” among tech enthusiasts. These early adopters were using gadgets to monitor everything from their physiological data to their mood and even the number of nappies their children used.

Wolf acknowledged that these people were outliers—tech geeks fascinated by data—but their behavior has since permeated mainstream culture.

From the smartwatches that track our steps and heart rate, to the fitness bands that log sleep patterns and calories burned, these gadgets are now ubiquitous. Their popularity is emblematic of a modern obsession with quantification—the idea that if something isn’t logged, it doesn’t count.

At least half the people in any given room are likely wearing a device, such as a fitness tracker, that quantifies some aspect of their lives. Wearables are being adopted at a pace reminiscent of the mobile phone boom of the late 2000s.

However, the quantified self movement still grapples with an important question: Can wearable devices truly measure what they claim to?

Along with my colleagues Maximus Baldwin, Alison Keogh, Brian Caulfield, and Rob Argent, I recently published an umbrella review (a systematic review of systematic reviews) examining the scientific literature on whether consumer wearable devices can accurately measure metrics like heart rate, aerobic capacity, energy expenditure, sleep, and step count.

At a surface level, our results were quite positive. Wearable devices can measure heart rate to within plus or minus 3 percent, depending on factors like skin tone, exercise intensity, and activity type. They can also accurately measure heart rate variability and show good sensitivity and specificity for detecting arrhythmia, a problem with the rate of a person’s heartbeat.

Additionally, they can accurately estimate what’s known as cardiorespiratory fitness, which is how the circulatory and respiratory systems supply oxygen to the muscles during physical activity. This can be quantified by something called VO2Max, which is a measure of how much oxygen your body uses while exercising.

The ability of wearables to accurately measure this is better when those predictions are generated during exercise (rather than at rest). In the realm of physical activity, wearables generally underestimate step counts by about 9 percent.

Challenging endeavour

However, discrepancies were larger for energy expenditure (the number of calories you burn when exercising), with error margins ranging from minus 21.27 percent to plus 14.76 percent, depending on the device used and the activity undertaken.

Results weren’t much better for sleep. Wearables tend to overestimate total sleep time and sleep efficiency, typically by more than 10 percent. They also tend to underestimate sleep onset latency (a lag in getting to sleep) and wakefulness after sleep onset. Errors ranged from 12 percent to 180 percent, compared to the gold standard measurements used in sleep studies, known as polysomnography.

The upshot is that, despite the promising capabilities of wearables, we found conducting and synthesizing research in this field to be very challenging. One hurdle we encountered was the inconsistent methodologies employed by different research groups when validating a given device.

This lack of standardization leads to conflicting results and makes it difficult to draw definitive conclusions about a device’s accuracy. A classic example from our research: one study might assess heart rate accuracy during high-intensity interval training, while another focuses on sedentary activities, leading to discrepancies that can’t be easily reconciled.

Other issues include varying sample sizes, participant demographics, and experimental conditions—all of which add layers of complexity to the interpretation of our findings.

What does it mean for me?

Perhaps most importantly, the rapid pace at which new wearable devices are released exacerbates these issues. With most companies following a yearly release cycle, we and other researchers find it challenging to keep up. The timeline for planning a study, obtaining ethical approval, recruiting and testing participants, analyzing results, and publishing can often exceed 12 months.

By the time a study is published, the device under investigation is likely to already be obsolete, replaced by a newer model with potentially different specifications and performance characteristics. This is demonstrated by our finding that less than 5 percent of the consumer wearables that have been released to date have been validated for the range of physiological signals they purport to measure.

What do our results mean for you? As wearable technologies continue to permeate various facets of health and lifestyle, it is important to approach manufacturers’ claims with a healthy dose of skepticism. Gaps in research, inconsistent methodologies, and the rapid pace of new device releases underscore the need for a more formalized and standardized approach to the validation of devices.

The goal here would be to foster collaborative synergies between formal certification bodies, academic research consortia, popular media influencers, and the industry so that we can augment the depth and reach of wearable technology evaluation.

Efforts are already underway to establish a collaborative network that can foster a richer, multifaceted dialogue that resonates with a broad spectrum of stakeholders—ensuring that wearables are not just innovative gadgets but reliable tools for health and wellness.

Cailbhe Doherty, assistant professor in the School of Public Health, Physiotherapy and Sports Science, University College Dublin. This article is republished from The Conversation under a Creative Commons license. Read the original article.

AMD signs $4.9 billion deal to challenge Nvidia’s AI infrastructure lead

chip wars —

Company hopes acquisition of ZT Systems will accelerate adoption of its data center chips.

Visitors walk past the AMD booth at the 2024 Mobile World Congress

AMD has agreed to buy artificial intelligence infrastructure group ZT Systems in a $4.9 billion cash and stock transaction, extending a run of AI investments by the chip company as it seeks to challenge market leader Nvidia.

The California-based group said the acquisition would help accelerate the adoption of its Instinct line of AI data center chips, which compete with Nvidia’s popular graphics processing units (GPUs).

ZT Systems, a private company founded three decades ago, builds custom computing infrastructure for the biggest AI “hyperscalers.” While the company does not disclose its customers, the hyperscalers include the likes of Microsoft, Meta, and Amazon.

The deal marks AMD’s biggest acquisition since it bought Xilinx for $35 billion in 2022.

“It brings a thousand world-class design engineers into our team, it allows us to develop silicon and systems in parallel and, most importantly, get the newest AI infrastructure up and running in data centers as fast as possible,” AMD’s chief executive Lisa Su told the Financial Times.

“It really helps us deploy our technology much faster because this is what our customers are telling us [they need],” Su added.

The transaction is expected to close in the first half of 2025, subject to regulatory approval, after which New Jersey-based ZT Systems will be folded into AMD’s data center business group. The $4.9 billion valuation includes up to $400 million contingent on “certain post-closing milestones.”

Citi and Latham & Watkins are advising AMD, while ZT Systems has retained Goldman Sachs and Paul, Weiss.

The move comes as AMD seeks to break Nvidia’s stranglehold on the AI data center chip market, which earlier this year saw Nvidia temporarily become the world’s most valuable company as big tech companies pour billions of dollars into its chips to train and deploy powerful new AI models.

Part of Nvidia’s success stems from its “systems” approach to the AI chip market, offering end-to-end computing infrastructure that includes pre-packaged server racks, networking equipment, and software tools to make it easier for developers to build AI applications on its chips.

AMD’s acquisition shows the chipmaker building out its own “systems” offering. The company rolled out its MI300 line of AI chips last year, and says it will launch its next-generation MI350 chip in 2025 to compete with Nvidia’s new Blackwell line of GPUs.

In May, Microsoft was one of the first AI hyperscalers to adopt the MI300, building it into its Azure cloud platform to run AI models such as OpenAI’s GPT-4. AMD’s quarterly revenue for the chips surpassed $1 billion for the first time in the three months to June 30.

But while AMD has feted the MI300 as its fastest-ever product ramp, its data center revenue still represented a fraction of the $22.6 billion that Nvidia’s data center business raked in for the quarter to the end of April.

In March, ZT Systems announced a partnership with Nvidia to build custom AI infrastructure using its Blackwell chips. “I think we certainly believe ZT as part of AMD will significantly accelerate the adoption of AMD AI solutions,” Su said, but “we have customer commitments and we are certainly going to honour those.”

Su added that she expected regulators’ review of the deal to focus on the US and Europe.

In addition to increasing its research and development spending, AMD says it has invested more than $1 billion over the past year to expand its AI hardware and software ecosystem.

In July the company announced it was acquiring Finnish AI start-up Silo AI for $665 million, the largest acquisition of a privately held AI startup in Europe in a decade.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

Judge calls foul on Venu, blocks launch of ESPN-Warner-Fox streaming service

Out of bounds —

Upcoming launch of $42.99 sports package likely to “substantially lessen competition.”

Texas losing to Alabama in the 2010 BCS championship

Gina Ferazzi via Getty

A US judge has temporarily blocked the launch of a sports streaming service formed by Disney’s ESPN, Warner Bros and Fox, finding that it was likely to “substantially lessen competition” in the market.

The service, dubbed Venu, was expected to launch later this year. But FuboTV, a sports-focused streaming platform, filed an antitrust suit in February to block it, arguing its business would “suffer irreparable harm” as a result.

On Friday, US District Judge Margaret Garnett in New York granted an injunction to halt the launch of the service while Fubo’s lawsuit against the entertainment giants works its way through the court.

The opinion was sealed but the judge noted in an entry on the court docket that Fubo was “likely to succeed on its claims” that by entering the agreement, the companies “will substantially lessen competition and restrain trade in the relevant market” in violation of antitrust law.

In a statement, ESPN, Fox and Warner Bros Discovery said they planned to appeal against the decision.

Venu was aimed at US consumers who had either ditched their traditional pay TV packages for streaming or never signed up for a cable subscription. “Cord cutting” has been eroding the traditional TV business for years, but live sports has remained a primary draw for customers who have held on to their cable subscriptions.

FuboTV was launched in 2015 as a sports-focused streamer. It offers more than 350 channels—including those carrying major sporting events such as Premier League football matches, baseball, the National Football League and the US National Basketball Association—for monthly subscription prices starting at $79.99. Its offerings included networks owned by Disney and Fox.

ESPN, Fox and Warner Bros said Venu was “pro-competitive,” aimed at reaching “viewers who currently are not served by existing subscription options.”

Venu was expected to charge $42.99 a month when it launched later this month. It “will feature just 15 channels, all featuring popular live sports—the kind of skinny sports bundle that Fubo has tried to offer for nearly a decade, only to encounter tooth-and-nail resistance,” Fubo said in a court filing seeking the injunction.

Venu was expected to aggregate about $16 billion worth of sports rights, analysts have estimated. It was not expected to have an impact on the individual companies’ ability to strike new rights deals.

Analysts had questioned its position in the marketplace. Disney plans to roll out ESPN as a “flagship” streaming service in August 2025 that will carry programming that appears on the TV network as well as gaming, shopping and other interactive content. Disney chief executive Bob Iger said he wants the service to become the “pre-eminent digital sports platform.”

Fubo shares rose 16.8 percent after the ruling, but the stock is down 51 percent this year.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Push alerts from TikTok include fake news, expired tsunami warning

Broken —

News-style notifications include false claims about Taylor Swift, other misleading info.

illustration showing a phone with TikTok logo

FT montage/Getty Images

TikTok has been sending inaccurate and misleading news-style alerts to users’ phones, including a false claim about Taylor Swift and a weeks-old disaster warning, intensifying fears about the spread of misinformation on the popular video-sharing platform.

Among alerts seen by the Financial Times was a warning about a tsunami in Japan, labeled “BREAKING,” that was posted in late January, three weeks after an earthquake had struck.

Other notifications falsely stated that “Taylor Swift Canceled All Tour Dates in What She Called ‘Racist Florida’” and highlighted a five-year “ban” for a US baseball player that originated as an April Fools’ Day prank.

The notifications, which sometimes contain summaries from user-generated posts, pop up on screen in the style of a news alert. Researchers say that format, adopted widely to boost engagement through personalized video recommendations, may make users less critical of the veracity of the content and open them up to misinformation.

“Notifications have this additional stamp of authority,” said Laura Edelson, a researcher at Northeastern University, in Boston. “When you get a notification about something, it’s often assumed to be something that has been curated by the platform and not just a random thing from your feed.”

Social media groups such as TikTok, X, and Meta are facing greater scrutiny to police their platforms, particularly in a year of major national elections, including November’s vote in the US. The rise of artificial intelligence adds to the pressure given that the fast-evolving technology makes it quicker and easier to spread misinformation, including through synthetic media, known as deepfakes.

TikTok, which has more than 1 billion global users, has repeatedly promised to step up its efforts to counter misinformation in response to pressure from governments around the world, including the UK and EU. In May, the video-sharing platform committed to becoming the first major social media network to label some AI-generated content automatically.

The false claim about Swift canceling her tour in Florida, which also circulated on X, mirrored an article published in May in the satirical newspaper The Dunning-Kruger Times, although this article was not linked or directly referred to in the TikTok post.

At least 20 people said on a comment thread that they had clicked on the notification and were directed to a video on TikTok repeating the claim, even though they did not follow the account. At least one person in the thread said they initially thought the notification “was a news article.”

Swift is still scheduled to perform three concerts in Miami in October and has not publicly called Florida “racist.”

Another push notification inaccurately stated that a Japanese pitcher who plays for the Los Angeles Dodgers faced a ban from Major League Baseball: “Shohei Ohtani has been BANNED from the MLB for 5 years following his gambling investigation… ”

The words directly matched the description of a post uploaded as an April Fools’ Day prank. Tens of commenters on the original video, however, reported receiving alerts in mid-April. Several said they had initially believed it before they checked other sources.

Users have also reported notifications that appeared to contain news updates but were generated weeks after the event.

One user received an alert on January 23 that read: “BREAKING: A tsunami alert has been issued in Japan after a major earthquake.” The notification appeared to refer to a natural disaster warning issued more than three weeks earlier after an earthquake struck Japan’s Noto peninsula on New Year’s Day.

TikTok said it had removed the specific notifications flagged by the FT.

The alerts appear to automatically scrape the descriptions of posts that are receiving, or are likely to receive, high levels of engagement on the viral video app, owned by China’s ByteDance, researchers said. They seem to be tailored to users’ interests, which means that each one is likely to be limited to a small pool of people.

“The way in which those alerts are positioned, it can feel like the platform is speaking directly to [users] and not just a poster,” said Kaitlyn Regehr, an associate professor of digital humanities at University College London.

TikTok declined to reveal how the app determined which videos to promote through notifications, but the sheer volume of personalized content recommendations must be “algorithmically generated,” said Dani Madrid-Morales, co-lead of the University of Sheffield’s Disinformation Research Cluster.

Edelson, who is also co-director of the Cybersecurity for Democracy group, suggested that a responsible push notification algorithm could be weighted towards trusted sources, such as verified publishers or officials. “The question is: Are they choosing a high-traffic thing from an authoritative source?” she said. “Or is this just a high-traffic thing?”
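Edelson’s suggestion amounts to a simple re-ranking rule: weight each candidate alert’s engagement by a trust score for its source before picking what to push. The sketch below is purely hypothetical—the field names, trust values, and scoring function are assumptions for illustration, not TikTok’s actual system:

```python
# Hypothetical re-ranking: raw engagement vs. engagement weighted by source trust.
candidates = [
    {"post": "viral rumor",        "engagement": 9_000, "source_trust": 0.1},
    {"post": "verified news clip", "engagement": 4_000, "source_trust": 0.9},
]

def score(post: dict) -> float:
    # Multiplying by trust down-weights high-traffic posts from unvetted sources.
    return post["engagement"] * post["source_trust"]

top = max(candidates, key=score)
print(top["post"])  # → verified news clip
```

Under pure engagement ranking the rumor would win; weighting by source trust flips the outcome, which is the distinction Edelson draws between “a high-traffic thing from an authoritative source” and “just a high-traffic thing.”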

Additional reporting by Hannah Murphy in San Francisco and Cristina Criddle in London.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Almost unfixable “Sinkclose” bug affects hundreds of millions of AMD chips

Deep insecurity —

Worst-case scenario: “You basically have to throw your computer away.”

Security flaws in your computer’s firmware, the deep-seated code that loads first when you turn the machine on and controls even how its operating system boots up, have long been a target for hackers looking for a stealthy foothold. But only rarely does that kind of vulnerability appear not in the firmware of any particular computer maker, but in the chips found across hundreds of millions of PCs and servers. Now security researchers have found one such flaw that has persisted in AMD processors for decades, and that would allow malware to burrow deep enough into a computer’s memory that, in many cases, it may be easier to discard a machine than to disinfect it.

At the Defcon hacker conference, Enrique Nissim and Krzysztof Okupski, researchers from the security firm IOActive, plan to present a vulnerability in AMD chips they’re calling Sinkclose. The flaw would allow hackers to run their own code in one of the most privileged modes of an AMD processor, known as System Management Mode, designed to be reserved only for a specific, protected portion of its firmware. IOActive’s researchers warn that it affects virtually all AMD chips dating back to 2006, or possibly even earlier.

Nissim and Okupski note that exploiting the bug would require hackers to already have obtained relatively deep access to an AMD-based PC or server, but that the Sinkclose flaw would then allow them to plant their malicious code far deeper still. In fact, for any machine with one of the vulnerable AMD chips, the IOActive researchers warn that an attacker could infect the computer with malware known as a “bootkit” that evades antivirus tools and is potentially invisible to the operating system, while offering a hacker full access to tamper with the machine and surveil its activity. For systems with certain faulty configurations in how a computer maker implemented AMD’s security feature known as Platform Secure Boot—which the researchers warn encompasses the large majority of the systems they tested—a malware infection installed via Sinkclose could be harder yet to detect or remediate, they say, surviving even a reinstallation of the operating system.

“Imagine nation-state hackers or whoever wants to persist on your system. Even if you wipe your drive clean, it’s still going to be there,” says Okupski. “It’s going to be nearly undetectable and nearly unpatchable.” Only opening a computer’s case, physically connecting directly to a certain portion of its memory chips with a hardware-based programming tool known as an SPI Flash programmer, and meticulously scouring the memory would allow the malware to be removed, Okupski says.

Nissim sums up that worst-case scenario in more practical terms: “You basically have to throw your computer away.”

In a statement shared with WIRED, AMD acknowledged IOActive’s findings, thanked the researchers for their work, and noted that it has “released mitigation options for its AMD EPYC datacenter products and AMD Ryzen PC products, with mitigations for AMD embedded products coming soon.” (The term “embedded,” in this case, refers to AMD chips found in systems such as industrial devices and cars.) For its EPYC processors designed for use in data-center servers, specifically, the company noted that it released patches earlier this year. AMD declined to answer questions in advance about how it intends to fix the Sinkclose vulnerability, or for exactly which devices and when, but it pointed to a full list of affected products that can be found on its website’s security bulletin page.

Almost unfixable “Sinkclose” bug affects hundreds of millions of AMD chips


One startup’s plan to fix AI’s “shoplifting” problem

I’ve been caught stealing, once when I was five —

Algorithm will identify sources used by generative AI, compensate them for use.


Bloomberg via Getty

Bill Gross made his name in the tech world in the 1990s, when he came up with a novel way for search engines to make money on advertising. Under his pricing scheme, advertisers would pay when people clicked on their ads. Now, the “pay-per-click” guy has founded a startup called ProRata, which has an audacious, possibly pie-in-the-sky business model: “AI pay-per-use.”

Gross, who is CEO of the Pasadena, California, company, doesn’t mince words about the generative AI industry. “It’s stealing,” he says. “They’re shoplifting and laundering the world’s knowledge to their benefit.”

AI companies often argue that they need vast troves of data to create cutting-edge generative tools and that scraping data from the Internet, whether it’s text from websites, video or captions from YouTube, or books pilfered from pirate libraries, is legally allowed. Gross doesn’t buy that argument. “I think it’s bullshit,” he says.

So do plenty of media executives, artists, writers, musicians, and other rights-holders who are pushing back—it’s hard to keep up with the constant flurry of copyright lawsuits filed against AI companies, alleging that the way they operate amounts to theft.

But Gross thinks ProRata offers a solution that beats legal battles. “To make it fair—that’s what I’m trying to do,” he says. “I don’t think this should be solved by lawsuits.”

His company aims to arrange revenue-sharing deals so publishers and individuals get paid when AI companies use their work. Gross explains it like this: “We can take the output of generative AI, whether it’s text or an image or music or a movie, and break it down into the components, to figure out where they came from, and then give a percentage attribution to each copyright holder, and then pay them accordingly.” ProRata has filed patent applications for the algorithms it created to assign attribution and make the appropriate payments.
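The payout arithmetic Gross describes is straightforward once attribution is known. The sketch below is a hypothetical illustration of the pro-rata split only; the attribution weights themselves (the part ProRata has filed patents on) are simply assumed as inputs, and the publisher names and revenue figures are invented for the example.

```python
def pro_rata_payouts(attributions: dict[str, float], revenue_pool: float) -> dict[str, float]:
    """Split revenue_pool across sources in proportion to their attribution weights."""
    total = sum(attributions.values())
    if total <= 0:
        raise ValueError("attribution weights must sum to a positive value")
    return {source: revenue_pool * weight / total
            for source, weight in attributions.items()}

# Example: one generated answer attributed 50/30/20 across three
# (hypothetical) publishers, splitting a $10 revenue pool for that query.
payouts = pro_rata_payouts(
    {"Publisher A": 0.5, "Publisher B": 0.3, "Publisher C": 0.2},
    revenue_pool=10.0,
)
for source, amount in payouts.items():
    print(f"{source}: ${amount:.2f}")
```

In practice a system like this would also have to aggregate weights across millions of queries per billing period, which is what the monthly statements Gross describes would report.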

This week, the company, which has raised $25 million, launched with a number of big-name partners, including Universal Music Group, the Financial Times, The Atlantic, and media company Axel Springer. In addition, it has made deals with authors with large followings, including Tony Robbins, Neal Postman, and Scott Galloway. (It has also partnered with former White House Communications Director Anthony Scaramucci.)

Even journalism professor Jeff Jarvis, who believes scraping the web for AI training is fair use, has signed on. He tells WIRED that it’s smart for people in the news industry to band together to get AI companies access to “credible and current information” to include in their output. “I hope that ProRata might open discussion for what could turn into APIs [application programming interfaces] for various content,” he says.

Following the company’s initial announcement, Gross says he had a deluge of messages from other companies asking to sign up, including a text from Time CEO Jessica Sibley. ProRata secured a deal with Time, the publisher confirmed to WIRED. He plans to pursue agreements with high-profile YouTubers and other individual online stars.

The key word here is “plans.” The company is still in its very early days, and Gross is talking a big game. As a proof of concept, ProRata is launching its own subscription chatbot-style search engine in October. Unlike other AI search products, ProRata’s search tool will exclusively use licensed data. There’s nothing scraped using a web crawler. “Nothing from Reddit,” he says.

Ed Newton-Rex, a former Stability AI executive who now runs the ethical data licensing nonprofit Fairly Trained, is heartened by ProRata’s debut. “It’s great to see a generative AI company licensing training data before releasing their model, in contrast to many other companies’ approach,” he says. “The deals they have in place further demonstrate media companies’ openness to working with good actors.”

Gross wants the search engine to demonstrate that quality of data is more important than quantity and believes that limiting the model to trustworthy information sources will curb hallucinations. “I’m claiming that 70 million good documents is actually superior to 70 billion bad documents,” he says. “It’s going to lead to better answers.”

What’s more, Gross thinks he can get enough people to sign up for this all-licensed-data AI search engine to generate the money needed to pay its data providers their allotted share. “Every month the partners will get a statement from us saying, ‘Here’s what people search for, here’s how your content was used, and here’s your pro rata check,’” he says.

Other startups already are jostling for prominence in this new world of training-data licensing, like the marketplaces TollBit and Human Native AI. A nonprofit called the Dataset Providers Alliance was formed earlier this summer to push for more standards in licensing; founding members include services like the Global Copyright Exchange and Datarade.

ProRata’s business model hinges in part on its plan to license its attribution and payment technologies to other companies, including major AI players. Some of those companies have begun striking their own deals with publishers. (The Atlantic and Axel Springer, for instance, have agreements with OpenAI.) Gross hopes that AI companies will find licensing ProRata’s models more affordable than creating them in-house.

“I’ll license the system to anyone who wants to use it,” Gross says. “I want to make it so cheap that it’s like a Visa or MasterCard fee.”

This story originally appeared on wired.com.



Google antitrust verdict leaves Apple with “inconvenient alternatives”

trustbusting —

A reliable source of billions of dollars in income is at risk for the iPhone maker.


Benj Edwards

The landmark antitrust ruling against Google on Monday is shaking up one of the longest-standing partnerships in tech.

At the heart of the case are billions of dollars’ worth of exclusive agreements Google has inked over the years to become the default search engine on browsers and devices across the world. No company benefited more than fellow Big Tech giant Apple—which US District Judge Amit Mehta called a “crucial partner” to Google.

During a weekslong trial, Apple executives showed up to explain and defend the partnership. Under a deal that first took shape in 2002, Google paid a cut of search advertising revenue to Apple to direct its users to Google Search as default, with payments reaching $20 billion for 2022, according to the court’s findings. In exchange, Google got access to Apple’s valuable user base—more than half of all search queries in the US currently flow through Apple devices.

Since Monday’s ruling, Apple has been quiet. But it is likely to be deeply involved in the next phase of the case, which will address the proposed fix to Google’s legal breaches. Remedies in the case could be targeted or wide-ranging. The Department of Justice, which brought the case, has not said what it will seek.

“The most profound impact of the judgment is liable to be felt by Apple,” said Eric Seufert, an independent analyst.

JPMorgan analysts wrote that the ruling left Apple with a range of “inconvenient alternatives,” including the possibility of a new revenue-sharing agreement with Google that does not grant it exclusive rights as the default search engine, thereby reducing its value.

Reaching revenue-sharing deals with alternative search engines like Microsoft’s Bing, they wrote, would “offer lower economic benefits for Apple, given Google’s superior advertising monetisation.”

Mehta noted in his ruling that the idea of replacing the Google agreement with one involving Microsoft and Bing had come up previously. Eddy Cue, Apple’s senior vice-president of services, “concluded that a Microsoft-Apple deal would only make sense if Apple ‘view[ed] Google as somebody [they] don’t want to be in business with and therefore are willing to jeopardize revenue to get out. Otherwise it [was a] no brainer to stay with Google as it is as close to a sure thing as can be,’” Mehta wrote.

Apple could build its own search engine. It has not yet done so, and the judge in the case stopped short of agreeing with the DoJ that the Google deal amounted to a “pay-off” to Apple to keep it out of the search engine market. An internal Apple study in 2018, cited in the judge’s opinion, found that even if it did so and maintained 80 percent of queries, it would still lose $12 billion in revenue in the first five years after separating from Google.

Mehta cited an email from John Giannandrea, a former Google executive who now works for Apple, saying, “There is considerable risk that [Apple] could end up with an unprofitable search engine that [is] also not better for users.”

Google has vowed to appeal against the ruling. Nicholas Rodelli, an analyst at CFRA Research, said it was a “long shot,” given the “meticulous” ruling.

Rodelli said he believed the judge “isn’t likely to issue a game-changing injunction,” such as a full ban on revenue-sharing with Apple. Depending on the remedy the judge decides for Google’s antitrust violations, Seufert said Apple could “either be forced to accept a much less lucrative arrangement with Microsoft [over Bing] or may be prevented from selling search defaults at all.”

“It’s certainly going to adjust the relationship between Google and Apple,” said Bill Kovacic, a former Federal Trade Commission chair and professor of competition law and policy at George Washington University Law School.

Mozilla’s funding may be at risk

Apple is not the only company potentially affected by Monday’s ruling. According to the court, Google’s 2021 payment to Mozilla for the default position on its browser was more than $400 million, about 80 percent of Mozilla’s operating budget. A spokesperson for Mozilla said it was “closely reviewing” the decision and “how we can positively influence the next steps.”

Meanwhile, the search market is undergoing a transformation, as companies such as Google and Microsoft explore how generative AI chatbots can transform traditional search features.

Apple’s partnership with OpenAI, announced in June, will allow users to direct their queries to its chatbot ChatGPT. A smarter Siri voice assistant powered by Apple’s own proprietary AI models will also create a new outlet for user queries that might otherwise go to Google. Apple’s models are trained using Applebot, a web crawler that, much like the technology behind a search engine, compiles public information from across the Internet.

Traditional search is showing no signs of slowing. Research from Emarketer finds that, in the US alone, spend on search advertising will grow at an average of about 10 percent each year, hitting $184 billion in 2028. Google, the dominant player by a long shot, captures about half of that spend. Apple’s current deal with Google would have allowed it to unilaterally extend the partnership into 2028.
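As a rough sanity check on the Emarketer projection above, the cited figures are consistent with simple compounding. The 2024 base of about $125 billion used below is an assumption for illustration only, not a number from the article.

```python
def project_spend(base: float, rate: float, years: int) -> float:
    """Compound `base` forward at `rate` per year for `years` years."""
    return base * (1 + rate) ** years

# Assumed ~$125B US search ad spend in 2024, growing ~10% per year to 2028.
spend_2028 = project_spend(base=125.0, rate=0.10, years=4)
print(f"${spend_2028:.0f}B")  # roughly $183B, close to the cited $184B figure
```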

The Cupertino, California-based iPhone maker has its own antitrust battle to wage. The DoJ’s antitrust division, led by Jonathan Kanter, filed a sweeping lawsuit against Apple in March, making it the latest Big Tech giant to be targeted by the Biden administration’s enforcers.

The legal troubles reflect an ongoing decline in Apple’s relationship with policymakers in Washington, despite an effort by chief executive Tim Cook to step up the company’s lobbying of the Biden White House, according to research by the Tech Transparency Project. TTP found that Apple spent $9.9 million on lobbying the federal government in 2023—its highest in 25 years, though still much lower than the likes of Google, Amazon, and Meta.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



CrowdStrike claps back at Delta, says airline rejected offers for help

Who’s going to pay for this mess? —

Delta is creating a “misleading narrative,” according to CrowdStrike’s lawyers.

LOS ANGELES, CALIFORNIA - JULY 23: Travelers from France wait on their delayed flight on the check-in floor of the Delta Air Lines terminal at Los Angeles International Airport (LAX) on July 23, 2024 in Los Angeles, California.


CrowdStrike has hit back at Delta Air Lines’ threat of litigation against the cyber security company over a botched software update that grounded thousands of flights, denying it was responsible for the carrier’s own IT decisions and days-long disruption.

In a letter on Sunday, lawyers for CrowdStrike argued that the US carrier had created a “misleading narrative” that the cyber security firm was “grossly negligent” in an incident that the airline has said will cost it $500 million.

Delta took days longer than its rivals to recover when CrowdStrike’s update brought down millions of Windows computers around the world last month. The airline has alerted the cyber security company that it plans to seek damages for the disruptions and hired litigation firm Boies Schiller Flexner.

CrowdStrike addressed Sunday’s letter to the law firm, whose chair, David Boies, has previously represented the US government in its antitrust case against Microsoft, as well as Harvey Weinstein, among other prominent clients.

Microsoft has estimated that about 8.5 million Windows devices were hit by the faulty update, which stranded airline passengers, interrupted hospital appointments and took broadcasters off air around the world. CrowdStrike said last week that 99 percent of Windows devices running the affected Falcon software were now back online.

Major US airlines Delta, United, and American briefly grounded their aircraft on the morning of July 19. But while United and American were able to restore their operations over the weekend, Delta’s flight disruptions continued well into the following week.

The Atlanta-based carrier in the end canceled more than 6,000 flights, triggering an investigation from the US Department of Transportation amid claims of poor customer service during the operational chaos.

CrowdStrike’s lawyer, Michael Carlinsky, co-managing partner of Quinn Emanuel Urquhart & Sullivan, wrote that, if it pursues legal action, Delta Air Lines would have to explain why its competitors were able to restore their operations much faster.

He added: “Should Delta pursue this path, Delta will have to explain to the public, its shareholders, and ultimately a jury why CrowdStrike took responsibility for its actions—swiftly, transparently and constructively—while Delta did not.”

CrowdStrike also claimed that Delta’s leadership had ignored and rejected offers for help: “CrowdStrike’s CEO personally reached out to Delta’s CEO to offer onsite assistance, but received no response. CrowdStrike followed up with Delta on the offer for onsite support and was told that the onsite resources were not needed.”

Delta Chief Executive Ed Bastian said last week that CrowdStrike had not “offered anything” to make up for the disruption at the airline. “Free consulting advice to help us—that’s the extent of it,” he told CNBC on Wednesday.

While Bastian has said that the disruption would cost Delta $500 million, CrowdStrike insisted that “any liability by CrowdStrike is contractually capped at an amount in the single-digit millions.”

A spokesperson for CrowdStrike accused Delta of “public posturing about potentially bringing a meritless lawsuit against CrowdStrike” and said it hoped the airline would “agree to work cooperatively to find a resolution.”

Delta Air Lines declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Data centers demand a massive amount of energy. Here’s how some states are tackling the industry’s impact.

rethinking incentives —

States that offer tax exemptions to support the industry are reconsidering their approach.

A Google data center in Douglas County, Georgia.


This article was produced for ProPublica’s Local Reporting Network in partnership with The Seattle Times.

When lawmakers in Washington set out to expand a lucrative tax break for the state’s data center industry in 2022, they included what some considered an essential provision: a study of the energy-hungry industry’s impact on the state’s electrical grid.

Gov. Jay Inslee vetoed that provision but let the tax break expansion go forward. As The Seattle Times and ProPublica recently reported, the industry has continued to grow and now threatens Washington’s effort to eliminate carbon emissions from electricity generation.

Washington’s experience with addressing the power demand of data centers parallels the struggles playing out in other states around the country where the industry has rapidly grown and tax breaks are a factor.

Virginia, home to the nation’s largest data center market, once debated running data centers on carbon-emitting diesel generators during power shortages to keep the lights on in the area. (That plan faced significant public pushback from environmental groups, and an area utility is exploring other options.)

Dominion Energy, the utility that serves most of Virginia’s data centers, has said that it intends to meet state requirements to decarbonize the grid by 2045, but that the task would be more challenging with rising demands driven largely by data centers, Inside Climate News reported. The utility also has indicated that new natural gas plants will be needed.

Some Virginia lawmakers and the state’s Republican governor have proposed reversing or dramatically altering the clean energy goals.

A northern Virginia lawmaker instead proposed attaching strings to the state’s data center tax break. This year, he introduced legislation saying data centers would qualify only if they maximized energy efficiency and sourced renewable energy. The bill died in Virginia’s General Assembly. But the state authorized a study of the industry and how tax breaks impact the grid.

“If we’re going to have data centers, which we all know to be huge consumers of electricity, let’s require them to be as efficient as possible,” said state Delegate Richard “Rip” Sullivan Jr., the Democrat who sponsored the original bill. “Let’s require them to use as little energy as possible to do their job.”

Inslee’s 2022 veto of a study similar to Virginia’s cited the fact that Northwest power planners already include data centers in their estimates of regional demand. But supporters of the legislation said their goal was to obtain more precise answers about Washington-specific electricity needs.

Georgia lawmakers this year passed a bill to halt the state’s data center tax break until data center power use could be analyzed. In the meantime, according to media reports, the state’s largest utility said it would use fossil fuels to make up an energy shortfall caused in part by data centers. Georgia Gov. Brian Kemp then vetoed the tax break pause in May.

Lawmakers in Connecticut and South Carolina have also debated policies to tackle data center power usage in the past year.

“Maybe we want to entice more of them to come. I just want to make sure that we understand the pros and the cons of that before we do it,” South Carolina’s Senate Majority Leader Shane Massey said in May, according to the South Carolina Daily Gazette.

Countries such as Ireland, Singapore, and the Netherlands have at times forced data centers to halt construction to limit strains on the power grid, according to a report by the nonprofit Tony Blair Institute for Global Change. The report’s recommendations for addressing data center power usage include encouraging the private sector to invest directly in renewables.

Sajjad Moazeni, a University of Washington professor who studies artificial intelligence and data center power consumption, said states should consider electricity impacts when formulating data center legislation. Moazeni’s recent research found that in just one day, ChatGPT, a popular artificial intelligence tool, used roughly as much power as 33,000 U.S. households use in a year.

“A policy can help both push companies to make these data centers more efficient and preserve a cleaner, better environment for us,” Moazeni said. “Policymakers need to consider a larger set of metrics on power usage and efficiency.”

Eli Sanders contributed research while a student with the Technology, Law and Public Policy Clinic at the University of Washington School of Law.



Memo to the Supreme Court: Clean Air Act targeted CO2 as climate pollutant, study says

The exterior of the US Supreme Court building during daytime.

Getty Images | Rudy Sulgan

This article originally appeared on Inside Climate News, a nonprofit, independent news organization that covers climate, energy, and the environment. It is republished with permission.

Among the many obstacles to enacting federal limits on climate pollution, none has been more daunting than the Supreme Court. That is where the Obama administration’s efforts to regulate power plant emissions met their demise and where the Biden administration’s attempts will no doubt land.

A forthcoming study seeks to inform how courts consider challenges to these regulations by establishing once and for all that the lawmakers who shaped the Clean Air Act in 1970 knew scientists considered carbon dioxide an air pollutant, and that these elected officials were intent on limiting its emissions.

The research, expected to be published next week in the journal Ecology Law Quarterly, delves deep into congressional archives to uncover what it calls a “wide-ranging and largely forgotten conversation between leading scientists, high-level administrators at federal agencies, members of Congress” and senior staff under Presidents Lyndon Johnson and Richard Nixon. That conversation detailed what had become the widely accepted science showing that carbon dioxide pollution from fossil fuels was accumulating in the atmosphere and would eventually warm the global climate.

The findings could have important implications in light of a legal doctrine the Supreme Court established when it struck down the Obama administration’s power plant rules, said Naomi Oreskes, a history of science professor at Harvard University and the study’s lead author. That so-called “major questions” doctrine asserted that when courts hear challenges to regulations with broad economic and political implications, they ought to consider lawmakers’ original intent and the broader context in which legislation was passed.

“The Supreme Court has implied that there’s no way that the Clean Air Act could really have been intended to apply to carbon dioxide because Congress just didn’t really know about this issue at that time,” Oreskes said. “We think that our evidence shows that that is false.”

The work began in 2013 after Oreskes arrived at Harvard, she said, when a call from a colleague prompted the question of what Congress knew about climate science in the 1960s as it was developing Clean Air Act legislation. She had already co-authored the book Merchants of Doubt, about the efforts of industry-funded scientists to cast doubt about the risks of tobacco and global warming, and was familiar with the work of scientists studying climate change in the 1950s. “What I didn’t know,” she said, “was how much they had communicated that, particularly to Congress.”

Oreskes hired a researcher to start looking, and what they both found surprised her. The evidence they uncovered includes articles cataloged by the staff of the act’s chief architect, proceedings of scientific conferences attended by members of Congress, and correspondence with constituents and scientific advisers to Johnson and Nixon. The material included documents pertaining not only to environmental champions but also to other prominent members of Congress.

“These were people really at the center of power,” Oreskes said.

When Sen. Edmund Muskie, a Maine Democrat, introduced the Clean Air Act of 1970, he warned his colleagues that unchecked air pollution would continue to “threaten irreversible atmospheric and climatic changes.” The new research shows that his staff had collected reports establishing the science behind his statement. He and other senators had attended a 1966 conference featuring discussion of carbon dioxide as a pollutant. At that conference, Wisconsin Sen. Gaylord Nelson warned about carbon dioxide pollution from fossil fuel combustion, which he said “is believed to have drastic effects on climate.”

The paper also cites a 1969 letter to Sen. Henry “Scoop” Jackson of Washington from a constituent who had watched the poet Allen Ginsberg warning of melting polar ice caps and widespread global flooding on the Merv Griffin Show. The constituent was skeptical of the message, called Ginsberg “one of America’s premier kooks” and sought a correction of the record from the senator: “After all, quite a few million people watch this show, people of widely varying degrees of intelligence, and the possibility of this sort of charge—even from an Allen Ginsberg—being accepted even in part, is dangerous.”
