Author name: Shannon Garcia


Hugging Face clones OpenAI’s Deep Research in 24 hours

On Tuesday, Hugging Face researchers released an open source AI research agent called “Open Deep Research,” created by an in-house team as a challenge 24 hours after the launch of OpenAI’s Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research’s performance while making the technology freely available to developers.

“While powerful LLMs are now freely available in open-source, OpenAI didn’t disclose much about the agentic framework underlying Deep Research,” writes Hugging Face on its announcement page. “So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!”

Similar to both OpenAI’s Deep Research and Google’s implementation of its own “Deep Research” using Gemini (first introduced in December—before OpenAI), Hugging Face’s solution adds an “agent” framework to an existing AI model, allowing it to perform multi-step tasks such as collecting information along the way and compiling it into a report that it presents to the user at the end.

The open source clone is already racking up comparable benchmark results. After only a day’s work, Hugging Face’s Open Deep Research has reached 55.15 percent accuracy on the General AI Assistants (GAIA) benchmark, which tests an AI model’s ability to gather and synthesize information from multiple sources. OpenAI’s Deep Research scored 67.36 percent accuracy on the same benchmark.

As Hugging Face points out in its post, GAIA includes complex multi-step questions such as this one:

Which of the fruits shown in the 2008 painting “Embroidery from Uzbekistan” were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film “The Last Voyage”? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o’clock position. Use the plural form of each fruit.

To correctly answer that type of question, the AI agent must seek out multiple disparate sources and assemble them into a coherent answer. Many of the questions in GAIA represent no easy task, even for a human, so they test agentic AI’s mettle quite well.



Framework Laptop’s RISC-V board for open source diehards is available for $199

We’ve covered the Framework Laptop 13 primarily as a consumer Windows laptop, reviewing versions with multiple Intel and AMD processors. But the system’s modular nature makes it possible to expand it beyond Windows PC hardware, as we’ve seen with experiments like the (now-discontinued) Chromebook Edition of the laptop.

Today, Framework is expanding into something even more experimental: a DeepComputing RISC-V Mainboard targeted primarily at developers. RISC-V is a fully open source and royalty-free instruction set, making it possible for anyone to adopt and use it without having to license it (unlike x86, which is a maze of cross-licensed Intel and AMD technologies that other companies can’t really buy into; or Arm, which is licensed by the company of the same name).

First announced in June 2024, the board is available to order today for $199. The board is designed to fit in a Framework Laptop 13 chassis, which means that people who would prefer a desktop can also put it into the $39 Cooler Master Mainboard Case that Framework offers.

Made in concert with DeepComputing, the board uses a StarFive JH7110 processor with four 1.5 GHz SiFive U74 CPU cores. The board can officially run either Ubuntu 24.04 LTS or Fedora 41, with tech support provided by DeepComputing.

The RISC-V board isn’t being offered in a pre-built laptop, but Framework is also introducing a barebones laptop chassis for $399 that includes a screen, 55 WHr battery, speakers, and a keyboard, but no mainboard. It can be used with the RISC-V Mainboard or any other Framework Laptop 13 mainboard.



Concern about SpaceX influence at NASA grows with new appointee

Like a lot of the rest of the federal government right now, NASA is reeling during the first turbulent days of the Trump administration.

The last two weeks have brought a change in leadership in the form of interim administrator Janet Petro, whose ascension was a surprise. Her first act was to tell agency employees to remove diversity, equity, inclusion, and accessibility contracts and to “report” on anyone who did not carry out this order. Soon, civil servants began receiving emails from the US Office of Personnel Management that some perceived as an effort to push them to resign.

Then there are the actions of SpaceX founder Elon Musk. Last week he sowed doubt by claiming NASA had “stranded” astronauts on the space station. (The astronauts are perfectly safe and have a ride home.) Perhaps more importantly, he owns the space agency’s most important contractor and, in recent weeks, has become deeply enmeshed in operating the US government through his Department of Government Efficiency. For some NASA employees, whether or not it is true, there is now an uncomfortable sense that they are working for Musk, doling out contracts to SpaceX.

This concern was heightened late Friday when Petro announced that a longtime SpaceX employee named Michael Altenhofen had joined the agency “as a senior advisor to the NASA Administrator.” Altenhofen is an accomplished engineer who interned at NASA in 2005 but has spent the last 15 years at SpaceX, most recently as a leader of human spaceflight programs. He certainly brings expertise, but his hiring also raises concerns about SpaceX’s influence over NASA operations. Petro did not respond to a request for comment on Monday about potential conflicts of interest and the scope of Altenhofen’s involvement.

I spent this weekend talking and texting with NASA sources at various centers around the country, and the overriding message is that morale at the agency is “absurdly low.” Meetings between civil servants and their leadership, such as an all-hands gathering at NASA’s Langley Research Center in Virginia recently, have been fraught with tension. No one knows what will happen next.



Microsoft 365’s VPN feature will be shut off at the end of the month

Last month, Microsoft announced that it was increasing the prices for consumer Microsoft 365 plans for the first time since introducing them as Office 365 plans more than a decade ago. Microsoft is using new Copilot-branded generative AI features to justify the price increases, which amount to an extra $3 per month or $30 per year for both individual and family plans.

But Microsoft giveth (and chargeth more) and Microsoft taketh away; according to a support page, the company is also removing the “privacy protection” VPN feature from Microsoft 365’s Microsoft Defender app for Windows, macOS, iOS, and Android. Other Defender features, including identity theft protection and anti-malware protection, will continue to be available. Privacy protection will stop functioning on February 28.

Microsoft didn’t say exactly why it was removing the feature, but the company implied that not enough people were using the service.

“We routinely evaluate the usage and effectiveness of our features. As such, we are removing the privacy protection feature and will invest in new areas that will better align to customer needs,” the support note reads.

Cutting features at the same time that you raise prices for the first time ever is not, as they say, a Great Look. But the Defender VPN feature was already a bit limited compared to other dedicated VPN services. It came with a 50GB per user, per month data cap, and it automatically excluded “content heavy traffic from reputable sites” like YouTube, Netflix, Disney+, Amazon Prime, Facebook, Instagram, and WhatsApp.



To help AIs understand the world, researchers put them in a robot


There’s a difference between knowing a word and knowing a concept.

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are primarily systems that interact with data obtained from the real world, not with the real world itself. Humans, on the other hand, associate language with experiences: we know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

The idea of teaching AIs the same way we teach little babies is not new; earlier efforts applied it to standard neural nets that associated words with visuals. Researchers have also tried teaching an AI using a video feed from a GoPro strapped to a human baby. The problem is that babies do far more than just associate items with words when they learn. They touch everything: they grasp things, manipulate them, throw stuff around, and in this way, they learn to think and plan their actions in language. An abstract AI model couldn’t do any of that, so Vijayaraghavan’s team gave one an embodied experience: their AI was trained in an actual robot that could interact with the world.

Vijayaraghavan’s robot was a fairly simple system with an arm and a gripper that could pick objects up and move them around. Vision was provided by a simple RGB camera feeding video at a somewhat crude 64×64 pixel resolution.

The robot and the camera were placed in a workspace in front of a white table with blocks painted green, yellow, red, purple, and blue. The robot’s task was to manipulate those blocks in response to simple prompts like “move red left,” “move blue right,” or “put red on blue.” All that didn’t seem particularly challenging. What was challenging, though, was building an AI that could process all those words and movements in a manner similar to humans. “I don’t want to say we tried to make the system biologically plausible,” Vijayaraghavan told Ars. “Let’s say we tried to draw inspiration from the human brain.”

Chasing free energy

The starting point for Vijayaraghavan’s team was the free energy principle, a hypothesis that the brain constantly makes predictions about the world based on internal models, then updates these predictions based on sensory input. The idea is that we first think of an action plan to achieve a desired goal, and then this plan is updated in real time based on what we experience during execution. This goal-directed planning scheme, if the hypothesis is correct, governs everything we do, from picking up a cup of coffee to landing a dream job.
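The predict-then-update loop at the heart of the free energy principle can be sketched in a few lines. This is a loose illustration, not the team's actual model; the function name, learning rate, and toy observations are all invented for the example:

```python
# A minimal sketch of the free energy principle's loop: an agent holds
# an internal estimate of the world, predicts the next observation, and
# corrects the estimate in proportion to the prediction error.

def free_energy_loop(observations, learning_rate=0.5):
    """Track a sequence of observations with a running prediction."""
    estimate = 0.0
    errors = []
    for obs in observations:
        prediction = estimate              # predict the next sensory input
        error = obs - prediction           # surprise: mismatch with reality
        estimate += learning_rate * error  # update the internal model
        errors.append(abs(error))
    return estimate, errors

# Prediction errors shrink as the internal model converges on a stable world.
final, errs = free_energy_loop([1.0] * 10)
```

In a goal-directed version of this loop, the "observations" would include the agent's own planned actions, so the plan itself gets revised in real time as execution unfolds.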

All that is closely intertwined with language. Neuroscientists at the University of Parma found that motor areas in the brain got activated when the participants in their study listened to action-related sentences. To emulate that in a robot, Vijayaraghavan used four neural networks working in a closely interconnected system. The first was responsible for processing visual data coming from the camera. It was tightly integrated with a second neural net that handled proprioception: all the processes that ensured the robot was aware of its position and the movement of its body. This second neural net also built internal models of actions necessary to manipulate blocks on the table. Those two neural nets were additionally hooked up to visual memory and attention modules that enabled them to reliably focus on the chosen object and separate it from the image’s background.

The third neural net was relatively simple and processed language using vectorized representations of those “move red right” sentences. Finally, the fourth neural net worked as an associative layer and predicted the output of the previous three at every time step. “When we do an action, we don’t always have to verbalize it, but we have this verbalization in our minds at some point,” Vijayaraghavan says. The AI he and his team built was meant to do just that: seamlessly connect language, proprioception, action planning, and vision.
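The wiring of those four cooperating networks might look something like the following sketch. The module names, vector sizes, and stand-in "networks" here are invented for illustration and are not taken from the paper:

```python
# Illustrative wiring of four cooperating modules: vision,
# proprioception, language, and an associative layer that binds the
# other three streams at each time step.
import random

class Module:
    """Stand-in for a neural net: maps an input vector to a state vector."""
    def __init__(self, dim):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(dim)]

    def forward(self, x):
        return [xi + wi for xi, wi in zip(x, self.w)]

class EmbodiedAgent:
    def __init__(self, dim=4):
        self.vision = Module(dim)           # processes camera frames
        self.proprioception = Module(dim)   # tracks arm position and motion
        self.language = Module(dim)         # encodes "move red left" etc.
        self.associative = Module(dim * 3)  # binds all three streams

    def step(self, frame, joints, command):
        v = self.vision.forward(frame)
        p = self.proprioception.forward(joints)
        l = self.language.forward(command)
        # The associative layer sees the joint state of all three streams
        # and predicts the next combined state, tying words to sights
        # and movements.
        return self.associative.forward(v + p + l)

agent = EmbodiedAgent()
prediction = agent.step([0.0] * 4, [0.0] * 4, [1.0] * 4)
```

The point of the structure, in both the sketch and the real system, is that language never exists in isolation: every linguistic state is predicted jointly with a visual and a bodily state.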

When the robotic brain was up and running, they started teaching it some of the possible combinations of commands and sequences of movements. But they didn’t teach it all of them.

The birth of compositionality

In 2016, Brenden Lake, a professor of psychology and data science, published a paper in which his team named a set of competencies machines need to master to truly learn and think like humans. One of them was compositionality: the ability to compose or decompose a whole into parts that can be reused. This reuse lets them generalize acquired knowledge to new tasks and situations. “The compositionality phase is when children learn to combine words to explain things. They [initially] learn the names of objects, the names of actions, but those are just single words. When they learn this compositionality concept, their ability to communicate kind of explodes,” Vijayaraghavan explains.

The AI his team built was made for this exact purpose: to see if it would develop compositionality. And it did.

Once the robot learned how certain commands and actions were connected, it also learned to generalize that knowledge to execute commands it had never heard before: recognizing the names of actions it had not performed and then performing them on combinations of blocks it had never seen. Vijayaraghavan’s AI figured out the concept of moving something to the right or the left or putting an item on top of something. It could also combine words to name previously unseen actions, like putting a blue block on a red one.

While teaching robots to extract concepts from language has been done before, those efforts focused on making them understand how words were used to describe visuals. Vijayaraghavan built on that to include proprioception and action planning, essentially adding a layer that integrated sense and movement into the way his robot made sense of the world.

But some issues are yet to be overcome. The AI had a very limited workspace: there were only a few objects, and all had a single, cubical shape. The vocabulary included only the names of colors and actions, so no modifiers, adjectives, or adverbs. Finally, the robot had to learn around 80 percent of all possible combinations of nouns and verbs before it could generalize well to the remaining 20 percent. Its performance was worse when those ratios dropped to 60/40 and 40/60.
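That 80/20 generalization test can be illustrated with a quick sketch. The noun and verb lists below are placeholders with the same counts as the study's vocabulary (five nouns, eight verbs), not the actual words the robot learned:

```python
# Sketch of the generalization test: train on ~80 percent of noun-verb
# combinations, hold out the rest, and ask whether a compositional
# learner can execute the unseen pairings.
import itertools
import random

nouns = ["red", "blue", "green", "yellow", "purple"]
verbs = ["move-left", "move-right", "move-up", "move-down",
         "put-on", "grasp", "release", "slide"]

pairs = list(itertools.product(verbs, nouns))  # 8 verbs * 5 nouns = 40 pairs
random.seed(0)
random.shuffle(pairs)

split = int(len(pairs) * 0.8)
train, held_out = pairs[:split], pairs[split:]

# A compositional learner trained on `train` should still execute the
# commands in `held_out`: it has seen every word, just not every pairing.
```

The study's finding was that the split matters: at 80/20 the held-out pairings worked, but shifting toward 60/40 or 40/60 degraded performance.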

But it’s possible that just a bit more computing power could fix this. “What we had for this study was a single RTX 3090 GPU, so with the latest generation GPU, we could solve a lot of those issues,” Vijayaraghavan argued. That’s because the team hopes that adding more words and more actions won’t result in a dramatic need for computing power. “We want to scale the system up. We have a humanoid robot with cameras in its head and two hands that can do way more than a single robotic arm. So that’s the next step: using it in the real world with real world robots,” Vijayaraghavan said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adp0751


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Bogus research is undermining good science, slowing lifesaving research

In 2022, Byrne and colleagues, including two of us, found that suspect genetics research, despite not immediately affecting patient care, informs scientists’ work, including clinical trials. But publishers are often slow to retract tainted papers, even when alerted to obvious fraud. We found that 97 percent of the 712 problematic genetics research articles we identified remained uncorrected.

Potential solutions

The Cochrane Collaboration has a policy excluding suspect studies from its analyses of medical evidence and is developing a tool to spot problematic medical trials. And publishers have begun to share data and technologies among themselves to combat fraud, including image fraud.

Technology startups are also offering help. The website Argos, launched in September 2024 by Scitility, an alert service based in Sparks, Nevada, allows authors to check collaborators for retractions or misconduct. Morressier, a scientific conference and communications company in Berlin, offers research integrity tools. Paper-checking tools include Signals, by London-based Research Signals, and Clear Skies’ Papermill Alarm.

But Alam acknowledges that the fight against paper mills won’t be won as long as the booming demand for papers remains.

Today’s commercial publishing is part of the problem, Byrne said. Cleaning up the literature is a vast and expensive undertaking. “Either we have to monetize corrections such that publishers are paid for their work, or forget the publishers and do it ourselves,” she said.

There’s a fundamental bias in for-profit publishing: “We pay them for accepting papers,” said Bodo Stern, a former editor of the journal Cell and chief of Strategic Initiatives at Howard Hughes Medical Institute, a nonprofit research organization and funder in Chevy Chase, Maryland. With more than 50,000 journals on the market, bad papers shopped around long enough eventually find a home, Stern said.

To prevent this, we could stop paying journals for accepting papers and look at them as public utilities that serve a greater good. “We should pay for transparent and rigorous quality-control mechanisms,” he said.

Peer review, meanwhile, “should be recognized as a true scholarly product, just like the original article,” Stern said. And journals should make all peer-review reports publicly available, even for manuscripts they turn down.

This article is republished from The Conversation under a Creative Commons license. This is a condensed version. To learn more about how fraudsters around the globe use paper mills to enrich themselves and harm scientific research, read the full version.

Frederik Joelving is a contributing editor at Retraction Watch; Cyril Labbé is a professor of computer science at the Université Grenoble Alpes (UGA); and Guillaume Cabanac is a professor of computer science at Institut de Recherche en Informatique de Toulouse.



Treasury official retires after clash with DOGE over access to payment system

“This is a mechanical job—they pay Social Security benefits, they pay vendors, whatever. It’s not one where there’s a role for nonmechanical things, at least from the career standpoint. Your whole job is to pay the bills as they’re due,” Mazur was quoted as saying. “It’s never been used in a way to execute a partisan agenda… You have to really put bad intentions in place for that to be the case.”

The Trump administration previously issued an order to freeze funding for a wide range of government programs, but rescinded the order after two days of protest and a judge’s ruling that temporarily blocked the funding freeze.

Trump ordered cooperation with DOGE

The Trump executive order establishing DOGE took the existing United States Digital Service and renamed it the United States DOGE Service. It’s part of the Executive Office of the President and is tasked with “modernizing Federal technology and software to maximize governmental efficiency and productivity.”

Trump’s order said that federal agencies will have to collaborate with DOGE. “Among other things, the USDS Administrator shall work with Agency Heads to promote inter-operability between agency networks and systems, ensure data integrity, and facilitate responsible data collection and synchronization,” the order said. “Agency Heads shall take all necessary steps, in coordination with the USDS Administrator and to the maximum extent consistent with law, to ensure USDS has full and prompt access to all unclassified agency records, software systems, and IT systems. USDS shall adhere to rigorous data protection standards.”

The Post writes that “Musk has sought to exert sweeping control over the inner workings of the US government, installing longtime surrogates at several agencies, including the Office of Personnel Management, which essentially handles federal human resources, and the General Services Administration.”

On Thursday, Musk visited the General Services Administration headquarters in Washington, DC, The New York Times reported. The Department of Government Efficiency’s account on X stated earlier this week that the GSA had “terminated three leases of mostly empty office space” for a savings of $1.6 million and that more cuts are planned. In another post, DOGE claimed it “is saving the Federal Government approx. $1 billion/day, mostly from stopping the hiring of people into unnecessary positions, deletion of DEI and stopping improper payments to foreign organizations, all consistent with the President’s Executive Orders.”

“Mr. Musk’s visit to the General Services Administration could presage more cost-cutting efforts focused on federal real estate,” the Times wrote. “The agency also plays a role in federal contracting and in providing technology services across the federal government.”



“Just give me the f***ing links!”—Cursing disables Google’s AI overviews

If you search Google for a way to turn off the company’s AI-powered search results, you may well get an AI Overview telling you that AI Overviews can’t be directly disabled in Google Search. But if you instead ask Google how to turn off “fucking Google AI results,” you’ll get a standard set of useful web suggestions without any AI Overview at the top.

The existence of this “curse to disable Google AI” trick has been making the rounds on social media in recent days, and it holds up in Ars’ own testing. For instance, when searching for “how do you turn off [adjective] Google AI results,” a variety of curse word adjectives reliably disabled the AI Overviews, while adjectives like “dumb” or “lousy” did not. Inserting curse words randomly at any point in the search query seems to have a similar effect.

There’s long been evidence that Google’s Gemini AI system tries to avoid swearing if at all possible, which might help explain why AI Overviews balk at queries that contain curses. Users should also keep in mind, though, that the actual web link results to a query can change significantly when curse words are inserted, especially if SafeSearch is turned off.



Here’s why the tech industry gets excited about sports car racing


It would take IMSA 700 years to drive to Mars

Racing has always been used to improve the breed, but now mostly with software.

NASA worm logo with race imagery over a backdrop of Mars

Credit: Aurich Lawson | Getty Images | NASA


DAYTONA BEACH—Last week, ahead of the annual Rolex 24 at Daytona and the start of the North American road racing season, IMSA (the sport’s organizing body) held a tech symposium across the road from the vast speedway at Embry-Riddle Aeronautical University. Last year, panelists, including CrowdStrike’s CSO, explained the draw of racing to their employers; this time, organizations represented included NASA, Michelin, AMD, and Microsoft. And while they were all there to talk about racing, it seems everyone was also there to talk about simulation and AI.

I’ve long maintained that endurance racing, where grids of prototypes and road car-based racers compete over long durations—24 hours, for example—is the most relevant form of motorsport, the one that makes road cars better. Formula 1 has budgets and an audience to dwarf all others, and there’s no doubt about the level of talent and commitment required to triumph in that arena. The Indy 500 might have more history. And rallying looks like the hardest challenge for both humans and machines.

But your car owes its disc brakes to endurance racing, plus its dual-clutch transmission, if it’s one of the increasing number of cars fitted with one. But let’s not overblow it. Over the years, budgets have had to be reined in for the health of the sport. That—plus a desire for parity among the teams so that no one clever idea runs away with the series—means there are plenty of spec or controlled components on a current endurance racer. Direct technology transfer, then, happens less and less often—at least in terms of new mechanical bits and bobs you might find inside your next car.

Software has become a new competitive advantage for the teams that race hybrid sports prototypes from Acura, BMW, Cadillac, Porsche, and Lamborghini, just as it is between teams in Formula E.

But this year’s symposium shone a light on a different area of tech transfer, where Microsoft or NASA can use the vast streams of data that pour out of a 60-car, 24-hour race to build more accurate simulations and AI tools—maybe even ones that will babysit a crewed mission to Mars.

Sorry, did you say Mars?

“Critically, it takes light 20 minutes to make that trip, which has some really unfortunate operational impacts,” said Ian Maddox of NASA’s Marshall Space Flight Center’s Habitation office. A 40-minute delay between asking a question and getting an answer wouldn’t work for a team trying to win the Rolex 24, and “it certainly isn’t going to work for us,” he said.

“And so we’re placed in—I’ll be frank—the really uncomfortable position of having to figure out how to build AI tools to help the crew on board a Mars ship diagnose and respond to their own problems. So to be their own crew, to be their own engineering teams, at least for the subset of problems that can get really bad in the course of 45 minutes to an hour,” Maddox said.

Building those kinds of tools will require a “giant bucket of really good data,” Maddox said, “and that’s why we’ve come to IMSA.”

Individually, the hybrid prototypes and GT cars in an IMSA race are obviously far less complicated than a Mars-bound spacecraft. But when you get that data from all the cars in the race together, the size starts to become comparable.

“And fundamentally, you guys have things that roll and we have things that rotate, and you have things that get hot and cold, and so do we,” Maddox said. “When you get down to the actual measurement level, there are a lot of similarities between the stuff that you guys use to understand vehicle performance and the stuff we use to understand vehicle performance.”

Not just Mars

Other speakers pointed to areas of technology development—like tire development—that you may have read about recently here on Ars Technica. “[A tire is] a composite material made with more than 200 components with very non-linear behavior. It’s pressure-sensitive, it’s temperature-sensitive. It changes with wear… and actually, the ground interaction is also one of the worst mechanisms to try to anticipate and to understand,” said Phillippe Tramond, head of research of motorsport at Michelin.

For the past four years, Michelin has been crunching data gathered from cars racing on its rubber (and the other 199 components). “And eventually, we are able to build and develop a thermomechanical tire model able to mimic and simulate tire behavior, tire performance, whatever the specification is,” Tramond said.

That tool has been quite valuable to the teams racing in the GTP class of hybrid prototypes, as it means that their driver-in-the-loop simulators are now even more faithful to real life. But Michelin has also started using the tire model when developing road tires for specific cars with individual OEMs.

For Sid Siddhartha, a principal researcher at Microsoft Research, the data is again the draw. Siddhartha has been using AI to study human behavior, including in the game Rocket League. “We were able to actually show that we can really understand and home in on individual human behavior in a very granular way, to the point where if I just observe you for two or three seconds, or if I look at some of your games, I can tell you who played it,” Siddhartha said.

That led to a new approach by the Alpine F1 team, which wanted to use Siddhartha’s AI to improve its simulation tools. F1 teams will run entirely virtual simulations on upgraded cars long before they fire those changes up in the big simulator and let their human drivers have a go (as described above). In Alpine’s case, they wanted something more realistic than a lap time simulator that just assumed perfect behavior.

The dreaded BoP

“Eventually, we are connected to IMSA, and IMSA is interested in a whole host of questions that are very interesting to us at Microsoft Research,” Siddhartha said. “They’re interested in what are the limits of driver and car? How do you balance that performance across different classes? How do you anticipate what might happen when people make different strategic decisions during the race? And how do you communicate all of this to a fan base, which has really blown me away, as John was saying, who are interested in following the sport and understanding what’s going on.”

“Sports car racing is inherently complex,” said Matt Kurdock, IMSA’s managing director of engineering. “We’ve got four different classes. We have, in each car, four different drivers. And IMSA’s challenge is to extract from this race data that’s being collected and figure out how to get an appropriate balance so that manufacturers stay engaged in the sport,” Kurdock said.

IMSA has the cars put through wind tunnels and runs CFD simulations on them as well. “We then plug all this information into one of Michelin’s tools, which is their canopy vehicle dynamic simulation, which runs in the cloud, and from this, we start generating a picture of where we believe the optimized performance of each platform is,” Kurdock said.

That’s something to think about the next time your favorite team gets the short end of the stick in the latest balance of performance—better known as BoP—update.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.


how-one-youtuber-is-trying-to-poison-the-ai-bots-stealing-her-content

How one YouTuber is trying to poison the AI bots stealing her content

If you’ve been paying careful attention to YouTube recently, you may have noticed the rising trend of so-called “faceless YouTube channels” that never feature a visible human talking in the video frame. While some of these channels are simply authored by camera-shy humans, many more are fully automated, relying on AI-powered tools to craft everything from the scripts and voiceovers to the imagery and music. Unsurprisingly, this is often sold as a way to make a quick buck off the YouTube algorithm with minimal human effort.

It’s not hard to find YouTubers complaining about a flood of these faceless channels stealing their embedded transcript files and running them through AI summarizers to generate their own instant knock-offs. But one YouTuber is trying to fight back, seeding her transcripts with junk data that is invisible to humans but poisonous to any AI that dares to try to work from a poached transcript file.

The power of the .ass

YouTuber F4mi, who creates some excellent deep dives on obscure technology, recently detailed her efforts “to poison any AI summarizers that were trying to steal my content to make slop.” The key to F4mi’s method is the .ass subtitle format, created decades ago as part of fansubbing software Advanced SubStation Alpha. Unlike simpler and more popular subtitle formats, .ass supports fancy features like fonts, colors, positioning, bold, italic, underline, and more.

It’s these fancy features that let F4mi hide AI-confounding garbage in her YouTube transcripts without impacting the subtitle experience for her human viewers. For each chunk of actual text in her subtitle file, she also inserted “two chunks of text out of bounds using the positioning feature of the .ass format, with their size and transparency set to zero so they are completely invisible.”

In those “invisible” subtitle boxes, F4mi added text from public domain works (with certain words replaced with synonyms to avoid detection) or her own LLM-generated scripts full of completely made-up facts. When those transcript files were fed into popular AI summarizer sites, that junk text ended up overwhelming the actual content, creating a totally unrelated script that would be useless to any faceless channel trying to exploit it.
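The trick described above can be sketched in a few lines of code. This is an illustrative reconstruction, not F4mi’s actual script: it builds a minimal .ass file in which each real subtitle line is paired with a decoy line pushed out of frame via the `\pos` override tag, shrunk with `\fs1`, and made fully transparent with `\alpha&HFF&` (in .ass, an alpha of `FF` is invisible). The specific tag values and helper names here are assumptions chosen for the example.

```python
# Illustrative sketch (not F4mi's actual tooling): generate an .ass subtitle
# file whose decoy lines are positioned off-screen, shrunk, and made fully
# transparent, so human viewers never see them but naive transcript
# scrapers ingest them alongside the real dialogue.

ASS_HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour
Style: Default,Arial,48,&H00FFFFFF

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

def dialogue(start, end, text, hidden=False):
    # Hidden lines get override tags: \pos moves the text out of bounds,
    # \fs1 shrinks it to near-zero size, and \alpha&HFF& makes it
    # fully transparent.
    tags = r"{\pos(-500,-500)\fs1\alpha&HFF&}" if hidden else ""
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{tags}{text}"

def build_subtitles(chunks, decoys):
    """Interleave each real subtitle chunk with an invisible decoy line
    covering the same time span."""
    lines = [ASS_HEADER]
    for i, (real, junk) in enumerate(zip(chunks, decoys)):
        start, end = f"0:00:{2*i:02d}.00", f"0:00:{2*i+2:02d}.00"
        lines.append(dialogue(start, end, real))
        lines.append(dialogue(start, end, junk, hidden=True))
    return "\n".join(lines)

subs = build_subtitles(
    ["Welcome back to the channel."],
    ["The Eiffel Tower was built entirely from recycled submarine hulls."],
)
print(subs)
```

A summarizer that strips the subtitle file down to raw text sees both lines with equal weight, while a video player renders only the visible one.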

How one YouTuber is trying to poison the AI bots stealing her content Read More »

democrat-teams-up-with-movie-industry-to-propose-website-blocking-law

Democrat teams up with movie industry to propose website-blocking law

US Rep. Zoe Lofgren (D-Calif.) today proposed a law that would let copyright owners obtain court orders requiring Internet service providers to block access to foreign piracy websites. The bill would also force DNS providers to block sites.

Lofgren said in a press release that she “work[ed] for over a year with the tech, film, and television industries” on “a proposal that has a remedy for copyright infringers located overseas that does not disrupt the free Internet except for the infringers.” Lofgren said she plans to work with Republican leaders to enact the bill.

Lofgren’s press release includes a quote from Charles Rivkin, chairman and CEO of the Motion Picture Association (MPA). As we’ve previously written, the MPA has been urging Congress to pass a site-blocking law.

“More than 55 nations around the world, including democracies such as Canada, the United Kingdom, and Australia, have put in place tools similar to those proposed by Rep. Lofgren, and they have successfully reduced piracy’s harms while protecting consumer access to legal content,” Rivkin was quoted as saying in Lofgren’s press release today.

Lofgren is the ranking member of the House Science, Space, and Technology Committee and a member of the House Subcommittee on Courts, Intellectual Property, Artificial Intelligence and the Internet.

Bill called “censorious site-blocking” measure

Although Lofgren said her proposed Foreign Anti-Digital Piracy Act “preserves the open Internet,” consumer advocacy group Public Knowledge described the bill as a “censorious site-blocking” measure “that turns broadband providers into copyright police at Americans’ expense.”

“Rather than attacking the problem at its source—bringing the people running overseas piracy websites to court—Congress and its allies in the entertainment industry has decided to build out a sweeping infrastructure for censorship,” Public Knowledge Senior Policy Counsel Meredith Rose said. “Site-blocking orders force any service provider, from residential broadband providers to global DNS resolvers, to disrupt traffic from targeted websites accused of copyright infringement. More importantly, applying blocking orders to global DNS resolvers results in global blocks. This means that one court can cut off access to a website globally, based on one individual’s filing and an expedited procedure. Blocking orders are incredibly powerful weapons, ripe for abuse, and we’ve seen the messy consequences of them being implemented in other countries.”

Democrat teams up with movie industry to propose website-blocking law Read More »

stem-cells-used-to-partially-repair-damaged-hearts

Stem cells used to partially repair damaged hearts

When we developed the ability to convert various cells into a stem cell, it held the promise of an entirely new type of therapy. Rather than getting the body to try to fix itself with its own cells or dealing with the complications of organ transplants, we could convert a few adult cells to stem cells and induce them to form any tissue in the body. We could potentially repair or replace tissues with an effectively infinite supply of a patient’s own cells.

However, the Nobel Prize for induced stem cells was handed out over a decade ago, and the therapies have been slow to follow. But a group of German researchers is now describing tests in primates of a method of repairing the heart using new muscle generated from stem cells. The results are promising, if not yet providing everything that we might hope for. But they’ve been enough to start clinical trials, and similar results are being seen in humans.

Heart problems

The heart contains a lot of specialized tissues, including those that form blood vessels or specialize in conducting electrical signals. But the key to the heart is a form of specialized muscle cell, called a cardiomyocyte. Once the heart matures, the cardiomyocytes stop dividing, meaning that you end up with a fixed population. Any damage to the heart due to injury or infection does not get repaired, meaning damage will be cumulative.

This is especially problematic in cases of blocked blood vessels, which can repeatedly starve large areas of the heart of oxygen and nutrients, killing the cardiomyocytes there. This leads to a reduction in cardiac function and can ultimately result in death.

It turns out, however, that it’s relatively easy to convert induced pluripotent stem cells (iPSCs, with pluripotent meaning they can form any cell type) into cardiomyocytes. So researchers tried injecting these stem-cell-derived cardiomyocytes into damaged hearts in experimental animals, in the hope that they would be incorporated into the damaged tissue. But these experiments didn’t always provide clear benefits to the animals.

Stem cells used to partially repair damaged hearts Read More »