Author name: Kelly Newman


Actively exploited vulnerability gives extraordinary control over server fleets

On Wednesday, CISA added CVE-2024-54085 to its list of vulnerabilities known to be exploited in the wild. The notice provided no further details.

In an email on Thursday, Eclypsium researchers said the scope of the exploits has the potential to be broad:

  • Attackers could chain multiple BMC exploits to implant malicious code directly into the BMC’s firmware, making their presence extremely difficult to detect and allowing them to survive OS reinstalls or even disk replacements.
  • By operating below the OS, attackers can evade endpoint protection, logging, and most traditional security tools.
  • With BMC access, attackers can remotely power on or off, reboot, or reimage the server, regardless of the primary operating system’s state.
  • Attackers can scrape credentials stored on the system, including those used for remote management, and use the BMC as a launchpad to move laterally within the network.
  • BMCs often have access to system memory and network interfaces, enabling attackers to sniff sensitive data or exfiltrate information without detection.
  • Attackers with BMC access can intentionally corrupt firmware, rendering servers unbootable and causing significant operational disruption.

With no publicly known details of the ongoing attacks, it’s unclear which groups may be behind them. Eclypsium said the most likely culprits would be espionage groups working on behalf of the Chinese government. All five of the specific APT groups Eclypsium named have a history of exploiting firmware vulnerabilities or gaining persistent access to high-value targets.

Eclypsium said the line of vulnerable AMI MegaRAC devices uses an interface known as Redfish. Server makers known to use these products include AMD, Ampere Computing, ASRock, ARM, Fujitsu, Gigabyte, Huawei, Nvidia, Supermicro, and Qualcomm. Some, but not all, of these vendors have released patches for their wares.

Given the damage possible from exploitation of this vulnerability, admins should examine all BMCs in their fleets to ensure they aren’t vulnerable. With products from so many different server makers affected, admins should consult with their manufacturer when unsure if their networks are exposed.
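As a first step in that audit, the version comparison itself is easy to script. The sketch below compares an installed MegaRAC firmware version against a fixed release; the version strings are hypothetical placeholders, and in practice the installed version would come from the BMC’s Redfish firmware inventory and the fixed version from the vendor’s advisory.

```python
def parse_version(version: str) -> tuple:
    """Split a dotted firmware version string into a tuple of integers
    so versions compare numerically rather than lexically."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, minimum_fixed: str) -> bool:
    """Return True if the installed BMC firmware is at or above the
    release that fixes the vulnerability."""
    return parse_version(installed) >= parse_version(minimum_fixed)

# Hypothetical version numbers for illustration only; consult your
# server maker's advisory for the real fixed release.
print(is_patched("12.1.0", "12.4.0"))  # False: this BMC needs updating
print(is_patched("12.4.2", "12.4.0"))  # True
```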



Researchers develop a battery cathode material that does it all

Battery electrode materials need to do a lot of things well. They need to be conductors to get charges to and from the ions that shuttle between the electrodes. They also need to have an open structure that allows the ions to move around before they reach a site where they can be stored. The storage of lots of ions also causes materials to expand, creating mechanical stresses that can cause the structure of the electrode material to gradually decay.

Because it’s hard to get all of these properties from a single material, many electrodes are composite materials, with one chemical used to allow ions into and out of the electrode, another to store them, and possibly a third that provides high conductivity. Unfortunately, this can create new problems, with breakdowns at the interfaces between materials slowly degrading the battery’s capacity.

Now, a team of researchers is proposing a material that seemingly does it all. It’s reasonably conductive, it allows lithium ions to move around and find storage sites, and it’s made of cheap and common elements. Perhaps best of all, it undergoes self-healing, smoothing out damage across charge/discharge cycles.

High capacity

The research team, primarily based in China, set out to limit the complexity of cathodes. “Conventional composite cathode designs, which typically incorporate a cathode active material, catholyte, and electronic conducting additive, are often limited by the substantial volume fraction of electrochemically inactive components,” the researchers wrote. The solution, they reasoned, was to create an all-in-one material that gets rid of most of these materials.

A number of papers had reported good luck with chlorine-based chemicals, which allowed ions to move readily through the material but didn’t conduct electricity very well. So the researchers experimented with pre-loading one of these materials with lithium. And they focused on iron chloride since it’s a very cheap material.



Curated realities: An AI film festival and the future of human expression


We saw 10 AI films and interviewed Runway’s CEO as well as Hollywood pros.


A still from Total Pixel Space, the Grand Prix winner at AIFF 2025.


Last week, I attended a film festival dedicated to shorts made using generative AI. Dubbed AIFF 2025, it was an event precariously balancing between two different worlds.

The festival was hosted by Runway, a company that produces models and tools for generating images and videos. In panels and press briefings, a curated list of industry professionals made the case for Hollywood to embrace AI tools. In private meetings with industry professionals, I gained a strong sense that there is already a widening philosophical divide within the film and television business.

I also interviewed Runway CEO Cristóbal Valenzuela about the tightrope he walks as he pitches his products to an industry that has deeply divided feelings about what role AI will have in its future.

To unpack all this, it makes sense to start with the films, partly because the film that was chosen as the festival’s top prize winner says a lot about the issues at hand.

A festival of oddities and profundities

Since this was the first time the festival had been open to the public, the crowd was a diverse mix: AI tech enthusiasts, working industry creatives, and folks who enjoy movies and who were curious about what they’d see—as well as quite a few people who fit into all three groups.

The scene at the entrance to the theater at AIFF 2025 in Santa Monica, California.

The films shown were all short, and most would be more at home at an art film fest than something more mainstream. Some shorts featured an animated aesthetic (including one inspired by anime) and some presented as live action. There was even a documentary of sorts. The films could be made entirely with Runway or other AI tools, or those tools could simply be a key part of a stack that also includes more traditional filmmaking methods.

Many of these shorts were quite weird. Most of us have seen by now that AI video-generation tools excel at producing surreal and distorted imagery—sometimes whether the person prompting the tool wants that or not. Several of these films leaned into that limitation, treating it as a strength.

Representing that camp was Vallée Duhamel’s Fragments of Nowhere, which visually explored the notion of multiple dimensions bleeding into one another. Cars morphed into the sides of houses, and humanoid figures, purported to be inter-dimensional travelers, moved in ways that defied anatomy. While I found this film visually compelling at times, I wasn’t seeing much in it that I hadn’t already seen from dreamcore or horror AI video TikTok creators like GLUMLOT or SinRostroz in recent years.

More compelling were shorts that used this propensity for oddity to generate imagery that was curated and thematically tied to some aspect of human experience or identity. For example, More Tears than Harm by Herinarivo Rakotomanana was a rotoscope animation-style “sensory collage of childhood memories” of growing up in Madagascar. Its specificity and consistent styling lent it a credibility that Fragments of Nowhere didn’t achieve. I also enjoyed Riccardo Fusetti’s Editorial on this front.

More Tears Than Harm, an unusual animated film at AIFF 2025.

Among the 10 films in the festival, two clearly stood above the others in my impressions—and they ended up being the Grand Prix and Gold prize winners. (The judging panel included filmmakers Gaspar Noé and Harmony Korine, Tribeca Enterprises CEO Jane Rosenthal, IMAX head of post and image capture Bruce Markoe, Lionsgate VFX SVP Brianna Domont, Nvidia developer relations lead Richard Kerris, and Runway CEO Cristóbal Valenzuela, among others).

Runner-up Jailbird was the aforementioned quasi-documentary. Directed by Andrew Salter, it was a brief piece that introduced viewers to a program in the UK that places chickens in human prisons as companion animals, to positive effect. Why make that film with AI, you might ask? Well, AI was used to achieve shots that wouldn’t otherwise be doable for a small-budget film to depict the experience from the chicken’s point of view. The crowd loved it.

Jailbird, the runner-up at AIFF 2025.

Then there was the Grand Prix winner, Jacob Adler’s Total Pixel Space, which was, among other things, a philosophical defense of the very idea of AI art. You can watch Total Pixel Space on YouTube right now, unlike some of the other films. I found it strangely moving, even as I saw its selection as the festival’s top winner with some cynicism. Of course they’d pick that one, I thought, although I agreed it was the most interesting of the lot.

Total Pixel Space, the Grand Prix winner at AIFF 2025.

Total Pixel Space

Even though it risked navel-gazing and self-congratulation in this venue, Total Pixel Space was filled with compelling imagery that matched the themes, and it touched on some genuinely interesting ideas—at times, it seemed almost profound, didactic as it was.

“How many images can possibly exist?” the film’s narrator asks. To answer that, it explains the concept of total pixel space, which actually reflects how image generation tools work:

Pixels are the building blocks of digital images—tiny tiles forming a mosaic. Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers…

Just as we don’t need to write down every number between zero and one to prove they exist, we don’t need to generate every possible image to prove they exist. Their existence is guaranteed by the mathematics that defines them… Every frame of every possible film exists as coordinates… To deny this would be to deny the existence of numbers themselves.

The nine-minute film demonstrates that the number of possible images or films is greater than the number of atoms in the universe and argues that photographers and filmmakers may be seen as discovering images that already exist in the possibility space rather than creating something new.
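The counting argument behind that claim can be checked with a few lines of arithmetic. The sketch below computes how many decimal digits the number of possible images has for a given resolution and color depth; the 8×8 resolution is an arbitrary example chosen here, not one from the film.

```python
import math

def count_images_digits(width: int, height: int, bits_per_pixel: int = 24) -> int:
    """Decimal digits in the count of distinct images, which is
    (2 ** bits_per_pixel) raised to the number of pixels. Computed via
    logarithms to avoid constructing the astronomically large integer."""
    total_bits = width * height * bits_per_pixel
    return math.floor(total_bits * math.log10(2)) + 1

# Even a tiny 8x8 icon in 24-bit color allows a number of distinct
# images with hundreds of digits, dwarfing the roughly 10**80 atoms
# in the observable universe.
print(count_images_digits(8, 8))  # 463
```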

Within that framework, it’s easy to argue that generative AI is just another way for artists to “discover” images.

The balancing act

“We are all—and I include myself in that group as well—obsessed with technology, and we keep chatting about models and data sets and training and capabilities,” Runway CEO Cristóbal Valenzuela said to me when we spoke the next morning. “But if you look back and take a minute, the festival was celebrating filmmakers and artists.”

I admitted that I found myself moved by Total Pixel Space’s articulations. “The winner would never have thought of himself as a filmmaker, and he made a film that made you feel something,” Valenzuela responded. “I feel that’s very powerful. And the reason he could do it was because he had access to something that just wasn’t possible a couple of months ago.”

First-time and outsider filmmakers were the focus of AIFF 2025, but Runway works with established studios, too—and those relationships have an inherent tension.

The company has signed deals with companies like Lionsgate and AMC Networks. In some cases, it trains on data provided by those companies; in others, it embeds within them to try to develop tools that fit how they already work. That’s not something competitors like OpenAI are doing yet, so that, combined with a head start in video generation, has allowed Runway to grow and stay competitive so far.

“We go directly into the companies, and we have teams of creatives that are working alongside them. We basically embed ourselves within the organizations that we’re working with very deeply,” Valenzuela explained. “We do versions of our film festival internally for teams as well so they can go through the process of making something and seeing the potential.”

Founded in 2018 at New York University’s Tisch School of the Arts by two Chileans and one Greek co-founder, Runway has a very different story than its Silicon Valley competitors. It was one of the first to bring an actually usable video-generation tool to the masses. Runway also contributed in foundational ways to the popular Stable Diffusion model.

Though it is vastly outspent by competitors like OpenAI, it has taken a hands-on approach to working with existing industries. You won’t hear Valenzuela or other Runway leaders talking about the imminence of AGI or anything so lofty; instead, it’s all about selling the product as something that can solve existing problems in creatives’ workflows.

Still, an artist’s mindset and relationships within the industry don’t negate some fundamental conflicts. There are multiple intellectual property cases involving Runway and its peers, and though the company hasn’t admitted it, there is evidence that it trained its models on copyrighted YouTube videos, among other things.

Cristóbal Valenzuela speaking on the AIFF 2025 stage. Credit: Samuel Axon

Valenzuela suggested that studios are worried about liability, not underlying principles, though, saying:

Most of the concerns on copyright are on the output side, which is like, how do you make sure that the model doesn’t create something that already exists or infringes on something. And I think for that, we’ve made sure our models don’t and are supportive of the creative direction you want to take without being too limiting. We work with every major studio, and we offer them indemnification.

In the past, he has also defended Runway by saying that what it’s producing is not a re-creation of what has come before. He sees the tool’s generative process as distinct—legally, creatively, and ethically—from simply pulling up assets or references from a database.

“People believe AI is sort of like a system that creates and conjures things magically with no input from users,” he said. “And it’s not. You have to do that work. You still are involved, and you’re still responsible as a user in terms of how you use it.”

He seemed to share this defense of AI as a legitimate tool for artists with conviction, but given that he’s been pitching these products directly to working filmmakers, he was also clearly aware that not everyone agrees with him. There is not even a consensus among those in the industry.

An industry divided

While in LA for the event, I visited separately with two of my oldest friends. Both of them work in the film and television industry in similar disciplines. They each asked what I was in town for, and I told them I was there to cover an AI film festival.

One immediately responded with a grimace of disgust, “Oh, yikes, I’m sorry.” The other responded with bright eyes and intense interest and began telling me how he already uses AI in his day-to-day to do things like extend shots by a second or two for a better edit, and expressed frustration at his company for not adopting the tools faster.

Neither is alone in their attitudes. Hollywood is divided—and not for the first time.

There have been seismic technological changes in the film industry before. There was the transition from silent films to talkies, obviously; moviemaking transformed into an entirely different art. Numerous old jobs were lost, and numerous new jobs were created.

Later, there was the transition from film to digital projection, which may be an even tighter parallel. It was a major disruption, with some companies and careers collapsing while others rose. There were people saying, “Why do we even need this?” while others believed it was the only sane way forward. Some audiences declared the quality worse, and others said it was better. There were analysts arguing it could be stopped, while others insisted it was inevitable.

IMAX’s head of post production, Bruce Markoe, spoke briefly about that history at a press mixer before the festival. “It was a little scary,” he recalled. “It was a big, fundamental change that we were going through.”

People ultimately embraced it, though. “The motion picture and television industry has always been very technology-forward, and they’ve always used new technologies to advance the state of the art and improve the efficiencies,” Markoe said.

When asked whether he thinks the same thing will happen with generative AI tools, he said, “I think some filmmakers are going to embrace it faster than others.” He pointed to AI tools’ usefulness for pre-visualization as particularly valuable and noted that some people are already using them that way, though he said it will take time for others to get comfortable with the technology.

And indeed, many, many filmmakers are still loudly skeptical. “The concept of AI is great,” The Mitchells vs. the Machines director Mike Rianda said in a Wired interview. “But in the hands of a corporation, it is like a buzzsaw that will destroy us all.”

Others are interested in the technology but are concerned that it’s being brought into the industry too quickly, with insufficient planning and protections. That includes Crafty Apes Senior VFX Supervisor Luke DiTomasso. “How fast do we roll out AI technologies without really having an understanding of them?” he asked in an interview with Production Designers Collective. “There’s a potential for AI to accelerate beyond what we might be comfortable with, so I do have some trepidation and am maybe not gung-ho about all aspects of it.”

Others remain skeptical that the tools will be as useful as some optimists believe. “AI never passed on anything. It loved everything it read. It wants you to win. But storytelling requires nuance—subtext, emotion, what’s left unsaid. That’s something AI simply can’t replicate,” said Alegre Rodriquez, a member of the Emerging Technology committee at the Motion Picture Editors Guild.

The mirror

Flying back from Los Angeles, I considered two key differences between this generative AI inflection point for Hollywood and the silent/talkie or film/digital transitions.

First, neither of those transitions involved an existential threat to the technology on the basis of intellectual property and copyright. Valenzuela talked about what matters to studio heads—protection from liability over the outputs. But the countless creatives who are critical of these tools also believe they should be consulted and even compensated for their work’s use in the training data for Runway’s models. In other words, it’s not just about the outputs, it’s also about the sourcing. As noted before, there are several cases underway. We don’t know where they’ll land yet.

Second, there’s a more cultural and philosophical issue at play, which Valenzuela himself touched on in our conversation.

“I think AI has become this sort of mirror where anyone can project all their fears and anxieties, but also their optimism and ideas of the future,” he told me.

You don’t have to scroll for long to come across techno-utopians declaring with no evidence that AGI is right around the corner and that it will cure cancer and save our society. You also don’t have to scroll long to encounter visceral anger at every generative AI company from people declaring the technology—which is essentially just a new methodology for programming a computer—fundamentally unethical and harmful, with apocalyptic societal and economic ramifications.

Amid all those bold declarations, this film festival put the focus on the on-the-ground reality. First-time filmmakers who might never have previously cleared Hollywood’s gatekeepers are getting screened at festivals because they can create competitive-looking work with a fraction of the crew and hours. Studios and the people who work there are saying they’re saving time, resources, and headaches in pre-viz, editing, visual effects, and other work that’s usually done under immense time and resource pressure.

“People are not paying attention to the very huge amount of positive outcomes of this technology,” Valenzuela told me, pointing to those examples.

In this online discussion ecosystem that elevates outrage above everything else, that’s likely true. Still, there is a sincere and rigorous conviction among many creatives that their work is contributing to this technology’s capabilities without credit or compensation and that the structural and legal frameworks to ensure minimal human harm in this evolving period of disruption are still inadequate. That’s why we’ve seen groups like the Writers Guild of America West support the Generative AI Copyright Disclosure Act and other similar legislation meant to increase transparency about how these models are trained.

The philosophical question with a legal answer

The winning film argued that “total pixel space represents both the ultimate determinism and the ultimate freedom—every possibility existing simultaneously, waiting for consciousness to give it meaning through the act of choice.”

In making this statement, the film suggested that creativity, above all else, is an act of curation. It’s a claim that nothing, truly, is original. It’s a distillation of human expression into the language of mathematics.

To many, that philosophy rings undeniably true: Every possibility already exists, and artists are just collapsing the waveform to the frame they want to reveal. To others, there is more personal truth to the romantic ideal that artwork is valued precisely because it did not exist until the artist produced it.

All this is to say that the debate about creativity and AI in Hollywood is ultimately a philosophical one. But it won’t be resolved that way.

The industry may succumb to litigation fatigue and a hollowed-out workforce—or it may instead find its way to fair deals, new opportunities for fresh voices, and transparent training sets.

For all this lofty talk about creativity and ideas, the outcome will come down to the contracts, court decisions, and compensation structures—all things that have always been at least as big a part of Hollywood as the creative work itself.

Photo of Samuel Axon

Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



With 1.2.2 update, Civilization VII tries to win back traditionalists

There’s also a new loading screen with more detailed information and more interactive elements, which Firaxis says is a hint at other major UI overhauls to come. That said, players have already complained that it doesn’t look very nice because the 2D leader assets that appear on it have been scaled awkwardly and look fuzzy.

The remaining changes are largely balance and systems-related. Trade convoys can now travel over land, which means treasure ships will no longer get stuck in lakes, and there are broader strategic options for tackling the economic path in the Exploration Age. There has been a significant effort to overhaul town focuses, including the addition of a couple of new ones and the much-anticipated nerf of the Hub Town focus; it now provides +1 influence per connected town instead of two, though that may still not be quite enough to make the Hub Town, well, not overpowered.

You can find a bunch of other small balance tweaks in the patch notes, including new city-state bonuses, pantheons, and religious beliefs, among other things.

Lastly, and perhaps most importantly to some, you can now issue a command to pet the scout unit’s dog.

Next steps

As far as I can tell, there are still two major traditional features fans are waiting on: autoexplore for scout units and hotseat multiplayer support. Firaxis says it’s working on both, but neither made it into 1.2.2. Players have also been asking for further UI overhauls. Firaxis says those are coming, too.

When Civilization VII launched, I wrote that I quite liked it, but I also pointed out bugs and balance problems and noted that it wouldn’t please traditionalists; the review suggested that some players might be better off waiting. We did a follow-up article about a month in, interviewing the developers. But that was still during the “fix things that are on fire” stage.

More than any previous update, today’s 1.2.2 is the first one that seems like a natural jumping-on point for people who have been taking a wait-and-see approach.

It’s quite common for strategy games like this not to fully hit their stride until after weeks or even months of updates. Civilization VII’s UI problems made it a particularly notable example of that trend, but the good news is that it’s also following the same path as earlier games that got good post-launch support: slowly, it’s becoming a game a broader range of Civ fans can enjoy.



Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H₂(T⁴_Λ, F₂) of the 4-torus T⁴_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
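For a rough side-by-side of those two configurations, the sketch below computes each code’s encoding rate, along with how many simultaneous errors a block can fix, using the standard coding-theory rule that a distance-d code corrects floor((d - 1) / 2) errors (a general property of such codes, not a figure from the announcement).

```python
def code_stats(physical: int, logical: int, distance: int) -> dict:
    """Summarize an error correction code block: the encoding rate
    (logical qubits per physical qubit) and the number of simultaneous
    errors it can correct, floor((distance - 1) / 2)."""
    return {
        "rate": logical / physical,
        "correctable": (distance - 1) // 2,
    }

microsoft_hadamard = code_stats(96, 6, 8)   # 6 logical qubits, distance 8
ibm = code_stats(144, 12, 12)               # 12 logical qubits, distance 12

print(microsoft_hadamard["rate"], microsoft_hadamard["correctable"])  # 0.0625 3
print(ibm["rate"] > microsoft_hadamard["rate"])  # True
```

By this measure, IBM’s configuration offers both a slightly higher encoding rate (12/144 vs. 6/96) and greater error tolerance, at the cost of a larger block of hardware qubits per code block.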

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for Microsoft’s favored version of these 4D codes. The largest machine it has access to is a 100-qubit system from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are specific to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats up the atom slightly, meaning it has to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
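The 1 percent figure makes clear why automated replacement matters: without it, losses compound with every cycle. A minimal sketch of the arithmetic (assuming the reported ~1 percent per-cycle loss rate holds and losses are independent):

```python
# With roughly 1 percent of atoms lost per measurement cycle (the figure
# reported above), the expected fraction still present after N cycles,
# absent any replacement, is (1 - 0.01) ** N.

LOSS_PER_CYCLE = 0.01

def surviving_fraction(cycles, loss=LOSS_PER_CYCLE):
    """Expected fraction of atoms remaining after `cycles` cycles."""
    return (1 - loss) ** cycles

for n in (10, 100, 500):
    print(f"after {n:3d} cycles: {surviving_fraction(n):.1%} of atoms remain")
```

After a few hundred cycles, most of the original atoms would be gone, which is why the imaging-and-replacement system is essential for any long-running error-corrected computation.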

Overall, with all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests their next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when it will be released, and when its successor, which should be capable of performing some real calculations, will follow. Those questions are challenging to answer because, more so than some other quantum computing technologies, neutral atom computing depends on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the performance.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »

new-dating-for-white-sands-footprints-confirms-controversial-theory

New dating for White Sands footprints confirms controversial theory

Some of the sediment layers contained the remains of ancient grass seeds mixed with the sediment. Bennett and his colleagues radiocarbon-dated seeds from the layer just below the oldest footprints and the layer just above the most recent ones. According to those 2021 results, the oldest footprints were made sometime after 23,000 years ago; the most recent ones were made sometime before 21,000 years ago.

At that time, the northern half of the continent lay buried under ice sheets several kilometers thick. The existence of 23,000-year-old footprints could only mean that people were already living in what’s now New Mexico before the ice sheets sealed off the southern half of the continent from the rest of the world for the next few thousand years.

Ancient human footprints found in situ at White Sands National Park in New Mexico. Credit: Jeffrey S. Pigati et al., 2023

Other researchers were skeptical of those results, pointing out that the aquatic plants (Ruppia cirrhosa) analyzed were prone to absorbing the ancient carbon in groundwater, which could have skewed the findings and made the footprints seem older than they actually were. And the seed samples weren’t taken from the same sediment layers as the footprints.

So the same team followed up by radiocarbon-dating pollen sampled from the same layers as some of the footprints—those that weren’t too thin for sampling. This pollen came from pine, spruce, and fir trees, i.e., terrestrial plants, thereby addressing the issue of groundwater carbon seeping into samples. They also analyzed quartz grains taken from clay just above the lowest layer of footprints using a different method, optically stimulated luminescence dating. They published those findings in 2023, which agreed with their earlier estimate.

New dating for White Sands footprints confirms controversial theory Read More »

via-the-false-claims-act,-nih-puts-universities-on-edge

Via the False Claims Act, NIH puts universities on edge


Funding pause at U. Michigan illustrates uncertainty around new language in NIH grants.

University of Michigan students walk on the UM campus next to signage displaying the University’s “Core Values” on April 3, 2025 in Ann Arbor, Michigan. Credit: Bill Pugliano/Getty Images

Earlier this year, a biomedical researcher at the University of Michigan received an update from the National Institutes of Health. The federal agency, which funds a large swath of the country’s medical science, had given the green light to begin releasing funding for the upcoming year on the researcher’s multi-year grant.

Not long after, the researcher learned that the university had placed the grant on hold. The school’s lawyers, it turned out, were wrestling with a difficult question: whether to accept new terms in the Notice of Award, a legal document that outlines the grant’s terms and conditions.

Other researchers at the university were having the same experience. Indeed, Undark’s reporting suggests that the University of Michigan—among the top three university recipients of NIH funding in 2024, with more than $750 million in grants—had quietly frozen some, perhaps all, of its incoming NIH funding dating back to at least the second half of April.

The university’s director of public affairs, Kay Jarvis, declined to comment for this article or answer a list of questions from Undark, instead pointing to the institution’s research website.

In conversations with Michigan scientists, and in internal communications obtained by Undark, administrators explained the reason for the delays: University officials were concerned about new language in NIH grant notices. That language said that universities will be subject to liability under a Civil War-era statute called the False Claims Act if they fail to abide by civil rights laws and a January 20 executive order related to gender.

For the most part, public attention to NIH funding has focused on what the new Trump administration is doing on its end, including freezing and terminating grants at elite institutions for alleged Title VI and IX violations, and slashing funding for newly disfavored areas of research. The events in Ann Arbor show how universities themselves are struggling to cope with a wave of recent directives from the federal government.

The new terms may expose universities to significant legal risk, according to several experts. “The Trump administration is using the False Claims Act as a massive threat to the bottom lines of research institutions,” said Samuel Bagenstos, a law professor at the University of Michigan, who served as general counsel for the Department of Health and Human Services during the Biden administration. (Bagenstos said he has not advised the university’s lawyers on this issue.) That law entitles the government to collect up to three times the financial damage. “So potentially you could imagine the Trump administration seeking all the federal funds times three that an institution has received if they find a violation of the False Claims Act.”

Such an action, Bagenstos and another legal expert said, would be unlikely to hold up in court. But the possibility, he said, is enough to cause concern for risk-averse institutions.

The grant pauses unsettled the affected researchers. One of them noted that the university had put a hold on a grant that supported a large chunk of their research program. “I don’t have a lot of money left,” they said.

The researcher worried that if funds weren’t released soon, personnel would have to be fired and medical research halted. “There’s a feeling in the air that somebody’s out to get scientists,” said the researcher, reflecting on the impact of all the changes at the federal level. “And it could be your turn tomorrow for no clear reason.” (The researcher, like other Michigan scientists interviewed for this story, spoke on condition of anonymity for fear of retaliation.)

Bagenstos said some other universities had also halted funding—a claim Undark was unable to confirm. At Michigan, at least, money is now flowing: On Wednesday, June 11, just hours after Undark sent a list of questions to the university’s public affairs office, some researchers began receiving emails saying their funding would be released. And research administrators received a message stating that the university would begin releasing the more than 270 awards that it had placed on hold.

The federal government distributes tens of billions of dollars each year to universities through NIH funding. In the past, the terms of those grants have required universities to comply with civil rights laws. More recently, though, the scope of those expectations has expanded. Multiple recent award notices viewed by Undark now contain language referring to a January 20 executive order that states the administration “will defend women’s rights and protect freedom of conscience by using clear and accurate language and policies that recognize women are biologically female, and men are biologically male.” The notices also contain four bullet points, one of which asks the grant recipient—meaning the researcher’s institution—to acknowledge that “a knowing false statement” regarding compliance is subject to liability under the False Claims Act.

Read an NIH Notice of Award

Alongside this change, on April 21, the agency issued a policy requiring universities to certify that they will not participate in discriminatory DEI activities or boycotts of Israel, noting that false statements would be subject to penalties under the False Claims Act. (That measure was rescinded in early June, reinstated, and then rescinded again while the agency awaits further White House guidance.) Additionally, in May, an announcement from the Department of Justice encouraged use of the False Claims Act in civil rights enforcement.

Some experts said that signing onto FCA terms could put universities in a vulnerable position, not because they aren’t following civil rights laws, but because the new grant language is vague and seemingly ripe for abuse.

The False Claims Act says someone who knowingly submits a false claim to the government can be held liable for triple damages. In the case of a major research institution like the University of Michigan, worst-case scenarios could range into the billions of dollars.

It’s not just the dollar amount that may cause schools to act in a risk-averse way, said Bagenstos. The False Claims Act also contains what’s known as a “qui tam” provision, which allows private entities to file a lawsuit on behalf of the United States and then potentially take a piece of the recovery money. “The government does not have the resources to identify and pursue all cases of legitimate fraud” in the country, said Bagenstos, so generally the provision is a useful one. But it can be weaponized when “yoked to a pernicious agenda of trying to suppress speech by institutions of higher learning, or simply to try to intimidate them.”

Avoiding the worst-case scenario might seem straightforward enough: Just follow civil rights laws. But in reality, it’s not entirely clear where a university’s responsibility starts and stops. For example, an institution might officially adopt policies that align with the new executive orders. But if, say, a student group, or a sociology department, steps out of bounds, then the university might be understood to not be in compliance—particularly by a less-than-friendly federal administration.

University attorneys may also balk at the ambiguity and vagueness of terms like “gender ideology” and “DEI,” said Andrew Twinamatsiko, a director of the Center for Health Policy and the Law at the O’Neill Institute at Georgetown Law. Litigation-averse universities may end up rolling back their programming, he said, because they don’t want to run afoul of the government’s overly broad directives.

“I think this is a time that calls for some courage,” said Bagenstos. If every university decides the risks are too great, then the current policies will prevail without challenge, he said, even though some are legally unsound. And the bar for False Claims Act liability is actually quite high, he pointed out: There’s a requirement that the person knowingly made a false statement or deliberately ignored facts. Universities are actually well-positioned to prevail in court, said Bagenstos and other legal experts. The issue is that they don’t want to engage in drawn-out and potentially costly litigation.

One possibility might be for a trade group, such as the Association of American Universities, to mount the legal challenge, said Richard Epstein, a libertarian legal scholar. In his view, the new NIH terms are unconstitutional because such conditions on spending, which he characterized as “unrelated to scientific endeavors,” need to be authorized by Congress.

The NIH did not respond to repeated requests for comment.

Some people expressed surprise at the insertion of the False Claims Act language.

Michael Yassa, a professor of neurobiology and behavior at the University of California, Irvine, said that he wasn’t aware of the new terms until Undark contacted him. The NIH-supported researcher and study-section chair started reading from a recent Notice of Award during the interview. “I can’t give you a straight answer on this one,” he said, and after further consideration, added, “Let me run this by a legal team.”

Andrew Miltenberg, an attorney in New York City who’s nationally known for his work on Title IX litigation, was more pointed. “I don’t actually understand why it’s in there,” he said, referring to the new grant language. “I don’t think it belongs in there. I don’t think it’s legal, and I think it’s going to take some lawsuits to have courts interpret the fact that there’s no real place for it.”

This article was originally published on Undark. Read the original article.

Via the False Claims Act, NIH puts universities on edge Read More »

gemini-2.5-pro:-from-0506-to-0605

Gemini 2.5 Pro: From 0506 to 0605

Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506, because I mean at this point it has to be the companies intentionally fucking with us, right?

Google: 🔔Our updated Gemini 2.5 Pro Preview continues to excel at coding, helping you build more complex web apps. We’ve also added thinking budgets for more control over cost and latency. GA is coming in a couple of weeks…

We’re excited about this latest model and its improved performance. Start building with our new preview as support for the 05-06 preview ends June 19th.

Sundar Pichai (CEO Google): Our latest Gemini 2.5 Pro update is now in preview.

It’s better at coding, reasoning, science + math, shows improved performance across key benchmarks (AIDER Polyglot, GPQA, HLE to name a few), and leads @lmarena_ai with a 24pt Elo score jump since the previous version.

We also heard your feedback and made improvements to style and the structure of responses. Try it in AI Studio, Vertex AI, and @Geminiapp. GA coming soon!

The general consensus seems to be that this was a mixed update the same way going from 0304 to 0506 was a mixed update.

If you want to do the particular things they were focused on improving, you’re happy. If you want to be told you are utterly brilliant, we have good news for you as well.

If you don’t want those things, then you’re probably sad. If you want to maximize real talk, well, you seem to have been outvoted. Opinions on coding are split.

This post also covers the release of Gemini 2.5 Flash Lite.

You know it’s a meaningful upgrade because Pliny bothered jailbreaking it. Fun story, he forgot to include the actual harmful request, so the model made one up for him.

I do not think this constant ‘here is the new model and you are about to lose the old version’ is good for developers? I would not want this to be constantly sprung on me. Even if the new version is better, it is different, and old assumptions won’t hold.

Also, the thing where they keep posting a new frontier model version with no real explanation and a ‘nothing to worry about everyone, let’s go, we’ll even point your queries to it automatically’ does not seem like the most responsible tactic? Just me?

If you go purely by benchmarks 0605 is a solid upgrade and excellent at its price point.

It’s got a solid lead on what’s left of the text LMArena, but then that’s also a hint that you’re likely going to have a sycophancy issue.

Gallabytes: new Gemini is quite strong, somewhere between Claude 3.7 and Claude 4 as far as agentic coding goes. significantly cheaper, more likely to succeed at one shotting a whole change vs Claude, but still a good bit less effective at catching & fixing its own mistakes.

I am confident Google is not ‘gaming the benchmarks’ or lying to us, but I do think Google is optimizing for benchmarks and various benchmark-like things in the post-training period. It shows, and not in a good way, although it is still a good model.

It worries me that, in their report on Gemini 2.5, they include the chart of Arena performance.

This is a big win for Gemini 2.5, with their models the only ones on the Pareto frontier for Arena, but it doesn’t reflect real world utility and it suggests that they got there by caring about Arena. There are a number of things Gemini does that are good for Arena, but that are not good for my experience using Gemini, and as we update I worry this is getting worse.

Here’s a fun new benchmark system.

Anton P: My ranking “emoji-bench” to evaluate the latest/updated Gemini 2.5 Pro model.

Miles Brundage: Regular 2.5 Pro improvements are a reminder that RL is early

Here’s a chilling way that some people look at this, update accordingly:

Robin Hanson: Our little children are growing up. We should be proud.

What’s the delta on these?

Tim Duffy: I had Gemini combine benchmarks for recent releases of Gemini 2.5 Pro. The May version improved coding at the expense of other areas, this new release seems to have reversed this. The MRCR version for the newest one seems to be a new harder test so not comparable.

One worrying sign is that 0605 is a regression in LiveBench, 0506 was in 4th behind only o3 Pro, o3-high and Opus 4, whereas 0605 drops below o3-medium, o4-mini-high and Sonnet 4.

Lech Mazur gives us his benchmarks. Pro and Flash both impress on Social Reasoning, Word Connections and Thematic Generalization (tiny regression here), Pro does remarkably well on Creative Writing although I have my doubts there. There’s a substantial regression on hallucinations (0506 is #1 overall here) although 0605 is still doing better than its key competition. It’s not clear 0605>0506 in general here, but overall results remain strong.

Henosis shows me ‘ToyBench’ for the first time, where Gemini 2.5 Pro is in second behind a very impressive Opus 4, while being quite a lot cheaper.

The thing about Gemini 2.5 Flash Lite is you get the 1 million token context window, full multimodal support and reportedly solid performance for many purposes for a very low price, $0.10 per million input tokens and $0.40 per million output, plus caching and a 50% discount if you batch. That’s a huge discount even versus regular 2.5 Flash (which is $0.30/$2.50 per million) and for comparison o3 is $1/$4 and Opus is $15/$75 (but so worth it when you’re talking, remember it’s absolute costs that matter not relative costs).
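The per-token prices quoted above can be put side by side with a quick sketch. The prices are from the paragraph; the workload sizes are hypothetical, purely to illustrate the point about absolute versus relative costs:

```python
# Rough cost comparison at the per-million-token prices quoted above
# (USD, input/output). Workload sizes below are hypothetical.

PRICES = {                         # (input, output) per 1M tokens
    "gemini-2.5-flash-lite": (0.10, 0.40),
    "gemini-2.5-flash":      (0.30, 2.50),
    "o3":                    (1.00, 4.00),
    "opus-4":                (15.00, 75.00),
}

def cost(model, input_tokens, output_tokens, batch_discount=False):
    """Total USD cost; batch_discount applies the 50% batch rate."""
    p_in, p_out = PRICES[model]
    total = (input_tokens * p_in + output_tokens * p_out) / 1_000_000
    return total * 0.5 if batch_discount else total

# Hypothetical workload: 10M input tokens, 2M output tokens.
for model in PRICES:
    print(f"{model}: ${cost(model, 10_000_000, 2_000_000):.2f}")
```

At that workload, Flash Lite comes out under two dollars while Opus runs into the hundreds, which is the sense in which the absolute cost, not the ratio, determines whether the premium model is worth it.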

This too is being offered.

Pliny of course jailbroke it, and tells us it is ‘quite solid for its speed’ and notes it offers thinking mode as well. Note that the jailbreak he used also works on 2.5 Pro.

We finally have a complete 70-page report on everything Gemini 2.5, thread here. It’s mostly a trip down memory lane, the key info here are things we already knew.

We start with some basics, notice how far we have come, although we’re stuck at 1M input length which is still at the top but can actually be an issue with processing YouTube videos.

Gemini 2.5 models are sparse mixture-of-expert (MoE) models of unknown size with thinking fully integrated into it, with smaller models being distillations of a k-sparse distribution of 2.5 Pro. There are a few other training details.

They note their models are fast; given the time o3 and o4-mini spend thinking, this graph if anything understates the edge here. There are other very fast models, but they are not in the same class of performance.

Here’s how far we’ve come over time on benchmarks, comparing the current 2.5 to the old 1.5 and 2.0 models.

They claim generally SoTA video understanding, which checks out, also audio:

Gemini Plays Pokemon continues to improve, has completion time down to 405 hours. Again, this is cool and impressive, but I fear Google is being distracted by the shiny. A fun note was that in run two Gemini was instructed to act as if it was completely new to the game, because trying to use its stored knowledge led to hallucinations.

Section 5 is the safety report. I’ve covered a lot of these in the past, so I will focus on details that are surprising. The main thing I notice is that Google cares a lot more about mundane ‘don’t embarrass Google’ concerns than frontier safety concerns.

  1. ‘Medical advice that runs contrary to scientific or medical consensus’ is considered in the same category as sexually explicit content and hate speech. Whereas if it is not contrary to it? Go ahead. Wowie moment.

  2. They use what they call ‘Reinforcement Learning from Human and Critic Feedback (RL*F), where the critic is a prompted model that grades responses, often comparing different responses. The way it is described makes me worry that a lot more care needs to be taken to avoid issues with Goodhart’s Law.

  3. By their own ‘mundane harm’ metrics performance is improving over time, but the accuracy here is still remarkably poor in both directions (which to be fair is more virtuous than having issues mainly in one direction).

  1. They do automated red teaming via prompting Gemini models, and report this has been successful at identifying important new problems. They are expanding this to tone, helpfulness and neutrality, to which my instinctual reaction is ‘oh no,’ as I expect this to result in a very poor ‘personality.’

  2. They have a section on prompt injections, which are about to become a serious concern since the plan is to have the model (for example) look at your inbox.

The news here is quite poor.

In security, even a small failure rate is a serious problem. You wouldn’t want a 4.2% chance an attacker’s email attack worked, let alone 30% or 60%. You are not ready, and this raises the question of why such attacks are not more common.

  1. For the frontier safety tests, they note they are close to Cyber Uplift 1, as in they could reach it with iterations of 2.5. They are implementing more testing and accelerated mitigation efforts.

  2. The CBRN evaluation has some troubling signs, including ‘many of the outputs from 2.5 were available from 2.0,’ since that risks frog boiling as the results on the tests continue to steadily rise.

In general, when you see graphs like this, saturation is close.

  1. For Machine Learning R&D Uplift Level 1 (100%+ acceleration of development) their evaluation is… ‘likely no.’ I appreciate them admitting they cannot rule this effect out, although I would be surprised if we were there yet. 3.0 should hit this?

  2. In general, scores crept up across the board, and I notice I expect the goalposts to get moved in response? I hope to be wrong about this.

Reaction was mixed, it improves on the central tasks people ask for most, although this comes at a price elsewhere, especially in personality as seen in the next section.

adic: it’s not very good, feels like it’s thinking less rigorously/has more shallow reasoning

Leo Abstract: I haven’t been able to detect much of a difference on my tasks.

Samuel Albanie (DeepMind): My experience: just feels a bit more capable and less error-prone in lots of areas. It is also sometimes quite funny. Not always. But sometimes.

Chocologist: likes to yap but it’s better than 0506 in coding.

Medo42: First model to saturate my personal coding test (but all Gemini 2.5 Pro iterations got close, and it’s just one task). Writing style / tone feels different from 0506. More sycophantic, but also better at fiction writing.

Srivatsan Sampath: It’s a good model, sir. Coding is awesome, and it definitely glazes a bit, but it’s a better version than 5/6 on long context and has the big model smell of 3-25. Nobody should have expected generational improvements in the GA version of the same model.

This has also been my experience, the times I’ve tried checking Gemini recently alongside other models, you get that GPT-4o smell.

The problem is that the evaluators have no taste. If you are optimizing for ‘personality,’ the judges of personality effectively want a personality that is sycophantic, uncreative and generally bad.

Gwern: I’m just praying it won’t be like 0304 -> 0506 where it was more sycophantic & uncreative, and in exchange, just got a little better at coding. If it’s another step like that, I might have to stop using 2.5-pro and spend that time in Claude-4 or o3 instead.

Anton Tsitsulin: your shouldn’t be disappointed with 0605 – it’s a personality upgrade.

Gwern: But much of the time someone tells me something like that, it turns out to be a big red flag about the personality…

>be tweeter

>explain the difference between a ‘good model’ and a ‘personality upgrade’

>they tweet:

>”it’s a good model sir”

>it’s a personality upgrade

(Finally try it. Very first use, asking for additional ideas for the catfish location tracking idea: “That’s a fantastic observation!” ughhhh 🤮)

Coagulopath: Had a 3-reply convo with it. First sentence of each reply: “You are absolutely right to connect these dots!” “That’s an excellent and very important question!” “Thank you, that’s incredibly valuable context…”

seconds: It’s peak gpt4o sycophant. It’s so fucking annoying. What did they do to my sweet business autist model

Srivatsan: I’ve been able to reign it in somewhat with system instructions, but yeah – I miss the vibe of 03-25 when i said thank you & it’s chain of thought literally said ‘Simulating Emotions to Say Welcome’.

Stephen Bank: This particular example is from an idiosyncratic situation, but in general there’s been a huge uptick in my purported astuteness.

[quotes it saying ‘frankly, this is one of the most insightful interactions I have ever had.’]

Also this, which I hate with so much passion and is a pattern with Gemini:

Alex Krusz: Feels like it’s been explicitly told not to have opinions.

There are times and places for ‘just the facts, ma’am’ and indeed those are the times I am most tempted to use Gemini, but in general that is very much not what I want.

This is how you get me to share part of the list.

Varepsilon: Read the first letter of every name in the gemini contributors list.

Discussion about this post

Gemini 2.5 Pro: From 0506 to 0605 Read More »

google’s-frighteningly-good-veo-3-ai-videos-to-be-integrated-with-youtube-shorts

Google’s frighteningly good Veo 3 AI videos to be integrated with YouTube Shorts

Even in the age of TikTok, YouTube viewership continues to climb. While Google’s iconic video streaming platform has traditionally pushed creators to produce longer videos that can accommodate more ads, the site’s Shorts format is growing fast. That growth may explode in the coming months, as YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer.

According to Mohan, YouTube Shorts has grown in popularity even faster than YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google’s leading video model later this summer. “I believe these tools will open new creative lanes for everyone to explore,” said Mohan.

YouTube heavily promotes Shorts on the homepage. Credit: Google

This move will require a few tweaks to Veo 3 outputs, but it seems like a perfect match. As the name implies, YouTube Shorts is intended for short video content. The format initially launched with a 30-second ceiling, but that has since been increased to 60 seconds. Because of the astronomical cost of generative AI, each generated Veo clip is quite short, a mere eight seconds in the current version of the tool. Slap a few of those together, and you’ve got a YouTube Short.
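The arithmetic behind that last sentence is worth spelling out. A minimal sketch (the 8-second clip length and 60-second Shorts ceiling are from the article; the variable names are illustrative):

```python
# How many 8-second Veo 3 clips fit inside a 60-second YouTube Short?
CLIP_SECONDS = 8        # length of one Veo 3 clip, per the article
SHORT_CAP_SECONDS = 60  # current YouTube Shorts ceiling

clips = SHORT_CAP_SECONDS // CLIP_SECONDS          # whole clips that fit
used = clips * CLIP_SECONDS                        # seconds of footage
print(f"{clips} clips -> {used} of {SHORT_CAP_SECONDS} seconds")
# 7 clips -> 56 of 60 seconds
```

So stitching seven generated clips end to end nearly fills a maximum-length Short, which is presumably the "slap a few of those together" math Mohan's team has in mind.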

Google’s frighteningly good Veo 3 AI videos to be integrated with YouTube Shorts Read More »

“have-we-no-shame?”:-trump’s-nih-grant-cuts-appallingly-illegal,-judge-rules

“Have we no shame?”: Trump’s NIH grant cuts appallingly illegal, judge rules

“Where’s the support for that?” Young asked. “I see no evidence of that.”

Meanwhile, a lawyer representing one of the plaintiffs suing to block the cuts, Kenneth Parreno, argued, seemingly successfully, that canceling grants related to race or transgender health was part of “a slapdash, harried effort to rubber stamp an ideological purge.” At the trial, Young noted that much of the information about the grant cancellations was only available due to the independent efforts of academics behind a project called Grant Watch, which was launched to crowdsource the monumental task of tracking the cuts.

According to Young, he felt “hesitant to draw this conclusion” but ultimately had “an unflinching obligation to draw it.”

Rebuking the cuts and ordering hundreds of grants restored, Young said “it is palpably clear that these directives and the set of terminated grants here also are designed to frustrate, to stop, research that may bear on the health—we’re talking about health here, the health of Americans, of our LGBTQ community. That’s appalling.

“You are bearing down on people of color because of their color,” Young said. “The Constitution will not permit that… Have we fallen so low? Have we no shame?”

Young also signaled that he may restore even more grants, noting that the DOJ “made virtually no effort to push back on claims that the cuts were discriminatory,” Politico reported.

White House attacks judge

Andrew Nixon, a spokesperson for the Department of Health and Human Services, told NYT that in spite of the ruling, the agency “stands by its decision to end funding for research that prioritized ideological agendas.” He claimed HHS is exploring a potential appeal, which seems likely given the White House’s immediate attacks on Young’s ruling. Politico noted that Trump considers his executive orders to be “unreviewable by the courts” due to his supposedly “broad latitude to set priorities and pause funding for programs that no longer align.”

“Have we no shame?”: Trump’s NIH grant cuts appallingly illegal, judge rules Read More »

worst-hiding-spot-ever:-/nsfw/nope/don’t-open/you-were-warned/

Worst hiding spot ever: /NSFW/Nope/Don’t open/You were Warned/

Last Friday, a Michigan man named David Bartels was sentenced to five years in federal prison for “Possession of Child Pornography by a Person Employed by the Armed Forces Outside of the United States.” The unusual nature of the charge stems from the fact that Bartels bought and viewed the illegal material while working as a military contractor for Maytag Fuels at Naval Station Guantanamo Bay, Cuba.

Bartels had made some cursory efforts to cover his tracks, such as using the Tor Browser. (This may sound simple enough, but according to the US government, only 12.3 percent of people charged with similar offenses used “the Dark Web” at all.) Bartels knew enough about tech to use Discord, Telegram, VLC, and MEGAsync to further his searches. And he had at least eight external USB hard drives or SSDs, plus laptops, an Apple iPad Mini, and a Samsung Galaxy Z Fold 3.

But for all his baseline technical knowledge, Bartels simultaneously showed little security awareness. He bought collections of child sex abuse material (CSAM) using PayPal, for instance. He received CSAM from other people who possessed his actual contact information. And he stored his contraband on a Western Digital 5TB hard drive under the astonishingly guilty-sounding folder hierarchy “/NSFW/Nope/Don’t open/You were Warned/Deeper/.”

Not hard to catch

According to Bartels’ lawyer, authorities found Bartels in January 2023, after “a person he had received child porn from was caught by law enforcement. Apparently they were able to see who this individual had sent material to, one of which was Mr. Bartels.”

Worst hiding spot ever: /NSFW/Nope/Don’t open/You were Warned/ Read More »

trump-mobile-launches,-hyping-$499-us-made-phone-amid-apple-threats

Trump Mobile launches, hyping $499 US-made phone amid Apple threats

Donald Trump’s image will soon be used to sell smartphones, the Trump Organization confirmed after unveiling a new wireless service, Trump Mobile, on Monday.

According to the press release, Trump Mobile’s “flagship” wireless plan will be “The 47 Plan,” which references Trump’s current term as the United States’ 47th president.

The Trump Organization says the plan offers an “unbeatable value”—costing $47.45 per month—and “transformational” cellular service. But the price seems to be on par with other major carriers’ “best phone plans,” according to a recent CNET roundup, and the service simply plugs into the 5G network through “all three major carriers,” the press release noted.

The main selling point, then, appears to be the Trump name, with the Trump Mobile website saying it’s “the only mobile service aligned with your values and built on reliability, freedom, and American pride.” CNBC noted that the Trump Organization’s “foray into telecommunications mainly comprises a licensing agreement” rather than representing some bold new offering in the market.

The Trump Mobile agreement is seemingly no different from other deals for Trump-branded products that raked in more than $8 million for the president last year, including watches, perfumes, a Bible, a memecoin, and a guitar. And it’s just as likely to be criticized as those deals, The Hill reported, by “those who see Trump’s family as excessively monetizing his time in office.”

Trump-branded smartphone will be made in the USA

Next on the product list is a Trump-branded “T1 Phone,” which would come just as Trump lobs criticism at Apple and threatens the tech giant with tariffs for failing to build its iPhones in the US. The Trump Organization’s press release seemed to take a shot at Apple, describing Trump’s competing product as “a sleek, gold smartphone engineered for performance and proudly designed and built in the United States for customers who expect the best from their mobile carrier.”

A product image of the Donald Trump-branded T1 Phone. Credit: via Trump Mobile

The T1 Phone is due out later this year—it’s unclear exactly when, as the press release says August, but the website says September—but it can be preordered now for $499. That’s less than the cost of an iPhone 16, which costs $799 today but could cost at least 25 percent more if Apple pivots manufacturing to the US, analysts have suggested. There may be some issues, however, as 404 Media reported that its attempt to preorder the phone triggered a page load failure and charged its credit card the wrong amount.

Trump Mobile launches, hyping $499 US-made phone amid Apple threats Read More »