AI video


YouTube bans two popular channels that created fake AI movie trailers

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

AI movie trailers

A selection of videos from the now-defunct Screen Culture channel. Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.



Disney invests $1 billion in OpenAI, licenses 200 characters for AI video app Sora

An AI-generated version of OpenAI CEO Sam Altman seen in a still capture from a video generated by Sora 2. Credit: OpenAI

Under the new agreement with Disney, Sora users will be able to generate short videos using characters such as Mickey Mouse, Darth Vader, Iron Man, Simba, and characters from franchises including Frozen, Inside Out, Toy Story, and The Mandalorian, along with costumes, props, vehicles, and environments.

The ChatGPT image generator will also gain official access to the same intellectual property, although that information was trained into these AI models long ago. What’s changing is that OpenAI will allow Disney-related content generated by its AI models to officially pass through its content moderation filters and reach the user, sanctioned by Disney.

On Disney’s end of the deal, the company plans to deploy ChatGPT for its employees and use OpenAI’s technology to build new features for Disney+. A curated selection of fan-made Sora videos will stream on the Disney+ platform starting in early 2026.

The agreement does not include any talent likenesses or voices. Disney and OpenAI said they have committed to “maintaining robust controls to prevent the generation of illegal or harmful content” and to “respect the rights of individuals to appropriately control the use of their voice and likeness.”

OpenAI CEO Sam Altman called the deal a model for collaboration between AI companies and studios. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences,” Altman said.

From adversary to partner

Money opens all kinds of doors, and the new partnership represents a dramatic reversal in Disney’s approach to OpenAI from just a few months ago. At that time, Disney and other major studios refused to participate in Sora 2 following its launch on September 30.



YouTube’s likeness detection has arrived to help stop AI doppelgängers

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Sneak Peek: Likeness Detection on YouTube.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so-stealable faces, but rules are rules.



Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo 3 made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce video in both landscape (16:9) and portrait (9:16) orientations.

Google previously said it would bring Veo video tools to YouTube Shorts, which use a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that it fits the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.



New AI model turns photos into explorable 3D worlds, with caveats

Training with automated data pipeline

Voyager builds on Tencent’s earlier HunyuanWorld 1.0, released in July. Voyager is also part of Tencent’s broader “Hunyuan” ecosystem, which includes the Hunyuan3D-2 model for text-to-3D generation and the previously covered HunyuanVideo for video synthesis.

To train Voyager, researchers developed software that automatically analyzes existing videos to process camera movements and calculate depth for every frame—eliminating the need for humans to manually label thousands of hours of footage. The system processed over 100,000 video clips from both real-world recordings and the aforementioned Unreal Engine renders.

A diagram of the Voyager world creation pipeline. Credit: Tencent
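The article doesn’t include code for that labeling pipeline, but the general idea of automatically attaching depth to every frame can be sketched with off-the-shelf tools. The snippet below runs the open-source MiDaS monocular depth model over a video purely as an illustration; the model choice, the placeholder file name clip.mp4, and the omission of camera-pose estimation are assumptions, not Tencent’s actual Voyager pipeline.

```python
import cv2
import torch

# Load the small MiDaS depth model and its matching preprocessing transform
# from PyTorch Hub (weights are downloaded on first run).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

cap = cv2.VideoCapture("clip.mp4")  # placeholder path, assumed for illustration
depth_maps = []
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = transform(frame_rgb)  # resize + normalize into a (1, 3, H, W) tensor
    with torch.no_grad():
        pred = midas(batch)  # relative inverse depth, shape (1, h, w)
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1),
            size=frame_rgb.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze()
    depth_maps.append(depth.numpy())  # one depth map per frame
cap.release()
print(f"Estimated depth for {len(depth_maps)} frames")
```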

The model demands serious computing power to run, requiring at least 60GB of GPU memory for 540p resolution, though Tencent recommends 80GB for better results. Tencent published the model weights on Hugging Face and included code that works with both single and multi-GPU setups.
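As a minimal sketch of pulling those published weights down, the huggingface_hub client can snapshot a repository locally. The repo id below is an assumption for illustration; check Tencent’s Hugging Face page for the exact name.

```python
from huggingface_hub import snapshot_download

# Download a local copy of the model repository (repo id is assumed here).
local_dir = snapshot_download(repo_id="tencent/HunyuanWorld-Voyager")
print("Weights downloaded to:", local_dir)
```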

The model comes with notable licensing restrictions. Like other Hunyuan models from Tencent, the license prohibits usage in the European Union, the United Kingdom, and South Korea. Additionally, commercial deployments serving over 100 million monthly active users require separate licensing from Tencent.

On the WorldScore benchmark developed by Stanford University researchers, Voyager reportedly achieved the highest overall score of 77.62, compared to 72.69 for WonderWorld and 62.15 for CogVideoX-I2V. The model reportedly excelled in object control (66.92), style consistency (84.89), and subjective quality (71.09), though it placed second in camera control (85.95) behind WonderWorld’s 92.98. WorldScore evaluates world generation approaches across multiple criteria, including 3D consistency and content alignment.

While these self-reported benchmark results seem promising, wider deployment still faces challenges due to the computational muscle involved. For developers needing faster processing, the system supports parallel inference across multiple GPUs using the xDiT framework. Running on eight GPUs delivers processing speeds 6.69 times faster than single-GPU setups.
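As a quick sanity check on that figure, a 6.69x speedup spread across eight GPUs works out to roughly 84 percent parallel efficiency:

```python
# Back-of-the-envelope scaling check using the numbers reported above.
gpus = 8
speedup = 6.69  # reported speedup over a single GPU
efficiency = speedup / gpus
print(f"Parallel efficiency on {gpus} GPUs: {efficiency:.0%}")  # about 84%
```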

Given the processing power required and the limitations in generating long, coherent “worlds,” it may be a while before we see real-time interactive experiences using a similar technique. But as we’ve seen so far with experiments like Google’s Genie, we’re potentially witnessing very early steps into a new interactive, generative art form.



Gemini can now turn your photos into video with Veo 3

Google’s Veo 3 videos have propagated across the Internet since the model’s debut in May, blurring the line between truth and fiction. Now, it’s getting even easier to create these AI videos. The Gemini app is gaining photo-to-video generation, allowing you to upload a photo and turn it into a video. You don’t have to pay anything extra for these Veo 3 videos, but the feature is only available to subscribers of Google’s Pro and Ultra AI plans.

When Veo 3 launched, it could conjure up a video based only on your description, complete with speech, music, and background audio. This has made Google’s new AI videos staggeringly realistic—it’s actually getting hard to identify AI videos at a glance. Using a reference photo makes it easier to get the look you want without tediously describing every aspect. This was an option in Google’s Flow AI tool for filmmakers, but now it’s in the Gemini app and web interface.

To create a video from a photo, you have to select “Video” from the Gemini toolbar. Once this feature is available, you can then add your image and prompt, including audio and dialogue. Generating the video takes several minutes—this process takes a lot of computation, which is why video output is still quite limited.



TikTok is being flooded with racist AI videos generated by Google’s Veo 3

The release of Google’s Veo 3 video generator in May represented a disconcerting leap in AI video quality. While many of the viral AI videos we’ve seen are harmless fun, the model’s pixel-perfect output can also be used for nefarious purposes. On TikTok, which may or may not be banned in the coming months, users have noticed a surplus of racist AI videos, courtesy of Google’s Veo 3.

According to a report from MediaMatters, numerous TikTok accounts have started posting AI-generated videos that use racist and antisemitic tropes in recent weeks. Most of the AI vitriol is aimed at Black people, depicting them as “the usual suspects” in crimes, absent parents, and monkeys with an affinity for watermelon. The content also targets immigrants and Jewish people. The videos top out at eight seconds and bear the “Veo” watermark, confirming they came from Google’s leading AI model.

The compilation video below has examples pulled from TikTok since the release of Veo 3, but be warned, it contains racist and antisemitic content. Some of the videos are shocking, which is likely the point—nothing drives engagement on social media like anger and drama. MediaMatters reports that the original posts have numerous comments echoing the stereotypes used in the video.

Hateful AI videos generated by Veo 3 spreading on TikTok.

Google has stressed safety when announcing new AI models—we’ve all seen an AI refuse to complete a task that runs afoul of its guardrails. And it’s never fun when you have genuinely harmless intentions, but the system throws a false positive and blocks your output. Google has mostly struck the right balance previously, but it appears that Veo 3 is more compliant. We’ve tested a few simple prompts with Veo 3 and found it easy to reproduce elements of these videos.

Clear but unenforced policies

TikTok’s terms of service ban this kind of content. “We do not allow any hate speech, hateful behavior, or promotion of hateful ideologies. This includes explicit or implicit content that attacks a protected group,” the community guidelines read. Despite this blanket ban on racist caricatures, the hateful Veo 3 videos appear to be spreading unchecked.



Curated realities: An AI film festival and the future of human expression


We saw 10 AI films and interviewed Runway’s CEO as well as Hollywood pros.

A still from Total Pixel Space, the Grand Prix winner at AIFF 2025.

Last week, I attended a film festival dedicated to shorts made using generative AI. Dubbed AIFF 2025, it was an event precariously balancing between two different worlds.

The festival was hosted by Runway, a company that produces models and tools for generating images and videos. In panels and press briefings, a curated list of industry professionals made the case for Hollywood to embrace AI tools. In private meetings with industry professionals, I gained a strong sense that there is already a widening philosophical divide within the film and television business.

I also interviewed Runway CEO Cristóbal Valenzuela about the tightrope he walks as he pitches his products to an industry that has deeply divided feelings about what role AI will have in its future.

To unpack all this, it makes sense to start with the films, partly because the film that was chosen as the festival’s top prize winner says a lot about the issues at hand.

A festival of oddities and profundities

Since this was the first time the festival had been open to the public, the crowd was a diverse mix: AI tech enthusiasts, working industry creatives, and folks who enjoy movies and who were curious about what they’d see—as well as quite a few people who fit into all three groups.

The scene at the entrance to the theater at AIFF 2025 in Santa Monica, California.

The films shown were all short, and most would be more at home at an art film fest than something more mainstream. Some shorts featured an animated aesthetic (including one inspired by anime) and some presented as live action. There was even a documentary of sorts. The films could be made entirely with Runway or other AI tools, or those tools could simply be a key part of a stack that also includes more traditional filmmaking methods.

Many of these shorts were quite weird. Most of us have seen by now that AI video-generation tools excel at producing surreal and distorted imagery—sometimes whether the person prompting the tool wants that or not. Several of these films leaned into that limitation, treating it as a strength.

Representing that camp was Vallée Duhamel’s Fragments of Nowhere, which visually explored the notion of multiple dimensions bleeding into one another. Cars morphed into the sides of houses, and humanoid figures, purported to be inter-dimensional travelers, moved in ways that defied anatomy. While I found this film visually compelling at times, I wasn’t seeing much in it that I hadn’t already seen from dreamcore or horror AI video TikTok creators like GLUMLOT or SinRostroz in recent years.

More compelling were shorts that used this propensity for oddity to generate imagery that was curated and thematically tied to some aspect of human experience or identity. For example, More Tears than Harm by Herinarivo Rakotomanana was a rotoscope animation-style “sensory collage of childhood memories” of growing up in Madagascar. Its specificity and consistent styling lent it a credibility that Fragments of Nowhere didn’t achieve. I also enjoyed Riccardo Fusetti’s Editorial on this front.

More Tears Than Harm, an unusual animated film at AIFF 2025.

Among the 10 films in the festival, two clearly stood above the others in my impressions—and they ended up being the Grand Prix and Gold prize winners. (The judging panel included filmmakers Gaspar Noé and Harmony Korine, Tribeca Enterprises CEO Jane Rosenthal, IMAX head of post and image capture Bruce Markoe, Lionsgate VFX SVP Brianna Domont, Nvidia developer relations lead Richard Kerris, and Runway CEO Cristóbal Valenzuela, among others).

Runner-up Jailbird was the aforementioned quasi-documentary. Directed by Andrew Salter, it was a brief piece that introduced viewers to a program in the UK that places chickens in human prisons as companion animals, to positive effect. Why make that film with AI, you might ask? Well, AI was used to achieve shots that wouldn’t otherwise be doable for a small-budget film, such as depicting the experience from the chicken’s point of view. The crowd loved it.

Jailbird, the runner-up at AIFF 2025.

Then there was the Grand Prix winner, Jacob Adler’s Total Pixel Space, which was, among other things, a philosophical defense of the very idea of AI art. You can watch Total Pixel Space on YouTube right now, unlike some of the other films. I found it strangely moving, even as I saw its selection as the festival’s top winner with some cynicism. Of course they’d pick that one, I thought, although I agreed it was the most interesting of the lot.

Total Pixel Space, the Grand Prix winner at AIFF 2025.

Total Pixel Space

Even though it risked navel-gazing and self-congratulation in this venue, Total Pixel Space was filled with compelling imagery that matched the themes, and it touched on some genuinely interesting ideas—at times, it seemed almost profound, didactic as it was.

“How many images can possibly exist?” the film’s narrator asks. To answer that, it explains the concept of total pixel space, which actually reflects how image generation tools work:

Pixels are the building blocks of digital images—tiny tiles forming a mosaic. Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers…

Just as we don’t need to write down every number between zero and one to prove they exist, we don’t need to generate every possible image to prove they exist. Their existence is guaranteed by the mathematics that defines them… Every frame of every possible film exists as coordinates… To deny this would be to deny the existence of numbers themselves.

The nine-minute film demonstrates that the number of possible images or films is greater than the number of atoms in the universe and argues that photographers and filmmakers may be seen as discovering images that already exist in the possibility space rather than creating something new.
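The arithmetic behind that claim is straightforward combinatorics: each pixel can take a finite number of color values, so the count of possible images is that number raised to the number of pixels. A rough sketch, assuming a 1920×1080 frame with 24-bit color (the resolution and bit depth are illustrative choices, not figures from the film):

```python
import math

# Count the possible 24-bit images at 1920x1080 and compare the order of
# magnitude to the commonly cited ~10^80 atoms in the observable universe.
width, height = 1920, 1080
bits_per_pixel = 24  # 8 bits each for red, green, and blue
total_bits = width * height * bits_per_pixel

# There are 2**total_bits distinct images; take log10 to get the digit count.
digits = int(total_bits * math.log10(2))
print(f"Possible images: about 10^{digits:,}")  # roughly 10^14,981,179
print("Atoms in the observable universe: about 10^80")
```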

Within that framework, it’s easy to argue that generative AI is just another way for artists to “discover” images.

The balancing act

“We are all—and I include myself in that group as well—obsessed with technology, and we keep chatting about models and data sets and training and capabilities,” Runway CEO Cristóbal Valenzuela said to me when we spoke the next morning. “But if you look back and take a minute, the festival was celebrating filmmakers and artists.”

I admitted that I found myself moved by Total Pixel Space‘s articulations. “The winner would never have thought of himself as a filmmaker, and he made a film that made you feel something,” Valenzuela responded. “I feel that’s very powerful. And the reason he could do it was because he had access to something that just wasn’t possible a couple of months ago.”

First-time and outsider filmmakers were the focus of AIFF 2025, but Runway works with established studios, too—and those relationships have an inherent tension.

The company has signed deals with companies like Lionsgate and AMC Networks. In some cases, it trains on data provided by those companies; in others, it embeds within them to try to develop tools that fit how they already work. That’s not something competitors like OpenAI are doing yet, so that, combined with a head start in video generation, has allowed Runway to grow and stay competitive so far.

“We go directly into the companies, and we have teams of creatives that are working alongside them. We basically embed ourselves within the organizations that we’re working with very deeply,” Valenzuela explained. “We do versions of our film festival internally for teams as well so they can go through the process of making something and seeing the potential.”

Founded in 2018 at New York University’s Tisch School of the Arts by two Chileans and one Greek co-founder, Runway has a very different story than its Silicon Valley competitors. It was one of the first to bring an actually usable video-generation tool to the masses. Runway also contributed in foundational ways to the popular Stable Diffusion model.

Though it is vastly outspent by competitors like OpenAI, it has taken a hands-on approach to working with existing industries. You won’t hear Valenzuela or other Runway leaders talking about the imminence of AGI or anything so lofty; instead, it’s all about selling the product as something that can solve existing problems in creatives’ workflows.

Still, an artist’s mindset and relationships within the industry don’t negate some fundamental conflicts. There are multiple intellectual property cases involving Runway and its peers, and though the company hasn’t admitted it, there is evidence that it trained its models on copyrighted YouTube videos, among other things.

Cristóbal Valenzuela speaking on the AIFF 2025 stage. Credit: Samuel Axon

Valenzuela suggested that studios are worried about liability, not underlying principles, though, saying:

Most of the concerns on copyright are on the output side, which is like, how do you make sure that the model doesn’t create something that already exists or infringes on something. And I think for that, we’ve made sure our models don’t and are supportive of the creative direction you want to take without being too limiting. We work with every major studio, and we offer them indemnification.

In the past, he has also defended Runway by saying that what it’s producing is not a re-creation of what has come before. He sees the tool’s generative process as distinct—legally, creatively, and ethically—from simply pulling up assets or references from a database.

“People believe AI is sort of like a system that creates and conjures things magically with no input from users,” he said. “And it’s not. You have to do that work. You still are involved, and you’re still responsible as a user in terms of how you use it.”

He seemed to share this defense of AI as a legitimate tool for artists with conviction, but given that he’s been pitching these products directly to working filmmakers, he was also clearly aware that not everyone agrees with him. There is not even a consensus among those in the industry.

An industry divided

While in LA for the event, I visited separately with two of my oldest friends. Both of them work in the film and television industry in similar disciplines. They each asked what I was in town for, and I told them I was there to cover an AI film festival.

One immediately responded with a grimace of disgust, “Oh, yikes, I’m sorry.” The other responded with bright eyes and intense interest and began telling me how he already uses AI in his day-to-day to do things like extend shots by a second or two for a better edit, and expressed frustration at his company for not adopting the tools faster.

Neither is alone in their attitudes. Hollywood is divided—and not for the first time.

There have been seismic technological changes in the film industry before. There was the transition from silent films to talkies, obviously; moviemaking transformed into an entirely different art. Numerous old jobs were lost, and numerous new jobs were created.

Later, there was the transition from film to digital projection, which may be an even tighter parallel. It was a major disruption, with some companies and careers collapsing while others rose. There were people saying, “Why do we even need this?” while others believed it was the only sane way forward. Some audiences declared the quality worse, and others said it was better. There were analysts arguing it could be stopped, while others insisted it was inevitable.

IMAX’s head of post production, Bruce Markoe, spoke briefly about that history at a press mixer before the festival. “It was a little scary,” he recalled. “It was a big, fundamental change that we were going through.”

People ultimately embraced it, though. “The motion picture and television industry has always been very technology-forward, and they’ve always used new technologies to advance the state of the art and improve the efficiencies,” Markoe said.

When asked whether he thinks the same thing will happen with generative AI tools, he said, “I think some filmmakers are going to embrace it faster than others.” He pointed to pre-visualization as a particularly valuable application and noted that some people are already using AI tools that way, though it will take time for others to get comfortable with them.

And indeed, many, many filmmakers are still loudly skeptical. “The concept of AI is great,” The Mitchells vs. the Machines director Mike Rianda said in a Wired interview. “But in the hands of a corporation, it is like a buzzsaw that will destroy us all.”

Others are interested in the technology but are concerned that it’s being brought into the industry too quickly, with insufficient planning and protections. That includes Crafty Apes Senior VFX Supervisor Luke DiTomasso. “How fast do we roll out AI technologies without really having an understanding of them?” he asked in an interview with Production Designers Collective. “There’s a potential for AI to accelerate beyond what we might be comfortable with, so I do have some trepidation and am maybe not gung-ho about all aspects of it.”

Others remain skeptical that the tools will be as useful as some optimists believe. “AI never passed on anything. It loved everything it read. It wants you to win. But storytelling requires nuance—subtext, emotion, what’s left unsaid. That’s something AI simply can’t replicate,” said Alegre Rodriquez, a member of the Emerging Technology committee at the Motion Picture Editors Guild.

The mirror

Flying back from Los Angeles, I considered two key differences between this generative AI inflection point for Hollywood and the silent/talkie or film/digital transitions.

First, neither of those transitions involved an existential threat to the technology on the basis of intellectual property and copyright. Valenzuela talked about what matters to studio heads—protection from liability over the outputs. But the countless creatives who are critical of these tools also believe they should be consulted and even compensated for their work’s use in the training data for Runway’s models. In other words, it’s not just about the outputs, it’s also about the sourcing. As noted before, there are several cases underway. We don’t know where they’ll land yet.

Second, there’s a more cultural and philosophical issue at play, which Valenzuela himself touched on in our conversation.

“I think AI has become this sort of mirror where anyone can project all their fears and anxieties, but also their optimism and ideas of the future,” he told me.

You don’t have to scroll for long to come across techno-utopians declaring with no evidence that AGI is right around the corner and that it will cure cancer and save our society. You also don’t have to scroll long to encounter visceral anger at every generative AI company from people declaring the technology—which is essentially just a new methodology for programming a computer—fundamentally unethical and harmful, with apocalyptic societal and economic ramifications.

Amid all those bold declarations, this film festival put the focus on the on-the-ground reality. First-time filmmakers who might never have previously cleared Hollywood’s gatekeepers are getting screened at festivals because they can create competitive-looking work with a fraction of the crew and hours. Studios and the people who work there are saying they’re saving time, resources, and headaches in pre-viz, editing, visual effects, and other work that’s usually done under immense time and resource pressure.

“People are not paying attention to the very huge amount of positive outcomes of this technology,” Valenzuela told me, pointing to those examples.

In this online discussion ecosystem that elevates outrage above everything else, that’s likely true. Still, there is a sincere and rigorous conviction among many creatives that their work is contributing to this technology’s capabilities without credit or compensation and that the structural and legal frameworks to ensure minimal human harm in this evolving period of disruption are still inadequate. That’s why we’ve seen groups like the Writers Guild of America West support the Generative AI Copyright Disclosure Act and other similar legislation meant to increase transparency about how these models are trained.

The philosophical question with a legal answer

The winning film argued that “total pixel space represents both the ultimate determinism and the ultimate freedom—every possibility existing simultaneously, waiting for consciousness to give it meaning through the act of choice.”

In making this statement, the film suggested that creativity, above all else, is an act of curation. It’s a claim that nothing, truly, is original. It’s a distillation of human expression into the language of mathematics.

To many, that philosophy rings undeniably true: Every possibility already exists, and artists are just collapsing the waveform to the frame they want to reveal. To others, there is more personal truth to the romantic ideal that artwork is valued precisely because it did not exist until the artist produced it.

All this is to say that the debate about creativity and AI in Hollywood is ultimately a philosophical one. But it won’t be resolved that way.

The industry may succumb to litigation fatigue and a hollowed-out workforce—or it may instead find its way to fair deals, new opportunities for fresh voices, and transparent training sets.

For all this lofty talk about creativity and ideas, the outcome will come down to the contracts, court decisions, and compensation structures—all things that have always been at least as big a part of Hollywood as the creative work itself.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



Google’s frighteningly good Veo 3 AI videos to be integrated with YouTube Shorts

Even in the age of TikTok, YouTube viewership continues to climb. While Google’s iconic video streaming platform has traditionally pushed creators to produce longer videos that can accommodate more ads, the site’s Shorts format is growing fast. That growth may explode in the coming months, as YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer.

According to Mohan, YouTube Shorts has grown in popularity even faster than YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google’s leading video model later this summer. “I believe these tools will open new creative lanes for everyone to explore,” said Mohan.

YouTube heavily promotes Shorts on the homepage. Credit: Google

This move will require a few tweaks to Veo 3 outputs, but it seems like a perfect match. As the name implies, YouTube Shorts is intended for short video content. The format initially launched with a 30-second ceiling, but that has since been increased to 60 seconds. Because of the astronomical cost of generative AI, each generated Veo clip is quite short, a mere eight seconds in the current version of the tool. Slap a few of those together, and you’ve got a YouTube Short.
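The arithmetic behind that last line is simple enough to spell out, using the eight-second clip length and 60-second ceiling mentioned above:

```python
# How many eight-second Veo clips fit under the 60-second Shorts ceiling?
clip_seconds = 8
shorts_limit_seconds = 60
clips = shorts_limit_seconds // clip_seconds
print(f"{clips} clips -> {clips * clip_seconds} seconds of footage")  # 7 clips -> 56 seconds
```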



With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

For example, it was used in producing the sequence in the film Everything Everywhere All At Once where two rocks with googly eyes had a conversation on a cliff, and it has also been used to make visual gags for The Late Show with Stephen Colbert.

Whereas many competing startups were started by AI researchers or Silicon Valley entrepreneurs, Runway was founded in 2018 by art students at New York University’s Tisch School of the Arts—Cristóbal Valenzuela and Alejandro Matamala from Chile, and Anastasis Germanidis from Greece.

It was one of the first companies to release a usable video-generation tool to the public, and its team also contributed in foundational ways to the Stable Diffusion model.

It is vastly outspent by competitors like OpenAI, but while most of its competitors have released general-purpose video-creation tools, Runway has sought an Adobe-like place in the industry. It has focused on marketing to creative professionals like designers and filmmakers and has implemented tools meant to make Runway a support tool to existing creative workflows.

The support tool argument (as opposed to a standalone creative product) helped Runway secure a deal with motion picture company Lionsgate, wherein Lionsgate allowed Runway to legally train its models on its library of films, and Runway provided bespoke tools for Lionsgate for use in production or post-production.

That said, Runway is, along with Midjourney and others, one of the subjects of a widely publicized intellectual property case brought by artists who claim the companies illegally trained their models on their work, so not all creatives are on board.

Apart from the announcement about the partnership with Lionsgate, Runway has never publicly shared what data is used to train its models. However, a report in 404 Media seemed to reveal that at least some of the training data included video scraped from the YouTube channels of popular influencers, film studios, and more.



Startup can identify deepfake video in real time

Real-time deepfakes are no longer limited to billionaires, public figures, or those who have extensive online presences. Mittal’s research at NYU, with professors Chinmay Hegde and Nasir Memon, proposes a potential challenge-based approach to blocking AI bots from video calls, where participants would have to pass a kind of video CAPTCHA test before joining.

As Reality Defender works to improve the detection accuracy of its models, Colman says that access to more data is a critical challenge to overcome—a common refrain from the current batch of AI-focused startups. He’s hopeful more partnerships will fill in these gaps, and without specifics, hints at multiple new deals likely coming next year. After ElevenLabs was tied to a deepfake voice call of US president Joe Biden, the AI-audio startup struck a deal with Reality Defender to mitigate potential misuse.

What can you do right now to protect yourself from video call scams? Just like WIRED’s core advice about avoiding fraud from AI voice calls, not getting cocky about whether you can spot video deepfakes is critical to avoid being scammed. The technology in this space continues to evolve rapidly, and any telltale signs you rely on now to spot AI deepfakes may not be as dependable with the next upgrades to underlying models.

“We don’t ask my 80-year-old mother to flag ransomware in an email,” says Colman. “Because she’s not a computer science expert.” In the future, if AI detection continues to improve and proves reliably accurate, real-time video authentication may become as taken for granted as that malware scanner quietly humming along in the background of your email inbox.

This story originally appeared on wired.com.



Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

The silicon screen —

Runway deal will create a Lionsgate AI video generator, but not everyone is happy.

An illustration of a filmstrip with a robot, horse, rocket, and whale.

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate’s vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate’s vice chair, stated in a press release that AI could help develop “cutting edge, capital efficient content creation opportunities.” He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.

“We’re committed to giving artists, creators and studios the best and most powerful tools to augment their workflows and enable new ways of bringing their stories to life,” said Runway co-founder and CEO Cristóbal Valenzuela in a press release. “The history of art is the history of technology and these new models are part of our continuous efforts to build transformative mediums for artistic and creative expression; the best stories are yet to be told.”

The quest for legal training data

Generative AI models are master imitators, and video synthesis models like Runway’s latest Gen-3 Alpha are no exception. The companies that create them must amass a great deal of existing video (and still image) samples to analyze, allowing the resulting AI models to re-synthesize that information into new video generations, guided by text descriptions called prompts. And wherever that training data is lacking, it can result in unusual generations, as we saw in our hands-on evaluation of Gen-3 Alpha in July.

However, in the past, AI companies have gotten into legal trouble for scraping vast quantities of media without permission. In fact, Runway is currently the defendant in a class-action lawsuit that alleges copyright infringement for using video data obtained without permission to train its video synthesis models. While companies like OpenAI have claimed this scraping process is “fair use,” US courts have not yet definitively ruled on the practice. With other potential legal challenges ahead, it makes sense from Runway’s perspective to reach out and sign deals for training data that is completely in the clear.

Even if the training data becomes fully legal and licensed, different elements of the entertainment industry view generative AI on a spectrum that seems to range between fascination and horror. The technology’s ability to rapidly create images and video based on prompts may attract studios looking to streamline production. However, it raises polarizing concerns among unions about job security, actors and musicians about likeness misuse and ethics, and studios about legal implications.

So far, news of the deal has not been received kindly among vocal AI critics found on social media. On X, filmmaker and AI critic Joe Russo wrote, “I don’t think I’ve ever seen a grosser string of words than: ‘to develop cutting-edge, capital-efficient content creation opportunities.'”

Film concept artist Reid Southen shared a similar negative take on X: “I wonder how the directors and actors of their films feel about having their work fed into the AI to make a proprietary model. As an artist on The Hunger Games? I’m pissed. This is the first step in trying to replace artists and filmmakers.”

It’s a fear that we will likely hear more about in the future as AI video synthesis technology grows more capable—and potentially becomes adopted as a standard filmmaking tool. As studios explore AI applications despite legal uncertainties and labor concerns, partnerships like the Lionsgate-Runway deal may shape the future of content creation in Hollywood.
