Author name: DJ Henderson


Measles arrives in Kansas, spreads quickly in undervaccinated counties

On Thursday, Grant County, which sits on Stevens County’s northern border, also reported three confirmed cases, all linked to the first case in Stevens. Grant County is in a much better position to handle the outbreak than its neighbors; its one school district, Ulysses, reported 100 percent vaccination coverage for kindergartners in the 2023–2024 school year.

Outbreak risk

So far, details about the fast-rising cases are scant. The Kansas Department of Health and Environment (KDHE) has not published another press release about the cases since March 13. Ars Technica reached out to KDHE for more information but did not hear back before this story’s publication.

The outlet KWCH 12 News out of Wichita published a story Thursday, when there were just six cases reported in Grant and Stevens Counties, saying that all six were in unvaccinated people and that no one had been hospitalized. On Friday, KWCH updated the story to note that the case count had increased to 10 and that the health department now considers the situation an outbreak.

Measles is an extremely infectious virus that can linger in the air and on surfaces for up to two hours after an infected person has been in an area. Among unvaccinated people exposed to the virus, 90 percent will become infected.

Vaccination rates have slipped nationwide, creating pockets that have lost herd immunity and are vulnerable to fast-spreading, difficult-to-stop outbreaks. In the past, strong vaccination rates prevented such spread, and in 2000, measles was declared eliminated in the US, meaning there had been no continuous spread of the virus over a 12-month period. Experts now fear that the US will lose its elimination status, meaning measles will once again be considered endemic to the country.
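The arithmetic behind that fear is simple. Below is a minimal sketch, assuming the commonly cited basic reproduction number (R0) of roughly 12 to 18 for measles, of why community protection requires vaccination coverage in the low-to-mid 90s:

```python
# Herd immunity threshold: the share of a population that must be immune so
# that, on average, each case infects fewer than one other person.
# Assumes the commonly cited R0 range of roughly 12-18 for measles.

def herd_immunity_threshold(r0: float) -> float:
    """Minimum immune fraction needed to halt sustained spread: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

for r0 in (12, 15, 18):
    share = herd_immunity_threshold(r0)
    print(f"R0 = {r0}: at least {share:.0%} of the population must be immune")
```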

As of Thursday, March 20, the Centers for Disease Control and Prevention had documented 378 measles cases this year. That figure is already out of date.

On Friday, the Texas health department reported 309 cases in its ongoing outbreak. Forty people have been hospitalized, and one unvaccinated child with no underlying medical conditions has died. The outbreak has spilled over to New Mexico and Oklahoma. In New Mexico, officials reported Friday that the case count has risen to 42, with two hospitalizations and one death in an unvaccinated adult. In Oklahoma, the case count stands at four.



Italy demands Google poison DNS under strict Piracy Shield law

Spotted by TorrentFreak, AGCOM Commissioner Massimiliano Capitanio took to LinkedIn to celebrate the ruling, as well as the existence of the Italian Piracy Shield. “The Judge confirmed the value of AGCOM’s investigations, once again giving legitimacy to a system for the protection of copyright that is unique in the world,” said Capitanio.

Capitanio went on to complain that Google has routinely ignored AGCOM’s listing of pirate sites, which are supposed to be blocked in 30 minutes or less under the law. He noted the violation was so clear-cut that the order was issued without giving Google a chance to respond, known as inaudita altera parte in Italian courts.

This decision follows a similar case against Internet backbone firm Cloudflare. In January, the Court of Milan found that Cloudflare’s CDN, DNS server, and WARP VPN were facilitating piracy. The court threatened Cloudflare with fines of up to 10,000 euros per day if it did not begin blocking the sites.
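For readers unfamiliar with what “poisoning DNS” means in practice, the resolver simply stops returning the real address for a listed domain, so lookups fail or point elsewhere. Here is a minimal sketch, using the third-party dnspython library and a purely hypothetical hostname, of how one might compare what two public resolvers say about the same name:

```python
# Compare what two public resolvers return for the same hostname.
# Requires the third-party dnspython package (pip install dnspython).
# "blocked-site.example" is a placeholder, not a real AGCOM-listed domain.
import dns.exception
import dns.resolver

def lookup(hostname: str, nameserver: str) -> list[str]:
    """Ask one specific resolver for A records; return [] if there is no usable answer."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return [rr.to_text() for rr in resolver.resolve(hostname, "A")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return []  # no answer: the name may be blocked at this resolver

hostname = "blocked-site.example"
for ns in ("8.8.8.8", "1.1.1.1"):  # Google Public DNS vs. Cloudflare, for comparison
    print(ns, "->", lookup(hostname, ns) or "no answer")
```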

Google could face similar sanctions, but AGCOM has had difficulty getting international tech behemoths to acknowledge their legal obligations in the country. We’ve reached out to Google for comment and will update this report if we hear back.



Trump White House drops diversity plan for Moon landing it created back in 2019

That was then. NASA’s landing page for the First Woman comic series, where young readers could download or listen to the comic, no longer exists. Callie and her crew survived the airless, radiation-bathed surface of the Moon, only to be wiped out by President Trump’s Diversity, Equity, and Inclusion executive order, signed two months ago.

Another casualty is the “first woman” language within the Artemis Program. For years, NASA’s main Artemis page, an archived version of which is linked here, included the following language: “With the Artemis campaign, NASA will land the first woman and first person of color on the Moon, using innovative technologies to explore more of the lunar surface than ever before.”

Artemis website changes

The current landing page for the Artemis program has excised this paragraph. It is not clear how recently the change was made. It was first noticed by British science journalist Oliver Morton.

The removal is perhaps more striking than Callie’s downfall since it was the first Trump administration that both created Artemis and highlighted its differences from Apollo by stating that the Artemis III lunar landing would fly the first woman and person of color to the lunar surface.

How NASA’s Artemis website appeared before recent changes. Credit: NASA

For its part, NASA says it is simply complying with the White House executive order by making the changes.

“In keeping with the President’s Executive Order, we’re updating our language regarding plans to send crew to the lunar surface as part of NASA’s Artemis campaign,” an agency spokesperson said. “We look forward to learning more about the Trump Administration’s plans for our agency and expanding exploration at the Moon and Mars for the benefit of all.”

The nominal date for the Artemis III landing is 2027, but few in the industry expect NASA to be able to hold to that date. With further delays likely, the space agency will probably not name a crew anytime soon.



Racer with paraplegia successfully test drives Corvette with hand controls

Able-bodied co-driver Milner will use the Corvette GT3.R’s regular pedals when he drives, with the hand controls engaged when Wickens is in the car. The new hand controls are mounted to the steering wheel column, where otherwise you’d find a spacer between the column and multifunction steering wheel. There are paddles on both sides that operate the throttle, and a ring that engages the brakes.

The road-going Corvette C8 uses brake-by-wire, and Bosch has developed an electronic brake system for motorsport applications, which is now fitted to DXDT’s Corvette. Wickens actually used the Bosch EBS in the last two Pilot Challenge races of last year, but unlike the Corvette, the Elantra did not have a full brake-by-wire system.

Robert Wickens explains how his hand controls work.

“When I embarked on this journey of racing with hand controls, I was always envisioning just that hydraulic sensation with my hands, on applying the brake. And, yeah, everyone involved, they made it happen,” Wickens said. Adding that sensation has involved using tiny springs and dampers, and Wickens likened the process of fine-tuning that to working on a suspension setup for a race car, altering spring rates and damper settings until it felt right.
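Ars hasn’t inspected the Pratt Miller hardware, but the “spring rates and damper settings” analogy maps onto a textbook second-order spring-damper model. The sketch below is purely illustrative, with made-up constants rather than anything from the actual system, showing how such an element turns steady hand force into progressive ring travel:

```python
# Illustrative spring-damper model of a hand-brake ring's "feel."
# Every constant here is made up for demonstration; this is not Pratt Miller's design.

def ring_travel(force_n: float, k: float = 40_000.0, c: float = 200.0,
                m: float = 0.5, dt: float = 0.0005, t_end: float = 0.2) -> float:
    """Integrate m*x'' = F - k*x - c*x' (semi-implicit Euler); return travel in meters."""
    x, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (force_n - k * x - c * v) / m
        v += a * dt
        x += v * dt
    return x

for force in (50, 100, 200):  # newtons of hand effort
    print(f"{force} N of hand force -> {ring_travel(force) * 1000:.1f} mm of ring travel")
```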

“You know, the fact that I was just straight away comfortable; frankly, internally, I was concerned that [it] might take me a little bit to get up to speed, but thankfully that wasn’t the case so far. There’s obviously still a lot of work to be done, but so far, I think the signs are positive,” he said.

“I think the biggest takeaway I have so far is that it feels like the Bosch EBS and the hand control system that was developed by Pratt Miller, it was like it belonged in this car,” he said. “There hasn’t been a single hiccup. It feels like… when they designed the Z06 GT3, it was always in the plan, almost? It just looks like it belongs in the car. It feels like it belongs in the car.”



Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids

Currently, ChatGPT does not repeat these horrible false claims about Holmen in outputs. A more recent update apparently fixed the issue, as “ChatGPT now also searches the Internet for information about people, when it is asked who they are,” Noyb said. But because OpenAI had previously argued that it cannot correct information—it can only block information—the fake child murderer story is likely still included in ChatGPT’s internal data. And unless Holmen can correct it, that’s a violation of the GDPR, Noyb claims.

“While the damage done may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data,” Noyb says.

OpenAI may not be able to easily delete the data

Holmen isn’t the only ChatGPT user who has worried that the chatbot’s hallucinations might ruin lives. Months after ChatGPT launched in late 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he went to prison. Around the same time, ChatGPT linked a real law professor to a fake sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs describing fake embezzlement charges.

In some cases, OpenAI filtered the model to avoid generating harmful outputs but likely didn’t delete the false information from the training data, Noyb suggested. But filtering outputs and throwing up disclaimers aren’t enough to prevent reputational harm, Noyb data protection lawyer Kleanthi Sardeli alleged.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” Sardeli said. “AI companies can also not just ‘hide’ false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage.”



Bird flu continues to spread as Trump’s pandemic experts are MIA

Under the Biden administration, OPPR also worked behind the scenes. At the time, it was directed by Paul Friedrichs, a physician and retired Air Force major-general. Friedrichs told CNN that the OPPR regularly hosted interagency calls between the US Centers for Disease Control and Prevention, the USDA, the Administration for Strategic Preparedness and Response, the US Food and Drug Administration, and the National Institutes of Health. When the H5N1 bird flu outbreak erupted in dairy farms last March, OPPR was hosting daily meetings, which transitioned to weekly meetings toward the end of the administration.

“At the end of the day, bringing everybody together and having those meetings was incredibly important, so that we had a shared set of facts,” Friedrichs said. “When decisions were made, everyone understood why the decision was made, what facts were used to inform the decision.”

Sen. Patty Murray (D-Wash.), who co-wrote the bill that created OPPR with former Sen. Richard Burr (R-N.C.), is concerned by Trump’s sidelining of the office.

“Under the last administration, OPPR served, as intended, as the central hub coordinating a whole-of-government response to pandemic threats,” she said in a written statement to CNN. “While President Trump cannot legally disband OPPR, as he has threatened to do, it is deeply concerning that he has moved the statutorily created OPPR into the NSC.”

“As intended by law, OPPR is a separate, distinct office for a reason, which is especially relevant now as we are seeing outbreaks of measles, bird flu, and other serious and growing threats to public health,” Murray wrote. “This should be alarming to everyone.”



Hints grow stronger that dark energy changes over time

In its earliest days, the Universe was a hot, dense soup of subatomic particles, including hydrogen and helium nuclei, aka baryons. Tiny fluctuations created a rippling pattern through that early ionized plasma, which froze in place in three dimensions as the Universe expanded and cooled. Those ripples, or bubbles, are known as baryon acoustic oscillations (BAO). It’s possible to use BAOs as a kind of cosmic ruler to investigate the effects of dark energy over the history of the Universe.

DESI is a state-of-the-art instrument that can capture light from up to 5,000 celestial objects simultaneously.

That’s what DESI was designed to do: take precise measurements of the apparent size of these bubbles (both near and far) by determining the distances to galaxies and quasars over 11 billion years. That data can then be sliced into chunks to determine how fast the Universe was expanding at each point of time in the past, the better to model how dark energy was affecting that expansion.
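To see why a distance-versus-redshift map constrains dark energy at all, it helps to write the expansion history down. The following is a minimal sketch assuming a flat universe and the common w0-wa (CPL) parameterization of an evolving equation of state; the parameter values are illustrative defaults, not DESI’s measurements:

```python
# Comoving distance vs. redshift for a flat universe with evolving dark energy.
# Uses the CPL parameterization w(a) = w0 + wa*(1 - a); all values are illustrative.
import math
from scipy.integrate import quad

C_KM_S = 299_792.458  # speed of light, km/s

def hubble(z: float, h0: float = 67.4, om: float = 0.315,
           w0: float = -1.0, wa: float = 0.0) -> float:
    """H(z) in km/s/Mpc for flat matter + CPL dark energy."""
    a = 1.0 / (1.0 + z)
    dark_energy = (1.0 - om) * (1.0 + z) ** (3 * (1 + w0 + wa)) * math.exp(-3 * wa * (1 - a))
    return h0 * math.sqrt(om * (1.0 + z) ** 3 + dark_energy)

def comoving_distance(z: float, **cosmo: float) -> float:
    """D_C = c * integral from 0 to z of dz'/H(z'), in megaparsecs."""
    integral, _ = quad(lambda zp: 1.0 / hubble(zp, **cosmo), 0.0, z)
    return C_KM_S * integral

# A cosmological constant vs. a mildly evolving equation of state.
for z in (0.5, 1.0, 2.0):
    print(f"z = {z}: LCDM {comoving_distance(z):,.0f} Mpc, "
          f"evolving DE {comoving_distance(z, w0=-0.9, wa=-0.5):,.0f} Mpc")
```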

An upward trend

Last year’s results were based on analysis of a full year’s worth of data taken from seven different slices of cosmic time, including 450,000 quasars, the largest such sample ever collected, with a record-setting 0.82 percent precision for the most distant epoch (8 to 11 billion years back). While there was basic agreement with the Lambda-CDM model, when those first-year results were combined with data from other studies (involving the cosmic microwave background radiation and Type Ia supernovae), some subtle differences cropped up.

Essentially, those differences suggested that dark energy might be getting weaker. In terms of confidence, the results amounted to a 2.6-sigma level for DESI’s data combined with CMB datasets. Adding the supernovae data shifted those numbers to 2.5-, 3.5-, or 3.9-sigma levels, depending on which particular supernova dataset was used.
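For a sense of scale, those sigma levels can be translated into the rough probability that random noise alone would mimic a signal at least this strong (a one-sided Gaussian approximation):

```python
# Translate a significance quoted in sigma into the one-sided Gaussian tail
# probability: the chance that pure noise would mimic at least this strong a signal.
from scipy.stats import norm

for sigma in (2.5, 2.6, 3.5, 3.9):
    p = norm.sf(sigma)  # survival function, 1 - CDF
    print(f"{sigma} sigma -> p ~ {p:.1e} (about 1 in {1 / p:,.0f})")
```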

It’s important to combine the DESI data with other independent measurements because “we want consistency,” said DESI co-spokesperson Will Percival of the University of Waterloo. “All of the different experiments should give us the same answer to how much matter there is in the Universe at present day, how fast the Universe is expanding. It’s no good if all the experiments agree with the Lambda-CDM model, but then give you different parameters. That just doesn’t work. Just saying it’s consistent to the Lambda-CDM, that’s not enough in itself. It has to be consistent with Lambda-CDM and give you the same parameters for the basic properties of that model.”



Apple and Google in the hot seat as European regulators ignore Trump warnings

The European Commission is not backing down from efforts to rein in Big Tech. In a series of press releases today, the European Union’s executive arm announced actions against both Apple and Google. Apple will be required to open up support for non-Apple accessories on the iPhone, but it may be too late for Google to make changes. The commission says the search giant has violated the Digital Markets Act, which could lead to a hefty fine.

Since returning to power, Donald Trump has railed against European regulations that target US tech firms. In spite of rising tensions and tough talk, the European Commission seems unfazed and is continuing to follow its more stringent laws, like the Digital Markets Act (DMA). This landmark piece of EU legislation aims to make the digital economy more fair. Upon coming into force last year, the act labeled certain large tech companies, including Apple and Google, as “gatekeepers” that are subject to additional scrutiny.

Europe’s more aggressive regulation of Big Tech is why iPhone users on the continent can install apps from third-party app markets while the rest of us are stuck with the Apple App Store. As for Google, the European Commission has paid special attention to search, Android, and Chrome, all of which dominate their respective markets.

Apple’s mobile platform plays second fiddle to Android in Europe, but it’s large enough to make the company subject to the DMA. The EU has now decreed that Apple is not doing enough to support interoperability on its platform. As a result, it will be required to make several notable changes. Apple will have to provide other companies and developers with improved access to iOS for devices like smartwatches, headphones, and TVs. This could include integration with notifications, faster data transfers, and streamlined setup.

The commission is also forcing Apple to provide third parties with additional technical documentation, communication, and notifications about upcoming features. The EU believes this change will encourage more companies to build products that integrate with the iPhone, giving everyone more options aside from Apple’s.

Regulators say both sets of measures are the result of a public comment period that began late last year. We’ve asked Apple for comment on this development but have not heard back as of publication time. Apple is required to make these changes, and failing to do so could lead to fines. However, Google is already there.



Plex ups its price for first time in a decade, changes remote-streaming access

Plex is a bit hard to explain these days. Even if you don’t know its roots as an outgrowth of a Mac port of the Xbox Media Center project, Plex is not your typical “streaming” service, given how most people use it. So as Plex announces its first price increase to its Plex Pass subscription in more than 10 years, it has its work cut out for it explaining why, what’s included, and what’s changing.

Starting April 29, the cost of a Plex Pass rises from $5 to $7 monthly, from $40 to $70 annually, and from $120 to $250 for a lifetime pass. In a blog post, Plex cites rising costs and its commitment to an independent service that supports “personal media.”

“We are all in on the continued success of Plex Pass and personal media,” the post states. “This price increase will ensure that we can keep investing dedicated resources in developing new features, while supporting and growing your favorites.” The post cites a roadmap that contains an integration with Common Sense Media, a new “bespoke server management app” for managing server users, and “an open and documented API for server integrations,” including custom metadata agents.

Someone in a remote video stream must have a Pass

And then, after that note, Plex hits the big change: Streaming “personal media”—i.e., video files, not audio, photos, or offerings from Plex’s ad-supported movies and TV—from outside your own network will no longer be a free Plex feature, starting April 29. “Fully free” might be the better way to put it, because if a server owner has a Plex Pass subscription, their users can still access their server for free.

But if you’ve been hosting your own Plex server to maintain access to your stuff while you’re away or relying on the kindness of non-Pass-having friends with servers, either you or your server-owning friends will need a Plex Pass subscription by the end of April.

Alternatively, you, as a non-server-running Plex viewer, can get a cheaper Remote Watch Pass for $2 per month or $20 a year. That doesn’t include Plex Pass features like offline downloads, skipping a show intro or credits, or the like, but it does keep you connected to your “personal media” vendors.
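Put plainly, whether remote playback of personal video keeps working after April 29 comes down to who holds which pass. Here is a small sketch of that decision logic as we read the announcement; the function and argument names are our own shorthand, not anything from Plex’s API:

```python
# Who can remotely stream personal video from a Plex server after April 29?
# Encodes the policy as described in Plex's announcement; the names here are
# our own shorthand, not Plex's API.

def remote_video_allowed(owner_has_plex_pass: bool,
                         viewer_has_plex_pass: bool,
                         viewer_has_remote_watch_pass: bool) -> bool:
    """Remote playback of personal video works if any party holds a qualifying pass."""
    return owner_has_plex_pass or viewer_has_plex_pass or viewer_has_remote_watch_pass

print(remote_video_allowed(False, False, False))  # False: someone has to pay
print(remote_video_allowed(True, False, False))   # True: the owner's Plex Pass covers their users
print(remote_video_allowed(False, False, True))   # True: the viewer's $2/month Remote Watch Pass
```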



Nvidia announces “Rubin Ultra” and “Feynman” AI chips for 2027 and 2028

On Tuesday at Nvidia’s GTC 2025 conference in San Jose, California, CEO Jensen Huang revealed several new AI-accelerating GPUs the company plans to release over the coming months and years. He also revealed more specifications about previously announced chips.

The centerpiece announcement was Vera Rubin, first teased at Computex 2024 and now scheduled for release in the second half of 2026. This GPU, named after a famous astronomer, will feature tens of terabytes of memory and comes with a custom Nvidia-designed CPU called Vera.

According to Nvidia, Vera Rubin will deliver significant performance improvements over its predecessor, Grace Blackwell, particularly for AI training and inference.

Specifications for Vera Rubin, presented by Jensen Huang during his GTC 2025 keynote.

Vera Rubin features two GPUs together on one die that deliver 50 petaflops of FP4 inference performance per chip. When configured in a full NVL144 rack, the system delivers 3.6 exaflops of FP4 inference compute—3.3 times more than Blackwell Ultra’s 1.1 exaflops in a similar rack configuration.

The Vera CPU features 88 custom ARM cores with 176 threads connected to Rubin GPUs via a high-speed 1.8 TB/s NVLink interface.

Huang also announced Rubin Ultra, which will follow in the second half of 2027. Rubin Ultra will use the NVL576 rack configuration and feature individual GPUs with four reticle-sized dies, delivering 100 petaflops of FP4 precision (a 4-bit floating-point format used for representing and processing numbers within AI models) per chip.

At the rack level, Rubin Ultra will provide 15 exaflops of FP4 inference compute and 5 exaflops of FP8 training performance—about four times more powerful than the Rubin NVL144 configuration. Each Rubin Ultra GPU will include 1TB of HBM4e memory, with the complete rack containing 365TB of fast memory.
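Those rack-level figures roughly multiply out from the per-chip numbers, assuming our reading of the naming is right: NVL144 as 72 dual-die Rubin packages and NVL576 as 144 quad-die Rubin Ultra packages. A quick back-of-envelope check:

```python
# Back-of-envelope check of Nvidia's quoted rack-level FP4 inference numbers.
# Assumes NVL144 = 72 dual-die Rubin packages and NVL576 = 144 quad-die Rubin Ultra
# packages; these counts are our reading of the naming, not an official breakdown.

rubin_packages = 144 // 2            # NVL144: two dies per Rubin package
rubin_fp4_pflops = 50                # per package, per Nvidia
print(rubin_packages * rubin_fp4_pflops / 1000, "exaflops FP4")   # 3.6 -- matches the quoted figure

ultra_packages = 576 // 4            # NVL576: four dies per Rubin Ultra package
ultra_fp4_pflops = 100               # per package, per Nvidia
print(ultra_packages * ultra_fp4_pflops / 1000, "exaflops FP4")   # 14.4 -- close to the quoted ~15
```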



New Portal pinball table may be the closest we’re gonna get to Portal 3

A bargain at twice the price

The extensive Portal theming on the table seems to extend to the gameplay as well. As you might expect, launching a ball into a lit portal on one side of the playfield can lead to it (or a ball that looks a lot like it) immediately launching from another portal elsewhere. The speed of the ball as it enters one portal and exits the other seems like it might matter to the gameplay, too: A description for an “aerial portal” table feature warns that players should “make sure to build enough momentum or else your ball will land in the pit!”

The table is full of other little nods to the Portal games, from a physical Weighted Companion Cube that can travel through a portal to lock balls in place for eventual multiball to an Aerial Faith Plate that physically flings the ball up to a higher level. There’s also a turret-themed multiball, which GLaDOS reminds you is based around “the pale spherical things that are full of bullets. Oh wait, that’s you in five seconds.”

You can purchase a full Portal pinball table starting at $11,620 (plus shipping), which isn’t unreasonable as far as brand-new pinball tables are concerned these days. But if you already own the base table for Multimorphic’s P3 Pinball Platform, you can purchase a “Game Kit” upgrade—with the requisite game software and physical playfield pieces to install on your table—starting at just $3,900.

Even players who invested $1,000 or more in an Index VR headset just to play Half-Life: Alyx might balk at those kinds of prices for the closest thing we’ve got to a new, “official” Portal game. For true Valve obsessives, though, it might be a small price to pay for the ultimate company collector’s item and conversation piece.



Farewell Photoshop? Google’s new AI lets you edit images by asking.


New AI allows no-skill photo editing, including adding objects and removing watermarks.

A collection of images either generated or modified by Gemini 2.0 Flash (Image Generation) Experimental. Credit: Google / Ars Technica

There’s a new Google AI model in town, and it can generate or edit images as easily as it can create text—as part of its chatbot conversation. The results aren’t perfect, but it’s quite possible everyone in the near future will be able to manipulate images this way.

Last Wednesday, Google expanded access to Gemini 2.0 Flash’s native image-generation capabilities, making the experimental feature available to anyone using Google AI Studio. Previously limited to testers since December, the multimodal technology integrates both native text and image processing capabilities into one AI model.

The new model, titled “Gemini 2.0 Flash (Image Generation) Experimental,” flew somewhat under the radar last week, but it has been garnering more attention over the past few days due to its ability to remove watermarks from images, albeit with artifacts and a reduction in image quality.

That’s not the only trick. Gemini 2.0 Flash can add objects, remove objects, modify scenery, change lighting, attempt to change image angles, zoom in or out, and perform other transformations—all to varying levels of success depending on the subject matter, style, and image in question.

To pull it off, Google trained Gemini 2.0 on a large dataset of images (converted into tokens) and text. The model’s “knowledge” about images occupies the same neural network space as its knowledge about world concepts from text sources, so it can directly output image tokens that get converted back into images and fed to the user.
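Conceptually, that unified design boils down to a single autoregressive loop that can emit either kind of token and route image tokens to a separate decoder. The toy sketch below illustrates the general technique only; it is not Gemini’s actual architecture, vocabulary, or API, and the “model” is a random stand-in:

```python
# Toy sketch of unified multimodal generation: one autoregressive loop emits a
# stream that mixes text tokens and image tokens, and buffered image tokens are
# handed to a separate decoder. Illustrative only -- not Gemini's architecture,
# vocabulary, or API; the "model" below is a random stand-in.
import random

TEXT_VOCAB = 50_000        # pretend IDs below this are text tokens
TOTAL_VOCAB = 60_000       # pretend IDs in [50_000, 60_000) are image (VQ) codes
TOKENS_PER_IMAGE = 256     # pretend each image is 256 discrete codes

def predict_next(context: list[int]) -> int:
    """Stand-in for the model's next-token prediction."""
    return random.randrange(TOTAL_VOCAB)

def generate(prompt: list[int], max_new: int = 2_000) -> tuple[int, int]:
    """Run one decoding loop; return (text tokens emitted, full images emitted)."""
    context, image_buffer = list(prompt), []
    n_text, n_images = 0, 0
    for _ in range(max_new):
        token = predict_next(context)
        context.append(token)
        if token >= TEXT_VOCAB:                      # image token: buffer it
            image_buffer.append(token)
            if len(image_buffer) == TOKENS_PER_IMAGE:
                n_images += 1                        # a full image's worth of codes...
                image_buffer = []                    # ...would go to an image decoder here
        else:
            n_text += 1                              # would go to the text detokenizer
    return n_text, n_images

print(generate([1, 2, 3]))
```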

Adding a water-skiing barbarian to a photograph with Gemini 2.0 Flash. Credit: Google / Benj Edwards

Incorporating image generation into an AI chat isn’t itself new—OpenAI integrated its image-generator DALL-E 3 into ChatGPT last September, and other tech companies like xAI followed suit. But until now, every one of those AI chat assistants called on a separate diffusion-based AI model (which uses a different synthesis principle than LLMs) to generate images, which were then returned to the user within the chat interface. In this case, Gemini 2.0 Flash is both the large language model (LLM) and AI image generator rolled into one system.

Interestingly, OpenAI’s GPT-4o is capable of native image output as well (and OpenAI President Greg Brockman teased the feature at one point on X last year), but that company has yet to release true multimodal image output capability. One reason is likely that true multimodal image output is very computationally expensive, since each image, whether inputted or generated, is composed of tokens that become part of the context that runs through the image model again and again with each successive prompt. And given the compute needs and size of the training data required to create a truly visually comprehensive multimodal model, the output quality of the images isn’t necessarily as good as diffusion models just yet.
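The cost concern is easy to see in rough numbers: if every generated image stays in the context as tokens, the context (and the roughly quadratic attention cost over it) balloons across a multi-turn editing session. The token counts below are assumptions for illustration, since the real figures aren’t public:

```python
# Rough illustration of why native image output gets expensive: every generated
# image stays in the context as tokens for all later turns. Both token counts
# are assumptions for illustration, not Gemini's real figures.

TOKENS_PER_IMAGE = 1_000   # assumed
TOKENS_PER_PROMPT = 50     # assumed

context = 0
for turn in range(1, 6):
    context += TOKENS_PER_PROMPT + TOKENS_PER_IMAGE   # this turn's prompt + new image
    relative_cost = context ** 2                       # attention cost grows ~quadratically
    print(f"turn {turn}: ~{context:,} tokens in context, relative attention cost ~{relative_cost:,}")
```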

Creating another angle of a person with Gemini 2.0 Flash. Credit: Google / Benj Edwards

Another reason OpenAI has held back may be “safety”-related: In a similar way to how multimodal models trained on audio can absorb a short clip of a sample person’s voice and then imitate it flawlessly (this is how ChatGPT’s Advanced Voice Mode works, with a clip of a voice actor it is authorized to imitate), multimodal image output models are capable of faking media reality in a relatively effortless and convincing way, given proper training data and compute behind them. With a good enough multimodal model, potentially life-wrecking deepfakes and photo manipulations could become even more trivial to produce than they are now.

Putting it to the test

So, what exactly can Gemini 2.0 Flash do? Notably, its support for conversational image editing allows users to iteratively refine images through natural language dialogue across multiple successive prompts. You can talk to it and tell it what you want to add, remove, or change. It’s imperfect, but it’s the beginning of a new type of native image editing capability in the tech world.

We gave Gemini 2.0 Flash a battery of informal AI image-editing tests, and you’ll see the results below. For example, we removed a rabbit from an image in a grassy yard. We also removed a chicken from a messy garage. Gemini fills in the background with its best guess. No need for a clone brush—watch out, Photoshop!

We also tried adding synthesized objects to images. Ever wary of the collapse of media reality, a scenario called the “cultural singularity,” we added a UFO to a photo the author took from an airplane window. Then we tried adding a Sasquatch and a ghost. The results were unrealistic, but this model was also trained on a limited image dataset (more on that below).

Adding a UFO to a photograph with Gemini 2.0 Flash. Credit: Google / Benj Edwards

We then added a video game character to a photo of an Atari 800 screen (Wizard of Wor), resulting in perhaps the most realistic image synthesis result in the set. You might not see it here, but Gemini added realistic CRT scanlines that matched the monitor’s characteristics pretty well.

Adding a monster to an Atari video game with Gemini 2.0 Flash. Credit: Google / Benj Edwards

Gemini can also warp an image in novel ways, like “zooming out” of an image into a fictional setting or giving an EGA-palette character a body, then sticking him into an adventure game.

“Zooming out” on an image with Gemini 2.0 Flash. Credit: Google / Benj Edwards

And yes, you can remove watermarks. We tried removing a watermark from a Getty Images image, and it worked, although the resulting image is nowhere near the resolution or detail quality of the original. Ultimately, if your brain can picture what an image is like without a watermark, so can an AI model. It fills in the watermark space with the most plausible result based on its training data.

Removing a watermark with Gemini 2.0 Flash. Credit: Nomadsoul1 via Getty Images

And finally, we know you’ve likely missed seeing barbarians beside TV sets (as per tradition), so we gave that a shot. Originally, Gemini didn’t add a CRT TV set to the barbarian image, so we asked for one.

Adding a TV set to a barbarian image with Gemini 2.0 Flash. Credit: Google / Benj Edwards

Then we set the TV on fire.

Setting the TV set on fire with Gemini 2.0 Flash. Credit: Google / Benj Edwards

All in all, it doesn’t produce images of pristine quality or detail, but we literally did no editing work on these images other than typing requests. Adobe Photoshop currently lets users manipulate images using AI synthesis based on written prompts with “Generative Fill,” but it’s not quite as natural as this. We could see Adobe adding a more conversational AI image-editing flow like this one in the future.

Multimodal output opens up new possibilities

Having true multimodal output opens up interesting new possibilities in chatbots. For example, Gemini 2.0 Flash can play interactive graphical games or generate stories with consistent illustrations, maintaining character and setting continuity throughout multiple images. It’s far from perfect, but character consistency is a new capability in AI assistants. We tried it out and it was pretty wild—especially when it generated a view of a photo we provided from another angle.

Creating a multi-image story with Gemini 2.0 Flash, part 1. Credit: Google / Benj Edwards

Text rendering represents another potential strength of the model. Google claims that internal benchmarks show Gemini 2.0 Flash performs better than “leading competitive models” when generating images containing text, making it potentially suitable for creating content with integrated text. From our experience, the results weren’t that exciting, but they were legible.

An example of in-image text rendering generated with Gemini 2.0 Flash. Credit: Google / Ars Technica

Despite Gemini 2.0 Flash’s shortcomings so far, the emergence of true multimodal image output feels like a notable moment in AI history because of what it suggests if the technology continues to improve. If you imagine a future, say 10 years from now, where a sufficiently complex AI model could generate any type of media in real time—text, images, audio, video, 3D graphics, 3D-printed physical objects, and interactive experiences—you basically have a holodeck, but without the matter replication.

Coming back to reality, it’s still “early days” for multimodal image output, and Google recognizes that. Recall that Flash 2.0 is intended to be a smaller AI model that is faster and cheaper to run, so it hasn’t absorbed the entire breadth of the Internet. All that information takes a lot of space in terms of parameter count, and more parameters mean more compute. Instead, Google trained Gemini 2.0 Flash by feeding it a curated dataset that also likely included targeted synthetic data. As a result, the model does not “know” everything visual about the world, and Google itself says the training data is “broad and general, not absolute or complete.”

That’s just a fancy way of saying that the image output quality isn’t perfect—yet. But there is plenty of room for improvement in the future to incorporate more visual “knowledge” as training techniques advance and compute drops in cost. If the process becomes anything like we’ve seen with diffusion-based AI image generators like Stable Diffusion, Midjourney, and Flux, multimodal image output quality may improve rapidly over a short period of time. Get ready for a completely fluid media reality.


Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
