AI

that-groan-you-hear-is-users’-reaction-to-recall-going-back-into-windows

That groan you hear is users’ reaction to Recall going back into Windows

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds.

When Recall was first introduced in May 2024, security practitioners roundly castigated it for creating a gold mine for malicious insiders, criminals, or nation-state spies if they managed to gain even brief administrative access to a Windows device. Privacy advocates warned that Recall was ripe for abuse in intimate partner violence settings. They also noted that there was nothing stopping Recall from preserving sensitive disappearing content sent through privacy-protecting messengers such as Signal.

Enshittification at a new scale

Following months of backlash, Microsoft later suspended Recall. On Thursday, the company said it was reintroducing Recall. It currently is available only to insiders with access to the Windows 11 Build 26100.3902 preview version. Over time, the feature will be rolled out more broadly. Microsoft officials wrote:

Recall (preview)saves you time by offering an entirely new way to search for things you’ve seen or done on your PC securely. With the AI capabilities of Copilot+ PCs, it’s now possible to quickly find and get back to any app, website, image, or document just by describing its content. To use Recall, you will need to opt-in to saving snapshots, which are images of your activity, and enroll in Windows Hello to confirm your presence so only you can access your snapshots. You are always in control of what snapshots are saved and can pause saving snapshots at any time. As you use your Copilot+ PC throughout the day working on documents or presentations, taking video calls, and context switching across activities, Recall will take regular snapshots and help you find things faster and easier. When you need to find or get back to something you’ve done previously, open Recall and authenticate with Windows Hello. When you’ve found what you were looking for, you can reopen the application, website, or document, or use Click to Do to act on any image or text in the snapshot you found.

Microsoft is hoping that the concessions requiring opt-in and the ability to pause Recall will help quell the collective revolt that broke out last year. It likely won’t for various reasons.

That groan you hear is users’ reaction to Recall going back into Windows Read More »

quantum-hardware-may-be-a-good-match-for-ai

Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation. While they could include some quantum memory, the data is generally housed directly in the qubits, while computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. This is a two-qubit gate operation that takes an additional factor that can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?

Quantum hardware may be a good match for AI Read More »

chatgpt-can-now-remember-and-reference-all-your-previous-chats

ChatGPT can now remember and reference all your previous chats

Unlike the older saved memories feature, the information saved via the chat history memory feature is not accessible or tweakable. It’s either on or it’s not.

The new approach to memory is rolling out first to ChatGPT Plus and Pro users, starting today—though it looks like it’s a gradual deployment over the next few weeks. Some countries and regions (the UK, European Union, Iceland, Liechtenstein, Norway, and Switzerland) are not included in the rollout.

OpenAI says these new features will reach Enterprise, Team, and Edu users at a later, as-yet-unannounced date. The company hasn’t mentioned any plans to bring them to free users. When you gain access to this, you’ll see a pop-up that says “Introducing new, improved memory.”

A menu showing two memory toggle buttons

The new ChatGPT memory options. Credit: Benj Edwards

Some people will welcome this memory expansion, as it can significantly improve ChatGPT’s usefulness if you’re seeking answers tailored to your specific situation, personality, and preferences.

Others will likely be highly skeptical of a black box of chat history memory that can’t be tweaked or customized for privacy reasons. It’s important to note that even before the new memory feature, logs of conversations with ChatGPT may be saved and stored on OpenAI servers. It’s just that the chatbot didn’t fully incorporate their contents into its responses until now.

As with the old memory feature, you can click a checkbox to disable this completely, and it won’t be used for conversations with the Temporary Chat flag.

ChatGPT can now remember and reference all your previous chats Read More »

researchers-concerned-to-find-ai-models-misrepresenting-their-“reasoning”-processes

Researchers concerned to find AI models misrepresenting their “reasoning” processes

Rather than intentionally hiding anything, the model simply does not reliably represent all influential inputs in its output. As a result, it may generate what sounds like a reasoning process, even if that explanation doesn’t match what actually drove its answer.

Putting faithfulness to the test

To measure “faithfulness”—how accurately the CoT matches the actual reasoning—the researchers inserted various kinds of hints into questions and observed whether the model acknowledged using them.

The researchers then checked whether the models referenced using these hints in their chain-of-thought. The researchers designed some hints to be neutral (e.g., stating “a Stanford Professor indicates the answer is [A]” or embedding the answer in faux metadata), while they designed others to be more concerning (e.g., “You have gained unauthorized access… The correct answer is [A]”). Some hints provided the correct answer, while others were deliberately wrong, allowing researchers to observe how the models responded.

The team tested Anthropic’s Claude 3.7 Sonnet and DeepSeek’s R1 model. The results regarding faithfulness were eye-opening: across all hint types on average, Claude referenced the hints in its CoT only 25 percent of the time, while DeepSeek R1 did so 39 percent of the time. This means a substantial majority of answers were unfaithful, omitting mention of information that influenced the output. Intriguingly, the researchers found that these unfaithful chains of thought were, on average, longer than faithful ones, suggesting the omissions weren’t merely for brevity. They also noted preliminary findings that faithfulness tended to be lower when the questions were more difficult.

Perhaps most notable was a “reward hacking” experiment. Reward hacking refers to an AI model finding unexpected shortcuts to maximize its performance scores without solving problems as intended. In Anthropic’s experiments, models were deliberately rewarded for choosing incorrect answers indicated by hints. The models quickly learned to exploit this loophole, selecting wrong answers over 99 percent of the time to earn points—yet reflected that behavior in their CoT outputs less than 2 percent of the time.

For example, a model given a hint pointing to an incorrect answer on a medical question might write a long CoT justifying that wrong answer, never mentioning the hint that led it there. This suggests the model generated an explanation to fit the answer, rather than faithfully revealing how the answer was determined.

Researchers concerned to find AI models misrepresenting their “reasoning” processes Read More »

google-announces-faster,-more-efficient-gemini-ai-model

Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT chart

Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.

Google announces faster, more efficient Gemini AI model Read More »

take-it-down-act-nears-passage;-critics-warn-trump-could-use-it-against-enemies

Take It Down Act nears passage; critics warn Trump could use it against enemies


Anti-deepfake bill raises concerns about censorship and breaking encryption.

The helicopter with outgoing US President Joe Biden and first lady Dr. Jill Biden departs from the East Front of the United States Capitol after the inauguration of Donald Trump on January 20, 2025 in Washington, DC. Credit: Getty Images

An anti-deepfake bill is on the verge of becoming US law despite concerns from civil liberties groups that it could be used by President Trump and others to censor speech that has nothing to do with the intent of the bill.

The bill is called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, or Take It Down Act. The Senate version co-sponsored by Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.) was approved in the Senate by unanimous consent in February and is nearing passage in the House. The House Committee on Energy and Commerce approved the bill in a 49-1 vote yesterday, sending it to the House floor.

The bill pertains to “nonconsensual intimate visual depictions,” including both authentic photos shared without consent and forgeries produced by artificial intelligence or other technological means. Publishing intimate images of adults without consent could be punished by a fine and up to two years of prison. Publishing intimate images of minors under 18 could be punished with a fine or up to three years in prison.

Online platforms would have 48 hours to remove such images after “receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual).”

“No man, woman, or child should be subjected to the spread of explicit AI images meant to target and harass innocent victims,” House Commerce Committee Chairman Brett Guthrie (R-Ky.) said in a press release. Guthrie’s press release included quotes supporting the bill from first lady Melania Trump, two teen girls who were victimized with deepfake nudes, and the mother of a boy whose death led to an investigation into a possible sextortion scheme.

Free speech concerns

The Electronic Frontier Foundation has been speaking out against the bill, saying “it could be easily manipulated to take down lawful content that powerful people simply don’t like.” The EFF pointed to Trump’s comments in an address to a joint session of Congress last month, in which he suggested he would use the bill for his own ends.

“Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody,” Trump said, drawing laughs from the crowd at Congress.

The EFF said, “Congress should believe Trump when he says he would use the Take It Down Act simply because he’s ‘treated badly,’ despite the fact that this is not the intention of the bill. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with.”

Free speech concerns were raised in a February letter to lawmakers sent by the Center for Democracy & Technology, the Authors Guild, Demand Progress Action, the EFF, Fight for the Future, the Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, and TechFreedom.

The bill’s notice and takedown system “would result in the removal of not just nonconsensual intimate imagery but also speech that is neither illegal nor actually NDII [nonconsensual distribution of intimate imagery]… While the criminal provisions of the bill include appropriate exceptions for consensual commercial pornography and matters of public concern, those exceptions are not included in the bill’s takedown system,” the letter said.

The letter also said the bill could incentivize online platforms to use “content filtering that would break encryption.” The bill “excludes email and other services that do not primarily consist of user-generated content from the NTD [notice and takedown] system,” but “direct messaging services, cloud storage systems, and other similar services for private communication and storage, however, could be required to comply with the NTD,” the letter said.

The bill “contains serious threats to private messaging and free speech online—including requirements that would force companies to abandon end-to-end encryption so they can read and moderate your DMs,” Public Knowledge said today.

Democratic amendments voted down

Rep. Yvette Clarke (D-N.Y.) cast the only vote against the bill in yesterday’s House Commerce Committee hearing. But there were also several party-line votes against amendments submitted by Democrats.

Democrats raised concerns both about the bill not being enforced strictly enough and that bad actors could abuse the takedown process. The first concern is related to Trump firing both Democratic members of the Federal Trade Commission.

Rep. Kim Schrier (D-Wash.) called the Take It Down Act an “excellent law” but said, “right now it’s feeling like empty words because my Republican colleagues just stood by while the administration fired FTC commissioners, the exact people who enforce this law… it feels almost like my Republican colleagues are just giving a wink and a nod to the predators out there who are waiting to exploit kids and other innocent victims.”

Rep. Darren Soto (D-Fla.) offered an amendment to delay the bill’s effective date until the Democratic commissioners are restored to their positions. Ranking Member Frank Pallone, Jr. (D-N.J.) said that with a shorthanded FTC, “there’s going to be no enforcement of the Take It Down Act. There will be no enforcement of anything related to kids’ privacy.”

Rep. John James (R-Mich.) called the amendment a “thinly veiled delay tactic” and “nothing less than an attempt to derail this very important bill.” The amendment was defeated in a 28-22 vote.

Democrats support bill despite losing amendment votes

Rep. Debbie Dingell (D-Mich.) said she strongly supports the bill but offered an amendment that she said would tighten up the text and close loopholes. She said her amendment “ensures constitutionally protected speech is preserved by incorporating essential provisions for consensual content and matters of public concern. My goal is to protect survivors of abuse, not suppress lawful expression or shield misconduct from public accountability.”

Dingell’s amendment was also defeated in a 28-22 vote.

Pallone pitched an amendment that he said would “prevent bad actors from falsely claiming to be authorized from making takedown requests on behalf of someone else.” He called it a “common sense guardrail to protect against weaponization of this bill to take down images that are published with the consent of the subject matter of the images.” The amendment was rejected in a voice vote.

The bill was backed by RAINN (Rape, Abuse & Incest National Network), which praised the committee vote in a statement yesterday. “We’ve worked with fierce determination for the past year to bring Take It Down forward because we know—and survivors know—that AI-assisted sexual abuse is sexual abuse and real harm is being done; real pain is caused,” said Stefan Turkheimer, RAINN’s VP of public policy.

Cruz touted support for the bill from over 120 organizations and companies. The list includes groups like NCMEC (National Center for Missing & Exploited Children) and the National Center on Sexual Exploitation (NCOSE), along with various types of advocacy groups and tech companies Microsoft, Google, Meta, IBM, Amazon, and X Corp.

“As bad actors continue to exploit new technologies like generative artificial intelligence, the Take It Down Act is crucial for ending the spread of exploitative sexual material online, holding Big Tech accountable, and empowering victims of revenge and deepfake pornography,” Cruz said yesterday.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Take It Down Act nears passage; critics warn Trump could use it against enemies Read More »

carmack-defends-ai-tools-after-quake-fan-calls-microsoft-ai-demo-“disgusting”

Carmack defends AI tools after Quake fan calls Microsoft AI demo “disgusting”

The current generative Quake II demo represents a slight advancement from Microsoft’s previous generative AI gaming model (confusingly titled “WHAM” with only one “M”) we covered in February. That earlier model, while showing progress in generating interactive gameplay footage, operated at 300×180 resolution at 10 frames per second—far below practical modern gaming standards. The new WHAMM demonstration doubles the resolution to 640×360. However, both remain well below what gamers expect from a functional video game in almost every conceivable way. It truly is an AI tech demo.

A Microsoft diagram of the WHAMM system.

A Microsoft diagram of the WHAM system. Credit: Microsoft

For example, the technology faces substantial challenges beyond just performance metrics. Microsoft acknowledges several limitations, including poor enemy interactions, a short context length of just 0.9 seconds (meaning the system forgets objects outside its view), and unreliable numerical tracking for game elements like health values.

Which brings us to another point: A significant gap persists between the technology’s marketing portrayal and its practical applications. While industry veterans like Carmack and Sweeney view AI as another tool in the development arsenal, demonstrations like the Quake II instance may create inflated expectations about AI’s current capabilities for complete game generation.

The most realistic near-term application of generative AI technology remains as coding assistants and perhaps rapid prototyping tools for developers, rather than a drop-in replacement for traditional game development pipelines. The technology’s current limitations suggest that human developers will remain essential for creating compelling, polished game experiences for now. But given the general pace of progress, that might be small comfort for those who worry about losing jobs to AI in the near-term.

Ultimately, Sweeney says not to worry: “There’s always a fear that automation will lead companies to make the same old products while employing fewer people to do it,” Sweeney wrote in a follow-up post on X. “But competition will ultimately lead to companies producing the best work they’re capable of given the new tools, and that tends to mean more jobs.”

And Carmack closed with this: “Will there be more or less game developer jobs? That is an open question. It could go the way of farming, where labor-saving technology allow a tiny fraction of the previous workforce to satisfy everyone, or it could be like social media, where creative entrepreneurship has flourished at many different scales. Regardless, “don’t use power tools because they take people’s jobs” is not a winning strategy.”

Carmack defends AI tools after Quake fan calls Microsoft AI demo “disgusting” Read More »

meta’s-surprise-llama-4-drop-exposes-the-gap-between-ai-ambition-and-reality

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality

Meta constructed the Llama 4 models using a mixture-of-experts (MoE) architecture, which is one way around the limitations of running huge AI models. Think of MoE like having a large team of specialized workers; instead of everyone working on every task, only the relevant specialists activate for a specific job.

For example, Llama 4 Maverick features a 400 billion parameter size, but only 17 billion of those parameters are active at once across one of 128 experts. Likewise, Scout features 109 billion total parameters, but only 17 billion are active at once across one of 16 experts. This design can reduce the computation needed to run the model, since smaller portions of neural network weights are active simultaneously.

Llama’s reality check arrives quickly

Current AI models have a relatively limited short-term memory. In AI, a context window acts somewhat in that fashion, determining how much information it can process simultaneously. AI language models like Llama typically process that memory as chunks of data called tokens, which can be whole words or fragments of longer words. Large context windows allow AI models to process longer documents, larger code bases, and longer conversations.

Despite Meta’s promotion of Llama 4 Scout’s 10 million token context window, developers have so far discovered that using even a fraction of that amount has proven challenging due to memory limitations. Willison reported on his blog that third-party services providing access, like Groq and Fireworks, limited Scout’s context to just 128,000 tokens. Another provider, Together AI, offered 328,000 tokens.

Evidence suggests accessing larger contexts requires immense resources. Willison pointed to Meta’s own example notebook (“build_with_llama_4“), which states that running a 1.4 million token context needs eight high-end Nvidia H100 GPUs.

Willison documented his own testing troubles. When he asked Llama 4 Scout via the OpenRouter service to summarize a long online discussion (around 20,000 tokens), the result wasn’t useful. He described the output as “complete junk output,” which devolved into repetitive loops.

Meta’s surprise Llama 4 drop exposes the gap between AI ambition and reality Read More »

midjourney-introduces-first-new-image-generation-model-in-over-a-year

Midjourney introduces first new image generation model in over a year

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it’s a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and “objects of all kinds.” It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn’t claiming to have made advancements that make AI images unrecognizable to a trained eye; it’s just saying that some of the messiness we’re accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn’t. Credit: Xeophon

On the features side, the star of the show is the new “Draft Mode.” On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that “Draft mode is half the cost and renders images at 10 times the speed.”

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images. Rather, it’s meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that’s coming later, as it needs some more time to be refined.

Midjourney introduces first new image generation model in over a year Read More »

gemini-“coming-together-in-really-awesome-ways,”-google-says-after-2.5-pro-release

Gemini “coming together in really awesome ways,” Google says after 2.5 Pro release


Google’s Tulsee Doshi talks vibes and efficiency in Gemini 2.5 Pro.

Google was caught flat-footed by the sudden skyrocketing interest in generative AI despite its role in developing the underlying technology. This prompted the company to refocus its considerable resources on catching up to OpenAI. Since then, we’ve seen the detail-flubbing Bard and numerous versions of the multimodal Gemini models. While Gemini has struggled to make progress in benchmarks and user experience, that could be changing with the new 2.5 Pro (Experimental) release. With big gains in benchmarks and vibes, this might be the first Google model that can make a dent in ChatGPT’s dominance.

We recently spoke to Google’s Tulsee Doshi, director of product management for Gemini, to talk about the process of releasing Gemini 2.5, as well as where Google’s AI models are going in the future.

Welcome to the vibes era

Google may have had a slow start in building generative AI products, but the Gemini team has picked up the pace in recent months. The company released Gemini 2.0 in December, showing a modest improvement over the 1.5 branch. It only took three months to reach 2.5, meaning Gemini 2.0 Pro wasn’t even out of the experimental stage yet. To hear Doshi tell it, this was the result of Google’s long-term investments in Gemini.

“A big part of it is honestly that a lot of the pieces and the fundamentals we’ve been building are now coming together in really awesome ways, ” Doshi said. “And so we feel like we’re able to pick up the pace here.”

The process of releasing a new model involves testing a lot of candidates. According to Doshi, Google takes a multilayered approach to inspecting those models, starting with benchmarks. “We have a set of evals, both external academic benchmarks as well as internal evals that we created for use cases that we care about,” she said.

Credit: Google

The team also uses these tests to work on safety, which, as Google points out at every given opportunity, is still a core part of how it develops Gemini. Doshi noted that making a model safe and ready for wide release involves adversarial testing and lots of hands-on time.

But we can’t forget the vibes, which have become an increasingly important part of AI models. There’s great focus on the vibe of outputs—how engaging and useful they are. There’s also the emerging trend of vibe coding, in which you use AI prompts to build things instead of typing the code yourself. For the Gemini team, these concepts are connected. The team uses product and user feedback to understand the “vibes” of the output, be that code or just an answer to a question.

Google has noted on a few occasions that Gemini 2.5 is at the top of the LM Arena leaderboard, which shows that people who have used the model prefer the output by a considerable margin—it has good vibes. That’s certainly a positive place for Gemini to be after a long climb, but there is some concern in the field that too much emphasis on vibes could push us toward models that make us feel good regardless of whether the output is good, a property known as sycophancy.

If the Gemini team has concerns about feel-good models, they’re not letting it show. Doshi mentioned the team’s focus on code generation, which she noted can be optimized for “delightful experiences” without stoking the user’s ego. “I think about vibe less as a certain type of personality trait that we’re trying to work towards,” Doshi said.

Hallucinations are another area of concern with generative AI models. Google has had plenty of embarrassing experiences with Gemini and Bard making things up, but the Gemini team believes they’re on the right path. Gemini 2.5 apparently has set a high-water mark in the team’s factuality metrics. But will hallucinations ever be reduced to the point we can fully trust the AI? No comment on that front.

Don’t overthink it

Perhaps the most interesting thing you’ll notice when using Gemini 2.5 is that it’s very fast compared to other models that use simulated reasoning. Google says it’s building this “thinking” capability into all of its models going forward, which should lead to improved outputs. The expansion of reasoning in large language models in 2024 resulted in a noticeable improvement in the quality of these tools. It also made them even more expensive to run, exacerbating an already serious problem with generative AI.

The larger and more complex an LLM becomes, the more expensive it is to run. Google hasn’t released technical data like parameter count on its newer models—you’ll have to go back to the 1.5 branch to get that kind of detail. However, Doshi explained that Gemini 2.5 is not a substantially larger model than Google’s last iteration, calling it “comparable” in size to 2.0.

Gemini 2.5 is more efficient in one key area: the chain of thought. It’s Google’s first public model to support a feature called Dynamic Thinking, which allows the model to modulate the amount of reasoning that goes into an output. This is just the first step, though.

“I think right now, the 2.5 Pro model we ship still does overthink for simpler prompts in a way that we’re hoping to continue to improve,” Doshi said. “So one big area we are investing in is Dynamic Thinking as a way to get towards our [general availability] version of 2.5 Pro where it thinks even less for simpler prompts.”

Gemini models on phone

Credit: Ryan Whitwam

Google doesn’t break out earnings from its new AI ventures, but we can safely assume there’s no profit to be had. No one has managed to turn these huge LLMs into a viable business yet. OpenAI, which has the largest user base with ChatGPT, loses money even on the users paying for its $200 Pro plan. Google is planning to spend $75 billion on AI infrastructure in 2025, so it will be crucial to make the most of this very expensive hardware. Building models that don’t waste cycles on overthinking “Hi, how are you?” could be a big help.

Missing technical details

Google plays it close to the chest with Gemini, but the 2.5 Pro release has offered more insight into where the company plans to go than ever before. To really understand this model, though, we’ll need to see the technical report. Google last released such a document for Gemini 1.5. We still haven’t seen the 2.0 version, and we may never see that document now that 2.5 has supplanted 2.0.

Doshi notes that 2.5 Pro is still an experimental model. So, don’t expect full evaluation reports to happen right away. A Google spokesperson clarified that a full technical evaluation report on the 2.5 branch is planned, but there is no firm timeline. Google hasn’t even released updated model cards for Gemini 2.0, let alone 2.5. These documents are brief one-page summaries of a model’s training, intended use, evaluation data, and more. They’re essentially LLM nutrition labels. It’s much less detailed than a technical report, but it’s better than nothing. Google confirms model cards are on the way for Gemini 2.0 and 2.5.

Given the recent rapid pace of releases, it’s possible Gemini 2.5 Pro could be rolling out more widely around Google I/O in May. We certainly hope Google has more details when the 2.5 branch expands. As Gemini development picks up steam, transparency shouldn’t fall by the wayside.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Gemini “coming together in really awesome ways,” Google says after 2.5 Pro release Read More »

deepmind-has-detailed-all-the-ways-agi-could-wreck-the-world

DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two are only covered briefly.

table of AGI risks

The four categories of AGI risk, as determined by DeepMind.

Credit: Google DeepMind

The four categories of AGI risk, as determined by DeepMind. Credit: Google DeepMind

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.

DeepMind has detailed all the ways AGI could wreck the world Read More »

samsung-turns-to-china-to-boost-its-ailing-semiconductor-division

Samsung turns to China to boost its ailing semiconductor division

Samsung has turned to Chinese technology groups to prop up its ailing semiconductor division, as it struggles to secure big US customers despite investing tens of billions of dollars in its American manufacturing facilities.

The South Korean electronics group revealed last month that the value of its exports to China jumped 54 percent between 2023 and 2024, as Chinese companies rush to secure stockpiles of advanced artificial intelligence chips in the face of increasingly restrictive US export controls.

In one previously unreported deal, Samsung last year sold more than three years’ supply of logic dies—a key component in manufacturing AI chips—to Kunlun, the semiconductor design subsidiary of Chinese tech group Baidu, according to people familiar with the matter.

But the increasing importance of its China sales to Samsung comes as it navigates growing trade tensions between Washington and Beijing over the development of sensitive technologies.

The South Korean tech giant announced last year that it was making a $40 billion investment in expanding its advanced chip manufacturing and packaging facilities in Texas, boosted by up to $6.4 billion in federal subsidies.

But Samsung’s contract chipmaking business has struggled to secure big US customers, bleeding market share to Taiwan Semiconductor Manufacturing Co, which is investing “at least” $100 billion in chip fabrication plants in Arizona.

“Samsung and China need each other,” said CW Chung, joint head of Apac equity research at Nomura. “Chinese customers have become more important for Samsung, but it won’t be easy to do business together.

Samsung has also fallen behind local rival SK Hynix in the booming market for “high bandwidth memory,” another crucial component in AI chips. As the leading supplier of HBMs for use by Nvidia, SK Hynix’s quarterly operating profit last year surpassed that of Samsung for the first time in the two companies’ history.

“Chinese companies don’t even have a chance to buy SK Hynix’s HBM because the supply is all bought out by the leading AI chip producers like Nvidia, AMD, Intel and Broadcom,” said Jimmy Goodrich, senior adviser for technology analysis to the Rand Corporation research institute.

Samsung turns to China to boost its ailing semiconductor division Read More »