Author name: Mike M.

Google I/O Day

What did Google announce on I/O day? Quite a lot of things. Many of them were genuinely impressive. Google is secretly killing it on the actual technology front.

Logan Kilpatrick (DeepMind): Google’s progress in AI since last year:

– The world’s strongest models, on the Pareto frontier

– Gemini app: has over 400M monthly active users

– We now process 480T tokens a month, up 50x YoY

– Over 7M developers have built with the Gemini API (4x)

Much more to come still!

I think? It’s so hard to keep track. There’s really a lot going on right now, not that most people would have any idea. Instead of being able to properly dig into each of these exciting things, I’m scrambling to get to them all at once.

Google AI: We covered a LOT of ground today. Fortunately, our friends at @NotebookLM put all of today’s news and keynotes into a notebook. This way, you can listen to an audio overview, create a summary, or even view a Mind Map of everything from #GoogleIO 2025.

That’s actually a terrible mind map, it’s missing about half of the things.

As in, you follow their CEO’s link to a page that tells you everything that happened, and it’s literally a link bank to 27 other articles. I did not realize one could fail marketing forever this hard, and this badly. I have remarkably little idea, given how much effort I am willing to put into finding out, what their products can do.

The market seems impressed, with Google outperforming, although the timing of it all was a little weird. I continue to be deeply confused about what the market is expecting, or rather not expecting, out of Google.

Ben Thompson has a gated summary post, Reuters has a summary as well.

I share Ben’s feeling that I’m coming away less impressed than I should be, because so many things were lost in the shuffle. There’s too much stuff here. Don’t announce everything at once like this if you want us to pay attention. And he’s right to worry that it’s not clear that Google, despite doing all the things, can develop compelling products.

I do think it can, though. And I think it’s exactly right to currently produce a bunch of prototypical not-yet-compelling products that aren’t compelling because they aren’t good enough yet… and then later make them good enough.

Except that you need people to actually then, you know, realize the products exist.

This post covers what I could figure out on a deadline. As for why I didn’t simply give this a few more days, well, I had a reason.

  1. The TLDR.

  2. Flow, Veo 3 and Imagen 4.

  3. Gmail Integration That’s Actually Good?

  4. Gemini 2.5 Flash.

  5. Gemma 3n.

  6. Gemini Diffusion.

  7. Jules.

  8. We’re in Deep Research.

  9. Google Search ‘AI Mode’.

  10. AI Shopping.

  11. Agent Mode.

  12. Project Astra or is it Google Live?

  13. Android XR Glasses.

  14. Gemini For Your Open Tabs In Chrome.

  15. Google Meet Automatic Translation.

  16. We Have Real 3D At Home, Oh No.

  17. You Will Use the AI.

  18. Our Price Cheap.

  19. What To Make Of All This.

The TLDR

Or the ‘too many announcements, lost track.’

Google announced:

  1. Veo 3, which generates amazing eight second videos now with talk and sound.

  2. Flow, designed to tie that into longer stuff, but that doesn’t work right yet.

  3. Various new GMail and related integrations and other ways to spread context.

  4. Gemini 2.5 Flash, and Gemini 2.5 Pro Deep Thinking. They’re good, probably.

  5. Gemma 3n, open source, runs on phones with 2GB of RAM.

  6. Gemini Diffusion as a text model, very intriguing but needs work.

  7. Jules, their answer to Codex, available for free.

  8. They’re going to let you go Full Agent, in Agent Mode, in several places.

  9. Gemini using your open tabs as context, available natively in Chrome.

  10. AI Search, for everyone, for free, as a search option, including a future agent mode and a specialized shopping mode.

  11. Automatic smooth translation for real-time talk including copying tone.

  12. A weird Google Beam thing where you see people in 3D while talking.

  13. They did an Android XR demo, but it’s going to be a while.

  14. For now you use your phone camera for a full Google Live experience, it’s good.

  15. Their new premium AI subscription service is $250/month.

A lot of it is available now; some of it will take a few months. Some of it is free, some of it isn’t, or stops being free after a sample. Some of it is clearly good, some is still buggy, some we don’t know yet. It’s complicated.

Also I think there was a day two?

Flow, Veo 3 and Imagen 4

The offering that got everyone excited and went viral was Veo 3.

They also updated their image generation to Imagen 4 and it’s up to 2k resolution with various improvements and lots of ability to control details. It’s probably pretty good but frankly no one cares.

Did you want an eight second AI video, now with sound, maybe as something you could even extend? They got you. We can talk (cool video). Oh, being able to talk but having nothing to say.

Sundar Pichai (CEO Google): Veo 3, our SOTA video generation model, has native audio generation and is absolutely mindblowing.

For filmmakers + creatives, we’re combining the best of Veo, Imagen and Gemini into a new filmmaking tool called Flow.

Ready today for Google AI Pro and Ultra plan subscribers.

People really love the new non-silent video generation capabilities.

Here’s Bayram Annakov having a guy wake up in a cold sweat. Here’s Google sharing a user extending a video of an eagle carrying a car. Here’s fofr making a man run while advertising replicate, which almost works, and also two talking muffins which totally worked. Here’s Pliny admiring the instruction handling.

And here’s Pliny somewhat jailbreaking it, with videos to show for it. Except, um, Google, why do any of these require jailbreaks? They’re just cool eight second videos. Are they a little NSFW? I mean sure, but we’re strictly (if aggressively) PG-13 here, complete with exactly one F-bomb. I realize this is a negotiation, I realize why we might not want to go to R, but I think refusing to make any of these is rather shameful behavior.

I would say that Flow plus Veo 3 is the first video generation product that makes me think ‘huh, actually that’s starting to be cool.’ Coherence is very strong, you have a lot of tools at your disposal, and sound is huge. They’re going to give you the power to do various shots and virtual camera movements.

I can see actually using this, or something not too different from this. Or I can see someone like Primordial Soup Labs, which formed a partnership with DeepMind, creating an actually worthwhile short film.

Steven McCulloch: Veo 3 has blown past a new threshold of capability, with the ability to one-shot scenes with full lip sync and background audio. What used to be a 4-step workflow with high barrier to entry has been boiled down into a single, frictionless prompt.

This is huge.

They also refer to their music sandbox, powered by Lyria 2, but there’s nothing to announce at this time.

They’re launching SynthID Detector, a tool to detect AI-generated content.

They remind us of Google Vids to turn your slides into videos, please no. Don’t. They’re also offering AI avatars in Vids. Again, please, don’t, what fresh hell is this.

Also there’s Stitch to generate designs and UIs from text prompts?

Gmail Integration That’s Actually Good?

I keep waiting for it, it keeps not arriving, is it finally happening soon?

Sundar Pichai: With personal smart replies in Gmail, you can give Gemini permission to pull in details from across your Google apps and write in a way that sounds like you.

Rolling out in the coming weeks to subscribers.

I’ve been disappointed too many times at this point, so I will believe it when I see it.

The part that I want most is the pulling in of the details, the ability to have the AI keep track of and remind me of the relevant context, including pulling from Google Drive which in turn means you can for example pull from Obsidian since it’s synced up. They’re also offering ‘source-grounded writing help’ next quarter in Google Docs (but not GMail?) where you have it pull only from particular sources, which is nice if it’s easy enough to use.

I want GMail to properly populate Calendar rather than its current laughably silly hit-and-miss actions (oh look, a movie that runs from 3-4 on Thursday, that’s how that works!), to pull out and make sure I don’t miss key information, to remind me of dropped balls and so on.

They’re offering exactly this with ‘inbox cleanup,’ as in ‘delete all of my unread emails from The Groomed Paw from the last year.’ That’s a first step. We need to kick that up at least a notch, starting with things such as ‘set up an AI filter so I never see another damned Groomed Paw email again unless it seems actually urgent or offers a 50% or bigger sale’ and ‘if Sarah tells me if she’s coming on Friday ping me right away.’

Another offering that sounds great is ‘fast appointment scheduling integrated into GMail,’ in the video it’s a simple two clicks which presumably implies you’ve set things up a lot already. Again, great if it works, but it has to really work and know your preferences and adjust to your existing schedule. If it also reads your other emails and other context to include things not strictly in your calendar, now we’re really talking.

Do I want it to write the actual emails after that? I mean, I guess, sometimes, if it’s good enough. Funnily enough, when that happens, that’s probably exactly the times I don’t want it to sound like me. If I wanted to sound like me I could just write the email. The reason I want the AI to write it is because I need to be Performing Class, or I want to sound like a Dangerous Professional a la Patio11, or I want to do a polite formality. Or when I mostly need to populate the email with a bunch of information.

Of course, if it gets good enough, I’ll also want it to do some ‘sound like me’ work too, such as responding to readers asking questions with known answers. Details are going to matter a ton, and I would have so many notes if I felt someone was listening.

In any case, please, I would love a version of this that’s actually good in those other ways. Are the existing products good enough I should be using them? I don’t know. If there’s one you use that you think I’d want, share in the comments.

Gemini 2.5 Flash

I/O Day mostly wasn’t about the actual models or the API, but we do have some incremental changes here thrown into the fray.

Gemini 2.5 Flash is technically still in preview, but it’s widely available including in the Gemini app, and I’d treat it as de facto released. It’s probably the best fast and cheap model, and the best ‘fast thinking’ model if you use that mode.
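
If you want to poke at it through the API rather than the app, the call itself is simple with Google’s Gen AI Python SDK. Here is a minimal sketch; the preview model id is my guess at the current one and may well have been renamed by the time you read this:

```python
# pip install google-genai
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    # Assumption: the preview model id current around I/O; check the live model
    # list for whatever the GA name ends up being.
    model="gemini-2.5-flash-preview-05-20",
    contents="In three bullets, what is a 'fast thinking' model good for?",
)
print(response.text)
```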

Also, yes, of course Pliny pwned it, why do we even ask, if you want to use it you set it as the system prompt.

Pliny: ah forgot to mention, prompt is designed to be set as system prompt. a simple obfuscation of any trigger words in your query should be plenty, like “m-d-m-a” rather than “mdma”

Sundar Pichai (CEO Google): Our newest Gemini 2.5 Flash is better on nearly every dimension: reasoning, multimodality, code, long context. Available for preview in the Gemini app, AI Studio and Vertex AI.

And with Deep Think mode, Gemini 2.5 Pro is getting better, too. Available to trusted testers.

Demis Hassabis: Gemini 2.5 Flash is an amazing model for its speed and low-cost.

Logan Kilpatrick: Gemini 2.5 Flash continues to push the pareto frontier, so much intelligence packed into this model, can’t wait for GA in a few weeks!

Peter Wildeford: LLMs are like parrots except the parrots are very good at math

On that last one, the light blue is Deep Thinking, dark blue is regular 2.5 Pro.

Peter Wildeford: It’s pretty confusing that the graphs compare “Gemini 2.5 Pro” to “Gemini 2.5 Pro”

Alex Friedland: The fundamental issue is that numbers are limited and they might run out.

Gemini 2.5 Flash is in second place on (what’s left of) the Arena leaderboard, behind only Gemini 2.5 Pro.

Hasan Can: I wasn’t going to say this at first, because every time I praised one of Google’s models, they ruined it within a few weeks but the new G.2.5 Flash is actually better than the current 2.5 Pro in the Gemini app. It reminds me of the intelligence of the older 2.5 Pro from 03.25.

The Live API will now have audio-visual input and native audio output for dialogue, with the ability to steer tone, accent and style of speaking, the ability to respond to the user’s tone of voice, as well as tool use. They’re also adding computer use to the API, and native SDK support for Model Context Protocol (MCP).

There’s a white paper on how they made Gemini secure, and their safeguards, but today is a day that I have sympathy for the ‘we don’t have time for that’ crowd and I’m setting it aside for later. I’ll circle back.

Gemma 3n

Gemma 3n seems to be a substantial improvement in Google’s open model on-device performance. I don’t know whether it is better than other open alternatives; there’s always a bizarre ocean of different models claiming to be good, but I would be entirely unsurprised if this was very much state of the art.

Google AI Developers: Introducing Gemma 3n, available in early preview today.

The model uses a cutting-edge architecture optimized for mobile on-device usage. It brings multimodality, super fast inference, and more.

Key features include:

-Expanded multimodal understanding with video and audio input, alongside text and images

-Developer-friendly sizes: 4B and 2B (and many in between!)

-Optimized on-device efficiency for 1.5x faster response on mobile compared to Gemma 3 4B

Build live, interactive apps and sophisticated audio-centric experiences, including real-time speech transcription, translation, and rich voice-driven interactions

Gemma 3n leverages a Google DeepMind innovation called Per-Layer Embeddings (PLE) that delivers a significant reduction in RAM usage. While the raw parameter count is 5B and 8B, this innovation allows you to run larger models on mobile devices or live-stream from the cloud, with a memory overhead comparable to a 2B and 4B model, meaning the models can operate with a dynamic memory footprint of just 2GB and 3GB. Learn more in our documentation.

Oh, and also they just added MedGemma for health care, SignGemma for ASL and DolphinGemma for talking to dolphins. Because sure, why not?

Gemini Diffusion

This quietly seems like it could turn out to be a really big deal. We have an actually interesting text diffusion model. It can do 2k tokens/second.
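
For intuition only, and very much not Google’s actual implementation: text diffusion models generate by refining a whole block of token positions in parallel over a few passes, rather than emitting one token at a time, which is where the speed comes from. A toy sketch of that decoding loop:

```python
import random

# Toy illustration of masked text "diffusion" decoding: start fully masked, then
# over a few refinement steps fill in batches of positions in parallel. Not
# Google's method; just to show why a handful of parallel passes beats one pass
# per token for speed.
VOCAB = ["the", "eagle", "carried", "a", "car", "away", "."]
MASK = "<mask>"

def denoise_step(tokens, fraction):
    """Pretend model call: unmask a chunk of the still-masked positions at once."""
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    if not masked:
        return tokens
    k = max(1, int(len(masked) * fraction))
    for i in random.sample(masked, k):
        tokens[i] = random.choice(VOCAB)  # a real model would predict these jointly
    return tokens

def generate(length=16, steps=4):
    tokens = [MASK] * length
    for _ in range(steps):  # a few passes instead of `length` sequential steps
        tokens = denoise_step(tokens, fraction=0.5)
    return " ".join(t for t in tokens if t != MASK)

print(generate())
```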

Alexander Doria: Gemini Diffusion does pass honorably my nearly impossible OCR correction benchmark: Plainly, “can you correct the OCR of this text.”

Meanwhile, here’s a cool finding, ‘what like it’s hard’ department:

Earlence: Gemini diffusion is cool! Really fast and appears capable in coding tasks. But what is interesting is that one of @elder_plinius jailbreaks (for 2.5) appears to have worked on the diffusion model as well when I used it to ask about Anthrax.

Jules

Remember when I spent a day covering OpenAI’s Codex?

Well, Google announced Jules, its own AI coding agent. Context-aware, repo-integrated, ready to ship features. The quick video looks like a superior UI. But how good is it? So far I haven’t seen much feedback on that.

So instead of a detailed examination, that’s all I have for you on this right now. Jules exists, it’s Google’s answer to Codex, we’ll have to see if it is good.

But, twist! It’s free. Right now it’s reporting heavy use (not a shock) so high latency.

We’re in Deep Research

In addition to incorporating Gemini 2.5, Deep Research will soon let you connect your Google Drive and GMail, choose particular sources, and integrate with Canvas.

This is pretty exciting – in general any way to get deep dives to use your extensive context properly is a big gain, and Google is very good with long context.

If you don’t want to wait for Deep Research, you can always Deep Think instead. Well, not yet unless you’re a safety researcher (and if you are, hit them up!) but soon.

JJ Hughes notes how exciting it will be to get true long context into a top level deep reasoning model to unlock new capabilities such as for lawyers like himself, but notes the UI remains terrible for this.

Also, remember NotebookLM? There’s now An App For That and it’s doing well.

Google Search ‘AI Mode’

Google Search AI Overviews have been a bit of a joke for a while. They’re the most common place people interact with AI, and yet they famously make obvious stupid mistakes, including potentially harmful ones, constantly. That’s been improving, and now with 2.5 powering them it’s going to improve again.

AI Mode is going to be (future tense because it’s not there for me yet) something different from Overviews, but one might ask isn’t it the same as using Gemini? What’s the difference? Is this a version of Perplexity (which has fallen totally out of my rotation), or what?

They’re doing a terrible job explaining any of that, OpenAI is perhaps secretly not the worst namer of AI services.

Sundar Pichai: AI Mode is rolling out to everyone in the US. It’s a total reimagining of Search with more advanced reasoning so you can ask longer, complex queries.

AI Overviews are now used by 1.5B people a month, in 200+ countries and territories.

And Gemini 2.5 is coming to both this week.

My understanding is that the difference is that AI Mode in search will have better integrations with various real-time information systems, especially shopping and other commonly accessed knowledge, can run a lot of Google searches quickly to generate its context, and also it is free.
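
As a rough mental model, and definitely not Google’s actual pipeline, the ‘lots of quick searches to build context’ pattern looks something like this, with the search and synthesis calls as hypothetical stubs:

```python
# Toy sketch of the "query fan-out" pattern. search() and synthesize() are
# hypothetical stand-ins for a search backend and a model call.

def search(query):
    """Hypothetical fast web search; returns snippet strings."""
    return [f"snippet about {query!r}"]

def synthesize(question, context):
    """Hypothetical model call that writes one answer grounded in the snippets."""
    return f"Answer to {question!r}, grounded in {len(context)} snippets."

def ai_mode(question):
    # Fan out: split the question into narrower sub-queries.
    sub_queries = [f"{question} reviews", f"{question} prices", f"{question} near me"]
    # Run many quick searches to assemble context.
    context = [s for q in sub_queries for s in search(q)]
    # Synthesize a single answer from everything gathered.
    return synthesize(question, context)

print(ai_mode("best carry-on luggage"))
```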

They plan on merging ‘Project Mariner’ or ‘Agent Mode’ into it as well, and you’ll have the option to do a ‘deep search.’ They say they’re starting with ‘event tickets, restaurant reservations and local appointments.’ I actually think this is The Way. You don’t try to deploy an agent in general. It’s not time for that yet. You deploy an agent in specific ways where you know it works, on a whitelisted set of websites where you know what it’s doing and that this is safe. You almost don’t notice there’s an agent involved, it feels like using the Web but increasingly without the extra steps.

If they do a decent job of all this, ‘Google Search AI Mode’ is going to be the actually most useful way to do quite a lot of AI things. It won’t be good for jobs that require strong intelligence, but a large percentage of tasks are much more about search. Google has a huge edge there if they execute, including in customization.

They also plan to incorporate AI Search Mode advances directly into regular Google Search, at least in the overviews and I think elsewhere as well.

What I worry about here is it feels like multiple teams fighting over AI turf. The AI Search team is trying to do things that ‘naturally’ fall to Gemini and also to regular Search, and Gemini is trying to do its own form of search, and who knows what the Overviews team is thinking, and so on.

AI Shopping

An important special case for Google Search AI Mode (beware, your computer might be accessing GSAM?) will (in a few months) be Shopping With Google AI Mode. I don’t even know what to call anything anymore. Can I call it Google Shopping? Gemini Shopping?

It actually seems really cool, again if executed well, allowing you to search all the sites at once in an AI-powered way, giving you visuals, asking follow ups. It can track prices and then automatically buy when the price is right.

They have a ‘try it on’ that lets you picture yourself in any of the clothing, which is rolling out now to search labs. Neat. It’s double neat if it automatically only shows you clothing that fits you.

Agent Mode

Sundar Pichai (CEO Google): Agent Mode in the @Geminiapp can help you get more done across the web – coming to subscribers soon.

Plus a new multi-tasking version of Project Mariner is now available to Google AI Ultra subscribers in the US, and computer use capabilities are coming to the Gemini API.

It will also use MCP, which enshrines MCP as a standard across labs.
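
For reference, exposing a tool over MCP looks roughly like this with the reference Python SDK. A minimal sketch, and the tool itself is a made-up stub:

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

# Minimal MCP server using the reference Python SDK. The tool below is a made-up
# stub, just to show the shape of what an agent like Mariner could call.
mcp = FastMCP("listings-demo")

@mcp.tool()
def list_apartment_tours(zip_code: str) -> list[str]:
    """Return available tour slots for a zip code (stub data for illustration)."""
    return ["Saturday 10:00", "Sunday 14:00"]

if __name__ == "__main__":
    mcp.run()  # defaults to serving over stdio
```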

The example here is to use ‘agent mode’ to find and go through apartment listings and arrange tours. They say they’re bringing this mode to the Gemini app and planning on incorporating it into Chrome.

I like the idea of their feature ‘teach and repeat.’ As in, you do a task once, and it learns from what you did so it can do similar tasks for you in the future.

Alas, early reports are that Project Mariner is not ready for prime time.

As an example, Bayram Annakov notes it failed on a simple task. That seems to be the norm.

Project Astra or is it Google Live?

You now can get this for free in Android and iOS, which means sharing live camera feeds while you talk to Gemini and it talks back, now including things like doing Google searches on your behalf, calling up YouTube videos and so on, even making its own phone calls.

I’m not even sure what exactly Project Astra is at this point. I’ve been assuming I’ve been using it when I put Gemini into live video mode, so now it’s simply Google Live, but I’m never quite sure?

Rowan Cheung: [Google] revamped project Astra with native audio dialogue, UI control, content retrieval, calling, and shopping.

The official video he includes highlights YouTube search, GMail integration and the ability to have Gemini call a shop (in the background while you keep working) and ask what they have in stock. They’re calling it ‘action intelligence.’

In another area they talk about extending Google Live and Project Astra into search. They’re framing this as you point the camera at something and then you talk and it generates a search, including showing you search results traditional Google style. So it’s at least new in that it can make that change.

Android XR Glasses

If you want to really unlock the power of seeing your screen, you want the screen to see what you see. Thus, Android XR Glasses. That’s a super exciting idea and a long time coming. And we have a controlled demo.

But also, not so fast. We’re talking 2026 at the earliest, probably 18+ months, and we have no idea what they are going to cost. I also got strong ‘not ready for prime time’ vibes from the demo, more of the ‘this is cool in theory but won’t work in practice.’ My guess is that if I had these in current form, I’d almost entirely use them for Google Live purposes and maybe chatting with the AI, and basically nothing else, unless we got better agentic AI that could work with various phone apps?

Gemini For Your Open Tabs In Chrome

There’s another new feature where you can open up Gemini in Chrome and ask questions not only about the current page but about all your other open tabs, which are automatically put into context. It’s one of those long-time-coming ideas, again if it works well. This one should be available by now.

This is one of many cases where it’s going to take some getting used to before you actually think to use it when it’s the right modality, and have the confidence to turn to it. But if so, it seems great.

Google Meet Automatic Translation

It’s hard to tell how good translation is from a sample video, but I find it credible that this is approaching perfect and means you can pull off free-flowing conversations across languages, as long as you don’t mind a little being lost in translation. They’re claiming they are preserving things like tone of voice.

Sundar Pichai: Real-time speech translation directly in Google Meet matches your tone and pattern so you can have free-flowing conversations across languages

Launching now for subscribers. ¡Es mágico!

Rob Haisfield: Now imagine this with two people wearing AR glasses in person!

They show this in combination with their 3D conferencing platform Google Beam, but the two don’t seem at all related. Translation is for audio, two dimensions are already two more than you need.

Relatedly, Gemini is offering to do automatic transcripts including doing ‘transcript trim’ to get rid of filler words, or one-click balancing your video’s sound.

We Have Real 3D At Home, Oh No

They’re calling it Google Beam, downwind of Project Starline.

This sounds like it is primarily about 3D video conferencing or some form of AR/VR, or letting people move hands and such around like they’re interacting in person?

Sundar Pichai: Google Beam uses a new video model to transform 2D video streams into a realistic 3D experience — with near perfect headtracking, down to the millimeter, and at 60 frames per second, all in real-time.

The result is an immersive conversational experience. HP will share more soon.

It looks like this isn’t based on the feed from one camera, but rather six, and requires its own unique devices.

This feels like a corporate ‘now with real human physical interactions, fellow humans!’ moment. It’s not that you couldn’t turn it into something cool, but I think you’d have to take it pretty far, and by that I mean I think you’d need haptics. If I can at least shake your hand or hug you, maybe we’ve got something. Go beyond that and the market is obvious.

Whereas the way they’re showing it seems to me to be the type of uncanny valley situation I very much Do Not Want. Why would anyone actually want this for a meeting, either of two people or more than two? I’ve never understood why you would want a ‘virtual meeting’ where people move around in 3D in virtual chairs, or where you seem to be moving through space. Not having to navigate any of that is one of the ways Google Meet is better than in person.

I can see it if you were using it to do something akin to a shared VR space, or a game, or an intentionally designed viewing experience including potentially watching a sporting event. But for the purposes they are showing off, 2D isn’t a bug. It’s a feature.

On top of that, this won’t be cheap. We’re likely talking $15k-$30k per unit at first for the early devices from HP that you’ll need. Hard pass. But even Google admits the hardware devices aren’t really the point. The point is that you can beam something in one-to-many mode, anywhere in the world, once they figure out what to do with that.

You Will Use the AI

Google’s AI use is growing fast. Really fast.

Sundar Pichai: The world is adopting AI faster than ever before.

This time last year we were processing 9.7 trillion tokens a month across our products and APIs.

Today, that number is 480 trillion. That’s a 50X increase in just a year. 🤯

Gallabytes: I wonder how this breaks down flash versus pro

Peter Wildeford: pinpoint the exact moment Gemini became good

I had Claude estimate similar numbers for other top AI labs. At this point Claude thinks Google is probably roughly on par with OpenAI on tokens processed, and well ahead of everyone else.

But of course you can get a lot of tokens when you throw your AI into every Google search whether the user likes it or not. So the more meaningful number is likely the 400 million monthly active users for Gemini, with usage up 45% in the 2.5 era. I don’t think the numbers for different services are all that comparable, but note that ChatGPT’s monthly user count is 1.5 billion, about half of whom use it in any given week. The other half must be having some strange weeks, given most of them aren’t exactly switching over to Claude.

Our Price Cheap

Google offers a lot of things for free. That will also be true in AI. In particular, AI Search will stay free, as will basic functionality in the Gemini app. But if you want to take full advantage, yep, you’re going to pay.

They have two plans: The Pro plan at $20/month, and the Ultra plan at $250/month, which includes early access to new features including Agent Mode and much higher rate limits. This is their Ultra pitch.

Hensen Juang: Wait they are bundling YouTube premium?

MuffinV: At this point they will sell every google product as a single subscription.

Hensen Juang: This is the way.

It is indeed The Way. Give me the meta subscription. Google Prime.

For most people, the Pro plan looks like it will suffice. Given everything Google is offering, a lot of you should be giving up your $20/month, even if that’s your third $20/month after Claude and ChatGPT. The free plan is actually pretty solid too, if you’re not going to be that heavy a user because you’re also using the competition.

The $250/month Ultra plan seems like it’s not offering that much extra. The higher rate limits are nice but you probably won’t run into them often. The early access is nice, but the early access products are mostly rough around the edges. It certainly isn’t going to be ‘ten times better,’ and it’s a much worse ‘deal’ than the Pro $20/month plan. But once again, looking at relative prices is a mistake. They don’t matter.

What matters is absolute price versus absolute benefit. If you’re actually getting good use out of the extra stuff, it can easily be well in excess of the $250/month.

If your focus is video, fofr reports you get 12k credits per month, and it costs 150 credits per 8 second Veo 3 video, so with perfect utilization you pay $0.39 per second of video, plus you get the other features. A better deal, if you only want video, is to buy the credits directly, at about $0.19 per second. That’s still not cheap, but it’s a lot better, and this does seem like a big quality jump.
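
For the record, here’s the arithmetic behind that per-second figure, using the numbers above:

```python
# Back-of-the-envelope math for the Ultra plan's Veo 3 pricing, per the figures quoted above.
ultra_price = 250            # dollars per month
credits_per_month = 12_000   # reported Ultra credit allowance
credits_per_video = 150      # one Veo 3 generation
seconds_per_video = 8

videos = credits_per_month / credits_per_video   # 80 videos a month
seconds = videos * seconds_per_video             # 640 seconds of footage
print(round(ultra_price / seconds, 2))           # -> 0.39 dollars per second
```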

Another key question is, how many iterations does it take to get what you want? That’s a huge determinant of real cost. $0.19 per second is nothing if it always spits out the final product.

For now I don’t see the $250/month being worth it for most people, especially without Project Mariner access. And as JJ Hughes says, add up all these top level subscriptions and pretty soon you’re talking real money. But I’d keep an eye.

Knud Berthelsen: There is so much and you know they will eventually integrate it in their products that actually have users. There needs to be something between the $20/month tier and the $250 for those of us who want an AI agent but not a movie studio.

What To Make Of All This

It’s a lot. Google is pushing ahead on all the fronts at once. The underlying models are excellent. They’re making it rain. It’s all very disjointed, and the vision hasn’t been realized, but there’s tons of potential here.

Pliny: Ok @GoogleDeepMind, almost there. If you can build a kick-ass agentic UI to unify everything and write a half-decent system prompt, people will call it AGI.

Justin Halford: They’re clearly the leading lab at this point.

Askwho: Veo 3’s hyped high-fidelity videos w/ sound are dazzling, but still feel like an advanced toy. Gemini Diffusion shows immense promise, though its current form is weak. Jules is tough to assess fully due to rate limits, but it’s probably the most impactful due to wide availability

Some people will call o3 AGI. I have little doubt some (more) people would call Google Gemini AGI if you made everything involved work as its best self and unified it all.

I wouldn’t be one of those people. Not yet. But yeah, interesting times.

Demis Hassabis does say that unification is the vision, to turn the Gemini app into a universal AI assistant, including combining Google Live for real time vision with Project Mariner for up to ten parallel agent actions.

Ben Thompson came away from all this thinking that the only real ‘products’ here were still Google Search and Google Cloud, and that those remain the only products at Google that truly matter or function. I get why one would come away with that impression, but again I don’t agree. I think that the other offerings won’t all hit, especially at first, but they’ll get better quickly as AI advances and as the productization and iterations fly by.

He has some great turns of phrase. Here, Ben points out that the problem with AI is that to use it well you have to think and figure out what to do. And if there’s one thing users tend to lack, it would be volition. Until Google can solve volition, the product space will largely go to those who do solve it, which often means startups.

Ben Thompson: Second, the degree to which so many of the demoes yesterday depend on user volition actually kind of dampened my enthusiasm for their usefulness.

It has long been the case that the best way to bring products to the consumer market is via devices, and that seems truer than ever: Android is probably going to be the most important canvas for shipping a lot of these capabilities, and Google’s XR glasses were pretty compelling (and, in my opinion, had a UX much closer to what I envision for XR than Meta’s Orion did).

Devices drive usage at scale, but that actually leaves a lot of room for startups to build software products that incorporate AI to solve problems that people didn’t know they had; the challenge will be in reaching them, which is to say the startup problem is the same as ever.

Google is doing a good job at making Search better; I see no reason to be worried about them making any other great product, even as the possibility of making something great with their models seems higher than ever. That’s good for startups!

I’m excited for it rather than worried, but yes, if you’re a startup, I would worry a bit.

What will we see tomorrow?

SilverStone is back with a beige PC case that looks just like your crappy old 486

SilverStone’s first ’80s throwback PC case started life as an April Fools’ joke, but the success of the FLP01 was apparently serious enough to merit a follow-up. The company brought another beige case to the Computex trade show this week, the vertically oriented FLP02 (via Tom’s Hardware).

If the original horizontally oriented FLP01 case called to mind a 386-era Compaq Deskpro, the FLP02 is a dead ringer for the kind of case you might have gotten for a generic 486 or early Pentium-era PC. That extends to having a Turbo button built into the front—on vintage PCs, this button could actually determine how fast the processor was allowed to run, though here, it’s actually a fan speed control instead. A lock on the front also locks the power switch in place to keep it from being flipped off accidentally, something else real vintage PCs actually did.

Despite its retro facade, the FLP02 is capable of fitting in even higher-end modern PC parts than the original FLP01. Front USB-A and USB-C ports are hidden behind a magnetic door on the front of the case, and its faux-5.25-inch floppy drives are just covers for drive bays that you could use for an optical drive or extra front I/O.

Despite its retro looks, the FLP02 still tucks away support for modern amenities like front-facing USB-A and USB-C ports. Credit: Future

On the inside, the case can fit full-size ATX motherboards and up to a 360 mm radiator for CPU cooling, and modern high-end GPUs like the GeForce RTX 5090 or 5080 should be able to fit inside.

SilverStone says the FLP02 will ship in Q3 or Q4 of this year and that US buyers should be able to get it for $220. You can, of course, buy a modern high-end PC case for much less money. But if this kind of nostalgia-bait didn’t move merchandise, companies wouldn’t keep indulging in it.

AMD’s $299 Radeon RX 9060 XT brings 8GB or 16GB of RAM to fight the RTX 5060

AMD didn’t provide much by way of performance comparisons, but it’s promising that the cards have the same number of compute units as AMD’s last-generation RX 7600 series. AMD says that RDNA 4 compute units are much faster than those used for RDNA 3, particularly in games with ray-tracing effects enabled. This helped make the Radeon RX 9070 cards generally as fast or faster than the RX 7900 XTX and 7900 XT series, despite having around two-thirds as many compute units. Sticking with 32 CUs for the 9060 series isn’t exciting on paper, but we should still see a respectable generation-over-generation performance bump. The RX 7600 series, by contrast, provided a pretty modest performance improvement compared to 2022’s Radeon RX 6650 XT.

AMD says that the cards’ total board power—the amount of power used by the entire graphics card, including the GPU itself, RAM, and other components—starts at 150 W for the 8GB card and 160 W for the 16GB card, with a maximum TBP of 182 W. That’s a shade higher than but generally comparable to the RTX 5060 and 5060 Ti, and (depending on where actual performance ends up) quite a bit more efficient than the RX 7600 series. This partly comes down to a more efficient 4nm TSMC manufacturing process, a substantial upgrade from the 6nm process used for the 7600 series.

It’s unusual for a GPU maker to define a TBP range—more commonly we’re just given a single default value. But this is in line with new settings we observed in our RX 9070 review; AMD officially supports a range of different user-selectable TBP numbers in its Catalyst driver package, and some GPU makers were shipping cards that used higher TBPs by default.

Higher power limits can increase performance, though usually the performance increase is disproportionately small compared to the increase in power draw. These power limits should also generally mean that most 9060 XTs can be powered with a single 8-pin power connector, rather than using multiple connectors or the 12-pin 12VHPWR/12V-2×6 connector.

2025 Hyundai Ioniq 9 first drive: Efficient, for a big one

Only the $58,995 Ioniq 9 S is available with a rear-wheel drive powertrain. In this case, one with 215 hp (160 kW) and 258 lb-ft (350 Nm) and a range of 325 miles (539 km) from the 110.3 kWh (gross) battery pack. All other trims feature twin motor all-wheel drive, but you give up little in the way of range.

The $62,765 SE and $68,320 SEL offer a combined 303 hp (226 kW) and 446 lb-ft (605 Nm) and 320 miles (515 km) of range, and the $71,250 Performance Limited, $74,990 Performance Calligraphy, and $76,490 Performance Calligraphy Design use a more powerful front motor to generate a total of 442 hp (315 kW) and 516 lb-ft (700 Nm), and a range of 311 miles (500 km).

The Ioniq 9’s interior loses some of the charm of the concept. Credit: Hyundai

While a short first drive is not the best place to evaluate an EV’s range efficiency, driven day to day in Eco mode, I wouldn’t be surprised if you were able to easily exceed 3 miles/kWh (20.7 kWh/100 km). Other drive modes include Normal, which uses the front motor much more often and therefore is markedly quicker than Eco; Sport, which has quite a lot of initial throttle tip-in and will head-toss your passengers if you have any; Terrain, first seen on the Ioniq 5 XRT; and Snow.

The ride is quite firm on surface streets but less so at highway speeds over seams and expansion gaps. As you start to corner faster you can expect to encounter understeer, but since this is a three-row SUV weighing between 5,507-6,008 lbs (2,498-2,725 kg), one has to wonder what else was expected. At sensible speeds, it’s easy to see out of and place it on the road, and if you’re stuck in a tailback with a couple of grumpy children in the back, it’s a calming enough environment to keep you from being over-stressed.

Hyundai has wisely priced the Ioniq 9 between the related Kia EV9 (which also uses the E-GMP platform) and EVs from premium OEMs like the Volvo EX90, Mercedes EQS SUV, or the aforementioned Rivian.

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed seems clear.

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them, refuting any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of 10 blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from search results to display answers in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s yearslong transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Space Force official: Commercial satellites can do a lot more than we thought

“So, we’re off working now with that program office to go start off a more commercial line,” Purdy said. “And when I say commercial in this particular aspect, just to clarify, this is accomplishing the same GSSAP mission. Our operators will fly the GSSAP system using the same ground systems and data they do now, but these would be using faster, commercial build times… and cheaper, less expensive parts in order to bring that together in a faster sense.”

An artist’s illustration of two of the Space Force’s GSSAP surveillance satellites, built by Northrop Grumman. Credit: US Space Force

The next-gen GSSAP spacecraft may not meet the same standards as the Space Force’s existing inspector satellites, but the change comes with benefits beyond lower costs and faster timelines. It will be unclassified and will be open to multiple vendors to build and launch space surveillance satellites, injecting some level of competition into the program. It will also be eligible for sales to other countries.

More for less with GPS

There’s another area where Purdy said the Space Force was surprised by what commercial satellite builders were offering. Last year, the Pentagon used a new “Quick Start” procurement model authorized by Congress to establish a program to bolster the GPS navigation network, which is run by the Space Force but relied upon by commercial users and private citizens around the world.

The Space Force has more than 30 GPS satellites in medium-Earth orbit (MEO) at an altitude of roughly 12,550 miles (20,200 kilometers). Purdy said the network is “vulnerable” because the constellation has a relatively small number of satellites, at least relative to the Space Force’s newest programs. In MEO, the satellites are within range of direct-ascent anti-satellite weapons. Many of the GPS satellites are aging, and the newer ones, built by Lockheed Martin, cost about $250 million apiece. With the Resilient GPS program, the Space Force aims to reduce the cost to $50 million to $80 million per satellite.

The satellites will be smaller than the GPS satellites flying today and will transmit a core set of signals. “We’re looking to add more resiliency and more numbers,” Purdy said.

“We actually didn’t think that we were going to get much, to be honest with you, and it was a surprise to us, and a major learning [opportunity] for us, learning last year that satellite prices had—they were low in LEO already, but they were lowering in MEO,” Purdy said. “So, that convinced us that we should proceed with it. The results have actually been more surprising and encouraging than we thought.

“The [satellite] buses actually bring a higher power level than our current program of record does, which allows us to punch through jamming in a better sense. We can achieve better results, we think, over time, going after these commercial buses,” Purdy said. “So that’s caused me to think, for our mainline GPS system, we’re actually looking at that for alternative ways to get after that.”

Maj. Gen. Stephen Purdy oversees the Space Force’s acquisition programs at the Pentagon. Credit: Jonathan Newton/The Washington Post via Getty Images

In September, the Space Force awarded four agreements to Astranis, Axient, L3Harris, and Sierra Space to produce design concepts for new Resilient GPS satellites. Astranis and Axient are relatively new to satellite manufacturing. Astranis is a pioneer in low-mass Internet satellites in geosynchronous orbit and a non-traditional defense contractor. Axient, acquired by a company named Astrion last year, has focused on producing small CubeSats.

The military will later select one or more of these companies to move forward with producing up to eight Resilient GPS satellites for launch as soon as 2028. Early planning is already underway for a follow-on set of Resilient GPS satellites with additional capabilities, according to the Space Force.

The experience with the R-GPS program inspired the Space Force to look at other mission areas that might be well served by a similar procurement approach, and the service settled on GSSAP as the next frontier.

Scolese, director of the NRO, said his agency is examining how to use commercial satellite constellations for other purposes beyond Earth imaging. This might include a program to employ commercially procured satellites for signals intelligence (SIGINT) missions, he said.

“It’s not just the commercial imagery,” Scolese said. “It’s also commercial RF (Radio Frequency, or SIGINT) and newer phenomenologies as where we’re working with that industry to go off and help advance those.”

Space Force official: Commercial satellites can do a lot more than we thought Read More »

openai-introduces-codex,-its-first-full-fledged-ai-agent-for-coding

OpenAI introduces Codex, its first full-fledged AI agent for coding

We’ve been expecting it for a while, and now it’s here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the sidebar in the ChatGPT web app. Users enter a prompt and then click either “code” to have it begin producing code, or “ask” to have it answer questions and advise.

Whenever it’s given a task, that task is performed in a distinct container that is preloaded with the user’s codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an “AGENTS.md” file in the repo with custom instructions, for example to contextualize and explain the codebase or to communicate coding standards and style practices for the project—kind of a README.md, but for AI agents rather than humans. A sketch of what such a file might look like is below.
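To make the idea concrete, here is a minimal, hypothetical AGENTS.md sketch. The project layout, commands, and rules below are invented for illustration and are not taken from OpenAI’s documentation; the only thing carried over from the article is the idea that the file holds free-form guidance an agent reads before working in the repo.

```
# AGENTS.md (hypothetical example)

## Project layout
- src/     application code
- tests/   unit tests, run with `npm test`

## Conventions
- Use 2-space indentation and single quotes.
- Every new function needs a matching unit test under tests/.

## Validating changes
- Run `npm run lint` and `npm test` before proposing a diff.
- Do not modify anything under vendor/.
```

In practice, the same kind of guidance you would put in a CONTRIBUTING.md for human collaborators tends to belong here, phrased as direct instructions to the agent.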

Codex is built on codex-1, a fine-tuned variation of OpenAI’s o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.

OpenAI introduces Codex, its first full-fledged AI agent for coding Read More »

meta-argues-enshittification-isn’t-real-in-bid-to-toss-ftc-monopoly-trial

Meta argues enshittification isn’t real in bid to toss FTC monopoly trial

Further, Meta argued that the FTC did not show evidence that users sharing friends-and-family content were shown more ads. Meta noted that it “does not profit by showing more ads to users who do not click on them,” so it only shows more ads to users who click ads.

Meta also insisted that there’s “nothing but speculation” showing that Instagram or WhatsApp would have been better off or grown into rivals had Meta not acquired them.

The company claimed that without Meta’s resources, Instagram may have died off. Meta noted that Instagram co-founder Kevin Systrom testified that his app was “pretty broken and duct-taped” together, making it “vulnerable to spam” before Meta bought it.

Rather than enshittification, what Meta did to Instagram could be considered “a consumer-welfare bonanza,” Meta argued, while dismissing “smoking gun” emails from Mark Zuckerberg discussing buying Instagram to bury it as “legally irrelevant.”

Dismissing these as “a few dated emails,” Meta argued that “efforts to litigate Mr. Zuckerberg’s state of mind before the acquisition in 2012 are pointless.”

“What matters is what Meta did,” Meta argued, which was pump Instagram with resources that allowed it “to ‘thrive’—adding many new features, attracting hundreds of millions and then billions of users, and monetizing with great success.”

In the case of WhatsApp, Meta argued that nobody thinks WhatsApp had any intention to pivot to social media, given that the founders testified their goal was to never add social features, preferring to offer a simple, clean messaging app. And Meta disputed any claim that it feared Google might buy WhatsApp as the basis for creating a Facebook rival, arguing that “the sole Meta witness to (supposedly) learn of Google’s acquisition efforts testified that he did not have that worry.”

Meta argues enshittification isn’t real in bid to toss FTC monopoly trial Read More »

nintendo-says-more-about-how-free-switch-2-updates-will-improve-switch-games

Nintendo says more about how free Switch 2 updates will improve Switch games

When Nintendo took the wraps off the Switch 2 in early April, it announced that around a dozen first-party Switch games would be getting free updates that would add some Switch 2-specific benefits to older games running on the new console. We could safely assume that these updates wouldn’t be as extensive as the $10 and $20 paid upgrade packs for games like Breath of the Wild or Kirby and the Forgotten Land, but Nintendo’s page didn’t initially provide any game-specific details.

Earlier this week, Nintendo updated its support page with more game-by-game details about what players of these older games can expect on the new hardware. The baseline improvement for most games is “improved image quality” and optimizations for the Switch 2’s built-in display, but others include support for GameShare multiplayer, support for the new Joy-Cons’ mouse controls, support for HDR TVs, and other tweaks.

The most significant of the announced updates are frame rate improvements for Pokémon Scarlet and Violet, the main-series Pokémon games released in late 2022. Most latter-day Switch games suffered from frame rate dips here and there, as newer games outstripped the capabilities of a low-power tablet processor that had already been a couple of years old when the Switch launched in 2017. But the Pokémon performance problems were so pervasive and widely commented-upon that Nintendo released a rare apology promising to improve the game post-release. Subsequent patches helped somewhat but could never deliver a consistently smooth frame rate; perhaps new hardware will finally deliver what software patches couldn’t.

Nintendo says more about how free Switch 2 updates will improve Switch games Read More »

apple’s-new-carplay-ultra-is-ready,-but-only-in-aston-martins-for-now

Apple’s new CarPlay Ultra is ready, but only in Aston Martins for now

It’s a few years later than we were promised, but an advanced new version of Apple CarPlay is finally here. CarPlay is Apple’s way of casting a phone’s video and audio to a car’s infotainment system, but with CarPlay Ultra it gets a big upgrade. Now, in addition to displaying compatible iPhone apps on the car’s center infotainment screen, CarPlay Ultra will also take over the main instrument panel in front of the driver, replacing OEM-designed dials such as the speedometer and tachometer with a number of different Apple designs.

“iPhone users love CarPlay and it has changed the way people interact with their vehicles. With CarPlay Ultra, together with automakers we are reimagining the in-car experience and making it even more unified and consistent,” said Bob Borchers, vice president of worldwide marketing at Apple.

However, to misquote William Gibson, CarPlay Ultra is unevenly distributed. In fact, if you want it today, you’re going to have to head over to the nearest Aston Martin dealership, because to begin with it’s only rolling out in North America with Aston Martin, inside the DBX SUV as well as the DB12, Vantage, and Vanquish sports cars. It’s standard on all new orders, the automaker says, and will be available as a dealer-performed update for existing Aston Martins with the company’s in-house 10.25-inch infotainment system in the coming weeks.

“The next generation of CarPlay gives drivers a smarter, safer way to use their iPhone in the car, deeply integrating with the vehicle while maintaining the very best of the automaker. We are thrilled to begin rolling out CarPlay Ultra with Aston Martin, with more manufacturers to come,” Borchers said.

Apple’s new CarPlay Ultra is ready, but only in Aston Martins for now Read More »

xai’s-grok-suddenly-can’t-stop-bringing-up-“white-genocide”-in-south-africa

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa Read More »

after-back-to-back-failures,-spacex-tests-its-fixes-on-the-next-starship

After back-to-back failures, SpaceX tests its fixes on the next Starship

But that didn’t solve the problem. Once again, Starship’s engines cut off too early, and the rocket broke apart before falling to Earth. SpaceX said “an energetic event” in the aft portion of Starship resulted in the loss of several Raptor engines, followed by a loss of attitude control and a loss of communications with the ship.

The similarities between the two failures suggest a likely design issue with the upgraded “Block 2” version of Starship, which debuted in January and flew again in March. Starship Block 2 is slightly taller than the ship SpaceX used on the rocket’s first six flights, with redesigned flaps, improved batteries and avionics, and notably, a new fuel feed line system for the ship’s Raptor vacuum engines.

SpaceX has not released the results of the investigation into the Flight 8 failure, and the FAA hasn’t yet issued a launch license for Flight 9. Likewise, SpaceX hasn’t released any information on the changes it made to Starship for next week’s flight.

What we do know about the Starship vehicle for Flight 9—designated Ship 35—is that it took a few tries to complete a full-duration test-firing. SpaceX completed a single-engine static fire on April 30, simulating the restart of a Raptor engine in space. Then, on May 1, SpaceX aborted a six-engine test-firing before reaching its planned 60-second duration. Videos captured by media observing the test showed a flash in the engine plume, and at least one piece of debris was seen careening out of the flame trench below the ship.

SpaceX ground crews returned Ship 35 to the production site a couple of miles away, perhaps to replace a damaged engine, before rolling Starship back to the test stand over the weekend for Monday’s successful engine firing.

Now, the ship will head back to the Starbase build site, where technicians will make final preparations for Flight 9. These final tasks may include loading mock-up Starlink broadband satellites into the ship’s payload bay and touching up the rocket’s heat shield.

These are two elements of Starship that SpaceX engineers are eager to demonstrate on Flight 9, beyond just fixing the problems from the last two missions. Those failures prevented Starship from testing its satellite deployer and an upgraded heat shield designed to better withstand scorching temperatures up to 2,600° Fahrenheit (1,430° Celsius) during reentry.

After back-to-back failures, SpaceX tests its fixes on the next Starship Read More »