

Celebrated game developer Rebecca Heineman dies at age 62

From champion to advocate

During her later career, Heineman served as a mentor and advisor to many, never shy about celebrating her past as a game developer during the golden age of the home computer.

Her mentoring skills became doubly important when she publicly came out as transgender in 2003. She became a vocal advocate for LGBTQ+ representation in gaming and served on the board of directors for GLAAD. Earlier this year, she received the Gayming Icon Award from Gayming Magazine.

Andrew Borman, who serves as director of digital preservation at The Strong National Museum of Play in Rochester, New York, told Ars Technica that her influence reached far beyond electronic entertainment. “Her legacy goes beyond her groundbreaking work in video games,” he told Ars. “She was a fierce advocate for LGBTQ rights and an inspiration to people around the world, including myself.”


The front cover of Dragon Wars on the Commodore 64, released in 1989. Credit: MobyGames

In the Netflix documentary series High Score, Heineman explained her early connection to video games. “It allowed me to be myself,” she said. “It allowed me to play as female.”

“I think her legend grew as she got older, in part because of her openness and approachability,” journalist Ernie Smith told Ars. “As the culture of gaming grew into an online culture of people ready to dig into the past, she remained a part of it in a big way, where her war stories helped fill in the lore about gaming’s formative eras.”

Celebrated to the end

Heineman was diagnosed with adenocarcinoma in October 2025 after experiencing shortness of breath at the PAX game convention. After diagnostic testing, doctors found cancer in her lungs and liver. That same month, she launched a GoFundMe campaign to help with medical costs. The campaign quickly surpassed its $75,000 goal, raising more than $157,000 from fans, friends, and industry colleagues.



Google CEO: If an AI bubble pops, no one is getting out clean

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the matrix math at the heart of AI computations.

Despite acknowledging that no company would be immune to a potential AI bubble burst, Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.

Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company currently faces repeated accuracy concerns about some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”

In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”

Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.

“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.



Google unveils Gemini 3 AI model and AI-first IDE called Antigravity


Google’s flagship AI model is getting its second major upgrade this year.

Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes.

Now, Google’s increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today.

The first member of the Gemini 3 family

Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google’s flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it—Google’s latest LLM is once again atop the LMArena leaderboard with an Elo score of 1,501, besting Gemini 2.5 Pro by 50 points.

Gemini 3 atop the LMArena leaderboard. Credit: Google
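Those leaderboard numbers are Elo-style ratings, so a 50-point gap translates to a head-to-head preference rate rather than a percentage score. Here's a quick sketch of the standard Elo expected-score formula (a general illustration, not LMArena's exact methodology):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 1,501 vs. 1,451 matchup implies roughly a 57 percent win rate
# for the higher-rated model: a real but not overwhelming edge.
print(round(elo_expected_score(1501, 1451), 3))
```

In other words, human raters prefer Gemini 3's output a bit more often than Gemini 2.5 Pro's, not by a landslide.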

Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity’s Last Exam, which tests PhD-level knowledge and reasoning, Gemini 3 set another record, scoring 37.5 percent without tool use.

Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1,487 Elo). On SWE-bench Verified, which tests a model’s ability to resolve real-world software engineering tasks, Gemini 3 hit an impressive 76.2 percent.

The benchmark improvements are respectable if modest, but Gemini 3 also won’t make you cringe as much. Google says it has tamped down on sycophancy, a common problem in all these overly polite LLMs. Outputs from Gemini 3 Pro are reportedly more concise, with less of what you want to hear and more of what you need to hear.

You can also expect Gemini 3 Pro to produce noticeably richer outputs. Google claims Gemini’s expanded reasoning capabilities keep it on task more effectively, allowing it to take action on your behalf. For example, Gemini 3 can triage and take action on your emails, creating to-do lists, summaries, recommended replies, and handy buttons to trigger suggested actions. This differs from the current Gemini models, which would only create a text-based to-do list with similar prompts.

The model also has what Google calls a “generative interface,” which comes in the form of two experimental output modes called visual layout and dynamic view. The former is a magazine-style interface that includes lots of images in a scrollable UI. Dynamic view leverages Gemini’s coding abilities to create custom interfaces—for example, a web app that explores the life and work of Vincent Van Gogh.

There will also be a Deep Think mode for Gemini 3, but that’s not ready for prime time yet. Google says it’s being tested by a small group for later release, but you should expect big things. Deep Think mode manages 41 percent in Humanity’s Last Exam without tools. Believe it or not, that’s an impressive score.

Coding with vibes

Google has offered several ways of generating and modifying code with Gemini models, but the launch of Gemini 3 adds a new one: Google Antigravity. This is Google’s new agentic development platform—it’s essentially an IDE designed around agentic AI, and it’s available in preview today.

With Antigravity, Google promises that you (the human) can get more work done by letting intelligent agents do the legwork. Google says you should think of Antigravity as a “mission control” for creating and monitoring multiple development agents. Agents in Antigravity can operate autonomously across the editor, terminal, and browser to create and modify projects, but everything they do is relayed to the user in the form of “Artifacts.” These sub-tasks are designed to be easily verifiable so you can keep on top of what an agent is doing. Gemini will be at the core of the Antigravity experience, but it’s not just Google’s bot. Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents.

Of course, developers can still plug into the Gemini API for coding tasks. With Gemini 3, Google is adding a client-side bash tool, which lets the AI generate shell commands in its workflow. The model can access file systems and automate operations, and a server-side bash tool will help generate code in multiple languages. This feature is starting in early access, though.

AI Studio is designed to be a faster way to build something with Gemini 3. Google says Gemini 3 Pro’s strong instruction following makes it the best vibe coding model yet, allowing non-programmers to create more complex projects.

A big experiment

Google will eventually have a whole family of Gemini 3 models, but there’s just the one for now. Gemini 3 Pro is rolling out in the Gemini app, AI Studio, Vertex AI, and the API starting today as an experiment. If you want to tinker with the new model in Google’s Antigravity IDE, that’s also available for testing today on Windows, Mac, and Linux.

Gemini 3 will also launch in the Google search experience on day one. You’ll have the option to enable Gemini 3 Pro in AI Mode, where Google says it will provide more useful information about a query. The generative interface capabilities from the Gemini app will be available here as well, allowing Gemini to create tools and simulations when appropriate to answer the user’s question. Google says these generative interfaces are strongly preferred in its user testing. This feature is available today, but only for AI Pro and Ultra subscribers.

Because the Pro model is the only Gemini 3 variant available in the preview, AI Overviews isn’t getting an immediate upgrade. That will come, but for now, Overviews will only reach out to Gemini 3 Pro for especially difficult search queries—basically the kind of thing Google thinks you should have used AI Mode to do in the first place.

There’s no official timeline for releasing more Gemini 3 models or graduating the Pro variant to general availability. However, given the wide rollout of the experimental release, it probably won’t be long.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.



Google claims win for everyone as text scammers lost their cloud server

The day after Google filed a lawsuit to end text scams primarily targeting Americans, the criminal network behind the phishing scams was “disrupted,” a Google spokesperson told Ars.

According to messages that the “ringleader” of the so-called “Lighthouse enterprise” posted on his Telegram channel, the phishing gang’s cloud server was “blocked due to malicious complaints.”

“We will restore it as soon as possible!” the leader posted on the channel—which Google’s lawsuit noted helps over 2,500 members coordinate phishing attacks that have resulted in losses of “over a billion dollars.”

Google has alleged that the Lighthouse enterprise is a “criminal group in China” that sells “phishing for dummies” kits that make it easier for scammers with little tech savvy to launch massive phishing campaigns. So far, “millions” of Americans have been harmed, Google alleged, as scammers disproportionately impersonate US institutions, like the Postal Service, as well as well-known brands like E-ZPass.

The company’s lawsuit seeks to dismantle the entire Lighthouse criminal enterprise, so the company was pleased to see Lighthouse communities go dark. In a statement, Halimah DeLaine Prado, Google’s general counsel, told Ars that “this shutdown of Lighthouse’s operations is a win for everyone.



Google will let Android power users bypass upcoming sideloading restrictions

Google recently decided that the freedom afforded by Android was a bit too much and announced developer verification, a system that will require developers outside the Google Play platform to register with Google. Users and developers didn’t accept Google’s rationale and have been complaining loudly. As Google begins early access testing, it has conceded that “experienced users” should have an escape hatch.

According to Google, online scam and malware campaigns are getting more aggressive, and there’s real harm being done in spite of the platform’s sideloading scare screens. Google says it’s common for scammers to use social engineering to create a false sense of urgency, prompting users to bypass Android’s built-in protections to install malicious apps.

Google’s solution to this problem, as announced several months ago, is to force everyone making apps to verify their identities. Unverified apps won’t install on any Google-certified device once verification rolls out. Without this, the company claims malware creators can endlessly create new apps to scam people. However, the centralized nature of verification threatened to introduce numerous headaches into a process that used to be straightforward for power users.

This isn’t the first time Google has had to pull back on its plans. Each time the company releases a new tidbit about verification, it compromises a little more. Previously, it confirmed that a free verification option would be available for hobbyists and students who wanted to install apps on a small number of devices. It also conceded that installation over ADB via a connected computer would still be allowed.

Now, Google has had to acknowledge that its plans for verification are causing major backlash among developers and people who know what an APK is. So there will be an alternative, but we don’t know how it will work just yet.

How high is your risk tolerance?

Google’s latest verification update explains that the company has received a lot of feedback from users and developers who want to be able to sideload without worrying about verification status. For those with “higher risk tolerance,” Google is exploring ways to make that happen. This is a partial victory for power users, but the nature of Google’s “advanced flow” for sideloading is murky.



Google is rolling out conversational shopping—and ads—in AI Mode search

In recent months, Google has promised to inject generative AI into the online shopping experience, and now it’s following through. The previously announced shopping features of AI Mode search are rolling out, and Gemini will also worm its way into Google’s forgotten Duplex automated phone call tech. It’s all coming in time for the holidays to allegedly make your gifting more convenient and also conveniently ensure that Google gets a piece of the action.

At Google I/O in May, the company announced its intention to bring conversational shopping to AI Mode. According to Google, its enormous “Shopping Graph” or retailer data means its AI is uniquely positioned to deliver useful suggestions. In the coming weeks, users in the US will be able to ask AI Mode complex questions about what to buy, and it will deliver suggestions, guides, tables, and other generated content to help you decide. And since this is gen AI, it comes with the usual disclaimers about possible mistakes.


You’re probably wondering where you’ll see sponsored shopping content in these experiences. Google says some of the content that appears in AI Mode will be ads, just like if you look up shopping results in a traditional search. Shopping features are also coming to the Gemini app, but Google says it won’t have sponsored content in the results for the time being.

Google is also releasing a feature called “agentic checkout,” a term used only in passing when the company announced the feature alongside AI Mode shopping at I/O. Google is really leaning into the agentic angle now, though. The gist is you can set a price threshold for a product in search, and Google will let you know if the item reaches that price. That part isn’t new, but there’s now an AI twist. After getting the alert, you can authorize an automatic purchase with Google Pay. However, it’s currently only supported at a handful of retailers like Chewy, Wayfair, and some Shopify merchants. It’s not clear whether this qualifies as agentic anything, but it might save you some money regardless.
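Google hasn’t published how the checkout flow works internally, but the behavior described (watch a price, alert at a threshold, then complete a pre-authorized purchase) amounts to a simple state machine. A hypothetical sketch, with all names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PriceWatch:
    product: str
    threshold: float        # alert when the price drops to or below this
    auto_buy: bool = False  # user pre-authorized purchase via Google Pay

    def on_price_update(self, price: float) -> str:
        if price > self.threshold:
            return "waiting"
        # Threshold reached: purchase only if the user opted in beforehand.
        return "purchased" if self.auto_buy else "alert_sent"

watch = PriceWatch("dog bed", threshold=40.0, auto_buy=True)
print(watch.on_price_update(55.00))  # waiting
print(watch.on_price_update(39.99))  # purchased
```

The "agentic" part, such as it is, is just the final transition: the purchase fires without a human in the loop once the price condition is met.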



Meta’s star AI scientist Yann LeCun plans to leave for own startup

A different approach to AI

LeCun founded Meta’s Fundamental AI Research lab, known as FAIR, in 2013 and has served as the company’s chief AI scientist ever since. He is one of three researchers who won the 2018 Turing Award for pioneering work on deep learning and convolutional neural networks. After leaving Meta, LeCun will remain a professor at New York University, where he has taught since 2003.

LeCun has previously argued that large language models like Llama, which Zuckerberg has put at the center of his strategy, are useful but will never be able to reason and plan like humans, a position that increasingly contradicts his boss’s grandiose AI vision of developing “superintelligence.”

For example, in May 2024, when an OpenAI researcher discussed the need to control ultra-intelligent AI, LeCun responded on X by writing that before urgently figuring out how to control AI systems much smarter than humans, researchers need to have the beginning of a hint of a design for a system smarter than a house cat.

Mark Zuckerberg once believed the “metaverse” was the future and renamed his company because of it. Credit: Facebook

Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.

Over the summer, Zuckerberg hired Alexandr Wang to lead a new superintelligence team at Meta, paying $14.3 billion to hire the 28-year-old founder of data-labeling startup Scale AI and acquire a 49 percent interest in his company. LeCun, who had previously reported to Chief Product Officer Chris Cox, now reports to Wang, which seems like a sharp rebuke of LeCun’s approach to AI.

Zuckerberg also personally handpicked an exclusive team called TBD Lab to accelerate the development of the next iteration of large language models, luring staff from rivals such as OpenAI and Google with astonishingly large $100 million to $250 million pay packages. As a result, Zuckerberg has come under growing pressure from Wall Street to show that his multibillion-dollar investment in becoming an AI leader will pay off and boost revenue. But if it turns out like his previous pivot to the metaverse, Zuckerberg’s latest bet could prove equally expensive and unfruitful.



Google announces even more AI in Photos app, powered by Nano Banana

We’re running out of ways to tell you that Google is releasing more generative AI features, but that’s what’s happening in Google Photos today. The Big G is finally making good on its promise to add its market-leading Nano Banana image-editing model to the app. The model powers a couple of features, and it’s not just for Google’s Android platform. Nano Banana edits are also coming to the iOS version of the app.

Nano Banana started making waves when it appeared earlier this year as an unbranded demo. You simply feed the model an image and tell it what edits you want to see. Google said Nano Banana was destined for the Photos app back in October, but it’s only now beginning the rollout. The Photos app already had conversational editing in the “Help Me Edit” feature, but it was running an older non-fruit model that produced inferior results. Nano Banana editing will produce AI slop, yes, but it’s better slop.


Google says the updated Help Me Edit feature has access to your private face groups, so you can use names in your instructions. For example, you could type “Remove Riley’s sunglasses,” and Nano Banana will identify Riley in the photo (assuming you have a person of that name saved) and make the edit without further instructions. You can also ask for more fantastical edits in Help Me Edit, changing the style of the image from top to bottom.



Gemini Deep Research comes to Google Finance, backed by prediction market data

Bet on it

Financial markets can turn on a dime, and AI can’t predict the future. However, Google seems to think that people make smart predictions in aggregate when there’s money on the line. That’s why, as part of the Finance update, Google has partnered with Kalshi and Polymarket, the current leaders in online prediction markets.

These platforms let people place bets on, well, just about anything. If you have a hunch about when Google will release Gemini 3.0, when the government shutdown will end, or how many tweets Elon Musk will post this month, you can place a wager on it. Maybe you’ll earn money, but more likely, you’ll lose it—only 12.7 percent of crypto wallets on Polymarket show profits.

Prediction market data in Google Finance. Credit: Google

Google says it will get fresh prediction data from both sites, which will allow Gemini to speculate on the future with “the wisdom of crowds.” Google suggests you could type “What will GDP growth be for 2025?” into the search box. Finance will pull the latest probabilities from Kalshi and Polymarket to generate a response that could include graphs and charts based on people’s bets. Naturally, Google does not make promises as to the accuracy of these predictions.

The new AI features of Google Finance are coming to all US users in the next few weeks, and starting this week, the service will make its debut in India. Likewise, the prediction market data will arrive in the next couple of weeks. If that’s not fast enough, you can opt in to get early access via the Google Labs page.



Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool


ChatGPT leaks seem to confirm OpenAI scrapes Google, expert says.

Credit: Aurich Lawson | Getty Images

For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination: Google Search Console (GSC), a tool that developers typically use to monitor search traffic, not lurk private chats.

Normally, when site managers access GSC performance reports, they see queries based on keywords or short phrases that Internet users type into Google to find relevant content. But starting this September, odd queries, sometimes more than 300 characters long, could also be found in GSC. Showing only user inputs, the chats appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private.

Jason Packer, owner of an analytics consulting firm called Quantable, was among the first to flag the issue in a detailed blog last month.

Determined to figure out what exactly was causing the leaks, he teamed up with “Internet sleuth” and web optimization consultant Slobodan Manić. Together, they conducted testing that they believe may have surfaced “the first definitive proof that OpenAI directly scrapes Google Search with actual user prompts.” Their investigation seemed to confirm the AI giant was compromising user privacy, in some cases in order to maintain engagement by seizing search data that Google otherwise wouldn’t share.

OpenAI declined Ars’ request to confirm if Packer and Manić’s theory posed in their blog was correct or answer any of their remaining questions that could help users determine the scope of the problem.

However, an OpenAI spokesperson confirmed that the company was “aware” of the issue and has since “resolved” a glitch “that temporarily affected how a small number of search queries were routed.”

Packer told Ars that he’s “very pleased that OpenAI was able to resolve the issue quickly.” But he suggested that OpenAI’s response failed to confirm whether or not OpenAI was scraping Google, and that leaves room for doubt that the issue was completely resolved.

Google declined to comment.

“Weirder” than prior ChatGPT leaks

The first odd ChatGPT query to appear in GSC that Packer reviewed was a wacky stream-of-consciousness from a likely female user asking ChatGPT to assess certain behaviors to help her figure out if a boy who teases her had feelings for her. Another odd query seemed to come from an office manager sharing business information while plotting a return-to-office announcement.

These were just two of 200 odd queries—including “some pretty crazy ones,” Packer told Ars—that he reviewed on one site alone. In his blog, Packer concluded that the queries should serve as “a reminder that prompts aren’t as private as you think they are!”

Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports.

OpenAI has not confirmed that it’s scraping Google search engine results pages (SERPs). However, Packer thinks his testing of ChatGPT leaks may be evidence that OpenAI not only scrapes “SERPs in general to acquire data,” but also sends user prompts to Google Search.

Manić helped Packer solve a big part of the riddle. He found that the odd queries were turning up in one site’s GSC because it ranked highly in Google Search for “https://openai.com/index/chatgpt/”—a ChatGPT URL that was appended at the start of every strange query turning up in GSC.

It seemed that Google had tokenized the URL, breaking it up into a search for keywords “openai + index + chatgpt.” Sites using GSC that ranked highly for those keywords were therefore likely to encounter ChatGPT leaks, Packer and Manić proposed, including sites that covered prior ChatGPT leaks where chats were being indexed in Google search results. Using their recommendations to seek out queries in GSC, Ars was able to verify similar strings.
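If you run a site that ranks for those keywords, leaked prompts of this kind would be easy to spot in a GSC performance export. A minimal sketch, assuming query rows shaped like the ones GSC reports (the `query` key and the 300-character cutoff are taken from the behavior described above, not from Packer's actual tooling):

```python
LEAK_PREFIX = "https://openai.com/index/chatgpt/"

def find_leaked_prompts(rows, min_length=300):
    """Flag GSC queries that look like leaked ChatGPT prompts:
    prefixed with the ChatGPT URL, or anomalously long for a search query."""
    return [
        row["query"] for row in rows
        if row["query"].startswith(LEAK_PREFIX) or len(row["query"]) > min_length
    ]

# In-memory rows standing in for a real GSC performance export:
rows = [
    {"query": "best hiking boots"},
    {"query": LEAK_PREFIX + " does he actually like me if he keeps teasing me..."},
]
print(find_leaked_prompts(rows))  # only the second query is flagged
```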

“Don’t get confused though, this is a new and completely different ChatGPT screw-up than having Google index stuff we don’t want them to,” Packer wrote. “Weirder, if not as serious.”

It’s unclear what exactly OpenAI fixed, but Packer and Manić have a theory about one possible path for leaking chats. Visiting the URL that starts every strange query found in GSC, ChatGPT users encounter a prompt box that seemed buggy, causing “the URL of that page to be added to the prompt.” The issue, they explained, seemed to be that:

Normally ChatGPT 5 will choose to do a web search whenever it thinks it needs to, and is more likely to do that with an esoteric or recency-requiring search. But this bugged prompt box also contains the query parameter ‘hints=search’ to cause it to basically always do a search: https://chatgpt.com/?hints=search&openaicom_referred=true&model=gpt-5
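That query string can be inspected directly with Python's standard library, which makes the parameters the researchers call out plainly visible:

```python
from urllib.parse import urlparse, parse_qs

url = "https://chatgpt.com/?hints=search&openaicom_referred=true&model=gpt-5"
params = parse_qs(urlparse(url).query)

# 'hints=search' is the parameter Packer and Manić say pushes the model
# to run a web search for nearly every prompt entered on that page.
print(params["hints"])   # ['search']
print(params["model"])   # ['gpt-5']
```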

Clearly some of those searches relied on Google, Packer’s blog said, mistakenly sending to GSC “whatever” the user says in the prompt box, with “https://openai.com/index/chatgpt/” text added to the front of it. As Packer explained, “we know it must have scraped those rather than using an API or some kind of private connection—because those other options don’t show inside GSC.”

This means “that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping,” Packer alleged. “And then also with whoever’s site shows up in the search results! Yikes.”

To Packer, it appeared that “ALL ChatGPT prompts” that used Google Search risked being leaked during the past two months.

OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to GSC.

OpenAI’s response leaves users with “lingering questions”

After ChatGPT prompts were found surfacing in Google’s search index in August, OpenAI clarified that users had clicked a box making those prompts public, which OpenAI defended as “sufficiently clear.” The AI firm later scrambled to remove the chats from Google’s SERPs after it became obvious that users felt misled into sharing private chats publicly.

Packer told Ars that a major difference between those leaks and the GSC leaks is that users harmed by the prior scandal, at least on some level, “had to actively share” their leaked chats. In the more recent case, “nobody clicked share” or had a reasonable way to prevent their chats from being exposed.

“Did OpenAI go so fast that they didn’t consider the privacy implications of this, or did they just not care?” Packer posited in his blog.

Perhaps most troubling to some users—whose identities are not linked to the chats unless their prompts happen to include identifying information—there does not seem to be any way to remove the leaked chats from GSC, unlike in the prior scandal.

Packer and Manić are left with “lingering questions” about how far OpenAI’s fix will go to stop the issue.

Manić was hoping OpenAI might confirm if prompts entered on https://chatgpt.com that trigger Google Search were also affected. But OpenAI did not follow up on that question, or a broader question about how big the leak was. To Manić, a major concern was that OpenAI’s scraping may be “contributing to ‘crocodile mouth’ in Google Search Console,” a troubling trend SEO researchers have flagged that causes impressions to spike but clicks to dip.

OpenAI also declined to clarify Packer’s biggest question. He’s left wondering if the company’s “fix” simply ended OpenAI’s “routing of search queries, such that raw prompts are no longer being sent to Google Search, or are they no longer scraping Google Search at all for data?

“We still don’t know if it’s that one particular page that has this bug or whether this is really widespread,” Packer told Ars. “In either case, it’s serious and just sort of shows how little regard OpenAI has for moving carefully when it comes to privacy.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Oddest ChatGPT leaks yet: Cringey chat logs found in Google analytics tool


Google plans secret AI military outpost on tiny island overrun by crabs

Christmas Island Shire President Steve Pereira told Reuters that the council is examining community impacts before approving construction. “There is support for it, providing this data center actually does put back into the community with infrastructure, employment, and adding economic value to the island,” Pereira said.

That’s great, but what about the crabs?

Christmas Island’s annual crab migration is a natural phenomenon that Sir David Attenborough reportedly once described as one of his greatest TV moments when he visited the site in 1990.

Every year, millions of crabs emerge from the forest and swarm across roads, streams, rocks, and beaches to reach the ocean, where each female can produce up to 100,000 eggs. The tiny baby crabs that survive take about nine days to march back inland to the safety of the plateau.

While Google is seeking environmental approvals for its subsea cables, the timing could prove delicate for Christmas Island’s most famous residents. According to Parks Australia, the island’s annual red crab migration has already begun for 2025, with a major spawning event expected in just a few weeks, around November 15–16.

During peak migration times, sections of roads close at short notice as crabs move between forest and sea, and the island has built special crab bridges over roads to protect the migrating masses.

Parks Australia notes that while the migration happens annually, few baby crabs survive the journey from sea to forest most years, as they’re often eaten by fish, manta rays, and whale sharks. The successful migrations that occur only once or twice per decade (when large numbers of babies actually survive) are critical for maintaining the island’s red crab population.

How Google’s facility might coexist with 100 million marching crustaceans remains to be seen. But judging by the size of the event, it seems clear that it’s the crabs’ world, and we’re just living in it.



YouTube TV’s Disney blackout reminds users that they don’t own what they stream

“I don’t know (or care) which side is responsible for this, but the DVR is not VOD, it is your recording, and shows recorded before the dispute should be available. This is a hard lesson for us all,” an apparently affected customer wrote on Reddit this week.

For current or former cable subscribers, this experience isn’t new. Carrier disputes have temporarily and permanently killed cable subscribers’ access to many channels over the years. And since the early 2000s, many cable companies have phased out DVRs with local storage in favor of cloud-based DVRs. Since then, cable companies have been able to revoke customers’ access to DVR files if, for example, the customer stopped paying for the channel from which the content was recorded. What we’re seeing with YouTube TV’s DVR feature is one of several ways that streaming services mirror cable companies.

Google exits Movies Anywhere

In a move that appears to be best described as tit for tat, Google has removed content purchased via Google Play and YouTube from Movies Anywhere, a Disney-owned unified platform that lets people access digital video purchases from various distributors, including Amazon Prime Video and Fandango.

In removing users’ content, Google may gain some leverage in its discussions with Disney, which is reportedly seeking a larger carriage fee from YouTube TV. The content removals, however, are just one more pain point of the fragmented streaming landscape customers are already dealing with.

Customers inconvenienced

As of this writing, Google and Disney have yet to reach an agreement. On Monday, Google publicly rejected Disney’s request to restore ABC to YouTube TV for yesterday’s election day, although the company showed a willingness to find a way to quickly bring back ABC and ESPN (“the channels that people want,” per Google). Disney has escalated things by making its content unavailable to rent or purchase from all Google platforms.

Google is trying to appease customers by saying it will give YouTube TV subscribers a $20 credit if Disney “content is unavailable for an extended period of time.” Some people online have reported receiving a $10 credit already.

Regardless of how this saga ends, the immediate effects have inconvenienced customers of both companies. People subscribe to streaming services and rely on digital video purchases and recordings for easy, instant access, which Google and Disney’s disagreement has disrupted. The squabble has also served as another reminder that in the streaming age, you don’t really own anything.
