Generative AI

Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today

Sour notes

AI-generated music is not a new phenomenon. Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée.

Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. Since Gemini is one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID watermark embedded within. That means you’ll always be able to check whether a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags.

Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound; instead, it’s trained to take the name as “broad creative inspiration.” Google notes that this process is not foolproof, however, and some output may still imitate an artist too closely. In those cases, Google invites users to report the shared content.

Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, and Google plans to add more languages soon. All users will have some access to music generation, but those with AI Pro and AI Ultra subscriptions will get higher usage limits; the specifics are unclear.

OpenAI researcher quits over ChatGPT ads, warns of “Facebook” path

On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI’s advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

“I once believed I could help the people building A.I. get ahead of the problems it would create,” Hitzig wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.”

She also drew a direct parallel to Facebook’s early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite.

She warned that a similar trajectory could play out with ChatGPT: “I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”

Ads arrive after a week of AI industry sparring

Hitzig’s resignation adds another voice to a growing debate over advertising in AI chatbots. OpenAI announced in January that it would begin testing ads in the US for users on its free and $8-per-month “Go” subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads. The company said ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot’s answers.

AI companies want you to stop chatting with bots and start managing them


Claude Opus 4.6 and OpenAI Frontier pitch a future of supervising AI agents.

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic’s contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called “agent teams” in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.

In practice, agent teams look like a split-screen terminal environment: A developer can jump between subagents using Shift+Up/Down, take over any one directly, and watch the others keep working. Anthropic describes the feature as best suited for “tasks that split into independent, read-heavy work like codebase reviews.” It is available as a research preview.

OpenAI, meanwhile, released Frontier, an enterprise platform it describes as a way to “hire AI co-workers who take on many of the tasks people already do on a computer.” Frontier assigns each AI agent its own identity, permissions, and memory, and it connects to existing business systems such as CRMs, ticketing tools, and data warehouses. “What we’re fundamentally doing is basically transitioning agents into true AI co-workers,” Barret Zoph, OpenAI’s general manager of business-to-business, told CNBC.

Despite the hype, in our experience these agents work best when you think of them as tools that amplify existing skills, not as the autonomous co-workers the marketing language implies. They can produce impressive drafts fast but still require constant human course-correction.

The Frontier launch came just three days after OpenAI released a new macOS desktop app for Codex, its AI coding tool, which OpenAI executives described as a “command center for agents.” The Codex app lets developers run multiple agent threads in parallel, each working on an isolated copy of a codebase via Git worktrees.
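Git worktrees, the isolation mechanism the Codex app reportedly uses here, let several checkouts of one repository coexist, each on its own branch. A minimal sketch of that setup, using a throwaway repository and made-up branch names:

```shell
# Sketch: one repository, multiple isolated checkouts, as an
# agent-per-worktree setup might use them. Paths and branch names
# are hypothetical.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"

# One worktree (and branch) per agent; uncommitted edits in one
# checkout don't affect the others.
git worktree add -b agent-a "$repo-agent-a"
git worktree add -b agent-b "$repo-agent-b"
git worktree list
```

All worktrees share one object store, so work committed on one agent’s branch is immediately visible to the others, while in-progress edits stay isolated to each checkout.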

OpenAI also released GPT-5.3-Codex on Thursday, a new AI model that powers the Codex app. OpenAI claims that the Codex team used early versions of GPT-5.3-Codex to debug the model’s own training run, manage its deployment, and diagnose test results, similar to what OpenAI told Ars Technica in a December interview.

“Our team was blown away by how much Codex was able to accelerate its own development,” the company wrote. On Terminal-Bench 2.0, the agentic coding benchmark, GPT-5.3-Codex scored 77.3%, which exceeds Anthropic’s just-released Opus 4.6 by about 12 percentage points.

The common thread across all of these products is a shift in the user’s role. Rather than merely typing a prompt and waiting for a single response, the developer or knowledge worker becomes more like a supervisor, dispatching tasks, monitoring progress, and stepping in when an agent needs direction.

In this vision, developers and knowledge workers effectively become middle managers of AI: not writing the code or doing the analysis themselves, but delegating tasks, reviewing output, and hoping the agents underneath them don’t quietly break things. Whether that will come to pass (or whether it’s actually a good idea) is still widely debated.

A new model under the Claude hood

Opus 4.6 is a substantial update to Anthropic’s flagship model. It succeeds Claude Opus 4.5, which Anthropic released in November. In a first for the Opus model family, it supports a context window of up to 1 million tokens (in beta), which means it can process much larger bodies of text or code in a single session.

On benchmarks, Anthropic says Opus 4.6 tops OpenAI’s GPT-5.2 (an earlier model than the one released today) and Google’s Gemini 3 Pro across several evaluations, including Terminal-Bench 2.0 (an agentic coding test), Humanity’s Last Exam (a multidisciplinary reasoning test), and BrowseComp (a test of finding hard-to-locate information online).

It should be noted, though, that OpenAI’s GPT-5.3-Codex, released the same day, seemingly reclaimed the lead on Terminal-Bench. On ARC-AGI-2, which attempts to test the ability to solve problems that are easy for humans but hard for AI models, Opus 4.6 scored 68.8 percent, compared to 37.6 percent for Opus 4.5, 54.2 percent for GPT-5.2, and 45.1 percent for Gemini 3 Pro.

As always, take AI benchmarks with a grain of salt, since objectively measuring AI model capabilities is a relatively new and unsettled science.

Anthropic also said that on a long-context retrieval benchmark called MRCR v2, Opus 4.6 scored 76 percent on the 1 million-token variant, compared to 18.5 percent for its Sonnet 4.5 model. That gap matters for the agent teams use case, since agents working across large codebases need to track information across hundreds of thousands of tokens without losing the thread.

Pricing for the API stays the same as Opus 4.5 at $5 per million input tokens and $25 per million output tokens, with a premium rate of $10/$37.50 for prompts that exceed 200,000 tokens. Opus 4.6 is available on claude.ai, the Claude API, and all major cloud platforms.
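Given those rates, per-request cost reduces to a two-tier calculation. A sketch, under the assumption (not spelled out in this article) that the premium rate applies to the entire request once the prompt crosses 200,000 tokens:

```python
# Cost math for Opus 4.6 API calls at the rates quoted in this article.
# Assumption: the premium rate covers the whole request once the prompt
# exceeds 200,000 tokens.
def opus_46_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the quoted rates."""
    premium = input_tokens > 200_000
    in_rate = 10.00 if premium else 5.00     # $ per million input tokens
    out_rate = 37.50 if premium else 25.00   # $ per million output tokens
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

print(opus_46_cost(100_000, 20_000))  # → 1.0 ($0.50 in + $0.50 out)
```

At these prices, a long-context request is more than twice as expensive per token, which is worth keeping in mind for the agent-teams use case, where many concurrent agents each carry large contexts.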

The market fallout outside

These releases occurred during a week of exceptional volatility for software stocks. On January 30, Anthropic released 11 open source plugins for Cowork, its agentic productivity tool that launched on January 12. Cowork itself is a general-purpose tool that gives Claude access to local folders for work tasks, but the plugins extended it into specific professional domains: legal contract review, non-disclosure agreement triage, compliance workflows, financial analysis, sales, and marketing.

By Tuesday, investors reportedly reacted to the release by erasing roughly $285 billion in market value across software, financial services, and asset management stocks. A Goldman Sachs basket of US software stocks fell 6 percent that day, its steepest single-session decline since April’s tariff-driven sell-off. Thomson Reuters led the rout with an 18 percent drop, and the pain spread to European and Asian markets.

The purported fear among investors centers on AI model companies packaging complete workflows that compete with established software-as-a-service (SaaS) vendors, even if the verdict is still out on whether these tools can actually handle those workflows.

OpenAI’s Frontier might deepen that concern: its stated design lets AI agents log in to applications, execute tasks, and manage work with minimal human involvement, which Fortune described as a bid to become “the operating system of the enterprise.” OpenAI CEO of Applications Fidji Simo pushed back on the idea that Frontier replaces existing software, telling reporters, “Frontier is really a recognition that we’re not going to build everything ourselves.”

Whether these co-working apps actually live up to their billing or not, the convergence is hard to miss. Anthropic’s Scott White, the company’s head of product for enterprise, gave the practice a name that is likely to roll a few eyes. “Everybody has seen this transformation happen with software engineering in the last year and a half, where vibe coding started to exist as a concept, and people could now do things with their ideas,” White told CNBC. “I think that we are now transitioning almost into vibe working.”

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

AI Overviews gets upgraded to Gemini 3 with a dash of AI Mode

It can be hard sometimes to keep up with the deluge of generative AI in Google products. Even if you try to avoid it all, there are some features that still manage to get in your face. Case in point: AI Overviews. This AI-powered search experience has a reputation for getting things wrong, but you may notice some improvements soon. Google says AI Overviews is being upgraded to the latest Gemini 3 models with a more conversational bent.

In just the last year, Google has radically expanded the number of searches that get an AI Overview at the top. Today, the feature will almost always have an answer for your query. Until now, AI Overviews has relied mostly on models in Google’s Gemini 2.5 family; there was nothing wrong with Gemini 2.5 as generative AI models go, but Gemini 3 is a little better by every metric.

There are, of course, multiple versions of Gemini 3, and Google doesn’t like to be specific about which ones appear in your searches. What Google does say is that AI Overviews chooses the right model for the job. So if you’re searching for something simple that has a lot of valid sources, AI Overviews may call on something like Gemini 3 Flash without running through a ton of reasoning tokens. For a complex “long tail” query, it can step up the thinking or move to Gemini 3 Pro (for paying subscribers).
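Google hasn’t published its routing logic, but the behavior it describes amounts to a simple dispatch on query complexity and subscription status. A purely illustrative sketch; the function, its inputs, and the return labels are all hypothetical:

```python
# Purely illustrative: Google has not disclosed how AI Overviews routes
# queries. The decision criteria and labels here are guesses based on
# Google's public description.
def pick_overview_model(is_long_tail: bool, is_subscriber: bool) -> str:
    if not is_long_tail:
        # Simple query with many valid sources: cheap model, little reasoning.
        return "gemini-3-flash (minimal reasoning)"
    if is_subscriber:
        # Complex "long tail" query for a paying subscriber.
        return "gemini-3-pro"
    # Complex query on the free tier: same model, more reasoning effort.
    return "gemini-3-flash (extended reasoning)"
```

The interesting design point is that the routing happens per query, invisibly to the user, trading answer quality against inference cost at Google’s discretion.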

Google adds your Gmail and Photos to AI Mode to enable “Personal Intelligence”

Google believes AI is the future of search, and it’s not shy about saying it. After adding account-level personalization to Gemini earlier this month, it’s now updating AI Mode with so-called “Personal Intelligence.” According to Google, this makes the bot’s answers more useful because they are tailored to your personal context.

Starting today, the feature is rolling out to all users who subscribe to Google AI Pro or AI Ultra. However, it will be a Labs feature that needs to be explicitly enabled (subscribers will be prompted to do this). Google tends to expand access to new AI features to free accounts later on, so free users will most likely get access to Personal Intelligence in the future. Whenever this option does land on your account, it’s entirely optional and can be disabled at any time.

If you decide to integrate your data with AI Mode, the search bot will be able to scan your Gmail and Google Photos. That’s less extensive than the Gemini app version, which supports Gmail, Photos, Search, and YouTube history. Gmail will probably be the biggest contributor to AI Mode—a great many life events involve confirmation emails. Traditional search results when you are logged in are adjusted based on your usage history, but this goes a step further.

If you’re going to use AI Mode to find information, Personal Intelligence could actually be quite helpful. When you connect data from other Google apps, Google’s custom Gemini search model will instantly know about your preferences and background—that’s the kind of information you’d otherwise have to include in your search query to get the best output. With Personal Intelligence, AI Mode can just pull those details from your email or photos.

OpenAI to test ads in ChatGPT as it burns through billions

Financial pressures and a changing tune

OpenAI’s advertising experiment reflects the enormous financial pressures facing the company. OpenAI does not expect to be profitable until 2030 and has committed to spend about $1.4 trillion on massive data centers and chips for AI.

According to financial documents obtained by The Wall Street Journal in November, OpenAI expects to burn through roughly $9 billion this year while generating $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions, which isn’t enough to cover OpenAI’s operating costs.

Not everyone is convinced ads will solve OpenAI’s financial problems. “I am extremely bearish on this ads product,” tech critic Ed Zitron wrote on Bluesky. “Even if this becomes a good business line, OpenAI’s services cost too much for it to matter!”

OpenAI’s embrace of ads appears to come reluctantly, since it runs counter to a “personal bias” against advertising that CEO Sam Altman has shared in earlier public statements. For example, during a fireside chat at Harvard University in 2024, Altman said he found the combination of ads and AI “uniquely unsettling,” implying that he would not like it if the chatbot itself changed its responses due to advertising pressure. He added: “When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I’m being shown, I don’t think I would like that.”

An example mock-up of an advertisement in ChatGPT provided by OpenAI. Credit: OpenAI

Along those lines, OpenAI’s approach appears to be a compromise between needing ad revenue and not wanting sponsored content to appear directly within ChatGPT’s written responses. By placing banner ads at the bottom of answers separated from the conversation history, OpenAI appears to be addressing Altman’s concern: The AI assistant’s actual output, the company says, will remain uninfluenced by advertisers.

Indeed, Simo wrote in a blog post that OpenAI’s ads will not influence ChatGPT’s conversational responses, that the company will not share conversations with advertisers, that it will not show ads on sensitive topics such as mental health and politics, and that it will not show ads to users it determines to be under 18.

“As we introduce ads, it’s crucial we preserve what makes ChatGPT valuable in the first place,” Simo wrote. “That means you need to trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising.”

Bandcamp bans purely AI-generated music from its platform

On Tuesday, Bandcamp announced on Reddit that it will no longer permit AI-generated music on its platform. “Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp,” the company wrote in a post to the r/bandcamp subreddit. The new policy also prohibits “any use of AI tools to impersonate other artists or styles.”

The policy draws a line that some in the music community have debated: Where does tool use end and full automation begin? AI models are not artists in themselves, since they lack personhood and creative intent. But people do use AI tools to make music, and the spectrum runs from using AI for minor assistance (cleaning up audio, suggesting chord progressions) to typing a prompt and letting a model generate an entire track. Bandcamp’s policy targets the latter end of that spectrum while leaving room for human artists who incorporate AI tools into a larger creative process.

The announcement emphasized the platform’s desire to protect its community of human artists. “The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain,” the company wrote. Bandcamp asked users to flag suspected AI-generated content through its reporting tools, and the company said it reserves “the right to remove any music on suspicion of being AI generated.”

As generative AI tools make it trivial to produce unlimited quantities of music, art, and text, this author once argued that platforms may need to actively preserve spaces for human expression rather than let them drown in machine-generated output. Bandcamp’s decision seems to move in that direction, but it also leaves room for platforms like Suno, which primarily host AI-generated music.

Two platforms, two approaches, one flood

The policy contrasts with Spotify, which explicitly permits AI-generated music, although its users have expressed frustration with an influx of AI-generated tracks created by tools like Suno and Udio. Some of those AI music issues predate the latest tools, however. In 2023, Spotify removed tens of thousands of AI-generated songs from distributor Boomy after discovering evidence of artificial streaming fraud, but the flood just kept coming.

The RAM shortage’s silver lining: Less talk about “AI PCs”

RAM prices have soared, which is bad news for people interested in buying, building, or upgrading a computer this year, but it’s likely good news for people exasperated by talk of so-called AI PCs.

As Ars Technica has reported, the growing demands of data centers, fueled by the AI boom, have led to a shortage of RAM and flash memory chips, driving prices to skyrocket.

In an announcement today, Ben Yeh, principal analyst at technology research firm Omdia, said that in 2025, “mainstream PC memory and storage costs rose by 40 percent to 70 percent, resulting in cost increases being passed through to customers.”

Overall, global PC shipments increased in 2025, according to Omdia (which pegged growth at 9.2 percent compared to 2024) and IDC (which today reported 9.6 percent growth), but analysts expect PC sales to be more tumultuous in 2026.

“The year ahead is shaping up to be extremely volatile,” Jean Philippe Bouchard, research VP with IDC’s worldwide mobile device trackers, said in a statement.

Both analyst firms expect PC makers to manage the RAM shortage by raising prices and by releasing computers with lower memory specs. IDC expects price hikes of 15 to 20 percent and for PC RAM specs to “be lowered on average to preserve memory inventory on hand,” Bouchard said. Omdia’s Yeh expects “leaner mid to low-tier configurations to protect margins.”

“These RAM shortages will last beyond just 2026, and the cost-conscious part of the market is the one that will be most impacted,” Jitesh Ubrani, research manager for worldwide mobile device trackers at IDC, told Ars via email.

IDC expects vendors to “prioritize midrange and premium systems to offset higher component costs, especially memory.”

Google’s updated Veo model can make vertical videos from reference images with 4K upscaling

Enhanced support for Ingredients to Video and the associated vertical outputs are live in the Gemini app today, as well as in YouTube Shorts and the YouTube Create app, fulfilling a promise initially made last summer. Veo videos are short—just eight seconds long for each prompt. It would be tedious to assemble those into a longer video, but Veo is perfect for the Shorts format.

Veo 3.1 Updates – Seamlessly blend textures, characters, and objects.

The new Veo 3.1 update also adds an option for higher-resolution video. The model now supports 1080p and 4K outputs. Google debuted 1080p support last year, but it’s mentioning that option again today, suggesting there may be some quality difference. 4K support is new, but neither 1080p nor 4K outputs are native. Veo creates everything in 720p resolution, but it can be upscaled “for high-fidelity production workflows,” according to Google. However, a Google rep tells Ars that upscaling is only available in Flow, the Gemini API, and Vertex AI. Video in the Gemini app is always 720p.

We are rushing into a world where AI video is essentially indistinguishable from real life. Google, which more or less controls online video via YouTube’s dominance, is at the forefront of that change. Today’s update is reasonably significant, and it didn’t even warrant a version number change. Perhaps we can expect more 2025-style leaps in video quality this year, for better or worse.

Google removes some AI health summaries after investigation finds “dangerous” flaws

Why AI Overviews produces errors

The recurring problems with AI Overviews stem from a design flaw in how the system works. As we reported in May 2024, Google built AI Overviews to show information backed up by top web results from its page ranking system. The company designed the feature this way based on the assumption that highly ranked pages contain accurate information.

However, Google’s page ranking algorithm has long struggled with SEO-gamed content and spam. The system now feeds these unreliable results to its AI model, which then summarizes them with an authoritative tone that can mislead users. Even when the AI draws from accurate sources, the language model can still draw incorrect conclusions from the data, producing flawed summaries of otherwise reliable information.

The technology does not inherently provide factual accuracy. Instead, it reflects whatever inaccuracies exist on the websites Google’s algorithm ranks highly, presenting the facts with an authority that makes errors appear trustworthy.
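The pipeline described above is essentially retrieve-then-summarize, which makes the failure mode easy to see: the summarizer inherits whatever the ranker surfaces. A generic sketch of that pattern (not Google’s actual code; every name here is invented for illustration):

```python
# Generic retrieve-then-summarize sketch (not Google's actual system).
# If rank() surfaces SEO spam, summarize() restates it confidently.
def ai_overview(query, rank, summarize, k=5):
    top_results = rank(query)[:k]          # may include unreliable pages
    return summarize(query, top_results)   # authoritative tone either way

# Toy demo: summary quality is bounded by the ranked sources.
sources = {
    "lft reference range": ["spam-site: wrong numbers", "nhs.uk: correct ranges"],
}
overview = ai_overview(
    "lft reference range",
    rank=lambda q: sources[q],
    summarize=lambda q, docs: f"Answer to {q!r} based on: {docs}",
)
```

No step in this pipeline checks the sources for factual accuracy, which is the structural gap the reporting above identifies.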

Other examples remain active

The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range,” still prompted AI Overviews. Hebditch said this was a big worry and that the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.

AI Overviews still appear for other examples that The Guardian originally highlighted to Google. When asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources and informed people when it was important to seek out expert advice.

Google said AI Overviews only appear for queries where it has high confidence in the quality of the responses. The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.

This is not the first controversy for AI Overviews. The feature has previously told people to put glue on pizza and eat rocks. It has proven unpopular enough that users have discovered that inserting curse words into search queries disables AI Overviews entirely.

Dell’s XPS revival is a welcome reprieve from the “AI PC” fad

After making the obviously poor decision to kill its XPS laptops and desktops in January 2025, Dell started selling 16- and 14-inch XPS laptops again today.

“It was obvious we needed to change,” Jeff Clarke, vice chairman and COO at Dell Technologies, said at a press event in New York City previewing Dell’s CES 2026 announcements.

A year ago, Dell abandoned XPS branding, as well as its Latitude, Inspiron, and Precision PC lineups. The company replaced the reputable brands with Dell Premium, Dell Pro, and Dell Pro Max. Each series included a base model, as well as “Plus” and “Premium.” Dell isn’t resurrecting its Latitude, Inspiron, or Precision series, and it will still sell “Dell Pro” models.

This is how Dell breaks down its computer lineup now. Credit: Dell

XPS returns

The revival of XPS means the return of one of the easiest recommendations for consumer ultralight laptops. Before last year’s rebranding, XPS laptops had a reputation for thin, lightweight designs with modern features and decent performance for the price. This year, Dell is even doing away with some of the divisive design tweaks it introduced to the XPS lineup in 2022, which were unfortunately shoppers’ sole option last year.

Inheriting traits from the XPS 13 Plus introduced in 2022, the XPS-equivalent laptops that Dell released in 2025 had a capacitive-touch row without physical buttons, a borderless touchpad with haptic feedback, and a flat, lattice-free keyboard. The design was meant to enable more thermal headroom but made using the computers feel uncomfortable and unfamiliar.

The XPS 14 and XPS 16 laptops launching today have physical function rows. They still have a haptic touchpad, but now the touchpad has comforting left and right borders. And although the XPS 14 and XPS 16 have the same lattice-free keyboard of the XPS 13 Plus, Dell will release a cheaper XPS 13 later this year with a more traditional chiclet keyboard, since those types of keyboards are cheaper to make.

Amazon Alexa+ released to the general public via an early access website

Anyone can now try Alexa+, Amazon’s generative AI assistant, through a free early access program at Alexa.com. The website frees the AI, which Amazon released via early access in February, from hardware and makes it as easily accessible as more established chatbots, like OpenAI’s ChatGPT and Google’s Gemini.

Until today, you needed a supporting device to access Alexa+. Amazon hasn’t said when the early access period will end, but when it does, Alexa+ will be included with Amazon Prime memberships, which start at $15 per month, or will cost $20 per month on its own.

The above pricing suggests that Amazon wants Alexa+ to drive people toward Prime subscriptions. By being interwoven with Amazon’s shopping ecosystem, including Amazon’s e-commerce platform, grocery delivery business, and Whole Foods, Alexa+ can make more money for Amazon.

Just like it has with Alexa+ on devices, Amazon is pushing Alexa.com as a tool for people to organize and manage their household. Amazon’s announcement of Alexa.com today emphasizes Alexa+’s features for planning trips and meals, to-do lists, calendars, and smart homes. Alexa.com “also provides persistent context and continuity, allowing you to access Alexa on whichever device or interface best serves the task at hand, with all previous chats, preferences, and personalization” carrying over, Amazon said.

Amazon already knew a browser-based version of Alexa would be helpful. Alexa was available via Alexa.Amazon.com until around the time Amazon started publicly discussing a generative AI version of Alexa in 2023. Alexa+ is now accessible through Alexa.Amazon.com (in addition to Alexa.com).

“This is a new interaction model and adds a powerful way to use and collaborate with Alexa+,” Amazon said today. “Combined with the redesigned Alexa mobile app, which will feature an agent-forward design, Alexa+ will be accessible across every surface—whether you’re at your desk, on the go, or at home.”

Amazon provided this example of someone using the Alexa+ website to manage smart home devices. Credit: Amazon

Alexa has reportedly cost Amazon billions of dollars, despite the company’s claim that 600 million Alexa-powered devices have been sold. By adding more powerful generative AI features and a subscription fee, Amazon hopes people will use Alexa+ more frequently and for more advanced and essential tasks, resulting in the financial success that eluded the original Alexa. Amazon is also considering injecting ads into Alexa+ conversations.

Notably, while still in early access ahead of its final release, Alexa+ has been reported to be slower than expected and to struggle with inaccuracies at times. It also lacks some features that Amazon executives have previously touted, like the ability to order takeout.
