machine learning

Cheap AI “video scraping” can now extract data from any screen recording


Researcher feeds screen recordings into Gemini to extract accurate information with ease.

Recently, AI researcher Simon Willison wanted to add up his charges from using a cloud service, but the payment values and dates he needed were scattered among a dozen separate emails. Inputting them manually would have been tedious, so he turned to a technique he calls “video scraping,” which involves feeding a screen recording video into an AI model, similar to ChatGPT, for data extraction purposes.

What he discovered seems simple on its surface, but the quality of the result has deeper implications for the future of AI assistants, which may soon be able to see and interact with what we’re doing on our computer screens.

“The other day I found myself needing to add up some numeric values that were scattered across twelve different emails,” Willison wrote in a detailed post on his blog. He recorded a 35-second video scrolling through the relevant emails, then fed that video into Google’s AI Studio tool, which allows people to experiment with several versions of Google’s Gemini 1.5 Pro and Gemini 1.5 Flash AI models.

Willison then asked Gemini to pull the price data from the video and arrange it into a special data format called JSON (JavaScript Object Notation) that included dates and dollar amounts. The AI model successfully extracted the data, which Willison then formatted as a CSV (comma-separated values) table for spreadsheet use. After he double-checked for errors as part of his experiment, the accuracy of the results—and what the video analysis cost to run—surprised him.
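
For readers curious how such a workflow might look in code, here is a minimal sketch using the google-generativeai Python SDK: upload a screen recording, prompt the model for JSON, and write the parsed result to a CSV file. The file name, prompt wording, and field names are illustrative assumptions, not Willison's exact setup.

```python
# Sketch: extract dates and amounts from a screen-recording video with Gemini,
# then write the parsed JSON out as a CSV table. The file name, prompt, and
# field names are illustrative assumptions, not Willison's exact setup.
import csv
import json
import time

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the screen recording via the File API and wait for it to finish processing.
video = genai.upload_file(path="charges.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content([
    video,
    "Extract every charge shown in this video as a JSON array of objects "
    "with 'date' and 'amount' fields. Return only the JSON.",
])

charges = json.loads(response.text)

# Convert the extracted JSON into a CSV table for spreadsheet use.
with open("charges.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "amount"])
    writer.writeheader()
    writer.writerows(charges)
```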

A screenshot of Simon Willison using Google Gemini to extract data from a screen capture video. Credit: Simon Willison

“The cost [of running the video model] is so low that I had to re-run my calculations three times to make sure I hadn’t made a mistake,” he wrote. Willison says the entire video analysis process ostensibly cost less than one-tenth of a cent, using just 11,018 tokens on the Gemini 1.5 Flash 002 model. In the end, he actually paid nothing because Google AI Studio is currently free for some types of use.
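
As a back-of-the-envelope check on that figure, the arithmetic is simple; the per-token rate below is an assumed list price for Gemini 1.5 Flash at the time (roughly $0.075 per million input tokens), not a number taken from Willison's post.

```python
# Back-of-the-envelope cost check for the video-scraping run described above.
# The per-token rate is an assumed list price, not an official figure from the post.
tokens_used = 11_018
price_per_million_tokens = 0.075  # US dollars per million input tokens, assumed

cost = tokens_used / 1_000_000 * price_per_million_tokens
print(f"Estimated cost: ${cost:.6f}")  # roughly $0.0008, i.e. under a tenth of a cent
```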

Video scraping is just one of many new tricks possible when the latest large language models (LLMs), such as Google’s Gemini and GPT-4o, are actually “multimodal” models, allowing audio, video, image, and text input. These models translate any multimedia input into tokens (chunks of data), which they use to make predictions about which tokens should come next in a sequence.

A term like “token prediction model” (TPM) might be more accurate than “LLM” these days for AI models with multimodal inputs and outputs, but a generalized alternative term hasn’t really taken off yet. Whatever you call it, having an AI model that can take video inputs has interesting implications, both good and potentially bad.

Breaking down input barriers

Willison is far from the first person to feed video into AI models to achieve interesting results (more on that below, and here’s a 2015 paper that uses the “video scraping” term), but as soon as Gemini launched its video input capability, he began to experiment with it in earnest.

In February, Willison demonstrated another early application of AI video scraping on his blog, where he took a seven-second video of the books on his bookshelves, then got Gemini 1.5 Pro to extract all of the book titles it saw in the video and put them in a structured, or organized, list.
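
One way to coax that kind of organized output from the model is to request JSON directly through the SDK's generation config. The sketch below assumes a hypothetical bookshelf video and prompt; it is not Willison's actual code.

```python
# Sketch: ask Gemini 1.5 Pro for a structured list of book titles seen in a
# short bookshelf video, forcing JSON output via the generation config.
# The prompt and file name are assumptions for illustration.
import json

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
bookshelf_video = genai.upload_file(path="bookshelf.mp4")
# (The file-processing wait loop from the earlier sketch is omitted for brevity.)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [bookshelf_video, "List every book title visible in this video."],
    generation_config={"response_mime_type": "application/json"},
)
titles = json.loads(response.text)
print(titles)
```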

Converting unstructured data into structured data is important to Willison, because he’s also a data journalist. Willison has created tools for data journalists in the past, such as the Datasette project, which lets anyone publish data as an interactive website.
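
Datasette serves SQLite databases as browsable, queryable websites, so scraped values are typically loaded into SQLite first. Here is a minimal sketch using the companion sqlite-utils library; the database name, table name, columns, and sample rows are assumptions for illustration.

```python
# Sketch: load extracted rows into SQLite with sqlite-utils, then publish the
# database with Datasette. Names and sample values are illustrative assumptions.
import sqlite_utils

rows = [
    {"date": "2024-09-01", "amount": 4.35},
    {"date": "2024-09-14", "amount": 1.08},
]

db = sqlite_utils.Database("charges.db")
db["charges"].insert_all(rows)

# Serve an interactive website for the data with:
#   datasette charges.db
```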

To every data journalist’s frustration, some sources of data prove resistant to scraping (capturing data for analysis) due to how the data is formatted, stored, or presented. In these cases, Willison delights in the potential for AI video scraping because it bypasses these traditional barriers to data extraction.

“There’s no level of website authentication or anti-scraping technology that can stop me from recording a video of my screen while I manually click around inside a web application,” Willison noted on his blog. His method works for any visible on-screen content.

Video is the new text

An illustration of a cybernetic eyeball. Credit: Getty Images

The ease and effectiveness of Willison’s technique reflect a noteworthy shift now underway in how some users will interact with token prediction models. Rather than requiring a user to manually paste or type in data in a chat dialog—or detail every scenario to a chatbot as text—some AI applications increasingly work with visual data captured directly on the screen. For example, if you’re having trouble navigating a pizza website’s terrible interface, an AI model could step in and perform the necessary mouse clicks to order the pizza for you.

In fact, video scraping is already on the radar of every major AI lab, although they are not likely to call it that at the moment. Instead, tech companies typically refer to these techniques as “video understanding” or simply “vision.”

In May, OpenAI demonstrated a prototype version of its ChatGPT Mac App with an option that allowed ChatGPT to see and interact with what is on your screen, but that feature has not yet shipped. Microsoft demonstrated a similar “Copilot Vision” prototype concept earlier this month (based on OpenAI’s technology) that will be able to “watch” your screen and help you extract data and interact with applications you’re running.

Despite these research previews, OpenAI’s ChatGPT and Anthropic’s Claude have not yet implemented a public video input feature for their models, possibly because it is relatively computationally expensive for them to process the extra tokens from a “tokenized” video stream.

For the moment, Google is heavily subsidizing user AI costs with its war chest from Search revenue and a massive fleet of data centers (to be fair, OpenAI is subsidizing, too, but with investor dollars and help from Microsoft). But costs of AI compute in general are dropping by the day, which will open up new capabilities of the technology to a broader user base over time.

Countering privacy issues

As you might imagine, having an AI model see what you do on your computer screen can have downsides. For now, video scraping is great for Willison, who will undoubtedly use the captured data in positive and helpful ways. But it’s also a preview of a capability that could later be used to invade privacy or autonomously spy on computer users on a scale that was once impossible.

A different form of video scraping caused a massive wave of controversy recently for that exact reason. Apps such as the third-party Rewind AI on the Mac and Microsoft’s Recall, which is being built into Windows 11, operate by feeding on-screen video into an AI model that stores extracted data in a database for later AI recall. Unfortunately, that approach also introduces potential privacy issues because it records everything you do on your machine and puts it in a single place that could later be hacked.

To that point, although Willison’s technique currently involves uploading a video of his data to Google for processing, he is pleased that he can still decide what the AI model sees and when.

“The great thing about this video scraping technique is that it works with anything that you can see on your screen… and it puts you in total control of what you end up exposing to the AI model,” Willison explained in his blog post.

It’s also possible in the future that a locally run open-weights AI model could pull off the same video analysis method without the need for a cloud connection at all. Microsoft Recall runs locally on supported devices, but it still demands a great deal of unearned trust. For now, Willison is perfectly content to selectively feed video data to AI models when the need arises.

“I expect I’ll be using this technique a whole lot more in the future,” he wrote, and perhaps many others will, too, in different forms. If the past is any indication, Willison—who coined the term “prompt injection” in 2022—seems to always be a few steps ahead in exploring novel applications of AI tools. Right now, his attention is on the new implications of AI and video, and yours probably should be, too.

Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a widely cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Deepfake lovers swindle victims out of $46M in Hong Kong AI scam

The police operation resulted in the seizure of computers, mobile phones, and about $25,756 in suspected proceeds and luxury watches from the syndicate’s headquarters. Police said that victims originated from multiple countries, including Hong Kong, mainland China, Taiwan, India, and Singapore.

A widening real-time deepfake problem

Real-time deepfakes have become a growing problem over the past year. In August, we covered a free app called Deep-Live-Cam that can do real-time face-swaps for video chat use, and in February, the Hong Kong office of British engineering firm Arup lost $25 million in an AI-powered scam in which the perpetrators used deepfakes of senior management during a video conference call to trick an employee into transferring money.

News of the scam also comes amid recent warnings from the United Nations Office on Drugs and Crime, notes The Record in a report about the scam ring. The agency released a report last week highlighting tech advancements among organized crime syndicates in Asia, specifically mentioning the increasing use of deepfake technology in fraud.

The UN agency identified more than 10 deepfake software providers selling their services on Telegram to criminal groups in Southeast Asia, showing the growing accessibility of this technology for illegal purposes.

Some companies are attempting to find automated solutions to the issues presented by AI-powered crime, including Reality Defender, which creates software that attempts to detect deepfakes in real time. Some deepfake detection techniques may work at the moment, but as the fakes improve in realism and sophistication, we may be looking at an escalating arms race between those who seek to fool others and those who want to prevent deception.

Google and Kairos sign nuclear reactor deal with aim to power AI

Google isn’t alone in eyeballing nuclear power as an energy source for massive data centers. In September, Ars reported on a plan from Microsoft that would re-open the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US administration is getting into the nuclear act as well, signing the bipartisan ADVANCE Act in July with the aim of jump-starting new nuclear power technology.

AI is driving demand for nuclear

In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.

Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.

Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”

Adobe unveils AI video generator trained on licensed content

On Monday, Adobe announced Firefly Video Model, a new AI-powered text-to-video generation tool that can create novel videos from written prompts. It joins similar offerings from OpenAI, Runway, Google, and Meta in an increasingly crowded field. Unlike the competition, Adobe claims that Firefly Video Model is trained exclusively on licensed content, potentially sidestepping ethical and copyright issues that have plagued other generative AI tools.

Because of its licensed training data roots, Adobe calls Firefly Video Model “the first publicly available video model designed to be commercially safe.” However, the San Jose, California-based software firm hasn’t announced a general release date, and during a beta test period, it’s only granting access to people on a waiting list.

An example video of Adobe’s Firefly Video Model, provided by Adobe.

In the works since at least April 2023, the new model builds on techniques Adobe developed for its Firefly image synthesis models. As with its text-to-image generator, which the company later integrated into Photoshop, Adobe hopes to aim Firefly Video Model at media professionals, such as video creators and editors. The company claims its model can produce footage that blends seamlessly with traditionally created video content.

AMD unveils powerful new AI chip to challenge Nvidia

On Thursday, AMD announced its new MI325X AI accelerator chip, which is set to roll out to data center customers in the fourth quarter of this year. At an event hosted in San Francisco, the company claimed the new chip offers “industry-leading” performance compared to Nvidia’s current H200 GPUs, which are widely used in data centers to power AI applications such as ChatGPT.

With its new chip, AMD hopes to narrow the performance gap with Nvidia in the AI processor market. The Santa Clara-based company also revealed plans for its next-generation MI350 chip, which is positioned as a head-to-head competitor of Nvidia’s new Blackwell system, with an expected shipping date in the second half of 2025.

In an interview with the Financial Times, AMD CEO Lisa Su expressed her ambition for AMD to become the “end-to-end” AI leader over the next decade. “This is the beginning, not the end of the AI race,” she told the publication.

The AMD Instinct MI325X Accelerator. Credit: AMD

According to AMD’s website, the announced MI325X accelerator contains 153 billion transistors and is built on the CDNA3 GPU architecture using TSMC’s 5 nm and 6 nm FinFET lithography processes. The chip includes 19,456 stream processors and 1,216 matrix cores spread across 304 compute units. With a peak engine clock of 2100 MHz, the MI325X delivers up to 2.61 PFLOPs of peak eight-bit precision (FP8) performance. For half-precision (FP16) operations, it reaches 1.3 PFLOPs.
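
As a quick sanity check on those figures, the per-compute-unit counts and the FP8-to-FP16 ratio fall out of simple division; this is just arithmetic on the numbers quoted above.

```python
# Simple arithmetic check on the MI325X figures quoted above.
stream_processors = 19_456
matrix_cores = 1_216
compute_units = 304

print(stream_processors / compute_units)  # 64 stream processors per compute unit
print(matrix_cores / compute_units)       # 4 matrix cores per compute unit

fp8_pflops = 2.61
fp16_pflops = 1.3
print(fp8_pflops / fp16_pflops)           # ~2x: FP8 throughput is double FP16
```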

Are Tesla’s robot prototypes AI marvels or remote-controlled toys?

Two years ago, Tesla’s Optimus prototype was an underwhelming mess of exposed wires that could only operate in a carefully controlled stage presentation. Last night, Tesla’s “We, Robot” event featured much more advanced Optimus prototypes that could walk around without tethers and interact directly with partygoers.

It was an impressive demonstration of the advancement of a technology Tesla’s Elon Musk said he thinks “will be the biggest product ever of any kind” (way to set reasonable expectations, there). But the live demos have also set off a firestorm of discussion over just how autonomous these Optimus robots currently are.

A robot in every garage

Before the human/robot party could get started, Musk introduced the humanoid Optimus robots as a logical extension of some of the technology that Tesla uses in its cars, from batteries and motors to software. “It’s just a robot with arms and legs instead of a robot with wheels,” Musk said breezily, easily underselling the huge differences between human-like movements and a car’s much more limited input options.

After confirming that the company “started off with someone in a robot suit”—a reference to a somewhat laughable 2021 Tesla presentation—Musk said that “rapid progress” has been made in the Optimus program in recent years. Extrapolating that progress to the “long term” future, Musk said, would lead to a point where you could purchase “your own personal R2-D2, C-3PO” for $20,000 to $30,000 (though he did allow that it could “take us a minute to get to the long term”).

And what will you get for that $30,000 when the “long term” finally comes to pass? Musk grandiosely promised that Optimus will be able to do “anything you want,” including babysitting kids, walking dogs, getting groceries, serving drinks, or “just be[ing] your friend.” Given those promised capabilities, it’s perhaps no wonder that Musk confidently predicted that “every one of the 8 billion people of Earth” will want at least one Optimus, leading to an “age of abundance” where the labor costs for most services “declines dramatically.”

Man learns he’s being dumped via “dystopian” AI summary of texts

The evolution of bad news via texting

Spreen’s message marks the first AI-mediated relationship breakup we’ve seen, but it likely won’t be the last. As the Apple Intelligence feature rolls out widely and other tech companies embrace AI message summarization, many people will probably be receiving bad news through AI summaries soon. For example, since March, Google’s Android Auto AI has been able to deliver summaries to users while driving.

If that sounds horrible, consider our ever-evolving social tolerance for tech progress. Back in the 2000s when SMS texting was still novel, some etiquette experts considered breaking up a relationship through text messages to be inexcusably rude, and it was unusual enough to generate a Reuters news story. The sentiment apparently extended to Americans in general: According to The Washington Post, a 2007 survey commissioned by Samsung showed that only about 11 percent of Americans thought it was OK to break up that way.

What texting looked like back in the day.

By 2009, as texting became more commonplace, the stance on texting break-ups began to soften. That year, ABC News quoted Kristina Grish, author of “The Joy of Text: Mating, Dating, and Techno-Relating,” as saying, “When Britney Spears dumped Kevin Federline I thought doing it by text message was an abomination, that it was insensitive and without reason.” Grish was referring to a 2006 incident with the pop singer that made headline news. “But it has now come to the point where our cell phones and BlackBerries are an extension of ourselves and our personality. It’s not unusual that people are breaking up this way so much.”

Today, with text messaging basically being the default way most adults communicate remotely, breaking up through text is commonplace enough that Cosmopolitan endorsed the practice in a 2023 article. “I can tell you with complete confidence as an experienced professional in the field of romantic failure that of these options, I would take the breakup text any day,” wrote Kayle Kibbe.

Who knows, perhaps in the future, people will be able to ask their personal AI assistants to contact their girlfriend or boyfriend directly to deliver a personalized break-up for them with a sensitive message that attempts to ease the blow. But what’s next—break-ups on the moon?

This article was updated at 3:33 PM on October 10, 2024 to clarify that the ex-girlfriend’s full real name has not been revealed by the screenshot image.

Is China pulling ahead in AI video synthesis? We put Minimax to the test

In the spirit of not cherry-picking any results, everything you see was the first generation we received for the prompt listed above it.

“A highly intelligent person reading ‘Ars Technica’ on their computer when the screen explodes”

“A cat in a car drinking a can of beer, beer commercial”

“Will Smith eating spaghetti”

“Robotic humanoid animals with vaudeville costumes roam the streets collecting protection money in tokens”

“A basketball player in a haunted passenger train car with a basketball court, and he is playing against a team of ghosts”

“A herd of one million cats running on a hillside, aerial view”

“Video game footage of a dynamic 1990s third-person 3D platform game starring an anthropomorphic shark boy”

“A muscular barbarian breaking a CRT television set with a weapon, cinematic, 8K, studio lighting”

Limitations of video synthesis models

Overall, the Minimax video-01 results seen above feel fairly similar to Gen-3’s outputs, with some differences, like the lack of a celebrity filter on Will Smith (who sadly did not actually eat the spaghetti in our tests), and the more realistic cat hands and licking motion. Some results were far worse, like the one million cats and the Ars Technica reader.

Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand.

The company has not yet announced when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to “enhance their inherent creativity” rather than replace human artists and animators. The company envisions future applications such as easily creating and editing “day in the life” videos for social media platforms or generating personalized animated birthday greetings.

Movie Gen builds on Meta’s previous work in video synthesis, following 2022’s Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

An AI-generated video of a baby hippo swimming around, created with Meta Movie Gen.

Meta isn’t the only game in town when it comes to AI video synthesis. Google showed off a new model called “Veo” in May, and Meta says that in human preference tests, its Movie Gen outputs beat OpenAI’s Sora, Runway Gen-3, and Chinese video model Kling.

Movie Gen’s video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.

AI-generated video from Meta Movie Gen with the prompt: “A ghost in a white bedsheet faces a mirror. The ghost’s reflection can be seen in the mirror. The ghost is in a dusty attic, filled with old beams, cloth-covered furniture. The attic is reflected in the mirror. The light is cool and natural. The ghost dances in front of the mirror.”

Even so, as we’ve seen with previous AI video generators, Movie Gen’s ability to generate coherent scenes on a particular topic is likely dependent on the concepts found in the example videos that Meta used to train its video-synthesis model. It’s worth keeping in mind that cherry-picked results from video generators often differ dramatically from typical results, and getting a coherent result may require lots of trial and error.

OpenAI’s Canvas can translate code between languages with a click

Coding shortcuts in canvas include reviewing code, adding logs for debugging, inserting comments, fixing bugs, and porting code to different programming languages. For example, if your code is JavaScript, with a few clicks it can become PHP, TypeScript, Python, C++, or Java. As with GPT-4o by itself, you’ll probably still have to check it for mistakes.
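
To make the porting shortcut concrete, here is the kind of before-and-after a user might see; the small JavaScript helper (shown as a comment) and its Python port are both invented for illustration and are not OpenAI's examples.

```python
# Illustration only (not an OpenAI example): given a small JavaScript helper like
#   function total(prices) { return prices.reduce((a, b) => a + b, 0); }
# the "port to a language" shortcut would produce a Python equivalent such as:
def total(prices):
    """Return the sum of a list of prices."""
    return sum(prices)

print(total([1.25, 2.50, 3.00]))  # 6.75
```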

A screenshot of coding using ChatGPT with Canvas captured on October 4, 2024. Credit: Benj Edwards

Also, users can highlight specific sections to direct ChatGPT’s focus, and the AI model can provide inline feedback and suggestions while considering the entire project, much like a copy editor or code reviewer. And the interface makes it easy to restore previous versions of a working document using a back button in the Canvas interface.

A new AI model

OpenAI says its research team developed new core behaviors for GPT-4o to support Canvas, including triggering the canvas for appropriate tasks, generating certain content types, making targeted edits, rewriting documents, and providing inline critique.

An image of OpenAI’s Canvas in action. Credit: OpenAI

One key challenge in development, according to OpenAI, was defining when to trigger a canvas. In an example on the Canvas blog post, the team says it taught the model to open a canvas for prompts like “Write a blog post about the history of coffee beans” while avoiding triggering Canvas for general Q&A tasks like “Help me cook a new recipe for dinner.”

Another challenge involved tuning the model’s editing behavior once canvas was triggered, specifically deciding between targeted edits and full rewrites. The team trained the model to perform targeted edits when users specifically select text through the interface, otherwise favoring rewrites.

The company noted that canvas represents the first major update to ChatGPT’s visual interface since its launch two years ago. While canvas is still in early beta, OpenAI plans to improve its capabilities based on user feedback over time.

Microsoft’s new “Copilot Vision” AI experiment can see what you browse

On Monday, Microsoft unveiled updates to its consumer AI assistant Copilot, introducing two new experimental features for a limited group of $20/month Copilot Pro subscribers: Copilot Labs and Copilot Vision. Labs integrates OpenAI’s latest o1 “reasoning” model, and Vision allows Copilot to see what you’re browsing in Edge.

Microsoft says Copilot Labs will serve as a testing ground for Microsoft’s latest AI tools before they see wider release. The company describes it as offering “a glimpse into ‘work-in-progress’ projects.” The first feature available in Labs is called “Think Deeper,” and it uses step-by-step processing to solve more complex problems than the regular Copilot. Think Deeper is Microsoft’s version of OpenAI’s new o1-preview and o1-mini AI models, and it has so far rolled out to some Copilot Pro users in Australia, Canada, New Zealand, the UK, and the US.

Copilot Vision is an entirely different beast. The new feature aims to give the AI assistant a visual window into what you’re doing within the Microsoft Edge browser. When enabled, Copilot can “understand the page you’re viewing and answer questions about its content,” according to Microsoft.

Microsoft’s Copilot Vision promo video.

The company positions Copilot Vision as a way to provide more natural interactions and task assistance beyond text-based prompts, but it will likely raise privacy concerns. As a result, Microsoft says that Copilot Vision is entirely opt-in and that no audio, images, text, or conversations from Vision will be stored or used for training. The company is also initially limiting Vision’s use to a pre-approved list of websites, blocking it on paywalled and sensitive content.

The rollout of these features appears gradual, with Microsoft noting that it wants to balance “pioneering features and a deep sense of responsibility.” The company said it will be “listening carefully” to user feedback as it expands access to the new capabilities. Microsoft has not provided a timeline for wider availability of either feature.

Mustafa Suleyman, chief executive of Microsoft AI, told Reuters that he sees Copilot as an “ever-present confidant” that could potentially learn from users’ various Microsoft-connected devices and documents, with permission. He also mentioned that Microsoft co-founder Bill Gates has shown particular interest in Copilot’s potential to read and parse emails.

But judging by the visceral reaction to Microsoft’s Recall feature, which keeps a record of everything you do on your PC so an AI model can recall it later, privacy-sensitive users may not appreciate having an AI assistant monitor their activities—especially if those features send user data to the cloud for processing.

OpenAI is now valued at $157 billion

OpenAI, the company behind ChatGPT, has now raised $6.6 billion in a new funding round that values the company at $157 billion, nearly doubling its previous valuation of $86 billion, according to a report from The Wall Street Journal.

The funding round comes with strings attached: Investors have the right to withdraw their money if OpenAI does not complete its planned conversion from a nonprofit (with a for-profit division) to a fully for-profit company.

Venture capital firm Thrive Capital led the funding round with a $1.25 billion investment. Microsoft, a longtime backer of OpenAI to the tune of $13 billion, contributed just under $1 billion to the latest round. New investors joined the round, including SoftBank with a $500 million investment and Nvidia with $100 million.

The United Arab Emirates-based company MGX also invested in OpenAI during this funding round. MGX has been busy in AI recently, joining an AI infrastructure partnership last month led by Microsoft.

Notably, Apple was in talks to invest but ultimately did not participate. WSJ reports that the minimum investment required to review OpenAI’s financial documents was $250 million. In June, OpenAI hired its first chief financial officer, Sarah Friar, who played an important role in organizing this funding round, according to the WSJ.
