machine learning

OpenAI experiments with giving ChatGPT a long-term conversation memory

“I remember…the Alamo” —

AI chatbot “memory” will recall facts from previous conversations when enabled.

When ChatGPT looks things up, a pair of green pixelated hands look through paper records, much like this. Just kidding.

Benj Edwards / Getty Images

On Tuesday, OpenAI announced that it is experimenting with adding a form of long-term memory to ChatGPT that will allow it to remember details between conversations. You can ask ChatGPT to remember something, see what it remembers, and ask it to forget. Currently, it’s only available to a small number of ChatGPT users for testing.

So far, large language models have typically used two types of memory: one baked into the AI model during the training process (before deployment) and an in-context memory (the conversation history) that persists for the duration of your session. Usually, ChatGPT forgets what you have told it during a conversation once you start a new session.

Various projects have experimented with giving LLMs a memory that persists beyond the context window. (The context window is the hard limit on the number of tokens the LLM can process at once.) The techniques include dynamically managing the context history, compressing previous history through summarization, linking to vector databases that store information externally, or simply injecting information periodically into the system prompt (the instructions ChatGPT receives at the beginning of every chat).
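
As one concrete example of that last technique, here’s a minimal sketch of memory-by-prompt-injection against OpenAI’s public chat API. The file format, function names, model choice, and prompt wording are our illustrative assumptions, not OpenAI’s implementation:

```python
# A minimal sketch of memory-by-prompt-injection, assuming a local JSON
# file as the memory store. Everything here is illustrative, not
# OpenAI's actual implementation.
import json
from pathlib import Path

from openai import OpenAI

MEMORY_FILE = Path("memories.json")  # hypothetical local memory store

def load_memories() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    MEMORY_FILE.write_text(json.dumps(load_memories() + [fact]))

def chat(user_message: str) -> str:
    # Prepend saved memories to the system prompt so they persist
    # across otherwise-stateless sessions.
    system_prompt = "You are a helpful assistant."
    memories = load_memories()
    if memories:
        system_prompt += "\nThings to remember about the user:\n" + "\n".join(
            f"- {m}" for m in memories
        )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

remember("The user runs a coffee shop.")
print(chat("Write a short tagline for my business."))
```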

A screenshot of ChatGPT memory controls provided by OpenAI.

OpenAI

OpenAI hasn’t explained which technique it uses here, but the implementation reminds us of Custom Instructions, a feature OpenAI introduced in July 2023 that lets users add custom additions to the ChatGPT system prompt to change its behavior.

Possible applications for the memory feature provided by OpenAI include explaining how you prefer your meeting notes to be formatted, telling it you run a coffee shop and having ChatGPT assume that’s what you’re talking about, keeping information about your toddler who loves jellyfish so it can generate relevant graphics, and remembering your preferences for kindergarten lesson plan designs.

Also, OpenAI says that memories may help ChatGPT Enterprise and Team subscribers work together better since shared team memories could remember specific document formatting preferences or which programming frameworks your team uses. And OpenAI plans to bring memories to GPTs soon, with each GPT having its own siloed memory capabilities.

Memory control

Obviously, any tendency to remember information brings privacy implications. You should already know that sending information to OpenAI for processing on remote servers introduces the possibility of privacy leaks and that OpenAI trains AI models on user-provided information by default unless conversation history is disabled or you’re using an Enterprise or Team account.

Along those lines, OpenAI says that your saved memories are also subject to OpenAI training use unless you meet the criteria listed above. Still, the memory feature can be turned off completely. Additionally, the company says, “We’re taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details—unless you explicitly ask it to.”

Users will also be able to control what ChatGPT remembers using a “Manage Memory” interface that lists memory items. “ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations,” OpenAI says. “Deleting a chat doesn’t erase its memories; you must delete the memory itself.”

ChatGPT’s memory features are not currently available to every ChatGPT account, so we have not experimented with them yet. Access during this testing period appears to be random among ChatGPT (free and paid) accounts for now. “We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is,” OpenAI writes. “We will share plans for broader roll out soon.”

The Super Bowl’s best and wackiest AI commercials

Superb Owl News —

It’s nothing like “crypto bowl” in 2022, but AI made a notable splash during the big game.

A still image from BodyArmor’s 2024 “Field of Fake” Super Bowl commercial.

BodyArmor

Heavily hyped tech products have a history of appearing in Super Bowl commercials during football’s biggest game—including the Apple Macintosh in 1984, dot-com companies in 2000, and cryptocurrency firms in 2022. In 2024, the hot tech in town is artificial intelligence, and several companies showed AI-related ads at Super Bowl LVIII. Here’s a rundown of notable appearances that range from serious to wacky.

Microsoft Copilot

Microsoft Game Day Commercial | Copilot: Your everyday AI companion.

It’s been a year since Microsoft launched the AI assistant Microsoft Copilot (as “Bing Chat”), and Microsoft is leaning heavily into its AI-assistant technology, which is powered by large language models from OpenAI. In Copilot’s first-ever Super Bowl commercial, we see scenes of various people with defiant text overlaid on the screen: “They say I will never open my own business or get my degree. They say I will never make my movie or build something. They say I’m too old to learn something new. Too young to change the world. But I say watch me.”

Then the commercial shows Copilot creating solutions to some of these problems, with prompts like, “Generate storyboard images for the dragon scene in my script,” “Write code for my 3d open world game,” “Quiz me in organic chemistry,” and “Design a sign for my classic truck repair garage Mike’s.”

Of course, since generative AI is an unfinished technology, many of these solutions are more aspirational than practical at the moment. On Bluesky, writer Ed Zitron put Microsoft’s truck repair logo to the test and saw results that weren’t nearly as polished as those seen in the commercial. On X, others have criticized and poked fun at the “3d open world game” generation prompt, which is a complex task that would take far more than a single, simple prompt to produce useful code.

Google Pixel 8 “Guided Frame” feature

Javier in Frame | Google Pixel SB Commercial 2024.

Instead of focusing on generative aspects of AI, Google’s commercial showed off a feature called “Guided Frame” on the Pixel 8 phone that uses machine vision technology and a computer voice to help people with blindness or low vision to take photos by centering the frame on a face or multiple faces. Guided Frame debuted in 2022 in conjunction with the Google Pixel 7.

The commercial tells the story of a person named Javier, who says, “For many people with blindness or low vision, there hasn’t always been an easy way to capture daily life.” We see a simulated blurry first-person view of Javier holding a smartphone and hear a computer-synthesized voice describing what the AI model sees, directing the person to center on a face to snap various photos and selfies.

Considering the controversies that generative AI currently generates (pun intended), it’s refreshing to see a positive application of AI technology used as an accessibility feature. Relatedly, an app called Be My Eyes (powered by OpenAI’s GPT-4V) also aims to help low-vision people interact with the world.

Despicable Me 4

Despicable Me 4 – Minion Intelligence (Big Game Spot).

So far, we’ve covered a couple of attempts to show AI-powered products as positive features. Elsewhere in Super Bowl ads, companies weren’t as generous about the technology. In an ad for the film Despicable Me 4, we see two Minions creating a series of terribly disfigured AI-generated still images reminiscent of Stable Diffusion 1.4 from 2022. There are three-legged people doing yoga, a painting of Steve Carell and Will Ferrell as Elizabethan gentlemen, a handshake with too many fingers, people eating spaghetti in a weird way, and a pair of people riding dachshunds in a race.

The images are paired with an earnest voiceover that says, “Artificial intelligence is changing the way we see the world, showing us what we never thought possible, transforming the way we do business, and bringing family and friends closer together. With artificial intelligence, the future is in good hands.” When the voiceover ends, the camera pans out to show hundreds of Minions generating similarly twisted images on computers.

Speaking of image synthesis at the Super Bowl, people mistook a Christian commercial created by He Gets Us, LLC for an AI-generated production, likely due to its gaudy technicolor visuals. With the benefit of a YouTube replay and the ability to look at details, the “He washed feet” commercial doesn’t appear AI-generated to us, but it goes to show how the concept of image synthesis has begun to cast doubt on human-made creations.

Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others

chips ahoy —

WSJ: Audacious $5-$7 trillion investment would aim to expand global AI chip supply.

OpenAI Chief Executive Officer Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. (Photo by Kent Nishimura/Getty Images)

Getty Images

On Thursday, The Wall Street Journal reported that OpenAI CEO Sam Altman is in talks with investors to raise as much as $5 trillion to $7 trillion for AI chip manufacturing, according to people familiar with the matter. The funding seeks to address the scarcity of graphics processing units (GPUs) crucial for training and running large language models like those that power ChatGPT, Microsoft Copilot, and Google Gemini.

The high dollar amount reflects the huge amount of capital necessary to spin up new semiconductor manufacturing capability. “As part of the talks, Altman is pitching a partnership between OpenAI, various investors, chip makers and power providers, which together would put up money to build chip foundries that would then be run by existing chip makers,” writes the Wall Street Journal in its report. “OpenAI would agree to be a significant customer of the new factories.”

To hit these ambitious targets—which are larger than the entire semiconductor industry’s current $527 billion in annual global sales—Altman has reportedly met with a range of potential investors worldwide, including sovereign wealth funds and government entities, notably the United Arab Emirates, SoftBank CEO Masayoshi Son, and representatives from Taiwan Semiconductor Manufacturing Co. (TSMC).

TSMC is the world’s largest dedicated independent semiconductor foundry. It’s a critical linchpin that companies such as Nvidia, Apple, Intel, and AMD rely on to fabricate SoCs, CPUs, and GPUs for various applications.

Altman reportedly seeks to expand the global capacity for semiconductor manufacturing significantly, funding the infrastructure necessary to support the growing demand for GPUs and other AI-specific chips. GPUs are excellent at parallel computation, which makes them ideal for running AI models that heavily rely on matrix multiplication to work. However, the technology sector currently faces a significant shortage of these important components, constraining the potential for AI advancements and applications.
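
To make the matrix-multiplication point concrete, here is a toy NumPy sketch of an attention-style step; it is illustrative only and does not correspond to any particular model:

```python
# Toy NumPy illustration: an attention-style step is dominated by large
# matrix multiplications, the operation GPUs parallelize across
# thousands of cores at once.
import numpy as np

tokens, d_model = 512, 1024            # sequence length, embedding width
x = np.random.randn(tokens, d_model)   # token embeddings
w_q = np.random.randn(d_model, d_model)
w_k = np.random.randn(d_model, d_model)

q = x @ w_q                            # one big matmul
k = x @ w_k                            # another
scores = (q @ k.T) / np.sqrt(d_model)  # attention scores: yet another
print(scores.shape)                    # (512, 512)
```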

In particular, the UAE’s involvement, led by Sheikh Tahnoun bin Zayed al Nahyan, a key security official and chair of numerous Abu Dhabi sovereign wealth vehicles, reflects global interest in AI’s potential and the strategic importance of semiconductor manufacturing. However, the prospect of substantial UAE investment in a key tech industry raises potential geopolitical concerns, particularly regarding the US government’s strategic priorities in semiconductor production and AI development.

The US has been cautious about allowing foreign control over the supply of microchips, given their importance to the digital economy and national security. Reflecting this, the Biden administration has undertaken efforts to bolster domestic chip manufacturing through subsidies and regulatory scrutiny of foreign investments in important technologies.

To put the $5 trillion to $7 trillion estimate in perspective, the White House just today announced a $5 billion investment in R&D to advance US-made semiconductor technologies. TSMC has already sunk $40 billion—one of the largest foreign investments in US history—into a US chip plant in Arizona. As of now, it’s unclear whether Altman has secured any commitments toward his fundraising goal.

Updated on February 9, 2024 at 8:45 PM Eastern with a quote from the WSJ that clarifies the proposed relationship between OpenAI and partners in the talks.

ChatGPT’s new @-mentions bring multiple personalities into your AI convo

team of rivals —

Bring different AI roles into the same chatbot conversation history.

With so many choices, selecting the perfect GPT can be confusing.

On Tuesday, OpenAI announced a new feature in ChatGPT that allows users to pull custom personalities called “GPTs” into any ChatGPT conversation with the @ symbol. It allows a level of quasi-teamwork among expert roles within ChatGPT that was previously impractical, bringing collaboration with a team of AI agents on OpenAI’s platform one step closer to reality.

“You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT,” wrote OpenAI on the social media network X. “This allows you to add relevant GPTs with the full context of the conversation.”

OpenAI introduced GPTs in November as a way to create custom personalities or roles for ChatGPT to play. For example, users can build their own GPTs to focus on certain topics or certain skills. Paid ChatGPT subscribers can also freely download a host of GPTs developed by other ChatGPT users through the GPT Store.

Previously, if you wanted to share information between GPT profiles, you had to copy the text, select a new chat with the GPT, paste it, and explain the context of what the information means or what you want to do with it. Now, ChatGPT users can stay in the default ChatGPT window and bring in GPTs as needed without losing the history of the conversation.

For example, we created a “Wellness Guide” GPT that is crafted as an expert in human health conditions (of course, this being ChatGPT, always consult a human doctor if you’re having medical problems), and we created a “Canine Health Advisor” for dog-related health questions.

A screenshot of ChatGPT where we @-mentioned a human wellness advisor, then a dog advisor in the same conversation history.

Benj Edwards

We started in a default ChatGPT chat, hit the @ symbol, then typed the first few letters of “Wellness” and selected it from a list. It filled out the rest. We asked a question about food poisoning in humans, and then we switched to the canine advisor in the same way with an @ symbol and asked about the dog.

Using this feature, you could alternatively consult, say, an “ad copywriter” GPT and an “editor” GPT—ask the copywriter to write some text, then rope in the editor GPT to check it, looking at it from a different angle. Different system prompts (the instructions that define a GPT’s personality) make for significant behavior differences.
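
The @-mention feature lives in the ChatGPT interface, but a rough analogue of the same workflow is possible against OpenAI’s public chat API by swapping system prompts over one shared conversation history. The role prompts below are invented for illustration and say nothing about how GPTs work internally:

```python
# A rough analogue of the @-mention workflow: swap system prompts while
# keeping one shared conversation history. Role prompts are invented
# for illustration.
from openai import OpenAI

client = OpenAI()
history = []  # shared across "personalities"

def ask(system_prompt: str, user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system_prompt}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

tagline = ask("You are an ad copywriter.", "Write one tagline for a coffee shop.")
critique = ask("You are a ruthless copy editor.", "Critique the tagline above.")
print(tagline, critique, sep="\n")
```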

We also tried swapping between GPT profiles that write software and others designed to consult on historical tech subjects. Interestingly, ChatGPT does not differentiate between GPTs as different personalities as you change. It will still say, “I did this earlier” when a different GPT is talking about a previous GPT’s output in the same conversation history. From its point of view, it’s just ChatGPT and not multiple agents.

From our vantage point, this feature seems to represent baby steps toward a future where GPTs, as independent agents, could work together as a team to fulfill more complex tasks directed by the user. Similar experiments have been done outside of OpenAI in the past (using API access), but OpenAI has so far resisted a more agentic model for ChatGPT. As we’ve seen (first with GPTs and now with this), OpenAI seems to be slowly angling toward that goal itself, but only time will tell if or when we see true agentic teamwork in a shipping service.

Rhyming AI-powered clock sometimes lies about the time, makes up words

Confabulation time —

Poem/1 Kickstarter seeks $103K for fun ChatGPT-fed clock that may hallucinate the time.

A CAD render of the Poem/1 sitting on a bookshelf.

On Tuesday, product developer Matt Webb launched a Kickstarter funding project for a whimsical e-paper clock called the “Poem/1” that tells the current time using AI and rhyming poetry. It’s powered by the ChatGPT API, and Webb says that sometimes ChatGPT will lie about the time or make up words to make the rhymes work.

“Hey so I made a clock. It tells the time with a brand new poem every minute, composed by ChatGPT. It’s sometimes profound, and sometimes weird, and occasionally it fibs about what the actual time is to make a rhyme work,” Webb writes on his Kickstarter page.

The $126 clock is the product of Webb’s studio, Acts Not Facts. Despite the net-connected service aspect of the clock, Webb says it will not require a subscription to function.

A labeled CAD rendering of the Poem/1 clock, representing its final shipping configuration.

There are 1,440 minutes in a day, so Poem/1 needs to display 1,440 unique poems to work. The clock features a monochrome e-paper screen and pulls its poetry rhymes via Wi-Fi from a central server run by Webb’s company. To save money, that server pulls poems from ChatGPT’s API and will share them out to many Poem/1 clocks at once. This prevents costly API fees that would add up if your clock were querying OpenAI’s servers 1,440 times a day, non-stop, forever. “I’m reserving a % of the retail price from each clock in a bank account to cover AI and server costs for 5 years,” Webb writes.

For hackers, Webb says that you’ll be able to change the back-end server URL of the Poem/1 from the default to whatever you want, so it can display custom text every minute of the day. Webb says he will document and publish the API when Poem/1 ships.
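
Webb hasn’t published his server code, but the shared-cache design he describes might look roughly like this minimal Flask sketch, in which one API call per minute is shared by every clock; the endpoint name, model choice, and prompt wording are our assumptions:

```python
# Hypothetical reconstruction of the shared-cache design: one ChatGPT
# API call per minute serves every clock. Not Webb's actual code.
import threading
import time

from flask import Flask
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()
cache = {"poem": "Warming up...", "minute": None}

def refresh_loop():
    while True:
        now = time.strftime("%H:%M")
        if cache["minute"] != now:  # at most one API call per minute
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": f"Write a two-line rhyming poem that tells the time {now}.",
                }],
            )
            cache["poem"] = response.choices[0].message.content
            cache["minute"] = now
        time.sleep(1)

@app.route("/poem")
def poem():
    # Every clock polls this endpoint and receives the same cached poem.
    return cache["poem"]

threading.Thread(target=refresh_loop, daemon=True).start()
app.run(port=8000)
```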

Hallucination time

A photo of a Poem/1 prototype with a hallucinated time, according to Webb.

Given the Poem/1’s large language model pedigree, it’s perhaps not surprising that Poem/1 may sometimes make up things (also called “hallucination” or “confabulation” in the AI field) to fulfill its task. The LLM that powers ChatGPT is always searching for the most likely next word in a sequence, and sometimes factuality comes second to fulfilling that mission.

Further down on the Kickstarter page, Webb provides a photo of his prototype Poem/1 where the screen reads, “As the clock strikes eleven forty two, / I rhyme the time, as I always do.” Just below, Webb warns, “Poem/1 fibs occasionally. I don’t believe it was actually 11.42 when this photo was taken. The AI hallucinated the time in order to make the poem work. What we do for art…”

In other clocks, the tendency to unreliably tell the time might be a fatal flaw. But judging by his humorous angle on the Kickstarter page, Webb apparently sees the clock as more of a fun art project than a precision timekeeping instrument. “Don’t rely on this clock in situations where timekeeping is vital,” Webb writes, “such as if you work in air traffic control or rocket launches or the finish line of athletics competitions.”

Poem/1 also sometimes takes poetic license with vocabulary to tell the time. During a humorous moment in the Kickstarter promotional video, Webb looks at his clock prototype and reads the rhyme, “A clock that defies all rhyme and reason / 4:30 PM, a temporal teason.” Then he says, “I had to look ‘teason’ up. It doesn’t mean anything, so it’s a made-up word.”

OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Adventures in chatbusting —

Site gave ChatGPT 3 stars and 48% privacy score: “Best used for creativity, not facts.”

Boy in Living Room Wearing Robot Mask

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI’s GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

“AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly,” Common Sense Media wrote on X. “That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI.”

OpenAI CEO Sam Altman and Common Sense Media CEO James Steyer announced the partnership onstage in San Francisco at the Common Sense Summit for America’s Kids and Families, an event that was well-covered by media members on the social media site X.

For his part, Altman offered a canned statement in the press release, saying, “AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”

The announcement feels slightly non-specific in the official news release, with Steyer offering, “Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”

The partnership seems aimed mostly at bringing a patina of family-friendliness to OpenAI’s GPT store, with the most solid reveal being that Common Sense Media will help with the “curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.”

Common Sense AI reviews

As mentioned above, Common Sense Media began reviewing AI assistants on its site late last year. This puts Common Sense Media in an interesting position with potential conflicts of interest regarding the new partnership with OpenAI. However, it doesn’t seem to be offering any favoritism to OpenAI so far.

For example, Common Sense Media’s review of ChatGPT calls the AI assistant “A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts.” It labels ChatGPT as suitable for ages 13 and up (matching the age minimum in OpenAI’s Terms of Service) and gives the OpenAI assistant three out of five stars. ChatGPT also scores a 48 percent privacy rating (which is oddly shown as 55 percent on another page that goes into privacy details). The review we cited was last updated on October 13, 2023, as of this writing.

For reference, Google Bard gets a three-star overall rating and a 75 percent privacy rating in its Common Sense Media review. Stable Diffusion, the image synthesis model, nets a one-star rating with the description, “Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm.” OpenAI’s DALL-E gets two stars and a 48 percent privacy rating.

The information that Common Sense Media includes about each AI model appears relatively accurate and detailed (and the organization cited an Ars Technica article as a reference in one explanation), so the reviews feel fair, even in the face of the OpenAI partnership. Given the low scores, it seems that most AI models aren’t off to a great start, but that may change. It’s still early days in generative AI.

Google’s latest AI video generator can render cute animals in implausible situations

An elephant with a party hat—underwater —

Lumiere generates five-second videos that “portray realistic, diverse and coherent motion.”

Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

On Tuesday, Google announced Lumiere, an AI video generator that it calls “a space-time diffusion model for realistic video generation” in the accompanying preprint paper. But let’s not kid ourselves: It does a great job at creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video’s entire temporal duration in one go. Or, as the company put it, “We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve.”

In layperson’s terms, Google’s tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.
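
As a conceptual toy only (nothing like Google’s actual diffusion model), the contrast between the two strategies can be sketched with arrays standing in for frames:

```python
# Conceptual toy: keyframes-plus-temporal-super-resolution versus
# generating every frame in a single pass. Arrays stand in for frames.
import numpy as np

frames, h, w = 80, 16, 16  # 5 seconds at 16 fps, tiny stand-in "video"

def keyframe_pipeline():
    # Older approach: synthesize a few distant keyframes, then fill the
    # gaps with temporal super-resolution (naive linear interpolation here).
    keyframes = np.random.rand(5, h, w)
    idx = np.linspace(0, 4, frames)
    lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
    t = (idx - lo)[:, None, None]
    return (1 - t) * keyframes[lo] + t * keyframes[hi]

def single_pass_pipeline():
    # Lumiere-style idea: emit all 80 frames at once, so temporal
    # consistency can be enforced globally rather than patched in later.
    return np.random.rand(frames, h, w)

print(keyframe_pipeline().shape, single_pass_pipeline().shape)
```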

The official promotional video accompanying the paper “Lumiere: A Space-Time Diffusion Model for Video Generation,” released by Google.

Lumiere can also do plenty of party tricks, which are laid out quite well with examples on Google’s demo page. For example, it can perform text-to-video generation (turning a written prompt into a video), convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts, create cinemagraphs by animating specific regions of an image, and offer video inpainting capabilities (for example, it can change the type of dress a person is wearing).

In the Lumiere research paper, the Google researchers state that the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as “low-resolution.” Despite those limitations, the researchers performed a user study and claim that Lumiere’s outputs were preferred over those of existing AI video synthesis models.

As for training data, Google doesn’t say where it got the videos it fed into Lumiere, writing, “We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128.”

A block diagram showing components of the Lumiere AI model, provided by Google.

AI-generated video is still in a primitive state, but it’s been progressing in quality over the past two years. In October 2022, we covered Google’s first publicly unveiled video synthesis model, Imagen Video. It could generate short 1280×768 video clips from a written prompt at 24 frames per second, but the results weren’t always coherent. Before that, Meta debuted its AI video generator, Make-A-Video. In June of last year, Runway’s Gen2 video synthesis model enabled the creation of two-second video clips from text prompts, fueling the creation of surrealistic parody commercials. And in November, we covered Stable Video Diffusion, which can generate short clips from still images.

AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult—especially since we, as humans (you are human, right?), are adept at noticing any flaws in human bodies or how they move. Just look at AI-generated Will Smith eating spaghetti.

Judging by Google’s examples (and not having used it ourselves), Lumiere appears to surpass these other AI video generation models. But since Google tends to keep its AI research models close to its chest, we’re not sure when, if ever, the public may have a chance to try it for themselves.

As always, whenever we see text-to-video synthesis models getting more capable, we can’t help but think of the future implications for our Internet-connected society, which is centered around sharing media artifacts—and the general presumption that “realistic” video typically represents real objects in real situations captured by a camera. Future video synthesis tools more capable than Lumiere will make deceptive deepfakes trivially easy to create.

To that end, in the “Societal Impact” section of the Lumiere paper, the researchers write, “Our primary goal in this work is to enable novice users to generate visual content in an creative and flexible way. [sic] However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use.”

A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In Čapek’s 1935 column, he clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. Čapek emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He refutes the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the transition from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film which went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation, which probably influenced the public and scientific community’s perception of autonomous beings, leading them to associate the idea of robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, would probably have considered them impossible.

OpenAI opens the door for military uses but maintains AI weapons ban

Skynet deferred —

Despite new Pentagon collab, OpenAI won’t allow customers to “develop or use weapons” with its tools.

The OpenAI logo over a camouflage background.

On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg. OpenAI revealed the collaboration during an interview with the news outlet at the World Economic Forum in Davos. The AI company recently modified its policies, allowing for certain military applications of its technology, while maintaining prohibitions against using it to develop weapons.

According to Anna Makanju, OpenAI’s vice president of global affairs, “many people thought that [a previous blanket prohibition on military applications] would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

Under the “Universal Policies” section of OpenAI’s Usage Policies document, section 2 says, “Don’t use our service to harm yourself or others.” The prohibition includes using its AI products to “develop or use weapons.” Changes to the terms that removed the “military and warfare” prohibitions appear to have been made by OpenAI on January 10.

The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides. “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” Makanju said in the interview. “We’ve been exploring whether it can assist with (prevention of) veteran suicide.”

The efforts mark a significant change from OpenAI’s original stance on military partnerships, Bloomberg says. Meanwhile, Microsoft Corp., a large investor in OpenAI, already has an established relationship with the US military through various software contracts.

As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

A pixelated photo of Donald Trump.

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency in AI-generated content and enhancing access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to the company, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.
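
OpenAI hasn’t detailed the cryptography involved, and the C2PA standard itself relies on public-key certificates and signed manifests, but the basic idea of tamper-evident provenance can be illustrated loosely like this:

```python
# Loose illustration of tamper-evident provenance, not the actual C2PA
# standard. An HMAC over the image bytes stands in for a real signature.
import hashlib
import hmac

SIGNING_KEY = b"demo-key-not-for-production"  # stand-in for a real key

def sign_image(image_bytes: bytes) -> str:
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"...image bytes..."
sig = sign_image(original)
print(verify_image(original, sig))         # True: untouched
print(verify_image(original + b"x", sig))  # False: content was altered
```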

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

Famous xkcd comic comes full circle with AI bird-identifying binoculars

Who watches the bird watchers —

Swarovski AX Visio, billed as first “smart binoculars,” names species and tracks location.

The Swarovski Optik Visio binoculars, with an excerpt of a 2014 xkcd comic strip called “Tasks” in the corner.

xkcd / Swarovski

Last week, Austria-based Swarovski Optik introduced the AX Visio 10×32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world’s first “smart binoculars,” and they come with a hefty price tag—$4,799.

“The AX Visio are the world’s first AI-supported binoculars,” the company says in the product’s press release. “At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions.”

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal name pops up on the built-in binocular HUD screen within about five seconds.

In 2014, a famous xkcd comic strip titled Tasks depicted someone asking a developer to create an app that, when a user takes a photo, will check whether the user is in a national park (deemed easy due to GPS) and check whether the photo is of a bird (to which the developer says, “I’ll need a research team and five years”). The caption below reads, “In CS, it can be hard to explain the difference between the easy and the virtually impossible.”

The xkcd comic titled “Tasks” from September 24, 2014.

It’s been just over nine years since the comic was published, and while identifying the presence of a bird in a photo was solved some time ago, these binoculars arguably go further by identifying the species of the bird in the photo (it also keeps track of location due to GPS). While apps to identify bird species already exist, this feature is now packed into a handheld pair of binoculars.

According to Swarovski, the development of the AX Visio took approximately five years, involving around 390 “hardware parts.” The binoculars incorporate a neural processing unit (NPU) for object recognition processing. The company claims that the device will have a long product life cycle, with ongoing updates and improvements. The company also mentions “an open programming interface” in its press release, potentially allowing industrious users (or handy hackers) to expand the unit’s features over time.

  • The Swarovski Optik Visio binoculars. (Swarovski Optik)

The binoculars, which feature industrial design from Marc Newson, include a built-in digital camera, compass, GPS, and discovery-sharing features that can “immediately show your companion where you have seen an animal.” The Visio unit also wirelessly ties into the “SWAROVSKI OPTIK Outdoor App” that runs on a smartphone. The app manages sharing photos and videos captured through the binoculars. (As an aside, we’ve come a long way from computer-connected gadgets that required pesky serial cables in the late 1990s.)

Swarovski says the AX Visio will be available at select retailers and online starting February 1, 2024. While this tech is at a premium price right now, given the speed of tech progress and market competition, we may see similar image-recognizing features built into much cheaper models in the years ahead.

At Senate AI hearing, news executives fight against “fair use” claims for AI training data

All’s fair in love and AI —

Media orgs want AI firms to license content for training, and Congress is sympathetic.

Danielle Coffey, president and CEO of News Media Alliance; Professor Jeff Jarvis, CUNY Graduate School of Journalism; Curtis LeGeyt, president and CEO of National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Artificial Intelligence and The Future Of Journalism.” (Photo by Kent Nishimura/Getty Images)

Getty Images

On Wednesday, news industry executives urged Congress for legal clarification that using journalism to train AI assistants like ChatGPT is not fair use, as claimed by companies such as OpenAI. Instead, they would prefer a licensing regime for AI training content that would force Big Tech companies to pay for content in a method similar to rights clearinghouses for music.

The plea for action came during a US Senate Judiciary Committee hearing titled “Oversight of A.I.: The Future of Journalism,” chaired by Sen. Richard Blumenthal of Connecticut, with Sen. Josh Hawley of Missouri also playing a large role in the proceedings. Last year, the pair of senators introduced a bipartisan framework for AI legislation and held a series of hearings on the impact of AI.

Blumenthal described the situation as an “existential crisis” for the news industry and cited social media as a cautionary tale for legislative inaction about AI. “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Companies like OpenAI have admitted that vast amounts of copyrighted material are necessary to train AI large language models, but they claim their use is transformational and covered under fair use precedents of US copyright law. Currently, OpenAI is negotiating licensing content from some news providers and striking deals, but the executives in the hearing said those efforts are not enough, highlighting closing newsrooms across the US and dropping media revenues while Big Tech’s profits soar.

“Gen AI cannot replace journalism,” said Condé Nast CEO Roger Lynch in his opening statement. (Condé Nast is the parent company of Ars Technica.) “Journalism is fundamentally a human pursuit, and it plays an essential and irreplaceable role in our society and our democracy.” Lynch said that generative AI has been built with “stolen goods,” referring to the use of AI training content from news outlets without authorization. “Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us.”

Roger Lynch, CEO of Condé Nast, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law during a hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

In addition to Lynch, the hearing featured three other witnesses: Jeff Jarvis, a veteran journalism professor and pundit; Danielle Coffey, the president and CEO of News Media Alliance; and Curtis LeGeyt, president and CEO of the National Association of Broadcasters.

Coffey also shared concerns about generative AI using news material to create competitive products. “These outputs compete in the same market, with the same audience, and serve the same purpose as the original articles that feed the algorithms in the first place,” she said.

When Sen. Hawley asked Lynch what kind of legislation might be needed to fix the problem, Lynch replied, “I think quite simply, if Congress could clarify that the use of our content and other publisher content for training and output of AI models is not fair use, then the free market will take care of the rest.”

Lynch used the music industry as a model: “You think about millions of artists, millions of ultimate consumers consuming that content, there have been models that have been set up, ASCAP, BMI, SESAC, GMR, these collective rights organizations to simplify the content that’s being used.”

Curtis LeGeyt, CEO of the National Association of Broadcasters, said that TV broadcast journalists are also affected by generative AI. “The use of broadcasters’ news content in AI models without authorization diminishes our audience’s trust and our reinvestment in local news,” he said. “Broadcasters have already seen numerous examples where content created by our journalists has been ingested and regurgitated by AI bots with little or no attribution.”
