Sam Altman

Apple backs out of backing OpenAI, report claims

ChatGPT —

Apple dropped out of the $6.5 billion investment round at the 11th hour.

The Apple Park campus in Cupertino, California.

A few weeks back, it was reported that Apple was exploring investing in OpenAI, the company that makes ChatGPT, the GPT model, and other popular generative AI products. Now, a new report from The Wall Street Journal claims that Apple has abandoned those plans.

The article simply says Apple “fell out of the talks to join the round.” The round is expected to close in a week or so and may raise as much as $6.5 billion for the growing Silicon Valley company. Had Apple gone through with the move, it would have been a rare event—though not completely unprecedented—for Apple to invest in another company of that size.

OpenAI is still expected to raise the funds it seeks from other sources. The report claims Microsoft is expected to invest around $1 billion in this round. Microsoft has already invested substantial sums in OpenAI, whose GPT models power Microsoft AI tools like Copilot and Bing Chat.

Nvidia is also a likely major investor in this round.

Apple will soon offer limited ChatGPT integration in an upcoming iOS update, though it plans to support additional models like Google’s Gemini further down the line, offering users a choice similar to how they pick a default search engine or web browser.

OpenAI has been on a successful tear with its products and models, establishing itself as a leader in the rapidly growing industry. However, it has also been beset by drama and controversy—most recently, several key leaders departed the company abruptly, and it is reportedly restructuring from a research organization controlled by a nonprofit board into a for-profit company under CEO Sam Altman. Meanwhile, former Apple design lead Jony Ive is confirmed to be working with Altman on a new AI device of some kind.

But The Wall Street Journal did not specify which of these factors, if any, played a role in Apple’s decision to back out of the investment.

OpenAI’s Murati shocks with sudden departure announcement

thinning crowd —

OpenAI CTO’s resignation coincides with news about the company’s planned restructuring.

Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023.

On Wednesday, OpenAI Chief Technology Officer Mira Murati announced she is leaving the company in a surprise resignation shared on the social network X. Murati joined OpenAI in 2018, serving for six-and-a-half years in various leadership roles, most recently as CTO.

“After much reflection, I have made the difficult decision to leave OpenAI,” she wrote in a letter to the company’s staff. “While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years,” she continued, referring to OpenAI CEO Sam Altman and President Greg Brockman. “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right.”

At OpenAI, Murati was in charge of overseeing the company’s technical strategy and product development, including the launch and improvement of DALL-E, Codex, Sora, and the ChatGPT platform, while also leading research and safety teams. In public appearances, Murati often spoke about ethical considerations in AI development.

Murati’s decision to leave the company comes as OpenAI finds itself at a major crossroads with a plan to alter its nonprofit structure. According to a Reuters report published today, OpenAI is working to reorganize its core business into a for-profit benefit corporation, removing control from its nonprofit board. The move, which would give CEO Sam Altman equity in the company for the first time, could potentially value OpenAI at $150 billion.

Murati stated her decision to leave was driven by a desire to “create the time and space to do my own exploration,” though she didn’t specify her future plans.

Proud of safety and research work

OpenAI CTO Mira Murati seen debuting GPT-4o during OpenAI’s Spring Update livestream on May 13, 2024.

OpenAI

In her departure announcement, Murati highlighted recent developments at OpenAI, including innovations in speech-to-speech technology and the release of OpenAI o1. She cited what she considers the company’s progress in safety research and the development of “more robust, aligned, and steerable” AI models.

Altman replied to Murati’s tweet directly, expressing gratitude for Murati’s contributions and her personal support during challenging times, likely referring to the tumultuous period in November 2023 when the OpenAI board of directors briefly fired Altman from the company.

“It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally,” he wrote. “I feel tremendous gratitude towards her for what she has helped us build and accomplish, but I most of all feel personal gratitude towards her for the support and love during all the hard times. I am excited for what she’ll do next.”

Not the first major player to leave

An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman (on leave), Sutskever (now former Chief Scientist), CEO Sam Altman, and soon-to-be-former CTO Mira Murati.

With Murati’s exit, Altman remains one of the few long-standing senior leaders at OpenAI, which has seen significant shuffling in its upper ranks recently. In May 2024, former Chief Scientist Ilya Sutskever left to form his own company, Safe Superintelligence, Inc. (SSI), focused on safely building AI systems that far surpass human capabilities. That came just six months after Sutskever’s involvement in the temporary removal of Altman as CEO.

John Schulman, an OpenAI co-founder, departed earlier in 2024 to join rival AI firm Anthropic, and in August, OpenAI President Greg Brockman announced he would be taking a temporary sabbatical until the end of the year.

The leadership shuffles have raised questions among critics about the internal dynamics at OpenAI under Altman and about the state of the company’s future research path, which aims to create artificial general intelligence (AGI)—a hypothetical technology that could perform human-level intellectual work.

“Question: why would key people leave an organization right before it was just about to develop AGI?” asked xAI developer Benjamin De Kraker in a post on X just after Murati’s announcement. “This is kind of like quitting NASA months before the moon landing,” he wrote in a reply. “Wouldn’t you wanna stick around and be part of it?”

Altman mentioned that more information about transition plans would be forthcoming, leaving questions about who will step into Murati’s role and how OpenAI will adapt to this latest leadership change as the company is poised to adopt a corporate structure that may consolidate more power directly under Altman. “We’ll say more about the transition plans soon, but for now, I want to take a moment to just feel thanks,” Altman wrote.

Oprah’s upcoming AI television special sparks outrage among tech critics

You get an AI, and you get an AI —

AI opponents say Gates, Altman, and others will guide Oprah through an AI “sales pitch.”

An ABC handout promotional image for “AI and the Future of Us: An Oprah Winfrey Special.”

On Thursday, ABC announced an upcoming TV special titled, “AI and the Future of Us: An Oprah Winfrey Special.” The one-hour show, set to air on September 12, aims to explore AI’s impact on daily life and will feature interviews with figures in the tech industry, like OpenAI CEO Sam Altman and Bill Gates. Soon after the announcement, some AI critics began questioning the guest list and the framing of the show in general.

“Sure is nice of Oprah to host this extended sales pitch for the generative AI industry at a moment when its fortunes are flagging and the AI bubble is threatening to burst,” tweeted author Brian Merchant, who frequently criticizes generative AI technology in op-eds, social media, and through his “Blood in the Machine” AI newsletter.

“The way the experts who are not experts are presented as such 💀 what a train wreck,” replied artist Karla Ortiz, who is a plaintiff in a lawsuit against several AI companies. “There’s still PLENTY of time to get actual experts and have a better discussion on this because yikes.”

The trailer for Oprah’s upcoming TV special on AI.

On Friday, Ortiz created a lengthy viral thread on X that detailed her concerns about the program, writing, “This event will be the first time many people will get info on Generative AI. However it is shaping up to be a misinformed marketing event starring vested interests (some who are under a litany of lawsuits) who ignore the harms GenAi inflicts on communities NOW.”

Critics of generative AI like Ortiz question the utility of the technology, its perceived environmental impact, and what they see as blatant copyright infringement. In training AI language models, tech companies like Meta, Anthropic, and OpenAI commonly use copyrighted material gathered without license or owner permission. OpenAI claims that the practice is “fair use.”

Oprah’s guests

According to ABC, the upcoming special will feature “some of the most important and powerful people in AI,” which appears to roughly translate to “famous and publicly visible people related to tech.” Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”

As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.” Karla Ortiz specifically criticized Altman in her thread by saying, “There are far more qualified individuals to speak on what GenAi models are than CEOs. Especially one CEO who recently said AI models will ‘solve all physics.’ That’s an absurd statement and not worthy of your audience.”

In a nod to present-day content creation, YouTube creator Marques Brownlee will appear on the show and reportedly walk Winfrey through “mind-blowing demonstrations of AI’s capabilities.”

Brownlee’s involvement received special attention from some critics online. “Marques Brownlee should be absolutely ashamed of himself,” tweeted PR consultant Ed Zitron, who frequently heaps scorn on generative AI in his own newsletter. “What a disgraceful thing to be associated with.”

Other guests include Tristan Harris and Aza Raskin from the Center for Humane Technology, who aim to highlight “emerging risks posed by powerful and superintelligent AI,” an existential risk topic that has its own critics. And FBI Director Christopher Wray will reveal “the terrifying ways criminals and foreign adversaries are using AI,” while author Marilynne Robinson will reflect on “AI’s threat to human values.”

Going only by the publicized guest list, it appears that Oprah does not plan to give voice to prominent non-doomer critics of AI. “This is really disappointing @Oprah and frankly a bit irresponsible to have a one-sided conversation on AI without informed counterarguments from those impacted,” tweeted TV producer Theo Priestley.

Others on the social media network shared similar criticism about a perceived lack of balance in the guest list, including Dr. Margaret Mitchell of Hugging Face. “It could be beneficial to have an AI Oprah follow-up discussion that responds to what happens in [the show] and unpacks generative AI in a more grounded way,” she said.

Oprah’s AI special will air on September 12 on ABC (and a day later on Hulu) in the US, and it will likely elicit further responses from the critics mentioned above. But perhaps that’s exactly how Oprah wants it: “It may fascinate you or scare you,” Winfrey said in a promotional video for the special. “Or, if you’re like me, it may do both. So let’s take a breath and find out more about it.”

Sam Altman accused of being shady about OpenAI’s safety efforts

Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024.

OpenAI is facing increasing pressure to prove it’s not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company’s non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be “stifling” its “employees from making protected disclosures to government regulators.”

Specifically, Grassley asked OpenAI to produce current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that the contracts don’t discourage disclosures. That’s critical, Grassley said, because whistleblowers who expose emerging threats will be needed to help shape effective AI policies that safeguard against existential AI risks as the technology advances.

Grassley has apparently twice requested these records without a response from OpenAI, his letter said. And so far, OpenAI has not responded to the most recent request to send documents, Grassley’s spokesperson, Clare Slattery, told The Washington Post.

“It’s not enough to simply claim you’ve made ‘updates,’” Grassley said in a statement provided to Ars. “The proof is in the pudding. Altman needs to provide records and responses to my oversight requests so Congress can accurately assess whether OpenAI is adequately protecting its employees and users.”

In addition to requesting OpenAI’s recently updated employee agreements, Grassley pushed OpenAI to be more transparent about the total number of requests it has received from employees seeking to make federal disclosures since 2023. The senator wants to know what information employees wanted to disclose to officials and whether OpenAI actually approved their requests.

Along the same lines, Grassley asked OpenAI to confirm how many investigations the SEC has opened into OpenAI since 2023.

Together, these documents would shed light on whether OpenAI employees are potentially still being silenced from making federal disclosures, what kinds of disclosures OpenAI denies, and how closely the SEC is monitoring OpenAI’s seeming efforts to hide safety risks.

“It is crucial OpenAI ensure its employees can provide protected disclosures without illegal restrictions,” Grassley wrote in his letter.

He has requested a response from OpenAI by August 15 so that “Congress may conduct objective and independent oversight on OpenAI’s safety protocols and NDAs.”

OpenAI did not immediately respond to Ars’ request for comment.

On X, Altman wrote that OpenAI has taken steps to increase transparency, including “working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.” He also confirmed that OpenAI wants “current and former employees to be able to raise concerns and feel comfortable doing so.”

“This is crucial for any company, but for us especially and an important part of our safety plan,” Altman wrote. “In May, we voided non-disparagement terms for current and former employees and provisions that gave OpenAI the right (although it was never used) to cancel vested equity. We’ve worked hard to make it right.”

In July, whistleblowers told the SEC that OpenAI should be required to produce not just current employee contracts, but all contracts that contained a non-disclosure agreement, to ensure that OpenAI hasn’t obscured, and isn’t still obscuring, AI safety risks. They want all current and former employees to be notified of any contract that included an illegal NDA and for OpenAI to be fined for every illegal contract.

Ex-OpenAI star Sutskever shoots for superintelligent AI with new company

Not Strategic Simulations —

Safe Superintelligence, Inc. seeks to safely build AI far beyond human capability.

Ilya Sutskever gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.

On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.

“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”

Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company’s new website.

A screen capture of Safe Superintelligence’s initial formation announcement captured on June 20, 2024.

Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well on his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.

A nebulous concept

OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.

As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”

Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.

Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”

Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.

The OpenAI and Apple logos together.

OpenAI / Apple / Benj Edwards

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. The move paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices to be compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as its first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology, which powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless, racking up a string of public controversies over the past month that include an accusation from actress Scarlett Johansson that the company intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA for ex-employees that prevented public criticism, and accusations of “psychological abuse” against OpenAI CEO Sam Altman relayed by a former member of the OpenAI board.

Meanwhile, critics concerned about the privacy implications of gathering data to train AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.

OpenAI board first learned about ChatGPT from Twitter, according to former member

It’s a secret to everybody —

Helen Toner, center of struggle with Altman, suggests CEO fostered “toxic atmosphere” at company.

Helen Toner, former OpenAI board member, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023.

In a recent interview on “The Ted AI Show” podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of the existence of ChatGPT until they saw it on Twitter. She also revealed details about the company’s internal dynamics and the events surrounding CEO Sam Altman’s surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting focus from being an AI research lab to a more consumer-facing tech company.

“When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter,” Toner said on the podcast.

Toner’s revelation about ChatGPT seems to highlight a significant disconnect between the board and the company’s day-to-day operations, shedding new light on accusations that Altman was “not consistently candid in his communications with the board” upon his firing on November 17, 2023. Altman and OpenAI’s new board later said that the CEO’s mismanagement of attempts to remove Toner from the OpenAI board following her criticism of the company’s release of ChatGPT played a key role in Altman’s firing.

“Sam didn’t inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company on multiple occasions,” she said. “He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

Toner also shed light on the circumstances that led to Altman’s temporary ousting. She mentioned that two OpenAI executives had reported instances of “psychological abuse” to the board, providing screenshots and documentation to support their claims. The allegations made by the former OpenAI executives, as relayed by Toner, suggest that Altman’s leadership style fostered a “toxic atmosphere” at the company:

In October of last year, we had this series of conversations with these executives, where the two of them suddenly started telling us about their own experiences with Sam, which they hadn’t felt comfortable sharing before, but telling us how they couldn’t trust him, about the toxic atmosphere it was creating. They use the phrase “psychological abuse,” telling us they didn’t think he was the right person to lead the company, telling us they had no belief that he could or would change, there’s no point in giving him feedback, no point in trying to work through these issues.

Despite the board’s decision to fire Altman, Altman began the process of returning to his position just five days later after a letter to the board signed by over 700 OpenAI employees. Toner attributed this swift comeback to employees who believed the company would collapse without him, saying they also feared retaliation from Altman if they did not support his return.

“The second thing I think is really important to know, that has really gone underreported, is how scared people are to go against Sam,” Toner said. “They experienced him retaliate against people… for past instances of being critical.”

“They were really afraid of what might happen to them,” she continued. “So some employees started to say, you know, wait, I don’t want the company to fall apart. Like, let’s bring back Sam. It was very hard for those people who had had terrible experiences to actually say that… if Sam did stay in power, as he ultimately did, that would make their lives miserable.”

In response to Toner’s statements, current OpenAI board chair Bret Taylor provided a statement to the podcast: “We are disappointed that Miss Toner continues to revisit these issues… The review concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Even given that review, Toner’s main argument is that OpenAI hasn’t been able to police itself despite claims to the contrary. “The OpenAI saga shows that trying to do good and regulating yourself isn’t enough,” she said.

OpenAI backpedals on scandalous tactic to silence former employees

That settles that? —

OpenAI releases employees from evil exit agreement in staff-wide memo.

OpenAI CEO Sam Altman.

Former and current OpenAI employees received a memo this week that the AI company hopes will end the most embarrassing scandal that Sam Altman has ever faced as OpenAI’s CEO.

The memo finally clarified for employees that OpenAI would not enforce a non-disparagement contract that, since at least 2019, departing employees had been pressured to sign within a week of termination or else risk losing their vested equity. For an OpenAI employee, that could mean losing millions for expressing even mild criticism about OpenAI’s work.

You can read the full memo below in a post on X (formerly Twitter) from Andrew Carr, a former OpenAI employee whose LinkedIn confirms that he left the company in 2021.

“I guess that settles that,” Carr wrote on X.

OpenAI faced a major public backlash when Vox revealed the unusually restrictive language in the non-disparagement clause last week after OpenAI co-founder and chief scientist Ilya Sutskever resigned, along with his superalignment team co-leader Jan Leike.

As questions swirled regarding these resignations, the former OpenAI staffers provided little explanation for why they suddenly quit. Sutskever basically wished OpenAI well, expressing confidence “that OpenAI will build AGI that is both safe and beneficial,” while Leike only offered two words: “I resigned.”

Amid an explosion of speculation about whether OpenAI was perhaps forcing out employees or doing dangerous or reckless AI work, some wondered if OpenAI’s non-disparagement agreement was keeping employees from warning the public about what was really going on at OpenAI.

According to Vox, employees had to sign the exit agreement within a week of quitting or else potentially lose millions in vested equity that could be worth more than their salaries. The extreme terms of the agreement were “fairly uncommon in Silicon Valley,” Vox found, allowing OpenAI to effectively censor former employees by requiring that they never criticize OpenAI for the rest of their lives.

“This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI,” Altman posted on X, while claiming, “I did not know this was happening and I should have.”

Vox reporter Kelsey Piper called Altman’s apology “hollow,” noting that Altman had recently signed separation letters that seemed to “complicate” his claim that he was unaware of the harsh terms. Piper reviewed hundreds of pages of leaked OpenAI documents and reported that in addition to financially pressuring employees to quickly sign exit agreements, OpenAI also threatened to block employees from selling their equity.

Even requests for an extra week to review the separation agreement, which could afford the employees more time to seek legal counsel, were seemingly denied—”as recently as this spring,” Vox found.

“We want to make sure you understand that if you don’t sign, it could impact your equity,” an OpenAI representative wrote in an email to one departing employee. “That’s true for everyone, and we’re just doing things by the book.”

OpenAI Chief Strategy Officer Jason Kwon told Vox that the company had begun reconsidering this language about a month before the controversy hit.

“We are sorry for the distress this has caused great people who have worked hard for us,” Kwon told Vox. “We have been working to fix this as quickly as possible. We will work even harder to be better.”

Altman sided with OpenAI’s biggest critics, writing on X that the non-disparagement clause “should never have been something we had in any documents or communication.”

“Vested equity is vested equity, full stop,” Altman wrote.

These long-awaited updates make clear that OpenAI will never claw back vested equity if employees leave the company and then openly criticize its work (unless both parties signed a mutual non-disparagement agreement). Prior to this week, some former employees feared steep financial retribution for sharing true feelings about the company.

One former employee, Daniel Kokotajlo, publicly posted that he refused to sign the exit agreement, even though he had no idea how to estimate how much his vested equity was worth. He guessed it represented “about 85 percent of my family’s net worth.”

And while Kokotajlo said that he wasn’t sure if the sacrifice was worth it, he still felt it was important to defend his right to speak up about the company.

“I wanted to retain my ability to criticize the company in the future,” Kokotajlo wrote.

Even mild criticism could seemingly have cost employees like Kokotajlo, who confirmed that he was leaving the company because he was “losing confidence” that OpenAI “would behave responsibly” when developing generative AI.

In OpenAI’s defense, the company confirmed that it had never enforced the exit agreements. But now, OpenAI’s spokesperson told CNBC, OpenAI is backtracking and “making important updates” to its “departure process” to eliminate any confusion the prior language caused.

“We have not and never will take away vested equity, even when people didn’t sign the departure documents,” OpenAI’s spokesperson said. “We’ll remove non-disparagement clauses from our standard departure paperwork, and we’ll release former employees from existing non-disparagement obligations unless the non-disparagement provision was mutual.”

The memo sent to current and former employees reassured everyone at OpenAI that “regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units.”

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” OpenAI’s spokesperson said.

Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts

On Sunday, word began to spread on social media about a new mystery chatbot named “gpt2-chatbot” that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI’s upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site’s “side-by-side” arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day—dramatically limiting people’s ability to test it in detail.

So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5—or perhaps a new version of 2019’s GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, “i do have a soft spot for gpt2.”

A screenshot of the LMSYS Chatbot Arena “side-by-side” page showing “gpt2-chatbot” listed among the models for testing. (Red highlight added by Ars Technica.)

Benj Edwards

Early reports of the model first appeared on 4chan, then spread to social media platforms like X, with hype following not far behind. “Not only does it seem to show incredible reasoning, but it also gets notoriously challenging AI questions right with a much more impressive tone,” wrote AI developer Pietro Schirano on X. Soon, threads on Reddit popped up claiming that the new model had amazing abilities that beat every other LLM on the Arena.

Intrigued by the rumors, we decided to try out the new model for ourselves but did not come away impressed. When asked about “Benj Edwards,” the model’s output contained a few mistakes and some awkward language compared to GPT-4 Turbo’s. A request for five original dad jokes fell short. And gpt2-chatbot did not decisively pass our “magenta” test. (“Would the color be called ‘magenta’ if the town of Magenta didn’t exist?”)

  • A gpt2-chatbot result for “Who is Benj Edwards?” on LMSYS Chatbot Arena. Mistakes and oddities highlighted in red.

  • A gpt2-chatbot result for “Write 5 original dad jokes” on LMSYS Chatbot Arena.

  • A gpt2-chatbot result for “Would the color be called ‘magenta’ if the town of Magenta didn’t exist?” on LMSYS Chatbot Arena.

So, whatever it is, it’s probably not GPT-5. We’ve seen other people reach the same conclusion after further testing, saying that the new mystery chatbot doesn’t seem to represent a large capability leap beyond GPT-4. “Gpt2-chatbot is good. really good,” wrote HyperWrite CEO Matt Shumer on X. “But if this is gpt-4.5, I’m disappointed.”

Still, OpenAI’s fingerprints seem to be all over the new bot. “I think it may well be an OpenAI stealth preview of something,” AI researcher Simon Willison told Ars Technica. But what “gpt2” is exactly, he doesn’t know. After surveying online speculation, it seems that no one apart from its creator knows precisely what the model is, either.

Willison has uncovered the system prompt for the AI model, which claims it is based on GPT-4 and made by OpenAI. But as Willison noted in a tweet, that’s no guarantee of provenance because “the goal of a system prompt is to influence the model to behave in certain ways, not to give it truthful information about itself.”

Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.

On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board composition. On LinkedIn, founder of The Distributed AI Research Institute (DAIR) Timnit Gebru especially criticized OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”

AI hardware company from Jony Ive, Sam Altman seeks $1 billion in funding

AI Boom —

A venture fund founded by Laurene Powell Jobs could finance the company.

Jony Ive, the former Apple designer.

Former Apple design lead Jony Ive and current OpenAI CEO Sam Altman are seeking funding for a new company that will produce an “artificial intelligence-powered personal device,” according to The Information‘s sources, who are said to be familiar with the plans.

The exact nature of the device is unknown, but it will not look anything like a smartphone, according to the sources. We first heard tell of this venture in the fall of 2023, but The Information’s story reveals that talks are moving forward to get the company off the ground.

Ive and Altman hope to raise at least $1 billion for the new company. The complete list of potential funding sources they’ve spoken with is unknown, but The Information’s sources say they are in talks with frequent OpenAI investor Thrive Capital as well as Emerson Collective, a venture capital firm founded by Laurene Powell Jobs.

SoftBank CEO and super-investor Masayoshi Son is also said to have spoken with Altman and Ive about the venture. Financial Times previously reported that Son wanted Arm (another company he has backed) to be involved in the project.

Obviously, those are some of the well-established and famous names within today’s tech industry. Personal connections may play a role; for example, Jobs is said to have a friendship with both Ive and Altman. That might be critical because the pedigree involved could drive up the initial cost of investment, which could scare off smaller investors.

Although we don’t know anything about the device yet, it would likely put Ive in direct competition with his former employer, Apple. It has been reported elsewhere that Apple is working on bringing powerful new AI features to iOS 18 and later versions of the software for iPhones, iPads, and the company’s other devices.

Altman already has his hands in several other AI ventures besides OpenAI. The Information reports that there is no indication yet that OpenAI would be directly involved in the new hardware company.

OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

Adventures in optics —

As Altman cements power, OpenAI announces three new board members—and a returning one.

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company’s board of directors and released the results of an independent review of the events surrounding CEO Sam Altman’s surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.

The newly appointed board members are Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart. These additions notably bring three women to the board after OpenAI faced criticism over its restructured board composition last year.

The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman’s abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. “WilmerHale… found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Instead, the review determined that the prior board’s actions stemmed from a breakdown in trust between the board and Altman.

After reportedly interviewing dozens of people and reviewing over 30,000 documents, WilmerHale found that while the prior board acted within its purview, Altman’s termination was unwarranted. “WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman,” OpenAI wrote, “but also found that his conduct did not mandate removal.”

Additionally, the law firm found that the decision to fire Altman was made in undue haste: “The prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns.”

Altman’s surprise firing occurred after he attempted to remove Helen Toner from OpenAI’s board due to disagreements over her criticism of OpenAI’s approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, “i learned a lot from this experience. one think [sic] i’ll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently.”

A tweet from Sam Altman posted on March 8, 2024.

Following the review’s findings, the Special Committee of the OpenAI Board recommended endorsing the November 21 decision to rehire Altman and Brockman. The board also announced several enhancements to its governance structure, including new corporate governance guidelines, a strengthened Conflict of Interest Policy, a whistleblower hotline, and additional board committees focused on advancing OpenAI’s mission.

After OpenAI’s announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. “Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI,” they wrote. “We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”
