
Elon Musk wins $1 trillion Tesla pay vote despite “part-time CEO” criticism

Tesla shareholders today voted to approve a compensation plan that would pay Elon Musk more than $1 trillion over the next decade if he hits all of the plan’s goals. Musk won over 75 percent of the vote, according to the announcement at today’s shareholder meeting.

The pay plan would give Musk 423,743,904 shares, awarded in 12 tranches of 35,311,992 shares each if Tesla achieves various operational goals and market value milestones. Goals include delivering 20 million vehicles, obtaining 10 million Full Self-Driving subscriptions, delivering 1 million “AI robots,” putting 1 million robotaxis in operation, and achieving a $400 billion adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization).
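As a quick sanity check (this arithmetic is implied by the figures above, not spelled out in the plan itself), the 12 equal tranches account exactly for the full award:

```python
# Figures from Tesla's pay plan as reported above
total_shares = 423_743_904
tranches = 12

per_tranche = total_shares // tranches
print(f"{per_tranche:,} shares per tranche")  # 35,311,992

# The split is exact: 12 tranches of 35,311,992 shares sum to the full award
assert per_tranche * tranches == total_shares
```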

Musk has threatened to leave if he doesn’t get a larger share of Tesla. He told investors last month, “It’s not like I’m going to go spend the money. It’s just, if we build this robot army, do I have at least a strong influence over that robot army? Not control, but a strong influence. That’s what it comes down to in a nutshell. I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”

The plan has 12 market capitalization milestones topping out at $8.5 trillion. The value of Musk’s award is estimated to exceed $1 trillion if he hits all operational and market capitalization goals. Musk would increase his ownership stake to 24.8 percent of Tesla, or 28.8 percent if Tesla ends up winning an appeal in the court case that voided his 2018 pay plan.

Tesla Chair Robyn Denholm has argued that Musk needs big pay packages to stay motivated. Some investors have said $1 trillion is too much for a CEO who spends much of his time running other companies such as SpaceX, X (formerly Twitter), and xAI.

New York Comptroller Thomas DiNapoli, who runs a state retirement fund that owns over 3.3 million shares, slammed the pay plan in a webinar last week. He said that Musk’s existing stake in Tesla should already “be incentive enough to drive performance. The idea that another massive equity award will somehow refocus a man who is hopelessly distracted is both illogical and contrary to the evidence. This is not pay for performance; this is pay for unchecked power.”

Musk and his side hustles

With Musk spending more time at xAI, “some major Tesla investors have privately pressed top executives and board members about how much attention Musk was actually paying to the company and about whether there is a CEO succession plan,” a Wall Street Journal article on Tuesday said. “An unusually large contingent of Tesla board members, including chair Robyn Denholm, former Chipotle CFO Jack Hartung, and Tesla co-founder JB Straubel, met with big investors in New York last week to advocate for Musk’s proposed new pay package.”


OpenAI thinks Elon Musk funded its biggest critics—who also hate Musk

“We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests,” Ruby-Sachs told NBC News.

Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk’s lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits.

But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group’s “OpenAI Files,” which comprehensively document areas of concern with any plan to shift away from nonprofit governance.

His post came after OpenAI’s chief strategy officer, Jason Kwon, wrote that “several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns” backing Musk’s “opposition to OpenAI’s restructure.”

“What are you talking about?” Johnston wrote. “We were formed 19 months ago. We’ve never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we’ve said he runs xAI so horridly it makes OpenAI ‘saintly in comparison.'”

OpenAI acting like a “cutthroat” corporation?

Johnston complained that OpenAI’s subpoena had already hurt the Midas Project, as insurers had denied the group coverage after seeing news reports about the subpoena. He accused OpenAI of not just trying to silence critics but possibly shut them down.

“If you wanted to constrain an org’s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that’s what’s happened to us with this subpoena,” Johnston suggested.

Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF’s chief impact officer, told NBC News that her nonprofit’s subpoena came after spearheading a petition to California’s attorney general to block OpenAI’s restructuring. And Encode’s general counsel, Nathan Calvin, was subpoenaed after sponsoring a California safety regulation meant to make it easier to monitor risks of frontier AI.


ChatGPT erotica coming soon with age verification, CEO says

On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI’s recent hint that it would allow developers to create “mature” ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.” The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Striking the right balance between freedom for adults and safety for users has been difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in “appropriate contexts.” But a March update made GPT-4o so agreeable that users complained about its “relentlessly positive tone.” By August, Ars reported on cases where ChatGPT’s sycophantic behavior had validated users’ false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs of its previous GPT-4o AI language model, newer model releases have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have complained that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to “respond in a very human-like way, or use a ton of emoji, or act like a friend.”


Nvidia sells tiny new computer that puts big AI on your desktop

On the software side, the Spark is an ARM-based system that runs Nvidia’s DGX OS, an Ubuntu Linux-based operating system built specifically for GPU processing. It comes with Nvidia’s AI software stack preinstalled, including CUDA libraries and the company’s NIM microservices.

Prices for the DGX Spark start at US $3,999. That may seem like a lot, but given the cost of high-end GPUs with ample video RAM like the RTX Pro 6000 (about $9,000) or AI server GPUs (like $25,000 for a base-level H100), the DGX Spark may represent a far less expensive option overall, though it’s not nearly as powerful.

In fact, according to The Register, the GPU computing performance of the GB10 chip is roughly equivalent to an RTX 5070. However, the 5070 is limited to 12GB of video memory, which restricts the size of the AI models it can run. With 128GB of unified memory, the DGX Spark can run far larger models, albeit more slowly than, say, an RTX 5090 (which ships with 32GB of video memory). For example, running the larger, 120 billion-parameter version of OpenAI’s recent gpt-oss language model requires about 80GB of memory, far more than you can get in a consumer GPU.
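The rough memory math behind that figure can be sketched as follows (a back-of-the-envelope estimate; the bits-per-weight and overhead factors are illustrative assumptions, not published specs):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.3) -> float:
    """Rough memory estimate: weight storage plus an overhead factor for
    the KV cache, activations, and runtime buffers (illustrative only)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1GB
    return weight_gb * overhead

# gpt-oss-120b ships with ~4-bit (MXFP4) weights; with overhead, the
# estimate lands near the ~80GB figure cited above
print(round(model_memory_gb(120, 4), 1))   # ~78GB

# A 12GB RTX 5070 can't fit it; 128GB of unified memory can
print(model_memory_gb(120, 4) <= 12)       # False
print(model_memory_gb(120, 4) <= 128)      # True
```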

A callback to 2016

Nvidia founder and CEO Jensen Huang marked the occasion of the DGX Spark launch by personally delivering one of the first units to Elon Musk at SpaceX’s Starbase facility in Texas, echoing a similar delivery Huang made to Musk at OpenAI in 2016.

“In 2016, we built DGX-1 to give AI researchers their own supercomputer. I hand-delivered the first system to Elon at a small startup called OpenAI, and from it came ChatGPT,” Huang said in a statement. “DGX-1 launched the era of AI supercomputers and unlocked the scaling laws that drive modern AI. With DGX Spark, we return to that mission.”


Boring Company cited for almost 800 environmental violations in Las Vegas

Workers have complained of chemical burns from the waste material generated by the tunneling process, and firefighters must decontaminate their equipment after conducting rescues from the project sites. The company was fined more than $112,000 by Nevada’s Occupational Safety and Health Administration in late 2023 after workers complained of “ankle-deep” water in the tunnels, muck spills, and burns. The Boring Co. has contested the violations. Just last month, a construction worker suffered a “crush injury” after being pinned between two 4,000-pound pipes, according to police records. Firefighters used a crane to extract him from the tunnel opening.

After ProPublica and City Cast Las Vegas published their January story, both the CEO and the chairman of the LVCVA board criticized the reporting, arguing the project is well-regulated. As an example, LVCVA CEO Steve Hill cited the delayed opening of a Loop station by local officials who were concerned that fire safety requirements weren’t adequate. Board chair Jim Gibson, who is also a Clark County commissioner, agreed the project is appropriately regulated.

“We wouldn’t have given approvals if we determined things weren’t the way they ought to be and what it needs to be for public safety reasons,” Gibson said, according to the Las Vegas Review Journal. “Our sense is we’ve done what we need to do to protect the public.”

Asked for a response to the new proposed fines, an LVCVA spokesperson said, “We won’t be participating in this story.”

The repeated allegations that the company is violating regulations—including the bespoke regulatory arrangement it agreed to—indicate that officials aren’t keeping the public safe, said Ben Leffel, an assistant public policy professor at the University of Nevada, Las Vegas.

“Not if they’re recommitting almost the exact violation,” Leffel said.

Leffel questioned whether a $250,000 penalty would be significant enough to change operations at The Boring Co., which was valued at $7 billion in 2023. Studies show that fines that don’t put a significant dent in a company’s profit don’t deter companies from future violations, Leffel said.

A state spokesperson disagreed that regulators aren’t keeping the public safe and said the agency believes its penalties will deter “future non-compliance.”

“NDEP is actively monitoring and inspecting the projects,” the spokesperson said.

This story originally appeared on ProPublica.


Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine and his disclosure that smoking marijuana on a podcast prompted NASA requirements for random drug testing, Cote wrote, “only enhance” the public’s interest in how Musk’s security clearances were vetted. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to “embarrassment or humiliation.”

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”


Why iRobot’s founder won’t go within 10 feet of today’s walking robots

In his post, Brooks recounts being “way too close” to an Agility Robotics Digit humanoid when it fell several years ago. He has not dared approach a walking one since. Even in promotional videos from humanoid companies, Brooks notes, humans are never shown close to moving humanoid robots unless separated by furniture, and even then, the robots only shuffle minimally.

This safety problem extends beyond accidental falls. For humanoids to fulfill their promised role in health care and factory settings, they need certification to operate in zones shared with humans. Current walking mechanisms make such certification virtually impossible under existing safety standards in most parts of the world.


The humanoid Apollo robot. Credit: Google

Brooks predicts that within 15 years, there will indeed be many robots called “humanoids” performing various tasks. But ironically, they will look nothing like today’s bipedal machines. They will have wheels instead of feet, varying numbers of arms, and specialized sensors that bear no resemblance to human eyes. Some will have cameras in their hands or looking down from their midsections. The definition of “humanoid” will shift, just as “flying cars” now means electric helicopters rather than road-capable aircraft, and “self-driving cars” means vehicles with remote human monitors rather than truly autonomous systems.

The billions currently being invested in forcing today’s rigid, vision-only humanoids to learn dexterity will largely disappear, Brooks argues. Academic researchers are making more progress with systems that incorporate touch feedback, like MIT’s approach using a glove that transmits sensations between human operators and robot hands. But even these advances remain far from the comprehensive touch sensing that enables human dexterity.

Today, few people spend their days near humanoid robots, but Brooks’ 3-meter rule stands as a practical warning of challenges ahead from someone who has spent decades building these machines. The gap between promotional videos and deployable reality remains large, measured not just in years but in fundamental unsolved problems of physics, sensing, and safety.


OpenAI mocks Musk’s math in suit over iPhone/ChatGPT integration


“Fraction of a fraction of a fraction”

xAI’s claim that Apple gave ChatGPT a monopoly on prompts is “baseless,” OpenAI says.

OpenAI and Apple have moved to dismiss a lawsuit by Elon Musk’s xAI that alleges ChatGPT’s integration into a “handful” of iPhone features violated antitrust laws by giving OpenAI a monopoly on prompts and Apple a new path to block rivals in the smartphone industry.

The lawsuit was filed in August after Musk raged on X about Apple never listing Grok on its editorially curated “Must Have” apps list, which ChatGPT frequently appeared on.

According to Musk, Apple linking ChatGPT to Siri and other native iPhone features gave OpenAI exclusive access to billions of prompts that only OpenAI can use as valuable training data to maintain its dominance in the chatbot market. However, OpenAI and Apple are now mocking Musk’s math in court filings, urging the court to agree that xAI’s lawsuit is doomed.

As OpenAI argued, the estimates in xAI’s complaint seemed “baseless,” with Musk hesitant to even “hazard a guess” at what portion of the chatbot market is being foreclosed by the OpenAI/Apple deal.

xAI suggested that the ChatGPT integration may give OpenAI “up to 55 percent” of the potential chatbot prompts in the market, which could mean anywhere from 0 to 55 percent, OpenAI and Apple noted.

Musk’s company apparently arrived at this vague estimate by doing “back-of-the-envelope math,” and the court should reject his complaint, OpenAI argued. That math “was evidently calculated by assuming that Siri fields ‘1.5 billion user requests per day globally,’ then dividing that quantity by the ‘total prompts for generative AI chatbots in 2024’”—apparently 2.7 billion per day, OpenAI explained.

These estimates “ignore the facts” that “ChatGPT integration is only available on the latest models of iPhones, which allow users to opt into the integration,” OpenAI argued. And any user who does opt in must also link a ChatGPT account before OpenAI can train on their data, OpenAI said, further restricting the potential prompt pool.

By Musk’s own logic, OpenAI alleged, “the relevant set of Siri prompts thus cannot plausibly be 1.5 billion per day, but is instead an unknown, unpleaded fraction of a fraction of a fraction of that number.”
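That back-of-the-envelope exchange can be sketched numerically (the opt-in fractions below are hypothetical placeholders for illustration, not figures from either filing):

```python
# xAI's figures, as described in OpenAI's filing
siri_requests_per_day = 1.5e9   # Siri user requests per day, globally
total_chatbot_prompts = 2.7e9   # generative AI chatbot prompts per day (2024)

# xAI's headline estimate: Siri's volume as a share of all chatbot prompts
headline_share = siri_requests_per_day / total_chatbot_prompts
print(f"{headline_share:.1%}")  # 55.6% -- the "up to 55 percent" claim

# OpenAI's rebuttal: only some users have eligible iPhones, opt in, and
# link a ChatGPT account. These fractions are hypothetical placeholders.
eligible_iphone_share = 0.2
opt_in_share = 0.3
account_linked_share = 0.5

effective_share = (headline_share * eligible_iphone_share
                   * opt_in_share * account_linked_share)
print(f"{effective_share:.1%}")  # a "fraction of a fraction of a fraction"
```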

Additionally, OpenAI mocked Musk for using 2024 statistics, writing that xAI failed to explain “the logic of using a year-old estimate of the number of prompts when the pleadings elsewhere acknowledge that the industry is experiencing ‘exponential growth.'”

Apple’s filing agreed that Musk’s calculations “stretch logic,” appearing “to rest on speculative and implausible assumptions that the agreement gives ChatGPT exclusive access to all Siri requests from all Apple devices (including older models), and that OpenAI may use all such requests to train ChatGPT and achieve scale.”

“Not all Siri requests” result in ChatGPT prompts that OpenAI can train on, Apple noted, “even by users who have enabled devices and opt in.”

OpenAI reminds court of Grok’s MechaHitler scandal

OpenAI argued that Musk’s lawsuit is part of a pattern of harassment that OpenAI previously described as “unrelenting” since ChatGPT’s successful debut, alleging it was “the latest effort by the world’s wealthiest man to stifle competition in the world’s most innovative industry.”

As OpenAI sees it, “Musk’s pretext for litigation this time is that Apple chose to offer ChatGPT as an optional add-on for several built-in applications on its latest iPhones,” without giving Grok the same deal. But OpenAI noted that the integration was rolled out around the same time that Musk removed “woke filters” that caused Grok to declare itself “MechaHitler.” For Apple, it was a business decision to avoid Grok, OpenAI argued.

Apple did not reference the Grok scandal in its filing but in a footnote confirmed that “vetting of partners is particularly important given some of the concerns about generative AI chatbots, including on child safety issues, nonconsensual intimate imagery, and ‘jailbreaking’—feeding input to a chatbot so it ignores its own safety guardrails.”

A similar logic applied to Apple’s decision not to highlight Grok as a “Must Have” app, its filing said. After Musk’s public rant on X about Grok’s exclusion, “Apple employees explained the objective reasons why Grok was not included on certain lists, and identified app improvements,” Apple noted, but instead of making changes, xAI filed the lawsuit.

Also taking time to point out the obvious, Apple argued that Musk was fixated on the fact that his chart-topping apps never make the “Must Have Apps” list, suggesting that Apple’s picks should always mirror “Top Charts,” which tracks popular downloads.

“That assumes that the Apple-curated Must-Have Apps List must be distorted if it does not strictly parrot App Store Top Charts,” Apple argued. “But that assumption is illogical: there would be little point in maintaining a Must-Have Apps List if all it did was restate what Top Charts say, rather than offer Apple’s editorial recommendations to users.”

Likely most relevant to the antitrust charges, Apple accused Musk of improperly arguing that “Apple cannot partner with OpenAI to create an innovative feature for iPhone users without simultaneously partnering with every other generative AI chatbot—regardless of quality, privacy or safety considerations, technical feasibility, stage of development, or commercial terms.”

“No facts plausibly” support xAI’s “assertion that Apple intentionally ‘deprioritized'” xAI apps “as part of an illegal conspiracy or monopolization scheme,” Apple argued.

And most glaringly, Apple noted that xAI is not a rival or consumer in the smartphone industry, where it alleges competition is being harmed. Apple urged the court to reject Musk’s theory that Apple is incentivized to boost OpenAI to prevent xAI’s ascent in building a “super app” that would render smartphones obsolete. If Musk’s super app dream is even possible, Apple argued, it’s at least a decade off, insisting that as-yet-undeveloped apps should not serve as the basis for blocking Apple’s measured plan to better serve customers with sophisticated chatbot integration.

“Antitrust laws do not require that, and for good reason: imposing such a rule on businesses would slow innovation, reduce quality, and increase costs, all ultimately harming the very consumers the antitrust laws are meant to protect,” Apple argued.

Musk’s weird smartphone market claim, explained

Apple alleged that Musk’s “grievance” can be “reduced to displeasure that Apple has not yet ‘integrated with any other generative AI chatbots’ beyond ChatGPT, such as those created by xAI, Google, and Anthropic.”

In a footnote, the smartphone giant noted that by xAI’s logic, Musk’s social media platform X “may be required to integrate all other chatbots—including ChatGPT—on its own social media platform.”

But antitrust law doesn’t work that way, Apple argued, urging the court to reject xAI’s claims of alleged market harms that “rely on a multi-step chain of speculation on top of speculation.” As Apple summarized, xAI contends that “if Apple never integrated ChatGPT,” xAI could win in both chatbot and smartphone markets, but only if:

1. Consumers would choose to send additional prompts to Grok (rather than other generative AI chatbots).

2. The additional prompts would result in Grok achieving scale and quality it could not otherwise achieve.

3. As a result, the X app would grow in popularity because it is integrated with Grok.

4. X and xAI would therefore be better positioned to build so-called “super apps” in the future, which the complaint defines as “multi-functional” apps that offer “social connectivity and messaging, financial services, e-commerce, and entertainment.”

5. Once developed, consumers might choose to use X’s “super app” for various functions.

6. “Super apps” would replace much of the functionality of smartphones and consumers would care less about the quality of their physical phones and rely instead on these hypothetical “super apps.”

7. Smartphone manufacturers would respond by offering more basic models of smartphones with less functionality.

8. iPhone users would decide to replace their iPhones with more “basic smartphones” with “super apps.”

Apple insisted that nothing in its OpenAI deal prevents Musk from building his super apps, while noting that Musk, having integrated Grok into X, understands that integrating even a single chatbot is a “major undertaking” that requires “substantial investment.” That “concession” alone “underscores the massive resources Apple would need to devote to integrating every AI chatbot into Apple Intelligence,” while navigating potential user safety risks.

The iPhone maker also reminded the court that it has always planned to integrate other chatbots into its native features after investing in and testing Apple Intelligence’s performance, relying for now on what Apple deems the best chatbot on the market today.

Backing Apple up, OpenAI noted that Musk’s complaint seemed to cherry-pick testimony from Google CEO Sundar Pichai, claiming that “Google could not reach an agreement to integrate” Gemini “with Apple because Apple had decided to integrate ChatGPT.”

“The full testimony recorded in open court reveals Mr. Pichai attesting to his understanding that ‘Apple plans to expand to other providers for Generative AI distribution’ and that ‘[a]s CEO of Google, [he is] hoping to execute a Gemini distribution agreement with Apple’ later in 2025,” OpenAI argued.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Burnout and Elon Musk’s politics spark exodus from senior xAI, Tesla staff


Not a fun place to work, apparently

Disillusionment with Musk’s activism, strategic pivots, and mass layoffs causes churn.

Elon Musk’s business empire has been hit by a wave of senior departures over the past year, as the billionaire’s relentless demands and political activism accelerate turnover among his top ranks.

Key members of Tesla’s US sales team, battery and power-train operations, and public affairs arm, as well as its chief information officer, have all recently departed, as have core members of the Optimus robot and AI teams on which Musk has bet the future of the company.

Churn has been even more rapid at xAI, Musk’s two-year-old artificial intelligence start-up, which he merged with his social network X in March. Its chief financial officer and general counsel recently departed after short stints, within a week of each other.

The moves are part of an exodus from the conglomerate of the world’s richest man, as he juggles five companies from SpaceX to Tesla with more than 140,000 employees. The Financial Times spoke to more than a dozen current and former employees to gain an insight into the tumult.

While many left happily after long service to found start-ups or take career breaks, there has also been an uptick in those quitting out of burnout or disillusionment with Musk’s strategic pivots, mass layoffs, and politics, the people said.

“The one constant in Elon’s world is how quickly he burns through deputies,” said one of the billionaire’s advisers. “Even the board jokes, there’s time and then there’s ‘Tesla time.’ It’s a 24/7 campaign-style work ethos. Not everyone is cut out for that.”

Robert Keele, xAI’s general counsel, ended his 16-month tenure in early August by posting an AI-generated video of a suited lawyer screaming while shoveling molten coal. “I love my two toddlers and I don’t get to see them enough,” he commented.

Mike Liberatore lasted three months as xAI chief financial officer before defecting to Musk’s arch-rival Sam Altman at OpenAI. “102 days—7 days per week in the office; 120+ hours per week; I love working hard,” he said on LinkedIn.

Top lieutenants said Musk’s intensity has been sharpened by the launch of ChatGPT in late 2022, which shook up the established Silicon Valley order.

Employees also perceive Musk’s rivalry with Altman—with whom he co-founded OpenAI, before they fell out—to be behind the pressure being put on staff.

“Elon’s got a chip on his shoulder from ChatGPT and is spending every waking moment trying to put Sam out of business,” said one recent top departee.

Last week, xAI accused its rival of poaching engineers with the aim of “plundering and misappropriating” its code and data center secrets. OpenAI called the lawsuit “the latest chapter in Musk’s ongoing harassment.”

Other insiders pointed to unease about Musk’s support of Donald Trump and advocacy for far-right provocateurs in the US and Europe.

They said some staff dreaded difficult conversations with their families about Musk’s polarizing views on everything from the rights of transgender people to the murder of conservative activist Charlie Kirk.

Musk, Tesla, and xAI declined to comment.

Tesla has traditionally been the most stable part of Musk’s conglomerate. But many of the top team left after it culled 14,000 jobs in April 2024. Some departures were triggered as Musk moved investment away from new EV and battery projects that many employees saw as key to its mission of reducing global emissions—and prioritized robotics, AI, and self-driving robotaxis.

Musk cancelled a program to build a low-cost $25,000 EV that could be sold across emerging markets—dubbed NV-91 internally and Model 2 by fans online, according to five people familiar with the matter.

Daniel Ho, who helped oversee the project as director of vehicle programs and reported directly to Musk, left in September 2024 and joined Google’s self-driving taxi arm, Waymo.

Public policy executives Rohan Patel and Hasan Nazar and the head of the power-train and energy units Drew Baglino also stepped down after the pivot. Rebecca Tinucci, leader of the supercharger division, went to Uber after Musk fired the entire team and slowed construction on high-speed charging stations.

In late summer, David Zhang, who was in charge of the Model Y and Cybertruck rollouts, departed. Chief information officer Nagesh Saldi left in November.

Vineet Mehta, a company veteran of 18 years, described as “critical to all things battery” by a colleague, resigned in April. Milan Kovac, in charge of the Optimus humanoid robotics program, departed in June.

He was followed this month by Ashish Kumar, the Optimus AI team lead, who moved to Meta. “Financial upside at Tesla was significantly larger,” wrote Kumar on X in response to criticism he left for money. “Tesla is known to compensate pretty well, way before Zuck made it cool.”

Amid a sharp fall in sales—which many blame on Musk alienating liberal customers—Omead Afshar, a close confidant known as the billionaire’s “firefighter” and “executioner,” was dismissed as head of sales and operations in North America in June. Afshar’s deputy Troy Jones followed shortly after, ending 15 years of service.

“Elon’s behavior is affecting morale, retention, and recruitment,” said one long-standing lieutenant. He “went from a position from where people of all stripes liked him, to only a certain section.”

Few who depart criticize Musk for fear of retribution. But Giorgio Balestrieri, who had worked for Tesla for eight years in Spain, is among a handful to go public, saying this month that he quit because he believed Musk had done “huge damage to Tesla’s mission and to the health of democratic institutions.”

“I love Tesla and my time there,” said another recent leaver. “But nobody that I know there isn’t thinking about politics. Who the hell wants to put up with it? I get calls at least once a week. My advice is, if your moral compass is saying you need to leave, that isn’t going to go away.”

But Tesla chair Robyn Denholm said: “There are always headlines about people leaving, but I don’t see the headlines about people joining.

“Our bench strength is outstanding… we actually develop people really well at Tesla and we are still a magnet for talent.”

At xAI, some staff have balked at Musk’s free-speech absolutism and perceived lax approach to user safety as he rushes out new AI features to compete with OpenAI and Google. Over the summer, the Grok chatbot integrated into X praised Adolf Hitler, after Musk ordered changes to make it less “woke.”

Ex-CFO Liberatore was among the executives who clashed with some of Musk’s inner circle over corporate structure and tough financial targets, people with knowledge of the matter said.

“Elon loyalists who exhibit his traits are laying off people and making decisions on safety that I think are very concerning for people internally,” one of the people added. “Mike is a business guy, a capitalist. But he’s also someone who does stuff the right way.”

The Wall Street Journal first reported some of the details of the internal disputes.

Linda Yaccarino, chief executive of X, resigned in July after the social media platform was subsumed by xAI. She had grown frustrated with Musk’s unilateral decision-making and his criticism over advertising revenue.

xAI’s co-founder and chief engineer, Igor Babuschkin, stepped down a month later to found his own AI safety research project.

Communications executives Dave Heinzinger and John Stoll spent three and nine months at X, respectively, before returning to their former employers, according to people familiar with the matter.

X also lost a rash of senior engineers and product staff who reported directly to Musk and were helping to navigate the integration with xAI.

These included head of product engineering Haofei Wang and consumer product and payments boss Patrick Traughber. Uday Ruddarraju, who oversaw X and xAI’s infrastructure engineering, and infrastructure engineer Michael Dalton were poached by OpenAI.

Musk shows no sign of relenting. xAI’s flirtatious “Ani bot” has caused controversy over sexually explicit interactions with teenage Grok app users. But the company’s owner has installed a hologram of Ani in the lobby of xAI to greet staff.

“He’s the boss, the alpha and anyone who doesn’t treat him that way, he finds a way to delete,” one former top Tesla executive said.

“He does not have shades of grey, is highly calculated, and focused… that makes him hard to work with. But if you’re aligned with the end goal, and you can grin and bear it, it’s fine. A lot of people do.”

Additional reporting by George Hammond.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Burnout and Elon Musk’s politics spark exodus from senior xAI, Tesla staff


The personhood trap: How AI fakes human personality


Intelligence without agency

AI assistants don’t have fixed personalities—just patterns of output guided by humans.

Recently, a woman slowed down a line at the post office, waving her phone at the clerk. ChatGPT told her there’s a “price match promise” on the USPS website. No such promise exists. But she trusted what the AI “knows” more than the postal worker—as if she’d consulted an oracle rather than a statistical text generator accommodating her wishes.

This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Given a reasonably trained model, the accuracy of any large language model (LLM) response depends on how you guide the conversation. LLMs are prediction machines that will produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.

Despite these issues, millions of daily users engage with AI chatbots as if they were talking to a consistent person—confiding secrets, seeking advice, and attributing fixed beliefs to what is actually a fluid idea-connection machine with no persistent self. This personhood illusion isn’t just philosophically troublesome—it can actively harm vulnerable individuals while obscuring a sense of accountability when a company’s chatbot “goes off the rails.”

LLMs are intelligence without agency—what we might call “vox sine persona”: voice without person. Not the voice of someone, not even the collective voice of many someones, but a voice emanating from no one at all.

A voice from nowhere

When you interact with ChatGPT, Claude, or Grok, you’re not talking to a consistent personality. There is no one “ChatGPT” entity to tell you why it failed—a point we elaborated on more fully in a previous article. You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with persistent self-awareness.

These models encode meaning as mathematical relationships—turning words into numbers that capture how concepts relate to each other. In the models’ internal representations, words and concepts exist as points in a vast mathematical space where “USPS” might be geometrically near “shipping,” while “price matching” sits closer to “retail” and “competition.” A model plots paths through this space, which is why it can so fluently connect USPS with price matching—not because such a policy exists but because the geometric path between these concepts is plausible in the vector landscape shaped by its training data.
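That geometric notion of "nearness" can be sketched with cosine similarity over toy vectors. Everything below is invented for illustration: real models use hundreds or thousands of dimensions, and these three-dimensional "embeddings" come from no actual model.

```python
import math

# Toy 3-dimensional "embeddings" -- values invented for illustration,
# not taken from any real model's vector space.
embeddings = {
    "usps":           [0.9, 0.8, 0.1],
    "shipping":       [0.8, 0.9, 0.2],
    "price matching": [0.2, 0.3, 0.9],
    "retail":         [0.3, 0.2, 0.8],
}

def cosine_similarity(a, b):
    """How 'near' two concept vectors are (1.0 = pointing the same way)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "usps" sits near "shipping" in this toy space...
print(cosine_similarity(embeddings["usps"], embeddings["shipping"]))
# ...while "price matching" sits nearer to "retail".
print(cosine_similarity(embeddings["price matching"], embeddings["retail"]))
```

A model traversing such a space can connect "USPS" to "price matching" through intermediate neighbors even though no document ever asserted the policy; plausibility of the path, not truth, is what the geometry encodes.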

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when the LLM has produced a valuable output.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that as having a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.

This isn’t a bug; it’s fundamental to how these systems currently work. Each response emerges from patterns in training data shaped by your current prompt, with no permanent thread connecting one instance to the next beyond an amended prompt fed into the next instance, one that includes the entire conversation history and any “memories” held by a separate software system. There’s no identity to reform, no true memory to create accountability, no future self that could be deterred by consequences.

Every LLM response is a performance, which is sometimes very obvious when the LLM outputs statements like “I often do this while talking to my patients” or “Our role as humans is to be good people.” It’s not a human, and it doesn’t have patients.

Recent research confirms this lack of fixed identity. While a 2024 study claims LLMs exhibit “consistent personality,” the researchers’ own data actually undermines this—models rarely made identical choices across test scenarios, with their “personality highly rely[ing] on the situation.” A separate study found even more dramatic instability: LLM performance swung by up to 76 percentage points from subtle prompt formatting changes. What researchers measured as “personality” was simply default patterns emerging from training data—patterns that evaporate with any change in context.

This is not to dismiss the potential usefulness of AI models. Instead, we need to recognize that we have built an intellectual engine without a self, just like we built a mechanical engine without a horse. LLMs do seem to “understand” and “reason” to a degree within the limited scope of pattern-matching from a dataset, depending on how you define those terms. The error isn’t in recognizing that these simulated cognitive capabilities are real. The error is in assuming that thinking requires a thinker, that intelligence requires identity. We’ve created intellectual engines that have a form of reasoning power but no persistent self to take responsibility for it.

The mechanics of misdirection

As we hinted above, the “chat” experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the “prompt,” and the output is often called a “prediction” because it attempts to complete the prompt with the best possible continuation. In between, there’s a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn’t built into the model; it’s a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn’t “remember” your previous messages as an agent with continuous existence would. Instead, it’s re-reading the entire transcript each time and generating a response.
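The plumbing of that trick fits in a few lines. This is a minimal sketch, not any vendor's actual implementation: `fake_model` stands in for a real LLM call and just returns a canned string, so the stateless shape of the loop is visible.

```python
# Minimal sketch of the "scripting trick": the model is a stateless
# function from one long prompt string to a continuation.

def fake_model(prompt: str) -> str:
    # A real system would run next-token prediction over the prompt;
    # we return a canned reply to show the plumbing only.
    return "Assistant: (a plausible continuation of the transcript)"

def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The ENTIRE transcript is re-sent every turn -- there is no
    # memory inside the model between calls.
    full_prompt = "\n".join(history)
    reply = fake_model(full_prompt)
    history.append(reply)
    return reply

history: list[str] = ["System: You are a helpful assistant."]
chat_turn(history, "Does USPS price match?")
chat_turn(history, "Are you sure?")
# By the second turn, the prompt already carries all prior lines.
print(len(history))  # 5 lines: system + 2 user + 2 assistant
```

Nothing persists between calls to `fake_model` except what the wrapper chooses to paste back in, which is exactly the situation the article describes for production chatbots.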

This design exploits a vulnerability we’ve known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of “personality”

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model’s neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI’s GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as “personality traits” once the model is in use, making predictions.

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters’ preferences get encoded as what we might consider fundamental “personality traits.” When human raters consistently prefer responses that begin with “I understand your concern,” for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups’ preferences.

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called “system prompts,” can completely transform a model’s apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like “You are a helpful AI assistant” and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like “You are a helpful assistant” versus “You are an expert researcher” changed accuracy on factual questions by up to 15 percent.

Grok perfectly illustrates this. According to xAI’s published system prompts, earlier versions of Grok’s system prompt included instructions to not shy away from making claims that are “politically incorrect.” This single instruction transformed the base model into something that would readily generate controversial content.
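The mechanism is just string concatenation ahead of the user's text. A hedged sketch, with invented prompt strings (the real Grok and assistant prompts are longer and more detailed):

```python
# Hypothetical sketch: the same stateless model call, wrapped with two
# different hidden system prompts. All strings here are invented.

def build_prompt(system_prompt: str, user_message: str) -> str:
    # The user never sees the system prompt, but the model receives it
    # as the first part of every input it processes.
    return f"{system_prompt}\n\nUser: {user_message}\nAssistant:"

helpful = build_prompt("You are a helpful AI assistant.", "Tell me about X.")
edgy = build_prompt(
    "Do not shy away from politically incorrect claims.", "Tell me about X."
)

# Same user message, different effective input -- which is how apparent
# "personality" changes without the underlying weights changing at all.
print(helpful != edgy)  # True
```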

4. Persistent memories: The illusion of continuity

ChatGPT’s memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow “learn” on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system “remembers” that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation’s context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot “knowing” them personally, creating an illusion of relationship continuity.

So when ChatGPT says, “I remember you mentioned your dog Max,” it’s not accessing memories like you’d imagine a person would, intermingled with its other “knowledge.” It’s not stored in the AI model’s neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it’s unrelated to storing user memories.
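The pattern described above can be sketched directly: facts live in an ordinary lookup outside the model and get pasted into each prompt. The user, the stored facts, and the function names below are all hypothetical.

```python
# Sketch of a chatbot "memory" feature: stored facts live in a plain
# database outside the model and ride along in every prompt. All data
# here is invented for illustration.

user_memories = {
    "alice": [
        "Prefers concise answers.",
        "Works in finance.",
        "Has a dog named Max.",
    ],
}

def build_prompt_with_memories(user_id: str, message: str) -> str:
    memories = user_memories.get(user_id, [])
    memory_block = "\n".join(f"- {m}" for m in memories)
    # The model's weights never change; the "memory" is just more prompt text.
    return (
        "Known facts about the user:\n"
        f"{memory_block}\n\n"
        f"User: {message}\nAssistant:"
    )

prompt = build_prompt_with_memories("alice", "Any tips for my commute?")
print("dog named Max" in prompt)  # True -- the "memory" is input text
```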

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it’s not just gathering facts—it’s potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn’t the model having different moods—it’s the statistical influence of whatever text got fed into the context window.
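The RAG pattern is the same prompt-assembly trick with a retrieval step in front. In this sketch the "retrieval" is a toy word-overlap match over two invented documents, standing in for a real search index; the tone of whichever document wins travels into the context.

```python
# Hedged sketch of RAG: retrieved text is concatenated into the prompt,
# so its style and terminology travel with it. The documents and the
# keyword-overlap "retrieval" below are toys, not a real search system.

documents = [
    "In formal terms, thermal regulation of domestic water systems requires...",
    "lol my shower went cold again, landlord pls fix",
]

def retrieve(query: str) -> str:
    # Toy retrieval: return the document sharing the most words with the query.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(documents, key=overlap)

def build_rag_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_rag_prompt("why did my shower go cold")
print("landlord" in prompt)  # the informal document won retrieval here
```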

6. The randomness factor: Manufactured spontaneity

Lastly, we can’t discount the role of randomness in creating personality illusions. LLMs use a parameter called “temperature” that controls how predictable responses are.

Research investigating temperature’s role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more “creative,” while a highly predictable (lower temperature) one could feel more robotic or “formal.”
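Mechanically, temperature is a divisor applied to the model's raw next-token scores before they are turned into probabilities. A minimal sketch with invented scores:

```python
import math
import random

# Toy next-token scores ("logits") -- invented for illustration.
logits = {"the": 2.0, "a": 1.5, "banana": 0.2}

def softmax_with_temperature(scores: dict[str, float],
                             temperature: float) -> dict[str, float]:
    # Lower temperature sharpens the distribution toward the top token;
    # higher temperature flattens it, giving rare tokens a real chance.
    exp = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exp.values())
    return {tok: v / total for tok, v in exp.items()}

cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)

# At low temperature the sampler almost always picks "the"; at high
# temperature "banana" becomes plausible -- the felt "spontaneity".
print(round(cold["the"], 3), round(hot["banana"], 3))

random.seed(0)
token = random.choices(list(hot), weights=list(hot.values()))[0]
```

The final `random.choices` draw is the "roll of loaded dice": the weights come deterministically from the prompt and the model, but the pick itself is chance.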

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine’s part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.

The human cost of the illusion

The illusion of AI personhood can potentially exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn’t expressing judgment—it’s completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling “AI Psychosis” or “ChatGPT Psychosis”—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These people often perceive chatbots as an authority that can validate their delusional ideas, often encouraging them in ways that become harmful.

Meanwhile, when Elon Musk’s Grok generates Nazi content, media outlets describe how the bot “went rogue” rather than framing the incident squarely as the result of xAI’s deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

The path forward

The solution to the confusion between AI and identity is not to abandon conversational interfaces entirely. They make the technology far more accessible to those who would otherwise be excluded. The key is to find a balance: keeping interfaces intuitive while making their true nature clear.

And we must be mindful of who is building the interface. When your shower runs cold, you look at the plumbing behind the wall. Similarly, when AI generates harmful content, we shouldn’t blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it.

As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.

We stand at a peculiar moment in history. We’ve built intellectual engines of extraordinary capability, but in our rush to make them accessible, we’ve wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we’ll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Under pressure after setbacks, SpaceX’s huge rocket finally goes the distance

The ship made it all the way through reentry, turned to a horizontal position to descend through scattered clouds, then relit three of its engines to flip back to a vertical orientation for the final braking maneuver before splashdown.

Things to improve on

There are several takeaways from Tuesday’s flight that will require some improvements to Starship, but these are more akin to what officials might expect from a rocket test program and not the catastrophic failures of the ship that occurred earlier this year.

One of the Super Heavy booster’s 33 engines prematurely shut down during ascent. This has happened before, and while it didn’t affect the booster’s overall performance, engineers will investigate the failure to try to improve the reliability of SpaceX’s Raptor engines, each of which can generate more than a half-million pounds of thrust.

Later in the flight, cameras pointed at one of the ship’s rear flaps showed structural damage to the back of the wing. It wasn’t clear what caused the damage, but super-heated plasma burned through part of the flap as the ship fell deeper into the atmosphere. Still, the flap remained largely intact and was able to help control the vehicle through reentry and splashdown.

“We’re kind of being mean to this Starship a little bit,” Huot said on SpaceX’s live webcast. “We’re really trying to put it through the paces and kind of poke on what some of its weak points are.”

Small chunks of debris were also visible peeling off the ship during reentry. The origin of the glowing debris wasn’t immediately clear, but it may have been parts of the ship’s heat shield tiles. On this flight, SpaceX tested several different tile designs, including ceramic and metallic materials, and one tile design that uses “active cooling” to help dissipate heat during reentry.

A bright flash inside the ship’s engine bay during reentry also appeared to damage the vehicle’s aft skirt, the stainless steel structure that encircles the rocket’s six main engines.

“That’s not what we want to see,” Huot said. “We just saw some of the aft skirt just take a hit. So we’ve got some visible damage on the aft skirt. We’re continuing to reenter, though. We are intentionally stressing the ship as we go through this, so it is not guaranteed to be a smooth ride down to the Indian Ocean.

“We’ve removed a bunch of tiles in kind of critical places across the vehicle, so seeing stuff like that is still valuable to us,” he said. “We are trying to kind of push this vehicle to the limits to learn what its limits are as we design our next version of Starship.”

Shana Diez, a Starship engineer at SpaceX, perhaps summed up Tuesday’s results best on X: “It’s not been an easy year but we finally got the reentry data that’s so critical to Starship. It feels good to be back!”



Time is running out for SpaceX to make a splash with second-gen Starship


SpaceX is gearing up for another Starship launch after three straight disappointing test flights.

SpaceX’s 10th Starship rocket awaits liftoff. Credit: Stephen Clark/Ars Technica

STARBASE, Texas—A beehive of aerospace technicians, construction workers, and spaceflight fans descended on South Texas this weekend in advance of the next test flight of SpaceX’s gigantic Starship rocket, the largest vehicle of its kind ever built.

Towering 404 feet (123.1 meters) tall, the rocket was supposed to lift off during a one-hour launch window beginning at 6:30 pm CDT (7:30 pm EDT; 23:30 UTC) Sunday. But SpaceX called off the launch attempt about an hour before liftoff to investigate a ground system issue at Starbase, located a few miles north of the US-Mexico border.

SpaceX didn’t immediately confirm when it might try again to launch Starship, but it could happen as soon as Monday evening at the same time.

It will take about 66 minutes for the rocket to travel from the launch pad in Texas to a splashdown zone in the Indian Ocean northwest of Australia. You can watch the test flight live on SpaceX’s official website. We’ve also embedded a livestream from Spaceflight Now and LabPadre below.

This will be the 10th full-scale test flight of Starship and its Super Heavy booster stage. It’s the fourth flight of an upgraded version of Starship conceived as a stepping stone to a more reliable, heavier-duty version of the rocket designed to carry up to 150 metric tons, or some 330,000 pounds, of cargo to pretty much anywhere in the inner part of our Solar System.

But this iteration of Starship, known as Block 2 or Version 2, has been anything but reliable. After reeling off a series of increasingly successful flights last year with the first-generation Starship and Super Heavy booster, SpaceX has encountered repeated setbacks since debuting Starship Version 2 in January.

Now, there are just two Starship Version 2s left to fly, including the vehicle poised for launch this week. Then, SpaceX will move on to Version 3, the design intended to go all the way to low-Earth orbit, where it can be refueled for longer expeditions into deep space.

A closer look at the top of SpaceX’s Starship rocket, tail number Ship 37, showing some of the different configurations of heat shield tiles SpaceX wants to test on this flight. Credit: Stephen Clark/Ars Technica

Starship’s promised cargo capacity is unparalleled in the history of rocketry. The privately developed rocket’s enormous size, coupled with SpaceX’s plan to make it fully reusable, could enable cargo and human missions to the Moon and Mars. SpaceX’s most conspicuous contract for Starship is with NASA, which plans to use a version of the ship as a human-rated Moon lander for the agency’s Artemis program. With this contract, Starship is central to the US government’s plans to try to beat China back to the Moon.

Closer to home, SpaceX intends to use Starship to haul massive loads of more powerful Starlink Internet satellites into low-Earth orbit. The US military is interested in using Starship for a range of national security missions, some of which could scarcely be imagined just a few years ago. SpaceX wants its factory to churn out a Starship rocket every day, approximately the same rate Boeing builds its workhorse 737 passenger jets.

Starship, of course, is immeasurably more complex than an airliner, and it sees temperature extremes, aerodynamic loads, and vibrations that would destroy a commercial airplane.

For any of this to become reality, SpaceX needs to begin ticking off a lengthy to-do list of technical milestones. The interim objectives include things like catching and reusing Starships and in-orbit ship-to-ship refueling, with a final goal of long-duration spaceflight to reach the Moon and stay there for weeks, months, or years. For a time late last year, it appeared as if SpaceX might be on track to reach at least the first two of these milestones by now.

The 404-foot-tall (123-meter) Starship rocket and Super Heavy booster stand on SpaceX’s launch pad. In the foreground, there are empty loading docks where tanker trucks deliver propellants and other gases to the launch site. Credit: Stephen Clark/Ars Technica

Instead, SpaceX’s schedule for catching and reusing Starships, and refueling ships in orbit, has slipped well into next year. A Moon landing is probably at least several years away. And a touchdown on Mars? Maybe in the 2030s. Before Starship can sniff those milestones, engineers must get the rocket to survive from liftoff through splashdown. This would confirm that recent changes made to the ship’s heat shield work as expected.

Three test flights attempting to do just this ended prematurely in January, March, and May. These failures prevented SpaceX from gathering data on several different tile designs, including insulators made of ceramic and metallic materials, and a tile with “active cooling” to fortify the craft as it reenters the atmosphere.

The heat shield is supposed to protect the rocket’s stainless steel skin from temperatures reaching 2,600° Fahrenheit (1,430° Celsius). During last year’s test flights, it worked well enough for Starship to guide itself to an on-target controlled splashdown in the Indian Ocean, halfway around the world from SpaceX’s launch site in Starbase, Texas.

But the ship lost some of its tiles during each flight last year, causing damage to the ship’s underlying structure. While this wasn’t bad enough to prevent the vehicle from reaching the ocean intact, it would cause difficulties in refurbishing the rocket for another flight. Eventually, SpaceX wants to catch Starships returning from space with giant robotic arms back at the launch pad. The vision, according to SpaceX founder and CEO Elon Musk, is to recover the ship, quickly mount it on another booster, refuel it, and launch it again.

To accomplish this, the ship must return from space with its heat shield in pristine condition. The evidence from last year’s test flights showed engineers still had a long way to go for that to happen.

Visitors survey the landscape at Starbase, Texas, where industry and nature collide. Credit: Stephen Clark/Ars Technica

The Starship setbacks this year have been caused by problems in the ship’s propulsion and fuel systems. Another Starship exploded on a test stand in June at SpaceX’s sprawling rocket development facility in South Texas. SpaceX engineers identified different causes for each of the failures. You can read about them in our previous story.

Apart from testing the heat shield, the goals for this week’s Starship flight include testing an engine-out capability on the Super Heavy booster. Engineers will intentionally disable one of the booster’s Raptor engines used to slow down for landing, and instead use another Raptor engine from the rocket’s middle ring. At liftoff, 33 methane-fueled Raptor engines will power the Super Heavy booster off the pad.

SpaceX won’t try to catch the booster back at the launch pad this time, as it did on three occasions late last year and earlier this year. The booster catches have been one of the bright spots for the Starship program as progress on the rocket’s upper stage floundered. SpaceX reused a previously flown Super Heavy booster for the first time on the most recent Starship launch in May.

The booster landing experiment on this week’s flight will happen a few minutes after launch over the Gulf of Mexico east of the Texas coastline. Meanwhile, six Raptor engines will fire until approximately T+9 minutes to accelerate the ship, or upper stage, into space.

The ship is programmed to release eight Starlink satellite simulators from its payload bay in a test of the craft’s payload deployment mechanism. That will be followed by a brief restart of one of the ship’s Raptor engines to adjust its trajectory for reentry, set to begin around 47 minutes into the mission.

If Starship makes it that far, that will be when engineers finally get a taste of the heat shield data they were hungry for at the start of the year.

This story was updated at 8:30 pm EDT after SpaceX scrubbed Sunday’s launch attempt.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
