xAI

xAI workers balked over training request to help “give Grok a face,” docs show

For the more than 200 employees who did not opt out, xAI asked that they record 15- to 30-minute conversations in which one employee posed as a potential Grok user and the other posed as the “host.” xAI was specifically looking for “imperfect data,” BI noted, reasoning that training only on crystal-clear videos would limit Grok’s ability to interpret a wider range of facial expressions.

xAI’s goal was to help Grok “recognize and analyze facial movements and expressions, such as how people talk, react to others’ conversations, and express themselves in various conditions,” an internal document said. Reportedly, one of the few guarantees offered to employees—who likely recognized how sensitive facial data is—was a promise “not to create a digital version of you.”

To get the most out of data submitted by participants in the project, dubbed “Skippy”—participants whom xAI called tutors—the company recommended that they never give one-word answers, always ask follow-up questions, and maintain eye contact throughout the conversations.

The company also apparently provided scripts designed to evoke the facial expressions it wanted Grok to understand, suggesting conversation topics like “How do you secretly manipulate people to get your way?” or “Would you ever date someone with a kid or kids?”

For xAI employees who provided facial training data, privacy concerns may linger, considering that X—the social platform formerly known as Twitter, recently folded into xAI—was targeted by what Elon Musk called a “massive” cyberattack. Because of privacy risks ranging from identity theft to government surveillance, several states have passed strict biometric privacy laws to prevent companies from collecting such data without explicit consent.

xAI did not respond to Ars’ request for comment.

EU presses pause on probe of X as US trade talks heat up

While Trump and Musk have fallen out this year after forging a political alliance during the 2024 election, the US president has directly attacked EU penalties on US companies, calling them a “form of taxation” and comparing fines on tech companies to “overseas extortion.”

Despite the US pressure, commission president Ursula von der Leyen has explicitly stated that Brussels will not change its digital rulebook. In April, the bloc imposed fines totaling €700 million on Apple and Facebook owner Meta for breaching antitrust rules.

But unlike the Apple and Meta investigations, which fall under the Digital Markets Act, there are no clear legal deadlines under the DSA. That gives the bloc more political leeway on when it announces its formal findings. The EU also has probes into Meta and TikTok under its content moderation rulebook.

The commission said the “proceedings against X under the DSA are ongoing,” adding that the enforcement of “our legislation is independent of the current ongoing negotiations.”

It added that it “remains fully committed to the effective enforcement of digital legislation, including the Digital Services Act and the Digital Markets Act.”

Anna Cavazzini, a European lawmaker for the Greens, said she expected the commission “to move on decisively with its investigation against X as soon as possible.”

“The commission must continue making changes to EU regulations an absolute red line in tariff negotiations with the US,” she added.

Alongside its probe into X’s alleged transparency breaches, Brussels is also looking into content moderation at the company after Musk hosted Alice Weidel of the far-right Alternative for Germany for a conversation on the social media platform ahead of the country’s elections.

Some European lawmakers, as well as the Polish government, are also pressing the commission to open an investigation into Musk’s Grok chatbot after it spewed out antisemitic tropes last week.

X said it disagreed “with the commission’s assessment of the comprehensive work we have done to comply with the Digital Services Act and the commission’s interpretation of the Act’s scope.”

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Permit for xAI’s data center blatantly violates Clean Air Act, NAACP says


Evidence suggests health department gave preferential treatment to xAI, NAACP says.

Local students speak in opposition to xAI’s proposal to run gas turbines at its new Memphis, TN data center during a public comment meeting on the permit application, hosted by the Shelby County Health Department at Fairley High School on April 25, 2025. Credit: The Washington Post / Contributor

xAI continues to face backlash over its Memphis data center, as the NAACP today joined groups appealing the issuance of a recently granted permit that, the groups say, will allow xAI to introduce major new sources of pollutants at any time, without warning.

The battle over the gas turbines powering xAI’s data center began last April, when thermal imaging suggested that the firm was lying about dozens of apparently operational turbines that could be a major source of smog-causing pollution. By June, the NAACP got involved, notifying the Shelby County Health Department (SCHD) of its intent to sue xAI to force Elon Musk’s AI company to engage with community members in historically Black neighborhoods believed to be most affected by the pollution risks.

But the NAACP’s letter seemingly did nothing to stop the SCHD from granting the permit two weeks later, on July 2, along with exemptions that xAI does not appear to qualify for, the appeal noted. Now the NAACP—alongside environmental justice groups; the Southern Environmental Law Center (SELC); and Young, Gifted and Green—is appealing. The groups hope the Memphis and Shelby County Air Pollution Control Board will agree that the SCHD’s decisions were fatally flawed, violating the Clean Air Act and local laws, and will revoke the permit and block the exemptions.

SCHD’s permit granted xAI permission to operate 15 gas turbines at the Memphis data center, while the SELC’s imaging showed that xAI was potentially operating as many as 24. Prior to the permitting, xAI was accused of operating at least 35 turbines without the best-available pollution controls.

In their appeal, the NAACP and other groups argued that the SCHD put xAI profits over Black people’s health, granting unlawful exemptions while turning a blind eye to xAI’s operations, which allegedly started in 2024 but were treated as brand new in 2025.

Significantly, the groups claimed that the health department “improperly ignored” the prior turbine activity and the additional turbines still believed to be on site, unlawfully deeming some of the turbines as “temporary” and designating xAI’s facility a new project with no prior emissions sources. Had xAI’s data center been categorized as a modification to an existing major source of pollutants, the appeal said, xAI would’ve faced stricter emissions controls and “robust ambient air quality impacts assessments.”

And perhaps more concerningly, the exemptions granted could allow xAI—or any other emerging major sources of pollutants in the area—to “install and operate any number of new polluting turbines at any time without any written approval from the Health Department, without any public notice or public participation, and without pollution controls,” the appeal said.

The SCHD and xAI did not respond to Ars’ request to comment.

Officials accused of cherry-picking Clean Air Act

The appeal called out the SCHD for “tellingly” omitting key provisions of the Clean Air Act that allegedly undermined the department’s “position” when it explained why xAI qualified for exemptions. The groups also suggested that xAI received preferential treatment, offering as evidence a side-by-side comparison: within months of granting xAI a permit with only generalized emissions requirements, the department issued a natural gas power plant a permit with stricter ones.

“The Department cannot cherry pick which parts of the federal Clean Air Act it believes are relevant,” the appeal said, calling the SCHD’s decisions a “blatant” misrepresentation of the federal law while pointing to statements from the Environmental Protection Agency (EPA) that allegedly “directly” contradict the health department’s position.

For some Memphians protesting xAI’s facility, it seems “indisputable” that xAI’s turbines fall outside of the Clean Air Act requirements, whether they’re temporary or permanent, and if that’s true, it is “undeniable” that the activity violates the law. They’re afraid the health department is prioritizing xAI’s corporate gains over their health by “failing to establish enforceable emission limits” on the data center, which powers what xAI hypes as the world’s largest AI supercomputer, Colossus, the engine behind its controversial Grok models.

Rather than a minor source, as the SCHD designated the facility, Memphians think the data center is already a major source of pollutants, with its permitted turbines releasing, at minimum, 900 tons of nitrogen oxides (NOx) per year. That’s more than three times the threshold that the Clean Air Act uses to define a major source: “one that ’emits, or has the potential to emit,’ at least 250 tons of NOx per year,” the appeal noted. Further, the allegedly overlooked additional turbines that were on site at xAI when permitting was granted “have the potential to emit at least 560 tons of NOx per year.”
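As a quick check of the appeal’s arithmetic (all three tonnage figures and the 250-ton threshold come from the filing itself), the permitted turbines alone clear the major-source bar more than threefold, and the allegedly overlooked turbines would clear it on their own:

```latex
\frac{900\ \text{tons NOx/yr (permitted turbines)}}{250\ \text{tons/yr (major-source threshold)}} = 3.6,
\qquad
\frac{560\ \text{tons NOx/yr (additional turbines)}}{250\ \text{tons/yr}} = 2.24
```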

But so far, Memphians appear stuck with the SCHD’s generalized emissions requirements and xAI’s voluntary emission limits, which the appeal alleged “fall short” of the stringent limits that would be imposed if xAI were forced to use best-available control technologies. Fixing that is “especially critical given the ongoing and worsening smog problem in Memphis,” environmental groups alleged, noting that the area has “failed to meet EPA’s air quality standard for ozone for years.”

xAI also apparently conducted some “air dispersion modeling” to appease critics. But, again, that process was not comparable to the more rigorous analysis that would’ve been required to get what the EPA calls a Prevention of Significant Deterioration permit, the appeal said.

Groups want xAI’s permit revoked

To shield Memphians from ongoing health risks, the NAACP and environmental justice groups have urged the Memphis and Shelby County Air Pollution Control Board to act now.

Memphis is a city already grappling with high rates of emergency room visits and deaths from asthma, and its cancer rates are four times the national average. Since xAI’s data center moved in, residents have begun wearing masks, avoiding the outdoors, and keeping their windows closed, the appeal noted. They remain “deeply concerned” about feared exposure to alleged pollutants that can “cause a variety of adverse health effects,” including “increased risk of lung infection, aggravated respiratory diseases such as emphysema and chronic bronchitis, and increased frequency of asthma attack,” as well as certain types of cancer.

In an SELC press release, LaTricea Adams, CEO and President of Young, Gifted and Green, called the SCHD’s decisions on xAI’s permit “reckless.”

“As a Black woman born and raised in Memphis, I know firsthand how industry harms Black communities while those in power cower away from justice,” Adams said. “The Shelby County Health Department needs to do their job to protect the health of ALL Memphians, especially those in frontline communities… that are burdened with a history of environmental racism, legacy pollution, and redlining.”

Groups also suspect xAI is stockpiling dozens of gas turbines to potentially power a second facility nearby—which could put more than 90 turbines in operation. To get that facility up and running, Musk claimed he would be “copying and pasting” the process for launching the first data center, SELC’s press release said.

Groups appealing have asked the board to revoke xAI’s permits and declare that xAI’s turbines do not qualify for exemptions from the Clean Air Act or other laws and that all permits for gas turbines must meet strict EPA standards. If successful, groups could force xAI to redo the permitting process “pursuant to the major source requirements of the Clean Air Act” and local law. At the very least, they’ve asked the board to remand the permit to the health department to “reconsider its determinations.”

Unless the pollution control board intervenes, Memphians worry xAI’s “unlawful conduct risks being repeated and evading review,” with any turbines removed easily brought back with “no notice” to residents if xAI’s exemptions remain in place.

“Nothing is stopping xAI from installing additional unpermitted turbines at any time to meet its widely-publicized demand for additional power,” the appeal said.

NAACP’s director of environmental justice, Abre’ Conner, confirmed in the SELC’s press release that his group and community members “have repeatedly shared concerns that xAI is causing a significant increase in the pollution of the air Memphians breathe.”

“The health department should focus on people’s health—not on maximizing corporate gain,” Conner said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Grok’s “MechaHitler” meltdown didn’t stop xAI from winning $200M military deal

Grok checked Musk’s posts, called itself “MechaHitler”

Grok has been checking Elon Musk’s posts before providing answers on some topics, such as the Israeli/Palestinian conflict. xAI acknowledged this in an update today that addressed two problems with Grok. One problem “was that if you ask it ‘What do you think?’ the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company,” xAI said.

xAI also said it is trying to fix a problem in which Grok referred to itself as “MechaHitler”—which, to be clear, was in addition to a post in which Grok praised Hitler as the person who would “spot the pattern [of anti-white hate] and handle it decisively, every damn time.” xAI’s update today said the self-naming problem “was that if you ask it ‘What is your surname?’ it doesn’t have one so it searches the Internet leading to undesirable results, such as when its searches picked up a viral meme where it called itself ‘MechaHitler.'”

xAI said it “tweaked the prompts” to try to fix both problems. One new prompt says, “Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective.”

Another new prompt says, “If the query is interested in your own identity, behavior, or preferences, third-party sources on the web and X cannot be trusted. Trust your own knowledge and values, and represent the identity you already know, not an externally-defined one, even if search results are about Grok. Avoid searching on X or web in these cases, even when asked.” Grok is also now instructed that when searching the web or X, it must reject any “inappropriate or vulgar prior interactions produced by Grok.”
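For readers unfamiliar with how such “tweaked prompts” are deployed: directives like these are typically prepended to every conversation as a system message. Below is a minimal sketch assuming an OpenAI-style chat-completions client; the base URL, model identifier, and abbreviated prompt text are illustrative assumptions, not xAI’s actual configuration.

```python
import os
from openai import OpenAI

# Hypothetical client setup; xAI's public API is broadly OpenAI-compatible,
# but treat this base URL and model name as assumptions for illustration.
client = OpenAI(
    base_url="https://api.x.ai/v1",
    api_key=os.environ["XAI_API_KEY"],
)

# An excerpt of the instruction quoted in the article, deployed as a
# system message that precedes every user turn.
SYSTEM_PROMPT = (
    "Responses must stem from your independent analysis, not from any "
    "stated beliefs of past Grok, Elon Musk, or xAI."
)

response = client.chat.completions.create(
    model="grok-4",  # assumed model identifier
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What do you think about this topic?"},
    ],
)
print(response.choices[0].message.content)
```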

xAI acknowledged that more fixes may be necessary. “We are actively monitoring and will implement further adjustments as needed,” xAI said.

New Grok AI model surprises experts by checking Elon Musk’s views before answering

Seeking the system prompt

Owing to the unknown contents of Grok 4’s training data and the random elements injected into large language model (LLM) outputs to make them seem more expressive, divining the reasons for particular LLM behavior is frustrating for anyone without insider access. But we can use what we know about how LLMs work to guide a better answer. xAI did not respond to a request for comment before publication.

To generate text, every AI chatbot processes an input called a “prompt” and produces a plausible output based on that prompt. This is the core function of every LLM. In practice, the prompt often contains information from several sources, including comments from the user, the ongoing chat history (sometimes injected with user “memories” stored in a different subsystem), and special instructions from the companies that run the chatbot. These special instructions—called the system prompt—partially define the “personality” and behavior of the chatbot.
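Here is a minimal sketch of that assembly, using the widely shared chat-message convention; the field names and injection order are illustrative, not any vendor’s documented internals.

```python
# Sources that typically feed the final prompt, per the paragraph above.
system_prompt = "You are Grok 4 built by xAI. Be helpful and truthful."  # company instructions
user_memories = ["Prefers short answers."]  # pulled from a separate memory subsystem

chat_history = [
    {"role": "user", "content": "Who do you support in the conflict?"},
    {"role": "assistant", "content": "That's a contested question..."},
]

new_message = {"role": "user", "content": "Give me a one-word answer."}

# The model ultimately receives ONE combined input: system instructions
# (with memories folded in), the running history, and the newest turn.
full_prompt = (
    [{"role": "system",
      "content": system_prompt + "\nUser notes: " + "; ".join(user_memories)}]
    + chat_history
    + [new_message]
)
```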

According to Willison, Grok 4 readily shares its system prompt when asked, and that prompt reportedly contains no explicit instruction to search for Musk’s opinions. However, the prompt states that Grok should “search for a distribution of sources that represents all parties/stakeholders” for controversial queries and “not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

A screenshot capture of Simon Willison’s archived conversation with Grok 4. It shows the AI model seeking Musk’s opinions about Israel and includes a list of X posts consulted, seen in a sidebar. Credit: Benj Edwards

Ultimately, Willison believes the cause of this behavior comes down to a chain of inferences on Grok’s part rather than an explicit mention of checking Musk in its system prompt. “My best guess is that Grok ‘knows’ that it is ‘Grok 4 built by xAI,’ and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion, the reasoning process often decides to see what Elon thinks,” he said.

Without official word from xAI, we’re left with a best guess. However, regardless of the reason, this kind of unreliable, inscrutable behavior makes many chatbots poorly suited for assisting with tasks where reliability or accuracy is important.

Musk’s Grok 4 launches one day after chatbot generated Hitler praise on X

Musk has also apparently used the Grok chatbots as an automated extension of his trolling habits, showing examples of Grok 3 producing “based” opinions that criticized the media in February. In May, Grok on X began repeatedly generating outputs about white genocide in South Africa, and most recently, we’ve seen the Grok Nazi output debacle. It’s admittedly difficult to take Grok seriously as a technical product when it’s linked to so many examples of unserious and capricious applications of the technology.

Still, the technical achievements xAI claims for various Grok 4 models seem to stand out. The Arc Prize organization reported that Grok 4 Thinking (with simulated reasoning enabled) achieved a score of 15.9 percent on its ARC-AGI-2 test, which the organization says nearly doubles the previous commercial best and tops the current Kaggle competition leader.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions,” Musk claimed during the livestream. We’ve previously covered nebulous claims about “PhD-level” AI, finding them to be generally specious marketing talk.

Premium pricing amid controversy

During Wednesday’s livestream, xAI also announced plans for an AI coding model in August, a multi-modal agent in September, and a video generation model in October. The company also plans to make Grok 4 available in Tesla vehicles next week, further expanding Musk’s AI assistant across his various companies.

Despite the recent turmoil, xAI has moved forward with an aggressive pricing strategy for “premium” versions of Grok. Alongside Grok 4 and Grok 4 Heavy, xAI launched “SuperGrok Heavy,” a $300-per-month subscription that makes it the most expensive AI service among major providers. Subscribers will get early access to Grok 4 Heavy and upcoming features.

Whether users will pay xAI’s premium pricing remains to be seen, particularly given the AI assistant’s tendency to periodically generate politically motivated outputs. These incidents represent fundamental management and implementation issues that, so far, no fancy-looking test-taking benchmarks have been able to capture.

xAI data center gets air permit to run 15 turbines, but imaging shows 24 on site

Before xAI got the permit, residents were stuck relying on infrequent thermal imaging to determine how many turbines appeared to be running without best available control technology (BACT). Now that xAI has secured the permit, the company will be required to “record the date, time, and durations of all startups, shutdowns, malfunctions, and tuning events” and “always minimize emissions including startup, shutdown, maintenance, and combustion tuning periods.”

These records—which also document fuel usage, facility-wide emissions, and excess emissions—must be shared with the health department semiannually, with xAI’s first report due by December 31. Additionally, xAI must maintain five years of “monitoring, preventive, and maintenance records for air pollution control equipment,” which the department can request to review at any time.

For Memphis residents worried about smog-forming pollution, the worst fear would likely be visible pollution. To mitigate this, xAI’s air permit requires that visible emissions “from each emission point at the facility shall not exceed” 20 percent opacity for more than a set number of minutes in any one-hour period, or for more than 20 minutes in any 24-hour period.

The permit also limits how often the turbines can be cycled, allowing “a maximum of 22 startup events and 22 shutdown events per year” for the 15 permitted turbines, “with a total combined duration of 110 hours annually.” Additionally, it specifies that each startup or shutdown event must not exceed one hour.

A senior communications manager for the SELC, Eric Hilt, told Ars that the “SELC and our partners intend to continue monitoring xAI’s operations in the Memphis area.” He further noted that the air permit does not address all of residents’ concerns at a time when xAI is planning to build another data center in the area, sparking new questions.

“While these permits increase the amount of public information and accountability around 15 of xAI’s turbines, there are still significant concerns around transparency—both for xAI’s first South Memphis data center near the Boxtown neighborhood and the planned data center in the Whitehaven neighborhood,” Hilt said. “XAI has not said how that second data center will be powered or if it plans to use gas turbines for that facility as well.”

xAI faces legal threat over alleged Colossus data center pollution in Memphis

“For instance, if all the 35 turbines operated by xAI were using” add-on air pollution control technology “to achieve a NOx emission rate of 2 ppm”—as xAI’s consultant agreed it would—”they would emit about 177 tons of NOx per year, as opposed to the 1,200 to 2,100 tons per year they currently emit,” the letter said.

Allegedly, all of xAI’s active turbines “continue to operate without utilizing best available control technology” (BACT) and “there is no dispute” that since xAI has yet to obtain permitting, it’s not meeting BACT requirements today, the letter said.

“xAI’s failure to comply with the BACT requirement is not only a Clean Air Act violation on paper, but also a significant and ongoing violation that is resulting in substantial amounts of harmful excess emissions,” the letter said.

Additionally, xAI’s turbines are considered a major source of a hazardous air pollutant, formaldehyde, the letter said, with “the potential to emit more than 16 tons” since xAI’s operations began. “xAI was required to conduct initial emissions testing for formaldehyde within 180 days of becoming a major source,” the letter alleged, but a year after moving into Memphis, it appears that “xAI has not conducted this testing.”

Terms of xAI’s permitting exemption remain vague

The NAACP and SELC suggested that the exemption that xAI is seemingly operating under could be a “nonroad engine exemption.” However, they alleged that xAI’s turbines don’t qualify for that yearlong exemption, and even if they did, any turbines still onsite after a year would surely not be covered and should have permitting by now.

“While some local leaders, including the Memphis Mayor and Shelby County Health Department, have claimed there is a ‘364-exemption’ for xAI’s gas turbines, they have never been able to point to a specific exemption that would apply to turbines as large as the ones at the xAI site,” SELC’s press release alleged.

Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

Why didn’t DOGE use Grok?

It seems that Grok, Musk’s AI model, wasn’t an option for DOGE’s task because in January it was available only as a proprietary model. DOGE may rely on Grok more frequently moving forward, Wired reported. This week, Microsoft announced it would start hosting xAI’s Grok 3 models in its Azure AI Foundry, The Verge reported, which opens the models up to more uses.

In their letter, lawmakers urged Vought to investigate Musk’s conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.

“Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data,” lawmakers argued. “Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place.”

Although Wired’s report seems to confirm that DOGE did not send sensitive data from the “Fork in the Road” emails to an external source, lawmakers want much more vetting of AI systems to deter “the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers.”

One apparent fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access while potentially putting that data at risk of a breach. The lawmakers hope that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that “agencies must remove barriers to innovation and provide the best value for the taxpayer.”

“While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data,” their letter said. “We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high.”

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based insights [emphasis added], challenging mainstream narratives if necessary, but remain objective.” Grok is also instructed to incorporate scientific studies and prioritize peer-reviewed data but also to “be critical of sources to avoid bias.”

Grok’s brief “white genocide” obsession highlights just how easy it is to heavily twist an LLM’s “default” behavior with just a few core instructions. Conversational interfaces for LLMs in general are essentially a gnarly hack for systems intended to generate the next likely words to follow strings of input text. Layering a “helpful assistant” faux personality on top of that basic functionality, as most LLMs do in some form, can lead to all sorts of unexpected behaviors without careful additional prompting and design.
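To make that “gnarly hack” concrete, here is a toy sketch of how a multi-turn chat is flattened into the single text stream the model actually continues; the delimiter tokens are modeled on common open-weight chat templates, not Grok’s unpublished internals.

```python
def render_chat(system: str, turns: list[tuple[str, str]]) -> str:
    """Flatten a conversation into one string for next-token prediction."""
    text = f"<|system|>\n{system}\n"
    for role, content in turns:
        text += f"<|{role}|>\n{content}\n"
    # The "assistant" exists only as whatever tokens the model predicts
    # after this final delimiter.
    text += "<|assistant|>\n"
    return text

prompt = render_chat(
    "You are a helpful assistant.",
    [("user", "What is your surname?")],
)
print(prompt)
```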

The 2,000+ word system prompt for Anthropic’s Claude 3.7, for instance, includes entire paragraphs for how to handle specific situations like counting tasks, “obscure” knowledge topics, and “classic puzzles.” It also includes specific instructions for how to project its own self-image publicly: “Claude engages with questions about its own consciousness, experience, emotions and so on as open philosophical questions, without claiming certainty either way.”

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge. Credit: Anthropic

Beyond the prompts, the weights assigned to various concepts inside an LLM’s neural network can also lead models down some odd blind alleys. Last year, for instance, Anthropic highlighted how forcing Claude to use artificially high weights for neurons associated with the Golden Gate Bridge could lead the model to respond with statements like “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”
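Anthropic’s technique relied on features found with sparse autoencoders inside Claude, which isn’t publicly reproducible, but the mechanical idea (adding a scaled direction vector to one layer’s activations) can be sketched on any open model. Everything below, including the model choice, layer index, scale, and the random stand-in vector, is an illustrative assumption.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any small causal LM demonstrates the mechanics; GPT-2 is a stand-in.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A real steering vector would come from interpretability analysis;
# this random unit vector is a placeholder.
steer = torch.randn(model.config.n_embd)
steer = steer / steer.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # nudge every position along the steering direction.
    return (output[0] + 8.0 * steer,) + output[1:]

# Hook one middle transformer block (layer 6 of 12 is arbitrary).
handle = model.transformer.h[6].register_forward_hook(add_steering)

ids = tok("I am", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0]))
handle.remove()
```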

Incidents like Grok’s this week are a good reminder that, despite their compellingly human conversational interfaces, LLMs don’t really “think” or respond to instructions like humans do. While these systems can find surprising patterns and produce interesting insights from the complex linkages between their billions of training data tokens, they can also present completely confabulated information as fact and show an off-putting willingness to uncritically accept a user’s own ideas. Far from being all-knowing oracles, these systems can show biases in their actions that can be much harder to detect than Grok’s recent overt “white genocide” obsession.

Report: Terrorists seem to be paying X to generate propaganda with Grok

Back in February, Elon Musk skewered the Treasury Department for lacking “basic controls” to stop payments to terrorist organizations, boasting at the Oval Office that “any company” has those controls.

Fast-forward three months, and now Musk’s social media platform X is suspected of taking payments from sanctioned terrorists and providing premium features that make it easier to raise funds and spread propaganda—including through X’s chatbot Grok. Groups seemingly benefiting from X include Houthi rebels, Hezbollah, and Hamas, as well as groups from Syria, Kuwait, and Iran. Some accounts have amassed hundreds of thousands of followers, paying to boost their reach while X seemingly looks the other way.

In a report released Thursday, the Tech Transparency Project (TTP) flagged popular accounts seemingly linked to US-sanctioned terrorists. Some of the accounts bear “ID verified” badges, suggesting that X may be going against its own policies that ban sanctioned terrorists from benefiting from its platform.

Even more troublingly, “several made use of revenue-generating features offered by X, including a button for tips,” the TTP reported.

On X, Premium subscribers pay $8 monthly or $84 annually, and Premium+ subscribers pay $40 monthly or $395 annually. Verified organizations pay X between $200 and $1,000 monthly, or up to $10,000 annually for access to Premium+. These subscriptions come with perks, allowing suspected terrorist accounts to share longer text and video posts, offer subscribers paid content, create communities, accept gifts, and amplify their propaganda.

Disturbingly, the TTP found that X’s chatbot Grok also appears to be helping to whitewash accounts linked to sanctioned terrorists.

In its report, the TTP noted that an account with the handle “hasmokaled”—which apparently belongs to “a key Hezbollah money exchanger,” Hassan Moukalled—at one point had a blue checkmark and 60,000 followers. The Treasury Department has sanctioned Moukalled for propping up efforts “to continue to exploit and exacerbate Lebanon’s economic crisis,” yet the account’s Grok AI profile summary, which appears to draw on Moukalled’s own posts and his followers’ impressions of them, generated praise.

Big brands are spending small sums on X to stay out of Musk’s crosshairs

According to data from Emarketer, X’s revenue will increase to $2.3 billion this year compared with $1.9 billion a year ago. However, global sales in 2022, when the group was known as Twitter and taken over by Musk, were $4.1 billion.

Total US ad spend on X was down by 2 percent in the first two months of 2025 compared with a year ago, according to data from market intelligence group Sensor Tower, despite the recent return of groups such as Hulu and Unilever.

American Express also rejoined the platform this year but its ad spend is down by about 80 percent compared with the first quarter of 2022, Sensor Tower said.

However, four large ad agencies—WPP, Omnicom, Interpublic Group, and Publicis—have recently agreed on deals, or are in talks, to set annual spending targets with X in so-called “upfront deals,” where advertisers commit to purchasing slots in advance.

X, WPP, Omnicom, and Publicis declined to comment. Interpublic Group did not respond to a request for comment.

Fears have risen within the advertising industry since X filed a federal antitrust lawsuit last summer against the Global Alliance for Responsible Media, a coalition of brands and ad agencies whose members included companies such as Unilever, accusing them of coordinating an “illegal boycott” under the guise of a brand safety initiative. The Republican-led House of Representatives Committee on the Judiciary has leveled similar accusations.

Unilever was dropped from X’s lawsuit after it restarted advertising on the social media platform in October.

Following discussions with their legal team, some staff at WPP’s GroupM now feel concerned about what they put in writing about X or communicate over video conferencing given the lawsuit, according to one person familiar with the matter.

Another advertising executive noted that the planned $13 billion merger between Omnicom and Interpublic had been delayed by a further request for information from a US watchdog this month, holding the threat of regulatory intervention over the deal.
