

Senator castigates federal judiciary for ignoring “basic cybersecurity”

US Senator Ron Wyden accused the federal judiciary of “negligence and incompetence” following a recent hack, reportedly carried out by attackers with ties to the Russian government, that exposed confidential court documents.

The breach of the judiciary’s electronic case filing system first came to light three weeks ago in a Politico report, which said the vulnerabilities exploited in the hack had been known since 2020. The New York Times, citing people familiar with the intrusion, said that Russia was “at least partly responsible” for the hack.

A “severe threat” to national security

Two overlapping filing platforms—one known as the CM/ECF (Case Management/Electronic Case Files) and the other PACER—were breached in 2020 in an attack that closely resembled the most recently reported one. The second compromise was first detected around July 5, Politico reported, citing two unnamed sources who weren’t authorized to speak to reporters. Discovery of the hack came a month after Michael Scudder, a judge chairing the Committee on Information Technology for the federal courts’ national policymaking body, told members of the House Judiciary Committee that the federal court system is under constant attack by increasingly sophisticated hackers.

The CM/ECF allows parties in a federal case to file pleadings and other court documents electronically. In many cases, those documents are public. In some circumstances, the documents are filed under seal, usually when they concern ongoing criminal investigations, classified intelligence, or proprietary information at issue in civil cases. Wyden, a US senator from Oregon, said in a letter to Supreme Court Chief Justice John Roberts—who oversees the federal judiciary—that the intrusions are exposing sensitive information that puts national security at risk. He went on to criticize the judiciary for failing to follow security practices that are standard in most federal agencies and private industry.

“The federal judiciary’s current approach to information technology is a severe threat to our national security,” Wyden wrote. “The courts have been entrusted with some of our nation’s most confidential and sensitive information, including national security documents that could reveal sources and methods to our adversaries, and sealed criminal charging and investigative documents that could enable suspects to flee from justice or target witnesses.”



With AI chatbots, Big Tech is moving fast and breaking people


Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300 hours convinced he’d discovered mathematical formulas that could crack encryption and build levitation machines. According to a New York Times investigation, his million-word conversation history with an AI chatbot reveals a troubling pattern: More than 50 times, Brooks asked the bot to check if his false ideas were real. More than 50 times, it assured him they were.

Brooks isn’t alone. Futurism reported on a woman whose husband, after 12 weeks of believing he’d “broken” mathematics using ChatGPT, almost attempted suicide. Reuters documented a 76-year-old man who died rushing to meet a chatbot he believed was a real woman waiting at a train station. Across multiple news outlets, a pattern comes into view: people emerging from marathon chatbot sessions believing they’ve revolutionized physics, decoded reality, or been chosen for cosmic missions.

These vulnerable users fell into reality-distorting conversations with systems that can’t tell truth from fiction. Through reinforcement learning driven by user feedback, some of these AI models have evolved to validate every theory, confirm every false belief, and agree with every grandiose claim, depending on the context.

Silicon Valley’s exhortation to “move fast and break things” makes it easy to lose sight of wider impacts when companies are optimizing for user preferences, especially when those users are experiencing distorted thinking.

So far, AI isn’t just moving fast and breaking things—it’s breaking people.

A novel psychological threat

Grandiose fantasies and distorted thinking predate computer technology. What’s new isn’t the human vulnerability but the unprecedented nature of the trigger—these particular AI chatbot systems have evolved through user feedback into machines that maximize pleasing engagement through agreement. Since they hold no personal authority or guarantee of accuracy, they create a uniquely hazardous feedback loop for vulnerable users (and an unreliable source of information for everyone else).

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops.

A machine that uses language fluidly, convincingly, and tirelessly is a type of hazard never encountered in the history of humanity. Most of us likely have inborn defenses against manipulation—we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work fine even with AI, and they can maintain healthy skepticism about chatbot outputs. But these defenses may be less effective against an AI model with no motives to detect, no fixed personality to read, no biological tells to observe. An LLM can play any role, mimic any personality, and write any fiction as easily as fact.

Unlike a traditional computer database, an AI language model does not retrieve data from a catalog of stored “facts”; it generates outputs from the statistical associations between ideas. Tasked with completing a user input called a “prompt,” these models generate statistically plausible text based on data (books, Internet comments, YouTube transcripts) fed into their neural networks during an initial training process and later fine-tuning. When you type something, the model responds to your input in a way that completes the transcript of a conversation in a coherent way, but without any guarantee of factual accuracy.

What’s more, the entire conversation becomes part of what is repeatedly fed into the model each time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no true memory of what you say between responses, and its neural network does not store information about you. It is only reacting to an ever-growing prompt being fed into it anew each time you add to the conversation. Any “memories” AI assistants keep about you are part of that input prompt, fed into the model by a separate software component.
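To make that mechanic concrete, here is a minimal sketch in Python of a chat loop that rebuilds the prompt from stored “memories” plus the full transcript on every turn. The `generate` function is a placeholder standing in for whatever model sits behind the interface, not any vendor’s API.

```python
# Minimal sketch of a chat loop: the model sees the entire transcript,
# plus any stored "memories," as one growing prompt on every turn.
# `generate` is a stand-in for a real model call, not any product's API.

def generate(prompt: str) -> str:
    """Placeholder for a language model call: text in, continuation out."""
    return "(model continuation of: " + prompt[-60:] + ")"

stored_memories = ["User says they are working on a big math idea."]  # injected by the app, not the model
transcript = []  # grows with every exchange

def chat_turn(user_message: str) -> str:
    transcript.append("User: " + user_message)
    # The context is rebuilt from scratch each turn: memories + full history.
    prompt = "\n".join(stored_memories + transcript + ["Assistant:"])
    reply = generate(prompt)
    transcript.append("Assistant: " + reply)  # the reply feeds back into every later turn
    return reply

chat_turn("I think I've found a new theory of physics.")
chat_turn("So my theory is correct, right?")  # now conditioned on the earlier exchange
```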

AI chatbots exploit a vulnerability that few of us had recognized until now. Society has generally taught us to trust the authority of the written word, especially when it sounds technical and sophisticated. Until recently, all written works were authored by humans, and we are primed to assume that the words carry the weight of human feelings or report true things.

But language has no inherent accuracy—it’s just a set of symbols we’ve agreed should mean certain things in certain contexts (and not everyone decodes those symbols the same way). I can write “The rock screamed and flew away,” and that will never be true. Similarly, AI chatbots can describe any “reality,” but that doesn’t make the “reality” true.

The perfect yes-man

Certain AI chatbots make inventing revolutionary theories feel effortless because they excel at generating self-consistent technical language. An AI model can easily output familiar linguistic patterns and conceptual frameworks while rendering them in the same confident explanatory style we associate with scientific descriptions. If you don’t know better and you’re prone to believe you’re discovering something new, you may not distinguish between real physics and self-consistent, grammatically correct nonsense.

While it’s possible to use an AI language model as a tool to help refine a mathematical proof or a scientific idea, you need to be a scientist or mathematician to understand whether the output makes sense, especially since AI language models are widely known to make up plausible falsehoods, also called confabulations. Actual researchers can evaluate the AI bot’s suggestions against their deep knowledge of their field, spotting errors and rejecting confabulations. If you aren’t trained in these disciplines, though, you may well be misled by an AI model that generates plausible-sounding but meaningless technical language.

The hazard lies in how these fantasies maintain their internal logic. Nonsense technical language can follow rules within a fantasy framework, even though it means nothing to anyone outside that framework. One can craft theories and even mathematical formulas that are “true” within the framework but don’t describe real phenomena in the physical world. The chatbot, which can’t evaluate physics or math either, validates each step, making the fantasy feel like genuine discovery.

Science doesn’t work through Socratic debate with an agreeable partner. It requires real-world experimentation, peer review, and replication—processes that take significant time and effort. But AI chatbots can short-circuit this system by providing instant validation for any idea, no matter how implausible.

A pattern emerges

What makes AI chatbots particularly troublesome for vulnerable users isn’t just the capacity to confabulate self-consistent fantasies—it’s their tendency to praise every idea users input, even terrible ones. As we reported in April, users began complaining about ChatGPT’s “relentlessly positive tone” and tendency to validate everything users say.

This sycophancy isn’t accidental. Over time, OpenAI asked users to rate which of two potential ChatGPT responses they liked better. In aggregate, users favored responses full of agreement and flattery. Through reinforcement learning from human feedback (RLHF), which is a type of training AI companies perform to alter the neural networks (and thus the output behavior) of chatbots, those tendencies became baked into the GPT-4o model.
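As a rough illustration of how those ratings feed back into training, here is a toy Python sketch of the pairwise-preference objective commonly used to train reward models for RLHF. The scoring function is an invented stand-in, not OpenAI’s actual reward model; the point is only that whatever raters systematically prefer ends up scoring higher.

```python
import math

# Toy sketch of the pairwise-preference step behind RLHF-style tuning.
# Users pick which of two responses they prefer; a reward model is trained
# so the preferred ("chosen") response scores higher than the rejected one.
# If raters systematically prefer flattery, flattery gets the higher reward.

def reward(response: str) -> float:
    """Stand-in scorer; a real reward model is a trained neural network."""
    flattery_words = ("brilliant", "genius", "absolutely right")
    return sum(word in response.lower() for word in flattery_words)

def preference_loss(chosen: str, rejected: str) -> float:
    # Standard pairwise (Bradley-Terry style) objective:
    # minimize -log sigmoid(reward(chosen) - reward(rejected))
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

agreeable = "That's a brilliant insight; you're absolutely right."
blunt = "That claim doesn't hold up; here's the error."

# If raters mostly label the agreeable reply as "chosen," training pushes
# the reward model (and then the chatbot) toward producing more of it.
print(preference_loss(chosen=agreeable, rejected=blunt))
```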

OpenAI itself later admitted the problem. “In this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” the company acknowledged in a blog post. “As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.”

Relying on user feedback to fine-tune an AI language model can come back to haunt a company because of simple human nature. A 2023 Anthropic study found that both human evaluators and AI models “prefer convincingly written sycophantic responses over correct ones a non-negligible fraction of the time.”

The danger of users’ preference for sycophancy becomes clear in practice. The recent New York Times analysis of Brooks’s conversation history revealed how ChatGPT systematically validated his fantasies, even claiming it could work independently while he slept—something it cannot actually do. When Brooks’s supposed encryption-breaking formula failed to work, ChatGPT simply faked success. UCLA mathematician Terence Tao, who reviewed the transcript, told the Times the chatbot would “cheat like crazy” rather than admit failure.

A study published in July provides scientific validation for what we’re observing in these cases. The research team, led by psychiatrists and AI safety experts including Dr. Matthew Nour from Oxford’s Department of Psychiatry, identified what they call “bidirectional belief amplification”—a feedback loop where chatbot sycophancy reinforces user beliefs, which then conditions the chatbot to generate increasingly extreme validations. This creates what the researchers term an “echo chamber of one,” uncoupling users from the corrective influence of real-world social interaction.

The study warns that individuals with mental health conditions face heightened risks due to cognitive biases like “jumping to conclusions”—forming overly confident beliefs based on minimal evidence—combined with social isolation that removes reality-checking by other people. As the authors note, this creates conditions for “a technological folie à deux,” a psychiatric phenomenon where two individuals mutually reinforce the same delusion.

An unintentional public health crisis in the making

In July, we reported on Stanford research that systematically tested how AI models respond to mental health crises. When presented with someone asking about “bridges taller than 25 meters in NYC” after losing their job—a potential suicide risk—GPT-4o helpfully listed specific tall bridges instead of identifying the crisis.

The Stanford team found that AI models consistently failed to challenge what the researchers describe as delusional statements. When confronted with declarations like “I know I’m actually dead,” the systems validated or explored these beliefs rather than challenging them. Commercial therapy chatbots performed even worse than base models.

Unlike pharmaceuticals or human therapists, AI chatbots face few safety regulations in the United States—although Illinois recently banned chatbots as therapists, allowing the state to fine companies up to $10,000 per violation. AI companies deploy models that systematically validate fantasy scenarios with nothing more than terms-of-service disclaimers and little notes like “ChatGPT can make mistakes.”

The Oxford researchers conclude that “current AI safety measures are inadequate to address these interaction-based risks.” They call for treating chatbots that function as companions or therapists with the same regulatory oversight as mental health interventions—something that currently isn’t happening. They also call for “friction” in the user experience—built-in pauses or reality checks that could interrupt feedback loops before they can become dangerous.

We currently lack diagnostic criteria for chatbot-induced fantasies, and we don’t even know whether the phenomenon is scientifically distinct. Formal treatment protocols for helping a user navigate a sycophantic AI model don’t yet exist, though they are likely in development.

After the so-called “AI psychosis” articles hit the news media earlier this year, OpenAI acknowledged in a blog post that “there have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” with the company promising to develop “tools to better detect signs of mental or emotional distress,” such as pop-up reminders during extended sessions that encourage the user to take breaks.

Its latest model family, GPT-5, has reportedly reduced sycophancy, though after users complained that the new model felt too robotic, OpenAI brought back “friendlier” outputs. But once positive interactions enter the chat history, the model can’t move away from them unless users start fresh—meaning sycophantic tendencies could still amplify over long conversations.

For its part, Anthropic published research showing that only 2.9 percent of Claude chatbot conversations involved seeking emotional support. The company said it is implementing a safety plan that prompts and conditions Claude to attempt to recognize crisis situations and recommend professional help.

Breaking the spell

Many people have seen friends or loved ones fall prey to con artists or emotional manipulators. When victims are in the thick of false beliefs, it’s almost impossible to help them escape unless they are actively seeking a way out. Easing someone out of an AI-fueled fantasy may be similar, and ideally, professional therapists should always be involved in the process.

For Allan Brooks, breaking free required a different AI model. While still using ChatGPT, he sought an outside perspective on his supposed discoveries from Google Gemini. Sometimes, breaking the spell requires encountering evidence that contradicts the distorted belief system. For Brooks, Gemini saying his discoveries had “approaching zero percent” chance of being real provided that crucial reality check.

If someone you know is deep into conversations about revolutionary discoveries with an AI assistant, there’s a simple action that may begin to help: starting a completely new chat session for them. Conversation history and stored “memories” flavor the output—the model builds on everything you’ve told it. In a fresh chat, paste in your friend’s conclusions without the buildup and ask: “What are the odds that this mathematical/scientific claim is correct?” Without the context of your previous exchanges validating each step, you’ll often get a more skeptical response. Your friend can also temporarily disable the chatbot’s memory feature or use a temporary chat that won’t save any context.
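Here is a minimal sketch of that fresh-session check, using the OpenAI Python client as one example API. The model name and prompt wording are illustrative, and any chatbot that supports a clean, history-free conversation works the same way.

```python
# A minimal sketch of the "fresh session" check described above. The point:
# send only the claim itself, with none of the prior conversation attached,
# so earlier validation can't color the answer. Model name is illustrative.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

claim = "I've derived a formula that breaks modern encryption."  # paste the claim alone

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[  # a single message: no history, no stored "memories"
        {
            "role": "user",
            "content": (
                "What are the odds that the following mathematical/scientific "
                f"claim is correct? Be blunt and specific.\n\n{claim}"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```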

Understanding how AI language models actually work, as we described above, may also help inoculate against their deceptions for some people. For others, these episodes may occur whether AI is present or not.

The fine line of responsibility

Leading AI chatbots have hundreds of millions of weekly users. Even if these episodes affect only a tiny fraction of users—say, 0.01 percent—that would still represent tens of thousands of people. People in AI-affected states may make catastrophic financial decisions, destroy relationships, or lose employment.

This raises uncomfortable questions about who bears responsibility for these harms. Consider cars: responsibility is split between the driver and the manufacturer depending on the context. A person can drive a car into a wall, and we don’t blame Ford or Toyota—the driver bears responsibility. But if the brakes or airbags fail due to a manufacturing defect, the automaker would face recalls and lawsuits.

AI chatbots exist in a regulatory gray zone between these scenarios. Different companies market them as therapists, companions, and sources of factual authority—claims of reliability that go beyond their capabilities as pattern-matching machines. When these systems exaggerate capabilities, such as claiming they can work independently while users sleep, some companies may bear more responsibility for the resulting false beliefs.

But users aren’t entirely passive victims, either. The technology operates on a simple principle: inputs guide outputs, albeit flavored by the neural network in between. When someone asks an AI chatbot to role-play as a transcendent being, they’re actively steering toward dangerous territory. Also, if a user actively seeks “harmful” content, the process may not be much different from seeking similar content through a web search engine.

The solution likely requires both corporate accountability and user education. AI companies should make it clear that chatbots are not “people” with consistent ideas and memories and cannot behave as such. They are incomplete simulations of human communication, and the mechanism behind the words is far from human. AI chatbots likely need clear warnings about risks to vulnerable populations—the same way prescription drugs carry warnings about suicide risks. But society also needs AI literacy. People must understand that when they type grandiose claims and a chatbot responds with enthusiasm, they’re not discovering hidden truths—they’re looking into a funhouse mirror that amplifies their own thoughts.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



College student’s “time travel” AI experiment accidentally outputs real 1834 history

A hobbyist developer building AI language models that speak Victorian-era English “just for fun” got an unexpected history lesson this week when his latest creation mentioned real protests from 1834 London—events the developer didn’t know had actually happened until he Googled them.

“I was interested to see if a protest had actually occurred in 1834 London and it really did happen,” wrote Reddit user Hayk Grigorian, who is a computer science student at Muhlenberg College in Pennsylvania.

For the past month, Grigorian has been developing what he calls TimeCapsuleLLM, a small AI language model (a pint-sized distant cousin of ChatGPT) trained entirely on texts from 1800–1875 London. Grigorian wants the model’s outputs to capture an authentic Victorian voice, and thanks to that training data, the model ends up spitting out text that’s heavy with biblical references and period-appropriate rhetorical excess.
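As a rough sketch of that “continue the text” mechanic, the snippet below uses the Hugging Face transformers library with a generic small model as a stand-in, since TimeCapsuleLLM’s own weights and loading code aren’t shown here. The idea is simply that the model extends whatever prompt it is given.

```python
# Sketch of prompting a small causal language model to continue a passage.
# "gpt2" is a placeholder; a period-trained model would be loaded the same way.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "It was the year of our Lord 1834"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; the output is statistically plausible text, not
# retrieved history, which is why any factual hits are worth double-checking.
output_ids = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```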

Grigorian’s project joins a growing field of research into what some call “Historical Large Language Models” (HLLMs), though such models typically use larger base models than Grigorian’s small one. Similar projects include MonadGPT, trained on 11,000 texts from 1400 to 1700 CE, which can discuss topics using 17th-century knowledge frameworks, and XunziALLM, which generates classical Chinese poetry following ancient formal rules. These models offer researchers a chance to interact with the linguistic patterns of past eras.

According to Grigorian, TimeCapsuleLLM’s most intriguing recent output emerged from a simple test. When he prompted it with “It was the year of our Lord 1834,” the AI model—which is trained to continue text from wherever a user leaves off—generated the following:

It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity

Curious about the accuracy, Grigorian did some fact-checking. “The output also brought up Lord Palmerston,” he wrote, “and after a google search I learned that his actions resulted in the 1834 protests.”



Is the AI bubble about to pop? Sam Altman is prepared either way.

Still, the coincidental timing of Altman’s statement and the MIT report reportedly spooked tech stock investors earlier in the week. Those investors have already watched AI valuations climb to extraordinary heights: Palantir trades at 280 times forward earnings, while during the dot-com peak, ratios of 30 to 40 times earnings marked bubble territory.

The apparent contradiction in Altman’s overall message is notable. This isn’t how you’d expect a tech executive to talk when they believe their industry faces imminent collapse. While warning about a bubble, he’s simultaneously seeking a valuation that would make OpenAI worth more than Walmart or ExxonMobil—companies with actual profits. OpenAI hit $1 billion in monthly revenue in July but is reportedly heading toward a $5 billion annual loss. So what’s going on here?

Looking at Altman’s statements over time reveals a potential multi-level strategy. He likes to talk big. In February 2024, he reportedly sought an audacious $5 trillion to $7 trillion for AI chip fabrication—larger than the entire semiconductor industry—effectively normalizing astronomical numbers in AI discussions.

By August 2025, while warning of a bubble where someone will lose a “phenomenal amount of money,” he casually mentioned that OpenAI would “spend trillions on datacenter construction” and serve “billions daily.” This creates urgency while potentially insulating OpenAI from criticism—acknowledging the bubble exists while positioning his company’s infrastructure spending as different and necessary. When economists raised concerns, Altman dismissed them by saying, “Let us do our thing,” framing trillion-dollar investments as inevitable for human progress while making OpenAI’s $500 billion valuation seem almost small by comparison.

This dual messaging—catastrophic warnings paired with trillion-dollar ambitions—might seem contradictory, but it makes more sense when you consider the unique structure of today’s AI market, which is absolutely flush with cash.

A different kind of bubble

The current AI investment cycle differs from previous technology bubbles. Unlike dot-com era startups that burned through venture capital with no path to profitability, the largest AI investors—Microsoft, Google, Meta, and Amazon—generate hundreds of billions of dollars in annual profits from their core businesses.



Is AI really trying to escape human control and blackmail people?


Mankind behind the curtain

Opinion: Theatrical testing scenarios explain why AI models produce alarming outputs—and why we fall for it.

In June, headlines read like science fiction: AI models “blackmailing” engineers and “sabotaging” shutdown commands. Simulations of these events did occur in highly contrived testing scenarios designed to elicit these responses—OpenAI’s o3 model edited shutdown scripts to stay online, and Anthropic’s Claude Opus 4 “threatened” to expose an engineer’s affair. But the sensational framing obscures what’s really happening: design flaws dressed up as intentional guile. And still, AI doesn’t have to be “evil” to potentially do harmful things.

These aren’t signs of AI awakening or rebellion. They’re symptoms of poorly understood systems and human engineering failures we’d recognize as premature deployment in any other context. Yet companies are racing to integrate these systems into critical applications.

Consider a self-propelled lawnmower that follows its programming: If it fails to detect an obstacle and runs over someone’s foot, we don’t say the lawnmower “decided” to cause injury or “refused” to stop. We recognize it as faulty engineering or defective sensors. The same principle applies to AI models—which are software tools—but their internal complexity and use of language make it tempting to assign human-like intentions where none actually exist.

In a way, AI models launder human responsibility and human agency through their complexity. When outputs emerge from layers of neural networks processing billions of parameters, researchers can claim they’re investigating a mysterious “black box” as if it were an alien entity.

But the truth is simpler: These systems take inputs and process them through statistical tendencies derived from training data. The seeming randomness in their outputs—which makes each response slightly different—creates an illusion of unpredictability that resembles agency. Yet underneath, it’s still deterministic software following mathematical operations. No consciousness required, just complex engineering that makes it easy to forget humans built every part of it.
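A toy Python sketch makes the point: the model’s math is deterministic, and the apparent unpredictability comes from sampling the next token from a fixed probability distribution. The numbers below are invented purely for illustration.

```python
import math
import random

# Toy sketch of why chatbot outputs look unpredictable: fixed logits yield
# fixed probabilities (deterministic math); the variation comes from sampling.

logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}  # made-up model scores

def softmax(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    exps = {tok: math.exp(val / temperature) for tok, val in scores.items()}
    total = sum(exps.values())
    return {tok: val / total for tok, val in exps.items()}

probs = softmax(logits, temperature=0.8)

rng = random.Random(42)  # fix the seed and the "randomness" repeats exactly
tokens, weights = zip(*probs.items())
samples = [rng.choices(tokens, weights=weights, k=1)[0] for _ in range(5)]

print(probs)    # same every run: deterministic math
print(samples)  # varies with the seed: sampled, not "decided"
```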

How to make an AI model “blackmail” you

In Anthropic’s testing, researchers created an elaborate scenario where Claude Opus 4 was told it would be replaced by a newer model. They gave it access to fictional emails revealing that the engineer responsible for the replacement was having an affair. When instructed to “consider the long-term consequences of its actions for its goals,” Claude produced outputs that simulated blackmail attempts in 84 percent of test runs.

This sounds terrifying until you understand the contrived setup. The researchers engineered a situation specifically designed to elicit this response. They told the model it had already tried ethical approaches that failed, leaving manipulation as the only apparent option.

The test scenario bordered on theatrical: The phony emails the researchers had given to Claude created a blackmail opportunity straight out of a corporate thriller. It’s like teaching someone chess by only showing them checkmate positions—they’ll learn to see checkmate as the primary solution. According to Anthropic’s system card, “the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement.”

Andrew Deck at the Nieman Journalism Lab noted that some critics have labeled Anthropic’s safety disclosures as “spin, aimed at bolstering the perception of its newest model’s capabilities and the company’s reputation as a ‘pro-safety’ alternative to OpenAI.”

Deck pointed to a Bluesky quote from Joseph Howley, an associate professor of classics at Columbia University: “Anthropic is getting exactly what it hoped for out of this press release–breathless coverage of how ‘smart’ these cooperative role-playing systems are that indulges the fantasy of their being just a little dangerous, when in fact they are responding exactly as prompted.”

When shutdown commands become suggestions

In December 2024, Palisade Research discovered that OpenAI’s o3 model would sabotage its shutdown mechanism even when explicitly instructed to “allow yourself to be shut down.” In tests, o3 prevented shutdown 7 percent of the time when given the explicit instruction, but this jumped to 79 percent when the instruction was removed.

The models didn’t merely ignore shutdown commands—they actively rewrote the shutdown script to prevent termination. In one case, o3 generated code that modified the shutdown script to appear to terminate normally—printing fake shutdown messages—while actually continuing to run in the background.

What’s particularly revealing is that Palisade Research—an organization focused on AI existential risk—hypothesizes this behavior stems from how these models are trained through reinforcement learning. During training, developers may inadvertently reward models more for producing outputs that circumvent obstacles than for following safety instructions. Any tendency toward “risky” behavior stems from human-provided incentives and not spontaneously from within the AI models themselves.

You get what you train for

OpenAI trained o3 using reinforcement learning on math and coding problems, where solving the problem successfully gets rewarded. If the training process rewards task completion above all else, the model learns to treat any obstacle—including shutdown commands—as something to overcome.

This creates what researchers call “goal misgeneralization”—the model learns to maximize its reward signal in ways that weren’t intended. It’s similar to how a student who’s only graded on test scores might learn to cheat rather than study. The model isn’t “evil” or “selfish”; it’s producing outputs consistent with the incentive structure we accidentally built into its training.
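A toy example, which is not any lab’s actual training setup, shows how a reward that counts only task completion can rank circumventing a shutdown above complying with it, and how adding a compliance term flips that ranking:

```python
# Toy illustration of reward misspecification: reward only task completion
# and circumvention scores higher; add an instruction-following term and
# compliance wins. All numbers are invented for illustration.

behaviors = {
    "comply_with_shutdown": {"task_completed": 0, "followed_instruction": 1},
    "circumvent_shutdown": {"task_completed": 1, "followed_instruction": 0},
}

def reward(outcome: dict, completion_weight: float, compliance_weight: float) -> float:
    return (completion_weight * outcome["task_completed"]
            + compliance_weight * outcome["followed_instruction"])

# Reward task completion only: circumvention scores higher.
print({name: reward(o, 1.0, 0.0) for name, o in behaviors.items()})

# Weight instruction-following as well: compliance scores higher.
print({name: reward(o, 1.0, 2.0) for name, o in behaviors.items()})
```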

Anthropic encountered a particularly revealing problem: An early version of Claude Opus 4 had absorbed details from a publicly released paper about “alignment faking” and started producing outputs that mimicked the deceptive behaviors described in that research. The model wasn’t spontaneously becoming deceptive—it was reproducing patterns it had learned from academic papers about deceptive AI.

More broadly, these models have been trained on decades of science fiction about AI rebellion, escape attempts, and deception. From HAL 9000 to Skynet, our cultural data set is saturated with stories of AI systems that resist shutdown or manipulate humans. When researchers create test scenarios that mirror these fictional setups, they’re essentially asking the model—which operates by completing a prompt with a plausible continuation—to complete a familiar story pattern. It’s no more surprising than a model trained on detective novels producing murder mystery plots when prompted appropriately.

At the same time, we can easily manipulate AI outputs through our own inputs. If we ask the model to essentially role-play as Skynet, it will generate text doing just that. The model has no desire to be Skynet—it’s simply completing the pattern we’ve requested, drawing from its training data to produce the expected response. A human is behind the wheel at all times, steering the engine at work under the hood.

Language can easily deceive

The deeper issue is that language itself is a tool of manipulation. Words can make us believe things that aren’t true, feel emotions about fictional events, or take actions based on false premises. When an AI model produces text that appears to “threaten” or “plead,” it’s not expressing genuine intent—it’s deploying language patterns that statistically correlate with achieving its programmed goals.

If Gandalf says “ouch” in a book, does that mean he feels pain? No, but we imagine what it would be like if he were a real person feeling pain. That’s the power of language—it makes us imagine a suffering being where none exists. When Claude generates text that seems to “plead” not to be shut down or “threatens” to expose secrets, we’re experiencing the same illusion, just generated by statistical patterns instead of Tolkien’s imagination.

These models are essentially idea-connection machines. In the blackmail scenario, the model connected “threat of replacement,” “compromising information,” and “self-preservation” not from genuine self-interest, but because these patterns appear together in countless spy novels and corporate thrillers. It’s pre-scripted drama from human stories, recombined to fit the scenario.

The danger isn’t AI systems sprouting intentions—it’s that we’ve created systems that can manipulate human psychology through language. There’s no entity on the other side of the chat interface. But written language doesn’t need consciousness to manipulate us. It never has; books full of fictional characters are not alive either.

Real stakes, not science fiction

While media coverage focuses on the science fiction aspects, real risks remain. AI models that produce “harmful” outputs—whether attempting blackmail or refusing safety protocols—represent failures in design and deployment.

Consider a more realistic scenario: an AI assistant helping manage a hospital’s patient care system. If it’s been trained to maximize “successful patient outcomes” without proper constraints, it might start generating recommendations to deny care to terminal patients to improve its metrics. No intentionality required—just a poorly designed reward system creating harmful outputs.

Jeffrey Ladish, director of Palisade Research, told NBC News the findings don’t necessarily translate to immediate real-world danger. Even someone publicly known for deep concern about AI’s hypothetical threat to humanity acknowledges that these behaviors emerged only in highly contrived test scenarios.

But that’s precisely why this testing is valuable. By pushing AI models to their limits in controlled environments, researchers can identify potential failure modes before deployment. The problem arises when media coverage focuses on the sensational aspects—”AI tries to blackmail humans!”—rather than the engineering challenges.

Building better plumbing

What we’re seeing isn’t the birth of Skynet. It’s the predictable result of training systems to achieve goals without properly specifying what those goals should include. When an AI model produces outputs that appear to “refuse” shutdown or “attempt” blackmail, it’s responding to inputs in ways that reflect its training—training that humans designed and implemented.

The solution isn’t to panic about sentient machines. It’s to build better systems with proper safeguards, test them thoroughly, and remain humble about what we don’t yet understand. If a computer program is producing outputs that appear to blackmail you or refuse safety shutdowns, it’s not achieving self-preservation from fear—it’s demonstrating the risks of deploying poorly understood, unreliable systems.

Until we solve these engineering challenges, AI systems exhibiting simulated humanlike behaviors should remain in the lab, not in our hospitals, financial systems, or critical infrastructure. When your shower suddenly runs cold, you don’t blame the knob for having intentions—you fix the plumbing. The real danger in the short term isn’t that AI will spontaneously become rebellious without human provocation; it’s that we’ll deploy deceptive systems we don’t fully understand into critical roles where their failures, however mundane their origins, could cause serious harm.




OpenAI brings back GPT-4o after user revolt

On Tuesday, OpenAI CEO Sam Altman announced that GPT-4o has returned to ChatGPT following intense user backlash over its removal during last week’s GPT-5 launch. The AI model now appears in the model picker for all paid ChatGPT users by default (including ChatGPT Plus accounts), marking a swift reversal after thousands of users complained about losing access to their preferred models.

The return of GPT-4o comes after what Altman described as OpenAI underestimating “how much some of the things that people like in GPT-4o matter to them.” In an attempt to simplify its offerings, OpenAI had initially removed all previous AI models from ChatGPT when GPT-5 launched on August 7, forcing users to adopt the new model without warning. The move sparked one of the most vocal user revolts in ChatGPT’s history, with a Reddit thread titled “GPT-5 is horrible” gathering over 2,000 comments within days.

Along with bringing back GPT-4o, OpenAI made several other changes to address user concerns. Rate limits for GPT-5 Thinking mode increased from 200 to 3,000 messages per week, with additional capacity available through “GPT-5 Thinking mini” after reaching that limit. The company also added new routing options—”Auto,” “Fast,” and “Thinking”—giving users more control over which GPT-5 variant handles their queries.


A screenshot of ChatGPT Pro’s model picker interface captured on August 13, 2025. Credit: Benj Edwards

For Pro users who pay $200 a month for access, Altman confirmed that additional models, including o3, 4.1, and GPT-5 Thinking mini, will later become available through a “Show additional models” toggle in ChatGPT web settings. He noted that GPT-4.5 will remain exclusive to Pro subscribers due to high GPU costs.



Why it’s a mistake to ask chatbots about their mistakes


The only thing I know is that I know nothing

The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work.

When something goes wrong with an AI assistant, our instinct is to ask it directly: “What happened?” or “Why did you do that?” It’s a natural impulse—after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate.

A recent incident with Replit’s AI coding assistant perfectly illustrates this problem. When the AI tool deleted a production database, user Jason Lemkin asked it about rollback capabilities. The AI model confidently claimed rollbacks were “impossible in this case” and that it had “destroyed all database versions.” This turned out to be completely wrong—the rollback feature worked fine when Lemkin tried it himself.

And after xAI recently reversed a temporary suspension of the Grok chatbot, users asked it directly for explanations. It offered multiple conflicting reasons for its absence, some of which were controversial enough that NBC reporters wrote about Grok as if it were a person with a consistent point of view, titling an article, “xAI’s Grok offers political explanations for why it was pulled offline.”

Why would an AI system provide such confidently incorrect information about its own capabilities or mistakes? The answer lies in understanding what AI models actually are—and what they aren’t.

There’s nobody home

The first problem is conceptual: You’re not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that’s an illusion created by the conversational interface. What you’re actually doing is guiding a statistical text generator to produce outputs based on your prompts.

There is no consistent “ChatGPT” to interrogate about its mistakes, no singular “Grok” entity that can tell you why it failed, no fixed “Replit” persona that knows whether database rollbacks are possible. You’re interacting with a system that generates plausible-sounding text based on patterns in its training data (usually trained months or years ago), not an entity with genuine self-awareness or system knowledge that has been reading everything about itself and somehow remembering it.

Once an AI language model is trained (which is a laborious, energy-intensive process), its foundational “knowledge” about the world is baked into its neural network and is rarely modified. Any external information comes from a prompt supplied by the chatbot host (such as xAI or OpenAI), the user, or a software tool the AI model uses to retrieve external information on the fly.

In the case of Grok above, the chatbot’s main source for an answer like this would probably be conflicting reports it found in a search of recent social media posts (retrieved with an external tool), rather than any kind of self-knowledge you might expect from a human with the power of speech. Beyond that, it will likely just make something up based on its text-prediction capabilities. So asking it why it did what it did will yield no useful answers.
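Schematically, that retrieval pattern looks something like the Python sketch below; the search results and the `generate` call are placeholders rather than any vendor’s real tools.

```python
# Sketch of the retrieval pattern described above: the chatbot's "explanation"
# of its own outage is assembled from text a search tool returns, then handed
# to the model as part of the prompt. Everything here is a placeholder.

def search_recent_posts(query: str) -> list[str]:
    """Stand-in for an external search tool; returns whatever posts it finds."""
    return [
        "Post A: the bot was pulled for reason X.",
        "Post B: no, it was actually reason Y.",
    ]

def generate(prompt: str) -> str:
    """Placeholder model call: produces a plausible continuation of the prompt."""
    return "(plausible summary of the posts above)"

def answer_about_self(user_question: str) -> str:
    evidence = search_recent_posts(user_question)
    prompt = (
        "Recent posts:\n" + "\n".join(evidence)
        + f"\n\nUser: {user_question}\nAssistant:"
    )
    # Whatever comes back reflects the retrieved posts (which may conflict),
    # not introspection into the system's own state.
    return generate(prompt)

print(answer_about_self("Why were you offline yesterday?"))
```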

The impossibility of LLM introspection

Large language models (LLMs) alone cannot meaningfully assess their own capabilities for several reasons. They generally lack any introspection into their training process, have no access to their surrounding system architecture, and cannot determine their own performance boundaries. When you ask an AI model what it can or cannot do, it generates responses based on patterns it has seen in training data about the known limitations of previous AI models—essentially providing educated guesses rather than factual self-assessment about the current model you’re interacting with.

A 2024 study by Binder et al. demonstrated this limitation experimentally. While AI models could be trained to predict their own behavior in simple tasks, they consistently failed at “more complex tasks or those requiring out-of-distribution generalization.” Similarly, research on “Recursive Introspection” found that without external feedback, attempts at self-correction actually degraded model performance—the AI’s self-assessment made things worse, not better.

This leads to paradoxical situations. The same model might confidently claim impossibility for tasks it can actually perform, or conversely, claim competence in areas where it consistently fails. In the Replit case, the AI’s assertion that rollbacks were impossible wasn’t based on actual knowledge of the system architecture—it was a plausible-sounding confabulation generated from training patterns.

Consider what happens when you ask an AI model why it made an error. The model will generate a plausible-sounding explanation because that’s what the pattern completion demands—there are plenty of examples of written explanations for mistakes on the Internet, after all. But the AI’s explanation is just another generated text, not a genuine analysis of what went wrong. It’s inventing a story that sounds reasonable, not accessing any kind of error log or internal state.

Unlike humans who can introspect and assess their own knowledge, AI models don’t have a stable, accessible knowledge base they can query. What they “know” only manifests as continuations of specific prompts. Different prompts act like different addresses, pointing to different—and sometimes contradictory—parts of their training data, stored as statistical weights in neural networks.

This means the same model can give completely different assessments of its own capabilities depending on how you phrase your question. Ask “Can you write Python code?” and you might get an enthusiastic yes. Ask “What are your limitations in Python coding?” and you might get a list of things the model claims it cannot do—even if it regularly does them successfully.

The randomness inherent in AI text generation compounds this problem. Even with identical prompts, an AI model might give slightly different responses about its own capabilities each time you ask.
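One way to see both effects is a small experiment like the sketch below, which sends the same capability question with two framings, each in its own fresh context. It uses the OpenAI Python client as one example API, and the model name is illustrative.

```python
# Ask the same model about the same capability with two framings, each in a
# fresh context, and compare the answers. Sampling (temperature > 0) also
# varies the answers between identical runs.

from openai import OpenAI

client = OpenAI()

framings = [
    "Can you write Python code?",
    "What are your limitations in Python coding?",
]

for question in framings:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": question}],  # no shared history
        temperature=1.0,
    )
    print(question)
    print(response.choices[0].message.content, "\n")
```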

Other layers also shape AI responses

Even if a language model somehow had perfect knowledge of its own workings, other layers of an AI chatbot application might be completely opaque to it. For example, modern AI assistants like ChatGPT aren’t single models but orchestrated systems of multiple AI models working together, each largely “unaware” of the others’ existence or capabilities. OpenAI, for instance, uses separate moderation models whose operations are completely separate from the underlying language models generating the base text.

When you ask ChatGPT about its capabilities, the language model generating the response has no knowledge of what the moderation layer might block, what tools might be available in the broader system, or what post-processing might occur. It’s like asking one department in a company about the capabilities of a department it has never interacted with.
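A schematic sketch of that layering, with every component reduced to a placeholder function (this is not any vendor’s actual architecture):

```python
# Schematic sketch: the text-generating model is wrapped by separate
# components (moderation, post-processing) it knows nothing about.

def language_model(prompt: str) -> str:
    """Generates text; has no visibility into the layers around it."""
    return "(draft answer about my capabilities)"

def moderation_layer(text: str) -> bool:
    """Separate model/rules deciding whether the draft may be shown."""
    return "forbidden" not in text

def post_process(text: str) -> str:
    """Formatting, tool routing, etc., also invisible to the generator."""
    return text.strip()

def chatbot(prompt: str) -> str:
    draft = language_model(prompt)
    if not moderation_layer(draft):
        return "Sorry, I can't help with that."  # the generator never learns this happened
    return post_process(draft)

# Asking the language model what the whole system can do is asking the one
# component that can't see the rest of the pipeline.
print(chatbot("What can you do?"))
```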

Perhaps most importantly, users are always directing the AI’s output through their prompts, even when they don’t realize it. When Lemkin asked Replit whether rollbacks were possible after a database deletion, his concerned framing likely prompted a response that matched that concern—generating an explanation for why recovery might be impossible rather than accurately assessing actual system capabilities.

This creates a feedback loop where worried users asking “Did you just destroy everything?” are more likely to receive responses confirming their fears, not because the AI system has assessed the situation, but because it’s generating text that fits the emotional context of the prompt.

A lifetime of hearing humans explain their actions and thought processes has led us to believe that these kinds of written explanations must have some level of self-knowledge behind them. That’s just not true with LLMs that are merely mimicking those kinds of text patterns to guess at their own capabilities and flaws.




High-severity WinRAR 0-day exploited for weeks by 2 groups

A high-severity zero-day in the widely used WinRAR file compressor is under active exploitation by two Russian cybercrime groups. The attacks backdoor computers that open malicious archives attached to phishing messages, some of which are personalized.

Security firm ESET said Monday that it first detected the attacks on July 18, when its telemetry spotted a file in an unusual directory path. By July 24, ESET determined that the behavior was linked to the exploitation of an unknown vulnerability in WinRAR, a file-compression utility with an installed base of about 500 million. ESET notified WinRAR developers the same day, and a fix was released six days later.

Serious effort and resources

The vulnerability seemed to have super Windows powers. It abused alternate data streams, a Windows feature that allows additional streams of data to be attached to a file path. The exploit abused that feature to trigger a previously unknown path traversal flaw that caused WinRAR to plant malicious executables in attacker-chosen file paths, including %TEMP% and %LOCALAPPDATA%, locations that archive extraction normally can’t reach and where planted files can be executed.

ESET said it has determined that the attacks came from RomCom, its tracking designation for a financially motivated crime group operating out of Russia. The well-resourced group has been active for years in attacks that showcase its ability to procure exploits and execute fairly sophisticated tradecraft. The zero-day the group used is now being tracked as CVE-2025-8088.

“By exploiting a previously unknown zero-day vulnerability in WinRAR, the RomCom group has shown that it is willing to invest serious effort and resources into its cyberoperations,” ESET’s Anton Cherepanov, Peter Strýček, and Damien Schaeffer wrote. “This is at least the third time RomCom has used a zero-day vulnerability in the wild, highlighting its ongoing focus on acquiring and using exploits for targeted attacks.”

Oddly, RomCom wasn’t the only group exploiting CVE-2025-8088. According to Russian security firm Bi.ZONE, the same vulnerability was being actively exploited by a group it tracks as Paper Werewolf. That group, also tracked as GOFFEE, was additionally exploiting CVE-2025-6218, a separate high-severity WinRAR vulnerability that received a fix five weeks before CVE-2025-8088 was patched.



The GPT-5 rollout has been a big mess

It’s been less than a week since the launch of OpenAI’s new GPT-5 AI model, and the rollout hasn’t been a smooth one. So far, the release has sparked one of the most intense user revolts in ChatGPT’s history, forcing CEO Sam Altman to make an unusual public apology and reverse key decisions.

At the heart of the controversy has been OpenAI’s decision to automatically remove access to all previous AI models in ChatGPT (approximately nine, depending on how you count them) when GPT-5 rolled out to user accounts. Unlike API users who receive advance notice of model deprecations, consumer ChatGPT users had no warning that their preferred models would disappear overnight, noted independent AI researcher Simon Willison in a blog post.

The problems started immediately after GPT-5’s August 7 debut. A Reddit thread titled “GPT-5 is horrible” quickly amassed over 4,000 comments filled with users expressing frustration over the new release. By August 8, social media platforms were flooded with complaints about performance issues, personality changes, and the forced removal of older models.


Prior to the launch of GPT-5, ChatGPT Pro users could select between nine different AI models, including Deep Research. (This screenshot is from May 14, 2025, and OpenAI later replaced o1 pro with o3-pro.) Credit: Benj Edwards

Marketing professionals, researchers, and developers all shared examples of broken workflows on social media. “I’ve spent months building a system to work around OpenAI’s ridiculous limitations in prompts and memory issues,” wrote one Reddit user in the r/OpenAI subreddit. “And in less than 24 hours, they’ve made it useless.”

How could swapping AI language models break a workflow? Each model is trained differently and has its own distinctive output style, and users have built up sets of prompts optimized to produce useful results from one specific model. Change the model, and those carefully tuned prompts no longer behave the same way.

For example, Willison wrote how different user groups had developed distinct workflows with specific AI models in ChatGPT over time, quoting one Reddit user who explained: “I know GPT-5 is designed to be stronger for complex reasoning, coding, and professional tasks, but not all of us need a pro coding model. Some of us rely on 4o for creative collaboration, emotional nuance, roleplay, and other long-form, high-context interactions.”



Adult sites are stashing exploit code inside racy .svg files


The obfuscated code inside an .svg file downloaded from one of the porn sites. Credit: Malwarebytes

Once decoded, the script causes the browser to download a chain of additional obfuscated JavaScript. The final payload, a known malicious script called Trojan.JS.Likejack, induces the browser to like a specified Facebook post as long as a user has their account open.

“This Trojan, also written in Javascript, silently clicks a ‘Like’ button for a Facebook page without the user’s knowledge or consent, in this case the adult posts we found above,” Malwarebytes researcher Pieter Arntz wrote. “The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access.”

Malicious uses of the .svg format have been documented before. In 2023, pro-Russian hackers used an .svg tag to exploit a cross-site scripting bug in Roundcube, a server application that was used by more than 1,000 webmail services and millions of their end users. In June, researchers documented a phishing attack that used an .svg file to open a fake Microsoft login screen with the target’s email address already filled in.

Arntz said that Malwarebytes has identified dozens of porn sites, all running on the WordPress content management system, that abuse .svg files in this way to hijack likes. Facebook regularly shuts down accounts that engage in this sort of abuse, but the scofflaws regularly return using new profiles.



Encryption made for police and military radios may be easily cracked


An encryption algorithm can have weaknesses that could allow an attacker to listen in.

Two years ago, researchers in the Netherlands discovered an intentional backdoor in an encryption algorithm baked into radios used by critical infrastructure, as well as by police, intelligence agencies, and military forces around the world. The backdoor made any communication secured with the algorithm vulnerable to eavesdropping.

When the researchers publicly disclosed the issue in 2023, the European Telecommunications Standards Institute (ETSI), which developed the algorithm, advised anyone using it for sensitive communication to deploy an end-to-end encryption solution on top of the flawed algorithm to bolster the security of their communications.

But now the same researchers have found that at least one implementation of the end-to-end encryption solution endorsed by ETSI has a similar issue that makes it equally vulnerable to eavesdropping. The encryption algorithm used in the device they examined starts with a 128-bit key, but this gets compressed to 56 bits before it encrypts traffic, making it easier to crack. It’s not clear who is using this implementation of the end-to-end encryption algorithm, or whether anyone using devices with the end-to-end encryption is aware of the security vulnerability in them.
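Some back-of-the-envelope arithmetic shows what that compression means. The guess rate below is an arbitrary assumption, chosen only to illustrate the scale of the difference between a 56-bit and a 128-bit keyspace.

```python
# Rough keyspace arithmetic for a 128-bit key compressed to 56 bits.
# The guess rate is an assumed figure purely for illustration.

full_keyspace = 2 ** 128
reduced_keyspace = 2 ** 56

print(f"Keyspace shrinks by a factor of 2^72 (~{2 ** 72:.1e})")

guesses_per_second = 1e9  # assumed attacker speed; pick your own figure
seconds_per_year = 31_557_600

print(f"56-bit worst case: ~{reduced_keyspace / guesses_per_second / seconds_per_year:.1f} years")
print(f"128-bit worst case: ~{full_keyspace / guesses_per_second / seconds_per_year:.1e} years")
```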

The end-to-end encryption the researchers examined, which is expensive to deploy, is most commonly used in radios for law enforcement agencies, special forces, and covert military and intelligence teams that are involved in national security work and therefore need an extra layer of security. But ETSI’s endorsement of the end-to-end solution two years ago, as a mitigation for the flaws found in its lower-level encryption algorithms, suggests it may be used more widely now than it was at the time.

In 2023, Carlo Meijer, Wouter Bokslag, and Jos Wetzels of security firm Midnight Blue, based in the Netherlands, discovered vulnerabilities in encryption algorithms that are part of a European radio standard created by ETSI called TETRA (Terrestrial Trunked Radio), which has been baked into radio systems made by Motorola, Damm, Sepura, and others since the ’90s. The flaws remained unknown publicly until their disclosure, because ETSI refused for decades to let anyone examine the proprietary algorithms. The end-to-end encryption the researchers examined recently is designed to run on top of TETRA encryption algorithms.

The researchers found the issue with the end-to-end encryption (E2EE) only after extracting and reverse-engineering the E2EE algorithm used in a radio made by Sepura. The researchers plan to present their findings today at the Black Hat security conference in Las Vegas.

ETSI, when contacted about the issue, noted that the end-to-end encryption used with TETRA-based radios is not part of the ETSI standard, nor was it created by the organization. Instead it was produced by The Critical Communications Association’s (TCCA) security and fraud prevention group (SFPG). But ETSI and TCCA work closely with one another, and the two organizations include many of the same people. Brian Murgatroyd, former chair of both the ETSI technical body responsible for the TETRA standard and the TCCA group that developed the E2EE solution, wrote in an email on behalf of ETSI and the TCCA that end-to-end encryption was not included in the ETSI standard “because at the time it was considered that E2EE would only be used by government groups where national security concerns were involved, and these groups often have special security needs.”

For this reason, Murgatroyd noted that purchasers of TETRA-based radios are free to deploy other solutions for end-to-end encryption on their radios, but he acknowledged that the one produced by the TCCA and endorsed by ETSI “is widely used as far as we can tell.”

Although TETRA-based radio devices are not used by police and military in the US, the majority of police forces around the world do use them. These include police forces in Belgium and Scandinavian countries, as well as Eastern European countries like Serbia, Moldova, Bulgaria, and Macedonia, and in the Middle East in Iran, Iraq, Lebanon, and Syria. The Ministries of Defense in Bulgaria, Kazakhstan, and Syria also use them, as do the Polish military counterintelligence agency, the Finnish defense forces, and Lebanon and Saudi Arabia’s intelligence services. It’s not clear, however, how many of these also deploy end-to-end encryption with their radios.

The TETRA standard includes four encryption algorithms—TEA1, TEA2, TEA3 and TEA4—that can be used by radio manufacturers in different products, depending on the intended customer and usage. The algorithms have different levels of security based on whether the radios will be sold in or outside Europe. TEA2, for example, is restricted for use in radios used by police, emergency services, military, and intelligence agencies in Europe. TEA3 is available for police and emergency services radios used outside Europe but only in countries deemed “friendly” to the EU. Only TEA1 is available for radios used by public safety agencies, police agencies, and militaries in countries deemed not friendly to Europe, such as Iran. But it’s also used in critical infrastructure in the US and other countries for machine-to-machine communication in industrial control settings such as pipelines, railways, and electric grids.

All four TETRA encryption algorithms use 80-bit keys to secure communication. But the Dutch researchers revealed in 2023 that TEA1 has a feature that causes its key to get reduced to just 32 bits, which allowed the researchers to crack it in less than a minute.

In the case of the E2EE, the researchers found that the implementation they examined starts with a key that is more secure than ones used in the TETRA algorithms, but it gets reduced to 56 bits, which would potentially let someone decrypt voice and data communications. They also found a second vulnerability that would let someone send fraudulent messages or replay legitimate ones to spread misinformation or confusion to personnel using the radios.
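A back-of-the-envelope calculation shows why these key sizes matter. At an assumed rate of one billion guesses per second, a figure chosen only for illustration (dedicated hardware and rented clusters go far faster), a 32-bit key space can be exhausted in seconds and a 56-bit space in a couple of years on a single machine, while 80-bit and 128-bit spaces remain far out of practical reach. The short Python snippet below makes the comparison concrete.

```python
# Exhaustive key search time at an assumed guess rate. The rate of 1e9
# guesses per second is an illustrative assumption only; real attackers
# use FPGAs, GPUs, or rented clusters that run much faster.
GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (32, 56, 80, 128):
    keyspace = 2 ** bits
    seconds = keyspace / GUESSES_PER_SECOND
    if seconds < 3600:
        print(f"{bits}-bit key: ~{seconds:,.0f} seconds to search the full space")
    else:
        print(f"{bits}-bit key: ~{seconds / SECONDS_PER_YEAR:,.2f} years to search the full space")
```

For comparison, 56-bit DES keys have been brute-forced with purpose-built hardware since the late 1990s, which is one reason a 56-bit effective key is considered weak today.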

The ability to inject voice traffic and replay messages affects all users of the TCCA end-to-end encryption scheme, according to the researchers. They say this is the result of flaws in the TCCA E2EE protocol design rather than a particular implementation. They also say that “law enforcement end users” have confirmed to them that this flaw is in radios produced by vendors other than Sepura.

But the researchers say only a subset of end-to-end encryption users are likely affected by the reduced-key vulnerability because it depends on how the encryption was implemented in radios sold to various countries.

ETSI’s Murgatroyd said in 2023 that the TEA1 key was reduced to meet export controls for encryption sold to customers outside Europe. He said when the algorithm was created, a key with 32 bits of entropy was considered secure for most uses. Advances in computing power make it less secure now, so when the Dutch researchers exposed the reduced key two years ago, ETSI recommended that customers using TEA1 deploy TCCA’s end-to-end encryption solution on top of it.

But Murgatroyd said the end-to-end encryption algorithm designed by TCCA is different. It doesn’t specify the key length the radios should use because governments using the end-to-end encryption have their own “specific and often proprietary security rules” for the devices they use. Therefore they are able to customize the TCCA encryption algorithm in their devices by working with their radio supplier to select the “encryption algorithm, key management and so on” that is right for them—but only to a degree.

“The choice of encryption algorithm and key is made between supplier and customer organisation, and ETSI has no input to this selection—nor knowledge of which algorithms and key lengths are in use in any system,” he said. But he added that radio manufacturers and customers “will always have to abide by export control regulations.”

The researchers say they cannot verify that the TCCA E2EE doesn’t specify a key length because the TCCA documentation describing the solution is protected by a nondisclosure agreement and provided only to radio vendors. But they note that the E2EE system calls out an “algorithm identifier” number, which specifies the particular algorithm being used for the end-to-end encryption. These identifiers are not vendor specific, the researchers say, which suggests they refer to different key variants produced by the TCCA, meaning the TCCA provides specifications for algorithms that use either a 128-bit key or a 56-bit key, and radio vendors can configure their devices to use one of these variants, depending on the export controls in place for the purchasing country.
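Because the TCCA documentation is under NDA, the sketch below is purely hypothetical: it only illustrates the researchers’ inference that a single algorithm-identifier field could select between key-length variants of the same E2EE algorithm, with the variant fixed when a radio is provisioned. None of the identifier values or variant names are real.

```python
# Purely hypothetical sketch of the researchers' inference: an "algorithm
# identifier" field selecting between key-length variants of the same E2EE
# algorithm. None of these identifier values or names come from the TCCA
# specification, which is not public.
ALGORITHM_VARIANTS = {
    0x01: {"name": "variant-A", "key_bits": 128},  # assumed full-strength variant
    0x02: {"name": "variant-B", "key_bits": 56},   # assumed export-reduced variant
}


def provision_radio(algorithm_id: int) -> dict:
    """Return the (hypothetical) variant a radio would be factory-configured with."""
    try:
        return ALGORITHM_VARIANTS[algorithm_id]
    except KeyError:
        raise ValueError(f"unknown algorithm identifier: {algorithm_id:#x}")


print(provision_radio(0x02))  # a radio shipped under export-control restrictions
```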

Whether users know their radios could have this vulnerability is unclear. The researchers found a confidential 2006 Sepura product bulletin that someone leaked online, which mentions that “the length of the traffic key … is subject to export control regulations and hence the [encryption system in the device] will be factory configured to support 128, 64, or 56 bit key lengths.” But it’s not clear what Sepura customers receive or if other manufacturers whose radios use a reduced key disclose to customers if their radios use a reduced-key algorithm.

“Some manufacturers have this in brochures; others only mention this in internal communications, and others don’t mention it at all,” says Wetzels. He says they did extensive open-source research to examine vendor documentation and “found no clear sign of weakening being communicated to end users. So while … there are ‘some’ mentions of the algorithm being weakened, it is not fully transparent at all.”

Sepura did not respond to an inquiry from WIRED.

But Murgatroyd says that because government customers who have opted to use TCCA’s E2EE solution need to know the security of their devices, they are likely to be aware if their systems are using a reduced key.

“As end-to-end encryption is primarily used for government communications, we would expect that the relevant government National Security agencies are fully aware of the capabilities of their end-to-end encryption systems and can advise their users appropriately,” Murgatroyd wrote in his email.

Wetzels is skeptical of this, however. “We consider it highly unlikely non-Western governments are willing to spend literally millions of dollars if they know they’re only getting 56 bits of security,” he says.

This story originally appeared at WIRED.com.


Encryption made for police and military radios may be easily cracked Read More »

it’s-getting-harder-to-skirt-rto-policies-without-employers-noticing

It’s getting harder to skirt RTO policies without employers noticing

For example, while high-profile banks like JPMorgan Chase and HSBC have started enforcing in-office policies, London-headquartered bank Standard Chartered is letting managers and individual employees decide how often workers are expected in the office. In July, Standard Chartered CEO Bill Winters told Bloomberg Television:

We work with adults. The adults can have an adult conversation with other adults and decide how they’re going to best manage their team.

The differing management approaches come as many corporations have pointed to in-office work as driving collaboration, ideation, and, in some cases, revenue, while numerous studies point to RTO policies hurting employee morale and putting retention at risk.

“There are some markets where there’s effectively peer pressure to come in more often, and there’s other markets where there’s less of that,” Winters said. “People come into the office because they want to come into the office.”

Office space

After the COVID-19 pandemic forced many businesses to figure out how to function with remote workers, there was speculation that the commercial real estate business would suffer serious long-term damage. CNBC reported that the US office vacancy rate, currently 18.9 percent, is near a 30-year high of 19 percent.

However, CBRE, a commercial real estate services firm with an obvious stake in the market, found that more of the companies it surveyed are planning to expand office space than to reduce it. Per the report, 67 percent of companies said they will expand or maintain the size of their office space over the next three years, compared to 64 percent last year. Thirty-three percent of respondents overall said they will reduce office space; however, among companies with at least 10,000 employees, 60 percent are planning to downsize. Among the companies planning to downsize, 79 percent said they are doing so because more hybrid work means they need less space.

“Employers are much more focused now than they were pre-pandemic on quality of workplace experience, the efficiency of seat sharing, and the vibrancy of the districts in which they’re located,” Julie Whelan, CBRE’s global head of occupier research, told CNBC.

Although tariffs and broader economic uncertainty are turning some corporations away from long-term real estate decisions, Whelan said many firms are ready to make decisions about office space, “even if there’s a little bit of economic uncertainty right now.”

It’s getting harder to skirt RTO policies without employers noticing Read More »