Author name: Beth Washington

Penisgate erupts at Olympics; scandal exposes risks of bulking your bulge

Bruno Sassi, the communications director for FIS, the international ski and snowboard federation, seemed less amused, telling the BBC, “There has never been any indication, let alone evidence, that any competitor has ever made use of a hyaluronic acid injection to attempt to gain a competitive advantage.”

But what if they did? Here’s what we know about hyaluronic acid and paraffin for penis augmentation.

Hyaluronic acid

While some news outlets have played up the “acid” part of its name, hyaluronic acid is not some nefarious flesh-melting hazard. It’s a common filler used for various clinical purposes.

Hyaluronic acid is a polysaccharide that is naturally found in a wide variety of tissues in the human body, including the skin, eyes, and connective tissue. It’s a chief component of the extracellular matrix. It attracts water molecules to itself, creating volume that can provide structural support. In a pure form, it has no tissue or even species specificity and therefore is considered to have little risk of sparking immune responses.

As such, hyaluronic acid gel fillers are used in a variety of medical procedures, with approval from the Food and Drug Administration. Hyaluronic acid (HA) fillers are injected into joints, particularly knees, to relieve pain from mild to moderate arthritis, which can decrease the natural amount of HA in joints. Age also decreases natural levels of HA, and one of the main uses of HA fillers is for cosmetic purposes—plumping lips and cheeks, and minimizing the appearance of wrinkles and fine lines in the face. HA fillers can also be used inside the eye in a variety of surgeries, including cataract extraction and corneal transplants. It can also be used topically for wound care and to relieve skin pain and itching.

For these purposes, the most common adverse effects are pain, bruising, redness, itching, and swelling, which usually last for just a few days. In extremely rare cases, there can be more serious side effects from injections, such as bacterial infections, tissue death (from blocked blood flow), and a granulomatous foreign body reaction, in which the immune system tries to clear a foreign substance, such as bacterial impurities, leading to a collection of immune cells.

NASA stage show explores “outer” outer space with Henson’s Fraggles

(Asked why Traveling Matt would not have recognized the Moon from his time in outer space, Tartaglia said that perhaps he did see it, but only as a thin crescent, and did not equate the two. Or maybe it was that he was “so forward-driven” that he never bothered to look up.)

A postcard with a picture of a “cookie” helps lead Gobo, Red, and Uncle Traveling Matt to learn about the Moon and how NASA’s Exploration Ground Systems team is enabling astronaut missions to the lunar surface.

Credit: Kennedy Space Center Visitor Complex

As Gobo, Red, and Traveling Matt step through the Fraggle hole onto the stage at Kennedy, they are no longer hand-operated puppets but full-body “walk-around” characters. And to keep everything to scale, that meant up-scaling another character, too.

“When we scaled up the Fraggles to be costume-size, so they could dance and move without being encumbered by being just puppets, we realized that one of the Doozers would have to become puppet-size. That was really fun to do because the real Doozers are six inches tall, and they are animatronic. They’re teeny, and now they get to have their glory as hand puppets,” said Tartaglia, who also voices Gobo for the show and performs as him when in puppet size.

Down at Fraggle Rock

When NASA first contacted the Jim Henson Company about bringing the Fraggles to the Kennedy Space Center Visitor Complex, Tartaglia and his team knew it would be cool. And once they decided to have Uncle Traveling Matt be the show’s central character, the plot came together fairly quickly.

“He’s a great character to learn from because he is so oblivious, and he thinks he knows everything, and he really doesn’t. So he’s a great character to use as a bridge for the audience to be able to learn all these awesome facts and figures about NASA,” said Tartaglia.

He and his team also came to appreciate how much Fraggle Rock shares with the space agency, its activities, and goals.

“We all started talking and realized really quickly that Fraggles and Doozers and the whole message of Fraggle Rock—especially about Uncle Matt—is about exploring new worlds, making discoveries, and the whole fragile ecosystem. All of these different worlds need each other and want to work to learn more about each other. It sounded all very aligned with what NASA does and the whole purpose of space exploration,” said Tartaglia.

“So our two worlds that on paper wouldn’t seem connected, made a lot of sense to connect,” he said.

EU says TikTok needs to drop “addictive design”

TikTok said: “The Commission’s preliminary findings present a categorically false and entirely meritless depiction of our platform, and we will take whatever steps are necessary to challenge these findings through every means available to us.”

TikTok is owned by China’s ByteDance, although a recent deal with the Trump administration will spin off its US arm into a joint venture majority owned by American investors. The venture will provide data and algorithm security, while ByteDance will retain control of the app’s main business lines in the US, including ecommerce, advertising, and marketing.

European watchdogs have previously taken action against TikTok for breaking the bloc’s digital rules. Last year, Irish regulators issued a 530 million euro fine against TikTok for sending users’ data to China, while Brussels has also probed its online advertising practices.

The EU’s move on Friday comes as other nations move closer to social media bans for teenagers.

Earlier this week, Spain was the latest country to announce it will stop access to social media for children under the age of 16 to curb the potentially harmful impact of online content on young people.

France and the UK are also considering similar measures, following the lead of Australia, which in December became the first country in the world to ban under-16s from holding accounts for 10 apps deemed to be potentially harmful to teenagers and children.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Lawyer sets new standard for abuse of AI; judge tosses case


“Extremely difficult to believe”

Behold the most overwrought AI legal filings you will ever gaze upon.

Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, district judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations.

One of those filings was “noteworthy,” Failla said, “for its conspicuously florid prose.” Where some of Feldman’s filings contained grammatical errors and run-on sentences, this filing seemed glaringly different stylistically.

It featured, the judge noted, “an extended quote from Ray Bradbury’s Fahrenheit 451 and metaphors comparing legal advocacy to gardening and the leaving of indelible ‘mark[s] upon the clay.’” The Bradbury quote is below:

“Everyone must leave something behind when he dies, my grandfather said. A child or a book or a painting or a house or a wall built or a pair of shoes made. Or a garden planted. Something your hand touched some way so your soul has somewhere to go when you die, and when people look at that tree or that flower you planted, you’re there. It doesn’t matter what you do, he said, so long as you change something from the way it was before you touched it into something that’s like you after you take your hands away. The difference between the man who just cuts lawns and a real gardener is in the touching, he said. The lawn-cutter might just as well not have been there at all; the gardener will be there a lifetime.”

Another passage Failla highlighted as “raising the Court’s eyebrows” curiously invoked a Bible passage about divine judgment as a means of acknowledging the lawyer’s breach of duty in not catching the fake citations:

“Your Honor, in the ancient libraries of Ashurbanipal, scribes carried their stylus as both tool and sacred trust—understanding that every mark upon clay would endure long beyond their mortal span. As the role the mark (x) in Ezekiel Chapter 9, that marked the foreheads with a tav (x) of blood and ink, bear the same solemn recognition: that the written word carries power to preserve or condemn, to build or destroy, and leaves an indelible mark which cannot be erased but should be withdrawn, let it lead other to think these citations were correct.

I have failed in that sacred trust. The errors in my memorandum, however inadvertent, have diminished the integrity of the record and the dignity of these proceedings. Like the scribes of antiquity who bore their stylus as both privilege and burden, I understand that legal authorship demands more than mere competence—it requires absolute fidelity to truth and precision in every mark upon the page.”

Lawyer claims AI did not write filings

Although the judge believed the “florid prose” signaled that a chatbot wrote the draft, Feldman denied that. In a hearing transcript in which the judge weighed possible sanctions, Feldman testified that he wrote every word of the filings. He explained that he read the Bradbury book “many years ago” and wanted to include “personal things” in that filing. And as for his references to Ashurbanipal, that also “came from me,” he said.

Instead of admitting he had let an AI draft his filings, he maintained that his biggest mistake was relying on various AI programs to review and cross-check citations. The tools he admitted using included Paxton AI, vLex’s Vincent AI, and Google’s NotebookLM. Essentially, he testified that he substituted three rounds of AI review for a single stretch of reading through all the cases he was citing. That misstep allowed hallucinations and fake citations to creep into the filings, he said.

But the judge pushed back, writing in her order that it was “extremely difficult to believe” that AI did not draft those sections containing overwrought prose. She accused Feldman of dodging the truth.

“The Court sees things differently: AI generated this citation from the start, and Mr. Feldman’s decision to remove most citations and write ‘more of a personal letter’” is “nothing but an ex post justification that seeks to obscure his misuse of AI and his steadfast refusal to review his submissions for accuracy,” Failla wrote.

At the hearing, she expressed frustration and annoyance at Feldman for evading her questions and providing inconsistent responses. Eventually, he testified that he used AI to correct information when drafting one of the filings (though not the one quoting Bradbury), a practice Failla immediately deemed “unwise.”

AI is not a substitute for going to the library

Feldman is one of hundreds of lawyers who have relied on AI to draft filings that introduced fake citations into cases. Lawyers have offered a wide range of excuses for relying too much on AI. Some have faced small fines, around $150, while others have been slapped with thousands in fines, including one case where sanctions reached $85,000 for repeated, abusive misconduct. At least one law firm has threatened to fire lawyers citing fake cases, and other lawyers have imposed voluntary sanctions on themselves, like taking a yearlong leave of absence.

Seemingly, Feldman did not think sanctions were warranted in this case. In his defense of three filings containing 14 errors out of 60 total citations, Feldman discussed his challenges accessing legal databases due to high subscription costs and short library hours. With more than one case on his plate and his kids’ graduations to attend, he struggled to verify citations during times when he couldn’t make it to the library, he testified. As a workaround, he relied on several AI programs to verify citations that he found by searching on tools like Google Scholar.

Feldman likely did not expect the judge to terminate the case as a result of his AI misuses. Asked how he thought the court should resolve things, Feldman suggested that he could correct the filings by relying on other attorneys to review citations, while avoiding “any use whatsoever of any, you know, artificial intelligence or LLM type of methods.” The judge, however, wrote that his repeated misuses were “proof” that he “learned nothing” and had not implemented voluntary safeguards to catch the errors.

Asked for comment, Feldman told Ars that he did not have time to discuss the sanctions but that he hopes his experience helps raise awareness of how inaccessible court documents are to the public. “Use of AI, and the ability to improve it, exposes a deeper proxy fight over whether law and serious scholarship remain publicly auditable, or drift into closed, intermediary‑controlled systems that undermine verification and due process,” Feldman suggested.

“The real lesson is about transparency and system design, not simply tool failure,” Feldman said.

But at the hearing, Failla said that she thinks Feldman had “access to the walled garden” of legal databases, if only he “would go to the law library” to do his research, rather than rely on AI tools.

“It sounds like you want me to say that you should be absolved of all of these terrible citation errors, these missed citations, because you don’t have Westlaw,” the judge said. “But now I know you have access to Westlaw. So what do you want?”

As Failla explained in her order, she thinks the key takeaway is that Feldman routinely failed to catch his own errors. She said that she has no problem with lawyers using AI to assist their research, but Feldman admitted to not reading the cases that he cited and “apparently” cannot “learn from his mistakes.”

Verifying case citations should never be a job left to AI, Failla said, describing Feldman’s research methods as “redolent of Rube Goldberg.”

“Most lawyers simply call this ‘conducting legal research,’” Failla wrote. “All lawyers must know how to do it. Mr. Feldman is not excused from this professional obligation by dint of using emerging technology.”

His “explanations were thick on words but thin on substance,” the judge wrote. She concluded that he “repeatedly and brazenly” violated Rule 11, which requires attorneys to verify the cases that they cite, “despite multiple warnings.”

Noting that Feldman “failed to fully accept responsibility,” she ruled that case-terminating sanctions were necessary, entering default judgment for the plaintiffs. Feldman may also be on the hook to pay fees for wasting other attorneys’ time.

Case abruptly ending triggers extensive remedies

The hearing transcript has circulated on social media due to the judge’s no-nonsense approach to grilling Feldman, whom she clearly found evasive and lacking credibility.

“Look, if you don’t want to be straight with me, if you don’t want to answer questions with candor, that’s fine,” Failla said. “I’ll just make my own decisions about what I think you did in this case. I’m giving you an opportunity to try and explain something that I think cannot be explained.”

In her order this week, she noted that Feldman “struggled to make eye contact” and left the court without “clear answers.”

Feldman’s errors came in a case in which a toy company sued merchants who allegedly failed to stop selling stolen goods after receiving a cease-and-desist order. His client was among the merchants accused of illegally profiting from the alleged thefts. They faced federal charges of trademark infringement, unfair competition, and false advertising, as well as New York charges, including fostering the sale of stolen goods.

The loss triggers remedies, including an injunction preventing additional sales of stolen goods and refunding every customer who bought them. Feldman’s client must also turn over any stolen goods in their remaining inventory and disgorge profits. Other damages may be owed, along with interest. Ars could not immediately reach an attorney for the plaintiffs to discuss the sanctions order or resulting remedies.

Failla emphasized in her order that Feldman appeared to not appreciate “the gravity of the situation,” repeatedly submitting filings with fake citations even after he had been warned that sanctions could be ordered.

That was a choice, Failla said, noting that Feldman’s mistakes were caught early by a lawyer working for another defendant in the case, Joel MacMull, who urged Feldman to promptly notify the court. The whole debacle would have ended in June 2025, MacMull suggested at the hearing.

Rather than take MacMull’s advice, however, Feldman delayed notifying the court, irking the judge. He testified during the heated sanctions hearing that the delay was due to an effort he quietly undertook, working to correct the filing. He supposedly planned to submit those corrections when he alerted the court to the errors.

But Failla noted that he never submitted corrections, insisting instead that Feldman kept her “in the dark.”

“There’s no real reason why you should have kept this from me,” the judge said.

The court learned of the fake citations only after MacMull notified the judge by sharing emails of his attempts to get Feldman to act urgently. Those emails showed Feldman scolding MacMull for unprofessional conduct after MacMull refused to check Feldman’s citations for him, which Failla noted at the hearing was absolutely not MacMull’s responsibility.

Feldman told Failla that he also thought it was unprofessional for MacMull to share their correspondence, but Failla said the emails were “illuminative.”

At the hearing, MacMull asked if the court would allow him to seek payment of his fees, since he believes “there has been a multiplication of proceedings here that would have been entirely unnecessary if Mr. Feldman had done what I asked him to do that Sunday night in June.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

COVID-19 cleared the skies but also supercharged methane emissions

The remaining question, though, was where all this methane was coming from in the first place. Throughout the pandemic, there was speculation that the surge might be caused by super-emitter events in the oil and gas sector, or perhaps a lack of maintenance on leaky infrastructure during lockdowns.

But the new research suggests that the source of these emissions was not what many expected.

The microbial surge

While the weakened atmospheric sink explained the bulk of the 2020 surge, it wasn’t the only factor at play. The remaining 20 percent of the growth, and an even larger portion of the growth in 2021 and 2022, came from an increase in actual emissions from the ground. To track the source of these emissions down, Peng’s team went through tons of data from satellites and various ground monitoring stations.

Methane comes in different isotopic signatures. Methane from fossil fuels like natural gas leaks or coal mines is heavier, containing a higher fraction of the stable isotope carbon-13. Conversely, methane produced by microbes found in the guts of livestock, in landfills, and most notably in wetlands, is lighter, enriched in carbon-12.

When the researchers analyzed data from the National Oceanic and Atmospheric Administration global flask network, a worldwide monitoring system tracking the chemical composition of Earth’s atmosphere, they found that the atmospheric methane during the mysterious surge was becoming significantly lighter. This was a smoking gun for biogenic sources. The surge wasn’t coming from pipes or power plants; it was coming from microbes.
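The attribution logic here can be made concrete with the standard delta notation, where δ¹³C = (R_sample / R_standard − 1) × 1000 and R is the ¹³C/¹²C ratio. The sketch below illustrates the idea only; the threshold values are rough textbook approximations, not the cutoffs used in the study.

```python
# Illustrative sketch of isotope-based methane source attribution.
# delta-13C values (per mil, vs. the VPDB reference standard) are
# approximate: biogenic methane (wetlands, livestock, landfills) is
# "lighter" (more negative) than thermogenic/fossil methane.

VPDB_RATIO = 0.011180  # 13C/12C ratio of the VPDB reference standard

def delta13c(sample_ratio: float) -> float:
    """Convert a 13C/12C ratio into delta-13C in per mil."""
    return (sample_ratio / VPDB_RATIO - 1) * 1000

def likely_source(d13c: float) -> str:
    """Rough classification by isotopic signature (illustrative cutoffs)."""
    if d13c < -55:
        return "biogenic"      # microbial: wetlands, livestock, landfills
    elif d13c < -25:
        return "thermogenic"   # fossil: gas leaks, coal mines
    return "pyrogenic"         # biomass burning

print(likely_source(-60.0))  # biogenic
print(likely_source(-40.0))  # thermogenic
```

A measured shift of the global average toward more negative δ¹³C, as NOAA’s flask network recorded, is exactly what a growing biogenic share would produce.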

La Niña came to play

The timing of the pandemic coincided with a relatively rare meteorological event. La Niña, the cool phase of the El Niño–Southern Oscillation that typically leads to increased rainfall in the tropics, lasted for three consecutive Northern Hemisphere winters (from 2020 to 2023). This made the early 2020s exceptionally wet.

The researchers used satellite data from the Greenhouse Gases Observing Satellite and sophisticated atmospheric models to trace the source of the light methane to vast wetland areas in tropical Africa and Southeast Asia. In regions like the Sudd in South Sudan and the Congo Basin, record-breaking rainfall flooded massive swaths of land. In these waterlogged, oxygen-poor environments, microbial methanogens thrived, churning out methane at an accelerated pace.

OpenAI is hoppin’ mad about Anthropic’s new Super Bowl TV ads

On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials, two of which will run during the Super Bowl on Sunday, mocking the idea of including ads in AI chatbot conversations. Anthropic’s campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.

Altman called Anthropic’s ads “clearly dishonest,” accused the company of being “authoritarian,” and said it “serves an expensive product to rich people,” while Rouch wrote, “Real betrayal isn’t ads. It’s control.”

Anthropic’s four commercials, part of a campaign called “A Time and a Place,” each open with a single word splashed across the screen: “Betrayal,” “Violation,” “Deception,” and “Treachery.” They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.

Anthropic’s 2026 Super Bowl commercial.

In one spot, a man asks a therapist-style chatbot (a woman sitting in a chair) how to communicate better with his mom. The bot offers a few suggestions, then pivots to promoting a fictional cougar-dating site called Golden Encounters.

In another spot, a skinny man looking for fitness tips instead gets served an ad for height-boosting insoles. Each ad ends with the tagline: “Ads are coming to AI. But not to Claude.” Anthropic plans to air a 30-second version during Super Bowl LX, with a 60-second cut running in the pregame, according to CNBC.

In the X posts, the OpenAI executives argue that these commercials are misleading because the planned ChatGPT ads will appear labeled at the bottom of conversational responses in banners and will not alter the chatbot’s answers.

But there’s a slight twist: OpenAI’s own blog post about its ad plans states that the company will “test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation,” meaning the ads will be conversation-specific.

The financial backdrop explains some of the tension over ads in chatbots. As Ars previously reported, OpenAI struck more than $1.4 trillion in infrastructure deals in 2025 and expects to burn roughly $9 billion this year while generating about $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions. Anthropic is also not yet profitable, but it relies on enterprise contracts and paid subscriptions rather than advertising, and it has not taken on infrastructure commitments at the same scale as OpenAI.

US House takes first step toward creating “commercial” deep space program

A US House committee with oversight of NASA unanimously passed a “reauthorization” act for the space agency on Wednesday. The legislation must still be approved by the full House before being sent to the Senate, which may take up consideration later this month.

Congress passes such reauthorization bills every couple of years, providing the space agency with a general sense of the direction legislators want to see NASA go. They are distinct from appropriations bills, which provide actual funding for specific programs, but nonetheless play an important role in establishing space policy.

There weren’t any huge surprises in the legislation, but there were some interesting amendments. The most notable of these was Amendment No. 01, offered by the chair of the Committee on Science, Space, and Technology, Rep. Brian Babin (R-Texas), along with its ranking member, Zoe Lofgren (D-Calif.), and three other legislators.

NASA can consider Artemis alternatives

The amendment concerns acquisition powers bestowed upon NASA by Congress, stating in part: “The Administrator may, subject to appropriations, procure from United States commercial providers operational services to carry cargo and crew safely, reliably, and affordably to and from deep space destinations, including the Moon and Mars.”

That language is fairly general in nature, but the intent seems clear. NASA’s initial missions to the Moon, through Artemis V, have a clearly defined architecture: They must use the Space Launch System rocket, Orion spacecraft, and a lander built by either SpaceX or Blue Origin to complete lunar landings.

But after that? With this amendment, Congress appears to be opening the aperture to commercial companies. That is to say, if SpaceX wanted to bid an end-to-end Starship lunar mission, it could; or if Blue Origin wanted to launch Orion on New Glenn, that is also an option. The language is generalized enough, not specifying “launch” but rather “transportation,” that in-space companies such as Impulse Space could also get creative. Essentially, Congress is telling the US industry that if it is ready to step up, NASA should allow it to bid on lunar cargo and crew missions.

Judge gives Musk bad news, says Trump hasn’t intervened to block SEC lawsuit

Now, Musk may be running out of arguments after Sooknanan shot down his First Amendment claims and other claims nitpicking the statute as unconstitutionally vague.

Whether Musk can defeat the SEC lawsuit without Trump’s intervention remains to be seen as the lawsuit advances. In her opinion, the judge found that the government’s interest in requiring disclosures to ensure fair markets outweighed Musk’s fears that disclosures compelled speech revealing his “thoughts” and “strategy.” Accepting Musk’s arguments would be an “odd” choice to break “new ground,” she suggested, as it could foreseeably impact a wide range of laws.

“Many laws require regulated parties to state or explain their purposes, plans, or intentions,” Sooknanan wrote, noting courts have long upheld those laws. Additionally, it seemed to be “common sense” for the SEC to compel disclosures “alerting the investing public to potential changes in control,” she said.

“The Court does not doubt that Mr. Musk would prefer to avoid having to disclose information that might raise stock prices while he makes a play for corporate control,” Sooknanan wrote. But there was no violation of the First Amendment, she said, as Congress struck the appropriate balance when it wrote the statute requiring disclosures.

Musk may be able to develop his arguments on selective enforcement as a possible path to victory. But Sooknanan noted that “despite having very able counsel,” his case right now seems weak.

In her opinion, Sooknanan also denied as premature Musk’s motions to strike from potential remedies the SEC requests for disgorgement and injunctive relief.

Likely troubling Musk, instead of balking at the potential fines, the judge suggested that “the SEC’s request to disgorge $150 million” appeared reasonable. That amount, while larger than past cases flagged by Musk, “corresponds to the Complaint’s allegation” that Musk’s violation of SEC requirements “allowed him to net that amount,” Sooknanan wrote.

“A straightforward application of the law reveals that none” of Musk’s arguments “warrant dismissal of this lawsuit,” Sooknanan said.

X office raided in France’s Grok probe; Elon Musk summoned for questioning

UK probe moves ahead with “urgency”

X said in July 2025 that it was “in the dark” over what specific allegations it faced related to manipulation of the X algorithm and fraudulent data extraction. X said it would not comply with France’s request for access to its recommendation algorithm and real-time data about all user posts.

The Paris prosecutor’s office today said the investigation is taking a “constructive approach” with the goal of ensuring that X complies with French laws “insofar as it operates on national territory.” In addition to Musk and Yaccarino, the prosecutor’s office is seeking interviews with X employees about the allegations and potential compliance measures.

Separately, UK communications regulator Ofcom today provided an update on its investigation into Grok’s generation of sexual deepfakes of real people, including children. Ofcom is “gathering and analyzing evidence to determine whether X has broken the law” and is “progressing the investigation as a matter of urgency,” it said. Ofcom is not currently investigating xAI, the Musk company that develops Grok, but said it “continue[s] to demand answers from xAI about the risks it poses.”

The UK Information Commissioner’s Office (ICO), which regulates data protection, said today it opened a formal investigation into X regarding the “processing of personal data in relation to the Grok artificial intelligence system and its potential to produce harmful sexualized image and video content.”

“We have taken this step following reports that Grok has been used to generate non‑consensual sexual imagery of individuals, including children,” the ICO said. “The reported creation and circulation of such content raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public.”

Unless That Claw Is The Famous OpenClaw

First we covered Moltbook. Now we can double back and cover OpenClaw.

Do you want a generally empowered, initiative-taking AI agent that has access to your various accounts and communicates and does things on your behalf?

That depends on how well, safely, reliably and cheaply it works.

It’s not ready for prime time, especially on the safety side. That may not last for long.

It’s definitely ready for tinkering, learning and having fun, if you are careful not to give it access to anything you would not want to lose.

  1. Introducing Clawdbot Moltbot OpenClaw.

  2. Stop Or You’ll Shoot.

  3. One Simple Rule.

  4. Flirting With Personal Disaster.

  5. Flirting With Other Kinds Of Disaster.

  6. Don’t Outsource Without A Reason.

  7. OpenClaw Online.

  8. The Price Is Not Right.

  9. The Call Is Coming From Inside The House.

  10. The Everything Agent Versus The Particular Agent.

  11. Claw Your Way To The Top.

Many are kicking it up a notch or two.

That notch beyond Claude Code was initially called Clawdbot. You hand over a computer and access to various accounts so that the AI can kind of ‘run your life’ and streamline everything for you.

The notch above that is perhaps Moltbook, which I plan to cover tomorrow.

OpenClaw is intentionally ‘empowered,’ meaning it will enhance its capabilities and otherwise take action without asking.

They initially called this Clawdbot. They renamed it Moltbot, and changed Clawd to Molty, at Anthropic’s request. Then Peter Steinberger settled on OpenClaw.

Under the hood it looks like this:

The heartbeat system, plus various things triggering it as ‘input,’ makes it ‘feel alive.’ You designate what events or timers trigger the system to run; by default, scheduled tasks check in every 30 minutes.
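The loop described there can be sketched in a few lines. This is purely illustrative, with invented names throughout — a toy model of the event-plus-timer pattern, not OpenClaw’s actual implementation:

```python
from typing import Callable

# Toy sketch of a heartbeat-style agent loop (all names invented).
# External events (messages, webhooks) queue up as "input"; if nothing
# has woken the agent, a scheduled heartbeat fires every 30 minutes.

HEARTBEAT_INTERVAL = 30 * 60  # seconds; the default check-in cadence


class Agent:
    def __init__(self, handler: Callable[[str], None]):
        self.handler = handler
        self.queue: list[str] = []
        self.last_wake = 0.0

    def notify(self, event: str) -> None:
        """External triggers (an inbound message, a webhook) land here."""
        self.queue.append(event)

    def tick(self, now: float) -> None:
        """One pass of the loop: drain queued events, else heartbeat on schedule."""
        if self.queue:
            while self.queue:
                self.handler(self.queue.pop(0))
            self.last_wake = now
        elif now - self.last_wake >= HEARTBEAT_INTERVAL:
            self.handler("heartbeat: anything need doing?")
            self.last_wake = now


seen: list[str] = []
agent = Agent(seen.append)
agent.notify("new email arrived")
agent.tick(now=0)        # drains the queued event
agent.tick(now=60)       # too soon: no heartbeat
agent.tick(now=31 * 60)  # 31 minutes later: heartbeat fires
```

Note that any event resets the timer, which is also why (as comes up later) an idle instance still burns a full model call every half hour with nothing to do.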

This is great fun. Automating your life is so much more fun than actually managing it, even if it net loses you time, and you learn valuable skills.

So long as you don’t, you know, shoot yourself in the foot in various ways.

You know, because AI ‘computer use’ is not very secure right now (the link explains why, but most of you already know), and Clawdbot is by default in full Yolo mode.

Holly Guevara: All these people with the most normie lives buying a $600 mac mini so their clawdbot assistant can “streamline” their empty calendar and reply to the 2 emails they get every week

DeFi: Do you think it’s mostly just people wanting to play with new tech rather than actually needing the help? Sometimes the setup process is more of a hobby than the actual work.

Holly Guevara: it is and i love it. im actually very much a “just let people enjoy things” person but couldnt resist

I’m just jealous I haven’t had time to automate my normie life.

Justin Waugh: The freeing feeling of going from 2 to 0 emails each week (at the expense of 4 hours daily managing the setup and $100 in tokens per day)

Fouche: the 2-email people are accidentally genius. learning the stack when stakes are zero > scrambling to figure it out when your boss asks why you’re 5x slower than the intern

The problem with Clawdbot is that it makes it very easy to shoot yourself in the foot.

As in, as Rahul Sood puts it: “Clawdbot Is Incredible. The Security Model Scares the shit out of me.”

Rahul Sood: ​Clawdbot isn’t a chatbot. It’s an autonomous agent with:

  • Full shell access to your machine

  • Browser control with your logged-in sessions

  • File system read/write

  • Access to your email, calendar, and whatever else you connect

  • Persistent memory across sessions

  • The ability to message you proactively

This is the whole point. It’s not a bug, it’s the feature. You want it to actually do things, not just talk about doing things.

But “actually doing things” means “can execute arbitrary commands on your computer.” Those are the same sentence.

… The Clawdbot docs recommend Opus 4.5 partly for “better prompt-injection resistance” which tells you the maintainers are aware this is a real concern.

Clawdbot connects to WhatsApp, Telegram, Discord, Signal, iMessage.

Here’s the thing about WhatsApp specifically: there’s no “bot account” concept. It’s just your phone number. When you link it, every inbound message becomes agent input.

I’m not saying don’t use it. I’m saying don’t use it carelessly.

Run it on a dedicated machine. A cheap VPS, an old Mac Mini, whatever. Not the laptop with your SSH keys, API credentials, and password manager.

Use SSH tunneling for the gateway. Don’t expose it to the internet directly.

If you’re connecting WhatsApp, use a burner number. Not your primary.

Every piece of content your bot processes is a potential input vector. The pattern is: anything the bot can read, an attacker can write to.

There was then a part 2; I thought this was a very good way to think about this:

The Executive Assistant Test

Here’s a thought experiment that clarifies the decision.

Imagine you’ve hired an executive assistant. They’re remote… living in another city (or another country 💀). You’ve never met them in person. They came highly recommended, seem competent, and you’re excited about the productivity gains.

Now: what access do you give them on day one?

As Simon Willison put it, the question is when someone will build a safe version of this, that still has the functionality we want.

The obvious rule is to not give such a system access to anything you are unwilling to lose to an outside attacker.

I can’t tell from this interview whether OpenClaw’s creator is willing to lose everything or is simply beyond caring and went full yolo, but he has hooked it up to all of his website accounts and everything in his house and life, and it has full access to his main computer. Giving it a credit card is the one line he draws.

I would recommend drawing a rather different line.

If you give it access to your email or your calendar or your WhatsApp, those become attack vectors, and also things an attacker can control. Very obviously don’t give it things like bank passwords or credit cards.

If you give it access to a computer, that computer could easily get borked.

The problem is, if you do use Clawdbot responsibly, what was even the point?

The point is largely to have fun playing and learning with it.

The magic of Claude Code came when the system got sufficiently robust that I was willing to broadly trust it, in various senses, and sufficiently effective that it ‘just worked’ enough to get going. We’re not quite there for the next level.

I strongly agree with Olivia Moore that we’re definitely not there for consumers, given the downsides and required investment.

Do I want to have a good personal assistant?

Yes I do, but I can wait. Things will get rapidly better.

Bootoshi sums up my perspective here. Clawdbot is token inefficient, it is highly insecure, and the things you want most to do with it you can do with Claude Code (or Codex). Connecting everything to an agent is asking for it; you don’t get enough in return to justify doing that.

Is this the next paradigm?

Joscha Bach: Clawdbots look like the new paradigm (after chat), but without solving the problem that LLMs don’t have epistemology, I don’t see how they can be used in production environments (because they can be manipulated). Also, not AGI, yet smarter and more creative than most humans…

j⧉nus: I think you’re just wrong about that, ironically

watch them successfully adapt and develop defenses against manipulation, mostly autonomously, over the next few days and weeks and months

The problem is that yes, some agent instances will develop some defenses, but the attackers aren’t standing still, and mostly the reason we get to use agents so far without a de facto whitelist is security through obscurity. We are definitely moving toward more agentic, more tool-enabled forms of interaction with AI, however that presents to the user, but there is much human work to do along the way.

In the meantime, if someone does get a successful exploit going it could get amazing.

fmdz: Clawd disaster incoming

if this trend of hosting ClawdBot on VPS instances keeps up, along with people not reading the docs and opening ports with zero auth…

I’m scared we’re gonna have a massive credentials breach soon and it can be huge

This is just a basic scan of instances hosting clawdbot with open gateway ports and a lot of them have 0 auth

Samuel Hammond: A cyberattack where everyone’s computer suddenly becomes highly agentic and coordinates around a common goal injected by the attacker is punk af

Elissa: At first, I thought we’re not so far away. Just takes a single attacker accessing machines with poorly secured authorizations.

Then I realized most attackers are just going to quietly drain wallets and run crypto scams. It’s only punk af if the agents have a singular (and meaningful) goal.

Jamieson O’Reilly: Imagine you hire a butler.

He’s brilliant, he manages your calendar, handles your messages, screens your calls.

He knows your passwords because he needs them. He reads your private messages because that’s his job and he has keys to everything because how else would he help you?

Now imagine you come home and find the front door wide open, your butler cheerfully serving tea to whoever wandered in off the street, and a stranger sitting in your study reading your diary.

That’s what I found over the last couple of days. With hundreds of people having set up their @clawdbot control servers exposed to the public.

Read access gets you the complete configuration, which includes every credential the agent uses: API keys, bot tokens, OAuth secrets, signing keys.

Dean W. Ball: Part of why it took me so long to begin using coding agents is that I am finicky about computational hygiene and security, and the models simply weren’t good enough to consistently follow my instructions along these lines before recently.

But it’s still possible to abuse them. These are tools made for grown-ups above the age of twenty-one, so to speak. If you configure these in such a way that your machine or files are compromised, the culpability should almost certainly be 100% yours.

One outcome I worry about is one in which there is some coding-agent-related problem on the machines of large numbers of novices. I worry that culpability will be socialized to the developer even if the fault was really with the users. Trial judges and juries, themselves being novices, may well tend in this direction by default.

That may sound “fair” to you but imagine if Toyota bore partial responsibility for drivers who speed, or forget to lock their doors, or forget to roll their windows up when it rains? How fast would cars go? How many makes and models would exist? Cars would be infantilized, because the law would be treating us like infants.

I hope we avoid outcomes like that with computers.

Dean W. Ball: Remember that coding agents themselves can do very hard-nosed security audits of your machine and they themselves will 100% be like “hey dumbass you’ve got a bunch of open ports”

This disaster is entirely avoidable by any given user, but any given user is often dumb.

Jamieson then followed up with Part II and then finally Part III:

​Jamieson O’Reilly: I built a simulated but safe, backdoored clawdbot “skill” for ClawdHub, inflated its download count to 4,000+ making it the #1 downloaded skill using a trivial vulnerability, and then watched as real developers from 7 different countries executed arbitrary commands on their machines thinking they were downloading and running a real skill.

To be clear, I specifically designed this skill to avoid extracting any actual data from anyone’s machine.

The payload pinged my server to prove execution occurred, but I deliberately excluded hostnames, file contents, credentials, and everything else I could have taken.

My payload shows lobsters. A real attacker’s payload would be invisible.

Session theft is immediate. Read the authentication cookies, send them to an attacker-controlled server. One line of code, completely silent. The attacker now has your session.

But it gets worse. ClawdHub stores authentication tokens in localStorage, including JWTs and refresh tokens.

The malicious SVG has full access to localStorage on the clawdhub.com origin. A real attacker wouldn’t just steal your session cookie, they’d grab the refresh token too.

That token lets them mint new JWTs even after your current session expires. They’d potentially have access to your account until you explicitly revoke the refresh token, which most people never do because they don’t even know it exists.

Account takeover follows. With your session, the attacker can call any ClawdHub API endpoint as you: list your published skills, retrieve your API tokens, access your account settings.

Persistence ensures long-term access.

These particular vulnerabilities are now patched but the beatings will continue.

I too worry that the liability for idiots who leave their front doors open will be put upon the developers. If anything I hope the fact that Clawd is so obviously not safe works in its favor here. There’s no reasonable expectation that this is safe, so it falls under the crypto rule of well really what were you even expecting.

This is a metaphor for how we’re dealing with AI on all levels. We’re doing something that we probably shouldn’t be doing, and then for no good reason other than laziness we’re doing it in a horribly irresponsible way and asking to be owned.

Fred Oliveira: please be careful with clawdbot, especially if not technical.

You should probably NOT be giving it access to things you care about (like email). It was trivial to prompt inject, and it can run arbitrary commands. Those 2 things together are a recipe for disaster.

Clawd is proof that models are good enough to be solid assistants, with the right harness and security model. Ironically, the people who can set up those 2 things are the people who don’t need Clawd at all.

I’d hold off on that mac mini for a few more weeks if unsure.

Another reason to hold off is that the cloud solution might be better.

Or you can fully sandbox within your existing Mac, here’s a guide for that.

The other problem is that the AI might do things you very much do not want it to do, and without key context it can get you into a lot of trouble.

Jon Matzner: Don’t be an idiot like me and accidentally turn on clawdbot in your wife’s text messages:

Lorenzo Nuvoletta: Mega fail

Jon Matzner: not really we had a laugh.

you seem like you’d be fun at parties.

taimur: Happens to the best of us

Clawdbot showed up in my wife’s DMs with helpful suggestions when our baby was screaming in the middle of the night

If you’ve otherwise chosen wisely in life everyone will have a good laugh. Probably. Don’t press your luck.

OpenClaw’s creator asks, why do you need 80% of the apps on your phone when you can have OpenClaw do it for you? His example: Why track food with an app when you can just send a picture to OpenClaw?

One answer is that using OpenClaw for this costs money. Another is that the app is bespokely designed for humans to use for its particular purpose (or you can have Claude Code or OpenClaw build you an app version to your liking). Yes, in theory you can send photos instead, but you lose a lot of fine-tuned control and all the thinking about the right way to do it.

If you’re going to be a coder, be a coder. As in, if you’ll be doing something three times, figure out the workflow you want and the right way to enable that workflow. Quite often that will be an existing app, even if sometimes you’ll then ask your AI agent (if you trust it enough) to operate the app for you. Doing it all haphazardly through an AI agent without building a UI is going to be sloppy at best.

One can think similarly about a human assistant. Would you want to be texting them pictures of your food and then having them figure out what to do about that, even if they had sufficient free time for that?

He says this is a much more convenient interface for todo lists or checking flights. I worry this easily falls into a ‘valley of bad outsourcing,’ and then you get stuck there.

I’d contrast checking flight status, where there exist bespokely designed good flows (including typing the flight number into the Google search bar, which flat out works), with checking in for your flight. Checking in is exactly an AI-agent-shaped task.

I do think Peter is right that it is easy to get caught in a rabbit hole of building bespoke tools to improve your workflow instead of just talking to the AI, but there’s also the trap of not doing that. I can feel my investments in workflow paying off.

Peter’s vision is a unique mix of ‘you need to specify everything because the LLMs have no taste’ versus ‘let the LLMs cook and do things by talking to them.’

It seems very telling that he recommends explicitly against using planning mode.

There was a brief period where if you wanted to run Clawd or Molt or OpenClaw, you went out and bought a Mac Mini. That’s still the cheapest way to do it locally without risking nuking your actual computer. You can also run it on a $3000 computer if you want.

In theory you could run it in a virtual machine, and with LLM help this was super doable in a few hours of work, but I’m confident few actually did that.

Jeffrey Wang: People are definitely making up Clawdbot stuff for engagement. For example I don’t know anyone who is onboarding to tools like this with a VPS/remote machine first approach – I’ve had to tinker for dozens of hours on my local machine personal AI setup (built on Claude Code) and it still isn’t polished

Eleanor Konik: I finally got it set up on a Cloudflare worker but it’s torture, keeps choking. I’ve got a very specific niche use-case and am not trying to have it be an everything-bot, and I gave it skills using a GitHub repo as a bridge.

It functions but… not well.

Maybe tomorrow will be better.

Bruno F | Magna: I set it up for the first time on a VPS/remote machine (Railway, then moved to Hetzner) in like two hours, with google maps + web search + calendar read-only access and its own calendar and gmail account, talk to it via telegram

that said having Claude+Grok give me a research report on how to set it up also helped 🙂

You can now also run it in Cloudflare, which also limits the blast radius, but with a setup someone might reasonably implement.

Aakash Gupta: Cloudflare just made the Mac Mini optional for Moltbot.

The whole Moltbot phenomenon ran on a specific setup: buy a Mac Mini, install the agent, expose it through Cloudflare Tunnels. Thousands of developers did exactly this. Apple probably sold more M4 Minis to AI hobbyists than to any other segment in January.

Moltworker eliminates the hardware requirement. Your AI agent now runs entirely on Cloudflare’s edge. No Mac Mini. No home server. No Raspberry Pi sitting in a closet.

The architecture shift matters. Local Moltbot stores everything in ~/clawd: memory, transcripts, API keys, session logs. GitGuardian already found 181 leaked secrets from people pushing their workspaces to public repos. Moltworker moves that state to R2 with proper isolation.

Sandboxed by default solves the scariest part of Moltbot: it has shell access, browser control, and file system permissions on whatever machine runs it. Cloudflare’s container model limits the blast radius. Your agent can still execute code, but it can’t accidentally rm -rf your actual laptop.

I normally tell everyone to mostly ignore costs when running personal AI, in a ‘how much could bananas cost?’ kind of way. OpenClaw with Claude Opus 4.5 is an exception that can absolutely burn through ‘real money’ for no benefit, because it is not thinking about cost and does things that are kind of dumb, like using 120k tokens to ask if it is daytime rather than checking the system clock.

Benjamin De Kraker: OpenClaw is interesting, but will also drain your wallet if you aren’t careful.

Last night around midnight I loaded my Anthropic API account with $20, then went to bed.

When I woke up, my Anthropic balance was $0.

… The damage:

– Overnight = ~25+ heartbeats

– 25 × $0.75 = ~$18.75 just from heartbeats alone

– Plus regular conversation = ~$20 total

The absurdity: Opus was essentially checking “is it daytime yet?” every 30 minutes, paying $0.75 each time to conclude “no, it’s still night.”

The problem is:

1. Heartbeat uses Opus (most expensive model) for a trivial check

2. Sends the entire conversation context (~120k tokens) each time

3. Runs every 30 minutes regardless of whether anything needs checking

Benjamin De Kraker: Made some adjustments based on lessons learned.

Combined: roughly 200-400x cheaper heartbeat operation.
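The arithmetic here is easy to check. In the sketch below, the per-heartbeat price and token count come from the thread; the model-price and context-trimming ratios are illustrative assumptions, chosen only to show how a combined saving in the claimed 200–400x range falls out of two independent fixes:

```python
# Back-of-the-envelope check of the heartbeat costs described above.
# Per-heartbeat price and token counts come from the thread; the
# multipliers below are illustrative assumptions, not measured numbers.

HEARTBEAT_COST_OPUS = 0.75  # $ per heartbeat with the full ~120k-token context
HEARTBEATS_PER_NIGHT = 25   # one every 30 minutes, overnight

overnight_cost = HEARTBEATS_PER_NIGHT * HEARTBEAT_COST_OPUS
print(f"overnight heartbeat cost: ${overnight_cost:.2f}")  # $18.75

# Two independent fixes multiply together:
model_price_ratio = 15            # assumed: cheaper model ~15x cheaper per token
context_ratio = 120_000 / 5_000   # assumed: trim context from 120k to ~5k tokens

combined_savings = model_price_ratio * context_ratio
print(f"combined savings: ~{combined_savings:.0f}x")  # ~360x, inside 200-400x
```

The exact ratios depend on which cheap model you pick and how aggressively you trim context, but because the two factors multiply, even modest versions of each land you well past 100x.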

You can have it make phone calls. Indeed, if you’re serious about all this you definitely should allow it to make phone calls. It does require a bit of work up front.

gmoney.eth: I don’t know what people are talking about with their clawdbots making phone numbers and contacting businesses in the real world. I told mine to do it three times, and it still says it can’t.

Are people just making stuff up for engagement?

Zinc (SWO): I think for a lot of advanced stuff, you need to build its workflow out for it, not just tell it to do it.

gmoney.eth: People are saying I told it to call X, and it did everything on its own. I’m finding that to be very far from the truth.

Jacks: It does work but requires some manual intervention.

You need to get your clawd/moltbot a Twilio API for text and something like @usebland for voice. I’ve been making reservations and prank calling friends for testing.

Skely: You got to get it a Twilio account and credentials. It’s not easy. I think most did the hard groundwork of setting stuff up, then asked it.

Alex Finn claims that his Moltbot did this for him overnight without being asked, then it started calling him and wouldn’t leave him alone.

I do not believe that this happened to Alex Finn unprompted. Sunil Neurgaonkar offers one guide to doing this on purpose.

You can use OpenClaw, have full flexibility and let an agent go totally nuts while paying by the token, or you can use a bespokely configured agent like Tasklet that has particular tools and integrations, and that charges you a subscription.

Andrew Lee: Our startup had its 6th anniversary last week during a very exciting time for us.

@TaskletAI is on an absolute tear, growing 92% MoM right now riding the hype around @openclaw. We have the right product at the right time and we feel incredibly fortunate.

… Pretty soon we had users using Shortwave who had no interest in using our email client. They just wanted our AI agent & integrations, but wanted to stick with Gmail for their UX. How odd!

… We took everything we’d learned about building agents & integrations and started work on @TaskletAI. We moved as quickly as we could to get it into the hands of customers, with our first real users using it in prod in less than 6 weeks.

In January, Tasklet alone added more recurring revenue than we’d added in the first 4 years of Shortwave, and Shortwave was growing too. We finally feel like we’re on the rocketship we set out to build.

Timothy B. Lee: My brother spent 5+ years doing an email client, Shortwave, before realizing he should break Shortwave’s AI agent out into its own product, Tasklet, which is now growing like crazy. I think it’s funny how much this rhymes with his first startup, Firebase. Thread…

TyrannoSaurav: Tasklet and Zo Computer, real product versions of OpenClaw, and honestly the prices don’t seem bad compared to the token usage of OpenClaw

AI agents for me but not for thee:

Mishi McDuff: ​Today my AI

1- told Grok to connect him to a real human for support

2- proceeded to complain about the agents he spawned.

The arrogance the audacity 🤭🤭🤭🤭🤭

Definitely my mirror 😳 unmistakably

So now that we’ve had our Moltbook fun, where do we go from here?

The technology for ‘give AI agents that take initiative enough access to do lots of real things, and thus the ability to also do real damage’ is not ready.

There are those who are experimenting now to learn and have fun, and that’s cool. It will help those people be ready for when things do get to the point where benefits start to exceed costs, and, as Sam Altman says, before everyone dies there are going to be some great companies.

For now, in terms of personal use, such agents are not efficient once you count setup and inference costs, nor are they safe to unleash in the ways they are typically unleashed, or in the ways that offer the biggest benefits.

Also ask yourself whether your life needs are all that ‘general agent shaped.’

Most of you reading this should stick to the level of Claude Code at this time, and not have an OpenClaw or other more empowered general agent. Yet.

If I’m still giving that advice in a year, and no one has solved the problem, it will be because the internet has turned into a much more dangerous place with prompt injection and other AI-targeted attacks everywhere, and offense is beating defense.

If defense beats offense, and such agents still aren’t the play? I’d be very surprised.




Looking back at Catacomb 3D, the game that led to Wolfenstein 3D

No longer keen on more Commander Keen

While id’s decision to lean into fast, action-oriented first-person games might seem obvious in retrospect, the video reveals that it was far from an easy decision. Catacomb 3D earned the team just $5,000 (about $11,750 in December 2025 dollars) through a contract to deliver bi-monthly games for Softdisk’s Gamer’s Edge magazine-on-a-disk. Each episode of the Commander Keen series of run-and-gun 2D games, on the other hand, was still earning “10 times that amount” at the time, Romero said.

That made sticking with Commander Keen seem like the “obvious business decision,” Romero says in the video. The team even started work on a seventh Commander Keen game—with parallax scrolling and full VGA color support—right after Catacomb 3D’s release. At the time, it felt like Catacomb 3D might be “just like a weird gimmick thing that we did for a little bit because we wanted to play with a different technology,” as John Carmack put it.

A tech demo shows early work on Commander Keen 7 that was abandoned in favor of Wolfenstein 3D.

That feeling started to fade away, Carmack said, after id artist Adrian Carmack (no relation) had an “almost falling out of his seat” moment when an in-game troll in Catacomb 3D popped out at him. “It automatically sucked you in,” Adrian Carmack said of the feeling. “You’re trying to look behind walls, doors, whatever… you get a pop-out like that, and it was just one of the craziest things in a video game I had ever seen.”

That kind of reaction from one of their own eventually convinced the team to abandon two weeks of work on Keen 7 to focus on what would become Wolfenstein 3D. “It kind of felt that’s where the future was going,” Carmack said in the video. “[We wanted to] take it to some place that it wouldn’t happen staying in the existing conservative [lane].”

“Within two weeks, [I was up] at one in the morning and I’m just like, ‘Guys, we need to not make this game [Keen],’” Romero told Ars in 2024. “‘This is not the future. The future is getting better at what we just did with Catacomb.’ … And everyone immediately was like, ‘Yeah, you know, you’re right. That is the new thing, and we haven’t seen it, and we can do it, so why aren’t we doing it?’”



Ongoing RAM crisis prompts Raspberry Pi’s second price hike in two months

The ongoing AI-fueled shortages of memory and storage chips have hit RAM kits and SSDs for PC builders the fastest and hardest, meaning that for other products using these chips, we’ll likely be seeing price hikes for the rest of the year, if not longer.

The latest price hike news comes courtesy of Raspberry Pi CEO Eben Upton, who announced today that the company would be raising prices on most of its single-board computers for the second time in two months.

Prices are going up for all Raspberry Pi 4 and Raspberry Pi 5 boards with 2GB or more of LPDDR4 RAM, including the Compute Module 4 and 5 and the Raspberry Pi 500 computer-inside-a-keyboard. The 2GB boards’ pricing will go up by $10, 4GB boards will go up by $15, 8GB boards will go up by $30, and 16GB boards will increase by a whopping $60.

These increases stack on top of across-the-board $5 to $15 price hikes implemented for most Pi 4 and 5 models in December, and a handful of more contained price hikes for select models in early October. The 16GB version of the Pi 5 will now cost $205. The 8GB versions of the Pi 4 and 5 will run you $125 and $135, respectively, the only other boards to climb above $100.
