Author name: DJ Henderson

Bad sleep made woman’s eyelids so floppy they flipped inside out, got stuck

Exhausted elastin

As such, the correct next step for addressing her floppy eyelids wasn’t eye surgery or medication—it was a referral for a sleep test.

The patient did the test, which found that while she was sleeping, she stopped breathing 27 times per hour. On the apnea–hypopnea index, that yields a diagnosis of moderate OSA.
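For reference, the commonly cited apnea–hypopnea index cutoffs are under 5 events per hour (normal), 5 to 15 (mild), 15 to 30 (moderate), and 30 or more (severe). A quick sketch of that classification in Python (the function name is mine, not from any clinical tool):

```python
def ahi_severity(events_per_hour: float) -> str:
    """Classify sleep apnea severity from the apnea-hypopnea index
    (breathing interruptions per hour of sleep), using the commonly
    cited clinical cutoffs."""
    if events_per_hour < 5:
        return "normal"
    if events_per_hour < 15:
        return "mild"
    if events_per_hour < 30:
        return "moderate"
    return "severe"

# The patient in this case stopped breathing 27 times per hour:
print(ahi_severity(27))  # moderate
```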

With this finding, the woman started using a continuous positive airway pressure (CPAP) machine, which delivers a steady stream of air into the airway during sleep, preventing it from closing up. Between the CPAP, eye lubricants, nighttime eye patches, and a weight-loss plan, the woman’s condition rapidly improved. After two weeks, her eyelids were no longer inside out, and she could properly close her eyes. She was also sleeping better and no longer had daytime drowsiness.

Doctors don’t entirely understand the underlying mechanisms that cause floppy eyelid syndrome, and not all cases are linked to OSA. Researchers have hypothesized that genetic predispositions or anatomical anomalies may contribute to the condition. Some studies have found links to underlying connective tissue disorders. Tissue studies have clearly pointed to decreased amounts or abnormalities in the elastin fibers of the tarsal plate, the dense connective tissue in the eyelids.

For people with OSA, researchers speculate that the sleep disorder leads to hypoxic conditions (a lack of oxygen) in their tissue. This, in turn, could increase oxidative stress and reactive oxygen species in the tissue, which can spur the production of enzymes that break down elastin in the eyelid. Thus, the eyelids become lax and limp, allowing them to get into weird positions (such as inside out) and leading to chronic irritation of the eye surface.

The good news is that most people with floppy eyelid syndrome can manage the condition with conservative measures, such as CPAP for those with OSA, as did the woman in New York. But some may end up needing corrective surgery.

FBI stymied by Apple’s Lockdown Mode after seizing journalist’s iPhone

Apple made Lockdown Mode for people at high risk

CART couldn’t get anything from the iPhone. “Because the iPhone was in Lockdown mode, CART could not extract that device,” the government filing said.

The government also submitted a declaration by FBI Assistant Director Roman Rozhavsky that said the agency “has paused any further efforts to extract this device because of the Court’s Standstill Order.” The FBI did extract information from the SIM card “with an auto-generated HTML report created by the tool utilized by CART,” but “the data contained in the HTML was limited to the telephone number.”

Apple says that Lockdown Mode “helps protect devices against extremely rare and highly sophisticated cyber attacks,” and is “designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats.”

Introduced in 2022, Lockdown Mode is available for iPhones, iPads, and Macs. It must be enabled separately for each device. To enable it on an iPhone or iPad, a user would open the Settings app, tap Privacy & Security, scroll down and tap Lockdown Mode, and then tap Turn on Lockdown Mode.

The process is similar on Macs. In the System Settings app that can be accessed via the Apple menu, a user would click Privacy & Security, scroll down and click Lockdown Mode, and then click Turn On.

“When Lockdown Mode is enabled, your device won’t function like it typically does,” Apple says. “To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.”

Lockdown Mode blocks most types of message attachments, blocks FaceTime calls from people you haven’t contacted in the past 30 days, restricts the kinds of browser technologies that websites can use, limits photo sharing, and imposes other restrictions. Users can exclude specific apps and websites they trust from these restrictions, however.

Trump admin is “destroying medical research,” Senate report finds

Senators also pressed the director on the future of the NIH, noting that it has been hamstrung by the ongoing chaos, putting upcoming grant funding at risk, too. Of the NIH’s 27 institutes and centers, Bhattacharya testified, “I think it’s 15” that are without a director. Sen. Patty Murray (D-Wash.), meanwhile, noted that more than half of the institutes are on track to lose all their voting advisory committee members by the end of the year—and grants cannot be approved without sign-off from these committees. Bhattacharya responded that they’re working on it.

Weasely answers on vaccines

In the course of the hearing, senators also tried to assess Bhattacharya’s loyalty to Kennedy’s dangerous anti-vaccine ideology, which includes the false and thoroughly debunked claim that vaccines cause autism.

Sanders asked Bhattacharya directly: “Do vaccines cause autism? Yes/no?”

“I do not believe that the measles vaccine causes autism,” Bhattacharya responded.

“No, uh-uh,” Sanders quickly interjected. “I didn’t ask [about] measles. Do vaccines cause autism?”

“I have not seen a study that suggests any single vaccine causes autism,” Bhattacharya responded.

But this, too, is an evasive answer. Note that he said “any single vaccine,” leaving open the possibility that he believes vaccines collectively or in some combination could cause autism. The measles vaccine, for instance, is given in combination with immunizations against mumps, rubella, and sometimes varicella (chickenpox).

It would also be false to suggest vaccines in combination are linked to autism; numerous studies have found no link between autism and vaccination generally. Still, this is a false idea that Kennedy and the like-minded anti-vaccine advocates he has installed into critical federal vaccine advisory roles are now pursuing.

Later in the hearing, Bhattacharya also indicated that when he said “I have not seen a study,” he was suggesting that it was because such studies have not been done—which is also false; routine childhood vaccines have been extensively studied for safety and efficacy.

“I’ve seen so many studies on measles vaccines and autism that established that there is no link [to autism],” he said in an exchange with Hassan on the subject. “The other vaccines are less well studied.”

User blowback convinces Adobe to keep supporting 30-year-old 2D animation app

30 years of animation

Animate debuted in 1996 as FutureWave Software’s FutureSplash Animator. After a 1997 acquisition by Macromedia, FutureSplash Animator became Macromedia Flash. In 2005, Adobe bought Macromedia and renamed Macromedia Flash to Adobe Flash Professional. In 2015, the software became Adobe Animate CC. Over its nearly 30-year history, Animate has been used in numerous popular animated films and shows, including Star Trek: Lower Decks. Still, Adobe said on Monday that “new platforms and paradigms have emerged that better serve the needs of the user.”

Based on the response to Monday’s announcement, not everyone agrees that Animate is obsolete. Adobe’s announcement has also drawn increased scrutiny because of the company’s growing focus on AI-based tools, which have led to higher subscription fees.

“Shutting down Animate and cutting off users from decades worth of work, while simultaneously focusing on anti-artist AI technology, is incredibly disrespectful to your users. Make the software open-source if you’re not going to do the work yourself,” a user on Adobe’s forum going by “FFFlay” wrote in response to Monday’s announcement.

Although Adobe has shown an ability to respond to customer frustration and will allow people to use Animate for the foreseeable future, people who depend on the software, including for animation and education, are concerned about relying on a program that Adobe almost discontinued.

In a post today, an Adobe community member going by the username rayek.elfin wrote, “The damage is done in my opinion. The news of Adobe discontinuing Animate went viral and probably created so much anxiety and uncertainty that studios and indie animators are already looking to replace Animate in their pipelines.”

When asked how Adobe will try to rebuild trust among users, Chambers said, “Trust doesn’t come beforehand, it comes after (and has to be earned). We say what we will do, and if we consistently do it, we gain trust. We are at the ‘we say what we will do’ part for a lot of people.”

NASA finally acknowledges the elephant in the room with the SLS rocket


“You know, you’re right, the flight rate—three years is a long time.”

The Artemis II mission is not going to the Moon this month. Credit: NASA

The Space Launch System rocket program is now a decade and a half old, and it continues to be dominated by two unfortunate traits: It is expensive, and it is slow.

The massive rocket and its convoluted ground systems, so necessary to baby and cajole the booster’s prickly hydrogen propellant on board, have cost US taxpayers in excess of $30 billion to date. And even as it reaches maturity, the rocket is going nowhere fast.

You remember the last time NASA tried to launch the world’s largest orange rocket, right? The space agency rolled the Space Launch System out of its hangar in March 2022. The first, second, and third attempts at a wet dress rehearsal—elaborate fueling tests—were scrubbed. The SLS rocket was slowly rolled back to its hangar for work in April before returning to the pad in June.

The fourth fueling test also ended early, but this time it got to within 29 seconds of engine ignition. That was still short of the planned T-9.3 seconds, a previously established gate for committing to launch, but mission managers had evidently had enough of failed fueling tests, and they proceeded into final launch preparations.

The first launch attempt (effectively the fifth wet-dress test), in late August, was scrubbed due to hydrogen leaks and other problems. A second attempt, a week later, also succumbed to hydrogen leaks. Finally, on the next attempt, and seventh overall try at fully fueling and nursing this vehicle through a countdown, the Space Launch System rocket actually took off. After doing so, it flew splendidly.

That was November 16, 2022. More than three years ago. You might think that over the course of the extended interval since then, and after the excruciating pain of spending nearly an entire year conducting fueling tests to try to lift the massive rocket off the pad, some of the smartest engineers in the world, the fine men and women at NASA, would have dug into and solved the leak issues.

You would be wrong.

The second go-round also does not unfold smoothly

On Monday, NASA attempted its first wet-dress test with the new vehicle, the SLS rocket for the Artemis II mission, which was rolled out to the pad in January. At one of the main interfaces where liquid hydrogen enters the vehicle, a leak developed, not dissimilar to the problems that occurred with the Artemis I rocket three years ago.

NASA has developed several ploys to mitigate the leak. These include varying the rate of hydrogen, which is very cold, flowing into the vehicle. At times they also stopped this flow, hoping the seals at the interface between the ground equipment and the rocket would warm up and “re-seat,” thereby halting the leaks. It worked—sort of. After several hours of troubleshooting, the vehicle was fully loaded. Finally, running about four hours late on their timeline, the dogged countdown team at Kennedy Space Center pushed toward the last stages of the countdown.

However, at this critical time, the liquid hydrogen leak rate spiked once again. This led to an automatic abort of the test a little before T-5 minutes. And so ended NASA’s hopes of launching the much-anticipated Artemis II mission, sending four astronauts around the Moon, in February. NASA will now attempt to launch the vehicle no earlier than March following more wet-dress attempts in the interim.

In a news conference on Tuesday afternoon, NASA officials were asked why they had not solved a problem that was so nettlesome during the Artemis I launch campaign.

“After Artemis I, with the challenges we had with the leaks, we took a pretty aggressive approach to do some component-level testing with some of these valves and the seals, and try to understand their behavior,” said John Honeycutt, chair of the Artemis II Mission Management Team. “And so we got a good handle on that relative to how we install the flight-side and the ground-side interface. But on the ground, we’re pretty limited in how much realism we can put into the test. We try to test like we fly, but this interface is a very complex interface. When you’re dealing with hydrogen, it’s a small molecule. It’s highly energetic. We like it for that reason. And we do the best we can.”

If NASA were really going to do the best it could with this rocket, there were options in the last three years. It is common in commercial rocketry to build one or more “test” tanks to both stress the hardware and ensure its compatibility with ground systems through an extensive test campaign. However, SLS hardware is extraordinarily expensive. A single rocket costs in excess of $2 billion, so the program is hardware-poor. Moreover, tanking tests might have damaged the launch tower, which itself cost more than $1 billion. As far as I know, there was never any serious discussion of building a test tank.

Hardware scarcity, due to cost, is but one of several problems with the SLS rocket architecture. Probably the biggest one is its extremely low flight rate, which makes every fueling and launch opportunity an experimental rather than operational procedure. This has been pointed out to NASA, and the rocket’s benefactors in Congress, for more than a decade. A rocket that is so expensive it only flies rarely will have super-high operating costs and ever-present safety concerns precisely because it flies so infrequently.

Acknowledging the low flight rate issue

Until this week, NASA had largely ignored these concerns, at least in public. However, in a stunning admission, NASA’s new administrator, Jared Isaacman, acknowledged the flight-rate issue after Monday’s wet-dress rehearsal test failed to reach a successful conclusion. “The flight rate is the lowest of any NASA-designed vehicle, and that should be a topic of discussion,” he said as part of a longer post about the test on social media.

The reality, which Isaacman knows full well, and which almost everyone else in the industry recognizes, is that the SLS rocket is dead hardware walking. The Trump administration would like to fly the rocket just two more times, culminating in the Artemis III human landing on the Moon. Congress has passed legislation mandating a fourth and fifth launch of the SLS vehicle.

However, one gets the sense that this battle is not yet fully formed, and the outcome will depend on hiccups like Monday’s aborted test; the ongoing performance of the rocket in flight; and how quickly SpaceX’s Starship and Blue Origin’s New Glenn vehicle make advancements toward reliability. Both of these private rockets are moving at light speed relative to NASA’s Slow Launch System.

During the news conference, I asked about this low flight rate and the challenge of managing a complex rocket that will never be anything but an experimental system. The answer from NASA’s top civil servant, Amit Kshatriya, was eye-opening.

“You know, you’re right, the flight rate—three years is a long time between the first and second,” NASA’s associate administrator said. “It is going to be experimental, because of going to the Moon in this configuration, with the energies we’re dealing with. And every time we do it these are very bespoke components, they’re in many cases made by incredible craftsmen. … It’s the first time this particular machine has borne witness to cryogens, and how it breathes, and how it vents, and how it wants to leak is something we have to characterize. And so every time we do it, we’re going to have to do that separately.”

So there you have it. Every SLS rocket is a work of art, every launch campaign an adventure, every mission subject to excessive delays. It’s definitely not ideal.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

So yeah, I vibe-coded a log colorizer—and I feel good about it


Some semi-unhinged musings on where LLMs fit into my life—and how I’ll keep using them.

Altered image of the article author appearing to indicate that he is in fact a robot

Welcome to the future. Man, machine, the future. Credit: Aurich Lawson

I can’t code.

I know, I know—these days, that sounds like an excuse. Anyone can code, right?! Grab some tutorials, maybe an O’Reilly book, download an example project, and jump in. It’s just a matter of learning how to break your project into small steps that you can make the computer do, then memorizing a bit of syntax. Nothing about that is hard!

Perhaps you can sense my sarcasm (and sympathize with my lack of time to learn one more technical skill).

Oh, sure, I can “code.” That is, I can flail my way through a block of (relatively simple) pseudocode and follow the flow. I have a reasonably technical layperson’s understanding of conditionals and loops, and of when one might use a variable versus a constant. On a good day, I could probably even tell you what a “pointer” is.

But pulling all that knowledge together and synthesizing a working application any more complex than “hello world”? I am not that guy. And at this point, I’ve lost the neuroplasticity and the motivation (if I ever had either) to become that guy.

Thanks to AI, though, what has been true for my whole life need not be true anymore. Perhaps, like my colleague Benj Edwards, I can whistle up an LLM or two and tackle the creaky pile of “it’d be neat if I had a program that would do X” projects without being publicly excoriated on StackOverflow by apex predator geeks for daring to sully their holy temple of knowledge with my dirty, stupid, off-topic, already-answered questions.

So I gave it a shot.

A cache-related problem appears

My project is a small Python-based log colorizer that I asked Claude Code to construct for me. If you’d like to peek at the code before listening to me babble, a version of the project without some of the Lee-specific customizations is available on GitHub.

Screenshot of Lee's log colorizer in action

My Nginx log colorizer in action, showing Space City Weather traffic on a typical Wednesday afternoon. Here, I’m running two instances, one for IPv4 visitors and one for IPv6. (By default, all traffic is displayed, but splitting it this way makes things easier for my aging eyes to scan.)

Credit: Lee Hutchinson

Why a log colorizer? Two reasons. First, and most important to me, because I needed to look through a big ol’ pile of web server logs, and off-the-shelf colorizer solutions weren’t customizable to the degree I wanted. Vibe-coding one that exactly matched my needs made me happy.

But second, and almost equally important, is that this was a small project. The colorizer ended up being a 400-ish line, single-file Python script. The entire codebase, plus the prompting and follow-up instructions, fit easily within Claude Code’s context window. And because it doesn’t sprawl across dozens or hundreds of functions in multiple files, it’s easy to audit (even for me).

Setting the stage: I do the web hosting for my colleague Eric Berger’s Houston-area forecasting site, Space City Weather. It’s a self-hosted WordPress site, running on an AWS EC2 t3a.large instance, fronted by Cloudflare using CF’s WordPress Automatic Platform Optimization.

Space City Weather also uses self-hosted Discourse for commenting, replacing WordPress’ native comments at the bottom of Eric’s daily weather posts via the WP-Discourse plugin. Since bolting Discourse onto the site back in August 2025, though, I’ve had an intermittent issue where sometimes—but not all the time—a daily forecast post would go live and get cached by Cloudflare with the old, disabled native WordPress comment area attached to the bottom instead of the shiny new Discourse comment area. Hundreds of visitors would then see a version of the post without a functional comment system until I manually expired the stale page or until the page hit Cloudflare’s APO-enforced max age and expired itself.

The problem behavior would lie dormant for weeks or months, and then we’d get a string of back-to-back days where it would rear its ugly head. Edge cache invalidation on new posts is supposed to be triggered automatically by the official Cloudflare WordPress plug-in, and indeed, it usually worked fine—but “usually” is not “always.”

In the absence of any obvious clues as to why this was happening, I consulted a few different LLMs and asked for possible fixes. The solution I settled on was having one of them author a small mu-plugin in PHP (more vibe coding!) that forces WordPress to slap “DO NOT CACHE ME!” headers on post pages until it has verified that Discourse has hooked its comments to the post. (Curious readers can put eyes on this plugin right here.)
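The actual mu-plugin is written in PHP (and linked above), but the decision it makes is simple enough to sketch in Python. The function name and header values here are illustrative, not the plugin’s own:

```python
def cache_headers(discourse_thread_attached: bool) -> dict:
    """Return HTTP response headers for a post page.

    Until the Discourse comment thread has been attached to the post,
    tell every cache in the path not to store the page, so the edge
    cannot pin a stale, comment-less version of it.
    """
    if discourse_thread_attached:
        # Safe to let the edge cache the finished page.
        return {"Cache-Control": "public, max-age=3600"}
    # Not ready yet: forbid caching entirely.
    return {"Cache-Control": "no-store, no-cache, must-revalidate"}
```

Once the real plugin verifies that Discourse has hooked its comments to the post, normal caching resumes and Cloudflare’s APO takes over again.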

This “solved” the problem by preempting the problem behavior, but it did nothing to help me identify or fix the actual underlying issue. I turned my attention elsewhere for a few months. One day in December, as I was updating things, I decided to temporarily disable the mu-plugin to see if I still needed it. After all, problems sometimes go away on their own, right? Computers are crazy!

Alas, the next time Eric made a Space City Weather post, it popped up sans Discourse comment section, with the (ostensibly disabled) WordPress comment form at the bottom. Clearly, the problem behavior was still in play.

Interminable intermittence

Have you ever been stuck troubleshooting an intermittent issue? Something doesn’t work, you make a change, it suddenly starts working, then despite making no further changes, it randomly breaks again.

The process makes you question basic assumptions, like, “Do I actually know how to use a computer?” You feel like you might be actually-for-real losing your mind. The final stage of this process is the all-consuming death spiral, where you start asking stuff like, “Do I need to troubleshoot my troubleshooting methods? Is my server even working? Is the simulation we’re all living in finally breaking down and reality itself is toying with me?!”

In this case, I couldn’t reproduce the problem behavior on demand, no matter how many tests I tried. I couldn’t see any narrow, definable commonalities between days where things worked fine and days where things broke.

Rather than an image, I invite you at this point to enjoy Muse’s thematically appropriate song “Madness” from their 2012 concept album The 2nd Law.

My best hope for getting a handle on the problem likely lay deeply buried in the server’s logs. Like any good sysadmin, I gave the logs a quick once-over for problems a couple of times per month, but Space City Weather is a reasonably busy medium-sized site and dishes out its daily forecast to between 20,000 and 30,000 people (“unique visitors” in web parlance, or “UVs” if you want to sound cool). Even with Cloudflare taking the brunt of the traffic, the daily web server log files are, let us say, “a bit dense.” My surface-level glances weren’t doing the trick—I’d have to actually dig in. And having been down this road before for other issues, I knew I needed more help than grep alone could provide.

The vibe use case

The Space City Weather web server uses Nginx for actual web serving. For folks who have never had the pleasure, Nginx, as configured in most of its distributable packages, keeps a pair of log files around—one that shows every request serviced and another just for errors.

I wanted to watch the access log right when Eric was posting to see if anything obviously dumb/bad/wrong/broken was happening. But I’m not super-great at staring at a giant wall of text and symbols, and I tend to lean heavily on syntax highlighting and colorization to pick out the important bits when I’m searching through log files. There’s an old and crusty program called ccze that’s easily findable in most repos; I’ve used it forever, and if its default output does what you need, then it’s an excellent tool.

But customizing ccze’s output is a “here be dragons”-type task. The application is old, and time has ossified it into something like an unapproachably evil Mayan relic, filled with shadowy regexes and dark magic, fit to be worshipped from afar but not trifled with. Altering ccze’s behavior threatens to become an effort-swallowing bottomless pit, where you spend more time screwing around with the tool and the regexes than you actually spend using the tool to diagnose your original problem.

It was time to fire up VSCode and pretend to be a developer. I set up a new project, performed the demonic invocation to summon Claude Code, flipped the thing into “plan mode,” and began.

“I’d like to see about creating an Nginx log colorizer,” I wrote in the prompt box. “I don’t know what language we should use. I would like to prioritize efficiency and performance in the code, as I will be running this live in production and I can’t have it adding any applicable load.” I dropped a truncated, IP-address-sanitized copy of yesterday’s Nginx access.log into the project directory.

“See the access.log file in the project directory as an example of the data we’ll be colorizing. You can test using that file,” I wrote.

Screenshot of Lee's Visual Studio Code window showing the log colorizer project

Visual Studio Code, with agentic LLM integration, making with the vibe-coding.

Credit: Lee Hutchinson

Visual Studio Code, with agentic LLM integration, making with the vibe-coding. Credit: Lee Hutchinson

Ever helpful, Claude Code chewed on the prompt and the example data for a few seconds, then began spitting output. It suggested Python for our log colorizer because of the language’s mature regex support—and to keep the code somewhat readable for poor, dumb me. The actual “vibe-coding” wound up spanning two sessions over two days, as I exhausted my Claude Code credits on the first one (a definite vibe-coding danger!) and had to wait for things to reset.

“Dude, lnav and Splunk exist, what is wrong with you?”

Yes, yes, a log colorizer is bougie and lame, and I’m treading over exceedingly well-trodden ground. I did, in fact, sit for a bit with existing tools—particularly lnav, which does most of what I want. But I didn’t want most of my requirements met. I wanted all of them. I wanted a bespoke tool, and I wanted it without having to pay the “is it worth the time?” penalty. (Or, perhaps, I wanted to feel like the LLM’s time was being wasted rather than mine, given that the effort ultimately took two days of vibe-coding.)

And about those two days: Getting a basic colorizer coded and working took maybe 10 minutes and perhaps two rounds of prompts. It was super-easy. Where I burned the majority of the time and compute power was in tweaking the initial result to be exactly what I wanted.

For therein lies the truly seductive part of vibe-coding—the ease of asking the LLM to make small changes or improvements and the apparent absence of cost or consequence for implementing those changes. The impression is that you’re on the Enterprise-D, chatting with the ship’s computer, collaboratively solving a problem with Geordi and Data standing right behind you. It’s downright intoxicating to say, “Hm, yes, now let’s make it so I can show only IPv4 or IPv6 clients with a command line switch,” and the machine does it. (It’s even cooler if you make the request while swinging your leg over the back of a chair so you can sit in it Riker-style!)

Screenshot showing different LLM instructions given by Lee to Claude Code

A sample of the various things I told the machine to do, along with a small visual indication of how this all made me feel.

Credit: Lucasfilm / Disney

A sample of the various things I told the machine to do, along with a small visual indication of how this all made me feel. Credit: Lucasfilm / Disney

It’s exhilarating, honestly, in an Emperor Palpatine “UNLIMITED POWERRRRR!” kind of way. It removes a barrier that I didn’t think would ever be removed—or, rather, one I thought I would never have the time, motivation, or ability to tear down myself.

In the end, after a couple of days of testing and iteration (including a couple of “Is this colorizer performant, and will it introduce system load if run in production?” back-and-forth exchanges where the LLM reduced the cost of our regex matching and made sure our main loop stayed light), I got a tool that does exactly what I want.

Specifically, I now have a log colorizer that:

  • Handles multiple Nginx (and Apache) log file formats
  • Colorizes things using 256-color ANSI codes that look roughly the same in different terminal applications
  • Organizes hostnames and IP addresses in fixed-length columns for easy scanning
  • Colorizes HTTP status codes and cache status (with configurable colors)
  • Applies different colors to the request URI depending on the resource being requested
  • Has specific warning colors and formatting to highlight non-HTTPS requests or other odd things
  • Can apply alternate colors for specific IP addresses (so I can easily pick out Eric’s or my requests)
  • Can constrain output to only show IPv4 or IPv6 hosts

…and, worth repeating, it all looks exactly how I want it to look and behaves exactly how I want it to behave. Here’s another action shot!
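For the curious, the 256-color ANSI codes mentioned in the list above are just escape sequences wrapped around the text. A tiny Python illustration of status-code coloring (the palette choices here are mine, not the tool’s):

```python
RESET = "\x1b[0m"

def color256(n: int) -> str:
    """ANSI escape selecting foreground color n from the 256-color palette."""
    return f"\x1b[38;5;{n}m"

# Assumed palette: green for 2xx, cyan for 3xx, yellow for 4xx, red for 5xx.
STATUS_COLORS = {2: 40, 3: 51, 4: 220, 5: 196}

def colorize_status(status: str) -> str:
    """Wrap an HTTP status code in a color escape based on its class."""
    color = STATUS_COLORS.get(int(status) // 100)
    if color is None:
        return status  # unknown class: leave it uncolored
    return f"{color256(color)}{status}{RESET}"

print(colorize_status("404"))  # "404" wrapped in the yellow escape
```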

Image of the log colorizer working

The final product. She may not look like much, but she’s got it where it counts, kid.

Credit: Lee Hutchinson

Problem spotted

Armed with my handy-dandy log colorizer, I patiently waited for the wrong-comment-area problem behavior to re-rear its still-ugly head. I did not have to wait long, and within a couple of days, I had my root cause. It had been there all along, if I’d only decided to spend some time looking for it. Here it is:

Screenshot showing a race condition between apple news and wordpress's cache clearing efforts

Problem spotted. Note the AppleNewsBots hitting the newly published post before Discourse can do its thing and the final version of the page with comments is ready.

Credit: Lee Hutchinson


Briefly: The problem is Apple’s fault. (Well, not really. But kinda.)

Less briefly: I’ve blurred out Eric’s IP address, but it’s dark green, so any place in the above image where you see a blurry, dark green smudge, that’s Eric. In the roughly 12-ish seconds presented here, you’re seeing Eric press the “publish” button on his daily forecast—that’s the “POST” event at the very top of the window. The subsequent events from Eric’s IP address are his browser having the standard post-publication conversation with WordPress so it can display the “post published successfully” notification and then redraw the WP block editor.

Below Eric’s post, you can see the Discourse server (with orange IP address) notifying WordPress that it has created a new Discourse comment thread for Eric’s post, then grabbing the things it needs to mirror Eric’s post as the opener for that thread. You can see it does GETs for the actual post and also for the post’s embedded images. About one second after Eric hits “publish,” the new post’s Discourse thread is ready, and it gets attached to Eric’s post.

Ah, but notice what else happens during that one second.

To help expand Space City Weather’s reach, we cross-publish all of the site’s posts to Apple News, using a popular Apple News plug-in (the same one Ars uses, in fact). And right there, with those two GET requests immediately after Eric’s POST request, lay the problem: You’re seeing the vanguard of Apple News’ hungry army of story-retrieval bots, summoned by the same “publish” event, charging in and demanding a copy of the brand new post before Discourse has a chance to do its thing.

Gif of Eric Andre screaming

I showed the AppleNewsBot stampede log snippet to Techmaster Jason Marlin, and he responded with this gif.

Credit: Adult Swim


It was a classic problem in computing: a race condition. Most days, Discourse’s new thread creation would beat the AppleNewsBot rush; some days, though, it wouldn’t. On the days when it didn’t, the horde of Apple bots would demand the page before its Discourse comments were attached, and Cloudflare would happily cache what those bots got served.

I knew my fix of emitting “NO CACHE” headers on the story pages prior to Discourse attaching comments worked, but now I knew why it worked—and why the problem existed in the first place. And oh, dear reader, is there anything quite so viscerally satisfying in all the world as figuring out the “why” behind a long-running problem?
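The shape of that fix is simple enough to sketch. The real version lives in WordPress/PHP, so this Python rendering is purely illustrative, and the `Post` type and header values are my assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    # Hypothetical stand-in for a WordPress post record
    discourse_thread_id: Optional[int] = None

def cache_headers(post: Post) -> dict:
    """Decide what caching headers a rendered story page should carry."""
    if post.discourse_thread_id is None:
        # Discourse hasn't attached its comment thread yet, so any copy an
        # AppleNewsBot fetches right now is incomplete. Tell Cloudflare not
        # to cache this render.
        return {"Cache-Control": "no-cache, no-store, must-revalidate"}
    # Comments are attached; the page is safe to cache normally.
    return {"Cache-Control": "public, max-age=300"}
```

In other words: until the page reaches its final, comments-attached state, nothing upstream is allowed to pin a copy of it in cache, which is exactly why the race stops mattering.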

But then, just as Icarus became so entranced by the miracle of flight that he lost his common sense, I too forgot I soared on wax-wrought wings, and flew too close to the sun.

LLMs are not the Enterprise-D’s computer

I think we all knew I’d get here eventually—to the inevitable third act turn, where the center cannot hold, and things fall apart. If you read Benj’s latest experience with agentic-based vibe coding—or if you’ve tried it yourself—then what I’m about to say will probably sound painfully obvious, but it is nonetheless time to say it.

Despite their capabilities, LLM coding agents are not smart. They also are not dumb. They are agents without agency—mindless engines whose purpose is to complete the prompt, and that is all.

Screenshot of Data, Geordi, and Riker collaboratively coding at one of the bridge's aft science stations

It feels like this… until it doesn’t.

Credit: Paramount Television


What this means is that, if you let them, Claude Code (and OpenAI Codex and all the other agentic coding LLMs) will happily spin their wheels for hours hammering on a solution that can’t ever actually work, so long as their efforts match the prompt. It’s on you to accurately scope your problem. You must articulate what you want in plain and specific domain-appropriate language, because the LLM cannot and will not properly intuit anything you leave unsaid. And having done that, you must then spot and redirect the LLM away from traps and dead ends. Otherwise, it will guess at what you want based on the alignment of a bunch of n-dimensional curves and vectors in high-order phase space, and it might guess right—but it also very much might not.

Lee loses the plot

So I had my log colorizer, and I’d found my problem. I’d also found, after leaving the colorizer up in a window tailing the web server logs in real time, all kinds of things that my previous habit of occasionally glancing at the logs wasn’t revealing. Ooh, look, there’s a REST route that should probably be blocked from the outside world! Ooh, look, there’s a web crawler I need to feed into Cloudflare’s WAF wood-chipper because it’s ignoring robots.txt! Ooh, look, here’s an area where I can tweak my fastcgi cache settings and eke out a slightly better hit rate!

But here’s the thing with the joy of problem-solving: Like all joy, its source is finite. The joy comes from the solving itself, and even when all my problems are solved and the systems are all working great, I still crave more joy. It is in my nature to therefore invent new problems to solve.

I decided that the problem I wanted to solve next was figuring out a way for my log colorizer to display its output without wrapping long lines—because wrapped lines throw off the neatly delimited columns of log data. I would instead prefer that my terminal window sprout a horizontal scroll bar when needed, and if I wanted to see the full extent of a long line, I could grab the scroll bar and investigate.

Astute readers will at this point notice two things: first, that now I really was reinventing lnav, except way worse and way dumber. Second, and more importantly, that line-wrapping behavior is properly a function of the terminal application, not the data being displayed within it, and that my approach was therefore misguided from first principles. (This is in fact exactly the kind of request that can and should be slapped down on StackOverflow—and, indeed, searching there shows many examples of this exact thing happening.)

But the lure of telling the machine what to do and then watching the machine weave my words into functional magic was too strong—surely we could code our way out of this problem! With LLMs, we can code our way out of any problem! Right?

Eventually, after much refining of requirements, Claude produced what I asked it to produce: a separate Python script, which accepted piped input and created, like, a viewport or something—I don’t know, I can’t code, remember?—and within that viewport, I could scroll around. It seemed to work great!

Problems became apparent when I actually tried scrolling around for more than a short distance—and by “problems,” I mean “my web server’s CPU lit up like a Christmas tree on fire.”

“Issue,” I told the LLM. “When horizontal scrolling happens, the log wrapper takes up almost 100 percent of a host CPU core. Can we assess how to make this a zero-CPU-impact tool?”

“This tool has to parse ANSI, slice lines, and repaint when you scroll,” it replied in what I imagined was an arch, droll tone. “The bottleneck is screen redraw + full-width scan of all lines on every new line or key repeat. Each new line triggers a full screen redraw of term_height lines, plus visible_width() over the entire buffer. Holding the arrow key generates rapid repeats, each doing a full redraw and slice of every line.”
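To unpack that a little: the `visible_width()` it mentions has to strip ANSI escape sequences before measuring anything, which is why it can never be free. A minimal version (my sketch, assuming the script only emits standard SGR color codes, not the actual script's code) looks like this:

```python
import re

# Matches SGR color/style escape sequences like "\033[38;5;196m"
ANSI_RE = re.compile(r"\033\[[0-9;]*m")

def visible_width(line: str) -> int:
    # The width the terminal actually renders: escape codes occupy bytes
    # in the string but zero columns on screen, so strip them first.
    return len(ANSI_RE.sub("", line))
```

Run over every buffered line on every keypress, even a scan this cheap adds up fast, which is precisely the hot loop the LLM was describing.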

I was crestfallen.

“Zero-CPU isn’t achievable,” continued the LLM, “but there are low-impact approaches.”

My crest lifted a bit, and we dug in—but the LLM was perhaps a bit optimistic about those “low-impact approaches.” We burned several more days’ worth of tokens on performance improvements—none of which I had any realistic input on because at this point we were way, way past my ability to flail through the Python code and understand what the LLM was doing. Eventually, we hit a wall.

Screenshot of the LLM telling Lee that this is just not going to work

If you listen carefully, you can hear the sound of my expectations crashing hard into reality.


Instead of throwing in the towel, I vibed on, because the sunk cost fallacy is for other people. I instructed the LLM to shift directions and help me run the log display script locally, so my desktop machine with all its many cores and CPU cycles to spare would be the one shouldering the reflow/redraw burden and not the web server.

Rather than drag this tale on for any longer, I’ll simply enlist Ars Creative Director Aurich Lawson’s skills to present the story of how this worked out in the form of a fun collage, showing my increasingly unhinged prompting of the LLM to solve the new problems that appeared when trying to get a script to run on ssh output when key auth and sudo are in play:

A collage of error messages begetting madness

Mammas, don’t let your babies grow up to be vibe coders.

Credit: Aurich Lawson


The bitter end

So, thwarted in my attempts to do exactly what I wanted in exactly the way I wanted, I took my log colorizer and went home. (The failed log display script is also up on GitHub with the colorizer if anyone wants to point and laugh at my efforts. Is the code good? Who knows?! Not me!) I’d scored my big win and found my problem root cause, and that would have to be enough for me—for now, at least.

As to that “big win”—finally managing a root-cause analysis of my WordPress-Discourse-Cloudflare caching issue—I also recognize that I probably didn’t need a vibe-coded log colorizer to get there. The evidence was already waiting to be discovered in the Nginx logs, whether or not it was presented to me wrapped in fancy colors. Did I, in fact, use the thrill of vibe coding a tool to Tom Sawyer myself into doing the log searches? (“Wow, self, look at this new cool log colorizer! Bet you could use that to solve all kinds of problems! Yeah, self, you’re right! Let’s do it!”) Very probably. I know how to motivate myself, and sometimes starting a task requires some mental trickery.

This round of vibe coding and its muddled finale reinforced my personal assessment of LLMs—an assessment that hasn’t changed much with the addition of agentic abilities to the toolkit.

LLMs can be fantastic if you’re using them to do something that you mostly understand. If you’re familiar enough with a problem space to understand the common approaches used to solve it, and you know the subject area well enough to spot the inevitable LLM hallucinations and confabulations, and you understand the task at hand well enough to steer the LLM away from dead-ends and to stop it from re-inventing the wheel, and you have the means to confirm the LLM’s output, then these tools are, frankly, kind of amazing.

But the moment you step outside of your area of specialization and begin using them for tasks you don’t mostly understand, or if you’re not familiar enough with the problem to spot bad solutions, or if you can’t check its output, then oh, dear reader, may God have mercy on your soul. And on your poor project, because it’s going to be a mess.

These tools as they exist today can help you if you already have competence. They cannot give you that competence. At best, they can give you a dangerous illusion of mastery; at worst, well, who even knows? Lost data, leaked PII, wasted time, possible legal exposure if the project is big enough—the “worst” list goes on and on!

To vibe or not to vibe?

The log colorizer is neither the first nor the last bit of vibe coding I’ve indulged in. While I’m not as prolific as Benj, over the past couple of months, I’ve turned LLMs loose on a stack of coding tasks that needed doing but that I couldn’t do myself—often in direct contravention of my own advice above about using them only in areas where you already have some competence. I’ve had them make small WordPress PHP plugins, regexes, bash scripts, and my current crowning achievement: a save editor for an old MS-DOS game (in both Python and Swift, no less!). And I had fun doing these things, even as entire vast swaths of rainforest were lit on fire to power my agentic adventures.

As someone employed in a creative field, I’m appropriately nervous about LLMs, but for me, it’s time to face reality. An overwhelming majority of developers say they’re using AI tools in some capacity. It’s a safer career move at this point, almost regardless of one’s field, to be more familiar with them than unfamiliar with them. The genie is not going back into the lamp—it’s too busy granting wishes.

I don’t want y’all to think I feel doomy-gloomy over the genie, either, because I’m right there with everyone else, shouting my wishes at the damn thing. I am a better sysadmin than I was before agentic coding because now I can solve problems myself that I would have previously needed to hand off to someone else. Despite the problems, there is real value there, both personally and professionally. In fact, using an agentic LLM to solve a tightly constrained programming problem that I couldn’t otherwise solve is genuinely fun.

And when screwing around with computers stops being fun, that’s when I’ll know I’ve truly become old.

Photo of Lee Hutchinson

Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.

So yeah, I vibe-coded a log colorizer—and I feel good about it


Newborn dies after mother drinks raw milk during pregnancy

A newborn baby has died in New Mexico from a Listeria infection that state health officials say was likely contracted from raw (unpasteurized) milk that the baby’s mother drank during pregnancy.

In a news release Tuesday, officials warned people not to consume any raw dairy, highlighting that it can be teeming with a variety of pathogens. Those germs are especially dangerous to pregnant women, as well as young children, the elderly, and people with weakened immune systems.

“Raw milk can contain numerous disease-causing germs, including Listeria, which is bacteria that can cause miscarriage, stillbirth, preterm birth, or fatal infection in newborns, even if the mother is only mildly ill,” the New Mexico Department of Health said in the press release.

The health department noted that it could not definitively link the baby’s death to the raw milk the mother drank. But raw milk is notorious for transmitting Listeria monocytogenes bacterium. The Food and Drug Administration has a “Food Safety for Moms-to-Be” webpage about Listeria, in which it poses the question and answer: “How could I get listeriosis? You can get listeriosis by eating raw, unpasteurized milk and unpasteurized milk products… .”

Listeria is a particular danger during pregnancy. When exposed, pregnant people are 10 times more likely to develop a Listeria infection than other healthy adults because altered immune responses during pregnancy make it harder to fight off infections. Further, Listeria is one of a few pathogens that are able to cross the placental barrier and infect a developing fetus.



A cup of coffee for depression treatment has better results than microdosing


The effects of microdosing have been overstated, at least when it comes to depression.

About a decade ago, many media outlets—including WIRED—zeroed in on a weird trend at the intersection of mental health, drug science, and Silicon Valley biohacking: microdosing, or the practice of taking a small amount of a psychedelic drug seeking not full-blown hallucinatory revels but gentler, more stable effects. Typically using psilocybin mushrooms or LSD, the archetypal microdoser sought not melting walls and open-eye kaleidoscopic visuals but boosts in mood and energy, like a gentle spring breeze blowing through the mind.

Anecdotal reports pitched microdosing as a kind of psychedelic Swiss Army knife, providing everything from increased focus to a spiked libido and (perhaps most promisingly) lowered reported levels of depression. It was a miracle for many. Others remained wary. Could 5 percent of a dose of acid really do all that? A new, wide-ranging study by an Australian biopharma company suggests that microdosing’s benefits may indeed be drastically overstated—at least when it comes to addressing symptoms of clinical depression.

A Phase 2B trial of 89 adult patients conducted by Melbourne-based MindBio Therapeutics, investigating the effects of microdosing LSD in the treatment of major depressive disorder, found that the psychedelic was actually outperformed by a placebo. Across an eight-week period, symptoms were gauged using the Montgomery-Åsberg Depression Rating Scale (MADRS), a widely recognized tool for the clinical evaluation of depression.

The study has not yet been published. But MindBio’s CEO Justin Hanka recently released the top-line results on his LinkedIn, eager to show that his company was “in front of the curve in microdosing research.” He called it “the most vigorous placebo controlled trial ever performed in microdosing.” It found that patients dosed with a small amount of LSD (ranging from 4 to 20μg, or micrograms, well below the threshold of a mind-blowing hallucinogenic dose) showed observable upticks in feelings of well-being, but worse MADRS scores, compared to patients given a placebo in the form of a caffeine pill. (Because patients in psychedelic trials typically expect some kind of mind-altering effect, studies are often blinded using so-called “active placebos,” like caffeine or methylphenidate, which have their own observable psychoactive properties.)

This means, essentially, that a medium-strength cup of coffee may prove more beneficial in treating major depressive disorder than a tiny dose of acid. Good news for habitual caffeine users, perhaps, but less so for researchers (and biopharma startups) counting on the efficacy of psychedelic microdosing.

“It’s probably a nail in the coffin of using microdosing to treat clinical depression,” Hanka says. “It probably improves the way depressed people feel—just not enough to be clinically significant or statistically meaningful.”

Dispiriting as they may be, these results conform to the suspicions of some more skeptical researchers, who have long believed that the benefits of microdosing are less the result of a teeny-tiny psychedelic catalyst and more attributable to the so-called “placebo effect.”

In 2020, Jay A. Olson, then a PhD candidate in the Department of Psychiatry at McGill University in Montreal, Canada, conducted an experiment. He gave 33 participants a placebo, telling them it was actually a dose of a psilocybin-like drug. They were led to believe there was no placebo group. Other researchers who were in on the bit acted out the effects of the drug, in a room treated with trippy lighting and other visual stimulants, in an attempt to curate the “optimized expectation” of a psychedelic experience.

The resulting paper, titled “Tripping on Nothing,” found that a majority of participants had reported feeling the effects of the drug—despite there being no real drug whatsoever. “The main conclusion we had is that the placebo effect can be stronger than expected in psychedelic studies,” Olson, now a postdoctoral fellow at the University of Toronto, tells WIRED. “Placebo effects were stronger than what you would get from microdosing.”

More than a stick in the eye to the microdosing faithful, Olson maintains that the study’s key findings had more to do with the actual role, and power, of the placebo effect. “The public has a lot of misconceptions about the placebo effect,” he says. “There’s this assumption that placebo effects are extremely weak, or that they’re not real.”

Olson goes on to say that placebo effects in psychedelic trials can be further juiced by the hype around the drugs themselves. Patients may enter a trial expecting a certain experience, and their mind is able to conjure a version of that experience, in turn. In Olson’s study, it wasn’t a matter of microdosing effects not being real, but that those effects may be caused by environment, or patient expectation. As he puts it: “It can be true at the same time that microdosing can have positive effects on people, and that those effects are perhaps almost entirely placebo.”

This itself raises a sticky question about MindBio’s study. How could a placebo group, who thinks they’re taking LSD, perform better than an active control group, members of which both think they’re taking LSD and are actually taking it? The answer comes from the design of the study itself.

Using what’s called a “double-dummy” design, MindBio’s researchers informed patients that they’d receive LSD, a caffeine pill, or a dose of methylphenidate, better known as Ritalin or Concerta. (No patients were actually administered the methylphenidate.) This means that patient expectation was lowered, as they could ascribe any perceived effects to the LSD or to either of the active placebos. Patients taking LSD microdoses may well have believed they were merely on a stimulant. All patients followed an adaptation of the “Fadiman protocol,” a popular microdosing program that has patients take a small dose of the given drug once every three days.

Jim Fadiman, the veteran psychedelic researcher after whom the protocol is named, rejects MindBio’s conclusions, and trial design, out of hand. Because, Fadiman believes, patients were given the active caffeine placebo, their reported benefits may well be attributable not to a pure placebo effect, but to the actual psychoactive properties of that drug.

“Double-dummy is a remarkably apt term,” Fadiman, 86, sneers. “What I know is that if you take enough caffeine, you will not be depressed!”

Fadiman points to MindBio’s earlier, Phase 2A study, recently published in the journal Neuropharmacology, which drew markedly different conclusions. It was a non-blinded, so-called “open label” study, meaning patients knew definitely that they were being microdosed with LSD. This study found that MADRS scores decreased by 59.5 percent, with effects lasting as long as six months. It also found improvements in stress, rumination, anxiety, and patient quality of life. Fadiman says that this reportage is more consistent with his own research on microdosing. “Their prior study did wonderfully with LSD,” Fadiman says. “I have collected literally hundreds of real world reports over the years that validate those findings.”

MindBio’s Hanka stands by the science. “We are bewildered at the significant difference between the open label Phase 2A trial results and the Phase 2B trial results,” he says. “But that is the nature of good science—a properly controlled trial will get a proper result. Our Phase 2B trial was of the highest standard, a triple-blind, double-dummy, active placebo controlled trial. I haven’t seen another psychedelic trial that has gone to these lengths to control and blind a trial.”

Despite these findings, some microdosing true believers don’t seem especially shaken. In 2017, writer Ayelet Waldman (best known as the author of the Mommy-Track Mysteries series of novels that follow the adventures of stay-at-home-mom-cum-sleuth Juliet Applebaum) published A Really Good Day, a diaristic account of her own self-experiments using microdosing to treat an intractable mood disorder. She tells WIRED she’s not especially bothered by the implication that her positive shifts in mood may have merely been placebo. “In my book I took very seriously the possibility that what I was experiencing was the mother of all placebo effects,” Waldman says. “I wrote about this a number of times in various chapters and decided in the end it didn’t matter. What mattered was that I felt better.”

Perhaps that’s true enough. If the effects are measurable, and repeatable, then it should hardly matter if they’re attributable to a sub-perceptual dose of lysergic acid, or to the (perhaps equally profound) mysteries of the placebo. Still, one cannot help but wonder why anyone looking to use LSD to aid severe clinical depression would bother assuming the legal risk of procuring and consuming a drug still classified under Schedule I by the US Drug Enforcement Administration.

Certainly, for his part, Justin Hanka seems content to pivot MindBio’s research into a new field. His next project is “Booze A.I.”: a smartphone app that uses artificial intelligence to scan the human voice for relevant biomarkers that determine blood alcohol concentration. He’s leaving microdosing in the rearview. “I put millions of dollars into this myself,” he says. “Had I known six years ago what I know about psychedelics, I probably wouldn’t have ventured into the microdosing field.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



Here’s why Blue Origin just ended its suborbital space tourism program

Blue Origin has “paused” its New Shepard program for the next two years, a move that likely signals a permanent end to the suborbital space tourism initiative.

The small rocket and capsule have been flying since April 2015 and have combined to make 38 launches, all but one of which were successful, and 36 landings. In its existence, the New Shepard program flew 98 people to space, however briefly, and launched more than 200 scientific and research payloads into the microgravity environment.

So why is Blue Origin, founded by Jeff Bezos more than a quarter of a century ago, ending the company’s longest-running program?

“We will redirect our people and resources toward further acceleration of our human lunar capabilities inclusive of New Glenn,” wrote the company’s chief executive, Dave Limp, in an internal email on Friday afternoon. “We have an extraordinary opportunity to be a part of our nation’s goal of returning to the Moon and establishing a permanent, sustained lunar presence.”

Move was a surprise

The cancellation came, generally, as a surprise to Blue Origin employees. The company flew its most recent mission eight days ago, launching six people into space. Moreover, the company has four new boosters in various stages of development, as well as two new capsules under construction. Blue Origin has been selling human flights for more than a year and is still commanding a per-seat price of approximately $1 million based on recent sales. It was talking about expansion to new spaceports in September.

Still, there have always been questions about the program’s viability. In November 2023, Ars published an article asking how long Bezos would continue to subsidize the New Shepard program, which at the time was “hemorrhaging” money. Sources indicate the program has gotten closer to breaking even, but it remains a drain on Blue Origin’s efforts.

About 400 people spend part or all of their time working on New Shepard, but it also draws on other resources within the company. Although it is a small fraction of the company’s overall workforce, it is nonetheless a distraction from the company’s long-term ambitions to build settlements in space where millions of people will live, work, and help move industrial activity off Earth and into orbit.



People complaining about Windows 11 hasn’t stopped it from hitting 1 billion users

Complaining about Windows 11 is a popular sport among tech enthusiasts on the Internet, whether you’re publicly switching to Linux, publishing guides about the dozens of things you need to do to make the OS less annoying, or getting upset because you were asked to sign in to an app after clicking a sign-in button.

Despite the negativity surrounding the current version of Windows, it remains the most widely used operating system on the world’s desktop and laptop computers, and people usually prefer to stick to what they’re used to. As a result, Windows 11 has just cleared a big milestone—Microsoft CEO Satya Nadella said on the company’s most recent earnings call (via The Verge) that Windows 11 now has over 1 billion users worldwide.

Windows 11 also reached that milestone just a few months quicker than Windows 10 did—1,576 days after its initial public launch on October 5, 2021. Windows 10 took 1,692 days to reach the same milestone, based on its July 29, 2015, general availability date and Microsoft’s announcement on March 16, 2020.
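Those day counts check out with a quick bit of date arithmetic using Python's datetime module and the dates given above:

```python
from datetime import date, timedelta

# Windows 10: general availability (July 29, 2015) to Microsoft's
# 1-billion announcement (March 16, 2020)
win10_days = (date(2020, 3, 16) - date(2015, 7, 29)).days
print(win10_days)  # 1692

# Windows 11 launched October 5, 2021; counting forward 1,576 days
# lands the milestone in late January 2026
win11_milestone = date(2021, 10, 5) + timedelta(days=1576)
print(win11_milestone)  # 2026-01-28
```

So Windows 11's 1,576-day run really is 116 days (just under four months) shorter than Windows 10's 1,692.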

That’s especially notable because Windows 10 was initially offered as a free upgrade to all users of Windows 7 and Windows 8, with no change in system requirements relative to those older versions. Windows 11 was (and still is) a free upgrade to Windows 10, but its relatively high system requirements mean there are plenty of Windows 10 PCs that aren’t eligible to run Windows 11.

Windows 10’s long goodbye

It’s hard to gauge how many PCs are still running Windows 10 because public data on the matter is unreliable. But we can still make educated guesses—and it’s clear that the software is still running on hundreds of millions of PCs, despite hitting its official end-of-support date last October.

Statcounter, one popularly referenced source that collects OS and browser usage stats from web analytics data, reports that between 50 and 55 percent of Windows PCs worldwide are running Windows 11, and between 40 and 45 percent of them run Windows 10. Statcounter also reports that Windows 10 and Windows 7 usage have risen slightly over the last few months, which highlights the noisiness of the data. But as of late 2025, Dell COO Jeffrey Clarke said that there were still roughly 1 billion active Windows 10 PCs in use, around 500 million of which weren’t eligible for an upgrade because of hardware requirements. If Windows 11 just cleared the 1 billion user mark, that suggests Statcounter’s reporting of a nearly evenly split user base isn’t too far from the truth.



Comcast keeps losing customers despite price guarantee and unlimited data

Cavanagh said that over the past year, Comcast “made the most significant go-to-market shift in our company’s history. We have simplified our broadband offering by moving away from short-term promotions toward a clear, transparent value proposition.” But more changes are needed, he said.

“Looking ahead, 2026 is about building on the changes we made in 2025… This will be the largest broadband investment year in our history, focused squarely on customer experience and simplification, with the goal of migrating the majority of residential broadband customers to our new simplified pricing and packaging by year-end,” Cavanagh said.

Comcast’s domestic broadband revenue was $6.32 billion, down from $6.38 billion a year ago. Cable TV revenue was $6.36 billion, down from $6.74 billion year over year. Mobile revenue rose from $1.19 billion to $1.40 billion year over year, buoyed by 1.5 million new mobile lines added during the full year of 2025.

Comcast said it now has over 9 million total mobile lines and aims to get more of its broadband customers into bundles of Internet and wireless service. Comcast offers consumer mobile service through an agreement with Verizon and struck a deal with T-Mobile to deliver mobile services to business customers this year.

Peacock boosts revenue

As the owner of NBCUniversal, Comcast has a lot more going on than cable and mobile. Strong results in the Peacock streaming service and Universal Studios theme parks helped Comcast meet analysts’ revenue projections and exceed profit estimates. Peacock paid subscribers increased 22 percent year over year to 44 million, and revenue grew 23 percent to $1.6 billion in the quarter, Comcast said.

Total Q4 2025 revenue was $32.31 billion, up 1.2 percent year over year. Net income was $2.17 billion, a 54.6 percent drop compared to a profit of $4.78 billion in Q4 2024. Comcast indicated the drop isn’t as bad as it sounds because it reflects “an unfavorable comparison to the prior year period, which included a $1.9 billion income tax benefit due to an internal corporate reorganization.” Comcast’s stock price was up about 3 percent today but has fallen about 16 percent in the past 12 months.
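The year-over-year changes in that paragraph are straightforward to verify; a minimal sketch, using the net income figures quoted above (in billions of dollars):

```python
def yoy_change(current: float, prior: float) -> float:
    """Percent change from the prior-year period to the current one."""
    return (current - prior) / prior * 100

# Q4 net income in billions of dollars, from the figures above.
print(f"{yoy_change(2.17, 4.78):.1f}%")  # prints "-54.6%"
```

The same one-liner reproduces the other year-over-year percentages in the article from their underlying dollar figures.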

Comcast is one of the two biggest cable companies in the US alongside Charter, which is scheduled to announce Q4 2025 earnings tomorrow. In Q3 2025, Charter reported a loss of 109,000 Internet customers, a bit more than Comcast’s 104,000-customer loss in the same quarter. Charter, which is seeking regulatory approval to buy cable company Cox, had 27.76 million residential Internet customers and 2.03 million small business Internet customers.

Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.

Comcast keeps losing customers despite price guarantee and unlimited data Read More »

tesla:-2024-was-bad,-2025-was-worse-as-profit-falls-46-percent

Tesla: 2024 was bad, 2025 was worse as profit falls 46 percent

Tesla published its financial results for 2025 this afternoon. If 2024 was a bad year for the electric automaker, 2025 was far worse: For the first time in Tesla’s history, revenues fell year over year.

A bad quarter

Earlier this month, Tesla revealed its sales and production numbers for the fourth quarter of 2025, with a 16 percent decline compared to Q4 2024. Now we know the cost of those lost sales: Automotive revenues fell by 11 percent to $17.7 billion.

Happily for Tesla, double-digit growth in its energy storage business ($3.8 billion, an increase of 25 percent) and services ($3.4 billion, an increase of 18 percent) made up some of the shortfall.

Although total revenue for the quarter fell by 3 percent, Tesla’s operating expenses grew by 20 percent. The resulting decline in income from operations saw Tesla’s net profit plummet 61 percent, to $840 million. Without the $542 million from regulatory credits, things would have looked even bleaker.

A bad 2025

Selling 1,636,129 cars in 2025 generated $69.5 billion in revenue, 10 percent less than Tesla’s 2024 automotive revenue. But energy storage revenue increased 27 percent year over year to $12.7 billion, and services revenue grew 19 percent year over year to $12.5 billion. Together, these two divisions now contribute meaningful amounts to the business, unlike just a few years ago.
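One figure the numbers above imply but don't spell out is average automotive revenue per vehicle sold. A rough back-of-the-envelope calculation (this derived figure is my own, not Tesla's):

```python
# Full-year 2025 figures from the paragraph above.
vehicles_sold = 1_636_129
auto_revenue = 69.5e9  # automotive revenue, in dollars

avg_per_vehicle = auto_revenue / vehicles_sold
print(f"${avg_per_vehicle:,.0f} of automotive revenue per vehicle sold")
```

That works out to roughly $42,500 per vehicle, a blended average across the whole lineup rather than a transaction price for any one model.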

Tesla: 2024 was bad, 2025 was worse as profit falls 46 percent Read More »