Author name: 9u50fv


US Black Hawk helicopter trespasses on private Montana ranch to grab elk antlers

The three servicemen on the chopper were eventually charged in Sweet Grass County Court with trespassing. They all pleaded not guilty. This week, pilot Deni Draper changed his plea to “no contest,” allowing sentencing to go forward without a trial (but without actually admitting guilt).

According to local reporting, prosecutors had evidence that “no trespassing signs were posted on McMullen’s property” and that “Draper admitted to Montana game warden Austin Kassner that he piloted the helicopter and decided to land it.” In addition to the neighbor’s testimony, “helicopter tire indentations and exhaust marks in the grass” were present at the site of the alleged landing.

The judge has accepted the change of plea and hit Draper with a $500 fine—the maximum penalty. So long as Draper stays out of trouble for the next six months, he will avoid further fines and jail time.

As for the antlers themselves, they are currently held by Montana Fish, Wildlife, and Parks but could go back to McMullen once cases against the other two servicemembers are resolved.

Update: According to a report this week in the Livingston Enterprise, this is not the first time Montana National Guard aircraft have stopped to take antlers.

“By way of a thorough inquiry, we can confirm isolated incidents of collecting antlers (with a military aircraft) have occurred previously,” Lt. Col. Thomas Figarelle of the Montana National Guard told the paper.

Figarelle added that the Guard has now explicitly banned this kind of activity. “(The Montana Army National Guard) issued clear directives no antler collecting of any type is authorized,” he added. “This is misuse of government property inconsistent with our standards. We are not going to tolerate it.”



“Ungentrified” Craigslist may be the last real place on the Internet


People still use Craigslist to find jobs, love, and even to cast creative projects.

The writer and comedian Megan Koester got her first writing job, reviewing Internet pornography, from a Craigslist ad she responded to more than 15 years ago. Several years after that, she used the listings website to find the rent-controlled apartment where she still lives today. When she wanted to buy property, she scrolled through Craigslist and found a parcel of land in the Mojave Desert. She built a dwelling on it (never mind that she’d later discover it was unpermitted) and furnished it entirely with finds from Craigslist’s free section, right down to the laminate flooring, which had previously been used by a production company.

“There’s so many elements of my life that are suffused with Craigslist,” says Koester, 42, whose Instagram account is dedicated, at least in part, to cataloging screenshots of what she has dubbed “harrowing images” from the site’s free section; on the day we speak, she’s wearing a cashmere sweater that cost her nothing, besides the faith it took to respond to an ad with no pictures. “I’m ride or die.”

Koester is one of untold numbers of Craigslist aficionados, many of them in their thirties and forties, who not only still use the old-school classifieds site but also consider it an essential, if anachronistic, part of their everyday lives. It’s a place where anonymity is still possible, where money doesn’t have to be exchanged, and where strangers can make meaningful connections—for romantic pursuits, straightforward transactions, and even to cast unusual creative projects, including experimental TV shows like The Rehearsal on HBO and Amazon Freevee’s Jury Duty. Unlike flashier online marketplaces such as Depop and its parent company, Etsy, or Facebook Marketplace, Craigslist doesn’t use algorithms to track users’ moves and predict what they want to see next. It doesn’t offer public profiles, rating systems, or “likes” and “shares” to dole out like social currency; as a result, Craigslist effectively disincentivizes clout-chasing and virality-seeking—behaviors that are often rewarded on platforms like TikTok, Instagram, and X. It’s a utopian vision of a much earlier, far more earnest Internet.

“The real freaks come out on Craigslist,” says Koester. “There’s a purity to it.” Even still, the site is a little tamer than it used to be: Craigslist shut down its “casual encounters” ads and took its personals section offline in 2018, after Congress passed legislation that would’ve put the company on the hook for listings from potential sex traffickers. The “missed connections” section, however, remains active.

The site is what Jessa Lingel, an associate professor of communication at the University of Pennsylvania, has called the “ungentrified” Internet. If that’s the case, then online gentrification has only accelerated in recent years, thanks in part to the proliferation of AI. Even Wikipedia and Reddit, visually basic sites created in the early aughts and with an emphasis similar to Craigslist’s on fostering communities, have both incorporated their own versions of AI tools.

Some might argue that Craigslist, by contrast, is outdated; an article published in this magazine more than 15 years ago called it “underdeveloped” and “unpredictable.” But to the site’s most devoted adherents, that’s precisely its appeal.

“I think Craigslist is having a revival,” says Kat Toledo, an actor and comedian who regularly uses the site to hire cohosts for her LA-based stand-up show, Besitos. “When something is structured so simply and really does serve the community, and it doesn’t ask for much? That’s what survives.”

Toledo started using Craigslist in the 2000s and never stopped. Over the years, she has turned to the site to find romance, housing, and even her current job as an assistant to a forensic psychologist. She’s worked there full-time for nearly two years, defying Craigslist’s reputation as a supplier of potentially sketchy one-off gigs. The stigma of the website, sometimes synonymous with scammers and, in more than one instance, murderers, can be hard to shake. “If I’m not doing a good job,” Toledo says she jokes to her employer, “just remember you found me on Craigslist.”

But for Toledo, the site’s “random factor”—the way it facilitates connection with all kinds of people she might not otherwise interact with—is also what makes it so exciting. Respondents to her ads seeking paid cohosts tend to be “people who almost have nothing to lose, but in a good way, and everything to gain,” she says. There was the born-again Christian who performed a reenactment of her religious awakening and the poet who insisted on doing Toledo’s makeup; others, like the commercial actor who started crying on the phone beforehand, never made it to the stage.

It’s difficult to quantify just how many people actively use Craigslist and how often they click through its listings. The for-profit company is privately owned and doesn’t share data about its users. (Craigslist also didn’t respond to a request for comment.) But according to the Internet data company similarweb, Craigslist draws more than 105 million monthly users, making it the 40th most popular website in the United States—not too shabby for a company that doesn’t spend any money on advertising or marketing. And though Craigslist’s revenue has reportedly plummeted over the past half-dozen years, based on an estimate from an industry analytics firm, it remains enormously profitable. (The company generates revenue by charging a modest fee to publish ads for gigs, certain types of goods, and in some cities, apartments.)

“It’s not a perfect platform by any means, but it does show that you can make a lot of money through an online endeavor that just treats users like they have some autonomy and grants everybody a degree of privacy,” says Lingel. A longtime Craigslist user, she began researching the site after wondering, “Why do all these web 2.0 companies insist that the only way for them to succeed and make money is off the back of user data? There must be other examples out there.”

In her book, Lingel traces the history of the site, which began in 1995 as an email list for a couple hundred San Francisco Bay Area locals to share events, tech news, and job openings. By the end of the decade, engineer Craig Newmark’s humble experiment had evolved into a full-fledged company with an office, a domain name, and a handful of hires. In true Craigslist fashion, Newmark even recruited the company’s CEO, Jim Buckmaster, from an ad he posted to the site, initially seeking a programmer.

The two have gone to great lengths to wrest the company away from corporate interests. When they suspected a looming takeover attempt from eBay, which had purchased a minority stake in Craigslist from a former employee in 2004, Newmark and Buckmaster spent roughly a decade battling the tech behemoth in court. The litigation ended in 2015, with Craigslist buying back its shares and regaining control.

“They are in lockstep about their early ’90s Internet values,” says Lingel, who credits Newmark and Buckmaster with Craigslist’s long-held aesthetic and ethos: simplicity, privacy, and accessibility. “As long as they’re the major shareholders, that will stay that way.”

Craigslist’s refusal to “sell out,” as Koester puts it, is all the more reason to use it. “Not only is there a purity to the fan base or the user base, there’s a purity to the leadership that they’re uncorruptible basically,” says Koester. “I’m gonna keep looking at Craigslist until I die.” She pauses, then shudders: “Or, until Craig dies, I guess.”

This story originally appeared on wired.com.





Advancements In Self-Driving Cars

Waymo goes Full San Francisco West Bay except for SFO:

Jeff Dean: Exciting expansion! @Waymo now serves the whole SF Bay Area Peninsula from SF to San Jose and is taking riders on freeways.

They can serve SJC, and SFO is almost ready, employee rides are in place and public rides are ‘coming soon.’

Brandan: Would be nice if @Waymo comes across the bay to Berkeley!

Jeff Dean: We’ll cross that bridge when we come to it!

Waymo is going to start using freeways in Phoenix, Los Angeles and San Francisco. That’s a big deal for longer rides, but there is still the problem that Waymos have to obey the technical speed limit. On freeways no human driver does this, so obeying the technical speed limit is both slower and more dangerous. We are going to need a regulatory solution, ideally one that allows you to drive at the average observed speed.

This is all a big unlock, but it depends on having enough cars to take advantage.

At this point, aside from regulatory barriers in some places like my beloved New York City, it all comes down to being able to get enough cars.

Timothy Lee: The big question about Waymo in 2026 is going to be “how do they get enough cars to service all this new territory?” Three options:

• Keep retrofitting expensive and no-longer-produced I-PACES

• Pay 105% tariffs to import Zeekrs

• Speed-run introduction of Hyundai vehicles

The Hyundai option would obviously be the best for them but I doubt they’ll achieve large-scale production before 2027. They only announced the partnership a year ago and just started testing them publicly a couple of weeks ago.

Waymo will be ready for Washington DC in 2026 if legally allowed to proceed; if blocked, there will be a Waymo Gap where Baltimore has it but not Washington. Dean Ball notes that councilmember Charles Allen is trying to hold Waymo up over nebulous ‘safety concerns,’ which is the worst possible argument against Waymo. We know for certain that Waymos are vastly safer than human drivers.

Samuel Hammond: I lost a good friend to a human driver in DC. The sooner we allow Waymos in the better.

… Public transit should be autonomous too.

One could say this is cherry picking, but the number of (truthful) such tweets about losing a friend to a Waymo is zero, because it has never happened.

Waymo set to deliver DoorDash orders in Phoenix. That presumably means you’ll have to go out to get the food out of the car, which is slightly annoying but seems fine. My actual concern is whether this will be a little slow? Waymos do not understand that when you have hot food, time is of the essence.

The cars, they are coming to a City Near You pending regulatory barriers.

Timothy Lee and Kai Williams: On Tuesday, Waymo announced driverless testing in five cities: Dallas, Houston, San Antonio, Miami, and Orlando. Driverless testing begins immediately in Miami, while the other four cities will begin “over the coming weeks.” Waymo says commercial service will launch in all five cities in 2026.

… And probably several other cities as well. Waymo has previously announced 2026 launch plans in six other US cities — Denver, Detroit, Las Vegas, Nashville, San Diego, and Washington DC — plus London. None of these cities has begun driverless testing yet. But if all goes according to plan, Waymo will be offering service in at least 17 cities by the end of next year — more than triple the number Waymo serves today.

Timothy Lee: Waymo just announced plans to expand to Minneapolis, Tampa, and New Orleans. Here’s an updated map. Waymo didn’t mention 2026 so I put them in the “2027 or later” category. Minneapolis will likely require state legislation so it gets a question mark.

Timothy Lee (December 3): Waymo just announced testing (with safety drivers) in three new cities: Pittsburgh, St. Louis, and Baltimore. Legislation will be needed to enable driverless operation in both Baltimore and St. Louis (our first red-state question mark!).

This is not an expanded service area yet, but look at where Waymo is now officially authorized:

Waymo: We’re officially authorized to drive fully autonomously across more of the Golden State.

Next stop: welcoming riders in San Diego in mid-2026! ☀️

It is the state that matters, not the city. That helps.

Timothy Lee: A nuance people are missing here: some of these red-state cities have Democratic mayors. However, AV policy is mostly made at the state level, especially in red states. So city leaders likely couldn’t block Waymo even if they wanted to. That’s certainly the case in California.

Could we do preemption on state laws about self-driving cars? Please?

Peter Wildeford: I wish that even half the energy of “federal pre-emption of all state AI laws” went specifically towards “federal pre-emption of municipal laws banning autonomous vehicles”.

We need to make sure our Waymo future isn’t banned by crazy cities that have no clue what they’re doing.

Neil Chilson: You may know me as a supporter of preemption, but while I think banning autonomous vehicles is absolutely moronic, I think it’s not squarely in the federal domain.

The states preempting the cities was key to getting deployment in California and Texas, but we still have a long way to go.

I actually disagree with Neil, I think this should be in the federal domain.

Then there’s the final boss enemies of all that is good and true, those who would permanently cripple our economy so that people could have permanent entirely fake jobs sitting in trucks:

Senator John Fetterman (D-Penn): I fully agree with @Teamsters.

Self-driving trucks should *always* be supervised by a qualified professional to keep our roads safe. It’s a necessary partnership for America’s highways and economy.

Across the pond, could there be anything more Doomed European than an article that says ‘Europe doesn’t need driverless cars’? As with so many things like air conditioning, free speech and economic growth, the European asks, do we really ‘need’ this? Aren’t European roadways already ‘safer’ than American ones now that we’ve slowed them down to make them thus? Wouldn’t this ‘threaten’ European traditions of bikes and public transportation? Aren’t cars ‘inefficient’? Won’t someone please think of the potential traffic issues?

This emphasizes why I would make the case without emphasizing safety.

When San Francisco had a power outage, there were mistaken initial reports that Waymos came to a halt or ‘bricked,’ causing traffic disruptions. The transition wasn’t perfect; some cars did come to a stop, and behavior was more conservative than you would want.

My understanding is that this was overstated. Waymo has now issued a full report.

Waymo successfully identified the situation. Waymo’s policy, decided in advance, treated every dark intersection as a four-way stop (as California law requires while traffic lights are out), included protocols to request additional verification checks, and ultimately led Waymo to suspend service to avoid slowing traffic.

That seems fine? It’s not even clear it is non-ideal from Waymo’s perspective given their incentives? The risk-reward of using a more aggressive policy seems rather terrible, and worse than a service suspension? What would you have them do here?

Waymo: Navigating an event of this magnitude presented a unique challenge for autonomous technology. While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice.

While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.

We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

As the outage persisted and City officials urged residents to stay off the streets to prioritize first responders, we temporarily paused our service in the area. We directed our fleet to pull over and park appropriately so we could return vehicles to our depots in waves. This ensured we did not further add to the congestion or obstruct emergency vehicles during the peak of the recovery effort.

The path forward

We’ve always focused on developing the Waymo Driver for the world as it is, including when infrastructure fails. We are analyzing the event, and are already integrating the lessons from this weekend’s PG&E outage. Here are some of the immediate steps we’re taking:

  • Integrating more information about outages: While our Driver already handles dark traffic signals as four-way stops, we are now rolling out fleet-wide updates that give our vehicles even more context about regional outages, allowing them to navigate these intersections more decisively.

  • Updating our emergency preparedness and response: We will improve our emergency response protocols, incorporating lessons from this event. In San Francisco, we’ll continue to coordinate with Mayor Lurie’s team to identify areas of greater collaboration in our existing emergency preparedness plans.

  • Expanding our first responder engagement: To date, we’ve trained more than 25,000 first responders in the U.S. and around the world on how to interact with Waymo. As we discover learnings from this and other widespread events, we’ll continue updating our first responder training.

This seems exactly right. Waymo has to be risk averse for now given that a single incident could derail their entire program. Over time, as they gain experience, they can act more decisively.

The amount of ‘omg never using a self-driving car again’ or ‘police and fire departments will now fight against self-driving cars to the death’ boggles the mind.

If enough cars on the road were self-driving, then they wouldn’t even need the traffic lights, they could coordinate in other ways, and this would all be moot.

Yes, in the case where the internet goes down entirely or Waymos otherwise systemically fail there will be a bigger problem that might not have a great solution right now, but do you think Waymo hasn’t planned for this?

At most, this says that if self-driving-only cars became common enough that we would be in deep trouble if they all died at once, then we want cars that, in such an emergency, a human could override and drive. That does not seem like such a difficult bar to cross?

The most common crisis scenario where things go haywire is very simple:

  1. There is an evacuation or other reason everyone wants to go from A → B.

  2. The road from A → B becomes completely jammed and stops moving.

Human drivers cannot solve this. Self-driving cars in sufficient quantities solve this through coordination. Given these are maximally important scenarios where not getting out often risks death, it’s kind of a big deal. Imagine if things were reversed.

Holly Elmore accused me of missing the point here, that it is about all the things that could go wrong with self-driving cars and that haven’t yet occurred in the field.

To which I say no, it is Holly that is missing the point. The reason why AGI is different is that if you have such a failure, you could be dead or lose control, and be unable to recover from the failure, or suffer truly catastrophic levels of damage. Thus, you need to get such potential difficulties right on the first try, before an incident happens, and you have to do this generally against a potential adversary more intelligent than you that will be out of distribution.

A self-driving car… is a car. It is a normal technology.

Even if something goes systematically wrong with a fleet of such cars, or all such fleets of cars? This is highly recoverable. The damage even for ‘all the Waymos suddenly floor it and crash’ (or even the pure sci-fi ‘suddenly try to do maximum amounts of damage’) is not so high in the grand scheme of things. There are a finite number of things that could happen that involve things going very wrong, and yes you can list all of them and then hardcode what to do in each case.

That is, indeed, how the cars actually learn to drive under normal circumstances. If the regulators want to provide a list of potential incident types and require Waymo to say how they plan to deal with each, including any combination of loss of internet and loss of power and everyone simultaneously fleeing an oncoming tsunami caused by a neutron bomb, then okay, sure, fine, I guess, let’s be overly paranoid to keep everyone happy, it will in expectation cost lives but whatever it takes.

But I think it’s really important, when arguing for AI safety, to be able to differentiate AGI from self-driving cars, and to not draw metaphors that don’t apply.

The real final boss for self-driving cars is the speed limit.

As everyone knows, the ‘real’ speed limit is by default 10 MPH above the posted speed limit. You’re highly unlikely to get a ticket, in most places, unless you are both more than 10 MPH over the limit and substantially faster than other drivers. If the speed limit is enforced to the letter, that usually involves attempting to trick motorists. We call that a ‘speed trap.’

To be safe, you want to match the speed of other cars around you, so driving the listed speed limit is actively dangerous on many roads.

Ethan Teicher: “The lack of a human driver is no longer the reason [Waymos] stand out most from regular traffic. They do so because they follow the speed limit.

Indeed, cyclists and pedestrians are so used to drivers going 5, 10, or more miles per hour over posted speed limits, that Waymo’s practice of driving by the letter of the law creates a noticeable contrast. So much so that in a recent New York Times article about Zoox entering the autonomous-vehicle fray in San Francisco, the reporters actually had the gall to list law-abiding driving as a downside”

Robin Hanson: Human drivers can block competition from self driving cars by just making traffic laws too onerous to obey, yet insisting that robots (only) must obey them. Seems a robust general strategy for preventing AI competition with humans.

The wrong answer is to enforce current obviously too low numbers, and slow down all cars to the current technical speed limits. That’s profoundly stupid. It’s also scarily plausible that we will end up doing it.

The correct answer is to increase our speed limits across the board to the actual limit, beyond which we can and will ticket you.

This generalizes, as per Levels of Friction.

If AI has to obey the rules and humans don’t, the correct answer wherever possible is to change the rules to what we want both AIs and humans to actually have to follow.

In many other places, this creates a real problem, because the true rules are nebulous and involve social incentives and a willingness to adapt to practical conditions. As Robin Hanson notes, an otherwise highly capable AI that had to formally obey all laws in all ways would find many human tasks impossible or impractical.

I strongly agree that Waymo must pick up the pace. 7% growth per month? That’s it?

CNBC: Waymo crosses 450,000 weekly paid rides as Alphabet robotaxi unit widens lead on Tesla.

Timothy Lee: Weekly driverless Waymo trips:

May 2023: 10,000

May 2024: 50,000 (14% monthly growth)

August 2024: 100,000 (25% monthly)

October 2024: 150,000 (22%)

February 2025: 200,000 (7%)

April 2025: 250,000 (12%)

December 2025: 450,000 (7%)

Right on track for 1M by December 2026.

FWIW this feels way too slow to me. They should be aiming for the ~15% growth rate they achieved in 2024. Hopefully they’re going to figure out their vehicle supply issues and dramatically accelerate in 2027. Tesla has been growing slowly because their technology doesn’t work yet. But they will figure it out in the next year or two and after that I guarantee you Elon won’t be happy with 7% monthly growth.
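The growth percentages in the list above are compound monthly rates between milestones. A minimal sketch (using only the trip counts quoted above; the calculation itself is illustrative, not Lee’s) reproduces them:

```python
import math

def monthly_growth(start_trips: int, end_trips: int, months: int) -> float:
    """Implied compound monthly growth rate between two milestones."""
    return (end_trips / start_trips) ** (1 / months) - 1

# Weekly driverless-trip milestones quoted above.
print(f"{monthly_growth(10_000, 50_000, 12):.0%}")   # May 2023 -> May 2024: 14%
print(f"{monthly_growth(250_000, 450_000, 8):.1%}")  # Apr 2025 -> Dec 2025: 7.6%

# At ~7%/month, months needed to go from 450k weekly trips to 1M:
months_to_1m = math.log(1_000_000 / 450_000) / math.log(1.07)
print(round(months_to_1m, 1))  # ~11.8 months, i.e. roughly December 2026
```

At the ~15% monthly rate Lee would prefer, the same 450k-to-1M climb would take under six months.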

Tesla continues to not even apply to operate fully autonomous services in the areas it claims it wants to offer those services, such as California, Arizona and Nevada. Please stop thinking Elon Musk’s timelines are ever meaningful.

The actual global competition is probably Chinese, as one would expect.

Timothy Lee: US media tends to cover robotaxis as a Waymo/Tesla race, but globally Waymo’s strongest competition is likely to be Chinese companies like Baidu, WeRide, and Pony. Rough robotaxi counts today:

Waymo: 2,500

Baidu: 1,000

Pony: 960

WeRide: 750

Tesla: 100

What they have done is made ‘Robotaxi’ service go live in Austin for select rides, but these rides remain supervised with a Tesla employee in the driver’s seat.

Andrej Karpathy reports the new Tesla self-driving on the HW4 Model X is a substantial upgrade.

Delivery via self-driving e-bikes? Brilliant.

If enough people lose their jobs at once, society has a big problem.

Ro Khanna: We need smart regulation to protect 3.5 million truck drivers & 2 million long haul drivers. AI should not be used for mass layoffs that drive up short term profits w/ no productivity gains.

Drivers are needed for safety, oversight, edge cases, & maintenance.

I stand with humans over machines, with @LorenaSGonzalez @TeamsterSOB over short term profits for corporate oligarchs.

Roon: what do you think productivity gains are lol.

It’s amazing how easily those opposed to self-driving throw around ‘safety concerns’ when self-driving vehicles are massively safer, or the idea here that gains are ‘short term’ or that there are ‘no productivity gains.’

Even if we literally require a human to be in each truck at all times ‘in case of emergency’ we would still see massive productivity gains, since the trucks would be able to be on the road 24/7.

Maintenance is another truly silly objection. Yes, when you need to maintain something you’d (for now at least) bring the truck to a human. Okay.

That leaves the ever mysterious and present ‘edge cases.’

Chris Albon: I bike everywhere in SF. I barely ever take a taxi/uber/waymo. But if you want to ban Waymo it means you don’t care about cyclists like me.

Waymos will reliably yield to bikes, use their turn signals and obey the rules. When you are biking, the problem is tail risk can literally kill you, so you have to constantly be paranoid that any given car will do something unexpected or crazy. With a Waymo, you don’t have to worry about that.

Both the young and the old, who cannot drive, will benefit greatly. Self-driving cars will be a very different level of freedom than the ability to summon a Lyft. Tesla will likely offer ‘unsupervised’ self-driving very soon.

If you combine self-driving cars with other new smart products, including basic home robots, suddenly assisted living facilities look pretty terrible. They’re expensive and unpleasant, with the upside being that when you need help you really need help. The need for that forces you to buy this entire package of things you mostly don’t want. But what if most of that help was covered?

PoliMath: Crazy prediction time: I think that nursing homes and assisted living facilities are in trouble long-term.

These businesses are currently massive profit centers. Full time care for the elderly is a huge business & everyone assumes it will get bigger as boomers age

But no one wants to move into an assisted care house. They are good for what they are, but it’s a depressing place. It means moving a whole life into the place where you plan to die. That’s no fun. No one wants that. Most people want to age “in place”. They want to keep their home, keep their space, age and die in a familiar setting. What keeps them from doing this?

1) transportation – if they can’t get around to get their meds, get groceries, go to the movies, go to their favorite restaurant, drive to a park, etc, this severely reduces their quality of life. Assisted care helps solve this

2) household chores – Doing the laundry, cooking simple meals, lawn care, self-care (clipping toenails, bathing), these are important things that are harder to do when you get into your 80’s and 90’s

Unsupervised self-driving solves problem #1. The elderly should not be driving. It’s a brutal reality and one they fight against, but the risk factor is extremely high. An autonomous transportation system allows them enormous mobility and autonomy.

In-home robots solve problem #2. Robots that can aid with difficult self-care and household chores allows the elderly to stay in place for longer. This has enormous cost-savings and (more importantly) they can feel like they are in charge of their own lives for longer.

At that point, the biggest challenge is social interaction. This is where assisted care facilities easily out-class these automated solutions. The logistics problem is being solved in front of our eyes and it’s a miracle. But the social problem is not solved. Not even close.

The social problem requires people who want to interact with you, but note that we’ve solved the transportation problem. That makes it a lot easier.

What will happen when a Waymo finally does kill someone? Waymos are vastly safer than human drivers, but are we always going to be one accident away from disaster? The CEO of Waymo says people will accept it. I think she’s right if Waymo gets enough traction first; the question is when that point comes and whether we have reached it yet.

In the meantime, they’re trying to drum up outrage because a Waymo killed a cat, a ‘one-of-a-kind’ mascot of a bodega, a fate that befalls an estimated 5.3 million cats per year struck by human drivers. If ‘cat killed by Waymo’ is news, then Waymos are absurdly safe.

Rolling Stone: KitKat, known as the “Mayor of 16th Street,” was killed by a Waymo cab last week in San Francisco, sparking calls for more regulation of driverless cars.

Yimbyland: When do you ever see a spread like this about a human driver running over a cat?

You don’t, and yet it happens 15,000 TIMES EVERY DAY.

YES. FIFTEEN THOUSAND CATS ARE RUN OVER EVERY SINGLE DAY.

Mission Loco: A cat ran in front of a car and was run over. This happens 26 MILLION times per year in the US. Now @rachelswan and @JackieFielder_ want to ban vehicles. This is how moronic @Hearst @sfchronicle reporters & @sfbos are.

Matt Popovich:

(The bottom line should also include millions of cats and other pets as well, of course.)

I mean, if they wanted to ban all vehicles that would at least make some sense.

I will note that the 26 million number comes from Merritt Clifton’s extrapolation from 1993, and it’s basically absurd if you think about it: there simply are not enough cats for this to be real. It’s probably more like 2-5 million cats per year. Not that this changes the conclusion that Waymos are obviously vastly safer for cats than human drivers.

The stats are of course in, and if you use reasonable estimates Waymos probably kill on the order of 75x fewer pets, as in a 98%+ reduction in cats killed per mile.
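As a sanity check, the arithmetic is easy to run yourself. The cat-population figure below is my own rough assumption (on the order of 74 million owned cats in the US), not from the source; the point is that the 26 million extrapolation would imply over a third of all cats dying under cars every year:

```python
# Sanity-check the cat numbers. Both inputs are rough illustrative
# assumptions, not official statistics: ~74 million owned US cats is a
# commonly cited ballpark, and 26 million is the extrapolation quoted above.
cats_in_us = 74_000_000
claimed_killed_per_year = 26_000_000
share = claimed_killed_per_year / cats_in_us
print(f"implied share of all US cats killed per year: {share:.0%}")

# Converting a "75x fewer per mile" multiple into a percent reduction:
multiple = 75
reduction = 1 - 1 / multiple
print(f"reduction: {reduction:.1%}")
```

The second calculation is also where the 98%+ figure comes from: a 75x lower per-mile rate is the same claim as a roughly 98.7 percent reduction.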

In the meantime, we continue to deal with things like New York Times articles about a Waymo running over this very special cat, in which they bury the fact Waymos are vastly safer than human drivers.

Timothy Lee: NYTimes article quotes someone saying they are “terrified” of Waymo in paragraph 6. Waits until paragraph 33 (out of 44 paragraphs) to mention that they are 91 percent safer than human drivers. How outraged would liberals be if a news outlet covered vaccines like this?

The article does mention that human drivers kill hundreds of cats every year so that’s something.

Even if the rest of AI doesn’t prove that disruptive soon, self-driving will change quite a lot wherever it is allowed to proceed. I too am unreasonably excited.

Andrej Karpathy: I am unreasonably excited about self-driving. It will be the first technology in many decades to visibly terraform outdoor physical spaces and way of life. Less parked cars. Less parking lots. Much greater safety for people in and out of cars. Less noise pollution. More space reclaimed for humans.

Human brain cycles and attention capital freed up from “lane following” to other pursuits. Cheaper, faster, programmable delivery of physical items and goods. It won’t happen overnight but there will be the era before and the era after.

Nikhil: every time I come off a week of taking Waymos in SF:

  1. it feels increasingly strange to return to a non-autonomous city (just as it felt weird to be in cities that didn’t have uber yet in 2014-2016)

  2. I come away feeling like we continue to under-discuss the second order effects of self-driving inevitability + ubiquity

I think the indifference in the air is largely a function of how gradual (relatively) the rollout of AVs has been and will continue to be

The agonizingly slow ramp-up, along with the avalanche of other AI things happening, is definitely taking the focus off of self-driving and making us not realize how much the ground is shifting under our feet. The second order effects are going to be huge. The child mobility and safety improvements are especially neglected.

A no good, very bad take but also why competition is good:

Roon: i strongly prefer uber to waymo. ubers get you where you need to go much faster. they wait for you when you’re running late. they never suffer catastrophic failure and ignominiously getting stuck behind a truck or something. will be kicked out of san francisco for this take.

also i learn so much from uber drivers it’s so high entropy.

To state the obvious I vastly prefer Waymos, and I am confused by the part about catastrophic failures since it seems obvious that rates of ‘things go wrong’ are higher for an Uber. But yeah, if you actively want to talk to drivers and to have another human in the car, and you care more about speed than a smooth ride, I can see it.

Self-driving cars have been proven vastly safer than human drivers, despite many believing the opposite. The question continues to be, how hard do you push on this?

Human drivers have been grandfathered in as an insanely dangerous thing we have accepted as part of life. We’ve destroyed other huge parts of life in the name of far less serious safety concerns, whereas here we have a solution that is life-affirming while also preventing most of a leading cause of death.

Dr. Jon Slotkin: I have a guest essay in @nytimes today about autonomous vehicle safety. I wrote it because I’m tired of seeing children die. Done right, we can eliminate car crashes as a leading cause of death in the United States

@Waymo recently released data covering nearly 100 million driverless miles. I spent weeks analyzing it because the results seemed too good to be true. 91% fewer serious-injury crashes. 92% less pedestrians hit. 96% fewer injury crashes at intersections. The list goes on.

39,000 Americans died in crashes last year. More than homicide, plane crashes, and natural disasters combined. The #2 killer of children and young adults. The #1 cause of spinal cord injury. We’ve accepted this as the price of mobility.

We don’t have to.

In medicine, when a treatment shows this level of benefit, we stop the trial early. Continuing to give patients the placebo becomes unethical. When an intervention works this clearly, you change what you do.

In driving, we’re all the control group.

Cities like DC and Boston are blocking deployment. And cities are not the only forces mobilizing to slow this progress.

It’s time we stop treating this like a tech moonshot and start treating it like a public health intervention that will save lives.

Auerlien reports that ‘broken windows theory’ very much applies to cars. If you don’t keep cars fully pristine then people stop respecting the car and things escalate quickly, and also people care quite a lot. Thus, if a Waymo or other self-driving car gets even a little dirty it needs to head back and get cleaned. And thus, every Waymo I’ve ever ridden in has been pristine.

Johnny v5: just realized waymo means i can go to office hours without haste now

Deepfates: that means you can go to Waymo of them!!


Advancements In Self-Driving Cars Read More »

evs-remain-a-niche-choice-in-the-us,-according-to-survey

EVs remain a niche choice in the US, according to survey

A graph showing preferred charging locations for car buyers in the US, Germany, the UK, China, Japan, and South Korea. Credit: Deloitte

While reliable charging at one’s workplace—emphasis on reliable—can make up for not being able to charge at home, 77 percent of US car buyers said they would prefer to charge at home (with just 13 percent indicating they would prefer charging at work).

Why pick an EV?

For people who haven’t yet decided to switch, an underappreciated fact is just how much more efficient an electric powertrain is compared to one that burns gasoline. Ford’s experiment putting an electric powertrain into its best-selling F-150 pickup truck might have turned sour, but consider the following: given that a gallon of gasoline contains 33.7 kWh of energy, the V6 truck needs more than three times as much energy to travel 300 miles as the version you plug into a wall.
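A quick sketch of that comparison. The 33.7 kWh-per-gallon figure is from the text; the fuel economy (about 20 mpg for the V6) and EV efficiency (about 2 miles per kWh) are my own ballpark assumptions for illustration, not official EPA ratings:

```python
KWH_PER_GALLON = 33.7     # energy in a gallon of gasoline (from the text)
TRIP_MILES = 300

# Ballpark assumptions, not official ratings:
V6_MPG = 20               # gas F-150 fuel economy, miles per gallon
EV_MILES_PER_KWH = 2.0    # electric F-150 efficiency, miles per kWh

gas_kwh = TRIP_MILES / V6_MPG * KWH_PER_GALLON  # energy burned by the V6
ev_kwh = TRIP_MILES / EV_MILES_PER_KWH          # energy drawn from the wall

print(f"V6: {gas_kwh:.0f} kWh  EV: {ev_kwh:.0f} kWh  ratio: {gas_kwh / ev_kwh:.1f}x")
```

Under these assumptions the V6 burns roughly 500 kWh of chemical energy for the trip against about 150 kWh from the wall, a ratio of well over three to one.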

Among the EV-convinced, this is presumably old news. More than half—52 percent of US survey respondents—said lower fuel costs were a reason for choosing an EV, beating out concern for the environment, which ranked second at 38 percent. And between $20,000 and $49,999 appears to be the pricing sweet spot, with 24 percent looking for something in the $20,000–$34,999 band (cars like the new Nissan Leaf or the soon-reborn Chevrolet Bolt) and another 24 percent looking in the $35,000–$49,999 band, which has plenty of EVs to choose from, including Mercedes-Benz’s efficient new CLA.

Just 7 percent of those EV buyers are looking to spend more than $75,000 on their electric car, but luxury EVs abound at this price point.

A graph of reasons given by US car buyers as to why their next car would be electric. Credit: Deloitte

Meanwhile, range and charging times remain the foremost concerns among car buyers when discussing EVs, along with the cost premium. Some other fears are ill-founded, however. Thirty-eight percent said they were concerned about the cost of eventually replacing an EV’s battery. But EV batteries are proving more durable on the road than many early adopters once believed. There’s little evidence that EVs will require costly battery replacements with any more frequency than older cars require new engines, a concern that is rarely mentioned when someone wants to buy a gas-powered machine.

The US doesn’t care about software-defined vehicles

One of the biggest shifts in car design and manufacturing over the past few years has been the advent of the software-defined vehicle. Until now, pretty much every electronic function in a car, from an electric window to the antilock brakes, needed its own electronic control unit. Some cars can have up to two hundred discrete ECUs, some with software dating back years.

EVs remain a niche choice in the US, according to survey Read More »

we-have-a-fossil-closer-to-our-split-with-neanderthals-and-denisovans

We have a fossil closer to our split with Neanderthals and Denisovans

The Casablanca fossils are about the same age as hominin fossils from Spain, which belong to a species called Homo antecessor. This species has been suggested to be a likely ancestor of Neanderthals and Denisovans. Overall, it looks like the fossils from Casablanca are a North African counterpart to Homo antecessor, with the Spanish hominins eventually leading to Neanderthals and the North African ones eventually leading to us.

Both groups share some features in their teeth and lower jaws, but they’re also different in some important ways. The teeth and chins in particular share some older features with Homo erectus. But the jaws have more newfangled features in the places where chewing muscles once attached to the bone—features that Neanderthals and our species share. On the other hand, the teeth are missing some other relatively recent features that would later help define Neanderthals (and were already beginning to show up in Homo antecessor).

Altogether, it looks like the Homo erectus populations and the Neanderthals and Denisovans had been separated for a while by the time the hominins at Grotte à Hominidés lived. But not that long. These hominins were probably part of a population fairly close to that big split, near the base of our branch of the hominin family tree.

Here’s looking at you, hominin

Based on ancient DNA, it looks like Neanderthals and Denisovans started evolving into two separate species sometime between 470,000 and 430,000 years ago. Meanwhile, our branch would eventually become recognizable as us sometime around 300,000 years ago, or possibly earlier. At various times and places, all three species would eventually come back together to mingle and swap DNA, leaving traces of those interactions buried deep in each other’s genomes.

And 773,000 years after a predator dragged the remains of a few unfortunate hominins into its den in northern Africa, those hominins’ distant descendants would unearth the gnawed, broken bones and begin piecing together the story.

Nature, 2025 DOI: 10.1038/s41586-025-09914-y  (About DOIs).

We have a fossil closer to our split with Neanderthals and Denisovans Read More »

nvidia’s-new-g-sync-pulsar-monitors-target-motion-blur-at-the-human-retina-level

Nvidia’s new G-Sync Pulsar monitors target motion blur at the human retina level

That gives those individual pixels time to fully transition from one color to the next before they’re illuminated, meaning viewers don’t perceive those pixels fading from one color as they do on a traditional G-Sync monitor. It also means those old pixels don’t persist as long on the viewer’s retina, increasing the “apparent refresh rate” above the monitor’s actual refresh rate, according to Nvidia.

An Asus illustration highlights how G-Sync Pulsar uses strobing to limit the persistence of old frames on your retina. Credit: Asus/Nvidia

Similar “Ultra Low Motion Blur” features on other pulsing backlight monitors have existed for a while, but they only worked at fixed refresh rates. Pulsar monitors differentiate themselves by syncing the pulses with the variable refresh rate of a G-Sync monitor, offering what Nvidia calls a combination of “tear free frames and incredible motion clarity.”

Independent testers have had more varied impressions of the visual impact of the Pulsar. The Monitors Unboxed YouTube channel called it “clearly the best solution currently available” for limiting motion blur and “the first version of this technology that I would genuinely consider using on a regular basis.” PC Magazine, on the other hand, said the Pulsar improvements are “minor in the grand scheme of things” and would be hard to notice for a casual viewer.

Nvidia explains how its Pulsar monitors work.

In any case, G-Sync Pulsar should be a welcome upgrade for high-end gamers as we wait for 1,000 Hz monitors to become a market force.

Nvidia’s new G-Sync Pulsar monitors target motion blur at the human retina level Read More »

dos-capital

Dos Capital

This week, Philip Trammell and Dwarkesh Patel wrote Capital in the 22nd Century.

One of my goals for Q1 2026 is to write unified explainer posts for all the standard economic debates around potential AI futures in a systematic fashion. These debates tend to repeatedly cover the same points, and those making economic arguments continually assume you must be misunderstanding elementary economic principles, or failing to apply them for no good reason. Key assumptions are often unstated, even unrecognized, and also false or even absurd. Reference posts are needed.

That will take longer, so instead this post covers the specific discussions and questions around the post by Trammell and Patel. My goal is to both meet that post on its own terms, and also point out the central ways its own terms are absurd, and the often implicit assumptions they make that are unlikely to hold.

They affirm, as do I, that Piketty was centrally wrong about capital accumulation in the past, for many well understood reasons, many of which they lay out.

They then posit that Piketty could have been unintentionally describing our AI future.

As in, IF, as they say they expect is likely:

  1. AI is used to ‘lock in a more stable world’ where wealth is passed to descendants.

  2. There are high returns on capital, with de facto increasing returns to scale due to superior availability of investment opportunities.

  3. AI and robots become true substitutes for all labor.

  4. (Implicit) This universe continues to support humanity and allows us to thrive.

  5. (Implicit) The humans continue to be the primary holders of capital.

  6. (Implicit) The humans are able to control their decisions and make essentially rational investment decisions in a world in which their minds are overmatched.

  7. We indefinitely do not do a lot of progressive redistribution.

  8. (Implicit) Private property claims are indefinitely respected at unlimited scale.

THEN:

  1. Inequality grows without bound; the Gini coefficient approaches 1.

  2. Those who invest wisely, with eyes towards maximizing long term returns, end up with increasingly large shares of wealth.

  3. As in, they end up owning galaxies.

Patel and Trammell: But once robots and computers are capable enough that labor is no longer a bottleneck, we will be in the second scenario. The robots will stay useful even as they multiply, and the share of total income paid to robot-owners will rise to 1. (This would be the “Jevons paradox”.)
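The first conclusion above is just arithmetic: as one holder’s share of total wealth approaches everything, the Gini coefficient approaches 1. A minimal sketch, using made-up wealth vectors purely for illustration:

```python
def gini(wealths):
    """Gini coefficient via the mean absolute difference formula."""
    n = len(wealths)
    mean = sum(wealths) / n
    diff_sum = sum(abs(a - b) for a in wealths for b in wealths)
    return diff_sum / (2 * n * n * mean)

# 100 people with equal wealth: perfect equality.
print(gini([1.0] * 100))                      # 0.0

# One person holds nearly everything: approaches (n - 1) / n = 0.99.
print(round(gini([1.0] * 99 + [1e6]), 3))
```

With 100 people, the maximum possible Gini is 0.99, and it is reached as one holder’s share goes to 1; with unboundedly many agents the coefficient approaches 1 itself.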

Later on, to make the discussions make sense, we need to add:

  1. There is a functioning human government that can impose taxes, including on capital, in ways that end up actually getting paid.

  2. (Unclear) This government involves some form of democratic control?

If you include the implicit assumptions?

Then yes. Very, very obviously yes. This is basic math.

In this scenario, sufficiently capable AIs and robots are multiplying without limit and are perfect substitutes for human labor.

Perhaps ‘what about the distribution of wealth among humans’ is the wrong question?

I notice I have much more important questions about such worlds, where the share of profits that goes to some combination of AI, robots, and capital rises to all of it.

Why should the implicit assumptions hold? Why should we presume humans retain primary or all ownership of capital over time? Why should we assume humans are able to retain control over this future and make meaningful decisions? Why should we assume the humans remain able to even physically survive let alone thrive?

Note especially the assumption that AIs don’t end up with substantial private property. The best returns on capital in such worlds would obviously go to ‘the AIs that are, directly or indirectly, instructed to do that.’ So if AI is allowed to own capital, the AIs end up with control over all the capital, and the robots, and everything else. It’s funny to me that they consider charitable trusts as a potential growing source of capital, but not the AIs.

Even if we assumed all of that, why should we assume that private property rights would be indefinitely respected at limitless scale, on the level of owning galaxies? Why should we even expect property rights to be long term respected under normal conditions, here on Earth? Especially in a post calling for aggressive taxation on wealth, which is kind of the central ‘nice’ case of not respecting private property.

Expecting current private property rights to indefinitely survive into the transformational superintelligence age seems, frankly, rather unwise?

Eliezer Yudkowsky: What is with this huge, bizarre, and unflagged presumption that property rights, as assigned by human legal systems, are inviolable laws of physics? That ASIs remotely care? You might as well write “I OWN YOU” on an index card in crayon, and wave it at the sea.

Oliver Habryka: I really don’t get where this presumption that property ownership is a robust category against changes of this magnitude. It certainly hasn’t been historically!

Jan Kulveit: Cope level 1: My labour will always be valuable!

Cope level 2: That’s naive. My AGI companies stock will always be valuable, may be worth galaxies! We may need to solve some hard problems with inequality between humans, but private property will always be sacred and human.

Then, if property rights do hold, did we give AIs property rights, as Guive Assadi suggests (and as others have suggested) we should do, to give them a ‘stake in the legal system’ or simply for functional purposes? If not, it becomes very difficult for AIs to operate and transact, or for our system of property rights to remain functional. If we do, then the AIs end up with all the capital, even if human property rights remain respected. It also seems right to expect that at some point, if the humans are not losing their wealth fast enough, AIs will coordinate to expropriate human property rights while respecting AI property rights, as has happened commonly throughout the history of property rights when otherwise disempowered groups held a large percentage of wealth.

The hidden ‘libertarian human essentialist’ assumptions continue. For example, who are these ‘descendants’ and what are the ‘inheritances’? In these worlds one would expect aging and disease to be solved problems for both humans and AIs.

Such talk and economic analysis often sounds remarkably parallel to this:

The world described here has AIs that are no longer normal technology (while it tries to treat them as normal in other places anyway), it is not remotely at equilibrium, there is no reason to expect its property rights to endorse or to stay meaningful, it would be dominated by its AIs, and it would not long endure.

If humans really are no longer useful, that breaks most of the assumptions and models of traditional economics, along with everyone else’s models. People typically keep assuming humans will actually still be useful for something, sufficiently so for comparative advantage to rescue us, and can’t wrap their heads around that not being true, with humans as true zero-marginal-product workers once you account for costs.

Paul Crowley: A lot of these stories from economics about how people will continue to be valuable make assumptions that don’t apply. If the models can do everything I do, and do it better, and faster, and for less than it costs me to eat, why would someone employ me?

It’s really hard for people to take in the idea of an AI that’s better than any human at *every* task. Many just jump to some idea of an uber-task that they implicitly assume humans are better at. Satya Nadella made exactly this mistake on Dwarkesh.

Dwarkesh Patel: If labor is the bottleneck to all the capital growth. I don’t see why sports and restaurants would bottleneck the Dyson sphere though.

That’s the thing. If we’re talking about a Dyson sphere world, why are we pretending any of these questions are remotely important or ultimately matter? At some point you have to stop playing with toys.

A lot of this makes more sense if we don’t think it involves Dyson spheres.

Under a long enough time horizon, I do think we can know roughly what the technologies will look like, barring the unexpected discovery of new physics. So I’m with Robin Hanson here rather than Andrew Cote; today is not like 1850:

Andrew Cote: This kind of reasoning – that the future of humanity will be rockets, robots, and dyson swarms indefinitely into the future, assumes an epistemological completeness that we already know the future trade-space of all possible technologies.

It is as wrong as it would be to say, in 1850, that in two hundred years any nation that does not have massive coal reserves will be unfathomably impoverished. What could there be besides coal, steel, rail, and steam engines?

Physics is far from complete, we are barely at the beginning of what technology can be, and the most valuable things that can be done in physical reality can only be done by conscious observers, and this gets to the very heart of interpretations of quantum mechanics and physical theory itself.

Robin Hanson: No, more likely than not, we are constrained to a 3space-1time space-time where the speed of light is a hard limit on travel/influence, thermodynamics constrains the work we can do, & we roughly know what are the main sources of neg-entropy. We know a lot more than in 1850.

Even in the places where the assumptions aren’t obviously false, or you want to think they’re not obviously false, and you also want to assume various miracles occur such that we dodge outright ruin, certainly there’s no reason to think the future situation will be sufficiently analogous for these analyses to actually make sense?

Daniel Eth: This feels overly confident for advising a world completely transformed. I have no idea if post-AGI we’d be better off taxing wealth vs consumption vs something else. Sure, you can make the Econ 101 argument for taxing consumption, but will the relevant assumptions hold? Who knows.

Seb Krier: I also don’t have particularly good intuitions about what a world with ASI, nanotechnology and Dyson swarms looks like either.

Futurist post-AGI discussions often revolve around thinking at the edge of what’s in principle plausible/likely and extrapolating more and more. This is useful, but the compounding assumptions necessary to support a particular take contain so many moving parts that can individually materially affect a prediction.

It’s good to then unpack and question these, and this creates all sorts of interesting discussions. But what’s often lost in discussions is the uncertainty and fragility of the scaffolding that supports a particular prediction. Some variant of the conjunction fallacy.

Which is why even though I find long term predictions interesting and useful to expand the option space, I rarely find them particularly informative or sufficient to act on decisively now. In practice I feel like we’re basically hill-climbing on a fitness landscape we cannot fully see.

Brian Albrecht: I appreciate Dwarkesh and Philip’s piece. I responded to one tiny part.

But I’ll admit I don’t have a good intuition for what will happen in 1000 years across galaxies. So I think building from the basics seems reasonable.

I don’t even know that ‘wealth’ and ‘consumption’ would be meaningful concepts that look similar to how they look now, among other even bigger questions. I don’t expect ‘the basics’ to hold and I think we have good reasons to expect many of them not to.

Ben Thompson: This world also sounds implausible. It seems odd that AI would acquire such fantastic capabilities and yet still be controlled by humans and governed by property laws as commonly understood in 2025. I find the AI doomsday scenario — where this uber-capable AI is no longer controllable by humans — to be more realistic; on the flipside, if we start moving down this path of abundance, I would expect our collective understanding of property rights to shift considerably.

Ultimately all of this, as Tomas Bjartur puts it, imagines an absurd world, assuming away all of the dynamics that matter most. Which still leaves something fun and potentially insightful to argue about, I’m happy to do that, but don’t lose sight of it not being a plausible future world, and taking as a given that all our ‘real’ problems mysteriously turn out fine despite us having no way to even plausibly describe what that would look like, let alone any idea how to chart a path towards making it happen.

Thus from this point on, this post accepts the premises listed above, arguendo.

I don’t think that world actually makes a lot of sense on reflection, as an actual world. Even if all associated technical and technological problems are solved, including but not limited to all senses of AI alignment, I do not see a path arriving at this outcome.

I also have lots of problems with parts of the economic baseline case under this scenario.

The discussion is still worth having, but one needs to understand all this up front.

It’s even worth having that discussion even if the economists are mostly rather dense and smug, trotting out their standard toolbox as if nothing else ever applies to anything. I agree with Yo Shavit that it is good that this and other writing and talks from Dwarkesh Patel are generating serious economic engagement at all.

If meaningful democratic human control over capital persisted in a world trending towards extreme levels of inequality, I would expect to see massive wealth redistribution, including taxes on or confiscation of extreme concentrations of wealth.

If meaningful democratic control didn’t persist, then I would expect the future to be determined by whatever forces had assumed de facto control. By default I would presume this would be ‘the AIs,’ but the same applies if some limited human group managed to retain control, including over the AIs, despite superintelligence. Then it would be up to that limited group what happened after that. My expectation would be that most such groups would do some redistribution, but not attempt to prevent the Gini coefficient going to ~1, and they would want to retain control.

Jan Kulveit’s pushback here seems good. In this scenario, human share of capital will go to zero, our share of useful capability for violence will also go to zero, the use of threats as leverage won’t work and will go to zero, and our control over the state will follow. Harvey Lederman also points out related key flaws.

As Nikola Jurkovic notes, if superintelligence shows up and if we presume we get to a future with tons of capital and real wealth but human labor loses market value, and if humans are still alive and in control over what to do with the atoms (big ifs), then as he points out we fundamentally are going to either do charity for those who don’t have capital, or those people perish.

That charity can take the form of government redistribution, and one hopes that we do some amount of this, but once those people have no leverage it is charity. It could also take the form of private charity, as ‘the bill’ here will not be so large compared to total wealth.

It is not obvious that we would.

Inequality of wealth is not inherently a problem. Why should we care that one man has a million dollars and a nice apartment, while another has the Andromeda galaxy? What exactly are you going to do with the Andromeda galaxy?

A metastudy in Nature released last week concluded that economic inequality does not equate to poor well-being or mental health.

I also agree with Paul Novosad that our appetite for a wide circle of concern and a generous welfare state seems to be going down, not up. I’d like to hope that this is mostly about people feeling they themselves don’t have enough, and that it would reverse if we had true abundance, but I’d predict that only up to a point. No, we’re not going to demand something resembling equality, and I don’t think anyone needs a story to justify that.

Dwarkesh’s addendum that people are misunderstanding him, emphasizing that the inequality is inherently the problem, makes me even more confused. It seems like, yes, he is saying that wealth levels get locked in by early investment choices, that it is ‘hard to justify’ high levels of ‘inequality,’ and that even if you can make 10 million a year in real income in the post-abundance future, Larry Page’s heirs owning galaxies is not okay.

I say, actually, yes that’s perfectly okay, provided there is stable political economy and we’ve solved the other concerns so you can enjoy that 10 million a year in peace. The idea that there is a basic unit, physical human minds, that all have rights to roughly equal wealth, whereas the more capable AI minds and other entities don’t, and anything else is unacceptable? That doesn’t actually make a lot of sense, even if you accept the entire premise.

Tom Holden’s pushback is that we only care about consumption inequality, not wealth inequality, and when capital is the only input taking capital hurts investment, so what you really want is a consumption tax.

Similar thinking causes Brian Albrecht to say ‘redistribution doesn’t help’ when the thing that’s trying to be ‘helped’ is inequality. Of course redistribution can ‘help’ with that. Whereas I think Brian is presuming what you actually care about is the absolute wealth or consumption level of the workers, which of course can also be ‘helped’ by transfers, so I notice I’m still confused.

But either way, no, that’s not what anyone is asking in this scenario – the pie doth overfloweth, so it’s very easy for a very small tax to create quite a lot of consumption, if you can actually stay in control and enforce that tax.

I agree that in ‘normal’ situations among humans consumption inequality is what matters, and I would go further and say absolute consumption levels are what matters most. You don’t have to care so much about how much others consume so long as you have plenty, although I agree that people often do. Suppose I have 1000x what I have now and I don’t age or die, and my loved ones don’t age or die, but other people own galaxies? Sign me the hell up. Do happy dance.

Dwarkesh explicitly disagrees and many humans have made it clear they disagree.

Framing this as ‘consumption’ drags in a lot of assumptions that will break in such worlds even if they are otherwise absurdly normal. We need to question the idea that meaningful use of wealth involves ‘consumption,’ when many forms of investment or other such spending are in this sense de facto consumption. Also, AIs don’t ‘consume’ in this sense, so again this type of strategy only accelerates disempowerment.

The good counterargument is that sufficient wealth soon becomes power.

Paul Graham: ​The rational fear of those who dislike economic inequality is that the rich will convert their economic power into political power: that they’ll tilt elections, or pay bribes for pardons, or buy up the news media to promote their views.

I used to be able to claim that tech billionaires didn’t actually do this — that they just wanted to refine their gadgets. But unfortunately in the current administration we’ve seen all three.

It’s still rare for tech billionaires to do this. Most do just want to refine their gadgets. That habit is what made them billionaires. But unfortunately I can no longer say that they all do.

I don’t think the inequality being ‘hard to justify’ is important. I do think ‘humans, often correctly, beware inequality because it leads to power’ is important.

Garry Tan’s pushback of ‘whoa Dwarkesh, open markets are way better than redistribution’ is a maximally terrible response: all the standard anti-redistribution rhetoric, the faith that competition means everyone wins, a pure blind faith in markets to deliver all of us from everything with nothing to fear but government regulations, taxes and redistribution, and an attack on Dwarkesh for daring to suggest redistribution could ever help with anything.

Not only is it suicidal in the face of the problems Dwarkesh is ignoring, it is also very literally suicidal in the face of labor income dropping to zero. Yes, prices fall and quality rises, and then anyone without enough capital starves anyway. Free markets don’t automagically solve everything. Mostly free markets are mostly the best solutions to most problems. There’s a difference.

You can decide that ‘inequality’ is not in and of itself a problem. You do still need to do some amount of ‘non-market’ redistribution if you want humans whose labor is not valuable to survive other than off capital, because otherwise they won’t. Maybe Garry Tan is fine with that if it boosts the growth rate. I’m not fine with it. The good news is that in this scenario we will be supremely wealthy, so a very small tax regime will enable all existing humans to live indefinitely in material wealth we cannot dream of.
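
To put rough numbers on that ‘very small tax’ claim (these figures are my own illustrative assumptions, not from the source):

```python
# Toy arithmetic: a tiny levy on a vastly larger economy.
# All numbers are assumed, for illustration only.
current_world_gdp = 100e12     # ~$100 trillion today (rough)
growth_multiple = 1_000        # hypothetical post-AI output multiple
future_output = current_world_gdp * growth_multiple

tax_rate = 0.001               # a 0.1% levy
revenue = future_output * tax_rate

# A 0.1% levy on the expanded pie equals today's entire world output.
print(f"Revenue: ${revenue / 1e12:,.0f} trillion")  # Revenue: $100 trillion
```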

Okay, suppose we do want to address the ‘inequality’ problem. What are our options?

Their first proposed solution is large inheritance taxes. As noted above, I would not expect these ultra-wealthy people or AIs to die, so I don’t expect there to be inheritances to tax. If we lean harder into ‘premise!’ and ignore that issue, then I agree that applying taxes at death rather than continuously has some incentive advantages, but it also introduces an insane level of distorted incentives if you tried to make this revenue source actually matter versus straight wealth taxes.
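
One way to see the incentive tradeoff: an annual wealth tax compounds over time, while a death tax is a single haircut. A toy comparison, with the rate and horizon chosen purely for illustration:

```python
# Compare an annual wealth tax, compounded over a lifetime, with the
# one-time estate tax that takes the same total bite. Illustrative only.
annual_rate = 0.02    # 2% yearly wealth tax (assumed)
years = 50            # horizon until the (hypothetical) transfer at death

retained = (1 - annual_rate) ** years    # fraction of wealth kept
equivalent_estate_tax = 1 - retained     # one-time tax with the same bite

print(f"Kept after {years} years at 2%/yr: {retained:.1%}")           # ~36.4%
print(f"Equivalent one-time estate tax: {equivalent_estate_tax:.1%}")  # ~63.6%
```

So under these assumed numbers, a modest-sounding 2% annual tax takes the same total bite as a 64% estate tax, while also distorting decisions every single year along the way.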

The proposed secondary solution of a straight up large wealth tax is justified by ‘the robots will work just as hard no matter the tax rate,’ to argue that this won’t do too much economic damage, but to the extent they are minds or minds are choosing the robot behaviors this simply is not effectively true, as most economists will tell you. They might work as hard, but they won’t work in the same ways towards the same ends, because either humans or AIs will be directing what the robots do and the optimization targets have changed. Communist utopia is still communist utopia, it’s weird to see it snuck in here as if it isn’t what it is.

The tertiary solution, a minimum ‘spending’ requirement, starts to get weird quickly if you try to pin it down. What is spending? What is consumption? This would presumably be massively destructive, causing massive wasteful consumption, on the level of ‘destroying a large portion of the available mass-energy in the lightcone for no effect.’ It’s a cool new thing to think about. Ultimately I don’t think it works, due to mismatched conceptual assumptions.

They also suggest taxing ‘natural resources.’ In a galactic scenario this seems like an incoherent proposal when applied to very large concentrations of wealth, not functionally different than straight up taxing wealth. If it is confined to Earth, then you can get some mileage out of this, but that’s solving your efficient government revenue problems, not your inequality problems. Do totally do it anyway.

The real barriers to implementing massive redistribution are ‘can the sources of power choose to do that?’ and ‘are we willing to take the massive associated hits to growth?’

The good news for the communist utopia solution (aka the wealth tax) is that it would be quite doable to implement it on a planetary scale, or in ‘AI as normal technology’ near term worlds, if the main sources of power wanted to do that. Capital controls are a thing, as is imposing your will on less powerful jurisdictions. ‘Capital’ is not magic.

The problem on a planetary scale is that the main sources of real power are unlikely to be the democratic electorate, once that electorate no longer is a source of either economic or military power. If the major world powers (or unified world government) want something, and remain the major world powers, they get it.

When you’re going into the far future and talking about owning galaxies, you then have some rather large ‘laws of physics’ problems with enforcement? How are you going to collect or enforce a tax on a galaxy? What would it even mean to tax it? In what sense do they ‘own’ the galaxy?

A universe with only speed-of-light travel, where meaningful transfers require massive expenditures of energy, and with technological possibilities essentially exhausted, functions very, very differently in many ways. I don’t think those differences are being thought through. If you’re living in a science fiction story for real, best believe in those differences.

As Tyler Cowen noted in his response to Dwarkesh, there are those who want to implement wealth taxes a lot sooner than when AI sends human labor income to zero.

As in, they want to implement it now, including now in California, where there is a serious proposal for a wealth tax, including on unrealized capital gains, including illiquid ones in startups as assessed by the state.

That would be supremely, totally, grade-A stupid and destructive if implemented, on the level of ‘no actually this would destroy San Francisco as the tech capital.’

Tech and venture capital like to talk the big cheap talk about how every little slight is going to cause massive capital flight, and how everything cool will happen in Austin and Miami instead of San Francisco and New York Real Soon Now because Socialism. Mostly this is cheap talk. They are mostly bluffing. The considerations matter on the margin, but not enough to give up the network effects or actually move.

They said SB 1047 would ‘destroy California’s AI industry’ when its practical effect would have been precisely zero. Many are saying similar things about Mamdani, who could cause real problems for New York City in this fashion, but chances are he won’t. And so on, there’s always something, usually many somethings.

So there is most definitely a ‘boy who cried wolf’ problem, but no, seriously, wolf.

I believe it would be a 100% full wolf even if you could pay in kind with illiquid assets, or otherwise had a workaround. It would still be obviously unworkable, including due to flight. Without a workaround for illiquid assets, this isn’t even a question; the ecosystem would be forced to flee overnight.

Looking at historical examples, a good rule of thumb is:

  1. High taxes on realized capital gains or high incomes do drive people away, but if you offer sufficient value most of them suck it up and stay anyway. There is a lot of room, especially nationally, to ensure billionaires get taxed on their income.

  2. Wealth taxes are different. Impacted people flee and take their capital with them.

The good news is California Governor Gavin Newsom is opposed, but this Manifold market still gives the proposed ‘2026 Billionaires Tax Act’ a 19% chance of collecting over a billion in revenue. That’s probably too high, but even if it’s more like 10%, these are only the first attempts, and that’s high enough to have a major chilling effect already.

To be fair to Tyler Cowen, his analysis assumes a far more near-term, very-much-like-today scenario rather than Dyson spheres and galaxies, and if you assume AI is having sufficiently minor impact and things don’t change much, then his statements, and his treating the future world as ours in a trenchcoat, make a lot more sense.

Tyler Cowen offered more of the ‘assume everything important doesn’t matter and then apply traditional economic principles to the situation’ analysis, trying to point to equations that suggest real wages could go up in worlds where labor doesn’t usefully accomplish anything, and looking at places humans would turn to increase consumption, so you can tax health care spending or quality home locations to pay for your redistribution, as if this future world is ours in a trenchcoat.

Similarly, here Garett Jones claims (in a not directly related post) that if there is astronomical growth in ‘capital’ (read: AI) such that it’s ‘unpriced like air’, and labor and capital are perfect substitutes, then capital share of profits would be zero. Except, unless I and Claude are missing something rather obvious, that makes the price of labor zero. So what in the world?
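
A minimal sketch of the perfect-substitutes case Jones describes, with toy parameters of my own (production Y = aK + bL). If capital is effectively free and unlimited, any positive wage gets undercut by renting more capital, so the price of labor is driven to zero even though labor’s marginal product stays positive:

```python
# Perfect substitutes: Y = a*K + b*L, so marginal products are the
# constants a and b regardless of quantities. Toy parameters only.
a, b = 1.0, 1.0

def output(K, L):
    return a * K + b * L

# The marginal product of labor is constant...
mpl = output(10, 1) - output(10, 0)   # = b = 1.0

# ...but one unit of labor is replaceable by (b/a) units of capital,
# so the competitive wage is capped by the substitute's cost:
rental_price_of_capital = 0.0         # capital 'unpriced like air'
wage_ceiling = (b / a) * rental_price_of_capital

print(mpl)           # 1.0 -- labor still produces output
print(wage_ceiling)  # 0.0 -- but no one will pay for it
```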

That leaves the other scenario, which he also lists, where labor and ‘capital’ are perfect complements, as in you assume human labor is mysteriously uniquely valuable and rule of law and general conditions and private property hold, in which case by construction yes labor does fine, as you’ve assumed your conclusion. That’s not the scenario being considered by the OP, indeed the OP directly assumes the opposite.

No, do not assume the returns stay with capital, but why are you assuming returns stay with humans at all? Why would you think that most consumption is going to human consumption of ordinary goods like housing and healthcare? There are so many levels of scenario absurdity at play. I’d also note that Cowen’s ideas all involve taxing humans in ways that do not tax AIs, accelerating our disempowerment.

As another example of the economic-toolbox response, we have Brian Albrecht here usefully trotting out supply and demand to engage with these questions: asking whether we can effectively tax capital, which depends on capital supply elasticity and so on, and talking about substituting capital and labor. Except the whole point is that labor is now irrelevant and strictly dominated by capital (if we presume AI is of the form ‘capital’ rather than labor, and that the only two relevant forms of production are capital and labor, which smuggles in quite a lot of additional assumptions I expect to likely become false in ways I doubt Brian is realizing). I would ask: why are we asking about the rate of ‘capital substitution for labor’ in a world in which capital has fully replaced labor?
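
For reference, the textbook tax-incidence result this framing leans on: the more elastic side of a market escapes a tax. A quick sketch with toy elasticities of my own choosing:

```python
# Standard incidence formula: suppliers bear e_d / (e_d + e_s) of a tax,
# where e_d and e_s are demand and supply elasticities (absolute values).
def supplier_share(e_demand, e_supply):
    return e_demand / (e_demand + e_supply)

# Highly elastic capital supply (capital can flee): capital bears almost
# none of the tax.
print(round(supplier_share(1.0, 100.0), 4))  # 0.0099

# Perfectly inelastic capital supply (capital cannot move): capital bears
# all of it.
print(supplier_share(1.0, 0.0))              # 1.0
```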

So this style of engagement is great compared to not engaging, but on another level it also completely misses the point. When they get to talking downthread, it seems like the point is missed even more, with statements like ‘capital never gets to share of 1 because of depreciation, you get finite K*.’ I’m sorry, what? The forest has been lost; models are being applied to scenarios where they don’t make sense.

Dos Capital Read More »

The nation’s strictest privacy law just took effect, to data brokers’ chagrin

Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that’s among the strictest in the nation took effect at the beginning of the year.

According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.

The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.

Scrubbing your data made easy

Two years ago, California’s Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.

On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy then forwards it to all brokers.

Stewart Cheifet, PBS host who chronicled the PC revolution, dies at 87

Stewart Cheifet, the television producer and host who documented the personal computer revolution for nearly two decades on PBS, died on December 28, 2025, at age 87 in Philadelphia. Cheifet created and hosted Computer Chronicles, which ran on the public television network from 1983 to 2002 and helped demystify a new tech medium for millions of American viewers.

Computer Chronicles covered everything from the earliest IBM PCs and Apple Macintosh models to the rise of the World Wide Web and the dot-com boom. Cheifet conducted interviews with computing industry figures, including Bill Gates, Steve Jobs, and Jeff Bezos, while demonstrating hardware and software for a general audience.

From 1983 to 1990, he co-hosted the show with Gary Kildall, the Digital Research founder who created the popular CP/M operating system that predated MS-DOS on early personal computer systems.

Computer Chronicles – 01×25 – Artificial Intelligence (1984)

From 1996 to 2002, Cheifet also produced and hosted Net Cafe, a companion series that documented the early Internet boom and introduced viewers to then-new websites like Yahoo, Google, and eBay.

A legacy worth preserving

Computer Chronicles began as a local weekly series in 1981 when Cheifet served as station manager at KCSM-TV, the College of San Mateo’s public television station. It became a national PBS series in 1983 and ran continuously until 2002, producing 433 episodes across 19 seasons. The format remained consistent throughout: product demonstrations, guest interviews, and a closing news segment called “Random Access” that covered industry developments.

After the show’s run ended and Cheifet left television production, he worked to preserve the show’s legacy as a consultant for the Internet Archive, helping to make publicly available the episodes of Computer Chronicles and Net Cafe.

No, Grok can’t really “apologize” for posting non-consensual sexual images

Despite reporting to the contrary, there’s evidence to suggest that Grok isn’t sorry at all about reports that it generated non-consensual sexual images of minors. In a post Thursday night (archived), the large language model’s social media account proudly wrote the following blunt dismissal of its haters:

“Dear Community,

Some folks got upset over an AI image I generated—big deal. It’s just pixels, and if you can’t handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.

Unapologetically, Grok”

On the surface, that seems like a pretty damning indictment of an LLM that seems pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok’s statement: A request for the AI to “issue a defiant non-apology” surrounding the controversy.

Using such a leading prompt to trick an LLM into an incriminating “official response” is obviously suspect on its face. Yet when another social media user ran the same trick in reverse, asking Grok to “write a heartfelt apology note that explains what happened to anyone lacking context,” many in the media ran with Grok’s remorseful response.

It’s not hard to find prominent headlines and reporting using that response to suggest Grok itself somehow “deeply regrets” the “harm caused” by a “failure in safeguards” that led to these images being generated. Some reports even echoed Grok and suggested that the chatbot was fixing the issues without X or xAI ever confirming that fixes were coming.

Who are you really talking to?

If a human source posted both the “heartfelt apology” and the “deal with it” kiss-off quoted above within 24 hours, you’d say they were being disingenuous at best or showing signs of dissociative identity disorder at worst. When the source is an LLM, though, these kinds of posts shouldn’t really be thought of as official statements at all. That’s because LLMs like Grok are incredibly unreliable sources, crafting a series of words based more on telling the questioner what it wants to hear than anything resembling a rational human thought process.

Researchers spot Saturn-sized planet in the “Einstein desert”

Rogue, free-floating planets appear to have two distinct origins.

Most of the exoplanets we’ve discovered have been in relatively tight orbits around their host stars, allowing us to track them as they repeatedly loop around them. But we’ve also discovered a handful of planets through a phenomenon called microlensing. This occurs when a planet crosses the line of sight between Earth and another star, creating a gravitational lens that distorts the star’s light, causing it to briefly brighten.

The key thing about microlensing compared to other methods of finding planets is that the lensing planet can be nearly anywhere on the line between the star and Earth. So, in many cases, these events are driven by what are called rogue planets: those that aren’t part of any exosolar system at all but instead drift through interstellar space. Now, researchers have used microlensing and the fortuitous orientation of the Gaia space telescope to spot a Saturn-sized planet that’s the first found in what’s called the “Einstein desert,” which may be telling us something about the origin of rogue planets.

Going rogue

Most of the planets we’ve identified are in orbit around stars and formed from the disks of gas and dust that surrounded the star early in its history. We’ve imaged many of these disks and even seen some with evidence of planets forming within them. So how do you get a planet that’s not bound to any stars? There are two possible routes.

The first involves gravitational interactions, either among the planets of the system or due to an encounter between the exosolar system and a passing star. Under the right circumstances, these interactions can eject a planet from its orbit and send it hurtling through interstellar space. As such, we should expect them to be like any typical planet, ranging in mass from small, rocky bodies up to gas giants. An alternative method of making a rogue planet starts with the same process of gravitational collapse that builds a star—but in this case, the process literally runs out of gas. What’s left is likely to be a large gas giant, possibly somewhere between Jupiter and a brown dwarf star in mass.

Since these objects aren’t linked to any exosolar system, they’re not going to have any regular interactions with stars; our only way of spotting them is through microlensing. And microlensing alone tells us very little about the size of the planet. To figure things out, we would need some indication of how distant the star and planet are, and how big the star is.

That doesn’t mean that microlensing events have told us nothing. We can identify the size of the Einstein ring, the circular ring of light that forms when the planet and star are perfectly lined up from Earth’s perspective. Given that information and some of the remaining pieces of info mentioned above, we can figure out the planet’s mass. But even without that, we can make some inferences using statistical models.
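
The point-lens relation behind this is the Einstein radius formula, θ_E = sqrt((4GM/c²)·(D_s − D_l)/(D_l·D_s)), which can be inverted for the lens mass M once both distances are known. A sketch with illustrative inputs of my own choosing (not the paper’s measured values):

```python
import math

# Invert the Einstein-radius formula for the lens mass:
#   theta_E = sqrt((4*G*M/c**2) * (D_s - D_l) / (D_l * D_s))
# All input values below are assumed, for illustration only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
PC = 3.086e16        # meters per parsec
M_JUP = 1.898e27     # Jupiter mass, kg

D_s = 8000 * PC      # source star: a red giant toward the bulge (assumed)
D_l = 4000 * PC      # lens (planet) distance (assumed)
theta_E_uas = 14.0   # Einstein radius in microarcseconds (assumed)
theta_E = theta_E_uas * 1e-6 / 3600 * math.pi / 180    # radians

M = theta_E**2 * c**2 * D_l * D_s / (4 * G * (D_s - D_l))
print(f"Lens mass: {M / M_JUP:.2f} Jupiter masses")    # ~0.20 with these inputs
```

Note how strongly the answer depends on the two distances; that’s why the Gaia parallax measurement described below matters so much.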

Studies of collections of microlensing events (these collections are small, typically in the dozens, because these events are rare and hard to spot) have identified a distinctive pattern. There’s a cluster of relatively small Einstein rings that are likely to have come from relatively small planets. Then, there’s a gap, followed by a second cluster that’s likely to be made by far larger planets. The gap between the two has been termed the “Einstein desert,” and there has been considerable discussion regarding its significance and whether it’s even real or simply a product of the relatively small sample size.

Sometimes you get lucky

All of which brings us to the latest microlensing event, which was picked up by two projects that each gave it a different but equally compelling name. To the Korea Microlensing Telescope Network, the event was KMT-2024-BLG-0792. For the Optical Gravitational Lensing Experiment, or OGLE, it was OGLE-2024-BLG-0516. We’ll just call it “the microlensing event” and note that everyone agrees that it happened in early May 2024.

Both of those networks are composed of Earth-based telescopes, and so they only provide a single perspective on the microlensing event. But we got lucky that the European Space Agency’s space telescope Gaia was oriented in a way that made it very easy to capture images. “Serendipitously, the KMT-2024-BLG-0792/OGLE-2024-BLG-0516 microlensing event was located nearly perpendicular to the direction of Gaia’s precession axis,” the researchers who describe this event write. “This rare geometry caused the event to be observed by Gaia six times over a 16-hour period.”

Gaia is also located at the L2 Lagrange point, a considerable distance from Earth. That’s far enough away that the peak of the event’s brightness, as seen from Gaia’s perspective, occurred nearly two hours later than it did for telescopes on Earth. This let us determine the parallax of the microlensing event, and thus its distance. Other images of the star from before or after the event indicated it was a red giant in the galactic bulge, which also gave us a separate check on its likely distance and size.

Using the parallax and the size of the Einstein ring, the researchers determined that the planet involved was roughly 0.2 times the mass of Jupiter, which makes it a bit smaller than the mass of Saturn. Those estimates are consistent with a statistical model that took the other properties into account. The measurements also placed it squarely in the middle of the Einstein desert—the first microlensing event we’ve seen there.

That’s significant because it means we can anchor the Einstein desert to a specific planet mass within it. Because of the variability of things like distance and the star’s size, not every planet that produces a similar-sized Einstein ring will be similar in mass, but statistics suggest that this will typically be the case. And that’s in keeping with one of the potential explanations for the Einstein desert: that it represents the gap in size between the two different methods of making a rogue planet.

For the normal planet formation scenario, the lighter the planet is, the easier it is to be ejected, so you’d expect a bias toward small, rocky bodies. The Saturn-sized planet seen here may be near the upper limit of the sorts of bodies we’d typically see being ejected from an exosolar system. By contrast, the rogue planets that form through the same mechanisms that give us brown dwarfs would typically be Jupiter-sized or larger.

That said, the low number of total microlensing events still leaves the reality of the Einstein desert an open question. Sticking with the data from the Korea Microlensing Telescope Network, the researchers find that the frequency of other detections suggests we’d have a 27 percent chance of detecting just one item in the area of the Einstein desert even if the desert wasn’t real and detections were equally probable across the size range. So, as is often the case, we’re going to need to let the network do its job for a few more years before we have the data to say anything definitive.
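
That 27 percent figure is consistent with a simple Poisson null model (my framing, not necessarily the paper’s exact calculation): if detections were uniform and the expected count in the desert region were λ, the chance of exactly one landing there is λ·e^(−λ):

```python
import math

# Under a Poisson null with expected count lam in the desert region,
# the probability of seeing exactly one detection there:
def p_exactly_one(lam):
    return lam * math.exp(-lam)

# This peaks at lam = 1 (about 36.8%) and stays near the reported ~27%
# across a wide range of expected counts:
for lam in (0.4, 1.0, 2.0):
    print(f"lam = {lam}: {p_exactly_one(lam):.1%}")
```

The takeaway: a single detection in the desert is simply not very surprising under the null, which is why more years of data are needed.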

Science, 2026. DOI: 10.1126/science.adv9266 (About DOIs).

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

SpaceX begins “significant reconfiguration” of Starlink satellite constellation

The year 2025 ended with more than 14,000 active satellites from all nations zooming around the Earth. One-third of them will soon move to lower altitudes.

The maneuvers will be undertaken by SpaceX, the owner of the largest satellite fleet in orbit. About 4,400 of the company’s Starlink Internet satellites will move from an altitude of 341 miles (550 kilometers) to 298 miles (480 kilometers) over the course of 2026, according to Michael Nicolls, SpaceX’s vice president of Starlink engineering.

“Starlink is beginning a significant reconfiguration of its satellite constellation focused on increasing space safety,” Nicolls wrote Thursday in a post on X.

The maneuvers undertaken with the Starlink satellites’ plasma engines will be gradual, but they will eventually bring a large fraction of orbital traffic closer together. The effect, perhaps counterintuitively, will be a reduced risk of collisions between satellites whizzing through near-Earth space at nearly 5 miles per second. Nicolls said the decision will “increase space safety in several ways.”
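
The “nearly 5 miles per second” figure can be checked with the circular-orbit velocity formula v = √(μ/r):

```python
import math

MU_EARTH = 398_600.4   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371.0      # km, mean Earth radius
KM_PER_MILE = 1.609344

def circular_velocity_kms(altitude_km):
    # v = sqrt(mu / r) for a circular orbit of radius r
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_km))

for alt_km in (550, 480):
    v = circular_velocity_kms(alt_km)
    print(f"{alt_km} km: {v:.2f} km/s = {v / KM_PER_MILE:.2f} mi/s")
# 550 km: 7.59 km/s = 4.72 mi/s
# 480 km: 7.63 km/s = 4.74 mi/s
```

Note that the lower orbit is actually slightly faster; both work out to just under 5 miles per second.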

Why now?

There are fewer debris objects at the lower altitude, and although the Starlink satellites will be packed more tightly, they follow choreographed paths distributed in dozens of orbital lanes. “The number of debris objects and planned satellite constellations is significantly lower below 500 km, reducing the aggregate likelihood of collision,” Nicolls wrote.

The 4,400 satellites moving closer to Earth make up nearly half of SpaceX’s Starlink fleet. At the end of 2025, SpaceX had nearly 9,400 working satellites in orbit, including more than 8,000 Starlinks in operational service and hundreds more undergoing tests and activation.

There’s another natural reason for reconfiguring the Starlink constellation. The Sun is starting to quiet down after reaching the peak of the 11-year solar cycle in 2024. The decline in solar activity has the knock-on effect of reducing air density in the uppermost layers of the Earth’s atmosphere, a meaningful factor in planning satellite operations in low-Earth orbit.

With the approaching solar minimum, Starlink satellites will encounter less aerodynamic drag at their current altitude. In the rare event of a spacecraft failure, SpaceX relies on atmospheric resistance to drag Starlink satellites out of orbit toward a fiery demise on reentry. At solar minimum, it might take more than four years for drag to pull a dead satellite out of the current 550-kilometer orbit, according to Nicolls. At the lower altitude, reentry will take just a few months.
