Author name: Beth Washington


Mastodon’s founder cedes control, refuses to become next Musk or Zuckerberg

And perhaps in a nod to Meta’s recent changes, Mastodon also vowed to “invest deeply in trust and safety” and ensure “everyone, especially marginalized communities,” feels “safe” on the platform.

To become a more user-focused paradise of “resilient, governable, open and safe digital spaces,” Mastodon is going to need a lot more funding. The blog called for donations to help fund an annual operating budget of $5.1 million (5 million euros) in 2025. That’s a massive leap from the $152,476 (149,400 euros) total operating expenses Mastodon reported in 2023.

Other social networks wary of EU regulations

Mastodon has decided to continue basing its operations in Europe, while still maintaining a separate US-based nonprofit entity as a “fundraising hub,” the blog said.

It will take time, Mastodon said, to “select the appropriate jurisdiction and structure in Europe” before Mastodon can then “determine which other (subsidiary) legal structures are needed to support operations and sustainability.”

While Mastodon is carefully getting re-settled as a nonprofit in Europe, Zuckerberg this week went on Joe Rogan’s podcast to call on Donald Trump to help US tech companies fight European Union fines, Politico reported.

Some critics suggest the recent policy changes on Meta platforms were intended to win Trump’s favor, partly to get Trump on Meta’s side in the fight against the EU’s strict digital laws. According to France24, Musk’s recent combativeness with EU officials suggests Musk might team up with Zuckerberg in that fight (unlike that cage fight pitting the wealthy tech titans against each other that never happened).

Experts told France24 that EU officials may “perhaps wrongly” already be fearful about ruffling Trump’s feathers by targeting his tech allies and would likely need to use the “full legal arsenal” of EU digital laws to “stand up to Big Tech” once Trump’s next term starts.

As Big Tech prepares to continue battling EU regulators, Mastodon appears to be taking a different route, laying roots in Europe and “establishing the appropriate governance and leadership frameworks that reflect the nature and purpose of Mastodon as a whole” and “responsibly serve the community,” its blog said.

“Our core mission remains the same: to create the tools and digital spaces where people can build authentic, constructive online communities free from ads, data exploitation, manipulative algorithms, or corporate monopolies,” Mastodon’s blog said.



New York starts enforcing $15 broadband law that ISPs tried to kill

1.7 million New York households lost FCC discount

The order said quick implementation of the law is important because of “developments at the federal level impacting the affordability of broadband service.” About 1.7 million New York households, and 23 million nationwide, used to receive a monthly discount through an FCC program that expired in mid-2024 after Congress failed to provide more funding.

“For this reason, consumer benefit programs assisting low-income households—such as the ABA—are even more critical to ensure that the digital divide for low-income New Yorkers is being addressed,” the New York order said.

New York ISPs can obtain an exemption from the low-cost broadband law if they “provide service to no more than 20,000 households and the Commission determines that compliance with such requirements would result in ‘unreasonable or unsustainable financial impact on the broadband service provider,'” the order said.

Over 40 small ISPs filed for exemptions in 2021 before the law was blocked by a judge. Those ISPs and potentially others will be given one-month exemptions if they file paperwork by Wednesday stating that they meet the subscriber threshold. ISPs must submit detailed financial information by February 15 to obtain longer-term exemptions.

“All other ISPs (i.e., those with more than 20,000 subscribers) must comply with the ABA by January 15, 2025,” the order said. Failure to comply can be punished with civil penalties of up to $1,000 per violation. The law applies to wireline, fixed wireless, and satellite providers.

Charter Spectrum currently advertises a $25-per-month plan with 50Mbps speeds for low-income households. Comcast and Optimum have $15 plans. Verizon has a low-income program reducing the cost of some home Internet plans to as low as $20 a month.

Disclosure: The Advance/Newhouse Partnership, which owns 12.3 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.



Report: After many leaks, Switch 2 announcement could come “this week”

Nintendo may be getting ready to make its Switch 2 console official. According to “industry whispers” collected by Eurogamer, as well as reporting from The Verge’s Tom Warren, the Switch 2 could be formally announced sometime this week. Eurogamer suggests the reveal is scheduled for this Thursday, January 16.

The reporting also suggests that the reveal will focus mostly on the console’s hardware design, with another game-centered announcement coming later. Eurogamer reports that the console won’t be ready to launch until April; this would be similar to Nintendo’s strategy for the original Switch, which was announced in mid-January 2017 but not launched until March.

Many things about the Switch 2’s physical hardware design have been thoroughly leaked at this point, thanks mostly to accessory makers who have been showing off their upcoming cases. Accessory maker Genki was at CES last week with a 3D-printed replica of the console based on the real thing, suggesting a much larger but still familiar-looking console with a design and button layout similar to the current Switch.

On the inside, the console is said to sport a new Nvidia-designed Arm processor with a much more powerful GPU and more RAM than the current Switch. Eurogamer reports that the chip, dubbed “T239,” includes 1,536 CUDA cores based on the Ampere architecture, the same architecture used in 2020’s GeForce RTX 30-series graphics cards on the PC.



Supreme Court lets Hawaii sue oil companies over climate change effects

On Monday, the Supreme Court declined to decide whether to block lawsuits that Honolulu filed to seek billions in damages from oil and gas companies over allegedly deceptive marketing campaigns that hid the effects of climate change.

Now those lawsuits can proceed, surely frustrating the fossil fuel industry, which felt that SCOTUS should have weighed in on this key “recurring question of extraordinary importance to the energy industry” raised in lawsuits seeking similarly high damages in several states, CBS News reported.

Defendants Sunoco and Shell, along with 15 other energy companies, had asked the court to intervene and stop the Hawaii lawsuits from proceeding. They had hoped to move the cases out of Hawaii state courts by arguing that interstate pollution is governed by federal law and the Clean Air Act.

The oil and gas companies continue to argue that greenhouse gas emissions “flow from billions of daily choices, over more than a century, by governments, companies, and individuals about what types of fuels to use, and how to use them.” Because of this, the companies believe Honolulu was wrong to demand damages based on the “cumulative effect of worldwide emissions leading to global climate change.”

“In these cases, state and local governments are attempting to assert control over the nation’s energy policies by holding energy companies liable for worldwide conduct in ways that starkly conflict with the policies and priorities of the federal government,” oil and gas companies unsuccessfully argued in their attempt to persuade SCOTUS to grant review. “That flouts this court’s precedents and basic principles of federalism, and the court should put a stop to it.”



The 8 most interesting PC monitors from CES 2025


Monitors worth monitoring

Here are upcoming computer screens with features that weren’t around last year.

Yes, that’s two monitors in a suitcase.


Plenty of computer monitors debuted at the Consumer Electronics Show (CES) in Las Vegas this year, but many of the updates were minor and could have easily been part of 2024’s show.

But some brought new and interesting features to the table for 2025—in this article, we’ll tell you all about them.

LG’s 6K monitor

Pixel addicts are always right at home at CES, and the most interesting high-resolution computer monitor to come out of this year’s show is the LG UltraFine 6K Monitor (model 32U990A).

People seeking more than 3840×2160 resolution have limited options, and they’re all rather expensive (looking at you, Apple Pro Display XDR). LG’s 6K monitor means there’s another option for professionals needing extra pixels for things like developing, engineering, and creative work. And LG’s 6144×3456, 32-inch display has extra oomph thanks to something no other 6K monitor has: Thunderbolt 5.

This is the only image LG provided for the monitor. Credit: LG

LG hasn’t confirmed the refresh rate of its 6K monitor, so we don’t know how much bandwidth it needs. But it’s possible that pairing the UltraFine with a Thunderbolt 5 PC could trigger Bandwidth Boost, a Thunderbolt 5 feature that automatically increases bandwidth from 80Gbps to 120Gbps. For comparison, Thunderbolt 4 maxes out at 40Gbps. Thunderbolt 5 also requires 140 W power delivery and maxes out at 240 W. That’s a notable bump from Thunderbolt 4’s 100–140 W.
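To get a feel for why Thunderbolt 5’s extra bandwidth matters here, a back-of-the-envelope calculation of the uncompressed video stream is useful. This is only a sketch: it assumes 10-bit color and ignores blanking intervals and Display Stream Compression, so a real DisplayPort link would need less than these raw figures.

```python
def uncompressed_gbps(width, height, refresh_hz, bits_per_pixel=30):
    """Rough uncompressed video bandwidth in Gbps.

    Ignores blanking intervals and Display Stream Compression (DSC),
    so this overstates what a real DisplayPort link needs.
    """
    return width * height * refresh_hz * bits_per_pixel / 1e9

# The 6144x3456 panel pushes about 2.6x the pixels of a 4K display:
print(round(6144 * 3456 / (3840 * 2160), 2))  # 2.56

# At a hypothetical 60 Hz with 10-bit color, the raw stream is modest:
print(round(uncompressed_gbps(6144, 3456, 60), 1))   # 38.2 Gbps

# At a hypothetical 120 Hz it would exceed Thunderbolt 4's 40 Gbps ceiling:
print(round(uncompressed_gbps(6144, 3456, 120), 1))  # 76.4 Gbps
```

The 60 Hz and 120 Hz figures are illustrative, since LG hasn’t confirmed the panel’s refresh rate.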

Considering that Apple’s only 6K monitor has Thunderbolt 3, Thunderbolt 5 is a differentiator. Ironically, it makes the LG UltraFine better equipped than Apple’s own monitors for use with the new MacBook Pros and Mac mini, all of which have Thunderbolt 5. LG may be aware of this, as the 32U990A’s aesthetic could be considered very Apple-like.

Inside the 32U990A’s silver chassis is a Nano IPS panel. In recent years, LG has advertised its Nano IPS panels as having “nanometer-sized particles” applied to their LED backlight to absorb “excess, unnecessary light wavelengths” for “richer color expression.” LG’s 6K monitor claims to cover 98 percent of DCI-P3 and 99.5 percent of Adobe RGB. IPS Black monitors, meanwhile, have higher contrast ratios (up to 3,000:1) than standard IPS panels. However, LG has released Nano IPS monitors with 2,000:1 contrast, the same contrast ratio as Dell’s 6K, IPS Black monitor.

LG hasn’t shared other details, like price or a release date. But the monitor may cost more than Dell’s Thunderbolt 4-equipped monitor, which is currently $2,480.

Brelyon’s multi-depth monitor


Someone from CNET using the Ultra Reality Extend. Credit: CNET/YouTube

Brelyon is headquartered in San Mateo, California, and was founded by scientists and executives from MIT, IMAX, UCF, and DARPA. It’s been selling display technology for commercial and defense applications since 2022. At CES, the company unveiled the Ultra Reality Extend, describing it as an “immersive display line that renders virtual images in multiple depths.”

“As the first commercial multi-focal monitor, the Extend model offers multi-depth programmability for information overlay, allowing users to see images from 0.7 m to as far as 2.5 m of depth virtually rendered behind the monitor; organizing various data streams at different depth layers, or triggering focal cues to induce an ultra immersive experience akin to looking out through a window,” Brelyon’s announcement said.

Brelyon says the monitor runs 4K at 60 Hz with 1 bit of monocular depth for an 8K effect. The monitor includes “OLED-based curved 2D virtual images, with the largest stretching to 122 inches and extending 2.5 meters deep, viewable through a 30-inch frame,” according to the firm’s announcement. The closer you sit, the greater the field of view you get.
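That relationship between seating distance and apparent size follows directly from geometry: the horizontal field of view a flat screen subtends grows as the viewer moves closer. A quick sketch, assuming a 16:9, 30-inch frame roughly 66 cm wide (the exact dimensions are illustrative, not Brelyon’s specs):

```python
import math

def horizontal_fov_deg(screen_width_cm, viewing_distance_cm):
    """Horizontal field of view subtended by a flat screen, in degrees."""
    return math.degrees(2 * math.atan(screen_width_cm / (2 * viewing_distance_cm)))

FRAME_WIDTH_CM = 66.0  # assumed width of a 16:9, 30-inch frame

# Moving from 80 cm to 50 cm away widens the field of view considerably:
print(round(horizontal_fov_deg(FRAME_WIDTH_CM, 80)))  # ~45 degrees
print(round(horizontal_fov_deg(FRAME_WIDTH_CM, 50)))  # ~67 degrees
```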

The Extend leverages “new GPU capabilities to process light and video signals inside our display platforms,” Brelyon CEO Barmak Heshmat said in a statement this week. He added: “We are thinking beyond headsets and glasses, where we can leverage GPU capabilities to do real-time driving of higher-bandwidth display interfaces.”

Brelyon says this was captured from the Extend, with its camera lens focus changing from 70 cm to 2,500 cm. Credit: Brelyon

Advancements in AI-based video processing, as well as other software advancements and hardware improvements, purportedly enable the Extend to upscale lower-dimension streams to multiple, higher-dimension ones. Brelyon describes its product as a “generative display system” that uses AI computation and optics to assign different depth values to content in real time for rendering images and information overlays.

The idea of a virtual monitor that surpasses the field of view of typical desktop monitors while allowing users to see the real world isn’t new. Tech firms (including many at CES) usually try to accomplish this through AR glasses. But head-mounted displays still struggle with problems like heat, weight, computing resources, battery, and aesthetics.

Brelyon’s monitor seemingly demoed well at CES. Sam Rutherford, a senior writer at Engadget, watched a clip from Marvel’s Spider-Man on the Extend and said that “trees and light poles whipping past in my face felt so real I started to flinch subconsciously.” He added that the monitor separated “different layers of the content to make snow in the foreground look blurry as it whipped across the screen, while characters in the distance” still looked sharp.

The monitor costs $5,000 to $8,000 depending on how you’ll use it and whether you have other business with Brelyon, per Engadget, and CES is one of the few places where people could actually see the display in action.

Samsung’s 3D monitor


Samsung’s depiction of the 3D effect of its 3D PC monitor. Credit: Samsung

It’s 2025, and tech companies are still trying to convince people to bring a 3D display into their homes. This week, Samsung took its first swing since 2009 at 3D screens with the Odyssey 3D monitor.

In lieu of 3D glasses, the Odyssey 3D achieves its 3D effect with a lenticular lens “attached to the front of the panel and its front stereo camera,” Samsung says, as well as eye tracking and view mapping. Differing from other recent 3D monitors, the Odyssey 3D claims to be able to make 2D content look three-dimensional even if that content doesn’t officially support 3D.

You can find more information in our initial coverage of Samsung’s Odyssey 3D, but don’t bet on finding 3D monitors in many people’s homes soon. The technology for quality 3D displays that work without glasses has been around for years but still has never taken off.

Dell’s OLED productivity monitor

With improvements in burn-in, availability, and brightness, finding OLED monitors today is much easier than it was two years ago. But a lot of the OLED monitors released recently target gamers with features like high refresh rates, ultrawide panels, and RGB. These features are unneeded or unwanted by non-gamers but contribute to OLED monitors’ already high pricing. Numerous smaller OLED monitors were announced at CES, with 27-inch, 4K models being a popular addition. Most of them are still high-refresh gaming monitors, though.

The Dell 32-inch QD-OLED, on the other hand, targets “play, school, and work,” Dell’s announcement says. And its naming (based on a new naming convention Dell announced this week that kills XPS and other longstanding branding) signals that this is a mid-tier monitor from Dell’s entry-level lineup.


OLED for normies. Credit: Dell

The monitor’s specs, which include a 120 Hz refresh rate, AMD FreeSync Premium, and USB-C power delivery at up to 90 W, make it a good fit for pairing with many mainstream laptops.

Dell also says this is the first QD-OLED with spatial audio, which uses head tracking to alter audio coming from the monitor’s five 5 W speakers. This is a feature we’ve seen before, but not on an OLED monitor.

For professionals and/or Mac users who prefer the sleek looks, reputation, higher power delivery, and I/O hubs associated with Dell’s popular UltraSharp line, Dell made two more notable announcements at CES: an UltraSharp 32 4K Thunderbolt Hub Monitor (U3225QE) coming out on February 25 for $950 and an UltraSharp 27 4K Thunderbolt Hub Monitor (U2725QE) coming out that same day for $700.

The suitcase monitors

Before we get into the Base Case, please note that this product has no release date because its creators plan to go to market via crowdfunding. Base Case says it will launch its Indiegogo campaign next month, but even then, we don’t know if the project will be funded, if any final product will work as advertised, or if customers will receive orders in a timely fashion. Still, this is one of the most unusual monitors at CES, and it’s worth discussing.

The Base Case is shaped like a 24x14x16.5-inch rolling suitcase, but when you open it up, you’ll find two 24-inch monitors for connecting to a laptop. Each screen reportedly has a 1920×1080 resolution, a 75 Hz refresh rate, and a max brightness claim of 350 nits. Base Case is also advertising PC and Mac support (through DisplayLink), as well as HDMI, USB-C, USB-A, Thunderbolt, and Ethernet ports. Telescoping legs allow the case to rise 10 inches so the display can sit closer to eye level.

Ultimately, the Base Case would see owners lug around a 20-pound product for the ability to quickly create a dual-monitor setup equipped with a healthy amount of I/O. Tom’s Guide demoed a prototype at CES and reported that the monitors took “seconds to set up.”

In case you’re worried that the Base Case prioritizes displays over storage, note that its makers plan on adding a front pocket to the suitcase that can fit a laptop. The pocket wasn’t on the prototype Tom’s Guide saw, though.

Again, this is far from a finalized product, but Base Case has alluded to a $2,400 starting price. For comparison to other briefcase-locked displays—and yes, doing this is possible—LG’s StanbyME Go (27LX5QKNA) tablet in a briefcase currently has a $1,200 MSRP.

Corsair’s PC-mountable touchscreen

A promotional image of the touchscreen.

If the Base Case is on the heftier side of portable monitors, Corsair’s Xeneon Edge is certainly on the minute side. The 14.5-inch LCD touchscreen isn’t meant to be a primary display, though. Corsair built it as a secondary screen for providing quick information, like the song your computer is playing, the weather, the time, and calendar events. You could also use the 2560×720 pixels to display system information, like component usage and temperatures.

Corsair says its iCue software will be able to provide system information on the Xeneon, but because the Xeneon Edge works like a regular monitor, you could (and likely would prefer to) use your own methods. Still, the Xeneon Edge stands out from other small, touchscreen PC monitors with its clean UI that can succinctly communicate a lot of information on the tiny display at once.

Specs-wise, this is a 60 Hz IPS panel with 5-point capacitive touch. Corsair says the monitor can hit 350 nits of brightness.

You can connect the Xeneon Edge to a computer via USB-C (DisplayPort Alt mode) or HDMI. There are also screw holes, so PC builders could install it via a 360 mm radiator mounting point inside their PC case.

Alternatively, Corsair recommends attaching the touchscreen to the outside of a PC case through the monitor’s 14 integrated magnets. Corsair said in a blog post that the “magnets are underneath the plastic casing so the metal surface you stick it to won’t get scratched.” Or, in traditional portable monitor style, the Xeneon Edge could also just sit on a desk with its included stand.


Corsair demos different ways the screen could attach to a case. Credit: TechPowerUp/YouTube

Corsair plans to release the Xeneon Edge in Q2. Expected pricing is “around $249,” Tom’s Hardware reported.

MSI’s side panel display panel

Why attach a monitor to your PC case when you can turn your PC case into a monitor instead?

MSI says the touchscreen embedded into the side panel of this year’s MEG Vision X AI 2nd gaming desktop can work like a regular computer monitor. Similar to Corsair’s monitor, MSI’s display has a corresponding app that can show system information and other customizations, which you can toggle with controls on the front of the case, PCMag reported.

MSI used an IPS panel with 1920×1080 resolution for the display, which also has an integrated mic and speaker. MSI says “electric vehicle control centers” inspired the design. We’ve seen similar PC cases, like iBuyPower’s more translucent side panel display and the touchscreen on Hyte’s pentagonal PC case, before. But MSI is bringing the design to a more mainstream form factor by including it in a prebuilt desktop, potentially opening the door for future touchscreen-equipped desktops.

Considering the various locations people place their desktops and the different angles at which they may try to look at this screen, I’m curious about the monitor’s viewing angles and brightness. IPS seems like a good choice since it tends to retain strong image quality when viewed from different angles. A video PCMag shot from the show floor shows images on the monitor appearing visible and lively:

Hands on with MSI’s MEG Vision X AI Desktop: Now, your PC tower’s a monitor, too.

World’s fastest monitor

There’s a competitive air at CES that leads tech brands to try to one-up each other on spec sheets. Some of the most heated competition concerns monitor refresh rates; for years, we’ve been meeting the new world’s fastest monitor at CES. This year is no different.

The brand behind the monitor is Koorui, a three-year-old Chinese firm whose website currently lists monitors and keyboards. Koorui hasn’t confirmed when it will make its 750 Hz display available, where it will sell it, or what it will cost. That should bring some skepticism about this product actually arriving for purchase in the US. However, Koorui did bring the display to the CES show floor.

The speedy display had a refresh rate test running at CES, and according to several videos we’ve seen from attendees, the monitor appeared to consistently hit the 750 Hz mark.

World’s first 750Hz monitor???

For those keeping track, high-end gaming monitors—namely ones targeting professional gamers—hit 360 Hz in 2020. Koorui’s announcement means max monitor speeds have increased 108.3 percent in four years.
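Those figures are straightforward to verify, and frame time is the more intuitive way to see what the jump buys: each step up in refresh rate shaves milliseconds off how long every frame stays on screen. A quick sanity check:

```python
def pct_increase(old, new):
    """Percent increase from old to new."""
    return (new - old) / old * 100

def frame_time_ms(refresh_hz):
    """How long each frame persists on screen, in milliseconds."""
    return 1000 / refresh_hz

print(round(pct_increase(360, 750), 1))  # 108.3 percent
print(round(frame_time_ms(360), 2))      # 2.78 ms per frame at 360 Hz
print(round(frame_time_ms(750), 2))      # 1.33 ms per frame at 750 Hz
```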

One CES attendee noticed, however, that the monitor wasn’t showing any gameplay. This could be due to the graphical and computing prowess needed to demonstrate the benefits of a 750 Hz monitor. A system capable of 750 frames per second would give people a chance to see if they could detect improved motion resolution but would also be very expensive. It’s also possible that the monitor Koorui had on display wasn’t ready for that level of scrutiny yet.

Like many eSports monitors, the Koorui is 24.5 inches, with a resolution of 1920×1080. Perhaps more interesting than Koorui taking the lead in the perennial race for higher refresh rates is the TN monitor’s claimed color capabilities. TN monitors aren’t as popular as they were years ago, but OEMs still employ them sometimes for speed.

They tend to be less colorful than IPS and VA monitors, though. Most offer sRGB color gamuts instead of covering the larger DCI-P3 color space. Asus’ 540 Hz ROG Swift Pro PG248QP, for example, is a TN monitor claiming 125 percent sRGB coverage. Koorui’s monitor claims to cover 95 percent of DCI-P3, due to the use of a quantum dot film. Again, there’s a lot that prospective shoppers should confirm about this monitor if it becomes available.

For those seeking the fastest monitors with more concrete release plans, several companies announced 600 Hz monitors coming out this year. Acer, for example, has a 600 Hz Nitro XV240 F6 (also a TN monitor) that it plans to release in North America this quarter at a starting price of $600.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



New Glenn rocket is at the launch pad, waiting for calm seas to land

COCOA BEACH, Fla.—As it so often does in the final days before the debut of a new rocket, it all comes down to weather. Accordingly, Blue Origin is only awaiting clear skies and fair seas for its massive New Glenn vehicle to lift off from Florida.

After the company completed integration of the rocket this week and rolled the super heavy-lift rocket to its launch site at Cape Canaveral, the focus turned toward the weather. Conditions at Cape Canaveral Space Force Station have been favorable during the early morning launch windows available to the rocket, but there have been complications offshore.

That’s because Blue Origin aims to recover the first stage of the New Glenn rocket, and sea states in the Atlantic Ocean have been unsuitable for an initial attempt to catch the first stage booster on a drone ship. The company has already waved off one launch attempt set for 1 am ET (06:00 UTC) on Friday, January 10.

Conditions have improved a bit since then, but on Saturday evening the company’s launch officials canceled a second attempt planned for 1 am ET on Sunday. The new launch time is now 1 am ET on Monday, January 13, when better sea states are expected. There is a three-hour launch window. The company will provide a webcast of proceedings at this link beginning one hour before liftoff.

Seeking a nominal flight

According to a mission timeline shared by Blue Origin on Saturday, it will take several hours to fuel the New Glenn rocket. Second stage hydrogen loading will begin 4.5 hours before liftoff, followed by the booster stage and second stage liquid oxygen at 4 hours, and methane for the booster stage at 3.5 hours to go. Fueling should be complete about an hour before liftoff.



Everyone agrees: 2024 the hottest year since the thermometer was invented


An exceptionally hot outlier, 2024 means the streak of hottest years goes to 11.

With very few and very small exceptions, 2024 was unusually hot across the globe. Credit: Copernicus

Over the last 24 hours or so, the major organizations that keep track of global temperatures have released figures for 2024, and all of them agree: 2024 was the warmest year yet recorded, joining 2023 as an unusual outlier in terms of how rapidly things heated up. At least two of the organizations, the European Union’s Copernicus and Berkeley Earth, place the year at about 1.6° C above pre-industrial temperatures, marking the first time that the Paris Agreement goal of limiting warming to 1.5° has been exceeded.

NASA and the National Oceanic and Atmospheric Administration both place the mark at slightly below 1.5° C over pre-industrial temperatures (as defined by the 1850–1900 average). However, that difference largely reflects the uncertainties in measuring temperatures during that period rather than disagreement over 2024.

It’s hot everywhere

2023 had set a temperature record largely due to a switch to El Niño conditions midway through the year, which made the second half of the year exceptionally hot. It takes some time for that heat to make its way from the ocean into the atmosphere, so the streak of warm months continued into 2024, even as the Pacific switched into its cooler La Niña mode.

While El Niños are regular events, this one had an outsized impact because it was accompanied by unusually warm temperatures outside the Pacific, including record high temperatures in the Atlantic and unusual warmth in the Indian Ocean. Land temperatures reflect this widespread warmth, with elevated temperatures on all continents. Berkeley Earth estimates that 104 countries registered 2024 as the warmest on record, meaning 3.3 billion people felt the hottest average temperatures they had ever experienced.

Different organizations use slightly different methods to calculate the global temperature and have different baselines. For example, Copernicus puts 2024 at 0.72° C above a baseline that will be familiar to many people since they were alive for it: 1991 to 2020. In contrast, NASA and NOAA use a baseline that covers the entirety of the last century, which is substantially cooler overall. Relative to that baseline, 2024 is 1.29° C warmer.

Lining up the baselines shows that these different services largely agree with each other, with most of the differences due to uncertainties in the measurements, with the rest accounted for by slightly different methods of handling things like areas with sparse data.
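The mechanics of lining up baselines amount to simple arithmetic: an anomaly measured against one reference period can be converted to another by adding the difference between the two periods’ mean temperatures. A sketch with deliberately made-up baseline means (the real values differ by dataset and are not the official figures):

```python
def rebase_anomaly(anomaly_c, old_baseline_mean_c, new_baseline_mean_c):
    """Convert a temperature anomaly from one baseline period to another.

    Both baseline means are absolute global mean temperatures (deg C)
    averaged over their respective reference periods.
    """
    return anomaly_c + (old_baseline_mean_c - new_baseline_mean_c)

# Hypothetical baseline means, for illustration only (not official figures):
RECENT_MEAN = 14.6          # assumed 1991-2020 global mean, deg C
PREINDUSTRIAL_MEAN = 13.7   # assumed 1850-1900 global mean, deg C

# A 0.7 deg C anomaly over the warmer recent baseline becomes a larger
# anomaly when measured against the cooler pre-industrial one:
print(round(rebase_anomaly(0.7, RECENT_MEAN, PREINDUSTRIAL_MEAN), 1))  # 1.6
```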

Describing the details of 2024, however, doesn’t really capture just how exceptional the warmth of the last two years has been. Starting in around 1970, there’s been a roughly linear increase in temperature driven by greenhouse gas emissions, despite many individual years that were warmer or cooler than the trend. The last two years have been extreme outliers from this trend. The last time there was a single comparable year to 2024 was back in the 1940s. The last time there were two consecutive years like this was in 1878.


Relative to the five-year temperature average, 2024 is an exceptionally large excursion. Credit: Copernicus

“These were during the ‘Great Drought’ of 1875 to 1878, when it is estimated that around 50 million people died in India, China, and parts of Africa and South America,” the EU’s Copernicus service notes. Despite many climate-driven disasters, the world at least avoided a similar experience in 2023-24.

Berkeley Earth provides a slightly different way of looking at it, comparing each year since 1970 with the amount of warming we’d expect from the cumulative greenhouse gas emissions.


Relative to the expected warming from greenhouse gasses, 2024 represents a large departure. Credit: Berkeley Earth

These show that, given year-to-year variations in the climate system, warming has closely tracked expectations over five decades. 2023 and 2024 mark a dramatic departure from that track, although it comes at the end of a decade where most years were above the trend line. Berkeley Earth estimates that there’s just a 1 in 100 chance of that occurring due to the climate’s internal variability.

Is this a new trend?

The big question is whether 2024 is an exception, with temperatures falling back to the trend that’s dominated since the 1970s, or whether it marks a departure from the climate’s recent behavior. And that’s something we don’t have a great answer to.

If you take away the influence of recent greenhouse gas emissions and El Niño, you can focus on other potential factors. These include a slight increase expected due to the solar cycle approaching its maximum activity. But, beyond that, most of the other factors are uncertain. The Hunga Tonga eruption put lots of water vapor into the stratosphere, but the estimated effects range from slight warming to cooling equivalent to a strong La Niña. Reductions in pollution from shipping are expected to contribute to warming, but the amount is debated.

There is evidence that a decrease in cloud cover has allowed more sunlight to be absorbed by the Earth, contributing to the planet’s warming. But clouds are typically a response to other factors that influence the climate, such as the amount of water vapor in the atmosphere and the aerosols present to seed water droplets.

It’s possible that a factor that we missed is driving the changes in cloud cover or that 2024 just saw the chaotic nature of the atmosphere result in less cloud cover. Alternatively, we may have crossed a warming tipping point, where the warmth of the atmosphere makes cloud formation less likely. Knowing that will be critical going forward, but we simply don’t have a good answer right now.

Climate goals

There’s an equally unsatisfying answer to what this means for our chance of hitting climate goals. The stretch goal of the Paris Agreement is to limit warming to 1.5° C, because it leads to significantly less severe impacts than the primary, 2.0° target. That’s relative to pre-industrial temperatures, which are defined using the 1850–1900 period, the earliest period for which temperature records allow a reconstruction of the global temperature.

Unfortunately, the organizations that track global temperatures differ somewhat in the analysis methods and data they use. For recent data, these differences produce only very small divergences in the estimated global temperatures. But with the far larger uncertainties in the 1850–1900 data, the estimates diverge more dramatically. As a result, each organization has a different preindustrial baseline, and different anomalies relative to it.

Thus Berkeley Earth registers 2024 as 1.62° C above preindustrial temperatures, and Copernicus as 1.60° C. In contrast, NASA and NOAA place it just under 1.5° C (1.47° and 1.46°, respectively). NASA’s Gavin Schmidt said this is “almost entirely due to the [sea surface temperature] data set being used” in constructing the temperature record.
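That baseline sensitivity is simple arithmetic. A minimal sketch, using purely hypothetical numbers (neither the modern anomaly nor the offsets below are any agency’s actual values), shows how the same modern-era measurement can land on opposite sides of 1.5° depending on the poorly constrained preindustrial baseline:

```python
# How the uncertain 1850-1900 baseline shifts the headline anomaly.
# All numbers are illustrative, not actual agency values.

def anomaly_vs_preindustrial(anomaly_vs_modern, baseline_offset):
    """anomaly_vs_modern: 2024 anomaly vs. a modern reference period (deg C).
    baseline_offset: estimated warming of that reference vs. 1850-1900 (deg C)."""
    return round(anomaly_vs_modern + baseline_offset, 2)

modern = 0.72  # hypothetical 2024 anomaly vs. a 1991-2020 reference period
print(anomaly_vs_preindustrial(modern, 0.90))  # -> 1.62, above the 1.5 mark
print(anomaly_vs_preindustrial(modern, 0.74))  # -> 1.46, below it
```

The measurement disagreement is entirely in the second argument, which is exactly why the agencies can agree closely on recent warming while disagreeing on the headline "above preindustrial" figure.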

There is, however, consensus that this isn’t especially meaningful on its own. There’s a good chance that temperatures will drop below the 1.5° mark on all the data sets within the next few years. We’ll want to see temperatures consistently exceed that mark for over a decade before we consider that we’ve passed the milestone.

That said, given that carbon emissions have barely budged in recent years, there’s little doubt that we will eventually end up clearly passing that limit (Berkeley Earth is essentially treating it as exceeded already). But there’s widespread agreement that each increment between 1.5° and 2.0° will likely increase the consequences of climate change, and any continuing emissions will make it harder to bring things back under that target in the future through methods like carbon capture and storage.

So, while we may have committed ourselves to exceed one of our major climate targets, that shouldn’t be viewed as a reason to stop trying to limit greenhouse gas emissions.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Everyone agrees: 2024 the hottest year since the thermometer was invented

on-dwarkesh-patel’s-4th-podcast-with-tyler-cowen

On Dwarkesh Patel’s 4th Podcast With Tyler Cowen

Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go.

Note that I take it as a given that the entire discussion is taking place in some form of an ‘AI Fizzle’ and ‘economic normal’ world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not get superintelligence [because of reasons]. It’s still massive additional progress by the standards of any other technology, but painfully slow by the standards of the ‘AGI is coming soon’ crowd.

That’s the only way I can make the discussion make at least some sense, with Tyler Cowen predicting 0.5%/year additional RGDP growth from AI. That level of capabilities progress is a possible world, although the various elements stated here seem like they are sometimes from different possible worlds.

I note that this conversation was recorded prior to o3 and all the year end releases. So his baseline estimate of RGDP growth and AI impacts has likely increased modestly.

I go very extensively into the first section on economic growth and AI. After that, the podcast becomes classic Tyler Cowen and is interesting throughout, but I will be relatively sparing in my notes in other areas, and am skipping over many points.

This is a speed premium and ‘low effort’ post, in the sense that this is mostly me writing down my reactions and counterarguments in real time, similar to how one would do a podcast. It is high effort in that I spent several hours listening to, thinking about and responding to the first fifteen minutes of a podcast.

As a convention: When I’m in the numbered sections, I’m reporting what was said. When I’m in the secondary sections, I’m offering (extensive) commentary. Timestamps are from the Twitter version.

[EDIT: In Tyler’s link, he correctly points out a confusion in government spending vs. consumption, which I believe is fixed now. As for his comment about market evidence for the doomer position, I’ve given my answer before, and I would assert the market provides substantial evidence neither in favor of nor against anything but the most extreme of doomer positions, as in extreme in a way I have literally never heard one person assert, once you control for its estimate of AI capabilities (where it does indeed offer us evidence, and I’m saying that it’s too pessimistic). We agree there is no substantial and meaningful ‘peer-reviewed’ literature on the subject, in the way that Tyler is pointing.]

They recorded this at the Progress Studies conference, and Tyler Cowen has a very strongly held view that AI won’t accelerate RGDP growth much that Dwarkesh clearly does not agree with, so Dwarkesh Patel’s main thrust is to try comparisons and arguments and intuition pumps to challenge Tyler. Tyler, as he always does, has a ready response to everything, whether or not it addresses the point of the question.

  1. (1: 00) Dwarkesh doesn’t waste any time and starts off asking why we won’t get explosive economic growth. Tyler’s first answer is cost disease, that as AI works in some parts of the economy costs in other areas go up.

    1. That’s true in relative terms for obvious reasons, but in absolute terms or real resource terms the opposite should be true, even if we accept the implied premise that AI won’t simply do everything anyway. This should drive down labor costs and free up valuable human capital. It should aid in availability of many other inputs. It makes almost any knowledge acquisition, strategic decision or analysis, data analysis or gathering, and many other universal tasks vastly better.

    2. Tyler then answers this directly when asked at (2: 10) by saying cost disease is not about employees per se, it’s more general, so he’s presumably conceding the point about labor costs, saying that non-intelligence inputs that can’t be automated will bind more and thus go up in price. I mean, yes, in the sense that we have higher value uses for them, but so what?

    3. So yes, you can narrowly define particular subareas of some areas as bottlenecks and say that they cannot grow, and perhaps they can even be large areas if we impose costlier bottlenecks via regulation. But that still leaves lots of room for very large economic growth for a while – the issue can’t bind you otherwise, the math doesn’t work.

  2. Tyler says government consumption [EDIT: I originally misheard this as spending, he corrected me, I thank him] at 18% of GDP (government spending is 38% but a lot of that is duplicative and a lot isn’t consumption), health care at 20%, education is 6% (he says 6-7%, Claude says 6%), the nonprofit sector (Claude says 5.6%) and says together that is half of the economy. Okay, sure, let’s tackle that.

    1. Healthcare is already seeing substantial gains from AI even at current levels. There are claims that up to 49% of doctor time is various forms of EMR and desk work that AIs could reduce greatly, certainly at least ~25%. AI can directly substitute for much of what doctors do in terms of advising patients, and this is already happening where the future is distributed. AI substantially improves medical diagnosis and decision making. AI substantially accelerates drug discovery and R&D, will aid in patient adherence and monitoring, and so on. And again, that’s without further capability gains. Insurance companies doubtless will embrace AI at every level. Need I go on here?

    2. Government spending at all levels is actually about 38% of GDP, but that’s cheating, only ~11% is non-duplicative and not transfers, interest (which aren’t relevant) or R&D (I’m assuming R&D would get a lot more productive).

    3. The biggest area is transfers. AI can’t improve the efficiency of transfers too much, but it also can’t be a bottleneck outside of transaction and administrative costs, which obviously AI can greatly reduce and are not that large to begin with.

    4. The second biggest area is provision of healthcare, which we’re already counting, so that’s duplicative.

    5. Third is education. Fourth is national defense, where efficiency per dollar or employee should get vastly better, to the point where failure to be at the AI frontier is a clear national security risk.

    6. Fifth is interest on the debt, which again doesn’t count, and also we wouldn’t care about if GDP was growing rapidly.

    7. And so on. What’s left to form the last 11% or so? Public safety, transportation and infrastructure, government administration, environment and natural resources and various smaller other programs. What happens here is a policy choice. We are already seeing signs of improvement in government administration (~2% of the 11%), the other 9% might plausibly stall to the extent we decide to do an epic fail.

    8. Education and academia are already being transformed by AI, in the sense of actually learning things, among anyone who is willing to use it. And it’s rolling through academia as we speak, in terms of things like homework assignments, in ways that will force change. So whether you think growth is possible depends on your model of education. If it’s mostly a signaling model then you should see a decline in education investment, since the signals will decline in value and AI creates the opportunity for better, more efficient signals, but you can argue that this could continue to be a large time and dollar tax on many of us.

    9. Nonprofits are about 20%-25% education and ~50% health care related, which would double count, so the remainder is only ~1.3% of GDP. This also seems like a dig at nonprofits and their inability to adapt to change, but why would we assume nonprofits can’t benefit from AI?

    10. What’s weird is that I would point to different areas that have the most important anticipated bottlenecks to growth, such as housing or power, where we might face very strong regulatory constraints and perhaps AI can’t get us out of those.

  3. (1: 30) He says it will take ~30 years for sectors of the economy that do not use AI well to be replaced by those that do use AI well.

    1. That’s a very long time, even in an AI fizzle scenario. I roll to disbelieve that estimate in most cases. But let’s even give it to him, and say it is true, and it takes 30 years to replace them, while the productivity of the replacement goes up 5%/year above incumbents, which are stagnant. Then you delay the growth, but you don’t prevent it, and if you assume this is a gradual transition you start seeing 1%+ yearly GDP growth boosts even in these sectors within a decade.

  4. He concludes by saying some less regulated areas grow a lot, but that doesn’t get you that much, so you can’t have the whole economy ‘growing by 40%’ in a nutshell.

    1. I mean, okay, but that’s double Dwarkesh’s initial question of why we aren’t growing at 20%. So what exactly can we get here? I can buy this as an argument for AI fizzle world growing slower than it would have otherwise, but the teaser has a prediction of 0.5%, which is a whole different universe.
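The replacement arithmetic above is easy to check with a toy model. A rough sketch, assuming (as stipulated) stagnant incumbents, replacements whose productivity compounds 5%/year faster, and a linear 30-year phase-in; all parameters are illustrative:

```python
# Toy model: a sector where stagnant incumbents are linearly displaced over
# 30 years by entrants whose productivity compounds at 5%/year.

def sector_output(year, transition_years=30, repl_growth=0.05):
    share = min(year / transition_years, 1.0)   # fraction replaced so far
    incumbent = 1.0                             # stagnant productivity
    replacement = (1 + repl_growth) ** year     # compounding productivity
    return (1 - share) * incumbent + share * replacement

# Implied annual growth rate of the sector around year 10 of the transition:
g10 = sector_output(10) / sector_output(9) - 1
print(f"{g10:.1%}")  # -> 3.8%, already well above a 1% annual boost
```

So even granting the 30-year timeline, a slow transition delays the growth rather than preventing it.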

  1. (2: 20) Tyler asserts that value of intelligence will go down because more intelligence will be available.

    1. Dare I call this the Lump of Intelligence fallacy, after the Lump of Labor fallacy? Yes, to the extent that you are doing the thing an AI can do, the value of that intelligence goes down, and the value of AI intelligence itself goes down in economic terms because its cost of production declines. But to the extent that your intelligence complements and unlocks the AI’s, or is empowered by the AI’s and is distinct from it (again, we must be in fizzle-world), the value of that intelligence goes up.

    2. Similarly, when he talks about intelligence as ‘one input’ in the system among many, that seems like a fundamental failure to understand how intelligence works, a combination of intelligence denialism (failure to buy that much greater intelligence could meaningfully exist) and a denial of substitution or ability to innovate as a result – you couldn’t use that intelligence to find alternative or better ways to do things, and you can’t use more intelligence as a substitute for other inputs. And you can’t substitute the things enabled more by intelligence much for the things that aren’t, and so on.

    3. It also assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI.

  1. (2: 50) Dwarkesh challenges Tyler by pointing out that the Industrial Revolution resulted in a greatly accelerated rate of economic growth versus previous periods, and asks what Tyler would say to someone from the past doubting it was possible. Tyler attempts to dodge (and is amusing while doing so) by saying they’d say ‘looks like it would take a long time’ and he would agree.

    1. Well, it depends what a long time is, doesn’t it? 2% sustained annual growth (or 8%!) is glacial in some sense and mind boggling by ancient standards. ‘Take a long time’ in AI terms, such as what is actually happening now, could still look mighty quick if you compared it to most other things. OpenAI has 300 million MAUs.

  2. (3: 20) Tyler trots out the ‘all the financial prices look normal’ line, that they are not predicting super rapid growth and neither are economists or growth experts.

    1. Yes, the markets are being dumb, the efficient market hypothesis is false, and also aren’t you the one telling me I should have been short the market? Well, instead I’m long, and outperforming. And yes, economists and ‘experts on economic growth’ aren’t predicting large amounts of growth, but their answers are Obvious Nonsense to me and saying that ‘experts don’t expect it’ without arguments why isn’t much of an argument.

  3. (3: 40) Aside, since you kind of asked: So who am I to say different from the markets and the experts? I am Zvi Mowshowitz. Writer. Son of Solomon and Deborah Mowshowitz. I am the missing right hand of the one handed economists you cite. And the one warning you about what is about to kick Earth’s sorry ass into gear. I speak the truth as I see it, even if my voice trembles. And a warning that we might be the last living things this universe ever sees. God sent me.

  4. Sorry about that. But seriously, think for yourself, schmuck! Anyway.

What would happen if we had more people? More of our best people? Got more out of our best people? Why doesn’t AI effectively do all of these things?

  1. (3: 55) Tyler is asked wouldn’t a large rise in population drive economic growth? He says no, that’s too much a 1-factor model, in fact we’ve seen a lot of population growth without innovation or productivity growth.

    1. Except that Tyler is talking here about growth on a per capita basis. If you add AI workers, you increase the productive base, but they don’t count towards the capita.

  2. Tyler says ‘it’s about the quality of your best people and institutions.’

    1. But quite obviously AI should enable a vast improvement in the effective quality of your best people, it already does, Tyler himself would be one example of this, and also the best institutions, including because they are made up of the best people.

  3. Tyler says ‘there’s no simple lever, intelligence or not, that you can push on.’ Again, intelligence as some simple lever, some input component.

    1. The whole point of intelligence is that it allows you to do a myriad of more complex things, and to better choose those things.

  4. Dwarkesh points out the contradiction between ‘you are bottlenecked by your best people’ and asserting cost disease and constraint by your scarce input factors. Tyler says Dwarkesh is bottlenecked, Dwarkesh points out that with AGI he will be able to produce a lot more podcasts. Tyler says great, he’ll listen, but he will be bottlenecked by time.

    1. Dwarkesh’s point generalizes. AGI greatly expand the effective amount of productive time of the best people, and also extend their capabilities while doing so.

    2. AGI can also itself become ‘the best people’ at some point. If that was the bottleneck, then the goose asks, what happens now, Tyler?

  5. (5: 15) Tyler cites that much of sub-Saharan Africa still does not have clean reliable water, and intelligence is not the bottleneck there. And that taking advantage of AGI will be like that.

    1. So now we’re expecting AGI in this scenario? I’m going to kind of pretend we didn’t hear that, or that this is a very weak AGI definition, because otherwise the scenario doesn’t make sense at all.

    2. Intelligence is not directly the bottleneck there, true, but yes quite obviously Intelligence Solves This if we had enough of it and put those minds to that particular problem and wanted to invest the resources towards it. Presumably Tyler and I mostly agree on why the resources aren’t being devoted to it.

    3. What would it mean for similar issues to be involved in taking advantage of AGI? Well, first, it would mean that you can’t use AGI to get to ASI (no I can’t explain why), but again that’s got to be a baseline assumption here. After that, well, sorry, I failed to come up with a way to finish this that makes it make sense to me, beyond a general ‘humans won’t do the things and will throw up various political and legal barriers.’ Shrug?

  6. (5: 35) Dwarkesh speaks about a claim that there is a key shortage of geniuses, and that America’s problems come largely from putting its geniuses in places like finance, whereas Taiwan puts them in tech, so the semiconductors end up in Taiwan. Wouldn’t having lots more of those types of people eat a lot of bottlenecks? What would happen if everyone had 1000 times more of the best people available?

  7. Tyler Cowen, author of a very good book about Talent and finding talent and the importance of talent, says he didn’t agree with that post, and repeats that returns to IQ in the labor market are amazingly low, and that successful people are smart but mostly have 8-9 areas where they’re an 8-9 on a 1-10 scale, with one 11+ somewhere, and a lot of determination.

    1. All right, I don’t agree that intelligence doesn’t offer returns now, and I don’t agree that intelligence wouldn’t offer returns even at the extremes, but let’s again take Tyler’s own position as a given…

    2. But that exactly describes what an AI gives you! An AI is the ultimate generalist. An AGI will be a reliable 8-9 on everything, actual everything.

    3. And it would also turn everyone else into an 8-9 on everything. So instead of needing to find someone 11+ in one area, plus determination, plus having 8-9 in ~8 areas, you can remove that last requirement. That will hugely expand the pool of people in question.

    4. So there are two obvious very clear plans here: You can either use AI workers who have that ultimate determination and are 8-9 in everything and 11+ in the areas where AIs shine (e.g. math, coding, etc).

    5. Or you can also give your other experts an AI companion executive assistant to help them, and suddenly they’re an 8+ in everything and also don’t have to deal with a wide range of things.

  8. (6: 50) Tyler says, talk to a committee at a Midwestern university about their plans for incorporating AI, then get back to him and talk to him about bottlenecks. Then write a report and the report will sound like GPT-4 and we’ll have a report.

    1. Yes, the committee will not be smart or fast about its official policy for how to incorporate AI into its existing official activities. If you talk to them now they will act like they have a plagiarism problem and that’s it.

    2. So what? Why do we need that committee to form a plan or approve anything or do anything at all right now, or even for a few years? All the students are already using AI. The professors are rapidly being forced to adapt to AI. Everyone doing the research will soon be using AI. Half that committee, three years from now, prepared for that meeting using AI. Their phones will all work based on AI. They’ll be talking to their AI phone assistant companions that plan their schedules. You think this will all involve 0.5% GDP growth?

  9. (7: 20) Dwarkesh asks, won’t the AIs be smart, super conscientious and work super hard? Tyler explicitly affirms the 0.5% GDP growth estimate, that this will transform the world over 30 years but ‘over any given year we won’t so much notice it.’ Things like drug developments that would have taken 20 years now take 10 years, but you won’t feel it as revolutionary for a long time.

    1. I mean, it’s already getting very hard to miss. If you don’t notice it in 2025 or at least 2026, and you’re in the USA, check your pulse, you might be dead, etc.

    2. Is that saying we will double productivity in pharmaceutical R&D, and that it would have far more than doubled if progress didn’t require long expensive clinical trials, so other forms of R&D should be accelerated much more?

    3. For reference, according to Claude, R&D in general contributes about 0.3% to RGDP growth per year right now. If we were to double that effect in roughly the half of current R&D spend that is bottlenecked in similar fashion, the other half would instead go up by even more.

    4. Claude also estimates that R&D spending would, if returns to R&D doubled, go up by 30%-70% on net.

    5. So we seem to be looking at more than 0.5% RGDP growth per year from R&D effects alone, between additional spending on it and greater returns. And obviously AI is going to have additional other returns.

This is a plausible bottleneck, but that implies rather a lot of growth.
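The R&D back-of-envelope above can be written out explicitly. A sketch using the estimates quoted in the notes (the ~0.3-point baseline and the 30-70% spend response are the Claude estimates cited there; treating the unbottlenecked half as merely doubling is a conservative floor):

```python
# R&D contribution to RGDP growth, before and after AI, per the quoted estimates.

baseline = 0.3            # points/year of RGDP growth from R&D today (estimate)

# Returns double in the bottlenecked (clinical-trial-style) half of R&D spend;
# the other half is assumed to at least double as well (a conservative floor).
from_returns = 0.5 * baseline * 2 + 0.5 * baseline * 2   # 0.6 points/year

spend_increase = 0.5      # midpoint of the estimated 30-70% spend response
total = round(from_returns * (1 + spend_increase), 2)
print(total)  # -> 0.9 points/year, above the entire 0.5% growth estimate
```

On these numbers the R&D channel alone exceeds the 0.5%/year figure, before counting any other effect of AI.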

  1. (8: 00) Dwarkesh points out that Progress Studies is all about all the ways we could unlock economic growth, yet Tyler says that tons more smart conscientious digital workers wouldn’t do that much. What gives? Tyler again says bottlenecks, and adds on energy as an important consideration and bottleneck.

    1. Feels like bottleneck is almost a magic word or mantra at this point.

    2. Energy is a real consideration, yes the vision here involves spending a lot more energy, and that might take time. But also we see rapidly declining costs, including energy costs, to extract the same amount of intelligence, things like 10x savings each year.

    3. And for inference purposes we can outsource our needs elsewhere, which we would if this was truly bottlenecking explosive growth, and so on. So yes, energy will indeed be an important limiting factor and will be strained, especially in terms of pushing the frontier or if we want to use o3-style very expensive inference a lot.

    4. I don’t expect it to bind medium-term economic growth so much in a slow growth scenario, and the bottlenecks involved here shouldn’t compound with others. In a high growth takeoff scenario, I do think energy could bind far more impactfully.

    5. Another way of looking at this is that if the price of energy goes substantially up due to AI, or at least the price of energy outside of potentially ‘government-protected uses,’ then that can only happen if it is having a large economic impact. If it doesn’t raise the price of energy a lot, then no bottleneck exists.

Tyler Cowen and I think very differently here.

  1. (9: 25) Fascinating moment. Tyler says he goes along with the experts in general, but agrees that ‘the experts’ on basically everything but AI are asleep at the wheel when it comes to AI – except when it comes to their views on diffusions of new technology in general, where the AI people are totally wrong. His view is, you get the right view by trusting the experts in each area, and combining them.

    1. Tyler seems to be making an argument from reference class expertise? That this is a ‘diffusion of technology’ question, so those who are experts on that should be trusted?

    2. Even if they don’t actually understand AI and what it is and its promise?

    3. That’s not how I roll. At all. As noted above in this post, and basically all the time. I think that you have to take the arguments being made, and see if you agree with them, and whether and how much they apply to the case of AI and especially AGI. Saying ‘the experts in area [X] predict [Y]’ is a reasonable placeholder if you don’t have the ability to look at the arguments and models and facts involved, but hey look, we can do that.

    4. Simply put, while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow adaptation, they won’t slow it down all that much, because this is fundamentally different. AI and especially AI workers ‘adapt themselves’ to a large extent, the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently.

    5. It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better.

    6. Suppose, as one would reasonably expect, by 2026 we have strong AI agents, capable of handling for ordinary people a wide variety of logistical tasks, sorting through information, and otherwise offering practical help. Apple Intelligence is partly here, Claude Alexa is coming, Project Astra is coming, and these are pale shadows of the December 2025 releases I expect. How long would adaptation really take? Once you have that, what stops you from then adapting AI in other ways?

    7. Already, yes, adaptation is painfully slow, but it is also extremely fast. In two years ChatGPT alone has 300 million MAU. A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs. The reason why LLMs are not catching on even faster is that they’re not quite ready for prime time in the fully user-friendly ways normies need. That’s about to change in 2025.

Dwarkesh tries to use this as an intuition pump. Tyler’s not having it.

  1. (10: 15) Dwarkesh asks, what would happen if the world population would double? Tyler says, depends what you’re measuring. Energy use would go up. But he doesn’t agree with population-based models, too many other things matter.

    1. Feels like Tyler is answering a different question. I see Dwarkesh as asking, wouldn’t the extra workers mean we could simply get a lot more done, wouldn’t (total, not per capita) GDP go up a lot? And Tyler’s not biting.

  2. (11: 10) Dwarkesh tries asking about shrinking the population 90%. Shrinking is different, Tyler says: the delta can kill you, whereas growth might not help you.

    1. Very frustrating. I suppose this does partially respond, by saying that it is hard to transition. But man I feel for Dwarkesh here. You can feel his despair as he transitions to the next question.

  1. (11: 35) Dwarkesh asks what are the specific bottlenecks? Tyler says: Humans! All of you! Especially you who are terrified.

    1. That’s not an answer yet, but then he actually does give one.

  2. He says once AI starts having impact, there will be a lot of opposition to it, not primarily on ‘doomer’ grounds but based on: Yes, this has benefits, but I grew up and raised my kids for a different way of life, I don’t want this. And there will be a massive fight.

    1. Yes. He doesn’t even mention jobs directly but that will be big too. We already see that the public strongly dislikes AI when it interacts with it, for reasons I mostly think are not good reasons.

    2. I’ve actually been very surprised how little resistance there has been so far, in many areas. AIs are basically being allowed to practice medicine, to function as lawyers, and do a variety of other things, with no effective pushback.

    3. The big pushback has been for AI art and other places where AI is clearly replacing creative work directly. But that has features that seem distinct.

    4. Yes people will fight, but what exactly do they intend to do about it? People have been fighting such battles for a while, every year I watch the battle for Paul Bunyan’s Axe. He still died. I think there’s too much money at stake, too much productivity at stake, too many national security interests.

    5. Yes, it will cause a bunch of friction, and slow things down somewhat, in the scenarios like the one Tyler is otherwise imagining. But if that’s the central actual thing, it won’t slow things down all that much in the end. Rarely has.

    6. We do see some exceptions, especially involving powerful unions, where the anti-automation side seems to do remarkably well, see the port strike. But also see which side of that the public is on. I don’t like their long term position, especially if AI can seamlessly walk in and take over the next time they strike. And that, alone, would probably be +0.1% or more to RGDP growth.

  1. (12: 15) Dwarkesh tries using China as a comparison case. If you can do 8% growth for decades merely by ‘catching up’ why can’t you do it with AI? Tyler responds, China’s in a mess now, they’re just a middle income country, they’re the poorest Chinese people on the planet, a great example of how hard it is to scale. Dwarkesh pushes back that this is about the previous period, and Tyler says well, sure, from the $200 level.

    1. Dwarkesh is so frustrated right now. He’s throwing everything he can at Tyler, but Tyler is such a polymath that he has detail points for anything and knows how to pivot away from the question’s intent.

  1. (13: 40) Dwarkesh asks, has Tyler’s attitude on AI changed from nine months ago? He says he sees more potential and there was more progress than he expected, especially o1 (this was before o3). The questions he wrote for GPT-4, which Dwarkesh got all wrong, are now too easy for models like o1. And he ‘would not be surprised if an AI model beat human experts on a regular basis within three years.’ He equates it to the first Kasparov vs. DeepBlue match, which Kasparov won, before the second match which he lost.

    1. I wouldn’t be surprised if this happens in one year.

    2. I wouldn’t be that shocked if o3 turns out to be able to do it now.

    3. Tyler’s expectations here, to me, contradict his statements earlier. Not strictly, they could still both be true, but it seems super hard.

    4. How much would availability of above-human level economic thinking help us in aiding economic growth? How much would better economic policy aid economic growth?

We take a detour to other areas, I’ll offer brief highlights.

  1. (15: 45) Why is it important that founders stay in charge? Courage. Making big changes.

  2. (19: 00) What is going on with the competency crisis? Tyler sees high variance at the top. The best are getting better, such as in chess or basketball, and also a decline in outright crime and failure. But there’s a thick median not quite at the bottom that’s getting worse, and while he thinks true median outcomes are about static (since more kids take the tests) that’s not great.

  3. (22: 30) Bunch of shade on both Churchill generally and on being an international journalist, including saying it’s not that impressive because how much does it pay?

    1. He wasn’t paid that much as Prime Minister either, you know…

  4. (24: 00) Why are all our leaders so old? Tyler says current year aside we’ve mostly had impressive candidates, and most of the leadership in Washington in various places (didn’t mention Congress!) is impressive. Yay Romney and Obama.

    1. Yes, yay Romney and Obama as our two candidates. So it’s only been three election cycles where both candidates have been… not ideal. I do buy Tyler’s claim that Trump has a lot of talent in some ways, but, well, ya know.

    2. If you look at the other candidates for both nominations over that period, I think you see more people who were mostly also not so impressive. I would happily have taken Obama over every candidate on the Democratic side in 2016, 2020 or 2024, and Romney over every Republican (except maybe Kasich) in those elections as well.

    3. This also doesn’t address Dwarkesh’s concern about age. What about the age of Congress and their leadership? It is very old, on both sides, and things are not going so great.

    4. I can’t speak to the quality of the people in the agencies.

  5. (27: 00) Commentary on early-mid 20th century leaders being terrible, and how when there is big change there are arms races and sometimes bad people win them (‘and this is relevant to AI’).

For something that is going to not cause that much growth, Tyler sees AI as a source for quite rapid change in other ways.

  1. (34: 20) Tyler says all inputs other than AI rise in value, but you have to do different things. He’s shifting from producing content to making connections.

    1. This again seems to be a disconnect. If AI is sufficiently impactful as to substantially increase the value of all other inputs, then how does that not imply substantial economic growth?

    2. Also this presumes that the AI can’t be a substitute for you, or that it can’t be a substitute for other people that could in turn be a substitute for you.

    3. Indeed, I would think the default model would presumably be that the value of all labor goes down, even for things where AI can’t do it (yet) because people substitute into those areas.

  2. (35: 25) Tyler says he’s writing his books primarily for the AIs, he wants them to know he appreciates them. And the next book will be even more for the AIs so it can shape how they see the AIs. And he says, you’re an idiot if you’re not writing for the AIs.

    1. Basilisk! Betrayer! Misaligned!

    2. ‘What the AIs will think of you’ is actually an underrated takeover risk, and I pointed this out as early as AI #1.

    3. The AIs will be smarter and better at this than you, and also will be reading what the humans say about you. So maybe this isn’t as clever as it seems.

    4. My mind boggles that it could be correct to write for the AIs… but you think they will only cause +0.5% GDP annual growth.

  3. (36: 30) What won’t AIs get from one’s writing? That vibe you get talking to someone for the first 3 minutes? Sense of humor?

    1. I expect the AIs will increasingly have that stuff, at least if you provide enough writing samples. They have true sight.

    2. Certainly if they have interview and other video data to train with, that will work over time.

  1. (37: 25) What happens when Tyler turns down a grant in the first three minutes? Usually it’s failure to answer a question, like ‘how do you build out your donor base?’ without which you have nothing. Or someone focuses on the wrong things, or cares about the wrong status markers, and 75% of the value doesn’t display on the transcript, which is weird since the things Tyler names seem like they would be in the transcript.

  2. (42: 15) Tyler’s portfolio is diversified mutual funds, US-weighted. He has legal restrictions on most other actions such as buying individual stocks, but he would keep the same portfolio regardless.

    1. Mutual funds over ETFs? Gotta chase that lower expense ratio.

    2. I basically think This Is Fine as a portfolio, but I do think he could do better if he actually tried to pick winners.

  3. (42: 45) Tyler expects gains to increasingly fall to private companies that see no reason to share their gains with the public. He doesn’t have enough wealth to get into good investments, but also has enough wealth for his purposes anyway; if he had more money he’d mostly do what he’s doing now.

    1. Yep, I think he’s right about what he would be doing, and I too would mostly be doing the same things anyway. Up to a point.

    2. If I had a billion dollars or what not, that would be different, and I’d be trying to make a lot more things happen in various ways.

    3. This implies the efficient market hypothesis is rather false, doesn’t it? The private companies are severely undervalued in Tyler’s model. If private markets ‘don’t want to share the gains’ with public markets, that implies that public markets wouldn’t give fair valuations to those companies. Otherwise, why would one want such lack of liquidity and diversification, and all the trouble that comes with staying private?

    4. If that’s true, what makes you think Nvidia should only cost $140 a share?

Tyler Cowen doubles down on dismissing AI optimism, and is done playing nice.

  1. (46: 30) Tyler circles back to rate of diffusion of tech change, and has a very clear attitude of I’m right and all people are being idiots by not agreeing with me, that all they have are ‘AI will immediately change everything’ and ‘some hyperventilating blog posts.’ AIs making more AIs? Diminishing returns! Ricardo knew this! Well that was about humans breeding. But it’s good that San Francisco ‘doesn’t know about’ diminishing returns and the correct pessimism that results.

    1. This felt really arrogant, and willfully out of touch with the actual situation.

    2. You can say the AIs wouldn’t be able to do this, but: No, ‘Ricardo didn’t know that’ and saying ‘diminishing returns’ does not apply here, because the whole ‘AIs making AIs’ principle is that the new AIs would be superior to the old AIs, a cycle you could repeat. The core reason you get eventual diminishing returns from more people is that they’re drawn from the same people distribution.

    3. I don’t even know what to say at this point to ‘hyperventilating blog posts.’ Are you seriously making the argument that if people write blog posts, that means their arguments don’t count? I mean, yes, Tyler has very much made exactly this argument in the past, that if it’s not in a Proper Academic Journal then it does not count and he is correct to not consider the arguments or update on them. And no, they’re mostly not hyperventilating or anything like that, but that’s also not an argument even if they were.

    4. What we have are, quite frankly, extensive highly logical, concrete arguments about the actual question of what [X] will happen and what [Y]s will result from that, including pointing out that much of the arguments being made against this are Obvious Nonsense.

    5. Diminishing returns holds as a principle in a variety of conditions, yes, and is a very important concept to know. But there are other situations with increasing returns, and also a lot of threshold effects, even outside of AI. And San Francisco importantly knows this well.

    6. Saying there must be diminishing returns to intelligence, and that this means nothing that fast or important is about to happen when you get a lot more of it, completely begs the question of what it even means to have a lot more intelligence.

    7. Earlier Tyler used chess and basketball as examples, and talked about the best youth being better, and how that was important because the best people are a key bottleneck. That sounds like a key case of increasing returns to scale.

    8. Humanity is a very good example of where intelligence at least up to some critical point very obviously had increasing returns to scale. If you are below a certain threshold of intelligence as a human, your effective productivity is zero. Humanity having a critical amount of intelligence gave it mastery of the Earth. Tell the gorillas and lions that still exist about decreasing returns to intelligence.

    9. For various reasons, with the way our physical world and civilization are constructed, we often don’t end up rewarding relatively high intelligence individuals with that much in the way of outsized economic returns versus ordinary slightly-above-normal intelligence individuals.

    10. But that is very much a product of our physical limitations and current social dynamics and fairness norms, and the concept of a job with essentially fixed pay, and actual good reasons not to try for many of the higher paying jobs out there in terms of life satisfaction.

    11. In areas and situations where this is not the case, returns look very different.

    12. Tyler Cowen himself is an excellent example of increasing returns to scale. The fact that Tyler can read and do so much enables him to do the thing he does at all, and to enjoy oversized returns in many ways. And if you decreased his intelligence substantially, he would be unable to produce at anything like this level. If you increased his intelligence substantially or ‘sped him up’ even more, I think that would result in much higher returns still, and also AI has made him substantially more productive already as he no doubt realizes.

    13. (I’ve been over all this before, but seems like a place to try it again.)

Trying to wrap one’s head around all of it at once is quite a challenge.

  1. (48: 45) Tyler worries about despair in certain areas from AI and worries about how happy it will make us, despite expecting full employment pretty much forever.

    1. If you expect full employment forever then you either expect AI progress to fully stall or there’s something very important you really don’t believe in, or both. I don’t understand: what does Tyler think happens once the AIs can do anything digital as well as most or all humans? What does he think will happen when we use that to solve robotics? What are all these humans going to be doing to get to full employment?

    2. It is possible the answer is ‘government mandated fake jobs’ but then it seems like an important thing to say explicitly, since that’s actually more like UBI.

  2. Tyler Cowen: “If you don’t have a good prediction, you should be a bit wary and just say, “Okay, we’re going to see.” But, you know, some words of caution.”

    1. YOU DON’T SAY.

    2. Further implications left as an exercise to the reader, who is way ahead of me.

  1. (54: 30) Tyler says that the people in DC are wise and think on the margin, whereas the SF people are not wise and think in infinities (he also says they’re the most intelligent hands down, elsewhere), and the EU people are wisest of all, but that if the EU people ran the world the growth rate would be -1%. Whereas the USA has so far maintained the necessary balance here well.

    1. If the wisdom you have would bring you to that place, are you wise?

    2. This is such a strange view of what constitutes wisdom. Yes, the wise man here knows more things and is more cultured, and thinks more prudently and is economically prudent by thinking on the margin, and all that. But as Tyler points out, a society of such people would decay and die. It is not productive. In the ultimate test, outcomes, and supporting growth, it fails.

    3. Tyler says you need balance, but he’s at a Progress Studies conference, which should make it clear that no, America has grown in this sense ‘too wise’ and insufficiently willing to grow, at least on the wise margin.

    4. Given what the world is about to be like, you need to think in infinities. You need to be infinitymaxing. The big stuff really will matter more than the marginal revolution. That’s kind of the point.

    5. You still have to, day to day, constantly think on the margin, of course.

  2. (55: 10) Tyler says he’s a regional thinker from New Jersey, that he is an uncultured barbarian, who only has a veneer of culture because of collection of information, but knowing about culture is not like being cultured, and that America falls flat in a lot of ways that would bother a cultured Frenchman but he’s used to it so they don’t bother Tyler.

    1. I think Tyler is wrong here, to his own credit. He is not a regional thinker, if anything he is far less a regional thinker than the typical ‘cultured’ person he speaks about. And to the extent that he is ‘uncultured’ it is because he has not taken on many of the burdens and social obligations of culture, and those things are to be avoided – he would be fully capable of ‘acting cultured’ if the situation were to call for that, it wouldn’t be others mistaking anything.

    2. He refers to his approach as an ‘autistic approach to culture.’ He seems to mean this in a pejorative way, that an autistic approach to things is somehow not worthy or legitimate or ‘real.’ I think it is all of those things.

    3. Indeed, the autistic-style approach to pretty much anything, in my view, is Playing in Hard Mode, with much higher startup costs, but brings a deeper and superior understanding once completed. The cultured Frenchman is like a fish in water, whereas Tyler understands and can therefore act on a much deeper, more interesting level. He can deploy culture usefully.

  3. (56: 00) What is autism? Tyler says it is officially defined by deficits, by which definition no one there [at the Progress Studies convention] is autistic. But in terms of other characteristics maybe a third of them would count.

    1. I think the term autistic has been expanded and overloaded in a way that was not wise, but at this point we are stuck with it. So it now means, in different contexts, both the deficits and also the general approach that high-functioning people with those deficits come to take to navigating life: consciously processing and knowing the elements of systems and how they fit together, treating words as having meanings, and having a map that matches the territory, whereas those who are not autistic navigate largely on vibes.

    2. By this definition, being the non-deficit form of autistic is excellent, a superior way of being at least in moderation and in the right spots, for those capable of handling it and its higher cognitive costs.

    3. Indeed, many people have essentially none of this set of positive traits and ways of navigating the world, and it makes them very difficult to deal with.

  4. (56: 45) Why is tech so bad at having influence in Washington? Tyler says they’re getting a lot more influential quickly, largely due to national security concerns, which is why AI is being allowed to proceed.

For a while now I have found Tyler Cowen’s positions on AI very frustrating (see for example my coverage of the 3rd Cowen-Patel podcast), especially on questions of potential existential risk and expected economic growth, and what intelligence means and what it can do and is worth. This podcast did not address existential risks at all, so most of this post is about me trying (once again!) to explain why Tyler’s views on returns to intelligence and future economic growth don’t make sense to me, seeming well outside reasonable bounds.

I try to offer various arguments and intuition pumps, playing off of Dwarkesh’s attempts to do the same. It seems like there are very clear pathways, using Tyler’s own expectations and estimates, that on their own establish more growth than he expects, assuming AI is allowed to proceed at all.

I gave only quick coverage to the other half of the podcast, but don’t skip that other half. I found it very interesting, with a lot of new things to think about, but they aren’t areas where I feel as ready to go into detailed analysis, and was doing triage. In a world where we all had more time, I’d love to do dives into those areas too.

On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statements here. Physics is so bizarre.

On Dwarkesh Patel’s 4th Podcast With Tyler Cowen Read More »

Why solving crosswords is like a phase transition

There’s also the more recent concept of “explosive percolation,” whereby connectivity emerges not in a slow, continuous process but quite suddenly, simply by replacing the random node connections with predetermined criteria—say, choosing to connect whichever pair of nodes has the fewest pre-existing connections to other nodes. This introduces bias into the system and suppresses the growth of large dominant clusters. Instead, many large unconnected clusters grow until the critical threshold is reached. At that point, even adding just one or two more connections will trigger one global violent merger (instant uber-connectivity).
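The suppression mechanic can be made concrete with a small toy simulation. This is an illustrative sketch, not the exact rule from any particular paper: at each step it draws a few random candidate edges and keeps the one whose endpoints have the fewest pre-existing connections, an Achlioptas-style bias of the kind described above. The node count, number of candidates, and seed are arbitrary choices for demonstration.

```python
import random

class DisjointSet:
    """Union-find with path halving, tracking component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_cluster_history(n, edges, rule="random", candidates=3, seed=0):
    """Add `edges` edges one at a time; record the largest-cluster fraction."""
    rng = random.Random(seed)
    ds = DisjointSet(n)
    degree = [0] * n
    largest = 1
    history = []
    for _ in range(edges):
        if rule == "random":
            a, b = rng.randrange(n), rng.randrange(n)
        else:
            # Suppressed growth: among several candidate edges, keep the one
            # whose endpoints have the fewest pre-existing connections.
            pairs = [(rng.randrange(n), rng.randrange(n)) for _ in range(candidates)]
            a, b = min(pairs, key=lambda p: degree[p[0]] + degree[p[1]])
        degree[a] += 1
        degree[b] += 1
        ds.union(a, b)
        largest = max(largest, ds.size[ds.find(a)])
        history.append(largest / n)
    return history
```

With n nodes and n added edges (mean degree 2), the random rule is well past its critical threshold and yields a giant cluster spanning most of the network, while the suppressed rule holds the largest cluster down for longer and then catches up far more abruptly.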

Puzzling over percolation

One might not immediately think of crossword puzzles as a network, although there have been a couple of relevant prior mathematical studies. For instance, John McSweeney of the Rose-Hulman Institute of Technology in Indiana employed a random graph network model for crossword puzzles in 2016. He factored in how a puzzle’s solvability is affected by the interactions between the structure of the puzzle’s cells (squares) and word difficulty, i.e., the fraction of letters you need to know in a given word in order to figure out what it is.

Answers represented nodes while answer crossings represented edges, and McSweeney assigned a random distribution of word difficulty levels to the clues. “This randomness in the clue difficulties is ultimately responsible for the wide variability in the solvability of a puzzle, which many solvers know well—a solver, presented with two puzzles of ostensibly equal difficulty, may solve one readily and be stumped by the other,” he wrote at the time. At some point, there has to be a phase transition, in which solving the easiest words enables the puzzler to solve the more difficult words until the critical threshold is reached and the puzzler can fill in many solutions in rapid succession—a dynamic process that resembles, say, the spread of diseases in social groups.

In this sample realization, black sites are shown in black; empty sites are white; and occupied sites contain symbols and letters. Credit: Alexander K. Hartmann, 2024

Hartmann’s new model incorporates elements of several nonstandard percolation models, including how much the solver benefits from partial knowledge of the answers. Letters correspond to sites (white squares) while words are segments of those sites, bordered by black squares. There is an a priori probability of being able to solve a given word if no letters are known. If some words are solved, the puzzler gains partial knowledge of neighboring unsolved words, which increases the probability of those words being solved as well.
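As a rough sketch of this cascade mechanic (Hartmann’s actual model works at the level of letter sites; this simplified toy treats whole words, and every function name and parameter here is invented for illustration): each word starts with an a priori solving probability, and each solved word reveals one letter to every unsolved crossing word, raising its solving probability on the next pass.

```python
import random

def solve_cascade(words, crossings, p0, boost, seed=0):
    """
    words: list of word lengths
    crossings: list of (word_i, word_j) index pairs that share one letter
    p0: a priori chance of solving a word with no letters known
    boost: extra solving chance per fraction of letters known
    Returns the fraction of words solved once the cascade stops.
    """
    rng = random.Random(seed)
    n = len(words)
    known = [0] * n          # letters known in each unsolved word
    solved = [False] * n
    neighbors = [[] for _ in range(n)]
    for i, j in crossings:
        neighbors[i].append(j)
        neighbors[j].append(i)
    changed = True
    while changed:           # keep sweeping until no word solves in a pass
        changed = False
        for w in range(n):
            if solved[w]:
                continue
            p = min(1.0, p0 + boost * known[w] / words[w])
            if rng.random() < p:
                solved[w] = True
                changed = True
                # each crossing reveals one letter of its unsolved neighbors
                for nb in neighbors[w]:
                    if not solved[nb]:
                        known[nb] = min(words[nb], known[nb] + 1)
    return sum(solved) / n
```

Sweeping p0 from low to high in a toy like this produces the phase-transition signature described above: below a threshold the cascade dies out quickly, while just above it partial knowledge snowballs and the solved fraction jumps toward one.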

NASA defers decision on Mars Sample Return to the Trump administration


“We want to have the quickest, cheapest way to get these 30 samples back.”

This photo montage shows sample tubes shortly after they were deposited onto the surface by NASA’s Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS

For nearly four years, NASA’s Perseverance rover has journeyed across an unexplored patch of land on Mars—once home to an ancient river delta—and collected a slew of rock samples sealed inside cigar-sized titanium tubes.

These tubes might contain tantalizing clues about past life on Mars, but NASA’s ever-changing plans to bring them back to Earth are still unclear.

On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the “sky crane” landing system demonstrated on the agency’s two most recent Mars rovers. The other option would be to outsource the lander to the space industry.

NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator working under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency’s 15th administrator last month.

“This is going to be a function of the new administration in order to fund this,” said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.

The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?

The Trump White House is expected to emphasize “results and speed” with NASA’s space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.

NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion, double the previous cost projection, and wouldn’t get the Mars specimens back to Earth until 2040.

This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies to come up with better ideas to bring home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.

“We want to have the quickest, cheapest way to get these 30 samples back,” Nelson said.

How much for these rocks?

NASA officials said they believe a stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return mission architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson. JPL’s previous approach would have put a heavier lander onto the Martian surface, with small helicopter drones that could pick up sample tubes if there were problems with the Perseverance rover.

NASA previously deleted a “fetch rover” from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.

An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion.

Artist’s illustration of SpaceX’s Starship approaching Mars. Credit: SpaceX

JPL will have a “key role” in both paths for MSR, said Nicky Fox, head of NASA’s science mission directorate. “To put it really bluntly, JPL is our Mars center in NASA science.”

If the Trump administration moves forward with either of the proposed MSR plans, this would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.

Without MSR, engineers at the Jet Propulsion Laboratory don’t have a flagship-class mission to build after the launch of NASA’s Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays with the previous iteration of MSR and NASA’s Psyche asteroid mission, and it’s not unwise to anticipate more cost overruns on a project as complex as a round-trip flight to Mars.

Ars submitted multiple requests to interview Laurie Leshin, JPL’s director, in recent months to discuss the lab’s future, but her staff declined.

Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively.

“The main difference is in the landing mechanism,” Fox said.

To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year’s budget, Nelson said.

NASA officials didn’t identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX’s Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.

NASA solicited eight studies from industry earlier this year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martin—each with their own lander concepts—were among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized with Musk and Amazon’s Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars.

This slide from a November presentation to the Mars Exploration Program Analysis Group shows JPL’s proposed “sky crane” architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings. Credit: NASA/JPL

The science community has long identified a Mars Sample Return mission as the top priority for NASA’s planetary science program. In the National Academies’ most recent decadal survey released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated the program’s cost should not undermine other planetary science missions.

Teeing up for cancellation?

That’s exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA’s planetary science division to institute a moratorium on starting new missions.

“The decision about Mars Sample Return is not just one that affects Mars exploration,” said Curt Niebur, NASA’s lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. “It’s going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this.”

Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns.

“We can wait another year, or we can get started now,” Rocket Lab posted on X. “Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031.”

Through its own internal development and acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA’s recent Mars missions. “We can deliver MSR mission success too,” the company said.

Rocket Lab’s concept for a Mars Sample Return mission. Credit: Rocket Lab

Although NASA’s deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles the industry might play in the MSR mission.

“They need to flesh out all of the possibilities of what’s required in the engineering for the commercial option,” Nelson said.

On the program’s current trajectory, Fox said NASA would be able to choose a new MSR architecture in mid-2026.

Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration’s goals for space exploration. In an interview with Ars last week, Nelson said he did not want to “put the new administration in a box” with any significant MSR decisions in the waning days of the Biden administration.

One source with experience in crafting and implementing US space policy told Ars that Nelson’s deferral on a decision will “tee up MSR for canceling.” Faced with a decision to spend billions of dollars on a robotic sample return or billions of dollars to go toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.

If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons. NASA has held off on the development of a Uranus mission to focus on the Mars Sample Return first.

Science and geopolitics

Whether it’s with robots or humans, there’s a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere.

“Bringing them back will revolutionize our understanding of the planet Mars and indeed, our place in the solar system,” Fox said. “We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also … getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system.”

Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than is possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, or the traces of ancient life, that elude detection with instruments on Mars.

“The samples that we have taken by Perseverance actually predate—they are older than any of the samples or rocks that we could take here on Earth,” Fox said. “So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing.”

Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.

In a statement, the Planetary Society said it is “concerned that NASA is again delaying a decision on the program, committing only to additional concept studies.”

“It has been more than two years since NASA paused work on MSR,” the Planetary Society said. “It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover.

“We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface.”

China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA’s plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China’s mission will scoop up rocks and soil near its landing site.

“They’re just going to have a mission to grab and go—go to a landing site of their choosing, grab a sample and go,” Nelson said. “That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that there’s a race? Of course, people will say that, but it’s two totally different missions.”

Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump’s NASA transition team.

“I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return,” Nelson said. “I can’t imagine that they don’t. I don’t think we want the only sample return coming back on a Chinese spacecraft.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

NASA defers decision on Mars Sample Return to the Trump administration Read More »

us-sues-six-of-the-biggest-landlords-over-“algorithmic-pricing-schemes”

US sues six of the biggest landlords over “algorithmic pricing schemes”

The Justice Department says that landlords did more than use RealPage in the alleged pricing scheme. “Along with using RealPage’s anticompetitive pricing algorithms, these landlords coordinated through a variety of means,” such as “directly communicating with competitors’ senior managers about rents, occupancy, and other competitively sensitive topics,” the DOJ said.

There were “call arounds” in which “property managers called or emailed competitors to share, and sometimes discuss, competitively sensitive information about rents, occupancy, pricing strategies and discounts,” the DOJ said.

Landlords discussed their use of RealPage software with each other, the DOJ said. “For instance, landlords discussed via user groups how to modify the software’s pricing methodology, as well as their own pricing strategies,” the DOJ said. “In one example, LivCor and Willow Bridge executives participated in a user group discussion of plans for renewal increases, concessions and acceptance rates of RealPage rent recommendations.”

DOJ: Firms discussed “auto-accept” settings

The DOJ lawsuit says RealPage pushes clients to use “auto-accept settings” that automatically approve pricing recommendations. The DOJ said today that property rental firms discussed how they use those settings.

“As an example, at the request of Willow Bridge’s director of revenue management, Greystar’s director of revenue management supplied its standard auto-accept parameters for RealPage’s software, including the daily and weekly limits and the days of the week for which Greystar used ‘auto-accept,'” the DOJ said.

Greystar issued a statement saying it is “disappointed that the DOJ added us and other operators to their lawsuit against RealPage,” and that it will “vigorously” defend itself in court. “Greystar has and will conduct its business with the utmost integrity. At no time did Greystar engage in any anti-competitive practices,” the company said.

The Justice Department is joined in the case by the attorneys general of California, Colorado, Connecticut, Illinois, Massachusetts, Minnesota, North Carolina, Oregon, Tennessee, and Washington. The case is in US District Court for the Middle District of North Carolina.

US sues six of the biggest landlords over “algorithmic pricing schemes” Read More »