Making Sense of Cybersecurity — Part 1: Seeing Through Complexity
Making Sense of Cybersecurity — Part 1: Seeing Through Complexity Read More »
COCOA BEACH, Fla.—As it so often does in the final days before the debut of a new rocket, it all comes down to weather. Accordingly, Blue Origin is only awaiting clear skies and fair seas for its massive New Glenn vehicle to lift off from Florida.
After the company completed integration of the rocket this week and rolled the super heavy-lift vehicle to its launch site at Cape Canaveral, the focus turned toward the weather. Conditions at Cape Canaveral Space Force Station have been favorable during the early morning launch windows available to the rocket, but there have been complications offshore.
That’s because Blue Origin aims to recover the first stage of the New Glenn rocket, and sea states in the Atlantic Ocean have been unsuitable for an initial attempt to catch the first stage booster on a drone ship. The company has already waved off one launch attempt set for 1 am ET (06:00 UTC) on Friday, January 10.
Conditions have improved a bit since then, but on Saturday evening the company’s launch officials canceled a second attempt planned for 1 am ET on Sunday. The new launch time is now 1 am ET on Monday, January 13, when better sea states are expected. There is a three-hour launch window. The company will provide a webcast of proceedings at this link beginning one hour before liftoff.
According to a mission timeline shared by Blue Origin on Saturday, it will take several hours to fuel the New Glenn rocket. Second stage hydrogen loading will begin 4.5 hours before liftoff, followed by the booster stage and second stage liquid oxygen at 4 hours, and methane for the booster stage at 3.5 hours to go. Fueling should be complete about an hour before liftoff.
New Glenn rocket is at the launch pad, waiting for calm seas to land Read More »
An exceptionally hot outlier, 2024 extends the streak of hottest years to 11.
With very few and very small exceptions, 2024 was unusually hot across the globe. Credit: Copernicus
Over the last 24 hours or so, the major organizations that keep track of global temperatures have released figures for 2024, and all of them agree: 2024 was the warmest year yet recorded, joining 2023 as an unusual outlier in terms of how rapidly things heated up. At least two of the organizations, the European Union’s Copernicus and Berkeley Earth, place the year at about 1.6° C above pre-industrial temperatures, marking the first time that the Paris Agreement goal of limiting warming to 1.5° has been exceeded.
NASA and the National Oceanic and Atmospheric Administration both place the mark at slightly below 1.5° C over pre-industrial temperatures (as defined by the 1850–1900 average). However, that difference largely reflects the uncertainties in measuring temperatures during that period rather than disagreement over 2024.
2023 had set a temperature record largely due to a switch to El Niño conditions midway through the year, which made the second half of the year exceptionally hot. It takes some time for that heat to make its way from the ocean into the atmosphere, so the streak of warm months continued into 2024, even as the Pacific switched into its cooler La Niña mode.
While El Niños are regular events, this one had an outsized impact because it was accompanied by unusually warm temperatures outside the Pacific, including record high temperatures in the Atlantic and unusual warmth in the Indian Ocean. Land temperatures reflect this widespread warmth, with elevated temperatures on all continents. Berkeley Earth estimates that 104 countries registered 2024 as the warmest on record, meaning 3.3 billion people felt the hottest average temperatures they had ever experienced.
Different organizations use slightly different methods to calculate the global temperature and have different baselines. For example, Copernicus puts 2024 at 0.72° C above a baseline that will be familiar to many people since they were alive for it: 1991 to 2020. In contrast, NASA and NOAA use a baseline that covers the entirety of the last century, which is substantially cooler overall. Relative to that baseline, 2024 is 1.29° C warmer.
Lining up the baselines shows that these different services largely agree with each other; most of the differences are due to uncertainties in the measurements, with the rest accounted for by slightly different methods of handling things like areas with sparse data.
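To make the baseline conversion concrete, here is a minimal sketch of the arithmetic. The offsets are the ones implied by the figures quoted in this article rather than official conversion constants:

```python
# Re-expressing a temperature anomaly against a different baseline:
#   anomaly_vs_new = anomaly_vs_old + (mean of old baseline - mean of new)
# Offsets below (in deg C, relative to the 1850-1900 "pre-industrial"
# average) are implied by the figures quoted in this article.
OFFSET_TO_PREINDUSTRIAL = {
    "1991-2020 (Copernicus)": 1.60 - 0.72,  # ~0.88
    "1901-2000 (NASA/NOAA)": 1.47 - 1.29,   # ~0.18
}

def vs_preindustrial(anomaly: float, baseline: str) -> float:
    """Convert an anomaly to one relative to the 1850-1900 average."""
    return anomaly + OFFSET_TO_PREINDUSTRIAL[baseline]

print(vs_preindustrial(0.72, "1991-2020 (Copernicus)"))  # ~1.60
print(vs_preindustrial(1.29, "1901-2000 (NASA/NOAA)"))   # ~1.47
```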
Describing the details of 2024, however, doesn’t really capture just how exceptional the warmth of the last two years has been. Starting in around 1970, there’s been a roughly linear increase in temperature driven by greenhouse gas emissions, despite many individual years that were warmer or cooler than the trend. The last two years have been extreme outliers from this trend. The last time there was a single comparable year to 2024 was back in the 1940s. The last time there were two consecutive years like this was in 1878.
Relative to the five-year temperature average, 2024 is an exceptionally large excursion. Credit: Copernicus
“These were during the âGreat Droughtâ of 1875 to 1878, when it is estimated that around 50 million people died in India, China, and parts of Africa and South America,” the EU’s Copernicus service notes. Despite many climate-driven disasters, the world at least avoided a similar experience in 2023-24.
Berkeley Earth provides a slightly different way of looking at it, comparing each year since 1970 with the amount of warming we’d expect from the cumulative greenhouse gas emissions.
Relative to the expected warming from greenhouse gasses, 2024 represents a large departure. Credit: Berkeley Earth
These show that, given year-to-year variations in the climate system, warming has closely tracked expectations over five decades. 2023 and 2024 mark a dramatic departure from that track, although it comes at the end of a decade where most years were above the trend line. Berkeley Earth estimates that there’s just a 1 in 100 chance of that occurring due to the climate’s internal variability.
The big question is whether 2024 is an exception and we should expect things to fall back to the trend that’s dominated since the 1970s, or whether it marks a departure from the climate’s recent behavior. And that’s something we don’t have a great answer to.
If you take away the influence of recent greenhouse gas emissions and El Niño, you can focus on other potential factors. These include a slight increase expected due to the solar cycle approaching its maximum activity. But, beyond that, most of the other factors are uncertain. The Hunga Tonga eruption put lots of water vapor into the stratosphere, but the estimated effects range from slight warming to cooling equivalent to a strong La Niña. Reductions in pollution from shipping are expected to contribute to warming, but the amount is debated.
There is evidence that a decrease in cloud cover has allowed more sunlight to be absorbed by the Earth, contributing to the planet’s warming. But clouds are typically a response to other factors that influence the climate, such as the amount of water vapor in the atmosphere and the aerosols present to seed water droplets.
It’s possible that a factor that we missed is driving the changes in cloud cover or that 2024 just saw the chaotic nature of the atmosphere result in less cloud cover. Alternatively, we may have crossed a warming tipping point, where the warmth of the atmosphere makes cloud formation less likely. Knowing that will be critical going forward, but we simply don’t have a good answer right now.
There’s an equally unsatisfying answer to what this means for our chance of hitting climate goals. The stretch goal of the Paris Agreement is to limit warming to 1.5° C, because it leads to significantly less severe impacts than the primary 2.0° target. That’s relative to pre-industrial temperatures, which are defined using the 1850–1900 period, the earliest time when temperature records allow a reconstruction of the global temperature.
Unfortunately, all the organizations that handle global temperatures have some differences in the analysis methods and data used. Given recent data, these differences result in very small divergences in the estimated global temperatures. But with the far larger uncertainties in the 1850–1900 data, they tend to diverge more dramatically. As a result, each organization has a different baseline, and different anomalies relative to that.
Berkeley Earth thus registers 2024 as being 1.62° C above preindustrial temperatures, and Copernicus 1.60° C. In contrast, NASA and NOAA place it just under 1.5° C (1.47° and 1.46°, respectively). NASA’s Gavin Schmidt said this is “almost entirely due to the [sea surface temperature] data set being used” in constructing the temperature record.
There is, however, consensus that this isn’t especially meaningful on its own. There’s a good chance that temperatures will drop below the 1.5° mark on all the data sets within the next few years. We’ll want to see temperatures consistently exceed that mark for over a decade before we consider that we’ve passed the milestone.
That said, given that carbon emissions have barely budged in recent years, there’s little doubt that we will eventually end up clearly passing that limit (Berkeley Earth is essentially treating it as exceeded already). But there’s widespread agreement that each increment between 1.5° and 2.0° will likely increase the consequences of climate change, and any continuing emissions will make it harder to bring things back under that target in the future through methods like carbon capture and storage.
So, while we may have committed ourselves to exceed one of our major climate targets, that shouldn’t be viewed as a reason to stop trying to limit greenhouse gas emissions.
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
Everyone agrees: 2024 the hottest year since the thermometer was invented Read More »
Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go.
Note that I take it as a given that the entire discussion is taking place in some form of an “AI Fizzle” and “economic normal” world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not get superintelligence [because of reasons]. It’s still massive additional progress by the standards of any other technology, but painfully slow by the standards of the “AGI is coming soon” crowd.
That’s the only way I can make the discussion make at least some sense, with Tyler Cowen predicting 0.5%/year additional RGDP growth from AI. That level of capabilities progress is a possible world, although the various elements stated here seem like they are sometimes from different possible worlds.
I note that this conversation was recorded prior to o3 and all the year end releases. So his baseline estimate of RGDP growth and AI impacts has likely increased modestly.
I go very extensively into the first section on economic growth and AI. After that, the podcast becomes classic Tyler Cowen and is interesting throughout, but I will be relatively sparing in my notes in other areas, and am skipping over many points.
This is a speed premium and “low effort” post, in the sense that this is mostly me writing down my reactions and counterarguments in real time, similar to how one would do a podcast. It is high effort in that I spent several hours listening to, thinking about and responding to the first fifteen minutes of a podcast.
As a convention: When I’m in the numbered sections, I’m reporting what was said. When I’m in the secondary sections, I’m offering (extensive) commentary. Timestamps are from the Twitter version.
[EDIT: In Tyler’s link, he correctly points out a confusion in government spending vs. consumption, which I believe is fixed now. As for his comment about market evidence for the doomer position, I’ve given my answer before, and I would assert the market provides substantial evidence neither for nor against anything but the most extreme of doomer positions, as in extreme in a way I have literally never heard one person assert, once you control for its estimate of AI capabilities (where it does indeed offer us evidence, and I’m saying that it’s too pessimistic). We agree there is no substantial and meaningful “peer-reviewed” literature on the subject, in the way that Tyler is pointing.]
They recorded this at the Progress Studies conference, and Tyler Cowen has a very strongly held view that AI won’t accelerate RGDP growth much that Dwarkesh clearly does not agree with, so Dwarkesh Patel’s main thrust is to try comparisons and arguments and intuition pumps to challenge Tyler. Tyler, as he always does, has a ready response to everything, whether or not it addresses the point of the question.
(1:00) Dwarkesh doesn’t waste any time and starts off asking why we won’t get explosive economic growth. Tyler’s first answer is cost disease, that as AI works in some parts of the economy costs in other areas go up.
That’s true in relative terms for obvious reasons, but in absolute terms or real resource terms the opposite should be true, even if we accept the implied premise that AI won’t simply do everything anyway. This should drive down labor costs and free up valuable human capital. It should aid in availability of many other inputs. It makes almost any knowledge acquisition, strategic decision or analysis, data analysis or gathering, and many other universal tasks vastly better.
Tyler then answers this directly when asked at (2:10) by saying cost disease is not about employees per se, it’s more general, so he’s presumably conceding the point about labor costs, saying that non-intelligence inputs that can’t be automated will bind more and thus go up in price. I mean, yes, in the sense that we have higher value uses for them, but so what?
So yes, you can narrowly define particular subareas of some areas as bottlenecks and say that they cannot grow, and perhaps they can even be large areas if we impose costlier bottlenecks via regulation. But that still leaves lots of room for very large economic growth for a while – the issue can’t bind you otherwise, the math doesn’t work.
Tyler says government consumption [EDIT: I originally misheard this as spending, he corrected me, I thank him] at 18% of GDP (government spending is 38% but a lot of that is duplicative and a lot isn’t consumption), health care at 20%, education is 6% (he says 6-7%, Claude says 6%), the nonprofit sector (Claude says 5.6%) and says together that is half of the economy. Okay, sure, let’s tackle that.
Healthcare is already seeing substantial gains from AI even at current levels. There are claims that up to 49% of doctor time is various forms of EMR and desk work that AIs could reduce greatly, certainly at least ~25%. AI can directly substitute for much of what doctors do in terms of advising patients, and this is already happening where the future is distributed. AI substantially improves medical diagnosis and decision making. AI substantially accelerates drug discovery and R&D, will aid in patient adherence and monitoring, and so on. And again, that’s without further capability gains. Insurance companies doubtless will embrace AI at every level. Need I go on here?
Government spending at all levels is actually about 38% of GDP, but that’s cheating; only ~11% is non-duplicative and not transfers, interest (which aren’t relevant) or R&D (I’m assuming R&D would get a lot more productive).
The biggest area is transfers. AI can’t improve the efficiency of transfers too much, but it also can’t be a bottleneck outside of transaction and administrative costs, which obviously AI can greatly reduce and are not that large to begin with.
The second biggest area is provision of healthcare, which we’re already counting, so that’s duplicative. Third is education, which we count in the next section.
Fourth is national defense, where efficiency per dollar or employee should get vastly better, to the point where failure to be at the AI frontier is a clear national security risk.
Fifth is interest on the debt, which again doesn’t count, and also which we wouldn’t care about if GDP was growing rapidly.
And so on. What’s left to form the last 11% or so? Public safety, transportation and infrastructure, government administration, environment and natural resources, and various smaller other programs. What happens here is a policy choice. We are already seeing signs of improvement in government administration (~2% of the 11%); the other 9% might plausibly stall to the extent we decide to do an epic fail.
Education and academia are already being transformed by AI, in the sense of actually learning things, among anyone who is willing to use it. And it’s rolling through academia as we speak, in terms of things like homework assignments, in ways that will force change. So whether you think growth is possible depends on your model of education. If it’s mostly a signaling model then you should see a decline in education investment, since the signals will decline in value and AI creates the opportunity for better, more efficient signals, but you can argue that this could continue to be a large time and dollar tax on many of us.
Nonprofits are about 20%-25% education, and ~50% is health care related, which would double count, so the remainder is only ~1.3% of GDP. This also seems like a dig at nonprofits and their inability to adapt to change, but why would we assume nonprofits can’t benefit from AI?
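To make the double-counting arithmetic explicit, here is a minimal sketch using the rough shares quoted in this section (Tyler’s and Claude’s estimates as cited above, not official statistics):

```python
# Rough shares of GDP quoted above (estimates, not official statistics).
shares = {
    "government consumption": 0.18,
    "health care": 0.20,
    "education": 0.06,
    "nonprofits": 0.056,
}

# Naive sum, the way the "half the economy" claim stacks them:
print(f"naive total: {sum(shares.values()):.1%}")  # ~49.6%

# Nonprofits are ~20-25% education and ~50% health care related, which is
# already counted elsewhere, so strip that overlap out.
nonprofit_unique = shares["nonprofits"] * (1 - 0.25 - 0.50)
print(f"non-duplicative nonprofit share: {nonprofit_unique:.1%}")  # ~1.4%

deduped = (shares["government consumption"] + shares["health care"]
           + shares["education"] + nonprofit_unique)
print(f"after removing nonprofit overlap: {deduped:.1%}")  # ~45.4%
```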
What’s weird is that I would point to different areas that have the most important anticipated bottlenecks to growth, such as housing or power, where we might face very strong regulatory constraints and perhaps AI can’t get us out of those.
(1:30) He says it will take ~30 years for sectors of the economy that do not use AI well to be replaced by those that do use AI well.
That’s a very long time, even in an AI fizzle scenario. I roll to disbelieve that estimate in most cases. But let’s even give it to him, and say it is true, and it takes 30 years to replace them, while the productivity of the replacement goes up 5%/year above incumbents, which are stagnant. Then you delay the growth, but you don’t prevent it, and if you assume this is a gradual transition you start seeing 1%+ yearly GDP growth boosts even in these sectors within a decade.
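As a sanity check on that arithmetic, here is a minimal toy simulation, assuming stagnant incumbents are linearly replaced over 30 years by adopters whose productivity compounds at 5%/year (the numbers from the paragraph above; everything else is illustrative):

```python
# Toy model: incumbents are stagnant at productivity 1.0; over 30 years
# their share of the sector is linearly replaced by adopters whose
# productivity compounds at 5%/year.
YEARS, GROWTH = 30, 0.05

def sector_output(year: int) -> float:
    adopter_share = min(year / YEARS, 1.0)
    adopter_productivity = (1 + GROWTH) ** year
    return (1 - adopter_share) + adopter_share * adopter_productivity

for y in (1, 5, 10, 20):
    annual = sector_output(y) / sector_output(y - 1) - 1
    print(f"year {y:2d}: sector-level growth {annual:.1%}")
# year 1: ~0.2%, year 5: ~1.7%, year 10: ~3.8%, year 20: ~6.9%
```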
He concludes by saying some less regulated areas grow a lot, but that doesn’t get you that much, so you can’t have the whole economy “growing by 40%” in a nutshell.
I mean, okay, but that’s double Dwarkesh’s initial question of why we aren’t growing at 20%. So what exactly can we get here? I can buy this as an argument for AI fizzle world growing slower than it would have otherwise, but the teaser has a prediction of 0.5%, which is a whole different universe.
(2:20) Tyler asserts that the value of intelligence will go down because more intelligence will be available.
Dare I call this the Lump of Intelligence fallacy, after the Lump of Labor fallacy? Yes, to the extent that you are doing the thing an AI can do, the value of that intelligence goes down, and the value of AI intelligence itself goes down in economic terms because its cost of production declines. But to the extent that your intelligence complements and unlocks the AI’s, or is empowered by the AI’s and is distinct from it (again, we must be in fizzle-world), the value of that intelligence goes up.
Similarly, when he talks about intelligence as “one input” in the system among many, that seems like a fundamental failure to understand how intelligence works, a combination of intelligence denialism (failure to buy that much greater intelligence could meaningfully exist) and a denial of substitution or ability to innovate as a result – you couldn’t use that intelligence to find alternative or better ways to do things, and you can’t use more intelligence as a substitute for other inputs. And you can’t substitute the things enabled more by intelligence much for the things that aren’t, and so on.
It also assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI.
(2:50) Dwarkesh challenges Tyler by pointing out that the Industrial Revolution resulted in a greatly accelerated rate of economic growth versus previous periods, and asks what Tyler would say to someone from the past doubting it was possible. Tyler attempts to dodge (and is amusing doing so) by saying they’d say “looks like it would take a long time” and he would agree.
Well, it depends what a long time is, doesn’t it? 2% sustained annual growth (or 8%!) is glacial in some sense and mind boggling by ancient standards. “Take a long time” in AI terms, such as what is actually happening now, could still look mighty quick if you compared it to most other things. OpenAI has 300 million MAUs.
(3:20) Tyler trots out the “all the financial prices look normal” line, that they are not predicting super rapid growth and neither are economists or growth experts.
Yes, the markets are being dumb, the efficient market hypothesis is false, and also aren’t you the one telling me I should have been short the market? Well, instead I’m long, and outperforming. And yes, economists and “experts on economic growth” aren’t predicting large amounts of growth, but their answers are Obvious Nonsense to me and saying that “experts don’t expect it” without arguments why isn’t much of an argument.
(3:40) Aside, since you kind of asked: So who am I to say different from the markets and the experts? I am Zvi Mowshowitz. Writer. Son of Solomon and Deborah Mowshowitz. I am the missing right hand of the one handed economists you cite. And the one warning you about what is about to kick Earth’s sorry ass into gear. I speak the truth as I see it, even if my voice trembles. And a warning that we might be the last living things this universe ever sees. God sent me.
Sorry about that. But seriously, think for yourself, schmuck! Anyway.
What would happen if we had more people? More of our best people? Got more out of our best people? Why doesn’t AI effectively do all of these things?
(3:55) Tyler is asked wouldn’t a large rise in population drive economic growth? He says no, that’s too much a 1-factor model; in fact we’ve seen a lot of population growth without innovation or productivity growth.
Except that Tyler is talking here about growth on a per capita basis. If you add AI workers, you increase the productive base, but they don’t count towards the capita.
Tyler says “it’s about the quality of your best people and institutions.”
But quite obviously AI should enable a vast improvement in the effective quality of your best people, it already does, Tyler himself would be one example of this, and also the best institutions, including because they are made up of the best people.
Tyler says “there’s no simple lever, intelligence or not, that you can push on.” Again, intelligence as some simple lever, some input component.
The whole point of intelligence is that it allows you to do a myriad of more complex things, and to better choose those things.
Dwarkesh points out the contradiction between “you are bottlenecked by your best people” and asserting cost disease and constraint by your scarce input factors. Tyler says Dwarkesh is bottlenecked, Dwarkesh points out that with AGI he will be able to produce a lot more podcasts. Tyler says great, he’ll listen, but he will be bottlenecked by time.
Dwarkesh’s point generalizes. AGI greatly expands the effective amount of productive time of the best people, and also extends their capabilities while doing so.
AGI can also itself become “the best people” at some point. If that was the bottleneck, then the goose asks, what happens now, Tyler?
(5:15) Tyler cites that much of sub-Saharan Africa still does not have clean reliable water, and intelligence is not the bottleneck there. And that taking advantage of AGI will be like that.
So now we’re expecting AGI in this scenario? I’m going to kind of pretend we didn’t hear that, or that this is a very weak AGI definition, because otherwise the scenario doesn’t make sense at all.
Intelligence is not directly the bottleneck there, true, but yes quite obviously Intelligence Solves This if we had enough of it and put those minds to that particular problem and wanted to invest the resources towards it. Presumably Tyler and I mostly agree on why the resources aren’t being devoted to it.
What would it mean for similar issues to be involved in taking advantage of AGI? Well, first, it would mean that you can’t use AGI to get to ASI (no I can’t explain why), but again that’s got to be a baseline assumption here. After that, well, sorry, I failed to come up with a way to finish this that makes it make sense to me, beyond a general “humans won’t do the things and will throw up various political and legal barriers.” Shrug?
(5:35) Dwarkesh speaks about a claim that there is a key shortage of geniuses, and that America’s problems come largely from putting its geniuses in places like finance, whereas Taiwan puts them in tech, so the semiconductors end up in Taiwan. Wouldn’t having lots more of those types of people eat a lot of bottlenecks? What would happen if everyone had 1000 times more of the best people available?
Tyler Cowen, author of a very good book about Talent and finding talent and the importance of talent, says he didn’t agree with that post, and returns to the claim that returns to IQ in the labor market are amazingly low, and that successful people are smart but mostly have 8-9 areas where they’re an 8-9 on a 1-10 scale, with one 11+ somewhere, and a lot of determination.
All right, I don’t agree that intelligence doesn’t offer returns now, and I don’t agree that intelligence wouldn’t offer returns even at the extremes, but let’s again take Tyler’s own position as a given…
But that exactly describes what an AI gives you! An AI is the ultimate generalist. An AGI will be a reliable 8-9 on everything, actual everything.
And it would also turn everyone else into an 8-9 on everything. So instead of needing to find someone 11+ in one area, plus determination, plus having 8-9 in ~8 areas, you can remove that last requirement. That will hugely expand the pool of people in question.
So there’s two obvious very clear plans here: You can either use AI workers who have that ultimate determination and are 8-9 in everything and 11+ in the areas where AIs shine (e.g. math, coding, etc).
Or you can also give your other experts an AI companion executive assistant to help them, and suddenly they’re an 8+ in everything and also don’t have to deal with a wide range of things.
(6:50) Tyler says, talk to a committee at a Midwestern university about their plans for incorporating AI, then get back to him and talk to him about bottlenecks. Then write a report and the report will sound like GPT-4 and we’ll have a report.
Yes, the committee will not be smart or fast about its official policy for how to incorporate AI into its existing official activities. If you talk to them now they will act like they have a plagiarism problem and that’s it.
So what? Why do we need that committee to form a plan or approve anything or do anything at all right now, or even for a few years? All the students are already using AI. The professors are rapidly being forced to adapt to AI. Everyone doing the research will soon be using AI. Half that committee, three years from now, will have prepared for that meeting using AI. Their phones will all work based on AI. They’ll be talking to their AI phone assistant companions that plan their schedules. You think this will all involve 0.5% GDP growth?
(7:20) Dwarkesh asks, won’t the AIs be smart, super conscientious and work super hard? Tyler explicitly affirms the 0.5% GDP growth estimate, that this will transform the world over 30 years but “over any given year we won’t so much notice it.” Things like drug developments that would have taken 20 years now take 10 years, but you won’t feel it as revolutionary for a long time.
I mean, it’s already getting very hard to miss. If you don’t notice it in 2025 or at least 2026, and you’re in the USA, check your pulse, you might be dead, etc.
Is that saying we will double productivity in pharmaceutical R&D, and that it would have far more than doubled if progress didn’t require long expensive clinical trials, so other forms of R&D should be accelerated much more?
For reference, according to Claude, R&D in general contributes about 0.3% to RGDP growth per year right now. Suppose we doubled that effect in the roughly half of current R&D spend that is bottlenecked in similar fashion, while returns in the other half went up by more.
Claude also estimates that R&D spending would, if returns to R&D doubled, go up by 30%-70% on net.
So we seem to be looking at more than 0.5% RGDP growth per year from R&D effects alone, between additional spending on it and greater returns. And obviously AI is going to have additional other returns.
This is a plausible bottleneck, but that implies rather a lot of growth.
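To see the magnitude, here is a minimal back-of-the-envelope sketch. The 0.3-point base contribution and the 30%-70% spending response are the Claude estimates quoted above; the 3x return for the unbottlenecked half is my own illustrative assumption:

```python
# Back-of-the-envelope: R&D currently contributes ~0.3 points/year to
# RGDP growth. Assume returns double in the bottlenecked half of R&D
# spend, triple (an illustrative assumption) in the unbottlenecked half,
# and total spend rises 30-70% in response.
BASE_CONTRIBUTION = 0.3  # percentage points of RGDP growth per year

def rd_growth_contribution(spend_multiplier: float,
                           bottlenecked_gain: float = 2.0,
                           unbottlenecked_gain: float = 3.0) -> float:
    average_gain = 0.5 * bottlenecked_gain + 0.5 * unbottlenecked_gain
    return BASE_CONTRIBUTION * average_gain * spend_multiplier

for mult in (1.3, 1.7):
    print(f"R&D spend x{mult}: ~{rd_growth_contribution(mult):.2f} points/year")
# x1.3 -> ~0.98, x1.7 -> ~1.27 points/year, above the 0.5% estimate alone
```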
(8:00) Dwarkesh points out that Progress Studies is all about all the ways we could unlock economic growth, yet Tyler says that tons more smart conscientious digital workers wouldn’t do that much. What gives? Tyler again says bottlenecks, and adds on energy as an important consideration and bottleneck.
Feels like bottleneck is almost a magic word or mantra at this point.
Energy is a real consideration, yes the vision here involves spending a lot more energy, and that might take time. But also we see rapidly declining costs, including energy costs, to extract the same amount of intelligence, things like 10x savings each year.
And for inference purposes we can outsource our needs elsewhere, which we would if this was truly bottlenecking explosive growth, and so on. So while I think energy will indeed be an important limiting factor and be strained, especially in terms of pushing the frontier or if we want to use o3-style very expensive inference a lot, I don’t expect it to bind medium-term economic growth so much in a slow growth scenario, and the bottlenecks involved here shouldn’t compound with others. In a high growth takeoff scenario, I do think energy could bind far more impactfully.
Another way of looking at this is that if the price of energy goes substantially up due to AI, or at least the price of energy outside of potentially “government-protected uses,” then that can only happen if it is having a large economic impact. If it doesn’t raise the price of energy a lot, then no bottleneck exists.
Tyler Cowen and I think very differently here.
(9:25) Fascinating moment. Tyler says he goes along with the experts in general, but agrees that “the experts” on basically everything but AI are asleep at the wheel when it comes to AI – except when it comes to their views on diffusions of new technology in general, where the AI people are totally wrong. His view is, you get the right view by trusting the experts in each area, and combining them.
Tyler seems to be making an argument from reference class expertise? That this is a “diffusion of technology” question, so those who are experts on that should be trusted?
Even if they don’t actually understand AI and what it is and its promise?
That’s not how I roll. At all. As noted above in this post, and basically all the time. I think that you have to take the arguments being made, and see if you agree with them, and whether and how much they apply to the case of AI and especially AGI. Saying “the experts in area [X] predict [Y]” is a reasonable placeholder if you don’t have the ability to look at the arguments and models and facts involved, but hey look, we can do that.
Simply put, while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow adaptation, they won’t slow it down all that much, because this is fundamentally different. AI, and especially AI workers, “adapt themselves” to a large extent; the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently.
It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better.
Suppose, as one would reasonably expect, by 2026 we have strong AI agents, capable of handling for ordinary people a wide variety of logistical tasks, sorting through information, and otherwise offering practical help. Apple Intelligence is partly here, Claude Alexa is coming, Project Astra is coming, and these are pale shadows of the December 2025 releases I expect. How long would adaptation really take? Once you have that, what stops you from then adapting AI in other ways?
Already, yes, adaptation is painfully slow, but it is also extremely fast. In two years ChatGPT alone has 300 million MAU. A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs. The reason why LLMs are not catching on even faster is that they’re not quite ready for prime time in the fully user-friendly ways normies need. That’s about to change in 2025.
Dwarkesh tries to use this as an intuition pump. Tyler’s not having it.
(10:15) Dwarkesh asks, what would happen if the world population would double? Tyler says, depends what you’re measuring. Energy use would go up. But he doesn’t agree with population-based models, too many other things matter.
Feels like Tyler is answering a different question. I see Dwarkesh as asking, wouldn’t the extra workers mean we could simply get a lot more done, wouldn’t (total, not per capita) GDP go up a lot? And Tyler’s not biting.
(11:10) Dwarkesh tries asking about shrinking the population 90%. Tyler says shrinking is different: the delta can kill you, whereas growth might not help you.
Very frustrating. I suppose this does partially respond, by saying that it is hard to transition. But man I feel for Dwarkesh here. You can feel his despair as he transitions to the next question.
(11:35) Dwarkesh asks what are the specific bottlenecks? Tyler says: Humans! All of you! Especially you who are terrified.
That’s not an answer yet, but then he actually does give one.
He says once AI starts having impact, there will be a lot of opposition to it, not primarily on “doomer” grounds but based on: Yes, this has benefits, but I grew up and raised my kids for a different way of life, I don’t want this. And there will be a massive fight.
Yes. He doesn’t even mention jobs directly but that will be big too. We already see that the public strongly dislikes AI when it interacts with it, for reasons I mostly think are not good reasons.
I’ve actually been very surprised how little resistance there has been so far, in many areas. AIs are basically being allowed to practice medicine, to function as lawyers, and do a variety of other things, with no effective pushback.
The big pushback has been for AI art and other places where AI is clearly replacing creative work directly. But that has features that seem distinct.
Yes people will fight, but what exactly do they intend to do about it? People have been fighting such battles for a while; every year I watch the battle for Paul Bunyan’s Axe. He still died. I think there’s too much money at stake, too much productivity at stake, too many national security interests.
Yes, it will cause a bunch of friction, and slow things down somewhat, in the scenarios like the one Tyler is otherwise imagining. But if that’s the central actual thing, it won’t slow things down all that much in the end. Rarely has.
We do see some exceptions, especially involving powerful unions, where the anti-automation side seems to do remarkably well; see the port strike. But also see which side of that the public is on. I don’t like their long term position, especially if AI can seamlessly walk in and take over the next time they strike. And that, alone, would probably be +0.1% or more to RGDP growth.
(12:15) Dwarkesh tries using China as a comparison case. If you can do 8% growth for decades merely by “catching up,” why can’t you do it with AI? Tyler responds, China’s in a mess now, they’re just a middle income country, they’re the poorest Chinese people on the planet, a great example of how hard it is to scale. Dwarkesh pushes back that this is about the previous period, and Tyler says well, sure, from the $200 level.
Dwarkesh is so frustrated right now. He’s throwing everything he can at Tyler, but Tyler is such a polymath that he has detail points for anything and knows how to pivot away from the intent of the questions.
(13:40) Dwarkesh asks, has Tyler’s attitude on AI changed from nine months ago? He says he sees more potential and there was more progress than he expected, especially o1 (this was before o3). The questions he wrote for GPT-4, which Dwarkesh got all wrong, are now too easy for models like o1. And he “would not be surprised if an AI model beat human experts on a regular basis within three years.” He equates it to the first Kasparov vs. Deep Blue match, which Kasparov won, before the second match, which he lost.
I wouldn’t be surprised if this happens in one year.
I wouldn’t be that shocked if o3 turns out to do it now.
Tyler’s expectations here, to me, contradict his statements earlier. Not strictly, they could still both be true, but it seems super hard.
How much would availability of above-human level economic thinking help us in aiding economic growth? How much would better economic policy aid economic growth?
We take a detour to other areas; I’ll offer brief highlights.
(15:45) Why is it important for founders to stay in charge? Courage. Making big changes.
(19:00) What is going on with the competency crisis? Tyler sees high variance at the top. The best are getting better, such as in chess or basketball, and also a decline in outright crime and failure. But there’s a thick median not quite at the bottom that’s getting worse, and while he thinks true median outcomes are about static (since more kids take the tests), that’s not great.
(22:30) A bunch of shade on both Churchill generally and on being an international journalist, including saying it’s not that impressive because how much does it pay?
He wasn’t paid that much as Prime Minister either, you know…
(24:00) Why are all our leaders so old? Tyler says, current year aside, we’ve mostly had impressive candidates, and most of the leadership in Washington in various places (he didn’t mention Congress!) is impressive. Yay Romney and Obama.
Yes, yay Romney and Obama as our two candidates. So it’s only been three election cycles where both candidates have been… not ideal. I do buy Tyler’s claim that Trump has a lot of talent in some ways, but, well, ya know.
If you look at the other candidates for both nominations over that period, I think you see more people who were mostly also not so impressive. I would happily have taken Obama over every candidate on the Democratic side in 2016, 2020 or 2024, and Romney over every Republican (except maybe Kasich) in those elections as well.
This also doesn’t address Dwarkesh’s concern about age. What about the age of Congress and their leadership? It is very old, on both sides, and things are not going so great.
I can’t speak to the quality of the people in the agencies.
(27:00) Commentary on early-mid 20th century leaders being terrible, and how when there is big change there are arms races and sometimes bad people win them (“and this is relevant to AI”).
For something that is going to not cause that much growth, Tyler sees AI as a source for quite rapid change in other ways.
(34:20) Tyler says all inputs other than AI rise in value, but you have to do different things. He’s shifting from producing content to making connections.
This again seems to be a disconnect. If AI is sufficiently impactful as to substantially increase the value of all other inputs, then how does that not imply substantial economic growth?
Also this presumes that the AI can’t be a substitute for you, or that it can’t be a substitute for other people that could in turn be a substitute for you.
Indeed, I would think the default model would presumably be that the value of all labor goes down, even for things where AI can’t do it (yet), because people substitute into those areas.
(35:25) Tyler says he’s writing his books primarily for the AIs; he wants them to know he appreciates them. And the next book will be even more for the AIs so it can shape how they see the AIs. And he says, you’re an idiot if you’re not writing for the AIs.
Basilisk! Betrayer! Misaligned!
“What the AIs will think of you” is actually an underrated takeover risk, and I pointed this out as early as AI #1.
The AIs will be smarter and better at this than you, and also will be reading what the humans say about you. So maybe this isn’t as clever as it seems.
My mind boggles that it could be correct to write for the AIs… but you think they will only cause +0.5% GDP annual growth.
(36:30) What won’t AIs get from one’s writing? That vibe you get talking to someone for the first 3 minutes? Sense of humor?
I expect the AIs will increasingly have that stuff, at least if you provide enough writing samples. They have true sight.
Certainly if they have interview and other video data to train with, that will work over time.
(37:25) What happens when Tyler turns down a grant in the first three minutes? Usually it’s failure to answer a question, like “how do you build out your donor base?”, without which you have nothing. Or someone focuses on the wrong things, or cares about the wrong status markers, and 75% of the value doesn’t display on the transcript, which is weird since the things Tyler names seem like they would be in the transcript.
(42:15) Tyler’s portfolio is diversified mutual funds, US-weighted. He has legal restrictions on most other actions such as buying individual stocks, but he would keep the same portfolio regardless.
Mutual funds over ETFs? Gotta chase that lower expense ratio.
I basically think This Is Fine as a portfolio, but I do think he could do better if he actually tried to pick winners.
(42:45) Tyler expects gains to increasingly fall to private companies that see no reason to share their gains with the public, and he doesn’t have enough wealth to get into good investments but also has enough wealth for his purposes anyway; if he had money he’d mostly do what he’s doing anyway.
Yep, I think he’s right about what he would be doing, and I too would mostly be doing the same things anyway. Up to a point.
If I had a billion dollars or what not, that would be different, and I’d be trying to make a lot more things happen in various ways.
This implies the efficient market hypothesis is rather false, doesn’t it? The private companies are severely undervalued in Tyler’s model. If private markets “don’t want to share the gains” with public markets, that implies that public markets wouldn’t give fair valuations to those companies. Otherwise, why would one want such lack of liquidity and diversification, and all the trouble that comes with staying private?
If that’s true, what makes you think Nvidia should only cost $140 a share?
Tyler Cowen doubles down on dismissing AI optimism, and is done playing nice.
(46:30) Tyler circles back to the rate of diffusion of tech change, and has a very clear attitude of I’m right and all people are being idiots by not agreeing with me, that all they have are “AI will immediately change everything” and “some hyperventilating blog posts.” AIs making more AIs? Diminishing returns! Ricardo knew this! Well, that was about humans breeding. But it’s good that San Francisco “doesn’t know about” diminishing returns and the correct pessimism that results.
This felt really arrogant, and willfully out of touch with the actual situation.
You can say the AIs wouldn’t be able to do this, but: No, “Ricardo didn’t know that,” and saying “diminishing returns” does not apply here, because the whole “AIs making AIs” principle is that the new AIs would be superior to the old AIs, a cycle you could repeat. The core reason you get eventual diminishing returns from more people is that they’re drawn from the same people distribution.
I don’t even know what to say at this point to “hyperventilating blog posts.” Are you seriously making the argument that if people write blog posts, that means their arguments don’t count? I mean, yes, Tyler has very much made exactly this argument in the past, that if it’s not in a Proper Academic Journal then it does not count and he is correct to not consider the arguments or update on them. And no, they’re mostly not hyperventilating or anything like that, but that’s also not an argument even if they were.
What we have are, quite frankly, extensive highly logical, concrete arguments about the actual question of what [X] will happen and what [Y]s will result from that, including pointing out that much of the arguments being made against this are Obvious Nonsense.
Diminishing returns holds as a principle in a variety of conditions, yes, and is a very important concept to know. But there are other situations with increasing returns, and also a lot of threshold effects, even outside of AI. And San Francisco importantly knows this well.
Saying there must be diminishing returns to intelligence, and that this means nothing that fast or important is about to happen when you get a lot more of it, completely begs the question of what it even means to have a lot more intelligence.
Earlier Tyler used chess and basketball as examples, and talked about the best youth being better, and how that was important because the best people are a key bottleneck. That sounds like a key case of increasing returns to scale.
Humanity is a very good example of where intelligence at least up to some critical point very obviously had increasing returns to scale. If you are below a certain threshold of intelligence as a human, your effective productivity is zero. Humanity having a critical amount of intelligence gave it mastery of the Earth. Tell what gorillas and lions still exist about decreasing returns to intelligence.
For various reasons, with the way our physical world and civilization is constructed, we typically don’t end up rewarding relatively high intelligence individuals with that much in the way of outsized economic returns versus ordinary slightly-above-normal intelligence individuals.
But that is very much a product of our physical limitations and current social dynamics and fairness norms, and the concept of a job with essentially fixed pay, and actual good reasons not to try for many of the higher paying jobs out there in terms of life satisfaction.
In areas and situations where this is not the case, returns look very different.
Tyler Cowen himself is an excellent example of increasing returns to scale. The fact that Tyler can read and do so much enables him to do the thing he does at all, and to enjoy oversized returns in many ways. And if you decreased his intelligence substantially, he would be unable to produce at anything like this level. If you increased his intelligence substantially or “sped him up” even more, I think that would result in much higher returns still, and also AI has made him substantially more productive already, as he no doubt realizes.
(I’ve been over all this before, but this seems like a place to try it again.)
Trying to wrap one’s head around all of it at once is quite a challenge.
(48:45) Tyler worries about despair in certain areas from AI and worries about how happy it will make us, despite expecting full employment pretty much forever.
If you expect full employment forever then you either expect AI progress to fully stall or there’s something very important you really don’t believe in, or both. I don’t understand: what does Tyler think happens once the AIs can do anything digital as well as most or all humans? What does he think will happen when we use that to solve robotics? What are all these humans going to be doing to get to full employment?
It is possible the answer is “government mandated fake jobs,” but then it seems like an important thing to say explicitly, since that’s actually more like UBI.
Tyler Cowen: “If you don’t have a good prediction, you should be a bit wary and just say, ‘Okay, we’re going to see.’ But, you know, some words of caution.”
YOU DON’T SAY.
Further implications left as an exercise to the reader, who is way ahead of me.
(54:30) Tyler says that the people in DC are wise and think on the margin, whereas the SF people are not wise and think in infinities (he also says they’re the most intelligent hands down, elsewhere), and the EU people are wisest of all, but that if the EU people ran the world the growth rate would be -1%. Whereas the USA has so far maintained the necessary balance here well.
If the wisdom you have would bring you to that place, are you wise?
This is such a strange view of what constitutes wisdom. Yes, the wise man here knows more things and is more cultured, and thinks more prudently and is economically prudent by thinking on the margin, and all that. But as Tyler points out, a society of such people would decay and die. It is not productive. In the ultimate test, outcomes, and supporting growth, it fails.
Tyler says you need balance, but he’s at a Progress Studies conference, which should make it clear that no, America has grown in this sense “too wise” and insufficiently willing to grow, at least on the wise margin.
Given what the world is about to be like, you need to think in infinities. You need to be infinitymaxing. The big stuff really will matter more than the marginal revolution. That’s kind of the point.
You still have to, day to day, constantly think on the margin, of course.
(55:10) Tyler says he’s a regional thinker from New Jersey, that he is an uncultured barbarian, who only has a veneer of culture because of collection of information, but knowing about culture is not like being cultured, and that America falls flat in a lot of ways that would bother a cultured Frenchman but he’s used to it so they don’t bother Tyler.
I think Tyler is wrong here, to his own credit. He is not a regional thinker; if anything he is far less a regional thinker than the typical “cultured” person he speaks about. And to the extent that he is “uncultured” it is because he has not taken on many of the burdens and social obligations of culture, and those things are to be avoided – he would be fully capable of “acting cultured” if the situation were to call for that, it wouldn’t be others mistaking anything.
He refers to his approach as an “autistic approach to culture.” He seems to mean this in a pejorative way, that an autistic approach to things is somehow not worthy or legitimate or “real.” I think it is all of those things.
Indeed, the autistic-style approach to pretty much anything, in my view, is Playing in Hard Mode, with much higher startup costs, but brings a deeper and superior understanding once completed. The cultured Frenchman is like a fish in water, whereas Tyler understands and can therefore act on a much deeper, more interesting level. He can deploy culture usefully.
(56:00) What is autism? Tyler says it is officially defined by deficits, by which definition no one there [at the Progress Studies convention] is autistic. But in terms of other characteristics maybe a third of them would count.
I think the term autistic has been expanded and overloaded in a way that was not wise, but at this point we are stuck with it. So now it means, in different contexts, both the deficits and also the general approach that high-functioning people with those deficits come to take to navigating life: consciously processing and knowing the elements of systems and how they fit together, treating words as having meanings, and having a map that matches the territory, whereas those who are not autistic navigate largely on vibes.
By this definition, being the non-deficit form of autistic is excellent, a superior way of being at least in moderation and in the right spots, for those capable of handling it and its higher cognitive costs.
Indeed, many people have essentially none of this set of positive traits and ways of navigating the world, and it makes them very difficult to deal with.
(56:45) Why is tech so bad at having influence in Washington? Tyler says they’re getting a lot more influential quickly, largely due to national security concerns, which is why AI is being allowed to proceed.
For a while now I have found Tyler Cowen’s positions on AI very frustrating (see for example my coverage of the 3rd Cowen-Patel podcast), especially on questions of potential existential risk and expected economic growth, and what intelligence means and what it can do and is worth. This podcast did not address existential risks at all, so most of this post is about me trying (once again!) to explain why Tyler’s views on returns to intelligence and future economic growth don’t make sense to me, seeming well outside reasonable bounds.
I try to offer various arguments and intuition pumps, playing off of Dwarkesh’s attempts to do the same. It seems like there are very clear pathways, using Tyler’s own expectations and estimates, that on their own establish more growth than he expects, assuming AI is allowed to proceed at all.
I gave only quick coverage to the other half of the podcast, but don’t skip that other half. I found it very interesting, with a lot of new things to think about, but they aren’t areas where I feel as ready to go into detailed analysis, and I was doing triage. In a world where we all had more time, I’d love to do deep dives into those areas too.
On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statements there. Physics is so bizarre.
On Dwarkesh Patel’s 4th Podcast With Tyler Cowen Read More »
There’s also the more recent concept of “explosive percolation,” whereby connectivity emerges not in a slow, continuous process but quite suddenly, simply by replacing the random node connections with predetermined criteria—say, choosing to connect whichever pair of nodes has the fewest pre-existing connections to other nodes. This introduces bias into the system and suppresses the growth of large dominant clusters. Instead, many large unconnected clusters grow until the critical threshold is reached. At that point, even adding just one or two more connections will trigger one global violent merger (instant uber-connectivity).
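The rule is simple enough to simulate. This sketch contrasts ordinary random edge addition with a biased variant using the "product rule" common in the explosive-percolation literature (of two random candidate edges, keep the one joining the smaller clusters); it is in the same family as the criterion described above, and the sizes and sampling points are illustrative:

```python
import random

def largest_cluster_history(n: int, explosive: bool, seed: int = 0) -> list[int]:
    """Add edges one at a time; record the largest cluster size after each."""
    rng = random.Random(seed)
    parent, size = list(range(n)), [1] * n

    def find(x: int) -> int:          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    largest, history = 1, []
    for _ in range(int(1.1 * n)):
        a, b = rng.randrange(n), rng.randrange(n)
        if explosive:
            # Product rule: of two candidate edges, keep the one joining
            # the smaller clusters, suppressing a dominant cluster.
            c, d = rng.randrange(n), rng.randrange(n)
            if size[find(a)] * size[find(b)] > size[find(c)] * size[find(d)]:
                a, b = c, d
        ra, rb = find(a), find(b)
        if ra != rb:                  # merge the two clusters
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            largest = max(largest, size[ra])
        history.append(largest)
    return history

# Random percolation grows a giant cluster smoothly past ~n/2 edges; the
# biased rule stays fragmented longer, then jumps far more abruptly.
n = 100_000
for explosive in (False, True):
    h = largest_cluster_history(n, explosive)
    print("explosive" if explosive else "random   ",
          [h[int(f * n)] for f in (0.45, 0.55, 0.75, 0.95)])
```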
One might not immediately think of crossword puzzles as a network, although there have been a couple of relevant prior mathematical studies. For instance, John McSweeney of the Rose-Hulman Institute of Technology in Indiana employed a random graph network model for crossword puzzles in 2016. He factored in how a puzzle’s solvability is affected by the interactions between the structure of the puzzle’s cells (squares) and word difficulty, i.e., the fraction of letters you need to know in a given word in order to figure out what it is.
Answers represented nodes while answer crossings represented edges, and McSweeney assigned a random distribution of word difficulty levels to the clues. "This randomness in the clue difficulties is ultimately responsible for the wide variability in the solvability of a puzzle, which many solvers know well—a solver, presented with two puzzles of ostensibly equal difficulty, may solve one readily and be stumped by the other," he wrote at the time. At some point, there has to be a phase transition, in which solving the easiest words enables the puzzler to solve the more difficult words until the critical threshold is reached and the puzzler can fill in many solutions in rapid succession—a dynamic process that resembles, say, the spread of diseases in social groups.
In this sample realization, black sites are shown in black; empty sites are white; and occupied sites contain symbols and letters. Credit: Alexander K. Hartmann, 2024
Hartmann’s new model incorporates elements of several nonstandard percolation models, including how much the solver benefits from partial knowledge of the answers. Letters correspond to sites (white squares) while words are segments of those sites, bordered by black squares. There is an a priori probability of being able to solve a given word if no letters are known. If some words are solved, the puzzler gains partial knowledge of neighboring unsolved words, which increases the probability of those words being solved as well.
Why solving crosswords is like a phase transition Read More »
“We want to have the quickest, cheapest way to get these 30 samples back.”
This photo montage shows sample tubes shortly after they were deposited onto the surface by NASA's Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS
For nearly four years, NASA's Perseverance rover has journeyed across an unexplored patch of land on Mars, once home to an ancient river delta, and collected a slew of rock samples sealed inside cigar-sized titanium tubes.
These tubes might contain tantalizing clues about past life on Mars, but NASA’s ever-changing plans to bring them back to Earth are still unclear.
On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the “sky crane” landing system demonstrated on the agency’s two most recent Mars rovers. The other option would be to outsource the lander to the space industry.
NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator working under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency’s 15th administrator last month.
“This is going to be a function of the new administration in order to fund this,” said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.
The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?
The Trump White House is expected to emphasize “results and speed” with NASA’s space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.
NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion (double the previous cost projection) and wouldn't get the Mars specimens back to Earth until 2040.
This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies to come up with better ideas to bring home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.
“We want to have the quickest, cheapest way to get these 30 samples back,” Nelson said.
NASA officials said they believe a stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return mission architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson. JPL’s previous approach would have put a heavier lander onto the Martian surface, with small helicopter drones that could pick up sample tubes if there were problems with the Perseverance rover.
NASA previously deleted a “fetch rover” from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.
An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion.
Artist’s illustration of SpaceX’s Starship approaching Mars. Credit: SpaceX
JPL will have a “key role” in both paths for MSR, said Nicky Fox, head of NASA’s science mission directorate. “To put it really bluntly, JPL is our Mars center in NASA science.”
If the Trump administration moves forward with either of the proposed MSR plans, this would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.
Without MSR, engineers at the Jet Propulsion Laboratory don’t have a flagship-class mission to build after the launch of NASA’s Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays with the previous iteration of MSR and NASA’s Psyche asteroid mission, and it’s not unwise to anticipate more cost overruns on a project as complex as a round-trip flight to Mars.
Ars submitted multiple requests to interview Laurie Leshin, JPL’s director, in recent months to discuss the lab’s future, but her staff declined.
Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively.
“The main difference is in the landing mechanism,” Fox said.
To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year’s budget, Nelson said.
NASA officials didn’t identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX’s Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.
NASA solicited eight studies from industry earlier this year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martin, each with its own lander concept, were among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized with Musk and Amazon's Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars.
This slide from a November presentation to the Mars Exploration Program Analysis Group shows JPL’s proposed “sky crane” architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings. Credit: NASA/JPL
The science community has long identified a Mars Sample Return mission as the top priority for NASA’s planetary science program. In the National Academies’ most recent decadal survey released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated the program’s cost should not undermine other planetary science missions.
That’s exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA’s planetary science division to institute a moratorium on starting new missions.
“The decision about Mars Sample Return is not just one that affects Mars exploration,” said Curt Niebur, NASA’s lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. “Itâs going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this.”
Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns.
“We can wait another year, or we can get started now,” Rocket Lab posted on X. “Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031.”
Through its own internal development and acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA’s recent Mars missions. “We can deliver MSR mission success too,” the company said.
Rocket Lab’s concept for a Mars Sample Return mission. Credit: Rocket Lab
Although NASA’s deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles the industry might play in the MSR mission.
“They need to flesh out all of the possibilities of whatâs required in the engineering for the commercial option,” Nelson said.
On the program’s current trajectory, Fox said NASA would be able to choose a new MSR architecture in mid-2026.
Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration’s goals for space exploration. In an interview with Ars last week, Nelson said he did not want to “put the new administration in a box” with any significant MSR decisions in the waning days of the Biden administration.
One source with experience in crafting and implementing US space policy told Ars that Nelson’s deferral on a decision will “tee up MSR for canceling.” Faced with a decision to spend billions of dollars on a robotic sample return or billions of dollars to go toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.
If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons. NASA has held off on the development of a Uranus mission to focus on the Mars Sample Return first.
Whether it’s with robots or humans, there’s a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere.
“Bringing them back will revolutionize our understanding of the planet Mars and indeed, our place in the solar system,” Fox said. “We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also … getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system.”
Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than is possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, or the traces of ancient life, that elude detection with instruments on Mars.
“The samples that we have taken by Perseverance actually predateâthey are older than any of the samples or rocks that we could take here on Earth,” Fox said. “So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing.”
Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.
In a statement, the Planetary Society said it is “concerned that NASA is again delaying a decision on the program, committing only to additional concept studies.”
“It has been more than two years since NASA paused work on MSR,” the Planetary Society said. “It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover.
“We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface.”
China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA’s plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China’s mission will scoop up rocks and soil near its landing site.
“Theyâre just going to have a mission to grab and goâgo to a landing site of their choosing, grab a sample and go,” Nelson said. “That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that thereâs a race? Of course, people will say that, but itâs two totally different missions.”
Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump’s NASA transition team.
“I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return,” Nelson said. “I can’t imagine that they don’t. I don’t think we want the only sample return coming back on a Chinese spacecraft.”
NASA defers decision on Mars Sample Return to the Trump administration Read More »
The Justice Department says that landlords did more than use RealPage in the alleged pricing scheme. “Along with using RealPage’s anticompetitive pricing algorithms, these landlords coordinated through a variety of means,” such as “directly communicating with competitors’ senior managers about rents, occupancy, and other competitively sensitive topics,” the DOJ said.
There were “call arounds” in which “property managers called or emailed competitors to share, and sometimes discuss, competitively sensitive information about rents, occupancy, pricing strategies and discounts,” the DOJ said.
Landlords discussed their use of RealPage software with each other, the DOJ said. “For instance, landlords discussed via user groups how to modify the software’s pricing methodology, as well as their own pricing strategies,” the DOJ said. “In one example, LivCor and Willow Bridge executives participated in a user group discussion of plans for renewal increases, concessions and acceptance rates of RealPage rent recommendations.”
The DOJ lawsuit says RealPage pushes clients to use “auto-accept settings” that automatically approve pricing recommendations. The DOJ said today that property rental firms discussed how they use those settings.
“As an example, at the request of Willow Bridge’s director of revenue management, Greystar’s director of revenue management supplied its standard auto-accept parameters for RealPage’s software, including the daily and weekly limits and the days of the week for which Greystar used ‘auto-accept,'” the DOJ said.
Greystar issued a statement saying it is “disappointed that the DOJ added us and other operators to their lawsuit against RealPage,” and that it will “vigorously” defend itself in court. “Greystar has and will conduct its business with the utmost integrity. At no time did Greystar engage in any anti-competitive practices,” the company said.
The Justice Department is joined in the case by the attorneys general of California, Colorado, Connecticut, Illinois, Massachusetts, Minnesota, North Carolina, Oregon, Tennessee, and Washington. The case is in US District Court for the Middle District of North Carolina.
US sues six of the biggest landlords over "algorithmic pricing schemes" Read More »
AMD’s batch of CES announcements this year includes just two new products for desktop PC users: the new Ryzen 9 9950X3D and 9900X3D. Both will be available at some point in the first quarter of 2025.
Both processors include additional CPU cores compared to the 9800X3D that launched in November. The 9900X3D includes 12 Zen 5 CPU cores with a maximum clock speed of 5.5 GHz, and the 9950X3D includes 16 cores with a maximum clock speed of 5.7 GHz. Both include 64MB of extra L3 cache compared to the regular 9900X and 9950X, for total cache of 140MB and 144MB, respectively; games in particular tend to benefit disproportionately from this extra cache memory.
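The totals are easy to sanity-check. A quick back-of-envelope, assuming Zen 5's 1MB of L2 per core (my assumption; the announcement doesn't spell out the breakdown):

```python
# Total cache = L2 (assumed 1MB per Zen 5 core) + 64MB base L3 + 64MB 3D V-Cache.
def total_cache_mb(cores, base_l3=64, vcache=64, l2_per_core=1):
    return cores * l2_per_core + base_l3 + vcache

print(total_cache_mb(12))  # 9900X3D: 12 + 64 + 64 = 140
print(total_cache_mb(16))  # 9950X3D: 16 + 64 + 64 = 144
```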
But the 9950X3D and 9900X3D aren't being targeted at people who build PCs primarily to game; the company says their game performance is usually within 1 percent of the 9800X3D. These processors are for people who want peak game performance when they're playing something but also need lots of CPU cores for chewing on CPU-heavy workloads during the workday.
AMD estimates that the Ryzen 9 9950X3D is about 8 percent faster than the 7950X3D when playing games and about 13 percent faster in professional content creation apps. These modest gains are more or less in line with the small performance bump we’ve seen in other Ryzen 9000-series desktop CPUs.
AMD launches new Ryzen 9000X3D CPUs for PCs that play games and work hard Read More »
After ditching the traditional Dell XPS laptop look in favor of the polarizing design of the XPS 13 Plus released in 2022, Dell is killing the XPS branding that has become a mainstay for people seeking a sleek, respectable, well-priced PC.
This means that there won’t be any more Dell XPS clamshell ultralight laptops, 2-in-1 laptops, or desktops. Dell is also killing its Latitude, Inspiron, and Precision branding, it announced today.
Moving forward, Dell computers will carry one of three brands: plain Dell, which Dell's announcement today described as "designed for play, school, and work"; Dell Pro, "for professional-grade productivity"; or Dell Pro Max, "designed for maximum performance." Dell will release Dell and Dell Pro-branded displays, accessories, and "services," it said. The Pro Max line will feature laptops and desktop workstations with professional-grade GPU capabilities as well as a new thermal design.
Dell claims its mid-tier Pro line emphasizes durability, "withstanding three times as many hinge cycles, drops, and bumps from regular use as competitor devices." The statement is based on "internal analysis of multiple durability tests performed" on the Dell Pro 14 Plus (released today) and HP EliteBook 640 G11 laptops conducted in November. Also based on internal testing conducted in November, Dell claims its Pro PCs boost "airflow by 20 percent, making these Dell's quietest commercial laptops ever."
Within each line are base models, Plus models, and Premium models. In a blog post, Kevin Terwilliger, VP and GM of commercial, consumer, and gaming PCs at Dell, explained that Plus models offer "the most scalable performance" and Premium models offer "the ultimate in mobility and design."
Credit: Dell
By those naming conventions, old-time Dell users could roughly equate XPS laptops with new Dell Premium products.
"The Dell portfolio will expand later this year to include more AMD and Snapdragon X Series processor options," Terwilliger wrote. "We will also introduce new devices in the base tier, which offers everyday devices that provide effortless use and practical design, and the Premium tier, which continues the XPS legacy loved by consumers and prosumers alike."
Meanwhile, Dell Pro base models feel like Dell's now-defunct Latitude lineup, while its Precision workstations may best align with 2025's Dell Pro Max offerings.
The end of an era: Dell will no longer make XPS computers Read More »
Related: On the 2nd CWT with Jonathan Haidt, The Kids are Not Okay, Full Access to Smartphones is Not Good For Children
It's rough out there. In this post, I'll cover the latest arguments that smartphones should be banned in schools, including simply because the notifications are too distracting (and if you don't care much about that, why are the kids in school at all?); problems with kids on social media, including many negative interactions; and the new phenomenon called sextortion.
Tanagra Beast reruns the experiment of having a class tally their phone notifications. The results were highly compatible with the original experiment.
The tail, it was long.
Ah! So right away we can see a textbook long-tailed distribution. The top 20% of recipients accounted for 75% of all received notifications, and the bottom 20% for basically zero. We can also see that girls are more likely to be in that top tier, but they aren’t exactly crushing the boys.
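That "top 20% get 75%" figure is a simple concentration measure; here is how you might compute it from a class tally (the counts below are hypothetical, shaped to be heavy-tailed like the reported data):

```python
def top_share(counts, top_frac=0.2):
    """Fraction of all notifications going to the top `top_frac` of students."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical per-hour tally for a 20-student class:
tally = [450, 450, 120, 60, 40, 25, 18, 12, 9, 7, 5, 4, 3, 2, 2, 1, 1, 0, 0, 0]
print(f"{top_share(tally):.0%} of notifications go to the top 20%")
```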
What if you asked only about notifications that would actually distract?
There was even more concentration at the top. The more notifications you got, the more likely you were to be distracted by each one.
Here are some more highlights.
Which apps dominate? Instagram and Snapchat were nearly tied, and together accounted for 46% of all notifications. With vanilla text messages accounting for an additional 35%, we can comfortably say that social communications account for the great bulk of all in-class notifications.
There was little significant gender difference in the app data, with two minor apps accounting for the bulk of the variation: Ring (doorbell and house cameras) and Life 360 (friend/family location tracker), each of which sent several notifications to a few girls. (“Yeah,” said girls during our debriefing sessions, “girls are stalkers.” Other girls nodded in agreement.)
Notifications from Discord, Twitch, or other gaming-centric services were almost exclusively received by males, but there weren’t enough of these to pop out in the data.
The two top recipients, with their rate of 450 notifications per hour (!), or about one every eight seconds, had interesting stories to tell. One of these students had a job after school, and about half their messages (but only half) were work-related. The other was part of a large group chat, and additionally had a friend at home sick who was pelting them with a continuous rant about everything and nothing, three words at a time.
Some students who receive very large numbers of notifications use settings to differentiate them by vibration patterns, and tell me that they “notice” some vibrations much more than others.
Official school business is a significant contributor to student notification loads. At least 4% of all notifications were directly attributable to school apps, and I would guess the indirect total (through standard texts, for example) might be closer to 10-15%. For students who get very few notifications, 30-50% of their notifications might be school-related. Our school's gradebook app is the biggest offender, in part because it's poorly configured and sends way more notifications than anyone wants.
Is our school unusually good or bad when it comes to phones? By a vote of 23 to 7, students who had been enrolled in another school during the last four years said our school was better than their previous school at keeping phones suppressed.
There’s still obvious room for improvement, though. I asked my students to imagine that, at the start of the hour, they had sent messages inviting a reply to 5 different friends elsewhere on our campus. How many would they expect to have replied before the end of the hour? The answer I consistently got was 4, and that this almost entirely depended on the phone-strictness of the teacher whose class each friend was in. (Iâm on the list of phone-strict teachers, it seems. Phew!)
I asked students if they would want to press a magic button that would permanently delete all social media and messaging apps from the phones of their friend groups if nobody knew it was them. I got only a couple takers. There was more (but far from majority) enthusiasm for deleting all such apps from the whole world. I suspect rates would have been higher if I had asked this as an anonymous written question, but probably not much higher.
I asked if they thought education would be improved on campus if phones were forcibly locked away for the duration of the school day. Only one student gave me so much as an affirmative nod! Among students, the consensus was that kids generally tune into school at the level they care to, and that a phone doesn't change that. A disinterested student without a phone will just tune out in some other way.
As I've mentioned before, I find phones distracting when doing non-internet activities even when there are zero notifications. Merely having the option to look is a tax on my attention. And as Gwern notes in the comments, the fact that a substantial minority of students would want to nuke messaging apps from orbit is more a case of "that is a lot" rather than "that is only a minority." Messaging apps provide obvious huge upside in normal situations outside school, so a lot of kids must see big downsides.
New York City may ban phones in all public schools.
Julian Shen-Berro and Amy Zimmer: [NYC schools chancellor] Banks previously said he's been talking with "hundreds" of principals, and they have overwhelmingly told him they'd like a citywide policy banning phones.
…
"We know [students] need to be in communication with their parents after school," Banks said, "and if there's something going on during the day, parents should just call the school the way they always did before we ever had cell phones."
He previously had said we weren't there yet, largely because bans are hard to enforce. To me that continues to make no sense. You can absolutely enforce it. In fact, it seems much easier to me to enforce a total ban on cell phones via sealed pouches than it is to enforce reasonable use of those phones while leaving them within reach.
The WSJ reports that parents are often the main barrier to banning phones in schools. Some strongly support bans, and the evidence here once again strongly indicates that bans improve matters, but other parents object because they demand direct access to their children at all times.
As always, school shootings are brought up, despite this worry being statistically crazy, and despite cell phone use during a school shooting being thought actively dangerous because it risks giving away one's location. I can't even.
The more reasonable objections are outside emergencies and scheduling issues, which is something, but wow is that a cart before horse situation. Also obviously there are vastly less disruptive ways to solve those problems. Mostly, I think staying in constant contact at that age is actively terrible for the students. You do want to be able to reach each other in an emergency, but there should be friction involved.
If a few parents pull their kids out in protest, let them. Others who support the policy can choose to transfer in. If my kids were at a school where everyone was on their phones all the time, and I had a viable alternative where phones were banned, I would not hesitate. At minimum we can let the market decide.
It can be done. Here is a story of one Connecticut middle school banning phones. All six middle schools in Providence use sealed pouches, as do two high schools there. Teachers say sealing phones in pouches has been transformative.
Tyler Cowen reports on a new paper on the Norwegian ban of smartphones in middle schools. Here is the abstract:
How smartphone usage affects well-being and learning among children and adolescents is a concern for schools, parents, and policymakers.
Combining detailed administrative data with survey data on middle schools' smartphone policies, together with an event-study design, I show that banning smartphones significantly decreases the health care take-up for psychological symptoms and diseases among girls. Post-ban bullying among both genders decreases.
Additionally, girls' GPA improves, and their likelihood of attending an academic high school track increases. These effects are larger for girls from low socio-economic backgrounds.
Hence, banning smartphones from school could be a low-cost policy tool to improve student outcomes.
Tyler does his best to frame this effect as disappointing and Twitter summaries saying otherwise as misleading (he does not link to them), although he admits he is surprised that bullying fell by 0.39 SDs for boys and 0.42 SDs for girls. Grades did not improve much, only 0.08 SDs, though of course we do not know how much this reflects real changes in learning. Also, as one commenter points out, phones are good for cheating.
A plausible explanation for why the math change was 0.22 SDs is that this was based on standardized tests, where the teachers aren't adjusting the curve for the changes. Or it could be that it is helpful to not always have a calculator in your pocket.
I would also note his second point: "The girls consult less with mental health-related professionals, with visits falling by 0.22 on average to their GPs, falling by 2-3 visits to specialist care." That is a 29% decline in GP visits, and a 60% decline in specialist visits. That is a gigantic effect. Some of it is "phones cause kids to seek help more" but at today's margins I am fine with that, and this likely represents a large improvement in mental health.
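For what it's worth, the implied baselines fall straight out of stating the same declines two ways (I'm treating "2-3 visits" as 2.5, which is my assumption):

```python
# baseline = absolute drop / fractional drop
gp_drop, gp_pct = 0.22, 0.29          # GP visits
spec_drop, spec_pct = 2.5, 0.60       # specialist care, midpoint of "2-3"

print(f"implied GP baseline: {gp_drop / gp_pct:.2f} visits")              # ~0.76
print(f"implied specialist baseline: {spec_drop / spec_pct:.1f} visits")  # ~4.2
```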
I also note that the paper shows that the effects are largest for schools that do a full ban; those that let phones remain on silent see smaller impacts. As the author points out, this is likely because a phone nearby is a constant distraction even when you ultimately ignore it. Silent mode was a little over half the sample (see Figure 2). So the statistics understate the effect size of a full ban.
This did not take phones away outside of school, so it is not a measure of full phone impact, only the marginal impact of phones in schools, and mostly only of making the phones go on silent.
Jay Van Bavel summarizes this way:
Jay Van Bavel, PhD: Banning #smartphones in over 400 schools led to:
- decreased psychological symptoms among girls by 29%
- decreased bullying by boys and girls by 43%
- increased GPA among girls by 0.08 SDs
Effects were larger for girls from low SES families
We should keep smartphones out of schools. This technology is a collective trap: users are compelled to use it, even if they hate it.
Most people would prefer a world without TikTok or Instagram: Nearly 60% of Instagram users wish the platform wasn't invented. [link is to post discussing a successful no-ban pilot school program, and various social media issues]
If you take the results at face value, despite many of the "bans" only being partial, don't you still have to ban phones?
Tyler did not see it that way. He followed up noting that the bans were often not so strict, but claiming that the strict bans had only modest effect relative to the less strict bans. I don't understand this interpretation of the data, or this perspective.
Consider the opposite situation. Suppose you were considering introducing a new device into schools, and it had all the opposite effects. People would consider you monstrous and insane for even raising the question.
Also I am happy to trust this kind of very straightforward anecdata:
John Arnold: Was walking through a random high school recently and was shocked by the number of kids with a phone in their lap playing games or scrolling and/or wearing headphones during a lesson. Made me very partial to âlock phones in pouchâ policies.
If kids constantly being on phones during class is not hurting academic achievement, then that tells you the whole âsend kids to schoolâ thing is unnecessary, and you should disband the whole thing.
That is my actual position. Either ban phones in schools, or ban the schools.
California Governor Newsom calls for all schools to go phone free.
Governor Newsom: The evidence is clear: reducing phone use in classrooms promotes concentration, academic success, and social and emotional development.
We're calling on California schools to act now to restrict smartphone use in classrooms. Let's do what's best for our youth.
…
In 2019, Governor Newsom signed AB 272 (Muratsuchi) into law, which grants school districts the authority to regulate the use of smartphones during school hours. Building on that legislation, he is currently working with the California Legislature to further limit student smartphone use on campuses. In June, the Governor announced efforts to restrict the use of smartphones during the school day.
…
Leveraging the tools of this law, I urge every school district to act now to restrict smartphone use on campus as we begin the new academic year. The evidence is clear: reducing phone use in class leads to improved concentration, better academic outcomes, and enhanced social interactions.
You know what I've never heard? Someone who actually observed teenage girls using social media, and thought "yep this seems fine, I've updated towards not banning this."
In a given week, 13% of Instagram users aged 13-15 said they had received unwanted sexual advances. 13% had seen "any violent, bloody or disturbing image," which tells me nothing disturbs our kids anymore, and 19% saw "nudity or sexual images" that they did not want to see.
Jon Haidt and Arturo Bejar demand that something (more) must be done. A lot is already being done to get the numbers this contained.
Arturo Bejar: My daughter and her friends, who were just 14, faced repeated unwanted sexual advances, misogynistic comments (comments on her body or ridiculing her interests because she is a woman), and harassment on these products. This has been profoundly distressing to them and to me.
Also distressing: the company did nothing to help my daughter or her friends. My daughter tried reporting unwanted contacts using the reporting tools, and when I asked her how many times she received help after asking, her response was, "Not once."
What would help look like? Would you even know if you were helped? Meta's own AI believes that she should damn well hear back, and this was a failure of the system.
I was curious to see this broken down by source, so I looked at the original survey from 2021, and there is data.
We also see that about half of those who asked for support felt at least âsomewhatâ supported. And that all problems including unwanted advances are similarly common for the 13-15 group as the other groups up to age 26, with males reporting unwanted advances more often than females in all age groups, whereas females got more unwanted comparisons.
This all happened back in 2021, before generative AI.
With Llama-3 and also vision models now available to Meta, it seems like we should be able to dramatically improve the situation. Many of these things have no reason to appear. So it seems fairly trivial to have an AI check to see if incoming messages or images from strangers contain some of the various things above, and if so then display a warning or silently censor the message, at least for underage users.
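As a sketch of what that flow could look like (the classifier call and label names here are hypothetical placeholders, not Meta's actual systems):

```python
# Hypothetical screening pass for messages sent by strangers to minors.
# `classify` stands in for a safety-tuned text/vision model; the label
# names are invented for illustration.
UNWANTED = {"sexual_advance", "nudity", "graphic_violence", "harassment"}

def screen_incoming(message, sender_is_stranger, recipient_is_minor, classify):
    """Return (deliver, notice): whether to deliver, plus any warning text."""
    if not (sender_is_stranger and recipient_is_minor):
        return True, None
    hits = UNWANTED & classify(message)   # classify returns a set of labels
    if not hits:
        return True, None
    # Hold the message behind a warning rather than delivering it silently.
    return False, "Hidden: flagged for " + ", ".join(sorted(hits))

# Toy stand-in classifier, just to show the call shape:
demo = {"hey": set(), "send pics": {"sexual_advance"}}
print(screen_incoming("send pics", True, True, lambda m: demo.get(m, set())))
```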
Things like political posts or "misinfo" are trickier. There are obvious issues with letting an LLM or even a person decide what counts here and making censorship decisions. But also there is a reason the post does not talk about those issues. They are not where most of the damage lies.
The general consensus continues to be that if you look at what kids, especially teenage girls, are actually doing with social media, you'll probably be horrified.
Zac Hill: After spending the weekend with a trio of normal, well-adjusted 14 y/o girls (courtesy of my goddaughter), never have I rolled harder for the "Ban Social Media For Teens Like Yesterday" posse.
Via this post by Jay Van Bavel, we are reminded of the "we would pay to get rid of social media, in particular TikTok and Instagram" result.
This graph is pretty weird, right? Why would using Instagram not correlate with wishing the app did not exist? Whereas TikTok's graph here makes sense (note that the dark blue bar is everyone, not only non-users, so if ~33% of Americans use TikTok then ~70% of non-users want it to not exist).
For Instagram, I suppose as a non-user I can be indifferent, whereas many users feel like they have to be on it?
For Maps, I assume almost everyone uses it, so the two samples are the same people?
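Backing out that ~70% non-user figure is just mixture arithmetic: the "everyone" bar blends users and non-users in proportion to their population shares. With hypothetical bar readings chosen only to illustrate:

```python
# everyone_rate = use_share * user_rate + (1 - use_share) * nonuser_rate
use_share = 0.33        # approx. fraction of Americans who use TikTok
everyone_rate = 0.55    # hypothetical reading of the dark blue "everyone" bar
user_rate = 0.25        # hypothetical reading of the users-only bar

nonuser_rate = (everyone_rate - use_share * user_rate) / (1 - use_share)
print(f"implied non-user rate: {nonuser_rate:.0%}")  # ~70% with these inputs
```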
There is a very big downside to limiting screen time.
Jawwwn: $PLTR co-founder Peter Thiel on screen time for kids:
"If you ask the executives in those companies how much screen time they let their kids use, there's probably an interesting critique one could make."
Andrew: What do you do?
Thiel: "An hour and a half a week."
Gallabytes: Absolutely insane to me to see hackers grow up and try to raise their kids in a way that’s incompatible with becoming hackers.
The hard problem is, how do you differentially get the screen time you want?
At some point yes you want to impose a hard cap, but if I noticed my children doing hacking things, writing programs, messing with hardware, or playing games in a way that involved deliberate practice, or otherwise making good use, I would be totally fine with that up to many hours per day. The things they would naturally do with the screens? Largely not so much.
SF Chronicle: Among girls 15 and younger, 45% of those from abusive and disturbed families use social media frequently, compared to just 28% from healthy families.
Younger girls who frequently use social media are less likely to attempt suicide or harm themselves than those who donât use social media.
âŠ
The CDC survey shows that 5 in 6 cyberbullied teens are also emotionally and violently abused at home by parents and grownups. Teenagers from abusive, troubled families are far more likely to be depressed and more likely to use social media than non-abused teens.
It is super confusing trying to tease out treatment effects versus selection effects in situations like this. There's a lot going on. The cyberbullying correlation pretty much has to be causal, because the effect size seems too big to be otherwise.
Bloomberg has an in depth look into the latest scammer tactic: Sextortion.
There is a subreddit with 32k members dedicated to helping victims.
The scam is simple, and is getting optimized as scammers exchange tips.
1. You pretend to be a hot girl and find a teenage boy with a social media account.
2. You message the teenage boy, express interest, and offer to trade nude pics.
3. The teenage boy sends nude pics.
4. You blackmail the boy, threatening to ruin his entire life.
5. If the boy threatens to kill himself, encourage that, for some reason?
Obviously any such story will attempt to be salacious and will select the worst cases.
It still seems highly plausible that this line of work attracts the worst of the worst. That a large portion of them are highly sadistic fs who revel in causing pain and suffering. Who are the types of people who would see suicide threats, actively drive the kid to suicide, and then message his girlfriend and other contacts to blackmail them in turn if they didn't want the truth about what happened getting out. Yeah.
This is in a very different category than the classic internet scams.
What to do about it, before or after it happens to you?
SwiftOnSecurity: PARENTS: You need to sit your kid down and tell them about sextortion. They are not going to know randos messaging them for sexting is a trap.
This is a really easy way for criminals onshore and overseas to make money. They convince you to link your real identity. There are suicides after ongoing threats to ruin their life after desperate attempts to pay. And they need to know if they fuck up they need to come to you.
…
Advice thread from a lawyer who deals with sextortion. DO NOT ENGAGE. Block. Go private. Keep blocking. Show them NO ENGAGEMENT, that no time spent harassing you will be worth it. They don't have a reputation to uphold. Time is money. Apparently they sometimes just give up.
Lane Haygood, Attorney: About once a week I have someone call me in a blind panic about to send hundreds or thousands of dollars to a scammer. My advice to them is always the same: PAY NOTHING. Nothing about paying guarantees the person on the other end will do what they say.
They will continue to extort you as long as you are willing to pay.
The best thing to do is immediately block them. If they message you from new profiles, block, block, block.
The next best thing to do is reach out to an attorney. My brilliant paralegal @KathrynTewson has a great document on cybersecurity we will be happy to provide you with to help ameliorate these things.
All of this strongly matches my intuition. Paying, or engaging at all, raises the expected returns to more blackmail. Nothing they told you or committed to changes that fact, and they are well known liars with no moral compass. No, they are not going to honor their word, in any sense. Meanwhile actually sending the pics gets them nothing. Block, ignore and hope it goes away is the only play on all levels.
He also claims that with the rise of deepfakes you can always run the Shaggy defense if the scammer actually does pull the trigger.
Or you could shrug, if you have perspective. This is not obviously that big a deal, although even if true that is hard for the victim to see.
In particular, one thing that I did not see in the article was talk about admission to college. Colleges will sometimes rescind or deny admission based on a social media post that offends or indicates ordinary kid behavior. Would they do that to a sextortion victim? The chances are not zero, but my guess is it would be rare, given that there is not a known-to-me example of this, and scammers would no doubt lean heavily on this threat if it was a common occurrence.
Childhood and Education #8: Dealing with the Internet Read More »
“We are making it simpler for new competitors to get consistent access to the spectrum they need.”
A Falcon 9 rocket lofts a Starlink mission on Dec. 30, the final SpaceX mission of 2024, completing the company’s 134th orbital launch. Credit: SpaceX
Welcome to Edition 7.25 of the Rocket Report! Happy New Year! It’s a shorter edition of the newsletter this week because most companies (not named Blue Origin, this holiday season) took things easier over the last 10 days. But after the break we’re back in the saddle for the new year, and eager to see what awaits us in the world of launch.
As always, we welcome reader submissions, and if you don't want to miss an issue, please subscribe using the box below. Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.
Avio lands atop list of European launch firms. You know it probably was not a great year for European rocket firms when the top-ranked company on the continent is Avio, which launched a grand total of two rockets in 2024. The Italian rocket firm earned this designation from European Spaceflight after successfully completing the final flight of the Vega rocket in September and returning the Vega C rocket to flight in December.
Three European launches in 2024 … The only other firm to launch a rocket on the list was ArianeGroup, which had a single launch last year. Granted, it was an important flight, the successful debut of the Ariane 6 rocket. Germany-based Isar Aerospace came in third place, followed by a company I had never heard of, Germany-based Bayern-Chemie. It builds solid-fuel upper stages for sounding rockets. It’s hard to disagree with too much on the list, although it certainly demonstrates that Europe could do with more companies launching rockets, and fewer only talking about it.
India launches space docking demonstration mission. The Indian Space Research Organization launched a space docking experiment on a PSLV rocket at the end of the year, NASASpaceflight.com reports. This SpaDeX missionâyes, the name is a little confusingâwill demonstrate the capability to rendezvous, dock, and undock in orbit. This technology is important for the country’s human spaceflight plans as well as future missions to the Moon.
Target and chaser … The SpaDeX experiment will be conducted around 10 days following launch, when the two satellites, the SDX01 "Chaser" and the SDX02 "Target," will be released with a small relative velocity between them. The pair will drift apart for around a day until they are separated by a distance of around 10 to 15 km. Once this is achieved, Target will eliminate the velocity difference between itself and Chaser using its propulsion system.
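Those numbers imply a remarkably gentle separation. A quick check (my arithmetic, not an ISRO figure):

```python
# ~10-15 km of separation accumulated over roughly one day of drift.
day_s = 24 * 3600
for sep_km in (10, 15):
    v_cm_s = sep_km * 1000 / day_s * 100
    print(f"{sep_km} km per day -> {v_cm_s:.1f} cm/s relative velocity")
# About 12-17 cm/s: the release imparts only a whisper of relative motion.
```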
HyPrSpace conducts hot-fire test. French launch services startup HyPrSpace has completed the first test of its second hot fire test campaign for its subscale Terminator stage demonstrator, European Spaceflight reports. HyPrSpace is developing a two-stage launch vehicle called Orbital Baguette One (OB-1) that will be capable of delivering up to 250 kilograms to low Earth orbit.
Like a finely baked bread … In July, the company completed an initial hot fire test campaign of Terminator, an eight-tonne demonstrator of a hybrid rocket stage. Over the course of this first test campaign, HyPrSpace completed a total of four hot fire tests. HyPrSpace CEO Alexandre Mangeot said the company achieved an average engine efficiency of 94 percent during the latest test. Mangeot added that this represented the “propulsive performance we need for our orbital launcher.”
A new annual record for orbital launches. The world set another record for orbital launches in 2024 in a continuing surge of launch activity driven almost entirely by SpaceX, Space News reports. There were 259 orbital launch attempts in 2024, a 17 percent increase from the previous record of 221 orbital launch attempts in 2023. That figure does not include suborbital launches, such as four SpaceX Starship/Super Heavy test flights or two launches of the HASTE suborbital variant of Rocket Lab's Electron.
SpaceX v. world … That increase in overall launches matches the increase by SpaceX alone, which performed 134 Falcon 9 and Falcon Heavy launches in 2024, up from 96 in 2023. The company performed more orbital launches than the rest of the world combined. China performed 68 launches in 2024, breaking a record of 67 launches set in 2023. Russia performed 17 launches, followed by Japan (7), India (5), Iran (4), Europe (3) and North Korea (1).
Russian family of rockets reaches 2,000th launch. The Russian space program reached a significant milestone over the holidays with the 2,000th launch of a rocket from the "R-7" family of boosters. The launch took place on Christmas Day, when an R-7 rocket lifted off from the Baikonur Cosmodrome carrying a remote-sensing satellite, Ars reports. This family of rockets has an incredible heritage dating back nearly seven decades. The first R-7 vehicle was designed by the legendary Soviet rocket scientist Sergei Korolev. It flew in 1957 and was the world's first intercontinental ballistic missile.
Good and bad news … Although it's certainly worth commemorating the 2,000th launch of the R-7 family of rockets, the fleet's longevity also offers a cautionary tale. In many respects, the Russian space program continues to coast on the legacy of Korolev and the Soviet space feats of the 1950s and 1960s. That Russia has not developed a more cost-competitive and efficient booster in nearly seven decades reveals the truth about its space program: It lacks innovation at a time when the rest of the space industry is rapidly sprinting toward reusability.
Overview of Chinese launch plans for 2025. New Long March rockets and commercially developed launch vehicles are expected to have their first flights in 2025, boosting China's overall launch capabilities, Space News reports. The launchers will compete for contracts to launch satellites for China's megaconstellation projects (Thousand Sails and Guowang), space station cargo missions, and commercial and other contracts, helping to boost the country's overall access to space and launch rate in the coming years.
Many new faces on the launch pad … Among the highlights for the coming year is the Long March 8A rocket, a variant of the existing Long March 8, but with a larger, more powerful second stage, boosting payload capacity to a 700-kilometer Sun-synchronous orbit from 5,000 kilograms to 7,000 kg. It is likely to be a workhorse for megaconstellation launches. The Long March 12A rocket could undergo vertical takeoff and landing tests. And the privately developed Zhuque-3 rocket could make its first orbital launch this year.
To deal with more launches, FCC adds spectrum. The Federal Communications Commission has formally allocated additional spectrum for launch applications, fulfilling a provision in a bill passed earlier this year, Space News reports. On December 31, the FCC published a report and order that allocated spectrum between 2360 and 2395 megahertz for use in communications to and from commercial launch and reentry vehicles on a secondary basis. That band currently has a primary use for aircraft and missile testing communications.
Keep rockets talking to the ground … Both the FCC and launch companies have said the additional spectrum was needed to accommodate growth in launch activities. "By identifying more bandwidth for vital links to launch vehicles, we are making it simpler for new competitors to get consistent access to the spectrum they need," Jessica Rosenworcel, chairwoman of the FCC, said in a December 19 statement calling for approval of the then-proposed report and order.
New Glenn completes static fire test. On Friday, December 27, Blue Origin successfully ignited the seven main engines on its massive New Glenn rocket for the first time, Ars reports. Blue Origin said it fired the vehicle’s engines for a duration of 24 seconds. They fired at full thrust for 13 of those seconds. Additionally, several hours before the test firing, the Federal Aviation Administration said it had issued a launch license for the rocket.
New Glenn wen? … These two milestones set up a long-anticipated launch of the New Glenn rocket in January. Although the company has yet to announce a date publicly, sources indicate that Blue Origin is working toward a launch time of no earlier than 1 am ET (06:00 UTC) on Monday, January 6, from Cape Canaveral Space Force Station in Florida, though it could slip a few days. If all goes well with the debut flight of the vehicle, Blue Origin will also attempt to recover the first stage of the rocket on a drone ship down range in the Atlantic Ocean. (submitted by Jay5000001)
Jan. 4: Falcon 9 | Thuraya 4-NGS | Cape Canaveral Space Force Station, Florida | 01:27 UTC
Jan. 6: New Glenn | Blue Ring pathfinder | Cape Canaveral Space Force Station, Florida | 06:00 UTC
Jan. 6: Falcon 9 | Starlink 12-11 | Kennedy Space Center, Florida | 16:19 UTC
Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.
Rocket Report: Avio named top European launch firm; New Glenn may launch soon Read More »
There’s not enough melted material near the surface to trigger a massive eruption.
It’s difficult to comprehend what 1,000 cubic kilometers of rock would look like. It’s even more difficult to imagine it being violently flung into the air. Yet the Yellowstone volcanic system blasted more than twice that amount of rock into the sky about 2 million years ago, and it has generated a number of massive (if somewhat smaller) eruptions since, and there have been even larger eruptions deeper in the past.
All of which might be enough to keep someone nervously watching the seismometers scattered throughout the area. But a new study suggests that there’s nothing to worry about in the near future: There’s not enough molten material pooled in one place to trigger the sort of violent eruptions that have caused massive disruptions in the past. The study also suggests that the primary focus of activity may be shifting outside of the caldera formed by past eruptions.
Yellowstone is fueled by what’s known as a hotspot, where molten material from the Earth’s mantle percolates up through the crust. The rock that comes up through the crust is typically basaltic (a definition based on the ratio of elements in its composition) and can erupt directly. This tends to produce relatively gentle eruptions where lava flows across a broad area, generally like you see in Hawaii and Iceland. But this hot material can also melt rock within the crust, producing a material called rhyolite. This is a much more viscous material that does not flow very readily and, instead, can cause explosive eruptions.
The risk at Yellowstone comes from rhyolitic eruptions. But it can be difficult to tell the two types of molten material apart, at least while they're several kilometers below the surface. Various efforts have been made over the years to track the molten material below Yellowstone, but differences in resolution and focus have left many unanswered questions.
Part of the problem is that a lot of this data came from studies of seismic waves traveling through the region. Their travel is influenced by various factors, including the composition of the material they're traveling through, its temperature, and whether it's a liquid or solid. In a lot of cases, this leaves several potential solutions consistent with the seismic data: you can potentially see the same behavior from different materials at different temperatures.
To get around this issue, the new research measured the conductivity of the rock, which can change by as much as three orders of magnitude when transitioning from a solid to a molten phase. The overall conductivity we measure also increases as more of the molten material is connected into a single reservoir rather than being dispersed into individual pockets.
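To see why conductivity is such a sensitive probe of melt, consider an Archie-style mixing law; the specific numbers below are illustrative assumptions, not values from the paper. Once melt pockets connect, even a modest melt fraction dominates the bulk value:

```python
# Bulk conductivity of partially molten rock under Archie-style scaling.
# Illustrative values: silicate melt ~1 S/m, dry solid rock ~1e-4 S/m.
def bulk_conductivity(melt_frac, sigma_melt=1.0, sigma_solid=1e-4, m=1.5):
    return max(sigma_solid, sigma_melt * melt_frac ** m)

for phi in (0.0, 0.02, 0.10, 0.30):
    print(f"melt fraction {phi:.0%}: ~{bulk_conductivity(phi):.1e} S/m")
# The sweep spans roughly three orders of magnitude, which is what lets the
# survey separate connected melt from merely hot solid rock.
```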
This sort of “magnetotelluric” data has been obtained in the past but at a relatively low resolution. For the new study, a dense array of sensors was placed in the Yellowstone caldera and many surrounding areas to the north and east. (You can compare the previous and new recording sites as black and red triangles on this map.)
That has allowed the research team to build a three-dimensional map of the molten material underneath Yellowstone and to determine the fraction of the material in a given area that’s molten. The team finds that there are two major sources of molten material that extend up from the mantle-crust boundary at about 50 kilometers below the surface. These extend upward separately but merge about 20 kilometers below the surface.
Underneath Yellowstone: Two large lobes of hot material from the mantle (in yellow) melt rock closer to the surface (orange), creating pools of hot material (red and orange) that power hydrothermal systems and past eruptions, and may be the sites of future activity. Credit: Bennington, et al.
While they collectively contain a lot of molten basaltic material (between 4,000 and 6,500 cubic kilometers of it), it’s not very concentrated. Instead, this is mostly relatively small volumes of molten material traveling through cracks and faults in solid rock. This keeps the concentration of molten material below that needed to enable eruptions.
After the two streams of basaltic material merge, they form a reservoir that includes a significant amount of melted crustal materialâmeaning rhyolitic. The amount of rhyolitic material here is, at most, under 500 cubic kilometers, so it could fuel a major eruption, albeit a small one by historic Yellowstone standards. But again, the fraction of melted material in this volume of rock is relatively low and not considered likely to enable eruptions.
From there to the surface, there are several distinct features. Relative to the hotspot, the North American plate above is moving to the west, which has historically meant that the site of eruptions has moved from west to east across the continent. Accordingly, there is a pool off to the west of the bulk of near-surface molten material that no longer seems to be connected to the rest of the system. It’s small, at only about 100 cubic kilometers of material, and is too diffused to enable a large eruption.
There’s a similar near-surface blob of molten material that may not currently be connected to the rest of the molten material to the south of that. It’s even smaller, likely less than 50 cubic kilometers of material. But it sits just below a large blob of molten basalt, so it is likely to be receiving a fair amount of heat input. This site seems to have also fueled the most recent large eruption in the caldera. So, while it can’t fuel a large eruption today, it’s not possible to rule the site out for the future.
Two other near-surface areas containing molten material appear to power two of the major sites of hydrothermal activity, the Norris Geyser Basin and Hot Springs Basin. These are on the northern and eastern edges of the caldera, respectively. The one to the east contains a small amount of material that isn’t concentrated enough to trigger eruptions.
But the site to the northeast contains the largest volume of rhyolitic material, with up to nearly 500 cubic kilometers. It’s also one of only two regions with a direct connection to the molten material moving up through the crust. So, while it’s not currently poised to erupt, this appears to be the most likely area to trigger a major eruption in the future.
In summary, while there’s a lot of molten material near the current caldera, all of it is spread too diffusely within the solid rock to enable it to trigger a major eruption. Significant changes will need to take place before we see the site cover much of North America with ash again. Beyond that, the image is consistent with our big-picture view of the Yellowstone hotspot, which has left a trail of eruptions across western North America, driven by the movement of the North American plate.
That movement has now left one pool of molten material on the west of the caldera disconnected from any heat sources, which will likely allow it to cool. Meanwhile, the largest pool of near-surface molten rock is east of the caldera, which may ultimately drive a transition of explosive eruptions outside the present caldera.
Nature, 2025. DOI: 10.1038/s41586-024-08286-z  (About DOIs).
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
One less thing to worry about in 2025: Yellowstone probably won't go boom Read More »