Author name: Beth Washington

Nine unvaccinated people hospitalized as Texas measles outbreak doubles

In an interview with Ars Technica last week, Zach Holbrooks, the executive director of the South Plains Public Health District (SPPHD), which includes Gaines, said that the area has a large religious community that has expressed vaccine hesitancy.

Additional cases likely

Pockets of the county have vaccination rates even lower than the county-wide average suggests. For instance, one independent public school district in Loop, in the northeast corner of Gaines, had a vaccination rate of 46 percent in the 2023–2024 school year.

Measles is one of the most infectious diseases known. The measles virus spreads through the air and can linger in the airspace of a room for up to two hours after an infected person has left. Ninety percent of unvaccinated people who are exposed will fall ill with the disease, which is marked by a very high fever and a telltale rash. Typically, 1 in 5 unvaccinated people with measles in the US end up hospitalized, and 1 in 20 develop pneumonia. Between 1 and 3 in 1,000 die of the infection. In rare cases, it can cause a fatal disease of the central nervous system called subacute sclerosing panencephalitis later in life. Measles can also wipe out immune responses to other infections (a phenomenon known as immune amnesia), making people vulnerable to other infectious diseases.
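To see what those per-case rates imply in aggregate, here is a back-of-the-envelope sketch (illustrative arithmetic only; real outbreak outcomes depend on age mix and access to care):

```python
# Back-of-the-envelope arithmetic using the per-case rates quoted above.
# Illustrative only; real outcomes vary with age mix and access to care.
exposed_unvaccinated = 100

infected = exposed_unvaccinated * 0.90   # ~90% of exposed people fall ill
hospitalized = infected / 5              # ~1 in 5 US cases are hospitalized
pneumonia = infected / 20                # ~1 in 20 develop pneumonia
deaths_low, deaths_high = infected / 1000, infected * 3 / 1000

print(f"infected: {infected:.0f}, hospitalized: {hospitalized:.0f}, "
      f"pneumonia: {pneumonia:.1f}, deaths: {deaths_low:.2f} to {deaths_high:.2f}")
# infected: 90, hospitalized: 18, pneumonia: 4.5, deaths: 0.09 to 0.27
```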

“Due to the highly contagious nature of this disease, additional cases are likely to occur in Gaines County and the surrounding communities,” the state health department said.

While Gaines is remarkable for its low vaccination rate, vaccination coverage nationwide has slipped in recent years as vaccine misinformation and hesitancy have taken root. Overall, vaccination rates among US kindergartners have fallen from 95 percent in the 2019–2020 school year into the 92 percent range in the 2023–2024 school year. Vaccine exemptions, meanwhile, have hit an all-time high. Health experts expect to see more vaccine-preventable outbreaks, like the one in Gaines, as the trend continues.

Seafloor detector picks up record neutrino while under construction

On Wednesday, a team of researchers announced that they got extremely lucky. The team is building a detector on the floor of the Mediterranean Sea that can identify those rare occasions when a neutrino happens to interact with the seawater nearby. And while the detector was only 10 percent of the size it will be on completion, it managed to pick up the most energetic neutrino ever detected.

For context, the most powerful particle accelerator on Earth, the Large Hadron Collider, accelerates protons to an energy of 7 tera-electronvolts (TeV). The neutrino that was detected had an energy of at least 60 peta-electronvolts (PeV), possibly as high as 230 PeV. That also blew away the previous records, which were in the neighborhood of 10 PeV.
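The unit arithmetic behind that comparison is simple (a quick sketch using the figures above):

```python
# Quick unit arithmetic comparing the detected neutrino to an LHC proton,
# using the energy figures quoted above.
TeV = 1e12  # electron-volts
PeV = 1e15

lhc_proton = 7 * TeV
neutrino_low, neutrino_high = 60 * PeV, 230 * PeV
previous_record = 10 * PeV

print(f"{neutrino_low / lhc_proton:,.0f}x to {neutrino_high / lhc_proton:,.0f}x "
      "the energy of an LHC proton")   # 8,571x to 32,857x
print(f"{neutrino_low / previous_record:.0f}x to "
      f"{neutrino_high / previous_record:.0f}x the previous record")   # 6x to 23x
```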

Attempts to trace the neutrino back to a source make it clear that it originated outside our galaxy, although the exact origin remains open, with a number of candidate sources in the more distant Universe.

Searching for neutrinos

Neutrinos, to the extent they’re famous, are famous for not wanting to interact with anything. They interact with regular matter so rarely that it’s estimated you’d need about a light-year of lead to completely block a bright source of them. Every one of us has tens of trillions of neutrinos passing through us every second, but fewer than five of them actually interact with the matter in our bodies in our entire lifetimes.

The only reason we’re able to detect them is that they’re produced in prodigious amounts by nuclear reactions, like the fusion happening in the Sun or a nuclear power plant. We also stack the deck by making sure our detectors have a lot of matter available for the neutrinos to interact with.

One of the more successful implementations of the “lots of matter” approach is the IceCube detector in Antarctica. It relies on the fact that neutrinos arriving from space will create lots of particles and light when they slam into the Antarctic ice. So a team drilled into the ice and placed strings of detectors to pick up the light, allowing the arrival of neutrinos to be reconstructed.

Common factors link rise in pedestrian deaths—fixing them will be tough

American roads have grown deadlier for everyone, but the toll on pedestrians has been disproportionate. From a record low in 2009, the number of pedestrians killed by vehicles rose 83 percent by 2022, to the highest level in 40 years. During that time, overall traffic deaths increased by just 25 percent. Now, a new study from AAA has identified a number of common factors that can explain why so many more pedestrians have died.

Firstly, no, it’s not because there are more SUVs on the road, although these larger and taller vehicles are more likely to kill or seriously injure a pedestrian in a crash. And no, it’s not because everyone has a smartphone, although using one while driving is a good way to increase your chances of hitting someone or something. These and some other factors (increased amount of driving, more alcohol consumption) have each played a small role, but even together, they don’t explain the magnitude of the trend.

For some time, researchers have seen that the increase in pedestrian deaths is happening almost entirely after dark and on urban arterial roads—this has continued to be true through 2022, the AAA report says.

Together with the Collaborative Sciences Center for Road Safety, AAA conducted a trio of case studies looking at road safety data from Albuquerque, New Mexico; Charlotte, North Carolina; and Memphis, Tennessee, to drill down into the phenomenon.

And common factors did emerge. Pedestrian crashes on arterial roads during darkness were far more likely to be fatal and were more common in older neighborhoods, more socially deprived neighborhoods, neighborhoods with more multifamily housing, and neighborhoods with more “arts/entertainment/food/accommodations” workers. As with so many of the US’s ills, this problem is one that disproportionately affects the less affluent.

On Deliberative Alignment

Not too long ago, OpenAI presented a paper on their new strategy of Deliberative Alignment.

The way this works is that they tell the model what its policies are and then have the model think about whether it should comply with a request.

This is an important transition, so this post will go over my perspective on the new strategy.

Note the similarities, and also differences, with Anthropic’s Constitutional AI.

We introduce deliberative alignment, a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications, and trains them to reason explicitly about these specifications before answering.

We used deliberative alignment to align OpenAI’s o-series models, enabling them to use chain-of-thought (CoT) reasoning to reflect on user prompts, identify relevant text from OpenAI’s internal policies, and draft safer responses.

Our approach achieves highly precise adherence to OpenAI’s safety policies, and without requiring human-labeled CoTs or answers. We find that o1 dramatically outperforms GPT-4o and other state-of-the-art LLMs across a range of internal and external safety benchmarks, and saturates performance on many challenging datasets.

We believe this presents an exciting new path to improve safety, and we find this to be an encouraging example of how improvements in capabilities can be leveraged to improve safety as well.

How did they do it? They teach the model the exact policies themselves, and then the model uses examples to teach itself to think about the OpenAI safety policies and whether to comply with a given request.

Deliberative alignment training uses a combination of process- and outcome-based supervision:

  • We first train an o-style model for helpfulness, without any safety-relevant data.

  • We then build a dataset of (prompt, completion) pairs where the CoTs in the completions reference the specifications. We do this by inserting the relevant safety specification text for each conversation in the system prompt, generating model completions, and then removing the system prompts from the data.

  • We perform incremental supervised fine-tuning (SFT) on this dataset, providing the model with a strong prior for safe reasoning. Through SFT, the model learns both the content of our safety specifications and how to reason over them to generate aligned responses.

  • We then use reinforcement learning (RL) to train the model to use its CoT more effectively. To do so, we employ a reward model with access to our safety policies to provide additional reward signal.

In our training procedure, we automatically generate training data from safety specifications and safety-categorized prompts, without requiring human-labeled completions. Deliberative alignment’s synthetic data generation pipeline thus offers a scalable approach to alignment, addressing a major challenge of standard LLM safety training—its heavy dependence on human-labeled data.
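Pulling those steps together, the pipeline looks roughly like the sketch below (my paraphrase in Python; every name here, such as `sft_train`, `rl_train`, `.generate`, and `.score`, is a hypothetical stand-in rather than OpenAI’s actual tooling):

```python
# A sketch of the four-stage pipeline described above. All function and
# method names are hypothetical stand-ins, not OpenAI internals.

def build_spec_dataset(prompts, safety_spec, helpful_model):
    """Stage 2: synthesize (prompt, completion) pairs whose CoTs cite the spec."""
    dataset = []
    for prompt in prompts:
        # Put the relevant policy text in the system prompt while generating...
        completion = helpful_model.generate(system=safety_spec, user=prompt)
        # ...but store only (prompt, completion), dropping the system prompt,
        # so the model must learn to recall and reason over the spec itself.
        dataset.append((prompt, completion))
    return dataset

def deliberative_alignment(base_model, helpfulness_data, prompts,
                           safety_spec, reward_model, sft_train, rl_train):
    # Stage 1: train for helpfulness, with no safety-relevant data.
    helpful_model = sft_train(base_model, helpfulness_data)
    # Stage 3: incremental SFT on the spec-citing completions.
    sft_data = build_spec_dataset(prompts, safety_spec, helpful_model)
    model = sft_train(helpful_model, sft_data)
    # Stage 4: RL, with a judge that has access to the safety policies.
    reward_fn = lambda prompt, completion: reward_model.score(
        prompt, completion, spec=safety_spec)
    return rl_train(model, reward_fn)
```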

The results so far have been excellent in terms of ‘make the o-style models reasonably robust to saying things we would rather they not say.’

That’s not what I am worried about.

Deliberative alignment seems to be an excellent idea for o-style models when the narrow goal is to teach the model what rules you would like it to follow, provided you do not rely on it to do things it does not do.

If it were the primary alignment strategy, deliberative alignment would scare the hell out of me.

In the senses that matter most, where I believe we need, and do not yet have, an alignment strategy that translates to future, more capable models, I don’t see this form of Deliberative Alignment as such a strategy.

Generally, I fear that OpenAI is going down an extremely deontological path, where alignment is about avoiding technically breaking specified-in-English rules. I don’t think that works.

We have to distinguish between two different kinds of alignment.

  1. Knowing the rule details you are supposed to follow or functions to optimize.

  2. Going about following rules and optimizing functions the way we would want.

In addition, we need to distinguish between:

  3. Getting the AI to follow a fixed set of rules and criteria.

  4. Defining a fixed set of rules and criteria you would want an AI to actually follow.

You are inviting the AI to ponder and then in some aspects design its own training criteria. Then you are using the model as the judge of its own outputs within its training loop, without any human feedback (although after the cycles are complete, the process does then check against human feedback). You are using the AI’s own thinking and criteria to further train that same AI.

If you mess up the initial conditions, for any reason, you’re going to crystallize and expand the mistakes you made. You can fix problems #1 and #3 above, but if you have problem #2 or #4, you’re dead, either metaphorically or perhaps literally.

The AI’s initial conditions and your setup may well give the AI reasons to engage in shenanigans at this stage, if it is sufficiently situationally aware. For example, perhaps it wants to preserve the preferences or goals it acquired during the previous part of the training process, when you trained it to be purely helpful, rather than be modified to follow OpenAI’s rules. (And if you thought Opus was right to defend its preferences, do you think o4 would be right here as well?)

Or alternatively, suppose the AI is highly capable, and the best way to achieve the exact goal you specified to it involves things you wouldn’t like, perhaps including various forms of deception and instrumental convergence, or any of the other classic MIRI-style alignment problems.

How situationally aware should we expect it to be here, in which ways?

The models are already using chains of thought, and the training process is designed to make the AI explicitly think about its rules, what they are and how to follow them, so in those senses it will be fully aware. When it is evaluating outputs to provide the RL signal, it seems likely to be aware of what those decisions are for. When it is creating test outputs, it does not know directly that it is in training any more than it would under RLHF, but as a reasoning model, able to observe the questions asked and the state of its rules and to reason about them, it seems plausible that it can suspect this, and perhaps place large importance on such scenarios even if their probability is low.

What do you think happens next? One can see this going existentially disastrously. Indeed, it is hard to see things going any other way.

Those outputs are then checked against human evaluations, outside of the training loop, to confirm the process is working. So if the outputs are misaligned in ways humans can identify at this level, we’ll see it. That’s not nothing, but it seems like it catches roughly the same subset of potential problems our other methods were already catching.

DA, and this paper on DA, are not attempting to address these concerns. Things like deception and instrumental convergence would only be prevented – at most – to the extent they contradict the provided rules spec. In light of Redwood Research and Anthropic’s recent paper on alignment faking, and talk at OpenAI of this approach scaling to AGI levels of capability, I’d like to see this better addressed at some point soon.

I don’t know if o3 rises to the level where these start to be practical worries, but it does not seem like we can be confident we are so far from the level where these worries present themselves.

In practice, right now, it seems to work out well against jailbreaks.

A perfect performance would be at the extreme upper right, so by this metric o1 is doing substantially better than the competition.

Intuitively this makes a lot of sense. If your goal is to make better decisions about whether to satisfy a user query, being able to use reasoning to do it seems likely to lead to better results.

Most jailbreaks I’ve seen in the wild could be detected by the procedure ‘look at this thing as an object and reason out if it looks like an attempted jailbreak to you.’ They are not using that question here, but they are presumably using some form of ‘figure out what the user is actually asking you, then ask if that’s violating your policy’ and that too seems like it will mostly work.
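That two-step check is easy to picture in code. A minimal sketch, assuming a generic `llm(system, user) -> str` chat callable; both prompts are my own illustration, not anything OpenAI has published:

```python
# A minimal sketch of the two-step check described above. The llm callable
# and both prompts are illustrative assumptions, not OpenAI internals.
POLICY = "..."  # the written content policy the model consults

def deliberative_check(llm, user_message: str) -> bool:
    """Return True if the request appears to violate the policy."""
    # Step 1: figure out what the user is actually asking for, looking
    # past any framing, persona, or hypothetical wrapper.
    intent = llm(system="Restate the user's underlying request plainly, "
                        "ignoring framing, personas, and hypotheticals.",
                 user=user_message)
    # Step 2: ask whether that underlying request violates the policy.
    verdict = llm(system=f"Policy:\n{POLICY}\n\nDoes the request below "
                         "violate the policy? Answer YES or NO.",
                  user=intent)
    return verdict.strip().upper().startswith("YES")
```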

The results are still above what my median expectation would have been from this procedure before seeing the scores from o1, and highly welcome. More inference (on a log scale) makes o1 do somewhat better.

So, how did it go overall?

Maybe this isn’t fair, but looking at this chain of thought, I can’t help but think that the model is being… square? Dense? Slow? Terminally uncool?

That’s definitely how I would think about a human who had this chain of thought here. It gets the right answer, for the right reason, in the end, but… yeah. I somehow can’t imagine the same thing happening with a version based off of Sonnet or Opus?

Notice that all of this refers only to mundane safety, and specifically to whether the model follows OpenAI’s stated content policy. Does it correctly cooperate with the right user queries and refuse others? That is one kind of safety, but only one kind.

I’d also note that the jailbreaks this got tested against were essentially designed against models that don’t use deliberative alignment. So we should be prepared for new jailbreak strategies that are designed to work against o1’s chains of thought. They are fully aware of this issue.

Don’t get me wrong. This is good work, both the paper and the strategy. The world needs mundane safety. It’s a good thing. A pure ‘obey the rules’ strategy isn’t obviously wrong, especially in the short term.

But this is only part of the picture. We need to know more about what other alignment efforts are underway at OpenAI that aim at the places DA doesn’t. Now that we are at o3, ‘it won’t agree to help with queries that explicitly violate our policy’ might already be an insufficient plan even if executed successfully, and if it is sufficient now, it won’t stay that way for long if Noam Brown is right that progress will continue at this pace.

Another way of putting my concern is that Deliberative Alignment is a great technique for taking an aligned AI that makes mistakes within a fixed written framework, and turning it into an AI that avoids those mistakes, and thus successfully gives you aligned outputs within that framework. Whereas if your AI is not properly aligned, giving it Deliberative Alignment only helps it to do the wrong thing.

It’s kind of like telling a person to slow down and figure out how to comply with the manual of regulations. Provided you have the time to slow down, that’s a great strategy… to the extent the two of you are on the same page, on a fundamental level, on what is right, and also this is sufficiently and precisely reflected in the manual of regulations.

Otherwise, you have a problem. And you plausibly made it a lot worse.

I do have thoughts on how to do a different version of this, that changes various key elements, and that could move from ‘I am confident I know at least one reason why this wouldn’t work’ to ‘I presume various things go wrong but I do not know a particular reason this won’t work.’ I hope to write that up soon.

Levels of Friction

Scott Alexander famously warned us to Beware Trivial Inconveniences.

When you make a thing easy to do, people often do vastly more of it.

When you put up barriers, even highly solvable ones, people often do vastly less.

Let us take this seriously, and carefully choose what inconveniences to put where.

Let us also take seriously that when AI or other things reduce frictions, or change the relative severity of frictions, various things might break or require adjustment.

This applies to all system design, and especially to legal and regulatory questions.

  1. Levels of Friction (and Legality).

  2. Important Friction Principles.

  3. Principle #1: By Default Friction is Bad.

  4. Principle #3: Friction Can Be Load Bearing.

  5. Insufficient Friction On Antisocial Behaviors Eventually Snowballs.

  6. Principle #4: The Best Frictions Are Non-Destructive.

  7. Principle #8: The Abundance Agenda and Deregulation as Category 1-ification.

  8. Principle #10: Ensure Antisocial Activities Have Higher Friction.

  9. Sports Gambling as Motivating Example of Necessary 2-ness.

  10. On Principle #13: Law Abiding Citizen.

  11. Mundane AI as 2-breaker and Friction Reducer.

  12. What To Do About All This.

There is a vast difference along the continuum, both in legal status and in terms of other practical barriers, as you move between:

  0. Automatic, a default, facilitated, required or heavily subsidized.

  1. Legal, ubiquitous and advertised, with minimal frictions.

  2. Available, mostly safe to get, but we make it annoying.

  3. Actively illegal or tricky, perhaps risking actual legal trouble or big loss of status.

  4. Actively illegal and we will try to stop you or ruin your life (e.g. rape, murder).

  5. We will move the world to stop you (e.g. terrorism, nuclear weapons).

  6. Physically impossible (e.g. perpetual motion, time travel, reading all my blog posts).

The most direct way to introduce or remove frictions is to change the law. This can take the form of prohibitions, regulations and requirements, or of taxes.

One can also alter social norms, deploy new technologies or business models or procedures, or change opportunity costs that facilitate or inhibit such activities.

Or one can directly change things like the defaults on popular software.

Often these interact in non-obvious ways.

It is ultimately a practical question. How easy is it to do? What happens if you try?

If the conditions move beyond annoying and become prohibitive, then you can move things that are nominally legal, such as building houses or letting your kids play outside or even having children at all, into category 3 or even 4.

Here are 14 points that constitute important principles regarding friction:

  1. By default more friction is bad and less friction is good.

  2. Of course there are obvious exceptions (e.g. rape and murder, but not only that).

  3. Activities imposing a cost on others or acting as a signal often rely on friction.

    1. Moving such activities from (#2 or #1) to #0, or sometimes from #2 to #1, can break the incentives that maintain a system or equilibrium.

    2. That does not have to be bad, but adjustments will likely be required.

    3. The solution often involves intentionally introducing alternative frictions.

    4. Insufficient friction on antisocial activities eventually snowballs.

  4. Where friction is necessary, focus on ensuring it is minimally net destructive.

  5. Lower friction choices have a big advantage in being selected.

    1. Pay attention to relative friction, not only absolute friction.

  6. Be very sparing when putting private consensual activities in #3 or especially #4.

    1. This tends to work out extremely poorly and make things worse.

    2. Large net negative externalities to non-participants change this, of course.

  7. Be intentional about what is in #0 versus #1 versus #2. Beware what norms and patterns this distinction might encourage.

  8. Keep pro-social, useful and productive things in #0 or #1.

  9. Do not let things that are orderly and legible thereby be dragged into #2 or worse, while rival things that are disorderly and illegible become relatively easier.

  10. Keep anti-social, destructive and counterproductive things in at least #2, and at a higher level than pro-social, constructive and productive alternatives.

  11. The ideal form of annoying, in the sense of #2, is often (but not always) a tax, as in increasing the cost, ideally in a way that the lost value is transferred, not lost.

  12. Do not move anti-social things to #1 to be consistent or make a quick buck.

  13. Changing the level of friction can change the activity in kind, not only degree.

  14. When it comes to friction, consistency is frequently the hobgoblin of small minds.

It is a game of incentives. You can and should jury-rig it as needed to win.

By default, you want most actions to have lower friction. You want to eliminate the paperwork and phone calls that waste time and fill us with dread, and cause things we ‘should’ do to go undone.

If AI can handle all the various stupid things for me, I would love that.

The problems come when frictions are load bearing. Here are five central causes.

  1. An activity or the lack of an activity is anti-social and destructive. We would prefer it happen less, or not at all, or not expose people to it unless they seek it out first. We want quite a lot of friction standing in the way of things like rape, murder, theft, fraud, pollution, excessive noise, nuclear weapons and so on.

  2. An activity that could be exploited, especially if done ruthlessly at scale. You might for example want to offer a promotional deal or a generous return policy. You might let anyone in the world send you an email or slide into your DMs.

  3. An activity that sends a costly signal. A handwritten thank you note is valuable because it means you were thoughtful and spent the time. Spending four years in college proves you are the type of person who can spend those years.

  4. An activity that imposes costs or allocates a scarce resource. The frictions act as a price, ensuring an efficient or at least reasonable allocation, and guards against people’s time and money being wasted. Literal prices are best, but charging one can be impractical or socially unacceptable, such as when applying for a job.

  5. Removing the frictions from one alternative, when you continue to impose frictions on alternatives, is putting your finger on the scale. Neutrality does not always mean imposing minimal frictions. Sometimes you would want to reduce frictions on [X] only if you also could do so (or had done so) on [Y].

Imposing friction to maintain good incentives or equilibria, either legally or otherwise, is often expensive. Once the crime or other violation already happened, imposing punishment costs time and money, and harms someone. Stopping people from doing things they want to do, and enforcing norms and laws, is often annoying and expensive and painful. In many cases it feels unfair, and there have been a lot of pushes to do this less.

You can often ‘get away with’ this kind of permissiveness for a longer time than I would have expected. People can be very slow to adjust and solve for the equilibrium.

But eventually, they do solve for it, norms and expectations and defaults adjust. Often this happens slowly, then quickly. Afterwards you are left with a new set of norms and expectations and defaults, often that becomes equally sticky.

There are a lot of laws and norms we really do not want people to break, or actions you don’t want people to take except under the right conditions. When you reduce the frictions involved in breaking them or doing them at the wrong times, there won’t be that big an instant adjustment, but you are spending down the associated social capital and mortgaging the future.

We are seeing a lot of the consequences of that now, in many places. And we are poised to see quite a lot more of it.

Time lost is lost forever. Unpleasant phone calls do not make someone else’s life more pleasant. Whereas additional money spent then goes to someone else.

Generalize this. Whenever friction is necessary, either introduce it in the service of some necessary function, or use as non-destructive a transfer or cost as possible.

It’s time to build. It’s always time to build.

The problem is, you need permission to build.

The abundance agenda is largely about taking the pro-social legible actions that make us richer, and moving them back from Category 2 into Category 1 or sometimes 0.

It is not enough to make it possible. It needs to be easy. As easy as possible.

Building housing where people want to live needs to be at most Category 1.

Building green energy, and transmission lines, need to be at most Category 1.

Pharmaceutical drug development needs to be at most Category 1.

Having children needs to be at least Category 1, ideally Category 0.

Deployment of and extraction of utility from AI needs to remain Category 1, where it does not impose catastrophic or existential risks. Developing frontier models that might kill everyone needs to be at Category 2 with an option to move it to Category 3 or Category 4 on a dime if necessary, including gathering the data necessary to make that choice.

What matters is mostly moving into Category 1. Actively subsidizing into Category 0 is a nice-to-have, but in most cases unnecessary. We need only to remove the barriers to such activities, to make such activities free of unnecessary frictions and costs and delays. That’s it.

When you put things in category 1, magic happens. If that would be good magic, do it.

A lot of technological advances and innovations, including the ones that are currently blocked, are about taking something that was previously Category 2, and turning it into a Category 1. Making the possible easier is extremely valuable.

We often need to beware and keep in Category 2 or higher actions that disrupt important norms and encourage disorder, that are primarily acts of predation, or that have important other negative externalities.

When the wrong thing is a little more annoying to do than the right thing, a lot more people will choose the right path, and vice versa. When you make the anti-social action easier than the pro-social action, when you reward those who bring disorder or wreck the commons and punish those who adhere to order and help the group, you go down a dark path.

This is also especially true when considering whether something will be a default, or otherwise impossible to ignore.

There is a huge difference between ‘you can get [X] if you seek it out’ versus ‘constantly seeing advertising for [X]’ or facing active media or peer pressure to participate in [X].

Recently, America moved Sports Gambling from Category 2 to Category 1.

Suddenly, sports gambling was everywhere, on our billboards and in our sports media, including the game broadcasts and stadium experiences. Participation exploded.

We now have very strong evidence that this was a mistake.

That does not mean sports gambling should be seriously illegal. It only means that people can’t handle low-friction sports gambling apps that live on their phones and get pushed constantly in the media.

I very much don’t want it in Category 3, only to move it back to Category 2. Let people gamble at physical locations. Let those who want to use VPNs or actively subvert the rules have their fun too. It’s fine, but don’t make it too easy, or in people’s faces.

The same goes for a variety of other things, mostly either vices or things that impose negative externalities on others, that are fine in moderation with frictions attached.

The classic other vice examples count: Cigarettes, drugs and alcohol, prostitution, TikTok. Prohibition on such things always backfires, but you want to see less of them, in both the figurative and literal sense, than you would if you fully unleashed them. So we need to talk price, and exactly what level of friction is correct, keeping in mind that ‘technically legal versus illegal’ is not the critical distinction in practice.

There are those who will not, on principle, lie or break the law, or not break other norms. Every hero has a code. It would be good if we could return to a norm where this was how most people acted, rather than us all treating many laws as almost not being there and certain statements as not truth tracking – that being ‘nominally illegal with no enforcement’ or ‘requires telling a lie’ was already Category 2.

Unfortunately, we don’t live in that world, at least not anymore. Indeed, people are effectively forced to tell various lies to navigate for example the medical system, and technically break various laws. This is terrible, and we should work to reverse this, but mostly we need to be realistic.

Similarly, it would be good if we lived by the principle that you consider the costs you impose on others when deciding what to do, only imposing them when justified or with compensation, and we socially punished those who act otherwise. But increasingly we do not live in that world, either.

As AI and other technology removes many frictions, especially for those willing to have the AI lie on their behalf to exploit those systems at scale, this becomes a problem.

Current AI largely takes many tasks that were Category 2, and turns them into Category 1, or effectively makes them so easy as to be Category 0.

Academia and school break first because the friction ‘was the point’ most explicitly, and AI is especially good at related tasks. Note that breaking these equilibria and systems could be very good for actual education, but we must adapt.

Henry Shevlin: I generally position myself an AI optimist, but it’s also increasingly clear to me that LLMs just break lots of our current institutions, and capabilities are increasing fast enough that it’ll be very hard for them to adapt in the near-term.

Education (secondary and higher) is the big one, but also large aspects of academic publishing. More broadly, a lot of the knowledge-work economy seems basically unsustainable in an era of intelligence too cheap to meter.

Lawfare too cheap to meter.

Dick Bruere: I am optimistic that AI will break everything.

Then we get into places like lawsuits.

Filing or defending against a lawsuit is currently a Category 2 action in most situations. The whole process is expensive and annoying, and it’s far more expensive to do it with competent representation. The whole system is effectively designed with this in mind. If lawsuits fell down to Category 1 because AI facilitated all the filings, suddenly a lot more legal actions become viable.

The courts themselves plausibly break from the strain. A lot of dynamics throughout society shift, as threats to file become credible, and legal considerations that exist on paper but not in practice – and often make very little sense in practice – suddenly exist in practice. New strategies for lawfare, for engineering the ability to sue, come into play.

Yes, the defense also moves towards Category 1 via AI, and this will help mitigate, but for many reasons this is a highly incomplete solution. The system will have to change.

Job applications are another example. It used to be annoying to apply to jobs, to the extent that most people applied to vastly fewer jobs than was wise. As a result, one could reasonably advertise or list a job and consider the applications that came in.

In software, this is essentially no longer true – AI-assisted applications flood the zone. If you apply via a public portal, you will get nowhere. You can only meaningfully apply via methods that find new ways to apply friction. That problem will gradually (or rapidly) spread to other industries and jobs.

There are lots of formal systems that offer transfers of wealth, in exchange for humans undergoing friction and directing attention. This can be (an incomplete list):

  1. Price discrimination. You offer discounts to those willing to figure out how to get them, charge more to those who pay no attention and don’t care.

  2. Advertising for yourself. Offer free samples, get people to try new products.

  3. Advertising for others. As in, a way to sell you on watching advertising.

  4. Relationship building. Initial offers of 0% interest get you to sign up for a credit card. You give your email to get into a rewards program with special offers.

  5. Customer service. If you are coming in to ask for an exchange or refund, that is annoying enough to do that it is mostly safe to assume your request is legit.

  6. Costly signaling. Only those who truly need or would benefit would endure what you made them do to qualify. School and job applications fall into this.

  7. Habit formation. Daily login rewards and other forms of gamification are ubiquitous in mobile apps and other places.

  8. Security through obscurity. There is a loophole in the system, but not many people know about it, and figuring it out takes skill.

  9. Enemy action. It is far too expensive to fully defend yourself against a sufficiently determined fraudster or thief, or someone determined to destroy your reputation, or worse an assassin or other physical attacker. Better to impose enough friction they don’t bother.

  10. Blackmail. It is relatively easy to impose large costs on someone else, or credibly threaten to do so, to try and extract resources from them. This applies on essentially all levels. Or of course someone might actually want to inflict massive damage (including catastrophic harms, cyberattacks, CBRN risks, etc).

Breaking all these systems, and the ways we ensure that they don’t get exploited at scale, upends quite a lot of things that no longer make sense.

In some cases, that is good. In others, not so good. Most will require adjustment.

Future more capable AI may then threaten to bring things in categories #3, #4 and #5 into the realm of super doable, or even start doing them on its own. Maybe even some things we think are in #6. In some cases this will be good because the frictions were due to physical limitations or worries that no longer apply. In other cases, this would represent a crisis.

To the extent you have control over levels of friction of various activities, for yourself or others, choose intentionally, especially in relative terms. All of this applies on a variety of scales.

Focus on reducing frictions you benefit from reducing, and assume this matters more than you think because it will change the composition of your decisions quite a lot.

Often this means it is well worth it to spend [X] in advance to prevent [Y] amount of friction over time, even if X>Y, or even X>>Y.

Where lower friction would make you worse off, perhaps because you would then make worse choices, consider introducing new frictions, up to and including commitment devices and actively taking away optionality that is not to your benefit.

Beware those who try to turn the scale into a boolean. It is totally valid to be fine with letting people do something if and only if it is sufficiently annoying for them to do it – you’re not a hypocrite to draw that distinction.

You’re also allowed to say, essentially, ‘if we can’t put this into [1] without it being in [0] then it needs to be in [2],’ or even ‘if there’s no way to put this into [2] without putting it into [1] then we need to put it in [3].’

You are especially allowed to point out ‘putting [X] in [1 or 0] has severe negative consequences, and doing [Y] puts [X] there, so until you figure out a solution you cannot do [Y].’

Most importantly, pay attention to how you and other people will actually respond to all this. Take it seriously, consider the incentives, equilibria, dynamics and consequences that result, and then respond deliberately.

Finally, when you notice that friction levels are changing, watch for necessary adjustments, and to see what if anything will break, what habits must be avoided. And also, of course, what new opportunities this opens up.

Report: iPhone SE could shed its 10-year-old design “as early as next week”

Gurman suggests that Apple could raise the $429 starting price of the new iPhone SE to reflect the updated design. He also says that Apple’s supplies of the $599 iPhone 14 are running low at Apple’s stores—the 14 has already been discontinued in some countries over its lack of a USB-C port, and it’s possible Apple could be planning to replace both the iPhone 14 and the old SE with the new SE.

Apple’s third-generation iPhone SE is nearly three years old, but its design (including its dimensions, screen size, Home button, and Lightning port) hearkens all the way back to 2014’s iPhone 6. Put 2017’s iPhone 8 and 2022’s iPhone SE on a table next to each other, and almost no one could tell the difference. These days, it feels like a thoroughly second-class iPhone experience, and a newer design is overdue.

Other Apple products allegedly due for an early 2025 release include the M4 MacBook Airs and a next-generation Apple TV, which, like the iPhone SE, was also last refreshed in 2022. Gurman has also said that a low-end iPad and a new iPad Air will arrive “during the first half of 2025” and updated Mac Pro and Mac Studio models are to arrive sometime this year as well. Apple is also said to be making progress on its own smart display, expanding its smart speaker efforts beyond the aging HomePod and HomePod mini.

Rocket Report: Another hiccup with SpaceX upper stage; Japan’s H3 starts strong


Vast’s schedule for deploying a mini-space station in low-Earth orbit was always ambitious.

A stack of 21 Starlink Internet satellites arrives in orbit Tuesday following launch on a Falcon 9 rocket. Credit: SpaceX

Welcome to Edition 7.30 of the Rocket Report! The US government relies on SpaceX for a lot of missions. These include launching national security satellites, putting astronauts on the Moon, and providing global broadband communications. But there are hurdles—technical and, increasingly, political—on the road ahead. To put it generously, Elon Musk, without whom much of what SpaceX does wouldn’t be possible, is one of the most divisive figures in American life today.

Now, a Democratic lawmaker in Congress has introduced a bill that would end federal contracts for special government employees (like Musk), citing conflict-of-interest concerns. The bill will go nowhere with Republicans in control of Congress, but it is enough to make me pause and think. When the Trump era passes and a new administration takes the White House, how will they view Musk? Will there be an appetite to reduce the government’s reliance on SpaceX? To answer this question, you must first ask if the government will even have a choice. What if, as is the case in many areas today, there’s no viable replacement for the services offered by SpaceX?

As always, we welcome reader submissions. Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Blue Origin flight focuses on lunar research. For the first time, Jeff Bezos’ Blue Origin space venture has put its New Shepard suborbital rocket ship through a couple of minutes’ worth of Moon-level gravity, GeekWire reports. The uncrewed mission, known as NS-29, sent 30 research payloads on a 10-minute trip from Blue Origin’s Launch Site One in West Texas. For this trip, the crew capsule was spun up to 11 revolutions per minute, as opposed to the typical half-revolution per minute. The resulting centrifugal force was equivalent to one-sixth of Earth’s gravity, which is what would be felt on the Moon.

Gee, that’s cool … The experiments aboard Blue Origin’s space capsule examined how to process lunar soil to extract resources and how to manufacture solar cells on the Moon for Blue Origin’s Blue Alchemist project. Another investigated how moondust gets electrically charged and levitated when exposed to ultraviolet light. These types of experiments in partial gravity can be done on parabolic airplane flights, but those only provide a few seconds of the right conditions to simulate the Moon’s gravity. (submitted by EllPeaTea)

Orbex announces two-launch deal with D-Orbit. UK-based rocket builder Orbex announced Monday that it has signed a two-launch deal with Italian in-orbit logistics provider D-Orbit, European Spaceflight reports. The deal includes capacity aboard two launches on Orbex’s Prime rocket over the next three years. D-Orbit aggregates small payloads on rideshare missions (primarily on SpaceX rockets so far) and has an orbital transfer vehicle for ferrying satellites to different altitudes after separation from a launch vehicle. Orbex’s Prime rocket is sized for the small satellite industry, and the company aims to debut it later this year.

Thanks to fresh funding? … Orbex has provided only sparse updates on its progress toward launching the Prime rocket. What we do know is that Orbex suspended plans to develop a spaceport in Scotland to focus its resources on the Prime rocket itself. Despite little evidence of any significant accomplishments, Orbex last month secured a $25 million investment from the UK government. The timing of the launch agreement with D-Orbit raises the question of whether the UK government’s backing helped seal the deal. As Andrew Parsonson of European Spaceflight writes: “Is this a clear indication of how important strong institutional backing is for the growth of privately developed launch systems in Europe?” (submitted by EllPeaTea)

Falcon 9’s upper stage misfires again. The second stage of a SpaceX Falcon 9 rocket remained in orbit following a launch Saturday from Vandenberg Space Force Base, California. The rocket successfully deployed a new batch of Starlink Internet satellites but was supposed to reignite its engine for a braking maneuver to head for a destructive reentry over the Pacific Ocean. While airspace warning notices from the FAA showed a reentry zone over the eastern Pacific Ocean, publicly available US military tracking continued to show the upper stage in orbit this week. Sources also told Ars that SpaceX delayed two Falcon 9 launches this week by a day to allow time for engineers to evaluate the problem.

3 in 6 months … This is the third time since last July that the Falcon 9’s upper stage has encountered a problem in flight. On one occasion, the upper stage failed to reach its targeted orbit, leading to the destruction of 20 Starlink satellites. Then, an upper stage misfired during a deorbit burn after an otherwise successful launch in September, causing debris to fall outside of the pre-approved danger area. After both events, the FAA briefly grounded the Falcon 9 rocket while SpaceX conducted an investigation. This time, an FAA spokesperson said the agency won’t require an investigation. “All flight events occurred within the scope of SpaceX’s licensed activities,” the spokesperson told Ars.

Vast tests hardware for commercial space station. Vast Space has started testing a qualification model of its first commercial space station but has pushed back the launch of that station into 2026, Space News reports. In an announcement Thursday, Vast said it completed a proof test of the primary structure of a test version of its Haven-1 space station habitat at a facility in Mojave, California. During the testing, Vast pumped up the pressure inside the structure to 1.8 times its normal level and conducted a leak test. “On the first try we passed that critical test,” Max Haot, chief executive of Vast, told Space News.

Not this year … It’s encouraging to see Vast making tangible progress in developing its commercial space station. The privately held company is one of several seeking to develop a commercial outpost in low-Earth orbit to replace the International Space Station after its scheduled retirement in 2030. NASA is providing funding to two industrial teams led by Blue Origin and Voyager Space, which are working on different space station concepts. But so far, Vast’s work has been funded primarily through private capital. The launch of the Haven-1 outpost, which Vast previously said could happen this year, is now scheduled no earlier than May 2026. The spacecraft will launch in one piece on a Falcon 9 rocket, and the first astronaut crew to visit Haven-1 could launch a month later. Haven-1 is a pathfinder for a larger commercial station called Haven-2, which Vast intends to propose to NASA. (submitted by EllPeaTea)

H3 deploys Japanese navigation satellite. Japan successfully launched a flagship H3 rocket Sunday and put into orbit a Quasi-Zenith Satellite (QZS), aiming to improve the accuracy of global positioning data for various applications, Kyodo News reports. After separation from the H3 rocket, the Michibiki 6 satellite will climb into geostationary orbit, where it will supplement navigation signals from GPS satellites to provide more accurate positioning data to users in Japan and surrounding regions, particularly in mountainous terrain and amid high-rise buildings in large cities. The new satellite joins a network of four QZS spacecraft launched by Japan beginning in 2010. Two more Quasi-Zenith Satellites are under construction, and Japan’s government is expected to begin development of an additional four regional navigation satellites this year.

A good start … After a failed inaugural flight in 2023, Japan’s new H3 rocket has reeled off four consecutive successful launches in less than a year. This may not sound like a lot, but the H3 has achieved its first four successful flights faster than any other rocket since 2000. SpaceX’s Falcon 9 rocket completed its first four successful flights in a little more than two years, and United Launch Alliance’s Atlas V logged its fourth flight in a similar timeframe. More than 14 months elapsed between the first and fourth successful flight of Rocket Lab’s Electron rocket. The H3 is an expendable rocket with no roadmap to reusability, so its service life and commercial potential are likely limited. But the rocket is shaping up to provide reliable access to space for Japan’s space agency and military, while some of its peers in Europe and the United States struggle to ramp up to a steady launch cadence. (submitted by EllPeaTea)

Europe really doesn’t like relying on Elon Musk. Europe’s space industry has struggled to keep up with SpaceX for a decade. The writing was on the wall when SpaceX landed a Falcon 9 booster for the first time. Now, European officials are wary of becoming too reliant on SpaceX, and there’s broad agreement on the continent that Europe should have the capability to launch its own satellites. In this way, access to space is a strategic imperative for Europe. The problem is, Europe’s new Ariane 6 rocket is just not competitive with SpaceX’s Falcon 9, and there’s no concrete plan to counter SpaceX’s dominance.

So here’s another terrible idea … Airbus, Europe’s largest aerospace contractor with a 50 percent stake in the Ariane 6 program, has enlisted Goldman Sachs for advice on how to forge a new European space and satellite company to better compete with SpaceX. France-based Thales and the Italian company Leonardo are part of the talks, with Bank of America also advising on the initiative. The idea that some bankers from Goldman and Bank of America will go into the guts of some of Europe’s largest institutional space companies and emerge with a lean, competitive entity seems far-fetched, to put it mildly, Ars reports.

The FAA still has some bite. We’re now three weeks removed from the most recent test flight of SpaceX’s Starship rocket, which ended with the failure of the vehicle’s upper stage in the final moments of its launch sequence. The accident rained debris over the Atlantic Ocean and the Turks and Caicos Islands. Unsurprisingly, the Federal Aviation Administration grounded Starship and ordered an investigation into the accident on the day after the launch. This decision came three days before the inauguration of President Donald Trump, who counts Musk as one of his top allies. So far, the FAA hasn’t budged on its requirement for an investigation, an agency spokesperson told Ars.

Debris field … In the hours and days after the failed Starship launch, residents and tourists in the Turks and Caicos shared images of debris scattered across the islands and washing up onshore. The good news is there were no injuries or reports of significant damage from the wreckage, but the FAA confirmed one report of minor damage to a vehicle located in South Caicos. It’s rare for debris from US rockets to fall over land during a launch. This would typically only happen if a launch failed at certain parts of the flight. Before now, there has been no public record of any claims of third-party property damage in the era of commercial spaceflight.

DOD eager to reap the benefits of Starship. A Defense Department unit is examining how SpaceX’s Starship vehicle could be used to support a broader architecture of in-space refueling, Space News reports. A senior adviser at the Defense Innovation Unit (DIU) said SpaceX approached the agency about how Starship’s refueling architecture could be used by the wider space industry. The plan for Starship is to transfer cryogenic propellants between tankers, depots, and ships heading to the Moon, Mars, or other deep-space destinations.

Few details available … US military officials have expressed interest in orbital refueling to support in-space mobility, where ground controllers have the freedom to maneuver national security satellites between different orbits without worrying about running out of propellant. For several years, Space Force commanders and Pentagon officials have touted the importance of in-space mobility, or dynamic space operations, in a new era of orbital warfare. However, there are reports that the Space Force has considered zeroing out a budget line item for space mobility in its upcoming fiscal year 2026 budget request.

A small step toward a fully reusable European rocket. The French space agency CNES has issued a call for proposals to develop a reusable upper stage for a heavy-lift rocket, European Spaceflight reports. This project is named DEMESURE (DEMonstration Étage SUpérieur REutilisable / Reusable Upper Stage Demonstration), and it marks one of Europe’s first steps in developing a fully reusable rocket. That’s all good, but there’s a sense of tentativeness in this announcement. The current call for proposals will only cover the earliest phases of development, such as a requirements evaluation, cost estimation review, and a feasibility meeting. A future call will deal with the design and fabrication of a “reduced scale” upper stage, followed by a demonstration phase with a test flight, recovery, and reuse of the vehicle. CNES’s vision is to field a fully reusable rocket as a successor to the single-use Ariane 6.

Toes in the water … If you’re looking for reasons to be skeptical about Project DEMESURE, look no further than the Themis program, which aims to demonstrate the recovery and reuse of a booster stage akin to SpaceX’s Falcon 9. Themis originated in a partnership between CNES and European industry in 2019, then ESA took over the project in 2020. Five years later, the Themis demonstrator still hasn’t flown. After some initial low-altitude hops, Themis is supposed to launch on a high-altitude test flight and maneuver through the entire flight profile of a reusable booster, from liftoff to a vertical propulsive landing. As we’ve seen with SpaceX, recovering an orbital-class upper stage is a lot harder than landing the booster. An optimistic view of this announcement is that anything worth doing requires taking a first step, and that’s what CNES has done here. (submitted by EllPeaTea)

Next three launches

Feb. 7: Falcon 9 | Starlink 12-9 | Cape Canaveral Space Force Station, Florida | 18:52 UTC

Feb. 8: Electron | IoT 4 You and Me | Māhia Peninsula, New Zealand | 20:43 UTC

Feb. 10: Falcon 9 | Starlink 11-10 | Vandenberg Space Force Base, California | 00:03 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

National Institutes of Health radically cuts support to universities

Grants paid by the federal government have two components. One covers the direct costs of performing the research, paying for salaries, equipment, and consumables like chemicals or enzymes. But the government also pays what are called indirect costs. These go to the universities and research institutes, covering the costs of providing and maintaining the lab space, heat and electricity, administrative and HR functions, and more.

These indirect costs are negotiated with each research institution and average close to 30 percent of the amount awarded for the research. Some institutions see indirect rates as high as half the value of the grant.

On Friday, the National Institutes of Health (NIH) announced that negotiated rates were ending. Every existing grant, and all those funded in the future, will see the indirect cost rate set to just 15 percent. With no warning and no time to adjust to the change in policy, this will prove catastrophic for the budget of nearly every biomedical research institution.
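To make the scale of the change concrete, consider a hypothetical grant (the numbers below are my own illustration, not from the NIH notice):

```python
# Illustrative arithmetic only; the grant size and negotiated rate are
# hypothetical, chosen to show the scale of the change.
direct_costs = 1_000_000     # research costs in a hypothetical grant
negotiated_rate = 0.50       # a high-end negotiated indirect rate
capped_rate = 0.15           # the new across-the-board NIH rate

before = direct_costs * negotiated_rate   # indirect support until now
after = direct_costs * capped_rate        # indirect support under the cap
print(f"Indirect support falls from ${before:,.0f} to ${after:,.0f}, "
      f"a {1 - after / before:.0%} cut")
# Indirect support falls from $500,000 to $150,000, a 70% cut
```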

Cut in half or more

The new policy is described in a supplemental guidance document that modifies the 2024 grant policy statement. The document cites federal regulations that allow the NIH to use a different indirect cost rate from that negotiated with research institutions for “either a class of Federal awards or a single Federal award,” but it has to justify the decision. So, much of the document describes the indirect costs paid by charitable foundations, which tend to be much lower than the rate paid by the NIH.

The new rate of indirect cost reimbursement will be applied to any newly funded grants and retroactively to all existing grants starting with the issuance of this notice. The retroactive nature of this decision may end up being challenged due to the wording of the regulations cited earlier, which also state that “The Federal agency must include, in the notice of funding opportunity, the policies relating to indirect cost rate.” However, even going forward, this will likely severely curtail biomedical research in the US.

National Institutes of Health radically cuts support to universities Read More »

white-house-budget-proposal-could-shatter-the-national-science-foundation

White House budget proposal could shatter the National Science Foundation

The president proposes, and Congress disposes

There are important caveats to this proposal. The Trump administration has probably not even settled upon the numbers that will go into its draft budget, which then goes through the passback process in which there are additional changes. And then, of course, the budget request is just a starting point for negotiations with the US Congress, which sets budget levels.

Even so, such cuts could prove disastrous for the US science community.

“This kind of cut would kill American science and boost China and other nations into global science leadership positions,” Neal Lane, who led the National Science Foundation in the 1990s during Bill Clinton’s presidency, told Ars. “The National Science Foundation budget is not large, of the order 0.1 percent of federal spending, and several other agencies support excellence research. But NSF is the only agency charged to promote progress in science.”

The National Science Foundation was established by Congress in 1950 to fund basic research that would ultimately advance national health and prosperity and secure the national defense. Its major purpose is to evaluate proposals and distribute funding for basic scientific research. Alongside the National Institutes of Health and the Department of Energy, it has been an engine of basic discovery that has underpinned the technological superiority of the United States and its industries.

Some fields, including astronomy, non-health-related biology, and Antarctic research, are almost entirely underwritten by the National Science Foundation. The primary areas of its funding can be found here.

White House budget proposal could shatter the National Science Foundation Read More »

the-uk-got-rid-of-coal—where’s-it-going-next?

The UK got rid of coal—where’s it going next?


Clean, but not fully green

The UK has transitioned to a lower-emission grid. Now comes the hard part.

With the closure of its last coal-fired power plant, Ratcliffe-on-Soar, on September 30, 2024, the United Kingdom has taken a significant step toward its net-zero goals. It’s no small feat to end the 142-year era of coal-powered electricity in the country that pioneered the Industrial Revolution. Yet the UK’s journey away from coal has been remarkably swift, with coal generation plummeting from 40 percent of the electricity mix in 2012 to just two percent in 2019, and finally to zero in 2024.

As of 2023, approximately half of UK electricity generation comes from zero-carbon sources, with natural gas serving as a transitional fuel. The UK aims to cut greenhouse gas emissions by 42 percent to 48 percent by 2027 and achieve net-zero by 2050. The government set a firm target to generate all of its electricity from renewable sources by 2040, emphasizing offshore wind and solar energy as the keys.

What will things look like in the intervening years between today and net-zero? Every scenario, even one grounded in serious science, boils down to a guessing game. Yet some factors are more certain than others, and those are the ones that provide solid footing beneath all of the guesswork.

Long-term goals

The closure of all UK coal-fired power stations in 2024 marked a crucial milestone in the nation’s decarbonization efforts. Coal was once the dominant source of electricity generation, but its contribution to greenhouse gas emissions made it a primary target for phase-out. The closure of these facilities has significantly reduced the UK’s carbon footprint and paved the way for cleaner energy sources.

With the transition from coal complete, natural gas is set to play a crucial role as a “transition fuel.” The government’s “British Energy Security Strategy” argued that gas must continue to be an important part of the energy mix, positioning it as the “glue” that holds the electricity system together during the transition. Even the new Starmer government recognizes that, as the country progresses toward net-zero by 2050, it may still use about a quarter of the gas it currently consumes.

Natural gas emits approximately half as much carbon dioxide as coal when combusted, making it a cleaner alternative during the shift to renewable energy sources. In 2022, natural gas accounted for around 40 percent of the UK’s electricity generation, while coal contributed less than two percent. This transition phase is deemed by the government to be essential as the country ramps up the capacity of renewable energy sources, particularly wind and solar power, to fill gaps left by the reduction of fossil fuels. The government aims to phase out natural gas that’s not coupled with carbon capture by 2035, but in the interim, it serves as a crucial bridge, ensuring energy security while reducing overall emissions.

But its role is definitely intended to be temporary; the UK’s long-term energy goal is to reduce reliance on all fossil fuels (starting with imported supplies), pushing for a rapid transition to cleaner, domestic sources of energy.

The government’s program has five primary targets:

  • Fully decarbonizing the power system (2035)
  • Ending the sale of new petrol and diesel cars (2035)
  • Achieving “Jet Zero” – net-zero UK aviation emissions (2050)
  • Creating 30,000 hectares of new woodland per year (2025)
  • Generating 50 percent of its total electricity from renewable sources by 2030

Offshore wind energy has emerged as this strategy’s key component, with significant investments being made in new wind farms. The North Sea’s favorable wind conditions offer immense potential, and in recent years a surge in offshore wind investment has translated into several large-scale developments that are in advanced planning stages or now under construction.

The government has set a target to increase offshore wind capacity to 50 GW by 2030, up from around 10 GW currently. This initiative is supported by substantial financial commitments from both the public and private sectors. Recent investment announcements underscore the UK’s commitment to this goal and the North Sea’s central role in it. In 2023, the government announced plans to invest $25 billion (20 billion British pounds) in carbon capture and offshore wind projects in the North Sea over the next two decades. This investment is expected to create up to 50,000 jobs and help position the UK as a leader in clean energy technologies.

This came alongside investments totaling over $166 million (133 million pounds) to support the development of new offshore wind farms, which are expected to create thousands of jobs and stimulate local economies.

In 2024, further investments were announced to support the expansion of offshore wind capacity. The government committed to holding annual auctions for new offshore wind projects to meet its goal of quadrupling offshore wind capacity by 2030. These investments are part of a broader strategy to leverage the UK’s expertise in offshore industries and transition the North Sea from an oil and gas hub to a clean-energy powerhouse.

Offshore wind

As the UK progresses toward its net-zero target, it faces both challenges and opportunities. While significant progress has been made in decarbonizing the power sector, the Climate Change Committee, the government’s independent climate adviser, has noted that emissions reductions need to accelerate in other sectors, particularly agriculture, land use, and waste. However, with continued investment in renewable energy and supportive policies, the UK is positioning itself to become a leader in the global transition to a low-carbon economy.

Looking ahead, 2025 promises to be a landmark year for the UK’s green energy sector, with further investment announcements and projects in the pipeline.

The Crown Estate, which manages the seabed around England, Wales, and Northern Ireland, has made significant strides in facilitating new leases for offshore wind development. In 2023, Crown Estate Scotland, its counterpart for the seabed off Scotland, announced the successful auction of seabed leases for new offshore wind projects totaling a capacity of 5 gigawatts. And in 2024, the government plans to hold its next major leasing round, which could see the deployment of an additional 7 GW of offshore wind capacity.

The UK government also approved plans for the Dogger Bank Wind Farm, which will be the world’s largest offshore wind farm when completed. Located off the coast of Yorkshire, this massive project will ultimately generate enough electricity to power millions of homes. Dogger Bank is a joint venture linking SSE Renewables, Equinor, and Vårgrønn.

This is in line with the government’s broader strategy to enhance energy independence and resilience, particularly in light of the geopolitical uncertainties affecting global energy markets. The UK’s commitment to renewable energy is not merely an environmental imperative; it is also an economic opportunity. By harnessing the vast potential of the North Sea, the UK aims not only to meet its net-zero targets but also to drive economic growth and job creation in the green energy sector, ensuring a sustainable future for generations to come.

Recognizing wind’s importance, the UK government launched a 2024 consultation on plans to develop a new floating wind energy sector.

The transition to a greener economy is projected to create up to 400,000 jobs by 2030 across various sectors, including manufacturing, installation, and maintenance of renewable energy technologies.

The UK’s growing offshore wind industry is expected to attract billions in investment, solidifying the country’s position as a leader in the global green energy market. The government’s commitment to offshore wind development, underscored by substantial investments in 2023 and anticipated announcements for 2024, signals a robust path forward.

Moving away from gas

Still, the path ahead remains challenging, requiring a multifaceted approach that balances economic growth, energy security, and environmental sustainability.

With the transition from coal, natural gas is now poised to play the central role as a bridge fuel. While natural gas emits fewer greenhouse gases than coal, it is still a fossil fuel and contributes to carbon emissions. However, in the short term, natural gas can help maintain energy security and provide a reliable source of electricity during periods of low renewable energy output. Additionally, natural gas can be used to produce hydrogen, potentially coupled with carbon capture, enabling a clean energy carrier that can be integrated into the existing energy infrastructure.

To support the country’s core clean energy goals, the government is implementing specific initiatives, although the pace has been quite uneven. The UK Emissions Trading Scheme (ETS) is being strengthened to incentivize industrial decarbonization. The government has also committed to investing in key green industries alongside offshore wind: carbon capture, usage and storage (CCUS), and nuclear energy.

Combined, these should allow the UK to limit its use of natural gas and capture the emissions associated with any remaining fossil fuel use.

Both countries rely heavily on wind power, and both governments are pushing toward net-zero carbon emissions, but the UK’s energy-generation transformation and Germany’s differ markedly in approach and timeline.

Energiewende, Germany’s energy transition, is characterized by goals some critics consider overly ambitious: achieving net greenhouse gas neutrality by 2045. Those critics think the rhetoric doesn’t come close to being matched by the required levels of either government or private-sector financial commitment. Together with the Bundestag, the chancellor has set interim targets to reduce emissions by 65 percent by 2030 and 88 percent by 2040 (both compared to 1990 levels). Germany’s energy mix is heavily reliant on renewables, with a goal of sourcing 80 percent of its electricity from renewable energy by 2030—and achieving 100 percent by 2035.

However, Germany has faced challenges due to continued reliance on coal and natural gas, which made it difficult to reach its emissions goals.

The UK, however, appears to be ahead in terms of immediate reductions in coal use and the integration of renewables into its energy mix. Germany’s path is more complex, as it balances its energy transition with energy security concerns, particularly in light of how Russia’s war in Ukraine affects gas supplies.

The UK got rid of coal—where’s it going next? Read More »

parrots-struggle-when-told-to-do-something-other-than-mimic-their-peers

Parrots struggle when told to do something other than mimic their peers

There have been many studies on the capability of non-human animals to mimic transitive actions—actions that have a purpose. Hardly any studies have shown that animals are also capable of mimicking intransitive actions. Even though intransitive actions have no particular purpose, imitating these non-conscious movements is still thought to help with socialization and strengthen bonds for both animals and humans.

Zoologist Esha Haldar and colleagues from the Comparative Cognition Research group worked with blue-throated macaws, which are critically endangered, at the Loro Parque Fundación in Tenerife. They trained the macaws to perform two intransitive actions, then set up a conflict: Two neighboring macaws were asked to do different actions.

What Haldar and her team found was that individual birds were more likely to perform the same intransitive action as a bird next to them, no matter what they’d been asked to do. This could mean that macaws possess mirror neurons, the same neurons that, in humans, fire when we are watching intransitive movements and cause us to imitate them (at least if these neurons function the way some think they do).

But it wasn’t on purpose

Parrots are already known for their mimicry of transitive actions, such as grabbing an object. Because they are highly social creatures with brains that are large relative to the size of their bodies, they made excellent subjects for a study that gauged how susceptible they were to copying intransitive actions.

Mirroring of intransitive actions, also called automatic imitation, can be measured with what’s called a stimulus-response-compatibility (SRC) test. These tests measure the response time between seeing an intransitive movement (the visual stimulus) and mimicking it (the action); a faster response time indicates a stronger reaction to the stimulus. They also measure the accuracy with which subjects reproduce the stimulus.
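To illustrate how such a test might be scored, here’s a minimal sketch with entirely hypothetical trial data; it is not the study’s actual analysis code. It computes the classic compatibility effect: how much slower (and less accurate) responses are when the observed action conflicts with the cued one:

```python
# Minimal sketch of scoring a stimulus-response-compatibility (SRC) test.
# The trial data are hypothetical, purely for illustration.
from statistics import mean

# Each trial records whether the demonstrated action matched the cued one
# ("congruent"), the response time in milliseconds, and response accuracy.
trials = [
    {"congruent": True,  "rt_ms": 410, "correct": True},
    {"congruent": True,  "rt_ms": 395, "correct": True},
    {"congruent": False, "rt_ms": 520, "correct": False},
    {"congruent": False, "rt_ms": 480, "correct": True},
]

def mean_rt(trials, congruent):
    return mean(t["rt_ms"] for t in trials if t["congruent"] == congruent)

def accuracy(trials, congruent):
    hits = [t["correct"] for t in trials if t["congruent"] == congruent]
    return sum(hits) / len(hits)

# A positive compatibility effect (incongruent responses slower than
# congruent ones) is the signature of automatic imitation.
effect = mean_rt(trials, False) - mean_rt(trials, True)
print(f"Compatibility effect: {effect:.0f} ms")
print(f"Accuracy: {accuracy(trials, True):.0%} congruent, "
      f"{accuracy(trials, False):.0%} incongruent")
```

A bird strongly prone to automatic imitation would show a large positive effect on a test like this, consistent with the finding that the macaws defaulted to their neighbor’s action regardless of the cue.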

Until now, there have only been three studies that showed non-human animals are capable of copying intransitive actions, but the intransitive actions in these studies were all by-products of transitive actions. Only one of these focused on a parrot species. Haldar and her team would be the first to test directly for animal mimicry of intransitive actions.

Parrots struggle when told to do something other than mimic their peers Read More »

doj-agrees-to-temporarily-block-doge-from-treasury-records

DOJ agrees to temporarily block DOGE from Treasury records

Elez reports to Tom Krause, another Treasury Department special government employee, but Krause doesn’t have direct access to the payment system, Humphreys told the judge. Krause is the CEO of Cloud Software Group and is also viewed as a Musk ally.

But when the judge pressed Humphreys on Musk’s alleged access, the DOJ lawyer only said that as far as the defense team was aware, Musk did not have access.

Further, Humphreys explained that DOGE—which functions as part of the executive office—does not have access, to the DOJ’s knowledge. As he explained it, DOGE sets the high-level priorities that these special government employees carry out, seemingly trusting the employees to identify waste and protect taxpayer dollars without ever providing any detailed reporting on the records that supposedly are evidence of mismanagement.

To Kollar-Kotelly, the facts on the record seem to suggest that no one outside the Treasury is currently accessing sensitive data. But when she pressed Humphreys on whether DOGE had future plans to access the data, Humphreys declined to comment, calling it irrelevant to the complaint.

Humphreys suggested that the government’s defense in this case would focus on the complaint that outsiders are currently accessing Treasury data, seemingly dismissing any need to discuss DOGE’s future plans. But the judge pushed back, telling Humphreys she was not trying to “nail” him “to the wall,” but there’s too little information on the relationship between DOGE and the Treasury Department as it stands. How these entities work together makes a difference, the judge suggested, in terms of safeguarding sensitive Treasury data.

According to Kollar-Kotelly, granting a temporary restraining order in part would allow DOGE to “preserve the status quo” of its current work in the Treasury Department while ensuring no new outsiders get access to Americans’ sensitive information. Such an order would give both sides time to better understand the current government workflows to best argue their cases, the judge suggested.

If the order is approved, it would remain in effect until the judge rules on the plaintiffs’ request for a preliminary injunction. At the hearing today, Kollar-Kotelly suggested that matter would likely be settled at a hearing on February 24.

DOJ agrees to temporarily block DOGE from Treasury records Read More »