Author name: Beth Washington


Volkswagen gets the message: Cheap, stylish EVs coming from 2026

A surprise find in my inbox this morning: news from Volkswagen about a pair of new electric vehicles it has in the works. Even better, they’re both small and affordable, bucking the supersized, overpriced trend of the past few years. But before we get too excited, there’s currently no guarantee either will go on sale in North America.

Next year sees the European debut of the ID. 2all, a small electric hatchback that VW wants to sell for less than 25,000 euros ($26,671). But the ID. 2all isn’t really news: VW showed off the concept, as well as a GTI version, back in September 2023.

What is new is the ID. EVERY1, an all-electric entry-level car that, if the concept is anything to go by, is high on style and charm. It does not have a retro shape like a Mini or Fiat 500—VW could easily have succumbed to a retread of the Giugiaro-styled Golf from 1974 but opted for something new instead. The design language involves three pillars: stability, likability, and surprise elements, or “secret sauce,” according to VW’s description.

The ID. EVERY1 is the antithesis of the giant SUVs and trucks that have come out of Detroit these past few years.

“The widely flared wheelarches over the large 19-inch wheels and the athletic and clearly designed surfaces of the silhouette ensure stability,” said VW head of design Andreas Mindt, confirming the inability of modern designers to stay away from huge wheels.

“The slightly cheeky smile at the front is a particularly likable feature. A secret sauce element is the roof drawn in the middle, usually seen on sports cars. All these design elements lend the ID. EVERY1 a charismatic identity with which people can identify,” Mindt said.

It really is a small car—at 152.8 inches (3,880 mm) long, it’s much shorter than the smallest car VW sells here in the US, the Golf GTI, which is a still-diminutive 168.8 inches (4,288 mm) in length. Like the slightly bigger ID. 2all (which is still much shorter than a Golf), the ID. EVERY1 will use a new front-wheel drive version of VW’s modular MEB platform. (Initially introduced for rear- or all-wheel-drive EVs, MEB underpins cars like the ID.4 crossover and ID. Buzz bus.)



On OpenAI’s Safety and Alignment Philosophy

OpenAI’s recent transparency on safety and alignment strategies has been extremely helpful and refreshing.

Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long term implications. The level of detail and openness here was extremely helpful.

Now we have another document, How We Think About Safety and Alignment. Again, they have laid out their thinking crisply and in excellent detail.

I have strong disagreements with several key assumptions underlying their position.

Given those assumptions, they have produced a strong document. Here I focus on my disagreements, so I want to be clear up front that I think the document is mostly very good.

This post examines their key implicit and explicit assumptions.

In particular, there are three core assumptions that I challenge:

  1. AI Will Remain a ‘Mere Tool.’

  2. AI Will Not Disrupt ‘Economic Normal.’

  3. AI Progress Will Not Involve Phase Changes.

The first two are implicit. The third is explicit.

OpenAI recognizes the questions and problems, but we have different answers. Those answers come with very different implications:

  1. OpenAI thinks AI can remain a ‘Mere Tool’ despite very strong capabilities if we make that a design goal. I do think this is possible in theory, but that there are extreme competitive pressures against this that make that almost impossible, short of actions no one involved is going to like. Maintaining human control is to try and engineer what is in important ways an ‘unnatural’ result.

  2. OpenAI expects massive economic disruptions, ‘more change than we’ve seen since the 1500s,’ but that still mostly assumes what I call ‘economic normal,’ where humans remain economic agents, private property and basic rights are largely preserved, and easy availability of oxygen, water, sunlight and similar resources continues. I think this is not a good assumption.

  3. OpenAI is expecting what is for practical purposes continuous progress without major sudden phase changes. I believe their assumptions on this are far too strong, and that there have already been a number of discontinuous points with phase changes, and we will have more coming, and also that with sufficient capabilities many current trends in AI behaviors would reverse, perhaps gradually but also perhaps suddenly.

I’ll then cover their five (very good) core principles.

I call upon the other major labs to offer similar documents. I’d love to see their takes.

  1. Core Implicit Assumption: AI Can Remain a ‘Mere Tool’.

  2. Core Implicit Assumption: ‘Economic Normal’.

  3. Core Assumption: No Abrupt Phase Changes.

  4. Implicit Assumption: Release of AI Models Only Matters Directly.

  5. On Their Taxonomy of Potential Risks.

  6. The Need for Coordination.

  7. Core Principles.

  8. Embracing Uncertainty.

  9. Defense in Depth.

  10. Methods That Scale.

  11. Human Control.

  12. Community Effort.

This is the biggest crux. OpenAI thinks that this is a viable principle to aim for. I don’t see how.

OpenAI imagines that AI will remain a ‘mere tool’ indefinitely. Humans will direct AIs, and AIs will do what the humans direct the AIs to do. Humans will remain in control, and remain ‘in the loop,’ and we can design to ensure that happens. When we model a future society, we need not imagine AIs, or collections of AIs, as if they were independent or competing economic agents or entities.

Thus, our goal in AI safety and alignment is to ensure the tools do what we intend them to do, and to guard against human misuse in various forms, and to prepare society for technological disruption similar to what we’d face with other techs. Essentially, This Time is Not Different.

Thus, the Model Spec and other such documents are plans for how to govern an AI assistant that is a mere tool, how to assert a chain of command, and how to deal with the issues that come along with that.

That’s a great thing to do for now, but as a long term outlook I think this is Obvious Nonsense. A sufficiently capable AI might (or might not) be something that a human operating it could choose to leave as a ‘mere tool.’ But even under optimistic assumptions, you’d have to sacrifice a lot of utility to do so.

It does not have a goal? We can and will effectively give it a goal.

It is not an agent? We can and will make it an agent.

Human in the loop? We can and will take the human out of the loop once the human is not contributing to the loop.

OpenAI builds AI agents and features in ways designed to keep humans in the loop and ensure the AIs are indeed mere tools, as suggested in their presentation at the Paris summit? They will face dramatic competitive pressures to compromise on that. People will do everything to undo those restrictions. What’s the plan?

Thus, even if we solve alignment in every useful sense, and even if we know how to keep AIs as ‘mere tools’ if desired, we would rapidly face extreme competitive pressures towards gradual disempowerment, as AIs are given more and more autonomy and authority because that is the locally effective thing to do (and also others do it for the lulz, or unintentionally, or because they think AIs being in charge or ‘free’ is good).

Until a plan tackles these questions seriously, you do not have a serious plan.

What I mean by ‘Economic Normal’ is something rather forgiving – that the world does not transform in ways that render our economic intuitions irrelevant, or that invalidate economic actions. The document notes they expect ‘more change than from the 1500s to the present’ and the 1500s would definitely count as fully economic normal here.

It roughly means that your private property is preserved in a way that allows your savings to retain purchasing power, your rights to bodily autonomy and (very) basic rights are respected, your access to the basic requirements of survival (sunlight, water, oxygen and so on) is not disrupted or made dramatically more expensive on net, and so on. It also means that the economy does not grow so dramatically as to throw all your intuitions out the window.

That things will not enter true High Weirdness, and that financial or physical wealth will meaningfully protect you from events.

I do not believe these are remotely safe assumptions.

AGI is notoriously hard to define or pin down. There are not two distinct categories of things, ‘definitely not AGI’ and then ‘fully AGI.’

Nor do we expect an instant transition from ‘AI not good enough to do much’ to ‘AI does recursive self-improvement.’ AI is already good enough to do much, and will probably get far more useful before things ‘go critical.’

That does not mean that there are not important phase changes between models, where the precautions and safety measures you were previously using either stop working or are no longer matched to the new threats.

AI is still on an exponential.

If we treat past performance as assuring us of future success, if we do not want to respond to an exponential ‘too early’ based on the impacts we can already observe, what happens? We will inevitably respond too late.

I think the history of GPT-2 actually illustrates this. If we conclude from that incident that OpenAI did something stupid and ‘looked silly,’ without understanding exactly why the decision was a mistake, we are in so so much trouble.

We used to view the development of AGI as a discontinuous moment when our AI systems would transform from solving toy problems to world-changing ones. We now view the first AGI as just one point along a series of systems of increasing usefulness.

In a discontinuous world, practicing for the AGI moment is the only thing we can do, and it leads to treating the systems of today with a level of caution disproportionate to their apparent power.

This is the approach we took for GPT-2 when we didn’t release the model due to concerns about malicious applications.

In the continuous world, the way to make the next system safe and beneficial is to learn from the current system. This is why we’ve adopted the principle of iterative deployment, so that we can enrich our understanding of safety and misuse, give society time to adapt to changes, and put the benefits of AI into people’s hands.

At present, we are navigating the new paradigm of chain-of-thought models – we believe this technology will be extremely impactful going forward, and we want to study how to make it useful and safe by learning from its real-world usage. In the continuous world view, deployment aids rather than opposes safety.


At the current margins, subject to proper precautions and mitigations, I agree with this strategy of iterative deployment. Making models available, on net, is helpful.

However, we forget what happened with GPT-2. The demand was that the full GPT-2 be released as an open model, right away, despite it being a phase change in AI capabilities that potentially enabled malicious uses, with no one understanding what the impact might be. It turned out the answer was ‘nothing,’ but the point of iterative deployment is to test that theory while still being able to turn the damn thing off. That’s exactly what happened. The concerns look silly now, but that’s hindsight.

Similarly, there have been several cases of what sure felt like discontinuous progress since then. If we restrict ourselves to the ‘OpenAI extended universe,’ GPT-3, GPT-3.5, GPT-4, o1 and Deep Research (including o3) all feel like plausible cases where new modalities potentially opened up, and new things happened.

The most important potential phase changes lie in the future, especially the ones where various safety and alignment strategies potentially stop working, or capabilities make such failures far more dangerous, and it is quite likely these two things happen at the same time because one is a key cause of the other. And if you buy ‘o-ring’ style arguments, where AI is not so useful so long as there must be a human in the loop, removing the last need for such a human is a really big deal.

Alternatively: Iterative deployment can be great if and only if you use it in part to figure out when to stop.

I would also draw a distinction between open iterative deployment and closed iterative deployment. Closed iterative deployment can be far more aggressive while staying responsible, since you have much better options available to you if something goes awry.

I also think the logic here is wrong:

These diverging views of the world lead to different interpretations of what is safe.

For example, our release of ChatGPT was a Rorschach test for many in the field — depending on whether they expected AI progress to be discontinuous or continuous, they viewed it as either a detriment or learning opportunity towards AGI safety.

The primary impacts of ChatGPT were:

  1. As a starting gun that triggered massively increased use, interest and spending on LLMs and AI. That impact has little to do with whether progress is continuous or discontinuous.

  2. As a way to massively increase capital and mindshare available to OpenAI.

  3. Helping transform OpenAI into a product company.

You can argue about whether those impacts were net positive or not. But they do not directly interact much with whether AI progress is centrally continuous.

Another consideration is various forms of distillation or reverse engineering, or other ways in which making your model available could accelerate others.

And there’s all the other ways in which perception of progress, and of relative positioning, impacts people’s decisions. It is bizarre how much the exact timing of the release of DeepSeek’s r1, relative to several other models, mattered.

Precedent matters too. If you get everyone in the habit of releasing models the moment they’re ready, it impacts their decisions, not only yours.

This is the most important detail-level disagreement, especially in the ways I fear that the document will be used and interpreted, both internally to OpenAI and also externally, even if the document’s authors know better.

It largely comes directly from applying the ‘mere tool’ and ‘economic normal’ assumptions.

As AI becomes more powerful, the stakes grow higher. The exact way the post-AGI world will look is hard to predict — the world will likely be more different from today’s world than today’s is from the 1500s. But we expect the transformative impact of AGI to start within a few years. From today’s AI systems, we see three broad categories of failures:

  1. Human misuse: We consider misuse to be when humans apply AI in ways that violate laws and democratic values. This includes suppression of free speech and thought, whether by political bias, censorship, surveillance, or personalized propaganda. It includes phishing attacks or scams. It also includes enabling malicious actors to cause harm at a new scale.

  2. Misaligned AI: We consider misalignment failures to be when an AI’s behavior or actions are not in line with relevant human values, instructions, goals, or intent. For example an AI might take actions on behalf of its user that have unintended negative consequences, influence humans to take actions they would otherwise not, or undermine human control. The more power the AI has, the bigger potential consequences are.

  3. Societal disruption: AI will bring rapid change, which can have unpredictable and possibly negative effects on the world or individuals, like social tensions, disparities and inequality, and shifts in dominant values and societal norms. Access to AGI will determine economic success, which risks authoritarian regimes pulling ahead of democratic ones if they harness AGI more effectively.

There are two categories of concern here, in addition to the ‘democratic values’ Shibboleth issue.

  1. As introduced, this is framed as ‘from today’s AI systems.’ In which case, this is a lot closer to accurate. But the way the descriptions are written clearly implies this is meant to cover AGI as well, where this taxonomy seems even less complete and less useful for cutting reality at its joints.

  2. This is in a technical sense a full taxonomy, but de facto it ignores large portions of the impact of AI and of the threat model that I am using.

When I say it is technically a full taxonomy, you could say it is essentially saying one of three things happens:

  1. The human does something directly bad, on purpose.

  2. The AI does something directly bad, that the human didn’t intend.

  3. Nothing directly bad happens per se, but bad things happen overall anyway.

Put it like that, and what else is there? Yet the details don’t reflect the three options being fully covered, as summarized there. In particular, ‘societal disruption’ implies a far narrower set of impacts than we need to consider, but similar issues exist with all three.

  1. Human Misuse.

A human might do something bad using an AI, but how are we pinning that down?

Saying ‘violates the law’ puts an unreasonable burden on the law. Our laws, as they currently exist, are complex and contradictory and woefully unfit and inadequate for an AGI-infused world. The rules are designed for very different levels of friction, and very different social and other dynamics, and are written on the assumption of highly irregular enforcement. Many of them are deeply stupid.

If a human uses AI to assemble a new virus, that certainly is what they mean by ‘enabling malicious actors to cause harm at a new scale’ but the concern is not ‘did that break the law?’ nor is it ‘did this violate democratic values.’

Saying ‘democratic values’ is a Shibboleth and semantic stop sign. What are these ‘democratic values’? Things the majority of people would dislike? Things that go against the ‘values’ the majority of people socially express, or that we like to pretend our society strongly supports? Things that change people’s opinions in the wrong ways, or wrong directions, according to some sort of expert class?

Why is ‘personalized propaganda’ bad, other than the way that is presented? What exactly differentiates it from telling an AI to write a personalized email? Why is personalized bad but non-personalized fine and where is the line here? What differentiates ‘surveillance’ from gathering information, and does it matter if the government is the one doing it? What the hell is ‘political bias’ in the context of ‘suppression of free speech’ via ‘human misuse’? And why are these kinds of questions taking up most of the misuse section?

Most of all, this draws a box around ‘misuse’ and treats that as a distinct category from ‘use,’ in a way I think will be increasingly misleading. Certainly we can point to particular things that can go horribly wrong, and label and guard against those. But so much of what people want to do, or are incentivized to do, is not exactly ‘misuse’ but has plenty of negative side effects, especially if done at unprecedented scale, often in ways not centrally pointed at by ‘societal disruption’ even if they technically count. That doesn’t mean there is obviously anything to be done, or that anything should be done, about such things; banning things should be done with extreme caution. But something not being ‘misuse’ does not mean the problems go away.

  2. Misaligned AI.

There are three issues here:

  1. The longstanding question of what even is misaligned.

  2. The limited implied scope of the negative consequences.

  3. The implication that the AI has to be misaligned to pose related dangers.

AI is only considered misaligned here when it is not in line with relevant human values, instructions, goals or intent. If you read that literally, as an AI that is not in line with all four of these things, even then it can still easily bleed into questions of misuse, in ways that threaten to drop overlapping cases on the floor.

I don’t mean to imply there’s something great that could have been written here instead, but: This doesn’t actually tell us much about what ‘alignment’ means in practice. There are all sorts of classic questions about what happens when you give an AI instructions or goals that imply terrible outcomes, as indeed almost all maximalist or precise instructions and goals do at the limit. It doesn’t tell us what ‘human values’ are in various senses.

On scope, I do appreciate that it says the more power the AI has, the bigger potential consequences are. And ‘undermine human control’ can imply a broad range of dangers. But the scope seems severely limited here.

Especially worrisome is that the examples imply that the actions would still be taken ‘on behalf of its user’ and merely have unintended negative consequences. Misaligned AI could take actions very much not on behalf of its user, or might quickly fail to effectively have a user at all. Again, this is the ‘mere tool’ assumption run amok.

  3. Societal Disruption.

Here once again we see ‘economic normal’ and ‘mere tool’ playing key roles.

The wrong regimes – the ‘authoritarian’ ones – might pull ahead, or we might see ‘inequality’ or ‘social tensions.’ Or shifts in ‘dominant values’ and ‘social norms.’ But the base idea of human society is assumed to remain in place, with social dynamics remaining between humans. The worry is that society will elevate the wrong humans, not that society would favor AIs over humans or cease to effectively contain humans at all, or that humans might lose control over events.

To me, this does not feel like it addresses much of what I worry about in terms of societal disruptions, or even if it technically does it gives the impression it doesn’t.

We should worry far more about social disruptions in the sense that AIs take over and humans lose control, or AIs outcompete humans and render them non-competitive and non-productive, rather than worries about relatively smaller problems that are far more amenable to being fixed after things go wrong.

  4. Gradual Disempowerment.

The ‘mere tool’ blind spot is especially important here.

The missing fourth category, or at least thing to highlight even if it is technically already covered, is that the local incentives will often be to turn things over to AI to pursue local objectives more efficiently, but in ways that cause humans to progressively lose control. Human control is a core principle listed in the document, but I don’t see the approach to retaining it here as viable, and the issue should appear more explicitly here in the risk section. This shift will also impact events in other ways that cause negative externalities we will find very difficult to ‘price in’ and deal with once the levels of friction involved are sufficiently reduced.

There need not be any ‘misalignment’ or ‘misuse.’ That everyone following the local incentives has led to overall success is a fortunate fact about how things have mostly worked up until now. It has depended on a bunch of facts about humans and the technologies available to them, and on how those humans have to operate and relate to each other. It has also depended on our ability to adjust things to fix the failure modes as we go, to ensure it continues to be true.

I want to highlight an important statement:

Like with any new technology, there will be disruptive effects, some that are inseparable from progress, some that can be managed well, and some that may be unavoidable.

Societies will have to find ways of democratically deciding about these trade-offs, and many solutions will require complex coordination and shared responsibility.

Each failure mode carries risks that range from already present to speculative, and from affecting one person to painful setbacks for humanity to irrecoverable loss of human thriving.

This downplays the situation, merely describing us as facing ‘trade-offs,’ although it correctly points to the stakes of ‘irrecoverable loss of human thriving,’ even if I wish the wording on that (e.g. ‘extinction’) was more blunt. And it once again fetishizes ‘democratic’ decisions, presumably with only humans voting, without thinking much about how to operationalize that or deal with the humans both being heavily AI influenced and not being equipped to make good decisions any other way.

The biggest thing, however, is to affirm that yes, we only have a chance if we have the ability to do complex coordination and share responsibility. We will need some form of coordination mechanism, that allows us to collectively steer the future away from worse outcomes towards better outcomes.

The problem is that somehow, there is a remarkably vocal Anarchist Caucus, who thinks that the human ability to coordinate is inherently awful and we need to destroy and avoid it at all costs. They call it ‘tyranny’ and ‘authoritarianism’ if you suggest that humans retain any ability to steer the future at all, asserting that the ability of humans to steer the future via any mechanism at all is a greater danger (‘concentration of power’) than all other dangers combined would be if we simply let nature take its course.

I strongly disagree, and wish people understood what such people were advocating for, and how extreme and insane a position it is both within and outside of AI, and to what extent it quite obviously cannot work, and inevitably ends with either us all getting killed or some force asserting control.

Coordination is hard.

Coordination, on the level we need it, might be borderline impossible. Indeed, many in the various forms of the Suicide Caucus argue that because Coordination is Hard, we should give up on coordination with ‘enemies,’ and therefore we must Fail Game Theory Forever and all race full speed ahead into the twirling razor blades.

I’m used to dealing with that.

I don’t know if I will ever get used to the position that Coordination is The Great Evil, even democratic coordination among allies, and must be destroyed. That because humans inevitably abuse power, humans must not have any power.

The result would be that humans would not have any power.

And then, quickly, there wouldn’t be humans.

They outline five core principles.

  1. Embracing Uncertainty: We treat safety as a science, learning from iterative deployment rather than just theoretical principles.

  2. Defense in Depth: We stack interventions to create safety through redundancy.

  3. Methods that Scale: We seek out safety methods that become more effective as models become more capable.

  4. Human Control: We work to develop AI that elevates humanity and promotes democratic ideals.

  5. Shared Responsibility: We view responsibility for advancing safety as a collective effort.

I’ll take each in turn.

Embracing uncertainty is vital. The question is, what helps you embrace it?

If you have sufficient uncertainty about the safety of deployment, then it would be very strange to ‘embrace’ that uncertainty by deploying anyway. That goes double, of course, for deployments that one cannot undo, or which are sufficiently powerful they might render you unable to undo them (e.g. they might escape control, exfiltrate, etc).

So the question is, when does releasing models and learning from them reduce uncertainty, and when does it increase uncertainty instead? And what other considerations are there, in both directions? They recognize that the calculus on this could flip in the future, as quoted below.

I am both sympathetic and cynical here. I think OpenAI’s iterative development is primarily a business case, the same as everyone else’s, but that right now that business case is extremely compelling. I do think for now the safety case supports that decision, but view that as essentially a coincidence.

In particular, my worry is that alignment and safety considerations are, along with other elements, headed towards a key phase change, in addition to other potential phase changes. They do address this under ‘methods that scale,’ which is excellent, but I think the problem is far harder and more fundamental than they recognize.

Some excellent quotes here:

Our approach demands hard work, careful decision-making, and continuous calibration of risks and benefits.

The best time to act is before risks fully materialize, initiating mitigation efforts as potential negative impacts — such as facilitation of malicious use-cases or the model deceiving its operator — begin to surface.

In the future, we may see scenarios where the model risks become unacceptable even relative to benefits. We’ll work hard to figure out how to mitigate those risks so that the benefits of the model can be realized. Along the way, we’ll likely test them in secure, controlled settings.

For example, making increasingly capable models widely available by sharing their weights should include considering a reasonable range of ways a malicious party could feasibly modify the model, including by finetuning (see our 2024 statement on open model weights).

Yes, if you release an open weights model you need to anticipate likely modifications including fine-tuning, and not pretend your mitigations remain in place unless you have a reason to expect them to remain in place. Right now, we do not expect that.

It’s (almost) never a bad idea to use defense in depth on top of your protocol.

My worry is that in a crisis, all relevant correlations go to 1.

As in, as your models get increasingly capable, if your safety and alignment training fails, then your safety testing will be increasingly unreliable, and it will be increasingly able to get around your inference time safety, monitoring, investigations and enforcement.

Its abilities to get around these four additional layers are all highly correlated with each other. The skills that get you around one mostly get you around the others. So this isn’t as much defense in depth as you would like it to be.

That doesn’t mean don’t do it. Certainly there are cases, especially involving misuse or things going out of distribution in strange but non-malicious ways, where you will be able to fail early, then recover later on. The worry is that when the stakes are high, that becomes a lot less likely, and you should think of this as maybe one effective ‘reroll’ at most rather than four.
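To make the correlation point concrete, here is a minimal sketch with made-up numbers (the per-layer catch rate is purely illustrative and not a figure from OpenAI’s document):

```python
# Minimal sketch (made-up numbers): how much "depth" four safety layers add
# depends on how correlated their failures are. Assume each layer would catch
# a given bad action 90% of the time.
p_layer_fails = 0.10

# If the layers fail independently, all four must fail for something to slip through.
p_slip_independent = p_layer_fails ** 4   # 0.0001, i.e. 1 in 10,000

# If a model capable enough to beat one layer mostly beats the others too
# (correlation near 1), the stack behaves roughly like a single layer.
p_slip_correlated = p_layer_fails         # 0.10, i.e. 1 in 10

print(f"independent layers: {p_slip_independent:.4%} slip through")
print(f"fully correlated:   {p_slip_correlated:.2%} slip through")
```

Correlated failure turns four nominal layers into something much closer to a single layer, which is the worry above.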

To align increasingly intelligent models, especially models that are more intelligent and powerful than humans, we must develop alignment methods that improve rather than break with increasing AI intelligence.

I am in violent agreement. The question is which methods will scale.

There are also two different levels at which we must ask what scales.

Does it scale as AI capabilities increase on the margin, right now? A lot of alignment techniques right now are essentially ‘have the AI figure out what you meant.’ On the margin right now, more intelligence and capability of the AI mean better answers.

Deliberative alignment is the perfect example of this. It’s great for mundane safety right now and will get better in the short term. Having the model think about how to follow your specified rules will improve as intelligence improves, as long as the goal of obeying your rules as written gets you what you want. However, if you apply too much optimization pressure and intelligence to any particular set of deontological rules as you move out of distribution, even under DWIM (do what I mean, or the spirit of the rules) I predict disaster.

In addition, under amplification, or attempts to move ‘up the chain’ of capabilities, I worry that you can hope to copy your understanding, but not to improve it. And as they say, if you make a copy of a copy of a copy, it’s not quite as sharp as the original.

I approve of everything they describe here (other than worries about the fetishization of democracy); please do all of it. But I don’t see how this allows humans to remain in effective control. These techniques are already hard to get right and aim to solve hard problems, but the full hard problems of control remain unaddressed.

Another excellent category, where they affirm the need to do safety work in public, to fund and support it (including government expertise), and to propose policy initiatives and make voluntary commitments.

There is definitely a lot of room for improvement in OpenAI and Sam Altman’s public facing communications and commitments.




Do these dual images say anything about your personality?

There’s little that Internet denizens love more than a snazzy personality test—cat videos, maybe, or perpetual outrage. One trend that has gained popularity over the last several years is personality quizzes based on so-called ambiguous images—in which one sees either a young girl or an old man, for instance, or a skull or a little girl. It’s possible to perceive both images by shifting one’s perspective, but it’s the image one sees first that is said to indicate specific personality traits. According to one such quiz, seeing the young girl first means you are optimistic and a bit impulsive, while seeing the old man first would mean one is honest, faithful, and goal-oriented.

But is there any actual science to back up the current fad? There is not, according to a paper published in the journal PeerJ, whose authors declare these kinds of personality quizzes to be a new kind of psychological myth. That said, they did find a couple of intriguing, statistically significant correlations they believe warrant further research.

In 1892, a German humor magazine published the earliest known version of the “rabbit-duck illusion,” in which one can see either a rabbit or a duck, depending on one’s perspective—i.e., multistable perception. There have been many more such images produced since then, all of which create ambiguity by exploiting certain peculiarities of the human visual system, such as playing with illusory contours and how we perceive edges.

Such images have long fascinated scientists and philosophers because they seem to represent different ways of seeing. So naturally there is a substantial body of research drawing parallels between such images and various sociological, biological, or psychological characteristics.

For instance, a 2010 study examined BBC archival data on the duck-rabbit illusion from the 1950s and found that men see the duck more often than women, while older people were more likely to see the rabbit. A 2018 study of the “younger-older woman” ambiguous image asked participants to estimate the age of the woman they saw in the image. Participants over 30 gave higher age estimates than younger participants. This was confirmed by a 2021 study, although that study also found no correlation between participants’ age and whether they were more likely to see the older or younger woman in the image.



Google’s AI-powered Pixel Sense app could gobble up all your Pixel 10 data

Google’s AI ambitions know no bounds. A new report claims Google’s next phones will herald the arrival of a feature called Pixel Sense that will ingest data from virtually every Google app on your phone, fueling a new personalized experience. This app could be the premier feature of the Pixel 10 series expected out late this year.

According to a report from Android Authority, Pixel Sense is the new name for Pixie, an AI that was supposed to integrate with Google Assistant before Gemini became the center of Google’s universe. In late 2023, it looked as though Pixie would be launched on the Pixel 9 series, but that never happened. Now, it’s reportedly coming back as Pixel Sense, and we have more details on how it might work.

Pixel Sense will apparently be able to leverage data you create in apps like Calendar, Gmail, Docs, Maps, Keep Notes, Recorder, Wallet, and almost every other Google app. It can also process media files like screenshots in the same way the Pixel Screenshots app currently does. The goal of collecting all this data is to help you complete tasks faster by suggesting content, products, and names by understanding the context of how you use the phone. Pixel Sense will essentially try to predict what you need without being prompted.

Samsung is pursuing an ostensibly similar goal with Now Brief, a new AI feature available on the Galaxy S25 series. Now Brief collects data from a handful of apps like Samsung Health, Samsung Calendar, and YouTube to distill your important data with AI. However, it rarely offers anything of use with its morning, noon, and night “Now Bar” updates.

Pixel Sense sounds like a more expansive version of this same approach to processing user data—and perhaps the fulfillment of Google Now’s decade-old promise. The supposed list of supported apps is much larger, and they’re apps people actually use. If pouring more and more data into a large language model leads to better insights into your activities, Pixel Sense should be better at guessing what you’ll need. Admittedly, that’s a big “if.”



Gemini Live will learn to peer through your camera lens in a few weeks

At Mobile World Congress, Google confirmed that a long-awaited Gemini AI feature it first teased nearly a year ago is ready for launch. The company’s conversational Gemini Live will soon be able to view live video and screen sharing, a feature Google previously demoed as Project Astra. When Gemini’s video capabilities arrive, you’ll be able to simply show the robot something instead of telling it.

Right now, Google’s multimodal AI can process text, images, and various kinds of documents. However, its ability to accept video as an input is spotty at best—sometimes it can summarize a YouTube video, and sometimes it can’t, for unknown reasons. Later in March, the Gemini app on Android will get a major update to its video functionality. You’ll be able to open your camera to provide Gemini Live a video stream or share your screen as a live video, thus allowing you to pepper Gemini with questions about what it sees.

Gemini Live with video.

It can be hard to keep track of which Google AI project is which—the 2024 Google I/O was largely a celebration of all things Gemini AI. The Astra demo made waves as it demonstrated a more natural way to interact with the AI. In the original video, which you can see below, Google showed how Gemini Live could answer questions in real time as the user swept a phone around a room. It had things to say about code on a computer screen, how speakers work, and a network diagram on a whiteboard. It even remembered where the user left their glasses from an earlier part of the video.



“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews

Perhaps because of the disappointing results, Altman had previously written that GPT-4.5 would be the last of OpenAI’s traditional AI models, with GPT-5 planned to be a dynamic combination of “non-reasoning” LLMs and simulated reasoning models like o3.

A stratospheric price and a tech dead-end

And about that price—it’s a doozy. GPT-4.5 costs $75 per million input tokens and $150 per million output tokens through the API, compared to GPT-4o’s $2.50 per million input tokens and $10 per million output tokens. (Tokens are chunks of data used by AI models for processing). For developers using OpenAI models, this pricing makes GPT-4.5 impractical for many applications where GPT-4o already performs adequately.

By contrast, OpenAI’s flagship reasoning model, o1, costs $15 per million input tokens and $60 per million output tokens—significantly less than GPT-4.5 despite offering specialized simulated reasoning capabilities. Even more striking, the o3-mini model costs just $1.10 per million input tokens and $4.40 per million output tokens, making it cheaper than even GPT-4o while providing much stronger performance on specific tasks.
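To put those per-million-token prices in perspective, here is a quick back-of-the-envelope sketch; the request size (10,000 input tokens, 1,000 output tokens) is an arbitrary illustration rather than a benchmark from the article:

```python
# Cost of a single hypothetical API request at the per-million-token prices
# quoted above: (input $/1M tokens, output $/1M tokens).
PRICES = {
    "gpt-4.5": (75.00, 150.00),
    "gpt-4o":  (2.50,  10.00),
    "o1":      (15.00, 60.00),
    "o3-mini": (1.10,  4.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request for the given token counts."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1_000_000 * in_price + output_tokens / 1_000_000 * out_price

for model in PRICES:
    print(f"{model:>8}: ${request_cost(model, 10_000, 1_000):.4f}")
```

At these rates, the same request costs about $0.90 on GPT-4.5 versus about $0.035 on GPT-4o, roughly a 26-fold difference.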

OpenAI has likely known about diminishing returns in training LLMs for some time. As a result, the company spent most of last year working on simulated reasoning models like o1 and o3, which use a different inference-time (runtime) approach to improving performance instead of throwing ever-larger amounts of training data at GPT-style AI models.

OpenAI’s self-reported benchmark results for the SimpleQA test, which measures confabulation rate. Credit: OpenAI

While this seems like bad news for OpenAI in the short term, competition is thriving in the AI market. Anthropic’s Claude 3.7 Sonnet has demonstrated vastly better performance than GPT-4.5, with a reportedly more efficient architecture. It’s worth noting that Claude 3.7 Sonnet is likely a system of AI models working together behind the scenes, although Anthropic has not provided details about its architecture.

For now, it seems that GPT-4.5 may be the last of its kind—a technological dead-end for an unsupervised learning approach that has paved the way for new architectures in AI models, such as o3’s inference-time reasoning and perhaps even something more novel, like diffusion-based models. Only time will tell how things end up.

GPT-4.5 is now available to ChatGPT Pro subscribers, with rollout to Plus and Team subscribers planned for next week, followed by Enterprise and Education customers the week after. Developers can access it through OpenAI’s various APIs on paid tiers, though the company is uncertain about its long-term availability.



Why Valve should make Half-Life 3 a SteamOS exclusive


The ultimate system seller

Opinion: Just as Half-Life 2 helped launch Steam, a sequel could help establish non-Windows PC gaming.

We found this logo hidden deep in an abandoned steel forge. Credit: Aurich Lawson | Steam

A little over 20 years ago, Valve was getting ready to release a new Half-Life game. At the same time, the company was trying to push Steam as a new option for players to download and update games over the Internet.

Requiring Steam in order to play Half-Life 2 led to plenty of grumbling from players in 2004. But the high-profile Steam exclusive helped build an instant user base for Valve’s fresh distribution system, setting it on a path to eventually become the unquestioned leader in the space. The link between the new game and the new platform helped promote a bold alternative to the retail game sales and distribution systems that had dominated PC gaming for decades.

Remember DVD-ROMs? Credit: Reddit

Today, all indications suggest that Valve is getting ready to release a new Half-Life game. At the same time, the company is getting ready to push SteamOS as a new option for third-party hardware makers and individual users to “download and test themselves.”

Requiring SteamOS to play Half-Life 3 would definitely lead to a lot of grumbling from players. But the high-profile exclusive could help build an instant user base for Valve’s fresh operating system, perhaps setting it on the path to become the unquestioned leader in the space. A link between the new game and the new platform could help promote a bold alternative to the Windows-based systems that have dominated PC gaming for decades.

Not another Steam Machine

Getting players to change the established platform they use to buy and play games (either in terms of hardware or software) usually requires some sort of instantly apparent benefit for the player. Those benefits can range from the tangible (e.g., an improved controller, better graphics performance) to the ancillary (e.g., social features, achievements) to the downright weird (e.g., a second screen on a portable). Often, though, a core reason why players switch platforms is for access to exclusive “system seller” games that aren’t available any other way.

Half-Life 2‘s role in popularizing early Steam shows just how much a highly anticipated exclusive can convince otherwise reluctant players to invest time and effort in a new platform. To see what can happen without such an exclusive, we only need to look to Valve’s 2015 launch of the Steam Machine hardware line, powered by the first version of the Linux-based SteamOS.

Valve offered players very little in the way of affirmative reasons to switch to a SteamOS-powered Steam Machine in 2015. Credit: Alienware

At the time, Valve was selling SteamOS mainly as an alternative to a new Windows 8 environment that Valve co-founder Gabe Newell saw as a “catastrophe” in the making for the PC gaming world. Newell described SteamOS as a “hedging strategy” against Microsoft’s potential ability to force all Windows 8 app distribution through the Windows Store, a la Apple’s total control of iPhone app distribution.

When Microsoft failed to impose that kind of hegemonic control over Windows apps and games, Valve was left with little else to convince players that it was worth buying a Windows-free Steam Machine (or going through the onerous process of installing the original SteamOS on their gaming rigs). Sure, using SteamOS meant saving a few bucks on a Windows license. But it also meant being stuck with an extremely limited library of Linux ports (especially when it came to releases from major publishers) and poor technical performance compared to Windows even when those ports were available.

Given those obvious downsides—and the lack of any obvious upsides—it’s no wonder that users overwhelmingly ignored SteamOS and Steam Machines at the time. But as we argued way back in 2013, a major exclusive on the scale of Half-Life 3 could have convinced a lot of gamers to overlook at least some of those downsides and give the new platform a chance.

A little push

Fast forward to today, and the modern version of SteamOS is in a much better place than the Steam Machine-era version ever was. That’s thanks in large part to Valve’s consistent work on the Proton compatibility layer, which lets the Linux-based SteamOS run almost any game that’s designed for Windows (with only a few major exceptions). That wide compatibility has been a huge boon for the Steam Deck, which offered many players easy handheld access to vast swathes of PC gaming for the first time. The Steam Deck also showed off SteamOS’s major user interface and user experience benefits over clunkier Windows-based gaming portables.

The Steam Deck served as an excellent proof of concept for the viability of SteamOS hardware with the gaming masses. Credit: Kyle Orland

Still, the benefits of switching from Windows to SteamOS might seem a bit amorphous to many players today. If Valve is really interested in pushing its OS as an alternative to Windows gaming, a big exclusive game is just the thing to convince a critical mass of players to make the leap. And when it comes to massive PC gaming exclusives, it doesn’t get much bigger than the long, long-awaited Half-Life 3.

We know it might sound ludicrous to suggest that Valve’s biggest game in years should ignore the Windows platform that’s been used by practically every PC gamer for decades. Keep in mind, though, that there would be nothing stopping existing Windows gamers from downloading and installing a free copy of the Linux-based SteamOS (likely on a separate drive or partition) to get access to Half-Life 3.

Yes, installing a new operating system (especially one based on Linux) is not exactly a plug-and-play process. But Valve has a long history of streamlining game downloads, updates, and driver installations through Steam itself. If anyone can make the process of setting up a new OS relatively seamless, it’s Valve.

And let’s not forget that millions of gamers already have easy access to SteamOS through Steam Deck hardware. Those aging Steam Decks might not be powerful enough to run a game like Half-Life 3 at maximum graphics settings, but Valve games have a history of scaling down well on low-end systems.

Valve’s leaked “Powered by SteamOS” initiative also seems poised to let third-party hardware makers jump in with more powerful (and more Half-Life 3-capable) desktops, laptops, and handhelds with SteamOS pre-installed. And that’s before we even consider the potential impact of a more powerful “Steam Deck 2,” which Valve’s Pierre-Loup Griffais said in 2023 could potentially come in “the next couple of years.”

Time for a bold move

Tying a major game like Half-Life 3 to a completely new and largely untested operating system would surely lead to some deafening pushback from gamers happy with the Windows-based status quo. An exclusive release could also be risky if SteamOS ends up showing some technical problems as it tries to grow past its Steam Deck roots (Linux doesn’t exactly have the best track record when it comes to things like game driver compatibility across different hardware).

The Lenovo Legion Go S will be the first non-Valve hardware to be officially “Powered by SteamOS.” A Windows-sporting version will be more expensive. Credit: Lenovo

Despite all that, we’re pretty confident that the vast majority of players interested in Half-Life 3 would jump through a few OS-related hoops to get access to the game. And many of those players would likely stick with Valve’s gaming-optimized OS going forward rather than spending money on another Windows license.

Even a timed exclusivity window for Half-Life 3 on SteamOS could push a lot of early adopters to see what all the fuss is about without excluding those who refuse to switch away from Windows. Failing even that, maybe a non-exclusive Half-Life 3 could be included as a pre-installed freebie with future versions of SteamOS, as an incentive for the curious to try out a new operating system.

With the coming wide release of SteamOS, Valve has a rare opportunity to upend the PC gaming OS dominance that Microsoft more or less stumbled into decades ago. A game like Half-Life 3 could be just the carrot needed to get PC gaming as a whole over its longstanding Windows dependence.


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.



Astroscale aced the world’s first rendezvous with a piece of space junk

Astroscale’s US subsidiary won a $25.5 million contract from the US Space Force in 2023 to build a satellite refueler that can hop around geostationary orbit. Like the ADRAS-J mission, this project is a public-private partnership, with Astroscale committing $12 million of its own money. In January, the Japanese government selected Astroscale for a contract worth up to $80 million to demonstrate chemical refueling in low-Earth orbit.

The latest win for Astroscale came Thursday, when the Japanese Ministry of Defense awarded the company a contract to develop a prototype satellite that could fly in geostationary orbit and collect information on other objects in the domain for Japan’s military and intelligence agencies.

“We are very bullish on the prospects for defense-related business,” said Nobu Matsuyama, Astroscale’s chief financial officer.

Astroscale’s other projects include a life extension mission for an unidentified customer in geostationary orbit, providing a similar service as Northrop Grumman’s Mission Extension Vehicle (MEV).

So, can Astroscale really do all of this? In an era of a militarized final frontier, it’s easy to see the usefulness of sidling up next to a “non-cooperative” satellite—whether it’s to refuel it, repair it, de-orbit it, inspect it, or (gasp!) disable it. Astroscale’s demonstration with ADRAS-J showed it can safely operate near another object in space without navigation aids, which is foundational to any of these applications.

So far, governments are driving demand for this kind of work.

Astroscale raised nearly $400 million in venture capital funding before going public on the Tokyo Stock Exchange last June. After quickly spiking to nearly $1 billion, the company’s market valuation has dropped to about $540 million as of Thursday. Astroscale has around 590 full-time employees across all its operating locations.

Matsuyama said Astroscale’s total backlog is valued at about 38.9 billion yen, or $260 million. The company is still in a ramp-up phase, reporting operating losses on its balance sheet and steep research and development spending that Matsuyama said should max out this year.

“We are the only company that has proved RPO technology for non-cooperative objects, like debris, in space,” Okada said last month.

“In simple terms, this means approach and capture of objects,” Okada continued. “This capability did not exist before us, but one’s mastering of this technology enables you to provide not only debris removal service, but also orbit correction, refueling, inspection, observation, and eventually repair and reuse services.”



Portal Randomized feels like playing Portal again for the first time

For most modern players, the worst thing about a video game classic like Portal is that you can never play it again for the first time. No matter how much time has passed since your last playthrough, those same old test chambers will feel a bit too familiar if you revisit them today.

Over the years, community mods like Portal Stories: Mel and Portal: Revolution have tried to fix this problem with extensive work on completely new levels and puzzles. Now, though, a much simpler mod is looking to recapture that “first time” feeling simply by adding random gameplay modifiers to Portal‘s familiar puzzle rooms.

The Portal Randomized demo recently posted on ModDB activates one of eight gameplay modifiers when you enter one of the game’s first two test chambers. The results, while still a little rough around the edges, show how much extra longevity can be wrung from simple tweaks to existing gameplay.

Make special note of that “gravity changed” note in the corner. Credit: Valve / gamingdominari

Not all of the Portal Randomized modifiers are instant winners. One that adds intermittent darkness, for instance, practically forces you to stand still for a few seconds until the lights flip back on (too slowly for my comfort). And modifiers like variable gravity or variable movement speed have a pretty trivial effect on how the game plays out, at least in the simplistic early test chambers in the demo.



Framework gives its 13-inch Laptop another boost with Ryzen AI 300 CPU update

Framework announced two new additions to its lineup today: the convertible Framework 12 and a gaming-focused (but not-very-upgradeable) mini ITX Framework Desktop PC. But it’s continuing to pay attention to the Framework Laptop 13, too—the company’s first upgrade-friendly, repairable laptop is getting another motherboard update, this time with AMD’s latest Ryzen AI 300-series processors. It’s Framework’s second AMD Ryzen-based board, following late 2023’s Ryzen 7040-based refresh.

The new boards are available for preorder today and will begin shipping in April. Buyers new to the Framework ecosystem can buy a laptop, which starts at $1,099 as a pre-built system with an OS, storage, and RAM included, or $899 for a build-it-yourself kit where you add those components yourself. Owners of Framework Laptops going all the way back to the original 11th-generation Intel version can also buy a bare board to drop into their existing systems; these start at $449.

Framework will ship six- and eight-core Ryzen AI 300 processors on lower-end configurations, most likely the Ryzen AI 5 340 and Ryzen AI 7 350 that AMD announced at CES in January. These chips include integrated Radeon 840M and 860M GPUs with four and eight graphics cores, respectively.

People who want to use the Framework Laptop as a thin-and-light portable gaming system will want to go for the top-tier Ryzen AI 9 HX 370, which includes 12 CPU cores and a Radeon 890M with 16 GPU cores. We’ve been impressed by this chip’s performance when we’ve seen it in other systems, though Framework’s may be a bit slower because it’s using slower socketed DDR5 memory instead of soldered-down RAM. This is a trade-off that Framework’s target customers are likely to be fine with.

The Ryzen AI 300-series motherboard. Framework says an updated heatpipe design helps to keep things cool. Credit: Framework

One of the issues with the original Ryzen Framework board was that the laptop’s four USB-C ports didn’t all support the same kinds of expansion cards, limiting the laptop’s customizability somewhat. That hasn’t totally gone away with the new version—the two rear USB ports support full 40Gbps USB4 speeds, while the front two are limited to 10Gbps USB 3.2—but all four ports do support display output instead of just three.



How North Korea pulled off a $1.5 billion crypto heist—the biggest in history

The cryptocurrency industry and those responsible for securing it are still in shock following Friday’s heist, likely by North Korea, that drained $1.5 billion from Dubai-based exchange Bybit, making the theft by far the biggest ever in digital asset history.

Bybit officials disclosed the theft of more than 400,000 ethereum and staked ethereum coins just hours after it occurred. The notification said the digital loot had been stored in a “Multisig Cold Wallet” when, somehow, it was transferred to one of the exchange’s hot wallets. From there, the cryptocurrency was transferred out of Bybit altogether and into wallets controlled by the unknown attackers.

This wallet is too hot, this one is too cold

Researchers for blockchain analysis firm Elliptic, among others, said over the weekend that the techniques and flow of the subsequent laundering of the funds bear the signature of threat actors working on behalf of North Korea. The revelation comes as little surprise since the isolated nation has long maintained a thriving cryptocurrency theft racket, in large part to pay for its weapons of mass destruction program.

Multisig cold wallets, also known as multisig safes, are among the gold standards for securing large sums of cryptocurrency. More shortly about how the threat actors cleared this tall hurdle. First, a little about cold wallets and multisig cold wallets and how they secure cryptocurrency against theft.

Wallets are accounts that use strong encryption to store bitcoin, ethereum, or any other form of cryptocurrency. Often, these wallets can be accessed online, making them useful for sending or receiving funds from other Internet-connected wallets. Over the past decade, these so-called hot wallets have been drained of digital coins supposedly worth billions, if not trillions, of dollars. Typically, these attacks have resulted from the thieves somehow obtaining the private key and emptying the wallet before the owner even knows the key has been compromised.
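As a rough sketch of the multisig idea, the toy example below checks a k-of-n approval threshold; the signer names and the 2-of-3 scheme are hypothetical, and a real multisig safe enforces the rule with cryptographic signatures and on-chain smart-contract logic rather than a simple membership check:

```python
# Toy illustration of a k-of-n multisig rule: a withdrawal is only authorized
# when at least THRESHOLD distinct, recognized signers have approved it.
# Real multisig safes verify cryptographic signatures on-chain; this sketch
# only checks set membership.
SIGNERS = {"alice", "bob", "carol"}   # hypothetical keyholders
THRESHOLD = 2                         # hypothetical 2-of-3 scheme

def withdrawal_authorized(approvals: set[str]) -> bool:
    """Return True if enough recognized signers approved the withdrawal."""
    return len(approvals & SIGNERS) >= THRESHOLD

print(withdrawal_authorized({"alice"}))             # False: one signature is not enough
print(withdrawal_authorized({"alice", "carol"}))    # True: threshold met
print(withdrawal_authorized({"alice", "mallory"}))  # False: unrecognized signer doesn't count
```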



Flashy exotic birds can actually glow in the dark

Found in the forests of Papua New Guinea, Indonesia, and Eastern Australia, birds of paradise are famous for flashy feathers and unusually shaped ornaments, which set the standard for haute couture among birds. Many use these feathers for flamboyant mating displays in which they shape-shift into otherworldly forms.

As if this didn’t attract enough attention, we’ve now learned that they also glow in the dark.

Biofluorescent organisms are everywhere, from mushrooms to fish to reptiles and amphibians, but few birds have been identified as having glowing feathers. This is why biologist Rene Martin of the University of Nebraska-Lincoln wanted to investigate. She and her team studied a treasure trove of specimens at the American Museum of Natural History, which have been collected since the 1800s, and found that 37 of the 45 known species of birds of paradise have feathers that fluoresce.

The glow factor of birds of paradise is apparently important for mating displays. Despite biofluorescence being especially prominent in males, attracting a mate might not be all it is useful for, as these birds might also use it to signal to each other in other ways and sometimes even for camouflage among the light and shadows.

“The current very limited number of studies reporting fluorescence in birds suggests this phenomenon has not been thoroughly investigated,” the researchers said in a study that was recently published in Royal Society Open Science.

Glow-up

How do they get that glow? Biofluorescence is a phenomenon that happens when shorter, high-energy wavelengths of light, meaning UV, violet, and blue, are absorbed by an organism. The energy then gets re-emitted at longer, lower-energy wavelengths—greens, yellows, oranges, and reds. The feathers of birds of paradise contain fluorophores, molecules that undergo biofluorescence. Specialized filters in the light-sensitive cells of their eyes make their visual system more sensitive to biofluorescence.
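To see why the re-emitted glow always sits at longer wavelengths, a quick back-of-the-envelope calculation using E = hc/λ helps; the specific wavelengths below are illustrative rather than measurements from the study:

```python
# Photon energy E = h*c / wavelength: the blue light a fluorophore absorbs
# carries more energy than the green light it re-emits, so emission is always
# shifted toward longer, lower-energy wavelengths.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"450 nm (blue, absorbed): {photon_energy_ev(450):.2f} eV")
print(f"530 nm (green, emitted): {photon_energy_ev(530):.2f} eV")
```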
