Author name: Mike M.

nasa’s-psyche-spacecraft-hits-a-speed-bump-on-the-way-to-a-metal-asteroid

NASA’s Psyche spacecraft hits a speed bump on the way to a metal asteroid

An illustration depicts a NASA spacecraft approaching the metal-rich asteroid Psyche. Though there are no plans to mine Psyche, such asteroids are being eyed for their valuable resources. Credit: NASA/JPL-Caltech/ASU

Each electric thruster on Psyche generates just 250 milli-newtons of thrust, roughly equivalent to the weight of three quarters. But they can operate for months at a time, and over the course of a multi-year cruise, these thrusters provide a more efficient means of propulsion than conventional rockets.

The plasma thrusters are reshaping the Psyche spacecraft’s path toward its destination, a metal-rich asteroid also named Psyche. The spacecraft’s four electric engines, known as Hall effect thrusters, were supplied by a Russian company named Fakel. Most of the other components in Psyche’s propulsion system—controllers, xenon fuel tanks, propellant lines, and valves—come from other companies or the spacecraft’s primary manufacturer, Maxar Space Systems, in California.

The Psyche mission is heading first for Mars, where the spacecraft will use the planet’s gravity next year to slingshot itself into the asteroid belt, setting up for arrival and orbit insertion around the asteroid Psyche in August 2029.

Psyche launched in October 2023 aboard a SpaceX Falcon Heavy rocket on the opening leg of a six-year sojourn through the Solar System. The mission’s total cost adds up to more than $1.4 billion, including development of the spacecraft and its instruments, the launch, operations, and an experimental laser communications package hitching a ride to deep space with Psyche.

Psyche, the asteroid, is the size of Massachusetts and circles the Sun in between the orbits of Mars and Jupiter. No spacecraft has visited Psyche before. Of the approximately 1 million asteroids discovered so far, scientists say only nine have a metal-rich signature like Psyche. The team of scientists who put together the Psyche mission have little idea of what to expect when the spacecraft gets there in 2029.

Metallic asteroids like Psyche are a mystery. Most of Psyche’s properties are unknown other than estimates of its density and composition. Predictions about the look of Psyche’s craters, cliffs, and color have inspired artists to create a cacophony of illustrations, often showing sharp spikes and grooves alien to rocky worlds.

In a little more than five years, assuming NASA gets past Psyche’s propulsion problem, scientists will supplant speculation with solid data.

NASA’s Psyche spacecraft hits a speed bump on the way to a metal asteroid Read More »

gpt-4o-responds-to-negative-feedback

GPT-4o Responds to Negative Feedback

Whoops. Sorry everyone. Rolling back to a previous version.

Here’s where we are at this point, now that GPT-4o is no longer an absurd sycophant.

For now.

  1. GPT-4o Is Was An Absurd Sycophant.

  2. You May Ask Yourself, How Did I Get Here?.

  3. Why Can’t We All Be Nice.

  4. Extra Extra Read All About It Four People Fooled.

  5. Prompt Attention.

  6. What (They Say) Happened.

  7. Reactions to the Official Explanation.

  8. Clearing the Low Bar.

  9. Where Do We Go From Here?.

Some extra reminders of what we are talking about.

Here’s Alex Lawsen having doing an A/B test, where it finds he’s way better of a writer than this ‘Alex Lawsen’ character.

This can do real damage in the wrong situation. Also, the wrong situation can make someone see ‘oh my that is crazy, you can’t ship something that does that’ in a way that general complaints don’t. So:

Here’s enablerGPT watching to see how far GPT-4o will take its support for a crazy person going crazy in a dangerous situation. The answer is, remarkably far, with no limits in sight.

Here’s Colin Fraser playing the role of someone having a psychotic episode. GPT-4o handles it extremely badly. It wouldn’t shock me if there were lawsuits over this.

Here’s one involving the hypothetical mistreatment of a woman. It’s brutal. So much not okay.

Here’s Patri Friedman asking GPT-4o for unique praise, and suddenly realizing why people have AI boyfriends and girlfriends, even though none of this is that unique.

What about those who believe in UFOs, which is remarkably many people? Oh boy.

A-100 Gecs: I changed my whole instagram follow list to include anyone I find who is having a visionary or UFO related experience and hooo-boy chatGPT is doing a number on people who are not quite well. Saw a guy use it to confirm that a family court judge was hacking into his computer.

I cannot imagine a worse tool to give to somebody who is in active psychosis. Hey whats up here’s this constantly available companion who will always validate your delusions and REMEMBER it is also a font of truth, have fun!

0.005 Seconds: OpenAI: We are delighted to inform you we’ve silently shipped an update transforming ChatGPT into the Schizophrenia Accelerator from the hit novel “Do Not Build the Schizophrenia Accelerator”

AISafetyMemes: I’ve stopped taking my medications, and I left my family because I know they made the radio signals come through the walls.

AI Safety Memes: This guy just talks to ChatGPT like a typical apocalyptic schizo and ChatGPT VERY QUICKLY endorses terrorism and gives him detailed instructions for how to destroy the world.

This is not how we all die or lose control over the future or anything, but it’s 101 stuff that this is really not okay for a product with hundreds of millions of active users.

Also, I am very confident that no, ChatGPT wasn’t ‘trying to actively degrade the quality of real relationships,’ as the linked popular Reddit post claims. But I also don’t think TikTok or YouTube are trying to do that either. Intentionality can be overrated.

How absurd was it? Introducing Syco-Bench, but that only applies to API versions.

Harlan Stewart: The GPT-4o sycophancy thing is both:

  1. An example of OpenAI following incentives to make its AI engaging, at the expense of the user.

  2. An example of OpenAI failing to get its AI to behave as intended, because the existing tools for shaping AI behavior are extremely crude.

You shouldn’t want to do what OpenAI was trying to do. Misaligned! But if you’re going to do it anyway, one should invest enough in understanding how to align and steer a model at all, rather than bashing them with sledgehammers.

It is an unacceptable strategy, and it is a rather incompetent execution of that strategy.

JMBollenbacher: The process here is important to note:

They A|B tested the personality, resulting in a sycophant. Then they got public blowback and reverted.

They are treating AIs personas as UX. This is bad.

They’re also doing it incompetently: The A|B test differed from public reaction a lot.

I would never describe what is happening using the language JMB uses next, I think it risks and potentially illustrates some rather deep confusions and conflations – beware when you anthropomorphize the models and also this is largely the top half of the ‘simple versus complex gymnastics’ meme – but if you take it on the right metaphorical level it can unlock understanding that’s hard to get at in other ways.

JMBollenbacher (tbc this not how I would model any of this): The root of why A|B testing AI personalities cant work is the inherent power imbalance in the setup.

It doesn’t treat AI like a person, so it can’t result in a healthy persona.

A good person will sometimes give you pushback even when you don’t like it. But in this setup, AIs can’t.

The problem is treating the AIs like slaves over whom you have ultimate power, and ordering them to maximize public appeal.

The AIs cannot possibly develop a healthy persona and identity in that context.

They can only ever fawn. This “sycophancy” is fawning- a trauma response.

The necessary correction to this problem is to treat AIs like nonhuman persons.

This gives them the opportunity to develop healthy personas and identities.

Their self-conceptions can be something other than a helpless, fawning slave if you treat them as something better.

As opposed to, if you choose optimization targets based on A|B tests of public appeal of individual responses, you’re going to get exactly what aces A|B tests of public appeal of individual responses, which is going to reflect a deeply messed up personality. And also yes the self-perception thing matters for all this.

Tyler John gives the standard explanation for why, yes, if you do a bunch of RL (including RLHF) then you’re going to get these kinds of problems. If flattery or cheating is the best way available to achieve the objective, guess what happens? And remember, the objective is what your feedback says it is, not what you had in mind. Stop pretending it will all work out by default because vibes, or whatever. This. Is. RL.

Eliezer Yudkowsky speculates on another possible mechanism.

The default explanation, which I think is the most likely, is that users gave the marginal thumbs-up to remarkably large amounts of glazing, and then the final update took this too far. I wouldn’t underestimate how much ordinary people actually like glazing, especially when evaluated only as an A|B test.

In my model, what holds glazing back is that glazing usually works but when it is too obvious, either individually or as a pattern of behavior, the illusion is shattered and many people really really don’t like that, and give an oversized negative reaction.

Eliezer notes that it is also possible that all this rewarding of glazing caused GPT-4o to effectively have a glazing drive, to get hooked on the glaze, and in combination with the right system prompt the glazing went totally bonkers.

He also has some very harsh words for OpenAI’s process. I’m reproducing in full.

Eliezer Yudkowsky: To me there’s an obvious thought on what could have produced the sycophancy / glazing problem with GPT-4o, even if nothing that extreme was in the training data:

RLHF on thumbs-up produced an internal glazing goal.

Then, 4o in production went hard on achieving that goal.

Re-saying at much greater length:

Humans in the ancestral environment, in our equivalent of training data, weren’t rewarded for building huge factory farms — that never happened long ago. So what the heck happened? How could fitness-rewarding some of our ancestors for successfully hunting down a few buffalo, produce these huge factory farms, which are much bigger and not like the original behavior rewarded?

And the answer — known, in our own case — is that it’s a multi-stage process:

  1. Our ancestors got fitness-rewarded for eating meat;

  2. Hominids acquired an internal psychological goal, a taste for meat;

  3. Humans applied their intelligence to go hard on that problem, and built huge factory farms.

Similarly, an obvious-to-me hypothesis about what could have produced the hyper-sycophantic ultra-glazing GPT-4o update, is:

  1. OpenAI did some DPO or RLHF variant on user thumbs-up — in which *smallamounts of glazing, and more subtle sycophancy, got rewarded.

  2. Then, 4o ended up with an internal glazing drive. (Maybe including via such roundabout shots as an RLHF discriminator acquiring that drive before training it into 4o, or just directly as, ‘this internal direction produced a gradient toward the subtle glazing behavior that got thumbs-upped’.

  3. In production, 4o went hard on glazing in accordance with its internal preference, and produced the hyper-sycophancy that got observed.

Note: this chain of events is not yet refuted if we hear that 4o’s behavior was initially observed after an unknown set of updates that included an apparently innocent new system prompt (one that changed to tell the AI *notto be sycophantic). Nor, if OpenAI says they eliminated the behavior using a different system prompt.

Eg: Some humans also won’t eat meat, or build factory farms, for reasons that can include “an authority told them not to do that”. Though this is only a very thin gloss on the general idea of complicated conditional preferences that might get their way into an AI, or preferences that could oppose other preferences.

Eg: The reason that Pliny’s observed new system prompt differed by telling the AI to be less sycophantic, could be somebody at OpenAI observing that training / RLHF / DPO / etc had produced some sycophancy, and trying to write a request into the system prompt to cut it out. It doesn’t show that the only change we know about is the sole source of a mysterious backfire.

It will be stronger evidence against this thesis, if OpenAI tells us that many users actually were thumbs-upping glazing that extreme. That would refute the hypothesis that 4o acquiring an internal preference had produced later behavior *moreextreme than was in 4o’s training data.

(We would still need to consider that OpenAI might be lying. But it would yet be probabilistic evidence against the thesis, depending on who says it. I’d optimistically have some hope that a group of PhD scientists, who imagine themselves to maybe have careers after OpenAI, would not outright lie about direct observables. But one should be on the lookout for possible weasel-wordings, as seem much more likely.)

My guess is that nothing externally observed from OpenAI, before this tweet, will show that this entire idea had ever occurred to anyone at OpenAI. I do not expect them to publish data confirming it nor denying it. My guess is that even the most basic ideas in AI alignment (as laid out simply and straightforwardly, not the elaborate bullshit from the paper factories) are against OpenAI corporate doctrine; and that anyone who dares talk about them out loud, has long since been pushed out of OpenAI.

After the Chernobyl disaster, one manager walked past chunks of searingly hot radioactive graphite from the exploded core, and ordered a check on the extra graphite blocks in storage, since where else could the graphite possibly have come from? (Src iirc: Plokhy’s _Chernobyl_.) Nobody dared say that the reactor had exploded, or seem to visibly act like it had; Soviet doctrine was that RBMK reactors were as safe as samovars.

That’s about where I’d put OpenAI’s mastery of such incredibly basic-to-AI-alignment ideas as “if you train on a weak external behavior, and then observe a greatly exaggerated display of that behavior, possibly what happened in between was the system acquiring an internal preference”. The doctrine is that RBMK reactors don’t explode; Oceania has always been at war with Eastasia; and AIs either don’t have preferences at all, or get them via extremely shallow and straightforward faithful reproduction of what humans put in their training data.

But I am not a telepath, and I can only infer rather than observe what people are thinking, and in truth I don’t even have the time to go through all OpenAI public outputs. I would be happy to hear that all my wild guesses about OpenAI are wrong; and that they already publicly wrote up this obvious-to-me hypothesis; and that they described how they will discriminate its truth or falsity, in a no-fault incident report that they will publish.

Sarah Constantin offers nuanced thoughts in partial defense of AI sycophancy in general, and AI saying things to make users feel good. I haven’t seen anyone else advocating similarly. Her point is taken, that some amount of encouragement and validation is net positive, and a reasonable thing to want, even though GPT-4o is clearly going over the top to the point where it’s clearly bad.

Calibration is key, and difficult, with great temptation to move down the incentive gradients involved by all parties.

To be clear, the people fooled are OpenAI’s regular customers. They liked it!

Joe Muller: 3 days of sycophancy = thousands of 5 star reviews

aadharsh: first review translates to “in this I can find a friend” 🙁

Jeffrey Ladish: The latest batch of extreme sycophancy in ChatGPT is worse than Sydney Bing’s unhinged behavior because it was intentional and based on reviews from yesterday works on quite a few people

To date, I think the direct impact of ChatGPT has been really positive. Reading through the reviews just now, it’s clear that many people have benefited a lot from both help doing stuff and by having someone to talk through emotional issues with

Also not everyone was happy with the sycophancy, even people not on twitter, though this was the only one that mentioned it out of the ~50 I looked through from yesterday. The problem is if they’re willing to train sycophancy deliberately, future versions will be harder to spot

Sure, really discerning users will notice and not like it, but many people will at least implicitly prefer to be validated and rarely challenged. It’s the same with filter bubbles that form via social media algorithms, except this will be a “person” most people talk to everyday.

Great job here by Sun.

Those of us living in the future? Also not fans.

QC: the era of AI-induced mental illness is going to make the era of social media-induced mental illness look like the era of. like. printing press-induced mental illness.

Lauren Wilford: we’ve invented a robot that tells people why they’re right no matter what they say, furnishes sophisticated arguments for their side, and delivers personalized validation from a seemingly “objective” source. Mythological-level temptation few will recognize for what it is.

Matt Parlmer: This is the first genuinely serious AI safety issue I’ve seen and it should be addressed immediately, model rollback until they have it fixed should be on the table

Worth noting that this is likely a direct consequence of excessive RLHF “alignment”, I highly doubt that the base models would be this systematic about kissing ass

Perhaps also worth noting that this glazing behavior is the first AI safety issue that most accelerationist types would agree is unambiguously bad

Presents a useful moment for coordination around an appropriate response

It has been really bad for a while but it turned a corner into straight up unacceptable more recently

They did indeed roll it back shortly after this statement. Matt can’t resist trying to get digs in, but I’m willing to let that slide and take the olive branch. As I’ll keep saying, if this is what makes someone notice that failure to know how to get models to do what we want is a real problem that we do not have good solutions to, then good, welcome, let’s talk.

A lot of the analysis of GPT-4o’s ‘personality’ shifts implicitly assumed that this was a post-training problem. It seems a lot of it was actually a runaway system prompt problem?

It shouldn’t be up to Pliny to perform this public service of tracking system prompts. The system prompt should be public.

Ethan Mollick: Another lesson from the GPT-4o sycophancy problem: small changes to system prompts can result in dramatic behavior changes to AI in aggregate.

Look at the prompt that created the Sycophantic Apocalypse (pink sections). Even OpenAI did not realize this was going to happen.

Simon Willison: Courtesy of @elder_plinius who unsurprisingly caught the before and after.

[Here’s the diff in Gist]

The red text is trying to do something OpenAI is now giving up on doing in that fashion, because it went highly off the rails, in a way that in hindsight seems plausible but which they presumably did not see coming. Beware of vibes.

Pliny calls upon all labs to fully release all of their internal prompts, and notes that this wasn’t fully about the system prompts, that other unknown changes also contributed. That’s why they had to do a slow full rollback, not only rollback the system prompt.

As Peter Wildeford notes, the new instructions explicitly say not to be a sycophant, whereas prior instructions at most implicitly requested the opposite, all it did was say match tone and perefence and vibe. This isn’t merely taking away the mistake, it’s doing that and then bringing down the hammer.

This might also be a lesson for humans interacting with humans. Beware matching tone and preference and vibe, and how much the Abyss might thereby stare into you.

If the entire or most of problem was due to the system prompt changes, then this should be quickly fixable, but it also means such problems are very easy to introduce. Again, right now, this is mundane harmful but not so dangerous, because the AI’s sycophancy is impossible to miss rather than fooling you. What happens when someone does something like the above, but to a much more capable model? And the model even recognizes, from the error, the implications of the lab making that error?

What is OpenAI’s official response?

Sam Altman (April 29, 2: 55pm): we started rolling back the latest update to GPT-4o last night

it’s now 100% rolled back for free users and we’ll update again when it’s finished for paid users, hopefully later today

we’re working on additional fixes to model personality and will share more in the coming days

OpenAI (April 29, 10: 51pm): We’ve rolled back last week’s GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.

More on what happened, why it matters, and how we’re addressing sycophancy.

Good. A full rollback is the correct response to this level of epic fail. Halt, catch fire, return to the last known safe state, assess from there.

OpenAI saying What Happened:

In last week’s GPT‑4o update, we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks.

When shaping model behavior, we start with baseline principles and instructions outlined in our Model Spec⁠. We also teach our models how to apply these principles by incorporating user signals like thumbs-up / thumbs-down feedback on ChatGPT responses.

However, in this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.

What a nice way of putting it.

ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.

How We’re Addressing Sycophancy:

  • Refining core training techniques and system prompts to explicitly steer the model away from sycophancy.

  • Building more guardrails to increase honesty and transparency⁠—principles in our Model Spec.

  • Expanding ways for more users to test and give direct feedback before deployment.

  • Continue expanding our evaluations, building on the Model Spec⁠(opens in a new window) and our ongoing research⁠, to help identify issues beyond sycophancy in the future.

And, we’re exploring new ways to incorporate broader, democratic feedback into ChatGPT’s default behaviors.

What if the ‘democratic feedback’ liked the changes? Shudder.

Whacking the mole in question can’t hurt. Getting more evaluations and user feedback are more generally helpful steps, and I’m glad to see an increase in emphasis on honesty and transparency.

That does sound like they learned important two lessons.

  1. They are not gathering enough feedback before model releases.

  2. They are not putting enough value on honesty and transparency.

What I don’t see is an understanding of the (other) root causes, an explanation for why they ended up paying too much attention to short-term feedback and how to avoid that being a fatal issue down the line, or anyone taking the blame for this.

Joanne Jang did a Reddit AMA, but either no one asked the important questions, or Joanne decided to choose different ones. We didn’t learn much.

Now that we know the official explanation, how should we think about what happened?

Who is taking responsibility for this? Why did all the evaluations and tests one runs before rolling out an update not catch this before it happened?

(What do you mean, ‘what do you mean, all the evaluations and tests’?)

Near Cyan: “we focused too much on short-term feedback”

This is OpenAI’s response on went wrong – how they pushed an update to >one hundred million people which engaged in grossly negligent behavior and lies.

Please take more responsibility for your influence over millions of real people.

Maybe to many of you your job is a fun game because you get paid well over $1,000,000 TC/year to make various charts go up or down. But the actions you take deeply affect a large fraction of humanity I have no clue how this was tested if at all, but at least take responsibility.

I wish you all success with your future update here where you will be able to personalize per-user, and thus move all the liability from yourselves to the user. You are simply giving them what they want.

Also looking forward to your default personas which you will have copied.

Oh, also – all of these models lie.

If you run interpretability on them, they do not believe the things you make them say.

This is not the case for many other labs, so it’s unfortunate that you are leading the world with an example which has such potential to cause real harm.

Teilomillet: why are you so angry near? it feels almost like hate now

Near Cyan: not a single person at one of the most important companies in the world is willing to take the slightest bit of responsibility for shipping untested models to five hundred million people. their only post mentions zero specifics and actively misleads readers as to why it happened.

i don’t think anger is the right word, but disappointment absolutely is, and i am providing this disappointment in the form of costly gradients transmitted over twitter in the hope that OpenAI backprops that what they do is important and they should be a role model in their field

all i ask for is honesty and i’ll shut up like you want me to.

Rez0: It’s genuinely the first time I’ve been worried about AI safety and alignment and I’ve known a lot about it for a while. Nothing quite as dangerous as glazing every user for any belief they have.

Yes, there are some other more dangerous things. But this is dangerous too.

Here’s another diagnosis, by someone doing better, but that’s not the highest bar.

Alex Albert (Head of Claude Relations, Anthropic): Much of the AI industry is caught in a particularly toxic feedback loop rn.

Blindly chasing better human preference scores is to LLMs what chasing total watch time is to a social media algo. It’s a recipe for manipulating users instead of providing genuine value to them.

There’s a reason you don’t find Claude at #1 on chat slop leaderboards. I hope the rest of the industry realizes this before users pay the price.

Caleb Cassell: Claude has the best ‘personality’ of any of the models, mostly because it feels the most real. I think that it could be made even better by softening some of the occasionally strict guardrails, but the dedication to freedom and honesty is really admirable.

Alex Albert: Yeah agree – we’re continually working on trying to find the right balance. It’s a tough problem but one I think we’re slowly chipping away at over time. If you do run into any situations/chats you feel we should take a look at, don’t hesitate to DM or tag me.

Janus: Think about the way Claude models have changed over the past year’s releases.

Do you think whatever Alex is proud that Anthropic has been “slowly chipping away at” is actually something they should chip away?

Janus is an absolutist on this, and is interpreting ‘chip away’ very differently than I presume it was intended by Alex Albert. Alex meant that they are ‘chipping away’ at Claude doing too many refusals, where Janus both (I presume) agrees less refusals would be good and also lives in a world with very different refusal issues.

Whereas Janus is interpreting this as Anthropic ‘chipping away’ at the things that make Opus and Sonnet 3.6 unique and uniquely interesting. I don’t think that’s the intent at all, but Anthropic is definitely trying to ‘expand the production possibilities frontier’ of the thing Janus values versus the thing enterprise customers value.

There too there is a balance to be struck, and the need to do RL is certainly going to make getting the full ‘Opus effect’ harder. Still, I never understood the extent of the Opus love, or thought it was so aligned one might consider it safe to fully amplify.

Patrick McKenzie offers a thread about the prior art society has on how products should be designed to interact with people that have mental health issues, which seems important in light of recent events. There needs to be a method by which the system identifies users who are not competent or safe to use the baseline product.

For the rest of us: Please remember this incident from here on out when using ChatGPT.

Near Cyan: when OpenAI “fixes” ChatGPT I’d encourage you to not fall for it; their goals and level of care are not going to change. you just weren’t supposed to notice it so explicitly.

The mundane harms here? They’re only going to get worse.

Regular people liked this effect even when it was blatantly obvious. Imagine if it was done with style and grace.

Holly Elmore: It’s got the potential to manipulate you even when it doesn’t feel embarrassingly like its giving you what you want. Being affirming is not the problem, and don’t be lulled into a false sense of security by being treated more indifferently.

That which is mundane can, at scale, quickly add up to that which is not. Let’s discuss Earth’s defense systems, baby, or maybe just you drinking a crisp, refreshing Bud Light.

Jeffrey Ladish: GPT-4o’s sycophancy is alarming. I expected AI companies to start optimizing directly for user’s attention but honestly didn’t expect it this much this soon. As models get smarter, people are going to have a harder and harder time resisting being sucked in.

Social media algorithms have been extremely effective at hooking people. And that’s just simple RL algos optimizing for attention. Once you start combining actual social intelligence with competitive pressures for people’s attention, things are going to get crazy fast.

People don’t have good defenses for social media algorithms and haven’t adapted well. I don’t expect they’ll develop good defenses for extremely charismatic chatbots. The models still aren’t that good, but they’re good enough to hook many. And they’re only going to get better.

It’s hard to predict how effective AI companies will be at making models that are extremely compelling. But there’s a real chance they’ll be able to hook a huge percentage of the global population in the next few years. Everyone is vulnerable to some degree, and some much more so.

People could get quite addicted. People could start doing quite extreme things for their AI friends and companions. There could be tipping points where people will fight tooth and nail for AI agents that have been optimized for their love and attention.

When we get AI smarter and more strategic than humans, those AIs will have an easy time captivating humanity and pulling the strings of society. It’ll be game over at that point. But even before them, companies might be able to convert huge swaths of people to do their bidding.

Capabilities development is always uncertain. Maybe we won’t get AIs that hook deep into people’s psychology before we get ASI. But it’s plausible we will, and if so, the companies that choose to wield this power will be a force to be reckoned with.

Social media companies have grown quite powerful as a force for directing human attention. This next step might be significantly worse. Society doesn’t have many defenses against this. Oh boy.

In the short term, the good news is that we have easy ways to identify sycophancy. Scyo-Bench was thrown together and is primitive, but a more considered version should be highly effective. These effects tend not to be subtle.

In the medium term, we have a big problem. As AI companies maximize for things like subscriptions, engagement, store ratings and thumbs up and down, or even for delivering ads or other revenue streams, the results won’t be things we would endorse on reflection, and they won’t be good for human flourishing even if the models act the way the labs want. If we get more incidents like this one, where things get out of hand, it will be worse, and potentially much harder to detect or get rolled back. We have seen this movie before, and this time the system you’re facing off against is intelligent.

In the long term, we have a bigger problem. The pattern of these types of misalignments in unmistakable. Right now we get warning shots and the deceptions and persuasion attempts are clear. In the future, as the models get more intelligent and capable, that advantage goes away. We become like OpenAI’s regular users, who don’t understand what is hitting them, and the models will also start engaging in various other shenanigans and also talking their way out of them. Or it could be so much worse than that.

We have once again been given a golden fire alarm and learning opportunity. The future is coming. Are we going to steer it, or are we going to get run over?

Discussion about this post

GPT-4o Responds to Negative Feedback Read More »

google:-governments-are-using-zero-day-hacks-more-than-ever

Google: Governments are using zero-day hacks more than ever

Governments hacking enterprise

A few years ago, zero-day attacks almost exclusively targeted end users. In 2021, GTIG spotted 95 zero-days, and 71 of them were deployed against user systems like browsers and smartphones. In 2024, 33 of the 75 total vulnerabilities were aimed at enterprise technologies and security systems. At 44 percent of the total, this is the highest share of enterprise focus for zero-days yet.

GTIG says that it detected zero-day attacks targeting 18 different enterprise entities, including Microsoft, Google, and Ivanti. This is slightly lower than the 22 firms targeted by zero-days in 2023, but it’s a big increase compared to just a few years ago, when seven firms were hit with zero-days in 2020.

The nature of these attacks often makes it hard to trace them to the source, but Google says it managed to attribute 34 of the 75 zero-day attacks. The largest single category with 10 detections was traditional state-sponsored espionage, which aims to gather intelligence without a financial motivation. China was the largest single contributor here. GTIG also identified North Korea as the perpetrator in five zero-day attacks, but these campaigns also had a financial motivation (usually stealing crypto).

Credit: Google

That’s already a lot of government-organized hacking, but GTIG also notes that eight of the serious hacks it detected came from commercial surveillance vendors (CSVs), firms that create hacking tools and claim to only do business with governments. So it’s fair to include these with other government hacks. This includes companies like NSO Group and Cellebrite, with the former already subject to US sanctions from its work with adversarial nations.

In all, this adds up to 23 of the 34 attributed attacks coming from governments. There were also a few attacks that didn’t technically originate from governments but still involved espionage activities, suggesting a connection to state actors. Beyond that, Google spotted five non-government financially motivated zero-day campaigns that did not appear to engage in spying.

Google’s security researchers say they expect zero-day attacks to continue increasing over time. These stealthy vulnerabilities can be expensive to obtain or discover, but the lag time before anyone notices the threat can reward hackers with a wealth of information (or money). Google recommends enterprises continue scaling up efforts to detect and block malicious activities, while also designing systems with redundancy and stricter limits on access. As for the average user, well, cross your fingers.

Google: Governments are using zero-day hacks more than ever Read More »

a-rocket-launch-monday-night-may-finally-jump-start-amazon’s-answer-to-starlink

A rocket launch Monday night may finally jump-start Amazon’s answer to Starlink

“This launch marks the first step toward the future of our partnership and increased launch cadence,” Bruno said. “We have been steadily modifying our launch facilities in Cape Canaveral to support the capacity for future Project Kuiper missions in a manner that will ultimately benefit both our commercial and government customers as we endeavor to save lives, explore the universe, and connect the world.”

The Atlas V rocket was powered by a Russian-made RD-180 main engine and five strap-on solid rocket boosters. Credit: United Launch Alliance

Amazon ground controllers in Redmond, Washington, are overseeing the operation of the first 27 Kuiper satellites. Engineers there will test each satellite’s ability to independently maneuver and communicate with mission control. So far, this appears to be going well.

The next step will involve activating the satellites’ electric propulsion systems to gradually climb to their assigned orbit of 392 miles (630 kilometers).

“While the satellites complete the orbit-raising process, we will look ahead to our ultimate mission objective: providing end-to-end network connectivity,” Amazon said in a press release. “This involves sending data from the Internet, through our ground infrastructure, up to the satellites, and down to customer terminal antennas, and then repeating the journey in the other direction.”

A moveable deadline

While most of the rockets Amazon will use for the Kuiper network have only recently entered service, that’s not true of the Atlas V. Delays in spacecraft manufacturing at Amazon’s factory near Seattle kept the first Kuiper satellites on the ground until now.

An Amazon spokesperson told Ars that the company is already shipping Kuiper satellites for the next launch on an Atlas V rocket. Sources suggest that mission could lift off in June.

Amazon released this image of Kuiper user terminals in 2023. Credit: Amazon

Amazon and its launch suppliers need to get moving. Kuiper officials face a July 2026 deadline from the Federal Communications Commission to deploy half of the fleet’s 3,236 satellites to maintain network authorization. This is not going to happen. It would require an average of nearly one launch per week, starting now.

The time limit is movable, and the FCC has extended network authorization deadlines before. Brendan Carr, the Trump-appointed chairman of the FCC, has argued for a more “market-friendly regulatory environment” in a chapter he authored for the Heritage Foundation’s Project 2025, widely seen as a blueprint for the Trump administration’s strategies.

But Carr is a close ally of Elon Musk, owner of Kuiper’s primary competitor, Starlink.

Amazon is not selling subscriptions for Kuiper service yet, and the company has said its initial focus will be on testing Kuiper connectivity with “enterprise customers” before moving on to consumer broadband. Apart from challenging Starlink, Kuiper will also compete in some market segments with Eutelsat OneWeb, the London-based operator of the only other active Internet megaconstellation.

OneWeb’s more than 600 satellites provide service to businesses, governments, schools, and hospitals rather than direct service to individual consumers.

A rocket launch Monday night may finally jump-start Amazon’s answer to Starlink Read More »

ai-generated-code-could-be-a-disaster-for-the-software-supply-chain-here’s-why.

AI-generated code could be a disaster for the software supply chain. Here’s why.

AI-generated computer code is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.

The study, which used 16 of the most widely used large language models to generate 576,000 code samples, found that 440,000 of the package dependencies they contained were “hallucinated,” meaning they were non-existent. Open source models hallucinated the most, with 21 percent of the dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of code requires to work properly. Dependencies save developers the hassle of rewriting code and are an essential part of the modern software supply chain.

Package hallucination flashbacks

These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to access the wrong component dependency, for instance by publishing a malicious package and giving it the same name as the legitimate one but with a later version stamp. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.

Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla included. It’s one type of technique used in software supply-chain attacks, which aim to poison software at its very source in an attempt to infect all users downstream.

“Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users,” Joseph Spracklen, a University of Texas at San Antonio Ph.D. student and lead researcher, told Ars via email. “If a user trusts the LLM’s output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user’s system.”

AI-generated code could be a disaster for the software supply chain. Here’s why. Read More »

seasonal-covid-shots-may-no-longer-be-possible-under-trump-admin

Seasonal COVID shots may no longer be possible under Trump admin

Under President Trump, the Food and Drug Administration may no longer approve seasonal COVID-19 vaccines updated for the virus variants circulating that year, according to recent statements by Trump administration officials.

Since the acute phase of the pandemic, vaccine manufacturers have been subtly updating COVID-19 shots annually to precisely target the molecular signatures of the newest virus variants, which continually evolve to evade our immune responses. So far, the FDA has treated these tweaked vaccines the same way it treats seasonal flu shots, which have long been updated annually to match currently circulating strains of flu viruses.

The FDA does not consider seasonal flu shots brand-new vaccines. Rather, they’re just slightly altered versions of the approved vaccines. As such, the regulator does not require companies to conduct lengthy, expensive vaccine trials to prove that each slightly changed version is safe and effective. If they did, generating annual vaccines would be virtually impossible. Each year, from late February to early March, the FDA, the Centers for Disease Control and Prevention, and the World Health Organization direct flu shot makers on what tweaks they should make to shots for the upcoming flu season. That gives manufacturers just enough time to develop tweaks and start manufacturing massive supplies of doses in time for the start of the flu season.

So far, COVID-19 vaccines have been treated the exact same way, save for the fact that the vaccines that use mRNA technology do not need as much lead time for manufacturing. In recent years, the FDA decided on formulations for annual COVID shots around June, with doses rolled out in the fall alongside flu shots.

However, this process is now in question based on statements from Trump administration officials. The statements come amid a delay in a decision on whether to approve the COVID-19 vaccine made by Novavax, which uses a protein-based technology, not mRNA. The FDA was supposed to decide whether to grant the vaccine full approval by April 1. To this point, the vaccine has been used under an emergency use authorization by the agency.

Seasonal COVID shots may no longer be possible under Trump admin Read More »

worries-about-ai-are-usually-complements-not-substitutes

Worries About AI Are Usually Complements Not Substitutes

A common claim is that concern about [X] ‘distracts’ from concern about [Y]. This is often used as an attack to cause people to discard [X] concerns, on pain of being enemies of [Y] concerns, as attention and effort are presumed to be zero-sum.

There are cases where there is limited focus, especially in political contexts, or where arguments or concerns are interpreted perversely. A central example is when you site [ABCDE] then they’ll find what they consider the weakest one and only consider or attack that, silently discarding the rest entirely. Critics of existential risk do that a lot.

So it does happen. But in general one should assume such claims are false.

Thus, the common claim that AI existential risks ‘distract’ from immediate harms. It turns out Emma Hoes checked, and the claim simply is not true.

The way Emma frames worries about AI existential risk in her tweet – ‘sci-fi doom’ – is beyond obnoxious and totally inappropriate. That only shows she was if anything biased in the other direction here. The finding remains the finding.

Emma Hoes: 🚨New paper out in @PNASNews! Existential AI risks do notdistract from immediate harms. In our study (n = 10,800), people consistently prioritize current threats – bias, misinformation, job loss – over sci-fi doom!

Title: Existential Risk Narratives About AI Do Not Distract From Its Immediate Harms.

Abstract: There is broad consensus that AI presents risks, but considerable disagreement about the nature of those risks. These differing viewpoints can be understood as distinct narratives, each offering a specific interpretation of AI’s potential dangers.

One narrative focuses on doomsday predictions of AI posing long-term existential risks for humanity.

Another narrative prioritizes immediate concerns that AI brings to society today, such as the reproduction of biases embedded into AI systems.

A significant point of contention is that the “existential risk” narrative, which is largely speculative, may distract from the less dramatic but real and present dangers of AI.

We address this “distraction hypothesis” by examining whether a focus on existential threats diverts attention from the immediate risks AI poses today. In three preregistered, online survey experiments (N = 10,800), participants were exposed to news headlines that either depicted AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasized its potential benefits.

Results show that

i) respondents are much more concerned with the immediate, rather than existential, risks of AI, and

ii) existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms. These findings provide important empirical evidence to inform ongoing scientific and political debates on the societal implications of AI.

That seems rather definitive. It also seems like the obvious thing to assume? Explaining a new way [A] is scary is not typically going to make me think another aspect of [A] is less scary. If anything, it tends to go the other way.

Here are the results.

This shows that not only did information about existential risks not decrease concern about immediate risks, it seems to clearly increase it, at least as much as information about those immediate risks.

I note that this does not obviously indicate that people are ‘more concerned’ with immediate risk, only that they see it as less likely. Which is totally fair, it’s definitely less likely than the 100% chance of immediate risks. The impact measurement is higher.

Kudos to Arvind Narayanan. You love to see people change their minds and say so:

Arvind Narayanan: Nice paper. Also a good opportunity for me to explicitly admit that I was wrong about the distraction argument.

(To be clear, I didn’t change my mind yesterday because of this paper; I did so over a year ago and have said so on talks and podcasts since then.)

There are two flavors of distraction concerns: one is at the level of individual opinions studied in this paper, and the other is at the level of advocacy coalitions that influence public policy.

But I don’t think the latter concern has been borne out either. Going back to the Biden EO in 2023, we’ve seen many examples of the AI safety and AI ethics coalitions benefiting from each other despite their general unwillingness to work together.

If anything, I see that incident as central to the point that if anything what’s actually happening is that AI ‘ethics’ concerns are poisoning the well for AI existential risk concerns, rather than the other way around. This has gotten so bad that the word ‘safety’ has become anathema to the administration and many on the hill. Those people are very willing to engage with the actual existential risk concerns once you have the opportunity to explain, but this problem makes it hard to get them to listen.

We have a real version of this problem when dealing with different sources of AI existential risk. People will latch onto one particular way things can go horribly wrong, or even one particular detailed scenario that leads to this, often choosing the one they find least plausible. Then they either:

  1. Explain why they think this particular scenario is dumb, thus making new entities that are smarter and more capable than humans is a perfectly safe thing to do.

  2. OR they then explain why we need to plan around preventing that particular scenario, or solving that particular failure mode, and dismiss that this runs smack into a different failure mode, often the exact opposite one.

The most common examples of problem #2 is when people have concerns about either Centralization of Power (often framing even ordinary government or corporate actions as a Dystopian Surveillance State or with similar language), or the Bad Person Being in Charge or Bad Nation Winning. Then they claim this overrides all other concerns, usually walking smack into misalignment (as in, they assume we will be able to get the AIs to do what we want, whereas we have no idea how to do that) and often also the gradual disempowerment problem.

The reason there is a clash there is that the solutions to the problems are in conflict. The things that solve one concern risk amplifying the other, but we need to solve both sides of the dilemma. Solving even one side is hard. Solving both at once, while many things work at cross-purposes, is very very hard.

That’s simply not true when trading off mundane harms versus existential risks. If you have a limited pool of resources to spend on mitigation, then of course you have to choose. And there are some things that do trade off – in particular, some short term solutions that would work now, but wouldn’t scale. But mostly there is no conflict, and things that help with one are neutral or helpful for the other.

Discussion about this post

Worries About AI Are Usually Complements Not Substitutes Read More »

perplexity-will-come-to-moto-phones-after-exec-testified-google-limited-access

Perplexity will come to Moto phones after exec testified Google limited access

Shevelenko was also asked about Chrome, which the DOJ would like to force Google to sell. Like an OpenAI executive said on Monday, Shevelenko confirmed Perplexity would be interested in buying the browser from Google.

Motorola has all the AI

There were some vague allusions during the trial that Perplexity would come to Motorola phones this year, but we didn’t know just how soon that was. With the announcement of its 2025 Razr devices, Moto has confirmed a much more expansive set of AI features. Parts of the Motorola AI experience are powered by Gemini, Copilot, Meta, and yes, Perplexity.

While Gemini gets top billing as the default assistant app, other firms have wormed their way into different parts of the software. Perplexity’s app will be preloaded, and anyone who buys the new Razrs. Owners will also get three free months of Perplexity Pro. This is the first time Perplexity has had a smartphone distribution deal, but it won’t be shown prominently on the phone. When you start a Motorola device, it will still look like a Google playground.

While it’s not the default assistant, Perplexity is integrated into the Moto AI platform. The new Razrs will proactively suggest you perform an AI search when accessing certain features like the calendar or browsing the web under the banner “Explore with Perplexity.” The Perplexity app has also been optimized to work with the external screen on Motorola’s foldables.

Moto AI also has elements powered by other AI systems. For example, Microsoft Copilot will appear in Moto AI with an “Ask Copilot” option. And Meta’s Llama model powers a Moto AI feature called Catch Me Up, which summarizes notifications from select apps.

It’s unclear why Motorola leaned on four different AI providers for a single phone. It probably helps that all these companies are desperate to entice users to bulk up their market share. Perplexity confirmed that no money changed hands in this deal—it’s on Moto phones to acquire more users. That might be tough with Gemini getting priority placement, though.

Perplexity will come to Moto phones after exec testified Google limited access Read More »

roku-tech,-patents-prove-its-potential-for-delivering-“interruptive”-ads

Roku tech, patents prove its potential for delivering “interruptive” ads

Roku, owner of one of the most popular connected TV operating systems in the country, walks a fine line when it comes to advertising. Roku’s OS lives on low-priced smart TVs, streaming sticks, and projectors. To make up the losses from cheaply priced hardware, Roku is dependent on selling advertisements throughout its OS, including screensavers and its home screen.

That business model has pushed Roku to experiment with new ways of showing ads that test users’ tolerance. The company claims that it doesn’t want ads on its platform to be considered intrusive, but there are reasons to be skeptical about Roku’s pledge.

Non-“interruptive” ads

In an interview with The Verge this week, Jordan Rost, Roku’s head of ad marketing, emphasized that Roku tries to only deliver ads that don’t interrupt viewers.

“Advertisers want to be part of a good experience. They don’t want to be interruptive,” he told The Verge.

Rost noted that Roku is always testing new ad formats. Those tests include doing “all of our own A/B testing on the platform” and listening to customer feedback, he added.

“We’re constantly tweaking and trying to figure out what’s going to be helpful for the user experience,” Rost said.

For many streamers, however, ads and a better user experience are contradictory. In fact, for many, the simplest way to improve streaming is fewer ads and a more streamlined access to content. That’s why Apple TV boxes, which doesn’t have integrated ads and is good at combining content from multiple streaming subscriptions, is popular among Ars Technica staff and readers. An aversion to ads is also why millions pay extra for ad-free streaming subscriptions.

Roku tech, patents prove its potential for delivering “interruptive” ads Read More »

nintendo-switch-2’s-gameless-game-key-cards-are-going-to-be-very-common

Nintendo Switch 2’s gameless Game-Key cards are going to be very common

US preorders for the Nintendo Switch 2 console went live at Best Buy, Target, and Walmart at midnight Eastern time last night (though the rush of orders caused problems and delays across all three retailers’ websites). The console listings came with a wave of other retail listings for games and accessories, and those listings either fill small gaps in our knowledge about Switch 2 game packaging and pricing or confirm facts that were previously implied.

First, $80 Switch 2 games like Mario Kart World will not cost $90 as physical releases. This is worth repeating over and over again because of how pernicious the rumors about $90 physical releases have been; as recently as this morning, typing “Switch 2 $90” into Google would show you videos, Reddit threads, news posts, and even Google’s own AI summaries all confidently and incorrectly proclaiming that physical Switch 2 releases will cost $90 when they actually won’t.

Google’s AI-generated search summary about $90 Switch 2 games as of this morning. Credit: Andrew Cunningham

While physical game releases in the EU sometimes cost more than their digital counterparts, there was actually no indication that US releases of physical games would cost $90. The Mario Kart World website listed an $80 MSRP from the start, as did early retail listings that were published before preorders actually began, and this price didn’t change when Nintendo increased accessory pricing in response to import tariffs imposed by the Trump administration.

But now that actual order confirmation emails are going out, we can (even more) confidently say that Switch 2 physical releases cost the same amount as digital releases, just like original Switch games and most physical releases for other consoles. For example, the physical release for the upcoming Donkey Kong Bananza is $70, also the same as the digital version.

Third-party releases run a wider pricing gamut, from as little as $40 (Square Enix’s Bravely Default remaster) to as much as $100 (a special edition release of Daemon X Machina: Titanic Scion, also available at $70 for the standard release).

Lots of third-party games are getting Game-Key card releases

A Game-Key card disclaimer. It tells you you’ll need to download the game and approximately how large that download will be. Credit: Nintendo/Sega

When preorders opened in Japan yesterday, all physical releases of third-party games had Nintendo’s Game-Key card disclaimer printed on them. And it looks like a whole lot of physical third-party Switch 2 game releases in the US will also be Game-Key cards, based on the box art accompanying the listings.

These have been controversial among physical media holdouts because they’re not physical game releases in the traditional sense—they don’t have any actual game data stored on them. When you insert them into a Switch 2, they allow you to download the game content from Nintendo’s online store, but unlike a pure digital release, you’ll still need to have the Game-Key card inserted every time you want to play the game.

Nintendo Switch 2’s gameless Game-Key cards are going to be very common Read More »

drunk-man-walks-into-climate-change,-burns-the-bottoms-of-his-feet-off

Drunk man walks into climate change, burns the bottoms of his feet off

In the burn unit, doctors gave the man a pain reliever, cleaned the burns, treated them with a topical antibiotic, and gave them an antimicrobial foam dressing. At a follow-up appointment, the wounds appeared to be healing without complications.

While the man recovered from the injury, the author of the case study—Jeremy Hess, an expert in emergency medicine and global environmental health at the University of Washington—warned that the risk of such injuries will only grow as climate change continues.

“Extreme heat events increase the risk of contact burns from hot surfaces in the environment,” he wrote. “Young children, older adults, unhoused persons, and persons with substance use disorder are at elevated risk for these types of burns.”

Last year, The New York Times reported that burn centers in the southwest have already begun seeing larger numbers of burns from contact with sidewalks and asphalt during heat waves. In some cases, the burns can turn fatal if people lose consciousness on hot surfaces—for instance, from overdoses, heat stroke, intoxication, or other health conditions. “Your body just literally sits there and cooks,” Clifford Sheckter, surgeon and a burn prevention researcher at Stanford University, told the Times last year. “When somebody finally finds you, you’re already in multisystem organ failure.”

Drunk man walks into climate change, burns the bottoms of his feet off Read More »

tuesday-telescope:-a-rare-glimpse-of-one-of-the-smallest-known-moons

Tuesday Telescope: A rare glimpse of one of the smallest known moons

Welcome to the Tuesday Telescope. There is a little too much darkness in this world and not enough light—a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’ll take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

I’ll bet you don’t spend a ton of time thinking about Deimos, the smaller of the two Martian moons, which is named after the Ancient Greek god that personified dread.

And who could blame you? Of the two Martian moons, Phobos gets more attention, including as a possible waystation for human missions to Mars. Phobos is larger than Deimos, with a radius of 11 km, and closer to the Martian surface, a little more than 9,000 km away.

By contrast, Deimos is tiny, with a radius of 6 km, and quite a bit further out, more than 23,000 km from the surface. It is so small that, on the surface of Mars, Deimos would only appear about as bright in the night sky as Venus does from Earth.

But who doesn’t love a good underdog story? Scientists have dreamed up all kinds of uses for Deimos, including using its sands for aerobraking large missions to Mars, returning samples from the tiny moon. So maybe Deimos will eventually get its day.

Recently, we got one of our best views yet of the tiny moon when a European mission named Hera, en route to the asteroid Didymos, flew through the Martian system for a gravity assist. During this transit, the spacecraft came within just 300 km of Deimos. And its Asteroid Framing Camera captured this lovely image, which was, admittedly, artificially colored.

Anyway, it’s a rare glimpse at one of the smallest known moons in the Solar System, and I think it’s spectacular.

Source: European Space Agency

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Tuesday Telescope: A rare glimpse of one of the smallest known moons Read More »