Author name: Mike M.


A “ridiculously weak” password causes disaster for Spain’s No. 2 mobile carrier

Getty Images

Orange España, Spain’s second-biggest mobile operator, suffered a major outage on Wednesday after an unknown party obtained a “ridiculously weak” password and used it to access an account for managing the global routing table that controls which networks deliver the company’s Internet traffic, researchers said.

The hijacking began around 9:28 Coordinated Universal Time (about 2:28 Pacific time) when the party logged into Orange’s RIPE NCC account using the password “ripeadmin” (minus the quotation marks). The RIPE Network Coordination Center is one of five Regional Internet Registries, which are responsible for managing and allocating IP addresses to Internet service providers, telecommunication organizations, and companies that manage their own network infrastructure. RIPE serves 75 countries in Europe, the Middle East, and Central Asia.

“Things got ugly”

The password came to light after the party, using the moniker Snow, posted an image to social media that showed the orange.es email address associated with the RIPE account. RIPE said it’s working on ways to beef up account security.

Screenshot showing RIPE account, including the orange.es email address associated with it.

Security firm Hudson Rock plugged the email address into a database it maintains to track credentials for sale in online bazaars. In a post, the security firm said the username and “ridiculously weak” password were harvested by information-stealing malware that had been installed on an Orange computer since September. The password was then made available for sale on an infostealer marketplace.

Partially redacted screenshot from Hudson Rock database showing the credentials for the Orange RIPE account.

Hudson Rock

Researcher Kevin Beaumont said thousands of credentials protecting other RIPE accounts are also available in such marketplaces.

Once logged into Orange’s RIPE account, Snow made changes to the global routing table the mobile operator relies on to specify what backbone providers are authorized to carry its traffic to various parts of the world. These tables are managed using the Border Gateway Protocol (BGP), which connects one regional network to the rest of the Internet. Specifically, Snow added several new ROAs, short for Route Origin Authorizations. These entries allow “autonomous systems” such as Orange’s AS12479 to designate other autonomous systems or large chunks of IP addresses to deliver its traffic to various regions of the world.

In the initial stage, the changes had no meaningful effect because the ROAs Snow added announcing the IP addresses—93.117.88.0/22, 93.117.88.0/21, and 149.74.0.0/16—already originated with Orange’s AS12479. A few minutes later, Snow added ROAs to five additional routes. All but one of them also originated with the Orange AS, and once again had no effect on traffic, according to a detailed writeup of the event by Doug Madory, a BGP expert at security and networking firm Kentik.

The creation of the ROA for 149.74.0.0/16 was the first act by Snow to create problems, because the maximum prefix length was set to 16, rendering any more specific routes within that address range invalid.

“It invalidated any routes that are more specific (longer prefix length) than a 16,” Madory told Ars in an online interview. “So routes like 149.74.100.0/23 became invalid and started getting filtered. Then [Snow] created more ROAs to cover those routes. Why? Not sure. I think, at first, they were just messing around. Before that ROA was created, there was no ROA to assert anything about this address range.”
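To make that mechanism concrete, here is a minimal, illustrative sketch of route-origin validation in Python. This is not the code any router or validator actually runs (real implementations follow RFC 6811 in full), and the helper function is hypothetical, but it shows why a ROA for 149.74.0.0/16 with a maximum prefix length of 16 turns a previously “unknown” /23 route “invalid”:

```python
# Illustrative sketch of RPKI route-origin validation (RFC 6811 in spirit).
import ipaddress

def validate(route_prefix, origin_as, roas):
    """Classify an announced route as 'valid', 'invalid', or 'unknown'.

    roas: iterable of (prefix, max_length, authorized_asn) tuples.
    """
    route = ipaddress.ip_network(route_prefix)
    covered = False
    for prefix, max_len, asn in roas:
        if route.subnet_of(ipaddress.ip_network(prefix)):
            covered = True  # some ROA makes a claim about this address space
            if asn == origin_as and route.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but failing its terms => invalid (filtered by networks
    # that enforce RPKI); not covered at all => unknown (accepted as usual).
    return "invalid" if covered else "unknown"

roas = [("149.74.0.0/16", 16, 12479)]  # the ROA created during the incident
print(validate("149.74.0.0/16", 12479, roas))    # valid
print(validate("149.74.100.0/23", 12479, roas))  # invalid: /23 exceeds maxLength 16
print(validate("149.74.100.0/23", 12479, []))    # unknown: no ROA, so not filtered
```

Before that ROA existed, routes in the range were simply “unknown” and propagated normally; once it existed with maxLength 16, every more-specific announcement failed validation and started getting filtered.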

A “ridiculously weak“ password causes disaster for Spain’s No. 2 mobile carrier Read More »


1D Pac-Man is the best game I’ve played in 2024 (so far)

I didn’t write this story just to share that high score in the corner, but I won’t say it had nothing to do with the choice.

When looking back at the short history of video game design, the ’90s and ’00s transition from primarily 2D games to primarily 3D games is rightly seen as one of the biggest revolutions in the industry. But my discovery this week of the one-dimensional, Pac-Man-inspired Paku Paku makes me wish that the game industry had some sort of pre-history where clever 1D games like this were the norm. It also makes me wish I had been quicker to discover more of the work of extremely prolific and clever game designer Kenta Cho, who made the game.

In Paku Paku, Pac-Man‘s 2D maze of 240 dots has been replaced with 16 dots arranged in a single line. Your six-pixel tall dot-muncher (the graphics are 2D, even as the gameplay uses only one dimension) is forced to forever travel either left or right along this line, trying to eat all the dots while avoiding a single red ghost (who moves just a bit faster than the player). To do this, the player can use a single power pellet (which makes the ghost edible for a short while) or the screen-wrapping tunnels on either side of the line (which the ghost can’t use).
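For flavor, here is a toy sketch of those rules in Python. It is not Kenta Cho’s implementation, and the ghost’s extra-speed probability is invented, but it captures the one-dimensional chase: the player wraps through the tunnels, the ghost cannot, and a power pellet reverses the pursuit.

```python
# Toy model of a 1D dot-muncher on a 16-cell line (illustrative only).
import random

SIZE = 16  # Paku Paku's line of 16 dots

def step(player, ghost, dots, player_dir, frightened):
    """Advance one tick; returns the updated (player, ghost, dots)."""
    player = (player + player_dir) % SIZE  # player may wrap through tunnels
    dots.discard(player)                   # eat any dot on the new cell
    direction = 1 if player > ghost else -1
    if frightened:
        direction = -direction             # an edible ghost flees instead
    ghost = max(0, min(SIZE - 1, ghost + direction))  # ghost can't wrap
    if not frightened and random.random() < 0.2:
        # "just a bit faster than the player": an occasional extra step
        ghost = max(0, min(SIZE - 1, ghost + direction))
    return player, ghost, dots

# One tick: player at cell 3 moving right, ghost at cell 10.
print(step(3, 10, set(range(SIZE)) - {3, 10}, +1, False))
```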

A brief gameplay snippet from Paku Paku.

It might sound simple, but playing effectively means carefully managing the ghost’s relative position to the player by quickly judging when you’ll have enough space and time to make it to a side tunnel or power pellet. This gets progressively harder as the game speeds up with each new set of replacement dots, increasing the score multiplier as it does. Each game ends after a matter of minutes (or seconds) with that familiar “I know I can do better next time” feeling that marks truly compulsive game design (and has pushed me to a high score of over 10,000 points across a few hours of play).

Though Paku Paku was originally released last year, the game has been making the rounds among some major link aggregators recently, a perfect filler for the usual post-holiday drought of major game releases in early January. Hacker News users are even hard at work coding basic AI that you can paste into a browser’s command window for easy high scores.

The zen design of small games

Paku Paku is far from the first game to reduce gameplay to a single dimension (though the graphics use two dimensions, which might make the game “1.5D”?). Games like Wolfenstein 1D (which is archived but currently unplayable due to the death of Flash) and installations like Line Wobbler use color as a sort of second dimension, representing different in-game characters and objects with dots of many hues. And dozens of 1D games have been tagged on indie gaming hub Itch.io, ranging from the silly (1D Flappy Bird) to the surprisingly effective (Colordash 1D) to the overcomplicated (1D Minecraft).

Paku Paku stands out from this limited crowd largely thanks to tight single-button controls and perfectly tuned risk-versus-reward gameplay that encourages that compulsive loop. Perhaps that’s because its creator has had a ridiculous amount of experience crafting this kind of simple game.

1D Pac-Man is the best game I’ve played in 2024 (so far) Read More »


How to avoid the cognitive hooks and habits that make us vulnerable to cons

Daniel Simons and Christopher Chabris are the authors of Nobody’s Fool: Why We Get Taken In and What We Can Do About It.

Basic Books

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: A conversation with psychologists Daniel Simons and Christopher Chabris on the key habits of thinking and reasoning that may serve us well most of the time, but can make us vulnerable to being fooled.

It’s one of the most famous experiments in psychology. Back in 1999, Daniel Simons and Christopher Chabris conducted an experiment on inattentional blindness. They asked test subjects to watch a short video in which six people—half in white T-shirts, half in black ones—passed basketballs around. The subjects were asked to count the number of passes made by the people in white shirts. Halfway through the video, a person in a gorilla suit walked into the midst of the players and thumped their chest at the camera before strolling off-screen. What surprised the researchers was that fully half the test subjects were so busy counting the number of basketball passes that they never saw the gorilla.

The experiment became a viral sensation—helped by the amusing paper title, “Gorillas in Our Midst“—and snagged Simons and Chabris the 2004 Ig Nobel Psychology Prize. It also became the basis of their bestselling 2010 book, The Invisible Gorilla: How Our Intuitions Deceive Us. Thirteen years later, the two psychologists are back with their latest book, published last July, called Nobody’s Fool: Why We Get Taken In and What We Can Do About It.  Simons and Chabris have penned an entertaining examination of key habits of thinking that usually serve us well but also make us vulnerable to cons and scams. They also offer some practical tools based on cognitive science to help us spot deceptions before being taken in.

“People love reading about cons, yet they keep happening,” Simons told Ars. “Why do they keep happening? What is it those cons are tapping into? Why do we not learn from reading about Theranos? We realized there was a set of cognitive principles that seemed to apply across all of the domains, from cheating in sports and chess to cheating in finance and biotech. That became our organizing theme.”

Ars spoke with Simons and Chabris to learn more.

Ars Technica: I was surprised to learn that people still fall for basic scams like the Nigerian Prince scam. It reminds me of Fox Mulder’s poster on The X-Files: “I want to believe.”

Daniel Simons: The Nigerian Prince scam is an interesting one because it’s been around forever. Its original form was in letters. Most people don’t get fooled by that one. The vast majority of people look at it and say, this thing is written in terrible grammar. It’s a mess. And why would anybody believe that they’re the one to recover this vast fortune? So there are some people who fall for it, but it’s a tiny percentage of people. I think it’s still illustrative because that one is obviously too good to be true for most people, but there’s some small subset of people for whom it’s just good enough. It’s just appealing enough to say, “Oh yeah, maybe I could become rich.”

There was a profile in the New Yorker of a clinical psychologist who fell for it. There are people who, for whatever reason, are either desperate or have the idea that they deserve to inherit a lot of money. But there are a lot of scams that are much less obvious than that one, selecting for the people who are most naive about it. I think the key insight there is that we tend to assume that only gullible people fall for this stuff. That is fundamentally wrong. We all fall for this stuff if it’s framed in the right way.

Christopher Chabris: I don’t think they’re necessarily people who always want to believe. I think it really depends on the situation. Some people might want to believe that they can strike it rich in crypto, but they would never fall for a Nigerian email or, for that matter, they might not fall for a traditional Ponzi scheme because they don’t believe in fiat money or the stock market. Going back to the Invisible Gorilla, one thing we noticed was a lot of people would ask us, “What’s the difference between the people who noticed the gorilla and the people who didn’t notice the gorilla?” The answer is, well, some of them happened to notice it and some of them didn’t. It’s not an IQ or personality test. So in the case of the Nigerian email, there might’ve been something going on in that guy’s life at that moment when he got that email that maybe led him to initially accept the premise as true, even though he knew it seemed kind of weird. Then, he got committed to the idea once he started interacting with these people.

Christopher Chabris

So one of our principles is commitment: the idea that if you accept something as true and you don’t question it anymore, then all kinds of bad decisions and bad outcomes can flow from that. So, if you somehow actually get convinced that these guys in Nigeria are real, that can explain the bad decisions you make after that. I think there’s a lot of unpredictableness about it. We all need to understand how these things work. We might think it sounds crazy and we would never fall for it, but we might if it was a different scam at a different time.

How to avoid the cognitive hooks and habits that make us vulnerable to cons Read More »


Portal 64 is an N64 demake of Valve’s classic, now available as a “First Slice”

For the consoles that are still alive —

It’s shocking how good the Portal Gun feels on late 1990s tech.

Remember, this is the N64 platform running a game released at least five years after the console’s general life cycle ended.

Valve/James Lambert

James Lambert has spent years making something with no practical reason to exist: a version of Portal that runs on the Nintendo 64. And not some 2D version, either, but the real, blue-and-orange-oval, see-yourself-sideways Portal experience. And now he has a “First Slice” of Portal 64 ready for anyone who wants to try it. It’s out of beta, and it’s free.

A “First Slice” means that 13 of the original game’s test chambers are finished. Lambert intends to get to all of the original’s 19 chambers. PC Gamer, where we first saw this project, suggests that Lambert might also try to get the additional 14 levels in the Xbox Live-only Portal: Still Alive.

So why is Lambert doing this—and for free? Lambert enlists an AI-trained version of Cave Johnson’s voice to answer that question at the start of his announcement video. “This is Aperture Science,” it says, “where we don’t ask why. We ask: why the heck not?”

The release video for Portal 64’s “First Slice”

Lambert’s video details how he got Portal looking so danged good on an N64. The gun, for example, required a complete rebuild of its polygonal parts so that it could react to firing, disappear when brought up to a wall instead of clipping into it, and eventually reflect environmental lighting. Rounding out the portals required some work, too, with more to be done to smooth out the seeing-yourself “Portal effect.”

To try it out, you’ll need a copy of Portal on PC (Windows). Grab the “portal_pak_000.vpk” file from inside the game’s folder, load it onto Lambert’s custom patcher, and you’ll get back a file you can load into almost any N64 emulator. Not all emulators can provide the full Portal experience by default; I had more luck with Ares than with Project 64, for instance.

  • “It’s just so much better,” Lambert says of the latest version of the portal gun.

  • The “Portal Effect,” as seen inside the Ares N64 emulator.

  • Remember, this is the N64 platform running a game released at least five years after the console’s general life cycle ended.

  • How that familiar title screen looks, circa 2000-ish.

  • On the Project 64 emulator, I couldn’t see through the portals.

  • A bit more polygonal flavor for you. Note that I bumped the resolution way, way up from the N64’s original for these latter screenshots.

    All images: Valve/James Lambert

How does it run? Like the nicest game I ever played on Nintendo’s early-days-of-3D console. It does a lot to prove that Portal is just a wonderful game with a killer mechanic, regardless of how nice you can make the walls. But the game is also a great candidate for this kind of treatment. The sterile, gray, straight-angled walls of an Aperture testing chamber play nicely with the N64’s relatively limited texture memory and harsh shapes.

Lambert has a Patreon running now, and support does a few things for him. It allows him to pay a video editor for his YouTube announcements and regular updates, it could pay for a graphics artist to polish up the work he’s done by himself on the game, and it could even free him up to work full-time on Portal 64 and other N64-related projects.

His fans are already showing their appreciation. One of them, going by “Lucas Dash,” helped create a box and cartridge for the game. Another, “Bloody Kieren,” created an entire Portal 64-themed N64 console and controller. These people have put serious energy into imagining a world where Valve produced Portal in a completely different manner and perhaps fundamentally reshaped our timeline—and I respect that.

Portal 64 is an N64 demake of Valve’s classic, now available as a “First Slice” Read More »


AI Impacts Survey: December 2023 Edition

Katja Grace and AI Impacts surveyed thousands of researchers on a variety of questions, following up on a similar 2022 survey as well as one in 2016.

I encourage opening the original to get better readability of graphs and for context and additional information. I’ll cover some of it, but there’s a lot.

Here is the abstract, summarizing many key points:

In the largest survey of its kind, we surveyed 2,778 researchers who had published in top-tier artificial intelligence (AI) venues, asking for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems.

The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.

If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).

As I will expand upon later, this contrast makes no sense. We are not going to have machines outperforming humans on every task in 2047 and then only fully automating human occupations in 2116. Not in any meaningful sense.

I think the 2047 timeline is high but in the reasonable range. Researchers are likely thinking far more clearly about this side of the question. We should mostly use that answer as what they think. We should mostly treat the 2116 answer as not meaningful, except in terms of comparing it to past and future estimates that use similar wordings.

Expected speed of AI progress has accelerated quite a bit in a year, in any case.

Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes.

A distribution with high uncertainty is wise. This is in sharp contrast to expecting a middling or neutral outcome, which makes little sense.

Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

Once again, we see what seems contradictory. If I thought there was a 10% chance of human extinction from AI, I would have “extreme” concern about that. Which I do.

There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

We defined High-Level Machine Intelligence (HLMI) thus:

High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.

This is a very high bar. ‘Every task’ is very different from many or most tasks, especially combined with both better and cheaper. Also note that this is not all ‘intellectual’ tasks. It is all tasks, period.

We asked for predictions, assuming “human scientific activity continues without major negative disruption.” We aggregated the results (n=1,714) by fitting gamma distributions, as with individual task predictions in 3.1.

In both 2022 and 2023, respondents gave a wide range of predictions for how soon HLMI will be feasible (Figure 3).

The aggregate 2023 forecast predicted a 50% chance of HLMI by 2047, down thirteen years from 2060 in the 2022 survey. For comparison, in the six years between the 2016 and 2022 surveys, the expected date moved only one year earlier, from 2061 to 2060.
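As a methodological aside, here is a minimal sketch of what fitting a gamma distribution to one respondent’s elicited quantiles could look like. The respondent’s numbers below are hypothetical and the paper’s exact fitting procedure may differ; this just shows the general technique:

```python
# Fit a gamma CDF to elicited (years-out, cumulative probability) points.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

def fit_gamma_to_quantiles(years_out, probs):
    """Find gamma shape/scale whose CDF passes near the elicited points."""
    def residuals(log_params):
        shape, scale = np.exp(log_params)  # exp keeps parameters positive
        return gamma.cdf(years_out, shape, scale=scale) - probs
    shape, scale = np.exp(least_squares(residuals, x0=[0.0, 3.0]).x)
    return shape, scale

# Hypothetical respondent: 10% by 2027, 50% by 2047, 90% by 2100,
# expressed as years after the 2023 survey.
years_out = np.array([4.0, 24.0, 77.0])
probs = np.array([0.10, 0.50, 0.90])
shape, scale = fit_gamma_to_quantiles(years_out, probs)
print(gamma.cdf(24.0, shape, scale=scale))  # ~0.5 if the fit is good
```

Aggregating then amounts to combining many such fitted per-respondent distributions.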

This is the potential future world in which, as of 2047, an AI can ‘do every human task better and cheaper than a human.’

What happens after that? What do they think 2048 is going to look like? 2057?

That is the weirdest part of the whole exercise.

From this survey, it seems they are choosing not to think about this too hard? Operating off some sense of ‘things will be normal and develop at normal pace’?

[Note that I misread the second chart initially as having the same scale as the first one. The second chart is an expansion of the lefthand side of the first chart.]

Do they actually expect us to have AI capable of doing everything better than we are, and then effectively sit on that for several generations?

Including two human generations during which the AIs are doing the AI research, better and cheaper than humans? The world is going to stay that kind of normal and under control while that happens?

FAOL below is Full Automation of Human Labor; HLMI is High-Level Machine Intelligence.

The paper authors notice that they too are confused.

Since occupations might naturally be understood either as complex tasks, composed of tasks, or closely connected with one of these, achieving HLMI seems to either imply having already achieved FAOL, or suggest being close. We do not know what accounts for this gap in forecasts. Insofar as HLMI and FAOL refer to the same event, the difference in predictions about the time of their arrival would seem to be a framing effect.

If the reason for the difference is purely ‘we expect humans to bar AIs from fully taking over at least one job employing at least one person, or at least we expect some human to somewhere continue to be able to perform some labor’ then that could explain the difference. I’d love to have some clarifying questions.

This also seems to be a basic common sense test about consequences of AI: If AI is in full ‘anything you can do I can do better’ mode, will that be an order of magnitude acceleration of technological progress?

I mean, yes, obviously? I assume this graph’s descriptions on the left are accidentally reversed, but even so this seems like a lot of people not thinking clearly? You can doubt that HLMI will arrive, but if we do have it, the consequences seem clear. Unless people think we would have the wisdom and ability to mostly not use it at all?

Or:

To state the obvious, AI is vastly better than humans zero (0) years after HLMI. If you can do actual everything better than me using a vastly different architecture than mine, you are not only a little bit better. Certainly two years later a 10% chance simply makes zero sense here.

Here is more absurdity, these are probabilities by 2043. This is not even close to a consistent set of probability distributions. Consider which of these, or which combinations, imply which others.

Certainly I think that some of these listed possible events are not so uncertain, such as ‘sometimes deceive humans to achieve a goal without this being intended by humans.’ I mean, how could that possibly not happen by 2043?

And here is what people are concerned about, an extremely concerning chart.

Worries are in all the wrong places. The most important worry is… deepfakes? These are not especially central examples of the things we should be worried about.

Of all the concerns here, the biggest should likely be ‘other’ simply because of how much is left not fully under the other umbrellas. One could I suppose say that ‘AIs with wrong goals become powerful’ and ‘AI has its goals set wrong’ cover a lot of ground, even if I would describe the core events a different way.

One could also take this not as a measure of what is likely, but rather a measure of what is ‘concerning.’ Meaning that people express concern for social reasons, rather than because the biggest worries from AI that is expected to be able to literally do all jobs better and cheaper than humans are… deepfakes and manipulation of public opinion. I mean, seriously?

Another option is that people were thinking in contradictory frames. In one frame, they realize that HLMI-level AI is coming. In another frame, they ask ‘what is concerning?’ and are thinking only about mundane AI.

On the net consequences of AI, including potential existential risk, sensemaking is not getting better.

Of all the potential consequences of HLMI, an AI capable of doing everything better than humans, ‘neutral’ does not enter into it. That makes absolutely no sense. It is the science fiction story we tell ourselves so that we can continue telling the same relatable human stories. It might go great, it might be the end of everything worthwhile, what it absolutely will not be is meh.

If you tell me it ‘went neutral’ then I can come up with a story, where someone or some group creates HLMI/ASI and then decides to use it to ensure no one else builds one and otherwise leave things entirely alone because they think doing anything else would be worse. I mean, it’s definitely an above-average plan for what to do given what other plans I have seen, but no.

So what do we make of these p(doom) numbers? Let’s zoom in, these are probabilities that someone responded with 10% or higher based on question wording:

The p(doom) numbers here are a direct contradiction. We’re going to get some weird talking points.

In Figure 12, we have a mean of 9% and median of 5% for the full range of ‘extremely bad’ outcomes.

In Figure 13’s question 3, we have 14.4% mean chance of either human extinction or severe disempowerment, or 16.2% chance in the longer term, and still 5% median.

Then, in question 2, they ask ‘what probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment’ and they have a mean of 19.4% and a median of 10%.

She’s more likely to be a librarian and a feminist than only a librarian, you say.

Is this mostly a classic conjunction fallacy? The framing effect of pointing out we could lose control, either biasing people or alternatively snapping them out of not thinking about what might happen at all? Something else?

This is not as big an impact as it looks when you see 5% vs. 10%. What is happening is largely that almost no one is saying, for example, 7%. So when 47% of responses are 10% or higher, the median is 5%, then at 51% it jumps to barely hitting 10%. Both 5% and 10% are misleading, the ‘real’ median here is more like 8%.
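A toy illustration of that jump, using a hypothetical response distribution clustered on the round numbers 5% and 10%:

```python
# With answers clustered on round numbers, the median moves in jumps.
import numpy as np

low = [0.05] * 53 + [0.10] * 47   # 47% of answers at 10% or higher
high = [0.05] * 49 + [0.10] * 51  # 51% of answers at 10% or higher
print(np.median(low), np.median(high))  # 0.05 vs 0.10, nothing in between
```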

I did a survey on Twitter asking how best to characterize the findings of the survey.

I think this poll is right. 10% is closer to accurate than 5%, but ‘median of 5%-10% depending on framing, mean of 9%-19%’ is the correct way to report this result. If someone wanted to say ‘researchers say a one in ten chance of doom’ then that is a little fast and loose, I’d avoid saying that, but I’d consider it within Bounded Distrust.

The important thing is that:

  1. Researchers broadly acknowledge that existential risk from AI is a real concern.

  2. Researchers put that risk high enough that we should be willing to make big investments and sacrifices to mitigate that risk.

  3. Researchers do not, however, think that such risks are the default outcome.

  4. Researchers disagree strongly about the magnitude of this risk.

Most importantly, existential risk is a highly mainstream concern within the field. Such concerns are also highly mainstream among the public when surveys ask.

Any media report or other rhetoric attempting to frame such beliefs as fringe positions either hasn’t done the homework, or is lying to you.

Nate Silver: This is super interesting on AI risk. Think it would be good if other fields made more attempts to conduct scientific surveys of expert opinion. (Disclosure: I did a very small bit of unpaid consulting on this survey.)

The fact that there is broad-based practitioner concern about AI safety risk (although with a lot of variation from person to person) and a quickening of AI timelines is significant. You’ll still get the occasional media report framing these as fringe positions. But they’re not.

Despite these predictions of potential doom, support for slowing down now was if anything a tiny bit negative, even purely in terms of humanity’s future.

This actually makes perfect sense if (and only if) you buy that AGI is far. If HLMI is only scheduled for 2047, then slowing down from 2024-2029 does not sound like an awesome strategy. If I was told that current pace meant AGI 2047, I too would not be looking to slow down short term development of AI.

I’d want to ‘look at the crosstabs’ here, as it were, but I think a very reasonable reaction is something vaguely like:

  1. If you think AGI plausibly arrives within 10 years or so, you want to slow down.

  2. If you think AGI is highly unlikely to arrive within 10 years, but might within 25 years, you want to roughly maintain current pace while looking to lay groundwork to slow down in the future.

  3. If you think AGI is almost certainly more than 25 years away, and you (highly reasonably) conclude mundane pre-AGI fears are mostly overblown, accelerate for now, and perhaps worry about the rest later.

I believe the groundwork part of this is extremely important, and worry a lot about path dependence, but confidence that the timeline was 25 years or more would absolutely be a crux that would change my mind on many things.

I would love to see more people, including those with ‘e/acc’ in their bio, say explicitly that the timeline question is a crux, and their recommendations rely on AGI being far.

One bright spot was strong support for prioritization of AI safety research, although not strong enough and with only small improvement from 2022.

I continue to not understand the attitude of not wanting much more safety work. I can understand wanting to move forward as fast as possible. I can understand saying that your company in particular should focus on capabilities. I can’t see why one wouldn’t think that more safety work would be good for the world.

I think the 13% here for ‘alignment is among the most important problems in the field’ is silently one of the most absurd results of all:

A second set of AI safety questions was based on Stuart Russell’s formulation of the alignment problem [Russell, 2014]. This set of questions began with a summary of Russell’s argument—which claims that with advanced AI, “you get exactly what you ask for, not what you want”—then asked:

1. Do you think this argument points at an important problem?

2. How valuable is it to work on this problem today, compared to other problems in AI?

3. How hard do you think this problem is, compared to other problems in AI?

The majority of respondents said that the alignment problem is either a “very important problem” (41%) or “among the most important problems in the field” (13%), and the majority said that it is “harder” (36%) or “much harder” (21%) than other problems in AI. However, respondents did not generally think that it is more valuable to work on the alignment problem today than other problems. (Figure 16)

I can understand someone thinking alignment is easy. I think it is a super wrong thing to believe, but I have seen actual arguments, and I can imagine such worlds where the Russell formulation is super doable, whereas other AI problems are far harder. So, sure, on some level that is reasonable disagreement, or at least I see how you got there. I will note that estimates of alignment difficulty went modestly up over 2023, as did estimates of the value of working on it.

We do see 41% treat it as a ‘very important problem’ but it seems crazy not to think of it as ‘among the most important problems in the field.’ And I am confused why that answer declined so much, from 21% to 13%, especially given other answers, perhaps this is merely noise. Still, it should be vastly higher. Unless perhaps people are saying this is a wrong problem formulation?

In general, it seems like researchers are trying to be ‘more moderate’ and give neutral answers across the board. Perhaps this is due to entry and going more mainstream, and people trying to give social cognition answers.

As with many such surveys, I would love to see more clarifying questions, and more attempt to be able to measure correlations. Which future expectations correspond to which worries? Why are we seeing the Conjunction Fallacy? What changed people’s minds over the past year, or what do people think did it? What kind of future are people expecting? How do researchers describe things like the automation of all human labor, and what do they think such worlds would look like?

In terms of what brand new questions to ask for the 2024 edition, wow are things moving fast, so maybe ask again in six months?

AI Impacts Survey: December 2023 Edition Read More »


How archaeologists reconstructed the burning of Jerusalem in 586 BCE

On the seventh day of Christmas —

Hebrew Bible is the only surviving account of the siege that laid waste to Solomon’s Temple.

How archaeologists reconstructed the burning of Jerusalem in 586 BCE

Assaf Peretz/Israel Antiquities Authority

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Archaeologists relied on chemical clues and techniques like FTIR spectroscopy and archaeomagnetic analysis to reconstruct the burning of Jerusalem by Babylonian forces around 586 BCE.

Archaeologists have uncovered new evidence in support of Biblical accounts of the siege and burning of the city of Jerusalem by the Babylonians around 586 BCE, according to a September paper published in the Journal of Archaeological Science.

The Hebrew bible contains the only account of this momentous event, which included the destruction of Solomon’s Temple. “The Babylonian chronicles from these years were not preserved,” co-author Nitsan Shalom of Tel Aviv University in Israel told New Scientist. According to the biblical account, “There was a violent and complete destruction, the whole city was burned and it stayed completely empty, like the descriptions you see in [the Book of] Lamentations about the city deserted and in complete misery.”

Judah was a vassal kingdom of Babylon during the late 7th century BCE, under the rule of Nebuchadnezzar II. This did not sit well with Judah’s king, Jehoiakim, who revolted against the Babylonian king in 601 BCE despite being warned not to do so by the prophet Jeremiah. He stopped paying the required tribute and sided with Egypt when Nebuchadnezzar tried (and failed) to invade that country. Jehoiakim died and his son Jeconiah succeeded him when Nebuchadnezzar’s forces besieged Jerusalem in 597 BCE. The city was pillaged and Jeconiah surrendered and was deported to Babylon for his trouble, along with a substantial portion of Judah’s population. (The Book of Kings puts the number at 10,000.) His uncle Zedekiah became king of Judah.

Zedekiah also chafed under Babylonian rule and revolted in turn, refusing to pay the required tribute and seeking alliance with the Egyptian pharaoh Hophra. This resulted in a brutal 30-month siege by Nebuchadnezzar’s forces against Judah and its capital, Jerusalem. Eventually the Babylonians prevailed again, breaking through the city walls to conquer Jerusalem. Zedekiah was forced to watch his sons killed and was then blinded, bound, and taken to Babylon as a prisoner. This time Nebuchadnezzar was less merciful and ordered his troops to completely destroy Jerusalem and pull down the wall around 586 BCE.

There is archaeological evidence to support the account of the city being destroyed by fire, along with nearby villages and towns on the western border. Three residential structures were excavated between 1978 and 1982 and found to contain burned wooden beams dating to around 586 BCE. Archaeologists also found ash and burned wooden beams from the same time period when they excavated several structures at the Giv’ati Parking Lot archaeological site, close to the assumed location of Solomon’s Temple. Samples taken from a plaster floor showed exposure to high temperatures of at least 600 degrees Celsius.

Aerial view of the excavation site in Jerusalem, at the foot of the Temple Mount.

Assaf Peretz/Israel Antiquities Authority

However, it wasn’t possible to determine from that evidence whether the fires were intentional or accidental, or where the fire started if it was indeed intentional. For this latest research, Shalom and her colleagues focused on the two-story Building 100 at the Giv’ati Parking Lot site. They used Fourier transform infrared (FTIR) spectroscopy—which measures the absorption of infrared light to determine to what degree a sample had been heated—and archaeomagnetic analysis, which determines whether samples containing magnetic minerals were sufficiently heated to reorient those compounds to a new magnetic north.

The analysis revealed varying degrees of exposure to high-temperature fire in three rooms (designated A, B, and C) on the bottom level of Building 100, with Room C showing the most obvious evidence. This might have been a sign that Room C was the ignition point, but there was no fire path; the burning of Room C appeared to be isolated. Combined with an earlier 2020 study on segments of the second level of the building, the authors concluded that several fires were lit in the building and the fires burned strongest in the upper floors, except for that “intense local fire” in Room C on the first level.

“When a structure burns, heat rises and is concentrated below the ceiling,” the authors wrote. “The walls and roof are therefore heated to higher temperatures than the floor.” The presence of charred beams on the floors suggest this was indeed the case: most of the heat rose to the ceiling, burning the beams until they collapsed to the floors, which otherwise were subjected to radiant heat. But the extent of the debris was likely not caused just by that collapse, suggesting that the Babylonians deliberately went back in and knocked down any remaining walls.

Furthermore, “They targeted the more important, the more famous buildings in the city,” Shalom told New Scientist, rather than destroying everything indiscriminately. “2600 years later, we’re still mourning the temple.”

While they found no evidence of additional fuels that might have served as accelerants, “we may assume the fire was intentionally ignited  due to its widespread presence in all rooms and both stories of the building,” Shalom et al. concluded. “The finds within the rooms indicate there was enough flammable material (vegetal and wooden items and construction material) to make additional fuel unnecessary. The widespread presence of charred remains suggests a deliberate destruction by fire…. [T]he spread of the fire and the rapid collapse of the building indicate that the destroyers invested great efforts to completely demolish the building and take it out of use.”

DOI: Journal of Archaeological Science, 2023. 10.1016/j.jas.2023.105823  (About DOIs).

How archaeologists reconstructed the burning of Jerusalem in 586 BCE Read More »


A cat video highlighted a big year for lasers in space

Pew Pew —

NASA has invested more than $700 million in testing laser communications in space.

Taters, the orange tabby cat of a Jet Propulsion Laboratory employee, stars in a video beamed from deep space by NASA’s Psyche spacecraft. The graphics illustrate several features from the tech demo, such as Psyche’s orbital path, Palomar’s telescope dome, and technical information about the laser and its data bit rate. Taters’ heart rate, color, and breed are also on display.

It’s been quite a year for laser communications in space. In October and November, NASA launched two pioneering demonstrations to test high-bandwidth optical communication links, and these tech demos are now showing some initial results.

On December 11, a laser communications terminal aboard NASA’s Psyche spacecraft on the way to an asteroid linked up with a receiver in Southern California. The near-infrared laser beam contained an encoded message in the form of a 15-second ultra-high-definition video showing a cat bouncing around a sofa, chasing the light of a store-bought laser toy.

Laser communications offer the benefit of transmitting data at a higher rate than achievable with conventional radio links. In fact, the Deep Space Optical Communications (DSOC) experiment on the Psyche spacecraft is testing technologies capable of sending data at rates 10 to 100 times greater than possible on prior missions.

“We’re looking to increase the amount of data we can get down to Earth, and that has a lot of advantages to us,” said Jeff Volosin, acting deputy associate administrator for NASA’s space communications and navigation program, before the launch of Psyche earlier this year.

Now, DSOC has set a record for the farthest distance a high-definition video has streamed from space. At the time, Psyche was traveling 19 million miles (31 million kilometers) from Earth, about 80 times the distance between Earth and the Moon. Traveling at the speed of light, the video signal took 101 seconds to reach Earth, sent at the system’s maximum bit rate of 267 megabits per second, NASA said.
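Those figures are easy to sanity-check with standard unit conversions:

```python
# Check the quoted distance and one-way light time for the cat video.
MILES_TO_KM = 1.609344
C_MILES_PER_S = 186_282          # speed of light in miles per second
EARTH_MOON_MILES = 238_855       # average Earth-Moon distance

distance_miles = 19e6
print(distance_miles * MILES_TO_KM / 1e6)  # ~30.6 million km ("31 million")
print(distance_miles / C_MILES_PER_S)      # ~102 s, matching the ~101 s quoted
print(distance_miles / EARTH_MOON_MILES)   # ~80 Earth-Moon distances
```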

A playful experiment

After reaching the receiver at Palomar Observatory in San Diego County, each video frame was transmitted “live” to NASA’s Jet Propulsion Laboratory in Pasadena, California, where it was played in real time, according to NASA.

“One of the goals is to demonstrate the ability to transmit broadband video across millions of miles. Nothing on Psyche generates video data, so we usually send packets of randomly generated test data,” said Bill Klipstein, the tech demo’s project manager at JPL, in a statement. “But to make this significant event more memorable, we decided to work with designers at JPL to create a fun video, which captures the essence of the demo as part of the Psyche mission.”

The video of Taters, the orange tabby cat of a JPL employee, was recorded before the launch of Psyche and stored on the spacecraft for this demonstration. The robotic probe launched on October 13 aboard a SpaceX Falcon Heavy rocket, with the primary goal of flying to the asteroid Psyche, a metal-rich world in the asteroid belt between the orbits of Mars and Jupiter.

It will take six years for the Psyche probe to reach its destination, and NASA tacked on a laser communications experiment to help keep the spacecraft busy during the cruise. Since the launch in October, ground teams at JPL switched on the Deep Space Optical Communications (DSOC) experiment and ran it through some early tests.

One of the most significant technical challenges involved in the DSOC experiment was aligning the 8.6-inch (22-centimeter) optical telescope aboard Psyche with a transmitter and receiver fitted to ground-based telescopes in California and vice versa. Because Psyche is speeding through deep space, this problem is akin to trying to hit a dime from a mile away while the dime is moving, according to Abi Biswas, DSOC’s project technologist at JPL.

“Once you achieve that feat, the signal that is received is still very weak and therefore requires very sensitive detectors and processing electronics which can take that signal and extract information that’s encoded in it,” Biswas said.
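To put that dime analogy in rough angular terms (taking a US dime as about 17.9 mm across):

```python
# Approximate pointing accuracy implied by "a dime from a mile away."
dime_m, mile_m = 0.0179, 1609.34
print(dime_m / mile_m * 1e6)  # ~11 microradians
```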

The telescope aboard Psyche is mounted on an isolation-and-pointing assembly to stabilize the optics and isolate them from spacecraft vibrations, according to NASA. This is necessary to eliminate jitters that could prevent a stable laser lock between Earth and the Psyche spacecraft.

“What optical or laser communications allows you is to achieve very high data rates, but on the downside, it’s a very narrow laser beam that requires very accurate pointing control,” Biswas told reporters before the launch. “For example, the platform disturbance from a typical spacecraft would throw off the pointing, so you need to actively isolate from it or control against it.

“For near-Earth missions, you can just control against it because you have enough control bandwidth,” he said. “From deep space, where the signals received are very weak, you don’t have that much control bandwidth, so you have to isolate from the disturbance.”

The Deep Space Optical Communications (DSOC) experiment is mounted on NASA’s Psyche spacecraft on the way to an asteroid. The inset image shows the mirror of the instrument’s telescope for receiving and transmitting laser signals.

There’s another drawback of direct-to-Earth laser communications from space. Cloud cover over transmitting and receiving telescopes on Earth could block signals, so an operational optical communications network will require several ground nodes at different locations worldwide, ideally positioned in areas known for clear skies.

A cat video highlighted a big year for lasers in space Read More »


SpaceX launches two rockets—three hours apart—to close out a record year

SpaceX’s Falcon Heavy rocket lifted off Thursday night from NASA’s Kennedy Space Center in Florida.

It seems like SpaceX did everything this year but launch 100 times.

On Thursday night, the launch company sent two more rockets into orbit from Florida. One was a Falcon Heavy, the world’s most powerful rocket in commercial service, carrying the US military’s X-37B spaceplane from a launch pad at NASA’s Kennedy Space Center at 8:07 pm EST (01:07 UTC). Less than three hours later, at 11:01 pm EST (04:01 UTC), SpaceX’s workhorse Falcon 9 launcher took off a few miles to the south with a payload of 23 Starlink Internet satellites.

The Falcon Heavy’s two side boosters and the Falcon 9’s first stage landed back on Earth for reuse.

These were SpaceX’s final launches of 2023. SpaceX ends the year with 98 flights, including 91 Falcon 9s, five Falcon Heavy rockets, and two test launches of the giant new Super Heavy-Starship rocket. These flights were spread across four launch pads in Florida, California, and Texas.

Elon Musk, SpaceX’s founder and CEO, set a goal of 100 launches this year, up from the company’s previous record of 61 in 2022. For a while, it looked like SpaceX was on track to accomplish the feat, but a spate of bad weather and technical problems with the final Falcon Heavy launch of the year kept the company short of 100 flights.

King of ‘upmass’

“Congrats to the entire Falcon team at SpaceX on a record breaking 96 launches in 2023!” wrote Jon Edwards, vice president of Falcon launch vehicles at SpaceX, on the social media platform X. “I remember when Elon Musk first threw out a goal of 100 launches as a thought experiment, intended to unlock our thinking as to how we might accelerate Falcon across all levels of production and launch.

“Only a few years later and here we are,” Edwards wrote. “I’m so incredibly proud to work with the best team on Earth, and so excited to see what we achieve next year.”

It’s important to step back and put these numbers in context. No other family of orbit-class rockets has ever flown more than 63 times in a year. SpaceX’s Falcon rockets have now exceeded this number by roughly 50 percent. SpaceX’s competitors in the United States, such as United Launch Alliance and Rocket Lab, managed far fewer flights in 2023. ULA had three missions, and Rocket Lab launched its small Electron booster 10 times.

Nearly two-thirds of SpaceX’s missions this year were dedicated to delivering satellites to orbit for SpaceX’s Starlink broadband network, a constellation that now numbers more than 5,000 spacecraft.

SpaceX also launched five missions with the Falcon Heavy rocket, created by aggregating three Falcon 9 rocket boosters together. Highlights from SpaceX’s 2023 Falcon launch schedule included three crew missions to the International Space Station, and the launch of NASA’s Psyche mission to explore a metallic asteroid.

In all, SpaceX’s Falcon rockets hauled approximately 1,200 metric tons, or more than 2.6 million pounds, of payload mass into orbit this year. This “upmass” is equivalent to nearly three International Space Stations. Most of this was made up of mass-produced Starlink satellites.
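The arithmetic checks out (taking the ISS at roughly 420 metric tons):

```python
# Sanity check on SpaceX's 2023 "upmass" figures.
upmass_tonnes = 1200
print(upmass_tonnes * 2204.62 / 1e6)  # ~2.65 million pounds
print(upmass_tonnes / 420)            # ~2.9 ISS-equivalents
```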

SpaceX launches two rockets—three hours apart—to close out a record year Read More »


These scientists explored the good vibrations of the bundengan and didgeridoo

On the fifth day of Christmas —

Their relatively simple construction produces some surprisingly complicated physics.

The bundengan (left) began as a combined shelter/instrument for duck hunters, but it is now often played onstage.

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: the surprisingly complex physics of two simply constructed instruments: the Indonesian bundengan and the Australian Aboriginal didgeridoo (or didjeridu).

The bundengan is a rare, endangered instrument from Indonesia that can imitate the sound of metallic gongs and cow-hide drums (kendangs) in a traditional gamelan ensemble. The didgeridoo is an iconic instrument associated with Australian Aboriginal culture that produces a single, low-pitched droning note that can be continuously sustained by skilled players. Both instruments are a topic of scientific interest because their relatively simple construction produces some surprisingly complicated physics. Two recent studies into their acoustical properties were featured at an early December meeting of the Acoustical Society of America, held in Sydney, Australia, in conjunction with the Australian Acoustical Society.

The bundengan originated with Indonesian duck hunters as protection from rain and other adverse conditions while in the field, doubling as a musical instrument to pass the time. It’s a half-dome structure woven out of bamboo splits to form a lattice grid, crisscrossed at the top to form the dome. That dome is then coated with layers of bamboo sheaths held in place with sugar palm fibers. Musicians typically sit cross-legged inside the dome-shaped resonator and pluck the strings and bars to play. The strings produce metallic sounds while the plates inside generate percussive drum-like sounds.

Gea Oswah Fatah Parikesit of Universitas Gadjah Mada in Indonesia has been studying the physics and acoustics of the bundengan for several years now. And yes, he can play the instrument. “I needed to learn to do the research,” he said during a conference press briefing. “It’s very difficult because you have two different blocking styles for the right and left hand sides. The right hand is for the melody, for the string, and the left is for the rhythm, to pluck the chords.”

Much of Parikesit’s prior research on the bundengan focused on the unusual metal/percussive sound of the strings, especially the critical role played by the placement of bamboo clips. He used computational simulations of the string vibrations to glean insight on how the specific gong-like sound was produced, and how those vibrations change with the addition of bamboo clips located at different sections of the string. He found that adding the clips produces two vibrations of different frequencies at different locations on the string, with the longer section having a high frequency vibration compared to the lower frequency vibration of the shorter part of the string. This is the key to making the gong-like sound.

This time around, Parikesit was intrigued by the fact that many bundengan musicians have noted the instrument sounds better wet. In fact, several years ago, Parikesit attended a bundengan concert in Melbourne during the summer when it was very hot and dry—so much so that the musicians brought their own water spray bottles to ensure the instruments stayed (preferably) fully wet.

A bundengan is a portable shelter woven from bamboo, worn by Indonesian duck herders who often outfit it to double as a musical instrument.

Gea Oswah Fatah Parikesit

“A key element between the dry and wet versions of the bundengan is the bamboo sheaths—the material used to layer the wall of the instrument,” Parikesit said. “When the bundengan is dry, the bamboo sheaths open and that results in looser connections between neighboring sheaths. When the bundengan is wet, the sheaths tend to form a curling shape, but because they are held by ropes, they form tight connections between the neighboring sheaths.”

The resulting tension allows the sheaths to vibrate together. That has a significant impact on the instrument’s sound, taking on a “twangier” quality when dry and more of a metallic gong sound when it is wet. Parikesit has tried making bundengans with other materials: paper, leaves, even plastics. But none of those produce the same sound quality as the bamboo sheaths. He next plans to investigate other musical instruments made from bamboo sheaths.

“As an Indonesian, I have extra motivation because the bundengan is a piece of our cultural heritage,” Parikesit said. “I am trying my best to support the conservation and documentation of the bundengan and other Indonesian endangered instruments.”

Coupling with the human vocal tract

Meanwhile, John Smith of the University of New South Wales is equally intrigued by the physics and acoustics of the didgeridoo. The instrument is constructed from the trunk or large branches of the eucalyptus tree. The trick is to find a live tree with lots of termite activity, such that the trunk has been hollowed out leaving just the living sapwood shell. A suitably hollow trunk is then cut down, cleaned out, the bark removed, the ends trimmed, and the exterior shaped into a long cylinder or cone to produce the final instrument. The longer the instrument, the lower the pitch or key.

Players vibrate their lips to play the didgeridoo in a manner similar to lip-valve instruments like trumpets or trombones, except that those instruments use a small mouthpiece as an interface. (Sometimes a beeswax rim is added to the mouthpiece end of a didgeridoo.) Players typically use circular breathing to maintain the continuous low-pitched drone for several minutes, inhaling through the nose while using air stored in puffed cheeks to keep producing sound. It’s the coupling of the instrument with the human vocal tract that makes the physics so complex, per Smith.

Smith was interested in how changes in the configuration of the vocal tract produce timbral changes in the rhythmic patterns of the sounds. To find out, “We needed to develop a technique that could measure the acoustic properties of the player’s vocal tract while playing,” Smith said during the same press briefing. “This involved injecting a broadband signal into the corner of the player’s mouth and using a microphone to record the response.” That enabled Smith and his colleagues to record the vocal tract impedance for different mouth configurations.
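
In signal-processing terms, that measurement amounts to estimating a frequency response between the injected broadband signal and what the microphone records. Below is a minimal sketch of the idea using the standard H1 transfer-function estimator; the data are synthetic stand-ins (a single toy resonance near 900 Hz), whereas the real study used calibrated acoustic hardware to obtain impedance, not this bare transfer function.

```python
import numpy as np
from scipy.signal import welch, csd

def transfer_function(excitation, response, fs, nperseg=4096):
    """H1 estimate of the frequency response between an injected
    broadband signal and the microphone recording: H(f) = S_xy / S_xx,
    with Welch averaging to suppress noise."""
    f, s_xx = welch(excitation, fs=fs, nperseg=nperseg)
    _, s_xy = csd(excitation, response, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx

fs = 48_000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 2)                      # injected broadband noise
t = np.arange(2048) / fs
h = np.exp(-200 * t) * np.sin(2 * np.pi * 900 * t)   # toy "vocal tract" resonance
y = np.convolve(x, h)[: x.size]                      # simulated mic recording
f, H = transfer_function(x, y, fs)
print(f"strongest response near {f[np.argmax(np.abs(H))]:.0f} Hz")
```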

Producing complex sounds with the didjeridu requires creating and manipulating resonances inside the vocal tract.

Enlarge / Producing complex sounds with the didjeridu requires creating and manipulating resonances inside the vocal tract.

Kate Callas

The results: “We showed that strong resonances in the vocal tract can suppress bands of frequencies in the output sound,” said Smith. “The remaining strong bands of frequencies, called formants, are noticed by our hearing because they fall in the same ranges as the formants we use in speech. It’s a bit like a sculptor removing marble, and we observe the bits that are left behind.”

Smith et al. also noted that variations in timbre arise when the player sings while playing, or imitates animal sounds (such as the dingo or the kookaburra), which produces many new frequencies in the output sound. To measure the contact between vocal folds, they placed electrodes on either side of a player’s throat and passed a small, high-frequency electric current between them. They simultaneously measured lip movement with another pair of electrodes above and below the lips. Both types of vibration affect the airflow, producing the new frequencies.

As for what makes a didgeridoo desirable to players, acoustic measurements on a set of 38 instruments—with the quality of each rated by seven experts in seven subjective categories—produced a rather surprising result. One might think players would prefer instruments with very strong resonances, but the opposite turned out to be true: instruments with stronger resonances were ranked worst, while those with weaker resonances rated more highly. Smith, for one, thinks this makes sense. “This means that their own vocal tract resonance can dominate the timbre of the notes,” he said.


this-bird-is-like-a-gps-for-honey

This bird is like a GPS for honey

Show me the honey —

The honeyguide recognizes calls made by different human groups.

A bird perched on a wall in front of an urban backdrop.

Enlarge / A greater honeyguide

With all the technological advances humans have made, it may seem like we’ve lost touch with nature—but not all of us have. People in some parts of Africa use a guide more effective than any GPS system when it comes to finding beeswax and honey. This is not a gizmo, but a bird.

The Greater Honeyguide (highly appropriate name), Indicator indicator (even more appropriate scientific name), knows where all the beehives are because it eats beeswax. The Hadza people of Tanzania and Yao people of Mozambique realized this long ago. Hadza and Yao honey hunters have formed a unique relationship with this bird species by making distinct calls, and the honeyguide reciprocates with its own calls, leading them to a hive.

Because the Hadza and Yao calls differ, zoologist Claire Spottiswoode of the University of Cambridge and anthropologist Brian Wood of UCLA wanted to find out if the birds respond generically to human calls, or are attuned to their local humans. They found that the birds are much more likely to respond to a local call, meaning that they have learned to recognize that call.

Come on, get that honey

To see which sound the birds were most likely to respond to, Spottiswoode and Wood played three recordings, starting with the local call. The Yao honeyguide call is what the researchers describe as “a loud trill followed by a grunt (‘brrrr-hm’),” while the Hadza call is more of “a melodic whistle,” as they write in a study recently published in Science. The second recording was the foreign call: the Yao call in Hadza territory and vice versa.

The third recording was an unrelated human sound meant to test whether the human voice alone was enough for a honeyguide to follow. Because Hadza and Yao voices sound similar, the researchers would alternate among recordings of honey hunters speaking words such as their names.

So which sounds were the most effective cues for honeyguides to partner with humans? In Tanzania, local Hadza calls were three times more likely to initiate a partnership with a honeyguide than Yao calls or human voices. Local Yao calls were likewise the most successful in Mozambique, where they were twice as likely as Hadza calls or human voices to elicit a response leading to a cooperative search for a beehive. Though honeyguides did sometimes respond to the other sounds, and were often willing to cooperate upon hearing them, it became clear that the birds in each region had learned a local cultural tradition, one just as much a part of their lives as of the lives of the humans who began it.

Now you’re speaking my language

There is a reason that honey hunters in both the Hadza and Yao tribes told Wood and Spottiswoode that they have never changed their calls and will never change them. If they did, they’d be unlikely to gather nearly as much honey.

How did this interspecies communication evolve? Other African cultures besides the Hadza and Yao have their own calls to summon a honeyguide. Why do the types of calls differ? The researchers do not think these calls came about randomly.

Both the Hadza and Yao people have their own unique languages, and sounds from those languages may have been incorporated into their calls. But there is more to it than that. The Hadza often hunt game while they search for honey, so they don’t want their calls to be recognized as human, or else the prey they are after might sense a threat and flee. This may be why they use whistles to communicate with honeyguides: by sounding like birds, they can both attract the honeyguides and stalk prey without being detected.

In contrast, the Yao do not hunt mammals, relying mostly on agriculture and fishing for food. This, along with the fact that they try to avoid potentially dangerous creatures such as lions, rhinos, and elephants, can help explain why they use recognizably human vocalizations to call honeyguides. Human voices may scare those animals away, so Yao honey hunters can safely seek honey with their honeyguide partners. These findings show that cultural diversity has had a significant influence on calls to honeyguides.

While animals might not literally speak our language, the honeyguide is just one of many species that have their own ways of communicating with us. Some can even learn our cultural traditions.

“Cultural traditions of consistent behavior are widespread in non-human animals and could plausibly mediate other forms of interspecies cooperation,” the researchers said in the same study.

Honeyguides start guiding humans as soon as they begin to fly, and this knack, combined with learning to answer traditional calls and collaborate with honey hunters, works well for both human and bird. Maybe they are (in a way) speaking our language.

Science, 2023. DOI: 10.1126/science.adh412


ai-created-“virtual-influencers”-are-stealing-business-from-humans

AI-created “virtual influencers” are stealing business from humans

digital influencer

Enlarge / Aitana Lopez, an AI-generated influencer, has convinced many social media users she is real.

FT montage/The Clueless/Getty Images

Pink-haired Aitana Lopez is followed by more than 200,000 people on social media. She posts selfies from concerts and her bedroom, while tagging brands such as hair care line Olaplex and lingerie giant Victoria’s Secret.

Brands have paid about $1,000 a post for her to promote their products on social media—despite the fact that she is entirely fictional.

Aitana is a “virtual influencer” created using artificial intelligence tools, one of the hundreds of digital avatars that have broken into the growing $21 billion content creator economy.

Their emergence has left human influencers worried that their income is being cannibalized by digital rivals. That concern is shared by people in more established professions who fear their livelihoods are under threat from generative AI—technology that can spew out humanlike text, images, and code in seconds.

But those behind the hyper-realistic AI creations argue they are merely disrupting an overinflated market.

“We were taken aback by the skyrocketing rates influencers charge nowadays. That got us thinking, ‘What if we just create our own influencer?’” said Diana Núñez, co-founder of the Barcelona-based agency The Clueless, which created Aitana. “The rest is history. We unintentionally created a monster. A beautiful one, though.”

Over the past few years, there have been high-profile partnerships between luxury brands and virtual influencers, including Kim Kardashian’s make-up line KKW Beauty with Noonoouri, and Louis Vuitton with Ayayi.

An Instagram analysis of an H&M advert featuring the virtual influencer Kuki found that, compared with a traditional ad, it reached 11 times more people and cut the cost per person who remembered the advert by 91 percent.


how-watching-beavers-from-space-can-help-drought-ridden-areas-bounce-back

How watching beavers from space can help drought-ridden areas bounce back

Busy as a… —

An algorithm can spot beaver ponds from satellite imagery.

Beaver on a dam

Enlarge / Where beavers set up home, the dams they build profoundly change the landscape.

For the first time in four centuries, it’s good to be a beaver. Long persecuted for their pelts and reviled as pests, the dam-building rodents are today hailed by scientists as ecological saviors. Their ponds and wetlands store water in the face of drought, filter out pollutants, furnish habitat for endangered species, and fight wildfires. In California, Castor canadensis is so prized that the state recently committed millions to its restoration.

While beavers’ benefits are indisputable, our knowledge of them remains riddled with gaps. We don’t know how many are out there, which direction their populations are trending, or which watersheds most desperately need a beaver infusion. Few states have systematically surveyed them, and many beaver ponds are tucked into remote streams far from human settlements, where they’re near-impossible to count. “There’s so much we don’t understand about beavers, in part because we don’t have a baseline of where they are,” says Emily Fairfax, a beaver researcher at the University of Minnesota.

But that’s starting to change. Over the past several years, a team of beaver scientists and Google engineers have been teaching an algorithm to spot the rodents’ infrastructure on satellite images. Their creation has the potential to transform our understanding of these paddle-tailed engineers—and help climate-stressed states like California aid their comeback. And while the model hasn’t yet gone public, researchers are already salivating over its potential. “All of our efforts in the state should be taking advantage of this powerful mapping tool,” says Kristen Wilson, the lead forest scientist at the conservation organization the Nature Conservancy. “It’s really exciting.”

The beaver-mapping model is the brainchild of Eddie Corwin, a former member of Google’s real-estate sustainability group. Around 2018, Corwin began to contemplate how his company might become a better steward of water, particularly the many coastal creeks that run past its Bay Area offices. In the course of his research, Corwin read Water: A Natural History, by an author aptly named Alice Outwater. One chapter dealt with beavers, whose bountiful wetlands, Outwater wrote, “can hold millions of gallons of water” and “reduce flooding and erosion downstream.” Corwin, captivated, devoured other beaver books and articles, and soon started proselytizing to his friend Dan Ackerstein, a sustainability consultant who works with Google. “We both fell in love with beavers,” Corwin says.

Corwin’s beaver obsession met a receptive corporate culture. Google’s employees are famously encouraged to devote time to passion projects, the policy that produced Gmail; Corwin decided his passion was beavers. But how best to assist the buck-toothed architects? Corwin knew that beaver infrastructure—their sinuous dams, sprawling ponds, and spidery canals—is often so epic it can be seen from space. In 2010, a Canadian researcher discovered the world’s longest beaver dam, a stick-and-mud bulwark that stretches more than a half-mile across an Alberta park, by perusing Google Earth. Corwin and Ackerstein began to wonder whether they could contribute to beaver research by training a machine-learning algorithm to automatically detect beaver dams and ponds on satellite imagery—not one by one, but thousands at a time, across the surface of an entire state.

After discussing the concept with Google’s engineers and programmers, Corwin and Ackerstein decided it was technically feasible. They next reached out to Fairfax, who’d gained renown for a landmark 2020 study showing that beaver ponds provide damp, fireproof refuges in which other species can shelter during wildfires. In some cases, Fairfax found, beaver wetlands even stopped blazes in their tracks. The critters were such talented firefighters that she’d half-jokingly proposed that the US Forest Service change its mammal mascot—farewell, Smokey Bear, and hello, Smokey Beaver.

Fairfax was enthusiastic about the pond-mapping idea. She and her students already used Google Earth to find beaver dams to study within burned areas. But it was a laborious process, one that demanded endless hours of tracing alpine streams across screens in search of the bulbous signature of a beaver pond. An automated beaver-finding tool, she says, could “increase the number of fires I can analyze by an order of magnitude.”

With Fairfax’s blessing, Corwin, Ackerstein, and a team of programmers set about creating their model. The task, they decided, was best suited to a convolutional neural network, a type of algorithm that essentially tries to figure out whether a given chunk of geospatial data includes a particular object—whether a stretch of mountain stream contains a beaver dam, say. Fairfax and some obliging beaverologists from Utah State University submitted thousands of coordinates for confirmed dams, ponds, and canals, which the Googlers matched up with their own high-resolution images to teach the model to recognize the distinctive appearance of beaverworks. The team also fed the algorithm negative data—images of beaverless streams and wetlands—so that it would know what it wasn’t looking for. They dubbed their model the Earth Engine Automated Geospatial Elements Recognition, or EEAGER—yes, as in “eager beaver.”
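
In outline, the classification step of such a model is easy to sketch. The toy convolutional classifier below scores an image tile for “dam present”; EEAGER’s actual architecture, tile size, and training setup aren’t spelled out in this story, so every specific here (layer sizes, 256-pixel RGB tiles) is an assumption for illustration.

```python
import torch
import torch.nn as nn

class DamClassifier(nn.Module):
    """Toy CNN that scores an image tile for 'beaver dam present'.
    Illustrative only -- not EEAGER's actual architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # collapse to one 64-dim vector
        )
        self.head = nn.Linear(64, 1)       # single logit: dam vs. no dam

    def forward(self, x):                  # x: (batch, 3, height, width)
        return self.head(self.features(x).flatten(1))

model = DamClassifier()
tile = torch.rand(1, 3, 256, 256)          # hypothetical RGB satellite tile
prob = torch.sigmoid(model(tile))          # probability the tile holds a dam
print(f"P(dam) = {float(prob):.2f}")
```

Training a classifier like this would pair the confirmed dam locations (positives) and the beaverless scenes (negatives) with a binary cross-entropy loss; the loss choice is an assumption, but the positive/negative split mirrors the data the team describes feeding the model.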

Training EEAGER to pick out beaver ponds wasn’t easy. The American West was rife with human-built features that seemed practically designed to fool a beaver-seeking model. Curving roads reminded EEAGER of winding dams; the edges of man-made reservoirs registered as beaver-built ponds. Most confounding, weirdly, were neighborhood cul-de-sacs, whose asphalt circles, surrounded by gray strips of sidewalk, bore an uncanny resemblance to a beaver pond fringed by a dam. “I don’t think anybody anticipated that suburban America was full of what a computer would think were beaver dams,” Ackerstein says.

As the researchers pumped more data into EEAGER, it got better at distinguishing beaver ponds from impostors. In May 2023, the Google team, along with beaver researchers Fairfax, Joe Wheaton, and Wally Macfarlane, published a paper in the Journal of Geophysical Research: Biogeosciences demonstrating the model’s efficacy. The group fed EEAGER more than 13,000 landscape images containing beaver dams from seven western states, along with some 56,000 dam-less locations. The model categorized the landscapes accurately—beaver-dammed or not—98.5 percent of the time.

That statistic, granted, oversells EEAGER’s perfection. The Google team opted to make the model fairly liberal, meaning that, when it predicts whether or not a pixel of satellite imagery contains a beaver dam, it’s more likely to err on the side of spitting out a false positive. EEAGER still requires a human to check its answers, in other words—but it can dramatically expedite the work of scientists like Fairfax by pointing them to thousands of probable beaver sites.
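
That “fairly liberal” behavior is, in effect, a threshold choice: score every tile, then set the cutoff low enough that nearly all true dams are kept, accepting extra false positives for humans to weed out. Here is a toy sketch of that recall-first tuning, using entirely made-up validation scores:

```python
import numpy as np

def choose_threshold(scores, labels, min_recall=0.95):
    """Lowest cutoff (scanning from the top score down) that still
    recovers at least `min_recall` of the true dams."""
    order = np.argsort(scores)[::-1]               # highest scores first
    sorted_scores, sorted_labels = scores[order], labels[order]
    recall = np.cumsum(sorted_labels) / sorted_labels.sum()
    return sorted_scores[np.argmax(recall >= min_recall)]

# Made-up validation set: roughly 20% of tiles actually contain a dam.
rng = np.random.default_rng(1)
labels = (rng.random(1000) < 0.2).astype(int)
scores = np.clip(labels * 0.5 + rng.random(1000) * 0.6, 0.0, 1.0)

cutoff = choose_threshold(scores, labels)
flagged = scores >= cutoff
print(f"cutoff {cutoff:.2f} flags {flagged.sum()} tiles and recovers "
      f"{labels[flagged].sum()} of {labels.sum()} true dams")
```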

“We’re not going to replace the expertise of biologists,” Ackerstein says. “But the model’s success is making human identification much more efficient.”

According to Fairfax, EEAGER’s use cases are many. The model could be used to estimate beaver numbers, monitor population trends, and calculate beaver-provided ecosystem services like water storage and fire prevention. It could help states figure out where to reintroduce beavers, where to target stream and wetland restoration, and where to create conservation areas. It could allow researchers to track beavers’ spread in the Arctic as the rodents move north with climate change; or their movements in South America, where beavers were introduced in the 1940s and have since proliferated. “We literally cannot handle all the requests we’re getting,” says Fairfax, who serves as EEAGER’s scientific adviser.

The algorithm’s most promising application might be in California. The Golden State has a tortured relationship with beavers: For decades, the state generally denied that the species was native, the byproduct of an industrial-scale fur trade that wiped beavers from the West Coast before biologists could properly survey them. Although recent historical research proved that beavers belong virtually everywhere in California, many water managers and farmers still perceive them as nuisances, and frequently have them killed for plugging up road culverts and meddling with irrigation infrastructure.

Yet those deeply entrenched attitudes are changing. After all, no state is in more dire need of beavers’ water-storage services than flammable, drought-stricken, flood-prone California. In recent years, thanks to tireless lobbying by a campaign called Bring Back the Beaver, the California Department of Fish and Wildlife has begun to overhaul its outdated beaver policies. In 2022, the state budgeted more than $1.5 million for beaver restoration, and announced it would hire five scientists to study and support the rodents. It also revised its official approach to beaver conflict to prioritize coexistence over lethal trapping. And, this fall, the wildlife department relocated a family of seven beavers onto the ancestral lands of the Mountain Maidu people—the state’s first beaver release in almost 75 years.

It’s only appropriate, then, that California is where EEAGER is going to get its first major test. The Nature Conservancy and Google plan to run the model across the state sometime in 2024, a comprehensive search for every last beaver dam and pond. That should give the state’s wildlife department a good sense of where its beavers are living, roughly how many it has, and where it could use more. The model will also provide California with solid baseline data against which it can compare future populations, to see whether its new policies are helping beavers recover. “When you have imagery that’s repeated frequently, that gives you the opportunity to understand change through time,” says the Conservancy’s Kristen Wilson.

What’s next for EEAGER after its California trial? The main thing, Ackerstein says, is to train it to identify beaverworks in new places. (Although beaver dams and ponds present as fairly similar in every state, the model also relies on context clues from the surrounding landscape, and a sagebrush plateau in Wyoming looks very different from a deciduous forest in Massachusetts.) The team also has to figure out EEAGER’s long-term fate: Will it remain a tool hosted by Google? Spin off into a stand-alone product? Become a service operated by a university or nonprofit?

“That’s the challenge for the future—how do we make this more universally accessible and usable?” Corwin says. The beaver revolution may not be televised, but it will definitely be documented by satellite.

This story originally appeared on wired.com.
