
Study: Social media probably can’t be fixed


“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”

Credit: Aurich Lawson | Getty Images


It’s no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion’s share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. “What we found is that we didn’t need to put any algorithms in, we didn’t need to massage the model,” Törnberg told Ars. “It just came out of the baseline model, all of these dynamics.”

They then tested six different intervention strategies that social scientists have proposed to counter those effects: switching to chronological or randomized feeds; inverting engagement-optimization algorithms to reduce the visibility of highly reposted sensational content; boosting the diversity of viewpoints to broaden users’ exposure to opposing political views; using “bridging algorithms” to elevate content that fosters mutual understanding rather than emotional provocation; hiding social statistics like repost and follower counts to reduce social influence cues; and removing biographies to limit exposure to identity-based signals.

The results were far from encouraging. Only some interventions showed modest improvements, and none fully disrupted the fundamental mechanisms producing the dysfunctional effects. In fact, some interventions actually made the problems worse. For example, chronological ordering had the strongest effect on reducing attention inequality, but there was a tradeoff: It also intensified the amplification of extreme content. Bridging algorithms significantly weakened the link between partisanship and engagement and modestly improved viewpoint diversity, but they also increased attention inequality. Boosting viewpoint diversity had no significant impact at all.

So is there any hope of finding effective intervention strategies to combat these problematic aspects of social media? Or should we nuke our social media accounts altogether and go live in caves? Ars caught up with Törnberg for an extended conversation to learn more about these troubling findings.

Ars Technica: What drove you to conduct this study?

Petter Törnberg: For the last 20 years or so, there has been a ton of research on how social media is reshaping politics in different ways, almost always using observational data. But in the last few years, there’s been a growing appetite for moving beyond just complaining about these things and trying to see how we can be a bit more constructive. Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?

The problem with using observational data is that it’s very hard to test counterfactuals to implement alternative solutions. So one kind of method that has existed in the field is agent-based simulations and social simulations: create a computer model of the system and then run experiments on that and test counterfactuals. It is useful for looking at the structure and emergence of network dynamics.

But at the same time, those models represent agents as simple rule followers or optimizers, and that doesn’t capture anything of the cultural world or politics or human behavior. I’ve always been of the controversial opinion that those things actually matter, especially for online politics. We need to study both the structural dynamics of network formations and the patterns of cultural interaction.

Ars Technica: So you developed this hybrid model that combines LLMs with agent-based modeling.

Petter Törnberg: That’s the solution that we find to move beyond the problems of conventional agent-based modeling. Instead of having these simple rule followers or optimizers, we use AI or LLMs. It’s not a perfect solution—there are all kinds of biases and limitations—but it does represent a step forward compared to a list of if/then rules. It captures something more of human behavior, in a more plausible way. We give them personas that we get from the American National Election Survey, which has very detailed questions about US voters and their hobbies and preferences. And then we turn that into a textual persona—your name is Bob, you’re from Massachusetts, and you like fishing—just to give them something to talk about and a little bit richer representation.

And then they see the random news of the day, and they can choose to post the news, read posts from other users, repost them, or follow other users. If they choose to follow someone, they look at that user’s previous messages and their profile.
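The loop Törnberg describes, personas plus a daily news item plus a small action set, can be put in schematic form. In this minimal Python sketch the LLM call is stubbed out with a random choice, and every name (the `Agent` class, the `ACTIONS` list, the persona strings) is illustrative rather than taken from the authors' code:

```python
import random

random.seed(1)

# Schematic version of the simulation loop described above. Each agent has
# a survey-style persona; every step it sees the day's news and may post,
# read, repost, or follow. A real implementation would prompt an LLM with
# the persona and feed; choose_action below is a random stub.
ACTIONS = ["post_news", "read", "repost", "follow"]

class Agent:
    def __init__(self, name, persona):
        self.name = name
        self.persona = persona          # e.g. "from Massachusetts, likes fishing"
        self.following = set()
        self.posts = []

def choose_action(agent, news, feed):
    # Stand-in for the LLM call that would reason over persona + feed.
    return random.choice(ACTIONS)

agents = [Agent(f"user{i}", "survey-derived persona") for i in range(20)]
feed = []                               # shared (author, item) timeline
for step in range(100):
    news = f"news item {step}"
    for agent in agents:
        action = choose_action(agent, news, feed)
        if action == "post_news":
            agent.posts.append(news)
            feed.append((agent.name, news))
        elif action == "repost" and feed:
            _, item = random.choice(feed)
            feed.append((agent.name, item))
        elif action == "follow":
            # an LLM agent would first inspect the other user's profile
            # and previous messages; the stub just picks someone
            other = random.choice(agents)
            if other is not agent:
                agent.following.add(other.name)
        # "read" is a no-op in this stub

print(len(feed), "items in the shared feed after 100 steps")
```

Even this stub produces the basic feedback the paper studies: what gets posted and reposted shapes who follows whom, which in turn shapes what each agent sees.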

Our idea was to start with the minimal bare-bones model and then add things to try to see if we could reproduce these problematic consequences. But to our surprise, we actually didn’t have to add anything because these problematic consequences just came out of the bare bones model. This went against our expectations and also what I think the literature would say.

Ars Technica: I’m skeptical of AI in general, particularly in a research context, but there are very specific instances where it can be extremely useful. This strikes me as one of them, largely because your basic model proved to be so robust. You got the same dynamics without introducing anything extra.

Petter Törnberg: Yes. It’s been a big conversation in social science over the last two years or so. There’s a ton of interest in using LLMs for social simulation, but no one has really figured out for what or how it’s going to be helpful, or how we’re going to get past these problems of validity and so on. The kind of approach that we take in this paper is building on a tradition of complex systems thinking. We imagine very simple models of the human world and try to capture very fundamental mechanisms. It’s not really aiming to be realistic or a precise, complete model of human behavior.

I’ve been one of the more critical people of this method, to be honest. At the same time, it’s hard to imagine any other way of studying these kinds of dynamics where we have cultural and structural aspects feeding back into each other. But I still have to take the findings with a grain of salt and realize that these are models, and they’re capturing a kind of hypothetical world—a spherical cow in a vacuum. We can’t predict what someone is going to have for lunch on Tuesday, but we can capture broader mechanisms, and we can see how robust those mechanisms are. We can see whether they’re stable, unstable, which conditions they emerge in, and the general boundaries. And in this case, we found a mechanism that seems to be very robust, unfortunately.

Ars Technica: The dream was that social media would help revitalize the public sphere and support the kind of constructive political dialogue that your paper deems “vital to democratic life.” That largely hasn’t happened. What are the primary negative unexpected consequences that have emerged from social media platforms?

Petter Törnberg: First, you have echo chambers or filter bubbles. There’s broad agreement that if you want to have a functioning political conversation, functioning deliberation, you need to do that across the partisan divide. If you’re only having a conversation with people who already agree with each other, that’s not enough. There’s debate on how widespread echo chambers are online, but it is quite established that there are a lot of spaces online that aren’t very constructive because there’s only people from one political side. So that’s one ingredient that you need. You need to have a diversity of opinion, a diversity of perspective.

The second one is that the deliberation needs to be among equals; people need to have more or less the same influence in the conversation. It can’t be completely controlled by a small, elite group of users. This is also something that people have pointed to on social media: It has a tendency of creating these influencers because attention attracts attention. And then you have a breakdown of conversation among equals.

The final one is what I call (based on Chris Bail’s book) the social media prism. The more extreme users tend to get more attention online. This is often discussed in relation to engagement algorithms, which tend to identify the type of content that most upsets us and then boost that content. I refer to it as a “trigger bubble” instead of the filter bubble. They’re trying to trigger us as a way of making us engage more so they can extract our data and keep our attention.

Ars Technica: Your conclusion is that there’s something within the structural dynamics of the network itself that’s to blame—something fundamental to the construction of social networks that makes these extremely difficult problems to solve.

Petter Törnberg: Exactly. It comes from the fact that we’re using these AI models to capture a richer representation of human behavior, which allows us to see something that wouldn’t really be possible using conventional agent-based modeling. There have been previous models looking at the growth of social networks on social media. People choose to retweet or not, and we know that action tends to be very reactive. We tend to be very emotional in that choice. And it tends to be a highly partisan and polarized type of action. You hit retweet when you see someone being angry about something, or doing something horrific, and then you share that. It’s well-known that this leads to toxic, more polarized content spreading more.

But what we find is that it’s not just that this content spreads; it also shapes the network structures that are formed. So there’s feedback between the affective, emotional action of choosing to retweet something and the network structure that emerges. And then in turn, you have a network structure that feeds back into what content you see, resulting in a toxic network. The definition of an online social network is that you have these posting, reposting, and following dynamics. That’s quite fundamental to it. That alone seems to be enough to drive these negative outcomes.

Ars Technica: I was frankly surprised at the ineffectiveness of the various intervention strategies you tested. But it does seem to explain the Bluesky conundrum. Bluesky has no algorithm, for example, yet the same dynamics still seem to emerge. I think Bluesky’s founders genuinely want to avoid those dysfunctional issues, but they might not succeed, based on this paper. Why are such interventions so ineffective? 

Petter Törnberg: We’ve been discussing whether these things are due to the platforms doing evil things with algorithms or whether we as users are choosing that we want a bad environment. What we’re saying is that it doesn’t have to be either of those. This is often the unintended outcomes from interactions based on underlying rules. It’s not necessarily because the platforms are evil; it’s not necessarily because people want to be in toxic, horrible environments. It just follows from the structure that we’re providing.

We tested six different interventions. Google has been trying to make social media less toxic and recently released a newsfeed algorithm based on the content of the text. So that’s one example. We’re also trying to do more subtle interventions because often you can find a certain way of nudging the system so it switches over to healthier dynamics. Some of them have moderate or slightly positive effects on one of the attributes, but then they often have negative effects on another attribute, or they have no impact whatsoever.

I should say also that these are very extreme interventions in the sense that, if you depended on making money on your platform, you probably don’t want to implement them because it probably makes it really boring to use. It’s like showing the least influential users, the least retweeted messages on the platform. Even so, it doesn’t really make a difference in changing the basic outcomes. What we take from that is that the mechanism producing these problematic outcomes is really robust and hard to resolve given the basic structure of these platforms.

Ars Technica: So how might one go about building a successful social network that doesn’t have these problems? 

Petter Törnberg: There are several directions where you could imagine going, but there’s also the constraint of what people will actually use. Think back to the early Internet, like ICQ. ICQ had this feature where you could just connect to a random person. I loved it when I was a kid. I would talk to random people all over the world. I was 12, in the countryside on a small island in Sweden, and I was talking to someone from Arizona, living a different life. I don’t know how successful that would be these days, the Internet having become a lot less innocent than it was.

For instance, we can focus on the question of inequality of attention, a very well-studied and robust feature of these networks. I personally thought we would be able to address it with our interventions, but attention draws attention, and this leads to a power law distribution, where 1 percent [of users] dominates the entire conversation. We know the conditions under which those power laws emerge. This is one of the main outcomes of social network dynamics: extreme inequality of attention.

But in social science, we always teach that everything is a normal distribution. The move from studying the conventional social world to studying the online social world means that you’re moving from these nice normal distributions to these horrible power law distributions. Those are the outcomes of having social networks where the probability of connecting to someone depends on how many previous connections they have. If we want to get rid of that, we probably have to move away from the social network model and have some kind of spatial model or group-based model that makes things a little bit more local, a little bit less globally interconnected.
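The "attention attracts attention" mechanism Törnberg describes is preferential attachment, and a few lines of Python are enough to see the resulting inequality emerge. This is a generic toy model, not the paper's simulation, and the parameters are arbitrary:

```python
import random

random.seed(0)

# Preferential attachment: each newcomer follows an existing user chosen
# with probability proportional to that user's current follower count,
# i.e. "attention attracts attention."
N = 5000
followers = [1, 1]                      # seed network: two users
for _ in range(N - 2):
    # pick an existing user, weighted by follower count
    target = random.choices(range(len(followers)), weights=followers)[0]
    followers[target] += 1              # the newcomer follows them
    followers.append(1)                 # newcomer starts with one follower

followers.sort(reverse=True)
top1_share = sum(followers[: N // 100]) / sum(followers)
print(f"top 1% of users hold {top1_share:.1%} of all follows")
```

Despite every user entering the network on equal terms, the follower distribution comes out heavy-tailed: the top 1 percent end up with many times their proportional share, which is the power-law inequality discussed above.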

Ars Technica: It sounds like you’d want to avoid those big influential nodes that play such a central role in a large, complex global network. 

Petter Törnberg: Exactly. I think that having those global networks and structures fundamentally undermines the possibility of the kind of conversations that political scientists and political theorists traditionally talked about when discussing the public square. They were talking about social interaction in a coffee house or a tea house, or reading groups and so on. People thought the Internet was going to be precisely that. It’s very much not that. The dynamics are fundamentally different because of those structural differences. We shouldn’t expect to be able to get coffee house deliberation when we have a global social network where everyone is connected to everyone. It is difficult to imagine a functional politics building on that.

Ars Technica: I want to come back to your comment on the power law distribution, how 1 percent of people dominate the conversation, because I think that is something that most users routinely forget. The horrible things we see people say on the Internet are not necessarily indicative of the vast majority of people in the world. 

Petter Törnberg: For sure. That is capturing two aspects. The first is the social media prism, where the perspective we get of politics when we see it through the lens of social media is fundamentally different from what politics actually is. It seems much more toxic, much more polarized. People seem a little bit crazier than they really are. It’s a very well-documented aspect of the rise of polarization: People have a false perception of the other side. Most people have fairly reasonable and fairly similar opinions. The actual polarization is lower than the perceived polarization. And that arguably is a result of social media, how it misrepresents politics.

And then we see this very small group of users that become very influential who often become highly visible as a result of being a little bit crazy and outrageous. Social media creates an incentive structure that is really central to reshaping not just how we see politics but also what politics is, which politicians become powerful and influential, because it is controlling the distribution of what is arguably the most valuable form of capital of our era: attention. Especially for politicians, being able to control attention is the most important thing. And since social media creates the conditions of who gets attention or not, it creates an incentive structure where certain personalities work better in a way that’s just fundamentally different from how it was in previous eras.

Ars Technica: There are those who have sworn off social media, but it seems like simply not participating isn’t really a solution, either.

Petter Törnberg: No. First, even if you only read, say, The New York Times, that newspaper is still reshaped by what works on social media—the social media logic. I had a student who did a little project this last year showing that as social media became more influential, the headlines of The New York Times became more clickbaity and adapted to the style of what worked on social media. So conventional media and our very culture are being transformed.

But more than that, as I was just saying, it’s the type of politicians, it’s the type of people who are empowered—it’s the entire culture. Those are the things that are being transformed by the power of the incentive structures of social media. It’s not like, “These are things that are happening on social media, and this is the rest of the world.” It’s all entangled, and somehow social media has become the cultural engine that is shaping our politics and society in very fundamental ways. Unfortunately.

Ars Technica: I usually like to say that technological tools are fundamentally neutral and can be used for good or ill, but this time I’m not so sure. Is there any hope of finding a way to take the toxic and turn it into a net positive?

Petter Törnberg: What I would say to that is that we are at a crisis point with the rise of LLMs and AI. I have a hard time seeing the contemporary model of social media continuing to exist under the weight of LLMs and their capacity to mass-produce false information or information that optimizes these social network dynamics. We already see a lot of actors—based on the monetization of platforms like X—that are using AI to produce content that just seeks to maximize attention. As AI models become more powerful, that kind of content—misinformation, often highly polarized—is going to take over. I have a hard time seeing the conventional social media models surviving that.

We’ve already seen the process of people retreating in part to credible brands and seeking to have gatekeepers. Young people, especially, are going into WhatsApp groups and other closed communities. Of course, there’s misinformation from social media leaking into those chats also. But these kinds of crisis points at least have the hope that we’ll see a changing situation. I wouldn’t bet that it’s a situation for the better. You wanted me to sound positive, so I tried my best. Maybe it’s actually “good riddance.”

Ars Technica: So let’s just blow up all the social media networks. It still won’t be better, but at least we’ll have different problems.

Petter Törnberg: Exactly. We’ll find a new ditch.

DOI: arXiv, 2025. 10.48550/arXiv.2508.03385  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Ten cool science stories we almost missed


Bronze Age combat, moral philosophy and Reddit’s AITA, Mondrian’s fractal tree, and seven other fascinating papers.

There is rarely time to write about every cool science paper that comes our way; many worthy candidates sadly fall through the cracks over the course of the year. But as 2024 comes to a close, we’ve gathered ten of our favorite such papers at the intersection of science and culture as a special treat, covering a broad range of topics: from reenacting Bronze Age spear combat and applying network theory to the music of Johann Sebastian Bach, to Spider-Man inspired web-slinging tech and a mathematical connection between a turbulent phase transition and your morning cup of coffee. Enjoy!

Reenacting Bronze Age spear combat


An experiment with experienced fighters who spar freely using different styles. Credit: Valerio Gentile/CC BY

The European Bronze Age saw the rise of institutionalized warfare, evidenced by the many spearheads and similar weaponry archaeologists have unearthed. But how might these artifacts be used in actual combat? Dutch researchers decided to find out by constructing replicas of Bronze Age shields and spears and using them in realistic combat scenarios. They described their findings in an October paper published in the Journal of Archaeological Science.

There have been a couple of prior experimental studies on bronze spears, but per Valerio Gentile (now at the University of Gottingen) and coauthors, practical research to date has been quite narrow in scope, focusing on throwing weapons against static shields. Coauthors C.J. van Dijk of the National Military Museum in the Netherlands and independent researcher O. Ter Mors each had more than a decade of experience teaching traditional martial arts, specializing in medieval polearms and one-handed weapons. So they were ideal candidates for testing the replica spears and shields.

Of course, there is no direct information on prehistoric fighting styles, so van Dijk and Ter Mors relied on the basic biomechanics of combat movements with similar weapons detailed in historic manuals. They ran three versions of the experiment: one focused on engagement and controlled collisions, another on delivering wounding body blows, and the third on free sparring. They then studied wear marks left on the spearheads and found they matched the marks found on similar genuine weapons excavated from Bronze Age sites. They also gleaned helpful clues to the skills required to use such weapons.

DOI: Journal of Archaeological Science, 2024. 10.1016/j.jas.2024.106044 (About DOIs).

Physics of Ned Kahn’s kinetic sculptures


Shimmer Wall, The Franklin Institute, Philadelphia, Pennsylvania. Credit: Ned Kahn

Environmental artist and sculptor Ned Kahn is famous for his kinematic building facades, inspired by his own background in science. An exterior wall on the Children’s Museum of Pittsburgh, for instance, consists of hundreds of flaps that move in response to wind, creating distinctive visual patterns. Kahn used the same method to create his Shimmer Wall at Philadelphia’s Franklin Institute, as well as several other similar projects.

Physicists at Sorbonne Universite in Paris have studied videos of Kahn’s kinetic facades and conducted experiments to measure the underlying physical mechanisms, outlined in a November paper published in the journal Physical Review Fluids. The authors analyzed 18 YouTube videos taken of six of Kahn’s kinematic facades, working with Kahn and building management to get the dimensions of the moving plates, scaling up from the video footage to get further information on spatial dimensions.

They also conducted their own wind tunnel experiments, using strings of pendulum plates. Their measurements confirmed that the kinetic patterns were propagating waves to create the flickering visual effects. The plates’ movement is driven primarily by their natural resonant frequencies at low speeds, and by pressure fluctuations from the wind at higher speeds.

DOI: Physical Review Fluids, 2024. 10.1103/PhysRevFluids.9.114604 (About DOIs).

How brewing coffee connects to turbulence


Trajectories in time traced out by turbulent puffs as they move along a simulated pipe and in experiments, with blue regions indicating puff “traffic jams.” Credit: Grégoire Lemoult et al., 2024

Physicists have been studying turbulence for centuries, particularly the transitional period where flows shift from predictably smooth (laminar flow) to highly turbulent. That transition is marked by localized turbulent patches known as “puffs,” which often form in fluids flowing through a pipe or channel. In an October paper published in the journal Nature Physics, physicists used statistical mechanics to reveal an unexpected connection between the process of brewing coffee and the behavior of those puffs.

Traditional mathematical models of percolation date back to the 1940s. Directed percolation is when the flow occurs in a specific direction, akin to how water moves through freshly ground coffee beans, flowing down in the direction of gravity. There’s a sweet spot for the perfect cuppa, where the rate of flow is sufficiently slow to absorb most of the flavor from the beans, but also fast enough not to back up in the filter. That sweet spot in your coffee brewing process corresponds to the aforementioned laminar-turbulent transition in pipes.

Physicist Nigel Goldenfeld of the University of California, San Diego, and his coauthors used pressure sensors to monitor the formation of puffs in a pipe, focusing on how puff-to-puff interactions influenced each other’s motion. Next, they tried to mathematically model the relevant phase transitions to predict puff behavior. They found that the puffs behave much like cars moving on a freeway during rush hour: they are prone to traffic jams—i.e., when a turbulent patch matches the width of the pipe, causing other puffs to build up behind it—that form and dissipate on their own. And they tend to “melt” at the laminar-turbulent transition point.
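For readers unfamiliar with directed percolation, a minimal lattice toy illustrates the kind of phase transition in question. This is not the authors' model or analysis; the lattice, probabilities, and function names are all illustrative:

```python
import random

random.seed(3)

# Minimal 1+1D directed-percolation toy. Each "wet" site wets each of its
# two downstream neighbors independently with probability p, like water
# trickling down through coffee grounds. Below a critical p the flow dies
# out; above it, it persists.
def survival_fraction(p, width=200, depth=200, trials=20):
    """Fraction of runs in which wet sites survive `depth` steps."""
    alive = 0
    for _ in range(trials):
        row = [True] * width            # start fully wet
        for _ in range(depth):
            new = [False] * width
            for i, wet in enumerate(row):
                if wet:
                    for j in (i, (i + 1) % width):  # two downstream bonds
                        if random.random() < p:
                            new[j] = True
            row = new
            if not any(row):            # flow has died out
                break
        alive += any(row)
    return alive / trials

low = survival_fraction(0.4)    # subcritical: flow dies out
high = survival_fraction(0.8)   # supercritical: flow persists
print(f"survival at p=0.4: {low:.2f}, at p=0.8: {high:.2f}")
```

The abrupt jump in survival as p crosses its critical value is the directed-percolation transition; the laminar-turbulent transition in pipes belongs to the same class.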

DOI: Nature Physics, 2024. 10.1038/s41567-024-02513-0 (About DOIs).

Network theory and Bach’s music

In a network representation of music, notes are represented by nodes, and transitions between notes are represented by directed edges connecting the nodes. Credit: S. Kulkarni et al., 2024

When you listen to music, does your ability to remember or anticipate the piece tell you anything about its structure? Physicists at the University of Pennsylvania developed a model based on network theory to do just that, describing their work in a February paper published in the journal Physical Review Research. Johann Sebastian Bach’s works were an ideal choice given their highly mathematical structure; the composer was also so prolific, across so many different kinds of musical compositions—preludes, fugues, chorales, toccatas, concertos, suites, and cantatas—as to allow for useful comparisons.

First, the authors built a simple “true” network for each composition, in which individual notes served as “nodes” and the transitions from note to note served as “edges” connecting them. Then they calculated the amount of information in each network. They found it was possible to tell the difference between compositional forms based on their information content (entropy). The more complex toccatas and fugues had the highest entropy, while simpler chorales had the lowest.
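As a toy illustration of the "true" network construction, a melody can be turned into a transition-count graph and its Shannon entropy computed. This is a simplified stand-in, assuming a plain transition network and a per-node entropy average rather than the paper's actual entropy measure, with an invented melody:

```python
import math
from collections import Counter

# A melody becomes a directed network: notes are nodes, observed
# note-to-note transitions are weighted edges. The Shannon entropy of each
# node's outgoing-transition distribution measures how unpredictable the
# piece is from that note.
melody = ["C", "D", "E", "C", "D", "E", "F", "E", "D", "C"]
transitions = Counter(zip(melody, melody[1:]))   # (src, dst) -> count

out_totals = Counter()                           # outgoing counts per node
for (src, _), n in transitions.items():
    out_totals[src] += n

def node_entropy(src):
    """Entropy (bits) of the transition distribution out of `src`."""
    probs = [n / out_totals[src] for (s, _), n in transitions.items() if s == src]
    return -sum(p * math.log2(p) for p in probs)

network_entropy = sum(node_entropy(s) for s in out_totals) / len(out_totals)
print(f"mean per-node transition entropy: {network_entropy:.2f} bits")
```

A note that always resolves the same way (here, C always moves to D) contributes zero entropy, while notes with many possible continuations raise it; complex forms like fugues pack in more such branching than simple chorales.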

Next, the team wanted to quantify how effectively this information was communicated to the listener, a task made more difficult by the innate subjectivity of human perception. They developed a fuzzier “inferred” network model for this purpose, capturing an essential aspect of our perception: we find a balance between accuracy and cost, simplifying some details so as to make it easier for our brains to process incoming information like music.

The results: There were fewer differences between the true and inferred networks for Bach’s compositions than for randomly generated networks, suggesting that clustering and the frequent repetition of transitions (represented by thicker edges) in Bach networks were key to effectively communicating information to the listener. The next step is to build a multi-layered network model that incorporates elements like rhythm, timbre, chords, or counterpoint (a Bach specialty).

DOI: Physical Review Research, 2024. 10.1103/PhysRevResearch.6.013136 (About DOIs).

The philosophy of Reddit’s AITA

Count me among the many people practically addicted to Reddit’s “Am I the Asshole” (AITA) forum. It’s such a fascinating window into the intricacies of how flawed human beings navigate different relationships, whether personal or professional. That’s also what makes it a fantastic source of illustrative, commonplace dilemmas of moral decision-making for philosophers like Daniel Yudkin of the University of Pennsylvania. Relational context matters, as Yudkin and several co-authors ably demonstrated in a PsyArXiv preprint earlier this year.

For their study, Yudkin et al. compiled a dataset of nearly 370,000 AITA posts, along with over 11 million comments, posted between 2018 and 2021. They used machine learning to analyze the language used to sort all those posts into different categories. They relied on an existing taxonomy identifying six basic areas of moral concern: fairness/proportionality, feelings, harm/offense, honesty, relational obligation, and social norms.

Yudkin et al. identified 29 of the most common dilemmas in the AITA dataset and grouped them according to moral theme. Two of the most common were relational transgression and relational omission (failure to do what was expected), followed by behavioral over-reaction and unintended harm. Cheating and deliberate misrepresentation/dishonesty were the moral dilemmas rated most negatively in the dataset—even more so than intentional harm. Being judgmental was also evaluated very negatively, as it was often perceived as being self-righteous or hypocritical. The least negatively evaluated dilemmas were relational omissions.

As for relational context, cheating and broken promise dilemmas typically involved romantic partners like boyfriends rather than one’s mother, for example, while mother-related dilemmas more frequently fell under relational omission. Essentially, “people tend to disappoint their mothers but be disappointed by their boyfriends,” the authors wrote. Less close relationships, by contrast, tend to be governed by “norms of politeness and procedural fairness.” Hence, Yudkin et al. prefer to think of morality “less as a set of abstract principles and more as a ‘relational toolkit,’ guiding and constraining behavior according to the demands of the social situation.”

DOI: PsyArXiv, 2024. 10.31234/osf.io/5pcew (About DOIs).

Fractal scaling of trees in art

De grijze boom (Gray tree) by Piet Mondrian, 1911. Credit: Public domain

Leonardo da Vinci famously invented a so-called “rule of trees” as a guide to realistically depicting trees in artistic representations according to their geometric proportions. In essence, if you took all the branches of a given tree, folded them up and compressed them into something resembling a trunk, that trunk would have the same thickness from top to bottom. That rule in turn implies a fractal branching pattern, with a scaling exponent of about 2 describing the proportions between the diameters of nearby boughs and the number of boughs with a given diameter.

According to the authors of a preprint posted to the physics arXiv in February, however, recent biological research suggests that the rule of trees actually follows Murray’s Law, with a higher scaling exponent of 3. Their analysis of 16th-century Islamic architecture, Japanese paintings from the Edo period, and 20th-century European art showed fractal scaling between 1.5 and 2.5. When they analyzed an abstract tree painting by Piet Mondrian, however, they found it exhibited fractal scaling of 3, even though the painting predates Murray’s Law and does not feature explicit branching.
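The two rules share one form: at a branch point, the parent diameter raised to some exponent alpha roughly equals the sum of the child diameters raised to the same exponent, with alpha of about 2 for da Vinci's rule and 3 for Murray's Law. A minimal sketch of recovering that exponent from diameter measurements (the numbers here are invented, not from the paper):

```python
# Find the branching exponent alpha that best satisfies
# d_parent**alpha ≈ sum(d_child**alpha) at a branch point.
# Da Vinci's rule corresponds to alpha ≈ 2; Murray's Law to alpha = 3.
def branching_error(alpha, parent, children):
    return abs(parent**alpha - sum(c**alpha for c in children))

def best_exponent(parent, children, alphas):
    return min(alphas, key=lambda a: branching_error(a, parent, children))

# Invented measurement: a trunk splitting into two equal boughs with
# d_child = d_parent / 2**(1/3), for which Murray's Law holds exactly.
parent = 10.0
children = [parent / 2 ** (1 / 3)] * 2
alphas = [a / 10 for a in range(10, 41)]  # scan 1.0 to 4.0 in steps of 0.1
print(best_exponent(parent, children, alphas))  # → 3.0
```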

The findings intrigued physicist Richard Taylor of the University of Oregon, whose work over the last 20 years includes analyzing fractal patterns in the paintings of Jackson Pollock. “In particular, I thought the extension to Mondrian’s ‘trees’ was impressive,” he told Ars earlier this year. “I like that it establishes a connection between abstract and representational forms. It makes me wonder what would happen if the same idea were to be applied to Pollock’s poured branchings.”

Taylor himself published a 2022 paper about climate change and how nature’s stress-reducing fractals might disappear in the future. “If we are pessimistic for a moment, and assume that climate change will inevitably impact nature’s fractals, then our only future source of fractal aesthetics will be through art, design and architecture,” he said. “This brings a very practical element to studies like [this].”

DOI: arXiv, 2024. 10.48550/arXiv.2402.13520 (About DOIs).

IDing George Washington’s descendants

A DNA study identified descendants of George Washington from unmarked remains. Credit: Public domain

DNA profiling is an incredibly useful tool in forensics, but the most common method—short tandem repeat (STR) analysis—typically doesn’t work when remains are badly degraded, particularly if they have been preserved with embalming methods that use formaldehyde. This includes the remains of US service members who died in past conflicts such as World War II, Korea, Vietnam, and the Cold War. That’s why scientists at the Armed Forces Medical Examiner System’s identification lab at Dover Air Force Base have developed new DNA sequencing technologies.

They used those methods to identify the previously unmarked remains of descendants of George Washington, according to a March paper published in the journal iScience. The team tested three sets of remains and compared the results with those of a known living descendant, using methods for assessing paternal and maternal relationships, as well as a new method for next-generation sequencing data involving some 95,000 single-nucleotide polymorphisms (SNPs) to better predict more distant ancestry. The combined data confirmed that the remains belonged to Washington’s descendants, and the new method should help do the same for the remains of as-yet-unidentified service members.
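The intuition behind SNP-based kinship is that closer relatives share more alleles across many marker sites. A toy identity-by-state comparison gives the flavor (the study's actual 95,000-SNP method is far more sophisticated, and these genotypes are invented):

```python
# Toy identity-by-state (IBS) similarity over SNP genotypes coded
# 0/1/2 (count of the alternate allele at each site). Real kinship
# estimators are far more sophisticated than this sketch.
def ibs_similarity(g1, g2):
    """Average allele sharing per SNP, scaled to [0, 1]."""
    shared = sum(2 - abs(a - b) for a, b in zip(g1, g2))
    return shared / (2 * len(g1))

sample_a = [0, 1, 2, 1, 0, 2]
sample_b = [0, 1, 2, 2, 0, 2]  # differs at one site by one allele
print(ibs_similarity(sample_a, sample_b))  # high similarity, close to 1
```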

In related news, in July, forensic scientists successfully used descendant DNA to identify a victim of the 1921 Tulsa Race Massacre in Tulsa, Oklahoma, buried in a mass grave containing more than a hundred victims. C.L. Daniel was a World War I veteran, still in his 20s when he was killed. More than 120 such graves have been found since 2020, with DNA collected from around 30 sets of remains, but this is the first time those remains have been directly linked to the massacre. There are at least 17 other victims in the grave where Daniel’s remains were found.

DOI: iScience, 2024. 10.1016/j.isci.2024.109353 (About DOIs).

Spidey-inspired web-slinging tech

A stream of liquid silk quickly turns into a strong fiber that sticks to and lifts objects. Credit: Marco Lo Presti et al., 2024

Over the years, researchers in Tufts University’s Silklab have come up with all kinds of ingenious bio-inspired uses for the sticky fibers found in silk moth cocoons: adhesive glues, printable sensors, edible coatings, and light-collecting materials for solar cells, to name a few. Their latest innovation is a web-slinging technology inspired by Spider-Man’s ability to shoot webbing from his wrists, described in an October paper published in the journal Advanced Functional Materials.

Coauthor Marco Lo Presti was cleaning glassware with acetone in the lab one day when he noticed something that looked a lot like webbing forming on the bottom of a glass. He realized this could be the key to better replicating spider threads for the purpose of shooting the fibers from a device like Spider-Man—something actual spiders don’t do. (They spin the silk, find a surface, and draw out lines of silk to build webs.)

The team boiled silk moth cocoons in a solution to break them down into a protein called fibroin. The fibroin was then extruded through fine-bore needles into a stream. Spiking the fibroin solution with just the right additives causes it to solidify into fiber once it comes into contact with air. For the web-slinging technology, they added dopamine to the fibroin solution and then shot it through a needle in which the solution was surrounded by a layer of acetone, which triggered solidification.

The acetone quickly evaporated, leaving just the webbing attached to whatever object it happened to hit. The team tested the resulting fibers and found they could lift a steel bolt, a tube floating on water, a partially buried scalpel, and a wooden block—all from as far away as 12 centimeters. Sure, natural spider silk is still about 1,000 times stronger than these fibers, but this is still a significant step forward that paves the way for novel technological applications.

DOI: Advanced Functional Materials, 2024. 10.1002/adfm.202414219

Solving a mystery of a 12th century supernova

Pa 30 is the supernova remnant of SN 1181. Credit: unWISE (D. Lang)/CC BY-SA 4.0

In 1181, astronomers in China and Japan recorded the appearance of a “guest star” that shone as bright as Saturn and remained visible in the sky for six months. We now know it was a supernova (SN 1181), one of only five such events known to have occurred in our Milky Way. Astronomers have now gotten a closer look at the remnant of that supernova and determined the nature of the strange filaments, resembling dandelion petals, that emanate from a “zombie star” at its center, according to an October paper published in The Astrophysical Journal Letters.

The Chinese and Japanese astronomers recorded only an approximate location for the unusual sighting, and for centuries no one managed to make a confirmed identification of a likely remnant. Then, in 2021, astronomers measured the speed of expansion of a nebula known as Pa 30, which enabled them to determine its age: around 1,000 years, roughly coinciding with the recorded appearance of SN 1181. Pa 30 is an unusual remnant because of its zombie star—most likely itself a remnant of the original white dwarf that produced the supernova.

This latest study relied on data collected by Caltech’s Keck Cosmic Web Imager, a spectrograph at the Keck Observatory in Hawaii. One of the instrument’s unique features is that it can measure the motion of matter in a supernova and use that data to create something akin to a 3D movie of the explosion. The authors were able to create such a 3D map of Pa 30 and calculated that the zombie star’s filaments have ballistic motion, moving at approximately 1,000 kilometers per second.
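The dating logic is simple kinematics: if the motion is ballistic (constant velocity), the age of the remnant is just how far the material has traveled divided by how fast it is moving. A rough sketch of that arithmetic (the filament extent here is an invented illustrative value, not a measurement from the paper):

```python
# With ballistic (constant-velocity) motion, the explosion date follows
# from extent / velocity. The extent value below is invented for
# illustration, not taken from the paper.
KM_PER_PC = 3.086e13       # kilometers per parsec
SECONDS_PER_YEAR = 3.156e7

def explosion_year(obs_year, extent_pc, velocity_km_s):
    travel_s = extent_pc * KM_PER_PC / velocity_km_s
    return obs_year - travel_s / SECONDS_PER_YEAR

# Filaments moving at ~1,000 km/s that now extend ~0.86 pc point back
# to an explosion within a year or so of the recorded 1181 sighting.
print(round(explosion_year(2023, 0.86, 1000.0)))
```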

That velocity has not changed since the explosion, enabling the authors to date the event almost exactly to 1181. The findings also raised fresh questions: the ejected filament material is asymmetrical, which is unusual for a supernova remnant. The authors suggest the asymmetry may originate with the initial explosion.

There’s also a weird inner gap around the zombie star. Both will be the focus of further research.

DOI: Astrophysical Journal Letters, 2024. 10.3847/2041-8213/ad713b (About DOIs).

Reviving a “lost” 16th century score

Fragment of music from The Aberdeen Breviary: Volume 1. Credit: National Library of Scotland/CC BY 4.0

Never underestimate the importance of marginalia in old manuscripts. Scholars from the University of Edinburgh and KU Leuven in Belgium can attest to that, having discovered a fragment of “lost” music from 16th-century pre-Reformation Scotland in a collection of worship texts. The team was even able to reconstruct the fragment and record it to get a sense of what music sounded like from that period in northeast Scotland, as detailed in a December paper published in the journal Music and Letters.

King James IV of Scotland commissioned the printing of several copies of The Aberdeen Breviary—a collection of prayers, hymns, readings, and psalms for daily worship—so that his subjects wouldn’t have to import such texts from England or Europe. One 1510 copy, known as the “Glamis copy,” is currently housed in the National Library of Scotland in Edinburgh. It was while examining handwritten annotations in this copy that the authors discovered the musical fragment on a page bound into the book—so it hadn’t been slipped between the pages at a later date.

The team figured out the piece was polyphonic, and then realized it was the tenor part from a harmonization for three or four voices of the hymn “Cultor Dei,” typically sung at night during Lent. (You can listen to a recording of the reconstructed composition here.) The authors also traced some of the history of this copy of The Aberdeen Breviary, including its use at one point by a rural chaplain at Aberdeen Cathedral, before a Scottish Catholic acquired it as a family heirloom.

“Identifying a piece of music is a real ‘Eureka’ moment for musicologists,” said coauthor David Coney of Edinburgh College of Art. “Better still, the fact that our tenor part is a harmony to a well-known melody means we can reconstruct the other missing parts. As a result, from just one line of music scrawled on a blank page, we can hear a hymn that had lain silent for nearly five centuries, a small but precious artifact of Scotland’s musical and religious traditions.”

DOI: Music and Letters, 2024. 10.1093/ml/gcae076 (About DOIs).

Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
