Science

SpaceX’s latest Dragon mission will breathe more fire at the space station

“Our capsule’s engines are not pointed in the right direction for optimum boost,” said Sarah Walker, SpaceX’s director of Dragon mission management. “So, this trunk module has engines pointed in the right direction to maximize efficiency of propellant usage.”

When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton complex. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed, according to Walker.
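For a rough sense of scale, here is a back-of-the-envelope sketch (our arithmetic, using the figures quoted above plus an assumed Draco specific impulse of roughly 300 seconds, which is not a SpaceX-published value for this kit) of the total impulse and propellant such a boost implies:

```python
# Back-of-the-envelope reboost arithmetic. Station mass and delta-v come
# from the quotes above; the Draco specific impulse is an assumed
# ballpark figure for illustration only.
ISS_MASS_KG = 450_000  # "massive 450-ton complex"
DELTA_V_MS = 9.0       # "about 20 mph, or 9 meters per second"
ISP_S = 300.0          # assumed Draco specific impulse, seconds
G0 = 9.81              # standard gravity, m/s^2

impulse_ns = ISS_MASS_KG * DELTA_V_MS      # total impulse, newton-seconds
propellant_kg = impulse_ns / (ISP_S * G0)  # propellant burned at that Isp

print(f"Total impulse: {impulse_ns / 1e6:.2f} MN*s")    # ~4.05 MN*s
print(f"Propellant required: {propellant_kg:,.0f} kg")  # ~1,400 kg
```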

Spetch said that’s roughly equivalent to the total reboost impulse provided by one-and-a-half Russian Progress cargo vehicles, or about one-fourth to one-third of the total orbit maintenance the ISS needs in a year.

“The boost kit will help sustain the orbiting lab’s altitude, starting in September, with a series of burns planned periodically throughout the fall of 2025,” Spetch said.

After a few months docked at the ISS, the Dragon cargo capsule will depart and head for a parachute-assisted splashdown in the Pacific Ocean off the coast of California. SpaceX will recover the pressurized capsule to fly again, while the trunk containing the reboost kit will jettison and burn up in the atmosphere.

SpaceX’s Dragon spacecraft approaches the International Space Station for docking at 7:05 am EDT (11:05 UTC) on Monday. Credit: NASA TV/Ars Technica

While this mission is SpaceX’s 33rd cargo flight to the ISS under the auspices of NASA’s multibillion-dollar Commercial Resupply Services contract, it’s also SpaceX’s 50th overall Dragon mission to the outpost. This tally includes 17 flights of the human-rated Crew Dragon.

“With CRS-33, we’ll mark our 50th voyage to ISS,” Walker said. “Just incredible. Together, these missions have [carried] well over 300,000 pounds of cargo and supplies to the orbiting lab and well over 1,000 science and research projects that are not only helping us to understand how to live and work effectively in space… but also directly contributing to critical research that serves our lives here on Earth.”

Future Dragon trunks will be able to accommodate a reboost kit or unpressurized science payloads, depending on NASA’s needs at the space station.

The design of the Dragon reboost kit is a smaller-scale version of what SpaceX will build for a much larger Dragon trunk under an $843 million contract signed with NASA last year for the US Deorbit Vehicle. This souped-up Dragon will dock with the ISS and steer it back into the atmosphere after the lab’s decommissioning in the early 2030s. The deorbit vehicle will have 46 Draco thrusters—16 to control the craft’s orientation and 30 in the trunk to provide the impulse needed to drop the station out of orbit.

Time is running out for SpaceX to make a splash with second-gen Starship


SpaceX is gearing up for another Starship launch after three straight disappointing test flights.

SpaceX’s 10th Starship rocket awaits liftoff. Credit: Stephen Clark/Ars Technica

STARBASE, Texas—A beehive of aerospace technicians, construction workers, and spaceflight fans descended on South Texas this weekend in advance of the next test flight of SpaceX’s gigantic Starship rocket, the largest vehicle of its kind ever built.

Towering 404 feet (123.1 meters) tall, the rocket was supposed to lift off during a one-hour launch window beginning at 6:30 pm CDT (7:30 pm EDT; 23:30 UTC) Sunday. But SpaceX called off the launch attempt about an hour before liftoff to investigate a ground system issue at Starbase, located a few miles north of the US-Mexico border.

SpaceX didn’t immediately confirm when it might try again to launch Starship, but it could happen as soon as Monday evening at the same time.

It will take about 66 minutes for the rocket to travel from the launch pad in Texas to a splashdown zone in the Indian Ocean northwest of Australia. You can watch the test flight live on SpaceX’s official website. We’ve also embedded a livestream from Spaceflight Now and LabPadre below.

This will be the 10th full-scale test flight of Starship and its Super Heavy booster stage. It’s the fourth flight of an upgraded version of Starship conceived as a stepping stone to a more reliable, heavier-duty version of the rocket designed to carry up to 150 metric tons, or some 330,000 pounds, of cargo to pretty much anywhere in the inner part of our Solar System.

But this iteration of Starship, known as Block 2 or Version 2, has been anything but reliable. After reeling off a series of increasingly successful flights last year with the first-generation Starship and Super Heavy booster, SpaceX has encountered repeated setbacks since debuting Starship Version 2 in January.

Now, there are just two Starship Version 2s left to fly, including the vehicle poised for launch this week. Then, SpaceX will move on to Version 3, the design intended to go all the way to low-Earth orbit, where it can be refueled for longer expeditions into deep space.

A closer look at the top of SpaceX’s Starship rocket, tail number Ship 37, showing some of the different configurations of heat shield tiles SpaceX wants to test on this flight. Credit: Stephen Clark/Ars Technica

Starship’s promised cargo capacity is unparalleled in the history of rocketry. The privately developed rocket’s enormous size, coupled with SpaceX’s plan to make it fully reusable, could enable cargo and human missions to the Moon and Mars. SpaceX’s most conspicuous contract for Starship is with NASA, which plans to use a version of the ship as a human-rated Moon lander for the agency’s Artemis program. With this contract, Starship is central to the US government’s plans to try to beat China back to the Moon.

Closer to home, SpaceX intends to use Starship to haul massive loads of more powerful Starlink Internet satellites into low-Earth orbit. The US military is interested in using Starship for a range of national security missions, some of which could scarcely be imagined just a few years ago. SpaceX wants its factory to churn out a Starship rocket every day, approximately the same rate Boeing builds its workhorse 737 passenger jets.

Starship, of course, is immeasurably more complex than an airliner, and it sees temperature extremes, aerodynamic loads, and vibrations that would destroy a commercial airplane.

For any of this to become reality, SpaceX needs to begin ticking off a lengthy to-do list of technical milestones. The interim objectives include things like catching and reusing Starships and in-orbit ship-to-ship refueling, with a final goal of long-duration spaceflight to reach the Moon and stay there for weeks, months, or years. For a time late last year, it appeared as if SpaceX might be on track to reach at least the first two of these milestones by now.

The 404-foot-tall (123-meter) Starship rocket and Super Heavy booster stand on SpaceX’s launch pad. In the foreground, there are empty loading docks where tanker trucks deliver propellants and other gases to the launch site. Credit: Stephen Clark/Ars Technica

Instead, SpaceX’s schedule for catching and reusing Starships, and refueling ships in orbit, has slipped well into next year. A Moon landing is probably at least several years away. And a touchdown on Mars? Maybe in the 2030s. Before Starship can sniff those milestones, engineers must get the rocket to survive from liftoff through splashdown. This would confirm that recent changes made to the ship’s heat shield work as expected.

Three test flights attempting to do just this ended prematurely in January, March, and May. These failures prevented SpaceX from gathering data on several different tile designs, including insulators made of ceramic and metallic materials, and a tile with “active cooling” to fortify the craft as it reenters the atmosphere.

The heat shield is supposed to protect the rocket’s stainless steel skin from temperatures reaching 2,600° Fahrenheit (1,430° Celsius). During last year’s test flights, it worked well enough for Starship to guide itself to an on-target controlled splashdown in the Indian Ocean, halfway around the world from SpaceX’s launch site in Starbase, Texas.

But the ship lost some of its tiles during each flight last year, causing damage to the ship’s underlying structure. While this wasn’t bad enough to prevent the vehicle from reaching the ocean intact, it would cause difficulties in refurbishing the rocket for another flight. Eventually, SpaceX wants to catch Starships returning from space with giant robotic arms back at the launch pad. The vision, according to SpaceX founder and CEO Elon Musk, is to recover the ship, quickly mount it on another booster, refuel it, and launch it again.

To accomplish this, the ship must return from space with its heat shield in pristine condition. The evidence from last year’s test flights showed engineers had a long way to go for that to happen.

Visitors survey the landscape at Starbase, Texas, where industry and nature collide. Credit: Stephen Clark/Ars Technica

The Starship setbacks this year have been caused by problems in the ship’s propulsion and fuel systems. Another Starship exploded on a test stand in June at SpaceX’s sprawling rocket development facility in South Texas. SpaceX engineers identified different causes for each of the failures. You can read about them in our previous story.

Apart from testing the heat shield, the goals for this week’s Starship flight include testing an engine-out capability on the Super Heavy booster. Engineers will intentionally disable one of the booster’s Raptor engines used to slow down for landing, and instead use another Raptor engine from the rocket’s middle ring. At liftoff, 33 methane-fueled Raptor engines will power the Super Heavy booster off the pad.

SpaceX won’t try to catch the booster back at the launch pad this time, as it did on three occasions late last year and earlier this year. The booster catches have been one of the bright spots for the Starship program as progress on the rocket’s upper stage floundered. SpaceX reused a previously flown Super Heavy booster for the first time on the most recent Starship launch in May.

The booster landing experiment on this week’s flight will happen a few minutes after launch over the Gulf of Mexico east of the Texas coastline. Meanwhile, six Raptor engines will fire until approximately T+9 minutes to accelerate the ship, or upper stage, into space.

The ship is programmed to release eight Starlink satellite simulators from its payload bay in a test of the craft’s payload deployment mechanism. That will be followed by a brief restart of one of the ship’s Raptor engines to adjust its trajectory for reentry, set to begin around 47 minutes into the mission.

If Starship makes it that far, that will be when engineers finally get a taste of the heat shield data they were hungry for at the start of the year.

This story was updated at 8:30 pm EDT after SpaceX scrubbed Sunday’s launch attempt.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.

An inner-speech decoder reveals some mental privacy issues

But it struggled with more complex phrases.

Pushing the frontier

Once the mental privacy safeguard was in place, the team started testing their inner speech system, beginning with cued words. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. Performance varied: it reached 86 percent accuracy with the best-performing patient on a limited vocabulary of 50 words but dropped to 74 percent when the vocabulary was expanded to 125,000 words.

But when the team moved on to testing if the prosthesis could decode unstructured inner speech, the limitations of the BCI became quite apparent.

The first unstructured inner speech test involved watching arrows pointing up, right, or left in a sequence on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that the patients would repeat sequences like “up, right, up” in their heads to memorize them—the goal was to see if the prosthesis would catch it. It kind of did, but the performance was just above chance level.

Finally, Krasa and his colleagues tried decoding more complex phrases without explicit cues. They asked the participants to think of the name of their favorite food or recall their favorite quote from a movie. “This didn’t work,” Krasa says. “What came out of the decoder was kind of gibberish.”

In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. “We didn’t think this would be possible, but we did it and that’s exciting! The error rates were too high, though, for someone to use it regularly,” Krasa says. He suggested the key limitation might be in hardware—the number of electrodes implanted in the brain and the precision with which we can record the signal from the neurons. Inner speech representations might also be stronger in other brain regions than they are in the motor cortex.

Krasa’s team is currently involved in two projects that stemmed from the inner speech neural prosthesis. “The first is asking the question [of] how much faster an inner speech BCI would be compared to an attempted speech alternative,” Krasa says. The second one is looking at people with a condition called aphasia, where people have motor control of their mouths but are unable to produce words. “We want to assess if inner speech decoding would help them,” Krasa adds.

Cell, 2025. DOI: 10.1016/j.cell.2025.06.015

Google says it dropped the energy cost of AI queries by 33x in one year

To come up with typical numbers, the team that did the analysis tracked requests and the hardware that served them over a 24-hour period, as well as the idle time for that hardware. This gives them an energy-per-request estimate, which differs based on the model being used. For each day, they identify the median prompt and use that to calculate the environmental impact.

Going down

Using those estimates, they find that the impact of an individual text request is pretty small. “We estimate the median Gemini Apps text prompt uses 0.24 watt-hours of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water,” they conclude. To put that in context, they estimate that the energy use is similar to about nine seconds of TV viewing.
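To make the arithmetic concrete, here is a minimal sketch of how an amortized energy-per-request figure like this can be computed; the fleet numbers below are hypothetical stand-ins, not Google’s actual data, and the TV comparison follows from simple unit conversion:

```python
# Illustrative sketch of an amortized energy-per-request estimate.
# All fleet figures below are hypothetical, not Google's actual data.
active_energy_wh = 2.0e9    # assumed daily energy spent actively serving prompts
idle_energy_wh = 0.4e9      # assumed daily energy of idle-but-provisioned hardware
requests_per_day = 1.0e10   # assumed daily prompt volume

wh_per_request = (active_energy_wh + idle_energy_wh) / requests_per_day
print(f"{wh_per_request:.2f} Wh per request")  # 0.24 Wh with these inputs

# Sanity check on the TV comparison: delivering 0.24 Wh over nine
# seconds corresponds to a television drawing roughly 96 watts.
tv_watts = 0.24 / (9 / 3600)
print(f"Implied TV power draw: {tv_watts:.0f} W")
```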

The bad news is that the volume of requests is undoubtedly very high. The company has chosen to execute an AI operation with every single search request, a compute demand that simply didn’t exist a couple of years ago. So, while the individual impact is small, the cumulative cost is likely to be considerable.

The good news? Just a year ago, it would have been far, far worse.

Some of this is just down to circumstances. With the boom in solar power in the US and elsewhere, it has gotten easier for Google to arrange for renewable power. As a result, the carbon emissions per unit of energy consumed saw a 1.4x reduction over the past year. But the biggest wins have been on the software side, where different approaches have led to a 33x reduction in energy consumed per prompt.

Most of the energy used in serving AI requests comes from time spent in the custom AI accelerator chips, followed by the CPU and RAM; idle machines and overhead account for about 10 percent each. Credit: Elsworth et al.

The Google team describes a number of optimizations the company has made that contribute to this. One is an approach termed Mixture-of-Experts, which involves activating only the portion of an AI model needed to handle a specific request, cutting computational needs by a factor of 10 to 100. The company has also developed a number of compact versions of its main model, which further reduce the computational load. Data center management plays a role as well, since the company can make sure that any active hardware is fully utilized while allowing the rest to stay in a low-power state.
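In miniature, the Mixture-of-Experts idea looks like the sketch below: a small router scores the experts for each input, and only the top-k actually run, so compute scales with k rather than with the total number of experts. This is a toy illustration under our own assumptions, not Google’s implementation:

```python
import numpy as np

# Toy Mixture-of-Experts routing: score all experts, run only the top-k,
# and mix their outputs. Sizes are illustrative, not Gemini's architecture.
rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D = 64, 2, 128
experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.normal(size=(D, N_EXPERTS)) / np.sqrt(D)

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]    # keep only the k best-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                 # softmax weights over the chosen experts
    # Only TOP_K of N_EXPERTS weight matrices are ever multiplied here,
    # so expert-layer compute drops by a factor of N_EXPERTS / TOP_K.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

print(moe_layer(rng.normal(size=D)).shape)  # (128,)
```

With 2 of 64 experts active, the expert-layer compute falls by roughly 32x, squarely in the 10-to-100x range the team describes.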

US military’s X-37B spaceplane stays relevant with launch of another mission

“Quantum inertial sensors are not only scientifically intriguing, but they also have direct defense applications,” said Lt. Col. Nicholas Estep, an Air Force engineer who manages the DIU’s emerging technology portfolio. “If we can field devices that provide a leap in sensitivity and precision for observing platform motion over what is available today, then there’s an opportunity for strategic gains across the DoD.”

Teaching an old dog new tricks

The Pentagon’s twin X-37Bs have logged more than 4,200 days in orbit, equivalent to about 11-and-a-half years. The spaceplanes have flown in secrecy for nearly all of that time.

The most recent flight, Mission 7, ended in March with a runway landing at Vandenberg after a mission of more than 14 months that carried the spaceplane higher than ever before, all the way to an altitude approaching 25,000 miles (40,000 kilometers). The high-altitude elliptical orbit required a boost on a Falcon Heavy rocket.

In the final phase of the mission, ground controllers commanded the X-37B to gently dip into the atmosphere to demonstrate the spacecraft could use “aerobraking” maneuvers to bring its orbit closer to Earth in preparation for reentry.

An X-37B spaceplane is ready for encapsulation inside the Falcon 9 rocket’s payload fairing. Credit: US Space Force

Now, on Mission 8, the spaceplane heads back to low-Earth orbit hosting quantum navigation and laser communications experiments. Few people, if any, envisioned these kinds of missions flying on the X-37B when it first soared to space 15 years ago. At that time, quantum sensing was confined to the lab, and the first laser communication demonstrations in space were barely underway. SpaceX hadn’t revealed its plans for the Falcon Heavy rocket, which the X-37B needed to get to its higher orbit on the last mission.

The laser communications experiments on this flight will involve optical inter-satellite links with “proliferated commercial satellite networks in low-Earth orbit,” the Space Force said. This is likely a reference to SpaceX’s Starlink or Starshield broadband satellites. Laser links enable faster transmission of data, while offering more security against eavesdropping or intercepts.

Gen. Chance Saltzman, the Space Force’s chief of space operations, said in a statement that the laser communications experiment “will mark an important step in the US Space Force’s ability to leverage proliferated space networks as part of a diversified and redundant space architectures. In so doing, it will strengthen the resilience, reliability, adaptability and data transport speeds of our satellite communications architecture.”

For some people, music doesn’t connect with any of the brain’s reward circuits

“I was talking with my colleagues at a conference 10 years ago and I just casually said that everyone loves music,” recalls Josep Marco Pallarés, a neuroscientist at the University of Barcelona. But it was a statement he started to question almost immediately, given there were clinical cases in psychiatry where patients reported deriving absolutely no pleasure from listening to any kind of tunes.

So, Pallarés and his team spent the past 10 years researching the neural mechanisms behind a condition they called specific musical anhedonia: the inability to enjoy music.

The wiring behind joy

When we like something, it is usually a joint effect of circuits in our brain responsible for perception—be it perception of taste, touch, or sound—and reward circuits that give us a shot of dopamine in response to nice things we experience. For a long time, scientists attributed a lack of pleasure from things most people find enjoyable to malfunctions in one or more of those circuits.

You can’t enjoy music when the parts of the brain that process auditory stimuli don’t work properly, since you can’t hear it in the way that you would if the system were intact. You also can’t enjoy music when the reward circuit refuses to release that dopamine, even if you can hear it loud and clear. Pallarés, though, thought this traditional idea lacked a bit of explanatory power.

“When your reward circuit doesn’t work, you don’t experience enjoyment from anything, not just music,” Pallarés says. “But some people have no hearing impairments and can enjoy everything else—winning money, for example. The only thing they can’t enjoy is music.”

Scientists are building cyborg jellyfish to explore ocean depths

Understanding the wakes and vortices that jellyfish produce as they swim is crucial, according to Wu et al. Particle image velocimetry (PIV) is a vital tool for studying flow phenomena and biomechanical propulsion. PIV essentially tracks tiny tracer particles suspended in water by illuminating them with laser light. The technique usually relies on hollow glass spheres, polystyrene beads, aluminum flakes, or synthetic granules with special optical coatings to enhance the reflection of light.

These particles are readily available and have the right size and density for flow measurements, but they are very expensive, costing as much as $200 per pound in some cases. And they carry health and environmental risks: glass microspheres can cause skin or eye irritation, for example, while it’s not a good idea to inhale polystyrene beads or aluminum flakes. The particles are also not digestible by animals and can cause internal damage. Several biodegradable alternatives have been proposed, such as yeast cells, milk, microalgae, and potato starch, which are readily available and cheap, costing as little as $2 per pound.

Wu thought starch particles were the most promising biodegradable tracers and decided to test several candidates to identify the best one: corn starch, arrowroot starch, baking powder, jojoba beads, and walnut shell powder. Each type of particle was suspended in a water tank with moon jellyfish, and its movement was tracked with a PIV system. The team evaluated performance based on the particles’ size, density, and laser-scattering properties.

Of the various candidates, corn starch and arrowroot starch proved best suited for PIV applications, thanks to their density and uniform size distribution, while arrowroot starch performed best when it came to laser scattering tests. But corn starch would be well-suited for applications that require larger tracer particles since it produced larger laser scattering dots in the experiments. Both candidates matched the performance of commonly used synthetic PIV tracer particles in terms of accurately visualizing flow structures resulting from the swimming jellyfish.
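For a sense of how PIV turns those images into flow measurements, here is a minimal, self-contained sketch (illustrative only, not the study’s code) of the core step: cross-correlating two frames of tracer particles to recover their displacement.

```python
import numpy as np

# Two synthetic 64x64 "camera frames": frame_b is frame_a after the flow
# has carried every tracer particle 3 pixels down and 5 pixels right.
rng = np.random.default_rng(1)
frame_a = np.zeros((64, 64))
ys, xs = rng.integers(8, 48, size=(2, 40))  # 40 tracer particles
frame_a[ys, xs] = 1.0
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))

# Circular cross-correlation via FFTs; the peak sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
dy = dy - 64 if dy > 32 else dy  # map wrap-around indices to signed shifts
dx = dx - 64 if dx > 32 else dx
print(f"Recovered displacement: {dy} px down, {dx} px right")  # 3, 5
```

Real PIV systems run this correlation over many small interrogation windows and divide by the inter-frame time to produce a velocity field, with sub-pixel peak fitting and outlier rejection refining the result.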

Physical Review Fluids, 2025. DOI: 10.1103/bg66-976x (About DOIs).

Deeply divided Supreme Court lets NIH grant terminations continue

The dissents

The primary dissent was written by Chief Justice Roberts, and joined in part by the three Democratic appointees, Jackson, Kagan, and Sotomayor. It is a grand total of one paragraph and can be distilled down to a single sentence: “If the District Court had jurisdiction to vacate the directives, it also had jurisdiction to vacate the ‘Resulting Grant Terminations.’”

Jackson, however, chose to write a separate and far more detailed argument against the decision, mostly focusing on the fact that it’s not simply a matter of abstract law; it has real-world consequences.

She notes that existing law prevents plaintiffs from suing in the Court of Federal Claims while the facts are under dispute in other courts (something acknowledged by Barrett). That would mean that, as here, any plaintiffs would have to have the policy declared illegal first in the District Court, and only after that was fully resolved could they turn to the Federal Claims Court to try to restore their grants. That’s a process that could take years. In the meantime, the scientists would be out of funding, with dire consequences.

Yearslong studies will lose validity. Animal subjects will be euthanized. Life-saving medication trials will be abandoned. Countless researchers will lose their jobs. And community health clinics will close.

Jackson also had little interest in hearing that the government would be harmed by paying out the grants in the meantime. “For the Government, the incremental expenditure of money is at stake,” she wrote. “For the plaintiffs and the public, scientific progress itself hangs in the balance along with the lives that progress saves.”

With this decision, of course, it no longer hangs in the balance. There’s a possibility that the District Court’s ruling that the government’s policy was arbitrary and capricious will ultimately prevail; it’s not clear, because Barrett says she hasn’t even seen the government make arguments there, and Roberts only wrote regarding the venue issues. In the meantime, even with the policy stayed, it’s unlikely that anyone will focus grant proposals on the disfavored subjects, given that the policy might be reinstated at any moment.

And even if that ruling is upheld, it will likely take years to get there, and only then could a separate case be started to restore the funding. Any labs that had been using those grants will have long since moved on, and the people working on those projects scattered.

Neolithic people took gruesome trophies from invading tribes

A local Neolithic community in northeastern France may have clashed with foreign invaders, cutting off limbs as war trophies and otherwise brutalizing their prisoners of war, according to a new paper published in the journal Science Advances. The findings challenge conventional interpretations of prehistoric violence as being indiscriminate or committed for pragmatic reasons.

Neolithic Europe was no stranger to collective violence of many forms, such as the odd execution and massacres of small communities, as well as armed conflicts. For instance, we recently reported on an analysis of human remains from 11 individuals recovered from El Mirador Cave in Spain, showing evidence of cannibalism—likely the result of a violent episode between competing Late Neolithic herding communities about 5,700 years ago. Microscopy analysis revealed telltale slice marks, scrape marks, and chop marks, as well as evidence of cremation, peeling, fractures, and human tooth marks.

This indicates the victims were skinned, the flesh removed, the bodies disarticulated, and then cooked and eaten. Isotope analysis indicated the individuals were local and were probably eaten over the course of just a few days. There have been similar Neolithic massacres in Germany and Spain, but the El Mirador remains provide evidence of a rare systematic consumption of victims.

Per the authors of this latest study, during the late Middle Neolithic, the Upper Rhine Valley was the likely site of both armed conflict and rapid cultural upheaval, as groups from the Paris Basin infiltrated the region between 4295 and 4165 BCE. In addition to fortifications and evidence of large aggregated settlements, many skeletal remains from this period show signs of violence.

Friends or foes?

Overhead views of late Middle Neolithic violence-related human mass deposits in Pit 124 of the Alsace region, France. Credit: Philippe Lefranc, INRAP

Archaeologist Teresa Fernandez-Crespo of Spain’s Valladolid University and co-authors focused their analysis on human remains excavated from two circular pits at the Achenheim and Bergheim sites in Alsace in northeastern France. Fernandez-Crespo et al. examined the bones and found that many of the remains showed signs of unhealed trauma—such as skull fractures—as well as the use of excessive violence (overkill), not to mention quite a few severed left upper limbs. Other skeletons did not show signs of trauma and appeared to have been given a traditional burial.

A geothermal network in Colorado could help a rural town diversify its economy


Town pitches companies to take advantage of “reliable, cost-effective heating and cooling.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

Hayden, a small town in the mountains of northwest Colorado, is searching for ways to diversify its economy, much like other energy communities across the Mountain West.

For decades, a coal-fired power plant, now scheduled to shut down in the coming years, served as a reliable source of tax revenue, jobs, and electricity.

When town leaders in the community just west of Steamboat Springs decided to create a new business park, harnessing geothermal energy to heat and cool the buildings simply made sense.

The technology aligns with Colorado’s sustainability goals and provides access to grants and tax credits that make the project financially feasible for a town with around 2,000 residents, said Matthew Mendisco, town manager.

“We’re creating the infrastructure to attract employers, support local jobs, and give our community reliable, cost-effective heating and cooling for decades to come,” Mendisco said in a statement.

Bedrock Energy, a geothermal drilling startup company that employs advanced drilling techniques developed by the oil and gas industry, is currently drilling dozens of boreholes that will help heat and cool the town’s Northwest Colorado Business District.

The 1,000-foot-deep boreholes, or wells, will connect buildings in the industrial park to steady underground temperatures. Near the surface, the Earth is approximately 51° F year-round. As the drills go deeper, the temperature slowly increases to approximately 64° F near the bottom of the boreholes. Pipes looping down into each well will draw on this thermal energy for heating in the winter and cooling in the summer, significantly reducing energy needs.
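Assuming a roughly linear gradient between those two temperatures (a simplification on our part, not project data), the expected ground temperature at any depth along a borehole can be interpolated:

```python
# Linear interpolation between the quoted ground temperatures: ~51 F
# near the surface and ~64 F at the 1,000-foot bottom of a borehole.
# The linear profile is our simplifying assumption.
SURFACE_F, BOTTOM_F, DEPTH_FT = 51.0, 64.0, 1000.0

def ground_temp_f(depth_ft: float) -> float:
    """Estimated year-round ground temperature at the given depth."""
    return SURFACE_F + (BOTTOM_F - SURFACE_F) * (depth_ft / DEPTH_FT)

for depth in (0, 250, 500, 750, 1000):
    print(f"{depth:>5} ft: {ground_temp_f(depth):.1f} F")
# The gradient works out to about 1.3 F per 100 feet of depth.
```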

Ground source heat pumps located in each building will provide additional heating or cooling depending on the time of year.

The project, one of the first in the region, drew the interest of some of the state’s top political leaders, who attended an open house hosted by town officials and company executives on Wednesday.

“Our energy future is happening right now—right here in Hayden,” US Senator John Hickenlooper (D-Colo.) said in a prepared statement prior to the event.

“Projects like this will drive rural economic growth while harnessing naturally occurring energy to provide reliable, cost-effective heating and cooling to local businesses,” said US Senator Michael Bennet (D-Colo.) in a written statement.

In an interview with Inside Climate News, Mendisco said that extreme weather snaps, which are not uncommon in a town over 6,000 feet above sea level, will not force companies to pay higher prices for fossil fuels to meet energy demands, as they do elsewhere in the country. He added that the system’s rates will be “fairly sustainable, and they will be as competitive as any of our other providers, natural gas, etcetera.”

The geothermal system under construction for Hayden’s business district will be owned by the town and will initially consist of separate systems for each building that will be connected into a larger network over time. Building out the network as the business park grows will help reduce initial capital costs.

Statewide interest

Hayden received two state grants totaling $300,000 to help design and build its geothermal system.

“It wasn’t completely clear to us how much interest was really going to be out there,” Will Toor, executive director of the Colorado Energy Office, said of a grant program the state launched in 2022.

In the past few years, the program has seen significant interest, with approximately 80 communities across the state exploring similar projects, said Bryce Carter, the geothermal program manager for the state’s Energy Office.

Two projects under development are by Xcel Energy, the largest electricity and gas provider in the state. A law passed in Colorado in 2023 required large gas utilities to develop at least one geothermal heating and cooling network in the state. The networks, which connect individual buildings and boreholes into a shared thermal loop, offer high efficiency and an economy of scale, but also have high upfront construction costs.

There are now 26 utility-led geothermal heating and cooling projects under development or completed nationwide, Jessica Silber-Byrne of the Building Decarbonization Coalition, a nonprofit based in Delaware, said.

Utility companies are widely seen as a natural developer of such projects as they can shoulder multi-million dollar expenses and recoup those costs in ratepayer fees over time. The first, and so far only, geothermal network completed by a gas utility was built by Eversource Energy in Framingham, Massachusetts, last year.

Grid stress concerns heat up geothermal opportunities

Twelve states have legislation supporting or requiring the development of thermal heating and cooling networks. Regulators are interested in the technology because its high efficiency can reduce demand on electricity grids.

Geothermal heating and cooling is roughly twice as efficient as air source heat pumps, a common electric heating and cooling alternative that relies on outdoor air. During periods of extreme heat or extreme cold, air source heat pumps have to work harder, requiring approximately four times more electricity than ground source heat pumps.
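The efficiency gap comes down to the coefficient of performance (COP), the units of heat delivered per unit of electricity consumed. A short sketch with assumed COP values (ours, for illustration; not figures from the article) reproduces the ratios cited above:

```python
# Electricity needed to deliver the same heat with different heat pumps.
# COP = heat delivered / electricity consumed; the values below are
# assumed for illustration and vary with equipment and climate.
HEAT_DEMAND_KWH = 100.0  # thermal energy a building needs during a cold snap

cops = {
    "ground-source (steady ~51-64 F ground)": 4.0,  # assumed
    "air-source, mild outdoor air": 2.0,            # assumed; ~half as efficient
    "air-source, extreme cold": 1.0,                # COP sags as outdoor temps drop
}

for system, cop in cops.items():
    print(f"{system}: {HEAT_DEMAND_KWH / cop:.0f} kWh of electricity")
# With these assumptions, the air-source unit in extreme cold draws four
# times the electricity of the ground-source system, matching the
# "approximately four times" figure above.
```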

As more power-hungry data centers come online, the ability of geothermal heating and cooling to reduce the energy needs of other users of the grid, particularly at periods of peak demand, could become increasingly important, geothermal proponents say.

“The most urgent conversation about energy right now is the stress on the grid,” Joselyn Lai, Bedrock Energy’s CEO, said. “Geothermal’s role in the energy ecosystem will actually increase because of the concerns about meeting load growth.”

The geothermal system will be one of the larger drilling projects to date for Bedrock, a company founded in Austin, Texas, in 2022. Bedrock, which is working on another similarly sized project in Crested Butte, Colorado, seeks to reduce the cost of relatively shallow-depth geothermal drilling through the use of robotics and data analytics that rely on artificial intelligence.

By using a single, continuous steel pipe for drilling, rather than dozens of shorter pipe segments that need to be attached as they go, Bedrock can drill faster and transmit data more easily from sensors near the drill head to the surface.

In addition to shallow, low-temperature geothermal heating and cooling networks, deep, hot-rock geothermal systems that generate steam for electricity production are also seeing increased interest. New, enhanced geothermal systems that draw on hydraulic fracturing techniques developed by the oil and gas industry and other advanced drilling methods are quickly expanding geothermal energy’s potential.

“We’re also very bullish on geothermal electricity,” said Toor, of the Colorado Energy Office, adding that the state has a goal of reducing carbon emissions from the electricity sector by 80 percent by 2030. He said geothermal power that produces clean, round-the-clock electricity will likely play a key role in meeting that target.

The University of Colorado, Boulder, is currently considering the use of geothermal energy for heating, cooling, and electricity production and has received grants for initial feasibility studies through the state’s energy office.

For town officials in Hayden, the technology’s appeal is simple.

“Geothermal works at night, it works in the day, it works whenever you want it to work,” Mendisco said. “It doesn’t matter if there’s a giant snowstorm [or] a giant rainstorm. Five hundred feet to 1,000 feet below the surface, the Earth doesn’t care. It just generates heat.”

Using pollen to make paper, sponges, and more

Softening the shell

To begin working with pollen, scientists can remove the sticky coating around the grains in a process called defatting. Stripping away these lipids and allergenic proteins is the first step in creating the empty capsules for drug delivery that Csaba seeks. Beyond that, however, pollen’s seemingly impenetrable shell—made up of the biopolymer sporopollenin—had long stumped researchers and limited its use.

A breakthrough came in 2020, when Cho and his team reported that incubating pollen in an alkaline solution of potassium hydroxide at 80° Celsius (176° Fahrenheit) could significantly alter the surface chemistry of pollen grains, allowing them to readily absorb and retain water.

The resulting pollen is as pliable as Play-Doh, says Shahrudin Ibrahim, a research fellow in Cho’s lab who helped to develop the technique. Before the treatment, pollen grains are more like marbles: hard, inert, and largely unreactive. After, the particles are so soft they stick together easily, allowing more complex structures to form. This opens up numerous applications, Ibrahim says, proudly holding up a vial of the yellow-brown slush in the lab.

When cast onto a flat mold and dried out, the microgel assembles into a paper or film, depending on the final thickness, that is strong yet flexible. It is also sensitive to external stimuli, including changes in pH and humidity. Exposure to the alkaline solution causes pollen’s constituent polymers to become more hydrophilic, or water-loving, so depending on the conditions, the gel will swell or shrink due to the absorption or expulsion of water, explains Ibrahim.

For technical applications, pollen grains are first stripped of their allergy-inducing sticky coating, in a process called defatting. Next, if treated with acid, they form hollow sporopollenin capsules that can be used to deliver drugs. If treated instead with an alkaline solution, the defatted pollen grains are transformed into a soft microgel that can be used to make thin films, paper, and sponges. Credit: Knowable Magazine

This winning combination of properties, the Singaporean researchers believe, makes pollen-based film a prospect for many future applications: smart actuators that allow devices to detect and respond to changes in their surroundings, wearable health trackers to monitor heart signals, and more. And because pollen is naturally UV-protective, there’s the possibility it could substitute for certain photonically active substrates in perovskite solar cells and other optoelectronic devices.
