Swierk et al. use various methods, including Raman spectroscopy, nuclear magnetic resonance spectroscopy, and electron microscopy, to analyze a broad range of commonly used tattoo inks. This enables them to identify specific pigments and other ingredients in the various inks.
Earlier this year, Swierk’s team found that 45 of the 54 inks it analyzed in the US (83 percent) had major labeling discrepancies. Allergic reactions to the pigments, especially red inks, have already been documented. For instance, a 2020 study found a connection between contact dermatitis and how tattoos degrade over time. But additives can also have adverse effects. More than half of the tested inks contained unlisted polyethylene glycol—repeated exposure could cause organ damage—and 15 of the inks contained a potential allergen called propylene glycol.
Meanwhile, across the pond…
That’s a major reason why the European Commission has recently begun to crack down on harmful chemicals in tattoo ink, including banning two widely used blue and green pigments (Pigment Blue 15 and Pigment Green 7), claiming they are often of low purity and can contain hazardous substances. (US regulations are less strict than those adopted by the EU.) Swierk’s team has now expanded its chemical analysis to include 10 different tattoo inks from five different manufacturers supplying the European market.
According to Swierk et al., nine of those 10 inks did not meet EU regulations; five simply failed to list all the components, but four contained prohibited ingredients. The other main finding was that Raman spectroscopy is not very reliable for figuring out which of three common structures of Pigment Blue 15 has been used. (Only one has been banned.) Different instruments failed to reliably distinguish between the three forms, so the authors concluded that the current ban on Pigment Blue 15 is simply unenforceable.
“There are regulations on the book that are not being complied with, at least in part because enforcement is lagging,” said Swierk. “Our work cannot determine whether the issues with inaccurate tattoo ink labeling is intentional or unintentional, but at a minimum, it highlights the need for manufacturers to adopt better manufacturing standards. At the same time, the regulations that are on the books need to be enforced and if they cannot be enforced, like we argue in the case of Pigment Blue 15, they need to be reevaluated.”
Our planet is choking on plastics. Some of the worst offenders, which can take decades to degrade in landfills, are polypropylene—which is used for things such as food packaging and bumpers—and polyethylene, found in plastic bags, bottles, toys, and even mulch.
Polypropylene and polyethylene can be recycled, but the process can be difficult and often produces large quantities of the greenhouse gas methane. They are both polyolefins, which are the products of polymerizing ethylene and propylene, raw materials that are mainly derived from fossil fuels. The bonds of polyolefins are also notoriously hard to break.
Now, researchers at the University of California, Berkeley have come up with a method of recycling these polymers that uses catalysts that easily break their bonds, converting them into propylene and isobutylene, which are gases at room temperature. Those gases can then be recycled into new plastics.
“Because polypropylene and polyethylene are among the most difficult and expensive plastics to separate from each other in a mixed waste stream, it is crucial that [a recycling] process apply to both polyolefins,” the research team said in a study recently published in Science.
Breaking it down
The recycling process the team used is known as isomerizing ethenolysis, which relies on a catalyst to break down olefin polymer chains into their constituent small molecules. Polyethylene and polypropylene bonds are highly resistant to chemical reactions because both of these polyolefins have long chains of single carbon-carbon bonds. Most other polymers have at least one carbon-carbon double bond, which is much easier to break.
While isomerizing ethenolysis had been tried by the same researchers before, the previous catalysts were expensive metals that did not remain pure long enough to convert all of the plastic into gas. Using sodium on alumina followed by tungsten oxide on silica proved much more economical and effective, even though the high temperatures required for the reaction added a bit to the cost.
In both plastics, exposure to sodium on alumina broke each polymer chain into shorter polymer chains and created breakable carbon-carbon double bonds at the ends. The chains continued to break over and over. Both then underwent a second process known as olefin metathesis. They were exposed to a stream of ethylene gas flowing into a reaction chamber while being introduced to tungsten oxide on silica, which resulted in the breakage of the carbon-carbon bonds.
The reaction breaks all the carbon-carbon bonds in polyethylene and polypropylene, with the carbon atoms released during the breaking of these bonds ending up attached to molecules of ethylene. “The ethylene is critical to this reaction, as it is a co-reactant,” researcher R.J. Conk, one of the authors of the study, told Ars Technica. “The broken links then react with ethylene, which removes the links from the chain. Without ethylene, the reaction cannot occur.”
The process continues along the entire chain until polyethylene is fully converted to propylene and polypropylene is converted to a mixture of propylene and isobutylene.
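To see roughly where the atoms go, here is an idealized overall stoichiometry for the polyethylene case. This is a back-of-the-envelope simplification on my part—it ignores chain ends and the branch points that give polypropylene its isobutylene—not an equation from the paper:

$$
\underbrace{(\mathrm{C_2H_4})_n}_{\text{polyethylene}} \;+\; \tfrac{n}{2}\,\mathrm{C_2H_4} \;\longrightarrow\; n\,\mathrm{C_3H_6}\ (\text{propylene})
$$

Each propylene molecule carries two carbons from the original chain and one from the ethylene co-reactant, which is why the ethylene is consumed rather than acting as just another catalyst.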
This method has high selectivity—meaning most of what it produces is the desired product: propylene from polyethylene, and both propylene and isobutylene from polypropylene. Both of these chemicals are in high demand. Propylene is an important raw material for the chemical industry, while isobutylene is a frequently used monomer in many different polymers, including synthetic rubber, and also serves as a gasoline additive.
Mixing it up
Because plastics are often mixed at recycling centers, the researchers wanted to see what would happen if polypropylene and polyethylene underwent isomerizing ethenolysis together. The reaction was successful, converting the mixture into propylene and isobutylene, with slightly more propylene than isobutylene.
Mixtures also typically include contaminants in the form of additional plastics, so the team wanted to see whether the reaction would still work in their presence. They experimented with plastic objects that would otherwise be thrown away, including a centrifuge tube and a bread bag, both of which contained traces of other polymers besides polypropylene and polyethylene. The reaction yielded only slightly less propylene and isobutylene than it did with unadulterated versions of the polyolefins.
Another test involved introducing different plastics, such as PET and PVC, to polypropylene and polyethylene to see if that would make a difference. These did lower the yield significantly. If this approach is going to be successful, then all but the slightest traces of contaminants will have to be removed from polypropylene and polyethylene products before they are recycled.
While this recycling method sounds like it could prevent tons upon tons of waste, it will need to be scaled up enormously for this to happen. When the research team increased the scale of the experiment, it produced the same yield, which looks promising for the future. Still, we’ll need to build considerable infrastructure before this could make a dent in our plastic waste.
“We hope that the work described…will lead to practical methods for…[producing] new polymers,” the researchers said in the same study. “By doing so, the demand for production of these essential commodity chemicals starting from fossil carbon sources and the associated greenhouse gas emissions could be greatly reduced.”
As power utilities and industrial companies seek to use more renewable energy, the market for grid-scale batteries is expanding rapidly. Alternatives to lithium-ion technology may provide environmental, labor, and safety benefits. And these new chemistries can work in markets like the electric grid and industrial applications that lithium doesn’t address well.
“I think the market for longer-duration storage is just now emerging,” said Mark Higgins, chief commercial officer and president of North America at Redflow. “We have a lot of… very rapid scale-up in the types of projects that we’re working on and the size of projects that we’re working on. We’ve deployed about 270 projects around the world. Most of them have been small off-grid or remote-grid systems. What we’re seeing today is much more grid-connected types of projects.”
“Demand… seems to be increasing every day,” said Giovanni Damato, president of CMBlu Energy. Media projections of growth in this space are huge. “We’re really excited about the opportunity to… just be able to play in that space and provide as much capacity as possible.”
New industrial markets are also becoming active. Chemical plants, steel plants, and metal processing plants have not been able to deploy renewable energy well so far due to batteries’ fire hazards, said Mukesh Chatter, co-founder and CEO of Alsym Energy. “When you already are generating a lot of heat in these plants and there’s a risk of fire to begin with, you don’t want to deploy any battery that’s flammable.”
Chatter said that the definition of long-duration energy storage is not agreed upon by industry organizations. Still, there are a number of potential contenders developing storage for this market. Here, we’ll look at Redflow, CMBlu Energy, and BASF Stationary Energy Storage.
Zinc-bromine batteries
Redflow has been manufacturing zinc-bromine flow batteries since 2010, Higgins said. These batteries do not require the critical minerals that lithium-ion batteries need, which sometimes come from parts of the world with unsafe labor practices or geopolitical risks. The minerals for these zinc-bromine batteries are affordable and easy to obtain.
The conversion between electrical energy and stored chemical energy takes place in the electrochemical cell, which consists of two half-cells separated by a porous or ion-exchange membrane. The battery can be constructed from low-cost and readily available materials, such as thermoplastics and carbon-based materials. Many parts of the battery can be recycled, and the electrolytes can be recovered and reused, leading to a low cost of ownership.
Building these can be quite different from other batteries. “I would say that our manufacturing process is much more akin to… an automotive manufacturing process than to [an] electronics manufacturing process… like [a] lithium-ion battery,” Higgins said. “Essentially, it is assembling batteries that are made out of plastic tanks, pumps, fans, [and] tubing. It’s a flow battery, so it’s a liquid that flows through the system that goes through an electrical stack that has cells in it, which is where most of Redflow’s intellectual property resides. The rest of the battery is all… parts that we can obtain just about anywhere.”
The charging and discharging happen inside an electrical stack. In the stack, zinc is plated onto a carbon surface during the charging process. It is then dissolved into the liquid during the discharging process, Higgins said.
The zinc-bromine electrolyte is derived from an industrial chemical that has been used in the oil and gas sector for a long time, Higgins added.
This battery cannot catch fire, and all of its parts are recyclable, Higgins told Ars. “You don’t have any of the toxic materials that you do in a lithium-ion battery.” The electrolyte liquid can be reused in other batteries. If it’s contaminated, it can be used by the oil and gas industry. If the battery leaks, the contents can be neutralized quickly and are subsequently not hazardous.
“Right now, we manufacture our batteries in Thailand,” Higgins said. “The process and wages are all fair wages and we follow all relevant environmental and labor standards.” The largest sources of bromine come from the Dead Sea or within the United States. The zinc comes from Northern Europe, the United States, or Canada.
The batteries typically use an annual maintenance program to replace components that wear out or fail, something that’s not possible with many other battery types. Higgins estimated that two to four years down the road, this technology will be “completely competitive with lithium-ion” from a cost perspective. Some government grants have helped with the commercialization process.
A lot of gold deposits are found embedded in quartz crystals.
One of the reasons gold is so valuable is because it is highly unreactive—if you make something out of gold, it keeps its lustrous radiance. Even when you can react it with another material, it’s also barely soluble, a combination that makes it difficult to purify away from other materials. Which is part of why a large majority of the gold we’ve obtained comes from deposits where it is present in large chunks, some of them reaching hundreds of kilograms.
Those of you paying careful attention to the previous paragraph may have noticed a problem here: If gold is so difficult to get into its pure form, how do natural processes create enormous chunks of it? On Monday, a group of Australian researchers published a hypothesis, and a bit of evidence supporting it. They propose that an earthquake-triggered piezoelectric effect essentially electroplates gold onto quartz crystals.
The hypothesis
Approximately 75 percent of the gold humanity has obtained has come from what are called orogenic gold deposits. Orogeny is a term for the tectonic processes that build mountains, and orogenic gold deposits form in the seams where two bodies of rock are moving past each other. These areas are often filled with hot hydrothermal fluids, and the heat can increase the solubility of gold from “barely there” to “extremely low,” meaning generally less than a single milligram in a liter of water.
The other striking thing about these deposits is that they’re generally associated with the mineral quartz, a crystalline form of silicon dioxide. And that fact formed the foundation for the new hypothesis, which brings together a number of topics that are generally considered largely unrelated.
It turns out that quartz is the only abundant mineral that’s piezoelectric, meaning that it generates a charge when it’s placed under strain. While you don’t need to understand why that’s the case to follow this hypothesis, the researchers’ explanation of the piezoelectric effect is remarkably cogent and clear, so I’ll just quote it here for people who want to come away from this having learned something: “Quartz is the only common mineral that forms crystals lacking a center of symmetry (non-centrosymmetric). Non-centrosymmetric crystals distorted under stress have an imbalance in their internal electric configuration, which produces an electrical potential—or voltage—across the crystal that is directly proportional to the applied mechanical force.”
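For readers who like to see that proportionality written out, the textbook relation for the longitudinal piezoelectric effect in a thin crystal plate looks like this (this is generic piezoelectricity, not an equation from the paper; $d$ is the piezoelectric charge coefficient, $F$ the applied force, $t$ the plate thickness, $A$ the electrode area, and $\varepsilon$ the permittivity):

$$
Q = d\,F, \qquad V = \frac{Q}{C} = \frac{d\,F\,t}{\varepsilon A} \;\propto\; F
$$

For quartz, $d$ is only a few picocoulombs per newton, but because the crystal is such a good insulator, even that small charge can build up a meaningful voltage across the crystal.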
Quartz happens to be an insulator, so this electric potential doesn’t easily dissipate on its own. It can, however, be eliminated through the transfer of electrons to or from any materials that touch the quartz crystals, including fluids. In practice, that means the charge can drive redox (reduction/oxidation) reactions in any nearby fluids, potentially neutralizing any dissolved ions and causing them to come out of solution.
This has the potential to be self-reinforcing. Once a small metal deposit forms on the surface of quartz, it will ease the exchange of electrons with the fluid in its immediate vicinity, meaning more metal will be deposited in the same location. This will also lower the concentration of the metal in the nearby solution, which will favor the diffusion of additional metal ions into the location, meaning that the fluid itself doesn’t need to keep circulating past the same spot.
Finally, the concept also needs a source of strain to generate the piezoelectric effect in the first place. But remember that this is all happening in an active fault zone, so strain is not in short supply.
And the evidence
Figuring out whether this happens in active fault zones would be extremely challenging for all sorts of reasons. But it’s relatively easy to dunk some quartz crystals in a solution containing gold and see what happens. So the latter is the route the Australians took.
The gold came in the form of either a solution of gold chloride ions or a suspension of gold nanoparticles. Quartz crystals were either pure quartz or obtained from a gold-rich area and already contained some small gold deposits. The crystals themselves were subjected to strain at a frequency similar to that produced by small earthquakes, and the experiment was left to run for an hour.
An hour was enough to get small gold deposits to form on the pure quartz crystals, regardless of whether it was from dissolved gold or suspended gold nanoparticles. In the case of the naturally formed quartz, the gold ended up being deposited on the existing sites where gold metal is present, rather than forming additional deposits.
The researchers note that a lot of the quartz in deposits is disordered rather than in the form of single crystals. In disordered material, there are lots of small crystals oriented randomly, meaning the piezoelectric effect of any one of these crystals is typically canceled out by its neighbors. So, gold will preferentially form on single crystals, which also helps explain why it’s found in large lumps in these deposits.
So, this is a pretty compelling hypothesis—it explains something puzzling, relies on well-established processes, and has a bit of experimental support. Given that activity in active faults is likely to remain both slow and inaccessible, the next steps are probably going to involve getting longer-term information on the rate of deposition through this process and a physical comparison of these deposits with those found in natural settings.
This electroactive polymer hydrogel “learned” to play Pong. Credit: Cell Reports Physical Science/Strong et al.
Pong will always hold a special place in the history of gaming as one of the earliest arcade video games. Introduced in 1972, it was a table tennis game featuring very simple graphics and gameplay. In fact, it’s simple enough that even non-living materials known as hydrogels can “learn” to play the game by “remembering” previous patterns of electrical stimulation, according to a new paper published in the journal Cell Reports Physical Science.
“Our research shows that even very simple materials can exhibit complex, adaptive behaviors typically associated with living systems or sophisticated AI,” said co-author Yoshikatsu Hayashi, a biomedical engineer at the University of Reading in the UK. “This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”
Hydrogels are soft, flexible biphasic materials that swell but do not dissolve in water. So a hydrogel may contain a large amount of water but still maintain its shape, making it useful for a wide range of applications. Perhaps the best-known use is soft contact lenses, but various kinds of hydrogels are also used in breast implants, disposable diapers, EEG and ECG medical electrodes, glucose biosensors, encapsulating quantum dots, solar-powered water purification, cell cultures, tissue engineering scaffolds, water gel explosives, actuators for soft robotics, supersonic shock-absorbing materials, and sustained-release drug delivery systems, among other uses.
In April, Hayashi co-authored a paper showing that hydrogels can “learn” to beat in rhythm with an external pacemaker, something previously only achieved with living cells. They exploited the intrinsic ability of the hydrogels to convert chemical energy into mechanical oscillations, using the pacemaker to apply cyclic compressions. They found that when the oscillation of a gel sample matched the harmonic resonance of the pacemaker’s beat, the system kept a “memory” of that resonant oscillation period and could retain that memory even when the pacemaker was turned off. Such hydrogels might one day be a useful substitute for heart research using animals, providing new ways to research conditions like cardiac arrhythmia.
For this latest work, Hayashi and co-authors were partly inspired by a 2022 study in which brain cells in a dish—dubbed DishBrain—were electrically stimulated in such a way as to create useful feedback loops, enabling them to “learn” to play Pong (albeit badly). As Ars Science Editor John Timmer reported at the time:
Pong proved to be an excellent choice for the experiments. The environment only involves a couple of variables: the location of the paddle and the location of the ball. The paddle can only move along a single line, so the motor portion of things only needs two inputs: move up or move down. And there’s a clear reward for doing things well: you avoid an end state where the ball goes past the paddles and the game stops. It is a great setup for testing a simple neural network.
Put in Pong terms, the sensory portion of the network will take the positional inputs, determine an action (move the paddle up or down), and then generate an expectation for what the next state will be. If it’s interpreting the world correctly, that state will be similar to its prediction, and thus the sensory input will be its own reward. If it gets things wrong, then there will be a large mismatch, and the network will revise its connections and try again.
There were a few caveats—even the best systems didn’t play Pong all that well—but the approach mostly worked. Those systems comprising either mouse or human neurons saw the average length of Pong rallies increase over time, indicating they might be learning the game’s rules. Systems based on non-neural cells, or those lacking a reward system, didn’t see this sort of improvement. The findings provided some evidence that neural networks formed from actual neurons spontaneously develop the ability to learn. And that could explain some of the learning capabilities of actual brains, where smaller groups of neurons are organized into functional units.
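To get a feel for the closed-loop idea in the passage above, here is a deliberately tiny Python sketch—purely illustrative, with a made-up state space and update rule, and no connection to the actual DishBrain or hydrogel experiments—of an agent that predicts where the ball will be next, acts on that prediction, and revises its internal model whenever the prediction misses:

```python
# Toy sketch only: a one-dimensional "Pong" tracker driven by prediction error.
# Nothing here comes from the DishBrain or hydrogel papers; the state space,
# update rule, and constants are illustrative assumptions.

HEIGHT = 6  # number of vertical positions available to the ball and paddle


def simulate(steps=400):
    # model maps (previous ball row, current ball row) -> predicted next row.
    # It starts empty, so the agent initially predicts "the ball stays put."
    model = {}
    hits = []

    prev_ball, ball, vel = 0, 1, 1   # the ball has just moved from row 0 to row 1
    paddle = HEIGHT // 2

    for _ in range(steps):
        # Predict the next ball position from the last two observations.
        key = (prev_ball, ball)
        predicted = model.get(key, ball)

        # Act on the prediction: move the paddle one step toward it.
        if paddle < predicted:
            paddle += 1
        elif paddle > predicted:
            paddle -= 1

        # The "world" updates: the ball advances and bounces off the walls.
        next_ball = ball + vel
        if next_ball < 0 or next_ball > HEIGHT - 1:
            vel = -vel
            next_ball = ball + vel

        # A mismatch between prediction and reality revises the model --
        # the stand-in here for the feedback stimulation described above.
        if predicted != next_ball:
            model[key] = next_ball

        hits.append(1 if paddle == next_ball else 0)
        prev_ball, ball = ball, next_ball

    print("hit rate over the first 50 steps:", sum(hits[:50]) / 50)
    print("hit rate over the last 50 steps: ", sum(hits[-50:]) / 50)


if __name__ == "__main__":
    simulate()
```

The qualitative signature is the same one the researchers looked for: performance (here, the hit rate; in the experiments, rally length) improves over time as prediction errors get corrected.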
Composite image showing color variation of emerald green bookcloth on book spines, likely a result of air pollution.
In April, the National Library of France removed four 19th century books, all published in Great Britain, from its shelves because the covers were likely laced with arsenic. The books have been placed in quarantine for further analysis to determine exactly how much arsenic is present. It’s part of an ongoing global effort to test cloth-bound books from the 19th and early 20th centuries because of the common practice of using toxic dyes during that period.
Chemists from Lipscomb University in Nashville, Tennessee, have also been studying Victorian books from that university’s library collection in order to identify and quantify levels of poisonous substances in the covers. They reported their initial findings this week at a meeting of the American Chemical Society in Denver. Using a combination of spectroscopic techniques, they found that several books had lead concentrations more than twice the limit imposed by the US Centers for Disease Control (CDC).
The Lipscomb effort was inspired by the University of Delaware’s Poison Book Project, established in 2019 as an interdisciplinary crowdsourced collaboration between university scientists and the Winterthur Museum, Garden, and Library. The initial objective was to analyze all the Victorian-era books in the Winterthur circulating and rare books collection for the presence of an arsenic compound called copper acetoarsenite, an emerald green pigment that was very popular at the time to dye wallpaper, clothing, and cloth book covers. Book covers dyed with chrome yellow—favored by Vincent van Gogh—aka lead chromate, were also examined, and the project’s scope has since expanded worldwide.
The Poison Book Project is ongoing, but 50 percent of the 19th century cloth-case bindings tested so far contain lead in the cloth across a range of colors, as well as other highly toxic heavy metals: arsenic, chromium, and mercury. The French National Library’s affected books included the two-volume Ballads of Ireland by Edward Hayes (1855), an anthology of translated Romanian poetry (1856), and the Royal Horticultural Society’s book from 1862–1863.
Levels were especially high in those bindings that contain chrome yellow. However, the project researchers also determined that, for the moment at least, the chromium and lead in chrome yellow dyed book covers are still bound to the cloth. The emerald green pigment, on the other hand, is highly “friable,” meaning that the particles break apart under even small amounts of stress or friction, like rubbing or brushing up against the surface—and that pigment dust is hazardous to human health, particularly if inhaled.
Lipscomb University undergraduate Leila Ais cuts a sample from a book cover to test for toxic dyes.
Kristy Jones
The project lists several recommendations for the safe handling and storage of such books, such as wearing nitrile gloves—prolonged direct contact with arsenical green pigment, for instance, can lead to skin lesions and skin cancer—and not eating, drinking, biting one’s fingernails or touching one’s face during handling, as well as washing hands thoroughly and wiping down surfaces. Arsenical green books should be isolated for storage and removed from circulating collections, if possible. And professional conservators should work under a chemical fume hood to limit their exposure to arsenical pigment dust.
X-ray diffraction marks the spot
In 2022, Lipscomb librarians heard about the Poison Book Project and approached the chemistry department about conducting a similar analytical survey of the 19th century books in the Beaman Library. “These old books with toxic dyes may be in universities, public libraries, and private collections,” said Abigail Hoermann, an undergraduate studying chemistry at Lipscomb University who is among those involved in the effort, led by chemistry professor Joseph Weinstein-Webb. “So, we want to find a way to make it easy for everyone to be able to find what their exposure is to these books, and how to safely store them.”
The team relied upon X-ray fluorescence spectroscopy to conduct a broad survey of the collection to determine the presence of arsenic or other heavy metals in the covers, followed by plasma optical emission spectroscopy to measure the concentrations in snipped samples from book covers where such poisons were found. They also took their analysis one step further by using X-ray diffraction to identify the specific pigment molecules within the detected toxic metals.
The results so far: Lead and chromium were present in several books in the Lipscomb collection, with high levels of lead and chromium in some of those samples. The highest lead level measured was more than twice the CDC limit, while the highest chromium concentration was six times the limit.
The Lipscomb library decided to seal any colored 19th century books not yet tested in plastic for storage pending analysis. Books now known to have covers colored with dangerous dyes have been removed from public circulation and sealed in plastic bags, per Poison Book Project recommendations.
The XRD testing showed that lead(II) chromate—the compound behind the chrome yellow pigment—was present in a few of the books as well. In fact, the researchers were surprised to find that the book covers contained far more lead than chromium, given that lead(II) chromate contains equal molar amounts of both. Further research is needed, but the working hypothesis is that there may be other lead-based pigments—lead(II) oxide, perhaps, or lead(II) sulfide—in the dyes used on those covers.
Rembrandt’s The Night Watch underwent many chemical and mechanical alterations over the last 400 years.
Public domain
Since 2019, researchers have been analyzing the chemical composition of the materials used to create Rembrandt’s masterpiece, The Night Watch, as part of the Rijksmuseum’s ongoing Operation Night Watch, devoted to its long-term preservation. Chemists at the Rijksmuseum and the University of Amsterdam have now detected unusual arsenic-based yellow and orange/red pigments used to paint the buff coat of one of the central figures in the painting, according to a recent paper in the journal Heritage Science. It’s a new addition to Rembrandt’s known pigment palette that further adds to our growing body of knowledge about the materials he used.
As previously reported, past analyses of Rembrandt’s paintings identified many pigments the Dutch master used in his work, including lead white, multiple ochres, bone black, vermilion, madder lake, azurite, ultramarine, yellow lake, and lead-tin yellow, among others. The artist rarely used pure blue or green pigments, with Belshazzar’s Feast being a notable exception. (The Rembrandt Database is the best resource for a comprehensive chronicling of the many different investigative reports.)
Early last year, the researchers at Operation Night Watch found rare traces of a compound called lead formate in the painting—surprising in itself, but the team also identified those formates in areas where there was no lead pigment, white or yellow. It’s possible that lead formates disappear fairly quickly, which could explain why they have not been detected in paintings by the Dutch Masters until now. But if that is the case, why didn’t the lead formate disappear in The Night Watch? And where did it come from in the first place?
Hoping to answer these questions, the team whipped up a model of “cooked oils” from a 17th-century recipe and analyzed those model oils with synchrotron radiation. The results supported their hypothesis that the oil used for light parts of the painting was treated with an alkaline lead drier. The fact that The Night Watch was revarnished with an oil-based varnish in the 18th century complicates matters, as this may have provided a fresh source of formic acid, such that different regions of the painting rich in lead formates may have formed at different times in the painting’s history.
Last December, the team turned its attention to the preparatory layers applied to the canvas. It’s known that Rembrandt used a quartz-clay ground for The Night Watch—the first time he had done so, perhaps because the colossal size of the painting “motivated him to look for a cheaper, less heavy and more flexible alternative for the ground layer” than the red earth, lead white, and cerussite he was known to use on earlier paintings.
(a) Rembrandt’s The Night Watch. (b) Detail of figure’s embroidered gold buff coat. (c) X-ray diffraction image of coat detail showing arsenic. (d) Stereomicroscope image showing arsenic hot spot.
N. De Keyser et al., 2024
They used 3D X-ray methods to capture more detail, revealing the presence of an unknown (and unexpected) lead-containing layer located just underneath the ground layer. This could be due to a lead compound added as a drying agent to the oil used to prepare the canvas—perhaps to protect the painting from the damaging effects of humidity. (Usually a glue sizing was used before applying the ground layer.) The lead layer discovered last year could be the reason for the unusual lead protrusions in areas of The Night Watch, since there are no other lead-containing compounds in the paint. It’s possible that lead migrated into the painting’s ground layer from the lead-oil preparatory layer below.
An intentional combination
The presence of arsenic sulfides in The Night Watch appears to be an intentional pigment combination by Rembrandt, according to the authors of this latest paper. Artists throughout history have used naturally occurring orpiment and realgar, as well as artificial arsenic sulfide pigments, to get yellow, orange, and red hues in their paints. Orpiment was also used for medicinal purposes, in hair removal creams and oils, in wax seals, yellow ink, bookbinder green (mixed with indigo), and for the treatment or coating of metals like silver.
However, the use of artificial arsenic sulfides has rarely been reported in artworks, although they are mentioned in multiple artists’ treatises dating back to the 15th century. Earlier work using advanced analytical techniques such as Raman spectroscopy and X-ray powder diffraction revealed that Rembrandt used arsenic sulfide pigments (artificial orpiment) in two late paintings: The Jewish Bride (c. 1665) and The Man in a Red Cap (c. 1665).
For this latest work, Nouchka De Keyser of the Rijksmuseum and co-authors used macroscopic X-ray fluorescence imaging to map The Night Watch, which revealed the presence of arsenic and sulfur in the doublet sleeves and embroidered buff coat worn by Lt. Willem Van Ruytenburch, the central figure to the right of Captain Frans Banninck Cocq in the painting. The researchers initially assumed that this was due to Rembrandt’s use of orpiment for yellow hues and realgar for red hues.
(a, b) Pages from Johann Kunckel’s Ars Vitraria Experimentalis, 1679. (c) Page from the Weimar taxa of 1674 including prices for white, yellow, and red arsenic.
N. De Keyser et al., 2024
To learn more, they took tiny samples and analyzed them with light microscopy, micro-Raman spectroscopy, electron microscopy, and X-ray powder diffraction. They found the yellow particles were actually pararealgar while the orange to red particles were semi-amorphous pararealgar. These are more unusual arsenic sulfide components, typically associated with degradation products from either the natural minerals or their artificial equivalents as they age.
But De Keyser et al. concluded that the presence of these components was actually an intentional mixture, based on their perusal of multiple historical sources and catalogs of collection cabinets with long lists of various arsenic sulfides. There was clearly contemporary knowledge of manipulating both natural and artificial arsenic sulfides to get different shades of yellow, orange, and red.
They also found vermilion and lead-tin yellow in the paint mixture; Rembrandt was known to use these to add brightness and intensity to his paintings. In the case of The Night Watch, “Rembrandt clearly aimed for a bright orange tone with a high color strength that allowed him to create an illusion of the gold thread embroidery in Van Ruytenburch’s costume,” the authors wrote. “The artificial orange to red arsenic sulfide might have offered different optical and rheological paint properties as compared to the mineral form of orpiment and realgar.”
In addition, the team examined paint samples from different artists known to use arsenic sulfides—whose works are also part of the Rijksmuseum collection—and found a similar mixture of pigments in a painting by Rembrandt’s contemporary, Willem Kalf. “It is evidence that a variety of natural and artificial arsenic sulfides were manufactured and traded during Rembrandt’s time and were available in Amsterdam,” the authors wrote—most likely imported, since the Dutch Republic did not have considerable mining resources.
UNSW Sydney engineers developed a new way to make cold brew coffee in under three minutes without sacrificing taste.
University of New South Wales, Sydney
Diehard fans of cold-brew coffee put in a lot of time and effort for their preferred caffeinated beverage. But engineers at the University of New South Wales, Sydney, figured out a nifty hack. They rejiggered an existing espresso machine to accommodate an ultrasonic transducer that administers ultrasonic pulses, cutting the brewing time from the usual 12 to 24 hours to just under three minutes, according to a new paper published in the journal Ultrasonics Sonochemistry.
As previously reported, rather than pouring boiling or near-boiling water over coffee grounds and steeping for a few minutes, the cold-brew method involves mixing coffee grounds with room-temperature water and letting the mixture steep for anywhere from several hours to two days. The mixture is then strained through a sieve to remove the sludge-like solids and filtered. This can be done at home in a Mason jar, or you can get fancy and use a French press or a more elaborate Toddy system. It’s not necessarily served cold (although it can be)—just brewed cold.
The result is coffee that tastes less bitter than traditionally brewed coffee. “There’s nothing like it,” co-author Francisco Trujillo of UNSW Sydney told New Scientist. “The flavor is nice, the aroma is nice and the mouthfeel is more viscous and there’s less bitterness than a regular espresso shot. And it has a level of acidity that people seem to like. It’s now my favorite way to drink coffee.”
While there have been plenty of scientific studies delving into the chemistry of coffee, only a handful have focused specifically on cold-brew coffee. For instance, a 2018 study by scientists at Thomas Jefferson University in Philadelphia involved measuring levels of acidity and antioxidants in batches of cold- and hot-brew coffee. But those experiments only used lightly roasted coffee beans. The degree of roasting (temperature) makes a significant difference when it comes to hot-brew coffee. Might the same be true for cold-brew coffee?
To find out, the same team decided in 2020 to explore the extraction yields of light-, medium-, and dark-roast coffee beans during the cold-brew process. They used the cold-brew recipe from The New York Times for their experiments, with a water-to-coffee ratio of 10:1 for both cold- and hot-brew batches. (Hot brew normally has a water-to-coffee ratio of 20:1, but the team wanted to control variables as much as possible.) They carefully controlled when water was added to the coffee grounds, how long to shake (or stir) the solution, and how best to press the cold-brew coffee.
The team found that for the lighter roasts, caffeine content and antioxidant levels were roughly the same in both the hot- and cold-brew batches. However, there were significant differences between the two methods when medium- and dark-roast coffee beans were used. Specifically, the hot-brew method extracts more antioxidants from the grind; the darker the bean, the greater the difference. Both hot- and cold-brew batches become less acidic the darker the roast.
The new faster cold brew system subjects coffee grounds in the filter basket to ultrasonic sound waves from a transducer, via a specially adapted horn.
UNSW/Francisco Trujillo
That gives cold brew fans a few handy tips, but the process remains incredibly time-consuming; only true aficionados have the patience required to cold brew their own morning cuppa. Many coffee houses now offer cold brews, but doing so requires expensive, large semi-industrial brewing units and a good deal of refrigeration space. According to Trujillo, the inspiration for using ultrasound to speed up the process arose from earlier research attempts to extract more antioxidants. Those experiments ultimately failed, but the setup produced very good coffee.
Trujillo et al. used a Breville Dual Boiler BES920 espresso machine for their latest experiments, with a few key modifications. They connected a bolt-clamped transducer to the brewing basket with a metal horn. They then used the transducer to inject 38.8 kHz sound waves through the basket walls at several different points, thereby transforming the filter basket into a powerful ultrasonic reactor.
The team used the machine’s original boiler but set it up to be independently controlled with an integrated circuit to better manage the temperature of the water. As for the coffee beans, they picked Campos Coffee’s Caramel & Rich Blend (a medium roast). “This blend combines fresh, high-quality specialty coffee beans from Ethiopia, Kenya, and Colombia, and the roasted beans deliver sweet caramel, butterscotch, and milk chocolate flavors,” the authors wrote.
There were three types of samples for the experiments: cold brew hit with ultrasound at room temperature for one minute or for three minutes, and cold brew prepared with the usual 24-hour process. For the ultrasonic brews, the beans were ground into a fine grind typical for espresso, while a slightly coarser grind was used for the traditional cold-brew coffee.
One reason plastic waste persists in the environment is because there’s not much that can eat it. The chemical structure of most polymers is stable and different enough from existing food sources that bacteria didn’t have enzymes that could digest them. Evolution has started to change that situation, though, and a number of strains have been identified that can digest some common plastics.
An international team of researchers has decided to take advantage of those strains and bundle plastic-eating bacteria into the plastic. To keep them from eating it while it’s in use, the bacteria are mixed in as inactive spores that should (mostly—more on this below) only start digesting the plastic once it’s released into the environment. To get this to work, the researchers had to evolve a bacterial strain that could tolerate the manufacturing process. It turns out that the evolved bacteria made the plastic even stronger.
Bacteria meet plastics
Plastics are formed of polymers, long chains of identical molecules linked together by chemical bonds. While they can be broken down chemically, the process is often energy-intensive and doesn’t leave useful chemicals behind. One alternative is to get bacteria to do it for us. If they’ve got an enzyme that breaks the chemical bonds of a polymer, they can often use the resulting small molecules as an energy source.
The problem has been that the chemical linkages in the polymers are often distinct from the chemicals that living things have come across in the past, so enzymes that break down polymers have been rare. But, with decades of exposure to plastics, that’s starting to change, and a number of plastic-eating bacterial strains have been discovered recently.
This breakdown process still requires that the bacteria and plastics find each other in the environment, though. So a team of researchers decided to put the bacteria in the plastic itself.
The plastic they worked with is called thermoplastic polyurethane (TPU), something you can find everywhere from bicycle inner tubes to the coating on your ethernet cables. Conveniently, there are already bacteria that have been identified that can break down TPU, including a species called Bacillus subtilis, a harmless soil bacterium that has also colonized our digestive tracts. B. subtilis also has a feature that makes it very useful for this work: It forms spores.
This feature handles one of the biggest problems with incorporating bacteria into materials: The materials often don’t provide an environment where living things can thrive. Spores, on the other hand, are used by bacteria to wait out otherwise intolerable conditions, and then return to normal growth when things improve. The idea behind the new work is that B. subtilis spores remain in suspended animation while the TPU is in use and then re-activate and digest it once it’s disposed of.
In practical terms, this works because spores only reactivate once nutritional conditions are sufficiently promising. An Ethernet cable or the inside of a bike tire is unlikely to see conditions that will wake the bacteria. But if that same TPU ends up in a landfill or even the side of the road, nutrients in the soil could trigger the spores to get to work digesting it.
The researchers’ initial problem was that the manufacturing of TPU products usually involves extruding the plastic at high temperatures, which would normally kill bacteria. In this case, they found that a typical manufacturing temperature (130° C) killed over 90 percent of the B. subtilis spores in just one minute.
So, they started out by exposing B. subtilis spores to lower temperatures and short periods of heat that were enough to kill most of the bacteria. The survivors were grown up, made to sporulate, and then exposed to a slightly longer period of heat or even higher temperatures. Over time, B. subtilis evolved the ability to tolerate a half hour of temperatures that would kill most of the original strain. The resulting strain was then incorporated into TPU, which was then formed into plastics through a normal extrusion process.
You might expect that putting a bunch of biological material into a plastic would weaken it. But the opposite turned out to be true, as various measures of its tensile strength showed that the spore-containing plastic was stronger than pure plastic. It turns out that the spores have a water-repelling surface that interacts strongly with the polymer strands in the plastic. The heat-resistant strain of bacteria repelled water even more strongly, and plastics made with these spores were tougher still.
To simulate what happens when the plastic is landfilled or littered, the researchers placed samples in compost. Even without any added bacteria, there were organisms present that could degrade it; by five months in the compost, plain TPU had lost nearly half its mass. But with B. subtilis spores incorporated, the plastic lost 93 percent of its mass over the same period.
This doesn’t mean our plastics problem is solved. Obviously, TPU breaks down relatively easily. There are lots of plastics that don’t break down significantly, and may not be compatible with incorporating bacterial spores. In addition, it’s possible that some TPU uses would expose the plastic to environments that would activate the spores—something like food handling or buried cabling. Still, it’s possible this new breakdown process can provide a solution in some cases, making it worth exploring further.
True wine aficionados might turn up their noses, but canned wines are growing in popularity, particularly among younger crowds during the summer months, when style often takes a back seat to convenience. Yet these same wines can go bad rather quickly, taking on distinctly displeasing notes of rotten eggs or dirty socks. Scientists at Cornell University conducted a study of all the relevant compounds and came up with a few helpful tips for frustrated winemakers to keep canned wines from spoiling. The researchers outlined their findings in a recent paper published in the American Journal of Enology and Viticulture.
“The current generation of wine consumers coming of age now, they want a beverage that’s portable and they can bring with them to drink at a concert or take to the pool,” said Gavin Sacks, a food chemist at Cornell. “That doesn’t really describe a cork-finished, glass-packaged wine. However, it describes a can very nicely.”
According to a 2004 article in Wine & Vines magazine, canned beer first appeared in the US in 1935, and three US wineries tried to follow suit over the next three years. Those efforts failed because it proved unusually challenging to produce a stable canned wine. One batch was tainted by “Fresno mold”; another turned cloudy within just two months; and a third had a disastrous combination of low pH and high oxygen content, causing the wine to eat tiny holes in the cans. Nonetheless, wineries sporadically kept trying to can their product over the ensuing decades, with failed attempts in the 1950s and 1970s. United and Delta Airlines had a short-lived partnership with wineries for canned wine in the early 1980s, but passengers balked at the notion.
The biggest issue was the plastic coating used to line the aluminum cans. You needed the lining because the wine would otherwise chemically react with the aluminum. But the plastic liners degraded quickly, and the wine would soon reek of dirty socks or rotten eggs, thanks to high concentrations of hydrogen sulfide. The canned wines also didn’t have much longevity, with a shelf life of just six months.
Thanks to vastly improved packaging processes in the early 2000s, canned wine seems to finally be finding its niche in the market, initially driven by demand in Japan and other Asian markets and expanding after 2014 to Australia, New Zealand, the US, and the UK. In the US alone, sales of canned wines are projected to grow from $643 million in 2024 to $3.12 billion in 2034—a compound annual growth rate of roughly 17 percent.
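That rate follows directly from the two endpoints: over a ten-year span, the compound annual growth rate is the tenth root of the ratio of the final value to the initial one, minus one.

$$
\left(\frac{3.12}{0.643}\right)^{1/10} - 1 \approx 0.171 \approx 17\ \text{percent per year}
$$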
Granted, we won’t be seeing a fine Bordeaux in a can anytime soon; most canned wine comes in the form of spritzers, wine coolers, and cheaper rosés, whites, or sparkling wines. The largest US producers include E&J Gallo, which sells Barefoot Refresh Spritzers, and Francis Ford Coppola Winery, which markets the Sofia Mini; other popular canned brands include Underwood and Babe.
Locations within the body of a can sampled for liner and surface analysis.
M.J. Sheehan et al., 2024
There are plenty of oft-cited advantages to putting wine in cans. It’s super practical for picnics, camping, summer BBQs, or days at the beach, for example, and for the weight-conscious, it helps with portion control, since you don’t have to open an entire bottle. Canned wines are also touted as having a lower carbon footprint compared to glass—although that is a tricky calculation—and the aluminum is 100 percent recyclable.
This latest study grew out of a conference session Sacks led that was designed to help local winemakers get a better grasp on how best to protect the aromas, flavors, and shelf life of their canned wines since canned wines are still plagued by issues of corrosion, leakage, and off flavors like the dreaded rotten egg smell. “They said, ‘We’re following all the recommendations from the can suppliers and we still have these problems, can you help us out?’” Sacks said. “The initial focus was defining what the problem compounds were, what was causing corrosion and off aromas, and why was this happening in wines, but not in sodas? Why doesn’t Coca-Cola have a problem?”
Active geology could have helped purify key chemicals needed for life.
Christof B. Mast
In some ways, the origin of life is looking much less mystifying than it was a few decades ago. Researchers have figured out how some of the fundamental molecules needed for life can form via reactions that start with extremely simple chemicals that were likely to have been present on the early Earth. (We’ve covered at least one of many examples of this sort of work.)
But that research has led to somewhat subtler but no less challenging questions. While these reactions will form key components of DNA and protein, those are often just one part of a complicated mix of reaction products. And often, to get something truly biologically relevant, they’ll have to react with some other molecules, each of which is part of its own complicated mix of reaction products. By the time these are all brought together, the key molecules may only represent a tiny fraction of the total list of chemicals present.
So, forming a more life-like chemistry still seems like a challenge. But a group of German chemists is now suggesting that the Earth itself provides a solution. Warm fluids moving through tiny fissures in rocks can potentially separate out mixes of chemicals, enriching some individual chemicals by three orders of magnitude.
Feeling the heat (and the solvent)
Even in the lab, it’s relatively rare for chemical reactions to produce just a single product. But there are lots of ways to purify out exactly what you want. Even closely related chemicals will often differ in their solubility in different solvents and in their tendency to stick to various glasses or ceramics, etc. The temperature can also influence all of those. So, chemists can use these properties as tools to fish a specific chemical out of a reaction mixture.
But, as far as the history of life is concerned, chemists are a relatively recent development—they weren’t available to purify important chemicals back before life had gotten started. Which raises the question of how the chemical building blocks of life ever reached the sorts of concentrations needed to do anything interesting.
The key insight behind this new work is that something similar to lab equipment exists naturally on Earth. Many rocks are laced with cracks, channels, and fissures that allow fluid to flow through them. In geologically active areas, that fluid is often warm, creating temperature gradients as it flows away from the heat source. And, as fluid moves through different rock types, the chemical environment changes. The walls of the fissures will have different chemical properties, and different salts may end up dissolved in the fluid.
All of that can provide conditions where some chemicals move more rapidly through the fluid, while others tend to stay where they started. And that has the potential to separate out key chemicals from the reaction mixes that produce the components of life.
But having the potential is very different from clearly working. So, the researchers decided to put the idea to the test.
Explore the chemistry behind making a cocktail with curdled milk, aka milk washing—like Ben Franklin’s fave, milk punch.
It’s well-known that Benjamin Franklin was a Founding Father who enjoyed a nice tipple or two (or three). One of his favorite alcoholic beverages was milk punch, a heady concoction of brandy, lemon juice, nutmeg, sugar, water, and hot whole milk—the latter nicely curdled thanks to the heat, lemon juice, and alcohol. It employs a technique known as “milk washing,” used to round out and remove harsh, bitter flavors from spirits that have been less than perfectly distilled, as well as preventing drinks from spoiling (a considerable benefit in the 1700s).
Some versions of milk punch also incorporate tea, and in the mixed drink taxonomy, it falls somewhere between a posset and syllabub. The American Chemical Society’s George Zaidan decided to delve a bit deeper into the chemistry behind milk washing in a new Reactions video after tasting the difference between a Tea Time cocktail made with the milk washing method and one made without it. The latter was so astringent, it was “like drinking a cup of tea that’s been brewed for 6,000 years,” per Zaidan. In the process, he ended up stumbling onto a flavorful new twist on the classic espresso martini (although martini purists probably wouldn’t consider either to be a true martini).
There isn’t anything in the scientific literature about milk washing as it specifically pertains to cocktails, so Zaidan broke the process down into three simple experiments, armed with all the necessary ingredients and his trusty centrifuge. First, he combined whole milk with Coke, a highly acidic beverage that curdles the milk. Per Zaidan, this happens because of the casein proteins in milk, which typically carry an overall negative charge that keeps them from clumping. Adding the acid (Coke) adds protons to the mix, neutralizing that charge—something that usually happens at a pH of around 4.6.
At that point, the caseins clump together to form solid fatty curds surrounded by a watery liquid. That liquid is significantly lighter than the original Coke because the curds absorbed all the molecules that give the beverage its color. “They’re particularly good at pulling tannins, which are those astringent bitter mouth-puckering molecules, out of stuff,” Zaidan said. The liquid remained sweet, since the curds don’t absorb the sugar, but the taste was now more akin to Sprite. The curds didn’t taste much like Coke either.
Benjamin Franklin’s recipe for milk punch, included in a 1763 letter to James Bowdoin.
Next, Zaidan conducted an experiment to see whether vodka can absorb the rich fatty flavors of butter and ghee (clarified butter), aka “fat washing,” which should be extendable to other fats like bacon and peanut butter. It took 24 hours to accomplish, but both the butter- and ghee-infused vodkas received a thumbs-up during the taste test. According to Zaidan, this demonstrates that milk washing adds buttery flavor and texture to a cocktail in addition to removing flavor (notably bitter compounds) and color.
But what about the whey, the other type of milk protein? Per Zaidan, this makes for a nice secret ingredient to add to a milk washed cocktail, based on his experiment combining whey with vodka. It doesn’t seem to have much impact on the vodka’s flavor but it adds a pleasant texture and smoother mouth feel as it coats the tongue.
Armed with his three deconstructed components of the milk washing process, Zaidan was ready to create his own twist on a classic cocktail. First, he poured vodka over peanut butter to infuse the fatty flavor into the spirits (fat washing). Then he curdled some milk and added it to espresso to temper the latter’s bitter flavors and combined it with the peanut butter-infused vodka. Finally, he added Kahlua, simple syrup, and a bit of whey for extra body and texture.
Voila! You’ve got a tastier, more complex version (per Zaidan) of an espresso martini. The downside: It’s an extremely time-consuming cocktail to make. Perhaps that’s why Franklin’s original recipe for milk punch was clearly meant to be made in bulk. (The Massachusetts Historical Society’s modern interpretation cuts the portions by three-quarters.)
Listing image by YouTube/American Chemical Society