Author name: Mike M.

How the “Nutbush” became Australia’s unofficial national dance

“A church house, gin house” —

Most Australians learned the “daggy” line dance in primary school starting in the mid-1970s

US Embassy Australia employees learning to do the Nutbush to honor the late Tina Turner in 2023.

The whole world mourned the passing of music legend Tina Turner last year, perhaps none more so than Australians, who have always had a special fondness for her. That’s not just because of her star turn as Aunty Entity in 1985’s Mad Max Beyond Thunderdome or her stint as the face of Australia’s rugby league.

Australians of all ages have also been performing a line dance called the “Nutbush” at weddings and social events to Turner’s hit single (with then-husband Ike Turner) “Nutbush City Limits.” Turner herself never performed the dance, but when she died, there was a flood of viral TikTok videos of people performing the Nutbush in her honor—including members of the US Embassy in Canberra, who had clearly just learned it for the occasion. Attendees at the 2023 Mundi Mundi Bash in a remote corner of New South Wales set a world record, with 6,594 people performing the Nutbush at the same time.

The exact origin of the dance remains unknown, but researchers at the University of South Australia think they understand how the Nutbush became so ubiquitous in Australia, according to a paper published in the journal Continuum. “What we seem to know is that there was a committee in the New South Wales education department that devised the idea of the Nutbush,” co-author Jon Stratton told the Guardian. “Whether they devised the dance itself, we don’t really know. But what’s interesting is that nobody has come forward.”

“Nutbush City Limits” was released in 1973. However, the authors note that the song peaked at No. 87 on the Australian charts and didn’t appear at all throughout 1974—only to start charting again from March to May 1975 and again from June to October 1976, peaking at No. 14. (It charted once again last year when Turner died.) They suggest that “Nutbush City Limits” was a great “dance floor filler,” especially in the 1970s disco era, so people kept purchasing the single over a longer period of time.

But another likely explanation was the spread and development of the dance now known as the Nutbush during this same time period, initially as an educational activity in Australian primary schools. An homage of sorts to Turner’s hometown (Nutbush, Tennessee), the song features a hard 4/4 stomp beat laid over a funk rhythm, making it ideal for a line dance. There are anecdotal reports of people doing the Nutbush to different tunes, like Starship’s “We Built This City” (1985), which also features a 4/4 beat.

Knee, knee, kick, kick

Dancing has been incorporated into education since the 1920s. “Line dances work very well in classrooms because the teacher can stand at the front and give instructions to the lines,” said Stratton. “The idea must have been to provide students with an enjoyable way of exercising and learning coordination. Whoever designed the Nutbush succeeded beyond any success they could have hoped for. What makes it special is that it’s moved out of schools to become the dance of choice at many Australian social events.”

The Nutbush is unique to Australia, but it shares some similarities with the Madison—minus the calls—another popular line dance that emerged in the 1950s thanks to teen dance shows like American Bandstand and The Buddy Deane Show. (The latter inspired the 1988 John Waters film Hairspray.) In fact, in a 2016 Reddit thread, P.J. Fletcher suggested that the Nutbush was actually a bastardized version of the Madison with misremembered steps and drew a faulty diagram of those steps. Fletcher traced its origin to 1978, when the New South Wales Department of Education launched primary school teacher retraining initiatives, initially for a Sydney school district and spreading to other regions from there.

Let’s do the Nutbush again.

Stratton and his co-author, Panizza Allmark, dismiss this theory. For one thing, they discovered that the Nutbush was already being taught at a technical school in Victoria in 1978, so the dance was already well-established by then, at least in Victoria. They suggest the dance originated in 1975 in New South Wales and spread from there. Also, the Madison is difficult to perform to 4/4 songs like “Nutbush City Limits” since it is based on a six-beat pattern. Thus, “One possibility is that somebody in the school system in Sydney developed the Nutbush because they found that school students had too much difficulty learning the Madison to make it either enjoyable or worthwhile,” the authors wrote.

“Unlike formal dancing where you needed a partner, the Nutbush didn’t involve holding hands or touching anyone of the opposite sex,” said Allmark, who danced the Nutbush herself as a primary school student in Perth in the early 1980s. “In primary school, when learning folk dancing, there was great awkwardness in having to dance with a partner of the opposite sex but with the Nutbush, you didn’t need ‘to take a partner by the hand.’ You could enjoy the dance moves and be part of a communal experience without all the sweaty handholding.”

Regardless of how the fad took hold, hearing the song’s opening bars and the lines “A church house, gin house” will likely keep bringing Aussies enthusiastically to the dance floor for years to come.

Continuum, 2024. DOI: 10.1080/10304312.2024.2331796

Ike and Tina Turner perform “Nutbush City Limits” in 1973.

Securing Identities: The Foundation of Zero Trust

Welcome back to our zero trust blog series! In our previous post, we took a deep dive into data security, exploring the importance of data classification, encryption, and access controls in a zero trust model. Today, we’re shifting our focus to another critical component of zero trust: identity and access management (IAM).

In a zero trust world, identity is the new perimeter. With the dissolution of traditional network boundaries and the proliferation of cloud services and remote work, securing identities has become more important than ever. In this post, we’ll explore the role of IAM in a zero trust model, discuss common challenges, and share best practices for implementing strong authentication and authorization controls.

The Zero Trust Approach to Identity and Access Management

In a traditional perimeter-based security model, access is often granted based on a user’s location or network affiliation. Once a user is inside the network, they typically have broad access to resources and applications.

Zero trust turns this model on its head. By assuming that no user, device, or network should be inherently trusted, zero trust requires organizations to take a more granular, risk-based approach to IAM. This involves:

  1. Strong authentication: Verifying the identity of users and devices through multiple factors, such as passwords, biometrics, and security tokens.
  2. Least privilege access: Granting users the minimum level of access necessary to perform their job functions and revoking access when it’s no longer needed.
  3. Continuous monitoring: Constantly monitoring user behavior and access patterns to detect and respond to potential threats in real-time.
  4. Adaptive policies: Implementing dynamic access policies that adapt to changing risk factors, such as location, device health, and user behavior.

By applying these principles, organizations can create a more secure, resilient identity and access management posture that minimizes the risk of unauthorized access and data breaches.
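The four principles above can be combined into a single deny-by-default access decision. The following is a minimal sketch only; the role map, signal names, and policy logic are hypothetical illustrations, not the API of any particular IAM product:

```python
from dataclasses import dataclass

# Hypothetical role-to-permission map illustrating least privilege
ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "admin": {"reports:read", "reports:write", "users:manage"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    permission: str
    mfa_verified: bool        # strong-authentication signal
    device_compliant: bool    # adaptive-policy signal
    location_trusted: bool    # adaptive-policy signal

def evaluate_access(req: AccessRequest) -> bool:
    """Deny by default: every signal must check out before access is granted."""
    if not req.mfa_verified:
        return False  # strong authentication failed
    if req.permission not in ROLE_PERMISSIONS.get(req.role, set()):
        return False  # least privilege: this role lacks the permission
    if not (req.device_compliant and req.location_trusted):
        return False  # adaptive policy: elevated risk, deny (or step up)
    return True
```

In practice the risk signals would feed a policy engine that can also require step-up authentication rather than flatly denying, but the deny-by-default shape is the core of the model.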

Common Challenges in Zero Trust Identity and Access Management

Implementing a zero trust approach to IAM is not without its challenges. Some common hurdles organizations face include:

  1. Complexity: Managing identities and access across a diverse range of applications, systems, and devices can be complex and time-consuming, particularly in hybrid and multi-cloud environments.
  2. User experience: Balancing security with usability is a delicate task. Overly restrictive access controls and cumbersome authentication processes can hinder productivity and frustrate users.
  3. Legacy systems: Many organizations have legacy systems and applications that were not designed with zero trust principles in mind, making it difficult to integrate them into a modern IAM framework.
  4. Skill gaps: Implementing and managing a zero trust IAM solution requires specialized skills and knowledge, which can be difficult to find and retain in a competitive job market.

To overcome these challenges, organizations must invest in the right tools, processes, and talent, and take a phased approach to zero trust IAM implementation.

Best Practices for Zero Trust Identity and Access Management

Implementing a zero trust approach to IAM requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Implement strong authentication: Use multi-factor authentication (MFA) wherever possible, combining factors such as passwords, biometrics, and security tokens. Consider using passwordless authentication methods, such as FIDO2, for enhanced security and usability.
  2. Enforce least privilege access: Implement granular, role-based access controls (RBAC) based on the principle of least privilege. Regularly review and update access permissions to ensure users only have access to the resources they need to perform their job functions.
  3. Monitor and log user activity: Implement robust monitoring and logging mechanisms to track user activity and detect potential threats. Use security information and event management (SIEM) tools to correlate and analyze log data for anomalous behavior.
  4. Use adaptive access policies: Implement dynamic access policies that adapt to changing risk factors, such as location, device health, and user behavior. Use tools like Microsoft Conditional Access or Okta Adaptive Multi-Factor Authentication to enforce these policies.
  5. Secure privileged access: Implement strict controls around privileged access, such as admin accounts and service accounts. Use privileged access management (PAM) tools to monitor and control privileged access and implement just-in-time (JIT) access provisioning.
  6. Educate and train users: Provide regular security awareness training to help users understand their role in protecting the organization’s assets and data. Teach best practices for password management, phishing detection, and secure remote work.
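The just-in-time provisioning mentioned in item 5 can be sketched as a time-boxed grant that expires on its own. This is an illustrative sketch with hypothetical names; a real PAM tool would tie approval to a workflow engine and record every elevation in an audit log:

```python
import time

class JITGrant:
    """A time-boxed privilege elevation that expires automatically."""
    def __init__(self, user: str, role: str, ttl_seconds: float):
        self.user = user
        self.role = role
        self.expires_at = time.monotonic() + ttl_seconds

    def is_active(self) -> bool:
        # Standing access disappears once the window closes.
        return time.monotonic() < self.expires_at

def request_elevation(user: str, role: str, approved: bool,
                      ttl_seconds: float = 900) -> "JITGrant | None":
    # In a real PAM deployment, `approved` would come from an approval
    # workflow or policy engine rather than a boolean parameter.
    if not approved:
        return None
    return JITGrant(user, role, ttl_seconds)
```

The design point is that privileged access is a perishable artifact: nothing needs to remember to revoke it, because it was never permanent.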

By implementing these best practices and continuously refining your IAM posture, you can better protect your organization’s identities and data and build a strong foundation for your zero trust architecture.

Conclusion

In a zero trust world, identity is the new perimeter. By treating identities as the primary control point and applying strong authentication, least privilege access, and continuous monitoring, organizations can minimize the risk of unauthorized access and data breaches.

However, achieving effective IAM in a zero trust model requires a commitment to overcoming complexity, balancing security and usability, and investing in the right tools and talent. It also requires a cultural shift, with every user taking responsibility for protecting the organization’s assets and data.

As you continue your zero trust journey, make IAM a top priority. Invest in the tools, processes, and training necessary to secure your identities, and regularly assess and refine your IAM posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of network segmentation in a zero trust model and share best practices for implementing micro-segmentation and software-defined perimeters.

Until then, stay vigilant and keep your identities secure!

Gaming historians preserve what’s likely Nintendo’s first US commercial

A Mega Mego find —

Mego’s “Time Out” spot pitched Nintendo’s Game & Watch handhelds under a different name.

“So slim you can play it anywhere.”

Gamers of a certain age may remember Nintendo’s Game & Watch line, which predated the cartridge-based Game Boy by offering simple, single-serving LCD games that can fetch a pretty penny at auction today. But even the most ancient gamers probably don’t remember Mego’s “Time Out” line, which took the internals of Nintendo’s early Game & Watch titles and rebranded them for an American audience that hadn’t yet heard of the Japanese game maker.

Now, the Video Game History Foundation (VGHF) has helped preserve the original film of an early Mego Time Out commercial, marking the recovered, digitized video as “what we believe is the first commercial for a Nintendo product in the United States.” The 30-second TV spot—which is now available in a high-quality digital transfer for the first time—provides a fascinating glimpse into how marketers positioned some of Nintendo’s earliest games to a public that still needed to be sold on the very idea of portable gaming.

Imagine an “electronic sport”

A 1980 Mego catalog sells Nintendo’s Game & Watch games under the toy company’s “Time Out” branding.

Founded in the 1950s, Mego made a name for itself in the 1970s with licensed movie action figures and early robotic toys like the 2-XL (a childhood favorite of your humble author). In 1980, though, Mego branched out to partner with a brand-new, pre-Donkey Kong Nintendo of America to release rebranded versions of four early Game & Watch titles: Ball (which became Mego’s “Toss-Up”), Vermin (“Exterminator”), Fire (“Fireman Fireman”), and Flagman (“Flag Man”).

While Mego would go out of business by 1983 (long before a 2018 brand revival), in 1980, the company had the pleasure and responsibility of introducing America to Nintendo games for the first time, even if they were being sold under the Mego name. And while home systems like the Atari VCS and Intellivision were already popular with the American public at the time, Mego had to sell the then-new idea of simple black-and-white games you could play away from the living room TV (Milton Bradley Microvision notwithstanding).

The 1980 Mego spot that introduced Nintendo games to the US, now preserved in high-resolution.

That’s where a TV spot from Durona Productions came in. If you were watching TV in the early ’80s, you might have heard an announcer doing a bad Howard Cosell impression selling the Time Out line as “the new electronic sport,” suitable as a pastime for athletes who have been injured jogging or playing tennis or basketball.

The ad also had to introduce even extremely basic gaming functions like “an easy game and a hard game,” high score tracking, and the ability to “tell time” (as Douglas Adams noted, humans were “so amazingly primitive that they still [thought] digital watches [were] a pretty neat idea”). And the ad made a point of highlighting that the game is “so slim you can play it anywhere,” complete with a close-up of the unit fitting in the back pocket of a rollerskater’s tight shorts.

Preserved for all time

This early Nintendo ad wasn’t exactly “lost media” before now; you could find fuzzy, video-taped versions online, including variations that talk up the pocket-sized games as sports “where size and strength won’t help.” But the Video Game History Foundation has now digitized and archived a much higher quality version of the ad, courtesy of an original film reel discovered in an online auction by game collector (and former game journalist) Chris Kohler. Kohler acquired the rare 16 mm film and provided it to VGHF, which in turn reached out to film restoration experts at Movette Film Transfer to help color-correct the faded, 40-plus-year-old print and encode it in full 2K resolution for the first time.

This important historical preservation work is as good an excuse as any to remember a time when toy companies were still figuring out how to convince the public that Nintendo’s newfangled portable games were something that could fit into their everyday life. As VGHF’s Phil Salvador writes, “it feels laser-targeted to the on-the-go yuppie generation of the ’80s with disposable income to spend on electronic toys. There’s shades of how Nintendo would focus on young, trendy, mobile demographics in their more recent marketing campaigns… but we’ve never seen an ad where someone plays Switch in the hospital.”

SCOTUS rejects challenge to abortion pill for lack of standing

“Near miss” —

The anti-abortion defendants are not injured by the FDA’s actions on mifepristone.

Mifepristone (Mifeprex) and misoprostol, the two drugs used in a medication abortion, are seen at the Women’s Reproductive Clinic, which provides legal medication abortion services, in Santa Teresa, New Mexico, on June 17, 2022.

The US Supreme Court on Thursday threw out a case that threatened to remove or at least restrict access to mifepristone, a pill approved by the Food and Drug Administration for medication abortions and used in miscarriage care. The drug has been used for decades, racking up a remarkably good safety record in that time. It is currently used in the majority of abortions in the US.

The high court found that the anti-abortion medical groups that legally challenged the FDA’s decision to approve the drug in 2000 and then ease usage restrictions in 2016 and 2021 simply lacked standing to challenge any of those decisions. That is, the groups failed to demonstrate that they were harmed by the FDA’s decision and therefore had no grounds to legally challenge the government agency’s actions. The ruling tracks closely with comments and questions the justices raised during oral arguments in March.

“Plaintiffs are pro-life, oppose elective abortion, and have sincere legal, moral, ideological, and policy objections to mifepristone being prescribed and used by others,” the Supreme Court noted in its opinion, which included the emphasis on “by others.” The court summarized that the groups offered “complicated causation theories to connect FDA’s actions to the plaintiffs’ alleged injuries in fact,” and the court found that “none of these theories suffices” to prove harm.

Weak arguments

The anti-abortion medical groups, led by the Alliance for Hippocratic Medicine, argued that the FDA’s relaxation of mifepristone regulations could cause “downstream conscience injuries” to doctors who are forced to treat patients who may suffer (rare) complications from the drug. But the court noted that there are already strong federal conscience laws in place that protect doctors who refuse to participate in abortion care. Further, the doctors failed to provide any examples of being forced to provide care against their conscience.

The plaintiffs further claimed “downstream economic injuries” by way of having to divert resources from other patients and services. But the court flatly knocked down this argument, too, noting that the argument is “too speculative, lacks support in the record, and is otherwise too attenuated to establish standing.” Further, the organizations claimed that the FDA’s actions “caused” them to conduct studies and “forced” them to engage in advocacy and outreach efforts. “But an organization that has not suffered a concrete injury caused by a defendant’s action cannot spend its way into standing simply by expending money to gather information and advocate against the defendant’s action,” the Supreme Court ruled.

In a response to the ruling, reproductive health rights group National Institute for Reproductive Health blasted the lower courts’ actions that brought the case to the Supreme Court and described it as a warning. “This case should never have made it to the Supreme Court in the first place,” Haydee Morales, interim president of NIRH, said in a statement. “Anti-abortion operatives brought this case with one goal in mind—to ban medication abortion and they failed. This case was a near miss for the science and medicine community and it won’t be the last attack.”

Roku owners face the grimmest indignity yet: Stuck-on motion smoothing

Buttery and weird —

Software updates strike again, leaving interpolated frames in unwanted places.

Motion smoothing was making images uncanny and weird long before AI got here.

Aurich Lawson | Getty Images | Roku

Roku TV owners have been introduced to a number of annoyances recently through the software update pipeline. There was an arbitration-demanding terms-of-service update that locked your TV until you agreed (or mailed a letter). There is the upcoming introduction of ads to the home screen. But the latest irritation hits some Roku owners right in the eyes.

Reports on Roku’s community forums and on Reddit find owners of TCL HDTVs, on which Roku is a built-in OS, experiencing “motion smoothing” without having turned it on after updating to Roku OS 13. Some people report that their TV never offered “Action Smoothing” before, but it is now displaying the results with no way to turn it off. Neither the TV’s general settings nor the specific settings available while content is playing offer a way to disable it, according to some users.

“Action smoothing” is Roku’s name for video interpolation, or motion smoothing. The heart of motion smoothing is Motion Estimation Motion Compensation (MEMC). Fast-moving video, such as live sports or intense action scenes, can have a “juddery” feeling when shown on TVs at a lower frame rate. Motion smoothing uses MEMC hardware and algorithms to artificially boost the frame rate of a video signal by creating its best guess of what a frame between two existing frames would look like and then inserting it to boost the frame rate.

When it works, a signal looks more fluid and, as the name implies, smooth. When it is left on and a more traditional signal at 24 or 30 frames per second is processed, it works somewhat too well. Shows and films look awkwardly realistic, essentially lacking the motion blur and softer movement to which we’re accustomed. Everything looks like a soap opera or like you’re watching a behind-the-scenes smartphone video of your show. It’s so persistent an issue, and often buried in a TV’s settings, that Tom Cruise did a whole PSA about it back in 2018.
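Actual MEMC pipelines estimate per-block motion vectors in dedicated hardware, but the basic idea of synthesizing an in-between frame can be sketched with a naive pixel blend. This is purely illustrative (no motion estimation, grayscale frames as lists of pixel rows):

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two grayscale frames into a synthetic in-between frame.
    Real MEMC shifts pixels along estimated motion vectors instead of
    averaging them in place, which is why it looks smooth rather than
    merely blurry."""
    return [
        [round((1 - t) * pa + t * pb) for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def double_frame_rate(frames):
    """E.g. 30 fps -> 60 fps: insert one synthetic frame between each pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(interpolate_frame(a, b))
    out.append(frames[-1])
    return out
```

Applied to a 24 fps film source, every other frame the panel displays is an invention of the TV, which is exactly where the “soap opera” look comes from.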

Ars has contacted Roku for comment and will update this post with a response. When affected Roku TVs regain their ability to keep motion smoothing at bay, the setting is typically located in the “Expert Settings” area of the TV or by enabling “Movie” mode from the quick settings.

Starlink user terminal now costs just $300 in 28 states, $500 in rest of US

Starlink price cut —

The $600 standard price was replaced with regional pricing of $500 or $300.

The standard Starlink satellite dish.

Starlink

You can now buy a Starlink satellite dish for $299 (plus shipping and tax) in 28 US states due to a discount for areas where SpaceX’s broadband network has excess capacity.

Starlink had raised its upfront hardware cost from $499 to $599 in March 2022 but cut the standard price back down to $499 this week. In the 28 states where the network has what SpaceX deems excess capacity, a $200 discount is being applied to bring the price down to $299. It’s unclear how long the deal will last, though we can assume the number of states eligible for $299 pricing will fall if a lot of people sign up.

“In the United States, new orders in certain regions are eligible for a one-time savings in areas where Starlink has abundant network availability,” a support page posted yesterday said. “$200 will be removed from your Starlink kit price when ordering on Starlink.com and if activated after purchasing from a retailer, a $200 credit will be applied. The savings are only available for Residential Standard service in these designated regional savings areas.”

The 28 states in the “regional savings areas” are Arizona, California, Colorado, Connecticut, Delaware, Florida, Hawaii, Idaho, Iowa, Kansas, Maine, Maryland, Massachusetts, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Dakota, Utah, Vermont, and Wyoming.

There’s one more significant price difference that applies based on location. Since early 2023, Starlink has charged $120 a month for service in areas with limited capacity and $90 a month in areas with excess capacity. So if you’re in an excess-capacity area, you can buy a $299 dish and get $90 monthly service.

Whether you pay $499 or $299 upfront, you’ll get a Wi-Fi router and the new version of Starlink’s standard residential user terminal. There is a drawback compared to the older version of the Starlink dish, which is now called “Standard Actuated” and doesn’t seem to be available for residential orders on Starlink.com anymore.

The current standard satellite dish doesn’t have the old version’s ability to re-position itself. The new version must be positioned manually, but the Starlink app can help you find the best position.

“The ‘actuated’ part of Standard Actuated refers to the electric motors inside the antenna housing,” says an in-depth comparison of the models written by Starlink user Noah Clarke. “The motors, which are connected to the mast, can rotate and tilt the Standard Actuated dish, enabling it to self-align to the Starlink satellites. In contrast, the Standard dish has done away with the built-in mast and motors. The Standard dish must be manually rotated during the initial installation, with the help of the Starlink app.”

Starlink offers mounting hardware as optional accessories during the checkout process. There’s a pivot mount for $74, a wall mount for $67, a pipe adapter for $38, and a 45-meter cable for $115. The optional cable is three times longer than the one that comes with the standard terminal.

Google’s abuse of Fitbit continues with web app shutdown

Welcome to the Google lifestyle —

Users say the app, which is now the only Fitbit interface, lacks matching features.


Google’s abuse of the Fitbit brand continues with the shutdown of the web dashboard. Fitbit.com used to be both a storefront and a way for users to get a big-screen UI to sift through reams of fitness data. The store closed up shop in April, and now the web dashboard is dying in July.

In a post on the “Fitbit Community” forums, the company said: “Next month, we’re consolidating the Fitbit.com dashboard into the Fitbit app. The web browser will no longer offer access to the Fitbit.com dashboard after July 8, 2024.” That’s it. There’s no replacement or new fitness thing Google is more interested in; web functionality is just being removed. Google, we’ll remind you, used to be a web company. Now it’s a phone app or nothing. Google did the same thing to its Google Fit product in 2019, killing off the more powerful website in favor of an app focus.

Dumping the web app leaves a few holes in Fitbit’s ecosystem. The Fitbit app doesn’t support big screens like tablets, so this removes the only large-format interface for the data. Fitbit’s competitors all have big-screen interfaces: Garmin has a very similar website, and the Apple Watch has an iPad health app. This isn’t an improvement. To make matters worse, the app doesn’t match the features of the web dashboard, with many livid comments on the forums and Reddit calling out the app’s deficiencies in graphing, achievement statistics, calorie counting, and logs.

The web dashboard.

Fitbit

Google bought Fitbit back in 2021 and has spent most of its time shutting down Fitbit features and making the products worse. Migrations to Google Accounts started in 2022. The Google Assistant was removed from Fitbit’s 2022 product line, the Sense 2 and Versa 4, when support existed on the previous models. Social features—a key part of fitness motivation for many—were killed off in 2023. Google has mostly focused on making Fitbit an app for the Pixel Watch.

Ancient Maya DNA shows male kids were sacrificed in pairs at Chichén Itzá

Tossed into the sacred sinkhole —

Twins play an auspicious role in Maya mythology, most notably in the Popol Vuh.

Detail from the reconstructed stone tzompantli, or skull rack, at Chichén Itzá, evidence of ritual human sacrifice.

Christina Warinner

Inhabitants of the ancient Maya city of Chichén Itzá are well-known for their practice of ritual human sacrifice. The most prevalent notion in the popular imagination is that of young Maya women being flung alive into sinkholes as offerings to the gods. Details about the cultural context for these sacrifices remain fuzzy, so scientists conducted genetic analysis on the ancient remains of some of the sacrificial victims to learn more. That analysis confirmed the prevalence of male sacrifices—often of related children (ages 6 to 12) from the same household, including two pairs of identical twins—according to a new paper published in the journal Nature.

Chichén Itzá (“at the mouth of the well of the Itzá”) is located in Mexico’s eastern Yucatán. It was one of the largest of the Maya cities, quite possibly one of the mythical capital cities (Tollans) that are frequently mentioned in Mesoamerican literature. It’s known for its incredible monumental architecture, such as the Temple of Kukulcán (“El Castillo”), a step pyramid honoring a feathered serpent deity. Around the spring and fall equinoxes, there is a distinctive light-and-shadow effect that creates the illusion of a serpent slithering down the staircase. There is also a well-known acoustical effect: clap your hands at the base of the staircases and you’ll get an echo that sounds eerily like a bird’s chirp—perhaps mimicking the quetzal, a brightly colored exotic bird native to the region and prized for its long, resplendent tail feathers.

The Great Ball Court (one of 13 at the site) is essentially a whispering gallery: even though it is 545 feet long and 225 feet wide, a whisper at one end can be heard clearly at the other. The court features slanted benches with sculpted panels depicting aspects of Maya ball games—which were not just athletic events but also religious ones that often involved ritual sacrifices of players by decapitation.

“Evidence of ritual killing is extensive throughout the site of Chichén Itzá and includes both the physical remains of sacrificed individuals as well as representations in monumental art,” the authors of the new Nature paper wrote. Decapitation was just one method of sacrifice favored by the Maya over various historical periods. The Maya were equally fond of cutting out the still-beating hearts of victims, accessing the organ either from below the diaphragm or through the sternum. There were also rituals that involved binding victims to a stake and shooting arrows at a white target painted on the heart.

The site features underground rivers with natural sinkholes, called cenotes, providing water to the local inhabitants. One of those is known as the Cenote Sagrado (“Sacred Cenote”), or the Well of Sacrifice, some 200 feet (60 meters) wide and surrounded by sheer cliffs. As its name implies, the Maya would regularly sacrifice valuable objects and the occasional human by tossing them into the sinkhole to appease the Maya rain god, Chaac. (If the 89-foot (27-meter) fall didn’t kill them, drowning would.)

We know this from the writings of Friar Diego de Landa, among others, who wrote in 1566 of the Maya custom of throwing men alive, along with other prized objects, into the sinkhole during droughts. Dredging the Sacred Cenote with a bucket-and-pulley system in the early 1900s yielded artifacts made of gold and jade, as well as pottery, incense, and human remains. Archaeological excavations in the 1960s yielded even more such objects, including flint, shell, rubber, cloth, and wood preserved in the water.

El Castillo, also known as the Temple of Kukulcan, is among the largest structures at Chichén Itzá, and its architecture reflects its far-flung political connections.

Johannes Krause

Archaeologists also uncovered a full-scale stone representation of a massive tzompantli (skull rack) and a subterranean chamber near the Sacred Cenote, likely a repurposed water cistern (chultún) that had been enlarged to connect to a small cave. The Maya viewed both cenotes and chultúns as connections to the underworld, and this particular chultún housed the remains of over 100 children.

Rodrigo Barquera, an immunogeneticist and postdoc at the Max Planck Institute for Evolutionary Anthropology, and his fellow Nature co-authors conducted their in-depth genetic analysis on the remains of 64 children recovered from the chultún, along with stable carbon and nitrogen isotope analysis of bone collagen and radiocarbon dating. They compared the genetic data to genomes from blood samples taken from 68 present-day Maya residents of a nearby town (Tixcacaltuyub).

Most of the children had been sacrificed between 800 and 1000 CE, per the radiocarbon dating. Barquera et al. were surprised to find that all of the remains sampled were male and came from local Maya populations. Nearly one-quarter of those were closely related to at least one other child interred in the chultún, and the related children had similar diets, so they were likely raised in the same household. The most surprising discovery: two sets of identical male twins. All this suggests that the Maya selected pairs of male children for sacrificial rituals associated with the chultún.
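The twin identification rests on genome-wide relatedness estimates. As a toy illustration (not the paper's actual method, and with entirely made-up genotype data), identical twins show near-total genotype concordance, while other first-degree relatives agree at far fewer variable sites:

```python
# Toy illustration of kinship from genotype concordance. Genotypes are
# coded 0/1/2 = count of the alternate allele at each site. All data
# below is invented; real analyses use thousands of sites and
# likelihood-based relatedness estimators.

def concordance(g1, g2):
    """Fraction of sites where two genotype vectors agree exactly."""
    assert len(g1) == len(g2)
    return sum(a == b for a, b in zip(g1, g2)) / len(g1)

# Hypothetical genotype calls at 10 sites for three individuals.
child_a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
child_b = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]  # identical to A: twin candidate
child_c = [0, 2, 2, 0, 0, 1, 1, 2, 0, 1]  # partial overlap: possible relative

print(concordance(child_a, child_b))  # 1.0
print(concordance(child_a, child_c))  # 0.5
```

In practice, identical twins are flagged when estimated relatedness is indistinguishable from comparing a sample with itself, which is how the two twin pairs in the chultún stood out.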


Google’s Pixel 8 series gets USB-C to DisplayPort; desktop mode rumors heat up

You would think a phone called “Pixel” would be better at this —

Grab a USB-C to DisplayPort cable and newer Pixels can be viewed from your TV or monitor.

The Pixel 8.

Google

Google’s June Android update is out, and it’s bringing a few notable changes for Pixel phones. The most interesting is that the Pixel 8a, Pixel 8, and Pixel 8 Pro are all getting DisplayPort Alt Mode capabilities via their USB-C ports. This means you can go from USB-C to DisplayPort and plug right into a TV or monitor. This has been rumored forever and landed in some of the Android betas earlier, but now it’s finally shipping in production.

The Pixel 8’s initial display support is just a mirrored mode. You can either get an awkward vertical phone image in the middle of your widescreen display or turn the phone sideways for a more reasonable layout. You could see it being useful for videos or presentations, but it would be nice if it could do more.

Alongside this year-plus of DisplayPort rumors has been a steady drumbeat (again) for an Android desktop mode. Google has been playing around with this idea since Android 7.0 in 2016. In 2019, we were told it was just a development testing project, and it never shipped on any real devices. Work on Android’s desktop mode has been heating up, though, so maybe a second swing at this idea will result in an actual product.

Android 15’s in-development desktop mode.

Android Authority’s Mishaal Rahman has been tracking the new desktop mode for a while and now has it up and running. It looks just like a real desktop OS. Every app gets a title bar window decoration with an app icon, a label, and maximize and close buttons. You can drag windows around and resize them; the OS supports automatic window tiling by dragging to the side of the screen; and there’s even a little drop-down menu in the title bar app icon. Combine that with tablet Android’s bottom app bar, and you would have a lot of what you need for a desktop OS.
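Google hasn’t shipped a user-facing switch for any of this, but AOSP has long carried hidden developer flags that tinkerers use to poke at desktop-style windowing. A hedged sketch of those device settings is below; the keys exist in recent AOSP builds, but whether they do anything useful varies widely by device and Android release:

```shell
# Hidden AOSP settings flags associated with desktop mode and freeform
# windows. Behavior varies by Android version and build; a reboot is
# required, and on many devices these flags do nothing visible.
adb shell settings put global force_desktop_mode_on_all_displays 1
adb shell settings put global enable_freeform_support 1
adb reboot
```

These are the same toggles surfaced as “Force desktop mode” and “Enable freeform windows” in Android’s developer options, which is part of why observers can watch this feature mature from build to build.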

Just like last time, we’ve got no clue if this will turn into a real product. The biggest Android partner, Samsung, certainly seems to think the idea is worth doing. Samsung’s “DeX” desktop mode has been a feature for years on its devices.

DisplayPort support is part of the June 2024 update and should roll out to devices soon.


As NASA watches Starship closely, here’s what the agency wants to see next

Target and Chaser —

“What happens if I don’t have a Human Landing System available to execute a mission?”

The rocket for SpaceX’s fourth full-scale Starship test flight awaits liftoff from Starbase, the company’s private launch base in South Texas.

SpaceX

Few people were happier with the successful outcome of last week’s test flight of SpaceX’s Starship launch system than a NASA engineer named Catherine Koerner.

In remarks after the spaceflight, Koerner praised the “incredible” video of the Starship rocket and its Super Heavy booster returning to Earth, with each making a soft landing. “That was very promising, and a very, very successful engineering test,” she added, speaking at a meeting of the Space Studies Board.

A former flight director, Koerner now manages development of the “exploration systems” that will support the Artemis missions for NASA—a hugely influential position within the space agency. This includes the Space Launch System rocket, NASA’s Orion spacecraft, spacesuits, and the Starship vehicle that will land on the Moon.

In recent months, NASA officials like Koerner have been grappling with the reality that not all of this hardware is likely to be ready for the planned September 2026 launch date for the Artemis III mission. In particular, the agency is concerned about Starship’s readiness as a “Human Landing System.” While SpaceX is pressing forward rapidly with a test campaign, there is still a lot of work to be done to get the vehicle down to the lunar surface and safely back into lunar orbit.

A spare tire

For these reasons, as Ars previously reported, NASA and SpaceX are planning for the possibility of modifying the Artemis III mission. Instead of landing on the Moon, a crew would launch in the Orion spacecraft and rendezvous with Starship in low-Earth orbit. This would essentially be a repeat of the Apollo 9 mission, buying down risk and providing a meaningful stepping stone between Artemis missions.

Officially, NASA maintains that the agency will fly a crewed lunar landing, the Artemis III mission, in September 2026. But almost no one in the space community regards that launch date as more than aspirational. Some of my best sources have put the most likely range of dates for such a mission from 2028 to 2032. A modified Artemis III mission, in low-Earth orbit, would therefore bridge a gap between Artemis II and an eventual landing.

Koerner has declined interview requests from Ars to discuss this, but during the Space Studies Board meeting, she acknowledged seeing these reports on modifying Artemis III. She was then asked directly whether there was any validity to them. Here is her response in full:

So here’s what I’ll tell you, if you’ll permit me an analogy. I have in my car a spare tire, right? I don’t have a spare steering wheel. I don’t have spare windshield wipers. I have a spare tire. And why? Why do we carry a spare tire? That someone, at some point, did an assessment and said in order for this vehicle to accomplish its mission, there is a certain likelihood that some things may fail and a certain likelihood that other things may not fail, and it’s probably prudent to have a spare tire. I don’t necessarily need to have a spare steering wheel, right?

We at NASA do a lot of those kinds of assessments. Like, what happens if this isn’t available? What happens if that isn’t available? Do we have backup plans for that? We’re always doing those kinds of backup plans. Do we have backup plans? It’s imperative for me to look at what happens if an Orion spacecraft is not ready to do a mission. What happens if I don’t have an SLS ready to do a mission? What happens if I don’t have a Human Landing System available to execute a mission? What happens if I don’t have Gateway that I was planning on to do a mission?

So we look at backup plans all the time. There are lots of different opportunities for that. We have not made any changes to the current plan as I outlined it here today and talked about that. But we have lots of people who are looking at lots of different backup plans so that we are doing due diligence and making sure that we have the spare tire if we need the spare tire. It’s the reason we have, for example, two systems now that we’re developing for the Human Landing System, the one for SpaceX and the other one from Blue Origin. It’s the reason we have two providers that are building spacesuit hardware. Collins as well as Axiom, right? So we always are doing that kind of thing.

That is a long way of saying that if SpaceX’s Starship is not ready in 2026, NASA is actively considering alternative plans. (The most likely of these would be an Orion-Starship docking in low-Earth orbit.) NASA has not made any final plans and is waiting to see how Artemis II progresses and what happens with Starship and spacesuit development.

What SpaceX needs to demonstrate

During her remarks, Koerner was also asked what SpaceX’s next major milestone is and when it would need to be completed for NASA to remain on track for a lunar landing in 2026. “Their next big milestone test, from a contract perspective, is the cryogenic transfer test,” she said. “That is going to be early next year.”

Some details about the Starship propellant transfer test.

NASA

This timeline is consistent with what NASA’s Human Landing System program manager, Lisa Watson-Morgan, recently told Ars. It provides a useful benchmark for evaluating Starship’s progress in NASA’s eyes. The “prop transfer demo” is a fairly complex mission that involves the launch of a “Starship target” from the Starbase facility in South Texas. A second vehicle, the “Starship chaser,” will then launch, rendezvous with the target in orbit, and transfer a quantity of propellant to the target spaceship.

The test will entail a lot of technology, including docking mechanisms, navigation sensors, quick disconnects, and more. If SpaceX completes this test during the first quarter of 2025, NASA will at least theoretically have a path forward to a crewed lunar landing in 2026.
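Propellant transfer is so central because delta-v scales with the logarithm of a rocket's mass ratio. A back-of-envelope sketch with the Tsiolkovsky rocket equation makes the point; all masses and the specific impulse below are rough, assumed round numbers for illustration, not SpaceX or NASA figures:

```python
import math

# Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf).
# All numbers here are rough, assumed figures for illustration only.

G0 = 9.80665  # standard gravity, m/s^2


def delta_v(isp_s, m0_kg, mf_kg):
    """Ideal delta-v for a vehicle burning from mass m0 down to mf."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)


ISP = 350       # assumed vacuum Isp for a methane engine, seconds
DRY = 120_000   # assumed dry mass plus payload, kg

# Arriving in orbit with, say, 100 t of residual propellant vs. being
# refueled to 1,200 t changes the available delta-v dramatically.
dv_residual = delta_v(ISP, DRY + 100_000, DRY)
dv_refueled = delta_v(ISP, DRY + 1_200_000, DRY)

print(f"residual only:  {dv_residual:.0f} m/s")
print(f"after transfer: {dv_refueled:.0f} m/s")
```

Under these assumptions, refueling roughly quadruples the available delta-v, which is why the prop transfer demo gates the lunar landing architecture.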


Stoke Space ignites its ambitious main engine for the first time

Get stoked! —

“This industry is going toward full reusability. To me, that is the inevitable end state.”

A drone camera captures the hotfire test of Stoke Space’s full-flow staged combustion engine at the company’s testing facility in early June.

Stoke Space

On Tuesday, Stoke Space announced that it had fired its first-stage rocket engine for the first time earlier this month, briefly igniting it for about two seconds. The company declared the June 5 test a success because the engine performed nominally and will be fired up again soon.

“Data point one is that the engine is still there,” said Andy Lapsa, chief executive of the Washington-based launch company, in an interview with Ars.

The test took place at the company’s facilities in Moses Lake, Washington. Seven of these methane-fueled engines, each intended to have a thrust of 100,000 pounds of force, will power the company’s Nova rocket. This launch vehicle will have a lift capacity of about 5 metric tons to orbit. Lapsa declined to declare a target launch date, but based on historical developmental programs, if Stoke continues to move fast, it could fly Nova for the first time in 2026.
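For scale, here is a quick unit-conversion check on the thrust figures above, using only standard conversion factors and the article's own numbers:

```python
# Convert the article's thrust figures to SI units.
LBF_TO_N = 4.4482216152605  # pounds-force to newtons (exact definition)

per_engine_n = 100_000 * LBF_TO_N   # one S1E engine
total_n = 7 * per_engine_n          # all seven Nova first-stage engines

print(f"per engine: {per_engine_n / 1_000:.0f} kN")    # ~445 kN
print(f"all seven:  {total_n / 1_000_000:.2f} MN")     # ~3.11 MN
```

That puts Nova's first stage at roughly 3.1 meganewtons of sea-level thrust, consistent with a vehicle sized to put about 5 metric tons into orbit.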

Big ambitions for a small company

Although it remains relatively new in the field of emerging launch companies, Stoke has gathered a lot of attention because of its bold ambitions. The company intends for the two-stage Nova rocket to be fully reusable, with both stages returning to Earth. To achieve a vertical landing, the second stage has a novel design. This oxygen-hydrogen engine is based on a ring of 30 thrusters and a regeneratively cooled heat shield.

Lapsa and Stoke, which now has 125 employees, have also gone for an ambitious design in the first-stage engine tested earlier this month. The engine, with a placeholder name of S1E, is based on full-flow staged-combustion technology, in which both liquid propellants are partially burned in the engine’s pre-burners. Because of this, they arrive in the engine’s combustion chamber in fully gaseous form, leading to more efficient mixing.

Such an engine—this technology has previously been demonstrated in flight only by SpaceX’s Raptor engine, on the Starship rocket—is more efficient and should theoretically extend turbine life. But it is also technically demanding to develop, and it is among the most complex engine designs a rocket company could begin with. This is not just rocket science. It’s exceptionally hard rocket science.

It may seem like Stoke is biting off a lot more than it can chew with Nova’s design. Getting to space is difficult enough for a launch startup, but this company is seeking to build a fully reusable rocket with a brand new second stage design and a first stage engine based on full-flow, staged combustion. I asked Lapsa if he was nuts for taking all of this on.

Are these guys nuts?

“I’ve been around long enough to know that any rocket development program is hard, even if you make it as simple as possible,” he responded. “But this industry is going toward full reusability. To me, that is the inevitable end state. When you start with that north star, any other direction you take is a diversion. If you start designing anything else, it’s not something where you can back into full reusability at any point. It means you’ll have to stop and start over to climb the mountain.”

This may sound like happy talk, but Stoke appears to be delivering on its ambitions. Last September, the company completed a successful “hop” test of its second stage at Moses Lake. This validated its design, thrust vector control, and avionics.

This engine is designed to power the Nova rocket.

Stoke Space

After this test, the company turned its focus to developing the S1E engine and put it on the test stand for the first time in April before the first test firing in June. Going from zero to 350,000 horsepower in half a second for the first time had a “pretty high pucker factor,” Lapsa said of the first fully integrated engine test.

Now that this initial test is complete, Stoke will spend the rest of the year maturing the design of the engine, conducting longer test firings, and starting to develop flight stages. After that will come stage tests before the complete Nova vehicle is assembled. At the same time, Stoke is also working with the US Space Force on the regulatory process of refurbishing and modernizing Launch Complex 14 at Cape Canaveral Space Force Station in Florida.


Apple’s AI promise: “Your data is never stored or made accessible by Apple”

…and throw away the key —

And publicly reviewable server code means experts can “verify this privacy promise.”

Apple Senior VP of Software Engineering Craig Federighi announces “Private Cloud Compute” at WWDC 2024.

Apple

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

And Apple says that minimized data is not going to be saved for future server access or used to further train Apple’s server-based models, either. “Your data is never stored or made accessible by Apple,” Federighi said. “It’s used exclusively to fill your request.”

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”
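Mechanically, that promise resembles binary transparency: the client only talks to a server whose software measurement appears in a public, append-only log. The sketch below is a minimal illustration of the idea; the log contents, build names, and function names are invented for this example and are not Apple's actual protocol:

```python
import hashlib

# Illustration of "refuse to talk to a server unless its software has
# been publicly logged": the client checks that a cryptographic hash
# (measurement) of the server's software is in a public transparency log.
# All names and log entries below are invented for illustration.

def measurement(software_image: bytes) -> str:
    """Cryptographic measurement of a server software image."""
    return hashlib.sha256(software_image).hexdigest()

# Pretend transparency log: the set of measurements published for audit.
public_log = {
    measurement(b"pcc-server-build-1"),
    measurement(b"pcc-server-build-2"),
}

def device_should_connect(server_image: bytes) -> bool:
    """Refuse any server whose software has not been publicly logged."""
    return measurement(server_image) in public_log

print(device_should_connect(b"pcc-server-build-2"))  # True
print(device_should_connect(b"unlogged-build"))      # False
```

In a real deployment the server would prove its measurement via hardware attestation rather than handing over its image, but the gating logic (no log entry, no connection) is the part Apple is emphasizing.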

While the keynote speech was light on details for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging as it wades into the generative AI space for the first time. We’ll see what security experts have to say when these servers and their code are made publicly available in the near future.
