Space CEO explains why he believes private space stations are a viable business

It’s a critical time for companies competing to develop a commercial successor to the International Space Station. NASA is working with several companies, including Axiom Space, Voyager Technologies, Blue Origin, and Vast, to develop concepts for private stations where it can lease time for its astronauts.

The space agency awarded Phase One contracts several years ago and is now in the final stages of writing requirements for Phase Two after asking for feedback from industry partners in September. This program is known as Commercial LEO Destinations, or CLDs in industry parlance.

Time is running out for NASA to establish continuity from the International Space Station, which will reach its end of life in 2030, to a follow-on station that is ready to go before then.

One of the more intriguing companies in the competition is Voyager Technologies, which recently announced a strategic investment from Janus Henderson, a global investment firm. In another sign that the competition is heating up, Voyager also just hired John Baum away from Vast, where he was the company’s business development leader.

To get a sense of this competition and how Voyager is coming along with its Starlab space station project, Ars spoke with the firm’s chairman, Dylan Taylor. This conversation has been lightly edited for clarity.

Ars: I know a lot of the companies working on CLDs are actively fundraising right now. How is this coming along for Voyager and Starlab?

Dylan Taylor: Fundraising is going quite well. You saw the Janus announcement. That’s significant for a few reasons. One is, it’s a significant investment. Of course, we’re not disclosing exactly how much. (Editor’s note: It likely is on the order of $100 million.) But the more positive development on the Janus investment is that they are such a well-known, well-respected financial investor.

If you look at the kind of bellwether investors, Janus would be up there with a Blackstone or BlackRock or Fidelity. So it’s significant not only in terms of capital contribution, but in… showing that commercial space stations are investable. This isn’t money coming from the Gulf States. It’s not a syndication of a bunch of $1,000 checks from retail investors. This is a very significant institutional investor coming in, and it’s a signal to the market. They did significant diligence on all our competitors, and they went out of their way to say that we’re far and away the best business plan, best design, and everything else, so that’s why it’s so meaningful.

We put the new pocket-size vinyl format to the test—with mixed results


Is that a record in your pocket?

It’s a fun new format, but finding a place in the market may be challenging.

A 4-inch Tiny Vinyl record. Credit: Chris Foresman

We recently looked at Tiny Vinyl, a new miniature vinyl single format developed through a collaboration between a toy industry veteran and the world’s largest vinyl record manufacturer. The 4-inch singles are pressed in a process nearly identical to standard 12-inch LPs or 7-inch singles, except everything is smaller. They have a standard-size spindle hole, play at 33⅓ RPM, and hold up to four minutes of music per side.

Several smaller bands, like The Band Loula and Rainbow Kitten Surprise, and some industry veterans like Blake Shelton and Melissa Etheridge, have already experimented with the format. But Tiny Vinyl partnered with US retail giant Target for its big coming-out party this fall, with 44 exclusive titles launching through the end of this year.

Tiny Vinyl supplied a few promotional copies of releases from former America’s Got Talent finalist Grace VanderWaal, The Band Loula, country pop stars Florida Georgia Line, and jazz legends the Vince Guaraldi Trio so I could get a first-hand look at how the records actually play. I tested these titles as well as several others I picked up at retail, playing them on an Audio Technica LP-120 direct drive manual turntable connected to a Yamaha S-301 integrated amplifier and playing through a pair of vintage Klipsch kg4 speakers.

I also played them on a Crosley portable suitcase-style turntable, and, for fun, I tried the miniature RSD3 turntable made for 3-inch singles to see what’s possible across a variety of hardware.

Tiny Vinyl releases cover several genres, including hip-hop, rock, country, pop, indie, and show tunes. Credit: Chris Foresman

Automatic turntables need not apply

First and foremost, I’ll note that the 4-inch diameter is essentially the same size as the label on a standard 12-inch LP. So automatic turntables won’t really work for 4-inch vinyl; most aren’t equipped to set the stylus down at anything other than 12 inches or 7 inches, and even if they could, the automatic return would kick in before reaching the grooves where the music starts. Some automatic turntables can be switched into a manual mode, but those that can’t simply won’t play Tiny Vinyl records.

But if you have a turntable with a fully manual tonearm—including a wide array of DJ-style direct-drive turntables and audiophile belt-drive models like those from Fluance, U-turn, or Pro-ject—you’re in luck. The tonearm can be placed on these records, and it will track the grooves well.

Lining up the stylus can be a challenge with such small records, but once it’s in place, the stylus on my LP-120—a nude elliptical—tracked well. I also tried a few listens with a standard conical stylus, since that’s what’s most common across low- and mid-range turntables. The elliptical stylus tracked slightly better in my experience; higher-end styli may track the extremely fine grooves better still, but they would probably be overkill given that the physical limitations of the format introduce some distortion, which would likely be more apparent with such gear.

While Tiny Vinyl will probably appeal most to pop music fans, I played a variety of music styles, including rock, country, dance pop, hip-hop, jazz, and even showtunes. The main sonic difference I noted when a direct comparison was available was that the Tiny Vinyl version of a track tended to sound quieter than the same track playing on a 12-inch LP at the same volume setting on the amplifier.

This Kacey Musgraves Tiny Vinyl includes songs from her album Deeper Well. Credit: Chris Foresman

It’s not unusual for different records to be mastered at different volumes; making the overall sound quieter means smaller modulations in the groove, so the grooves can be placed closer together. This is true for any album with a side running longer than about 22 minutes, but it’s especially important for maintaining the four-minute runtime on Tiny Vinyl. (This is also why the last song or two on many LP sides tend to be quieter or slower songs; it’s easier for these songs to sound good at the center of the record, where the linear tracking speed decreases.)
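For the curious, that tradeoff is easy to quantify: the groove’s linear speed under the stylus is v = 2πrf, so it falls steadily as the stylus moves inward, and a 4-inch record lives entirely in the slow zone. A quick back-of-the-envelope sketch in Python (the groove radii are my own rough approximations, not official specs):

```python
# Linear groove speed v = 2 * pi * r * f at 33 1/3 RPM.
# The radii below are rough guesses at where grooves start and end, not specs.
import math

f = (100 / 3) / 60  # 33 1/3 RPM in revolutions per second

def groove_speed_cm_per_s(radius_cm: float) -> float:
    """Speed of the groove passing under the stylus at a given radius."""
    return 2 * math.pi * radius_cm * f

for label, radius in [
    ("12-inch LP, outermost groove (~14.5 cm)", 14.5),
    ("12-inch LP, innermost groove (~6.0 cm)", 6.0),
    ("4-inch single, outermost groove (~4.8 cm)", 4.8),
    ("4-inch single, innermost groove (~2.5 cm)", 2.5),
]:
    print(f"{label}: {groove_speed_cm_per_s(radius):.1f} cm/s")
```

By this rough math, even the outermost groove of a 4-inch single (about 17 cm/s) moves more slowly under the stylus than the innermost groove of an LP (about 21 cm/s), so quieter mastering and some end-of-side distortion come with the territory.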

That said, most of the songs I listened to tended to have a slight but audible increase in distortion as the grooves approached the physical limits of alignment for the stylus. This was usually only perceptible in the last several seconds of a song, which more discerning listeners would likely find objectionable. But sound quality overall is still comparable to typical vinyl records. It won’t compare to the most exacting pressings from the likes of Mobile Fidelity Labs, for instance, but then again, the sort of audiophile who would pay for the equipment to get the most out of such records probably won’t buy Tiny Vinyl in the first place, except perhaps as a conversation piece.

I also tried playing the Tiny Vinyl singles on a Crosley suitcase-style turntable, since it has a manual tonearm. The model I have on hand has an Audio Technica AT3600L cartridge and stereo speakers, so it’s a bit nicer than the entry-level Cruiser models you’ll typically find at malls or department stores. But these are extremely popular first turntables for a lot of young people, so it seemed reasonable to consider how Tiny Vinyl sounds on these devices.

Unfortunately, I couldn’t play Tiny Vinyl on this turntable. Although it has a manual tonearm and an option to turn off the platter’s auto-start and stop, the Crosley’s platter is designed for 7-inch and 12-inch vinyl—the Tiny Vinyl I tried wouldn’t even spin without the addition of a slipmat of some kind.

Once I got it spinning, though, the tonearm simply would not track beyond the first couple of grooves before hitting some physical limitation of its gimbal. Since many suitcase-style turntables share designs and parts, I suspect this would be a problem for most of the Crosley, Victrola, or other brands you might find at a big-box retailer.

Some releases really take advantage of the extra real estate of the gatefold jacket and printed inner sleeve. Credit: Chris Foresman

Additionally, I compared the classic track “Linus and Lucy” from A Charlie Brown Christmas with a 2012 pressing of the full album, as well as the 2019 3-inch version using an adapter, all on the LP-120, to give readers the best comparison across formats.

Again, the LP version of the seminal soundtrack from A Charlie Brown Christmas sounded bright and noticeably louder than its 4-inch counterpart. No major surprises here. And of course, the LP includes the entire soundtrack, so if you’re a big fan of the film or the kind of contemplative, piano-based jazz that Vince Guaraldi is famous for, you’ll probably spring for the full album.

The 3-inch version of “Linus and Lucy” unsurprisingly sounds fairly comparable to the Tiny Vinyl version, with a much quieter playback at the same amplifier settings. But it also sounds a lot noisier, likely due to the differences in materials used in manufacturing.

Though 3-inch records can play on standard turntables, as I did here, they’re designed to go hand-in-hand with one of the many Crosley RSD3 variants released in the last five years, or on the Crosley Mini Cruiser turntable. If you manage to pick up an original 8ban player, you could get the original lo-fi, “noisy analog” sound that Bandai had intended as well. That’s really part of the 3-inch vinyl aesthetic.

Newer 3-inch vinyl singles are coming with a standard spindle hole, which makes them easier to play on standard turntables. It also means there are now adapters for the tiny spindle to fit these holes, so you can technically put a 4-inch single on them. But due to the design of the tonearm and its rest, the stylus won’t swing out to the edge of Tiny Vinyl; instead, you can only play starting at grooves around the 3-inch mark. It’s a little unfortunate because it would otherwise be fun to play these miniature singles on hardware that is a little more right-sized ergonomically.

Big stack of tiny records. Credit: Chris Foresman

Four-inch Tiny Vinyl singles, on the other hand, are intended to be played on standard turntables, and they do that fairly well, as long as you can manually place the tonearm and it isn’t otherwise physically limited from tracking the grooves. I didn’t expect the sound to compare to a quality 12-inch pressing, and it doesn’t. But it still sounds good. And especially if your available space is at a premium, you might consider a Tiny Vinyl with the best-known and most popular tracks from a certain album or artist (like these songs from A Charlie Brown Christmas) over a full album that may cost upward of $35.

Fun for casual listeners, not for audiophiles

Overall, Tiny Vinyl still offers much of the visceral experience of playing standard vinyl records—the cover art, the liner notes, handling the record as you place it on the turntable—just in miniature. The cost is less than a typical LP, and the weight is significantly less, so there are definite benefits for casual listeners. On the other hand, serious collectors will gravitate toward 12-inch albums and—perhaps less so—7-inch singles. Ironically, the casual listeners the format would most likely appeal to are the least likely to have the equipment to play it. That will limit Tiny Vinyl’s mass-market appeal outside of just being a cool thing to put on the shelf that technically could be played on a turntable.

The Good:

  • Small enough to easily fit in a jacket pocket or the like
  • Uses fewer resources to make and ship
  • With the gatefold jacket, printed inner sleeve, and color vinyl options, these look as cool as most full-size albums
  • Plays fine on manual turntables

The Bad:

  • Sound quality is (unsurprisingly) compromised
  • Price isn’t lower than typical 7-inch singles

The Ugly:

  • Won’t work on automatic-only turntables, like the very popular AT-LP60 series, or on the suitcase-style turntables that often serve as an inexpensive “first” turntable for many

HP plans to save millions by laying off thousands, ramping up AI use

HP Inc. said that it will lay off 4,000 to 6,000 employees in favor of AI deployments, claiming the cuts will help it achieve $1 billion in annualized gross run-rate savings by the end of its fiscal 2028.

HP expects to complete the layoffs by the end of that fiscal year. The reductions will largely hit product development, internal operations, and customer support, HP CEO Enrique Lores said during an earnings call on Tuesday.

Using AI, HP will “accelerate product innovation, improve customer satisfaction, and boost productivity,” Lores said.

In its fiscal 2025 earnings report released yesterday, HP said:

Structural cost savings represent gross reductions in costs driven by operational efficiency, digital transformation, and portfolio optimization. These initiatives include but are not limited to workforce reductions, platform simplification, programs consolidation and productivity measures undertaken by HP, which HP expects to be sustainable in the longer-term.

AI blamed for tech layoffs

HP’s announcement comes as workers everywhere try to decipher how AI will affect their jobs and future opportunities. Some industries, such as customer support, are expected to be more disrupted than others. But we’ve already seen many tech layoffs tied to AI.

Salesforce, for example, announced in October that it had let go of 4,000 customer support employees, with CEO Marc Benioff saying that AI meant “I need less heads.” In September, US senators accused Amazon of blaming its dismissal of “tens of thousands” of employees on the “adoption of generative AI tools” and then replacing the workers with over 10,000 foreign H-1B employees. Last month, Amazon announced it would lay off about 14,000 people to focus on its most promising projects, including generative AI. Last year, Intuit said it would lay off 1,800 people and replace them with AI-focused workers. Klarna and Duolingo have also replaced significant numbers of workers with AI. And in January, Meta announced plans to lay off 5 percent of its workforce as it looks to streamline operations and build its AI business.

AI trained on bacterial genomes produces never-before-seen proteins

The researchers argue that this setup lets Evo “link nucleotide-level patterns to kilobase-scale genomic context.” In other words, if you prompt it with a large chunk of genomic DNA, Evo can interpret that as an LLM would interpret a query and produce an output that, in a genomic sense, is appropriate for that interpretation.

The researchers reasoned that, given the training on bacterial genomes, they could use a known gene as a prompt, and Evo should produce an output that includes regions that encode proteins with related functions. The key question is whether it would simply output the sequences for proteins we know about already, or whether it would come up with output that’s less predictable.

Novel proteins

To start testing the system, the researchers prompted it with fragments of the genes for known proteins and determined whether Evo could complete them. In one example, if given 30 percent of the sequence of a gene for a known protein, Evo was able to output 85 percent of the rest. When prompted with 80 percent of the sequence, it could return all of the missing sequence. When a single gene was deleted from a functional cluster, Evo could also correctly identify and restore the missing gene.
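Conceptually, this test is just autoregressive completion over nucleotides rather than words. Here’s a minimal sketch of its shape; `model` and its `generate` method are hypothetical stand-ins, not Evo’s actual interface:

```python
# Conceptual sketch of the gene-completion test described above.
# `model` and `model.generate` are hypothetical stand-ins, not Evo's real API.

def complete_gene(model, gene_seq: str, prompt_fraction: float) -> str:
    """Prompt the model with a prefix of a known gene and let it
    autoregressively generate the rest of the sequence."""
    cut = int(len(gene_seq) * prompt_fraction)
    completion = model.generate(
        prompt=gene_seq[:cut],
        max_new_tokens=len(gene_seq) - cut,  # ask only for the missing portion
    )
    return gene_seq[:cut] + completion

def recovery(generated: str, reference: str) -> float:
    """Fraction of positions where the generated tail matches the known gene."""
    matches = sum(g == r for g, r in zip(generated, reference))
    return matches / len(reference)

# The experiment's shape: prompt with 30% of a gene, score the generated 70%.
# full = complete_gene(model, known_gene, prompt_fraction=0.30)
# cut = int(len(known_gene) * 0.30)
# print(recovery(full[cut:], known_gene[cut:]))  # the researchers report ~85% here
```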

The large amount of training data also ensured that Evo correctly identified the most important regions of the protein. If it made changes to the sequence, they typically resided in the areas of the protein where variability is tolerated. In other words, its training had enabled the system to incorporate the rules of evolutionary limits on changes in known genes.

So, the researchers decided to test what happened when Evo was asked to output something new. To do so, they used bacterial toxins, which are typically encoded along with an anti-toxin that keeps the cell from killing itself whenever it activates the genes. There are a lot of examples of these out there, and they tend to evolve rapidly as part of an arms race between bacteria and their competitors. So the team developed a toxin that was only distantly related to known ones and had no known antitoxin, and fed its sequence to Evo as a prompt. This time, they filtered out any responses that looked similar to known antitoxin genes.

Data-driven sport: How Oracle Red Bull Racing and AT&T move terabytes of F1 info

“We learned how to be more efficient because before… we were so focused on performance that we almost forgot about efficiency, about it was full performance, and we have more people now than we had in 2017, for example, in the team, but we are spending less money,” Maia told me.

Bigger data

The number of sensors on each race car has tripled to around 750, each sending back its own data stream, amounting to around 1.5 terabytes per car per race. Telemetry used to be pretty basic—a TV feed, throttle, brake, and steering applications, and so on. Now a small squad of engineers sits at banks of screens in the back of the garage, hidden away from the cameras, in constant contact with their colleagues at the Milton Keynes factory.
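Taken at face value, those numbers imply a modest average rate per sensor but a hefty aggregate stream per car. A quick sanity check (the roughly two-hour race length is my assumption, not the team’s figure):

```python
# Rough arithmetic on the quoted figures (750 sensors, 1.5 TB per car per race).
# The ~2-hour race duration is an assumption for the sake of the estimate.
TOTAL_BYTES = 1.5e12
SENSORS = 750
RACE_SECONDS = 2 * 3600

per_sensor_bytes = TOTAL_BYTES / SENSORS
print(f"{per_sensor_bytes / 1e9:.1f} GB per sensor per race")          # 2.0 GB
print(f"{per_sensor_bytes / RACE_SECONDS / 1e3:.0f} kB/s per sensor")  # ~278 kB/s
print(f"{TOTAL_BYTES / RACE_SECONDS / 1e6:.0f} MB/s per car, overall") # ~208 MB/s
```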

“We need as well to bring it straight away to Milton Keynes because it’s helping us to fine-tune the setup—so when you are here on Friday—and it’s helping us as well on Sunday to make the best decision for the race strategy. So that’s why it’s very good to have a lot of data, but you need as well to transfer it back and forth,” Maia said.

“It is a sport of milliseconds, as you know,” said Zee Hussain, head of global enterprise solutions at AT&T. “So the speed of data, the reliability of data, the latency, the security is just absolutely critical. If the data is not going, traversing, at the highest possible speed, and it’s not on a secure and reliable path, that is absolutely without question the difference between winning and losing,” Hussain said.

“I think the biggest latency we have is between Australia and the UK, and it’s around 0.3 seconds. It’s nothing. I think if you are on WhatsApp, calling someone is maybe more latency… So it’s impressive,” Maia said.
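That 0.3-second figure is consistent with simple physics: light in optical fiber covers roughly 200,000 km per second, and the cable path between the UK and Australia is well over the ~17,000 km great-circle distance. A rough sanity check, with the route length as my own assumption:

```python
# Sanity check on ~0.3 s UK-Australia latency. The route length is an
# assumption; undersea cable paths are longer than the great-circle distance.
FIBER_KM_PER_S = 200_000  # light in fiber travels at ~2/3 the speed of light
route_km = 20_000

one_way_ms = route_km / FIBER_KM_PER_S * 1000
print(f"one-way propagation: {one_way_ms:.0f} ms")  # ~100 ms
print(f"round trip: {2 * one_way_ms:.0f} ms")       # ~200 ms
# Routing, switching, and processing overhead plausibly fill the gap to ~300 ms.
```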

Newest Starship booster is significantly damaged during testing early Friday

Friday morning’s failure was less energetic than an explosion of a Starship upper stage during testing at Massey’s in June. That incident caused widespread damage at the test site and a complete loss of the vehicle. The Booster 18 problem on Friday appeared to cause less damage to test infrastructure, and no Raptor engines had yet been installed on the vehicle.

Nevertheless, this is the point in the rocket development program at which SpaceX had hoped to be accelerating Starship development and reaching a healthy flight cadence in 2026. Many of the company’s near-term goals rely on getting Starship flying regularly and reliably.

A full view of super heavy booster 18’s catastrophic damage during testing tonight. Very significant damage to the entire LOX tank section.

11/21/25 pic.twitter.com/Kw8XeZ2qXW

— Starship Gazer (@StarshipGazer) November 21, 2025

With this upgraded vehicle, SpaceX wants to demonstrate booster landing and reuse, an upper stage tower catch next year, the beginning of operational Starlink deployment missions, and a test campaign for NASA’s Artemis Program. To keep this Moon landing program on track, it is critical that SpaceX and NASA conduct an on-orbit refueling test of Starship, which was nominally slated for the second half of 2026.

On this timeline, the company was aiming to conduct a crewed lunar landing for NASA during the second half of 2028. From an outside perspective, before this most recent failure, that timeline already seemed to be fairly optimistic.

One of the core attributes of SpaceX is that it diagnoses failure quickly, addresses problems, and gets back to flying as rapidly as possible. No doubt its engineers are already poring over the data captured Friday morning and quite possibly have already diagnosed the problem. The company is resilient, and it has ample resources.

Nevertheless, this is also a maturing program. The Starship vehicle launched for the first time in 2023, and its first stage made a successful flight two years ago. Losing the first stage of the newest generation of the vehicle, during the initial phases of testing, can only be viewed as a significant setback for a program with so much promise and so much to accomplish so soon.

RFK Jr.’s loathsome edits: CDC website now falsely links vaccines and autism

With ardent anti-vaccine activist Robert F. Kennedy Jr. as the country’s top health official, a federal webpage that previously laid out the ample evidence refuting the misinformation that vaccines cause autism was abruptly replaced Wednesday with an anti-vaccine screed that promotes the false link.

It’s a move that is sure to be celebrated by Kennedy’s fringe anti-vaccine followers, but will only sow more distrust, fear, and confusion among the public, further erode the country’s crumbling vaccination rates, and ultimately lead to more disease, suffering, and deaths from vaccine-preventable infections, particularly among children and the most vulnerable.

On the Centers for Disease Control and Prevention’s webpage titled “Autism and Vaccines,” the previous top “key point” accurately reported: “Studies have shown that there is no link between receiving vaccines and developing autism spectrum disorder (ASD).”

But under Kennedy, the top “key point” is now the erroneous statement: “The claim ‘vaccines do not cause autism’ is not an evidence-based claim because studies have not ruled out the possibility that infant vaccines cause autism.”

The Department of Health and Human Services, which oversees the CDC, did not respond to questions from Ars Technica about the change, including why it appears to be dismissing the substantial number of high-quality studies providing evidence that there is no association between lifesaving immunizations and the neurodevelopmental disorder. It also did not address questions of whether CDC scientists were included in the rewrite.

An emailed response attributed to HHS spokesperson Andrew Nixon said, “We are updating the CDC’s website to reflect gold standard, evidence-based science.”

In 1982, a physics joke gone wrong sparked the invention of the emoticon


A simple proposal on a 1982 electronic bulletin board helped sarcasm flourish online.

Credit: Benj Edwards / DEC

On September 19, 1982, Carnegie Mellon University computer science research assistant professor Scott Fahlman posted a message to the university’s bulletin board software that would later come to shape how people communicate online. His proposal: use :-) and :-( as markers to distinguish jokes from serious comments. While Fahlman describes himself as “the inventor… or at least one of the inventors” of what would later be called the smiley face emoticon, the full story reveals something more interesting than a lone genius moment.

The whole episode started three days earlier when computer scientist Neil Swartz posed a physics problem to colleagues on Carnegie Mellon’s “bboard,” which was an early online message board. The discussion thread had been exploring what happens to objects in a free-falling elevator, and Swartz presented a specific scenario involving a lit candle and a drop of mercury.

That evening, computer scientist Howard Gayle responded with a facetious message titled “WARNING!” He claimed that an elevator had been “contaminated with mercury” and suffered “some slight fire damage” due to a physics experiment. Despite clarifying posts noting the warning was a joke, some people took it seriously.

A DECSYSTEM-20 KL-10 (1974) seen at the Living Computer Museum in Seattle. Scott Fahlman used a similar system with a terminal to propose his smiley concept. Credit: Jason Scott

The incident sparked immediate discussion about how to prevent such misunderstandings and the “flame wars” (heated arguments) that could result from misread intent.

“This problem caused some of us to suggest (only half seriously) that maybe it would be a good idea to explicitly mark posts that were not to be taken seriously,” Fahlman later wrote in a retrospective post published on his CMU website. “After all, when using text-based online communication, we lack the body language or tone-of-voice cues that convey this information when we talk in person or on the phone.”

On September 17, 1982, the day after the misunderstanding on the CMU bboard, Swartz made the first concrete proposal: “Maybe we should adopt a convention of putting a star (*) in the subject field of any notice which is to be taken as a joke.”

Within hours, multiple Carnegie Mellon computer scientists weighed in with alternative proposals. Joseph Ginder suggested using % instead of *. Anthony Stentz proposed a nuanced system: “How about using * for good jokes and % for bad jokes?” Keith Wright championed the ampersand (&), arguing it “looks funny” and “sounds funny.” Leonard Hamey suggested # because “it looks like two lips with teeth showing between them.”

Meanwhile, some Carnegie Mellon users were already using their own solution. A group on the Gandalf VAX system later revealed they had been using __/ as “universally known as a smile” to mark jokes. But it apparently didn’t catch on beyond that local system.

The winning formula

Two days after Swartz’s initial proposal, Fahlman entered the discussion with his now-famous post: “I propose that the following character sequence for joke markers: :-) Read it sideways.” He added that serious messages could use :-(, noting, “Maybe we should mark things that are NOT jokes, given current trends.”

What made Fahlman’s proposal work wasn’t that he invented the concept of joke markers—Swartz had done that. It wasn’t that he invented smile symbols at Carnegie Mellon, since the __/ already existed. Rather, Fahlman synthesized the best elements from the ongoing discussion: the simplicity of single-character proposals, the visual clarity of face-like symbols, the sideways-reading principle hinted at by Hamey’s #, and a complete binary system that covered both humor :-) and seriousness :-(.

Early computer terminals like the DEC VT-100 did not support graphics, requiring typographic solutions for displaying “images.” Credit: Digital Equipment Corporation

The simplicity of Fahlman’s emoticons was key to their adoption. The university’s network ran on large DEC mainframes accessed via video terminals (Fahlman himself made his posts from a terminal attached to a DECSYSTEM-20) that were strictly limited to the 95 printable characters of the US-ASCII set. With no ability to display graphics or draw pixels, Fahlman’s solution used the only tools available: standard punctuation marks, arranged on the terminal screen’s strict character grid to suggest a “picture.”
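That constraint is easy to state precisely: US-ASCII’s printable range runs from code 32 (the space) through code 126 (the tilde), 95 characters in all. A trivial check that every marker proposed in the thread fits that palette:

```python
# US-ASCII's printable range: codes 32 (space) through 126 ("~"), 95 characters.
def printable_ascii(s: str) -> bool:
    return all(32 <= ord(ch) <= 126 for ch in s)

markers = [":-)", ":-(", "*", "%", "&", "#", "__/"]
assert all(printable_ascii(m) for m in markers)
print("every proposed joke marker fits the 95-character terminal palette")
```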

The emoticons spread quickly across ARPAnet, the precursor to the modern Internet, reaching other universities and research labs. By November 10, 1982—less than two months later—Carnegie Mellon researcher James Morris began introducing the smiley emoticon concept to colleagues at Xerox PARC, complete with a growing list of variations. What started as an internal Carnegie Mellon convention became, over time, a standard feature of online communication, often simplified without the hyphen nose to :) or :(, among many other variations.

Lost backup tapes

There’s an interesting coda to this story: For years, the original bboard thread existed only in fading memory. The bulletin board posts had been deleted, and Carnegie Mellon’s computer science department had moved to new systems. The old messages seemed lost forever.

Between 2001 and 2002, Mike Jones, a former Carnegie Mellon researcher then working at Microsoft, sponsored what Fahlman calls a “digital archaeology” project. Jeff Baird and the Carnegie Mellon facilities staff undertook a painstaking effort: locating backup tapes from 1982, finding working tape drives that could read the obsolete media, decoding old file formats, and searching for the actual posts. The team recovered the thread, revealing not just Fahlman’s famous post but the entire three-day community discussion that led to it.

The recovered messages, which you can read here, show how collaboratively the emoticon was developed—not a lone genius moment but an ongoing conversation in which the group proposed, refined, and built on one another’s ideas. Fahlman had no idea his synthesis would become a fundamental part of how humans express themselves in digital text, but neither did Swartz, who first suggested marking jokes, or the Gandalf VAX users who were already using their own smile symbols.

From emoticon to emoji

While Fahlman’s emoticons spread across Western online culture and remained text-based for a long time, Japanese mobile phone users in the late 1990s developed a parallel system: emoji. For years, Shigetaka Kurita’s 1999 set for NTT DoCoMo was widely cited as the original. However, recent discoveries have revealed earlier origins. SoftBank released a picture-based character set on mobile phones in 1997, and the Sharp PA-8500 personal organizer featured selectable icon characters as early as 1988.

Unlike emoticons that required reading sideways, emoji were small pictographic images that could convey emotion, objects, and ideas with more detail. When Unicode standardized emoji in 2010 and Apple added an emoji keyboard to iOS in 2011, the format exploded globally. Today, emoji have largely replaced emoticons in casual communication, though Fahlman’s sideways faces still appear regularly in text messages and social media posts.

IBM’s Code Page 437 character set included a smiley face as early as 1981. Credit: Matt Giuca

As Fahlman himself notes on his website, he may not have been “the first person ever to type these three letters in sequence.” Others, including teletype operators and private correspondents, may have used similar symbols before 1982, perhaps even as far back as 1648. Author Vladimir Nabokov suggested before 1982 that “there should exist a special typographical sign for a smile.” And the original IBM PC included a dedicated smiley character as early as 1981 (perhaps that should be considered the first emoji).

What made Fahlman’s contribution significant wasn’t absolute originality but rather proposing the right solution at the right time in the right context. From there, the smiley could spread across the emerging global computer network, and no one would ever misunderstand a joke online again. :-)

Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Study: Kids’ drip paintings more like Pollock’s than those of adults

Taylor thought there might be a way to put this new hypothesis to the test, particularly in light of numerous experimental studies showing the prevalence of fractals in human physiology: walking, dancing, martial arts, and balancing motion, such as postural sway while standing. “Let’s think about that balance mechanism,” he said. “You go off-balance, you’re swaying around, so you’ve got big sways mixed in with smaller and smaller and smaller sways. It’s a multi-scale thing.”

Drip, drip, drip

Serendipitously, Taylor even had a built-in laboratory environment in which to conduct such experiments: the public “Dripfests” he regularly organized, in which both adults and children had the opportunity to create their own Pollock-like artworks by splattering diluted paint on sheets of paper on the floor. Life changes intervened before Taylor could implement the experiment, and the concept got pushed to the back burner. But he revived it a few years ago.

The study subjects were 18 children between the ages of four and six, and 34 adults ages 18 to 25. The age discrepancy was crucial, since those two groups are at markedly different stages of biomechanical balance development. And this time around, Taylor and his co-authors didn’t just look at the fractal dimensions of the resulting paintings, i.e., measuring the self-similar scaling behavior of the splatter patterns. They also looked at something called “lacunarity,” examining the variations in the gaps between paint clusters.
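The standard way to estimate a fractal dimension like this is box counting: overlay grids of shrinking cell size s, count the cells N(s) that contain paint, and take the slope of log N against log(1/s). Here’s a minimal sketch of that technique, assuming the painting has been thresholded to a binary image; it is not the authors’ exact pipeline:

```python
# Minimal box-counting estimate of fractal dimension for a binary image:
# count occupied cells N(s) at several box sizes s; the dimension D is the
# slope of log N(s) versus log(1/s).
import numpy as np

def box_count(img: np.ndarray, box: int) -> int:
    """Number of box x box cells that contain at least one paint pixel."""
    h, w = img.shape
    h, w = h - h % box, w - w % box  # trim so the grid divides evenly
    cells = img[:h, :w].reshape(h // box, box, w // box, box)
    return int(np.any(cells, axis=(1, 3)).sum())

def fractal_dimension(img: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# rng = np.random.default_rng(0)
# demo = rng.random((512, 512)) > 0.5  # stand-in for a thresholded splatter photo
# print(fractal_dimension(demo))       # ~2.0 for space-filling noise
```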

The results: Splatter paintings by adults had higher paint densities and wider, more varied paint trajectories. The children’s paintings had smaller fine-scale patterns, more gaps between paint clusters, and simpler one-dimensional trajectories that didn’t change direction nearly as often. “They both have coarse-scale motions, but the adults have lots of fine-scale structure,” said Taylor. “Not only did the kids have less fine structure, the fine structure they did have was very clumpy, while the adults’ fine structure was very uniform. So when the person is moving and how they regain their balance, we think it’s to do with how much structure there is at these different scales and how uniform it is.”

Celebrated game developer Rebecca Heineman dies at age 62

From champion to advocate

During her later career, Heineman served as a mentor and advisor to many, never shy about celebrating her past as a game developer during the golden age of the home computer.

Her mentoring skills became doubly important when she publicly came out as transgender in 2003. She became a vocal advocate for LGBTQ+ representation in gaming and served on the board of directors for GLAAD. Earlier this year, she received the Gayming Icon Award from Gayming Magazine.

Andrew Borman, who serves as director of digital preservation at The Strong National Museum of Play in Rochester, New York, told Ars Technica that her influence extended well beyond electronic entertainment. “Her legacy goes beyond her groundbreaking work in video games,” he told Ars. “She was a fierce advocate for LGBTQ rights and an inspiration to people around the world, including myself.”

The front cover of Dragon Wars on the Commodore 64, released in 1989. Credit: MobyGames

In the Netflix documentary series High Score, Heineman explained her early connection to video games. “It allowed me to be myself,” she said. “It allowed me to play as female.”

“I think her legend grew as she got older, in part because of her openness and approachability,” journalist Ernie Smith told Ars. “As the culture of gaming grew into an online culture of people ready to dig into the past, she remained a part of it in a big way, where her war stories helped fill in the lore about gaming’s formative eras.”

Celebrated to the end

Heineman was diagnosed with adenocarcinoma in October 2025 after experiencing shortness of breath at the PAX game convention. After diagnostic testing, doctors found cancer in her lungs and liver. That same month, she launched a GoFundMe campaign to help with medical costs. The campaign quickly surpassed its $75,000 goal, raising more than $157,000 from fans, friends, and industry colleagues.

How Louvre thieves exploited human psychology to avoid suspicion—and what it reveals about AI

On the sunny morning of October 19, 2025, four men allegedly walked into the world’s most-visited museum and left, minutes later, with crown jewels worth 88 million euros ($101 million). The theft from Paris’ Louvre Museum—one of the world’s most surveilled cultural institutions—took just under eight minutes.

Visitors kept browsing. Security didn’t react (until alarms were triggered). The men disappeared into the city’s traffic before anyone realized what had happened.

Investigators later revealed that the thieves wore hi-vis vests, disguising themselves as construction workers. They arrived with a furniture lift, a common sight in Paris’s narrow streets, and used it to reach a balcony overlooking the Seine. Dressed as workers, they looked as if they belonged.

This strategy worked because we don’t see the world objectively. We see it through categories—through what we expect to see. The thieves understood the social categories that we perceive as “normal” and exploited them to avoid suspicion. Many artificial intelligence (AI) systems work in the same way and are vulnerable to the same kinds of mistakes as a result.

The sociologist Erving Goffman would describe what happened at the Louvre using his concept of the presentation of self: people “perform” social roles by adopting the cues others expect. Here, the performance of normality became the perfect camouflage.

The sociology of sight

Humans carry out mental categorization all the time to make sense of people and places. When something fits the category of “ordinary,” it slips from notice.

AI systems used for tasks such as facial recognition and detecting suspicious activity in a public area operate in a similar way. For humans, categorization is cultural. For AI, it is mathematical.

But both systems rely on learned patterns rather than objective reality. Because AI learns from data about who looks “normal” and who looks “suspicious,” it absorbs the categories embedded in its training data. And this makes it susceptible to bias.

The Louvre robbers weren’t seen as dangerous because they fit a trusted category. In AI, the same process can have the opposite effect: people who don’t fit the statistical norm become more visible and over-scrutinized.

It can mean a facial recognition system disproportionately flags certain racial or gendered groups as potential threats while letting others pass unnoticed.
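A toy example makes the mechanism concrete: an anomaly detector trained only on scenes labeled “normal” will wave through anything that matches that training distribution and flag anything that doesn’t, regardless of actual danger. The features and numbers below are invented purely for illustration:

```python
# Toy illustration: a detector's idea of "suspicious" is whatever its training
# data lacked. Features and values here are invented; real systems are far
# more complex, but the categorical logic is the same.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend each row describes a person in historical "normal" footage:
# (hi-vis clothing, carrying equipment, dwell time near an entrance).
normal_scenes = np.column_stack([
    rng.normal(1.0, 0.1, 500),   # hi-vis present
    rng.normal(1.0, 0.1, 500),   # equipment present
    rng.normal(0.2, 0.05, 500),  # short dwell time
])

detector = IsolationForest(random_state=0).fit(normal_scenes)

thieves = [[1.0, 1.0, 0.2]]   # match the trusted category exactly
loiterer = [[0.0, 0.0, 0.9]]  # harmless, but unlike anything in training

print(detector.predict(thieves))   # [1]  -> passes as "normal"
print(detector.predict(loiterer))  # [-1] -> flagged as anomalous
```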

A sociological lens helps us see that these aren’t separate issues. AI doesn’t invent its categories; it learns ours. When a computer vision system is trained on security footage where “normal” is defined by particular bodies, clothing, or behavior, it reproduces those assumptions.

Just as the museum’s guards looked past the thieves because they appeared to belong, AI can look past certain patterns while overreacting to others.

Categorization, whether human or algorithmic, is a double-edged sword. It helps us process information quickly, but it also encodes our cultural assumptions. Both people and machines rely on pattern recognition, which is an efficient but imperfect strategy.

A sociological view of AI treats algorithms as mirrors: They reflect back our social categories and hierarchies. In the Louvre case, the mirror is turned toward us. The robbers succeeded not because they were invisible, but because they were seen through the lens of normality. In AI terms, they passed the classification test.

From museum halls to machine learning

This link between perception and categorization reveals something important about our increasingly algorithmic world. Whether it’s a guard deciding who looks suspicious or an AI deciding who looks like a “shoplifter,” the underlying process is the same: assigning people to categories based on cues that feel objective but are culturally learned.

When an AI system is described as “biased,” this often means that it reflects those social categories too faithfully. The Louvre heist reminds us that these categories don’t just shape our attitudes; they shape what gets noticed at all.

After the theft, France’s culture minister promised new cameras and tighter security. But no matter how advanced those systems become, they will still rely on categorization. Someone, or something, must decide what counts as “suspicious behavior.” If that decision rests on assumptions, the same blind spots will persist.

The Louvre robbery will be remembered as one of Europe’s most spectacular museum thefts. The thieves succeeded because they mastered the sociology of appearance: They understood the categories of normality and used them as tools.

And in doing so, they showed how both people and machines can mistake conformity for safety. Their success in broad daylight wasn’t only a triumph of planning. It was a triumph of categorical thinking, the same logic that underlies both human perception and artificial intelligence.

The lesson is clear: Before we teach machines to see better, we must first learn to question how we see.

Vincent Charles, Reader in AI for Business and Management Science, Queen’s University Belfast, and Tatiana Gherman, Associate Professor of AI for Business and Strategy, University of Northampton.  This article is republished from The Conversation under a Creative Commons license. Read the original article.

GOP overhaul of broadband permit laws: Cities hate it, cable companies love it

US Rep. Richard Hudson (R-N.C.), the subcommittee chairman, defended the bills at today’s hearing. “These reforms will add much-needed certainty, predictability, and accountability to the broadband permitting process and help expedite deployment,” he said.

Cable lobby group NCTA called the hearing “important progress” toward “the removal of regulatory impediments that slow deployment to unserved areas.” Another cable lobby group, America’s Communications Association, said the permitting reform bills “will strip away red tape and enable broadband, cable, and telecommunications providers to redirect resources to upgrading and expanding their networks and services, especially in rural areas.”

$42 billion program delays

Much of the debate centered on a $42 billion federal program that was created in a November 2021 law to subsidize broadband construction in areas without modern access. The Trump administration threw out a Biden-era plan for distributing the Broadband Equity, Access, and Deployment (BEAD) program funds, forcing state governments to rewrite their plans and cut costs, delaying the projects’ start. Money still hasn’t been distributed, though the Trump administration today said it approved the rewritten plans of 18 states and territories.

Hudson alleged that BEAD suffered from “four years of delays caused by the Biden-Harris administration,” though the Biden administration had approximately three years to set up the program. Hudson said that “permitting reform is essential” to prevent the money from being “tied up in further unnecessary reviews and bureaucratic delays.”

The bills set varying deadlines for different types of network projects, ranging from 60 days to 150 days. One bill demands that permit fees for BEAD construction projects be based on the local government’s “actual and direct costs.” Another stipulates that certain environmental and historical preservation reviews aren’t required when removing equipment targeted by a 2019 law on foreign technology deemed to be a security risk.

Rep. Doris Matsui (D-Calif.), the subcommittee’s top Democrat, said during the hearing that she won’t support “proposals that force local governments to meet tight deadlines without any extra staff or funding.” She said that if the “shot clock” specified in the legislation “runs out, the project is automatically approved. That may sound like a way to speed things up, but in reality, it cuts out community input, leads to mistakes, and sets us up for more delays down the road. If we want faster reviews, we should give local communities more help, not take away their say.”
