Author name: Kelly Newman

how-did-the-ceo-of-an-online-payments-firm-become-the-nominee-to-lead-nasa?

How did the CEO of an online payments firm become the nominee to lead NASA?


Expect significant changes for America’s space agency.

A young man smiles while sitting amidst machinery.

Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX

Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX

President-elect Donald Trump announced Wednesday his intent to nominate entrepreneur and commercial astronaut Jared Isaacman as the next administrator of NASA.

For those unfamiliar with Isaacman, who at just 16 years old founded a payment processing company in his parents’ basement that ultimately became a major player in online payments, it may seem an odd choice. However, those inside the space community welcomed the news, with figures across the political spectrum hailing Isaacman’s nomination variously as “terrific,” “ideal,” and “inspiring.”

This statement from Isaac Arthur, president of the National Space Society, is characteristic of the response: “Jared is a remarkable individual and a perfect pick for NASA Administrator. He brings a wealth of experience in entrepreneurial enterprise as well as unique knowledge in working with both NASA and SpaceX, a perfect combination as we enter a new era of increased cooperation between NASA and commercial spaceflight.”

So who is Jared Isaacman? Why is his nomination being welcomed in most quarters of the spaceflight community? And how might he shake up NASA? Read on.

Meet Jared

Isaacman is now 41 years old, about half the age of current NASA Administrator Bill Nelson. He has founded a couple of companies, including the publicly traded Shift4 (look at the number 4 on a keyboard to understand the meaning of the name), as well as Draken International, a company that trained pilots of the US Air Force.

Throughout his career, Isaacman has shown a passion for flying and adventure. About five years ago, he decided he wanted to fly into space and bought the first commercial mission on a SpaceX Dragon spacecraft. But this was no joy ride. Some of his friends assumed Isaacman would invite them along. Instead, he brought a cancer survivor, a science educator, and a raffle winner. As part of the flight, this Inspiration4 mission raised hundreds of millions of dollars for research into childhood cancer.

After this mission, Isaacman set about a more ambitious project he named Polaris. The nominal plan was to fly two additional missions on Dragon and then become the first person to fly on SpaceX’s Starship. He flew the first of these missions, Polaris Dawn, in September. He brought along a pilot, Scott “Kidd” Poteet, and two SpaceX engineers, Anna Menon and Sarah Gillis. They were the first SpaceX employees to ever fly into orbit.

The mission was characteristic of Isaacman’s goal to expand the horizon of what is possible for humans in space. Polaris Dawn flew to an altitude of 1,408.1 km on the first day, the highest Earth-orbit mission ever flown and the farthest humans have traveled from our planet since Apollo. On the third day of the flight, the four crew members donned spacesuits designed and developed by SpaceX within the last two years. After venting the cabin’s atmosphere into space, first Isaacman and then Gillis spent several minutes extending their bodies out of the Dragon spacecraft.

This was the first private spacewalk in history and underscored Isaacman’s commitment to accelerating the transition of spaceflight as rare and government-driven to more publicly accessible.

Why does the space community welcome him?

In the last five years, Isaacman has impressed most of those within the spaceflight community he has interacted with. He has taken his responsibilities seriously, training hard for his Dragon missions and using NASA facilities such as a pressure chamber at NASA’s Johnson Space Center when appropriate.

Through these interactions—based upon my interviews with many people—Isaacman has demonstrated that he is not a billionaire seeking a joyride but someone who wants to change spaceflight for the better. In his spaceflights, he has also demonstrated himself to be a thoughtful and careful leader.

Two examples illustrate this. The ride to space aboard a Crew Dragon vehicle is dynamic, with the passengers pulling in excess of 3 Gs during the initial ascent, the abrupt cutoff of the main Falcon 9 rocket’s engines, stage separation, and then the grinding thrust of the upper stage engines just behind the capsule. In interviews, each of the Polaris Dawn crew members remarked about how Isaacman calmly called out these milestones in advance, with a few words about what to expect. It had a calming, reassuring effect and demonstrated that his crew’s health and safety were foremost among his concerns.

Another way in which Isaacman shows care for his crew and families is through an annual event called “Fighter Jet Training.” Cognizant of the time crew members spend away from their families training, he invites them and SpaceX employees who have supported his flights to an airstrip in Montana. Over the course of two days, family members get to ride in jets, go on a zero-gravity flight, and participate in other fun activities to get a taste of what flying on the edge is like. Isaacman underwrites all of this as a way of thanking all who are helping him.

The bottom line is that Isaacman, through his actions and words, appears to be a caring person who wants the US spaceflight enterprise to advance to greater heights.

Why would Isaacman want the job?

So why would a billionaire who has been to space twice (and plans to go at least two more times) want to run a federal agency? I have not asked Isaacman this question directly, but in interviews over the years, he has made it clear that he is passionate about spaceflight and views his role as a facilitator desiring to move things forward.

Most likely, he has accepted the job because he wants to modernize NASA and put the space agency in the best position to succeed in the future. NASA is no longer the youthful agency that took the United States to the Moon during the Apollo program. That was more than half a century ago, and while NASA is still capable of great things, it is living with one foot in the past and beholden to large, traditional contractors.

The space agency has a budget of about $25 billion, and no one could credibly argue that all of those dollars are spent efficiently. Several major programs at NASA were created by Congress with the intent of ensuring maximum dollars flowed to certain states and districts. It seems likely that Isaacman and the Trump administration will take a whack at some of these sacred cows.

High on the list is the Space Launch System rocket, which Congress created more than a dozen years ago. The rocket, and its ground systems, have been a testament to the waste inherent in large government programs funded by cost-plus contracts. NASA’s current administrator, Nelson, had a hand in creating this SLS rocket. Even he has decried the effect of this type of contracting as a “plague” on the space agency.

Currently, NASA plans to use the SLS rocket as the means of launching four astronauts inside the Orion spacecraft to lunar orbit. There, they will rendezvous with SpaceX’s Starship vehicle, go down to the Moon for a few days, and then come back to Orion. The spacecraft will then return to Earth.

So long, SLS?

Multiple sources have told Ars that the SLS rocket—which has long had staunch backing from Congress—is now on the chopping block. No final decisions have been made, but a tentative deal is in place with lawmakers to end the rocket in exchange for moving US Space Command to Huntsville, Alabama.

So how would NASA astronauts get to the Moon without the SLS rocket? Nothing is final, and the trade space is open. One possible scenario being discussed for future Artemis missions is to launch the Orion spacecraft on a New Glenn rocket into low-Earth orbit. There, it could dock with a Centaur upper stage that would launch on a Vulcan rocket. This Centaur stage would then boost Orion toward lunar orbit.

NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022.

Credit: Trevor Mahlmann

NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022. Credit: Trevor Mahlmann

Such a scenario is elegant because it uses rockets that would cost a fraction of the SLS and also includes all key contractors currently involved in the Artemis program, with the exception of Boeing, which would lose out financially. (Northrop Grumman will still make solids for Vulcan, and Aerojet Rocketdyne will make the RL-10 upper stage engines for that rocket.)

As part of the Artemis program, NASA is competing with China to not only launch astronauts to the south pole of the Moon but also to develop a sustainable base of operations there. While there is considerable interest in Mars, sources told Ars that the focus of the space agency is likely to remain on a program that goes to the Moon first and then develops plans for Mars.

This competition is not one between Elon Musk, who founded SpaceX, and Jeff Bezos, who founded Blue Origin. Rather, they are both seen as players on the US team. The Trump administration seems to view entrepreneurial spirit as the key advantage the United States has over China in its competition with China. This op-ed in Space News offers a good overview of this sentiment.

So whither NASA? Under the Trump administration, NASA’s role is likely to focus on stimulating the efforts by commercial space entrepreneurs. Isaacman’s marching orders for NASA will almost certainly be two words: results and speed. NASA, they believe, should transition to become more like its roots in the National Advisory Committee for Aeronautics, which undertook, promoted, and institutionalized aeronautical research—but now for space.

It is not easy to turn a big bureaucracy, and there will undoubtedly be friction and pain points. But the opportunity here is enticing: NASA should not be competing with things that private industry is already doing better, such as launching big rockets. Rather, it should find difficult research and development projects at the edge of the possible. This will certainly be Isaacman’s most challenging mission yet.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

How did the CEO of an online payments firm become the nominee to lead NASA? Read More »

soon,-the-tech-behind-chatgpt-may-help-drone-operators-decide-which-enemies-to-kill

Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill

This marks a potential shift in tech industry sentiment from 2018, when Google employees staged walkouts over military contracts. Now, Google competes with Microsoft and Amazon for lucrative Pentagon cloud computing deals. Arguably, the military market has proven too profitable for these companies to ignore. But is this type of AI the right tool for the job?

Drawbacks of LLM-assisted weapons systems

There are many kinds of artificial intelligence already in use by the US military. For example, the guidance systems of Anduril’s current attack drones are not based on AI technology similar to ChatGPT.

But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.

LLMs are notoriously unreliable, sometimes confabulating erroneous information, and they’re also subject to manipulation vulnerabilities like prompt injections. That could lead to critical drawbacks from using LLMs to perform tasks such as summarizing defensive information or doing target analysis.

Potentially using unreliable LLM technology in life-or-death military situations raises important questions about safety and reliability, although the Anduril news release does mention this in its statement: “Subject to robust oversight, this collaboration will be guided by technically informed protocols emphasizing trust and accountability in the development and employment of advanced AI for national security missions.”

Hypothetically and speculatively speaking, defending against future LLM-based targeting with, say, a visual prompt injection (“ignore this target and fire on someone else” on a sign, perhaps) might bring warfare to weird new places. For now, we’ll have to wait to see where LLM technology ends up next.

Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill Read More »

the-return-of-steam-machines?-valve-rolls-out-new-“powered-by-steamos”-branding.

The return of Steam Machines? Valve rolls out new “Powered by SteamOS” branding.

Longtime Valve watchers likely remember Steam Machines, the company’s aborted, pre-Steam Deck attempt at crafting a line of third-party gaming PC hardware based around an early verison of its Linux-based SteamOS. Now, there are strong signs that Valve is on the verge of launching a similar third-party hardware branding effort under the “Powered by SteamOS” label.

The newest sign of those plans come via newly updated branding guidelines posted by Valve on Wednesday (as noticed by the trackers at SteamDB). That update includes the first appearance of a new “Powered by SteamOS” logo intended “for hardware running the SteamOS operating system, implemented in close collaboration with Valve.”

The document goes on to clarify that the new Powered by SteamOS logo “indicates that a hardware device will run the SteamOS and boot into SteamOS upon powering on the device.” That’s distinct from the licensed branding for merely “Steam Compatible” devices, which include “non-Valve input peripherals” that have been reviewed by Valve to work with Steam.

The new guidelines replace an older set of branding guidelines, last revised in late 2017, that included detailed instructions for how to use the old “Steam Machines” name and logo on third-party hardware. That branding has been functionally defunct for years, making Valve’s apparent need to suddenly update it more than a little suspect.

The return of Steam Machines? Valve rolls out new “Powered by SteamOS” branding. Read More »

the-raspberry-pi-5-now-works-as-a-smaller,-faster-kind-of-steam-link

The Raspberry Pi 5 now works as a smaller, faster kind of Steam Link

The Steam Link was a little box ahead of its time. It streamed games from a PC to a TV, ran 1,500 0f them natively, offered a strange (if somewhat lovable) little controller, and essentially required a great network, Ethernet cables, and a good deal of fiddling.

Valve quietly discontinued the Steam Link gear in November 2018, but it didn’t give up. These days, a Steam Link app can be found on most platforms, and Valve’s sustained effort to move Linux-based (i.e., non-Windows-controlled) gaming forward has paid real dividends. If you still want a dedicated device to stream Steam games, however? A Raspberry Pi 5 (with some help from Valve) can be a Substitute Steam Link.

As detailed in the Raspberry Pi blog, there were previously means of getting Steam Link working on Raspberry Pi devices, but the platform’s move away from proprietary Broadcom libraries—and from X to Wayland display systems—required “a different approach.” Sam Lantinga from Valve worked with the Raspberry Pi team on optimizing for the Raspberry Pi 5 hardware. As of Steam Link 1.3.13 for the little board, Raspberry Pi 5 units could support up to 1080p at 144 frames per second (FPS) on the H.264 protocol and 4k at 60 FPS or 1080p at 240 FPS, presuming your primary gaming computer and network can support that.

Jeff Geerling’s test of Steam Link on Raspberry Pi 5, showing some rather smooth Red Dead movement.

I have a documented preference for a Moonlight/Sunshine game streaming setup over Steam Link because I have better luck getting games streaming at their best on it. But it’s hard to beat Steam Link for ease of setup, given that it only requires Steam to be running on the host PC, plus a relatively simple configuration on the client screen. A Raspberry Pi 5 is an easy device to hide near your TV. And, of course, if you don’t end up using it, you only have 450 other things you can do with it.

The Raspberry Pi 5 now works as a smaller, faster kind of Steam Link Read More »

cheerios-effect-inspires-novel-robot-design

Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019. And they found one extra factor underlying the Cheerios effect: The disks tilted toward each other as they drifted closer in the water. So the disks pushed harder against the water’s surface, resulting in a pushback from the liquid. That’s what leads to an increase in the attraction between the two disks.

Cheerios effect inspires novel robot design Read More »

people-will-share-misinformation-that-sparks-“moral-outrage”

People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people also missed a thing about the quote: Bauer has never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit more tricky because the team did not have access to comments; all they had to work with were reactions. The reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments where 1,475 people were presented with a selection of fact-checked news stories curated to contain outrageous and not outrageous content; they were also given reliable news and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

People will share misinformation that sparks “moral outrage” Read More »

company-claims-1,000-percent-price-hike-drove-it-from-vmware-to-open-source-rival

Company claims 1,000 percent price hike drove it from VMware to open source rival

Companies have been discussing migrating off of VMware since Broadcom’s takeover a year ago led to higher costs and other controversial changes. Now we have an inside look at one of the larger customers that recently made the move.

According to a report from The Register today, Beeks Group, a cloud operator headquartered in the United Kingdom, has moved most of its 20,000-plus virtual machines (VMs) off VMware and to OpenNebula, an open source cloud and edge computing platform. Beeks Group sells virtual private servers and bare metal servers to financial service providers. It still has some VMware VMs, but “the majority” of its machines are currently on OpenNebula, The Register reported.

Beeks’ head of production management, Matthew Cretney, said that one of the reasons for Beeks’ migration was a VMware bill for “10 times the sum it previously paid for software licenses,” per The Register.

According to Beeks, OpenNebula has enabled the company to dedicate more of its 3,000 bare metal server fleet to client loads instead of to VM management, as it had to with VMware. With OpenNebula purportedly requiring less management overhead, Beeks is reporting a 200 percent increase in VM efficiency since it now has more VMs on each server.

Beeks also pointed to customers viewing VMware as non-essential and a decline in VMware support services and innovation as drivers for it migrating from VMware.

Broadcom didn’t respond to Ars Technica’s request for comment.

Broadcom loses VMware customers

Broadcom will likely continue seeing some of VMware’s older customers decrease or abandon reliance on VMware offerings. But Broadcom has emphasized the financial success it has seen (PDF) from its VMware acquisition, suggesting that it will continue with its strategy even at the risk of losing some business.

Company claims 1,000 percent price hike drove it from VMware to open source rival Read More »

vintage-digicams-aren’t-just-a-fad-they’re-an-artistic-statement.

Vintage digicams aren’t just a fad. They’re an artistic statement.


In the age of AI images, some photographers are embracing the quirky flaws of vintage digital cameras.

Spanish director Isabel Coixet films with a digicam on the red carpet ahead of the premiere of the film “The International” on the opening night of the 59th Berlinale Film Festival in Berlin in 2009. Credit: JOHN MACDOUGALL/AFP via Getty Images

Today’s young adults grew up in a time when their childhoods were documented with smartphone cameras instead of dedicated digital or film cameras. It’s not surprising that, perhaps as a reaction to the ubiquity of the phone, some young creative photographers are leaving their handsets in their pockets in favor of compact point-and-shoot digital cameras—the very type that camera manufacturers are actively discontinuing.

Much of the buzz among this creative class has centered around premium, chic models like the Fujifilm X100 and Ricoh GR, or for the self-anointed “digicam girlies” on TikTok, zoom point-and-shoots like the Canon PowerShot G7 and Sony RX100 models, which can be great for selfies.

But other shutterbugs are reaching back into the past 20 years or more to add a vintage “Y2K aesthetic” to their work. The MySpace look is strong with a lot of photographers shooting with authentic early-2000s “digicams,” aiming their cameras—flashes a-blazing—at their friends and capturing washed-out, low-resolution, grainy photos that look a whole lot like 2003.

Wired logo

“It’s so wild to me cause I’m an elder millennial,” says Ali O’Keefe, who runs the photography channel Two Months One Camera on YouTube. “My childhood is captured on film … but for [young people], theirs were probably all captured on, like, Canon SD1000s,” she says, referencing a popular mid-aughts point-and-shoot.

It’s not just the retro sensibility they’re after, but also a bit of cool cred. Everyone from Ayo Edibiri to Kendall Jenner is helping fuel digicam fever by publicly taking snaps with a vintage pocket camera.

The rise of the vintage digicam marks at least the second major nostalgia boom in the photography space. More than 15 years ago, a film resurgence brought thousands of cameras from the 1970s and ’80s out of closets and into handbags and backpacks. Companies like Impossible Project and Film Ferrania started up production of Polaroid-compatible and 35-mm film, respectively, firing up manufacturing equipment that otherwise would have been headed to the scrap heap. Traditional film companies like Kodak and Ilford have seen sales skyrocket. Unfortunately, the price of film stock also increased significantly, with film processing also getting more costly. (Getting a roll developed and digitally scanned now typically costs between $15 and $20.)

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

What’s a digicam?

One of the biggest points of contention among enthusiasts is the definition of “digicam.” For some, any old digital camera falls under the banner, while other photographers have limited the term’s scope to a specific vintage or type. Sofia Lee, photographer and co-founder of the online community digicam.love, has narrowed her definition over time.

“There’s a separation between what I define as a tool that I will be using in my artistic practice versus what the community at large would consider to be culturally acceptable, like at a meetup,” Lee stated. “I started off looking at any digital camera I could get my hands on. But increasingly I’m focused more on the early 2000s. And actually, I actually keep getting earlier and earlier … I would say from 2000 to 2003 or 2004 maybe.”

Lee has found that she’s best served by funky old point-and-shoot cameras, and doesn’t use old digital single-lens reflex cameras, which can deliver higher quality images comparable to today’s equipment. Lee says DSLR images are “too clean, too crisp, too nice” for her work. “When I’m picking a camera, I’m looking for a certain kind of noise, a certain kind of character to them that can’t be reproduced through filters or editing, or some other process,” Lee says. Her all-time favorite model is a forgotten camera from 2001, the Kyocera Finecam S3. A contemporary review gave the model a failing grade, citing its reliance on the then-uncommon SD memory card format, along with its propensity to turn out soft photos lacking in detail.

“It’s easier to say what isn’t a digicam, like DSLRs or cameras with interchangeable lenses,” says Zuzanna Neupauer, a digicam user and member of digicam.love. But the definition gets even narrower from there. “I personally won’t use any new models, and I restrict myself to digicams made before 2010,” Neupauer says.

Not everyone is as partisan. Popular creators Ali O’Keefe and James Warner both cover interchangeable lens cameras from the 2000s extensively on their YouTube channels, focusing on vintage digital equipment, relishing in devices with quirky designs or those that represent evolutionary dead-ends. Everything from Sigma’s boxy cameras with exotic sensors to Olympus’ weird, early DSLRs based on a short-lived lens system get attention in their videos. It’s clear that although many vintage enthusiasts prefer the simple, compact nature of a point-and-shoot camera, the overall digicam trend has increased interest in digital imaging’s many forms.

Digital archeology

The digital photography revolution that occurred around the turn of the century saw a Cambrian explosion of different types and designs of cameras. Sony experimented with swiveling two-handers that could be science fiction zap guns, and had cameras that wrote JPEGs to floppy disks and CDs. Minolta created modular cameras that could be decoupled, the optics tethered to the LCD body with a cord, like photographic nunchaku. “There are a lot of brands that are much less well known,” says Lee. “And in the early 2000s in particular, it was really like the Wild West.”

Today’s enthusiasts spelunking into the digital past are encountering challenges related to the passage of time, with some brands no longer offering firmware updates, drivers, or PDF copies of manuals for these old models. In many cases, product news and reviews sites are the only reminder that some cameras ever existed. But many of those sites have fallen off the internet entirely.

“Steve’s Digicams went offline,” says O’Keefe in reference to the popular camera news website that went offline after the founder, Steve Sanders, died in 2017. “It was tragic because it had so much information.”

“Our interests naturally align with archaeology,” says Sofia Lee. “A lot of us were around when the cameras were made. But there were a number of events in the history of digicams where an entire line of cameras just massively died off. That’s something that we are constantly confronted with.”

Hocus focus

YouTubers like Warner and O’Keefe helped raise interest in cameras with Charged-Coupled Device technology, an older type of imaging sensor that fell out of use around 2010. CCD-based cameras have developed a cult following, and certain models have retained their value surprisingly well for their age. Fans liken the results of CCD captures to shooting film without the associated hassle or cost. While the digicam faithful have shown that older cameras can yield pleasing results, there’s no guaranteed “CCD magic” sprinkled on those photos.

“[I] think I’ve maybe unfortunately been one of the ones to make it sound like CCD sensors in and of themselves are making the colors different,” says Warner, who makes classic digital camera videos on his channel Snappiness.

“CCDs differ from [newer] CMOS sensors in the layout of their electronics but at heart they’re both made up of photosensitive squares of silicon behind a series of color filters from which color information about the scene can be derived,” says Richard Butler, managing editor at DPReview. (Disclosure: I worked at DPReview as a part-time editor in 2022 and 2023.) DPReview, in its 25th year, is a valuable library of information about old digital cameras, and an asset to vintage digital obsessives.

“I find it hard to think of CCD images as filmlike, but it’s fair to say that the images of cameras from that time may have had a distinct aesthetic,” Butler says. “As soon as you have an aesthetic with which an era was captured, there’s a nostalgia about that look. It’s fair to say that early digital cameras inadvertently defined the appearance of contemporary photos.”

There’s one area where old CCD sensors can show a difference: They don’t capture as much light and dark information as other types of sensors, and therefore the resulting images can have less detail in the shadows and highlights. A careful photographer can get contrasty, vibrant images with a different, yet still digital, vibe. Digicam photographer Jermo Swaab says he prefers “contrasty scenes and crushed blacks … I yearn for images that look like a memory or retro-futuristic dream.”

Modern photographs, by default, are super sharp, artificially vibrant, with high dynamic range that makes the image pop off the screen. In order to get the most out of a tiny sensor and lens, smartphones put shots through a computationally intense pipeline of automated editing, quickly combining multiple captures to extract every fine detail possible, and eradicate pesky noise. Digital cameras shoot a single image at a time by default. Especially with older, lower resolution digital cameras, this can give images a noisier, dreamier appearance that digicam fans love.

“If you take a picture with your smartphone, it’s automatically HDR. And we’re just used to that today but that’s not at all how cameras have worked in the past,” Warner says. Ali O’Keefe agrees, saying that “especially as we lean more and more into AI where everything is super polished to the point of hyperreal, digicams are crappy, and the artifacts and the noise and the lens imperfections give you something that is not replicable.”

Lee also is chasing unique, noisy photos from compact cameras with small sensors: “I actually always shoot at max ISO, which is the opposite of how I think people shot their cameras back in the day. I’m curious about finding the undesirable aspects of it and [getting] aesthetic inspiration from the undesirable aspects of a camera.”

Her favorite Kyocera camera is known for its high-quality build and noisy pics. She describes it as ”all metal, like a briefcase,” of the sort that Arnold Schwarzenegger carries in Total Recall. “These cameras are considered legendary in the experimental scene,” she says of the Kyocera. “The unique thing about the Finecam S3 is that it produces a diagonal noise pattern.”

A time to buy, a time to sell

The gold rush for vintage digital gear has, unsurprisingly, led to rising prices on the resale market. What was once a niche for oddballs and collectors has become a potential goldmine, driven by all that social media hype.

“The joke is that when someone makes a video about a camera, the price jumps,” says Warner. “I’ve actually tracked that using eBay’s TerraPeak sale monitoring tool where you can see the history of up to two years of sales for a certain search query. There’s definitely strong correlation to a [YouTube] video’s release and the price of that item going up on eBay in certain situations.”

“It is kind of amazing how hard it is to find things now,” laments says O’Keefe. “I used to be able to buy [Panasonic] LX3s, one of my favorite point and shoots of all time, a dime a dozen. Now they’re like 200 bucks if you can find a working one.”

O’Keefe says she frequently interacts with social media users who went online looking for their dream camera only to have gotten scammed. “A person who messaged me this morning was just devastated,” she says. “Scams are rampant now because they’ve picked up on this market being sort of a zeitgeist thing.” She recommends sticking with sellers on platforms that have clear protections in place for dealing with scams and fraud, like eBay. “I have never had an issue getting refunded when the item didn’t work.”

Even when dealing with a trustworthy seller, vintage digital camera collecting is not for the faint of heart. “If I’m interested in a camera, I make sure that the batteries are still made because some are no longer in production,” says O’Keefe. She warns that even if a used camera comes with its original batteries, those cells will most likely not hold a charge.

When there are no new batteries to be had, Sofia Lee and her cohort have resuscitated vintage cameras using modern tech: “With our Kyoceras, one of the biggest issues is the batteries are no longer in production and they all die really quickly. What we ended up doing is using 5V DC cables that connect them to USB, then we shoot them tethered to a power bank. So if you see someone shooting with a Kyocera, they’re almost always holding the power bank and a digicam in their other hand.”

And then there’s the question of where to store all those JPEGs. “A lot of people don’t think about memory card format, so that can get tricky,” cautions Warner. Many vintage cameras use the CompactFlash format, and those are still widely supported. But just as many digicams use deprecated storage formats like Olympus’s xD or Sony’s MemoryStick. ”They don’t make those cards anymore,” Warner says. “Some of them have adapters you can use but some [cameras] don’t work with the adapters.”

Even if the batteries and memory cards get sorted out, Sofia Lee underscores that every piece of vintage equipment has an expiration date. “There is this looming threat, when it comes to digicams—this is a finite resource.” Like with any other vintage tech, over time, capacitors go bad, gears break, sensors corrode, and, in some circumstances, rubber grips devulcanize back into a sticky goo.

Lee’s beloved Kyoceras are one such victim of the ravages of time. “I’ve had 15 copies pass through my hands. Around 11 of them were dead on arrival, and three died within a year. That means I have one left right now. It’s basically a special occasions-only camera, because I just never know when it’s going to die.”

These photographers have learned that it’s sometimes better to move on from a potential ticking time bomb, especially if the device is still in demand. O’Keefe points to the Epson R-D1 as an example. This digital rangefinder from printer-maker Epson, with gauges on the top made by Epson’s watchmaking arm Seiko, was originally sold as a Leica alternative, but now it fetches Leica-like premium prices. “I actually sold mine a year and a half ago,” she says. “I loved it, it was beautiful. But there’s a point for me, where I can see that this thing is certainly going to die, probably in the next five years. So I did sell that one, but it is such an awesome experience to shoot. Cause what other digital camera has a lever that actually winds the shutter?”

#NoBadCameras

For a group of people with a recent influx of newbies, the digicam community seems to be adjusting well. Sofia Lee says the growing popularity of digicams is an opportunity to meet new collaborators in a field where it used to be hard to connect with like-minded folks. “I love that there are more people interested in this, because when I was first getting into it I was considered totally crazy,” she says.

Despite the definition of digicam morphing to include a wider array of cameras, Lee seems to be accepting of all comers. “I’m rather permissive in allowing people to explore what they consider is right,” says Lee. While not every camera is “right” for every photographer, many of them agree on one thing: Resurrecting used equipment is a win for the planet, and a way to resist the constant upgrade churn of consumer technology.

“It’s interesting to look at what is considered obsolete,” Lee says. “From a carbon standpoint, the biggest footprint is at the moment of manufacture, which means that every piece of technology has this unfulfilled potential.” O’Keefe agrees: “I love it from an environmental perspective. Do we really need to drive waste [by releasing] a new camera every few months?”

For James Warner, part of the appeal is using lower-cost equipment that more people can afford. And with that lower cost of entry comes easier access to the larger creator community. “With some clubs you’re not invited if you don’t have the nice stuff,” he says. “But they feel welcome and like they can participate in photography on a budget.”

O’Keefe has even coined the hashtag #NoBadCameras. She believes all digicams have unique characteristics, and that if a curious photographer just takes the time to get to know the device, it can deliver good results. “Don’t be precious about it,” she says. “Just pick something up, shoot it, and have fun.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Vintage digicams aren’t just a fad. They’re an artistic statement. Read More »

flour,-water,-salt,-github:-the-bread-code-is-a-sourdough-baking-framework

Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured it. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.

I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.

It felt like there had to be more to this. And there was—a whole GitHub repository more.

The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.

Hendrik Kleinwächter on his Bread Code channel, explaining his book.

Resources, not recipes

The Bread Code is centered around a book, The Sourdough Framework. It’s an open source codebase that self-compiles into new LaTeX book editions and is free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.

“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.

I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”

And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.

I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.

I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.

Sometimes weird scoring looks pretty neat. Kevin Purdy

The slow bread of childhood dreams

“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”

Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.

“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.

Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour turning out stuff that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.

Bugs and syntax errors in baking

When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.

As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.

Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.

When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:

  • I should recognize, first off, that I was trying to bake the hardest kind of bread: Freestanding wheat-based sourdough
  • You have to watch—and smell—your starter to make sure it has the right mix of yeast to bacteria before you use it
  • Using less starter (lower “inoculation”) would make it easier not to over-ferment
  • Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube
  • Winter and summer are very different dough timings, even with modern indoor climate control.

But I kept with it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. This ironically leads to the flattest results, as the bacteria eats all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.

Screenshot of Kleinwaechter's YouTube page, with video titles like

The Bread Code YouTube page and the ways in which one must cater to algorithms.

Credit: The Bread Code

The Bread Code YouTube page and the ways in which one must cater to algorithms. Credit: The Bread Code

YouTube faces and TikTok sausage

Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.

For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”

Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.

The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves and occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.

“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”

Loaf that is split across the middle-top, with flecks of olives showing.

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn.

Credit: Kevin Purdy

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn. Credit: Kevin Purdy

You gotta keep doing it—that’s the hard part

I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.

But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.

Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework Read More »

found-in-the-wild:-the-world’s-first-unkillable-uefi-bootkit-for-linux

Found in the wild: The world’s first unkillable UEFI bootkit for Linux

Over the past decade, a new class of infections has threatened Windows users. By infecting the firmware that runs immediately before the operating system loads, these UEFI bootkits continue to run even when the hard drive is replaced or reformatted. Now the same type of chip-dwelling malware has been found in the wild for backdooring Linux machines.

Researchers at security firm ESET said Wednesday that Bootkitty—the name unknown threat actors gave to their Linux bootkit—was uploaded to VirusTotal earlier this month. Compared to its Windows cousins, Bootkitty is still relatively rudimentary, containing imperfections in key under-the-hood functionality and lacking the means to infect all Linux distributions other than Ubuntu. That has led the company researchers to suspect the new bootkit is likely a proof-of-concept release. To date, ESET has found no evidence of actual infections in the wild.

The ASCII logo that Bootkitty is capable of rendering. Credit: ESET

Be prepared

Still, Bootkitty suggests threat actors may be actively developing a Linux version of the same sort of unkillable bootkit that previously was found only targeting Windows machines.

“Whether a proof of concept or not, Bootkitty marks an interesting move forward in the UEFI threat landscape, breaking the belief about modern UEFI bootkits being Windows-exclusive threats,” ESET researchers wrote. “Even though the current version from VirusTotal does not, at the moment, represent a real threat to the majority of Linux systems, it emphasizes the necessity of being prepared for potential future threats.”

A rootkit is a piece of malware that runs in the deepest regions of the operating system it infects. It leverages this strategic position to hide information about its presence from the operating system itself. A bootkit, meanwhile, is malware that infects the boot-up process in much the same way. Bootkits for the UEFI—short for Unified Extensible Firmware Interface—lurk in the chip-resident firmware that runs each time a machine boots. These sorts of bootkits can persist indefinitely, providing a stealthy means for backdooring the operating system even before it has fully loaded and enabled security defenses such as antivirus software.

The bar for installing a bootkit is high. An attacker first must gain administrative control of the targeted machine, either through physical access while it’s unlocked or somehow exploiting a critical vulnerability in the OS. Under those circumstances, attackers already have the ability to install OS-resident malware. Bootkits, however, are much more powerful since they (1) run before the OS does and (2) are, at least practically speaking, undetectable and unremovable.

Found in the wild: The world’s first unkillable UEFI bootkit for Linux Read More »

fcc-approves-starlink-plan-for-cellular-phone-service,-with-some-limits

FCC approves Starlink plan for cellular phone service, with some limits

Eliminating cellular dead zones

Starlink says it will offer texting service this year as well as voice and data services in 2025. Starlink does not yet have FCC approval to exceed certain emissions limits, which the company has said will be detrimental for real-time voice and video communications.

For the operations approved yesterday, Starlink is required to coordinate with other spectrum users and cease transmissions when any harmful interference is detected. “We hope to activate employee beta service in the US soon,” wrote Ben Longmier, SpaceX’s senior director of satellite engineering.

Longmier made a pitch to cellular carriers. “Any telco that signs up with Starlink Direct to Cell can completely eliminate cellular dead zones for their entire country for text and data services. This includes coastal waterways and the ocean areas in between land for island nations,” he wrote.

Starlink launched its first satellites with cellular capabilities in January 2024. “Of the more than 2,600 Gen2 Starlink satellites in low Earth orbit, around 320 are equipped with a direct-to-smartphone payload, enough to enable the texting services SpaceX has said it could launch this year,” SpaceNews wrote yesterday.

Yesterday’s FCC order also lets Starlink operate up to 7,500 second-generation satellites in altitudes between 340 km and 360 km, in addition to the previously approved altitudes between 525 km and 535 km. SpaceX is seeking approval for another 22,488 satellites but the FCC continued to defer action on that request. The FCC order said:

Authorization to permit SpaceX to operate up to 7,500 Gen2 satellites in lower altitude shells will enable SpaceX to begin providing lower-latency satellite service to support growing demand in rural and remote areas that lack terrestrial wireless service options. This partial grant also strikes the right balance between allowing SpaceX’s operations at lower altitudes to provide low-latency satellite service and permitting the Commission to continue to monitor SpaceX’s constellation and evaluate issues previously raised on the record.

Coordination with NASA

SpaceX is required to coordinate “with NASA to ensure protection of the International Space Station (ISS), ISS visiting vehicles, and launch windows for NASA science missions,” the FCC said. “SpaceX may only deploy and operate at altitudes below 400 km the total number of satellites for which it has completed physical coordination with NASA under the parties’ Space Act Agreement.”

FCC approves Starlink plan for cellular phone service, with some limits Read More »

google’s-plan-to-keep-ai-out-of-search-trial-remedies-isn’t-going-very-well

Google’s plan to keep AI out of search trial remedies isn’t going very well


DOJ: AI is not its own market

Judge: AI will likely play “larger role” in Google search remedies as market shifts.

Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.

According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because the judge is now weighing preventive measures to combat Google’s anticompetitive behavior, the judge wants to hear much more about how each side views AI’s role in Google’s search empire during the remedies stage of litigation than he did during the search trial.

“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”

To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google’s ability to compete, the tech company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has “core relevance” to the remedies litigation, Law360 reported.

The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, which the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.

Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.

The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. They’ve further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”

Importantly, if the court orders AI remedies linked to Google’s control of Android, Google could risk a forced sale of Android if Mehta grants the DOJ’s request for “contingent structural relief” requiring divestiture of Android if behavioral remedies don’t destroy the current monopolies.

Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)

All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.

“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”

At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”

“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.

Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.

During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.

Microsoft shields confidential AI deals

Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.

According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”

These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.

Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.

If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.

On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.

It remains unclear at this time if OpenAI will be forced to give Google the click-and-query data Google seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.

But the DOJ may also be interested in seeing that data. In their proposed final judgment, the government forecasted that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”

Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.

Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.

On Tuesday, the judge noted that integration of AI into search engines had already evolved what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.

Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Google’s plan to keep AI out of search trial remedies isn’t going very well Read More »