Author name: Kelly Newman


Company claims 1,000 percent price hike drove it from VMware to open source rival

Companies have been discussing migrating off of VMware since Broadcom’s takeover a year ago led to higher costs and other controversial changes. Now we have an inside look at one of the larger customers that recently made the move.

According to a report from The Register today, Beeks Group, a cloud operator headquartered in the United Kingdom, has moved most of its 20,000-plus virtual machines (VMs) off VMware and onto OpenNebula, an open source cloud and edge computing platform. Beeks Group sells virtual private servers and bare metal servers to financial service providers. It still has some VMware VMs, but “the majority” of its machines are currently on OpenNebula, The Register reported.

Beeks’ head of production management, Matthew Cretney, said that one of the reasons for Beeks’ migration was a VMware bill for “10 times the sum it previously paid for software licenses,” per The Register.

According to Beeks, OpenNebula has enabled the company to dedicate more of its fleet of 3,000 bare metal servers to client workloads instead of VM management, as it had to with VMware. With OpenNebula purportedly requiring less management overhead, Beeks is reporting a 200 percent increase in VM efficiency, since it can now run more VMs on each server.
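For readers parsing that figure: a “200 percent increase” means roughly triple the density, not double. The sketch below is purely illustrative Python with hypothetical per-server numbers, not figures Beeks has published:

```python
# Purely hypothetical numbers to illustrate the arithmetic; Beeks has not
# published its per-server VM counts.
servers = 3_000                                # approximate fleet size cited above
vms_per_host_before = 5                        # assumed baseline density
vms_per_host_after = vms_per_host_before * 3   # a 200% increase = 3x

print(f"Before: ~{servers * vms_per_host_before:,} VMs across the fleet")
print(f"After:  ~{servers * vms_per_host_after:,} VMs across the fleet")
```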

Beeks also pointed to customers viewing VMware as non-essential, along with a decline in VMware support services and innovation, as drivers for its migration away from VMware.

Broadcom didn’t respond to Ars Technica’s request for comment.

Broadcom loses VMware customers

Broadcom will likely continue seeing some of VMware’s older customers decrease or abandon reliance on VMware offerings. But Broadcom has emphasized the financial success it has seen (PDF) from its VMware acquisition, suggesting that it will continue with its strategy even at the risk of losing some business.



Vintage digicams aren’t just a fad. They’re an artistic statement.


In the age of AI images, some photographers are embracing the quirky flaws of vintage digital cameras.

Spanish director Isabel Coixet films with a digicam on the red carpet ahead of the premiere of the film “The International” on the opening night of the 59th Berlinale Film Festival in Berlin in 2009. Credit: JOHN MACDOUGALL/AFP via Getty Images

Today’s young adults grew up in a time when their childhoods were documented with smartphone cameras instead of dedicated digital or film cameras. It’s not surprising that, perhaps as a reaction to the ubiquity of the phone, some young creative photographers are leaving their handsets in their pockets in favor of compact point-and-shoot digital cameras—the very type that camera manufacturers are actively discontinuing.

Much of the buzz among this creative class has centered around premium, chic models like the Fujifilm X100 and Ricoh GR, or for the self-anointed “digicam girlies” on TikTok, zoom point-and-shoots like the Canon PowerShot G7 and Sony RX100 models, which can be great for selfies.

But other shutterbugs are reaching back into the past 20 years or more to add a vintage “Y2K aesthetic” to their work. The MySpace look is strong with a lot of photographers shooting with authentic early-2000s “digicams,” aiming their cameras—flashes a-blazing—at their friends and capturing washed-out, low-resolution, grainy photos that look a whole lot like 2003.


“It’s so wild to me cause I’m an elder millennial,” says Ali O’Keefe, who runs the photography channel Two Months One Camera on YouTube. “My childhood is captured on film … but for [young people], theirs were probably all captured on, like, Canon SD1000s,” she says, referencing a popular mid-aughts point-and-shoot.

It’s not just the retro sensibility they’re after, but also a bit of cool cred. Everyone from Ayo Edebiri to Kendall Jenner is helping fuel digicam fever by publicly taking snaps with a vintage pocket camera.

The rise of the vintage digicam marks at least the second major nostalgia boom in the photography space. More than 15 years ago, a film resurgence brought thousands of cameras from the 1970s and ’80s out of closets and into handbags and backpacks. Companies like Impossible Project and Film Ferrania started up production of Polaroid-compatible and 35-mm film, respectively, firing up manufacturing equipment that otherwise would have been headed to the scrap heap. Traditional film companies like Kodak and Ilford have seen sales skyrocket. Unfortunately, the price of film stock also increased significantly, with film processing also getting more costly. (Getting a roll developed and digitally scanned now typically costs between $15 and $20.)

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

What’s a digicam?

One of the biggest points of contention among enthusiasts is the definition of “digicam.” For some, any old digital camera falls under the banner, while other photographers have limited the term’s scope to a specific vintage or type. Sofia Lee, photographer and co-founder of the online community digicam.love, has narrowed her definition over time.

“There’s a separation between what I define as a tool that I will be using in my artistic practice versus what the community at large would consider to be culturally acceptable, like at a meetup,” Lee stated. “I started off looking at any digital camera I could get my hands on. But increasingly I’m focused more on the early 2000s. And actually, I actually keep getting earlier and earlier … I would say from 2000 to 2003 or 2004 maybe.”

Lee has found that she’s best served by funky old point-and-shoot cameras, and doesn’t use old digital single-lens reflex cameras, which can deliver higher quality images comparable to today’s equipment. Lee says DSLR images are “too clean, too crisp, too nice” for her work. “When I’m picking a camera, I’m looking for a certain kind of noise, a certain kind of character to them that can’t be reproduced through filters or editing, or some other process,” Lee says. Her all-time favorite model is a forgotten camera from 2001, the Kyocera Finecam S3. A contemporary review gave the model a failing grade, citing its reliance on the then-uncommon SD memory card format, along with its propensity to turn out soft photos lacking in detail.

“It’s easier to say what isn’t a digicam, like DSLRs or cameras with interchangeable lenses,” says Zuzanna Neupauer, a digicam user and member of digicam.love. But the definition gets even narrower from there. “I personally won’t use any new models, and I restrict myself to digicams made before 2010,” Neupauer says.

Not everyone is as partisan. Popular creators Ali O’Keefe and James Warner both cover interchangeable lens cameras from the 2000s extensively on their YouTube channels, focusing on vintage digital equipment and relishing devices with quirky designs or those that represent evolutionary dead ends. Everything from Sigma’s boxy cameras with exotic sensors to Olympus’ weird, early DSLRs based on a short-lived lens system gets attention in their videos. It’s clear that although many vintage enthusiasts prefer the simple, compact nature of a point-and-shoot camera, the overall digicam trend has increased interest in digital imaging’s many forms.

Digital archeology

The digital photography revolution that occurred around the turn of the century saw a Cambrian explosion of different types and designs of cameras. Sony experimented with swiveling two-handers that could pass for science fiction zap guns, and had cameras that wrote JPEGs to floppy disks and CDs. Minolta created modular cameras that could be decoupled, the optics tethered to the LCD body with a cord, like photographic nunchaku. “There are a lot of brands that are much less well known,” says Lee. “And in the early 2000s in particular, it was really like the Wild West.”

Today’s enthusiasts spelunking into the digital past are encountering challenges related to the passage of time, with some brands no longer offering firmware updates, drivers, or PDF copies of manuals for these old models. In many cases, product news and reviews sites are the only reminder that some cameras ever existed. But many of those sites have fallen off the internet entirely.

“Steve’s Digicams went offline,” says O’Keefe in reference to the popular camera news site, which shut down after its founder, Steve Sanders, died in 2017. “It was tragic because it had so much information.”

“Our interests naturally align with archaeology,” says Sofia Lee. “A lot of us were around when the cameras were made. But there were a number of events in the history of digicams where an entire line of cameras just massively died off. That’s something that we are constantly confronted with.”

Hocus focus

YouTubers like Warner and O’Keefe helped raise interest in cameras with charge-coupled device (CCD) technology, an older type of imaging sensor that fell out of use around 2010. CCD-based cameras have developed a cult following, and certain models have retained their value surprisingly well for their age. Fans liken the results of CCD captures to shooting film without the associated hassle or cost. While the digicam faithful have shown that older cameras can yield pleasing results, there’s no guaranteed “CCD magic” sprinkled on those photos.

“[I] think I’ve maybe unfortunately been one of the ones to make it sound like CCD sensors in and of themselves are making the colors different,” says Warner, who makes classic digital camera videos on his channel Snappiness.

“CCDs differ from [newer] CMOS sensors in the layout of their electronics but at heart they’re both made up of photosensitive squares of silicon behind a series of color filters from which color information about the scene can be derived,” says Richard Butler, managing editor at DPReview. (Disclosure: I worked at DPReview as a part-time editor in 2022 and 2023.) DPReview, in its 25th year, is a valuable library of information about old digital cameras, and an asset to vintage digital obsessives.

“I find it hard to think of CCD images as filmlike, but it’s fair to say that the images of cameras from that time may have had a distinct aesthetic,” Butler says. “As soon as you have an aesthetic with which an era was captured, there’s a nostalgia about that look. It’s fair to say that early digital cameras inadvertently defined the appearance of contemporary photos.”

There’s one area where old CCD sensors can show a difference: They don’t capture as much light and dark information as other types of sensors, and therefore the resulting images can have less detail in the shadows and highlights. A careful photographer can get contrasty, vibrant images with a different, yet still digital, vibe. Digicam photographer Jermo Swaab says he prefers “contrasty scenes and crushed blacks … I yearn for images that look like a memory or retro-futuristic dream.”

Modern photographs, by default, are super sharp, artificially vibrant, with high dynamic range that makes the image pop off the screen. In order to get the most out of a tiny sensor and lens, smartphones put shots through a computationally intense pipeline of automated editing, quickly combining multiple captures to extract every fine detail possible, and eradicate pesky noise. Digital cameras shoot a single image at a time by default. Especially with older, lower resolution digital cameras, this can give images a noisier, dreamier appearance that digicam fans love.
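For the curious, the noise advantage of multi-frame capture is easy to demonstrate. Here is a minimal sketch, not any phone vendor’s actual pipeline, that simulates averaging several noisy exposures of the same flat scene:

```python
# Toy illustration of why merging several exposures looks cleaner than a
# single shot. This is naive frame averaging, not any phone's real pipeline.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)   # a flat gray "scene"
noise_sigma = 0.1                  # assumed per-frame sensor noise

def capture(n_frames: int) -> np.ndarray:
    """Average n noisy exposures of the same scene."""
    frames = scene + rng.normal(0, noise_sigma, size=(n_frames, *scene.shape))
    return frames.mean(axis=0)

print(f"single-frame noise: {capture(1).std():.3f}")
print(f"8-frame merge noise: {capture(8).std():.3f}")  # ~1/sqrt(8) as much
```

Averaging eight frames cuts the random noise by roughly the square root of eight, which is a big part of why phone shots look so clean, and why a single-exposure digicam shot does not.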

“If you take a picture with your smartphone, it’s automatically HDR. And we’re just used to that today but that’s not at all how cameras have worked in the past,” Warner says. Ali O’Keefe agrees, saying that “especially as we lean more and more into AI where everything is super polished to the point of hyperreal, digicams are crappy, and the artifacts and the noise and the lens imperfections give you something that is not replicable.”

Lee also is chasing unique, noisy photos from compact cameras with small sensors: “I actually always shoot at max ISO, which is the opposite of how I think people shot their cameras back in the day. I’m curious about finding the undesirable aspects of it and [getting] aesthetic inspiration from the undesirable aspects of a camera.”

Her favorite Kyocera camera is known for its high-quality build and noisy pics. She describes it as “all metal, like a briefcase,” of the sort that Arnold Schwarzenegger carries in Total Recall. “These cameras are considered legendary in the experimental scene,” she says of the Kyocera. “The unique thing about the Finecam S3 is that it produces a diagonal noise pattern.”

A time to buy, a time to sell

The gold rush for vintage digital gear has, unsurprisingly, led to rising prices on the resale market. What was once a niche for oddballs and collectors has become a potential goldmine, driven by all that social media hype.

“The joke is that when someone makes a video about a camera, the price jumps,” says Warner. “I’ve actually tracked that using eBay’s Terapeak sale monitoring tool where you can see the history of up to two years of sales for a certain search query. There’s definitely strong correlation to a [YouTube] video’s release and the price of that item going up on eBay in certain situations.”

“It is kind of amazing how hard it is to find things now,” laments O’Keefe. “I used to be able to buy [Panasonic] LX3s, one of my favorite point and shoots of all time, a dime a dozen. Now they’re like 200 bucks if you can find a working one.”

O’Keefe says she frequently interacts with social media users who went online looking for their dream camera only to have gotten scammed. “A person who messaged me this morning was just devastated,” she says. “Scams are rampant now because they’ve picked up on this market being sort of a zeitgeist thing.” She recommends sticking with sellers on platforms that have clear protections in place for dealing with scams and fraud, like eBay. “I have never had an issue getting refunded when the item didn’t work.”

Even when dealing with a trustworthy seller, vintage digital camera collecting is not for the faint of heart. “If I’m interested in a camera, I make sure that the batteries are still made because some are no longer in production,” says O’Keefe. She warns that even if a used camera comes with its original batteries, those cells will most likely not hold a charge.

When there are no new batteries to be had, Sofia Lee and her cohort have resuscitated vintage cameras using modern tech: “With our Kyoceras, one of the biggest issues is the batteries are no longer in production and they all die really quickly. What we ended up doing is using 5V DC cables that connect them to USB, then we shoot them tethered to a power bank. So if you see someone shooting with a Kyocera, they’re almost always holding the power bank and a digicam in their other hand.”

And then there’s the question of where to store all those JPEGs. “A lot of people don’t think about memory card format, so that can get tricky,” cautions Warner. Many vintage cameras use the CompactFlash format, and those are still widely supported. But just as many digicams use deprecated storage formats like Olympus’s xD or Sony’s MemoryStick. “They don’t make those cards anymore,” Warner says. “Some of them have adapters you can use but some [cameras] don’t work with the adapters.”

Even if the batteries and memory cards get sorted out, Sofia Lee underscores that every piece of vintage equipment has an expiration date. “There is this looming threat, when it comes to digicams—this is a finite resource.” Like with any other vintage tech, over time, capacitors go bad, gears break, sensors corrode, and, in some circumstances, rubber grips devulcanize back into a sticky goo.

Lee’s beloved Kyoceras are one such victim of the ravages of time. “I’ve had 15 copies pass through my hands. Around 11 of them were dead on arrival, and three died within a year. That means I have one left right now. It’s basically a special occasions-only camera, because I just never know when it’s going to die.”

These photographers have learned that it’s sometimes better to move on from a potential ticking time bomb, especially if the device is still in demand. O’Keefe points to the Epson R-D1 as an example. This digital rangefinder from printer-maker Epson, with gauges on the top made by Epson’s watchmaking arm Seiko, was originally sold as a Leica alternative, but now it fetches Leica-like premium prices. “I actually sold mine a year and a half ago,” she says. “I loved it, it was beautiful. But there’s a point for me, where I can see that this thing is certainly going to die, probably in the next five years. So I did sell that one, but it is such an awesome experience to shoot. Cause what other digital camera has a lever that actually winds the shutter?”

#NoBadCameras

For a group of people with a recent influx of newbies, the digicam community seems to be adjusting well. Sofia Lee says the growing popularity of digicams is an opportunity to meet new collaborators in a field where it used to be hard to connect with like-minded folks. “I love that there are more people interested in this, because when I was first getting into it I was considered totally crazy,” she says.

Despite the definition of digicam morphing to include a wider array of cameras, Lee seems to be accepting of all comers. “I’m rather permissive in allowing people to explore what they consider is right,” says Lee. While not every camera is “right” for every photographer, many of them agree on one thing: Resurrecting used equipment is a win for the planet, and a way to resist the constant upgrade churn of consumer technology.

“It’s interesting to look at what is considered obsolete,” Lee says. “From a carbon standpoint, the biggest footprint is at the moment of manufacture, which means that every piece of technology has this unfulfilled potential.” O’Keefe agrees: “I love it from an environmental perspective. Do we really need to drive waste [by releasing] a new camera every few months?”

For James Warner, part of the appeal is using lower-cost equipment that more people can afford. And with that lower cost of entry comes easier access to the larger creator community. “With some clubs you’re not invited if you don’t have the nice stuff,” he says. “But they feel welcome and like they can participate in photography on a budget.”

O’Keefe has even coined the hashtag #NoBadCameras. She believes all digicams have unique characteristics, and that if a curious photographer just takes the time to get to know the device, it can deliver good results. “Don’t be precious about it,” she says. “Just pick something up, shoot it, and have fun.”

This story originally appeared on wired.com.




Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.

I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.

It felt like there had to be more to this. And there was—a whole GitHub repository more.

The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.

Hendrik Kleinwächter on his Bread Code channel, explaining his book.

Resources, not recipes

The Bread Code is centered around a book, The Sourdough Framework. It’s an open source codebase that self-compiles into new LaTeX book editions and is free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.

“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.

I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”

And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.

I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.
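As a flavor of that bookkeeping, here is a minimal baker’s-percentage sketch in Python; the formula below is my own illustrative example, not a recipe from The Sourdough Framework:

```python
# A minimal baker's-percentage sketch; the formula below is my own example,
# not a recipe from The Sourdough Framework. Every ingredient is expressed
# relative to total flour weight.
flour_g = 500
formula = {
    "water":   0.70,  # 70% hydration
    "starter": 0.20,  # 20% inoculation
    "salt":    0.02,  # 2% salt
}

for ingredient, pct in formula.items():
    print(f"{ingredient:>8}: {flour_g * pct:>5.0f} g ({pct:.0%})")
```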

I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.

Sometimes weird scoring looks pretty neat. Credit: Kevin Purdy

The slow bread of childhood dreams

“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”

Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.

“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.

Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour on attempts that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.

Bugs and syntax errors in baking

When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.

As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.

Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.

When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:

  • I should recognize, first off, that I was trying to bake the hardest kind of bread: Freestanding wheat-based sourdough
  • You have to watch—and smell—your starter to make sure it has the right mix of yeast to bacteria before you use it
  • Using less starter (lower “inoculation”) would make it easier not to over-ferment
  • Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube
  • Winter and summer are very different dough timings, even with modern indoor climate control.

But I kept with it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. This ironically leads to the flattest results, as the bacteria eat away all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.

The Bread Code YouTube page and the ways in which one must cater to algorithms. Credit: The Bread Code

YouTube faces and TikTok sausage

Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.

For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”

Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.

The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves, and the repository gets occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.

“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn. Credit: Kevin Purdy

You gotta keep doing it—that’s the hard part

I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.

But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.



Found in the wild: The world’s first unkillable UEFI bootkit for Linux

Over the past decade, a new class of infections has threatened Windows users. By infecting the firmware that runs immediately before the operating system loads, these UEFI bootkits continue to run even when the hard drive is replaced or reformatted. Now the same type of chip-dwelling malware has been found in the wild for backdooring Linux machines.

Researchers at security firm ESET said Wednesday that Bootkitty—the name unknown threat actors gave to their Linux bootkit—was uploaded to VirusTotal earlier this month. Compared to its Windows cousins, Bootkitty is still relatively rudimentary, containing imperfections in key under-the-hood functionality and lacking the means to infect Linux distributions other than Ubuntu. That has led the company’s researchers to suspect the new bootkit is likely a proof-of-concept release. To date, ESET has found no evidence of actual infections in the wild.

The ASCII logo that Bootkitty is capable of rendering. Credit: ESET

Be prepared

Still, Bootkitty suggests threat actors may be actively developing a Linux version of the same sort of unkillable bootkit that previously was found only targeting Windows machines.

“Whether a proof of concept or not, Bootkitty marks an interesting move forward in the UEFI threat landscape, breaking the belief about modern UEFI bootkits being Windows-exclusive threats,” ESET researchers wrote. “Even though the current version from VirusTotal does not, at the moment, represent a real threat to the majority of Linux systems, it emphasizes the necessity of being prepared for potential future threats.”

A rootkit is a piece of malware that runs in the deepest regions of the operating system it infects. It leverages this strategic position to hide information about its presence from the operating system itself. A bootkit, meanwhile, is malware that infects the boot-up process in much the same way. Bootkits for the UEFI—short for Unified Extensible Firmware Interface—lurk in the chip-resident firmware that runs each time a machine boots. These sorts of bootkits can persist indefinitely, providing a stealthy means for backdooring the operating system even before it has fully loaded and enabled security defenses such as antivirus software.
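For Linux admins wondering where this class of malware lives, here is a minimal, illustrative Python sketch—not a detection tool, and not based on ESET’s indicators—that checks the Secure Boot state and lists the EFI binaries on a conventionally mounted EFI System Partition. Both paths are assumptions and vary by distribution:

```python
# Illustrative sketch only (Linux, may need root): check Secure Boot state
# and list EFI binaries on the boot partition, the layer a UEFI bootkit
# targets. Paths are assumptions and vary by distribution.
from pathlib import Path

SECUREBOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)
ESP_MOUNT = Path("/boot/efi")  # assumed EFI System Partition mount point

if SECUREBOOT_VAR.exists():
    # efivarfs files hold 4 bytes of attributes followed by the value byte.
    enabled = SECUREBOOT_VAR.read_bytes()[-1] == 1
    print(f"Secure Boot enabled: {enabled}")
else:
    print("No Secure Boot variable found (legacy BIOS, or efivarfs not mounted)")

if ESP_MOUNT.exists():
    for efi_binary in sorted(ESP_MOUNT.rglob("*.efi")):
        print(efi_binary)  # unexpected entries here deserve a closer look
```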

The bar for installing a bootkit is high. An attacker first must gain administrative control of the targeted machine, either through physical access while it’s unlocked or somehow exploiting a critical vulnerability in the OS. Under those circumstances, attackers already have the ability to install OS-resident malware. Bootkits, however, are much more powerful since they (1) run before the OS does and (2) are, at least practically speaking, undetectable and unremovable.



FCC approves Starlink plan for cellular phone service, with some limits

Eliminating cellular dead zones

Starlink says it will offer texting service this year as well as voice and data services in 2025. Starlink does not yet have FCC approval to exceed certain emissions limits, which the company has said will be detrimental to real-time voice and video communications.

For the operations approved yesterday, Starlink is required to coordinate with other spectrum users and cease transmissions when any harmful interference is detected. “We hope to activate employee beta service in the US soon,” wrote Ben Longmier, SpaceX’s senior director of satellite engineering.

Longmier made a pitch to cellular carriers. “Any telco that signs up with Starlink Direct to Cell can completely eliminate cellular dead zones for their entire country for text and data services. This includes coastal waterways and the ocean areas in between land for island nations,” he wrote.

Starlink launched its first satellites with cellular capabilities in January 2024. “Of the more than 2,600 Gen2 Starlink satellites in low Earth orbit, around 320 are equipped with a direct-to-smartphone payload, enough to enable the texting services SpaceX has said it could launch this year,” SpaceNews wrote yesterday.

Yesterday’s FCC order also lets Starlink operate up to 7,500 second-generation satellites at altitudes between 340 km and 360 km, in addition to the previously approved altitudes between 525 km and 535 km. SpaceX is seeking approval for another 22,488 satellites, but the FCC continued to defer action on that request. The FCC order said:

Authorization to permit SpaceX to operate up to 7,500 Gen2 satellites in lower altitude shells will enable SpaceX to begin providing lower-latency satellite service to support growing demand in rural and remote areas that lack terrestrial wireless service options. This partial grant also strikes the right balance between allowing SpaceX’s operations at lower altitudes to provide low-latency satellite service and permitting the Commission to continue to monitor SpaceX’s constellation and evaluate issues previously raised on the record.

Coordination with NASA

SpaceX is required to coordinate “with NASA to ensure protection of the International Space Station (ISS), ISS visiting vehicles, and launch windows for NASA science missions,” the FCC said. “SpaceX may only deploy and operate at altitudes below 400 km the total number of satellites for which it has completed physical coordination with NASA under the parties’ Space Act Agreement.”



Google’s plan to keep AI out of search trial remedies isn’t going very well


DOJ: AI is not its own market

Judge: AI will likely play “larger role” in Google search remedies as market shifts.

Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.

According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because he is now weighing preventive measures to combat Google’s anticompetitive behavior, the judge wants to hear much more about how each side views AI’s role in Google’s search empire during the remedies stage of litigation than he did during the search trial.

“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”

To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google’s ability to compete, the tech company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has “core relevance” to the remedies litigation, Law360 reported.

The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, which the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.

Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.

The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. It has further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”

Importantly, if the court orders AI remedies linked to Google’s control of Android, Google could risk a forced sale of the operating system: the DOJ has requested “contingent structural relief” that would require divestiture of Android if behavioral remedies fail to end the current monopolies.

Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)

All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.

“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”

At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”

“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.

Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.

During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.

Microsoft shields confidential AI deals

Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.

According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”

These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.

Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.

If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.

On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.

It remains unclear at this time if OpenAI will be forced to give Google the click-and-query data Google seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.

But the DOJ may also be interested in seeing that data. In their proposed final judgment, the government forecasted that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”

Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.

Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.

On Tuesday, the judge noted that integration of AI into search engines had already evolved what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.

Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



After telling Cadillac to pound sand, F1 does 180, grants entry for 2026

The United States will have a second team competing in Formula 1 from 2026, when Cadillac Formula 1 will join the sport as its 11th team. The result is a complete 180 for the sport’s owner, which was highly resistant to the initial bid, first announced at the beginning of 2023.

“As the pinnacle of motorsports, F1 demands boundary-pushing innovation and excellence. It’s an honor for General Motors and Cadillac to join the world’s premier racing series, and we’re committed to competing with passion and integrity to elevate the sport for race fans around the world,” said GM President Mark Reuss. “This is a global stage for us to demonstrate GM’s engineering expertise and technology leadership at an entirely new level.”

Team first, engines later

We will have to wait until 2028 to see that full engineering potential on display. Even with the incoming changes to the technical regulations, it’s far more than the work of a minute to develop a new F1 hybrid powertrain, let alone a competitive package. Audi has been working on its F1 powertrain since at least 2023, as has Red Bull, which decided to make its internal combustion engine in-house, like Ferrari or Mercedes, with partner Ford providing the electrification.

GM’s decision to throw Cadillac’s hat into the ring came with the caveat that its powertrain wouldn’t be ready until 2028—two years after it actually wants to enter the sport. That means for 2026 and 2027, Cadillac F1 will use customer engines from another manufacturer, in this case Ferrari. From 2028, we can expect a GM-designed V6 hybrid under Cadillac F1’s engine covers.

As McLaren has demonstrated this year, customer powertrains are no impediment to success, and Alpine (née Renault) is going so far as to give up its own in-house powertrain program in favor of customer engines (and most likely a for-sale sign, as the French automaker looks set to walk away from the sport once again).



NASA awards SpaceX a contract for one of the few things it hasn’t done yet

Notably, the Dragonfly launch was one of the first times United Launch Alliance has been eligible to bid its new Vulcan rocket for a NASA launch contract. NASA officials gave the green light for the Vulcan rocket to compete head-to-head with SpaceX’s Falcon 9 and Falcon Heavy after ULA’s new launcher had a successful debut launch earlier this year. With this competition, SpaceX came out on top.

A half-life of 88 years

NASA’s policy for new space missions is to use solar power whenever possible. For example, Europa Clipper was originally supposed to use a nuclear power generator, but engineers devised a way for the spacecraft to use expansive solar panels to capture enough sunlight to produce electricity, even at Jupiter’s vast distance from the Sun.

But there are some missions where this isn’t feasible. One of these is Dragonfly, which will soar through the soupy nitrogen-methane atmosphere of Titan. Saturn’s largest moon is shrouded in cloud cover, and Titan is nearly 10 times farther from the Sun than Earth, so its surface is comparatively dim.

The Dragonfly mission, seen here in an artist’s concept, is slated to launch no earlier than 2027 on a mission to explore Saturn’s moon Titan. Credit: NASA/JHUAPL/Steve Gribben

Dragonfly will launch with about 10.6 pounds (4.8 kilograms) of plutonium-238 to fuel its power generator. Plutonium-238 has a half-life of 88 years. With no moving parts, RTGs have proven quite reliable, powering spacecraft for many decades. NASA’s twin Voyager probes are approaching 50 years since launch.
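A quick back-of-the-envelope calculation shows why that half-life suits a mission measured in decades. The sketch below assumes simple exponential decay of the heat source; a real RTG’s electrical output falls somewhat faster because its thermocouples also degrade:

```python
# Back-of-the-envelope decay math, assuming simple exponential decay of the
# plutonium-238 heat source (real RTG electrical output declines faster
# because the thermocouples also age).
HALF_LIFE_YEARS = 87.7  # commonly cited half-life of Pu-238

def remaining_fraction(years: float) -> float:
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (6, 10, 20):  # cruise to Titan plus possible surface operations
    print(f"after {t:>2} years: {remaining_fraction(t):.1%} of initial heat output")
```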

The Dragonfly rotorcraft will launch cocooned inside a transit module and entry capsule, then descend under parachute through Titan’s atmosphere, which is four times denser than Earth’s. Finally, Dragonfly will detach from its descent module and activate its eight rotors to reach a safe landing.

Once on Titan, Dragonfly is designed to hop from place to place on numerous flights, exploring environments rich in organic molecules, the building blocks of life. This is one of NASA’s most exciting, and daring, robotic missions of all time.

After launching from NASA’s Kennedy Space Center in Florida in July 2028, it will take Dragonfly about six years to reach Titan. When NASA selected the Dragonfly mission to begin development in 2019, the agency hoped to launch the mission in 2026. NASA later directed Dragonfly managers to target a launch in 2027, and then 2028, requiring the mission to change from a medium-lift to a heavy-lift rocket.

Dragonfly has also faced rising costs, which NASA blames on the COVID-19 pandemic, supply chain issues, and an in-depth redesign since the mission’s selection in 2019. Collectively, these issues caused Dragonfly’s total budget to grow to $3.35 billion, more than double its initial projected cost.



We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.
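As a rough illustration of the resonating-chamber idea—this is not Lin’s model—treating the crest’s internal tube as a simple pipe closed at one end gives fundamental frequencies well below 100 Hz for plausible lengths. The lengths below are assumptions for illustration only:

```python
# Rough acoustics, not Lin's model: the fundamental frequency of a pipe
# closed at one end is f = v / (4 * L). Tube lengths below are assumed
# values for illustration; the crest's true effective length is debated.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def closed_pipe_fundamental(length_m: float) -> float:
    return SPEED_OF_SOUND / (4 * length_m)

for length in (1.5, 2.0, 3.0):  # assumed effective tube lengths, in meters
    print(f"L = {length} m  ->  ~{closed_pipe_fundamental(length):.0f} Hz")
```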

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 to 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.



Android will soon instantly log you in to your apps on new devices

If you lose your iPhone or buy an upgrade, you could reasonably expect to be up and running after an hour, presuming you backed up your prior model. Your Apple stuff all comes over, sure, but most of your third-party apps will still be signed in.

Doing the same swap with an Android device is more akin to starting three-quarters fresh. After one or two Android phones, you learn to bake in an extra hour of rapid-fire logging in to all your apps. Password managers, or just using a Google account as your authentication, are a godsend.

That might change relatively soon, as Google has announced a new Restore Credentials feature, which should do what it says in the name. Android apps can “seamlessly onboard users to their accounts on a new device,” with the restore keys handled by Android’s native backup and restore process. The experience, says Google, is “delightful” and seamless. You can even get the same notifications on the new device as you were receiving on the old.



Qubit that makes most errors obvious now available to customers


Can a small machine that makes error correction easier upend the market?

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We’re nearing the end of the year, and there are typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What’s a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current. But there are other companies that have hardware that does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing a photon to move from one resonator to the other. Using the superconducting loops, it’s possible to control the probability that the photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it’s measured, allowing the system as a whole to hold a single bit of quantum information—a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that’s easily detected. “It’s about 90 percent or more [of the errors],” said Quantum Circuits’ Andrei Petrenko. “So it’s a huge advantage that we have with photon loss over other errors. And that’s actually what makes the error correction a lot more efficient: The fact that photon losses are by far the dominant error.”

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it’s possible to determine if there is an odd number of photons in the hardware. If that isn’t the case, you know an error has occurred—most likely a photon loss (gains of photons are rare but do occur). For simple algorithms, this would be a signal to simply start over.

But it does not eliminate the need for error correction if we want to do more complex computations that can’t make it to completion without encountering an error. There’s still the remaining 10 percent of errors, which are primarily phase flips, a type of error unique to quantum systems. Bit flips are even rarer in dual-rail setups. Finally, simply knowing that a photon was lost doesn’t tell you everything you need to know to recover; error-correction measurements of other parts of the logical qubit are still needed to fix any problems.
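
To make the error budget concrete, here’s a toy Kotlin sketch of the detection logic described above. It is my illustration, not Quantum Circuits’ control software, and every name in it is invented; the only number taken from the article is Petrenko’s rough 90/10 split between photon losses and other errors.

```kotlin
import kotlin.random.Random

// Toy model of a dual-rail qubit: one photon that should sit in either the
// left or right resonator. Losing the photon leaves zero photons, which a
// non-destructive photon-number check can flag. A phase flip leaves the
// photon count intact, so it sails straight past that check.
data class ToyDualRailQubit(var photons: Int = 1, var phaseFlipped: Boolean = false)

// Inject one fault, using the rough 90/10 split quoted by Petrenko.
fun injectFault(q: ToyDualRailQubit, rng: Random) {
    if (rng.nextDouble() < 0.9) q.photons = 0   // photon loss (~90% of errors)
    else q.phaseFlipped = true                  // phase flip (~10% of errors)
}

// The hardware-level question: "is there still exactly one photon in the pair?"
fun photonCheckFlagsError(q: ToyDualRailQubit) = q.photons != 1

fun main() {
    val rng = Random(42)
    val trials = 100_000
    var flagged = 0
    repeat(trials) {
        val q = ToyDualRailQubit()
        injectFault(q, rng)
        if (photonCheckFlagsError(q)) flagged++
    }
    // Roughly 90 percent of faults get flagged as detectable losses; the
    // remaining phase flips are invisible here and still need error correction.
    println("Flagged ${100.0 * flagged / trials} percent of injected faults")
}
```

The point of the exercise is that most faults announce themselves, which is exactly what makes the error correction built on top of this hardware cheaper.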

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that’s being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections so that the machine can host a single, error-corrected logical qubit. Put differently, this machine is meant to let people learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with the early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn’t alone in planning a relatively late entry into the market with hardware that only hosts a few qubits.

Having talked with several of them, I can say there is a logic to what they’re doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company’s case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren’t mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics. These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. So, if you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that trap an atom can be designed so that they’re straightforward to replicate thousands of times over. These are mostly engineering problems, generally of similar complexity to the ones we’ve already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It’s still going to be extremely difficult to get the millions of hardware qubits we’d need to error correct complex algorithms on today’s hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
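
To put rough numbers on that trade-off, here’s a small Kotlin calculation using the textbook surface-code heuristic, in which the logical error rate scales as (p/p_th) raised to the power (d+1)/2 and each logical qubit needs roughly 2d² hardware qubits. None of the companies discussed here necessarily use a surface code, and the constants below are common illustrative values rather than anyone’s measured data, but the shape of the result is the point: a modestly lower hardware error rate slashes the number of qubits you have to build.

```kotlin
import kotlin.math.ceil
import kotlin.math.ln

// Surface-code-style estimate (illustrative only): logical error per round
// ~ A * (p / pTh)^((d + 1) / 2), with ~2 * d^2 hardware qubits per logical qubit.
// A = 0.1 and threshold pTh = 1e-2 are textbook ballpark values, not real data.
fun hardwareQubitsPerLogical(p: Double, targetLogicalError: Double,
                             a: Double = 0.1, pTh: Double = 1e-2): Int {
    val exponent = ln(targetLogicalError / a) / ln(p / pTh)  // = (d + 1) / 2
    var d = ceil(2 * exponent - 1).toInt().coerceAtLeast(3)
    if (d % 2 == 0) d += 1                                   // code distances are odd
    return 2 * d * d
}

fun main() {
    val target = 1e-9  // illustrative per-round logical error target
    for (p in listOf(5e-3, 1e-3, 1e-4)) {
        println("hardware error rate $p -> ~${hardwareQubitsPerLogical(p, target)} qubits per logical qubit")
    }
    // With these assumptions, dropping the hardware error rate from 0.5% to
    // 0.1% cuts the overhead per logical qubit by more than a factor of 10.
}
```

Under those admittedly crude assumptions, the difference between a mediocre and a good hardware error rate is the difference between needing thousands of hardware qubits per logical qubit and needing a few hundred, which is the whole argument for tackling errors before scale.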

Errors first

And there’s evidence that even the early entries in quantum computing have reasoned the same way. Google has been iterating on the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000-qubit mark a major goal but has since focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn’t much point in doing so given the error rates of the qubits on the then-current generation of machines.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall—and one that they may already have the pieces in place to handle. Quantum Circuits’ Petrenko, for example, told Ars, “I think that we’re at the point where we’ve gone through a number of iterations of this qubit architecture where we’ve de-risked a number of the engineering roadblocks.” And Oxford Ionics told us that once it can make the electronics it uses to trap ions in its hardware, mass manufacturing them would be easy.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing’s development, we’re still seeing startups enter the field.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Qubit that makes most errors obvious now available to customers Read More »

automatic-braking-systems-save-lives-now-they’ll-need-to-work-at-62-mph.

Automatic braking systems save lives. Now they’ll need to work at 62 mph.

Otherwise, drivers will get mad. “The mainstream manufacturers have to be a little careful because they don’t want to create customer dissatisfaction by making the system too twitchy,” says Brannon of AAA. Tesla drivers, for example, have proven very tolerant of “beta testing” and quirks. Your average driver, maybe less so.

Based on its own research, IIHS has pushed automakers to install AEB systems able to operate at higher speeds on their cars. Kidd says IIHS research suggests there have been no systemic, industry-wide safety issues with automatic emergency braking. Fewer and fewer drivers seem to be turning off their AEB systems out of annoyance. (The new rules make it so drivers can’t turn them off.) But US regulators have investigated a handful of automakers, including General Motors and Honda, over automatic emergency braking issues that have reportedly injured more than 100 people, though the automakers involved have reportedly since fixed those problems.

New complexities

Getting cars to fast-brake at even higher speeds will require a series of tech advances, experts say. AEB works by bringing in data from sensors. That information is then turned over to automakers’ custom-tuned classification systems, which are trained to recognize certain situations and road users—a stopped car in the middle of the road ahead, say, or a person walking across the road—and intervene.

So to get AEB to work in higher-speed situations, the tech will have to “see” farther down the road. Most of today’s new cars come loaded with sensors, including cameras and radar, which can collect vital data. But the auto industry trade group argues that the Feds have underestimated the amount of new hardware—including, possibly, more expensive lidar units—that will have to be added to cars.
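
Some back-of-the-envelope math shows why range matters so much. Assuming roughly 0.9 g of braking and about 0.3 seconds of combined detection and actuation delay (my assumptions, not figures from the rule), the minimal Kotlin calculation below suggests a car needs on the order of 50 meters to stop from 62 mph, versus roughly 30 meters from 45 mph, so the sensors and classifiers have to pick out a hazard much earlier.

```kotlin
// Rough stopping-distance estimate for AEB at different speeds.
// Assumptions (illustrative, not regulatory): ~0.9 g of braking and ~0.3 s of
// combined detection and actuation latency before the brakes bite.
fun stoppingDistanceMeters(speedMph: Double, decelG: Double = 0.9, latencySec: Double = 0.3): Double {
    val v = speedMph * 0.44704               // mph -> m/s
    val a = decelG * 9.81                    // deceleration in m/s^2
    return v * latencySec + v * v / (2 * a)  // latency distance + braking distance
}

fun main() {
    listOf(45.0, 62.0).forEach { mph ->
        println("From %.0f mph: ~%.0f m to stop".format(mph, stoppingDistanceMeters(mph)))
    }
    // Roughly 29 m at 45 mph vs. 52 m at 62 mph under these assumptions.
}
```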

Brake makers will have to tinker with components to allow quicker stops, which will require the pressurized fluid that moves through a brake’s hydraulic lines to go even faster. Allowing cars to detect hazards at greater distances could require different types of hardware, including sometimes-expensive sensors. “Some vehicles might just need a software update, and some might not have the right sensor suite,” says Bhavana Chakraborty, an engineering director at Bosch, an automotive supplier that builds safety systems. Those without the right hardware will need updates “across the board,” she says, to get to the levels of safety demanded by the federal government.

Automatic braking systems save lives. Now they’ll need to work at 62 mph. Read More »