Author name: Kelly Newman

FLUX: This new AI image generator is eerily good at creating human hands

five-finger salute —

FLUX.1 is the open-weights heir apparent to Stable Diffusion, turning text into images.

AI-generated image by FLUX.1 dev: “A beautiful queen of the universe holding up her hands, face in the background.”

On Thursday, AI startup Black Forest Labs announced its launch and the release of its first suite of text-to-image AI models, called FLUX.1. The Germany-based company, founded by researchers who developed the technology behind Stable Diffusion and invented the latent diffusion technique, aims to create advanced generative AI for images and videos.

The launch of FLUX.1 comes about seven weeks after Stability AI’s troubled release of Stable Diffusion 3 Medium in mid-June. Stability AI’s offering faced widespread criticism among image-synthesis hobbyists for its poor performance in generating human anatomy, with users sharing examples of distorted limbs and bodies across social media. That problematic launch followed the earlier departure of three key engineers from Stability AI—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—who went on to found Black Forest Labs along with latent diffusion co-developer Patrick Esser and others.

Black Forest Labs launched with the release of three FLUX.1 text-to-image models: a high-end commercial “pro” version, a mid-range “dev” version with open weights for non-commercial use, and a faster open-weights “schnell” version (“schnell” means quick or fast in German). Black Forest Labs claims its models outperform existing options like Midjourney and DALL-E in areas such as image quality and adherence to text prompts.

  • AI-generated image by FLUX.1 dev: “A close-up photo of a pair of hands holding a plate full of pickles.”

  • AI-generated image by FLUX.1 dev: A hand holding up five fingers with a starry background.

  • AI-generated image by FLUX.1 dev: “An Ars Technica reader sitting in front of a computer monitor. The screen shows the Ars Technica website.”

  • AI-generated image by FLUX.1 dev: “a boxer posing with fists raised, no gloves.”

  • AI-generated image by FLUX.1 dev: “An advertisement for ‘Frosted Prick’ cereal.”

  • AI-generated image by FLUX.1 dev of a happy woman in a bakery baking a cake.

  • AI-generated image by FLUX.1 dev: “An advertisement for ‘Marshmallow Menace’ cereal.”

  • AI-generated image by FLUX.1 dev: “A handsome Asian influencer on top of the Empire State Building, instagram.”

In our experience, the outputs of the two higher-end FLUX.1 models are generally comparable with OpenAI’s DALL-E 3 in prompt fidelity, with photorealism that seems close to Midjourney 6. They represent a significant improvement over Stable Diffusion XL, the team’s last major release under Stability (if you don’t count SDXL Turbo).

The FLUX.1 models use what the company calls a “hybrid architecture” combining transformer and diffusion techniques, scaled up to 12 billion parameters. Black Forest Labs said it improves on previous diffusion models by incorporating flow matching and other optimizations.
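
For readers curious what “flow matching” means in practice, here is a toy sketch of the training objective that rectified-flow models of this family build on. This is an illustration of the published technique, not Black Forest Labs’ actual code, and the `model(xt, t)` signature is hypothetical:

```python
# Toy (conditional) flow-matching objective: the network learns the velocity
# field that carries Gaussian noise to data along straight-line paths.
# Illustrative only; `model(xt, t)` is a hypothetical velocity predictor.
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x0):
    """x0: a (batch, dim) tensor of clean training samples."""
    noise = torch.randn_like(x0)      # sample from the source distribution
    t = torch.rand(x0.shape[0], 1)    # random time in [0, 1] per sample
    xt = (1 - t) * x0 + t * noise     # point on the straight path at time t
    target_velocity = noise - x0      # time derivative of that path
    return F.mse_loss(model(xt, t), target_velocity)
```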

FLUX.1 seems competent at generating human hands, which was a weak spot in earlier image-synthesis models like Stable Diffusion 1.5 due to a lack of training images that focused on hands. Since those early days, other AI image generators like Midjourney have mastered hands as well, but it’s notable to see an open-weights model that renders hands relatively accurately in various poses.

We downloaded the weights file for the FLUX.1 dev model from GitHub, but at 23GB, it won’t fit in the 12GB of VRAM on our RTX 3060 card. Running it locally will require quantization, which reduces the model’s size, and chatter on Reddit suggests some people have already had success with that.
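
For the curious, here is a minimal sketch of what a VRAM-constrained local setup might look like. It assumes Hugging Face’s diffusers library has FluxPipeline support for these weights; the offload call and generation settings are illustrative, not settings we tested:

```python
# Sketch: running FLUX.1 dev on a GPU with limited VRAM.
# Assumes diffusers (>=0.30) with FluxPipeline support for this repo.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # gated repo; accepting the license is required
    torch_dtype=torch.bfloat16,
)
# Stream submodules through the GPU instead of holding all 23GB at once;
# much slower, but it avoids exhausting a 12GB card without quantization.
pipe.enable_sequential_cpu_offload()

image = pipe(
    "A close-up photo of a pair of hands holding a plate full of pickles",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_hands.png")
```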

Instead, we experimented with FLUX.1 models on AI cloud-hosting platforms Fal and Replicate, which cost money to use, though Fal offers some free credits to start.
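
Going the hosted route takes only a few lines. Here is a hedged sketch using Replicate’s Python client; the model slug and input schema are assumptions based on the platform’s usual conventions, so check the model page for the authoritative names:

```python
# Sketch: generating an image via a hosted FLUX.1 endpoint on Replicate.
# The slug "black-forest-labs/flux-dev" and the input fields are assumptions.
import replicate  # requires REPLICATE_API_TOKEN in the environment

output = replicate.run(
    "black-forest-labs/flux-dev",
    input={"prompt": "A hand holding up five fingers with a starry background"},
)
print(output)  # typically a URL (or list of URLs) for the generated image
```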

Black Forest looks ahead

Black Forest Labs may be a new company, but it’s already attracting funding from investors. It recently closed a $31 million Series Seed funding round led by Andreessen Horowitz, with additional investments from General Catalyst and MätchVC. The company also brought on high-profile advisers, including entertainment executive and former Disney President Michael Ovitz and AI researcher Matthias Bethge.

“We believe that generative AI will be a fundamental building block of all future technologies,” the company stated in its announcement. “By making our models available to a wide audience, we want to bring its benefits to everyone, educate the public and enhance trust in the safety of these models.”

  • AI-generated image by FLUX.1 dev: A cat in a car holding a can of beer that reads, ‘AI Slop.’

  • AI-generated image by FLUX.1 dev: Mickey Mouse and Spider-Man singing to each other.

  • AI-generated image by FLUX.1 dev: “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting.”

  • AI-generated image by FLUX.1 dev of a flaming cheeseburger.

  • AI-generated image by FLUX.1 dev: “Will Smith eating spaghetti.”

  • AI-generated image by FLUX.1 dev: “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting. The screen reads ‘Ars Technica.’”

  • AI-generated image by FLUX.1 dev: “An advertisement for ‘Burt’s Grenades’ cereal.”

  • AI-generated image by FLUX.1 dev: “A close-up photo of a pair of hands holding a plate that contains a portrait of the queen of the universe.”

Speaking of “trust and safety,” the company did not mention where it obtained the training data that taught the FLUX.1 models how to generate images. Judging by outputs we produced that included depictions of copyrighted characters, Black Forest Labs likely used a huge unauthorized image scrape of the Internet, possibly collected by LAION, the organization that assembled the datasets used to train Stable Diffusion. This is speculation at this point, but while the underlying technological achievement of FLUX.1 is notable, it seems likely that the team is playing fast and loose with the ethics of “fair use” image scraping, much like Stability AI did. That practice may eventually attract lawsuits like those filed against Stability AI.

Though text-to-image generation is Black Forest’s current focus, the company plans to expand into video generation next, saying that FLUX.1 will serve as the foundation of a new text-to-video model in development, which will compete with OpenAI’s Sora, Runway’s Gen-3 Alpha, and Kuaishou’s Kling in a contest to warp media reality on demand. “Our video models will unlock precise creation and editing at high definition and unprecedented speed,” the Black Forest announcement claims.

AI’s future in grave danger from Nvidia’s chokehold on chips, groups warn

Controlling “the world’s computing destiny” —

Anti-monopoly groups want DOJ to probe Nvidia’s AI chip bundling, alleged price-fixing.

AI’s future in grave danger from Nvidia’s chokehold on chips, groups warn

Sen. Elizabeth Warren (D-Mass.) has joined progressive groups—including Demand Progress, Open Markets Institute, and the Tech Oversight Project—pressuring the US Department of Justice to investigate Nvidia’s dominance in the AI chip market due to alleged antitrust concerns, Reuters reported.

In a letter to the DOJ’s chief antitrust enforcer, Jonathan Kanter, groups demanding more Big Tech oversight raised alarms that Nvidia’s top rivals apparently “are struggling to gain traction” because “Nvidia’s near-absolute dominance of the market is difficult to counter” and “funders are wary of backing its rivals.”

Nvidia is currently “the world’s most valuable public company,” their letter said, worth more than $3 trillion after taking near-total control of the high-performance AI chip market. Particularly “astonishing,” the letter said, was Nvidia’s dominance in the market for GPU accelerator chips, which are at the heart of today’s leading AI. Groups urged Kanter to probe Nvidia’s business practices to ensure that rivals aren’t permanently blocked from competing.

According to the advocacy groups that strongly oppose Big Tech monopolies, Nvidia “now holds an 80 percent overall global market share in GPU chips and a 98 percent share in the data center market.” This “puts it in a position to crowd out competitors and set global pricing and the terms of trade,” the letter warned.

Earlier this year, inside sources reported that the DOJ and the Federal Trade Commission reached a deal where the DOJ would probe Nvidia’s alleged anti-competitive behavior in the booming AI industry, and the FTC would probe OpenAI and Microsoft. But there has been no official Nvidia probe announced, prompting progressive groups to push harder for the DOJ to recognize what they view as a “dire danger to the open market” that “well deserves DOJ scrutiny.”

Ultimately, the advocacy groups told Kanter that they fear Nvidia wielding “control over the world’s computing destiny,” noting that Nvidia’s cloud computing data centers don’t just power “Big Tech’s consumer products” but also “underpin every aspect of contemporary society, including the financial system, logistics, healthcare, and defense.”

They claimed that Nvidia is “leveraging” its “scarce chips” to force customers to buy its “chips, networking, and programming software as a package.” Such bundling and “price-fixing,” their letter warned, appear to be “the same kinds of anti-competitive tactics that the courts, in response to actions brought by the Department of Justice against other companies, have found to be illegal” and could perhaps “stifle innovation.”

Although data from TechInsights suggested that Nvidia’s chip shortage and cost actually helped companies like AMD and Intel sell chips in 2023, both Nvidia rivals reported losses in market share earlier this year, Yahoo Finance reported.

Perhaps monitoring Nvidia’s dominance most closely, France’s antitrust authority launched an investigation into Nvidia last month, the letter said, “making it the first enforcer to act against the computer chip maker,” Reuters reported.

Since then, the European Union and the United Kingdom, as well as the US, have heightened scrutiny, but their apparent lag in following through with official investigations may only embolden Nvidia, as the company allegedly “believes its market behavior is above the law,” the progressive groups wrote. Suspicious behavior includes allegations that “Nvidia has continued to sell chips to Chinese customers and provide them computing access” despite a “Department of Commerce ban on trading with Chinese companies due to national security and human rights concerns.”

“Its chips have been confirmed to be reaching blacklisted Chinese entities,” their letter warned, citing a Wall Street Journal report.

Nvidia’s dominance apparently impacts everyone involved with AI. According to the letter, Nvidia seemingly “determining who receives inventory from a limited supply, setting premium pricing, and contractually blocking customers from doing business with competitors” is “alarming” the entire AI industry. That includes “both small companies (who find their supply choked off) and the Big Tech AI giants.”

Kanter will likely be receptive to the letter. In June, Fast Company reported that Kanter told an audience at an AI conference that there are “structures and trends in AI that should give us pause.” He further suggested that any technology that “relies on massive amounts of data and computing power” can “give already dominant firms a substantial advantage,” according to Fast Company’s summary of his remarks.

Karaoke reveals why we blush

Singing for science —

Volunteers watched their own performances as an MRI tracked brain activity.

A hand holding a microphone against a blurry backdrop, taken from an angle that implies the microphone is directly in front of your face.

Singing off-key in front of others is one way to get embarrassed. Regardless of how you get there, why does embarrassment almost inevitably come with burning cheeks that turn an obvious shade of red (which is possibly even more embarrassing)?

Blushing starts not in the face but in the brain, though exactly where has been debated. Previous thinking often reasoned that the blush reaction was associated with higher socio-cognitive processes, such as thinking of how one is perceived by others.

After studying subjects who watched videos of themselves singing karaoke, however, researchers led by Milica Nicolic of the University of Amsterdam have found that blushing is really the result of specific emotions being aroused.

Nicolic’s findings suggest that blushing “is a consequence of a high level of ambivalent emotional arousal that occurs when a person feels threatened and wants to flee but, at the same time, feels the urge not to give up,” as she and her colleagues put it in a study recently published in Proceedings of the Royal Society B.

Taking the stage

The researchers sought out test subjects who were most likely to blush when watching themselves sing bad karaoke: adolescent girls. Adolescents tend to be much more self-aware and more sensitive to being judged by others than adults are.

The subjects couldn’t pick just any song. Nicolic and her team gave them a choice of four songs that music experts had deemed difficult to sing: “Hello” by Adele, “Let It Go” from Frozen, “All I Want for Christmas Is You” by Mariah Carey, and “All the Things She Said” by t.A.T.u. Videos of the subjects were recorded as they sang.

On their second visit to the lab, subjects were put in an MRI scanner and were shown videos of themselves and others singing karaoke. They watched 15 video clips of themselves singing and, as a control, 15 segments of someone who was thought to have similar singing ability, so secondhand embarrassment could be ruled out.

The other control factor was videos of professional singers disguised as participants. Because the professionals sang better overall, it was unlikely they would trigger secondhand embarrassment.

Enough to make you blush

The researchers checked for an increase in cheek temperature, as blood flow measurements had been used in past studies but are more prone to error. This was measured with a fast-response temperature transducer as the subjects watched karaoke videos.

It was only when the subjects watched themselves sing that cheek temperature went up. There was virtually no increase or decrease when watching others—meaning no secondhand embarrassment—and a slight decrease when watching a professional singer.

The MRI scans revealed which regions of the brain were activated as subjects watched videos of themselves. These include the anterior insular cortex, or anterior insula, which responds to a range of emotions, including fear, anxiety, and, of course, embarrassment. There was also the mid-cingulate cortex, which emotionally and cognitively manages pain—including embarrassment—by trying to anticipate that pain and reacting with aversion and avoidance. The dorsolateral prefrontal cortex, which helps process fear and anxiety, also lit up.

There was also more activity detected in the cerebellum, which is responsible for much of the emotional processing in the brain, when subjects watched themselves sing. Those who blushed more while watching their own video clips showed the most cerebellum activity. This could mean they were feeling stronger emotions.

What surprised the researchers was that there was no additional activation in areas known for being involved in the process of understanding one’s mental state, meaning someone’s opinion of what others might think of them may not be necessary for blushing to happen.

So blushing is really more about the surge of emotions someone feels when being faced with things that pertain to the self and not so much about worrying what other people think. That can definitely happen if you’re watching a video of your own voice cracking at the high notes in an Adele song.

Proceedings of the Royal Society B, 2024.  DOI: 10.1098/rspb.2024.0958

Nothing’s new AI widget is trying to make its CFO a news star

something out of nothing —

Its news app is available on all Nothing and CMF handsets, including the new Phone (2a) Plus.

Nothing’s new AI widget is trying to make its CFO a news star

Nothing has a new smartphone—the Phone (2a) Plus—nearly identical to the Phone (2a) it released earlier this year, but with slightly beefed-up specs. It costs $399 and is available in the US through the same beta program. But it isn’t the new Android handset we find most interesting; it’s the company’s new widget.

The “News Reporter” widget, available by default on all Nothing and CMF smartphones plus other Android and iOS devices via the Nothing X app, lets you quickly play a news bulletin summarized by artificial intelligence. It is read out by the synthesized voice of Tim Holbrow, the company’s chief financial officer. (Nothing is using ElevenLabs’ tech for sound synthesis and output.) As soon as you tap the widget, you’re greeted by a soothing British voice:

“Welcome to Nothing News, where the only thing we take seriously is not taking anything seriously. I’m Tim, your CFO and reluctant news reader. Today, we’re making something out of nothing, because that’s literally our job.”

The widget will start cycling through a selection of news stories—you can press and hold the widget and tap Edit to add or remove categories you’re interested in, such as business, entertainment, tech, and sports. These news stories are pulled from “trusted English-language news sources” through News API, using Meta’s Llama large language models for the summary.
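
Nothing hasn’t published its implementation, but the described pipeline (headlines in, spoken summaries out) is straightforward to picture. Here is a rough sketch under stated assumptions: News API’s documented top-headlines endpoint, with the summarization and speech steps left as placeholders rather than Nothing’s actual code:

```python
# Rough sketch of a News API -> LLM summary -> text-to-speech pipeline.
# The endpoint and parameters follow News API's documented conventions;
# the summarization and TTS steps are placeholders, not Nothing's code.
import os
import requests

articles = requests.get(
    "https://newsapi.org/v2/top-headlines",
    params={"category": "technology", "country": "us",
            "apiKey": os.environ["NEWSAPI_KEY"]},
    timeout=10,
).json()["articles"][:8]  # the widget serves eight stories per day

prompt = "Summarize each story in a light, spoken-news tone:\n" + "\n".join(
    f"- {a['title']}: {a.get('description') or ''}" for a in articles
)
# `prompt` would go to an LLM (Nothing says it uses Meta's Llama models),
# and the resulting text to a TTS service (Nothing uses ElevenLabs).
```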

Nothing’s News Reporter widget is available on all Nothing and CMF phones by default. If you download the Nothing X app, you can also access it on Android and iOS.

You can swipe down the notification bar and press the next button on the media playback notification to skip a story, to which Holbrow will add a quip. “Not feeling that one? Let’s find another.” After I skipped quite a few in a row, AI Holbrow asked, “Do you even like news?”

The summaries are roughly one minute each, and you get eight stories per day. Every morning, the widget refreshes with a new batch. Unfortunately, and frustratingly, the widget doesn’t give you much to go on if you want to read more. There’s no attribution to where it pulled the news from, and no links are provided to read directly from the source.

Every smartphone company has been touting some kind of generative AI feature in new devices this year. Samsung has Galaxy AI; Google has its Gemini chatbot and a bevy of AI features in Pixel phones; Motorola introduced Moto AI recently; and even OnePlus has been teasing a few AI features in its phones, like AI Eraser, which lets you remove unwanted objects from photos. Nothing introduced a ChatGPT integration in its earbuds earlier this year, and this widget is the latest generative AI feature to land.

That said, it’s hardly the first time we’ve seen a news summarization feature. Back when Amazon Alexa and Google Assistant were gaining popularity, one of the top features was asking the voice assistant to play the news—you’d hear short news clips from various sources, like NPR and CNN. Still, I like the implementation in Nothing’s widget, though I’d also like to see attribution and a way to dig deeper into a story if it’s interesting.

What about that phone?

As for the Nothing Phone (2a) Plus, I’ve been using it for several days and it’s … indiscernible from the Phone (2a) I reviewed positively in March. I love the new gray color option, which hides smudges on the rear better and makes the phone’s already fun design pop even more. You still get the same Glyph light functionality, allowing the LEDs to light up for notifications and calendar events, and even double as a visualizer when playing music.

Nothing Phone (2a) on the left, Nothing Phone (2a) Plus on the right.

The top change here is the processor. Inside is MediaTek’s Dimensity 7350 Pro 5G (as opposed to the Phone (2a)’s Dimensity 7200 Pro), which offers a 10 percent increase in CPU power and a 30 percent jump in graphics performance. Honestly, I didn’t notice a huge bump in speed, and my benchmark scores show a very tiny boost.

The next upgrade is the selfie camera, which gets a new 50-MP sensor (up from 32 MP) that can shoot 4K video at 30 frames per second. The company says it has issued seven updates since the launch of the Phone (2a), with 26 improvements to the camera, including upgrades to loading speeds, color consistency, and blur accuracy in portrait mode. The Phone (2a) Plus launches with all of those improvements, and the 50-MP main and ultrawide cameras on the rear are the same.

Selfies indeed look much nicer, especially in low light, where my face appears sharper with better HDR and a more balanced exposure. The rear cameras produce nice results considering the price, and I found daytime renders to deliver natural-looking colors. It can still struggle with super high-contrast scenes, but this is a solid camera system.

Lastly, the wired charging on the phone now supports 50 watts (up from 45 watts), which supposedly gets you a 10 percent charging speed boost. Everything else is identical to the Phone (2a)’s specs, from the 6.7-inch AMOLED display to the 5,000-mAh battery.

Nothing new

I’ve enjoyed the phone over the past few days, but its launch is so peculiar, considering it doesn’t introduce any groundbreaking updates to the Phone (2a). So I asked the company why it decided to launch the (2a) Plus now. “We aren’t launching Phone (3) until next year, and we saw an opportunity to enhance the smartphone we launched in March with Phone (2a) Plus, a new smartphone—catered towards power users—at an accessible price point,” says Jane Nho, Nothing’s head of PR in the US. The company launched its last flagship phone, the Phone (2), in July 2023.

So there you have it: The Phone (2a) Plus is a seemingly painless way for Nothing to stay relevant amid all the other smartphone launches, keep an AI story going, boost sales, and, oddly, try to make some sort of digital celebrity out of its CFO.

Nothing says it’ll go on sale August 3 in London at Nothing’s store in Soho, in gray and black, with 12GB RAM and 256GB storage. In the US, the device will follow the same beta program system as the Phone (2a) and CMF Phone 1. That means you’ll have to sign up for the beta, and once you’re accepted, you’ll be able to purchase the device for $399. It’ll be available on August 7 at 9 am ET.

This story originally appeared on wired.com.

7 million pounds of meat recalled amid deadly outbreak

7 million pounds across 71 products —

Authorities worry that the contaminated meats are still sitting in people’s fridges.

Shelves sit empty where Boar’s Head meats are usually displayed at a Safeway store on July 31, 2024, in San Anselmo, California.

Over 7 million pounds of Boar’s Head brand deli meats are being recalled amid a bacterial outbreak that has killed two people. The outbreak, which began in late May, has sickened a total of 34 people across 13 states, leading to 33 hospitalizations, according to the US Department of Agriculture.

On July 26, Boar’s Head recalled 207,528 pounds of products, including liverwurst, beef bologna, ham, salami, and “heat and eat” bacon. On Tuesday, the Jarratt, Virginia-based company expanded the recall to include about 7 million additional pounds of meat, spanning 71 different products sold under the Boar’s Head and Old Country brand labels. The products were sold nationwide.

The meats may be contaminated with Listeria monocytogenes, a foodborne pathogen that is particularly dangerous to pregnant people, people over the age of 65, and people with compromised immune systems. Infections during pregnancy can cause miscarriage, stillbirth, premature delivery, or a life-threatening infection in newborns. For others who develop invasive illness, the fatality rate is nearly 16 percent. Symptoms of listeriosis can include fever, muscle aches, headache, stiff neck, confusion, loss of balance, and convulsions that are sometimes preceded by diarrhea or other gastrointestinal symptoms.

The problem was discovered when the Maryland Department of Health—working with the Baltimore City Health Department—collected an unopened liverwurst product from a retail store and found that it was positive for L. monocytogenes. In later testing, the strain in the liverwurst was linked to those isolated from people sickened in the outbreak.

According to the Centers for Disease Control and Prevention, six of the 34 known cases were identified in Maryland, and 12 were identified in New York. The other 11 states have only reported one or two cases each. However, the CDC expects the true number of infections to be much higher, given that many people recover without medical care and, even if people did seek care, health care providers do not routinely test for L. monocytogenes in people with mild gastrointestinal illnesses.

In the outbreak so far, there has been one case in a pregnant person, who recovered and remained pregnant. The two deaths occurred in New Jersey and Illinois.

In a statement on the company’s website, Boar’s Head said that it learned from the USDA on Monday night that the L. monocytogenes strain found in its liverwurst had been linked to the multistate outbreak. “Out of an abundance of caution, we decided to immediately and voluntarily expand our recall to include all items produced at the Jarratt facility. We have also decided to pause ready-to-eat operations at this facility until further notice. As a company that prioritizes safety and quality, we believe it is the right thing to do.”

The USDA said it is “concerned that some product may be in consumers’ refrigerators and in retail deli cases.” The USDA, the company, and CDC warn people not to eat the recalled products. Instead, they should either be thrown away or returned to the store where they were purchased for a full refund. And if you’ve purchased one of the recalled products, the USDA also advises you to thoroughly clean your fridge to prevent cross-contamination.

How Kepler’s 400-year-old sunspot sketches helped solve a modern mystery

A naked-eye sunspot group on May 11, 2024. There are typically 40,000 to 50,000 sunspots observed in ~11-year solar cycles.

E. T. H. Teague

A team of Japanese and Belgian astronomers has re-examined the sunspot drawings made by 17th century astronomer Johannes Kepler with modern analytical techniques. By doing so, they resolved a long-standing mystery about solar cycles during that period, according to a recent paper published in The Astrophysical Journal Letters.

Precisely who first observed sunspots was a matter of heated debate in the early 17th century. We now know that ancient Chinese astronomers between 364 and 28 BCE observed these features and included them in their official records. A Benedictine monk in 807 thought he’d observed Mercury passing in front of the Sun when, in reality, he had witnessed a sunspot; similar mistaken interpretations were also common in the 12th century. (An English monk made the first known drawings of sunspots in December 1128.)

English astronomer Thomas Harriot made the first telescopic observations of sunspots in late 1610 and recorded them in his notebooks, as did Galileo around the same time, although the latter did not publish a scientific paper on sunspots (accompanied by sketches) until 1613. Galileo also argued that the spots were not, as some believed, solar satellites but more like clouds in the atmosphere or on the surface of the Sun. But he was not the first to suggest this; that credit belongs to Dutch astronomer Johannes Fabricius, who published his scientific treatise on sunspots in 1611.

Kepler read that particular treatise and admired it, having made his own sunspot observations using a camera obscura in 1607 (published in a 1609 treatise); he initially thought he had observed a transit of Mercury. He retracted that report in 1618, concluding that he had actually seen a group of sunspots. Kepler made his solar drawings based on observations conducted both in his own house and in the workshop of court mechanic Jost Bürgi in Prague. In the first case, he reported “a small spot in the size of a small fly”; in the second, “a small spot of deep darkness toward the center… in size and appearance like a thin flea.”

The earliest datable sunspot drawings, based on Kepler’s solar observations with a camera obscura in May 1607.

Public domain

The long-standing debate that is the subject of this latest paper concerns the period from around 1645 to 1715, during which there were very few recorded observations of sunspots despite the best efforts of astronomers. This was a unique event in astronomical history. Despite observing only some 59 sunspots during this time—compared to the 40,000 to 50,000 sunspots seen over a similar span in our current age—astronomers were nonetheless able to determine that sunspots seemed to occur in 11-year cycles.

German astronomer Gustav Spörer noted the steep decline in papers published in 1887 and 1889, and his British colleagues, Edward and Annie Maunder, expanded on that work to study how the latitudes of sunspots changed over time. That period became known as the “Maunder Minimum.” Spörer also came up with “Spörer’s law,” which holds that the spots of a new cycle first appear at higher solar latitudes and emerge at successively lower latitudes, closer to the equator, as the cycle runs its course, until the next cycle’s sunspots begin again at higher latitudes.

But precisely how the solar cycle transitioned to the Maunder Minimum has been far from clear. Reconstructions based on tree rings have produced conflicting data. For instance, one such reconstruction concluded that the gradual transition was preceded either by an extremely short solar cycle of about five years or an extremely long solar cycle of about 16 years. Another tree ring reconstruction concluded the solar cycle would have been of normal 11-year duration.

Independent observational records can help resolve the discrepancy. That’s why Hisashi Hayakawa of Nagoya University in Japan and co-authors turned to Kepler’s sunspot drawings, which predate existing telescopic observations by several years, for additional insight.

Webb confirms: Big, bright galaxies formed shortly after the Big Bang

They grow up so fast —

Structure of galaxy rules out the idea that early, bright objects were supermassive black holes.

Some of the galaxies in the JADES images.

One of the things that the James Webb Space Telescope was designed to do was look at some of the earliest objects in the Universe. And it has already succeeded spectacularly, imaging galaxies as they existed just 250 million years after the Big Bang. But these galaxies were small, compact, and similar in scope to what we’d consider a dwarf galaxy today, which made it difficult to determine what was producing their light: stars or an actively feeding supermassive black hole at their core.

This week, Nature is publishing confirmation that some additional galaxies we’ve imaged also date back to just 300 million years after the Big Bang. Critically, one of them is bright and relatively large, allowing us to infer that most of its light was coming from a halo of stars surrounding its core, rather than originating in the same area as the central black hole. The finding implies that it formed through a continuing burst of star formation that started just 200 million years after the Big Bang.

Age checks

The galaxies at issue here were first imaged during the JADES (JWST Advanced Deep Extragalactic Survey) imaging program, which includes part of the area imaged for the Hubble Ultra Deep Field. Initially, old galaxies were identified using a combination of filters on one of Webb’s infrared imaging cameras.

Most of the Universe is made of hydrogen, and figuring out the age of early galaxies involves looking for the most energetic transitions of hydrogen’s electron, called the Lyman series. These transitions produce photons that are in the UV area of the spectrum. But the redshift of light that’s traveled for billions of years will shift these photons into the infrared area of the spectrum, which is what Webb was designed to detect.

What this looks like in practice is that hydrogen-dominated material will emit a broad range of light right up to the highest energy Lyman transition. Above that energy, photons will be sparse (they may still be produced by things like processes that accelerate particles). This point in the energy spectrum is called the “Lyman break,” and its location on the spectrum will change based on how distant the source is—the greater the distance to the source, the deeper into the infrared the break will appear.

Initial surveys checked for the Lyman break using filters on Webb’s cameras that cut off different areas of the IR spectrum. Researchers looked for objects that showed up at low energies but disappeared when a filter that selected for higher-energy infrared photons was swapped in. The difference in energies between the photons allowed through by the two filters can provide a rough estimate of where the Lyman break must be.

Precisely locating the Lyman break requires a spectrograph, which can sample the full spectrum of near-infrared light. Fortunately, Webb has one of those, too. The newly published study involved turning the NIRSpec instrument onto three early galaxies found in the JADES images.

Too many, too soon

The researchers involved in the analysis only ended up with data from two of these galaxies. NIRSpec doesn’t gather as much light as one of Webb’s cameras can, so the faintest of the three just didn’t produce enough data to enable analysis. The other two, however, produced very clear data that placed the galaxies at a redshift of roughly z = 14, which means we’re seeing them as they looked 300 million years after the Big Bang. Both show sharp Lyman breaks, with the amount of light dropping gradually as you move further into the lower-energy part of the spectrum.
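
To make the z = 14 figure concrete, here is a quick back-of-the-envelope check (standard redshift arithmetic, not a calculation from the paper) of where the Lyman break lands for such a galaxy:

```python
# Observed wavelength of redshifted light: lambda_obs = lambda_rest * (1 + z).
z = 14.0
lines_nm = {
    "Lyman limit": 91.2,    # rest-frame high-energy edge of the break
    "Lyman-alpha": 121.6,   # rest-frame Lyman-alpha transition
}
for name, rest_nm in lines_nm.items():
    observed_um = rest_nm * (1 + z) / 1000  # nanometers -> micrometers
    print(f"{name}: {rest_nm} nm rest frame -> {observed_um:.2f} um observed")

# Lyman limit:  91.2 nm -> ~1.37 um; Lyman-alpha: 121.6 nm -> ~1.82 um.
# Both land in the near-infrared range that Webb's instruments were built for.
```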

There’s a slight hint of emissions from heavily ionized carbon atoms in one of the galaxies, but no sign of any other specific elements beyond hydrogen.

One of the two galaxies was quite compact, similar to the other galaxies of this age that we’d confirmed previously. But the other, JADES-GS-z14-0, was quite distinct. For starters, it’s extremely bright, the third most luminous distant galaxy out of hundreds we’ve imaged so far. And it’s big enough that it’s not possible for all its light to be originating from the core. That rules out the possibility that what we’re looking at is a blurred view of an active galactic nucleus powered by a supermassive black hole feeding on material.

Instead, much of the light we’re looking at seems to have originated in the stars of JADES-GS-z14-0. Most of those stars are young, and there seems to be very little of the dust that characterizes modern galaxies. The researchers estimate that star formation started at least 100 million years earlier (meaning just 200 million years after the Big Bang) and continued at a rapid pace in the intervening time.

Combined with earlier data, the researchers write that this confirms that “bright and massive galaxies existed already only 300 [million years] after the Big Bang, and their number density is more than ten times higher than extrapolations based on pre-JWST observations.” In other words, there were a lot more galaxies around in the early Universe than we thought, which could pose some problems for our understanding of the Universe’s contents and their evolution.

Meanwhile, the early detection of this extremely bright galaxy implies that a number of similar ones are out there awaiting discovery. This means there’s going to be a lot of demand for time on NIRSpec in the coming years.

Nature, 2024. DOI: 10.1038/s41586-024-07860-9  (About DOIs).

Charter failed to notify 911 call centers and FCC about VoIP phone outages

Charter admits violations —

Charter blames error with email notification and misunderstanding of FCC rules.

A parked van used by a Spectrum cable technician. The van has the Spectrum logo on its side and a ladder stowed on the roof.

Charter Communications agreed to pay a $15 million fine after admitting that it failed to notify more than a thousand 911 call centers about an outage caused by a denial-of-service attack and separately failed to meet the Federal Communications Commission’s reporting deadlines for hundreds of planned maintenance outages.

“As part of the settlement, Charter admits to violating the agency’s rules regarding notifications to public safety officials and the Commission in connection with three unplanned network outages and hundreds of planned, maintenance-related network outages that occurred last year,” the FCC said in an announcement yesterday.

A consent decree said Charter admits that it “failed to timely notify more than 1,000 PSAPs [Public Safety Answering Points] of an outage on February 19, 2023.” The decree notes that failure to notify the PSAPs, or 911 call centers, “impedes the ability of public safety officials to mitigate the effects of an outage by notifying the public of alternate ways to contact emergency services.”

Phone providers like Charter must also provide required outage notifications to the FCC through the Network Outage Reporting System (NORS). However, Charter admits that it “failed to meet reporting deadlines for reports in the NORS associated with the [February 2023] Outage, and separate outages on March 31 and April 26, 2023; and failed to meet other NORS reporting deadlines associated with hundreds of planned maintenance outages, all in violation of the Commission’s rules.”

Error with email notification

With the February 2023 outage, “Charter was required to notify all of the impacted PSAPs ‘as soon as possible,’ but due to a clerical error associated with the sending of an email notification, over 1,000 PSAPs were not contacted,” the consent decree said. Charter also “failed to file the required NORS notification until almost six hours after it was due.”

Failure to meet NORS deadlines “impairs the Commission’s ability to assess the magnitude of major outages, identify trends, and promote network reliability best practices that can prevent or mitigate future disruptions. Therefore, it is imperative for the Commission to hold providers, like Charter, accountable for fulfilling these essential obligations,” the consent decree said.

In addition to paying a $15 million civil penalty to the US Treasury, “Charter has agreed to implement a robust compliance plan, including cybersecurity provisions related to compliance with the Commission’s 911 rules,” the FCC said. Charter reported revenue of $13.7 billion and net income of $1.2 billion in the most recent quarter.

The February 2023 outage was caused by what the FCC described as “a minor, low and slow Denial of Service (DoS) attack.” The resulting outage in Charter’s VoIP service affected about 400,000 “residential and commercial interconnected VoIP customers in portions of 41 states and the District of Columbia.” Charter restored service in less than four hours.

The FCC said its rules require VoIP providers like Charter “to notify 911 call centers as soon as possible of outages longer than 30 minutes that potentially affect such call centers. Providers are also required to file by set deadlines in the FCC’s Network Outage Reporting System when outages reach a certain severity threshold.”

The FCC investigation into the February 2023 outage led to Charter admitting violations related to hundreds of other outages:

Charter indicated that based on a misunderstanding of the Commission’s rules, hundreds of planned maintenance events may have met the criteria for filing a NORS report but were never submitted. Thereafter, Charter also identified two additional, unplanned outages—which occurred on March 31, 2023, and April 26, 2023—that each met the NORS reporting threshold but Charter failed to report.

Charter downplays violations

In a statement provided to Ars, Charter said, “We’re glad to have resolved these issues, which will primarily result in Charter reporting certain planned maintenance to the FCC.” Charter downplayed the outage reporting violations, saying that “the fine has nothing to do with cybersecurity violations and is attributable solely to administrative notifications.”

Charter’s statement emphasized that the company did not violate cybersecurity rules. “No provision within either the CISA Cybersecurity Best Practices or the NIST Cybersecurity Framework would have prevented this attack, and no flaws were identified by the FCC regarding Charter’s cybersecurity practices. We agreed with the FCC that we should continue doing what we’re already doing,” the company said.

Although Charter said the settlement “will primarily result in Charter reporting certain planned maintenance to the FCC,” the consent decree also requires changes to ensure that the company promptly notifies 911 call centers. It says that Charter must create “an automated PSAP notification system to automatically contact PSAPs after a network outage that meets the reporting thresholds in the 911 Rules.”

The FCC said the “compliance plan includes the first-of-its-kind application of certain cybersecurity measures—including network segmentation and vulnerability mitigation management—related to 911 communications services and network outage reporting. Charter has agreed to maintain and evolve its overall cybersecurity risk management program in accordance with the voluntary National Institute of Standards and Technology (NIST) Cyber Security Framework, and other applicable industry standards and best practices, and applicable state and/or federal laws covering cybersecurity risk management and governance practices.”

The compliance plan requirements are set to remain in effect for three years.

Disclosure: The Advance/Newhouse Partnership, which owns 12.4 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.

SpaceX moving Dragon splashdowns to Pacific to solve falling debris problem

A Crew Dragon spacecraft is seen docked at the International Space Station in 2022. The section of the spacecraft on the left is the pressurized capsule, while the rear section, at right, is the trunk.

NASA

Sometime next year, SpaceX will begin returning its Dragon crew and cargo capsules to splashdowns in the Pacific Ocean and end recoveries of the spacecraft off the coast of Florida.

This will allow SpaceX to make changes to the way it brings Dragons back to Earth and eliminate the risk, however tiny, that a piece of debris from the ship’s trunk section might fall on someone and cause damage, injury, or death.

“After five years of splashing down off the coast of Florida, we’ve decided to shift Dragon recovery operations back to the West Coast,” said Sarah Walker, SpaceX’s director of Dragon mission management.

Public safety

In the past couple of years, landowners have discovered debris from several Dragon missions on their property, and the fragments all came from the spacecraft’s trunk, an unpressurized section mounted behind the capsule as it carries astronauts or cargo on flights to and from the International Space Station.

SpaceX returned its first 21 Dragon cargo missions to splashdowns in the Pacific Ocean southwest of Los Angeles. When an upgraded human-rated version of Dragon started flying in 2019, SpaceX moved splashdowns to the Atlantic Ocean and the Gulf of Mexico to be closer to the company’s refurbishment and launch facilities at Cape Canaveral, Florida. The benefits of landing near Florida included a faster handover of astronauts and time-sensitive cargo back to NASA and shorter turnaround times between missions.

The old version of Dragon, known as Dragon 1, separated its trunk after the deorbit burn, allowing the trunk to fall into the Pacific. With the new version of Dragon, called Dragon 2, SpaceX changed the reentry profile to jettison the trunk before the deorbit burn. This meant that the trunk remained in orbit after each Dragon mission, while the capsule reentered the atmosphere on a guided trajectory. The trunk, which is made of composite materials and lacks a propulsion system, usually takes a few weeks or a few months to fall back into the atmosphere and doesn’t have control of where or when it reenters.

Air resistance from the rarefied upper atmosphere gradually slows the trunk’s velocity enough to drop it out of orbit, and the amount of aerodynamic drag the trunk sees is largely determined by fluctuations in solar activity.
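
To get a feel for just how gentle that drag is, here is an order-of-magnitude estimate using the standard drag equation; every trunk-specific number below is an illustrative guess, not a SpaceX figure:

```python
# Drag deceleration in low Earth orbit: a = 0.5 * rho * v^2 * Cd * A / m.
# The density, area, and mass values below are rough illustrative guesses.
rho = 1e-12     # kg/m^3, upper-atmosphere density; swings widely with solar activity
v = 7_700.0     # m/s, approximate orbital speed near ISS altitude
cd = 2.2        # drag coefficient typically assumed for a tumbling body
area = 12.0     # m^2, guessed average cross-section of the trunk
mass = 1_000.0  # kg, guessed trunk mass

a = 0.5 * rho * v**2 * cd * area / mass
print(f"drag deceleration ~ {a:.1e} m/s^2")  # ~8e-7 m/s^2 with these guesses
# That tiny, solar-activity-dependent nudge is why decay takes weeks to months.
```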

SpaceX and NASA, which funded a large portion of the Dragon spacecraft’s development, initially determined that the trunk would burn up entirely when it reentered the atmosphere, posing no threat of causing injuries or damaging property. However, that turned out not to be the case.

In May, a 90-pound chunk of a SpaceX Dragon spacecraft that departed the International Space Station fell on the property of a “glamping” resort in North Carolina. At the same time, a homeowner in a nearby town found a smaller piece of material that also appeared to be from the same Dragon mission.

These events followed the discovery in April of another nearly 90-pound piece of debris from a Dragon capsule on a farm in the Canadian province of Saskatchewan. SpaceX and NASA later determined the debris fell from orbit in February, and earlier this month, SpaceX employees came to the farm to retrieve the wreckage, according to CBC.

Pieces of a Dragon spacecraft also fell over Colorado last year, and a farmer in Australia found debris from a Dragon capsule on his land in 2022.

From sci-fi to state law: California’s plan to prevent AI catastrophe

Adventures in AI regulation —

Critics say SB-1047, proposed by “AI doomers,” could slow innovation and stifle open source AI.

The California State Capitol building in Sacramento.

California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall “safety” of large artificial intelligence models. But critics are concerned that the bill’s overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today.

SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to “safety incidents.”

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of “critical harms” that an AI system might enable. That includes harms leading to “mass casualties or at least $500 million of damage,” such as “the creation or use of chemical, biological, radiological, or nuclear weapon” (hello, Skynet?) or “precise instructions for conducting a cyberattack… on critical infrastructure.” The bill also alludes to “other grave harms to public safety and security that are of comparable severity” to those laid out explicitly.

An AI model’s creator can’t be held liable for harm caused through the sharing of “publicly accessible” information from outside the model—simply asking an LLM to summarize The Anarchist Cookbook probably wouldn’t put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with “novel threats to public safety and security.” More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI “autonomously engaging in behavior other than at the request of a user” while acting “with limited human oversight, intervention, or supervision.”

Would California’s new bill have stopped WOPR?

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must “implement the capability to promptly enact a full shutdown” and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require “intent, recklessness, or gross negligence” if performed by a human, suggesting a degree of agency that does not exist in today’s large language models.

Attack of the killer AI?

This kind of language in the bill likely reflects the particular fears of its original drafter, Center for AI Safety (CAIS) co-founder Dan Hendrycks. In a 2023 Time Magazine piece, Hendrycks makes the maximalist existential argument that “evolutionary pressures will likely ingrain AIs with behaviors that promote self-preservation” and lead to “a pathway toward being supplanted as the earth’s dominant species.”

If Hendrycks is right, then legislation like SB-1047 seems like a common-sense precaution—indeed, it might not go far enough. Supporters of the bill, including AI luminaries Geoffrey Hinton and Yoshua Bengio, agree with Hendrycks’ assertion that the bill is a necessary step to prevent potential catastrophic harm from advanced AI systems.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” wrote Bengio in an endorsement of the bill. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”

However, critics argue that AI policy shouldn’t be led by outlandish fears of future systems that resemble science fiction more than current technology. “SB-1047 was originally drafted by non-profit groups that believe in the end of the world by sentient machine, like Dan Hendrycks’ Center for AI Safety,” Daniel Jeffries, a prominent voice in the AI community, told Ars. “You cannot start from this premise and create a sane, sound, ‘light touch’ safety bill.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers,” added tech policy expert Nirit Weiss-Blatt. “With their fictional fears, they try to pass fictional-led legislation, one that, according to numerous AI experts and open source advocates, could ruin California’s and the US’s technological advantage.”

Are you a workaholic? Here’s how to spot the signs

bad for business —

Psychologists now view an out-of-control compulsion to work as an addiction.

Man works late in dimly lit cubicle amid a dark office space

An accountant who fills out spreadsheets at the beach, a dog groomer who always has time for one more client, a basketball player who shoots free throws to the point of exhaustion.

Every profession has its share of hard chargers and overachievers. But for some workers—perhaps more than ever in our always-on, always-connected world—the drive to send one more email, clip one more poodle, sink one more shot becomes all-consuming.

Workaholism is a common feature of the modern workplace. A recent review gauging its pervasiveness across occupational fields and cultures found that roughly 15 percent of workers qualify as workaholics. That adds up to millions of overextended employees around the world who don’t know when—or how, or why—to quit.

Whether driven by ambition, a penchant for perfectionism, or the small rush of completing a task, they work past any semblance of reason. A healthy work ethic can cross the line into an addiction, a shift with far-reaching consequences, says Toon Taris, a behavioral scientist and work researcher at Utrecht University in the Netherlands.

“Workaholism” is a word that gets thrown around loosely and sometimes glibly, says Taris, but the actual affliction is more common, more complex, and more dangerous than many people realize.

What workaholism is—and isn’t

Psychologists and employment researchers have tinkered with measures and definitions of workaholism for decades, and today the picture is coming into focus. In a major shift, workaholism is now viewed as an addiction with its own set of risk factors and consequences, says Taris, who, with occupational health scientist Jan de Jonge of Eindhoven University of Technology in the Netherlands, explored the phenomenon in the 2024 Annual Review of Organizational Psychology and Organizational Behavior.

Taris stresses that the “workaholic” label doesn’t apply to people who put in long hours because they love their jobs. Those people are considered engaged workers, he says. “That’s fine. No problems there.” People who temporarily put themselves through the grinder to advance their careers or keep up on car or house payments don’t count, either. Workaholism is in a different category from capitalism.

The growing consensus is that true workaholism encompasses four dimensions: motivations, thoughts, emotions, and behaviors, says Malissa Clark, an industrial/organizational psychologist at the University of Georgia in Athens. In 2020, Clark and colleagues proposed in the Journal of Applied Psychology  that, in sum, workaholism involves an inner compulsion to work, having persistent thoughts about work, experiencing negative feelings when not working, and working beyond what is reasonably expected.

Some personality types are especially likely to fall into the work trap. Perfectionists, extroverts, and people with type A (ambitious, aggressive, and impatient) personalities are prone to workaholism, Clark and coauthors found in a 2016 meta-analysis. They had expected people with low self-esteem to be at risk, but that link was nowhere to be found. Workaholics may put themselves through the wringer, but it’s not necessarily out of a sense of inadequacy or self-loathing.

Hang out with Ars in San Jose and DC this fall for two infrastructure events

Arsmeet! —

Join us as we talk about the next few years in AI & storage, and what to watch for.

Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we’re back partnering with IBM again and we’re looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we’re going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it’s titled “Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next.” The idea will be to explore what generative AI means for the future of data management. The topics we’ll be discussing include:

  • Playing the infrastructure long game to address any kind of workload
  • Identifying infrastructure vulnerabilities with today’s AI tools
  • Infrastructure’s environmental footprint: Navigating impacts and responsibilities

We’re getting our panelists locked down right now, and while I don’t have any names to share, many will be familiar to Ars readers from past events—or from the front page.

As a neat added bonus, we’re going to host the event at the Computer History Museum, which any Bay Area Ars reader can attest is an incredibly cool venue. (Just nobody spill anything. I think they’ll kick us out if we break any exhibits!)

October: Washington, DC

Switching coasts, on October 29 we’ll set up shop in our nation’s capital for a similar show. This time, our event title will be “AI in DC: Privacy, Compliance, and Making Infrastructure Smarter.” Given that we’ll be in DC, the tone shifts a bit to some more policy-centric discussions, and the talk track looks like this:

  • The key to compliance with emerging technologies
  • Data security in the age of AI-assisted cyber-espionage
  • The best infrastructure solution for your AI/ML strategy

Same deal here with the speakers as with the September event—I can’t name names yet, but the list will be familiar to Ars readers, and I’m excited. We’re still considering venues, but we’re hoping to find something that matches our previous events in terms of style and coolness.

Interested in attending?

While it’d be awesome if everyone could come, the old song and dance applies: space, as they say, will be limited at both venues. We’d like to make sure local folks in both locations get priority in being able to attend, so we’re asking anyone who wants a ticket to register for the events at the sign-up pages below. You should get an email immediately confirming we’ve received your info, and we’ll send another note in a couple of weeks with further details on timing and attendance.

On the Ars side, at minimum both our EIC Ken Fisher and I will be in attendance at both events, and we’ll likely have some other Ars staff showing up where we can—free drinks are a strong lure for the weary tech journalist, so there ought to be at least a few appearing at both. Hoping to see you all there!
