Author name: Kelly Newman

Brompton C Line Electric review: Fun and foldable, fits better than you’d think

Brompton C Line Electric Review —

A motor evens out its natural disadvantages, but there’s still a learning curve.

What can I say? It was tough putting the Brompton C Line Electric through its paces. Finding just the right context for it. Grueling work.

Kevin Purdy

There’s never been a better time to ride a weird bike.

That’s especially true if you live in a city where you can regularly see kids being dropped off at schools from cargo bikes with buckets, child seats, and full rain covers. Further out from the urban core, fat-tire e-bikes share space on trails with three-wheelers, retro-style cruisers, and slick roadies. And folding bikes, once an obscurity, are showing up in more places, especially as they’ve gone electric.

So when I got to try out the Brompton Electric C Line (in a six-speed model), I felt far less intimidated riding, folding, and stashing the little guy wherever I went than I might have been a few years back. A few folks recognized the distinctively small and British bike and offered a thumbs-up or light curiosity. If anyone was concerned about the oddity of this quirky ride, it was me, mostly because I obsessed over whether I could and should lock it up outside or not.

But for the most part, the Brompton fits in, and it works as a bike. It sat next to me at bars and coffee shops and outdoor eateries, it rode the DC Metro, it went on a memorial group ride, and it went to the grocery store. I repeatedly hauled it to a third-floor walkup apartment and brought it on a week’s vacation, fitting it on the floor behind the car driver’s seat. And with an electric battery pack, it was even easier to forget that it was any different from a stereotypical bike—so long as you didn’t look down.

Still, should you pay a good deal more than $3,000 (and probably more like $4,000 after accessories) for a bike with 16-inch tires—especially one you might never want to leave locked up outside?

Let’s get into that.

  • The Brompton C Line, pre-fold (mid-beer).

    Kevin Purdy

  • Step 1: Release a clasp and pull the bike frame up, allowing the rear wheel to swing forward underneath.

    Kevin Purdy

  • Step 2: Loosen the clamp and fold the front half back to align with the rear wheel, lining up a little hook on the wheel with the frame.

    Kevin Purdy

  • Step 3: Remove the battery (technically unnecessary, but wise), loosen a clamp holding up the handlebar, then fold it down onto the frame, letting a nub tuck into a locking notch.

    Kevin Purdy

  • Step 4: Drop down the seat (which also locks the frame into position), rotate one pedal onto the tire, and flip the other pedal up.

    Kevin Purdy

Learning The Fold

Whether you buy it at a store or have it shipped to you, a Brompton C Line is possibly the easiest e-bike to unpack, set up, and get rolling. You take out the folded-up bike, screw in the crucial hinge clamps that hold it together, put on the saddle, and learn how to unfold it for the first time. Throw some air in the tires, and you could be on your way about 20 minutes after getting the bike.

But you shouldn’t head out without getting some reps in on The Fold. The Fold is the reason the Brompton exists. It hasn’t actually changed that much since Andrew Ritchie designed it in 1975. Release a rear frame clip and yank the frame up, and the rear wheel and its frame triangle roll underneath the top tube. Unscrew a hinged clamp, then “stir” the front wheel backward, allowing a subtle hook to catch on the rear frame. Drop the seat and you’ll feel something lock inside the frame. You can then unhinge and fold the handlebar down, or you can keep it up to push the bike around on its tiny frame wheels in “shopping cart mode.”

If you forget the sequence of the fold, there are little reminders in a few spots on the bike.

Kevin Purdy

After maybe five attempts, I began to get The Fold done in less than a minute. After around a dozen tries, I started to appreciate its design and motions. The way a Brompton folds up is great for certain applications, like fitting into a car instead of using a rack, bringing on public transit or train rides, tucking underneath a counter or table, or fitting into the corner of the most space-challenged home. It can also be handy if you’re heading somewhere you’re wary of locking it up outside (more on that in a moment).

Civilization VII looks like 2K’s next big game announcement

Tell Gandhi to prep the nukes —

Logo drop comes amid publisher’s “beloved franchise” tease for Summer Games Fest.

COMING SOON

2K / Imgur

2K Games is expected to show the first trailer footage of the upcoming Civilization VII as part of this weekend’s Summer Games Fest marketing extravaganza after a logo for the game leaked on 2K’s website this morning.

Eagle-eyed gamers at ResetEra and Reddit both noticed the Civ VII banner atop the publisher’s official site early this morning, alongside a “Coming Soon” label and inactive links to a trailer and wishlist page. The appearance comes just ahead of the trailer-filled Summer Games Fest livestream, which premieres Friday at 5 pm Eastern.

In May, the Summer Games Fest Twitter account teased that 2K would be using the event “to reveal the next iteration in one of [its] biggest and most beloved franchises.” Civilization VII now seems primed to fill that pre-announced slot, which may be unwelcome news for fans of 2K-owned franchises like Borderlands, Bioshock, and NFL2K (which was first publicly mulled for a revival in 2020).

Last year, developer Firaxis announced that it had started development on the “next mainline game in the world-famous Sid Meier’s Civilization franchise,” under the guidance of Civilization VI Creative Director Ed Beach and newly promoted studio head Heather Hazen (who previously worked with Epic Games and Popcap). “We have plans to take the Civilization franchise to exciting new heights for our millions of players around the world,” Hazen said in a statement at the time.

Civilization namesake Sid Meier holds forth with fans at a Firaxicon fan gathering in 2014.

Kyle Orland

Civilization VII will be the first entry in the storied strategy franchise in at least eight years, a historically long gap for a series that has seen a new numbered entry every four to six years since its debut in 1991. When Civilization VI hit in 2016, we praised the game for its smoother, more easily accessible interface and focus on fraught decision-making.

In an interview with Ars just after the launch of Civ VI, series namesake Sid Meier said his official role as franchise director has evolved over the years into more of a support structure for younger designers. “I’m there to kind of represent the history of the game,” Meier told Ars. “My role is just to be supportive. Designers have huge egos, and they’re easily bruised. Making a game can be a painful process. Part of my role is to be encouraging—’that idea didn’t work, try something else.'”

Elsewhere in early Summer Games Fest leaks and announcements, we can expect more info about Meta Quest VR-exclusive Batman: Arkham Shadow, a Lego-fied version of Sony’s Horizon series, and new footage for cinematic survival game Dune Awakening, among many other planned announcements.

As leaks on the space station worsen, there’s no clear plan to deal with them

Plugging leaks —

“We heard that basically the program office had a runaway fire on their hands.”

Launched in 2000, the Zvezda Service Module provides living quarters and performs some life-support system functions.

NASA

NASA and the Russian space agency, Roscosmos, still have not solved a long-running and worsening problem with leaks on the International Space Station.

The microscopic structural cracks are located inside the small PrK module on the Russian segment of the space station, which lies between a Progress spacecraft airlock and the Zvezda module. After the leak rate doubled early this year during a two-week period, the Russians experimented with keeping the hatch leading to the PrK module closed intermittently and performed other investigations. But none of these measures taken during the spring worked.

“Following leak troubleshooting activities in April of 2024, Roscosmos has elected to keep the hatch between Zvezda and Progress closed when it is not needed for cargo operations,” a NASA spokesperson told Ars. “Roscosmos continues to limit operations in the area and, when required for use, implements measures to minimize the risk to the International Space Station.”

What are the real risks?

NASA officials have downplayed the severity of the leak risks publicly and in meetings with external stakeholders of the International Space Station, and at present the leaks do not pose an existential risk to the station. In a worst-case scenario of a structural failure, Russia could permanently close the hatch leading to the PrK module and rely on a separate docking port for Progress supply missions.

However, there appears to be rising concern in the ISS program at NASA’s Johnson Space Center in Houston. The space agency often uses a 5×5 “risk matrix” to classify the likelihood and consequence of risks to spaceflight activities, and the Russian leaks are now classified as a “5” both in terms of high likelihood and high consequence. Their potential for “catastrophic failure” is discussed in meetings.
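In code, a 5×5 risk matrix reduces to a simple lookup on the two axes. The sketch below is a hypothetical illustration; the band cutoffs are invented for the example and are not NASA's actual classification thresholds:

```python
def risk_band(likelihood: int, consequence: int) -> str:
    """Classify a risk on a 5x5 matrix, where both axes run
    from 1 (low) to 5 (high). Cutoffs here are illustrative."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * consequence
    if score >= 15:
        return "high"    # the corner where the ISS leaks now sit
    if score >= 6:
        return "medium"
    return "low"

# A risk rated 5 on both axes, as the Russian leaks reportedly are:
print(risk_band(5, 5))  # → high
```

A "5 and 5" rating lands in the matrix's worst corner, which is why the classification is driving internal concern.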

In responding to questions from Ars by email, NASA public relations officials declined to make program leaders available for an interview. The ISS program is currently managed by Dana Weigel, a former flight director. She recently replaced Joel Montalbano, who became deputy associate administrator for the agency’s Space Operations Mission Directorate at NASA Headquarters in Washington.

One source familiar with NASA’s efforts to address the leaks confirmed to Ars that the internal concerns about the issue are serious. “We heard that basically the program office had a runaway fire on their hands and were working to solve it,” this person said. “Joel and Dana are keeping a lid on this.”

US officials are likely remaining quiet about their concerns because they don’t want to embarrass their Russian partners. The working relationship has improved since the sacking of the pugnacious leader of Russia’s space activities, Dmitry Rogozin, two years ago. The current leadership of Roscosmos has maintained a cordial relationship with NASA despite the high geopolitical tensions between Russia and the United States over the war in Ukraine.

The leaks are a sensitive subject. Because of Russian war efforts, the resources available to the country’s civil space program will remain flat or even decrease in the coming years. A dedicated core of Russian officials who value the International Space Station partnership are striving to “make do” with the resources they have to maintain its Soyuz and Progress spacecraft, which carry crew and cargo to the space station respectively, and its infrastructure on the station. But they do not have the ability to make major new investments, so they’re left with patching things together as best they can.

Aging infrastructure

At the same time, the space station is aging. The Zvezda module was launched nearly a quarter of a century ago, in July 2000, on a Russian Proton rocket. The cracking issue first appeared in 2019 and has continued to worsen since then. Its cause is unknown.

“They have repaired multiple leak locations, but additional leak locations remain,” the NASA spokesperson said. “Roscosmos has yet to identify the cracks’ root cause, making it challenging to analyze or predict future crack formation and growth.”

NASA and Russia have managed to maintain the space station partnership since Russia’s invasion of Ukraine in February 2022. The large US segment is dependent on the Russian segment for propulsion to maintain the station’s altitude and maneuver to avoid debris. Since the invasion, the United States could have taken overt steps to mitigate against this, such as funding the development of its own propulsion module or increasing the budget for building new commercial space stations to maintain a presence in low-Earth orbit.

Instead, senior NASA officials chose to stay the course and work with Russia for as long as possible to maintain the fragile partnership and fly the aging but venerable International Space Station. It remains to be seen whether cracks—structural, diplomatic, or otherwise—will rupture this effort prior to the station’s anticipated retirement date of 2030.

Climate-wise, it was a full year of exceptional months

Every month is above average —

Last June was the warmest June on record. Every month since has been similar.

June 2023 did not seem like an exceptional month at the time. It was the warmest June in the instrumental temperature record, but monthly records haven’t exactly been unusual in a period where the top 10 warmest years on record have all occurred within the last 15 years. And monthly records have often occurred in years that are otherwise unexceptional; at the time, the warmest July on record had occurred in 2019, a year that doesn’t stand out much from the rest of the past decade.

But July 2023 set another monthly record, easily eclipsing 2019’s high temperatures. Then August set yet another monthly record. And so has every single month since, a string of records that propelled 2023 to the warmest year since we started keeping track.

Yesterday, the European Union’s Copernicus Earth-monitoring service announced that we’ve now gone a full year where every single month has been the warmest version of that month since we’ve had enough instruments in place to track global temperatures.

Historical monthly temperatures show just how extreme the temperatures have been over the last year.

As you can see from this graph, most years feature a mix of temperatures, some higher than average, some lower. Exceptionally high months tend to cluster, but those clusters also tend to be shorter than a full year.

In the Copernicus data, a similar year-long streak of records has happened only once before, in 2015/2016. NASA, which uses slightly different data and methods, doesn’t show a similar streak in that earlier period. NASA hasn’t released its results for May’s temperatures yet; they’re expected in the next few days. But it’s very likely that NASA will also show a year-long streak of records.

Beyond records, the EU is highlighting the fact that the one-year period ending in May was 1.63° C above the average temperatures of the 1850–1900 period, which are used as a baseline for preindustrial temperatures. That’s notable because many countries have ostensibly pledged to try to keep temperatures from exceeding 1.5° above preindustrial conditions by the end of the century. While it’s likely that temperatures will drop below the target again at some point within the next few years, the new records suggest that we have a very limited amount of time before temperatures persistently exceed it.
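The 1.63° C figure is a 12-month running mean of monthly anomalies against the 1850–1900 baseline. Here is a toy version of that calculation, using made-up monthly values rather than the actual Copernicus series:

```python
def running_12mo_mean(anomalies):
    """Mean of each 12-month window of monthly temperature
    anomalies (deg C above the 1850-1900 baseline)."""
    return [sum(anomalies[i:i + 12]) / 12
            for i in range(len(anomalies) - 11)]

# Hypothetical monthly anomalies for 13 months (deg C).
monthly = [1.5, 1.6, 1.7, 1.6, 1.65, 1.7, 1.6,
           1.55, 1.6, 1.7, 1.65, 1.6, 1.7]
means = running_12mo_mean(monthly)
# Every 12-month window above 1.5 means a full year past the threshold.
print(all(m > 1.5 for m in means))  # → True
```

Note that a single hot month can't push the running mean over the line by itself; a sustained excursion like the past year's can.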

For the first time on record, temperatures have held steadily in excess of 1.5° C above the preindustrial average.

Realistically, those plans involve overshooting the 1.5° C target by midcentury but using carbon capture technology to draw down greenhouse gas levels. Exceeding that target earlier will mean that we have more carbon dioxide to pull out of the atmosphere, using technology that hasn’t been demonstrated anywhere close to the scale that we’ll need. Plus, it’s unclear who will pay for the carbon removal.

The extremity of some of the monthly records (some months have come in a half-degree Celsius above any earlier month) is also causing scientists to look for reasons. But so far, the field hasn't reached a consensus on the sudden surge in temperature extremes.

Because it has been accompanied by significant warming of ocean temperatures, a lot of attention has focused on changes to pollution rules for international shipping, which are meant to reduce sulfur emissions. These went into effect recently and have cut down on the emission of aerosols by cargo vessels, reducing the amount of sunlight that’s reflected back to space.

That’s considered likely to be a partial contributor. A slight contribution may have also come from the Hunga Tonga eruption, which blasted significant amounts of water vapor into the upper atmosphere, though nowhere near enough to explain this warming. Beyond that, there are no obvious explanations for the recent warmth.

Ars drives the second-generation Rivian R1T and R1S electric trucks

no more car sick —

The EV startup has reengineered the R1 to make it better to drive, easier to build.

The R1S and R1T don’t look much different from the electric trucks we drove in 2022, but under the skin, there have been a lot of changes.

Rivian

In rainy Seattle this week, Rivian unveiled what it’s calling the “Second Generation” of its R1 line with a suite of mostly under-the-hood software and hardware updates that increase range, power, and efficiency while simultaneously lowering the cost of production for the company. While it’s common for automotive manufacturers to do some light refreshes after about four model years, Rivian has almost completely retooled the underpinnings of its popular R1S SUV and R1T pickup just two years after the vehicles made their debut.

“Overdelivering on the product is one of our core values,” Wassym Bensaid, the chief software officer at Rivian, told a select group of journalists at the event on Monday night, “and customer feedback has been one of the key inspirations for us.”

For these updates, Rivian changed more than half the hardware components in the R1 platform, retooled its drive units to offer new tri- and quad-motor options (with more horsepower), updated the suspension tuning, deleted 1.6 miles (2.6 km) of wiring, reduced the number of ECUs, increased the number of cameras and sensors around the vehicle, changed the battery packs, and added some visual options that better aligned with customizations that owners were making to their vehicles, among other things. Rivian is also leaning harder into AI and ML tools with the aim of bringing limited hands-free driver-assistance systems to their owners toward the end of the year.

  • Usually, an automaker waits four years before it refreshes a product, but Rivian decided to move early.

    Rivian

  • The R1 interior can feel quite serene.

    Rivian

  • Perhaps you’d prefer something more colorful?

    Rivian

  • An exploded view of a drive unit with a pair of motors.

    Rivian

  • There are two capacities of lithium-ion battery, and an optional lithium iron phosphate pack with 275 miles of range is on the way.

  • Rivian’s R1 still looks friendly amid a sea of scary-looking SUVs and trucks.

    Rivian

While many of these changes have simplified manufacturing for Rivian, which as of Q1 of this year lost a whopping $38,000 on every vehicle it sold, the company has continued to close the gap with the likes of BMW and Mercedes in terms of ride, handling, comfort, and efficiency.

On the road in the new R1

We drove a new second-gen dual-motor 665 hp (496 kW), 829 lb-ft (1,124 Nm) R1S Performance, which gets up to 410 miles (660 km) of range with the new Max Pack battery, out to DirtFish Rally School in Snoqualmie in typically rainy Seattle weather. On the road, the new platform, with its revised suspension and shocks, felt much more comfortable than it did in our first experience with an R1S in New York in 2022.

The vehicle offers modes that allow you to tackle pretty much any kind of driving that life can throw at you, including Sport, All Purpose (there’s no longer a “Conserve” mode), Snow, All-Terrain, and Soft Sand, alongside customizable suspension, ride feel and height, and regen settings. The R1S feels far more comfortable from all seating positions, including the back and third-row seats. There’s less floaty, car-sick-inducing modulation over bumps in All-Purpose, and Sport tightens things down nicely when you want to have a bit more road feel.

One of the big improvements on the road comes from the new “Autonomy Compute Module” and its upgraded suite of high-resolution 4K HDR cameras, radars, and sensors. The new R1 gets 11 cameras (one more than the first generation) with eight times the resolution, five radar modules, and a new proprietary AI and machine-learning system that learns from anonymized driver data and the world around the vehicles to “see” 360 degrees around the vehicle, even in inclement weather.

While the R1S has had cruise control since its launch, the new “Autonomy” platform adds smart lane-changing, which Rivian calls “Lane Change on Command,” as part of the new “Enhanced Highway Assist,” a partially automated driver assist that also centers the vehicle in marked lanes. We tried both features on the highways around Seattle. The system handled very rainy and wet weather without hesitation, but it did ping-pong between the lane markers, and when the smart lane change bailed out at the last minute, the move was abrupt and not confidence-inspiring, since there was no apparent reason for the system to fail. These features are not nearly as good as the latest from BMW and Mercedes, both of which continue to offer some of the most usable driver-assist systems on the market.

With the new R1 software stack, Rivian is also promising some limited hands-free highway driver-assistance features to come at the end of the year. While we didn’t get to try the feature in the short drive to DirtFish, Rivian says eye-tracking cameras in the rearview mirror will ensure that drivers have ample warning to take over when the system is engaged and needs human input.

Can a technology called RAG keep AI models from making stuff up?

Aurich Lawson | Getty Images

We’ve been living through the generative AI boom for nearly a year and a half now, following the late 2022 release of OpenAI’s ChatGPT. But despite transformative effects on companies’ share prices, generative AI tools powered by large language models (LLMs) still have major drawbacks that have kept them from being as useful as many would like them to be. Retrieval augmented generation, or RAG, aims to fix some of those drawbacks.

Perhaps the most prominent drawback of LLMs is their tendency toward confabulation (also called “hallucination”), which is a statistical gap-filling phenomenon AI language models produce when they are tasked with reproducing knowledge that wasn’t present in the training data. They generate plausible-sounding text that can veer toward accuracy when the training data is solid but otherwise may just be completely made up.

Relying on confabulating AI models gets people and companies in trouble, as we’ve covered in the past. In 2023, we saw two instances of lawyers citing legal cases, confabulated by AI, that didn’t exist. We’ve covered claims against OpenAI in which ChatGPT confabulated and accused innocent people of doing terrible things. In February, we wrote about Air Canada’s customer service chatbot inventing a refund policy, and in March, a New York City chatbot was caught confabulating city regulations.

So if generative AI aims to be the technology that propels humanity into the future, someone needs to iron out the confabulation kinks along the way. That’s where RAG comes in. Its proponents hope the technique will help turn generative AI technology into reliable assistants that can supercharge productivity without requiring a human to double-check or second-guess the answers.

“RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process” to help LLMs stick to the facts, according to Noah Giansiracusa, associate professor of mathematics at Bentley University.

Let’s take a closer look at how it works and what its limitations are.

A framework for enhancing AI accuracy

Although RAG is now seen as a technique to help fix issues with generative AI, it actually predates ChatGPT. The term was coined in a 2020 academic paper by researchers at Facebook AI Research (FAIR, now Meta AI Research), University College London, and New York University.

As we’ve mentioned, LLMs struggle with facts. Google’s entry into the generative AI race, Bard, made an embarrassing error on its first public demonstration back in February 2023 about the James Webb Space Telescope. The error wiped around $100 billion off the value of parent company Alphabet. LLMs produce the most statistically likely response based on their training data and don’t understand anything they output, meaning they can present false information that seems accurate if you don’t have expert knowledge on a subject.

LLMs also lack up-to-date knowledge and the ability to identify gaps in their knowledge. “When a human tries to answer a question, they can rely on their memory and come up with a response on the fly, or they could do something like Google it or peruse Wikipedia and then try to piece an answer together from what they find there—still filtering that info through their internal knowledge of the matter,” said Giansiracusa.

But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup.

In theory, RAG should make keeping AI models up to date far cheaper and easier. “The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information,” said Peterson. “This reduces LLM development time and cost while enhancing the model’s scalability.”
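The retrieve-then-generate loop described above can be sketched in a few lines of Python. This is a deliberately minimal illustration: the keyword-overlap scorer stands in for a real embedding index, and the assembled prompt would be handed to an actual LLM rather than printed:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query.
    Production RAG systems use embedding similarity instead."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, documents):
    """Augment the question with retrieved context so the model
    answers from the supplied facts rather than its weights alone."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Hypothetical external knowledge base; updating it needs no retraining.
docs = [
    "The refund window for economy fares is 24 hours after booking.",
    "Checked bags over 23 kg incur an extra fee.",
    "Pets under 8 kg may travel in the cabin.",
]
print(build_prompt("What is the refund window?", docs))
```

Because the knowledge lives outside the model, swapping a stale document for a fresh one changes the system's answers immediately, which is the cost advantage the quote above describes.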

Elon Musk’s X defeats Australia’s global takedown order of stabbing video

Australia’s safety regulator has ended a legal battle with X (formerly Twitter) after threatening fines of approximately $500,000 per day over the company’s failure to remove 65 instances of a religiously motivated stabbing video from X globally.

Enforcing Australia’s Online Safety Act, eSafety commissioner Julie Inman-Grant had argued it would be dangerous for the videos to keep spreading on X, potentially inciting other acts of terror in Australia.

But X owner Elon Musk refused to comply with the global takedown order, arguing that it would be “unlawful and dangerous” to allow one country to control the global Internet. And Musk was not alone in this fight. The legal director of a nonprofit digital rights group called the Electronic Frontier Foundation (EFF), Corynne McSherry, backed up Musk, urging the court to agree that “no single country should be able to restrict speech across the entire Internet.”

“We welcome the news that the eSafety Commissioner is no longer pursuing legal action against X seeking the global removal of content that does not violate X’s rules,” X’s Global Government Affairs account posted late Tuesday night. “This case has raised important questions on how legal powers can be used to threaten global censorship of speech, and we are heartened to see that freedom of speech has prevailed.”

Inman-Grant was formerly Twitter’s director of public policy in Australia and used that experience to land what she told The Courier-Mail was her “dream role” as Australia’s eSafety commissioner in 2017. Since issuing the order to remove the video globally on X, Inman-Grant had traded barbs with Musk (along with other Australian lawmakers), responding to Musk labeling her a “censorship commissar” by calling him an “arrogant billionaire” for fighting the order.

On X, Musk arguably got the last word, posting, “Freedom of speech is worth fighting for.”

Safety regulator still defends takedown order

In a statement, Inman-Grant said early Wednesday that her decision to discontinue proceedings against X was part of an effort to “consolidate actions,” including “litigation across multiple cases.” She ultimately determined that dropping the case against X would be the “option likely to achieve the most positive outcome for the online safety of all Australians, especially children.”

“Our sole goal and focus in issuing our removal notice was to prevent this extremely violent footage from going viral, potentially inciting further violence and inflicting more harm on the Australian community,” Inman-Grant said, still defending the order despite dropping it.

In court, X’s lawyer Marcus Hoyne had pushed back on such logic, arguing that the eSafety regulator’s mission was “pointless” because “footage of the attack had now spread far beyond the few dozen URLs originally identified,” the Australian Broadcasting Corporation reported.

“I stand by my investigators and the decisions eSafety made,” Inman-Grant said.

Other Australian lawmakers agree the order was not out of line. According to AP News, Australian Minister for Communications Michelle Rowland shared a similar statement in parliament today, backing up the safety regulator while scolding X users who allegedly took up Musk’s fight by threatening Inman-Grant and her family. The safety regulator has said that Musk’s X posts incited a “pile-on” from his followers who allegedly sent death threats and exposed her children’s personal information, the BBC reported.

“The government backs our regulators and we back the eSafety Commissioner, particularly in light of the reprehensible threats to her physical safety and the threats to her family in the course of doing her job,” Rowland said.

The Challenge of Securing User Identities

Several businesses I’ve worked with recently have had the misfortune of being victims of cybersecurity incidents. While these incidents come in many forms, there is a common thread: they all started with a compromise of user identity.

Why Identities are Targeted

Identity security—whether it involves usernames and passwords, machine names, encryption keys, or certificates—presents a real challenge. These credentials are needed for access control, ensuring only authorized users have access to systems, infrastructure, and data. Cybercriminals also know this, which is why they are constantly trying to compromise credentials. It’s why incidents such as phishing attacks remain an ongoing problem; gaining access to the right credentials is the foothold an attacker needs.

Attempts to compromise identity do leave a trail: a phishing email, an attempted logon from an incorrect location, or more sophisticated signs such as the creation of a new multifactor authentication (MFA) token. Unfortunately, these things can happen many days apart, are often recorded across multiple systems, and individually may not look suspicious. This creates security gaps attackers can exploit.
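The correlation problem described above can be sketched in a few lines. This is a minimal illustration of the idea, with hypothetical event fields, sources, and thresholds; it is not any vendor's actual detection pipeline.

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical events from different systems (mail gateway, identity
# provider, MFA service). Individually, none looks alarming.
events = [
    {"user": "alice", "source": "mail", "type": "phishing_email", "time": datetime(2024, 6, 1)},
    {"user": "alice", "source": "idp", "type": "odd_location_login", "time": datetime(2024, 6, 4)},
    {"user": "alice", "source": "mfa", "type": "new_mfa_token", "time": datetime(2024, 6, 7)},
    {"user": "bob", "source": "idp", "type": "odd_location_login", "time": datetime(2024, 6, 2)},
]

def correlate(events, window_days=14, threshold=3):
    """Group weak signals per identity; flag users who accumulate enough
    distinct signal types within the time window."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user"]].append(e)
    flagged = []
    for user, evs in by_user.items():
        evs.sort(key=lambda e: e["time"])
        in_window = evs[-1]["time"] - evs[0]["time"] <= timedelta(days=window_days)
        if in_window and len({e["type"] for e in evs}) >= threshold:
            flagged.append(user)
    return flagged
```

Here only "alice" is flagged: three different weak signals in one week form a pattern that no single system would have surfaced on its own.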

Solving the Identity Security Challenge

Identity security is complex and difficult to address. Threats are constant and numerous, with focused cyberattackers targeting users and machines through increasingly innovative methods. A compromised account can be highly valuable to an attacker, offering hard-to-detect access that can be used to carry out reconnaissance and craft a targeted attack to deploy malware or steal data or funds. The problem of compromised identities is only going to grow, and the impact of compromise is significant, as in many cases organizations do not have the tools or knowledge to deal with it.

It was the challenge of securing user identities that made me leap at the chance to work on a GigaOm research project into identity threat detection and response (ITDR) solutions, providing me with a chance to learn and understand how security vendors could help address this complex challenge. ITDR solutions are a growing IT industry trend, and while they are a discipline rather than a product, the trend has led to software-based solutions that help enforce that discipline.

How to Choose the Right ITDR Solution

Solution Capabilities

ITDR tools bring together identity-based threat telemetry from many sources, including user directories, identity platforms, cloud platforms, SaaS solutions, and other areas such as endpoints and networks. They then apply analytics, machine learning, and human oversight to look for correlations across data points to provide insight into potential threats.

Critically, they do this quickly and accurately—within minutes—and it is this speed that is essential in tackling threats. In the examples I mentioned, it took days before the identity compromise was spotted, and by then the damage had been done. Tools that can quickly notify of threats and even automate the response will significantly reduce the risk of potential compromise.

Proactive security that can help reduce risk in the first place adds additional value. ITDR solutions can help build a picture of the current environment and apply risk templates to it to highlight areas of concern, such as accounts or data repositories with excessive permissions, unused accounts, and accounts found on the dark web. The security posture insights provided by highlighting these concerns help improve security baselines.
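Those posture checks amount to applying simple risk rules over an inventory of accounts. The sketch below illustrates the concept with made-up account records and thresholds; real ITDR products pull this data from directories and apply far richer templates.

```python
from datetime import datetime, timedelta

# Hypothetical account inventory; field names are illustrative only.
accounts = [
    {"name": "j.smith", "permissions": ["read"], "last_login": datetime(2024, 5, 30)},
    {"name": "svc-legacy", "permissions": ["read", "write", "admin"],
     "last_login": datetime(2023, 1, 12)},
]

def posture_findings(accounts, now, stale_after_days=90, max_permissions=2):
    """Flag unused accounts and accounts with excessive permissions."""
    findings = []
    for acct in accounts:
        if (now - acct["last_login"]) > timedelta(days=stale_after_days):
            findings.append((acct["name"], "unused account"))
        if len(acct["permissions"]) > max_permissions:
            findings.append((acct["name"], "excessive permissions"))
    return findings
```

Running this against the sample inventory flags the stale, over-privileged service account while leaving the active low-privilege user alone, which is exactly the kind of baseline insight the posture reports provide.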

Deception technology is also useful. It works by using fake accounts or resources to attract attackers, leaving the true resources untouched. This reduces the risk to actual resources while providing a useful way to study attacks in progress without risking valuable assets.
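The logic behind deception is simple: legitimate users have no reason to touch a decoy, so any interaction with one is a high-confidence signal. A minimal sketch, with hypothetical decoy names:

```python
# Hypothetical decoy identities seeded into the directory.
# Legitimate users never reference these, so any use is suspicious.
DECOY_ACCOUNTS = {"svc-backup-legacy", "finance-admin-old"}

def login_alert(username):
    """Return an alert string if a decoy account is touched, else None.
    A single attempt is enough: only an attacker enumerating accounts
    would ever try these names."""
    if username in DECOY_ACCOUNTS:
        return f"ALERT: decoy account '{username}' accessed"
    return None
```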

Vendor Approach

ITDR solutions fall into two main camps, and while neither approach is better or worse than the other, they are likely to appeal to different markets.

One route is the “add-on” approach, usually from vendors in either the extended detection and response (XDR) space or the privileged access management (PAM) space. This approach uses existing insights and applies identity threat intelligence to them. For organizations already using XDR or PAM tools, adding ITDR to them can be an attractive option, as these vendors are likely to have more robust and granular mitigation controls and the capability to use other parts of their solution stack to help isolate and stop attacks.

The other approach comes from vendors that have built specific, identity-focused tools from the ground up, designed to integrate broadly with existing technology stacks. These tools pull telemetry from the existing stacks into a dedicated ITDR engine and use that to highlight and prioritize risk and potentially enforce isolation and mitigation. The flexibility and breadth of coverage these tools offer can make them attractive to users with broader and more complex environments that want to add identity security without changing other elements of their current investment.

Next Steps

To learn more, take a look at GigaOm’s ITDR Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.



What kind of bug would make machine learning suddenly 40% worse at NetHack?

Large Moon Models (LMMs) —

One day, a roguelike-playing system just kept biffing it, for celestial reasons.

Moon rendered in ASCII text

Aurich Lawson

Members of the Legendary Computer Bugs Tribunal, honored guests, if I may have your attention? I would, humbly, submit a new contender for your esteemed judgment. You may or may not find it novel, you may even deign to call it a “bug,” but I assure you, you will find it entertaining.

Consider NetHack. It is one of the all-time roguelike games, and I mean that in the more strict sense of that term. The content is procedurally generated, deaths are permanent, and the only thing you keep from game to game is your skill and knowledge. I do understand that the only thing two roguelike fans can agree on is how wrong the third roguelike fan is in their definition of roguelike, but, please, let us move on.

NetHack is great for machine learning…

Being a difficult game full of consequential choices and random challenges, as well as a “single-agent” game that can be generated and played at lightning speed on modern computers, NetHack is great for those working in machine learning—or imitation learning, actually, as detailed in Jens Tuyls’ paper on how compute scaling affects single-agent game learning. Using Tuyls’ model of expert NetHack behavior, Bartłomiej Cupiał and Maciej Wołczyk trained a neural network to play and improve itself using reinforcement learning.

By mid-May of this year, the two had their model consistently scoring 5,000 points by their own metrics. Then, on one run, the model suddenly got worse, on the order of 40 percent. It scored 3,000 points. Machine learning generally, gradually, goes in one direction with these types of problems. It didn’t make sense.

Cupiał and Wołczyk tried quite a few things: reverting their code, restoring their entire software stack from a Singularity backup, and rolling back their CUDA libraries. The result? 3,000 points. They rebuilt everything from scratch, and it was still 3,000 points.

NetHack, played by a regular human.

… except on certain nights

As detailed in Cupiał’s X (formerly Twitter) thread, this was several hours of confused trial and error by him and Wołczyk. “I am starting to feel like a madman. I can’t even watch a TV show constantly thinking about the bug,” Cupiał wrote. In desperation, he asks model author Tuyls if he knows what could be wrong. He wakes up in Kraków to an answer:

“Oh yes, it’s probably a full moon today.”

In NetHack, the game in which the DevTeam has thought of everything, if the game detects from your system clock that it should be a full moon, it will generate a message: “You are lucky! Full moon tonight.” A full moon imparts a few player benefits: a single point added to Luck, and werecreatures mostly kept to their animal forms.
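NetHack's actual check (the `phase_of_the_moon()` function in its source) works from the system date via a Julian-date calculation; the sketch below is only a rough approximation of that kind of clock-driven check, using the mean synodic month and a documented new-moon epoch, not the game's real code.

```python
from datetime import date

SYNODIC_MONTH = 29.530588        # mean length of a lunar cycle, in days
KNOWN_NEW_MOON = date(2000, 1, 6)  # a documented new moon, used as the epoch

def is_full_moon(d, tolerance_days=1.0):
    """Rough full-moon test: the full phase falls half a synodic month
    after a new moon. Good enough to show how a program can decide
    'full moon tonight' from nothing but the system clock."""
    days_into_cycle = (d - KNOWN_NEW_MOON).days % SYNODIC_MONTH
    return abs(days_into_cycle - SYNODIC_MONTH / 2) <= tolerance_days
```

The point is that nothing external changes: feed the same binary a different system date and it behaves differently, which is exactly why restoring code, stacks, and libraries never made the 3,000-point scores go away.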

It’s an easier game, all things considered, so why would the learning agent’s score be lower? Its training data simply contains nothing about full-moon variables, so a branching series of decisions likely leads to lesser outcomes, or just confusion. It was indeed a full moon in Kraków when the 3,000-ish scores started showing up. What a terrible night to have a learning model.

Of course, “score” is not a real metric for success in NetHack, as Cupiał himself noted. Ask a model to get the best score, and it will farm the heck out of low-level monsters because it never gets bored. “Finding items required for [ascension] or even [just] doing a quest is too much for pure RL agent,” Cupiał wrote. Another neural network, AutoAscend, does a better job of progressing through the game, but “even it can only solve sokoban and reach mines end,” Cupiał notes.

Is it a bug?

I submit to you that, although NetHack responded to the full moon in its intended way, this quirky, very hard-to-fathom stop on a machine-learning journey was indeed a bug and a worthy one in the pantheon. It’s not a Harvard moth, nor a 500-mile email, but what is?

Because the team used Singularity to back up and restore their stack, they inadvertently carried forward the machine time and resulting bug each time they tried to solve it. The machine’s resulting behavior was so bizarre, and seemingly based on unseen forces, that it drove a coder into fits. And the story has a beginning, a climactic middle, and a denouement that teaches us something, however obscure.

The NetHack Lunar Learning Bug is, I submit, quite worth memorializing. Thank you for your time.



ISPs seek halt of net neutrality rules before they take effect next month

Net neutrality back in court —

Fate of net neutrality may hinge on Supreme Court’s “major questions” doctrine.

Illustration of network data represented by curving lines flowing on a dark background.

Getty Images | Yuichiro Chino

As expected, broadband industry lobby groups have sued the Federal Communications Commission in an attempt to nullify net neutrality rules that prohibit blocking, throttling, and paid prioritization.

Lobby groups representing cable, telecom, and mobile Internet service providers sued the FCC in several US appeals courts last week. Industry groups also filed a petition with the FCC on Friday asking for a stay of the rules, claiming the regulations shouldn’t take effect while litigation is pending because the industry is likely to prevail in court.

The FCC is highly likely to reject the petition for a stay, but the groups can then ask appeals court judges to impose an injunction that would prevent enforcement. The industry lost a similar case during the Obama era, but is hoping to win this time because of the Supreme Court’s evolving approach on whether federal agencies can decide “major questions” without explicit instructions from Congress.

The petition for a stay was filed by groups including NCTA-The Internet & Television Association, which represents large cable providers such as Comcast and Charter; and USTelecom, which represents telcos including AT&T, Verizon, and CenturyLink/Lumen.

“By reclassifying broadband under Title II of the Communications Act of 1934, the Commission asserts the power to set prices, dictate terms and conditions, require or prohibit investment or divestment, and more. It should be ‘indisputable’ that the major-questions doctrine applies to that seismic claim of authority,” the petition for a stay said.

Broadband classified as telecommunications

The FCC’s net neutrality order reclassified broadband as telecommunications, which makes Internet service subject to common-carrier regulations under Title II. The order reverses the Trump-era FCC’s classification of broadband as an information service and is scheduled to take effect on July 22. The FCC approved it in a 3-2 vote on April 25.

Despite the industry’s claim that classification is a major question that can only be decided by Congress, a federal appeals court ruled in previous cases that the FCC has authority to classify broadband as either a telecommunications or information service.

The lobby groups claim that without a stay preventing enforcement, their members “will suffer irreparable harm, as they did in the wake of the 2015 Order. In particular, petitioners’ members will be forced to delay or forego valuable new services, incur prohibitive compliance costs, and pay more to obtain capital.”

Lawsuits against the FCC were filed in the US Court of Appeals for the District of Columbia Circuit by CTIA-The Wireless Association, which represents mobile providers; America’s Communications Association (ACA), which represents small and medium-sized cable providers; and the Wireless Internet Service Providers Association (WISPA), which represents fixed wireless providers.

The FCC was sued in other federal circuit appeals courts by the Texas Cable Association, the Ohio Telecom Association, the Ohio Cable Telecommunications Association, the Missouri Internet & Television Association, and Florida Internet & Television Association.

The cases will be consolidated into one court. The DC Circuit appeals court handled challenges to the Obama-era and Trump-era net neutrality decisions, ruling in favor of the FCC both times. Despite the Trump-era repeal, many ISPs still have to follow net neutrality rules because of regulations imposed by California and other states.

FCC: Authority “clear as day”

FCC Commissioner Geoffrey Starks said before the April 25 vote that the FCC’s authority to regulate broadband as a telecommunications service “is clear as day.”

To find otherwise, a court “would need to conclude that ‘this is a major questions case.’ Yet major questions review is reserved for only ‘extraordinary cases’—and this one doesn’t come close,” Starks said. “There’s no ‘unheralded power’ that we’re purporting to discover in the annals of an old, dusty statute—we’ve been classifying communications services one way or the other for decades, and the 1996 [Telecommunications] Act expressly codified our ability to continue that practice.”

If the industry loses at the appeals-court level again, lobby groups would seek review at the Supreme Court. Their hopes depend partly on Justice Brett Kavanaugh, who argued in a 2017 dissent as a circuit court judge that the “net neutrality rule is unlawful and must be vacated” because “Congress did not clearly authorize the FCC to issue the net neutrality rule.”

The CTIA lawsuit against the FCC said, “Given the undisputed fact that broadband Internet is an essential engine of the nation’s economic, social, and political life, the major-questions doctrine requires the FCC to identify clear statutory authority to subject broadband Internet access service to common-carrier regulation. The Order does not and cannot point to such authority. And to the extent there is any statutory ambiguity, the Order’s Title II approach far exceeds the bounds of reasonable interpretation and infringes rights protected by the Constitution.”



Nvidia emails: Elon Musk diverting Tesla GPUs to his other companies

why not just make cars? —

The Tesla CEO is accused of diverting resources from the company again.

A row of server racks

Enlarge / Tesla will have to rely on its Dojo supercomputer for a while longer after CEO Elon Musk diverted 12,000 Nvidia GPUs to X instead.

Tesla

Elon Musk is yet again being accused of diverting Tesla resources to his other companies. This time, it’s high-end H100 GPU clusters from Nvidia. CNBC’s Lora Kolodny reports that while Tesla ordered these pricey computers, emails from Nvidia staff show that Musk instead redirected 12,000 GPUs to be delivered to his social media company X.

It’s almost unheard of for a profitable automaker to pivot its business into another sector, but that appears to be the plan at Tesla as Musk continues to say that the electric car company is destined to be an AI and robotics firm instead.

Does Tesla make cars or AI?

That explains why Musk told investors in April that Tesla had spent $1 billion on GPUs in the first three months of this year, almost as much as it spent on R&D, despite being desperate for new models to add to what is now an old and very limited product lineup that is suffering rapidly declining sales in the US and China.

Despite increasing federal scrutiny here in the US, Tesla has reduced the price of its controversial “full-self driving” assist, and the automaker is said to be close to rolling out the feature in China. (Questions remain about how many Chinese Teslas would be able to utilize this feature given that a critical chip was left out of 1.2 million cars built there during the chip shortage.)

Perfecting this driver assist would be very valuable to Tesla, which offers FSD as a monthly subscription as an alternative to a one-off payment. The profit margins for subscription software services vastly outstrip the margins Tesla can make selling physical cars, which dropped to just 5.5 percent for Q1 2024. And Tesla says that massive GPU clusters are needed to develop FSD’s software.

Isn’t Tesla desperate for Nvidia GPUs?

Tesla has been developing its own in-house supercomputer for AI, called Dojo. But Musk has previously said that computer could be redundant if Tesla could source more H100s. “If they could deliver us enough GPUs, we might not need Dojo, but they can’t because they’ve got so many customers,” Musk said during a July 2023 investor day.

Which makes his decision to have his other companies jump the queue all the more notable. In December, an internal Nvidia memo seen by CNBC said, “Elon prioritizing X H100 GPU cluster deployment at X versus Tesla by redirecting 12k of shipped H100 GPUs originally slated for Tesla to X instead. In exchange, original X orders of 12k H100 slated for Jan and June to be redirected to Tesla.”

X and the affiliated xAI are developing generative AI products like large language models.

Not the first time

This is not the first time that Musk has been accused of diverting resources (and his time) from publicly held Tesla to his other privately owned enterprises. In December 2022, US Sen. Elizabeth Warren (D-Mass.) wrote to Tesla asking Tesla to explain whether Musk was diverting Tesla resources to X (then called Twitter):

This use of Tesla employees raises obvious questions about whether Mr. Musk is appropriating resources from a publicly traded firm, Tesla, to benefit his own private company, Twitter. This, of course, would violate Mr. Musk’s legal duty of loyalty to Tesla and trigger questions about the Tesla Board’s responsibility to prevent such actions, and may also run afoul of other “anti-tunneling rules that aim to prevent corporate insiders from extracting resources from their firms.”

Musk giving time meant (and compensated) for by Tesla to SpaceX, X, and his other ventures was also highlighted as a problem by the plaintiffs in a successful lawsuit to overturn a $56 billion stock compensation package.

And last summer, the US Department of Justice opened an investigation into whether Musk used Tesla resources to build himself a mansion in Texas; the probe has since expanded to cover behavior stretching back to 2017.

These latest accusations of misuse of Tesla resources come at a time when Musk is asking shareholders to reapprove what is now a $46 billion stock compensation plan.



Intel details new Lunar Lake CPUs that will go up against AMD, Qualcomm, and Apple

more lakes —

Lunar Lake returns to a more conventional-looking design for Intel.

A high-level breakdown of Intel's next-gen Lunar Lake chips, which preserve some of Meteor Lake's changes while reverting others.

Enlarge / A high-level breakdown of Intel’s next-gen Lunar Lake chips, which preserve some of Meteor Lake’s changes while reverting others.

Intel

Given its recent manufacturing troubles, a resurgent AMD, an incursion from Qualcomm, and Apple’s shift from customer to competitor, it’s been a rough few years for Intel’s processors. Computer buyers have more viable options than they have had in many years, and in many ways the company’s Meteor Lake architecture was more interesting as a technical achievement than as an upgrade over previous-generation Raptor Lake processors.

But even given all of that, Intel still provides the vast majority of PC CPUs—nearly four-fifths of all computer CPUs sold are Intel’s, according to recent analyst estimates from Canalys. The company still casts a long shadow, and what it does still helps set the pace for the rest of the industry.

Enter its next-generation CPU architecture, codenamed Lunar Lake. We’ve known about Lunar Lake for a while—Intel reminded everyone it was coming when Qualcomm upstaged it during Microsoft’s Copilot+ PC reveal—but this month at Computex the company is going into more detail ahead of availability sometime in Q3 of 2024.

Lunar Lake will be Intel’s first processor with a neural processing unit (NPU) that meets Microsoft’s Copilot+ PC requirements. But looking beyond the endless flow of AI news, it also includes upgraded architectures for its P-cores and E-cores, a next-generation GPU architecture, and some packaging changes that simultaneously build on and revert many of the dramatic changes Intel made for Meteor Lake.

Intel didn’t have more information to share on Arrow Lake, the architecture that will bring Meteor Lake’s big changes to socketed desktop motherboards for the first time. But Intel says that Arrow Lake is still on track for release in Q4 of 2024, and it could be announced at Intel’s annual Innovation event in late September.

Building on Meteor Lake

Lunar Lake continues to use a mix of P-cores and E-cores, which allow the chip to handle a mix of low-intensity and high-performance workloads without using more power than necessary.

Enlarge / Lunar Lake continues to use a mix of P-cores and E-cores, which allow the chip to handle a mix of low-intensity and high-performance workloads without using more power than necessary.

Intel

Lunar Lake shares a few things in common with Meteor Lake, including a chiplet-based design that combines multiple silicon dies into one big one with Intel’s Foveros packaging technology. But in some ways Lunar Lake is simpler and less weird than Meteor Lake, with fewer chiplets and a more conventional design.

Meteor Lake’s components were spread across four tiles: a compute tile that was mainly for the CPU cores, a TSMC-manufactured graphics tile for the GPU rendering hardware, an IO tile to handle things like PCI Express and Thunderbolt connectivity, and a grab-bag “SoC” tile with a couple of additional CPU cores, the media encoding and decoding engine, display connectivity, and the NPU.

Lunar Lake only has two functional tiles, plus a small “filler tile” that seems to exist solely so that the Lunar Lake silicon die can be a perfect rectangle once it’s all packaged together. The compute tile combines all of the processor’s P-cores and E-cores, the GPU, the NPU, the display outputs, and the media encoding and decoding engine. And the platform controller tile handles wired and wireless connectivity, including PCIe and USB, Thunderbolt 4, and Wi-Fi 7 and Bluetooth 5.4.

This is essentially the same split that Intel has used for laptop chips for years and years: one chipset die and one die for the CPU, GPU, and everything else. It’s just that now, those two chips are part of the same silicon die, rather than separate dies on the same processor package. In retrospect it seems like some of Meteor Lake’s most noticeable design departures—the division of GPU-related functions among different tiles, the presence of additional CPU cores inside of the SoC tile—were things Intel had to do to work around the fact that another company was actually manufacturing most of the GPU. Given the opportunity, Intel has returned to a more recognizable assemblage of components.

Intel is shifting to on-package RAM for Lunar Lake, something Apple also uses for its M-series chips.

Enlarge / Intel is shifting to on-package RAM for Lunar Lake, something Apple also uses for its M-series chips.

Intel

Another big packaging change is that Intel is integrating RAM into the CPU package for Lunar Lake, rather than having it installed separately on the motherboard. Intel says this uses 40 percent less power, since it shortens the distance data needs to travel. It also saves motherboard space, which can either be used for other components, to make systems smaller, or to make more room for battery. Apple also uses on-package memory for its M-series chips.

Intel says that Lunar Lake chips can include up to 32GB of LPDDR5x memory. The downside is that this on-package memory precludes the usage of separate Compression-Attached Memory Modules, which combine many of the benefits of traditional upgradable DIMM modules and soldered-down laptop memory.
