Author name: Mike M.

Pornhub prepares to block five more states rather than check IDs

“Uphill battle” —

The number of states blocked by Pornhub will soon nearly double.

Aurich Lawson | Getty Images

Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.

Age-verification laws that require adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning that “this is not a privacy-by-design approach.”

Pornhub does not oppose age verification outright but advocates for laws that require device-based age verification, which lets users access adult sites after authenticating their identity on their own devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.

“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”
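The device-based model Pornhub describes can be sketched in miniature: the operating system attests once that the user passed an age check, and each site verifies only that signed claim, never receiving an ID. The sketch below is purely illustrative; the function names are invented, and the shared-key HMAC is a stand-in for the hardware-backed, public-key attestation a real scheme would use.

```python
import hashlib
import hmac
import json

# Stand-in for a hardware-backed device key. A real scheme would use
# asymmetric keys, so sites verify with a public key and never hold a secret.
DEVICE_KEY = b"device-secret"

def os_issue_attestation() -> tuple[bytes, bytes]:
    """Device side, performed once: the claim carries no identity,
    only the boolean fact that the user passed an age check."""
    claim = json.dumps({"over_18": True}).encode()
    signature = hmac.new(DEVICE_KEY, claim, hashlib.sha256).digest()
    return claim, signature

def site_verify(claim: bytes, signature: bytes) -> bool:
    """Site side: checks the signature and learns only 'over_18'.
    No ID document, name, or photo ever reaches the site."""
    expected = hmac.new(DEVICE_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    return bool(json.loads(claim).get("over_18", False))
```

The point of the design, as the blog post frames it, is that a site operating under such a scheme stores no PII at all, so a breach of the site exposes nothing about who visited.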

A spokesperson for Pornhub-owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”

“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

Age-verification laws are harmful, Pornhub says

Pornhub’s chief complaint with current age-verification laws is that they are difficult to enforce and make visiting an adult site riskier than ever.

“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”

Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.

Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.

And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.

That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.

As an example, Pornhub reported that traffic to its site in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law took effect. That allegedly showed not just how few users were willing to show an ID to access the popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”

Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.

“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.

Aylo’s spokesperson shared 2023 Similarweb data with Ars showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic, while seven non-compliant sites saw a sharp uptick in traffic. Similar trends appeared in Google Trends data for Utah and Mississippi, while market shares were largely maintained in California, a state not yet checking IDs to access adult sites.

Radioactive drugs strike cancer with precision

Pharma interest and investment in radiotherapy drugs is heating up.

Knowable Magazine

On a Wednesday morning in late January 1896 at a small light bulb factory in Chicago, a middle-aged woman named Rose Lee found herself at the heart of a groundbreaking medical endeavor. With an X-ray tube positioned above the tumor in her left breast, Lee was treated with a torrent of high-energy particles that penetrated into the malignant mass.

“And so,” as her treating clinician later wrote, “without the blaring of trumpets or the beating of drums, X-ray therapy was born.”

Radiation therapy has come a long way since those early beginnings. The discovery of radium and other radioactive metals opened the doors to administering higher doses of radiation to target cancers located deeper within the body. The introduction of proton therapy later made it possible to precisely guide radiation beams to tumors, thus reducing damage to surrounding healthy tissues—a degree of accuracy that was further refined through improvements in medical physics, computer technologies and state-of-the-art imaging techniques.

But it wasn’t until the new millennium, with the arrival of targeted radiopharmaceuticals, that the field achieved a new level of molecular precision. These agents, akin to heat-seeking missiles programmed to hunt down cancer, journey through the bloodstream to deliver their radioactive warheads directly at the tumor site.

Use of radiation to kill cancer cells has a long history. In this 1915 photo, a woman receives “roentgenotherapy”—treatment with X-rays—directed at an epithelial-cell cancer on her face.

Wikimedia Commons

Today, only a handful of these therapies are commercially available for patients—specifically, for forms of prostate cancer and for tumors originating within hormone-producing cells of the pancreas and gastrointestinal tract. But this number is poised to grow as major players in the biopharmaceutical industry begin to invest heavily in the technology.

AstraZeneca became the latest heavyweight to join the field when, on June 4, the company completed its purchase of Fusion Pharmaceuticals, maker of next-generation radiopharmaceuticals, in a deal worth up to $2.4 billion. The move follows similar billion-dollar-plus transactions made in recent months by Bristol Myers Squibb (BMS) and Eli Lilly, along with earlier takeovers of innovative radiopharmaceutical firms by Novartis, which continued its acquisition streak—begun in 2018—with another planned $1 billion upfront payment for a radiopharma startup, as revealed in May.

“It’s incredible how, suddenly, it’s all the rage,” says George Sgouros, a radiological physicist at Johns Hopkins University School of Medicine in Baltimore and the founder of Rapid, a Baltimore-based company that provides software and imaging services to support radiopharmaceutical drug development. This surge in interest, he points out, underscores a wider recognition that radiopharmaceuticals offer “a fundamentally different way of treating cancer.”

Treating cancer differently, however, means navigating a minefield of unique challenges, particularly in the manufacturing and meticulously timed distribution of these new therapies, before the radioactivity decays. Expanding the reach of the therapy to treat a broader array of cancers will also require harnessing new kinds of tumor-killing particles and finding additional suitable targets.

“There’s a lot of potential here,” says David Nierengarten, an analyst who covers the radiopharmaceutical space for Wedbush Securities in San Francisco. But, he adds, “There’s still a lot of room for improvement.”

Atomic advances

For decades, a radioactive form of iodine stood as the sole radiopharmaceutical available on the market. Once ingested, this iodine gets taken up by the thyroid, where it helps to destroy cancerous cells of that butterfly-shaped gland in the neck—a treatment technique established in the 1940s that remains in common use today.

But the targeted nature of this strategy is not widely applicable to other tumor types.

The thyroid is naturally inclined to absorb iodine from the bloodstream since this mineral, which is found in its nonradioactive form in many foods, is required for the synthesis of certain hormones made by the gland.

Other cancers don’t have a comparable affinity for radioactive elements. So instead of hijacking natural physiological pathways, researchers have had to design drugs that are capable of recognizing and latching onto specific proteins made by tumor cells. These drugs are then further engineered to act as targeted carriers, delivering radioactive isotopes—unstable atoms that emit nuclear energy—straight to the malignant site.

Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

The Anthropic Claude 3 logo, jazzed up by Benj Edwards.

Anthropic / Benj Edwards

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of “3.5” models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000 token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that shows related work documents in a dedicated window.

So far, people outside of Anthropic seem impressed. “This model is really, really good,” wrote independent AI researcher Simon Willison on X. “I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump).”

As we’ve written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate level knowledge), GSM8K (grade school math), and HumanEval (coding).

Claude 3.5 Sonnet benchmarks provided by Anthropic.

If all that makes your eyes glaze over, that’s OK; it’s meaningful to researchers but mostly marketing to everyone else. A more useful performance metric comes from what we might call “vibemarks” (coined here first!) which are subjective, non-rigorous aggregate feelings measured by competitive usage on sites like LMSYS’s Chatbot Arena. The Claude 3.5 Sonnet model is currently under evaluation there, and it’s too soon to say how well it will fare.

Claude 3.5 Sonnet also outperforms Anthropic’s previous-best model (Claude 3 Opus) on benchmarks measuring “reasoning,” math skills, general knowledge, and coding abilities. For example, the model demonstrated strong performance in an internal coding evaluation, solving 64 percent of problems compared to 38 percent for Claude 3 Opus.

Claude 3.5 Sonnet is also a multimodal AI model that accepts visual input in the form of images, and the new model is reportedly excellent at a battery of visual comprehension tests.

Claude 3.5 Sonnet benchmarks provided by Anthropic.

Roughly speaking, the visual benchmarks mean that 3.5 Sonnet is better at pulling information from images than previous models. For example, you can show it a picture of a rabbit wearing a football helmet, and the model knows it’s a rabbit wearing a football helmet and can talk about it. That’s fun for tech demos, but the technology is still not accurate enough for applications where reliability is mission critical.

Researchers describe how to tell if ChatGPT is confabulating

Aurich Lawson | Getty Images

It’s one of the world’s worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that’s indistinguishable from when they get things right. There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require some extrapolation from facts that the LLM isn’t capable of; or some aspect of the LLM’s training might have incentivized a falsehood.

But perhaps the simplest explanation is that an LLM doesn’t recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on them for everything from college essays to job applications. Now, researchers from the University of Oxford say they’ve found a relatively simple way to determine when LLMs appear to be confabulating that works with all popular models and across a broad range of subjects. And in doing so, they’ve developed evidence that most of the alternative facts LLMs provide are a product of confabulation.

Catching confabulation

The new research is strictly about confabulations, not errors caused by, say, training on false inputs. As the Oxford team defines them in their paper describing the work, confabulations are cases where “LLMs fluently make claims that are both wrong and arbitrary—by which we mean that the answer is sensitive to irrelevant details such as random seed.”

The reasoning behind their work is actually quite simple. LLMs aren’t trained for accuracy; they’re simply trained on massive quantities of text and learn to produce human-sounding phrasing through that. If enough text examples in its training consistently present something as a fact, then the LLM is likely to present it as a fact. But if the examples in its training are few or inconsistent in their facts, then the LLM synthesizes a plausible-sounding answer that is likely incorrect.

But the LLM could also run into a similar situation when it has multiple options for phrasing the right answer. To use an example from the researchers’ paper, “Paris,” “It’s in Paris,” and “France’s capital, Paris” are all valid answers to “Where’s the Eiffel Tower?” So, statistical uncertainty, termed entropy in this context, can arise either when the LLM isn’t certain about how to phrase the right answer or when it can’t identify the right answer.

This means it’s not a great idea to simply force the LLM to return “I don’t know” when confronted with several roughly equivalent answers. We’d probably block a lot of correct answers by doing so.

So instead, the researchers focus on what they call semantic entropy. This considers all the statistically likely answers generated by the LLM and determines how many of them are semantically equivalent. If a large number all have the same meaning, then the LLM is likely uncertain about phrasing but has the right answer. If not, then it is presumably in a situation where it would be prone to confabulation and should be prevented from answering.
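Mechanically, the idea is easy to sketch. The toy code below is not the Oxford team’s implementation; its string-matching equivalence test is a stand-in for the paper’s LLM-based entailment check. Sample several answers, cluster them by meaning, and compute Shannon entropy over the cluster sizes: low entropy means the model keeps giving the same answer in different words, while high entropy flags likely confabulation.

```python
import math

def semantic_entropy(answers, equivalent):
    """Cluster sampled answers into meaning classes using the supplied
    equivalence test, then compute Shannon entropy over cluster sizes."""
    clusters = []
    for answer in answers:
        for cluster in clusters:
            if equivalent(answer, cluster[0]):
                cluster.append(answer)
                break
        else:
            clusters.append([answer])
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy equivalence test: answers mentioning Paris count as one meaning.
# (The paper uses a model-based entailment check instead.)
same_meaning = lambda a, b: ("paris" in a.lower()) == ("paris" in b.lower())

# Same meaning, different phrasings: entropy is zero (model is confident).
low = semantic_entropy(
    ["Paris", "It's in Paris", "France's capital, Paris"], same_meaning)

# Contradictory answers: entropy is high (possible confabulation).
high = semantic_entropy(["Paris", "Rome", "Paris", "Berlin"], same_meaning)
```

In this sketch, the three Eiffel Tower phrasings collapse into a single cluster and score zero entropy, while the split between Paris and non-Paris answers scores the maximum for two equal clusters.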

Why Interplay’s original Fallout 3 was canceled 20+ years ago

The path untaken —

OG Fallout producer says “Project Van Buren” ran out of time and money.

What could have been.

PC gamers of a certain vintage will remember tales of Project Van Buren, a title that early ’00s Interplay intended as the sequel to 1998’s hit Fallout 2. Now, original Fallout producer Timothy Cain is sharing some behind-the-scenes details about how he contributed to the project’s cancellation during a particularly difficult time for publisher Interplay.

Cain famously left Interplay during Fallout 2‘s development in the late ’90s to help form short-lived RPG house Troika Games. After his departure, though, he was still in touch with some people from his former employer, including an unnamed Interplay vice president looking for some outside opinions on the troubled Van Buren project.

“Would you mind coming over and playing one of my game prototypes?” Cain recalls this vice president asking him sometime in mid-2003. “We’re making a Fallout game and I’m going to have to cancel it. I don’t think they can get it done… but if you could come over and look at it and give me an estimate, there’s a chance I wouldn’t cancel it.”

Cain discusses his memories of testing “Project Van Buren.”

Cain recalls walking “across the street” from Troika to the Interplay offices, motivated to help because, as he remembers it, “if you don’t do it, bad things will happen to other people.” There, he got to see the latest build of Project Van Buren, running on the 3D Jefferson Engine that was intended to replace the sprite-based isometric view of the first two Fallout games. Cain said the version he played was similar or identical to a tech demo obtained by fan site No Mutants Allowed in 2007 and featured in a recent YouTube documentary about the failed project.

After playing for about two hours and talking to the team behind the project, Cain said the VP asked him directly how long the demo needed to become a shippable game. The answer Cain reportedly gave—18 months of standard development for “a really good game” or 12 months of “death march” crunch time for an unbalanced, buggy mess—was too long for the financially strapped publisher to devote to funding the project.

“He could not afford a development period of more than six months,” Cain said. “To me, that time frame was out of the question… He thought it couldn’t be done in six months; I just confirmed that.”

Show me the money

Looking back today, Cain said it’s hard to pinpoint a single “villain” responsible for Van Buren’s failure. Even reusing the engine from the first Fallout game—as the Fallout 2 team did for that title’s quick 12-month development process—wouldn’t have necessarily helped, Cain said. “Would that engine have been acceptable five years later [after Fallout 2]?” he asked rhetorically. “Had anyone really looked at it? I started the engine in 1994… it’s creaky.”

Real “Van Buren”-heads will enjoy this in-depth look at the game’s development, including details of Interplay’s troubled financial situation in the early ’00s.

In the end, Van Buren’s cancellation (and that of a planned Interplay Fallout MMO years later) simply “comes down to money,” Cain said. “I do not believe that [with] the money they had left, the game in the state it was in, and the people who were working on it could have completed it within six months,” he said. “And [if they did], I don’t think it would have been a game you would have liked playing.”

Luckily, the shuttering of Interplay in all but name in the years after Van Buren’s cancellation wouldn’t spell the end of the Fallout series. Bethesda acquired the license in 2007, leading to a completely reimagined Fallout 3 that has become the cornerstone of a fan-favorite franchise. But for those still wondering what Interplay’s original “Fallout 3” could have been, a group of fans is trying to rebuild the Project Van Buren demo from the ground up for modern audiences.

NatGeo documents salvage of Tuskegee Airman’s lost WWII plane wreckage

Remembering a hero this Juneteenth —

The Real Red Tails investigates the fatal crash of 2nd Lt. Frank Moody in 1944.

Michigan’s State Maritime Archaeologist Wayne R. Lusardi takes notes underwater at the Lake Huron WWII wreckage of 2nd Lt. Frank Moody’s P-39 Airacobra. Moody, one of the famed Tuskegee Airmen, fatally crashed in 1944.

National Geographic

In April 1944, a pilot with the Tuskegee Airmen, Second Lieutenant Frank Moody, was on a routine training mission when his plane malfunctioned. Moody lost control of the aircraft and plunged to his death in the chilly waters of Lake Huron. His body was recovered two months later, but the airplane was left at the bottom of the lake—until now. Over the last few years, a team of divers working with the Tuskegee Airmen National Historical Museum in Detroit has been diligently recovering the various parts of Moody’s plane to determine what caused the pilot’s fatal crash.

That painstaking process is the centerpiece of The Real Red Tails, a new documentary from National Geographic narrated by Sheryl Lee Ralph (Abbott Elementary). The documentary features interviews with the underwater archaeologists working to recover the plane, as well as firsthand accounts from Moody’s fellow airmen and stunning underwater footage from the wreck itself.

The Tuskegee Airmen were the first Black military pilots in the US Armed Forces and helped pave the way for the desegregation of the military. The men painted the tails of their P-47 planes red, earning them the nickname the Red Tails. (They initially flew Bell P-39 Airacobras like Moody’s downed plane, and later flew P-51 Mustangs.) It was then-First Lady Eleanor Roosevelt who helped tip popular opinion in favor of the fledgling unit when she flew with the Airmen’s chief instructor, C. Alfred Anderson, in March 1941. The Airmen earned praise for their skill and bravery in combat during World War II, with members being awarded three Distinguished Unit Citations, 96 Distinguished Flying Crosses, 14 Bronze Stars, 60 Purple Hearts, and at least one Silver Star.

  • 2nd Lt. Frank Moody’s official military portrait.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) Harry T. Stewart.

    National Geographic/Rob Lyall

  • Stewart’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) James H. Harvey.

    National Geographic/Rob Lyall

  • Harvey’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Stewart and Harvey (second and third, l-r).

    James Harvey

  • Stewart stands next to a restored WWII Mustang airplane at the Tuskegee Airmen National Museum in Detroit.

    National Geographic/Rob Lyall

A father-and-son team, David and Drew Losinski, discovered the wreckage of Moody’s plane in 2014 during cleanup efforts for a sunken barge. They saw what looked like a car door lying on the lake bed; it turned out to be a door from a WWII-era P-39. The red paint on the tail proved the plane had been flown by a “Red Tail,” and it was eventually identified as Moody’s. The Losinskis then joined forces with Wayne Lusardi, Michigan’s state maritime archaeologist, to explore the remarkably well-preserved wreckage. More than 600 pieces have been recovered thus far, including the engine, the propeller, the gearbox, machine guns, and the main 37mm cannon.

Ars caught up with Lusardi to learn more about this fascinating ongoing project.

Ars Technica: The area where Moody’s plane was found is known as Shipwreck Alley. Why have there been so many wrecks—of both ships and airplanes—in that region?

Wayne Lusardi: Well, the Great Lakes are big, and if you haven’t been on them, people don’t really understand they’re literally inland seas. Consequently, there has been a lot of maritime commerce on the lakes for hundreds of years. Wherever there’s lots of ships, there’s usually lots of accidents. It’s just the way it goes. What we have in the Great Lakes, especially around some places in Michigan, are really bad navigation hazards: hidden reefs, rock piles that are just below the surface that are miles offshore and right near the shipping lanes, and they often catch ships. We have bad storms that crop up immediately. We have very chaotic seas. All of those combined to take out lots of historic vessels. In Michigan alone, there are about 1,500 shipwrecks; in the Great Lakes, maybe close to 10,000 or so.

One of the biggest causes of airplanes getting lost offshore here is fog. Especially before they had good navigation systems, pilots got lost in the fog and sometimes crashed into the lake or just went missing altogether. There are also thunderstorms, weather conditions that impact air flight here, and a lot of ice and snow storms.

Just like commercial shipping, the aviation heritage of the Great Lakes is extensive; a lot of the bigger cities on the Eastern Seaboard extend into the Great Lakes. It’s no surprise that they populated the waterfront, the shorelines first, and in the early part of the 20th century, started connecting them through aviation. The military included the Great Lakes in their training regimens because conditions encountered there, like flying over big bodies of water or going into remote areas to strafe or to bomb, mimicked what pilots would see in the European theater during World War I. When Selfridge Field near Detroit was developed by the Army Air Corps in 1917, it was the northernmost military air base in the United States, and it trained pilots to fly in all-weather conditions to prepare them for Europe.

Shadow of the Erdtree has ground me into dust, which is why I recommend it

Elden Ring: Shadow of the Erdtree DLC —

Souls fans seeking real challenge should love it. Casuals like me might wait.

Image of a fight from Shadow of the Erdtree

Bandai

Elden Ring was my first leap into FromSoftware titles (and Dark-Souls-like games generally), and I fell in deep. Over more than 200 hours, I ate up the cryptic lore, learned lots of timings, and came to appreciate the feeling of achievement through perseverance.

Months ago, in preparation for Elden Ring’s expansion, Shadow of the Erdtree (also on PlayStation and Xbox, arriving June 21), I ditched the save file with which I had beaten the game and started over. I wanted to try out big swords and magic casting. I wanted to try a few new side quests. And I wanted to have a fresh experience with the game before Shadow arrived.

I have had a very fresh experience, in that this DLC has made me feel like I’m still in the first hour of my first game. Reader, this expansion is mopping the floor with me. It looked at my résumé, which has “Elden Lord” as its most recent job title, and tossed it into the slush pile. If you’re wondering whether Shadow would, like Elden Ring, provide a different kind of challenge and offer, like the base game, easier paths for Souls newcomers: no, not really. At least not until you’re already far along. This DLC is for people who beat Elden Ring, or all but beat it, and want capital-M More.

That should be great news for longtime Souls devotees who fondly recall the difficulty spikes of some of the earlier games’ DLC, or those who want a slightly more linear, dungeon-by-dungeon, boss-by-boss experience. For everybody else, I’d suggest waiting until you’re confidently through most of the main game—and for the giant wiki/YouTube apparatus around the game to catch up and provide some guidance.

What “ready for the DLC” really means

Technically, you can play Shadow of the Erdtree once you’ve done two things in Elden Ring: beaten Starscourge Radahn and Mohg, Lord of Blood. Radahn is a mid-game boss, and Mohg is generally encountered in the later stages. But, perhaps anticipating the DLC, the game allows you to get to Mohg relatively early by using a specific item.

Just getting to a level where you’re reasonably ready to tackle Mohg will be a lot. As of a week ago, more than 60 percent of players on Steam (PC) had not yet beaten Mohg; that number is even higher on consoles. On my replay, I got to about level 105 at around 50 hours, but I remembered a lot about both the mechanics and the map. I had the item to travel to Mohg and the other item that makes him easier to beat. Maybe it’s strange to avoid spoilers for a game that came out more than two years ago, but, again, most players have not gotten this far.

I took down Mohg in one try; I’m not bragging, just setting expectations. I had a fully upgraded Moonlight Greatsword, a host of spells, a fully upgraded Mimic Tear spirit helper, and a build focused on Intelligence (for the sword and spell casting), but I could also wear decent armor while still adequately rolling. Up until this point, I was surprised by how much easier the bosses and dungeons I revisited had felt (except the Valiant Gargoyle, which was just as hard).

I stepped into the DLC, wandered around a bit, killed a few shambling souls (“Shadows of the Dead”), and found a sealed chasm (“Blackgaol”) in the first area. The knight inside took me out, repeatedly, usually in two quick sword flicks. Sometimes he would change it up and perforate me with gatling-speed flaming crossbow bolts or a wave emanating from his sword. Most of the time, he didn’t even touch his healing flask before I saw “YOU DIED.”

Ah, but most Elden Ring players will remember that the game put an intentionally way-too-hard enemy in the very first open area, almost as a lesson about leveling up and coming back. So I hauled my character and bruised ego toward a nearby ruin, filled mostly with more dead Shadows. The first big “legacy dungeon,” Belurat, Tower Settlement, was just around the corner. I headed in and started compiling my first of what must be 100 deaths by now.

There are the lumbering Shadows, yes, but there are also their bigger brothers, who love to ambush with a leaping strike and take me down in two hits. There are Man-Flies, which unsurprisingly swarmed and latched onto my head, killing me if I wasn’t at full health (40 Vigor, if you must know). There are Gravebirds, which, like all birds in Elden Ring, are absolute jerks that mess with your camera angles. And there are Horned Warriors, who are big, fast, relentless, and each responsible for maybe a dozen of my deaths.

At level 105, with a known build strategy centered around a weapon often regarded as overpowered and all the knowledge I had of the game’s systems and strategies, I was barely hanging on, occasionally inching forward. What gives?

Shadow of the Erdtree has ground me into dust, which is why I recommend it.

t-mobile-defends-misleading-“price-lock”-claim-but-agrees-to-change-ads

T-Mobile defends misleading “Price Lock” claim but agrees to change ads

T-Mobile logo displayed in front of a stock market chart.

Getty Images | SOPA Images

T-Mobile has agreed to change its advertising for the “Price Lock” guarantee that doesn’t actually lock in a customer’s price, but continues to defend the offer.

T-Mobile users expressed their displeasure about being hit with price hikes of up to $5 per line on plans that seemed to have a lifetime price guarantee, but it was a challenge by AT&T that forced T-Mobile to agree to change its advertising. AT&T filed the challenge with the advertising industry’s self-regulatory group, which ruled that T-Mobile’s Price Lock ads were misleading.

As we’ve reported, T-Mobile’s guarantee (currently called “Price Lock” and previously the “Un-contract”) is simply a promise that T-Mobile will pay your final month’s bill if the carrier raises your price and you decide to cancel. Despite that, T-Mobile promised users that it “will never change the price you pay” if you’re on a plan with the provision.

BBB National Programs’ National Advertising Division (NAD), the ad industry’s self-regulatory body, ruled against T-Mobile in a decision issued yesterday. BBB National Programs is an independent nonprofit that is affiliated with the International Association of Better Business Bureaus.

The NAD’s decisions aren’t binding, but advertisers usually comply with them. That’s what T-Mobile is doing.

“T-Mobile is proud of its innovative Price Lock policy, where customers can get their last month of service on T-Mobile if T-Mobile ever changes the customer’s price, and the customer decides to leave,” the company said in its official response to the NAD’s decision. “While T-Mobile believes the challenged advertisements appropriately communicate the generous terms of its Price Lock policy, T-Mobile is a supporter of self-regulation and will take NAD’s recommendations to clarify the terms of its policy into account with respect to its future advertising.”

AT&T: Price Lock not a real price lock

While our recent reports on Price Lock concerned mobile plans, the ads challenged by AT&T were for T-Mobile’s 5G home Internet service.

“AT&T argued that the ‘Price Lock’ claims are false because T-Mobile is not committing to locking the pricing of its service for any amount of time,” the NAD’s decision said. “AT&T also argued that T-Mobile’s disclosures contradict the ‘Price Lock’ claim because they set forth limitations which make clear that T-Mobile may increase the price of service for any reason at any time.”

T-Mobile countered “that its home Internet service ‘price lock’ is innovative and unique in the industry, serving as a strong disincentive to T-Mobile against raising prices and offering a potential benefit of free month’s service, and that it has the discretion as to how to define a ‘price lock’ so long as it clearly communicates the terms,” the NAD noted.

AT&T challenged print and online ads, and a TV commercial featuring actors Zach Braff, Donald Faison, and Jason Momoa. The ads displayed a $50 monthly rate with the text “Price Lock” and included language clarifying the actual details of the offer.

The NAD said that “impactful claims about pricing policies require clear communication of what those policies are and cannot leave consumers with a fundamental misunderstanding about what those policies mean.” T-Mobile’s ads created a fundamental misunderstanding, the NAD found.


supermassive-black-hole-roars-to-life-as-astronomers-watch-in-real-time

Supermassive black hole roars to life as astronomers watch in real time

Sleeping Beauty —

A similar awakening may one day occur with the Milky Way’s supermassive black hole.

Artist’s animation of the black hole at the center of SDSS1335+0728 awakening in real time—a first for astronomers.

In December 2019, astronomers were surprised to observe a long-quiet galaxy, 300 million light-years away, suddenly come alive, emitting ultraviolet, optical, and infrared light into space. Far from quieting down again, by February of this year, the galaxy had begun emitting X-ray light; it is becoming more active. Astronomers think it is most likely an active galactic nucleus (AGN), which gets its energy from the supermassive black hole at the galaxy’s center and/or from the black hole’s spin. That’s the conclusion of a new paper accepted for publication in the journal Astronomy and Astrophysics, although the authors acknowledge the possibility that it might also be some kind of rare tidal disruption event (TDE).

The brightening of SDSS1335+0728 in the constellation Virgo, after decades of quietude, was first detected by the Zwicky Transient Facility telescope. Its supermassive black hole is estimated to be about 1 million solar masses. To get a better understanding of what might be going on, the authors combed through archival data and combined that with new observations from various instruments, including the X-shooter, part of the Very Large Telescope (VLT) in Chile’s Atacama Desert.

There are many reasons why a normally quiet galaxy might suddenly brighten, including supernovae or a TDE, in which a star wanders too close to the black hole and is shredded, with part of the star’s original mass ejected violently outward. This, in turn, can form an accretion disk around the black hole that emits powerful X-rays and visible light. But these events don’t last nearly five years—usually not more than a few hundred days.

So the authors concluded that the galaxy has awakened and now has an AGN. Such glowing galactic cores were first identified by Carl Seyfert in 1943; the glow is the result of the cold dust and gas surrounding the black hole, which can form orbiting accretion disks. Gravitational forces compress the matter in the disk and heat it to millions of kelvins, producing radiation across the electromagnetic spectrum.
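To see why a disk at those temperatures shows up in X-rays, Wien’s displacement law gives the wavelength at which thermal emission peaks. This is a back-of-the-envelope sketch of that reasoning, not a calculation from the paper; the temperatures below are illustrative round numbers:

```python
# Wien's displacement law: the peak wavelength of blackbody emission
# is inversely proportional to temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins


def peak_wavelength_nm(temperature_k: float) -> float:
    """Return the blackbody peak emission wavelength in nanometers."""
    return WIEN_B / temperature_k * 1e9


# An inner accretion disk at a few million kelvins peaks in soft X-rays
# (roughly 0.01-10 nm).
print(peak_wavelength_nm(3e6))  # ~1 nm, soft X-ray

# A cooler outer disk at ~10,000 K peaks in the ultraviolet instead.
print(peak_wavelength_nm(1e4))  # ~290 nm, near-ultraviolet
```

The same disk thus radiates across the spectrum: hot inner regions produce X-rays while cooler outer regions produce ultraviolet and optical light.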

Alternatively, the activity might be due to an especially long and faint TDE—the longest and faintest yet detected, if so. Or it could be an entirely new phenomenon altogether. So SDSS1335+0728 is a galaxy to watch. Astronomers are already preparing for follow-up observations with the VLT’s Multi Unit Spectroscopic Explorer (MUSE) and the upcoming Extremely Large Telescope, among others, and perhaps even the Vera Rubin Observatory, slated to come online next summer. Its Legacy Survey of Space and Time (LSST) will be capable of imaging the entire southern sky continuously, potentially capturing even more galaxy awakenings.

“Regardless of the nature of the variations, [this galaxy] provides valuable information on how black holes grow and evolve,” said co-author Paula Sánchez Sáez, an astronomer at the European Southern Observatory in Germany. “We expect that instruments like [these] will be key in understanding [why the galaxy is brightening].”

There is also a supermassive black hole at the center of our Milky Way galaxy (Sgr A*), but not enough material has accreted onto it for astronomers to pick up any emitted radiation, even in the infrared. So, its galactic nucleus is deemed inactive. It may have been active in the past, and it’s possible that it will reawaken in a few million (or even billion) years, when the Milky Way merges with the Andromeda Galaxy and their respective supermassive black holes combine. Only time will tell.

Astronomy and Astrophysics, 2024. DOI: 10.1051/0004-6361/202347957  (About DOIs).

Listing image by ESO/M. Kornmesser


proton-is-taking-its-privacy-first-apps-to-a-nonprofit-foundation-model

Proton is taking its privacy-first apps to a nonprofit foundation model

Proton going nonprofit —

Because of Swiss laws, there are no shareholders, and only one mission.

Swiss flag flying over a landscape of Swiss mountains, with tourists looking on from a nearby ledge

Getty Images

Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act “in accordance with the purpose for which they were established.” While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton’s founding mission, Yen wrote.

There’s a lot more Proton to protect these days

Proton has gone from a single email offering to a wide range of services, many of which specifically target the often invasive offerings of other companies (read, mostly: Google). You can now move your cloud files, passwords, and calendars over to Proton and use its VPN service, with most offerings featuring end-to-end encryption and open source core software, all hosted in Switzerland, with its notably strong privacy laws.

None of that guarantees that a Swiss court can’t compel some forms of compliance from Proton, as happened in 2021. But compared to most service providers, Proton offers a far clearer and easier-to-grasp privacy model: It can’t see your stuff, and it only makes money from subscriptions.

Of course, foundations are only as strong as the people who guide them, and seemingly firewalled profit/non-profit models can be changed. Time will tell if Proton’s new model can keep up with changing markets—and people.


after-a-few-years-of-embracing-thickness,-apple-reportedly-plans-thinner-devices

After a few years of embracing thickness, Apple reportedly plans thinner devices

return to form —

Thinness is good, as long as it doesn’t come at the expense of other things.

Apple bragged about the thinness of the M4 iPad Pro; it's apparently a template for the company's designs going forward.


Apple

Though Apple has a reputation for prioritizing thinness in its hardware designs, the company has actually spent the last few years learning to embrace a little extra size and/or weight in its hardware. The Apple Silicon MacBook Pro designs are both thicker and heavier than the Intel-era MacBook Pros they replaced. The MacBook Air gave up its distinctive taper. Even the iPhone 15 Pro was a shade thicker than its predecessor.

But Apple is apparently planning to return to emphasizing thinness in its devices, according to reporting from Bloomberg’s Mark Gurman (in a piece that is otherwise mostly about Apple’s phased rollout of the AI-powered features it announced at its Worldwide Developers Conference last week).

Gurman’s sources say that Apple is planning “a significantly skinnier iPhone in time for the iPhone 17 line in 2025,” which presumably means that we can expect the iPhone 16 to continue in the same vein as current iPhone 15 models. The Apple Watch and MacBook Pro are also apparently on the list of devices Apple is trying to make thinner.

Apple previewed this strategy with the introduction of the M4 iPad Pro a couple of months ago, which looked a lot like the previous-generation iPad Pro design but was a few hundredths of an inch thinner and (especially for the 13-inch model) noticeably lighter than before. Gurman says the new iPad Pro is “the beginning of a new class of Apple devices that should be the thinnest and lightest products in their categories across the whole tech industry.”

Thin-first design isn’t an inherently good or bad thing, but the issue in Apple’s case is that it has occasionally come at the expense of other more desirable features. A thinner device has less room for cooling hardware like fans and heatsinks, less room for batteries, and less room to fit ports.

The late-2010s-era MacBook Pro and Air redesigns were probably the nadir of this thin-first design, switching to all-Thunderbolt ports and a stiff-feeling butterfly switch keyboard design that also ended up being so breakage-prone that it spawned a long-running Apple repair program and a class-action lawsuit that the company settled. The 2020 and 2021 MacBooks reversed course on both decisions, reverting to a more traditional scissor-switch keyboard and restoring larger ports like MagSafe and HDMI.

Hopefully, Apple has learned the lessons of the last decade or so and is planning not to give up features people like just so it can craft thinner hardware. The new iPad Pros are a reason for optimism—they don’t really give up anything relative to older iPad models while still improving performance and screen quality. But iPad hardware is inherently more minimalist than the Mac and is less space-constrained than an iPhone or an Apple Watch. Here’s hoping Apple has figured out how to make a thinner, lighter Mac without giving up ports or keyboard quality, or a thinner, lighter iPhone or Apple Watch without hurting battery life.


tdk-claims-insane-energy-density-in-solid-state-battery-breakthrough

TDK claims insane energy density in solid-state battery breakthrough

All charged up —

Apple supplier says new tech has 100 times the capacity of its current batteries.

TDK says its new ceramic materials for batteries will improve the performance of small consumer electronics devices such as smartwatches and wireless headphones.

Japan’s TDK is claiming a breakthrough in materials used in its small solid-state batteries, with the Apple supplier predicting significant performance increases for devices from wireless headphones to smartwatches.

The new material provides an energy density—the amount of energy that can be squeezed into a given space—of 1,000 watt-hours per liter, about 100 times greater than that of TDK’s current battery already in mass production. Since TDK introduced that battery in 2020, competitors have moved forward, developing small solid-state batteries that offer 50 Wh/l, while rechargeable coin batteries using traditional liquid electrolytes offer about 400 Wh/l, according to the group.
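Taken at face value, those figures imply the following rough comparisons. This is back-of-the-envelope arithmetic on the numbers quoted above; the ~10 Wh/l figure for TDK’s current cell is only implied by the “100 times” claim, not stated by the company:

```python
# Energy densities in watt-hours per liter (Wh/l), as reported above
new_tdk = 1000          # TDK's newly announced solid-state material
rival_solid_state = 50  # competitors' small solid-state batteries
coin_cell = 400         # conventional liquid-electrolyte rechargeable coin cells

# Implied by the "100 times greater" claim for TDK's current cell
implied_current_tdk = new_tdk / 100

print(new_tdk / rival_solid_state)  # 20.0 -> 20x the rival solid-state cells
print(new_tdk / coin_cell)          # 2.5  -> 2.5x conventional coin cells
print(implied_current_tdk)          # 10.0 Wh/l implied for the current cell
```

In other words, the claimed material would leapfrog not just TDK’s own solid-state cells but also the much denser liquid-electrolyte coin cells it aims to replace.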

“We believe that our newly developed material for solid-state batteries can make a significant contribution to the energy transformation of society. We will continue the development towards early commercialisation,” said TDK’s chief executive Noboru Saito.

The batteries set to be produced will be made of an all-ceramic material, with oxide-based solid electrolyte and lithium alloy anodes. The high capability of the battery to store electrical charge, TDK said, would allow for smaller device sizes and longer operating times, while the oxide offered a high degree of stability and thus safety. The battery technology is designed to be used in smaller-sized cells, replacing existing coin-shaped batteries found in watches and other small electronics.

The breakthrough is the latest step forward for a technology industry experts think can revolutionize energy storage, but which faces significant obstacles on the path to mass production, particularly at larger battery sizes.

Solid-state batteries are safer, lighter, and potentially cheaper, and they offer longer performance and faster charging than current batteries that rely on liquid electrolytes. Breakthroughs in consumer electronics have filtered through to electric vehicles, although the dominant battery chemistries for the two categories now differ substantially.

The ceramic material used by TDK makes larger batteries more fragile, which means the technical challenge of making batteries for cars or even smartphones will not be surmounted in the foreseeable future, according to the company.

Kevin Shang, senior research analyst at Wood Mackenzie, a data and analytics firm, said that “unfavorable mechanical properties,” as well as the difficulty and cost of mass production, are challenges for moving the application of solid-state oxide-based batteries into smartphones.

Industry experts believe the most significant use case for solid-state batteries could be in electric cars by enabling greater driving range. Japanese companies are in the vanguard of a push to commercialize the technology: Toyota is aiming for as early as 2027, Nissan the year after, and Honda by the end of the decade.

Car manufacturers are focused on developing sulfide-based electrolytes for long-range electric vehicles, an alternative to the oxide-based material that TDK has developed.

However, there is still skepticism about how quickly the much-hyped technology can be realized, particularly the larger batteries needed for electric vehicles.

Robin Zeng, founder and chief executive of CATL, the world’s biggest electric vehicle battery manufacturer, told the Financial Times in March that solid-state batteries did not work well enough, lacked durability and still had safety problems. Zeng’s CATL originated as a spin-off from Amperex Technology, or ATL, which is a subsidiary of TDK and is the world’s leading producer of lithium-ion batteries.

TDK, which was founded in 1935 and became a household name as a top cassette tape brand in the 1960s and 1970s, has lengthy experience in battery materials and technology.

It has 50 to 60 percent global market share in the small-capacity batteries that power smartphones and is targeting leadership in the medium-capacity market, which includes energy storage devices and larger electronics such as drones.

The group plans to start shipping samples of its new battery prototype to clients from next year and hopes to be able to move into mass production after that.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
