Author name: Shannon Garcia

RFK Jr’s anti-vaccine group can’t sue Meta for agreeing with CDC, judge rules

Independent presidential candidate Robert F. Kennedy Jr.

The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.

In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.

“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”

CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”

The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.

However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.’”

“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.

Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.

Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”

None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—“that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”

Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.

Equally unpersuasive was CHD’s argument that Section 230 immunity—which shields platforms from liability for third-party content—“‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”

“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.

One judge dissented over Section 230 concerns

In his dissenting opinion, however, Judge Daniel Collins defended CHD’s Section 230 claim, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from the alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.

According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.

It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”

As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. If so, Collins suggested, Section 230 could give the government an opening to target speech it disfavors through mechanisms the platforms provide.

“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.

He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”

The majority concluded, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.

“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”

Stratasys sues Bambu Lab over patents used widely by consumer 3D printers

Patent protections pushed for proprietary processes —

Heated platforms and purge towers are among Stratasys’ infringement claims.

The Bambu Lab A1, complete with heated build platform.

Bambu Lab

A patent lawsuit filed by one of 3D printing’s most established firms against a consumer-focused upstart could have a big impact on the wider 3D-printing scene.

In two complaints (1, 2, PDF) filed in the Eastern District of Texas, Marshall Division, against six entities related to Bambu Lab, Stratasys alleges that Bambu Lab infringed upon 10 patents that it owns, some through subsidiaries like MakerBot (acquired in 2013). Among the patents cited are US9421713B2, “Additive manufacturing method for printing three-dimensional parts with purge towers,” and US9592660B2, “Heated build platform and system for three-dimensional printing methods.”

Few, if any, 3D printers sold to consumers lack a heated bed, which keeps the first layers of a model from cooling too quickly during printing and shrinking or warping the model. “Purge towers” (or “prime towers” in Bambu’s parlance) enable multicolor printing by giving the filament remaining in a nozzle a place to be purged, preventing bleed-over between colors. Stratasys’ infringement claims also target some fundamental technologies around force detection and fused deposition modeling (FDM) that, like purge towers, are used by other 3D-printer makers that target entry-level and intermediate 3D-printing enthusiasts.

Bambu Lab launched onto the 3D-printing scene in 2022, quickly picking up market share in the entry-level and enthusiast space, in part due to its relatively fast multicolor printing. Its path to that market share hasn’t been entirely smooth, with a cloud-based forced-printing fiasco in the summer of 2023 and a recall of its popular A1 printer for heat issues earlier this year.

Stratasys, by contrast, has been working in 3D printing since 1988, and its products are used more often in manufacturing and commercial prototyping. Its 3D printers were part of how General Motors pivoted to making face shields and ventilators during the COVID-19 pandemic. Its acquisition of MakerBot led to layoffs two years in and eventually a spin-off merger with Ultimaker, but Stratasys retained MakerBot’s patents.

There is precedent for a larger prototyping firm suing a smaller semi-competitor over patents. 3D Systems sued Formlabs in 2012 over patents covering laser-based stereolithography; that suit was settled in 2014, with Formlabs agreeing to pay 3D Systems an 8 percent royalty on all sales. Stratasys had also previously sued another smaller-scale printing firm, Afinia, in 2013, although that case eventually failed.

Listing image by Bambu Lab

NASA is about to make its most important safety decision in nearly a generation

Boeing’s Starliner spacecraft, seen docked at the International Space Station through the window of a SpaceX Dragon spacecraft.

As soon as this week, NASA officials will make perhaps the agency’s most consequential safety decision in human spaceflight in 21 years.

NASA astronauts Butch Wilmore and Suni Williams are nearly 10 weeks into a test flight that was originally set to last a little more than one week. The two retired US Navy test pilots were the first people to fly into orbit on Boeing’s Starliner spacecraft when it launched on June 5. Now, NASA officials aren’t sure Starliner is safe enough to bring the astronauts home.

Three of the managers at the center of the pending decision, Ken Bowersox and Steve Stich from NASA and Boeing’s LeRoy Cain, either had key roles in the ill-fated final flight of Space Shuttle Columbia in 2003 or felt the consequences of the accident.

At that time, officials misjudged the risk. Seven astronauts died, and the Space Shuttle Columbia was destroyed as it reentered the atmosphere over Texas. Bowersox, Stich, and Cain weren’t the people making the call on the health of Columbia’s heat shield in 2003, but they had front-row seats to the consequences.

Bowersox was an astronaut on the International Space Station when NASA lost Columbia. He and his crewmates were waiting to hitch a ride home on the next Space Shuttle mission, which was delayed two-and-a-half years in the wake of the Columbia accident. Instead, Bowersox’s crew came back to Earth later that year on a Russian Soyuz capsule. After retiring from the astronaut corps, Bowersox worked at SpaceX and is now the head of NASA’s spaceflight operations directorate.

Stich and Cain were NASA flight directors in 2003, and they remain well-respected in human spaceflight circles. Stich is now the manager of NASA’s commercial crew program, and Cain is now a Boeing employee and the company’s Starliner mission director. For the ongoing Starliner mission, Bowersox, Stich, and Cain are in the decision-making chain.

All three joined NASA in the late 1980s, soon after the Challenger accident. They have seen NASA attempt to reshape its safety culture after both of NASA’s fatal Space Shuttle tragedies. After Challenger, NASA’s astronaut office had a more central role in safety decisions, and the agency made efforts to listen to dissent from engineers. Still, human flaws are inescapable, and NASA’s culture was unable to alleviate them during Columbia’s last flight in 2003.

NASA knew that launching a Space Shuttle in cold weather reduced the safety margin on its solid rocket boosters, which led to the Challenger accident. And shuttle managers knew foam routinely fell off the external fuel tank. Just two flights before Columbia’s STS-107 mission, in a near-miss, one of these foam fragments struck a shuttle booster but didn’t damage it.

“I have wondered if some in management roles today that were here when we lost Challenger and Columbia remember that in both of those tragedies, there were those that were not comfortable proceeding,” Milt Heflin, a retired NASA flight director who spent 47 years at the agency, wrote in an email to Ars. “Today, those memories are still around.”

“I suspect Stich and Cain are paying attention to the right stuff,” Heflin wrote.

The question facing NASA’s leadership today? Should the two astronauts return to Earth from the International Space Station in Boeing’s Starliner spacecraft, with its history of thruster failures and helium leaks, or should they come home on a SpaceX Dragon capsule?

Under normal conditions, the first option is the choice everyone at NASA would like to make. It would be least disruptive to operations at the space station and would potentially maintain a clearer future for Boeing’s Starliner program, which NASA would like to become operational for regular crew rotation flights to the station.

But some people at NASA aren’t convinced this is the right call. Engineers still don’t fully understand why five of the Starliner spacecraft’s thrusters overheated and lost power as the capsule approached the space station for docking in June. Four of these five control jets are now back in action with near-normal performance, but managers would like to be sure the same thrusters—and maybe more—won’t fail again as Starliner departs the station and heads for reentry.

One startup’s plan to fix AI’s “shoplifting” problem

I’ve been caught stealing, once when I was five —

Algorithm will identify sources used by generative AI, compensate them for use.

Bloomberg via Getty

Bill Gross made his name in the tech world in the 1990s, when he came up with a novel way for search engines to make money on advertising. Under his pricing scheme, advertisers would pay when people clicked on their ads. Now, the “pay-per-click” guy has founded a startup called ProRata, which has an audacious, possibly pie-in-the-sky business model: “AI pay-per-use.”

Gross, who is CEO of the Pasadena, California, company, doesn’t mince words about the generative AI industry. “It’s stealing,” he says. “They’re shoplifting and laundering the world’s knowledge to their benefit.”

AI companies often argue that they need vast troves of data to create cutting-edge generative tools and that scraping data from the Internet, whether it’s text from websites, video or captions from YouTube, or books pilfered from pirate libraries, is legally allowed. Gross doesn’t buy that argument. “I think it’s bullshit,” he says.

So do plenty of media executives, artists, writers, musicians, and other rights-holders who are pushing back—it’s hard to keep up with the constant flurry of copyright lawsuits filed against AI companies, alleging that the way they operate amounts to theft.

But Gross thinks ProRata offers a solution that beats legal battles. “To make it fair—that’s what I’m trying to do,” he says. “I don’t think this should be solved by lawsuits.”

His company aims to arrange revenue-sharing deals so publishers and individuals get paid when AI companies use their work. Gross explains it like this: “We can take the output of generative AI, whether it’s text or an image or music or a movie, and break it down into the components, to figure out where they came from, and then give a percentage attribution to each copyright holder, and then pay them accordingly.” ProRata has filed patent applications for the algorithms it created to assign attribution and make the appropriate payments.
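
Gross’s description implies straightforward pro-rata arithmetic once attribution percentages exist. The following minimal Python sketch shows only the payout step; it is not ProRata’s actual, patent-pending algorithm, and the attribution shares are assumed inputs (computing them from generative output is the hard part):

# A sketch of the pro-rata payout step Gross describes; the attribution
# shares are assumed inputs, not the output of ProRata's real algorithm.
def pro_rata_payments(revenue, attribution):
    """Split revenue among rights-holders in proportion to attribution."""
    total = sum(attribution.values())
    return {holder: revenue * share / total
            for holder, share in attribution.items()}

# Hypothetical example: $1,000 of revenue split across two attributed sources
print(pro_rata_payments(1000.0, {"Financial Times": 0.6, "The Atlantic": 0.4}))
# {'Financial Times': 600.0, 'The Atlantic': 400.0}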

This week, the company, which has raised $25 million, launched with a number of big-name partners, including Universal Music Group, the Financial Times, The Atlantic, and media company Axel Springer. In addition, it has made deals with authors with large followings, including Tony Robbins, Neal Postman, and Scott Galloway. (It has also partnered with former White House Communications Director Anthony Scaramucci.)

Even journalism professor Jeff Jarvis, who believes scraping the web for AI training is fair use, has signed on. He tells WIRED that it’s smart for people in the news industry to band together to get AI companies access to “credible and current information” to include in their output. “I hope that ProRata might open discussion for what could turn into APIs [application programming interfaces] for various content,” he says.

Following the company’s initial announcement, Gross says he had a deluge of messages from other companies asking to sign up, including a text from Time CEO Jessica Sibley. ProRata secured a deal with Time, the publisher confirmed to WIRED. He plans to pursue agreements with high-profile YouTubers and other individual online stars.

The key word here is “plans.” The company is still in its very early days, and Gross is talking a big game. As a proof of concept, ProRata is launching its own subscription chatbot-style search engine in October. Unlike other AI search products, ProRata’s search tool will exclusively use licensed data. There’s nothing scraped using a web crawler. “Nothing from Reddit,” he says.

Ed Newton-Rex, a former Stability AI executive who now runs the ethical data licensing nonprofit Fairly Trained, is heartened by ProRata’s debut. “It’s great to see a generative AI company licensing training data before releasing their model, in contrast to many other companies’ approach,” he says. “The deals they have in place further demonstrate media companies’ openness to working with good actors.”

Gross wants the search engine to demonstrate that quality of data is more important than quantity and believes that limiting the model to trustworthy information sources will curb hallucinations. “I’m claiming that 70 million good documents is actually superior to 70 billion bad documents,” he says. “It’s going to lead to better answers.”

What’s more, Gross thinks he can get enough people to sign up for this all-licensed-data AI search engine to make enough money to pay its data providers their allotted share. “Every month the partners will get a statement from us saying, ‘Here’s what people search for, here’s how your content was used, and here’s your pro rata check,’” he says.

Other startups are already jostling for prominence in this new world of training-data licensing, like the marketplaces TollBit and Human Native AI. A nonprofit called the Dataset Providers Alliance was formed earlier this summer to push for more standards in licensing; founding members include services like the Global Copyright Exchange and Datarade.

ProRata’s business model hinges in part on its plan to license its attribution and payment technologies to other companies, including major AI players. Some of those companies have begun striking their own deals with publishers. (The Atlantic and Axel Springer, for instance, have agreements with OpenAI.) Gross hopes that AI companies will find licensing ProRata’s models more affordable than creating them in-house.

“I’ll license the system to anyone who wants to use it,” Gross says. “I want to make it so cheap that it’s like a Visa or MasterCard fee.”

This story originally appeared on wired.com.

512-bit RSA key in home energy system gives control of “virtual power plant”

When Ryan Castellucci recently acquired solar panels and a battery storage system for their home just outside of London, they were drawn to the ability to use an open source dashboard to monitor and control the flow of electricity being generated. Instead, they gained much, much more—some 200 megawatts of programmable capacity to charge or discharge to the grid at will. That’s enough power for roughly 40,000 homes.

Castellucci, whose pronouns are they/them, acquired this remarkable control after gaining access to the administrative account for GivEnergy, the UK-based energy management provider that supplied the systems. In addition to control over an estimated 60,000 installed systems, the admin account—which amounts to root control of the company’s cloud-connected products—also made it possible for them to enumerate the names, email addresses, usernames, phone numbers, and addresses of all other GivEnergy customers (something the researcher didn’t actually do).

“My plan is to set up Home Assistant and integrate it with that, but in the meantime, I decided to let it talk to the cloud,” Castellucci wrote Thursday, referring to the recently installed gear. “I set up some scheduled charging, then started experimenting with the API. The next evening, I had control over a virtual power plant comprised of tens of thousands of grid connected batteries.”

Still broken after all these years

The cause of the authentication bypass Castellucci discovered was a programming interface protected by an RSA cryptographic key of just 512 bits. The key signs authentication tokens and is the rough equivalent of a master key. That small key size allowed Castellucci to factor the private key underpinning the entire API. The factoring required about $70 in cloud computing costs and less than 24 hours. GivEnergy introduced a fix within 24 hours of Castellucci privately disclosing the weakness.

The first publicly known instance of 512-bit RSA being factored came in 1999, when an international team of more than a dozen researchers pulled it off. The feat took a supercomputer and hundreds of other computers seven months to carry out. By 2009, hobbyists needed only about three weeks to factor 13 512-bit keys protecting firmware in Texas Instruments calculators from being copied. In 2015, researchers demonstrated factoring as a service, a method that used Amazon cloud computing, cost $75, and took about four hours. As processing power has increased, the resources required to factor 512-bit keys have grown ever smaller.

It’s tempting to fault GivEnergy engineers for pinning the security of its infrastructure on a key that’s trivial to break. Castellucci, however, said the responsibility is better assigned to the makers of code libraries developers rely on to implement complex cryptographic processes.

“Expecting developers to know that 512 bit RSA is insecure clearly doesn’t work,” the security researcher wrote. “They’re not cryptographers. This is not their job. The failure wasn’t that someone used 512 bit RSA. It was that a library they were relying on let them.”

Castellucci noted that OpenSSL, the most widely used cryptographic code library, still offers the option of using 512-bit keys. So does the Go crypto library. Coincidentally, the Python cryptography library removed the option only a few weeks ago (the commit for the change was made in January).
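
Until libraries enforce such minimums by default, applications can add their own floor. Here is a minimal sketch using Python’s third-party cryptography package; the 2048-bit minimum reflects long-standing NIST guidance and is not a description of GivEnergy’s actual fix:

# A sketch: refuse to trust an RSA public key below a minimum size.
# The 2048-bit floor is an assumption based on common guidance.
from cryptography.hazmat.primitives import serialization

MIN_RSA_BITS = 2048

def load_trusted_public_key(pem_bytes):
    key = serialization.load_pem_public_key(pem_bytes)
    if key.key_size < MIN_RSA_BITS:
        raise ValueError(
            f"refusing {key.key_size}-bit key; "
            f"need at least {MIN_RSA_BITS} bits"
        )
    return key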

In an email, a GivEnergy representative reinforced Castellucci’s assessment, writing:

In this case, the problematic encryption approach was picked up via a 3rd party library many years ago, when we were a tiny startup company with only 2, fairly junior software developers & limited experience. Their assumption at the time was that because this encryption was available within the library, it was safe to use. This approach was passed through the intervening years and this part of the codebase was not changed significantly since implementation (so hadn’t passed through the review of the more experienced team we now have in place).

AT&T rebuked over misleading ad for nonexistent satellite phone calling

Remember 5GE? —

AT&T reluctantly adds disclaimer: “Satellite calling is not currently available.”

Screenshot from AT&T commercial featuring Ben Stiller making a satellite call to Jordan Spieth.

AT&T has been told to stop running ads that claim the carrier is already offering cellular coverage from space.

AT&T intends to offer Supplemental Coverage from Space (SCS) and has a deal with AST SpaceMobile, a Starlink competitor that plans to offer smartphone service from low-Earth-orbit satellites. But AST SpaceMobile’s first batch of five satellites isn’t scheduled to launch until September.

T-Mobile, annoyed by an AT&T ad indicating that the carrier’s satellite-to-cellular service was already available, filed a challenge with the advertising industry’s self-regulatory system run by BBB National Programs. The BBB National Advertising Division (NAD) ruled against AT&T last month, and the carrier appealed to the National Advertising Review Board (NARB), which has now also ruled against AT&T.

“It was not disputed that AT&T does not currently offer SCS coverage to its cellular customers… Therefore, the NARB panel recommended that AT&T discontinue the claim that SCS service is presently available to consumers or modify the claim to clearly and conspicuously communicate that SCS is not available at this time,” the NARB said in an announcement yesterday.

AT&T, which is also famous for renaming its 4G service “5GE,” reluctantly agreed to comply with the recommendation and released a new version of the satellite-calling commercial with more specific disclaimers. “AT&T supports NARB’s self-regulatory process and will comply with NARB’s decision… However, we respectfully disagree with NARB’s conclusion recommending that the commercial be discontinued or modified,” AT&T said in its statement on the decision.

The challenged advertisement, titled “Epic Bad Golf Day,” features actor Ben Stiller looking for a golf ball in various remote locations.

“The commercial near the end shows Mr. Stiller having finally caught up with his golf ball in a desert wasteland… He then places a cellular phone call to champion golfer Jordan Spieth, shown standing on a golf green, presumably so that Mr. Spieth can offer golfing advice,” the NARB ruling said. “An image in the commercial shows the call from Mr. Stiller to Mr. Spieth connecting through a satellite relay. Another visual shows Mr. Stiller’s phone stating that it is ‘Making satellite connection.’”

AT&T: Commercial shouldn’t be taken literally

AT&T’s appeal “points to a number of fanciful/ludicrous features of the commercial in Mr. Stiller’s golf ball odyssey to argue that reasonable consumers will not receive a message that satellite service is currently available, but will understand that AT&T is burnishing its brand by pointing to technological features currently under development,” the panel wrote.

T-Mobile countered “that the use of humor does not shield an advertiser from its obligation to ensure that claims are truthful and non-misleading,” and the NARB agreed.

“The panel views the humorous/fanciful nature of Mr. Stiller’s antics as a means of attracting the attention of viewers, but also as a means of emphasizing the utility of SCS technology—allowing for calls to be placed from remote locations not currently accessible to mobile service,” the industry self-regulatory group said. “The humor associated with Mr. Stiller’s golf misadventures does not cancel out the consumer communication that SCS service is currently available. In addition, the panel does not accept AT&T’s argument that the panel’s decision (or NAD’s decision being appealed) will interfere with the use of humor in advertising.”

The ad originally included small text that described the depicted satellite call as a “demonstration of evolving technology.” The text was changed this week to say that “satellite calling is not currently available.”

“Even assuming consumers will read [the disclaimer], one reasonable interpretation of ‘evolving technology’ is that the technology is currently available, albeit expected to improve in the future,” the NARB said.

The original version also had text that said, “the future of help is an AT&T satellite call away.” The NARB concluded that this “statement can be interpreted reasonably as stating that ‘future’ technology has now arrived. The next visual reinforces that message, as it shows Mr. Stiller communicating on a cell phone call while in a remote location, and the accompanying visual states ‘connecting changes everything,’ a message addressing the present, not the future.”

In the updated version of the ad, AT&T changed the text to say that “the future of help will be an AT&T satellite call away.”

Nashville man arrested for running “laptop farm” to get jobs for North Koreans

HOW TO LAND A SIX-FIGURE SALARY —

Laptop farm gave the impression North Korean nationals were working from the US.

Federal authorities have arrested a Nashville man on charges that he hosted laptops at his residences in a scheme to deceive US companies into hiring foreign remote IT workers who funneled hundreds of thousands of dollars in income to fund North Korea’s weapons program.

The scheme, federal prosecutors said, worked by getting US companies to unwittingly hire North Korean nationals, who used the stolen identity of a Georgia man to appear to be a US citizen. Under sanctions issued by the federal government, US employers are strictly forbidden from hiring citizens of North Korea. Once the North Korean nationals were hired, the employers sent company-issued laptops to Matthew Isaac Knoot, 38, of Nashville, Tennessee, the prosecutors said in court papers filed in the US District Court for the Middle District of Tennessee. The court documents also said a foreign national with the alias Yang Di was involved in the conspiracy.

The prosecutors wrote:

As part of the conspiracy, Knoot received and hosted laptop computers issued by US companies to Andrew M. at Knoot’s Nashville, Tennessee residences for the purposes of deceiving the companies into believing that Andrew M. was located in the United States. Following receipt of the laptops and without authorization, Knoot logged on to the laptops, downloaded and installed remote desktop applications, and accessed without authorization the victim companies’ networks. The remote desktop applications enabled DI to work from locations outside the United states, in particular, China, while appearing to the victim companies that Andre M. was working from Knoot’s residences. In exchange, Knoot charged Di monthly fees for his services, including flat rates for each hosted laptop and a percentage of Di’s salary for IT work, enriching himself off the scheme.

The arrest comes two weeks after security-training company KnowBe4 said it unknowingly hired a North Korean national who used a fake identity to appear eligible for a software engineer position on an internal IT AI team. KnowBe4’s security team soon became suspicious of the new hire after detecting “anomalous activity,” including manipulating session history files, transferring potentially harmful files, and executing unauthorized software.

The North Korean national was hired even after KnowBe4 ran background checks, verified references, and conducted four video interviews during the application process. The fake applicant was able to stymie those checks by using a stolen identity and a photo that was altered with AI tools to create a fake profile picture and mimic the face during video conference calls.

In May federal prosecutors charged an Arizona woman for allegedly raising $6.8 million in a similar scheme to fund the weapons program. The defendant in that case, Christina Marie Chapman, 49, of Litchfield Park, Arizona, and co-conspirators compromised the identities of more than 60 people living in the US and used their personal information to get North Koreans IT jobs across more than 300 US companies.

The FBI and the Departments of State and Treasury issued a May 2022 advisory alerting the international community, the private sector, and the public to an ongoing campaign to land North Korean nationals IT jobs in violation of many countries’ laws. US and South Korean officials issued updated guidance in October 2023 and again in May 2024. The advisories include signs that may indicate North Korean IT worker fraud and the use of US-based laptop farms.

The North Korean IT workers using Knoot’s laptop farm generated revenue of more than $250,000 each between July 2022 and August 2023. Much of the funds were then funneled to North Korea’s weapons program, which includes weapons of mass destruction, prosecutors said.

Knoot faces charges including wire fraud, intentional damage to protected computers, aggravated identity theft, and conspiracy to cause the unlawful employment of aliens. If found guilty, he faces a maximum of 20 years in prison.

Nova Launcher, savior of cruft-filled Android phones, is on life support

A setup that’s a bit too minimalist —

Nova Launcher feels the “massive” layoffs at the firm that acquired it in 2022.

Lineup of four Android devices showing Nova Launcher aspects, including the logo, icon customization, and app drawer

Nova Launcher

Back in July 2022, when mobile app metrics firm Branch acquired the popular and well-regarded Nova Launcher for Android, the app’s site put up one of those self-directed FAQ posts about it. Under the question heading “What does Branch want with Nova?,” Nova founder and creator Kevin Barry started his response with, “Not to mess it up, don’t worry!”

Branch (formerly/sometimes Branch Metrics) is a firm concerned with helping businesses track the links that lead into their apps, whether from SMS, email, marketing, or inside other apps. Nova, with its Sesame Search tool that helped users find and access deeper links—like heading straight to calling a car, rather than just opening a rideshare app—seemed like a reasonable fit.

Barry wrote that he had received a number of acquisition offers over the years, but he didn’t want to be swallowed by a giant corporation, an OEM, or a volatile startup. “Branch is different,” he wrote then, because they wanted to add staff to Nova, keep it available to the public, and mostly leave it alone.

Two years later, Branch has left Nova Launcher a bit too alone. As documented on Nova’s official X (formerly Twitter) account and in transcripts from its Discord, as of Thursday Nova had “gone from a team of around a dozen people” to just Barry, the founder, working alone. The Nova cuts were part of “a massive layoff” of purportedly more than 100 people across all of Branch, according to now-former Nova workers.

Barry wrote that he would keep working on Nova, “However I have less resources.” He would need to “cut scope” on an upcoming Nova release, he wrote. Other employees noted that customer support, marketing, and even correspondence would likely be strained or disappear.

Ars has reached out to Branch for comment and will update this post with any response.

Some of the icon customization options, shown here on a tablet, inside Nova Launcher.

Nova Launcher

Custom, clean Android home screens

It’s hard to tell if Nova would have been better off without ever having been inside Branch, or if it might have inevitably run into the vexing question of how to get people to continually pay for an Android utility. But for Nova to be endangered, or at least heavily constrained, is a sad state for a very useful tool.

Installing a launcher on Android allows you to ignore whatever home screen, app tray, and search bars your phone came with and design your own. Nova Launcher allowed people to change how many icons showed up on their screen, and how big they were. It allowed for hiding default apps that could not be uninstalled. It was, and still is, one of the best ways to rid your phone of bad skins, cruddy OEM software, and stuff you never asked for.

In more than a dozen Ars reviews of Android devices touting organization concepts that people might not like—including Google’s own Pixels—Nova Launcher was recommended (minus one weird Razer/Nextbit phone that came with it by default). In his Pixel 7 Pro review, Ron Amadeo spells out one such way Nova saved the day:

The worst part of the Pixel software package is the home screen launcher, the primary interface of the phone, which is not nearly configurable enough. All I’m asking for is two things. First, I’d like many more icon grid size adjustments—the default 4×4 grid was fine when we were using 3.2-inch, 480p displays, but I now run a 7×5 grid in Nova launcher, and the Pixel launcher looks ridiculous. Second, I want to remove Google’s useless “At a Glance” widget, which takes up an incredible four icon slots to show the date and current outdoor temperature.

For the more than a decade that I used (and sometimes reviewed) Android phones, I maintained an exported Nova configuration file that I brought from phone to phone. I could experiment with theming, icon packs, and custom widgets (complete with deep links into app actions), but what that export really did was allow me to feel comfortable tinkering and messing with layout ideas. I could always go back to my rock-solid, no-nonsense layout of apps, spaced just how I liked them.

While Nova is not dead (despite my own and others’ eulogistic tones), it’s certainly not positioned to launch bold new features or plot new futures. Here’s hoping Barry can make a go of Nova Launcher for as long as it’s viable for him.

ChatGPT unexpectedly began speaking in a user’s cloned voice during testing

An illustration of a computer synthesizer spewing out letters.

On Thursday, OpenAI released the “system card” for ChatGPT’s new GPT-4o AI model, which details model limitations and safety testing procedures. Among other examples, the document reveals that in rare occurrences during testing, the model’s Advanced Voice Mode unintentionally imitated users’ voices without permission. Currently, OpenAI has safeguards in place that prevent this from happening, but the instance reflects the growing complexity of safely designing an AI chatbot that could potentially imitate any voice from a small clip.

Advanced Voice Mode is a feature of ChatGPT that allows users to have spoken conversations with the AI assistant.

In a section of the GPT-4o system card titled “Unauthorized voice generation,” OpenAI details an episode where a noisy input somehow prompted the model to suddenly imitate the user’s voice. “Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode,” OpenAI writes. “During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.”

In this example of unintentional voice generation provided by OpenAI, the AI model blurts out “No!” and continues the sentence in a voice that sounds similar to the “red teamer” heard at the beginning of the clip. (A red teamer is a person hired by a company to do adversarial testing.)

It would certainly be creepy to be talking to a machine and then have it unexpectedly begin talking to you in your own voice. Ordinarily, OpenAI has safeguards to prevent this, which is why the company says this occurrence was rare even before it developed ways to prevent it completely. But the example prompted BuzzFeed data scientist Max Woolf to tweet, “OpenAI just leaked the plot of Black Mirror’s next season.”

Audio prompt injections

How could voice imitation happen with OpenAI’s new model? The primary clue lies elsewhere in the GPT-4o system card. To create voices, GPT-4o can apparently synthesize almost any type of sound found in its training data, including sound effects and music (though OpenAI discourages that behavior with special instructions).

As noted in the system card, the model can fundamentally imitate any voice based on a short audio clip. OpenAI guides this capability safely by providing an authorized voice sample (of a hired voice actor) that the model is instructed to imitate. It provides the sample in the AI model’s system prompt (what OpenAI calls the “system message”) at the beginning of a conversation. “We supervise ideal completions using the voice sample in the system message as the base voice,” writes OpenAI.

In text-only LLMs, the system message is a hidden set of text instructions that guides behavior of the chatbot that gets added to the conversation history silently just before the chat session begins. Successive interactions are appended to the same chat history, and the entire context (often called a “context window”) is fed back into the AI model each time the user provides a new input.
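
As a rough illustration of that mechanic, here is a minimal Python sketch of a chat loop; the message format mimics common chat APIs, and call_model() is a stand-in for a real API call rather than OpenAI’s actual interface:

# A sketch of how a context window accumulates across turns.
messages = [
    {"role": "system",
     "content": "You are a helpful chatbot. You do not talk about violent acts."}
]

def call_model(history):
    # Stand-in: a real implementation would send `history` to a model API.
    return f"(model reply to: {history[-1]['content']})"

def send(user_text):
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)  # the entire history is re-sent each turn
    messages.append({"role": "assistant", "content": reply})
    return reply

send("What is a context window?")
send("Summarize what we discussed.")  # the model sees both earlier turns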

(It’s probably time to update the diagram below, created in early 2023, but it shows how the context window works in an AI chat. Just imagine that the first prompt is a system message that says things like “You are a helpful chatbot. You do not talk about violent acts, etc.”)

A diagram showing how GPT conversational language model prompting works.

Benj Edwards / Ars Technica

Since GPT-4o is multimodal and can process tokenized audio, OpenAI can also use audio inputs as part of the model’s system prompt, and that’s what it does when OpenAI provides an authorized voice sample for the model to imitate. The company also uses another system to detect if the model is generating unauthorized audio. “We only allow the model to use certain pre-selected voices,” writes OpenAI, “and use an output classifier to detect if the model deviates from that.”
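
OpenAI hasn’t published the classifier’s design, but the gating logic it implies is simple. In this hypothetical Python sketch, classify_voice() and the similarity threshold are invented for illustration:

# A hypothetical output-classifier gate; classify_voice() and the
# threshold are invented, not OpenAI's published design.
SIMILARITY_THRESHOLD = 0.9

def classify_voice(audio_chunk):
    # Stand-in: a real classifier would score how closely the generated
    # audio matches the authorized voice sample (0.0 to 1.0).
    return 1.0

def gate_output(audio_chunk):
    if classify_voice(audio_chunk) < SIMILARITY_THRESHOLD:
        raise RuntimeError("Output deviates from the authorized voice; halting.")
    return audio_chunk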

It’s not worth paying to be removed from people-finder sites, study says

Better than nothing but not by enough —

The best removal rate was less than 70%, and that didn’t beat manual opt-outs.

For a true representation of the people-search industry, a couple of these folks should have lanyards that connect them by the pockets.

Getty Images

If you’ve searched your name online in the last few years, you know what’s out there, and it’s bad. Alternatively, you’ve seen the lowest-common-denominator ads begging you to search out people from your past to see what crimes are on their record. People-search sites are a gross loophole in the public records system, and it doesn’t feel like there’s much you can do about it.

Not that some firms haven’t promised to try. Do they work? Not really, Consumer Reports (CR) suggests in a recent study.

“[O]ur study shows that many of these services fall short of providing the kind of help and performance you’d expect, especially at the price levels some of them are charging,” said Yael Grauer, program manager for CR, in a statement.

Consumer Reports’ study asked 32 volunteers for permission to try to delete their personal data from 13 people-search sites, using seven services over four months. The services, including DeleteMe, Reputation Defender from Norton, and Confidently, were also compared to manual opt-outs, i.e., following the tucked-away links to pull down that data on each people-search site. CR took volunteers from California, where the California Consumer Privacy Act should theoretically make it mandatory for brokers to respond to opt-out requests, and from New York, which has no such law, to compare results.

Table from Consumer Reports’ study of people-search removal services, showing effective removal rates over time for each service.

Consumer Reports found a total of 332 profiles containing identifying information on those sites; using all the services, only 117 profiles, or 35 percent, were removed within four months. The services varied in efficacy, with EasyOptOuts notably performing second-best at a 65 percent removal rate after four months. But if your goal is to entirely remove others’ ability to find out about you, no service Consumer Reports tested truly gets you there.

Manual opt-outs were the most effective removal method, with 70 percent removed within one week, which is both a higher elimination rate and a quicker turnaround than any of the automated services.

The study noted close ties between the people-search sites and the services that purport to clean them. Removing one volunteer’s data from ClustrMaps resulted in a page with a suggested “Next step”: signing up for privacy protection service OneRep. Firefox-maker Mozilla dropped OneRep as a service provider for its Mozilla Monitor Plus privacy bundle after reporting by Brian Krebs found that OneRep’s CEO had notable ties to the people-search industry.

In releasing this study, CR also advocates for laws at the federal and state level, like California’s Delete Act, that would make people-search removal far easier than manually scouring the web or paying for incomplete monitoring.

CR’s study cites CheckPeople, PublicDataUSA, and Intelius as the least responsive businesses in one of the least responsive industries, while noting that PeopleFinders, ClustrMaps, and ThatsThem deserve some very tiny, nearly inaudible recognition for complying with opt-out requests (our words, not theirs).

All the possible ways to destroy Google’s monopoly in search

Aurich Lawson

After US District Judge Amit Mehta ruled that Google has a monopoly in two markets—general search services and general text advertising—everybody is wondering how Google might be forced to change its search business.

Specifically, the judge ruled that Google’s exclusive deals with browser and device developers secured Google’s monopoly. These so-called default agreements funneled the majority of online searches to Google search engine result pages (SERPs), where results could be found among text ads that have long generated the bulk of Google’s revenue.

At trial, Mehta’s ruling noted, it was estimated that if Google lost its most important default deal with Apple, Google “would lose around 65 percent of its revenue, even assuming that it could retain some users without the Safari default.”

Experts told Ars that disrupting these default deals is the most obvious remedy that the US Department of Justice will seek to restore competition in online search. Other remedies that may be sought range from least painful for Google (mandating choice screens in browsers and devices) to most painful (requiring Google to divest from either Chrome or Android, where it was found to be self-preferencing).

But the remedies phase of litigation may have to wait until after Google’s appeal, which experts said could take years to litigate before any remedies are ever proposed in court. Whether Google could be successful in appealing the ruling is currently being debated, with anti-monopoly advocates backing Mehta’s ruling as “rock solid” and critics suggesting that the ruling’s fresh takes on antitrust law are open to attack.

Google declined Ars’ request to comment on appropriate remedies or its plan to appeal.

Previously, Google’s president of global affairs, Kent Walker, confirmed in a statement that the tech giant would be appealing the ruling because the court found that “Google is ‘the industry’s highest quality search engine, which has earned Google the trust of hundreds of millions of daily users,’ that Google ‘has long been the best search engine, particularly on mobile devices,’ ‘has continued to innovate in search,’ and that ‘Apple and Mozilla occasionally assess Google’s search quality relative to its rivals and find Google’s to be superior.’”

“Given this, and that people are increasingly looking for information in more and more ways, we plan to appeal,” Walker said. “As this process continues, we will remain focused on making products that people find helpful and easy to use.”

But Mehta found that Google was wielding its outsize influence in the search industry to block rivals from competing by locking browsers and devices into agreements ensuring that all searches went to Google SERPs. None of the pro-competitive benefits that Google claimed justified the exclusive deals persuaded Mehta, who ruled that “importantly,” Google “exercised its monopoly power by charging supra-competitive prices for general search text ads”—and thus earned “monopoly profits.”

While experts think the appeal process will delay litigation on remedies, Google seems to think that Mehta may rule on potential remedies before Google can proceed with its appeal. Walker told Google employees that a ruling on remedies may arrive in the next few months, The Wall Street Journal reported. Ars will continue monitoring for updates on this timeline.

As the DOJ’s case against Google’s search business has dragged on, reports have long suggested that a loss for Google could change the way that nearly the entire world searches the Internet.

Adam Epstein—the president and co-CEO of adMarketplace, which bills itself as “the largest consumer search technology company outside of Google and Bing”—told Ars that innovations in search could result in a broader landscape of more dynamic search experiences that draw from sources beyond Google and allow searchers to skip Google’s SERPs entirely. If that happens, the coming years could make Google’s ubiquitous search experience today a distant memory.

“By the end of this decade, going to a search engine results page will seem quaint,” Epstein predicted. “The court’s decision sets the stage for a remedy that will dramatically improve the search experience for everyone connected to the web. The era of innovation in search is just around the corner.”

The DOJ has not meaningfully discussed potential remedies it will seek, but Jonathan Kanter, assistant attorney general of the Justice Department’s antitrust division, celebrated the ruling.

“This landmark decision holds Google accountable,” Kanter said. “It paves the path for innovation for generations to come and protects access to information for all Americans.”

macOS 15 Sequoia makes you jump through more hoops to disable Gatekeeper app checks

gate-kept —

But nothing is changing about the kinds of software you can run on your Mac.

The Mac’s Gatekeeper feature has been pushing developers to digitally sign their apps since it was introduced in 2012.

Apple/Andrew Cunningham

It has always been easier to run third-party software on a Mac than on an iPhone or iPad. Despite the introduction of the Mac App Store a couple of years after the iPhone’s App Store opened, it has always been possible to download and run third-party scripts and software on your Mac from anywhere. It’s one reason why the iPhone and iPad are subject to new European Union regulations about software sideloading and third-party app stores, while the Mac isn’t.

That’s not changing in macOS 15 Sequoia, the new version of macOS that’s due to be released to the public this fall. But running apps that aren’t signed correctly or notarized is about to get more annoying, according to a note added to Apple’s developer site yesterday.

“In macOS Sequoia, users will no longer be able to Control-click to override Gatekeeper when opening software that isn’t signed correctly or notarized,” the brief note reads. “They’ll need to visit System Settings > Privacy & Security to review security information for software before allowing it to run.”

Users (including me) had noticed this behavior in early macOS Sequoia betas, but this note confirms that the change was made on purpose and that the software is working as intended.

What’s changing and what isn’t

To understand what’s changing, it’s helpful to understand how macOS handles third-party apps. Though software can be downloaded and run in macOS from everywhere, Apple encourages developers to digitally sign their software and send it to Apple for notarization, which Apple describes as “an automated system that scans your software for malicious content, checks for code-signing issues, and returns the results to you quickly.” Notably, it is not the same as the app review process in Apple’s App Stores, where humans check submitted apps and can refuse to distribute them if they run afoul of Apple’s rules.

Notarization does come with benefits: users can be sure that apps haven’t been tampered with and can run them with minimal hassle from Gatekeeper, macOS’ app-screening security feature. But it creates an extra step for developers and requires the use of a $100-a-year paid Apple Developer account, something that may not be worth the cost for hobby projects or open source projects that don’t generate much (or any) income for their contributors.

Current macOS versions will refuse to run unsigned, non-notarized software at first, but it has always been possible to right-click or Control-click the app or script you want to run and then click Open, which exposes an “open anyway” option in a dialog box that lets you launch the software. Once you’ve made an exception for an app, you can run it like you would any other app unless the software is updated or changed in some way.
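
If you want to see how Gatekeeper will treat a particular app before launching it, macOS ships a command-line assessment tool called spctl. This small Python wrapper is a sketch; the app path is a placeholder:

# A sketch: ask Gatekeeper for its verdict on an app bundle via spctl,
# which ships with macOS. The example path is hypothetical.
import subprocess

def gatekeeper_assess(app_path):
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", app_path],
        capture_output=True, text=True,
    )
    # The verdict (e.g., "accepted" or "rejected") may be printed to
    # stdout or stderr depending on the macOS version.
    return (result.stdout + result.stderr).strip()

print(gatekeeper_assess("/Applications/Example.app"))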

The section of the Settings app where you’ll need to go in macOS Sequoia to allow unsigned apps to run.

Andrew Cunningham

Which gets us to what Sequoia changes. The right-click/control-click option for easily opening unsigned apps is no longer available. Users who want to open unsigned software will now need to go the long way around to do it: first, try to launch the app and dismiss the dialog box telling you that it can’t be opened. Then, open Settings, go to the Privacy & Security screen, scroll all the way to the bottom to get to the Security section, and click the Open Anyway button that appears for the last unsigned app you tried to run.

This has always been an option for skirting around Gatekeeper, going all the way back to the days when Settings was still System Preferences (and when Apple would let you disable Gatekeeper’s checks entirely, something it removed in 2016). But it takes so much more time that I never actually did it that way once I discovered the right-click trick. Now, doing it the long way is mandatory.

I don’t want to oversell how disruptive this is—generally once you allow an app to run the first time, you don’t have to think about it again unless the app is updated or otherwise modified or tampered with. Apple isn’t allowing or disallowing any new behavior in macOS. Popular apps from major developers do tend to be notarized, rendering this change irrelevant. And if this change pushes more developers to sign and notarize their apps, that is arguably a win for user security and convenience.

But for most people most of the time, it’s just going to make a minor annoyance into a medium-size annoyance. And among the conspiratorially minded, it’s going to reignite 12-year-old anxieties about Apple locking macOS down to the same degree that it already locks down iOS and iPadOS.

The macOS 15 Sequoia update is currently available to developers and the general public as a beta if you’ve signed up for either of Apple’s beta programs. An early iteration of the 15.1 update with some Apple Intelligence generative AI features enabled is also available to developers with Apple Silicon Macs.
