Author name: DJ Henderson

With a landmark launch, the Pentagon is finally free of Russian rocket engines

Liftoff of ULA’s Atlas V rocket on the US Space Force’s USSF-51 mission.

United Launch Alliance delivered a classified US military payload to orbit Tuesday for the last time with an Atlas V rocket, ending the Pentagon’s use of Russian rocket engines as national security missions transition to all-American launchers.

The Atlas V rocket lifted off from Cape Canaveral Space Force Station in Florida at 6:45 am EDT (10:45 UTC) Tuesday, propelled by a Russian-made RD-180 engine and five strap-on solid-fueled boosters in its most powerful configuration. This was the 101st launch of an Atlas V rocket since its debut in 2002, and the 58th and final Atlas V mission with a US national security payload since 2007.

The US Space Force’s Space Systems Command confirmed a successful conclusion to the mission, code-named USSF-51, on Tuesday afternoon. The rocket’s Centaur upper stage released the top secret USSF-51 payload about seven hours after liftoff, likely in a high-altitude geostationary orbit over the equator. The military did not publicize the exact specifications of the rocket’s target orbit.

“What a fantastic launch and a fitting conclusion for our last national security space Atlas V (launch),” said Walt Lauderdale, USSF-51 mission director at Space Systems Command, in a post-launch press release. “When we look back at how well Atlas V met our needs since our first launch in 2007, it illustrates the hard work and dedication from our nation’s industrial base. Together, we made it happen, and because of teams like this, we have the most successful and thriving launch industry in the world, bar none.”

RD-180’s long goodbye

The launch Tuesday morning was the end of an era born in the 1990s, when US government policy allowed Lockheed Martin, the original developer of the Atlas V, to use Russian rocket engines on the rocket’s first stage. There was a widespread sentiment in the first decade after the fall of the Soviet Union that the United States and other Western nations should partner with Russia to keep the country’s aerospace workers employed and prevent “rogue states” like Iran or North Korea from hiring them.

At the time, the Pentagon was procuring new rockets to replace legacy versions of the Atlas, Delta, and Titan rocket families, which had been in service since the late 1950s or early 1960s.

A cluster of solid rocket boosters surrounds the RD-180 main engine as the Atlas V launcher climbs away from Cape Canaveral Space Force Station to begin the USSF-51 mission.

Ultimately, the Air Force chose Lockheed Martin’s Atlas V and Boeing’s Delta IV rocket for development in 1998. The Atlas V, with its Russian main engine, was somewhat less expensive than the Delta IV and the more successful of the two designs. After Tuesday’s launch, 15 more Atlas V rockets are booked to fly payloads for commercial customers and NASA, mainly for Amazon’s Kuiper network and Boeing’s Starliner crew spacecraft. The 45th and final Delta IV launch occurred in April.

Boeing and Lockheed Martin merged their rocket divisions in 2006 to form a 50-50 joint venture named United Launch Alliance, which became the sole contractor certified to carry large US military satellites to orbit until SpaceX started launching national security missions in 2018.

SpaceX filed a lawsuit in 2014 to protest the Air Force’s decision to award ULA a multibillion-dollar sole-source contract for 36 Atlas V and Delta IV rocket booster cores. The litigation started soon after Russia’s military occupation and annexation of Crimea, which prompted US government sanctions on prominent Russian government officials, including Dmitry Rogozin, then Russia’s deputy prime minister and later the head of Russia’s space agency.

Rogozin, known for his bellicose but usually toothless rhetoric, threatened to halt exports of RD-180 engines for US military missions on the Atlas V. That didn’t happen until Russia finally stopped engine exports to the United States in 2022, following its full-scale invasion of Ukraine. At that point, ULA already had all the engines it needed to fly out all of its remaining Atlas V rockets. This export ban had a larger effect on Northrop Grumman’s Antares rocket, which also used Russian engines, forcing the development of a brand new first stage booster with US engines.

The SpaceX lawsuit, Russia’s initial military incursions into Ukraine in 2014, and the resulting sanctions marked the beginning of the end for the Atlas V rocket and ULA’s use of the Russian RD-180 engine. The dual-nozzle RD-180, made by a Russian company named NPO Energomash, consumes kerosene and liquid oxygen propellants and generates 860,000 pounds of thrust at full throttle.


AI search engine accused of plagiarism announces publisher revenue-sharing plan

Beg, borrow, or license —

Perplexity says WordPress.com, TIME, Der Spiegel, and Fortune have already signed up.


On Tuesday, AI-powered search engine Perplexity unveiled a new revenue-sharing program for publishers, marking a significant shift in its approach to third-party content use, reports CNBC. The move comes after plagiarism allegations from major media outlets, including Forbes, Wired, and Ars parent company Condé Nast. Perplexity, valued at over $1 billion, aims to compete with search giant Google.

“To further support the vital work of media organizations and online creators, we need to ensure publishers can thrive as Perplexity grows,” writes the company in a blog post announcing the program. “That’s why we’re excited to announce the Perplexity Publishers Program and our first batch of partners: TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.com.”

Under the program, Perplexity will share a percentage of ad revenue with publishers when their content is cited in AI-generated answers. The revenue share applies on a per-article basis and potentially multiplies if articles from a single publisher are used in one response. Some content providers, such as WordPress.com, plan to pass some of that revenue on to content creators.
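Perplexity has not published the actual formula, but the per-article mechanics described above imply something like the following sketch. The share rate and dollar amounts are invented placeholders, not Perplexity’s terms:

```python
# Toy sketch of the per-article revenue split described above. Perplexity
# has not published its actual rate; SHARE_RATE and the ad revenue figure
# below are invented placeholders.
SHARE_RATE = 0.10  # assumed fraction of an answer's ad revenue per cited article

def publisher_payout(ad_revenue, citations):
    """citations maps publisher -> number of its articles cited in one answer,
    so a publisher cited multiple times earns a multiple of the base share."""
    return {pub: ad_revenue * SHARE_RATE * n for pub, n in citations.items()}

# An answer earning $0.50 in ad revenue that cites two TIME articles and
# one Fortune article:
print(publisher_payout(0.50, {"TIME": 2, "Fortune": 1}))
# -> {'TIME': 0.1, 'Fortune': 0.05}
```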

A press release from WordPress.com states that joining Perplexity’s Publishers Program allows WordPress.com content to appear in Perplexity’s “Keep Exploring” section on its Discover pages. “That means your articles will be included in their search index and your articles can be surfaced as an answer on their answer engine and Discover feed,” the blogging company writes. “If your website is referenced in a Perplexity search result where the company earns advertising revenue, you’ll be eligible for revenue share.”

A screenshot of the Perplexity.ai website taken on July 30, 2024.

Benj Edwards

Dmitry Shevelenko, Perplexity’s chief business officer, told CNBC that the company began discussions with publishers in January, with program details solidified in early 2024. He reported strong initial interest, with over a dozen publishers reaching out within hours of the announcement.

As part of the program, publishers will also receive access to Perplexity APIs that can be used to create custom “answer engines,” as well as “Enterprise Pro” accounts that provide “enhanced data privacy and security capabilities” for all employees of participating publishers for one year.

Accusations of plagiarism

The revenue-sharing announcement follows a rocky month for the AI startup. In mid-June, Forbes reported finding its content within Perplexity’s Pages tool with minimal attribution. Pages allows Perplexity users to curate content and share it with others. Ars Technica sister publication Wired later made similar claims, also noting suspicious traffic patterns from IP addresses likely linked to Perplexity that were ignoring robots.txt exclusions. Perplexity was also found to be manipulating its crawling bots’ ID string to get around website blocks.

As part of company policy, Ars Technica parent Condé Nast disallows AI-based content scrapers, and its CEO Roger Lynch testified in the US Senate earlier this year that generative AI has been built with “stolen goods.” Condé sent a cease-and-desist letter to Perplexity earlier this month.

But publisher trouble might not be Perplexity’s only problem. In some tests of the search engine we performed in February, Perplexity badly confabulated certain answers, even when citations were readily available. Since our initial tests, the accuracy of Perplexity’s results seems to have improved, but providing inaccurate answers (which also plagued Google’s AI Overviews search feature) is still a potential issue.

Compared to the free tier of service, Perplexity users who pay $20 per month can access more capable LLMs such as GPT-4o and Claude 3, so the quality and accuracy of the output can vary dramatically depending on whether a user subscribes or not. The addition of citations to every Perplexity answer allows users to check accuracy—if they take the time to do it.

The move by Perplexity occurs against a backdrop of tensions between AI companies and content creators. Some media outlets, such as The New York Times, have filed lawsuits against AI vendors like OpenAI and Microsoft, alleging copyright infringement in the training of large language models. OpenAI has struck media licensing deals with many publishers as a way to secure access to high-quality training data and avoid future lawsuits.

In this case, Perplexity is not using the licensed articles and content to train AI models but is seeking legal permission to reproduce content from publishers on its website.


Loss of popular 2FA tool puts security-minded GrapheneOS in a paradox

Just a bit too custom for their taste —

Losing access to Authy leads to another reckoning with Google’s security model.

Graphene is a remarkable allotrope, deserving of further study. GrapheneOS is a remarkable ROM, one that Google does not quite know how to accommodate, due to its “tiny, tiny” user numbers compared to mainstream Android.

“If it’s not an official OS, we have to assume it’s bad.”

That’s how Shawn Willden, the tech lead for hardware-backed security in Android, described the current reality of custom Android-based operating systems in response to a real security conundrum. GrapheneOS users discovered recently that Authy, a popular (and generally well-regarded) two-factor authentication manager, will not work on their phones—phones running an OS intended to be more secure and hardened than any standard Android phone.

“We don’t want to punish users of alternative OSes, but there’s really no other option at the moment,” Willden added before his blunt conclusion. “Play Integrity has absolutely no way to guess whether a given custom OS completely subverts the Android security model.”

Play Integrity, formerly SafetyNet Attestation, essentially allows apps to verify whether an Android device has been granted permissions beyond Google’s intended models or has been rooted. Root access is not appealing to the makers of some apps involving banking, payments, competitive games, and copyrighted media.

There are many reasons beyond cheating and skulduggery that someone might root or modify their Android device. But to prove itself secure, an Android device must contact Google’s servers through an API in Google Play Services and then have its bootloader, ROM signature, and kernel verified. GrapheneOS, like most custom Android ROMs, does not contain a Google Play Services package by default but will let users install a sandboxed version of Play Services if they wish.
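For a concrete sense of what that verification means on the app side, here is a minimal sketch of a backend gating access on a decoded Play Integrity verdict. The field and verdict names follow Google’s Play Integrity documentation as best we can tell; treat the details as assumptions rather than a definitive integration:

```python
# Minimal sketch: deciding whether to serve a client based on a decoded
# Play Integrity verdict (the JSON Google's servers return after an app
# submits its integrity token). Field names are per the Play Integrity
# docs as we understand them; treat them as assumptions.
ACCEPTED = {"MEETS_DEVICE_INTEGRITY", "MEETS_STRONG_INTEGRITY"}

def device_passes(verdict: dict) -> bool:
    labels = (verdict.get("deviceIntegrity", {})
                     .get("deviceRecognitionVerdict", []))
    return bool(ACCEPTED.intersection(labels))

# A certified stock device returns one or both labels; a GrapheneOS phone,
# which cannot attest through Google, yields an empty list, so an app that
# gates on this check simply refuses to work there.
print(device_passes({"deviceIntegrity":
                     {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]}}))  # True
print(device_passes({"deviceIntegrity": {"deviceRecognitionVerdict": []}}))      # False
```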

Willden offered some hope for a future in which ROMs could vouch for their non-criminal nature to Google, noting “some discussions with makers of high-quality ROMs” about passing the Compatibility Test Suite, then “establishing some kind of relationship we can use to trust them.” But it’s “a lot of work on both sides, including by lawyers,” Willden notes. And while his team is happy to help, higher-level support is tough because “modders are such a tiny, tiny fraction of the user base.”

The official GrapheneOS X account was less hopeful. It noted that another custom ROM, LineageOS, disables verified boot at installation and “rolls back security in a lot of other ways,” contributing to “a misconception that every alternate OS rolls back security and isn’t production quality.” A typical LineageOS installation, like most custom ROMs, does disable verified boot; it can be re-enabled, but doing so is risky and complicated. GrapheneOS has a page on its site regarding its stance on, and criticisms of, Google’s attestation model for Android.

Ars has reached out to Google, GrapheneOS, and Authy (via owner Twilio) for comment. At the moment, it doesn’t seem like there’s a clear path forward for any party unless one of them is willing to majorly rework what they consider proper security.


Meta to pay $1.4 billion settlement after Texas facial recognition complaint

data harvesting —

Facebook’s parent accused of gathering data from photos and videos without “informed consent.”


Facebook owner Meta has agreed to pay $1.4 billion to the state of Texas to settle claims that the company harvested millions of citizens’ biometric data without proper consent.

The settlement, to be paid over five years, is the largest ever obtained from an action brought by a single US state, said a statement from Attorney General Ken Paxton.

It also marks one of the largest penalties levied at Meta by regulators, second only to a $5 billion settlement it paid the US Federal Trade Commission in 2019 for the misuse of user data in the wake of the Cambridge Analytica privacy scandal.

The original complaint filed by Paxton in February 2022 accused Facebook’s now-closed facial recognition system of collecting biometric identifiers of “millions of Texans” from photos and videos posted on the platform without “informed consent.”

Meta launched a feature in 2011 called “tag suggestions” that suggested which users to tag in photos and videos by scanning the “facial geometry” of those pictured, Paxton’s office said.

In 2021, a year before the lawsuit was filed, Meta announced it was shuttering its facial recognition system, including the tag suggestions feature. It wiped the biometric data it had collected from 1 billion users, citing legal “uncertainty.”

The latest fine comes amid growing concern globally over privacy and data protection risks related to facial recognition, as well as algorithmic bias, although legislation is patchy, differing from jurisdiction to jurisdiction.

In 2021, Facebook agreed to pay a $650 million settlement in a class-action lawsuit in Illinois under a state privacy law over similar allegations related to its face-tagging system.

“This historic settlement demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights,” Paxton said in a statement. “Any abuse of Texans’ sensitive data will be met with the full force of the law.”

Meta previously said that the claims were without merit. However, the company and Texas agreed at the end of May to settle the lawsuit, just weeks before a trial was set to begin.

A spokesperson for Meta said on Tuesday: “We are pleased to resolve this matter, and look forward to exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers.”

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.


RTFB: California’s AB 3211

Some in the tech industry decided now was the time to raise alarm about AB 3211.

As Dean Ball points out, there’s a lot of bills out there. One must do triage.

Dean Ball: But SB 1047 is far from the only AI bill worth discussing. It’s not even the only one of the dozens of AI bills in California worth discussing. Let’s talk about AB 3211, the California Provenance, Authenticity, and Watermarking Standards Act, written by Assemblymember Buffy Wicks, who represents the East Bay.

SB 1047 is a carefully written bill that tries to maximize benefits and minimize costs. You can still quite reasonably disagree with the aims, philosophy or premise of the bill, or its execution details, and thus think its costs exceed its benefits. When people claim SB 1047 is made of crazy pills, they are attacking provisions not in the bill.

That is not how it usually goes.

Most bills involving tech regulation that come before state legislatures are made of crazy pills, written by people in over their heads.

There are people whose full-time job is essentially pointing out the latest bill that might break the internet in various ways, over and over, forever. They do a great and necessary service, and I do my best to forgive them the occasional false alarm. They deal with idiots, with bulls in china shops, on the daily. I rarely get the sense these noble warriors are having any fun.

AB 3211 unanimously passed the California assembly, and I started seeing bold claims about how bad it would be. Here was one of the more measured and detailed ones.

Dean Ball: The bill also requires every generative AI system to maintain a database with digital fingerprints for “any piece of potentially deceptive content” it produces. This would be a significant burden for the creator of any AI system. And it seems flatly impossible for the creators of open weight models to comply.

Under AB 3211, a chatbot would have to notify the user that it is a chatbot at the start of every conversation. The user would have to acknowledge this before the conversation could begin. In other words, AB 3211 could create the AI version of those annoying cookie notifications you get every time you visit a European website.

AB 3211 mandates “maximally indelible watermarks,” which it defines as “a watermark that is designed to be as difficult to remove as possible using state-of-the-art techniques and relevant industry standards.”

So I decided to Read the Bill (RTFB).

It’s a bad bill, sir. A stunningly terrible bill.

How did it unanimously pass the California assembly?

My current model is:

  1. There are some committee chairs and others that can veto procedural progress.

  2. Most of the members will vote for pretty much anything.

  3. They are counting on Newsom to evaluate and if needed veto.

  4. So California only sort of has a functioning legislative branch, at best.

  5. Thus when bills pass like this, it means a lot less than you might think.

Yet everyone stays there, despite everything. There really is a lot of ruin in that state.

Time to read the bill.

It’s short – the bottom half of the page is all deleted text.

Section 1 is rhetorical declarations. GenAI can produce inauthentic images, they need to be clearly disclosed and labeled, or various bad things could happen. That sounds like a job for California, which should require creators to provide tools and platforms to provide labels. So we all can remain ‘safe and informed.’ Oh no.

Section 2 22949.90 provides some definitions. Most are standard. These aren’t:

(c) “Authentic content” means images, videos, audio, or text created by human beings without any modifications or with only minor modifications that do not lead to significant changes to the perceived contents or meaning of the content. Minor modifications include, but are not limited to, changes to brightness or contrast of images, removal of background noise in audio, and spelling or grammar corrections in text.

(i) “Inauthentic content” means synthetic content that is so similar to authentic content that it could be mistaken as authentic.

This post would likely be neither authentic nor inauthentic. Confusing.

(k) “Maximally indelible watermark” means a watermark that is designed to be as difficult to remove as possible using state-of-the-art techniques and relevant industry standards.

That is a much higher standard than ‘reasonable care’ or ‘reasonable assurance.’ This essentially means (after an adoption period) you have to use whatever the ‘best’ technique is. Cost, or a hit to product quality, is not technically a factor. Some common sense applies, but this could get ugly.
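To see why ‘as difficult to remove as possible’ is such a fraught standard, consider a deliberately naive text watermark, a toy of my own construction rather than anything the bill or any vendor specifies. It hides a provider ID in zero-width characters, survives copy-paste, and dies to one line of sanitization:

```python
# Toy text watermark: hide a 16-bit provider ID in zero-width characters
# inserted after spaces. Illustrative only; real schemes (e.g., token
# sampling biases for LLMs) are sturdier, but all known ones are removable.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text, provider_id, nbits=16):
    bits = format(provider_id, f"0{nbits}b")
    out, i = [], 0
    for ch in text:
        out.append(ch)
        if ch == " " and i < nbits:
            out.append(ZW0 if bits[i] == "0" else ZW1)
            i += 1
    return "".join(out)

def decode(text, nbits=16):
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    return int(bits[:nbits], 2) if len(bits) >= nbits else None

marked = embed("the quick brown fox jumps over the lazy dog " * 3, 0xBEEF)
assert decode(marked) == 0xBEEF
# "State-of-the-art" removal, one line:
assert decode(marked.replace(ZW0, "").replace(ZW1, "")) is None
```

Serious watermarking schemes are much harder to strip than this, but every known approach is removable with enough effort, which is exactly what makes a ‘maximally indelible’ mandate so open-ended.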

(f) “Generative AI hosting platform” means an online repository or other internet website that makes generative AI systems available for download.

(g) “Generative AI provider” means an organization or individual that creates, codes, substantially modifies, or otherwise produces a generative AI system.

There is no minimum size or other threshold. It even says ‘individual.’

All right, what do these providers have to do?

That starts next, with 22949.90.1.

(a) A generative AI provider shall do all of the following:

(1) Place imperceptible and maximally indelible watermarks containing provenance data into synthetic content produced or significantly modified by a generative AI system that the provider makes available.

(A) If a sample of synthetic content is too small… [do your best anyway.]

(B) To the greatest extent possible, watermarks shall be designed to retain information that identifies content as synthetic and gives the name of the provider in the event that a sample of synthetic content is corrupted, downscaled, cropped, or otherwise damaged.

So we get ‘to the greatest extent possible’ even in places where brevity makes it absurd, plus another use of ‘maximally.’ No qualifier at all on ‘imperceptible.’

This is not a situation where an occasional false negative is fatal. Why this standard?

(a2) says they have to offer downloadable watermark decoders, with another ‘greatest extent possible’ for adherence to ‘relevant national and international standards.’

(a3) says they need to conduct third-party red teaming exercises, including whether you can ‘add false watermarks to authentic content.’ What? And submit a report.

(b) your system from before this act can be grandfathered in but only if you retroactively make a 99% accurate decoder, or the system is ‘not capable of producing inauthentic content.’

Or here are the exact words, given what I think this provision does:

(b) A generative AI provider may continue to make available a generative AI system that was made available before the date upon which this act takes effect and that does not have watermarking capabilities as described by paragraph (1) of subdivision (a), if either of the following conditions are met:

(1) The provider is able to retroactively create and make publicly available a decoder that accurately determines whether a given piece of content was produced by the provider’s system with at least 99 percent accuracy as measured by an independent auditor.

(2) The provider conducts and publishes research to definitively demonstrate that the system is not capable of producing inauthentic content.

No one has any idea how to create a 99% accurate decoder, let alone a retroactively 99% accurate decoder.
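Note also that ‘at least 99 percent accuracy’ is underspecified, since plain accuracy depends on the mix of content in the test set. A toy illustration, with invented numbers:

```python
# Plain accuracy = TPR * (fraction AI) + TNR * (fraction human).
# Toy numbers, purely to show the standard is gameable as written.
def accuracy(tpr, tnr, frac_ai):
    return tpr * frac_ai + tnr * (1 - frac_ai)

# A "decoder" that declares everything human-made scores 99% accuracy on a
# corpus that is 1% AI-generated, while catching zero synthetic content:
print(accuracy(tpr=0.0, tnr=1.0, frac_ai=0.01))  # 0.99
```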

Every LLM or image model worth using can produce inauthentic content.

This is therefore, flat out, a ban on all existing generative AI systems worth using that produce images or text. Claude 3.5 Sonnet anticipates that all existing LLMs would have to be withdrawn from the market.

(A model producing obviously distorted voice outputs might survive? Maybe.)

As a reminder: This unanimously passed the California assembly.

Moving on.

(c) says no one shall provide anything designed to remove watermarks.

(d) says hosting platforms shall not make available anything not placing maximally indelible watermarks.

This is essentially saying (I think) that all internet hosting platforms would be held responsible if you could download any LLM that did not watermark, which includes every model that currently exists.

(e) requires reporting all vulnerabilities and failures within 24 hours, including notifying everyone involved.

A period of 24 hours is crazy short. The notification of the issue has to include all users who interacted with incorrectly marked data, so this is a public announcement. It gives no time to figure out what happened, or space to actually address or fix it.

(f) was noted above, it requires constant notification of AI content.

(f) (1) A conversational AI system shall clearly and prominently disclose to users that the conversational AI system generates synthetic content.

(A) In visual interfaces, including, but not limited to, text chats or video calling, a conversational AI system shall place the disclosure required under this subdivision in the interface itself and maintain the disclosure’s visibility in a prominent location throughout any interaction with the interface.

(B) In audio-only interfaces, including, but not limited to, phone or other voice calling systems, a conversational AI system shall verbally make the disclosure required under this subdivision at the beginning and end of a call.

(2) In all conversational interfaces of a conversational AI system, the conversational AI system shall, at the beginning of a user’s interaction with the system, obtain a user’s affirmative consent acknowledging that the user has been informed that they are interacting with a conversational AI system. A conversational AI system shall obtain a user’s affirmative consent prior to beginning the conversation.

(3) Disclosures and affirmative consent opportunities shall be made available to a user in the language in which the conversational AI system is communicating with the user.

(4) The requirements under this subdivision shall not apply to conversational AI systems that do not produce inauthentic content.

The intent here is good. People should know when they are interacting with an AI.

The key is to not be like GDPR and end up with endless pop-ups, click-throughs and even audio notifications.

In this case, for verbal content, I think (hope?) that clause (4) actually is doing work.

As in, suppose you are using Siri. Can Siri produce ‘authentic content’? Obviously if you are being sufficiently pedantic then yes. But in practice I’d say no.

If I was trying to salvage this bill, I would add a clause to make it clear that repeated verbal interactions between a user and the same AI system wouldn’t count, and that any system using a clearly robotic voice or one chosen by the user does not count.

I don’t think this would turn every interaction into ‘Hey Siri send an email to Josh inviting him to dinner.’ ‘I am Siri, a conversational AI system, what time should I ask him to come?’ But I’m not fully confident.

For text there’s little question every decent LLM can produce ‘inauthentic content.’ So you’re losing one line of screen space permanently, including on a phone. Sounds annoying, needless and stupid. GDPR stuff.

22949.90.2 requires new digital cameras to include ‘authenticity and provenance watermarks’ on their outputs.

The first use of the camera will require a new disclosure. Then they’ll eat screen space for an indicator of the watermarking at all times when using the camera (why? What does this possibly accomplish?). Again, I can see a good argument for the functional requiring of the core watermark capabilities, but the implementation is needlessly annoying.

22949.90.3 says large online platforms (1 million California customers) shall use labels to ‘prominently disclose’ the provenance data found in watermarks or digital signatures.

(i) “Large online platform” means a public-facing internet website, web application, or digital application, including a social network, media platform as defined in Section 22675, video-sharing platform, messaging platform, advertising network, or search engine that had at least 1,000,000 California users during the preceding 12 months and can facilitate the sharing of synthetic content.

Note that this is not only social networks. A messaging platform has to do this. Is every text message an upload? I really do not think they have thought this through.

(1) The labels shall indicate whether content is fully synthetic, partially synthetic, authentic, authentic with minor modifications, or does not contain a watermark.

I don’t mind the idea of ‘there is a symbol to indicate that AI content is from an AI.’

It’s rather looney to forcibly label every other piece of content ‘this is human.’

Why? What does this accomplish? Can we perhaps not be such idiots?

(b) The disclosure required under subdivision (a) shall be readily legible to an average viewer or, if the content is in audio format, shall be clearly audible. A disclosure in audio content shall occur at the beginning and end of a piece of content and shall be presented in a prominent manner and at a comparable volume and speaking cadence as other spoken words in the content. A disclosure in video content should be legible for the full duration of the video.

Think ‘I am Senator Bob, and I approved this message,’ except twice, on every clip.

Not every AI clip. Every clip, period. If it’s human, it will need to start with ‘this is not AI,’ then end with ‘this is not AI.’

If it’s a video, you can get an icon instead. Plausibly every audio clip becomes a ‘video’ so that the video can contain the icon.

Complete looney tunes.

They do this to users doing uploads, too. Every time you upload anything you did that isn’t AI, you’d need to check a box (as the bill is written right now) that says ‘this is human content.’

Can’t we simply, at most… require disclosure when it is indeed AI content (and another if you are unsure)? And use auto-detect on the actual watermarks, so the user almost never has to actually do anything, since the platform has to use ‘state of the art’ detection techniques anyway?

Do we instead need this active affirmation on every Tweet and Instagram photo?

22949.90.4 calls for annual risk assessments from generative AI providers and large online platforms, including [various distinct risks of varying types.]

If you’re wondering if my eyes are rolling yet again, the answer is yes, and a lot.

22949.90.5 defines fines as up to $1 million or 5% of the violator’s global annual revenue, whichever is greater.

Did the European Union write this bill? It’s like Bad Bill Bingo up here. Vile stuff. A single violation could cost Meta about $7 billion?
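The arithmetic, using rough FY2023 revenue figures (Meta about $134.9 billion, Alphabet about $307.4 billion):

```python
# Fine ceiling: the greater of $1 million or 5% of global annual revenue.
def max_fine(global_revenue):
    return max(1_000_000, 0.05 * global_revenue)

print(max_fine(134.9e9) / 1e9)  # Meta FY2023 revenue -> ~6.7 ($ billions)
print(max_fine(307.4e9) / 1e9)  # Alphabet FY2023 revenue -> ~15.4 ($ billions)
```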

22949.90.6 says the Department of Technology shall implement and carry out regulations within 90 days, and finally 22949.91 provides severability.

Existing open models would be toast the same way the closed models would be toast. But beyond that, what happens?

I don’t know with any confidence. The bill does not specify.

Would an open weights model developer be responsible for a subsequent fine tuning that removed or altered the watermark? What counts as distinct?

It could plausibly end up being everything from ‘you are responsible for anything downwind of your release no matter what’ to ‘once they fine tune it that is their problem.’

My guess is the standard would be ‘substantially modify,’ since doing that makes one a ‘generative AI provider.’ In context, any attempt to evade the bill’s requirements could be seen automatically as a ‘substantial’ modification, so you would effectively be safe. Or at least, you would be if that step was indeed substantial, and you didn’t leave an ‘insert_watermarks=true’ flag lying around that someone could flip.

Or not. Hell if I know. Which means chilling effect.

What we do know for certain is that this bans platforms from allowing the downloading of models that lack the watermarking, which includes all currently existing models. It is not obvious how one would comply with this.

A good bill thinks about these questions, and clearly answers them. AB 3211 doesn’t.

So to summarize what I think this bill most importantly does in practice:

  1. Essentially all LLMs and most other generative AI systems are banned.

  2. New generative AI systems must place maximally effective watermarks on all content, in ways that may or may not be possible to comply with.

  3. Open models might or might not have it even worse than that, and we don’t know. We do know that hosts could not let anyone download any LLM that exists today.

  4. New digital cameras have to include watermarks.

  5. Any interaction with an AI system whose content could be mistaken for a human must include disclosure it is an AI system. That means permanent on screen statement for text or video, and audio statement for voice.

  6. Many things with 1 million California users, including search engines, social media platforms and messaging services, have to visibly mark every piece of text or video as human or AI generated. Every audio must say which one it is at the start and finish. Every user input must include an active user indication of whether it is AI or human (and the system must run detection software on it to check).

  7. Violations can cost you $1 million or 5% of your global revenue. Which for Meta would be ~$7 billion, or ~$15 billion for Google.

I would like to think that the system is not this stupid. That if this somehow got to Newsom’s desk, that we would all rise up as one to warn him to veto this, that he would have his people actually read the bill, and he would stop this madness. But one cannot ever be sure.

There would doubtless be many legal challenges. I don’t know how bad it would get in practice. If everything so far hasn’t caused people to leave San Francisco, I can never be confident that any new thing will be sufficient.

But this seems really, really bad, from its large principles to its detailed language to its likely consequences if actually implemented in practice.

There are several points where this bill offers a sharp contrast with SB 1047, illustrating how very differently these two bills were constructed.

Here are some of them.

  1. AB 3211 addresses labeling content. SB 1047 tries to prevent catastrophes.

  2. AB 3211 retroactively bans all existing LLMs. SB 1047 does not touch them at all.

  3. AB 3211 applies to generative AI systems of any size, with no restrictions. SB 1047 has no impact whatsoever unless you spend $100 million in training compute.

  4. AB 3211 does not specify who is responsible for what versions of what open models. SB 1047 has a definition that has gone through rounds of debate.

  5. AB 3211 uses the standards ‘maximally’ and ‘greatest extent possible,’ and in some places no qualifiers at all, for things we do not know how to do. SB 1047 centrally uses ‘reasonable assurance’ which is close to ‘reasonable care.’

  6. AB 3211 gives 24 hours to report an incident, in a way that is effectively fully public. SB 1047 already gives 72 hours and may end up giving more, despite that information potentially being of catastrophic importance.

  7. AB 3211 fines you a percentage of global revenue. SB 1047 does not do that.

  8. AB 3211 requires continuous disclosures and box checking and background annoyances, even when no AIs are involved, usually for no purpose. SB 1047 does not do anything of the kind.

If anything, others raising the alarm about AB 3211 were dramatically underselling how bad and destructive this bill would be in its current form. If we are going to succeed in our Quest for Sane Regulation, while avoiding insane ones, calibration is necessary. Different proposals need to be treated differently, and addressed on their merits, without fabrication, hallucination or hyperbole.

I have yet to see, from anyone I follow or respect, a statement of support for AB 3211.

So, yes. This AB 3211 is a no good, very bad bill, sir.


Air pollution makes it harder for bees to smell flowers

protect the pollinators —

Contaminants can alter plant odors and warp insects’ senses, disrupting the process of pollination.

Scientists are uncovering various ways that air pollution can interfere with the ability of insects to pollinate plants.

In the summers of 2018 and 2019, ecologist James Ryalls and his colleagues would go out to a field near Reading in southern England to stare at the insects buzzing around black mustard plants. Each time a bee, hoverfly, moth, butterfly, or other insect tried to get at the pollen or nectar in the small yellow flowers, they’d make a note.

It was part of an unusual experiment. Some patches of mustard plants were surrounded by pipes that released ozone and nitrogen oxides—polluting gases produced around power plants and conventional cars. Other plots had pipes releasing normal air.

The results startled the scientists. Plants smothered by pollutants were visited by up to 70 percent fewer insects overall, and their flowers received 90 percent fewer visits compared with those in unpolluted plots. The concentrations of pollutants were well below what US regulators consider safe. “We didn’t expect it to be quite as dramatic as that,” says study coauthor Robbie Girling, an entomologist at the University of Southern Queensland in Australia and a visiting professor at the University of Reading.

A growing body of research suggests that pollution can disrupt insect attraction to plants—at a time when many insect populations are already suffering deep declines due to agricultural chemicals, habitat loss, and climate change. Around 75 percent of wild flowering plants and around 35 percent of food crops rely on animals to move pollen around, so that plants can fertilize one another and form seeds. Even the black mustard plants used in the experiment, which can self-fertilize, exhibited a drop of 14 percent to 31 percent in successful pollination as measured by the number of seedpods, seeds per pod, and seedpod weight from plants engulfed by dirty air.

Scientists are still working out how strong and widespread these effects of pollution are, and how they operate. They’re learning that pollution may have a surprising diversity of effects, from changing the scents that draw insects to flowers to warping the creatures’ ability to smell, learn, and remember.

This research is still young, says Jeff Riffell, a neuroscientist at the University of Washington. “We’re only touching the tip of the iceberg, if you will, in terms of how these effects are influencing these pollinators.”

Altered scents

Insects often rely on smell to get around. As they buzz about in their neighborhoods, they learn to associate flowers that are good sources of nectar and pollen with their scents. Although some species, like honeybees, also use directions from their hive mates and visual landmarks like trees to navigate, even they critically depend on the sense of smell for sniffing out favorite flowers from afar. Nocturnal pollinators such as moths are particularly talented smellers. “They can smell these patches of flowers from a kilometer away,” Riffell says.

One of the effects of pollution—and what Girling suspects was largely responsible for the pollination declines at the England site—is how it perturbs these flowery aromas. Each fragrance is a unique blend of dozens of compounds that are chemically reactive and degrade in the air. Gases such as ozone or nitrogen oxide will quickly react with these molecules and cause odors to vanish even faster than usual. “For very reactive scents, the plume can only travel a third of the distance than it should actually travel when there is no pollution,” says atmospheric scientist Jose D. Fuentes of Penn State University, who has simulated the influence of ozone on floral scent compounds.
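A crude way to see the effect: with first-order chemical loss, scent concentration falls off exponentially downwind, so the distance at which an insect can still detect it scales inversely with the loss rate. The numbers in this sketch are illustrative placeholders, not values from Fuentes’ simulations:

```python
import math

# First-order chemical loss of a scent carried downwind: concentration goes
# as exp(-k_loss * x / wind_speed), so detectable range scales as 1/k_loss.
# All numbers below are illustrative placeholders, not values from the study.
wind_speed = 2.0    # m/s
detect_frac = 0.01  # insect detects down to 1% of source concentration

def detectable_range(k_loss):  # k_loss: effective loss rate, 1/s
    return -wind_speed * math.log(detect_frac) / k_loss

clean, polluted = detectable_range(0.01), detectable_range(0.03)
print(round(clean), round(polluted), round(polluted / clean, 2))
# -> 921 307 0.33  (tripling the loss rate cuts the range to a third)
```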

And if some compounds degrade faster than others, the bouquet of scents that insects associate with particular plants transforms, potentially rendering them unrecognizable. Girling and his colleagues observed this in experiments in a wind tunnel into which they delivered ozone. The tunnel was also outfitted with a device that steadily released a synthetic blend of floral odors (an actual flower would have wilted, says coauthor Ben Langford, an atmospheric chemist at the UK Centre for Ecology & Hydrology). Using chemical detectors, the team watched the flowery scent plume shorten and narrow as ozone ate away at the edges, with some compounds dropping off entirely as others persisted.

The scientists had trained honeybees to detect the original flowery scent by exposing them to the odor, then giving them sugar water—until they automatically stuck out their tongue-like proboscises to taste it upon smelling the scent. But when bees were tested with ozonated odor representing the edges of the scent plume, either 6 or 12 meters away from the source, only 32 percent and 10 percent, respectively, stuck out their proboscises. The bee is “sniffing a completely different odor at that point,” Langford says.

Researchers also have observed that striped cucumber beetles and buff-tailed bumblebees struggle to recognize their host plants above certain levels of ozone. Some of the most dramatic observations are at night, when extremely reactive pollutants called nitrate radicals accumulate. Riffell and colleagues recently found that about 50 percent fewer tobacco hornworm moths were attracted to the pale evening primrose when the plant’s aroma was altered by these pollutants, and white-lined sphinx moths didn’t recognize the scent at all. This reduced the number of seeds and fruits by 28 percent, the team found in outdoor pollination experiments. “It’s having a really big effect on the plant’s ability to produce seeds,” Riffell says.


Marvel has cast Robert Downey Jr. as Doctor Doom in two new Avengers movies

Robert Downey Jr. will play Doctor Doom in two new Avengers movies from the Russo brothers.

Marvel Studios

The Marvel Cinematic Universe received a much-needed boost this weekend with the box office dominance of Deadpool and Wolverine, which raked in a record-breaking $438.3 million worldwide and $205 million domestically. And the Marvel panel at this weekend’s San Diego Comic-Con kept up the momentum, delighting attendees with sneak peeks of what’s to come—most notably the return of Robert Downey Jr. to the MCU. The twist: RDJ won’t be donning his usual Iron Man suit. Instead, he’ll be playing Doctor Doom for Avengers: Doomsday (2026), with the Russo brothers returning to direct. This will be followed by the Russo-directed Avengers: Secret Wars (2027).

Comic-Con attendees were also treated to exclusive new footage from Captain America: Brave New World, and updates on Thunderbolts and The Fantastic Four reboot, titled First Steps, as well as a surprise screening of Deadpool and Wolverine.

“New mask, same task”

The Russo brothers will direct Avengers: Secret Wars and Avengers: Doomsday.

Marvel Studios

It’s no secret that Marvel Studios originally planned to build its Phase Six Avengers arc (The Kang Dynasty) around Jonathan Majors’ Kang the Conqueror (and associated Variants), introduced in Loki and last year’s Ant-Man and the Wasp: Quantumania. But then Majors was convicted of “reckless assault and harassment” (domestic violence), and Marvel fired the actor soon after. That meant the studio needed to retool its Phase Six plans, culminating in the announced return of the Russo brothers, who directed four of the MCU’s most successful films, which brought in more than $6 billion at the global box office.

Joe Russo told the assembled fans that he and his brother had thoroughly enjoyed their “incredible” four-movie run, but “it left us with our emotions spent.” But over time, “We came to see a road forward with you all,” Anthony Russo chimed in.

Rather than keeping Kang as the villain and recasting the character, the Russo brothers decided to strike out in a new direction: bringing the Secret Wars storyline to the big screen. It’s one of their favorites and an ambitious task, but before they can take on Avengers: Secret Wars (slated for May 2027), the duo told the audience in Hall H that they had to make another movie first: Avengers: Doomsday (slated for May 2026), featuring the wildly popular comics villain Doctor Doom. Of course, the biggest reveal came when a masked figure dressed in the Doctor’s trademark green came onstage and revealed himself to be none other than RDJ, to wild cheers.


SpaceX roars back to orbit barely two weeks after in-flight anomaly

Look who’s back, back again —

“It was incredible to see how quickly the team was able to identify the cause of the mishap.”

The Starlink 10-9 mission lifts off early Saturday morning from Florida.

SpaceX webcast

Early on Saturday morning, at 1:45 am local time, a Falcon 9 rocket soared into orbit from its launch site at Kennedy Space Center in Florida.

By some measures this was an extremely routine mission—it was, after all, SpaceX’s 73rd launch of this calendar year. And like many other Falcon 9 launches this year, the “Starlink 10-9” mission carried 23 of the broadband internet satellites into orbit. However, after a rare failure earlier this month, this particular Falcon 9 rocket was making a return to flight for the company, attempting to get the world’s most active rocket back into service.

And by all measures, it performed. The first stage booster, B-1069, made its 17th flight into orbit before landing on the Just Read the Instructions drone ship in the Atlantic Ocean. Then, a little more than an hour after liftoff, the rocket’s second stage released its payload into a good orbit, from which the Starlink spacecraft will use their on-board thrusters to reach operational altitudes in the coming weeks.

A crack in the sense line

The Falcon 9 rocket failed only a little more than 15 days ago, during a Starlink launch from Vandenberg Space Force Base, California, at 7:35 pm PDT (02:35 UTC) on July 11. During that mission, just a few minutes after stage separation, an unusual buildup of ice was observed on the Merlin vacuum engine that powers the second stage of the vehicle.

According to the company, the Merlin vacuum engine successfully completed its first burn after the second stage separated. However, during this time a liquid oxygen leak developed near the engine—which led to the buildup of ice observed during the webcast.

Engineers and technicians were quickly able to pinpoint the cause of the leak, a crack in a “sense line” for a pressure sensor attached to the vehicle’s liquid oxygen system. “This line cracked due to fatigue caused by high loading from engine vibration and looseness in the clamp that normally constrains the line,” the company said in an update published prior to Saturday morning’s launch.

This leak excessively cooled the engine and left less igniter fluid available for re-lighting the Merlin for its second burn, which was to circularize the rocket’s orbit before releasing the Starlink satellites. The result was a hard start of the Merlin engine. Ultimately, the satellites were released into a lower orbit, where they burnt up in Earth’s atmosphere within days.

The sense line that failed is redundant, SpaceX said. It is not used by the flight safety system, and can be covered by alternate sensors already present on the engine. In the near term, the sense line will be removed from the second stage engine for Falcon 9 launches.

During a news briefing Thursday, SpaceX director Sarah Walker said this sense line was installed based on a customer requirement for another mission. The only difference between this component and other commonly flown sense lines is that it has two connections rather than one, she said. This may have made it a bit more susceptible to vibration, leading to a small crack.

Getting back fast

SpaceX identified the cause of the failure within hours of the anomaly and worked with the Federal Aviation Administration to come to a rapid resolution. On Thursday, the launch company received permission to return to flight.

“It was incredible to see how quickly the team was able to identify the cause of the mishap, and then the associated corrective actions to ensure success,” Walker said.

Before the failure on the night of July 11th, SpaceX had not experienced a mission failure in the previous 297 launches of the Falcon 9 rocket, dating back to the Amos-6 launch pad explosion in September 2016. The short interval between the failure earlier this month, and Saturday’s return to flight, appears to be unprecedented in spaceflight history.

The company now plans to launch two more Starlink missions on the Falcon 9 rocket this weekend, one from Cape Canaveral Space Force Station in Florida and one from Vandenberg Space Force Base in California. It then has three additional missions before a critical astronaut flight for NASA, Crew-9, that could occur as soon as August 18.

For this reason, NASA was involved in the investigation of the second stage failure. Steve Stich, manager of NASA’s Commercial Crew Program, said SpaceX did an “extraordinary job” in identifying the root cause of the failure, and then rapidly looking at its Dragon spacecraft and first stage of the Falcon 9 rocket to ensure there were no other sensors that could cause similar problems.


Union game performers strike over AI voice and motion-capture training

Speaking into the large language model —

Use of motion-capture actors’ performances for AI training is a sticking point.

One day, using pixellated fonts and images to represent that something is a video game will not be a trope. Today is not that day.

SAG-AFTRA has called for a strike of all its members working in video games, with the union demanding that its next contract not allow “companies to abuse AI to the detriment of our members.”

The strike mirrors similar actions taken by SAG-AFTRA and the Writers Guild of America (WGA) last year, which, while also broader in scope than just AI, were similarly focused on concerns about AI-generated work product and the use of member work to train AI.

“Frankly, it’s stunning that these video game studios haven’t learned anything from the lessons of last year—that our members can and will stand up and demand fair and equitable treatment with respect to A.I., and the public supports us in that,” Duncan Crabtree-Ireland, chief negotiator for SAG-AFTRA, said in a statement.

During the strike, the more than 160,000 members of the union will not provide talent to games produced by Disney, Electronic Arts, Activision Blizzard, Take-Two, WB Games, and others. Not every game is affected. Some productions may have interim agreements with union workers, and others, like continually updated games that launched before the current negotiations began in September 2023, may be exempt.

The publishers and other companies issued statements to the media through a communications firm representing them. “We are disappointed the union has chosen to walk away when we are so close to a deal, and we remain prepared to resume negotiations,” a statement offered to The New York Times and other outlets read. The statement said the two sides had found common ground on 24 out of 25 proposals and that the game companies’ offer was responsive and “extends meaningful AI protections.”

The Washington Post says the biggest remaining issue involves on-camera performers, including motion capture performers. Crabtree-Ireland told the Post that while AI training protections were extended to voice performers, motion and stunt work was left out. “[A]ll of those performers deserve to have their right to have informed consent and fair compensation for the use of their image, their likeness or voice, their performance. It’s that simple,” Crabtree-Ireland said in June.

It will be difficult to know the impact of a game performer strike for some time, if ever, owing to the non-linear and secretive nature of game production. A game’s conception, development, casting, acting, announcement, and further development (and development pivots) happen on whatever timeline they happen upon.

SAG-AFTRA has a tool for searching game titles to see if they are struck for union work, but it is finicky, recognizing only specific production titles, code names, and ID numbers. Searches for Grand Theft Auto VI and 6 returned a “Game Over!” (i.e., struck), but Kotaku confirmed the game is technically unaffected, even though its parent publisher, Take-Two, is generally struck.

Video game performers in SAG-AFTRA last went on strike in 2016, that time regarding long-term royalties. The strike lasted 340 days, still the longest in that union’s history, and was settled with pay raises for actors while residuals and terms on vocal stress remained unaddressed. The impact of that strike was generally either hidden or largely blunted, as affected titles hired non-union replacements. Voice work, as noted by the original English voice for Bayonetta, remains a largely unprotected field.


Astronomers find first emission spectra in brightest GRB of all time

shine on, you beautiful BOAT —

Chance that first detected emission line is a noise fluctuation is one in half a billion.

A jet of particles moving at nearly light-speed emerges from a massive star in this artist’s concept of the BOAT.

NASA’s Goddard Space Flight Center Conceptual Image Lab

Scientists have been all aflutter since several space-based detectors picked up a powerful gamma-ray burst (GRB) in October 2022—a burst so energetic that astronomers nicknamed it the BOAT (Brightest Of All Time). Now an international team of astronomers has analyzed an unusual energy peak detected by NASA’s Fermi Gamma-ray Space Telescope and concluded that it was an emission line, according to a new paper published in the journal Science. Per the authors, it’s the first high-confidence emission line ever seen in 50 years of studying GRBs.

As reported previously, gamma-ray bursts are extremely high-energy explosions in distant galaxies lasting from mere milliseconds to several hours. There are two classes of gamma-ray bursts. Most (70 percent) are long bursts lasting more than two seconds, often with a bright afterglow. These are usually linked to galaxies with rapid star formation. Astronomers think that long bursts are tied to the deaths of massive stars collapsing to form a neutron star or black hole (or, alternatively, a newly formed magnetar). The baby black hole would produce jets of highly energetic particles moving near the speed of light, powerful enough to pierce through the remains of the progenitor star, emitting X-rays and gamma rays.

Those gamma-ray bursts lasting less than two seconds (about 30 percent) are deemed short bursts, usually emanating from regions with very little star formation. Astronomers think these gamma-ray bursts are the result of mergers between two neutron stars, or of a neutron star merging with a black hole, producing a “kilonova.” That hypothesis was confirmed in 2017 when the LIGO collaboration picked up the gravitational wave signal of two neutron stars merging, accompanied by the powerful gamma-ray bursts associated with a kilonova.

Several papers were published last year reporting on the analytical results of all the observational data. Those findings confirmed that GRB 221009A was indeed the BOAT, appearing especially bright because its narrow jet was pointing directly at Earth. But the various analyses also yielded several surprising results that puzzled astronomers. Most notably, a supernova should have occurred a few weeks after the initial burst, but astronomers didn’t detect one, perhaps because it was very faint, and thick dust clouds in that part of the sky were dimming any incoming light.

Earlier this year, astronomers confirmed that the BOAT came from a supernova, thanks to the telltale signatures of key elements like calcium and oxygen that one would expect to find with a supernova. However, they did not find evidence of the expected heavy elements like platinum and gold, which bears on the longstanding question of the origin of such elements in the universe. The BOAT might just be special in that regard; further data will tell us more.

“It gave me goosebumps”

A few minutes after the BOAT erupted, Fermi’s Gamma-ray Burst Monitor recorded an unusual energy peak: the feature the team has now identified as that first high-confidence emission line.

The newly detected spectral emission line was likely produced by the annihilation of matter and antimatter, electrons colliding with their antiparticles, positrons, according to the authors, yielding a pair of gamma rays that are blue-shifted toward higher energies because we are looking into the jet. Having a spectral emission associated with a GRB is important because it can shed light on the specific particles and processes involved in the interactions. There have been prior studies reporting possible evidence for absorption or emission lines in other GRBs, but those features have generally turned out to be statistical noise.

That’s not the case with this latest detection, according to co-author Om Sharan Salafia at INAF-Brera Observatory in Milan, Italy, who added that the odds of this turning out to be a statistical fluctuation “are less than one chance in half a billion.” His INAF colleague and co-author, Maria Edvige Ravasio, said that when she first saw the signal, “it gave me goosebumps.”

Why did astronomers take so long to detect it? When the BOAT first erupted in 2022, it saturated most of the space-based gamma-ray detectors, including the Fermi Space Telescope, making them unable to measure the most intense part of the blast. The emission line didn’t appear until a good five minutes after the burst, when the emission had dimmed enough for Fermi to make a measurement. The spectral emission lasted for about 40 seconds and reached a peak energy of about 12 MeV, compared to the 2 or 3 electron volts (eV) of visible light, per the authors.
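As a rough illustration (our back-of-the-envelope sketch, not a calculation from the paper), if the feature is indeed electron-positron annihilation emission, whose rest-frame energy is 511 keV, the ratio of the observed peak energy to that rest energy gives a crude estimate of the overall blueshift:

```python
# Crude estimate of the blueshift implied by the BOAT's emission line,
# assuming it is electron-positron annihilation emission (rest-frame
# energy 511 keV) shifted up to the observed ~12 MeV peak.
# Illustrative only; the paper's spectral modeling is far more detailed.
E_REST_MEV = 0.511  # e-/e+ annihilation line energy in the emitting frame
E_OBS_MEV = 12.0    # approximate observed peak energy, per the authors

blueshift_factor = E_OBS_MEV / E_REST_MEV
print(f"Implied blueshift factor: {blueshift_factor:.0f}")  # roughly 23
```

This simple ratio ignores the burst's cosmological redshift, which works in the opposite direction and would push the jet's required blueshift somewhat higher.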

Science, 2024. DOI: 10.1126/science.adj3638  (About DOIs).

Astronomers find first emission spectra in brightest GRB of all time Read More »

chrome-will-now-prompt-some-users-to-send-passwords-for-suspicious-files

Chrome will now prompt some users to send passwords for suspicious files

SAFE BROWSING —

Google says passwords and files will be deleted shortly after they are deep-scanned.

Chrome will now prompt some users to send passwords for suspicious files

Google is redesigning Chrome malware detections to include password-protected executable files that users can upload for deep scanning, a change the browser maker says will allow it to detect more malicious threats.

Google has long allowed users to switch on the Enhanced Mode of Safe Browsing, a Chrome feature that warns users when they’re downloading a file believed to be unsafe, either because of suspicious characteristics or because it’s on a list of known malware. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that aren’t already allowed or blocked by its detection engine. Under the new changes, Google will prompt these users to provide any password needed to open the file.

Beware of password-protected archives

In a post published Wednesday, Jasika Bawa, Lily Chen, and Daniel Rubery of the Chrome Security team wrote:

Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive—a .zip, .7z, or .rar file, protected by a password—which hides file contents from Safe Browsing and other antivirus detection scans. In order to combat this evasion technique, we have introduced two protection mechanisms depending on the mode of Safe Browsing selected by the user in Chrome.

Attackers often make the passwords to encrypted archives available in places like the page from which the file was downloaded, or in the download file name. For Enhanced Protection users, downloads of suspicious encrypted archives will now prompt the user to enter the file’s password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed. Uploaded files and file passwords are deleted a short time after they’re scanned, and all collected data is only used by Safe Browsing to provide better download protections.

Enter a file password to send an encrypted file for a malware scan

Enlarge / Enter a file password to send an encrypted file for a malware scan

Google

For those who use Standard Protection mode which is the default in Chrome, we still wanted to be able to provide some level of protection. In Standard Protection mode, downloading a suspicious encrypted archive will also trigger a prompt to enter the file’s password, but in this case, both the file and the password stay on the local device and only the metadata of the archive contents are checked with Safe Browsing. As such, in this mode, users are still protected as long as Safe Browsing had previously seen and categorized the malware.
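To make the difference between the two modes concrete, here is a minimal sketch in Python (ours, not Google's code; the function name and payload shapes are invented) of what leaves the device in each case, per the descriptions above:

```python
import hashlib
from typing import Optional

def build_scan_request(archive_bytes: bytes, password: Optional[str],
                       enhanced_protection: bool) -> dict:
    """Hypothetical payload sent to Safe Browsing for a suspicious
    encrypted archive, mirroring the behavior described above."""
    if enhanced_protection:
        # Enhanced Protection: the archive and its password are both
        # uploaded so the server can open the file and deep-scan it.
        return {"file": archive_bytes, "password": password}
    # Standard Protection: the file and password never leave the device;
    # only metadata (a content hash stands in for it here) is checked
    # against malware Safe Browsing has already seen and categorized.
    return {"metadata": {"sha256": hashlib.sha256(archive_bytes).hexdigest(),
                         "size_bytes": len(archive_bytes)}}
```

The trade-off is visible in the sketch: Enhanced Protection can catch never-before-seen malware because the server gets to inspect the decrypted contents, while Standard Protection only matches against what Safe Browsing already knows.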

Sending Google an executable casually downloaded from a site advertising a screensaver or media player is likely to generate little if any hesitation. For more sensitive files, such as a password-protected work archive, there is likely to be more pushback. Despite the assurance that the file and password will be deleted promptly, things sometimes go wrong and aren’t discovered for months or years, if at all. People using Chrome with Enhanced Mode turned on should exercise caution.

A second change Google is making to Safe Browsing is a two-tiered notification system for file downloads. The two tiers are:

  1. Suspicious files, meaning those to which Google’s file-vetting engine has given a lower-confidence verdict, with unknown risk of user harm
  2. Dangerous files, or those with a high-confidence verdict that they pose a high risk of user harm

The new tiers are highlighted by iconography, color, and text in an attempt to make it easier for users to distinguish between the differing levels of risk. “Overall, these improvements in clarity and consistency have resulted in significant changes in user behavior, including fewer warnings bypassed, warnings heeded more quickly, and all in all, better protection from malicious downloads,” the Google authors wrote.

Previously, Safe Browsing notifications looked like this:

Differentiation between suspicious and dangerous warnings.

Enlarge / Differentiation between suspicious and dangerous warnings.

Google

Over the past year, Chrome hasn’t budged on its support of third-party cookies, a decision that allows companies large and small to track users of that browser as they navigate from website to website. Google’s alternative to tracking cookies, known as the Privacy Sandbox, has also received low marks from privacy advocates because it tracks user interests based on their browser usage.

That said, Chrome has long been a leader in introducing protections, such as a security sandbox that cordons off risky code so it can’t mingle with sensitive data and operating system functions. Those who stick with Chrome should at a minimum keep Safe Browsing’s Standard Protection mode on. Users with the experience required to judiciously choose which files to send to Google should consider turning on Enhanced Mode.

Chrome will now prompt some users to send passwords for suspicious files Read More »

sonos-ceo-apologizes-for-botched-app-redesign,-promises-month-by-month-updates

Sonos CEO apologizes for botched app redesign, promises month-by-month updates

More like a downdate, amirite? —

Restoring previously present features is Sonos’ No. 1 priority.

Two people with extremely 70s vibes looking at Sonos' app, with shag carpeting, wood paneling, and houndstooth pants in the frame.

Enlarge / I don’t know how Sonos’ app might have developed during the groovy era their marketing images aim to summon, but it feels like it might not have wanted to rush head-long into disappointing users quite so quickly.

Sonos

Sonos issued a redesigned app in May, and what lots of customers noticed about it wasn’t the refreshed look but the things from the previous design that were entirely missing. These weren’t small things, either, but features Sonos enthusiasts would really notice: sleep timers, local music library access and management, and playlist and song queue editing, along with accessibility downgrades.

In May, a Sonos executive told The Verge that it “takes courage to rebuild a brand’s core product from the ground up, and to do so knowing it may require taking a few steps back to ultimately leap into the future.” You might ask whether the courage could have been mustered not to release the app before it was feature-complete.

Now, nearly three months after shipping, Sonos leadership has pivoted from excitement about future innovations to humility, apology, and a detailed roadmap of fixes. CEO Patrick Spence starts his “Update on the Sonos app from Patrick” with a personal apology, a note that “there isn’t an employee at Sonos who isn’t pained by having let you down,” and a pledge that fixing the app is the No. 1 priority.

App updates have arrived every two weeks since the redesign shipped, Spence writes, and there are more to come. A better device-adding experience and, finally, a local music library interface should arrive in July or August. August and/or September should bring volume responsiveness, UI upgrades, general stability improvements, and alarm reliability. Editing your playlists and queue could arrive in September or October, according to Sonos’ post.

This is not the first time Sonos has acknowledged missteps in its aims to refresh its mobile apps, but it is the most public and contrite acknowledgment yet, and perhaps the most realistic in its timing. In mid-May, Sonos emailed its software and API partners about “valuable feedback” on “the areas where we fell short,” according to an email obtained by Ars Technica. Back then, Sonos told partners it intended to have alarms, queue editing, sleep timers, local music libraries, and Wi-Fi update settings sorted by the end of June.

While different resources can be deployed on different projects, it didn’t help existing customers’ perceptions that, two weeks after shipping its rather incomplete mobile app update, Sonos announced the Ace, its new $450 headphones. As we wrote then, the update did make basic tasks like adjusting volume faster, but its missing features left Sonos “playing damage control with an angry subset of its normally loyal user base.” That user base, which has been asking the company what happened since early May, now has some sense that it isn’t posting into the void.

Sonos CEO apologizes for botched app redesign, promises month-by-month updates Read More »