Author name: DJ Henderson

Automation and Orchestration: The Backbone of Zero Trust

Welcome to the next installment of our zero trust blog series! In our previous post, we explored the critical role of monitoring and analytics in a zero trust model and shared best practices for building a comprehensive monitoring and analytics strategy. Today, we’re shifting our focus to another key enabler of zero trust: automation and orchestration.

In a zero trust model, security must be dynamic, adaptive, and continuous. With no implicit trust granted to any user, device, or application, organizations must be able to quickly and consistently enforce security policies, detect and respond to threats, and maintain a robust security posture across a complex, ever-changing environment.

In this post, we’ll explore the role of automation and orchestration in a zero trust model, discuss the key technologies and processes involved, and share best practices for building a comprehensive automation and orchestration strategy.

The Role of Automation and Orchestration in Zero Trust

In a traditional perimeter-based security model, security processes are often manual, reactive, and siloed. Security teams must manually configure and enforce policies, investigate and respond to alerts, and coordinate across multiple tools and teams to remediate incidents.

However, in a zero trust model, this approach is no longer sufficient. With the attack surface expanding and the threat landscape evolving at an unprecedented pace, organizations must be able to automate and orchestrate security processes across the entire environment, from identity and access management to network segmentation and incident response.

Automation and orchestration play a critical role in enabling zero trust by:

  1. Enforcing consistent policies: Automating the configuration and enforcement of security policies across the environment, ensuring that all users, devices, and applications are subject to the same rules and controls.
  2. Accelerating threat detection and response: Orchestrating the collection, analysis, and correlation of security data from multiple sources, enabling faster detection and response to potential threats.
  3. Reducing human error and inconsistency: Automating repetitive, manual tasks so that policies and processes are applied the same way every time, removing the variability and mistakes that come with manual work.
  4. Enabling continuous monitoring and optimization: Continuously monitoring the environment for changes and anomalies, and automatically adapting policies and controls based on new information and insights.

By applying these principles, organizations can create a more agile, adaptive, and efficient security posture that can keep pace with the demands of a zero trust model.
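
To make the first two principles concrete, here is a minimal sketch of an automated policy decision in Python. It is purely illustrative: the signal names, thresholds, and resulting actions are assumptions for the example, not the behavior of any particular product.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_risk: float           # 0.0 (low) to 1.0 (high), e.g. from an analytics feed
    device_compliant: bool     # posture check: managed, patched, encrypted
    mfa_verified: bool
    resource_sensitivity: str  # "low", "medium", or "high"

def evaluate(request: AccessRequest) -> str:
    """Return a decision: 'allow', 'step_up' (require fresh MFA), or 'deny'."""
    # A non-compliant device is never trusted, regardless of who the user is.
    if not request.device_compliant:
        return "deny"
    # Very high user risk is denied outright, even with MFA.
    if request.user_risk >= 0.9:
        return "deny"
    # Sensitive resources and elevated risk both require verified MFA.
    needs_mfa = request.resource_sensitivity == "high" or request.user_risk >= 0.5
    if needs_mfa and not request.mfa_verified:
        return "step_up"
    return "allow"

# Example: compliant device, moderate user risk, sensitive app, no MFA yet.
print(evaluate(AccessRequest(user_risk=0.4, device_compliant=True,
                             mfa_verified=False, resource_sensitivity="high")))  # step_up
```

Because every request flows through the same evaluation, the policy is enforced consistently, and adjusting a threshold in one place changes the behavior everywhere at once.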

Key Technologies and Processes for Zero Trust Automation and Orchestration

To build a comprehensive automation and orchestration strategy for zero trust, organizations must leverage a range of technologies and processes, including:

  1. Security orchestration, automation, and response (SOAR): Platforms that enable the automation and orchestration of security processes across multiple tools and systems, such as incident response, threat hunting, and vulnerability management.
  2. Infrastructure as code (IaC): Tools and practices that enable the automated provisioning, configuration, and management of infrastructure using code, such as Terraform, Ansible, and CloudFormation.
  3. Continuous integration and continuous deployment (CI/CD): Processes and tools that enable the automated building, testing, and deployment of applications and infrastructure, such as Jenkins, GitLab, and Azure DevOps.
  4. Policy as code: Practices and tools that enable the definition and enforcement of security policies using code, such as Open Policy Agent (OPA) and HashiCorp Sentinel.
  5. Robotic process automation (RPA): Tools that enable the automation of repetitive, manual tasks across multiple systems and applications, such as UiPath and Automation Anywhere.

By leveraging these technologies and processes, organizations can build a comprehensive, automated, and orchestrated approach to zero trust that can adapt to changing business requirements and threat landscapes.
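
To illustrate how a SOAR-style workflow stitches these pieces together, here is a simplified playbook sketch in Python. The enrichment, containment, and ticketing functions are stand-ins for whatever APIs your SIEM, EDR, and ticketing systems actually expose; the point is the orchestration pattern, not any specific integration.

```python
from datetime import datetime, timezone

def enrich_alert(alert: dict) -> dict:
    """Stand-in: look up threat intel, asset owner, and recent user activity."""
    alert["enrichment"] = {"asset_owner": "unknown", "intel_matches": []}
    return alert

def isolate_host(hostname: str) -> None:
    """Stand-in: call an EDR API to cut the host off from the network."""
    print(f"[containment] isolating {hostname}")

def open_ticket(summary: str) -> None:
    """Stand-in: create an incident ticket for human follow-up."""
    print(f"[ticket] {summary}")

def run_playbook(alert: dict) -> None:
    """A minimal 'suspicious login' playbook: enrich, decide, contain, escalate."""
    alert = enrich_alert(alert)
    alert["triaged_at"] = datetime.now(timezone.utc).isoformat()
    if alert.get("severity", "low") in ("high", "critical"):
        isolate_host(alert["hostname"])
        open_ticket(f"Auto-contained {alert['hostname']}: {alert['title']}")
    else:
        open_ticket(f"Analyst review needed: {alert['title']}")

run_playbook({"title": "Impossible travel login", "severity": "high",
              "hostname": "laptop-042"})
```

Encoding the response as code means the same steps run every time, can be version-controlled and tested, and leave an audit trail of what was done and when.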

Best Practices for Zero Trust Automation and Orchestration

Implementing a zero trust approach to automation and orchestration requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Identify and prioritize use cases: Identify the key security processes and use cases that can benefit from automation and orchestration, and prioritize them based on their impact and feasibility. Focus on high-value, high-volume processes first, such as incident response and policy enforcement.
  2. Establish a centralized automation platform: Implement a centralized platform, such as a SOAR or IaC tool, to manage and orchestrate automated processes across the environment. Ensure that the platform can integrate with existing tools and systems and can scale to meet the needs of the organization.
  3. Implement policy as code: Define and enforce security policies using code, leveraging tools such as OPA and Sentinel. Ensure that policies are version-controlled, tested, and continuously updated based on new requirements and insights.
  4. Automate testing and validation: Automate the testing and validation of security controls and policies, leveraging tools such as HashiCorp Sentinel and Chef InSpec. Ensure that tests run continuously and that results are used to drive improvements and optimizations.
  5. Monitor and measure effectiveness: Continuously monitor and measure the effectiveness of automated processes and orchestrations, using metrics such as mean time to detect (MTTD), mean time to respond (MTTR), and false positive rates (see the sketch after this list for a simple way to compute the first two). Use these insights to continuously improve and optimize processes and policies.
  6. Foster collaboration and communication: Encourage close collaboration between security, operations, and development teams, using ChatOps practices and shared collaboration platforms. Ensure that all teams are aligned on the goals and processes of automation and orchestration and that feedback and insights are continuously shared and acted upon.
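
As a starting point for practice 5, the sketch below computes MTTD and MTTR from a handful of incident records. The field names and timestamp format are assumptions about how your ticketing or SOAR platform exports data; adapt them to whatever you actually collect.

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M"

# Hypothetical incident records; in practice these would be exported from a
# ticketing or SOAR platform.
incidents = [
    {"occurred": "2024-06-01T08:00", "detected": "2024-06-01T08:20", "resolved": "2024-06-01T10:00"},
    {"occurred": "2024-06-03T14:00", "detected": "2024-06-03T14:05", "resolved": "2024-06-03T15:30"},
]

def minutes_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60

def mean_time_to_detect(records: list[dict]) -> float:
    """Average minutes from occurrence to detection (MTTD)."""
    return sum(minutes_between(r["occurred"], r["detected"]) for r in records) / len(records)

def mean_time_to_respond(records: list[dict]) -> float:
    """Average minutes from detection to resolution (MTTR)."""
    return sum(minutes_between(r["detected"], r["resolved"]) for r in records) / len(records)

print(f"MTTD: {mean_time_to_detect(incidents):.1f} minutes")
print(f"MTTR: {mean_time_to_respond(incidents):.1f} minutes")
```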

By implementing these best practices and continuously refining your automation and orchestration posture, you can build a more agile, adaptive, and efficient approach to zero trust that can keep pace with the demands of the modern threat landscape.

Conclusion

In a zero trust world, automation and orchestration are the backbone of the security organization. By automating and orchestrating key security processes and policies, organizations can enforce consistent controls, accelerate threat detection and response, reduce human error and inconsistency, and enable continuous monitoring and optimization.

However, achieving effective automation and orchestration in a zero trust model requires a commitment to leveraging the right technologies and processes, fostering collaboration and communication between teams, and continuously monitoring and optimizing effectiveness. It also requires a shift in mindset, from a reactive, manual approach to a proactive, automated approach that can adapt to changing business requirements and threat landscapes.

As you continue your zero trust journey, make automation and orchestration a top priority. Invest in the tools, processes, and skills necessary to build a comprehensive automation and orchestration strategy, and regularly assess and refine your approach to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of governance and compliance in a zero trust model and share best practices for aligning zero trust initiatives with regulatory requirements and industry standards.

Until then, stay vigilant and keep automating!

Man suffers rare bee sting directly to the eyeball—it didn’t go well

Nightmare fuel —

He did recover. No disturbing images in the article, but a link for those who dare.

Bees fly to their hive.

In what may be the biological equivalent to getting struck by lightning, a very unlucky man in the Philadelphia area took a very rare bee sting directly to the eyeball—and things went badly from there.

As one might expect, the 55-year-old went to the emergency department, where doctors tried to extract the injurious insect’s stinger from the man’s right eye. But it soon became apparent that they didn’t get it all.

Two days after the bee attack, the man went to the Wills Eye Hospital with worsening vision and pain in the pierced eye. At that point, the vision in his right eye had deteriorated to only being able to count fingers. The eye was swollen, inflamed, and bloodshot. Blood was visibly pooling at the bottom of his iris. And right at the border between the man’s cornea and the white of his eye, ophthalmologists spotted the problem: a teeny spear-like fragment of the bee’s stinger still stuck in place.

(Images of the eye and stinger fragment are here for those who aren’t squeamish. The white arrow in Panel A shows the location of the stinger fragment while the asterisk marks the pooled blood.)

Get thee to an ophthalmologist

In a report published recently in the New England Journal of Medicine, treating ophthalmology experts Talia Shoshany and Zeba Syed made a critical recommendation: If you happen to be among the ill-fated few who are stung in the eye by a bee, you should make sure to see an eye doctor specifically.

“I am not surprised that the ER missed a small fragment,” Shoshany told Ars over email. “They pulled out the majority of the stinger, but the small fragment was only able to be visualized at a slit lamp,” she said, referring to a microscope with a bright light used in eye exams. In this case, they visualized the stinger at 10X or 16X magnification with the additional help of a fluorescent dye. Moreover, after spotting it, the stinger fragment “needed to be pulled out with ophthalmic-specific micro-forceps.”

After finally getting the entirety of the wee dagger out, Shoshany and Syed prescribed a topical antibacterial and prednisolone eye drops (a steroid for inflammation). At a five-month follow-up, the patient had recovered and the vision in his right eye had improved to 20/25.

For those now in fear of eye stings, Shoshany has some comforting words: “Ocular bee stings are very rare.” She noted this was the first one she had seen in her career. Although there are documented cases in the scientific literature, the incidence rate is unknown. The odds of getting struck by lightning, meanwhile, are 1 in 15,300, according to the National Weather Service.

But one troubling aspect of this case is that it’s unclear why the man was stung to begin with. According to Shoshany, the man worked on a property with a beehive, but he didn’t work with the insects himself. “He reports he was just walking by and several bees flew up to him; one stung him in the eye,” she said. It’s unclear what provoked them.

T-Mobile users enraged as “Un-carrier” breaks promise to never raise prices

Illustration of T-Mobile customers protesting price hikes

Aurich Lawson

In 2017, Kathleen Odean thought she had found the last cell phone plan she would ever need. T-Mobile was offering a mobile service for people age 55 and over, with an “Un-contract” guarantee that it would never raise prices.

“I thought, wow, I can live out my days with this fixed plan,” Odean, a Rhode Island resident who is now 70 years old, told Ars last week. Odean and her husband switched from Verizon to get the T-Mobile deal, which cost $60 a month for two lines.

Despite its Un-contract promise, T-Mobile in May 2024 announced a price hike for customers like Odean who thought they had a lifetime price guarantee on plans such as T-Mobile One, Magenta, and Simple Choice. The $5-per-line price hike will raise her and her husband’s monthly bill from $60 to $70, Odean said.

As we’ve reported, T-Mobile’s January 2017 announcement of its “Un-contract” for T-Mobile One plans said that “T-Mobile One customers keep their price until THEY decide to change it. T-Mobile will never change the price you pay for your T-Mobile One plan. When you sign up for T-Mobile One, only YOU have the power to change the price you pay.”

T-Mobile contradicted that clear promise on a separate FAQ page, which said the only real guarantee was that T-Mobile would pay your final month’s bill if the company raised the price and you decided to cancel. Customers like Odean bitterly point to the press release that made the price guarantee without including the major caveat that essentially nullifies the promise.

“I gotta tell you, it really annoys me”

T-Mobile’s 2017 press release even blasted other carriers for allegedly being dishonest, saying that “customers are subjected to a steady barrage of ads for wireless deals—only to face bill shock and wonder what the hell happened when their Verizon or AT&T bill arrives.”

T-Mobile made the promise under the brash leadership of CEO John Legere, who called the company the “Un-carrier” and frequently insulted its larger rivals while pledging that T-Mobile would treat customers more fairly. Legere left T-Mobile in 2020 after the company completed a merger with Sprint in a deal that made T-Mobile one of three major nationwide carriers alongside AT&T and Verizon.

Then-CEO of T-Mobile John Legere at the company’s Un-Carrier X event in Los Angeles on Tuesday, Nov. 10, 2015.

Getty Images | Bloomberg

After being notified of the price hike, Odean filed complaints with the Federal Communications Commission and the Rhode Island attorney general’s office. “I can afford it, but I gotta tell you, it really annoys me because the promise was so absolutely clear… It’s right there in writing: ‘T-Mobile will never change the price you pay for your T-Mobile One plan.’ It couldn’t be more clear,” she said.

Now, T-Mobile is “acting like, oh, well, we gave ourselves a way out,” Odean said. But the caveat that lets T-Mobile raise prices whenever it wants, “as far as I can tell, was never mentioned to the customers… I don’t care what they say in the FAQ,” she said.

Apple’s “Longevity, by Design” argues its huge scale affects its repair policies

Apple Longevity by Design whitepaper —

Apple must consider volume, but also the world outside its closed loop.

Apple has a lot to say about the third-party battery market in “Longevity, by Design,” specifically about how many batteries fail to meet testing standards.

Apple

Earlier this week, Apple published a whitepaper titled “Longevity, by Design.” The purpose, Apple says, is to explain “the company’s principles for designing for longevity—a careful balance between product durability and repairability.” It also contains some notable changes to Apple’s parts pairing and repair technology.

Here is a summary of the action items in the document’s 24 pages:

  • The self-service diagnostics tool that arrived in the US last year is now available in 32 European countries.
  • True Tone, the color-balancing screen feature, can soon be activated on third-party screens, “to the best performance that can be provided.”
  • Battery statistics, like maximum capacity and cycle count, will be available “later in 2024” for third-party batteries, with a notice that “Apple cannot verify the information presented.”
  • Used Apple parts, transferred from one device to another, will be “as easy to use as new Apple parts” in select products “later this year.”
  • Parts for “most repairs” from Apple’s Self Service Repair program will no longer require a device serial number to order.

Changes timed to “later this year” may well indicate their arrival with iOS 18 or a subsequent update.

Apple’s take on repair focuses on scale

To whom is Apple explaining its principles? Apple might say it’s speaking to consumers and the public, but one might infer that the most coveted audience is elected representatives, or their staff, as they consider yet another state or federal bill aimed at regulating repair. Earlier this year, Oregon and Colorado passed repair bills that stop companies from halting repairs with software checks on parts, or “parts pairing.” Other recent bills and legal actions have targeted repair restrictions in Minnesota, Canada, and the European Union.

Apple came out in support of a repair bill in California and at the federal level, in large part because it allows for parts and tools pricing at “fair and reasonable terms” and requires non-affiliated vendors to disclose their independence and use of third-party parts to customers.

“Longevity, by Design” stakes out Apple’s position that there are things more important than repair. Due to what Apple says is its unique combination of software support, resale value, and a focus on preventing the most common device failures, the company “leads the industry in longevity” as measured in products’ value holding, lifespans, and service rates, Apple says. Hundreds of millions of iPhones more than five years old are in use, out-of-warranty service rates dropped 38 percent from 2015 to 2022, and initiatives like liquid ingress protection dropped repair rates on the iPhone 7 and 7 Plus by 75 percent.

“The reliability of our hardware will always be our top concern when seeking to maximize the lifespan of products,” the whitepaper states. “The reason is simple: the best repair is the one that’s never needed.”

Photos from Apple’s “Longevity, by Design” document showing water ingress testing as part of its design.

Apple

Consider the charge port

Apple offers the charging port on iPhones as “an internal case study” to justify why it often bundles parts together rather than making them individually replaceable. According to the independent repair shops and techs I’ve talked to in my career, iPhone charging ports, and the chips that control them, are not uncommon failure points. “Cheap charging cables from 7-11 are serial killers,” one board-level repair shop once told me. Apple disagrees, saying it must consider the broader impact of its designs.

“Making the charging port individually replaceable would require additional components, including its own flexible printed circuit board, connector, and fasteners that increase the carbon emissions required to manufacture each device,” Apple states. This could be justified if 10 percent of iPhones required replacement, but Apple says “the actual service rate was below 0.1%.” As such, keeping the port integrated is a lower-carbon-emission choice.

Tesla announces third and fourth Cybertruck recalls

Cybertruck recalls —

Wiper motor may stop working and cosmetic applique may detach while driving.

A Tesla Cybertruck at the Viva Technology show at Parc des Expositions Porte de Versailles on May 24, 2024 in Paris, France.

Getty Images | Chesnot

Tesla has announced two more recalls of the Cybertruck, both of which affect over 11,000 vehicles produced since the car first became available late last year. Cybertruck owners will need to bring their cars in for service because of faulty windshield wiper motors and a cosmetic piece that could come off the vehicle while it’s being driven.

Tesla previously recalled the Cybertruck in April over a faulty accelerator pedal assembly and in January for a software problem in which the font size of the brake, park, and antilock brake system visual warning indicators was too small. The January recall also affected the Tesla Model 3, S, X, and Y.

A new recall notice says, “the front windshield wiper motor controller may stop functioning due to electrical overstress to the gate driver component. A non-functioning windshield wiper may reduce visibility in certain operating conditions, which may increase the risk of a collision.”

The wiper motors have a gate driver that “may have been damaged due to electrical overstress during functional testing,” the notice said. The fix is to “replace the windshield wiper motor with a wiper motor that has a properly functioning gate driver component.”

The wiper motor recall affects 11,688 cars. While it is estimated that 2 percent of cars have the defect, the notice said the “recall population includes all Model Year 2024 Cybertruck vehicles manufactured from November 13, 2023, to June 6, 2024.”

Tesla said it is not aware of any crashes, injuries, or deaths related to the wiper motor problem. Newly manufactured Cybertrucks shouldn’t have the problem because “the supplier introduced a functional test using a lower current to prevent damage and ensure integrity of the gate driver,” the notice said.

Cosmetic applique may not stay on the car

The other new recall notice describes a problem “with a cosmetic applique along the exterior of the trunk bed trim, known as the sail applique, which is affixed to the vehicle with adhesive.” The applique or adhesion was not installed correctly on some cars, “which may cause the sail applique to become loose or separate from the vehicle.”

“If the applique separates from the vehicle while in drive, it could create a road hazard for following motorists and increase their risk of injury or a collision,” the recall notice said. The fix is to “replace or rework the sail applique such that the assembly meets specifications and ensures sufficient adhesion between the applique and the vehicle’s deck rail.”

It’s estimated that 1 percent of vehicles have the applique defect, and the “recall population includes all Model Year 2024 Cybertruck vehicles manufactured from November 13, 2023, to May 26, 2024.” That amounts to 11,383 Cybertrucks. Customers will not be charged for the fixes to the wiper motor and applique.

The problem was discovered in December 2023 when “an undelivered Cybertruck with a single missing applique arrived at a Tesla delivery center after being transported on a vehicle hauler,” the notice said. The problem was found a second time in May 2024 on a customer vehicle, and then on more cars when “Tesla surveyed and assessed the retention of sail appliques on vehicles in the field.”

Tesla said it is not aware of any crashes, injuries, or deaths related to the applique problem. On newly manufactured Cybertrucks, “quality control improvements to the adhesive application” should keep the piece attached to the car.

Separately, one Cybertruck owner recently alleged that his car crashed into a neighbor’s house despite him holding down the brake pedal. The driver claimed that Tesla told him, “We have reviewed logs and due to the terrain the accelerator may or may not disengage when the brake is depressed.”

We contacted Tesla about the alleged braking problem today and will provide an update if the company responds. There is video of the accident, and the driver says the incident left skid marks for about 50 feet, “almost like one motor was accelerating while the other set of wheels locked.”

Toys “R” Us riles critics with “first-ever” AI-generated commercial using Sora

A screen capture from the partially AI-generated Toys “R” Us brand film created using Sora.

Toys R Us

On Monday, Toys “R” Us announced that it had partnered with an ad agency called Native Foreign to create what it calls “the first-ever brand film using OpenAI’s new text-to-video tool, Sora.” OpenAI debuted Sora in February, but the video synthesis tool has not yet become available to the public. The brand film tells the story of Toys “R” Us founder Charles Lazarus using AI-generated video clips.

“We are thrilled to partner with Native Foreign to push the boundaries of Sora, a groundbreaking new technology from OpenAI that’s gaining global attention,” wrote Toys “R” Us on its website. “Sora can create up to one-minute-long videos featuring realistic scenes and multiple characters, all generated from text instruction. Imagine the excitement of creating a young Charles Lazarus, the founder of Toys “R” Us, and envisioning his dreams for our iconic brand and beloved mascot Geoffrey the Giraffe in the early 1930s.”

The company says that The Origin of Toys “R” Us commercial was co-produced by Toys “R” Us Studios President Kim Miller Olko as executive producer and Native Foreign’s Nik Kleverov as director. “Charles Lazarus was a visionary ahead of his time, and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” Miller Olko said in a statement.

In the video, we see a child version of Lazarus, presumably generated using Sora, falling asleep and having a dream that he is flying through a land of toys. Along the way, he meets Geoffrey, the store’s mascot, who hands the child a small red car.

Many of the scenes retain obvious hallmarks of AI-generated imagery, such as unnatural movement, strange visual artifacts, and the irregular shape of eyeglasses. In February, a few Super Bowl commercials intentionally made fun of similar AI-generated video defects, which became famous online after fake AI-generated beer commercial and “Pepperoni Hug Spot” clips made using Runway’s Gen-2 model went viral in 2023.

A gallery of screen captures from the partially AI-generated Toys “R” Us brand film created using Sora. (Toys “R” Us)

AI-generated artwork receives frequent criticism online due to the use of human-created artwork to train AI models that create the works, the perception that AI synthesis tools will replace (or are currently replacing) human creative jobs, and the potential environmental impact of AI models, which are seen as energy-wasteful by some critics. Also, some people just think the output quality looks bad.

On the social network X, comedy writer Mike Drucker wrapped up several of these criticisms into one post, writing, “Love this commercial is like, ‘Toys R Us started with the dream of a little boy who wanted to share his imagination with the world. And to show how, we fired our artists and dried Lake Superior using a server farm to generate what that would look like in Stephen King’s nightmares.'”

Other critical comments were more frank. Filmmaker Joe Russo posted: “TOYS ‘R US released an AI commercial and it fucking sucks.”

YouTube tries convincing record labels to license music for AI song generator

Jukebox zeroes —

Video site needs labels’ content to legally train AI song generators.

Man using phone in front of YouTube logo

Chris Ratcliffe/Bloomberg via Getty

YouTube is in talks with record labels to license their songs for artificial intelligence tools that clone popular artists’ music, hoping to win over a skeptical industry with upfront payments.

The Google-owned video site needs labels’ content to legally train AI song generators, as it prepares to launch new tools this year, according to three people familiar with the matter.

The company has recently offered lump sums of cash to the major labels—Sony, Warner, and Universal—to try to convince more artists to allow their music to be used in training AI software, according to several people briefed on the talks.

However, many artists remain fiercely opposed to AI music generation, fearing it could undermine the value of their work. Any move by a label to force their stars into such a scheme would be hugely controversial.

“The industry is wrestling with this. Technically the companies have the copyrights, but we have to think through how to play it,” said an executive at a large music company. “We don’t want to be seen as a Luddite.”

YouTube last year began testing a generative AI tool that lets people create short music clips by entering a text prompt. The product, initially named “Dream Track,” was designed to imitate the sound and lyrics of well-known singers.

But only 10 artists agreed to participate in the test phase, including Charli XCX, Troye Sivan and John Legend, and Dream Track was made available to just a small group of creators.

YouTube wants to sign up “dozens” of artists to roll out a new AI song generator this year, said two of the people.

YouTube said: “We’re not looking to expand Dream Track but are in conversations with labels about other experiments.”

Licenses or lawsuits

YouTube is seeking new deals at a time when AI companies such as OpenAI are striking licensing agreements with media groups to train large language models, the systems that power AI products such as the ChatGPT chatbot. Some of those deals are worth tens of millions of dollars to media companies, insiders say.

The deals being negotiated in music would be different. They would not be blanket licenses but rather would apply to a select group of artists, according to people briefed on the discussions.

It would be up to the labels to encourage their artists to participate in the new projects. That means the final amounts YouTube might be willing to pay the labels are at this stage undetermined.

The deals would look more like the one-off payments from social media companies such as Meta or Snap to entertainment groups for access to their music, rather than the royalty-based arrangements labels have with Spotify or Apple, these people said.

YouTube’s new AI tool, which is unlikely to carry the Dream Track brand, could form part of YouTube’s Shorts platform, which competes with TikTok. Talks continue and deal terms could still change, the people said.

YouTube’s latest move comes as the leading record companies on Monday sued two AI start-ups, Suno and Udio, which they allege are illegally using copyrighted recordings to train their AI models. A music industry group is seeking “up to $150,000 per work infringed,” according to the filings.

After facing the threat of extinction following the rise of Napster in the 2000s, music companies are trying to get ahead of disruptive technology this time around. The labels are keen to get involved with licensed products that use AI to create songs using their music copyrights—and get paid for it.

Sony Music, which did not participate in the first phase of YouTube’s AI experiment, is in negotiations with the tech group to make available some of its music to the new tools, said a person familiar with the matter. Warner and Universal, whose artists participated in the test phase, are also in talks with YouTube about expanding the product, these people said.

In April, more than 200 musicians including Billie Eilish and the estate of Frank Sinatra signed an open letter.

“Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter said.

YouTube added: “We are always testing new ideas and learning from our experiments; it’s an important part of our innovation process. We will continue on this path with AI and music as we build for the future.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Securing Applications: Zero Trust for Cloud and On-Premises Environments

Welcome back to our zero trust blog series! In our previous post, we discussed the importance of device security and explored best practices for securing endpoints and IoT devices. Today, we’re shifting our focus to another critical component of zero trust: application security.

In a world where applications are increasingly distributed, diverse, and dynamic, securing them has never been more challenging – or more critical. From cloud-native apps and microservices to legacy on-premises systems, every application represents a potential target for attackers.

In this post, we’ll explore the role of application security in a zero trust model, discuss the unique challenges of securing modern application architectures, and share best practices for implementing a zero trust approach to application security.

The Zero Trust Approach to Application Security

In a traditional perimeter-based security model, applications are often trusted by default once they are inside the network. However, in a zero trust model, every application is treated as a potential threat, regardless of its location or origin.

To mitigate the risks that every application now represents, zero trust requires organizations to take a comprehensive, multi-layered approach to application security. This involves:

  1. Application inventory and classification: Maintaining a complete, up-to-date inventory of all applications and classifying them based on their level of risk and criticality.
  2. Secure application development: Integrating security into the application development lifecycle, from design and coding to testing and deployment.
  3. Continuous monitoring and assessment: Continuously monitoring application behavior and security posture to detect and respond to potential threats in real-time.
  4. Least privilege access: Enforcing granular access controls based on the principle of least privilege, allowing users and services to access only the application resources they need to perform their functions.

By applying these principles, organizations can create a more secure, resilient application ecosystem that minimizes the risk of unauthorized access and data breaches.

The Challenges of Securing Modern Application Architectures

While the principles of zero trust apply to all types of applications, securing modern application architectures presents unique challenges. These include:

  1. Complexity: Modern applications are often composed of multiple microservices, APIs, and serverless functions, making it difficult to maintain visibility and control over the application ecosystem.
  2. Dynamic nature: Applications are increasingly dynamic, with frequent updates, auto-scaling, and ephemeral instances, making it challenging to maintain consistent security policies and controls.
  3. Cloud-native risks: Cloud-native applications introduce new risks, such as insecure APIs, misconfigurations, and supply chain vulnerabilities, that require specialized security controls and expertise.
  4. Legacy applications: Many organizations still rely on legacy applications that were not designed with modern security principles in mind, making it difficult to retrofit them with zero trust controls.

To overcome these challenges, organizations must take a risk-based approach to application security, prioritizing high-risk applications and implementing compensating controls where necessary.
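
One lightweight way to start that prioritization is to score each application in the inventory on a few risk attributes and sort by the result. The attributes and weights below are illustrative assumptions, not a standard; a real model would reflect your own data classification scheme and threat assessments.

```python
# Hypothetical inventory entries; in practice these would come from a CMDB or
# an automated discovery tool.
applications = [
    {"name": "payments-api",  "internet_facing": True,  "handles_pii": True,  "legacy": False},
    {"name": "intranet-wiki", "internet_facing": False, "handles_pii": False, "legacy": True},
    {"name": "hr-portal",     "internet_facing": True,  "handles_pii": True,  "legacy": True},
]

def risk_score(app: dict) -> int:
    """Crude additive score: the higher the score, the sooner the app gets attention."""
    score = 0
    score += 3 if app["internet_facing"] else 0  # exposed attack surface
    score += 3 if app["handles_pii"] else 0      # sensitive data at stake
    score += 2 if app["legacy"] else 0           # harder to retrofit controls
    return score

for app in sorted(applications, key=risk_score, reverse=True):
    print(f"{app['name']:<15} risk={risk_score(app)}")
```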

Best Practices for Zero Trust Application Security

Implementing a zero trust approach to application security requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Inventory and classify applications: Maintain a complete, up-to-date inventory of all applications, including cloud-native and on-premises apps. Classify applications based on their level of risk and criticality, and prioritize security efforts accordingly.
  2. Implement secure development practices: Integrate security into the application development lifecycle, using practices like threat modeling, secure coding, and automated security testing. Train developers on secure coding practices and provide them with the tools and resources they need to build secure applications.
  3. Enforce least privilege access: Implement granular access controls based on the principle of least privilege, allowing users and services to access only the application resources they need to perform their functions. Use tools like OAuth 2.0 and OpenID Connect to manage authentication and authorization for APIs and microservices (a minimal scope check is sketched after this list).
  4. Monitor and assess applications: Continuously monitor application behavior and security posture using tools like application performance monitoring (APM), runtime application self-protection (RASP), and web application firewalls (WAFs). Regularly assess applications for vulnerabilities and compliance with security policies.
  5. Secure application infrastructure: Ensure that the underlying infrastructure supporting applications, such as servers, containers, and serverless platforms, is securely configured and hardened against attack. Use infrastructure as code (IaC) and immutable infrastructure practices to ensure consistent and secure deployments.
  6. Implement zero trust network access: Use zero trust network access (ZTNA) solutions to provide secure, granular access to applications, regardless of their location or the user’s device. ZTNA solutions use identity-based access policies and continuous authentication and authorization to ensure that only authorized users and devices can access application resources.
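
To make practice 3 a little more tangible, here is a minimal sketch of scope-based authorization for an API. It assumes the OAuth 2.0 access token has already been validated and decoded (for example, with a JWT library); only the least-privilege check is shown, and the scope names are invented for the example.

```python
# Claims from an already-verified OAuth 2.0 access token (e.g. decoded with a JWT library).
claims = {"sub": "svc-reporting", "scope": "orders:read invoices:read"}

def has_scope(token_claims: dict, needed: str) -> bool:
    """Allow the call only if the token explicitly grants the needed scope."""
    granted = set(token_claims.get("scope", "").split())
    return needed in granted

def get_orders(token_claims: dict) -> list[str]:
    if not has_scope(token_claims, "orders:read"):
        raise PermissionError("insufficient_scope")  # typically surfaced as HTTP 403
    return ["order-1001", "order-1002"]

def delete_order(token_claims: dict, order_id: str) -> None:
    if not has_scope(token_claims, "orders:write"):
        raise PermissionError("insufficient_scope")
    print(f"deleted {order_id}")

print(get_orders(claims))              # allowed: the token carries orders:read
# delete_order(claims, "order-1001")   # would raise: orders:write was never granted
```

Read and write access are granted separately, so a reporting service never holds more privilege than it needs.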

By implementing these best practices and continuously refining your application security posture, you can better protect your organization’s assets and data from the risks posed by modern application architectures.

Conclusion

In a zero trust world, every application is a potential threat. By treating applications as untrusted and applying secure development practices, least privilege access, and continuous monitoring, organizations can minimize the risk of unauthorized access and data breaches.

However, achieving effective application security in a zero trust model requires a commitment to understanding your application ecosystem, implementing risk-based controls, and staying up-to-date with the latest security best practices. It also requires a cultural shift, with every developer and application owner taking responsibility for securing their applications.

As you continue your zero trust journey, make application security a top priority. Invest in the tools, processes, and training necessary to secure your applications, and regularly assess and refine your application security posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of monitoring and analytics in a zero trust model and share best practices for using data to detect and respond to threats in real-time.

Until then, stay vigilant and keep your applications secure!

NASA’s commercial spacesuit program just hit a major snag

Suit issues —

“Unfortunately Collins has been significantly behind schedule.”

NASA astronaut Christina Koch (right) poses for a portrait with fellow Expedition 61 Flight Engineer Jessica Meir, who is inside a US spacesuit for a fit check.

NASA

Almost exactly two years ago, as it prepared for the next generation of human spaceflight, NASA chose a pair of private companies to design and develop new spacesuits. These suits would allow astronauts both to perform spacewalks outside the International Space Station and to walk on the Moon as part of the Artemis program.

Now, that plan appears to be in trouble, with one of the spacesuit providers—Collins Aerospace—expected to back out, Ars has learned. It’s a blow for NASA, because the space agency really needs modern spacesuits.

NASA’s Apollo-era suits have long been retired. The current suits used for spacewalks in low-Earth orbit are four decades old. “These new capabilities will allow us to continue on the ISS and allows us to do the Artemis program and continue on to Mars,” said the director of Johnson Space Center, Vanessa Wyche, during a celebratory news conference in Houston two years ago.

The two winning teams were led by Collins Aerospace and Axiom Space, respectively. They were eligible for task orders worth up to $3.5 billion—in essence, NASA would rent the use of these suits for a couple of decades. Since then, NASA has designated Axiom to work primarily on a suit for the Moon and the Artemis program, and Collins to develop a suit for in-orbit operations, such as space station servicing.

Collins exits

This week, however, Collins said it will likely end its participation in the Exploration Extravehicular Activity Services, or xEVAS, contract. On Tuesday morning, Chris Ayers, general manager at Collins Aerospace, met with employees to tell them about the company’s exit from the program. A NASA source confirmed the decision.

“Unfortunately Collins has been significantly behind schedule,” a person familiar with the situation told Ars. “Collins has admitted they have drastically underperformed and have overspent on their xEVAS work, culminating in a request to be taken off the contract or renegotiate the scope and their budget.”

NASA and Collins Aerospace acknowledged a request for comment sent by Ars early on Tuesday morning but as of the afternoon did not provide substantive replies to questions about this action, nor steps forward.

The agency has been experiencing periodic problems with the maintenance of the suits built decades ago, known as the Extravehicular Mobility Unit, which made its debut in the 1980s. NASA has acknowledged the suit has exceeded its planned design lifetime. Just this Monday, the agency had to halt a spacewalk after the airlock had been depressurized and the hatch opened, due to a water leak in the service and cooling umbilical unit of Tracy Dyson’s spacesuit.

As a result of this problem, NASA will likely only be able to conduct a single spacewalk this summer, after initially planning three, to complete work outside the International Space Station.

Increased pressure on Axiom

During the bidding process for the commercial spacesuit program, which unfolded in 2021 and 2022, just two bidders ultimately emerged. A unit of Raytheon Technologies, Collins was the bidder with the most experience in spacesuits, having designed the original Apollo suits, and it partnered with experienced providers ILC Dover and Oceaneering. Axiom is a newer company that, until the spacesuit competition, was largely focused on developing a private space station.

As they evaluated bids, NASA officials raised some concerns about Collins’ approach, noting that the proposal relied on “rapid acceleration of technology maturation and resolution of key technical trade studies to achieve their proposed schedule.” However, in its source selection statement, the agency concluded that it had a “high level of confidence” that Collins would be able to deliver on its spacesuits.

It is not clear what NASA will do now. One person suggested that NASA would not seek to immediately re-compete the xEVAS because it could signal to private investors that Axiom is not capable of delivering on its spacesuit contracts. (Like a lot of other companies in this capital-constrained era, Axiom Space, according to sources, has been struggling to raise a steady stream of private investment.)

Another source, however, suggested that NASA likely would seek to bring a new partner on board to compete with Axiom. The space agency did something similar in 2007 with its Commercial Orbital Transportation Services program to provide cargo to the space station. When Rocketplane Kistler could not deliver on its commitments, the agency recompeted the contract and ultimately selected Orbital Sciences. If NASA were to re-open competition, one of the bidders could be SpaceX, which has already designed a basic spacesuit to support the private Polaris Dawn mission.

Since the awards two years ago, Axiom has been making comparatively better technical progress on its spacesuit, which is based on the Extravehicular Mobility Unit design that NASA has used for decades. However, the Houston-based company has yet to complete the critical design review process, which can be demanding. Axiom is also battling a difficult supply chain environment—which is especially problematic given that NASA has not built new suits for such a long time.

Verizon screwup caused 911 outage in 6 states—carrier agrees to $1M fine

That’ll teach ’em —

Verizon initially failed to remove a flawed update file that caused two outages.

Verizon Wireless agreed to pay a $1,050,000 penalty to the US Treasury and implement a compliance plan because of a 911 outage in December 2022 that was caused by a botched update, the Federal Communications Commission announced today.

A consent decree explains that the outage was caused by “the reapplication of a known flawed security policy update file.” During the outage, lasting one hour and 44 minutes, Verizon failed to deliver hundreds of 911 calls in Alabama, Florida, Georgia, North Carolina, South Carolina, and Tennessee, the FCC said.

“The [FCC] Enforcement Bureau takes any potential violations of the Commission’s 911 rules extremely seriously. Sunny day outages, as occurred here, can be especially troubling because they occur when the public and 911 call centers least expect it,” Bureau Chief Loyaan Egal said.

The flawed update file was involved in another outage that happened two months earlier, in October 2022. After the October incident, Verizon “implemented a wide range of audits and technical system updates designed to protect against future recurrences of configuration and one-way audio issues,” the consent decree said.

Even before the December outage, Verizon knew that the problematic update file “was related to the root cause of the outage that occurred in October,” the FCC said. “Due to insufficient naming convention protocols and a failure to follow then-current implementation protocols, the flawed security policy update file was reintroduced into the Verizon Wireless network. This resulted in the [December] outage, however without the one-way audio issues.”

Verizon failed to remove flawed update file

The December outage happened when the flawed update file was re-applied by a Verizon Wireless employee. But the fault lies with more than one person, the FCC said:

Despite this prior outage and Verizon Wireless’s understanding that the flawed security policy update file resulted in that prior outage, Verizon Wireless did not remove that security policy update file from the inventory of available security policies, which enabled personnel to select and reapply the flawed security policy update file to the Verizon Wireless network. Additionally, Verizon Wireless admits its employees failed to comply with its “business-as-usual” operating and implementation procedures, which procedures required additional oversight prior to the implementation of the type of security policy update that caused the December Outage.

Verizon admitted in the consent decree that the FCC’s description is “a true and accurate description of the facts underlying the Investigation.” The agreed-upon compliance plan includes processes to prevent the reoccurrence of firewall and one-way audio problems, enhanced processes for implementing security policy updates, testing before significant network changes, risk assessments, a compliance training program for employees, and more.

Verizon must file four compliance reports over the next three years and “report any material noncompliance” with 911 rules and the consent decree terms to the FCC. In a statement provided to Ars, Verizon said the December 2022 outage “was a highly unusual occurrence. We understand the critical importance of maintaining a robust and reliable 911 network, and we’re committed to ensuring that our customers can always rely on our services in times of need.”

Verizon has 30 days to pay the $1.05 million fine. Verizon’s wireless service revenue was $19.5 billion in the first quarter of 2024. The entire company’s quarterly operating revenue was $33 billion, and net income was $4.7 billion.

Verizon isn’t the only major carrier to have a big outage caused by a faulty update. In February 2024, a major AT&T wireless outage caused by a botched network update led to warnings that 911 access could be disrupted. The FCC was investigating that outage.

There was also a statewide 911 outage for two hours in Massachusetts this month, but that one was caused by a faulty firewall used by the state’s 911 vendor.

The math on unplayed Steam “shame” is way off—and no cause for guilt

Steam Backlog Simulator 2024 —

It’s fun to speculate, but sales and library quirks make it impossible to know.

Blast away all the guilt you want in PowerWash Simulator, but there’s no need to feel dirty in the real world about your backlog.

Getty Images

Gaming news site PCGamesN has a web tool, SteamIDFinder, that can do a neat trick. If you buy PC games on Steam and have your user profile set to make your gaming details public, you can enter your numeric user ID into it and see a bunch of stats. One set of stats is dedicated to the total value of the games listed as unplayed; you can share this page as an image linking to your “Pile of Shame,” which includes the total “Value” of your Steam collection and unplayed games.

Example findings from SteamIDFinder, from someone who likely has hundreds of games from Humble Bundles and other deals in their library.

SteamIDFinder

Using data from what it claims are the roughly 10 percent of 73 million Steam accounts in its database set to Public, PCGamesN extrapolates $1.9 billion in unplayed games, multiplies it by 10, and casually suggests that there are $19 billion in unplayed games hanging around. That is “more than the gross national product of Nicaragua, Niger, Chad, or Mauritius,” the site notes.

That is a very loose “$19 billion”

“Multiply by 10” is already a pretty soft science, but the numbers are worth digging into further. For starters, SteamIDFinder is using the current sale price of every game in your unplayed library, as confirmed by looking at a half-dozen “Pile of Shame” profiles. An informal poll of Ars Technica co-workers and friends with notable Steam libraries suggests that games purchased at full price make up a tiny fraction of the games in our backlogs. Games acquired through package deals, like the Humble Bundle, or during one of Steam’s annual or one-time sales, are a big part of most people’s Steam catalogs, I’d reckon.

  • Step 1 to seeing your unplayed collection: Click the three-vertical-bar icon next to your Steam library to filter, choose “Games,” then “Group by Collection” …

  • … And pick “Unplayed” as a Play State filter.

Then there’s what counts as “Unplayed.” Clicking on the filtering tool next to my Steam library and choosing “Unplayed” suggests that I have 54 titles out of 173 total that I have never cracked open. My own manual count of my library is closer to 45. Steam and I disagree on whether I’ve launched and played Baldur’s Gate II: Enhanced Edition (I definitely did and was definitely overwhelmed), Mountain, and SteamWorld Dig. And Steam is definitely not counting games that you buy through Steam, mod in some way, and then launch directly through a Windows executable. I’m certain I’ve played some TIE Fighter: Total Conversion, just not through Valve’s channels. One Ars editor played Half-Life 2 multiple times from 2004–2007, but Steam says they’ve never played it, because it didn’t start counting gameplay hours until March 2009.

Even if they’re not dedicated tools, Steam libraries sometimes end up with little bits of game that you didn’t ask for and might never play, like Half-Life Deathmatch: Source. I have quite a few Star Wars games that I never intend to launch, because they were part of a bundle that got me Jedi Knight and Jedi Outcast for cheaper than either game cost on its own.

What “shame” really looks like

Curious as to what people’s backlogs look like, I asked friends and co-workers to run their own numbers after checking them for errors and oddities. Here’s the Ars list:

  • Kevin Purdy: 173 games, 45 unplayed (26 percent)
  • Lee Hutchinson: 361 games, 109 unplayed (30 percent)
  • Benj Edwards: 404 games, 148 unplayed (36.6 percent)
  • Andrew Cunningham: 172 games, 79 unplayed (46 percent)

Friends who did a check ended up at 25 percent, 40 percent, and 52 percent. So nobody I could easily poll had fewer than 25 percent of their games unplayed, and those with higher numbers tended to have bought into bundles, sales, add-ons, and other entry generators. And nobody thought their dollar value total made any sense at all, given the full-price math.

Back in 2014, Kyle Orland went deep on Steam statistics. Among games released since Steam started tracking hour counts in March 2009, 26 percent had never been played at that point, while another 19 percent had only been played for an hour or less. That’s roughly 45 percent of games having been played for an essentially token amount of time.

There is a much larger point to argue here, too: You do not have to feel “shame” about giving too much money to people making games, especially smaller games, if you do not want to. This applies to even broader understandings of “Unplayed,” like checking out an intro level or two. Sometimes playing a game for a little bit and deciding it’s not something you want to put dozens more hours into is worth it, whether or not you go for the refund.

If you’ve looked up your own stats and feel surprised, you can keep your unplayed games as a dedicated collection in Steam, and it might inspire you to check out the most intriguing left-behinds. Or, like me, filter that list further by the games that are Steam Deck Verified and bring them on your next trip.

You can usually make additional money more easily than additional life. Nobody is going to inherit your Steam library (probably), so it’s not really worth anything anyway. Play what interests you when you have the time, and if your unplayed count helps you stave off your worst sale impulse buys or rediscover lost gems, so be it. There are neat tricks, but there is no real math—and no real shame.
