By Shannon Garcia

A scientific mission to save the sharks

A hammerhead shark less than one meter long swims frantically in a plastic container aboard a boat in the Sanquianga National Natural Park, off Colombia’s Pacific coast. It is a delicate female Sphyrna corona, the world’s smallest hammerhead species, and goes by the local name cornuda amarilla—yellow hammerhead—because of the color of its fins and the edges of its splendid curved head, which is full of sensors to perceive the movement of its prey.

Marine biologist Diego Cardeñosa of Florida International University, along with local fishermen, has just captured the shark and implanted it with an acoustic marker before quickly returning it to the murky waters. A series of receivers will help to track its movements for a year, to map the coordinates of its habitat—valuable information for its protection.

That hammerhead is far from the only shark species that keeps the Colombian biologist busy. Cardeñosa’s mission is to build scientific knowledge to support shark conservation, either by locating the areas where the creatures live or by identifying, with genetic tests, the species that are traded in the world’s main shark markets.

Sharks are under threat for several reasons. The demand for their fins to supply the mainly Asian market (see box) is a very lucrative business: Between 2012 and 2019, it generated $1.5 billion. This, plus their inclusion in bycatch—fish caught unintentionally in the fishing industry—as well as the growing market for shark meat, leads to the death of millions every year. In 2019 alone the estimated total killed was at least 80 million sharks, 25 million of which were endangered species. In fact, in the Hong Kong market alone, a major trading spot for shark fins, two-thirds of the shark species sold there are at risk of extinction, according to a 2022 study led by Cardeñosa and molecular ecologist Demian Chapman, director of the shark and ray conservation program at Mote Marine Laboratory in Sarasota, Florida.

Sharks continue to face a complicated future despite decades of legislation designed to protect them. In 2000, the US Congress passed the Shark Finning Prohibition Act, and in 2011 the Shark Conservation Act. These laws require that sharks brought ashore by fishermen have all their fins naturally attached and aim to end the practice of stripping the creatures of their fins and returning them, mutilated, to the water to die on the seafloor. Ninety-four other countries have implemented similar regulations.

Perhaps the main political and diplomatic tool for shark conservation is in the hands of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), composed of 183 member countries plus the European Union. The treaty offers three degrees of protection, or appendices, to more than 40,000 species of animals and plants, imposing prohibitions and restrictions on their trade according to their threat status.

Sharks were included in CITES Appendix II—which includes species that are not endangered but could become so if trade is not controlled—in February 2003, with the addition of two species: the basking shark (Cetorhinus maximus) and the whale shark (Rhincodon typus). Following that, the list of protected species grew to 12 and then increased significantly in November 2023 with the inclusion of 60 more species of sharks in CITES Appendix II.

But do these tools actually protect sharks? To seek out answers, over the past decade researchers have worked to develop tests that can easily identify which species of sharks are being traded—and determine whether protected species continue to be exploited. They have also focused on studying shark populations around the world in order to provide information for the establishment of protected areas that can help safeguard these animals.

Give yourself a day to tackle all your recommendation and subscription guilt

A modest Patreon proposal —

Opinion: It never ends, but you can triage and help out your favorite creators.

We’re heading into summer, a time when some people get a few half or whole days off from work. These can’t all be vacations, and there’s only so much shopping, golfing, or streaming one can do. Some of this time off even arrives unexpectedly, which means people with kids might find themselves with a rare stretch of time to themselves.

I have a suggestion for some part of one of these days: Declare a Tech Guilt Absolution Day. Sit down, gather up the little computer and phone stuff you love that more people should know about, or free things totally worth a few bucks, and blitz through ratings, reviews, and donations.

Note that I am using the term “guilt,” not “shame.” I do not believe any modern human should feel bad about themselves for all the things they have failed to like, rate, and subscribe to. The modern ecosystems of useful little applications, games, podcasts, YouTube videos, newsletters, and the like demand far more secondary engagement than anyone can manage. Even if you purchase something or subscribe, the creators you appreciate, swimming upstream in the torrential rapids of the attention economy, can always use some attention. So I suggest we triage as best we can.

When you’ve got some time to yourself coming up, mark the Tech Guilt Absolution Day (or just Tech Guilt Day, if you realize it never ends) on your calendar. Sit down and, with the freshest mind you can manage (caffeinated, in many cases), start out with a blank piece of paper, a Word document, or whatever you use. Poll your brain about the little phone, computer, and email things you like and, without even looking, know could use a little boost. This could be a one-time donation, a Patreon or newsletter subscription, writing out a couple of nice sentiments about something more people should know about, or taking the 30 seconds to log in and rate something thumbs-up or five stars.

Turning Hulu into ad-free podcasts

Subscription fatigue is real, and little donations add up, so go ahead and make a budget for this exercise. You might consider checking your existing subscriptions and cycling the money from cancelling one of them into something more relevant. I personally felt great turning the rest of the year’s Hulu subscription into Patreon dollars for my favorite podcast about engineering disasters.

I’ve been doing my own Tech Guilt Day every year or so for the past few years, inspired in part by Ron Lieber’s “Financial Health Day.” Sometimes it’s just 30 minutes of casual ratings clicks and PayPal donations on a half-day. Sometimes I sit down and do the real work of installing Apple’s own Podcasts app, just to give my favorite dog-walk distraction vendors the best possible visibility boost. I’ve always been glad I did it. I’ll often realize how great the ad-free version of something I like is, or rediscover a lost treasure, like a Steam game with some cool new updates (or a dignified ending).

Setting aside and labeling this stretch of one day, however long it is, to do the unpaid labor so many good things must ask of us has made a seemingly impossible task feel more manageable to me. It may do the same for you. Feel free to recommend some other categories of easily neglected creator help in the comments (or the forums, available, of course, to subscribers).

Mod Easy: A retro e-bike with a sidecar perfect for Indiana Jones cosplay

Pure fun —

It’s not the most practical option for passengers, but my son had a blast.

The Mod Easy Sidecar

As some Ars readers may recall, I reviewed the Maven cargo e-bike earlier this year as a complete newb to e-bikes. For my second foray into the world of e-bikes, I took an entirely different path.

The stylish Maven was designed with utility in mind—it’s safe, user-friendly, and practical for accomplishing all the daily transportation needs of a busy family. The second bike, the $4,299 Mod Easy Sidecar 3, is on the other end of the spectrum. Just a cursory glance makes it clear: This bike is built for pure, head-turning fun.

The Mod Easy 3 is a retro-style Class 2 bike—complete with a sidecar that looks like it’s straight out of Indiana Jones and the Last Crusade. Nailing this look wasn’t the initial goal of Mod Bike founder Dor Korngold. In an interview with Ars, Korngold said the Mod Easy was the first bike he designed for himself. “It started with me wanting to have this classic cruiser,” he said, but he didn’t have a sketch or final design in mind at the outset. Instead, the design was based on what parts he had in his garage.

The first step was adding a wooden battery compartment to an old Electra frame he had painted. The battery compartment “looked vintage from the beginning,” he said, but the final look came together gradually as he added the sidecar and some of the other motorcycle-style features. Today, the Mod Easy is a sleek bike reminiscent of World War II-era motorcycles and comes in a chic matte finish.

An early version of the Mod Easy bike.

Dor Korngold

When I showed my 5-year-old son a picture of the bike and sidecar, he was instantly enamored and insisted I review it. How could I refuse? He thoroughly enjoyed riding with me on the Maven, but riding in the sidecar turned out to be some next-level fun. He will readily tell you he gives it five stars out of five. But in case you want a more thorough review, my thoughts are below. I’ll start with some general impressions and then discuss specific features of the bike and experience.

The Mod Easy Sidecar 3 at a glance

General impressions

  • The Mod Easy Sidecar 3. (Photo: Beth Mole)

  • Just the bike, which is sold on its own for $3,299. (Photo: Beth Mole)

Again, this is a stylish, fun bike. The bike alone is an effortless and smooth ride. Although it has the heft of an e-bike at 77 pounds (without the sidecar), it never felt unwieldy to me as a 5-foot-4-inch rider. The torque sensors are beautifully integrated into the riding experience, allowing the motor to feel like a gentle, natural assist to pedaling rather than an on-off boost. Of course, with my limited experience, I can’t comment on how these torque sensors compare to other torque sensors, but I have no complaints, and they’re an improvement over my experience with cadence sensors.

You may remember from my review of the Maven that the entrance to a bike path in my area has a switchback path with three tight turns on a hill. With the Maven’s cadence sensors, I struggled to go through the U-turns smoothly, especially going uphill, even after weeks of practice. With the Mod Easy’s torque sensors (and non-cargo length), I glided through them perfectly on the first try. Overall, the bike handles and corners nicely. The wide-set handlebars give the driving experience a relaxed, cruising feel, while the cushy saddle invites you to sink in and stay awhile. The sidecar, meanwhile, was a fun, head-turning feature, but it comes with some practical trade-offs to consider.

Below, I’ll go through key features, starting with the headlining one: the sidecar.

Securing Endpoints: Zero Trust for Devices and IoT

Welcome to the next installment of our zero trust blog series! In our previous post, we explored the importance of network segmentation and microsegmentation in a zero trust model. Today, we’re turning our attention to another critical aspect of zero trust: device security.

In a world where the number of connected devices is exploding, securing endpoints has never been more challenging – or more critical. From laptops and smartphones to IoT sensors and smart building systems, every device represents a potential entry point for attackers.

In this post, we’ll explore the role of device security in a zero trust model, discuss the unique challenges of securing IoT devices, and share best practices for implementing a zero trust approach to endpoint protection.

The Zero Trust Approach to Device Security

In a traditional perimeter-based security model, devices are often trusted by default once they are inside the network. However, in a zero trust model, every device is treated as a potential threat, regardless of its location or ownership.

To mitigate these risks, zero trust requires organizations to take a comprehensive, multi-layered approach to device security. This involves:

  1. Device inventory and classification: Maintaining a complete, up-to-date inventory of all devices connected to the network and classifying them based on their level of risk and criticality.
  2. Strong authentication and authorization: Requiring all devices to authenticate before accessing network resources and enforcing granular access controls based on the principle of least privilege.
  3. Continuous monitoring and assessment: Continuously monitoring device behavior and security posture to detect and respond to potential threats in real-time.
  4. Secure configuration and patch management: Ensuring that all devices are securely configured and up to date with the latest security patches and firmware updates.

By applying these principles, organizations can create a more secure, resilient device ecosystem that minimizes the risk of unauthorized access and data breaches.
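
To make the first two of these principles a bit more concrete, here is a minimal sketch of how a device inventory with risk classification might drive a least-privilege access decision. It is a simplified illustration in Python: the device attributes, risk tiers, and the decide_access helper are assumptions for the example rather than a reference to any particular product or framework.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Device:
    device_id: str
    owner: str
    device_type: str     # e.g., "laptop", "iot-camera", "smart-thermostat"
    managed: bool        # enrolled in central device management
    patched: bool        # running the latest approved firmware/OS
    authenticated: bool  # presented a valid certificate or token

def classify(device: Device) -> Risk:
    """Classify a device based on how much we know about and trust it."""
    if not device.managed or not device.patched:
        return Risk.HIGH
    if device.device_type.startswith("iot"):
        return Risk.MEDIUM  # constrained IoT devices get extra scrutiny
    return Risk.LOW

# Maximum resource sensitivity each risk tier may reach (illustrative policy).
MAX_SENSITIVITY = {Risk.LOW: Risk.HIGH, Risk.MEDIUM: Risk.MEDIUM, Risk.HIGH: Risk.LOW}

def decide_access(device: Device, resource_sensitivity: Risk) -> bool:
    """Least privilege: no authentication means no access, and riskier
    devices may only reach less sensitive resources."""
    if not device.authenticated:
        return False
    return resource_sensitivity.value <= MAX_SENSITIVITY[classify(device)].value

# An unmanaged IoT camera asking for a medium-sensitivity resource is denied.
camera = Device("cam-042", "facilities", "iot-camera",
                managed=False, patched=True, authenticated=True)
print(decide_access(camera, Risk.MEDIUM))  # False - classified as high risk
```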

The Challenges of Securing IoT Devices

While the principles of zero trust apply to all types of devices, securing IoT devices presents unique challenges. These include:

  1. Heterogeneity: IoT devices come in a wide variety of form factors, operating systems, and communication protocols, making it difficult to apply a consistent security approach.
  2. Resource constraints: Many IoT devices have limited processing power, memory, and battery life, making it challenging to implement traditional security controls like encryption and device management.
  3. Lack of visibility: IoT devices are often deployed in large numbers and in hard-to-reach locations, making it difficult to maintain visibility and control over the device ecosystem.
  4. Legacy devices: Many IoT devices have long lifespans and may not have been designed with security in mind, making it difficult to retrofit them with modern security controls.

To overcome these challenges, organizations must take a risk-based approach to IoT security, prioritizing high-risk devices and implementing compensating controls where necessary.

Best Practices for Zero Trust Device Security

Implementing a zero trust approach to device security requires a comprehensive, multi-layered strategy. Here are some best practices to consider:

  1. Inventory and classify devices: Maintain a complete, up-to-date inventory of all devices connected to the network, including IoT devices. Classify devices based on their level of risk and criticality, and prioritize security efforts accordingly.
  2. Implement strong authentication: Require all devices to authenticate before accessing network resources, using methods like certificates, tokens, or biometrics. Consider using device attestation to verify the integrity and security posture of devices before granting access.
  3. Enforce least privilege access: Implement granular access controls based on the principle of least privilege, allowing devices to access only the resources they need to perform their functions. Use network segmentation and microsegmentation to isolate high-risk devices and limit the potential impact of a breach.
  4. Monitor and assess devices: Continuously monitor device behavior and security posture using tools like endpoint detection and response (EDR) and security information and event management (SIEM). Regularly assess devices for vulnerabilities and compliance with security policies.
  5. Secure device configurations: Ensure that all devices are securely configured and hardened against attack. Use secure boot and firmware signing to prevent unauthorized modifications, and disable unused ports and services.
  6. Keep devices up to date: Regularly patch and update devices to address known vulnerabilities and security issues. Consider using automated patch management tools to ensure timely and consistent updates across the device ecosystem.

By implementing these best practices and continuously refining your device security posture, you can better protect your organization’s assets and data from the risks posed by connected devices.
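
In the same spirit, practices 4 through 6 lend themselves to simple automated checks. The sketch below compares a device's reported state against an approved baseline and emits findings that could feed a SIEM alert or a quarantine workflow. The report fields, the baseline format, and the version scheme are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    device_id: str
    model: str
    firmware: str    # version string reported by the device
    open_ports: set  # ports observed listening
    secure_boot: bool

# Approved baseline per device model (illustrative values only).
BASELINE = {
    "camera-x1": {"min_firmware": "2.4.1", "allowed_ports": {443}, "secure_boot": True},
}

def version_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def assess(report: DeviceReport) -> list:
    """Return compliance findings for a device; an empty list means compliant."""
    policy = BASELINE[report.model]
    findings = []
    if version_tuple(report.firmware) < version_tuple(policy["min_firmware"]):
        findings.append("outdated firmware")
    unexpected = report.open_ports - policy["allowed_ports"]
    if unexpected:
        findings.append(f"unexpected open ports: {sorted(unexpected)}")
    if policy["secure_boot"] and not report.secure_boot:
        findings.append("secure boot disabled")
    return findings

report = DeviceReport("cam-042", "camera-x1", "2.3.9", {443, 23}, secure_boot=True)
for finding in assess(report):
    print(f"{report.device_id}: {finding}")  # e.g., forward to a SIEM or trigger quarantine
```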

Conclusion

In a zero trust world, every device is a potential threat. By treating devices as untrusted and applying strong authentication, least privilege access, and continuous monitoring, organizations can minimize the risk of unauthorized access and data breaches. However, achieving effective device security in a zero trust model requires a commitment to understanding your device ecosystem, implementing risk-based controls, and staying up to date with the latest security best practices. It also requires a cultural shift, with every user and device owner taking responsibility for securing their endpoints.

As you continue your zero trust journey, make device security a top priority. Invest in the tools, processes, and training necessary to secure your endpoints, and regularly assess and refine your device security posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of application security in a zero trust model and share best practices for securing cloud and on-premises applications.

Until then, stay vigilant and keep your devices secure!

Microsegmentation: Implementing Zero Trust at the Network Level

Welcome back to our zero trust blog series! In our previous posts, we explored the importance of data security and identity and access management in a zero trust model. Today, we’re diving into another critical component of zero trust: network segmentation.

In a traditional perimeter-based security model, the network is often treated as a single, monolithic entity. Once a user or device is inside the network, they typically have broad access to resources and applications. However, in a zero trust world, this approach is no longer sufficient.

In this post, we’ll explore the role of network segmentation in a zero trust model, discuss the benefits of microsegmentation, and share best practices for implementing a zero trust network architecture.

The Zero Trust Approach to Network Segmentation

In a zero trust model, the network is no longer treated as a trusted entity. Instead, zero trust assumes that the network is always hostile and that threats can come from both inside and outside the organization.

To mitigate these risks, zero trust requires organizations to segment their networks into smaller, more manageable zones. This involves:

  1. Microsegmentation: Dividing the network into small, isolated segments based on application, data sensitivity, and user roles.
  2. Least privilege access: Enforcing granular access controls between segments, allowing only the minimum level of access necessary for users and devices to perform their functions.
  3. Continuous monitoring: Constantly monitoring network traffic and user behavior to detect and respond to potential threats in real-time.
  4. Software-defined perimeters: Using software-defined networking (SDN) and virtual private networks (VPNs) to create dynamic, adaptable network boundaries that can be easily modified as needed.

By applying these principles, organizations can create a more secure, resilient network architecture that minimizes the risk of lateral movement and data breaches.
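
One way to picture the resulting default-deny posture is as an explicit allow list between segments, with everything else blocked. The sketch below is a deliberately simplified Python model; the segment names and rules are hypothetical, and a real deployment would express the same intent in its SDN controller or firewall policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_segment: str
    dst_segment: str
    port: int

# Explicit allow list between segments; anything not listed is denied.
ALLOW = {
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
    Rule("admin", "app-tier", 22),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny across segment boundaries: traffic passes only if an
    explicit rule permits it. Intra-segment traffic is allowed here for
    simplicity, though stricter policies could constrain that too."""
    if src == dst:
        return True
    return Rule(src, dst, port) in ALLOW

print(is_allowed("web-tier", "app-tier", 8443))  # True, explicitly allowed
print(is_allowed("web-tier", "db-tier", 5432))   # False, no direct path to the database
```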

Benefits of Microsegmentation in a Zero Trust Model

Microsegmentation is a key enabler of zero trust at the network level. By dividing the network into small, isolated segments, organizations can realize several benefits:

  1. Reduced attack surface: Microsegmentation limits the potential damage of a breach by containing threats within a single segment, preventing lateral movement across the network.
  2. Granular access control: By enforcing least privilege access between segments, organizations can ensure that users and devices only have access to the resources they need, reducing the risk of unauthorized access.
  3. Improved visibility: Microsegmentation provides greater visibility into network traffic and user behavior, making it easier to detect and respond to potential threats.
  4. Simplified compliance: By isolating regulated data and applications into separate segments, organizations can more easily demonstrate compliance with industry standards and regulations.

Best Practices for Implementing Microsegmentation

Implementing microsegmentation in a zero trust model requires a comprehensive, multi-layered approach. Here are some best practices to consider:

  1. Map your network: Before implementing microsegmentation, thoroughly map your network to understand your applications, data flows, and user roles. Use tools like application discovery and dependency mapping (ADDM) to identify dependencies and prioritize segments.
  2. Define segmentation policies: Develop clear, granular segmentation policies based on your organization’s unique security and compliance requirements. Consider factors such as data sensitivity, user roles, and application criticality when defining segments.
  3. Use software-defined networking: Leverage SDN technologies to create dynamic, adaptable network segments that can be easily modified as needed. Use tools like Cisco ACI, VMware NSX, or OpenStack Neutron to implement SDN.
  4. Enforce least privilege access: Implement granular access controls between segments, allowing only the minimum level of access necessary for users and devices to perform their functions. Use network access control (NAC) and identity-based segmentation to enforce these policies.
  5. Monitor and log traffic: Implement robust monitoring and logging mechanisms to track network traffic and user behavior. Use network detection and response (NDR) tools to identify and investigate potential threats.
  6. Regularly test and refine: Regularly test your microsegmentation policies and controls to ensure they are effective and up to date. Conduct penetration testing and red team exercises to identify weaknesses and refine your segmentation strategy.

By implementing these best practices and continuously refining your microsegmentation posture, you can better protect your organization’s assets and data and build a more resilient, adaptable network architecture.
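
Practices 5 and 6 can be partly automated by replaying observed traffic against the segmentation policy and flagging anything the policy would not have allowed. The sketch below uses the same hypothetical allow-list model as the earlier example and an invented flow-log format; production checks would pull records from your flow collector or NDR tooling.

```python
# Minimal allow list and check (same hypothetical model as the earlier sketch).
ALLOW = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
    ("admin", "app-tier", 22),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    return src == dst or (src, dst, port) in ALLOW

# Flow records as (src_segment, dst_segment, port, bytes); the format is
# invented for illustration, not a real collector's schema.
observed_flows = [
    ("web-tier", "app-tier", 8443, 10_240),
    ("web-tier", "db-tier", 5432, 2_048),  # suspicious: web tier talking straight to the database
    ("admin", "app-tier", 22, 512),
]

def audit(flows):
    """Yield observed flows that violate the segmentation policy."""
    for src, dst, port, _ in flows:
        if not is_allowed(src, dst, port):
            yield (src, dst, port)

for violation in audit(observed_flows):
    print("policy violation:", violation)  # candidate for an alert or a red-team finding
```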

Conclusion

In a zero trust world, the network is no longer a trusted entity. By treating the network as always hostile and segmenting it into small, isolated zones, organizations can minimize the risk of lateral movement and data breaches. However, achieving effective microsegmentation in a zero trust model requires a commitment to understanding your network, defining clear policies, and investing in the right tools and processes. It also requires a cultural shift, with every user and device treated as a potential threat.

As you continue your zero trust journey, make network segmentation a top priority. Invest in the tools, processes, and training necessary to implement microsegmentation and regularly assess and refine your segmentation posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of device security in a zero trust model and share best practices for securing endpoints, IoT devices, and other connected systems.

Until then, stay vigilant and keep your network secure!

Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.

Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires an update

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.

Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

Tesla sued by shareholders —

Lawsuit: Musk’s xAI poached Tesla employees, Nvidia GPUs, and data.

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk’s diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and “xAI’s use of Tesla’s data to develop xAI’s own software/hardware, all without compensation to Tesla,” the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk’s equity stake in xAI to Tesla.

“Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not,” the lawsuit says.

Tesla and Musk have touted artificial intelligence “as the key to Tesla’s future” and described Tesla as an AI company, the lawsuit said. By founding xAI, Musk started a competing company “and then divert[ed] talent and resources from his corporation to the startup,” with the apparent approval of Tesla’s board, the lawsuit said.

After founding xAI in March 2023, “Musk hired away numerous key AI-focused employees from Tesla to xAI” and later diverted Nvidia GPUs from Tesla to X and xAI, the lawsuit said. The GPU diversion was recently confirmed by Nvidia emails that were revealed in a report by CNBC.

GPU diversion

Before founding xAI, “Musk stated that Tesla needed more Nvidia H100 GPUs than Nvidia had available for sale, a common problem in the AI industry… After Musk established xAI, however, he began personally directing Nvidia to redirect GPUs from Tesla to xAI and X,” the lawsuit said.

The investors suing Musk and Tesla don’t buy Musk’s justification. “For his part, Musk dubiously claimed in a post on X following the publication of the CNBC report that, contrary to his prior public representations about Tesla’s appetite for Nvidia hardware, ‘Tesla had no place to send the Nvidia chips to turn them on, so they would have just sat in a warehouse,'” the lawsuit said.

The complaint says that a pitch deck to potential investors in xAI said the new firm “intended to harvest data from X and Tesla to help xAI catch up to AI companies OpenAI and Anthropic. X would provide data from social media users, and Tesla would provide video data from its cars.”

“It is apparent that Musk has pitched prospective investors in xAI partly by exploiting information owned by Tesla,” the lawsuit also said. “On information and belief, Musk has already or intends to have xAI harvest data from Tesla without appropriately compensating Tesla even though X has already been provided xAI equity for its data contributions. None of this would be necessary if Musk properly created xAI as a subsidiary of Tesla.”

We contacted Tesla today and will update this article if the company provides a response to the lawsuit. The filing of the complaint was previously reported by TechCrunch.

Same court nullified Musk’s pay

The Delaware Court of Chancery is the same one that nullified Elon Musk’s 2018 pay package following a different investor lawsuit. Tesla shareholders yesterday re-approved the $44.9 billion pay plan, with 72 percent voting yes on the proposal, but the re-vote doesn’t end the legal battle over Musk’s pay. Tesla shareholders also approved a corporate move from Delaware to Texas, which was proposed by Musk and Tesla after the pay-plan court ruling.

That drama factors into the lawsuit filed yesterday. After the pay ruling that effectively reduced Musk’s stake in Tesla, “Musk accelerated his efforts to grow xAI” by “raising billions of dollars and poaching at least eleven employees from Tesla,” the new lawsuit said. The lawsuit also points to Musk’s threat “that he would only build an AI and robotics business within Tesla if Tesla gave him at least 25% voting power.”

The lawsuit accuses Tesla’s board of “permit[ting] Musk to create and grow xAI, hindering Tesla’s AI development efforts and diverting billions of dollars in value from Tesla to xAI.” The board’s failure to act is alleged to be “an obvious breach of its members’ unyielding fiduciary duty to protect the interests of Tesla and its stockholders.”

The Tesla board members’ close ties to Musk could play a key role in the case. In the pay-plan ruling, Delaware Court of Chancery Judge Kathaleen McCormick found that most of Tesla’s board members were beholden to Musk or had compromising conflicts. The lawsuit filed yesterday points to the court’s previous findings on those board members, including Kimbal Musk, Elon Musk’s brother; and James Murdoch, a longtime friend of Musk.

Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.

The OpenAI and Apple logos together.

OpenAI / Apple / Benj Edwards

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. The move paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as its first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology that powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless; the company has racked up a string of public controversies over the past month, including an accusation from actress Scarlett Johansson that it intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA that prevented ex-employees from publicly criticizing the company, and accusations of “psychological abuse” against OpenAI CEO Sam Altman leveled by a former member of the OpenAI board.

Meanwhile, critics concerned about the privacy implications of gathering data to train AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.

China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says

DISCLOSURE FUBAR —

Critical code-execution flaw was under exploitation 2 months before company disclosed it.

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on Fortigate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

Netherlands government officials wrote in Monday’s report:

Since the publication in February, the MIVD has continued to investigate the broader Chinese cyber espionage campaign. This revealed that the state actor gained access to at least 20,000 FortiGate systems worldwide within a few months in both 2022 and 2023 through the vulnerability with the identifier CVE-2022-42475. Furthermore, research shows that the state actor behind this campaign was already aware of this vulnerability in FortiGate systems at least two months before Fortinet announced the vulnerability. During this so-called ‘zero-day’ period, the actor alone infected 14,000 devices. Targets include dozens of (Western) governments, international organizations and a large number of companies within the defense industry.

The state actor installed malware at relevant targets at a later date. This gave the state actor permanent access to the systems. Even if a victim installs security updates from FortiGate, the state actor continues to have this access.

It is not known how many victims actually have malware installed. The Dutch intelligence services and the NCSC consider it likely that the state actor could potentially expand its access to hundreds of victims worldwide and carry out additional actions such as stealing data.

Even with the technical report on the COATHANGER malware, infections from the actor are difficult to identify and remove. The NCSC and the Dutch intelligence services therefore state that it is likely that the state actor still has access to systems of a significant number of victims.

Fortinet’s failure to disclose the vulnerability in a timely manner is particularly serious given its severity. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given that the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.

Adobe to update vague AI terms after users threaten to cancel subscriptions

Adobe has promised to update its terms of service to make it “abundantly clear” that the company will “never” train generative AI on creators’ content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms to access Adobe apps, disrupting access to creatives’ projects unless they immediately accepted them.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a “yearly subscription comes with a significant discount.”

On X (formerly Twitter), YouTuber Sasha Yanshin wrote that he canceled his Adobe license “after many years as a customer,” arguing that “no creator in their right mind can accept” Adobe’s terms that seemed to seize a “worldwide royalty-free license to reproduce, display, distribute” or “do whatever they want with any content” produced using their software.

“This is beyond insane,” Yanshin wrote on X. “You pay a huge monthly subscription, and they want to own your content and your entire business as well. Going to have to learn some new tools.”

Adobe’s design leader Scott Belsky replied, telling Yanshin that Adobe had clarified the update in a blog post and noting that Adobe’s terms for licensing content are typical for every cloud content company. But he acknowledged that those terms were written about 11 years ago and that the language could be plainer, writing that “modern terms of service in the current climate of customer concerns should evolve to address modern day concerns directly.”

Yanshin has so far not been encouraged by any of Adobe’s attempts to clarify its terms, writing that he gives “precisely zero f*cks about Adobe’s clarifications or blog posts.”

“You forced people to sign new Terms,” Yanshin told Belsky on X. “Legally, they are the only thing that matters.”

Another user in the thread using an anonymous X account also pushed back, writing, “Point to where it says in the terms that you won’t use our content for LLM or AI training? And state unequivocally that you do not have the right to use our work beyond storing it. That would go a long way.”

“Stay tuned,” Belsky wrote on X. “Unfortunately, it takes a process to update a TOS,” but “we are working on incorporating these clarifications.”

Belsky co-authored the blog this week announcing that Adobe’s terms would be updated by June 18 after a week of fielding feedback from users.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements,” Adobe’s blog said. “Nor were we considering any of those practices as part of the recent Terms of Use update. That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.”

Polarized light yields fresh insight into mysterious fast radio bursts

CHIME-ing in —

Scientists looked at how polarization changed direction to learn more about the bursts’ origins.

Artist’s rendition of how the angle of polarized light from an FRB changes as it journeys through space.

CHIME/Dunlap Institute

Astronomers have been puzzling over the origins of mysterious fast radio bursts (FRBs) since the first one was spotted in 2007. Researchers now have their first look at non-repeating FRBs, i.e., those that have only produced a single burst of light to date. The authors of a new paper published in The Astrophysical Journal looked specifically at the properties of polarized light emitted by these FRBs, yielding further insight into the origins of the phenomenon. The analysis supports the hypothesis that there are different origins for repeating and non-repeating FRBs.

“This is a new way to analyze the data we have on FRBs. Instead of just looking at how bright something is, we’re also looking at the angle of the light’s vibrating electromagnetic waves,” said co-author Ayush Pandhi, a graduate student at the University of Toronto’s Dunlap Institute for Astronomy and Astrophysics. “It gives you additional information about how and where that light is produced and what it has passed through on its journey to us over many millions of light years.”

As we’ve reported previously, FRBs involve a sudden blast of radio-frequency radiation that lasts just a few milliseconds. Astronomers have detected over a thousand of them to date; some come from sources that repeatedly emit FRBs, while others seem to burst once and go silent. You can produce this sort of sudden surge of energy by destroying something. But the existence of repeating sources suggests that at least some of them are produced by an object that survives the event. That has led to a focus on compact objects, like neutron stars and black holes—especially a class of neutron stars called magnetars—as likely sources.

There have also been many detected FRBs that don’t seem to repeat at all, suggesting that the conditions that produced them may destroy their source. That’s consistent with a blitzar—a bizarre astronomical event caused by the sudden collapse of an overly massive neutron star. The event is driven by an earlier merger of two neutron stars; this creates an unstable intermediate neutron star, which is kept from collapsing immediately by its rapid spin.

In a blitzar, the strong magnetic fields of the neutron star slow down its spin, causing it to collapse into a black hole several hours after the merger. That collapse suddenly deletes the dynamo powering the magnetic fields, releasing their energy in the form of a fast radio burst.

So the events we’ve been lumping together as FRBs could actually be the product of two different events. The repeating events occur in the environment around a magnetar. The one-shot events are triggered by the death of a highly magnetized neutron star within a few hours of its formation. Astronomers announced the detection of a possible blitzar potentially associated with an FRB last year.

Only about 3 percent of FRBs are of the repeating variety. Per Pandhi, this is the first analysis of the other 97 percent of non-repeating FRBs, using data from Canada’s CHIME instrument (Canadian Hydrogen Intensity Mapping Experiment). CHIME was built for other observations but is sensitive to many of the wavelengths that make up an FRB. Unlike most radio telescopes, which focus on small points in the sky, CHIME scans a huge area, allowing it to pick out FRBs even though they almost never happen in the same place twice.

Pandhi et al. decided to investigate how the direction of the light polarization from 128 non-repeating FRBs changes to learn more about the environments in which they originated. The team found that the polarized light from non-repeating FRBs changes both with time and with different colors of light. They concluded that this particular sample of non-repeating FRBs is either a separate population or consists of more evolved versions of these kinds of FRBs, belonging to a population that originated in less extreme environments with lower burst rates. That’s in keeping with the notion that non-repeating FRBs are quite different from their rarer repeating counterparts.

The Astrophysical Journal, 2024. DOI: 10.3847/1538-4357/ad40aa  (About DOIs).

Ransomware gangs are adopting “more brutal” tactics amid crackdowns

Today, people around the world will head to school, doctor’s appointments, and pharmacies, only to be told, “Sorry, our computer systems are down.” The frequent culprit is a cybercrime gang operating on the other side of the world, demanding payment for system access or the safe return of stolen data.

The ransomware epidemic shows no signs of slowing down in 2024—despite increasing police crackdowns—and experts worry that it could soon enter a more violent phase.

“We’re definitely not winning the fight against ransomware right now,” Allan Liska, a threat intelligence analyst at Recorded Future, tells WIRED.

Ransomware may be the defining cybercrime of the past decade, with criminals targeting a wide range of victims including hospitals, schools, and governments. The attackers encrypt critical data, bringing the victim’s operation to a grinding halt, and then extort them with the threat of releasing sensitive information. These attacks have had serious consequences. In 2021, the Colonial Pipeline Company was targeted by ransomware, forcing the company to pause fuel delivery and spurring US president Joe Biden to implement emergency measures to meet demand. But ransomware attacks are a daily event around the world—last week, ransomware hit hospitals in the UK—and many of them don’t make headlines.

“There is a visibility problem into incidents; most organizations don’t disclose or report them,” says Brett Callow, a threat analyst at Emsisoft. He adds that this makes it “hard to ascertain which way they are trending” on a month-by-month basis.

Researchers are forced to rely on information from public institutions that disclose attacks, or even criminals themselves. But “criminals are lying bastards,” says Liska.

By all indications, the problem is not going away and may even be accelerating in 2024. According to a recent report by security firm Mandiant, a Google subsidiary, 2023 was a record-breaking year for ransomware. Reporting indicates that victims paid more than $1 billion to gangs—and those are just the payments that we know about.
