Author name: Shannon Garcia

Microsegmentation: Implementing Zero Trust at the Network Level

Welcome back to our zero trust blog series! In our previous posts, we explored the importance of data security and identity and access management in a zero trust model. Today, we’re diving into another critical component of zero trust: network segmentation.

In a traditional perimeter-based security model, the network is often treated as a single, monolithic entity. Once a user or device is inside the network, they typically have broad access to resources and applications. However, in a zero trust world, this approach is no longer sufficient.

In this post, we’ll explore the role of network segmentation in a zero trust model, discuss the benefits of microsegmentation, and share best practices for implementing a zero trust network architecture.

The Zero Trust Approach to Network Segmentation

In a zero trust model, the network is no longer treated as a trusted entity. Instead, zero trust assumes that the network is always hostile and that threats can come from both inside and outside the organization.

To mitigate these risks, zero trust requires organizations to segment their networks into smaller, more manageable zones. This involves:

  1. Microsegmentation: Dividing the network into small, isolated segments based on application, data sensitivity, and user roles.
  2. Least privilege access: Enforcing granular access controls between segments, allowing only the minimum level of access necessary for users and devices to perform their functions.
  3. Continuous monitoring: Constantly monitoring network traffic and user behavior to detect and respond to potential threats in real-time.
  4. Software-defined perimeters: Using software-defined networking (SDN) and virtual private networks (VPNs) to create dynamic, adaptable network boundaries that can be easily modified as needed.

By applying these principles, organizations can create a more secure, resilient network architecture that minimizes the risk of lateral movement and data breaches.
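Putting the first two principles into practice usually starts with an explicit, default-deny rule set between segments. The following is a minimal, hypothetical sketch in Python (the segment names, ports, and policy object are illustrative assumptions, not tied to any vendor product):

```python
from dataclasses import dataclass, field


# A single allow-rule between two segments. frozen=True makes the
# dataclass hashable, so rules can live in a set.
@dataclass(frozen=True)
class Rule:
    src: str   # source segment
    dst: str   # destination segment
    port: int  # allowed destination port


@dataclass
class SegmentationPolicy:
    """Default-deny policy: traffic passes only if an explicit rule matches."""
    rules: set = field(default_factory=set)

    def allow(self, src: str, dst: str, port: int) -> None:
        self.rules.add(Rule(src, dst, port))

    def is_allowed(self, src: str, dst: str, port: int) -> bool:
        return Rule(src, dst, port) in self.rules


policy = SegmentationPolicy()
policy.allow("web-tier", "app-tier", 8443)  # web servers may call the app API
policy.allow("app-tier", "db-tier", 5432)   # app servers may query the database

print(policy.is_allowed("web-tier", "app-tier", 8443))  # True
print(policy.is_allowed("web-tier", "db-tier", 5432))   # False: no lateral path
```

Because anything not listed is denied, adding a new service means deliberately adding a rule, which is exactly the least-privilege posture described above.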

Benefits of Microsegmentation in a Zero Trust Model

Microsegmentation is a key enabler of zero trust at the network level. By dividing the network into small, isolated segments, organizations can realize several benefits:

  1. Reduced attack surface: Microsegmentation limits the potential damage of a breach by containing threats within a single segment, preventing lateral movement across the network.
  2. Granular access control: By enforcing least privilege access between segments, organizations can ensure that users and devices only have access to the resources they need, reducing the risk of unauthorized access.
  3. Improved visibility: Microsegmentation provides greater visibility into network traffic and user behavior, making it easier to detect and respond to potential threats.
  4. Simplified compliance: By isolating regulated data and applications into separate segments, organizations can more easily demonstrate compliance with industry standards and regulations.

Best Practices for Implementing Microsegmentation

Implementing microsegmentation in a zero trust model requires a comprehensive, multi-layered approach. Here are some best practices to consider:

  1. Map your network: Before implementing microsegmentation, thoroughly map your network to understand your applications, data flows, and user roles. Use tools like application discovery and dependency mapping (ADDM) to identify dependencies and prioritize segments.
  2. Define segmentation policies: Develop clear, granular segmentation policies based on your organization’s unique security and compliance requirements. Consider factors such as data sensitivity, user roles, and application criticality when defining segments.
  3. Use software-defined networking: Leverage SDN technologies to create dynamic, adaptable network segments that can be easily modified as needed. Use tools like Cisco ACI, VMware NSX, or OpenStack Neutron to implement SDN.
  4. Enforce least privilege access: Implement granular access controls between segments, allowing only the minimum level of access necessary for users and devices to perform their functions. Use network access control (NAC) and identity-based segmentation to enforce these policies.
  5. Monitor and log traffic: Implement robust monitoring and logging mechanisms to track network traffic and user behavior. Use network detection and response (NDR) tools to identify and investigate potential threats.
  6. Test and refine regularly: Periodically test your microsegmentation policies and controls to ensure they remain effective and up to date. Conduct penetration testing and red team exercises to identify weaknesses and refine your segmentation strategy.
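The testing step in the list above lends itself to automation. One hedged sketch, again in Python with invented segment names: maintain a regression suite of flows that must be allowed and flows that must be denied, and check each against the enforcement point. The `is_allowed` function here is a stand-in; a real check would query your firewall, NAC, or SDN controller.

```python
# Hypothetical policy regression test; segment names, ports, and the
# is_allowed() stand-in are illustrative assumptions.

ALLOWED_RULES = {
    ("web-tier", "app-tier", 8443),  # web servers call the app API
    ("app-tier", "db-tier", 5432),   # app servers query the database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    # Stand-in for querying the real enforcement point
    # (firewall rules, NAC policy, or an SDN controller API).
    return (src, dst, port) in ALLOWED_RULES

# Flows that must never work; add one entry per lateral-movement
# path you want to guard against.
EXPECTED_DENIED = [
    ("web-tier", "db-tier", 5432),  # no direct web-to-database access
    ("db-tier", "web-tier", 22),    # databases never SSH outbound
]

def test_policy() -> list:
    """Return a list of policy violations (empty means the policy holds)."""
    failures = []
    for flow in sorted(ALLOWED_RULES):
        if not is_allowed(*flow):
            failures.append(("expected allow", flow))
    for flow in EXPECTED_DENIED:
        if is_allowed(*flow):
            failures.append(("expected deny", flow))
    return failures

print(test_policy())  # [] when the deployed policy matches expectations
```

Running a suite like this after every policy change (and in CI, if your segmentation is defined as code) turns segmentation drift into a test failure rather than a breach finding.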

By implementing these best practices and continuously refining your microsegmentation posture, you can better protect your organization’s assets and data and build a more resilient, adaptable network architecture.

Conclusion

In a zero trust world, the network is no longer a trusted entity. By treating the network as always hostile and segmenting it into small, isolated zones, organizations can minimize the risk of lateral movement and data breaches. However, achieving effective microsegmentation in a zero trust model requires a commitment to understanding your network, defining clear policies, and investing in the right tools and processes. It also requires a cultural shift, with every user and device treated as a potential threat.

As you continue your zero trust journey, make network segmentation a top priority. Invest in the tools, processes, and training necessary to implement microsegmentation and regularly assess and refine your segmentation posture to keep pace with evolving threats and business needs.

In the next post, we’ll explore the role of device security in a zero trust model and share best practices for securing endpoints, IoT devices, and other connected systems.

Until then, stay vigilant and keep your network secure!


Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.

Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s policy still requires update

In a blog, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.


Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

Tesla sued by shareholders —

Lawsuit: Musk’s xAI poached Tesla employees, Nvidia GPUs, and data.

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk’s diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and “xAI’s use of Tesla’s data to develop xAI’s own software/hardware, all without compensation to Tesla,” the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk’s equity stake in xAI to Tesla.

“Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not,” the lawsuit says.

Tesla and Musk have touted artificial intelligence “as the key to Tesla’s future” and described Tesla as an AI company, the lawsuit said. By founding xAI, Musk started a competing company “and then divert[ed] talent and resources from his corporation to the startup,” with the apparent approval of Tesla’s board, the lawsuit said.

After founding xAI in March 2023, “Musk hired away numerous key AI-focused employees from Tesla to xAI” and later diverted Nvidia GPUs from Tesla to X and xAI, the lawsuit said. The GPU diversion was recently confirmed by Nvidia emails that were revealed in a report by CNBC.

GPU diversion

Before founding xAI, “Musk stated that Tesla needed more Nvidia H100 GPUs than Nvidia had available for sale, a common problem in the AI industry… After Musk established xAI, however, he began personally directing Nvidia to redirect GPUs from Tesla to xAI and X,” the lawsuit said.

The investors suing Musk and Tesla don’t buy Musk’s justification. “For his part, Musk dubiously claimed in a post on X following the publication of the CNBC report that, contrary to his prior public representations about Tesla’s appetite for Nvidia hardware, ‘Tesla had no place to send the Nvidia chips to turn them on, so they would have just sat in a warehouse,'” the lawsuit said.

The complaint says that a pitch deck to potential investors in xAI said the new firm “intended to harvest data from X and Tesla to help xAI catch up to AI companies OpenAI and Anthropic. X would provide data from social media users, and Tesla would provide video data from its cars.”

“It is apparent that Musk has pitched prospective investors in xAI partly by exploiting information owned by Tesla,” the lawsuit also said. “On information and belief, Musk has already or intends to have xAI harvest data from Tesla without appropriately compensating Tesla even though X has already been provided xAI equity for its data contributions. None of this would be necessary if Musk properly created xAI as a subsidiary of Tesla.”

We contacted Tesla today and will update this article if the company provides a response to the lawsuit. The filing of the complaint was previously reported by TechCrunch.

Same court nullified Musk’s pay

The Delaware Court of Chancery is the same one that nullified Elon Musk’s 2018 pay package following a different investor lawsuit. Tesla shareholders yesterday re-approved the $44.9 billion pay plan, with 72 percent voting yes on the proposal, but the re-vote doesn’t end the legal battle over Musk’s pay. Tesla shareholders also approved a corporate move from Delaware to Texas, which was proposed by Musk and Tesla after the pay-plan court ruling.

That drama factors into the lawsuit filed yesterday. After the pay ruling that effectively reduced Musk’s stake in Tesla, “Musk accelerated his efforts to grow xAI” by “raising billions of dollars and poaching at least eleven employees from Tesla,” the new lawsuit said. The lawsuit also points to Musk’s threat “that he would only build an AI and robotics business within Tesla if Tesla gave him at least 25% voting power.”

The lawsuit accuses Tesla’s board of “permit[ting] Musk to create and grow xAI, hindering Tesla’s AI development efforts and diverting billions of dollars in value from Tesla to xAI.” The board’s failure to act is alleged to be “an obvious breach of its members’ unyielding fiduciary duty to protect the interests of Tesla and its stockholders.”

The Tesla board members’ close ties to Musk could play a key role in the case. In the pay-plan ruling, Delaware Court of Chancery Judge Kathaleen McCormick found that most of Tesla’s board members were beholden to Musk or had compromising conflicts. The lawsuit filed yesterday points to the court’s previous findings on those board members, including Kimbal Musk, Elon Musk’s brother; and James Murdoch, a longtime friend of Musk.


Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices as compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as Apple’s first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology that powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless, racking up a string of public controversies over the past month that include an accusation from actress Scarlett Johansson that the company intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA for ex-employees that prevented public criticism, and an accusation of “psychological abuse” against OpenAI CEO Sam Altman made by a former member of the OpenAI board.

Meanwhile, critics of privacy issues related to gathering data for training AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—also worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.


China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says

DISCLOSURE FUBAR —

Critical code-execution flaw was under exploitation 2 months before company disclosed it.

Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. Fortinet, a maker of network security software, silently fixed the vulnerability on November 28, 2022, but did not mention the threat until December 12 of that year, when the company said it had become aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned that a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on FortiGate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

Netherlands government officials wrote in Monday’s report:

Since the publication in February, the MIVD has continued to investigate the broader Chinese cyber espionage campaign. This revealed that the state actor gained access to at least 20,000 FortiGate systems worldwide within a few months in both 2022 and 2023 through the vulnerability with the identifier CVE-2022-42475. Furthermore, research shows that the state actor behind this campaign was already aware of this vulnerability in FortiGate systems at least two months before Fortinet announced the vulnerability. During this so-called ‘zero-day’ period, the actor alone infected 14,000 devices. Targets include dozens of (Western) governments, international organizations and a large number of companies within the defense industry.

The state actor installed malware at relevant targets at a later date. This gave the state actor permanent access to the systems. Even if a victim installs security updates from FortiGate, the state actor continues to have this access.

It is not known how many victims actually have malware installed. The Dutch intelligence services and the NCSC consider it likely that the state actor could potentially expand its access to hundreds of victims worldwide and carry out additional actions such as stealing data.

Even with the technical report on the COATHANGER malware, infections from the actor are difficult to identify and remove. The NCSC and the Dutch intelligence services therefore state that it is likely that the state actor still has access to systems of a significant number of victims.

Fortinet’s failure to disclose promptly is particularly serious given the severity of the vulnerability. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given that the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.


Adobe to update vague AI terms after users threaten to cancel subscriptions

Adobe has promised to update its terms of service to make it “abundantly clear” that the company will “never” train generative AI on creators’ content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to these terms to access Adobe apps, disrupting access to creatives’ projects unless they immediately accepted them.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a “yearly subscription comes with a significant discount.”

On X (formerly Twitter), YouTuber Sasha Yanshin wrote that he canceled his Adobe license “after many years as a customer,” arguing that “no creator in their right mind can accept” Adobe’s terms that seemed to seize a “worldwide royalty-free license to reproduce, display, distribute” or “do whatever they want with any content” produced using their software.

“This is beyond insane,” Yanshin wrote on X. “You pay a huge monthly subscription, and they want to own your content and your entire business as well. Going to have to learn some new tools.”

Adobe’s design leader Scott Belsky replied, telling Yanshin that Adobe had clarified the update in a blog post and noting that Adobe’s terms for licensing content are typical for every cloud content company. But he acknowledged that those terms were written about 11 years ago and that the language could be plainer, writing that “modern terms of service in the current climate of customer concerns should evolve to address modern day concerns directly.”

Yanshin has so far not been encouraged by any of Adobe’s attempts to clarify its terms, writing that he gives “precisely zero f*cks about Adobe’s clarifications or blog posts.”

“You forced people to sign new Terms,” Yanshin told Belsky on X. “Legally, they are the only thing that matters.”

Another user in the thread using an anonymous X account also pushed back, writing, “Point to where it says in the terms that you won’t use our content for LLM or AI training? And state unequivocally that you do not have the right to use our work beyond storing it. That would go a long way.”

“Stay tuned,” Belsky wrote on X. “Unfortunately, it takes a process to update a TOS,” but “we are working on incorporating these clarifications.”

Belsky co-authored the blog this week announcing that Adobe’s terms would be updated by June 18 after a week of fielding feedback from users.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements,” Adobe’s blog said. “Nor were we considering any of those practices as part of the recent Terms of Use update. That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.”


Polarized light yields fresh insight into mysterious fast radio bursts

CHIME-ing in —

Scientists looked at how polarization changed direction to learn more about origins.

Artist’s rendition of how the angle of polarized light from a fast radio burst changes as it journeys through space. Credit: CHIME/Dunlap Institute

Astronomers have been puzzling over the origins of mysterious fast radio bursts (FRBs) since the first one was spotted in 2007. Researchers now have their first look at non-repeating FRBs, i.e., those that have only produced a single burst of light to date. The authors of a new paper published in The Astrophysical Journal looked specifically at the properties of polarized light emitted by these FRBs, yielding further insight into the origins of the phenomenon. The analysis supports the hypothesis that there are different origins for repeating and non-repeating FRBs.

“This is a new way to analyze the data we have on FRBs. Instead of just looking at how bright something is, we’re also looking at the angle of the light’s vibrating electromagnetic waves,” said co-author Ayush Pandhi, a graduate student at the University of Toronto’s Dunlap Institute for Astronomy and Astrophysics. “It gives you additional information about how and where that light is produced and what it has passed through on its journey to us over many millions of light years.”

As we’ve reported previously, FRBs involve a sudden blast of radio-frequency radiation that lasts just a few milliseconds. Astronomers have detected over a thousand of them to date; some come from sources that repeatedly emit FRBs, while others seem to burst once and go silent. You can produce this sort of sudden surge of energy by destroying something. But the existence of repeating sources suggests that at least some of them are produced by an object that survives the event. That has led to a focus on compact objects, like neutron stars and black holes—especially a class of neutron stars called magnetars—as likely sources.

There have also been many detected FRBs that don’t seem to repeat at all, suggesting that the conditions that produced them may destroy their source. That’s consistent with a blitzar—a bizarre astronomical event caused by the sudden collapse of an overly massive neutron star. The event is driven by an earlier merger of two neutron stars; this creates an unstable intermediate neutron star, which is kept from collapsing immediately by its rapid spin.

In a blitzar, the strong magnetic fields of the neutron star slow down its spin, causing it to collapse into a black hole several hours after the merger. That collapse suddenly deletes the dynamo powering the magnetic fields, releasing their energy in the form of a fast radio burst.

So the events we’ve been lumping together as FRBs could actually be the product of two different events. The repeating events occur in the environment around a magnetar. The one-shot events are triggered by the death of a highly magnetized neutron star within a few hours of its formation. Astronomers announced the detection of a possible blitzar potentially associated with an FRB last year.

Only about 3 percent of FRBs are of the repeating variety. Per Pandhi, this is the first analysis of the other 97 percent of non-repeating FRBs, using data from Canada’s CHIME instrument (Canadian Hydrogen Intensity Mapping Experiment). CHIME was built for other observations but is sensitive to many of the wavelengths that make up an FRB. Unlike most radio telescopes, which focus on small points in the sky, CHIME scans a huge area, allowing it to pick out FRBs even though they almost never happen in the same place twice.

Pandhi et al. decided to investigate how the direction of the light polarization from 128 non-repeating FRBs changes to learn more about the environments in which they originated. The team found that the polarized light from non-repeating FRBs changes both with time and with different colors of light. They concluded that this sample of non-repeating FRBs is either a separate population, or a more evolved version of repeating FRBs, originating in less extreme environments with lower burst rates. That’s in keeping with the notion that non-repeating FRBs are quite different from their rarer repeating counterparts.

The Astrophysical Journal, 2024. DOI: 10.3847/1538-4357/ad40aa  (About DOIs).



Ransomware gangs are adopting “more brutal” tactics amid crackdowns


Today, people around the world will head to school, doctor’s appointments, and pharmacies, only to be told, “Sorry, our computer systems are down.” The frequent culprit is a cybercrime gang operating on the other side of the world, demanding payment for system access or the safe return of stolen data.

The ransomware epidemic shows no signs of slowing down in 2024—despite increasing police crackdowns—and experts worry that it could soon enter a more violent phase.

“We’re definitely not winning the fight against ransomware right now,” Allan Liska, a threat intelligence analyst at Recorded Future, tells WIRED.

Ransomware may be the defining cybercrime of the past decade, with criminals targeting a wide range of victims including hospitals, schools, and governments. The attackers encrypt critical data, bringing the victim’s operation to a grinding halt, and then extort them with the threat of releasing sensitive information. These attacks have had serious consequences. In 2021, the Colonial Pipeline Company was targeted by ransomware, forcing the company to pause fuel delivery and spurring US president Joe Biden to implement emergency measures to meet demand. But ransomware attacks are a daily event around the world—last week, ransomware hit hospitals in the UK—and many of them don’t make headlines.

“There is a visibility problem into incidents; most organizations don’t disclose or report them,” says Brett Callow, a threat analyst at Emsisoft. He adds that this makes it “hard to ascertain which way they are trending” on a month-by-month basis.

Researchers are forced to rely on information from public institutions that disclose attacks, or even criminals themselves. But “criminals are lying bastards,” says Liska.

By all indications, the problem is not going away and may even be accelerating in 2024. According to a recent report by security firm Mandiant, a Google subsidiary, 2023 was a record-breaking year for ransomware. Reporting indicates that victims paid more than $1 billion to gangs—and those are just the payments that we know about.



Reflections on Snowflake Summit 2024

This past week, I had the opportunity to attend Snowflake Summit 2024 in San Francisco. As an analyst, I was treated to an exclusive pre-day of content from the Snowflake team, which proved both enlightening and thought-provoking.

The event kicked off with Snowflake addressing the recent “hack” reported in the news. They assured us that their collaboration with CrowdStrike and other partners has revealed no signs of compromise within Snowflake itself. The evidence points to compromised customer credentials, and the investigation remains ongoing.

One of the highlights was the introduction of new products, including the Trust Center. This innovative tool assesses the security of your Snowflake data estate, utilizing AI and ML to identify potential issues such as privacy mismatches, user access inconsistencies, and poor data classification. However, there was no mention of whether the Trust Center provided similar insights into user accounts—a critical consideration given the hack discussion. I managed to get clarity on this from the Trust Center’s product manager later in the day.

This experience underscored a recurring issue I see in the vendor space: a distinct lack of cohesive storytelling. Vendors often present without any sense of overarching narrative. This fragmented approach can feel like a song composed entirely of solos. Individually, each instrument may showcase talent, but without the common purpose of harmony, no audience will ever sing along. It’s chaotic and discordant—much like many of the tech presentations I sat in on.

We need change. We don’t just need better messaging; we need better stories. We need to ask ourselves: Who are we? Why do we exist? Where are we going? What are the stops along the way? What sights will we see? Where do we board? Improving how we communicate is essential.

On a positive note, Snowflake showcased all the components necessary to build a compelling data story for your enterprise. Their partners can fill in any gaps in your business narrative. The only missing piece is a cohesive purpose, which you can personalize to meet your specific needs.

The good news is that there are people like me here to help. I aim to demystify what is presented, help you create a strategy, and build a plan to achieve your goals. Together, we can create your story, one chapter at a time.



Some company heads hoped return-to-office mandates would make people quit, survey says

HR study —

1,504 workers, including 504 HR managers, were questioned.

RTO mandates can boost workers’ professional networks, but in-office employees may also spend more time socializing than remote ones.

A new survey suggests that some US companies implemented return-to-office (RTO) policies in the hopes of getting workers to quit. And despite the belief that such policies could boost productivity compared to letting employees work from home, the survey from HR software provider BambooHR points to remote and in-office employees spending an equal amount of time working.

BambooHR surveyed 1,504 full-time US employees, including 504 human resources (HR) workers who are a manager or higher, from March 9 to March 22. According to the firm, the sample group used for its report “The New Surveillance Era: Visibility Beats Productivity for RTO & Remote” is equally split across genders and includes “a spread of age groups, race groups, and geographies.” Method Research, the research arm of technology PR and marketing firm Method, prepared the survey, and data collection firm Rep Data distributed it.

Trying to make people quit

Among those surveyed, 52 percent said they prefer working remotely compared to 39 percent who prefer working in an office.

A generation-based breakdown of respondents who prefer remote work. BambooHR’s report didn’t specify how many respondents it surveyed from each category.

Despite an apparently large interest in remote work, numerous companies made workers return to the office after COVID-19 pandemic restrictions were lifted. The report suggests that in at least some cases, this was done to get workers to quit:

Nearly two in five (37 percent) managers, directors, and executives believe their organization enacted layoffs in the last year because fewer employees than they expected quit during their RTO. And their beliefs are well-founded: One in four (25 percent) VP and C-suite executives and one in five (18 percent) HR pros admit they hoped for some voluntary turnover during an RTO.

It’s hard to get a firm understanding of the effectiveness of RTO policies, as 22 percent of HR professionals surveyed said that their company has no metrics for measuring a successful RTO. The report points to a “disconnect between stated goals for RTO and actually measuring the success of those goals.”

The report also found that 28 percent of remote workers fear they will be laid off before those working in the office. While BambooHR’s report doesn’t comment on this, some firms have discouraged employees from working remotely. Dell, for example, told remote workers that they can’t be promoted.

“By using RTO mandates as a workforce reduction tactic, companies are losing talent and morale among their employees,” BambooHR’s report says. The report notes that 45 percent of people surveyed whose companies have RTO policies said they lost valued workers. The finding is similar to that of a May study of Apple, Microsoft, and SpaceX that suggested that RTO mandates drove senior talent away.

In BambooHR’s survey, 28 percent said they would consider leaving their jobs if their employer enacted an RTO mandate.

Productivity

A frequently cited reason for in-office mandates is to drive teamwork, collaboration, and productivity. BambooHR’s data, however, doesn’t support the idea of RTO mandates driving productivity.

According to the report, regardless of whether they’re working in their home or in an office, employees work for 76 percent of a 9-to-5 shift. The report adds:

When it comes to who’s more productive overall, in-office workers spend around one hour more socializing than their remote counterparts, while remote workers spend that time on work-related tasks and responsibilities.

Despite this, 32 percent of managers said that one of the main goals of their firm implementing an in-office policy was to track employee working habits, with some companies tracking VPN usage and company badge swipes to ensure employees are coming into the office as expected.

RTO works for some

Although the majority of people surveyed prefer working from home, the survey also highlighted some perceived benefits of working in the office. For example, 48 percent of the people surveyed said “their work results have improved” since returning to the office, per the report. And 58 percent said they have a “stronger professional network” since going back, BambooHR reported.

Preferences for working from home or in an office can vary with factors like age. This points to the benefits of building RTO strategies around worker feedback and needs.

“The mental and emotional burdens workers face today are real, and the companies who seek employee feedback with the intent to listen and improve are the ones who will win employee loyalty and ultimately customer satisfaction,” Anita Grantham, head of HR at BambooHR, said in a statement.



NASA is commissioning 10 studies on Mars Sample Return—most are commercial

Alternatives —

SpaceX will show NASA how Starship could one day return rock samples from Mars.

An artist’s concept of a Mars Ascent Vehicle orbiting the red planet.

NASA announced Friday that it will award contracts to seven companies, including SpaceX and Blue Origin, to study how to transport rock samples from Mars more cheaply back to Earth.

The space agency put out a call to industry in April to propose ideas on how to return the Mars rocks to Earth for less than $11 billion and before 2040, the cost and schedule for NASA’s existing plan for Mars Sample Return (MSR). A NASA spokesperson told Ars the agency received 48 responses to the solicitation and selected seven companies to conduct more detailed studies.

Each company will receive up to $1.5 million for their 90-day studies. Five of the companies chosen by NASA are among the agency’s roster of large contractors, and their inclusion in the study contracts is no surprise. Two other winners are smaller businesses.

Mars Sample Return is the highest priority for NASA’s planetary science division. The Perseverance rover currently on Mars is gathering several dozen specimens of rock powder, soil, and Martian air in cigar-shaped titanium tubes for eventual return to Earth.

“Mars Sample Return will be one of the most complex missions NASA has undertaken, and it is critical that we carry it out more quickly, with less risk, and at a lower cost,” said Bill Nelson, NASA’s administrator. “I’m excited to see the vision that these companies, centers and partners present as we look for fresh, exciting, and innovative ideas to uncover great cosmic secrets from the red planet.”

Who’s in?

Lockheed Martin, the only company that has built a spacecraft to successfully land on Mars, will perform “rapid mission design studies for Mars Sample Return,” according to NASA. Northrop Grumman also won a contract for its proposal: “High TRL (Technology Readiness Level) MAV (Mars Ascent Vehicle) Propulsion Trades and Concept Design for MSR Rapid Mission Design.”

These two companies were partners in developing the solid-fueled Mars Ascent Vehicle for NASA’s existing Mars Sample Return mission. The MAV is the rocket that will propel the capsule containing the rock specimens from the surface of Mars back into space to begin the months-long journey back to Earth. The involvement of Lockheed Martin and Northrop Grumman in NASA’s Mars program, along with the study scope suggested in Northrop’s proposal, suggests they will propose applying existing capabilities to the problem of Mars Sample Return.

Aerojet Rocketdyne, best known as a rocket propulsion supplier, will study a high-performance liquid-fueled Mars Ascent Vehicle using what it says are “highly reliable and mature propulsion technologies, to improve program affordability and schedule.”

SpaceX, a company with a long-term vision for Mars, also won NASA funding for a study contract. Its study proposal was titled “Enabling Mars Sample Return with Starship.” SpaceX is already designing the privately funded Starship rocket with Mars missions in mind, and Elon Musk, the company’s founder, has predicted Starship will land on Mars by the end of the decade.

Musk has famously missed schedule predictions before with Starship, and a landing on the red planet before the end of the 2020s still seems unlikely. However, the giant rocket could enable delivery to Mars and the eventual return of dozens of tons of cargo. A successful test flight of Starship this week proved SpaceX is making progress toward this goal. Still, there’s a long way to go.

Blue Origin, Jeff Bezos’ space company, will also receive funding for a study it calls “Leveraging Artemis for Mars Sample Return.”

SpaceX and Blue Origin each have multibillion-dollar contracts with NASA to develop Starship and the Blue Moon lander as human-rated spacecraft to ferry astronauts to and from the lunar surface as part of the Artemis program.

Two other small businesses, Quantum Space and Whittinghill Aerospace, will also conduct studies for NASA.

Quantum, which describes itself as a space infrastructure company, was founded in 2021 by entrepreneur Kam Ghaffarian, who also founded Intuitive Machines and Axiom Space. No details are known about the scope of its study, known as the “Quantum Anchor Leg Mars Sample Return Study.” Perhaps the “anchor leg” refers to the final stage of returning samples to Earth, like the anchor in a relay race.

Whittinghill Aerospace, based in California, has just a handful of employees. It will perform a rapid design study for a single-stage Mars Ascent Vehicle, NASA said.

Missing from the list of contract winners was Boeing, which has pushed the use of NASA’s super-expensive Space Launch System to do the Mars Sample Return mission with a single launch. Boeing, of course, builds most of the SLS rocket. Most other sample return concepts require multiple launches.

Alongside the seven industry contracts, the Jet Propulsion Laboratory (JPL) and the Applied Physics Laboratory (APL) at Johns Hopkins University will also produce studies on how to complete the Mars Sample Return mission more affordably.

JPL is the lead center in charge of managing NASA’s existing concept for Mars Sample Return in partnership with the European Space Agency. However, cost growth and delays prompted NASA officials to decide in April to take a different approach.

Nicola Fox, head of NASA’s science directorate, said in April that she hopes “out of the box” concepts will allow the agency to get the samples back to Earth in the 2030s rather than in 2040 or later. “This is definitely a very ambitious goal,” she said. “We’re going to need to go after some very innovative new possibilities for a design and certainly leave no stone unturned.”

NASA will use the results of these 10 studies to craft a new approach for Mars Sample Return later this year. Most likely, the architecture NASA ultimately chooses will mix and match various elements from industry, NASA centers, and the European Space Agency, which remains a committed partner on Mars Sample Return with the Earth Return Orbiter.



New Steam Deck competitor lets you easily swap in more RAM, storage

Plug and play —

Adata embraces the CAMM2 memory standard for its intriguing handheld prototype.

A slide-up screen is just one of the novel features for Adata’s Steam Deck clone.

For PC gamers used to the modular design of a desktop rig, there are pros and cons to the all-in-one, pre-fab design of the Steam Deck (and its many subsequent imitators in the growing handheld gaming PC market). On the one hand, you don’t have to worry about pricing out individual parts and making sure they all work together. On the other hand, the only way to upgrade one of these devices is to essentially throw out the old unit and replace the entire thing, console-style.

Taiwanese computer storage-maker Adata is looking to straddle these two extremes. Lilliputing reports on Adata’s XPG Nia prototype, which was shown off at the Computex trade show. The unit is the first gaming handheld so far to embrace the CAMM (Compression Attached Memory Module) standard that allows for easily replaceable and upgradeable memory modules, as well as a number of other mod-friendly features.

CAMM on down

Samsung shared this rendering of a CAMM ahead of the publishing of the CAMM2 standard in September.

If you’ve read our previous coverage of the emerging CAMM standard, you know how excited we are about the ultra-thin modules that can simply be screwed into place on a laptop or portable motherboard. That offers a viable replacement for the now-standard soldered LPDDR RAM, which saves space but is incredibly difficult to repair or replace.

The CAMM standard brings the same easy-to-swap design as the older SO-DIMM RAM stick standard but with a smaller footprint, thermal design, and power usage specially made for portable devices. Reports suggest the XPG Nia will use the low-power LPCAMM2 version of these RAM modules, which will be easily accessible by lifting up the kickstand on the back of the XPG Nia. Alongside a standard M.2 2230 slot for adding more storage, that should make the new handheld much easier to upgrade than the likes of the Steam Deck, which requires some serious hacking to push above the standard spec.

A Dell rendering depicting the size differences between SODIMM and CAMM.

The only real downside to CAMM2 memory modules, at the moment, is the price; a recent CAMM2 offering from Crucial runs $175 for 32GB or $330 for 64GB. That’s significantly more than similar, bulkier SO-DIMM modules, but those prices should come down as more device makers and RAM manufacturers start supporting the standard.
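For a rough sense of the premium, the per-gigabyte math on those Crucial prices works out as follows (a quick sketch using only the two prices cited above):

```python
# Per-GB cost of Crucial's CAMM2 modules at the prices cited above.
camm2_prices = {32: 175, 64: 330}  # capacity in GB -> price in USD

for gb, usd in camm2_prices.items():
    print(f"{gb} GB CAMM2: ${usd / gb:.2f}/GB")
# 32 GB -> $5.47/GB, 64 GB -> $5.16/GB: the larger module is slightly
# cheaper per gigabyte, but both sit well above typical SO-DIMM pricing.
```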

A “circular computing device”?

Thus far, the XRG Nia’s modular design doesn’t seem to extend to the planned AMD Phoenix APU or Ryzen Z1 Extreme processor. That said, users will reportedly be able to remove the entire motherboard from the portable case, which can then be inserted into a smaller, screen-free enclosure for a potential second life as an Arudino/Raspberry Pi-like mini-PC.

Adata says it also plans to release the system’s 3D design files and pinout details publicly, letting modders and third-party manufacturers design their own cases and accessories. It’s all part of what Adata is calling a “circular computing design” that’s part of its “sustainable future” initiative.


There are a few other features that set the XPG Nia apart from the waves of me-too Steam Deck clones. In addition to the rear kickstand, the entire screen enclosure slides up on a sort of pivot, providing a shallower viewing angle that requires less neck tilting when holding the device in front of your chest. Adata is also promising a front-facing camera that can be used for streaming as well as eye-tracking, which could theoretically power some fancy foveated rendering tricks for extra graphics horsepower.

It’s way too early to know how all of these features will shake out in the move from prototype to final device; reports suggest the company is aiming for a release sometime in 2025. Still, it’s nice to see a company trying some new things in the increasingly crowded space of handheld PC gaming devices that Valve has unleashed. If Adata can deliver the portable as an actual consumer product at its target of 1.5 pounds and $600, it could be one to keep an eye on.
