AWS


OpenAI signs massive AI compute deal with Amazon

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

Wall Street apparently liked the deal, because Amazon shares hit an all-time high on Monday morning. Meanwhile, shares of longtime OpenAI investor and partner Microsoft briefly dipped following the announcement.

Massive AI compute requirements

It’s no secret that running generative AI models for hundreds of millions of people currently requires a lot of computing power. Amid chip shortages over the past few years, finding sources of that computing muscle has been tricky. OpenAI is reportedly working on its own GPU hardware to help alleviate the strain.

But for now, the company needs to find new sources of Nvidia chips, which accelerate AI computations. Altman has previously said that the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources, roughly enough to power 25 million US homes, according to Reuters.
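
A rough sanity check on that comparison (the average-household figure below is an assumption based on typical US consumption, not something stated in the article): 30 gigawatts of continuous draw does work out to roughly 25 million homes.

```python
# Back-of-envelope check of the "25 million US homes" comparison.
# Assumption: an average US household uses about 10,500 kWh per year,
# which works out to roughly 1.2 kW of continuous draw.
gigawatts = 30
avg_household_kw = 10_500 / (365 * 24)            # ~1.2 kW continuous
homes_millions = gigawatts * 1_000_000 / avg_household_kw / 1_000_000
print(f"~{homes_millions:.0f} million homes")     # ~25 million
```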



A single point of failure triggered the Amazon outage affecting millions

In turn, the delay in network state propagation spilled over to a network load balancer that AWS services rely on for stability. As a result, AWS customers experienced connection errors in the US-East-1 region. Affected AWS functions included creating and modifying Redshift clusters, Lambda invocations, Fargate task launches (including those behind Managed Workflows for Apache Airflow), Outposts lifecycle operations, and the AWS Support Center.

For the time being, Amazon has disabled the DynamoDB DNS Planner and the DNS Enactor automation worldwide while it works to fix the race condition and add protections to prevent the application of incorrect DNS plans. Engineers are also making changes to EC2 and its network load balancer.
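
Amazon's summary doesn't include code, but the class of defect (one piece of automation overwriting newer DNS state with an older plan) and the protection it describes (rejecting incorrect or stale plans) can be sketched generically. The names and structure below are hypothetical, not Amazon's implementation:

```python
# Illustrative sketch only, not Amazon's code: refuse to apply a DNS "plan"
# that is older than the one already in effect, so a slow or delayed
# automation can't clobber newer state. All names here are hypothetical.
import threading


class DnsState:
    def __init__(self):
        self._lock = threading.Lock()
        self.applied_generation = 0
        self.records = {}

    def apply_plan(self, generation: int, records: dict) -> bool:
        """Apply a plan only if it is newer than the one already applied."""
        with self._lock:
            if generation <= self.applied_generation:
                return False  # stale plan from a lagging enactor; reject it
            self.applied_generation = generation
            self.records = dict(records)
            return True


state = DnsState()
assert state.apply_plan(2, {"dynamodb.example": ["10.0.0.2"]})  # newer plan applied
assert not state.apply_plan(1, {"dynamodb.example": []})        # stale plan rejected
```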

A cautionary tale

Ookla outlined a contributing factor not mentioned by Amazon: a concentration of customers who route their connectivity through the US-East-1 endpoint and an inability to route around the region. Ookla explained:

The affected US‑EAST‑1 is AWS’s oldest and most heavily used hub. Regional concentration means even global apps often anchor identity, state or metadata flows there. When a regional dependency fails as was the case in this event, impacts propagate worldwide because many “global” stacks route through Virginia at some point.

Modern apps chain together managed services like storage, queues, and serverless functions. If DNS cannot reliably resolve a critical endpoint (for example, the DynamoDB API involved here), errors cascade through upstream APIs and cause visible failures in apps users do not associate with AWS. That is precisely what Downdetector recorded across Snapchat, Roblox, Signal, Ring, HMRC, and others.

The event serves as a cautionary tale for all cloud services: More important than preventing race conditions and similar bugs is eliminating single points of failure in network design.

“The way forward,” Ookla said, “is not zero failure but contained failure, achieved through multi-region designs, dependency diversity, and disciplined incident readiness, with regulatory oversight that moves toward treating the cloud as systemic components of national and economic resilience.”
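
One concrete building block of the multi-region designs Ookla is pointing at is DNS-level failover, so a regional outage shifts traffic instead of taking the application down. The sketch below uses Route 53 failover records with a health check; the zone ID, domain, addresses, and health check ID are placeholders.

```python
# A minimal sketch of one "contained failure" building block: Route 53
# failover records that shift traffic to a secondary region when the
# primary region's health check fails. Zone ID, domain, addresses, and
# health check ID are placeholders.
import boto3

route53 = boto3.client("route53")


def upsert_failover(zone_id, name, role, set_id, ip, health_check_id=None):
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )


upsert_failover("Z0000EXAMPLE", "api.example.com.", "PRIMARY", "use1",
                "203.0.113.10", health_check_id="hc-primary-example")
upsert_failover("Z0000EXAMPLE", "api.example.com.", "SECONDARY", "usw2",
                "203.0.113.20")
```

DNS failover alone doesn't remove the deeper dependencies Ookla describes, such as identity, state, and metadata flows anchored in one region, which is why the report pairs multi-region designs with dependency diversity and incident readiness.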



Smart beds leave sleepers hot and bothered during AWS outage

Some users complained that malfunctioning devices kept them awake for hours. Others bemoaned waking up in the middle of the night drenched in sweat.

Even more basic features, such as alarms, failed to work when Eight Sleep’s servers went down.

Eight Sleep will offer local control

Eight Sleep co-founder and CEO Matteo Franceschetti addressed the problems via X on Monday:

The AWS outage has impacted some of our users since last night, disrupting their sleep. That is not the experience we want to provide and I want to apologize for it.

We are taking two main actions:

1) We are restoring all the features as AWS comes back. All devices are currently working, with some experiencing data processing delays.

2) We are currently outage-proofing your Pod experience and we will be working tonight-24/7 until that is done.

On Monday evening, Franceschetti said that “all the features should be working.” On Tuesday, he claimed that a local control option would be available on Wednesday “at the latest” without providing more detail.

Eight Sleep users will be relieved to hear that the company is working to make its products usable during Internet outages. But many are also questioning why Eight Sleep didn’t implement local control sooner. This isn’t Eight Sleep’s first outage, and users can also experience personal Wi-Fi problems. And there’s an obvious benefit in letting users control their bed’s elevation and temperature without the Internet, or if Eight Sleep ever goes out of business.

For Eight Sleep, though, making flagship features available without its app while still making enough money isn’t easy. If people weren’t forced to put their devices online, it would be harder for the company to convince them that Autopilot subscriptions should be mandatory. And because the Pod hardware’s high prices deter multiple or frequent purchases, alternative, recurring revenue streams are key to the 11-year-old company’s survival.

After a June outage, an Eight Sleep user claimed that the company told him that it was working on an offline mode. This week’s AWS problems seem to have hastened efforts, so users don’t lose sleep during the next outage.



Amazon’s DNS problem knocked out half the web, likely costing billions

On Monday afternoon, Amazon confirmed that an outage affecting Amazon Web Services’ cloud hosting, which had impacted millions across the Internet, had been resolved.

Considered the worst outage since last year’s CrowdStrike chaos, Amazon’s outage caused “global turmoil,” Reuters reported. AWS is the world’s largest cloud provider and, therefore, the “backbone of much of the Internet,” ZDNet noted. Ultimately, more than 28 AWS services were disrupted, causing perhaps billions in damages, one analyst estimated for CNN.

Popular apps like Snapchat, Signal, and Reddit went dark. Flights got delayed. Banks and financial services went down. Massive games like Fortnite could not be accessed. Some of Amazon’s own services were hit, too, including its e-commerce platform, Alexa, and Prime Video. Ultimately, millions of businesses simply stopped operating, unable to log employees into their systems or accept payments for their goods.

“The incident highlights the complexity and fragility of the Internet, as well as how much every aspect of our work depends on the Internet to work,” Mehdi Daoudi, the CEO of an Internet performance monitoring firm called Catchpoint, told CNN. “The financial impact of this outage will easily reach into the hundreds of billions due to loss in productivity for millions of workers that cannot do their job, plus business operations that are stopped or delayed—from airlines to factories.”

Amazon’s problems originated at a US site that is its “oldest and largest for web services” and often “the default region for many AWS services,” Reuters noted. The same site experienced outages in 2020 and 2021, and while the tech giant said those prior issues had been “fully mitigated,” the fixes apparently did not ensure stability into 2025.



Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

Microsoft’s Office 365 suite will soon incorporate AI models from Anthropic alongside existing OpenAI technology, The Information reported, ending years of exclusive reliance on OpenAI for generative AI features across Word, Excel, PowerPoint, and Outlook.

The shift reportedly follows internal testing that revealed Anthropic’s Claude Sonnet 4 model excels at specific Office tasks where OpenAI’s models fall short, particularly in visual design and spreadsheet automation, according to sources familiar with the project cited by The Information, who stressed the move is not a negotiating tactic.

Anthropic did not immediately respond to Ars Technica’s request for comment.

In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic’s models through Amazon Web Services—both a cloud computing rival and one of Anthropic’s major investors. The integration is expected to be announced within weeks, with subscription pricing for Office’s AI tools remaining unchanged, the report says.

Microsoft maintains that its OpenAI relationship remains intact. “As we’ve said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership,” a Microsoft spokesperson told Reuters following the report. The tech giant has poured over $13 billion into OpenAI to date and is currently negotiating terms for continued access to OpenAI’s models as part of broader talks about the future of the partnership.

Microsoft’s tight partnership with OpenAI, which stretches back to 2019, until recently gave the tech giant a head start in AI assistants based on language models, allowing for a rapid (though bumpy) deployment of OpenAI-based features in Bing search and the rollout of Copilot assistants throughout its software ecosystem. It’s worth noting, however, that a recent report from the UK government found no clear productivity boost from using Copilot AI in daily work tasks among study participants.



Basecamp-maker 37Signals says its “cloud exit” will save it $10M over 5 years

Lots of pointing at clouds

In March, AWS made data transfer out of its cloud free for customers moving off its servers, spurred in part by European regulations. Trade publications are full of trend stories about rising cloud costs and explainers on why companies are repatriating. Stories of major players’ cloud reversals, like that of Dropbox, have become talking points for the cloud-averse.

Not everyone believes the sky is falling. Lydia Leong, a cloud computing analyst at Gartner, wrote on her own blog about how “the myth of cloud repatriation refuses to die.” A large part of this, Leong writes, is in how surveys and anecdotal news stories confuse various versions of “repatriation” from managed service providers to self-hosted infrastructure.

“None of these things are in any way equivalent to the notion that there’s a broad or even common movement of workloads from the cloud back on-premises, though, especially for those customers who have migrated entire data centers or the vast majority of their IT estate to the cloud,” writes Leong.

Both Leong and Rich Hoyer, director of the FinOps group at SADA, suggest that framing the issue as simply “cloud versus on-premises” is too simplistic. A poorly architected split between cloud and on-prem, vague goals and measurements of cloud “cost” and “success,” and fuzzy return-on-investment math, Hoyer writes, are feeding alarmist takes on cloud costs.

For its part, AWS has itself testified that it faces competition from the on-premises IT movement, although it did so as part of a “Cloud Services Market Investigation” by UK market competition authorities. Red Hat and Citrix have suggested that, at a minimum, hybrid approaches have regained ground after a period of cloud primacy.

Those kinds of measured approaches don’t have the same broad reach as declaring an “exit” and putting a very round number on it, but it’s another interesting data point.

Ars has reached out to AWS and will update this post with comment.



Amazon exec tells employees to work elsewhere if they dislike RTO policy

Amazon workers are being reminded that they can find work elsewhere if they’re unhappy with Amazon’s return-to-office (RTO) mandate.

In September, Amazon told staff that they’ll be required to work in the office five days a week starting in 2025. Amazon employees are currently allowed to work remotely two days a week. A memo from CEO Andy Jassy announcing the policy change said that “it’s easier for our teammates to learn, model, practice, and strengthen our culture” when working at the office.

On Thursday, at what Reuters described as an “all-hands meeting” for Amazon Web Services (AWS), AWS CEO Matt Garman reportedly told workers:

If there are people who just don’t work well in that environment and don’t want to, that’s okay, there are other companies around.

Garman said that he didn’t “mean that in a bad way,” however, adding: “We want to be in an environment where we’re working together. When we want to really, really innovate on interesting products, I have not seen an ability for us to do that when we’re not in-person.”

Interestingly, Garman’s comments about dissatisfaction with the RTO policy came alongside his claim that 9 out of 10 Amazon employees he spoke to support the RTO mandate, Reuters reported.

Some suspect RTO mandates are attempts to make workers quit

Amazon has faced resistance to RTO since pandemic restrictions were lifted. Like workers at other companies, some Amazon employees have publicly wondered if strict in-office policies are being enacted as attempts to reduce headcount without layoffs.

In July 2023, Amazon started requiring employees to work in their team’s central hub location (as opposed to remotely or in an office that may be closer to where they reside). Amazon reportedly told workers that if they didn’t comply or find a new job internally, they’d be considered a “voluntary resignation,” per a Slack message that Business Insider reportedly viewed. And many Amazon employees have already reported considering looking for a new job due to the impending RTO requirements.

However, employers like Amazon “can face an array of legal consequences for encouraging workers to quit via their RTO policies,” Helen D. (Heidi) Reavis, managing partner at Reavis Page Jump LLP, an employment, dispute resolution, and media law firm, told Ars Technica.



Amazon joins Google in investing in small modular nuclear power


Small nukes is good nukes?

What’s with the sudden interest in nuclear power among tech titans?

Fuel pellets flow down the reactor (left), as gas transfers heat to a boiler (right). Credit: X-energy

On Tuesday, Google announced that it had made a power purchase agreement for electricity generated by a small modular nuclear reactor design that hasn’t even received regulatory approval yet. Today, it’s Amazon’s turn. The company’s Amazon Web Services (AWS) group has announced three different investments, including one targeting a different startup that has its own design for small, modular nuclear reactors—one that has not yet received regulatory approval.

Unlike Google’s deal, which is a commitment to purchase power should the reactors ever be completed, Amazon will lay out some money upfront as part of the agreements. We’ll take a look at the deals and technology that Amazon is backing before analyzing why companies are taking a risk on unproven technologies.

Money for utilities and a startup

Two of Amazon’s deals are with utilities that serve areas where it already has a significant data center footprint. One of these is Energy Northwest, which is an energy supplier that sends power to utilities in the Pacific Northwest. Amazon is putting up the money for Energy Northwest to study the feasibility of adding small modular reactors to its Columbia Generating Station, which currently houses a single, large reactor. In return, Amazon will get the right to purchase power from an initial installation of four small modular reactors. The site could potentially support additional reactors, which Energy Northwest would be able to use to meet demands from other users.

The deal with Virginia’s Dominion Energy is similar in that it would focus on adding small modular reactors to Dominion’s existing North Anna Nuclear Generating Station. But the exact nature of the deal is a bit harder to understand. Dominion says the companies will “jointly explore innovative ways to advance SMR development and financing while also mitigating potential cost and development risks.”

Should either or both of these projects go forward, the reactor designs used will come from a company called X-energy, which is involved in the third deal Amazon is announcing. In this case, it’s a straightforward investment in the company, although the exact dollar amount is unclear (the company says Amazon is “anchoring” a $500 million round of investments). The money will help finalize the company’s reactor design and push it through the regulatory approval process.

Small modular nuclear reactors

X-energy is one of several startups attempting to develop small modular nuclear reactors. The reactors all share a few features that are expected to help them avoid the massive time and cost overruns associated with the construction of large nuclear power stations. Their limited size allows them to be made at a central facility and then shipped to the power station for installation. This limits the scale of the infrastructure that needs to be built on-site and allows the assembly facility to benefit from economies of scale.

This also allows a great deal of flexibility at the installation site, as you can scale the facility to power needs simply by adjusting the number of installed reactors. If demand rises in the future, you can simply install a few more.

The small modular reactors are also typically designed to be inherently safe. Should the site lose power or control over the hardware, the reactor will default to a state where it can’t generate enough heat to melt down or damage its containment. There are various approaches to achieving this.

X-energy’s technology is based on small, self-contained fuel pellets called TRISO (TRi-structural ISOtropic) particles. These contain both the uranium fuel and a graphite moderator and are surrounded by a ceramic shell. They’re structured so that there isn’t sufficient uranium present to generate temperatures that can damage the ceramic, ensuring that the nuclear fuel will always remain contained.

The design is meant to run at high temperatures and extract heat from the reactor using helium, which is used to boil water and generate electricity. Each reactor can produce 80 megawatts of electricity, and the reactors are designed to work efficiently as a set of four, creating a 320 MW power plant. As of yet, however, there are no working examples of this reactor, and the design hasn’t been approved by the Nuclear Regulatory Commission.

Why now?

Why is there such sudden interest in small modular reactors among the tech community? It comes down to growing needs and a lack of good alternatives, even given the highly risky nature of the startups that hope to build the reactors.

It’s no secret that data centers require enormous amounts of energy, and the sudden popularity of AI threatens to raise that demand considerably. Renewables, as the cheapest source of power on the market, would be one way of satisfying that growth, but they’re not ideal. For one thing, the intermittent nature of the power they supply, while possible to manage at the grid level, is a bad match for the around-the-clock demands of data centers.

The US has also benefitted from over a decade of efficiency gains keeping demand flat despite population and economic growth. This has meant that all the renewables we’ve installed have displaced fossil fuel generation, helping keep carbon emissions in check. Should newly installed renewables instead end up servicing rising demand, it will make it considerably more difficult for many states to reach their climate goals.

Finally, renewable installations have often been built in areas without dedicated high-capacity grid connections, resulting in a large and growing backlog of projects (2.6 TW of generation and storage as of 2023) that are stalled as they wait for the grid to catch up. Expanding the pace of renewable installation can’t meet rising server farm demand if the power can’t be brought to where the servers are.

These new projects avoid that problem because they’re targeting sites that already have large reactors and grid connections to use the electricity generated there.

In some ways, it would be preferable to build more of these large reactors based on proven technologies, but not in the two ways that matter most here: time and money. The last reactor completed in the US was at the Vogtle site in Georgia, which started construction in 2009 but only went online this year. Costs also increased from $14 billion to over $35 billion during construction. It’s clear that any similar projects would start generating far too late to meet the near-immediate needs of server farms and would be nearly impossible to justify economically.

This leaves small modular nuclear reactors as the least-bad option in a set of bad options. Despite many startups having entered the space over a decade ago, there is still just a single reactor design approved in the US, that of NuScale. But the first planned installation saw the price of the power it would sell rise to the point where it was no longer economically viable due to the plunge in the cost of renewable power; it was canceled last year as the utilities that would have bought the power pulled out.

The probability that a different company will manage to get a reactor design approved, move to construction, and manage to get something built before the end of the decade is extremely low. The chance that it will be able to sell power at a competitive price is also very low, though that may change if demand rises sufficiently. So the fact that Amazon is making some extremely risky investments indicates just how worried it is about its future power needs. Of course, when your annual gross profit is over $250 billion a year, you can afford to take some risks.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



AWS S3 storage bucket with unlucky name nearly cost developer $1,300

Not that kind of bucket list —

Amazon says it’s working on stopping others from “making your AWS bill explode.”

Be careful with the buckets you put out there for anybody to fill. Credit: Getty Images

If you’re using Amazon Web Services and your S3 storage bucket can be reached from the open web, you’d do well not to pick a generic name for that space. Avoid “example,” skip “change_me,” don’t even go with “foo” or “bar.” Someone else with the same “change this later” thinking can cost you a MacBook’s worth of cash.

Ask Maciej Pocwierz, who just happened to pick an S3 name that “one of the popular open-source tools” used for its default backup configuration. After setting up the bucket for a client project, he checked his billing page and found nearly 100 million unauthorized attempts to create new files on his bucket (PUT requests) within one day. The bill was over $1,300 and counting.

Nothing, nothing, nothing, nothing, nothing … nearly 100 million unauthorized requests.
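
For a sense of scale (using an assumed S3 Standard price of about $0.005 per 1,000 PUT requests, which is not a figure from the article), the request volume alone runs well into the hundreds of dollars before different regions and request types are factored in:

```python
# Rough cost of ~100 million PUT requests, assuming S3 Standard pricing of
# about $0.005 per 1,000 PUT requests (an assumption, not an article figure).
requests = 100_000_000
price_per_1000_puts = 0.005
print(f"~${requests / 1000 * price_per_1000_puts:,.0f} for the PUTs alone")  # ~$500
```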

“All this actually happened just a few days after I ensured my client that the price for AWS services will be negligible, like $20 at most for the entire month,” Pocwierz wrote over chat. “I explained the situation is very unusual but it definitely looked as if I didn’t know what I’m doing.”

Pocwierz declined to name the open source tool that inadvertently bum-rushed his S3 account. In a Medium post about the matter, he noted a different problem with an unlucky default backup. After turning on public writes, he watched as he collected more than 10GB of data in less than 30 seconds. Other people’s data, that is, and they had no idea that Pocwierz was collecting it.

Some of that data came from companies with customers, which is part of why Pocwierz is keeping the specifics under wraps. He wrote to Ars that he contacted some of the companies that either tried or successfully backed up their data to his bucket, and “they completely ignored me.” “So now instead of having this fixed, their data is still at risk,” Pocwierz writes. “My lesson is if I ever run a company, I will definitely have a bug bounty program, and I will treat such warnings seriously.”

As for Pocwierz’s accounts, both S3 and bank, things mostly ended well. An AWS representative reached out on LinkedIn and canceled his bill, he said, telling him that anybody can request a refund for excessive unauthorized requests. “But they didn’t explicitly say that they will necessarily approve it,” he wrote. He noted in his Medium post that AWS “emphasized that this was done as an exception.”

In response to Pocwierz’s story, Jeff Barr, chief evangelist for AWS at Amazon, tweeted that “We agree that customers should not have to pay for unauthorized requests that they did not initiate.” Barr added that Amazon would have more to share on how the company could prevent them “shortly.” AWS has a brief explainer and contact page on unexpected AWS charges.

The open source tool did change its default configuration after Pocwierz contacted them. Pocwierz suggested to AWS that it should restrict anyone else from creating a bucket name like his, but he had yet to hear back about it. He suggests in his blog post that, beyond random bad luck, adding a random suffix to your bucket name and explicitly specifying your AWS region can help avoid massive charges like the one he narrowly dodged.
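
A minimal sketch of that mitigation, assuming boto3 and a placeholder bucket prefix and region (the names here are illustrative, not values from Pocwierz’s post):

```python
# A sketch of the suggested mitigation: an unguessable random suffix on the
# bucket name plus an explicitly chosen region. Prefix and region are
# placeholders.
import secrets

import boto3

region = "eu-central-1"
bucket_name = f"myproject-backups-{secrets.token_hex(8)}"

s3 = boto3.client("s3", region_name=region)
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": region},  # omit for us-east-1
)
print(bucket_name)
```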



Alleged cryptojacking scheme consumed $3.5M of stolen computing to make just $1M

SHOCKING CRYPTOCURRENCY SCAM —

Indictment says man tricked cloud providers into giving him services he never paid for.


Federal prosecutors indicted a Nebraska man on charges he perpetrated a cryptojacking scheme that defrauded two cloud providers—one based in Seattle and the other in Redmond, Washington—out of $3.5 million.

The indictment, filed in US District Court for the Eastern District of New York and unsealed on Monday, charges Charles O. Parks III—45, of Omaha, Nebraska—with wire fraud, money laundering, and engaging in unlawful monetary transactions in connection with the scheme. Parks, who was arrested last Friday, has yet to enter a plea and is scheduled to make an initial appearance in federal court in Omaha on Tuesday.

Prosecutors allege that Parks defrauded “two well-known providers of cloud computing services” out of more than $3.5 million in computing resources to mine cryptocurrency. The indictment says the activity was in furtherance of a cryptojacking scheme, a term for crimes that generate digital coin using computing resources and electricity acquired from others through fraud, hacking, or other illegal means.

Details laid out in the indictment underscore the failed economics involved in the mining of most cryptocurrencies. The $3.5 million of computing resources yielded roughly $1 million worth of cryptocurrency. In the process, massive amounts of energy were consumed.

Parks’ scheme allegedly used a variety of personal and business identities to register “numerous accounts” with the two cloud providers, acquiring in the process vast amounts of computing power and storage that he never paid for. Prosecutors said he tricked the providers into allotting him elevated levels of service and deferred billing accommodations, and deflected their inquiries about questionable data usage and unpaid bills. He allegedly then used those resources to mine the Ether, Litecoin, and Monero digital currencies.

The defendant then allegedly laundered the proceeds through cryptocurrency exchanges, an NFT marketplace, an online payment provider, and traditional bank accounts in an attempt to disguise the illegal scheme. Once proceeds had been converted to dollars, Parks allegedly bought a Mercedes-Benz, jewelry, first-class hotel and travel accommodations, and other luxury goods and services.

From January to August 2021, prosecutors allege, Parks created five accounts with the Seattle-based “on-demand cloud computing platform” using different names, email addresses, and corporate affiliations. He then allegedly “tricked and defrauded” employees of the platform into providing elevated levels of service, deferring billing payments, and failing to discover the activity.

During this time, Parks repeatedly requested that the provider “provide him access to powerful and expensive instances that included graphics processing units used for cryptocurrency mining and launched tens of thousands of these instances to mine cryptocurrency, employing mining software applications to facilitate the mining of tokens including ETH, LTC and XMR in various mining pools, and employing tools that allowed him to maximize cloud computing power and monitor which instances were actively mining on each mining pool,” prosecutors wrote in the indictment.

Within a day of having one account suspended for nonpayment and fraudulent activity, Parks allegedly used a new account with the provider. In all, Parks allegedly consumed more than $2.5 million of the Seattle-based provider’s services.

The prosecutors went on to allege that Parks used similar tactics to defraud the Redmond provider of more than $969,000 in cloud computing and related services.

Prosecutors didn’t say precisely how Parks was able to trick the providers into giving him elevated services, deferring unpaid payments, or failing to discover the allegedly fraudulent behavior. They also didn’t identify either of the cloud providers by name. Based on the details, however, they are almost certainly Amazon Web Services and Microsoft Azure. Representatives from both providers didn’t immediately return emails seeking confirmation.

If convicted on all charges, Parks faces as much as 30 years in prison.



Redis’ license change and forking are a mess that everybody can feel bad about

Licensing is hard —

Cloud firms want a version of Redis that’s still open to managed service resale.

An Amazon Web Services (AWS) data center under construction in Stone Ridge, Virginia, in March 2024. Amazon will spend more than $150 billion on data centers in the next 15 years. Credit: Getty Images

Redis, a tremendously popular tool for storing data in-memory rather than in a database, recently switched its licensing from an open source BSD license to both a Source Available License and a Server Side Public License (SSPL).

The software project and company supporting it were fairly clear in why they did this. Redis CEO Rowan Trollope wrote on March 20 that while Redis and volunteers sponsored the bulk of the project’s code development, “the majority of Redis’ commercial sales are channeled through the largest cloud service providers, who commoditize Redis’ investments and its open source community.” Clarifying a bit, “cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge.”

Clarifying even further: Amazon Web Services (and lesser cloud giants), you cannot continue reselling Redis as a service as part of your $90 billion business without some kind of licensed contribution back.

This generated a lot of discussion, blowback, and action. The biggest thing was a fork of the Redis project, Valkey, that is backed by The Linux Foundation and, critically, also Amazon Web Services, Google Cloud, Oracle, Ericsson, and Snap Inc. Valkey is “fully open source,” Linux Foundation execs note, with the kind of BSD-3-Clause license Redis sported until recently. You might note the exception of Microsoft from that list of fork fans.

As noted by Matt Asay, who formerly ran open source strategy and marketing at AWS, most developers are “largely immune to Redis’ license change.” Asay suggests that, aside from the individual contributions of AWS engineer and former Redis core contributor Madelyn Olson (who contributed in her free time) and Alibaba’s Zhao Zhao, “The companies jumping behind the fork of Redis have done almost nothing to get Redis to its current state.”

Olson told TechCrunch that she was disappointed by Redis’ license change but not surprised. “I’m more just disappointed than anything else.” David Nally, AWS’ current director for open source strategy and marketing, demurred when asked by TechCrunch if AWS considered buying a Redis license from Redis Inc. before forking. “[F]rom an open-source perspective, we’re now invested in ensuring the success of Valkey,” Nally said.

Shifts in open source licensing have triggered previous keep-it-open forks, including OpenSearch (from ElasticSearch) and OpenTofu (from Terraform). With the backing of the Linux Foundation and some core contributors, though, Valkey will likely soon evolve far beyond a drop-in Redis replacement, and Redis is likely to follow suit.

If you’re reading all this and you don’t own a gigascale cloud provider or sit on the board of a source code licensing foundation, it’s hard to know what to make of the fiasco. Every party in this situation is doing what is legally permissible, and software from both sides will continue to be available to the wider public. Taking your ball and heading home is a longstanding tradition when parties disagree on software goals and priorities. But it feels like there had to be another way this could have worked out.
