Author name: DJ Henderson

Broadcom reverses controversial plan in effort to cull VMware migrations

Customers re-examining VMware dependence

VMware has been the go-to virtualization platform for years, but Broadcom’s acquisition has pushed customers to reconsider their VMware dependence. A year into its VMware buy, Broadcom is at a critical point. By now, customers have had months to determine whether they’ll navigate VMware’s new landscape or opt for alternatives. Beyond dissatisfaction with the new pricing and processes under Broadcom, the acquisition has also served as a wake-up call about vendor lock-in. Small- and medium-size businesses (SMBs) are having the biggest problems navigating the changes, per conversations that Ars has had with VMware customers and analysts.

Speaking to The Register, Edwards claimed that migration from VMware is still modest. However, the coming months are set to be decision time for some clients. In a survey conducted in June and July and sponsored by Veeam, which sells hypervisor backup software, 56 percent of organizations said they expected to “decrease” VMware usage by July 2025. The survey polled 561 “senior decisionmakers employed in IT operations and IT security roles” at companies with over 1,000 employees in the US, France, Germany, and the UK.

Impact on migrations questioned

With the pain points seemingly affecting SMBs more than bigger clients, Broadcom’s latest move may do little to deter the majority of customers from considering ditching VMware.

Speaking with Ars, Rick Vanover, VP of product strategy at Veeam, said he thinks Broadcom taking fewer large VMware customers direct will have an “insignificant” impact on migrations, explaining:

Generally speaking, the largest enterprises (those who would qualify for direct servicing by Broadcom) are not considering migrating off VMware.

However, channel partners can play a “huge part” in helping customers decide to stay or migrate platforms, the executive added.

“Product telemetry at Veeam shows a slight distribution of hypervisors in the market, across all segments, but not enough to tell the market that the sky is falling,” Vanover said.

In his blog, Edwards argued that Tan is demonstrating a “clear objective to strip out layers of cost and complexity in the business, and return it to strong growth and profitability.” He added: “But so far this has come at the expense of customer and partner relationships. Has VMware done enough to turn the tide?”

Perhaps more pertinent to SMBs, Broadcom last month announced a more SMB-friendly VMware subscription tier. The tier’s final pricing will be a big factor in whether it successfully retains SMB business. But Broadcom’s VMware still seems more focused on larger customers.

Study: Warming has accelerated due to the Earth absorbing more sunlight

The concept of an atmospheric energy imbalance is pretty straightforward: We can measure both the amount of energy the Earth receives from the Sun and how much energy it radiates back into space. Any difference between the two results in a net energy imbalance that’s either absorbed by or extracted from the ocean/atmosphere system. And we’ve been tracking it via satellite for a while now as rising greenhouse gas levels have gradually increased the imbalance.

But greenhouse gases aren’t the only thing having an effect. For example, the imbalance has also increased in the Arctic due to the loss of snow cover and retreat of sea ice. The dark ground and ocean absorb more solar energy compared to the white material that had previously been exposed to the sunlight. Not all of this is felt directly, however, as a lot of the areas where it’s happening are frequently covered by clouds.

Nevertheless, the loss of snow and ice has caused the Earth’s reflectivity, termed its albedo, to decline since the 1970s, enhancing the warming a bit.
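
To put rough numbers on that bookkeeping, here is a minimal sketch using textbook global-mean values (a solar constant of about 1,361 W/m², an albedo near 0.29, and roughly 240 W/m² of outgoing infrared; these are illustrative figures, not values from the study). It shows how even a small drop in albedo nudges the imbalance upward:

```python
# Back-of-the-envelope energy budget with illustrative global-mean values
# (not figures from the paper): a small albedo decline increases absorbed sunlight.
S0 = 1361.0  # approximate solar constant, W/m^2

def absorbed_solar(albedo):
    # Sunlight averaged over the whole sphere is S0/4; the planet reflects a
    # fraction equal to its albedo and absorbs the rest.
    return (S0 / 4.0) * (1.0 - albedo)

outgoing_longwave = 240.0  # rough global-mean infrared emission, W/m^2

for albedo in (0.290, 0.289):  # a decline of just 0.001 in albedo
    imbalance = absorbed_solar(albedo) - outgoing_longwave
    print(f"albedo={albedo:.3f}  absorbed={absorbed_solar(albedo):.2f} W/m^2  "
          f"imbalance={imbalance:+.2f} W/m^2")
```

Each 0.001 of lost albedo works out to roughly a third of a watt per square meter of extra absorbed sunlight, which is why a record-low albedo shows up so clearly in the imbalance.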

Vanishing clouds

The new paper finds that the energy imbalance set a new high in 2023, with a record amount of energy being absorbed by the ocean/atmosphere system. This wasn’t accompanied by a drop in infrared emissions from the Earth, suggesting it wasn’t due to greenhouse gases, which trap heat by absorbing this radiation. Instead, it seems to be due to decreased reflection of incoming sunlight by the Earth.

While there was a general trend in that direction, the planet set a new record low for albedo in 2023. Using two different data sets, the teams identify the areas most affected by this, and they’re not at the poles, indicating that the loss of snow and ice is unlikely to be the cause. Instead, the key contributor appears to be the loss of low-level clouds. “The cloud-related albedo reduction is apparently largely due to a pronounced decline of low-level clouds over the northern mid-latitude and tropical oceans, in particular the Atlantic,” the researchers say.

AT&T says it won’t build fiber home Internet in half of its wireline footprint


AT&T is ditching copper and building fiber, but many will get only 5G or satellite.

Credit: Getty Images | Joe Raedle

AT&T this week detailed plans to eliminate copper phone and DSL lines from its network while leaving many customers in rural areas with only wireless or satellite as an alternative.

In a presentation for analysts and investors on Tuesday, AT&T said it has a “wireless first” plan for 50 percent of its 500,000-square-mile wireline territory and a “fiber first” plan for the rest. The more sparsely populated half accounts for 10 percent of the potential customer base, and AT&T does not plan to build fiber home Internet for those users.

AT&T said it expects to be able to ditch copper because of state-level deregulation and the impending shift in power at the Federal Communications Commission, where Trump pick Brendan Carr is set to become the chairman. California is the only state out of 21 in AT&T’s wireline territory that hasn’t yet granted AT&T’s request for deregulation of old networks.

An AT&T press release said the company “is actively working to exit its legacy copper network operations across the large majority of its wireline footprint by the end of 2029.” AT&T’s wireline footprint has 88 million locations, said Susan Johnson, an AT&T executive VP in charge of supply chain and wireline transformation.

About 21 million of those have access only to voice service. The other 67 million are eligible for Internet access, and 29 million of those have access to fiber already. AT&T plans to boost its number of fiber locations to 45 million by the end of 2029 but says it isn’t profitable enough to build fiber to the other parts of its old landline phone and DSL networks.

AT&T: Fiber not profitable enough in half of footprint

AT&T reported that its residential business has 13.97 million Internet connections, including 9.02 million fiber connections. Many copper users who don’t get fiber will be able to use 5G-based home broadband with AT&T Internet Air and wireless phone service with AT&T Phone-Advanced. Johnson said that Internet Air offers “up to 25 times faster speeds than legacy ADSL.” But customers who don’t get access to the terrestrial wireless service may have to use satellite.

“Wireless first is the name for our wire center areas where we have not built and do not plan to build residential fiber. There’s not an economic path to do so,” Johnson said. “These wire centers may still have fiber supporting businesses or cell sites but no consumer fiber. This is about 50 percent of our land area but it’s only 10 percent of the population.” These areas have “four remaining copper customers per square mile,” she said.

Wireless home phone service will be available to “the vast majority of our existing copper-based customers,” but not all, she said. In some areas, “we will need to work with our customers to move them to other technologies, including satellite. But we’ve made a pledge that we’re going to keep our customers connected through the process and make sure that no customer loses access to voice or 911 services.”

Johnson said AT&T’s “plan is to have no customers using copper services in these wire center areas by the end of 2027.” A Republican-majority FCC will help, she said.

“We are going to work with the FCC to speed up and scale this process, and with the new administration we are optimistic that we can make even more progress in simplifying our networks and migrating our customers over the next several years,” Johnson said.

She said that AT&T Phone-Advanced “was specifically designed to meet the FCC’s criteria as an adequate replacement product for our traditional landline phone service, and we have successfully completed the testing with the FCC and we are continuing to move through their preview process.”

AT&T has an application pending with the FCC in a small number of wire centers, “which, if approved, would allow us to replace traditional landline phone service, think POTS [Plain Old Telephone Service], our most regulated product, with AT&T Phone Advanced,” Johnson said.

California demanded more reliable service

AT&T already achieved what Johnson called “an absolutely critical precedent” earlier this year when the FCC allowed it to stop accepting new copper-based service orders in 60 wire centers across 13 states, she said. A wire center consists of a central office and the surrounding infrastructure, including the copper lines that stretch from the central office to homes and businesses. AT&T has 4,600 wire centers in the US, Johnson said.

Notably, AT&T’s plan to ditch copper currently excludes California, where the Public Utilities Commission rejected AT&T’s request to end its landline phone obligations in a June 2024 ruling. “California is not included in the plans I just laid out for you. We are continuing to work with policy makers to define our path in that state,” Johnson said.

AT&T is still classified as a Carrier of Last Resort (COLR) in California, and the state telecom agency rejected AT&T’s argument that VoIP and mobile services could fill the gap that would exist if AT&T escaped that obligation. Residents “highlighted the unreliability of voice alternatives” at public hearings, the agency said.

An administrative law judge at the California agency said AT&T falsely claimed that commission rules require it “to retain outdated copper-based landline facilities that are expensive to maintain.” AT&T is allowed to upgrade those lines from copper to fiber, the agency said.

AT&T achieved its goal of deregulation in the other 20 states where it has wireline operations, Johnson said. “While California is the last state to modernize, we’ve started a process there and we will continue to work towards this objective,” she said.

The deregulation in other states already helped AT&T stop offering old services in “about 250,000 square miles where we have met the regulatory requirements to no longer offer regulated services because our customers have moved on to other services,” Johnson said.

AT&T planned to hit that milestone by 2025 but achieved it this year, she said. But as Johnson stressed, AT&T wants to get rid of copper in the remaining 500,000 square miles. “This is really good progress… however, without the full discontinuance of services across an entire wire center geography, we’re unable to stop the maintenance, repair, and attack the more fixed infrastructure costs,” she said.

Copper network degrading

Johnson said that AT&T is “seeing declining reliability with storms and increased copper theft. Copper simply does not do well with water and flooding, and repairs are very labor-intensive.” State regulators have said the declining reliability is largely AT&T’s fault. Many copper lines deteriorated because AT&T failed to do maintenance that would prevent lengthy outages and other troubles, a 2019 investigation by California state regulators found.

As noted earlier, AT&T said it plans to have no customers using copper in half of its territory by the end of 2027. In the other half, where AT&T described a “fiber first” strategy, there will nonetheless be copper customers who won’t get a fiber upgrade and will have to stop using copper by the end of 2029, Johnson said.

AT&T plans to build lots of fiber in the more populated half, but “not every customer location will be reached with fiber in these areas and we will still serve some of the customers in these areas with wireless alternatives,” Johnson said. AT&T’s “plan is to have no customer using copper services in these wire center areas by the end of 2029.”

The biggest beneficiaries of AT&T’s copper retirement may be shareholders. Johnson said the old network is an energy hog and has $6 billion in annual expenses. “Overall, our legacy business is profitable today but the revenue declines are accelerating,” she said.

AT&T is selling copper after it is decommissioned and leasing out some unused central offices. “By targeting the complete customer transition in a wire center, with the least profitable wire centers first, we are able to remove these geographic costs and really optimize margins as we move towards exiting copper services,” Johnson said.

Besides the 45 million existing and planned fiber locations, AT&T said its total fiber footprint by 2029 will include another 5 million or so locations through Gigapower, a joint venture with Blackrock, and agreements with commercial open-access providers.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Apple takes over third-party Apple Passwords autofill extension for Firefox

Over the last few years, Apple has steadily been building password manager-style features into macOS and iOS, including automatic password generation, password breach detection, and more. Starting with this year’s updates—iOS 18 and macOS 15 Sequoia—Apple broke all that functionality out into its own Passwords app, making it all even more visible as a competitor to traditional password managers like 1Password and Bitwarden.

One area where Apple has lagged behind its platform-agnostic competitors is in browser support. Users could easily autofill passwords in Safari on macOS, and Apple did support a basic extension for the Windows versions of Google Chrome and Microsoft Edge via iCloud for Windows. But the company only added a Chrome extension for macOS users in the summer of 2023, and it has never supported non-Chromium browsers at all.

That has finally changed, at least for Firefox users running macOS—Apple has an officially supported Passwords extension for Firefox that supports syncing and autofilling passwords in macOS Sonoma and macOS Sequoia. Currently, the extension doesn’t support older versions of macOS or any versions of Firefox for Windows or Linux. When you install the extension in Firefox on a Mac that’s already synced with your iCloud account, all you should need to do to sign in is input a six-digit code that macOS automatically generates for you. As with the Chromium extension, there’s no need to re-sign in to your iCloud account separately.

To enable this functionality, it looks like Apple has taken ownership of a third-party extension that supported autofilling Apple Passwords in Firefox—a GitHub page for the original extension is still available but says that Apple “are now the sole owners in charge of maintaining their own official iCloud Passwords extension.” That extension supports the versions of Windows that can run the official iCloud for Windows app, suggesting that Apple ought to be able to add official Windows support for the extension at some point down the line.

Google’s DeepMind tackles weather forecasting, with great performance

By some measures, AI systems are now competitive with traditional computing methods for generating weather forecasts. Because their training penalizes errors, however, the forecasts tend to get “blurry”—as you move further ahead in time, the models make fewer specific predictions since those are more likely to be wrong. As a result, you start to see things like storm tracks broadening and the storms themselves losing clearly defined edges.

But using AI is still extremely tempting because the alternative is a computational atmospheric circulation model, which is hugely compute-intensive. Still, the traditional approach is highly successful, with the ensemble model from the European Centre for Medium-Range Weather Forecasts considered the best in its class.

In a paper being released today, Google’s DeepMind claims its new AI system manages to outperform the European model on forecasts out to at least a week and often beyond. DeepMind’s system, called GenCast, merges some computational approaches used by atmospheric scientists with a diffusion model, commonly used in generative AI. The result is a system that maintains high resolution while cutting the computational cost significantly.

Ensemble forecasting

Traditional computational methods have two main advantages over AI systems. The first is that they’re directly based on atmospheric physics, incorporating the rules we know govern the behavior of our actual weather, and they calculate some of the details in a way that’s directly informed by empirical data. The second is that they’re run as ensembles, meaning that multiple instances of the model are run. Due to the chaotic nature of the weather, these different runs will gradually diverge, providing a measure of the uncertainty of the forecast.

At least one attempt has been made to merge some of the aspects of traditional weather models with AI systems. An internal Google project used a traditional atmospheric circulation model that divided the Earth’s surface into a grid of cells but used an AI to predict the behavior of each cell. This provided much better computational performance, but at the expense of relatively large grid cells, which resulted in relatively low resolution.

For its take on AI weather predictions, DeepMind decided to skip the physics while retaining the ability to run an ensemble.

GenCast is based on diffusion models, which have a key feature that’s useful here. In essence, these models are trained on pairs consisting of an original—an image, text, or a weather pattern—and a variation of it into which noise has been injected. The system learns to produce a version of the noisy input that is closer to the original. Once trained, it can be fed pure noise and will evolve that noise toward whatever it’s targeting.
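
As a toy illustration of that training setup (our own sketch, not DeepMind’s code), the snippet below corrupts a stand-in “weather state” with noise and nudges a simple linear model toward reconstructing the original; the dimensions, noise levels, and learning rate are arbitrary:

```python
import numpy as np

# Toy sketch of a denoising training objective (illustration only): corrupt a
# stand-in "weather state" with noise and train a linear map to recover it.
rng = np.random.default_rng(0)
dim = 32                              # stand-in for a flattened weather state
W = np.zeros((dim, dim))              # the "denoiser" is just a linear map here
lr = 1e-3

for step in range(2000):
    clean = rng.standard_normal(dim)          # pretend this is a real atmospheric state
    sigma = rng.uniform(0.1, 1.0)             # noise level varies during training
    noisy = clean + sigma * rng.standard_normal(dim)

    pred = W @ noisy                          # the model's guess at the clean state
    err = pred - clean                        # training penalizes distance from the original
    W -= lr * np.outer(err, noisy)            # gradient step on the squared error

print("reconstruction error on the last sample:",
      float(np.mean((W @ noisy - clean) ** 2)))
```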

In this case, the target is realistic weather data, and the system takes an input of pure noise and evolves it based on the atmosphere’s current state and its recent history. For longer-range forecasts, the “history” includes both the actual data and the predicted data from earlier forecasts. The system moves forward in 12-hour steps, so the forecast for day three will incorporate the starting conditions, the earlier history, and the two forecasts from days one and two.

This is useful for creating an ensemble forecast because you can feed it different patterns of noise as input, and each will produce a slightly different output of weather data. This serves the same purpose it does in a traditional weather model: providing a measure of the uncertainty for the forecast.
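
Here is a conceptual sketch of that sampling loop (again our own illustration, not GenCast itself): a placeholder function stands in for the trained denoiser, each ensemble member starts from a different noise draw, and the model rolls forward in 12-hour steps with each forecast feeding the next step’s history:

```python
import numpy as np

# Conceptual sketch of ensemble forecasting with a diffusion-style model
# (illustration only, not GenCast): different noise draws yield different members.
rng = np.random.default_rng(42)
STATE_DIM = 16            # stand-in for a full gridded weather state

def denoise_step(noise, current_state, previous_state):
    # Placeholder for the trained diffusion model: in reality a learned network
    # would iteratively refine `noise` into a plausible next 12-hour state
    # conditioned on the current state and its recent history.
    trend = current_state - previous_state
    return current_state + trend + 0.1 * noise

def forecast(initial, previous, steps=6, members=8):
    """Roll an ensemble forward `steps` x 12 hours from the starting conditions."""
    ensembles = []
    for _ in range(members):
        prev, cur = previous.copy(), initial.copy()
        trajectory = []
        for _ in range(steps):
            noise = rng.standard_normal(STATE_DIM)   # fresh noise per member and step
            nxt = denoise_step(noise, cur, prev)
            trajectory.append(nxt)
            prev, cur = cur, nxt                     # forecasts feed the next step's "history"
        ensembles.append(trajectory)
    return np.array(ensembles)                       # shape: (members, steps, STATE_DIM)

ens = forecast(rng.standard_normal(STATE_DIM), rng.standard_normal(STATE_DIM))
spread = ens.std(axis=0).mean(axis=-1)               # ensemble spread grows with lead time
print("spread by 12-hour step:", np.round(spread, 2))
```

The only thing distinguishing ensemble members is the noise they are fed, and the spread among them is what serves as the uncertainty estimate.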

For each grid square, GenCast works with six weather measures at the surface, along with six that track the state of the atmosphere at each of 13 different altitudes, defined by air pressure. Each of these grid squares is 0.2 degrees on a side, a higher resolution than the European model uses for its forecasts. Despite that resolution, DeepMind estimates that a single instance (meaning not a full ensemble) can be run out to 15 days on one of Google’s tensor processing systems in just eight minutes.
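
Taking the figures quoted above at face value, a single input state works out to 84 fields on a 900-by-1,800 grid; the sketch below just tallies the sizes (the exact data layout is our assumption, not something the paper specifies):

```python
import numpy as np

# Rough size of one GenCast-style input state, using the figures quoted above.
lat_points = round(180 / 0.2)            # 900 rows of 0.2-degree grid squares
lon_points = round(360 / 0.2)            # 1800 columns
surface_vars = 6
atmos_vars = 6
pressure_levels = 13

channels = surface_vars + atmos_vars * pressure_levels   # 6 + 78 = 84 fields per grid square
state = np.zeros((channels, lat_points, lon_points), dtype=np.float32)

print(f"{channels} channels x {lat_points} x {lon_points} grid "
      f"= {state.size:,} values (~{state.nbytes / 1e9:.1f} GB in float32)")
```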

It’s possible to make an ensemble forecast by running multiple versions of this in parallel and then integrating the results. Given the amount of hardware Google has at its disposal, the whole process from start to finish is likely to take less than 20 minutes. The source and training data will be placed on the GitHub page for DeepMind’s GraphCast project. Given the relatively low computational requirements, we can probably expect individual academic research teams to start experimenting with it.

Measures of success

DeepMind reports that GenCast dramatically outperforms the best traditional forecasting model. Using a standard benchmark in the field, DeepMind found that GenCast was more accurate than the European model on 97 percent of the tests it used, which checked different output values at different times in the future. In addition, the confidence values, based on the uncertainty obtained from the ensemble, were generally reasonable.

Past AI weather forecasters, having been trained on real-world data, are generally not great at handling extreme weather since it shows up so rarely in the training set. But GenCast did quite well, often outperforming the European model in things like abnormally high and low temperatures and air pressure (one percent frequency or less, including at the 0.01 percentile).

DeepMind also went beyond standard tests to determine whether GenCast might be useful. This research included projecting the tracks of tropical cyclones, an important job for forecasting models. For the first four days, GenCast was significantly more accurate than the European model, and it maintained its lead out to about a week.

One of DeepMind’s most interesting tests was checking the global forecast of wind power output based on information from the Global Powerplant Database. This involved using it to forecast wind speeds at 10 meters above the surface (which is actually lower than where most turbines reside but is the best approximation possible) and then using that number to figure out how much power would be generated. The system beat the traditional weather model by 20 percent for the first two days and stayed in front with a declining lead out to a week.
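
To give a rough sense of that last conversion step, here is a sketch using a generic idealized turbine power curve (the cut-in, rated, and cut-out speeds and the turbine size are assumed values for illustration, not figures from the study or the Global Powerplant Database):

```python
import numpy as np

# Generic idealized turbine power curve: no output below cut-in, a cubic ramp
# up to rated speed, flat at rated output, and zero above cut-out.
CUT_IN, RATED_SPEED, CUT_OUT = 3.0, 12.0, 25.0   # m/s, assumed values
RATED_POWER = 2.0                                 # MW, assumed turbine size

def wind_to_power(speed_ms):
    speed = np.asarray(speed_ms, dtype=float)
    frac = np.clip((speed**3 - CUT_IN**3) / (RATED_SPEED**3 - CUT_IN**3), 0.0, 1.0)
    power = RATED_POWER * frac
    return np.where((speed < CUT_IN) | (speed > CUT_OUT), 0.0, power)

# Example: convert a handful of forecast 10-meter wind speeds into output (MW).
speeds = [2.0, 6.0, 10.0, 14.0, 30.0]
print(dict(zip(speeds, np.round(wind_to_power(speeds), 2))))
```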

The researchers don’t spend much time examining why performance seems to decline gradually for about a week. Ideally, more details about GenCast’s limitations would help inform further improvements, so the researchers are likely thinking about it. In any case, today’s paper marks the second case where taking something akin to a hybrid approach—mixing aspects of traditional forecast systems with AI—has been reported to improve forecasts. And both those cases took very different approaches, raising the prospect that it will be possible to combine some of their features.

Nature, 2024. DOI: 10.1038/s41586-024-08252-9  (About DOIs).

Intel’s second-generation Arc B580 GPU beats Nvidia’s RTX 4060 for $249

Turnover at the top of the company isn’t stopping Intel from launching new products: Today the company is announcing the first of its next-generation B-series Intel Arc GPUs, the Arc B580 and Arc B570.

Both are decidedly midrange graphics cards that will compete with the likes of Nvidia’s GeForce RTX 4060 and AMD’s RX 7600 series, but Intel is pricing them competitively: $249 for a B580 with 12GB of RAM and $219 for a B570 with 10GB of RAM. The B580 launches on December 13, while the B570 won’t be available until January 16.

The two cards are Intel’s first dedicated GPUs based on its next-generation “Battlemage” architecture, a successor to the “Alchemist” architecture used in the A-series cards. Intel’s Core Ultra 200 laptop processors were its first products to ship with Battlemage, though they used an integrated version with fewer of Intel’s Xe cores and no dedicated memory. Both B-series GPUs use silicon manufactured on a 5 nm TSMC process, an upgrade from the 6 nm process used for the A-series; as of this writing, no integrated or dedicated Arc GPUs have been manufactured by one of Intel’s factories.

Both cards use a single 8-pin power connector, at least in Intel’s reference design; Intel is offering a first-party limited-edition version of the B580, while it looks like partners like Asus, ASRock, Gunnir, Maxsun, Onix, and Sparkle will be responsible for the B570.

Compared to the original Arc GPUs, both Battlemage cards should benefit from the work Intel has put into its graphics drivers over the last two years—a combination of performance improvements plus translation layers for older versions of DirectX have all improved Arc’s performance quite a bit in older games since late 2022. Hopefully buyers won’t need to wait months or years to get good performance out of the Battlemage cards.

The new cards also come with XeSS 2, the next-generation version of Intel’s upscaling technology (analogous to DLSS for Nvidia cards and FSR for AMD’s). Like DLSS 3 and FSR 3, one of XeSS 2’s main additions is a frame-generation feature that can interpolate additional frames to insert between the frames that are actually being rendered by the graphics card. These kinds of technologies tend to work best when the cards are already running at a reasonably high frame rate, but when they’re working well, they can lead to smoother-looking gameplay. A related technology, Xe Low Latency, aims to reduce the increase in latency that comes with frame-generation technologies, similar to Nvidia’s Reflex and AMD’s Anti-Lag.

Four desk-organizing gifts you don’t technically need but might very much want

Brother P-Touch Cube printer on a light wood table.

Welcome! Credit: Brother

The drawer-sized label maker that will convert you

My quirky wireless keyboard can keep up to three Bluetooth connections memorized. Can I, a human being, remember which of those three connections corresponds to which device and which number? No, I cannot. That’s why there is a little label on the back of my keyboard, reminding me that Fn+Q is the MacBook, Fn+W is the Chromebook, and so on.

Maybe you would do the same, but you don’t want to commit to a whole-danged label maker. That’s why Brother now makes the P-touch Cube. It offloads the typing and design to a phone or computer you already have and crams the heat transfer printing into a 4.5-inch square by 2.5-inch-thick cube. The Plus version can connect to a computer by USB and has a rechargeable battery and automatic label cutter, while the basic version is a pared-down, smartphone-only affair.

Like a lot of devices, the P-Touch Cube software wants you to do a lot more than you probably need it for. But the thing it does—make labels that improve your life, even if you live in the tiniest apartment—is good enough to forgive some very Brother-ish software.

A light green pillow with three cables magnetically stuck to its top.

Credit: Smartish

Flexible cable management that looks kinda nice

When I’m rushed or stressed or distracted, I let cables run everywhere. When I have a moment but don’t really have a plan, I bundle them up with reusable twist-ties, Velcro, or zip ties. It’s always hard for me to commit to anything more permanent, because a new desk setup, new ideas, or some new piece of hardware is always around the corner.

The gear that can get past my “That’s too permanent” cable-chaos mentality is made of magnets. Specifically, a little cable pillow with magnets inside.

Smartish, maker of good phone cases, makes the Cable Wrangler and Bigger Cable Wrangler in enough colors and finishes that one should be acceptable. It’s a little pad onto which you can place the ends of your cables, or run them through one of the included collars. It can feel silly to have a dedicated magnet pillow for cables, but having and naming the space—”This, here, is where the cables go”—helps me remember not to leave them elsewhere. It’s also great for the cable extenders I use to switch my headphones and mouse between a Mac and a gaming PC.

Ars Technica may earn compensation for sales from links on this post through affiliate programs.

Join us tomorrow for Ars Live: How Asahi Linux ports open software to Apple’s hardware

One of the key differences between Apple’s Macs and the iPhone and iPad is that the Mac can still boot and run non-Apple operating systems. This is a feature that Apple specifically built for the Mac, one of many features meant to ease the transition from Intel’s chips to Apple’s own silicon.

The problem, at least at first, was that alternate operating systems like Windows and Linux didn’t work natively with Apple’s hardware, not least because of missing drivers for basic things like USB ports, GPUs, and power management. Enter the Asahi Linux project, a community-driven effort to make open-source software run on Apple’s hardware.

In just a few years, the team has taken Linux on Apple Silicon from “basically bootable” to “plays native Windows games and sounds great doing it.” And the team’s ultimate goal is to contribute enough code upstream that you no longer need a Linux distribution just for Apple Silicon Macs.

On December 4 at 3:30 pm Eastern (1:30 pm Pacific), Ars Technica Senior Technology Reporter Andrew Cunningham will host a livestreamed YouTube conversation with Asahi Linux Project Lead Hector Martin and Graphics Lead Alyssa Rosenzweig that will cover the project’s genesis and its progress, as well as what the future holds.

US blocks China from foreign exports with even a single US-made chip

But while Commerce Secretary Gina Raimondo said that these new curbs would help prevent “China from advancing its domestic semiconductor manufacturing system” to modernize its military, analysts and “several US officials” told The Post that they pack “far less punch” than the prior two rounds of export controls.

Analysts told The Wall Street Journal that the US took too long to launch the controls, which were drafted around June. As industry insiders weighed in on the restrictions, word got out about the US plans to expand controls. In the months since, analysts said, China had plenty of time to stockpile the now-restricted tech. Applied Materials, for example, saw an eye-popping 86 percent spike in net revenue from products shipped to China “in the nine months ending July 28,” the WSJ reported.

Because of this and other alleged flaws, it’s unclear how effectively Biden’s final attempts to block China from accessing the latest US technologies will work.

Beyond concerns that China had time to stockpile tech it anticipated would be restricted, Gregory Allen, the director at the Wadhwani AI Center at the Center for Strategic and International Studies, told the WSJ that these latest controls “left loopholes that Huawei and Chinese companies could exploit.”

Loopholes include failing to blacklist companies that Huawei regularly uses—with allies and American companies allegedly lobbying to exempt factories or fabs they like, such as ChangXin Memory Technologies Inc., “one of China’s largest memory chipmakers,” The Post noted. They also include failing to restrict older versions of the HBM chips and various chipmaking equipment that China may still be able to easily access, Allen said.

“These controls are weaker than what the United States should have done,” Allen told The Post. “You can make a halfway logical argument that says, ‘Sell everything to China.’ Then you can make a reasonable argument, ‘Sell very little to China.’ But the worst thing you can do is to dramatically signal your intention to cut off China’s access to tech but then have so many loopholes and such bungled implementation that you incur almost all of the costs of the policy with only a fraction of the benefits.”

It’s Black Friday, and here are the best shopping deals we could find

The leaves have turned, the turkey has been eaten, the parades are over, and the football has been watched—the only thing left to do is to try to hide from increasingly uncomfortable family conversations by going out and shopping for things! It’s the holiday tradition that not only makes us feel good, but also (apocryphally) drags the balance sheets of businesses the world over into profitability—hence “Black Friday!”

Our partners in the e-commerce side of the business have spent days assembling massive lists for you all to peruse—lists of home deals and video game deals and all kinds of other things. Does that special someone in your life need, like, a security camera? Or a tablet? Or, uh—(checks list)—some board games? We’ve got all those things and more!

A couple of quick notes: First, we’re going to be updating this list throughout the weekend as things change, so if you don’t see anything that tickles your fancy right now, check back in a few hours! Additionally, although we’re making every effort to keep our prices accurate, deals are constantly shifting around, and an item’s actual price might have drifted from what we list. Caveat emptor and all that.

So, with that out of the way, let’s make like *NSYNC and buy, buy, buy!

Laptop and tablet deals

Headphone deals

The upside-down capacitor in mid-‘90s Macs, proven and documented by hobbyists

Brown notes that the predecessor Mac LC and LC II had the correct connections, as did the LC 475, which uses the same power supply scheme. This makes him “confident that Apple made a boo-boo on the LC III,” or “basically the hardware equivalent of a copy/paste error when you’re writing code.”

Making sure rehabbers don’t make the same mistake

Why was this not noticed earlier, other than a couple forum threads seen by dedicated board rehabbers? There are a few reasons. For one thing, the rail was only used for a serial port or certain expansion card needs, so a capacitor failure, or out-of-spec power output, may not have been noticed. The other bit is that the original capacitor was rated for 16V, so even with -5V across it, it might not have failed, at least while it was relatively fresh. And it would not have failed in quite so spectacular a fashion as to generate stories and myths.

As to whether Apple knew about this but decided against acting on a somewhat obscure fault, one that might never cause real problems? By all means, let us know if you worked at Apple during that time and can clue us in. Ars has emailed Apple with this tremendously relevant question, the day before Thanksgiving, and will update this post with any comment.

By posting his analysis, Brown hopes to provide anyone else re-capping one of these devices with a bright, reflective warning sign to ignore Apple’s markings and install C22 the electrically correct way. Brown, reached by email, said that he heard from another hobbyist that the reverse voltage “would explain why the replacement cap” they installed “blew up.” Some restoration types, like Retro Viator, noticed the problem and fixed it pre-detonation.

Modern rehabbers tend to use tantalum capacitors to replace the fluid-filled kind that probably damaged the board they’re working on. Tantalum tends to react more violently to too much or reverse voltage, Brown wrote me.

Should C22 or other faulty capacitors destroy your LC III board entirely, Brown notes that 68kMLA member max1zzz has made a reverse-engineered full logic board schematic.
