Tech

google-adds-youtube-music-feature-to-end-annoying-volume-shifts

Google adds YouTube Music feature to end annoying volume shifts

Google’s history with music services is almost as convoluted and frustrating as its history with messaging. However, things have gotten calmer (and slower) ever since Google ceded music to the YouTube division. The YouTube Music app has its share of annoyances, to be sure, but it’s getting a long-overdue feature that users have been requesting for ages: consistent volume.

Listening to a single album from beginning to end is increasingly unusual in this age of unlimited access to music. As your playlist wheels from one genre or era to the next, the inevitable vibe shifts can be grating. Different tracks can have wildly different volumes, which can be shocking and potentially damaging to your ears if you’ve got your volume up for a ballad only to be hit with a heavy guitar riff after the break.

The gist of consistent volume is simple: it normalizes volume across tracks so that everything plays at roughly the same level. Consistent volume builds on a feature from the YouTube app called “stable volume.” When Google released stable volume for YouTube, it noted that the feature would continuously adjust volume throughout the video. Because of that, it was disabled for music content on the platform.
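Under the hood, this kind of feature amounts to loudness normalization. Here is a rough Python sketch of the idea, assuming a simple RMS-based measure; real services analyze perceptual loudness (standards like LUFS) rather than raw sample levels, and none of these names come from YouTube’s actual code:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize(tracks, target_rms=0.1):
    """Scale each track so its RMS level matches the target.

    tracks: dict of name -> list of float samples. Returns a new dict.
    """
    out = {}
    for name, samples in tracks.items():
        level = rms(samples)
        gain = target_rms / level if level > 0 else 1.0
        out[name] = [s * gain for s in samples]
    return out

quiet_ballad = [0.01, -0.02, 0.015, -0.01]
loud_riff = [0.8, -0.9, 0.85, -0.7]
leveled = normalize({"ballad": quiet_ballad, "riff": loud_riff})
# Both tracks now sit at roughly the same RMS level.
```

The gain is computed once per track rather than continuously, which matches Google’s stated distinction between consistent volume and the moment-to-moment adjustments of stable volume.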

Google adds YouTube Music feature to end annoying volume shifts Read More »

synology-confirms-that-higher-end-nas-products-will-require-its-branded-drives

Synology confirms that higher-end NAS products will require its branded drives

Popular NAS-maker Synology has confirmed and slightly clarified a policy that appeared on its German website earlier this week: Its “Plus” tier of devices, starting with the 2025 series, will require Synology-branded hard drives for full compatibility, at least at first.

“Synology-branded drives will be needed for use in the newly announced Plus series, with plans to update the Product Compatibility List as additional drives can be thoroughly vetted in Synology systems,” a Synology representative told Ars by email. “Extensive internal testing has shown that drives that follow a rigorous validation process when paired with Synology systems are at less risk of drive failure and ongoing compatibility issues.”

Without a Synology-branded or approved drive, NAS devices that require one could fail to create storage pools and lose volume-wide deduplication and lifespan analysis, Synology’s German press release stated. Similar drive restrictions are already in place for XS Plus and rack-mounted Synology models, though workarounds exist.

Synology also says it will later add a “carefully curated drive compatibility framework” for third-party drives and that users can submit drives for testing and documentation. “Drives that meet Synology’s stringent standards may be validated for use, offering flexibility while maintaining system integrity,” the company said.

Synology confirms that higher-end NAS products will require its branded drives Read More »

lg-tvs’-integrated-ads-get-more-personal-with-tech-that-analyzes-viewer-emotions

LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions

With all this information, ZenVision will group LG TV viewers into highly specified market segments, such as “goal-driven achievers,” “social connectors,” or “emotionally engaged planners,” an LG spokesperson told StreamTV Insider. Zenapse’s website for ZenVision points to other potential market segments, including “digital adopters,” “wellness seekers,” “positive impact & environment,” and “money matters.”

Companies paying to advertise on LG TVs can then target viewers based on the ZenVision-specified market segments and deliver an “emotionally intelligent ad,” as Zenapse’s website puts it.

This type of targeted advertising uses psychographic data to give advertisers more in-depth information about TV viewers than demographic data or even contextual advertising (which shows ads based on what the viewer is watching) can provide. Demographic data gives advertisers viewer information, like location, age, gender, ethnicity, marital status, and income. Psychographic data is supposed to go deeper and allow advertisers to target people based on so-called psychological factors, like personal beliefs, values, and attitudes. As Salesforce explains, “psychographic segmentation delves deeper into their psyche” than relying on demographic data.

“As viewers engage with content, ZenVision’s understanding of a consumer grows deeper, and our… segmentation continually evolves to optimize predictions,” the ZenVision website says.

Getting emotional

LG’s partnership comes as advertisers struggle to appeal to TV viewers’ emotions. Google, for example, attempted to tug at parents’ heartstrings with the now-infamous Dear Sydney ad aired during the 2024 Summer Olympics. Looking to push Gemini, Google hit all the wrong chords with parents, and, after much backlash, pulled the ad.

The partnership also comes as TV OS operators seek new ways to use smart TVs to grow their own advertising businesses and to get people to use TVs to buy stuff.

LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions Read More »

google-adds-veo-2-video-generation-to-gemini-app

Google adds Veo 2 video generation to Gemini app

Google has announced that yet another AI model is coming to Gemini, but this time, it’s more than a chatbot. The company’s Veo 2 video generator is rolling out to the Gemini app and website, giving paying customers a chance to create short video clips with Google’s allegedly state-of-the-art video model.

Veo 2 works like other video generators, including OpenAI’s Sora—you input text describing the video you want, and a Google data center churns through tokens until it has an animation. Google claims that Veo 2 was designed to have a solid grasp of real-world physics, particularly the way humans move. Google’s examples do look good, but presumably that’s why they were chosen.

Prompt: Aerial shot of a grassy cliff onto a sandy beach where waves crash against the shore, a prominent sea stack rises from the ocean near the beach, bathed in the warm, golden light of either sunrise or sunset, capturing the serene beauty of the Pacific coastline.

Veo 2 will be available in the model drop-down, but Google does note that it’s still considering ways to integrate this feature and that the location could therefore change. However, it probably hasn’t appeared for you just yet. Google is starting the rollout today, but it could take several weeks before all Gemini Advanced subscribers get access to Veo 2. Gemini features can take a surprisingly long time to arrive for the bulk of users—for example, it took about a month for Google to make Gemini Live video available to everyone after announcing its release.

When Veo 2 does pop up in your Gemini app, you can provide it with as much detail as you want, which Google says will ensure you have fine control over the eventual video. Veo 2 is currently limited to 8 seconds of 720p video, which you can download as a standard MP4 file. Video generation uses even more processing than your average generative AI feature, so Google has implemented a monthly limit. However, it hasn’t confirmed what that limit is, saying only that users will be notified as they approach it.

Google adds Veo 2 video generation to Gemini app Read More »

4chan-has-been-down-since-monday-night-after-“pretty-comprehensive-own”

4chan has been down since Monday night after “pretty comprehensive own”

Infamous Internet imageboard and wretched hive of scum and villainy 4chan was apparently hacked at some point Monday evening and remains mostly unreachable as of this writing. DownDetector showed reports of outages spiking at about 10:07 pm Eastern time on Monday, and they’ve remained elevated since.

Posters at Soyjack Party, a rival imageboard that began as a 4chan offshoot, claimed responsibility for the hack. But as with all posts on these intensely insular boards, it’s difficult to separate fact from fiction. The thread shows screenshots of what appears to be 4chan’s PHP admin interface, among other images, suggesting extensive access to 4chan’s databases of posts and users.

Security researcher Kevin Beaumont described the hack as “a pretty comprehensive own” that included “SQL databases, source, and shell access.” 404 Media reports that the site ran an outdated version of PHP that could have been used to gain access, along with the phpMyAdmin tool, a common attack vector that is frequently patched for security vulnerabilities. Ars staffers pointed to the presence of long-deprecated and removed functions like mysql_real_escape_string in the screenshots as possible signs of an old, unpatched PHP version.

In other words, there’s a possibility that the hackers have gained pretty deep access to all of 4chan’s data, including site source code and user data.

Some widely shared posts on social media sites have made as-yet-unsubstantiated claims about data leaks from the outage, including the presence of users’ real names, IP addresses, and .edu and .gov email addresses used for registration. Without knowing more about the extent of the hack, reports of the site’s ultimate “death” are probably also premature.

4chan has been down since Monday night after “pretty comprehensive own” Read More »

netflix-plans-to-bring-streaming-into-the-$1-trillion-club-by-2030

Netflix plans to bring streaming into the $1 trillion club by 2030

Netflix doesn’t plan to disclose subscriber counts anymore, but one of WSJ’s anonymous sources said that the streaming leader wants to have 410 million subscribers by 2030. That would require Netflix to add 108.4 million more subscribers than it reported at the end of 2024, or about 21.7 million per year, and expand its global reach. In 2024, Netflix added 41.36 million subscribers, including a record number of new subscribers in Q4 2024.

Netflix plans to release its Q1 2025 earnings report on April 17.

$1 trillion club hopeful

Should Netflix achieve its reported goals, it would be the first to join the $1 trillion club solely through streaming-related business. The club is currently populated mostly by tech brands, including two companies that own Netflix rivals: Apple and Amazon.

Netflix is, by far, the most likely streaming candidate to potentially enter the lucrative club. It’s currently beating all other video-streaming providers, including Amazon Prime Video and Disney+, in terms of revenue and profits. Some streaming businesses, including Apple TV+ and Peacock, still aren’t profitable.

Netflix’s reported pursuit of a $1 trillion market cap exemplifies the meteoric rise of streaming since Netflix launched its streaming service in 2007. As linear TV keeps shrinking, and streaming companies continue learning how to mimic the ads, live TV, and content strategies of their predecessors, the door is open for streaming firms to evolve into some of the world’s most highly valued media entities.

The potential for Netflix to have a trillion-dollar market cap also has notable implications for rivals Apple and Amazon, which both earned membership into the $1 trillion club without their streaming services.

Whether Netflix will reach the goals reported by WSJ is not guaranteed, but it will be interesting to watch how its strategy for hitting that lofty target affects subscribers. With streaming set to be even more central to the viewing of TV shows, movies, and live events, efforts around ads, pricing, and content libraries could shape media consumption as we head toward 2030.

Netflix plans to bring streaming into the $1 trillion club by 2030 Read More »

android-phones-will-soon-reboot-themselves-after-sitting-unused-for-3-days

Android phones will soon reboot themselves after sitting unused for 3 days

A silent update rolling out to virtually all Android devices will make your phone more secure, and all you have to do is not touch it for a few days. The new feature automatically restarts a locked device after a few days of inactivity, helping to protect your personal data. It’s coming as part of a Google Play Services update, though, so there’s nothing you can do to speed along the process.

Google is preparing to release a new update to Play Services (v25.14), which brings a raft of tweaks and improvements to myriad system features. First spotted by 9to5Google, the update was officially released on April 14, but as with all Play Services updates, it could take a week or more to reach all devices. When 25.14 arrives, Android devices will see a few minor improvements, including prettier settings screens, improved connection with cars and watches, and content previews when using Quick Share.

Most importantly, Play Services 25.14 adds a feature that Google describes thusly: “With this feature, your device automatically restarts if locked for 3 consecutive days.”
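The rule Google describes is simple enough to sketch. The Python below is a hypothetical illustration of the stated policy (all names here are invented), not Android’s actual implementation:

```python
from datetime import datetime, timedelta

# Per Google's description: restart if locked for 3 consecutive days.
LOCKED_REBOOT_THRESHOLD = timedelta(days=3)

def should_auto_restart(locked_since, now):
    """Return True once a device has been continuously locked for 3+ days.

    locked_since is when the device was last locked (None if unlocked).
    Unlocking the phone at any point would reset this timer.
    """
    if locked_since is None:
        return False
    return now - locked_since >= LOCKED_REBOOT_THRESHOLD

now = datetime(2025, 4, 18, 12, 0)
assert not should_auto_restart(datetime(2025, 4, 16, 0, 0), now)  # 2.5 days
assert should_auto_restart(datetime(2025, 4, 15, 12, 0), now)     # 3 full days
```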

This is similar to a feature known as Inactivity Reboot that Apple added to the iPhone in iOS 18.1. This actually caused some annoyance among law enforcement officials who believed they had suspects’ phones stored in a readable state, only to find they were rebooting and becoming harder to access due to this feature.

Android phones will soon reboot themselves after sitting unused for 3 days Read More »

an-ars-technica-history-of-the-internet,-part-1

An Ars Technica history of the Internet, part 1


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements

Credit: Collage by Aurich Lawson


In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency was created in 1958 by President Eisenhower in response to the launch of Sputnik. So Taylor was in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers communicated in short bursts and didn’t require pauses the way humans did. So it would be a waste for two computers to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking their own route to avoid congestion.
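The mechanism is easy to demonstrate. In this toy Python sketch (not any real protocol), a message is chopped into numbered packets, scrambled to simulate packets taking different routes, and reassembled at the destination:

```python
import random

def to_packets(message, size=4):
    """Split a message into (sequence number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("LO AND BEHOLD, A NETWORK")
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == "LO AND BEHOLD, A NETWORK"
```

Because each packet carries its own sequence number and (in a real network) destination address, the network is free to route every packet independently, which is exactly what made shared lines practical.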

A simplified diagram of how packet switching works. Credit: Jeremy Reimer

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BBN and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down, and for the same underlying reason: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, while AT&T flat-out said that packet switching would never work on its phone network.

In late 1968, ARPA announced a winner of the bid: Bolt Beranek and Newman (BBN). It seemed like an odd choice. BBN had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BBN employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BBN’s proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BBN, which didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system, but it didn’t really have enough RAM for one anyway. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BBN employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell on Ben Barker, a brilliant undergrad student interning at BBN, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BBN and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BBN could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial-of-service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network. They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in late 1971. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try to fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
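That acknowledge-and-retransmit loop is the core of reliable delivery, and a stop-and-wait version of it fits in a few lines of Python. The link model and names below are invented for illustration; TCP’s real machinery (sliding windows, adaptive timeouts) is far more sophisticated:

```python
import random

def send_reliably(message, link, max_tries=10):
    """Stop-and-wait delivery: keep retransmitting until an ACK arrives.

    link(message) models one round trip over an unreliable network and
    returns "ACK" on success or None if the message (or the ACK) was lost.
    """
    for attempt in range(1, max_tries + 1):
        if link(message) == "ACK":
            return attempt  # number of transmissions it took
    raise TimeoutError(f"no ACK after {max_tries} tries")

random.seed(1)
# Half of all round trips fail, yet the message still gets through.
flaky_link = lambda msg: "ACK" if random.random() > 0.5 else None
tries = send_reliably("LOGIN", flaky_link)
assert 1 <= tries <= 10
```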

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining work, like breaking up and reassembling messages, detecting errors, and retransmitting, would stay in TCP. Thus, in 1978, the protocol officially became known as TCP/IP, and so it has remained.
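The division of labor that split created can be mocked up in a few lines of Python. This is purely illustrative: real headers are packed binary structures, and these field names are simplified stand-ins:

```python
def make_tcp_segment(seq, total, payload):
    """TCP's job: ordering, integrity, and reassembly bookkeeping."""
    return {"seq": seq, "total": total,
            "checksum": sum(payload.encode()) % 256, "payload": payload}

def make_ip_packet(src, dst, segment):
    """IP's job: addressing and routing only; it never inspects the data."""
    return {"src": src, "dst": dst, "data": segment}

segment = make_tcp_segment(seq=0, total=2, payload="LOG")
packet = make_ip_packet("10.0.0.1", "10.0.0.2", segment)

# A gateway needs only src and dst to forward the packet; the endpoints
# use the TCP fields to reorder segments and verify the payload.
assert packet["dst"] == "10.0.0.2"
assert packet["data"]["checksum"] == sum(b"LOG") % 256
```

Keeping the network layer this thin is what later let routers forward traffic without caring what the endpoints were doing with it.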

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, maps easy-to-remember names to a machine’s numerical IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
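The name-to-address mapping is easy to model. The Python below is a toy resolver using made-up hostnames and addresses from the range reserved for documentation (192.0.2.0/24); real DNS is a distributed, hierarchical database rather than a single table:

```python
# Hypothetical entries; 192.0.2.0/24 is reserved for documentation.
DNS_TABLE = {
    "frodo.edu": "192.0.2.10",
    "frodo.gov": "192.0.2.20",
}

def resolve(name):
    """Map a human-friendly hostname to its numeric IPv4 address."""
    try:
        return DNS_TABLE[name]
    except KeyError:
        raise LookupError("NXDOMAIN: " + name)

def top_level_domain(name):
    """The suffix assigned by purpose: .edu, .gov, and so on."""
    return name.rsplit(".", 1)[-1]

assert resolve("frodo.edu") == "192.0.2.10"
assert top_level_domain("frodo.gov") == "gov"
```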

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this seemingly intractable argument, and the ultimate fate of the Internet, were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

An Ars Technica history of the Internet, part 1 Read More »

turbulent-global-economy-could-drive-up-prices-for-netflix-and-rivals

Turbulent global economy could drive up prices for Netflix and rivals


“… our members are going to be punished.”

A scene from BBC’s Doctor Who. Credit: BBC/Disney+

Debate around how much tax US-based streaming services should pay internationally, among other factors, could result in people paying more for subscriptions to services like Netflix and Disney+.

On April 10, the United Kingdom’s Culture, Media and Sport (CMS) Committee reignited calls for a streaming tax on subscription revenue acquired through UK residents. The recommendation came alongside the committee’s 120-page report [PDF] that makes numerous recommendations for how to support and grow Britain’s film and high-end television (HETV) industry.

For the US, the recommendation garnering the most attention is one calling for a 5 percent levy on UK subscriber revenue from streaming video on demand services, such as Netflix. That’s because if streaming services face higher taxes in the UK, costs could be passed onto consumers, resulting in more streaming price hikes. The CMS committee wants money from the levy to support HETV production in the UK and wrote in its report:

The industry should establish this fund on a voluntary basis; however, if it does not do so within 12 months, or if there is not full compliance, the Government should introduce a statutory levy.

Calls for a streaming tax in the UK come after 2024’s 25 percent decrease in spending for UK-produced high-end TV productions and 27 percent decline in productions overall, per the report. Companies like the BBC have said that they lack funds to keep making premium dramas.

In a statement, the CMS committee called for streamers, “such as Netflix, Amazon, Apple TV+, and Disney+, which benefit from the creativity of British producers, to put their money where their mouth is by committing to pay 5 percent of their UK subscriber revenue into a cultural fund to help finance drama with a specific interest to British audiences.” The committee’s report argues that public service broadcasters and independent movie producers are “at risk,” due to how the industry currently works. More investment into such programming would also benefit streaming companies by providing “a healthier supply of [public service broadcaster]-made shows that they can license for their platforms,” the report says.

The Department for Digital, Culture, Media and Sport has said that it will respond to the CMS Committee’s report.

Streaming companies warn of higher prices

In response to the report, a Netflix spokesperson said in a statement shared by the BBC yesterday that the “UK is Netflix’s biggest production hub outside of North America—and we want it to stay that way.” Netflix reportedly claims to have spent billions of pounds in the UK via work with over 200 producers and 30,000 cast and crew members since 2020, per The Hollywood Reporter. In May 2024, Benjamin King, Netflix’s senior director of UK and Ireland public policy, told the CMS committee that the streaming service spends “about $1.5 billion” annually on UK-made content.

Netflix’s statement this week, responding to the CMS Committee’s levy, added:

… in an increasingly competitive global market, it’s key to create a business environment that incentivises rather than penalises investment, risk taking, and success. Levies diminish competitiveness and penalise audiences who ultimately bear the increased costs.

Adam Minns, executive director for the UK’s Association for Commercial Broadcasters and On-Demand Services (COBA), highlighted how a UK streaming tax could impact streaming providers’ content budgets.

“Especially in this economic climate, a levy risks impacting existing content budgets for UK shows, jobs, and growth, along with raising costs for businesses,” he said, per the BBC.

An anonymous source that The Hollywood Reporter described as “close to the matter” said that “Netflix members have already paid the BBC license fee. A levy would be a double tax on them and us. It’s unfair. This is a tariff on success. And our members are going to be punished.”

The anonymous source added: “Ministers have already rejected the idea of a streaming levy. The creation of a Cultural Fund raises more questions than it answers. It also begs the question: Why should audiences who choose to pay for a service be then compelled to subsidize another service for which they have already paid through the license fee. Furthermore, what determines the criteria for ‘Britishness,’ which organizations would qualify for funding … ?”

In May, Mitchel Simmons, Paramount’s VP of EMEA public policy and government affairs, also questioned the benefits of a UK streaming tax when speaking to the CMS committee.

“Where we have seen levies in other jurisdictions on services, we then see inflation in the market. Local broadcasters, particularly in places such as Italy, have found that the prices have gone up because there has been a forced increase in spend and others have suffered as a consequence,” he said at the time.

Tax threat looms large over streaming companies

Interest in the UK putting a levy on streaming services follows other countries recently pushing similar fees onto streaming providers.

Music streaming providers, like Spotify, for example, pay a 1.2 percent tax on streaming revenue made in France. Spotify blamed the tax for a 1.2 percent price hike in the country issued in May. France’s streaming taxes are supposed to go toward the Centre National de la Musique.

Last year, Canada issued a 5 percent tax on Canadian streaming revenue that’s been halted as companies including Netflix, Amazon, Apple, Disney, and Spotify battle it in court.

Lawrence Zhang, head of policy of the Centre for Canadian Innovation and Competitiveness at the Information Technology and Innovation Foundation think tank, has estimated that a 5 percent streaming tax would result in the average Canadian family paying an extra CA$40 annually.
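Zhang’s CA$40 figure is consistent with the levy being passed through to subscribers in full on roughly CA$800 of annual household streaming spending. A back-of-the-envelope check (the CA$800 base is inferred from the estimate, not stated in the source):

```python
# Back-of-envelope check of the estimated household impact of a 5%
# streaming levy, assuming the cost is passed through to subscribers
# in full (an assumption, not a claim from the report).
LEVY_RATE = 0.05
extra_cost = 40.0  # CA$ per family per year, per Zhang's estimate

implied_annual_spend = extra_cost / LEVY_RATE
print(f"Implied annual streaming spend: CA${implied_annual_spend:.0f}")
# -> Implied annual streaming spend: CA$800
```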

A streaming provider group called the Digital Media Association has argued that the Canadian tax “could lead to higher prices for Canadians and fewer content choices.”

“As a result, you may end up paying more for your favourite streaming services and have less control over what you can watch or listen to,” the Digital Media Association’s website says.

Streaming companies hold their breath

Uncertainty around US tariffs and their implications for the global economy has also resulted in streaming companies moving slower than expected regarding new entrants, technologies, mergers and acquisitions, and even business failures, Alan Wolk, co-founder and lead analyst at TVRev, pointed out today. “The rapid-fire nature of the executive orders coming from the White House” has a massive impact on the media industry, he said.

“Uncertainty means that deals don’t get considered, let alone completed,” Wolk mused, noting that the growing stability of the streaming industry overall also contributes to slowing market activity.

For consumers, higher prices for other goods and services could leave smaller budgets for streaming subscriptions. Establishing and growing advertising businesses is already a priority for many US streaming providers. However, stingier customers, less willing to pay for multiple streaming subscriptions, premium tiers, or on-demand titles, are poised to put more pressure on streaming firms’ advertising plans. Simultaneously, advertisers are facing pressures from tariffs, which could result in less money being allocated to streaming ads.

“With streaming platform operators increasingly turning to ad-supported tiers to bolster profitability—rather than just rolling out price increases—this strategy could be put at risk,” Matthew Bailey, senior principal analyst of advertising at Omdia, recently told Wired. He added:

Against this backdrop, I wouldn’t be surprised if we do see some price increases for some streaming services over the coming months.

Streaming service providers are likely to tighten their purse strings, too. As we’ve seen, this can result in price hikes and a smaller, less daring content selection.

Streaming customers may soon be forced to cut back on their subscriptions. The good news is that most streaming viewers are already accustomed to rising prices and have figured out which streaming services align with their needs around affordability, ease of use, content, and reliability. Customers may set higher standards, though, as streaming companies grapple with industry and global changes.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Turbulent global economy could drive up prices for Netflix and rivals Read More »

chrome’s-new-dynamic-bottom-bar-gives-websites-a-little-more-room-to-breathe

Chrome’s new dynamic bottom bar gives websites a little more room to breathe

The Internet might look a bit different on Android soon. Last month, Google announced its intent to make Chrome for Android a more immersive experience by hiding the navigation bar background. The promised edge-to-edge update is now rolling out to devices on Chrome version 135, giving you a touch more screen real estate. However, some websites may also be a bit harder to use.

Moving from button to gesture navigation reduced the amount of screen real estate devoted to the system UI, which leaves more room for apps. Google’s move to a “dynamic bottom bar” in Chrome creates even more space for web content. When this feature shows up, the pages you visit will be able to draw all the way to the bottom of the screen instead of stopping at the navigation area, which Google calls the “chin.”

Chrome edge-to-edge

Credit: Google

As you scroll down a page, Chrome hides the address bar. With the addition of the dynamic bottom bar, the chin also vanishes. The gesture handle itself remains visible, shifting between white and black based on what is immediately behind it to maintain visibility. Unfortunately, this feature will not work if you have chosen to stick with the classic three-button navigation option.

Chrome’s new dynamic bottom bar gives websites a little more room to breathe Read More »

powerful-programming:-bbc-controlled-electric-meters-are-coming-to-an-end

Powerful programming: BBC-controlled electric meters are coming to an end

Two rare tungsten-centered, hand-crafted cooled anode modulators (CAM) are needed to keep the signal going, and while the BBC bought up the global supply of them, they are running out. The service is seemingly on its last two valves and has been telling the public about Long Wave radio’s end for nearly 15 years. Trying to remanufacture the valves is hazardous, as any flaws could cause a catastrophic failure in the transmitters.

BBC Radio 4’s 198 kHz transmitting towers at Droitwich.

BBC Radio 4’s 198 kHz transmitting towers at Droitwich. Credit: Bob Nienhuis (Public domain)

Rebuilding the transmitter, or moving to different, higher frequencies, is not feasible just to serve the very few homes that cannot receive other kinds of lower-power radio or Internet broadcasts, the BBC told The Guardian in 2011. What’s more, keeping Droitwich powered such that it can reach the whole of the UK, including Wales and lower Scotland, requires some 500 kilowatts of power, more than most other BBC transmitter types.

As of January 2025, roughly 600,000 UK customers still use RTS meters to manage their power switching, after 300,000 were switched away in 2024. Utilities and the BBC have agreed that the service will stop working on June 30, 2025, and have pushed to upgrade RTS customers to smart meters.

In a combination of sad reality and rich irony, more than 4 million smart meters in the UK are not working properly. Some have delivered eye-popping charges to their customers, based on estimated bills instead of real readings, like Sir Grayson Perry‘s 39,000 pounds due on 15 simultaneous bills. But many have failed because the UK, like other countries, phased out the 2G and 3G networks older meters relied upon without coordinated transition efforts.

Powerful programming: BBC-controlled electric meters are coming to an end Read More »

oneplus-releases-watch-3-with-inflated-$500-price-tag,-won’t-say-why

OnePlus releases Watch 3 with inflated $500 price tag, won’t say why

watch 3 pricing

Credit: OnePlus

The tariff fees are typically paid on a product’s declared value rather than the retail cost. So a $170 price bump could be close to what the company’s US arm will pay to import the Watch 3 in the midst of a trade war. Many technology firms have attempted to stockpile products in the US ahead of tariffs, but it’s possible OnePlus simply couldn’t do that because it had to fix its typo.

Losing its greatest advantage?

Like past OnePlus wearables, the Watch 3 is a chunky, high-power device with a stainless steel case. It sports a massive 1.5-inch OLED screen, the latest Snapdragon W5 wearable processor, 32GB of storage, and 2GB of RAM. It runs Google’s Wear OS for smart features, but it also has a dialed-back power-saving mode that runs separate RTOS software. This robust hardware adds to the manufacturing cost, which also means higher tariffs now. As it currently stands, the Watch 3 is just too expensive given the competition.

OnePlus has managed to piece together a growing ecosystem of devices, including phones, tablets, earbuds, and, yes, smartwatches. With a combination of competitive prices and high-end specs, it successfully established a foothold in the US market, something few Chinese OEMs have accomplished.

The implications go beyond wearables. OnePlus also swings for the fences with its phone hardware, using the best Arm chips and expensive, high-end OLED panels. OnePlus tends to price its phones lower than similar Samsung and Google hardware, so it doesn’t make as much on each phone. If the tariffs stick, that strategy could be unviable.

OnePlus releases Watch 3 with inflated $500 price tag, won’t say why Read More »