Artificial Intelligence


Redditor accidentally reinvents discarded ’90s tool to escape today’s age gates


The ’90s called. They want their flawed age verification methods back.

Credit: Aurich Lawson | Getty Images

Back in the mid-1990s, when The Net was among the top box office draws and Americans were just starting to flock online in droves, kids had to swipe their parents’ credit cards or find a fraudulent number online to access adult content on the web. But today’s kids—even in states with the strictest age verification laws—know they can just use Google.

Last month, a study analyzing the relative popularity of Google search terms found that age verification laws shift users’ search behavior. It’s impossible to tell if the shift represents young users attempting to circumvent the child-focused laws or adult users who aren’t the actual target of the laws. But overall, the study found, enforcement pushes users away from popular adult sites that comply with the laws: nearly half instead search for a noncompliant rival (48 percent), while others turn to virtual private network (VPN) services (34 percent), which can mask a location and circumvent age checks on preferred sites.

“Individuals adapt primarily by moving to content providers that do not require age verification,” the study concluded.

Although the Google Trends data prevented researchers from analyzing trends by particular age groups, the findings help confirm critics’ fears that age verification laws “may be ineffective, potentially compromise user privacy, and could drive users toward less regulated, potentially more dangerous platforms,” the study said.

The authors warn that lawmakers are not relying enough on evidence-backed policy evaluations to truly understand the consequences of circumvention strategies before passing laws. Internet law expert Eric Goldman recently warned in an analysis of age-estimation tech available today that this situation creates a world in which some kids are likely to be harmed by the laws designed to protect them.

Goldman told Ars that all of the age check methods carry the same privacy and security flaws, concluding that technology alone can’t solve this age-old societal problem. And logic-defying laws that push for them could end up “dramatically” reshaping the Internet, he warned.

Zeve Sanderson, a co-author of the Google Trends study, told Ars that “if you’re a policymaker, in addition to being potentially nervous about the more dangerous content, it’s also about just benefiting a noncompliant firm.”

“You don’t want to create a regulatory environment where noncompliance is incentivized or they benefit in some way,” Sanderson said.

Sanderson’s study pointed out that search data is only part of the picture. Some users may be using VPNs and accessing adult sites through direct URLs rather than through search. Others may rely on social media to find adult content, a 2025 conference paper noted, “easily” bypassing age checks on the largest platforms. VPNs remain the most popular circumvention method, a 2024 article in the International Journal of Law, Ethics, and Technology confirmed, “and yet they tend to be ignored or overlooked by statutes despite their popularity.”

While kids are ducking age gates and likely putting their sensitive data at greater risk, adult backlash may be peaking over the red wave of age-gating laws already blocking adults from visiting popular porn sites in several states.

Some states controversially began requiring ID checks to access adult content, which prompted Pornhub owner Aylo to swiftly block access to its sites in those states. Pornhub instead advocates for device-based age verification, which it claims is a safer choice.

Aylo’s campaign has seemingly won over some states that either explicitly recommend device-based age checks or allow platforms to adopt whatever age check method they deem “reasonable.” Other methods could include app store-based age checks, algorithmic age estimation (based on a user’s web activity), face scans, or even tools that guess users’ ages based on hand movements.

On Reddit, adults have spent the past year debating the least intrusive age verification methods, as it appears inevitable that adult content will stay locked down, and they dread a future where more and more adult sites might ask for IDs. Additionally, critics have warned that showing an ID magnifies the risk of users publicly exposing their sexual preferences if a data breach or leak occurs.

To avoid that fate, at least one Redditor has attempted to reinvent the earliest age verification method, promoting a resurgence of credit card-based age checks that society discarded as unconstitutional in the early 2000s.

Under those systems, an entire industry of age verification companies emerged, selling passcodes to access adult sites for a supposedly nominal fee. The logic was simple: Only adults could obtain credit cards, so only adults could buy passcodes with them.

If “a person buys, for a nominal fee, a randomly generated passcode not connected to them in any way” to access adult sites, one Redditor suggested about three months ago, “there won’t be any way to tie the individual to that passcode.”

“This could satisfy the requirement to keep stuff out of minors’ hands,” the Redditor wrote in a thread asking how any site featuring sexual imagery could hypothetically comply with US laws. “Maybe?”

Several users rushed to educate the Redditor about the history of age checks. Those grasping for purely technology-based solutions today could be propping up the next industry flourishing from flawed laws, they said.

And, of course, since ’90s kids easily ducked those age gates, too, history shows why investing millions to build the latest and greatest age verification systems probably remains a fool’s errand after all these years.

The cringey early history of age checks

The earliest age verification systems were born out of Congress’s “first attempt to outlaw pornography online,” the LA Times reported. That attempt culminated in the Communications Decency Act of 1996.

Although the law was largely overturned a year later, the million-dollar age verification industry was already entrenched, partly due to its intriguing business model. These companies didn’t charge adult sites any fee to add age check systems—which required little technical expertise to implement—and instead shared a big chunk of their revenue with porn sites that opted in. Some sites got 50 percent of revenues, estimated in the millions, simply for adding the functionality.

The age check business was apparently so lucrative that in 2000, one adult site, which was sued for distributing pornographic images of children, pushed fans to buy subscriptions to its preferred service as a way of helping to fund its defense, Wired reported. “Please buy an Adult Check ID, and show your support to fight this injustice!” the site urged users. (The age check service promptly denied any association with the site.)

In a sense, the age check industry incentivized adult sites’ growth, an American Civil Liberties Union attorney told the LA Times in 1999. In turn, that fueled further growth in the age verification industry.

Some services made their link to adult sites obvious, like Porno Press, which charged a one-time fee of $9.95 to access affiliated adult sites, a Congressional filing noted. But many others tried to mask the link, opting for names like PayCom Billing Services, Inc. or CCBill, as Forbes reported, perhaps enticing more customers by drawing less attention on a credit card statement. Other firms had names like Adult Check, Mancheck, and Adult Sights, Wired reported.

Of these firms, the biggest and most successful was Adult Check. At its peak popularity in 2001, the service boasted 4 million customers willing to pay “for the privilege of ogling 400,000 sex sites,” Forbes reported.

At the head of the company was Laith P. Alsarraf, the CEO of the Adult Check service provider Cybernet Ventures.

Alsarraf testified to Congress several times, becoming a go-to expert witness for lawmakers behind the 1998 Child Online Protection Act (COPA). Like the version of the CDA that prompted it, this act was ultimately deemed unconstitutional. And some judges and top law enforcement officers defended Alsarraf’s business model with Adult Check in court—insisting that it didn’t impact adult speech and “at most” posed a “modest burden” that was “outweighed by the government’s compelling interest in shielding minors” from adult content.

But his apparent conflicts of interest also drew criticism. One judge warned in 1999 that “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection,” the American Civil Liberties Union (ACLU) noted.

Summing up the seeming conflict, Ann Beeson, an ACLU lawyer, told the LA Times, “the government wants to shut down porn on the Net. And yet their main witness is this guy who makes his money urging more and more people to access porn on the Net.”

’90s kids dodged Adult Check age gates

Adult Check’s subscription costs varied, but the service predictably got more expensive as its popularity spiked. In 1999, customers could snag a “lifetime membership” for $76.95 or else fork over $30 every two years or $20 annually, the LA Times reported. Those were good deals compared to the significantly higher costs documented in the 2001 Forbes report, which noted a three-month package was available for $20, or users could pay $20 monthly to access supposedly premium content.

Among Adult Check’s customers were apparently some savvy kids who snuck through the cracks in the system. In various threads debating today’s laws, several Redditors have claimed that they used Adult Check as minors in the ’90s, either admitting to stealing a parent’s credit card or sharing age-authenticated passcodes with friends.

“Adult Check? I remember signing up for that in the mid-late 90s,” one commenter wrote in a thread asking if anyone would ever show ID to access porn. “Possibly a minor friend of mine paid for half the fee so he could use it too.”

“Those years were a strange time,” the commenter continued. “We’d go see tech-suspense-horror-thrillers like The Net and Disclosure where the protagonist has to fight to reclaim their lives from cyberantagonists, only to come home to send our personal information along with a credit card payment so we could look at porn.”

“LOL. I remember paying for the lifetime package, thinking I’d use it for decades,” another commenter responded. “Doh…”

Adult Check thrived even without age check laws

Sanderson’s study noted that today, minors’ “first exposure [to adult content] typically occurs between ages 11–13,” which is “substantially earlier than pre-Internet estimates.” Kids seeking out adult content may be in a period of heightened risk-taking or lack self-control, while others may be exposed without ever seeking it out. Some studies suggest that kids who are more likely to seek out adult content could struggle with lower self-esteem, emotional problems, body image concerns, or depressive symptoms. These potential negative associations with adolescent exposure to porn have long been the basis for lawmakers’ fight to keep the content away from kids—and even the biggest publishers today, like Pornhub, agree that it’s a worthy goal.

After parents got wise to ’90s kids dodging age gates, pressure predictably mounted on Adult Check to solve the problem, despite Adult Check consistently admitting that its system wasn’t foolproof. Alsarraf claimed that Adult Check developed “proprietary” technology to detect when kids were using credit cards or when multiple kids were attempting to use the same passcode at the same time from different IP addresses. He also claimed that Adult Check could detect stolen credit cards, bogus card numbers, card numbers “posted on the Internet,” and other fraud.

Meanwhile, the LA Times noted, Cybernet Ventures pulled in an estimated $50 million in 1999, ensuring that the CEO could splurge on a $690,000 house in Pasadena and a $100,000 Hummer. Although Adult Check was believed to be his most profitable venture at that time, Alsarraf told the LA Times that he wasn’t really invested in COPA passing.

“I know Adult Check will flourish,” Alsarraf said, “with or without the law.”

And he was apparently right. By 2001, subscriptions banked an estimated $320 million.

After the CDA and COPA were blocked, “many website owners continue to use Adult Check as a responsible approach to content accessibility,” Alsarraf testified.

While adult sites were likely just in it for the paychecks—which reportedly were dependably delivered—he positioned this ongoing growth as fueled by sites voluntarily turning to Adult Check to protect kids and free speech. “Adult Check allows a free flow of ideas and constitutionally protected speech to course through the Internet without censorship and unreasonable intrusion,” Alsarraf said.

“The Adult Check system is the least restrictive, least intrusive method of restricting access to content that requires minimal cost, and no parental technical expertise and intervention: It does not judge content, does not inhibit free speech, and it does not prevent access to any ideas, word, thoughts, or expressions,” Alsarraf testified.

Britney Spears aided Adult Check’s downfall

Adult Check’s downfall ultimately came in part thanks to Britney Spears, Wired reported in 2002. Spears went from Mickey Mouse Club child star to the “Princess of Pop” at 16 years old with her hit “Baby One More Time” in 1999, the same year that Adult Check rose to prominence.

Today, Spears is well-known for her activism, but in the late 1990s and early 2000s, she was one of the earliest victims of fake online porn.

Spears submitted documents in a lawsuit raised by the publisher of a porn magazine called Perfect 10. The publisher accused Adult Check of enabling the infringement of its content featured on the age check provider’s partner sites, and Spears’ documents helped prove that Adult Check was also linking to “non-existent nude photos,” allegedly in violation of unfair competition laws. The case was an early test of online liability, and Adult Check seemingly learned the hard way that the courts weren’t on its side.

That suit prompted an injunction blocking Adult Check from partnering with sites promoting supposedly illicit photos of “models and celebrities,” which the company said was no big deal because such sites comprised only about 6 percent of its business.

However, after losing the lawsuit in 2004, Adult Check’s reputation took a hit, and it fell out of the pop lexicon. Although Cybernet Ventures continued to exist, Adult Check screening was dropped from sites, as it was no longer considered the gold standard in age verification. Perhaps more importantly, it was no longer required by law.

But although millions of subscribers kept Adult Check in business for years, not everybody in the ’90s bought into the company’s claims that it was protecting kids from porn. Some critics said it only provided a veneer of online safety without meaningfully impacting kids. Most of the country—more than 250 million US residents—never subscribed.

“I never used Adult Check,” one Redditor said in a thread pondering whether age gate laws might increase the risks of government surveillance. “My recollection was that it was an untrustworthy scam and unneeded barrier for the theater of legitimacy.”

Alsarraf keeps a lower profile these days and did not respond to Ars’ request to comment.

The rise and fall of Adult Check may have prevented more legally viable age verification systems from gaining traction. The ACLU argued that its popularity trampled the momentum of the “least restrictive” method for age checks available in the ’90s, a system called the Platform for Internet Content Selection (PICS).

Based on rating and filtering technology, PICS allowed content providers or third-party interest groups to create private rating systems so that “individual users can then choose the rating system that best reflects their own values, and any material that offends them will be blocked from their homes.”
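To make that architecture concrete, here is a minimal Python sketch of the idea, assuming hypothetical rating services and category levels (the real PICS specification defined its own label syntax, which is not reproduced here): content carries machine-readable labels, the user picks which rating system to trust and sets acceptable levels, and the client blocks anything that exceeds them.

```python
# Illustrative sketch of PICS-style client-side filtering.
# The service names, categories, and levels are invented for the example;
# this is not the actual PICS label syntax.

# Hypothetical labels published for one page by two rating services
PAGE_LABELS = {
    "rating-service-a": {"nudity": 3, "violence": 0},
    "rating-service-b": {"nudity": 2, "violence": 1},
}

# The user's chosen rating system and the maximum levels they'll accept
USER_POLICY = {
    "trusted_service": "rating-service-a",
    "max_levels": {"nudity": 1, "violence": 2},
}

def should_block(page_labels: dict, policy: dict) -> bool:
    """Block the page if its label from the trusted service exceeds any limit."""
    label = page_labels.get(policy["trusted_service"])
    if label is None:
        return True  # a cautious policy blocks unlabeled content
    return any(
        label.get(category, 0) > limit
        for category, limit in policy["max_levels"].items()
    )

print(should_block(PAGE_LABELS, USER_POLICY))  # True: nudity level 3 > allowed 1
```

Because the filtering decision lives entirely on the user’s side, different households could apply different rating systems to the same content, which is what made PICS the “least restrictive” option in the ACLU’s view.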

However, like all age check systems, PICS was also criticized as being imperfect. Legal scholar Lawrence Lessig called it “the devil” because “it allows censorship at any point on the chain of distribution” of online content.

Although the age verification technology has changed, today’s lawmakers are stuck in the same debate decades later, with no perfect solutions in sight.

SCOTUS to rule on constitutionality of age gate laws

This summer, the Supreme Court will decide whether a Texas law blocking minors’ access to porn is constitutional. The decision could either stunt the momentum or strengthen the backbone of nearly 20 laws in red states across the country seeking to age-gate the Internet.

For privacy advocates opposing the laws, the SCOTUS ruling feels like a sink-or-swim moment for age gates, depending on which way the court swings. And it will come just as blue states like Colorado have recently begun pushing for age gates, too. Meanwhile, other laws increasingly seek to safeguard kids’ privacy and prevent social media addiction by also requiring age checks.

Since the 1990s, the US has debated how to best keep kids away from harmful content without trampling adults’ First Amendment rights. And while cruder credit card-based systems like Adult Check are no longer seen as viable, it’s clear that for lawmakers today, technology is still viewed as both the problem and the solution.

While lawmakers claim that the latest technology makes it easier than ever to access porn, advancements like digital IDs, device-based age checks, or app store age checks seem to signal salvation, making it easier to digitally verify user ages. And some artificial intelligence solutions have likely made lawmakers’ dreams of age-gating the Internet appear even more within reach.

Critics have condemned age gates as unconstitutionally limiting adults’ access to legal speech, at the furthest extreme accusing conservatives of seeking to censor all adult content online or expand government surveillance by tracking people’s sexual identity. (Goldman noted that “Russell Vought, an architect of Project 2025 and President Trump’s Director of the Office of Management and Budget, admitted that he favored age authentication mandates as a ‘back door’ way to censor pornography.”)

Ultimately, SCOTUS could end up deciding if any kind of age gate is ever appropriate. The court could perhaps rule that strict scrutiny, which requires a narrowly tailored solution to serve a compelling government interest, must be applied, potentially ruling out all of lawmakers’ suggested strategies. Or the court could decide that strict scrutiny applies but age checks are narrowly tailored. Or it could go the other way and rule that strict scrutiny does not apply, so all state lawmakers need to show is that their basis for requiring age verification is rationally connected to their interest in blocking minors from adult content.

Age verification remains flawed, experts say

If there’s anything the ’90s can teach lawmakers about age gates, it’s that creating an age verification industry dependent on adult sites will only incentivize the creation of more adult sites that benefit from the new rules. Back then, when age verification systems increased sites’ revenues, compliant sites were rewarded, but in today’s climate, it’s the noncompliant sites that stand to profit by not authenticating ages.

Sanderson’s study noted that Louisiana “was the only state that implemented age verification in a manner that plausibly preserved a user’s anonymity while verifying age,” which is why Pornhub didn’t block the state over its age verification law. But other states that Pornhub blocked passed copycat laws that “tended to be stricter, either requiring uploads of an individual’s government identification,” methods requiring providing other sensitive data, “or even presenting biometric data such as face scanning,” the study noted.

The technology continues evolving as the debate rages on. Some of the most popular platforms and biggest tech companies have been testing new age estimation methods this year. Notably, Discord is testing out face scans in the United Kingdom and Australia, and both Meta and Google are testing technology to supposedly detect kids lying about their ages online.

But a solution has not yet been found as parents and their lawyers circle social media companies they believe are harming their kids. In fact, the unreliability of the tech remains an issue for Meta, which is perhaps the most motivated to find a fix, having long faced immense pressure to improve child safety on its platforms. Earlier this year, Meta had to yank its age detection tool after the “measure didn’t work as well as we’d hoped and inadvertently locked out some parents and guardians who shared devices with their teens,” the company said.

On April 21, Meta announced that it started testing the tech in the US, suggesting the flaws were fixed, but Meta did not directly respond to Ars’ request to comment in more detail on updates.

Two years ago, Ash Johnson, a senior policy manager at the nonpartisan nonprofit think tank the Information Technology and Innovation Foundation (ITIF), urged Congress to “support more research and testing of age verification technology,” saying that the government’s last empirical evaluation was in 2014. She noted then that “the technology is not perfect, and some children will break the rules, eventually slipping through the safeguards,” but that lawmakers need to understand the trade-offs of advocating for different tech solutions or else risk infringing user privacy.

More research is needed, Johnson told Ars, while Sanderson’s study suggested that regulators should also conduct circumvention research or be stuck with laws that have a “limited effectiveness as a standalone policy tool.”

For example, while AI solutions are increasingly accurate—and, in one Facebook survey, overwhelmingly more popular with users, Goldman’s analysis noted—the tech still struggles to differentiate between a 17-year-old and an 18-year-old.

Like Aylo, ITIF recommends device-based age authentication as the least restrictive method, Johnson told Ars. Perhaps the biggest issue with that option, though, is that kids may have an easy time accessing adult content on devices shared with parents, Goldman noted.

Not sharing Johnson’s optimism, Goldman wrote that “there is no ‘preferred’ or ‘ideal’ way to do online age authentication.” Even a perfect system that accurately authenticates age every time would be flawed, he suggested.

“Rather, they each fall on a spectrum of ‘dangerous in one way’ to ‘dangerous in a different way,'” he wrote, concluding that “every solution has serious privacy, accuracy, or security problems.”

Kids at “grave risk” from uninformed laws

As a “burgeoning” age verification industry swells, Goldman wants to see more earnest efforts from lawmakers to “develop a wider and more thoughtful toolkit of online child safety measures.” They could start, he suggested, by consistently defining minors in laws so it’s clear who is being regulated and what access is being restricted. They could then provide education to parents and minors to help them navigate online harms.

Without such careful consideration, Goldman predicts a dystopian future prompted by age verification laws. If SCOTUS endorses them, users could become so accustomed to age gates that they start entering sensitive information into various web platforms without a second thought. Even the government knows that would be a disaster, Goldman said.

“Governments around the world want people to think twice before sharing sensitive biometric information due to the information’s immutability if stolen,” Goldman wrote. “Mandatory age authentication teaches them the opposite lesson.”

Goldman recommends that lawmakers start seeking an information-based solution to age verification problems rather than depending on tech to save the day.

“Treating the online age authentication challenges as purely technological encourages the unsupportable belief that its problems can be solved if technologists ‘nerd harder,'” Goldman wrote. “This reductionist thinking is a categorical error. Age authentication is fundamentally an information problem, not a technology problem. Technology can help improve information accuracy and quality, but it cannot unilaterally solve information challenges.”

Lawmakers could potentially minimize risks to kids by only verifying age when someone tries to access restricted content or “by compelling age authenticators to minimize their data collection” and “promptly delete any highly sensitive information” collected. That likely wouldn’t stop some vendors from collecting or retaining data anyway, Goldman suggested. But it could be a better standard to protect users of all ages from inevitable data breaches, since we know that “numerous authenticators have suffered major data security failures that put authenticated individuals at grave risk.”

“If the policy goal is to protect minors online because of their potential vulnerability, then forcing minors to constantly decide whether or not to share highly sensitive information with strangers online is a policy failure,” Goldman wrote. “Child safety online needs a whole-of-society response, not a delegate-and-pray approach.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


OpenAI rolls back update that made ChatGPT a sycophantic mess

In search of good vibes

OpenAI, along with competitors like Google and Anthropic, is trying to build chatbots that people want to chat with. So, designing the model’s apparent personality to be positive and supportive makes sense—people are less likely to use an AI that comes off as harsh or dismissive. For lack of a better word, it’s increasingly about vibemarking.

When Google revealed Gemini 2.5, the team crowed about how the model topped the LM Arena leaderboard, which lets people choose between two different model outputs in a blinded test. The models people like more end up at the top of the list, suggesting they are more pleasant to use. Of course, people can like outputs for different reasons—maybe one is more technically accurate, or the layout is easier to read. But overall, people like models that make them feel good. The same is true of OpenAI’s internal model tuning work, it would seem.
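LM Arena’s rankings are built from these blinded votes using an Elo-style rating scheme; the site’s exact statistical model has evolved, so treat the Python below as a simplified sketch with invented model names and a standard K-factor. Each vote nudges the winner’s rating up and the loser’s down, more so when the result is surprising.

```python
# Simplified Elo-style sketch of how blinded pairwise votes can become a
# leaderboard. Model names and the K-factor are illustrative.

def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Standard Elo update: surprising wins move ratings more."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    return r_winner + k * (1.0 - expected_win), r_loser - k * (1.0 - expected_win)

ratings = {"model_a": 1000.0, "model_b": 1000.0}

# Suppose voters prefer model_a in three blinded head-to-head matchups:
for _ in range(3):
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"]
    )

print(ratings)  # model_a ends up above model_b on the "leaderboard"
```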

An example of ChatGPT’s overzealous praise.

Credit: /u/Talvy


It’s possible this pursuit of good vibes is pushing models to display more sycophantic behaviors, which is a problem. Anthropic’s Alex Albert has cited this as a “toxic feedback loop.” An AI chatbot telling you that you’re a world-class genius who sees the unseen might not be damaging if you’re just brainstorming. However, the model’s unending praise can lead people who are using AI to plan business ventures or, heaven forbid, enact sweeping tariffs, to be fooled into thinking they’ve stumbled onto something important. In reality, the model has just become so sycophantic that it loves everything.

The constant pursuit of engagement has been a detriment to numerous products in the Internet era, and it seems generative AI is not immune. OpenAI’s GPT-4o update is a testament to that, but hopefully, this can serve as a reminder for the developers of generative AI that good vibes are not all that matters.


Mike Lindell’s lawyers used AI to write brief—judge finds nearly 30 mistakes

A lawyer representing MyPillow and its CEO Mike Lindell in a defamation case admitted using artificial intelligence in a brief that has nearly 30 defective citations, including misquotes and citations to fictional cases, a federal judge said.

“[T]he Court identified nearly thirty defective citations in the Opposition. These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist,” US District Judge Nina Wang wrote in an order to show cause Wednesday.

Wang ordered attorneys Christopher Kachouroff and Jennifer DeMaster to show cause as to why the court should not sanction the defendants, law firm, and individual attorneys. Kachouroff and DeMaster also have to explain why they should not be referred to disciplinary proceedings for violations of the rules of professional conduct.

Kachouroff and DeMaster, who are defending Lindell against a lawsuit filed by former Dominion Voting Systems employee Eric Coomer, both signed the February 25 brief with the defective citations. Kachouroff, representing defendants as lead counsel, admitted using AI to write the brief at an April 21 hearing, the judge wrote. The case is in the US District Court for the District of Colorado.

“Time and time again, when Mr. Kachouroff was asked for an explanation of why citations to legal authorities were inaccurate, he declined to offer any explanation, or suggested that it was a ‘draft pleading,'” Wang wrote. “Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence.”


Trump orders Ed Dept to make AI a national priority while plotting agency’s death

Trump pushes for industry involvement

It seems clear that Trump’s executive order was a reaction to China’s announcement about AI education reforms last week, as Reuters reported. Elsewhere, Singapore and Estonia have laid out their AI education initiatives, Forbes reported, indicating that AI education is increasingly considered critical to any nation’s success.

Trump’s vision for the US requires training teachers and students about what AI is and what it can do. He offers no new appropriations to fund the initiative; instead, he directs a new AI Education Task Force to find existing funding to cover both research into how to implement AI in education and the resources needed to deliver on the executive order’s promises.

Although AI advocates applauded Trump’s initiative, the executive order’s vagueness makes it uncertain how AI education tools will be assessed as Trump pushes for AI to be integrated into “all subject areas.” Using AI in certain educational contexts could also disrupt learning by confabulating misinformation, a concern that shaped the Biden administration’s more cautious approach to AI education initiatives.

Trump also seems to push for much more private sector involvement than Biden did.

The order recommended that education institutions collaborate with industry partners and other organizations to “collaboratively develop online resources focused on teaching K–12 students foundational AI literacy and critical thinking skills.” These partnerships will be announced on a “rolling basis,” the order said. It also pushed students and teachers to partner with industry for the Presidential AI Challenge to foster collaboration.

For Trump’s AI education plan to work, he will seemingly need the DOE to stay intact. However, so far, Trump has not acknowledged this tension. In March, he ordered the DOE to dissolve, with power returned to states to ensure “the effective and uninterrupted delivery of services, programs, and benefits on which Americans rely.”

Were that to happen, at least 27 states and Puerto Rico—which EdWeek reported have already laid out their own AI education guidelines—might push back, using their power to control federal education funding to pursue their own AI education priorities and potentially messing with Trump’s plan.


Perplexity will come to Moto phones after exec testified Google limited access

Shevelenko was also asked about Chrome, which the DOJ would like to force Google to sell. Echoing an OpenAI executive’s testimony from Monday, Shevelenko confirmed that Perplexity would also be interested in buying the browser from Google.

Motorola has all the AI

There were some vague allusions during the trial that Perplexity would come to Motorola phones this year, but we didn’t know just how soon that was. With the announcement of its 2025 Razr devices, Moto has confirmed a much more expansive set of AI features. Parts of the Motorola AI experience are powered by Gemini, Copilot, Meta, and yes, Perplexity.

While Gemini gets top billing as the default assistant app, other firms have wormed their way into different parts of the software. Perplexity’s app will come preloaded for anyone who buys the new Razrs, and owners will also get three free months of Perplexity Pro. This is the first time Perplexity has had a smartphone distribution deal, but it won’t be shown prominently on the phone. When you start up a Motorola device, it will still look like a Google playground.

While it’s not the default assistant, Perplexity is integrated into the Moto AI platform. The new Razrs will proactively suggest you perform an AI search when accessing certain features like the calendar or browsing the web under the banner “Explore with Perplexity.” The Perplexity app has also been optimized to work with the external screen on Motorola’s foldables.

Moto AI also has elements powered by other AI systems. For example, Microsoft Copilot will appear in Moto AI with an “Ask Copilot” option. And Meta’s Llama model powers a Moto AI feature called Catch Me Up, which summarizes notifications from select apps.

It’s unclear why Motorola leaned on four different AI providers for a single phone, but it probably helps that all these companies are desperate to entice users and bulk up their market share. Perplexity confirmed that no money changed hands in this deal—it’s on Moto phones simply to acquire more users. That might be tough with Gemini getting priority placement, though.


Trump can’t keep China from getting AI chips, TSMC suggests

“Despite TSMC’s best efforts to comply with all relevant export control and sanctions laws and regulations, there is no assurance that its business activities will not be found incompliant with export control laws and regulations,” TSMC said.

Further, “if TSMC or TSMC’s business partners fail to obtain appropriate import, export or re-export licenses or permits or are found to have violated applicable export control or sanctions laws, TSMC may also be adversely affected, through reputational harm as well as other negative consequences, including government investigations and penalties resulting from relevant legal proceedings,” TSMC warned.

Trump’s tariffs may end TSMC’s “tariff-proof” era

TSMC is thriving despite years of tariffs and export controls, its report said, with at least one analyst suggesting that, so far, the company appears “somewhat tariff-proof.” However, all of that could be changing fast, as “US President Donald Trump announced in 2025 an intention to impose more expansive tariffs on imports into the United States,” TSMC said.

“Any tariffs imposed on imports of semiconductors and products incorporating chips into the United States may result in increased costs for purchasing such products, which may, in turn, lead to decreased demand for TSMC’s products and services and adversely affect its business and future growth,” TSMC said.

And if TSMC’s business is rattled by escalations in the US-China trade war, TSMC warned, that risks disrupting the entire global semiconductor supply chain.

Trump’s semiconductor tariff plans remain uncertain. About a week ago, Trump claimed the rates would be unveiled “over the next week,” Reuters reported, which means they could be announced any day now.


Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia

In a Black Mirror-esque turn, some cash-strapped actors who didn’t fully understand the consequences are regretting selling their likenesses to be used in AI videos that they consider embarrassing, damaging, or harmful, AFP reported.

Among them is a 29-year-old New York-based actor, Adam Coy, who licensed rights to his face and voice to a company called MCM for one year for $1,000 without thinking, “am I crossing a line by doing this?” His partner’s mother later found videos where he appeared as a doomsayer predicting disasters, he told the AFP.

South Korean actor Simon Lee’s AI likeness was similarly used to spook naïve Internet users but in a potentially more harmful way. He told the AFP that he was “stunned” to find his AI avatar promoting “questionable health cures on TikTok and Instagram,” feeling ashamed to have his face linked to obvious scams.

As AI avatar technology improves, the temptation to license likenesses will likely grow. One of the most successful companies that’s recruiting AI avatars, UK-based Synthesia, doubled its valuation to $2.1 billion in January, CNBC reported. And just last week, Synthesia struck a $2 billion deal with Shutterstock that will make its AI avatars more human-like, The Guardian reported.

To ensure that actors are incentivized to license their likenesses, Synthesia also recently launched an equity fund. According to the company, actors behind the most popular AI avatars or featured in Synthesia marketing campaigns will be granted options in “a pool of our company shares” worth $1 million.

“These actors will be part of the program for up to four years, during which their equity awards will vest monthly,” Synthesia said.

For actors, selling their AI likeness seems quick and painless—and perhaps increasingly more lucrative. All they have to do is show up and make a bunch of different facial expressions in front of a green screen, then collect their checks. But Alyssa Malchiodi, a lawyer who has advocated on behalf of actors, told the AFP that “the clients I’ve worked with didn’t fully understand what they were agreeing to at the time,” blindly signing contracts with “clauses considered abusive,” even sometimes granting “worldwide, unlimited, irrevocable exploitation, with no right of withdrawal.”


Google created a new AI model for talking to dolphins

Dolphins are generally regarded as some of the smartest creatures on the planet. Research has shown they can cooperate, teach each other new skills, and even recognize themselves in a mirror. For decades, scientists have attempted to make sense of the complex collection of whistles and clicks dolphins use to communicate. Researchers might make a little headway on that front soon with the help of Google’s open AI model and some Pixel phones.

Google has been finding ways to work generative AI into everything else it does, so why not its collaboration with the Wild Dolphin Project (WDP)? This group has been studying dolphins since 1985 using a non-invasive approach to track a specific community of Atlantic spotted dolphins. The WDP creates video and audio recordings of dolphins, along with correlating notes on their behaviors.

One of the WDP’s main goals is to analyze the way dolphins vocalize and how that can affect their social interactions. With decades of underwater recordings, researchers have managed to connect some basic activities to specific sounds. For example, Atlantic spotted dolphins have signature whistles that appear to be used like names, allowing two specific individuals to find each other. They also consistently produce “squawk” sound patterns during fights.

WDP researchers believe that understanding the structure and patterns of dolphin vocalizations is necessary to determine if their communication rises to the level of a language. “We do not know if animals have words,” says WDP’s Denise Herzing.

An overview of DolphinGemma

The ultimate goal is to speak dolphin, if indeed there is such a language. The pursuit of this goal has led WDP to create a massive, meticulously labeled data set, which Google says is perfect for analysis with generative AI.

Meet DolphinGemma

The large language models (LLMs) that have become unavoidable in consumer tech are essentially predicting patterns. You provide them with an input, and the models predict the next token over and over until they have an output. When a model has been trained effectively, that output can sound like it was created by a person. Google and WDP hope it’s possible to do something similar with DolphinGemma for marine mammals.
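As a rough illustration of that loop, here is a toy Python sketch in which the “model” is just a lookup table of next-token probabilities, standing in for the neural network that computes them in a real LLM; the tokens and probabilities are invented for the example.

```python
import random

# Toy autoregressive generation: a real LLM computes next-token
# probabilities with a neural network, but the generation loop that
# produces an output token by token has the same shape as this one.
TOY_MODEL = {
    "<start>": {"click": 0.6, "whistle": 0.4},
    "click":   {"click": 0.3, "whistle": 0.5, "<end>": 0.2},
    "whistle": {"click": 0.5, "whistle": 0.2, "<end>": 0.3},
}

def sample_next(token: str) -> str:
    """Sample the next token from the model's predicted distribution."""
    dist = TOY_MODEL[token]
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(max_tokens: int = 10) -> list:
    output, token = [], "<start>"
    for _ in range(max_tokens):
        token = sample_next(token)  # predict the next token...
        if token == "<end>":        # ...over and over until the model stops
            break
        output.append(token)
    return output

print(generate())
```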

DolphinGemma is based on Google’s Gemma open AI models, which are themselves built on the same foundation as the company’s commercial Gemini models. The dolphin communication model uses a Google-developed audio technology called SoundStream to tokenize dolphin vocalizations, allowing the sounds to be fed into the model as they’re recorded.


Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.
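As a purely hypothetical sketch of what that granular control could look like from a developer’s side, the Python below caps the reasoning spend on a trivially simple prompt; the client library, model name, and `thinking_budget` parameter are assumptions for illustration, not confirmed API details.

```python
# Hypothetical sketch of per-request control over a "thinking budget."
# The library, model id, and parameter names below are assumptions,
# not confirmed API details.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is 2 + 2?",  # a simple prompt that needs little reasoning
    config=types.GenerateContentConfig(
        # Cap (or zero out) reasoning tokens to cut latency and cost
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(response.text)
```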

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT chart

Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.


NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims targeted by nudify apps like Mani can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. It imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

Around the country, as cases like Mani’s continue to pop up, experts expect that shame prevents most victims from coming forward to flag abuses, suspecting that the problem is much more widespread than media reports suggest.

Encode Justice has a tracker monitoring reported cases involving minors, including allowing victims to anonymously report harms around the US. But the true extent of the harm currently remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system, as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared and possibly still has copies of the images, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand, seemingly viewing the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment begins. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she believes schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request to comment. But their plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—such as Instagram reportedly driving 90 percent of traffic to one such nudify app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DMs, often over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep was at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police who failed to intervene. The longer the delays, the more victims will likely be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.



Gemini “coming together in really awesome ways,” Google says after 2.5 Pro release


Google’s Tulsee Doshi talks vibes and efficiency in Gemini 2.5 Pro.

Google was caught flat-footed by the sudden skyrocketing interest in generative AI despite its role in developing the underlying technology. This prompted the company to refocus its considerable resources on catching up to OpenAI. Since then, we’ve seen the detail-flubbing Bard and numerous versions of the multimodal Gemini models. While Gemini has struggled to make progress in benchmarks and user experience, that could be changing with the new 2.5 Pro (Experimental) release. With big gains in benchmarks and vibes, this might be the first Google model that can make a dent in ChatGPT’s dominance.

We recently spoke to Google’s Tulsee Doshi, director of product management for Gemini, to talk about the process of releasing Gemini 2.5, as well as where Google’s AI models are going in the future.

Welcome to the vibes era

Google may have had a slow start in building generative AI products, but the Gemini team has picked up the pace in recent months. The company released Gemini 2.0 in December, showing a modest improvement over the 1.5 branch. It only took three months to reach 2.5, meaning Gemini 2.0 Pro wasn’t even out of the experimental stage yet. To hear Doshi tell it, this was the result of Google’s long-term investments in Gemini.

“A big part of it is honestly that a lot of the pieces and the fundamentals we’ve been building are now coming together in really awesome ways,” Doshi said. “And so we feel like we’re able to pick up the pace here.”

The process of releasing a new model involves testing a lot of candidates. According to Doshi, Google takes a multilayered approach to inspecting those models, starting with benchmarks. “We have a set of evals, both external academic benchmarks as well as internal evals that we created for use cases that we care about,” she said.

Credit: Google

The team also uses these tests to work on safety, which, as Google points out at every given opportunity, is still a core part of how it develops Gemini. Doshi noted that making a model safe and ready for wide release involves adversarial testing and lots of hands-on time.

But we can’t forget the vibes, which have become an increasingly important part of AI models. There’s great focus on the vibe of outputs—how engaging and useful they are. There’s also the emerging trend of vibe coding, in which you use AI prompts to build things instead of typing the code yourself. For the Gemini team, these concepts are connected. The team uses product and user feedback to understand the “vibes” of the output, be that code or just an answer to a question.

Google has noted on a few occasions that Gemini 2.5 is at the top of the LM Arena leaderboard, which shows that people who have used the model prefer the output by a considerable margin—it has good vibes. That’s certainly a positive place for Gemini to be after a long climb, but there is some concern in the field that too much emphasis on vibes could push us toward models that make us feel good regardless of whether the output is good, a property known as sycophancy.

If the Gemini team has concerns about feel-good models, they’re not letting it show. Doshi mentioned the team’s focus on code generation, which she noted can be optimized for “delightful experiences” without stoking the user’s ego. “I think about vibe less as a certain type of personality trait that we’re trying to work towards,” Doshi said.

Hallucinations are another area of concern with generative AI models. Google has had plenty of embarrassing experiences with Gemini and Bard making things up, but the Gemini team believes they’re on the right path. Gemini 2.5 apparently has set a high-water mark in the team’s factuality metrics. But will hallucinations ever be reduced to the point we can fully trust the AI? No comment on that front.

Don’t overthink it

Perhaps the most interesting thing you’ll notice when using Gemini 2.5 is that it’s very fast compared to other models that use simulated reasoning. Google says it’s building this “thinking” capability into all of its models going forward, which should lead to improved outputs. The expansion of reasoning in large language models in 2024 resulted in a noticeable improvement in the quality of these tools. It also made them even more expensive to run, exacerbating an already serious problem with generative AI.

The larger and more complex an LLM becomes, the more expensive it is to run. Google hasn’t released technical data like parameter count on its newer models—you’ll have to go back to the 1.5 branch to get that kind of detail. However, Doshi explained that Gemini 2.5 is not a substantially larger model than Google’s last iteration, calling it “comparable” in size to 2.0.

Gemini 2.5 is more efficient in one key area: the chain of thought. It’s Google’s first public model to support a feature called Dynamic Thinking, which allows the model to modulate the amount of reasoning that goes into an output. This is just the first step, though.

“I think right now, the 2.5 Pro model we ship still does overthink for simpler prompts in a way that we’re hoping to continue to improve,” Doshi said. “So one big area we are investing in is Dynamic Thinking as a way to get towards our [general availability] version of 2.5 Pro where it thinks even less for simpler prompts.”

Gemini models on phone

Credit: Ryan Whitwam

Google doesn’t break out earnings from its new AI ventures, but we can safely assume there’s no profit to be had. No one has managed to turn these huge LLMs into a viable business yet. OpenAI, which has the largest user base with ChatGPT, loses money even on the users paying for its $200 Pro plan. Google is planning to spend $75 billion on AI infrastructure in 2025, so it will be crucial to make the most of this very expensive hardware. Building models that don’t waste cycles on overthinking “Hi, how are you?” could be a big help.

Missing technical details

Google plays it close to the chest with Gemini, but the 2.5 Pro release has offered more insight into where the company plans to go than ever before. To really understand this model, though, we’ll need to see the technical report. Google last released such a document for Gemini 1.5. We still haven’t seen the 2.0 version, and we may never see that document now that 2.5 has supplanted 2.0.

Doshi notes that 2.5 Pro is still an experimental model. So, don’t expect full evaluation reports to happen right away. A Google spokesperson clarified that a full technical evaluation report on the 2.5 branch is planned, but there is no firm timeline. Google hasn’t even released updated model cards for Gemini 2.0, let alone 2.5. These documents are brief one-page summaries of a model’s training, intended use, evaluation data, and more. They’re essentially LLM nutrition labels. It’s much less detailed than a technical report, but it’s better than nothing. Google confirms model cards are on the way for Gemini 2.0 and 2.5.

Given the recent rapid pace of releases, it’s possible Gemini 2.5 Pro could be rolling out more widely around Google I/O in May. We certainly hope Google has more details when the 2.5 branch expands. As Gemini development picks up steam, transparency shouldn’t fall by the wayside.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


DeepMind has detailed all the ways AGI could wreck the world

As AI hype permeates the Internet, tech and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today’s AI systems are on a path to AGI, we will need new approaches to ensure such a machine doesn’t work against human interests.

Unfortunately, we don’t have anything as elegant as Isaac Asimov’s Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience.

It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to “severe harm.”

All the ways AGI could harm humanity

This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed in the paper at length, but the latter two are only covered briefly.


The four categories of AGI risk, as determined by DeepMind.

Credit: Google DeepMind


The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne’er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon.
