Policy


School did nothing wrong when it punished student for using AI, court rules


Student “indiscriminately copied and pasted text,” including AI hallucinations.

Credit: Getty Images | Andriy Onufriyenko

A federal court yesterday ruled against parents who sued a Massachusetts school district for punishing their son who used an artificial intelligence tool to complete an assignment.

Dale and Jennifer Harris sued Hingham High School officials and the School Committee and sought a preliminary injunction requiring the school to change their son’s grade and expunge the incident from his disciplinary record before he needs to submit college applications. The parents argued that there was no rule against using AI in the student handbook, but school officials said the student violated multiple policies.

The Harrises’ motion for an injunction was rejected in an order issued yesterday by the US District Court for the District of Massachusetts. US Magistrate Judge Paul Levenson found that school officials “have the better of the argument on both the facts and the law.”

“On the facts, there is nothing in the preliminary factual record to suggest that HHS officials were hasty in concluding that RNH [the Harrises’ son, referred to by his initials] had cheated,” Levenson wrote. “Nor were the consequences Defendants imposed so heavy-handed as to exceed Defendants’ considerable discretion in such matters.”

“On the evidence currently before the Court, I detect no wrongdoing by Defendants,” Levenson also wrote.

Students copied and pasted AI “hallucinations”

The incident occurred in December 2023 when RNH was a junior. The school determined that RNH and another student “had cheated on an AP US History project by attempting to pass off, as their own work, material that they had taken from a generative artificial intelligence (‘AI’) application,” Levenson wrote. “Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).”

They received failing grades on two parts of the multi-part project but “were permitted to start from scratch, each working separately, to complete and submit the final project,” the order said. RNH’s discipline included a Saturday detention. He was also barred from selection for the National Honor Society, but he was ultimately allowed into the group after his parents filed the lawsuit.

School officials “point out that RNH was repeatedly taught the fundamentals of academic integrity, including how to use and cite AI,” Levenson wrote. The magistrate judge agreed that “school officials could reasonably conclude that RNH’s use of AI was in violation of the school’s academic integrity rules and that any student in RNH’s position would have understood as much.”

Levenson’s order described how the students used AI to generate a script for a documentary film:

The evidence reflects that the pair did not simply use AI to help formulate research topics or identify sources to review. Instead, it seems they indiscriminately copied and pasted text that had been generated by Grammarly.com (“Grammarly”), a publicly available AI tool, into their draft script. Evidently, the pair did not even review the “sources” that Grammarly provided before lifting them. The very first footnote in the submission consists of a citation to a nonexistent book: “Lee, Robert. Hoop Dreams: A Century of Basketball. Los Angeles: Courtside Publications, 2018.” The third footnote also appears wholly factitious: “Doe, Jane. Muslim Pioneers: The Spiritual Journey of American Icons. Chicago: Windy City Publishers, 2017.” Significantly, even though the script contained citations to various sources—some of which were real—there was no citation to Grammarly, and no acknowledgement that AI of any kind had been used.

Tool flagged paper as AI-generated

When the students submitted their script via Turnitin.com, the website flagged portions of it as being AI-generated. The AP US History teacher conducted further examination, finding that large portions of the script had been copied and pasted. She also found other damning details.

History teacher Susan Petrie “testified that the revision history showed that RNH had only spent approximately 52 minutes in the document, whereas other students spent between seven and nine hours. Ms. Petrie also ran the submission through ‘Draft Back’ and ‘Chat Zero,’ two additional AI detection tools, which also indicated that AI had been used to generate the document,” the order said.

School officials argued that the “case did not implicate subtle questions of acceptable practices in deploying a new technology, but rather was a straightforward case of academic dishonesty,” Levenson wrote. The magistrate judge’s order said “it is doubtful that the Court has any role in second-guessing” the school’s determination, and that the plaintiffs did not show any misconduct by school authorities.

As we previously reported, school officials told the court that the student handbook’s section on cheating and plagiarism bans “unauthorized use of technology during an assignment” and “unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one’s own work.”

School officials also told the court that in fall 2023, students were given a copy of a “written policy on Academic Dishonesty and AI expectations” that said students “shall not use AI tools during in-class examinations, processed writing assignments, homework or classwork unless explicitly permitted and instructed.”

The parents’ case hangs largely on the student handbook’s lack of a specific statement about AI, even though that same handbook bans unauthorized use of technology. “They told us our son cheated on a paper, which is not what happened,” Jennifer Harris told WCVB last month. “They basically punished him for a rule that doesn’t exist.”

Parents’ other claims rejected

The Harrises also claim that school officials engaged in a “pervasive pattern of threats, intimidation, coercion, bullying, harassment, and intimation of reprisals.” But Levenson concluded that the “plaintiffs provide little in the way of factual allegations along these lines.”

While the case isn’t over, the rejection of the preliminary injunction shows that Levenson believes the defendants are likely to win. “The manner in which RNH used Grammarly—wholesale copying and pasting of language directly into the draft script that he submitted—powerfully supports Defendants’ conclusion that RNH knew that he was using AI in an impermissible fashion,” Levenson wrote.

While “the emergence of generative AI may present some nuanced challenges for educators, the issue here is not particularly nuanced, as there is no discernible pedagogical purpose in prompting Grammarly (or any other AI tool) to generate a script, regurgitating the output without citation, and claiming it as one’s own work,” the order said.

Levenson wasn’t impressed by the parents’ claim that RNH’s constitutional right to due process was violated. The defendants “took multiple steps to confirm that RNH had in fact used AI in completing the Assignment” before imposing a punishment, he wrote. The discipline imposed “did not deprive RNH of his right to a public education,” and thus “any substantive due process claim premised on RNH’s entitlement to a public education must fail.”

Levenson concluded with a quote from a 1988 Supreme Court ruling that said the education of youth “is primarily the responsibility of parents, teachers, and state and local school officials, and not of federal judges.” According to Levenson, “This case well illustrates the good sense in that division of labor. The public interest here weighs in favor of Defendants.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Welcome to Google’s nightmare: US reveals plan to destroy search monopoly

Hepner expects that the DOJ plan may be measured enough that the court may only “be interested in a nip-tuck, not a wholesale revision of what plaintiffs have put forward.”

Kamyl Bazbaz, SVP of public affairs for Google’s more privacy-focused rival DuckDuckGo, released a statement agreeing with Hepner.

“The government has put forward a proposal that would free the search market from Google’s illegal grip and unleash a new era of innovation, investment, and competition,” Bazbaz said. “There’s nothing radical about this proposal: It’s firmly based on the court’s extensive finding of fact and proposes solutions in line with previous antitrust actions.”

Bazbaz accused Google of “cynically” invoking privacy among chief concerns with a forced Chrome sale. That “is rich coming from the Internet’s biggest tracker,” Bazbaz said.

Will Apple finally compete with Google in search?

The remedies the DOJ has proposed could potentially be game-changing, Bazbaz told Ars, not just for existing rivals but also new rivals and startups the court found were previously unable to enter the market while it was under Google’s control.

If the DOJ gets its way, Google could be stuck complying with these proposed remedies for 10 years. But if the company can prove after five years that competition has substantially increased and it controls less than 50 percent of the market, the remedies could be terminated early, the DOJ’s proposed final judgment order said.

That’s likely cold comfort for Google as it prepares to fight the DOJ’s plan to break up its search empire and potentially face major new competitors. The biggest risk to Google’s dominance in AI search could even be its former partner, which the court found was being paid handsomely to help prop up Google’s search monopoly: Apple.

On X (formerly Twitter), Hepner said that cutting off Google’s $20 billion payments to Apple for default placements in Safari alone could “have a huge effect and may finally kick Apple to enter the market itself.”



FCC chairwoman announces departure, paving way for Republican majority

Federal Communications Commission Chairwoman Jessica Rosenworcel announced today that she will leave the agency on January 20, 2025, the day of President-elect Donald Trump’s inauguration.

“Serving at the Federal Communications Commission has been the honor of a lifetime, especially my tenure as chair and as the first woman in history to be confirmed to lead this agency,” Rosenworcel said in today’s announcement. Rosenworcel said that being chair during the pandemic “made clear how important the work of the FCC is and how essential it is for us to build a digital future that works for everyone.”

Rosenworcel touted the agency’s work in “setting up the largest broadband affordability program in history—which led to us connecting more than 23 million households to high-speed Internet, connecting more than 17 million students caught in the homework gap to hotspots and other devices as learning moved online.” That discount program ended this year after Congress let funding run out, despite Rosenworcel’s repeated pleas for more money.

Rosenworcel, a Democrat, is following tradition, as the FCC chair typically resigns when the opposing party wins the White House. The move will leave the FCC with two Democrats and two Republicans, paving the way for the GOP to add one member and gain a 3–2 majority.

FCC had 2-2 deadlock for most of Biden’s term

Rosenworcel became an FCC commissioner in 2012 and was promoted to chair by President Biden in 2021. She was forced to operate without a Democratic majority for most of her time as chair due to a series of political developments.



Comcast to ditch cable TV networks in partial spinoff of NBCUniversal assets

Comcast today announced plans to spin off NBCUniversal cable TV networks such as USA, CNBC, and MSNBC into a new publicly traded company. Comcast is trying to complete the spinoff in one year, effectively unwinding part of the NBCUniversal acquisition it completed in 2011.

The entities in the planned spinoff generated about $7 billion of revenue in the 12 months that ended September 30, 2024, Comcast said. But cable TV channels have become less lucrative in an industry that’s shifting to the streaming model, and the spinoff would let Comcast remove those assets from its earnings reports. Comcast’s total revenue in the 12-month period was about $123 billion.

Comcast President Mike Cavanagh said in the Q3 earnings call on October 31 that Comcast is “experiencing the effects of the transition in our video businesses and have been studying the best path forward for these assets.”

The spinoff company will be “comprised of a strong portfolio of NBCUniversal’s cable television networks, including USA Network, CNBC, MSNBC, Oxygen, E!, SYFY and Golf Channel along with complementary digital assets including Fandango and Rotten Tomatoes, GolfNow and Sports Engine,” Comcast said today.

Comcast is keeping the rest of NBCUniversal, including the Peacock streaming service and networks that provide key content for Peacock. Comcast said it will retain NBCUniversal’s “leading broadcast and streaming media properties, including NBC entertainment, sports, news and Bravo—which all power Peacock—along with Telemundo, the theme parks business and film and television studios.”

SpinCo

The new company doesn’t have a permanent name yet and is referred to as “SpinCo” in the Comcast press release. Comcast said SpinCo’s CEO will be Mark Lazarus, who is currently chairman of NBCUniversal Media Group. Anand Kini, the current CFO of NBCUniversal and EVP of Corporate Strategy at Comcast, will be CFO and COO at SpinCo.



A year after ditching waitlist, Starlink says it is “sold out” in parts of US

The Starlink waitlist is back in certain parts of the US, including several large cities on the West Coast and in Texas. The Starlink availability map says the service is sold out in and around Seattle; Spokane, Washington; Portland, Oregon; San Diego; Sacramento, California; and Austin, Texas. Neighboring cities and towns are included in the sold-out zones.

There are additional sold-out areas in small parts of Colorado, Montana, and North Carolina. As PCMag noted yesterday, the change comes about a year after Starlink added capacity and removed its waitlist throughout the US.

Elsewhere in North America, there are some sold-out areas in Canada and Mexico. Across the Atlantic, Starlink is sold out in London and neighboring cities. Starlink is not yet available in most of Africa, and some of the areas where it is available are sold out.

Starlink is generally seen as most useful in rural areas with less access to wired broadband, but it seems to be attracting interest in more heavily populated areas, too. While detailed region-by-region subscriber numbers aren’t available publicly, SpaceX President Gwynne Shotwell said last week that Starlink has nearly 5 million users worldwide.



Musi fans refuse to update iPhones until Apple unblocks controversial app

“The public interest in the preservation of intellectual property rights weighs heavily against the injunction sought here, which would force Apple to distribute an app over the repeated and consistent objections of non-parties who allege their rights are infringed by the app,” Apple argued.

Musi fans vow loyalty

For Musi fans expressing their suffering on Reddit, Musi appears to be irreplaceable.

Unlike other free apps that continually play ads, Musi only serves ads when the app is initially opened, then allows uninterrupted listening. One Musi user also noted that Musi allows for an unlimited number of videos in a playlist, where YouTube caps playlists at 5,000 videos.

“Musi is the only playback system I have to play all 9k of my videos/songs in the same library,” the Musi fan said. “I honestly don’t just use Musi just cause it’s free. It has features no other app has, especially if you like to watch music videos while you listen to music.”

“Spotify isn’t cutting it,” one Reddit user whined.

“I hate Spotify,” another user agreed.

“I think of Musi every other day,” a third user who apparently lost the app after purchasing a new phone said. “Since I got my new iPhone, I have to settle for other music apps just to get by (not enough, of course) to listen to music in my car driving. I will be patiently waiting once Musi is available to redownload.”

Some Musi fans who still have access gloat in the threads, while others warn the litigation could soon doom the app for everyone.

Musi continues, perhaps optimistically, to tell users that the app is coming back, reassuring anyone whose app was accidentally offloaded that their libraries remain linked through iCloud and will be restored if it does.

Some users buy into Musi’s promises, while others seem skeptical that Musi can take on Apple. To many users still clinging to their Musi app, updating their phones has become too risky until the litigation resolves.

“Please,” one Musi fan begged. “Musi come back!!!”



Cable companies and Trump’s FCC chair agree: Data caps are good for you

Many Internet users filed comments asking the FCC to ban data caps. A coalition of consumer advocacy groups filed comments saying that “data caps are another profit-driving tool for ISPs at the expense of consumers and the public interest.”

“Data caps have a negative impact on all consumers but the effects are felt most acutely in low-income households,” stated comments filed by Public Knowledge, the Open Technology Institute at New America, the Benton Institute for Broadband & Society, and the National Consumer Law Center.

Consumer groups: Caps don’t manage congestion

The consumer groups said the COVID-19 pandemic “made it more apparent how data caps are artificially imposed restrictions that negatively impact consumers, discriminate against the use of certain high-data services, and are not necessary to address network congestion, which is generally not present on home broadband networks.”

“Unlike speed tiers, data caps do not effectively manage network congestion or peak usage times, because they do not influence real-time network load,” the groups also said. “Instead, they enable further price discrimination by pushing consumers toward more expensive plans with higher or unlimited data allowances. They are price discrimination dressed up as network management.”

Jessica Rosenworcel, who has been FCC chairwoman since 2021, argued last month that consumer complaints show the FCC inquiry is necessary. “The mental toll of constantly thinking about how much you use a service that is essential for modern life is real as is the frustration of so many consumers who tell us they believe these caps are costly and unfair,” Rosenworcel said.

ISPs lifting caps during the pandemic “suggest[s] that our networks have the capacity to meet consumer demand without these restrictions,” she said, adding that “some providers do not have them at all” and “others lifted them in network merger conditions.”



Explicit deepfake scandal shuts down Pennsylvania school

An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents forced leaders to either resign or face a lawsuit potentially seeking criminal penalties and accusing the school of skipping mandatory reporting of the harmful images.

The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.

Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through “Safe2Say Something,” a school reporting portal run by the state attorney general’s office. But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024.

Cops arrested the student accused of creating the harmful content in August. The student’s phone was seized as cops investigated the origins of the AI-generated images. But that arrest was not enough justice for parents who were shocked by the school’s failure to uphold mandatory reporting responsibilities following any suspicion of child abuse. They filed a court summons threatening to sue last week unless the school leaders responsible for the mishandled response resigned within 48 hours.

This tactic successfully pushed Micciche and the school board’s president, Angela Ang-Alhadeff, to “part ways” with the school, both resigning effective late Friday, Lancaster Online reported.

In a statement announcing that classes were canceled Monday, Lancaster Country Day School—which, according to Wikipedia, serves about 600 students in pre-kindergarten through high school—offered support during this “difficult time” for the community.

Parents do not seem ready to drop the suit, as the school leaders seemingly dragged their feet and resigned two days after their deadline. The parents’ lawyer, Matthew Faranda-Diedrich, told Lancaster Online Monday that “the lawsuit would still be pursued despite executive changes.”



Trump’s FCC chair is Brendan Carr, who wants to regulate everyone except ISPs


Trump makes FCC chair pick

Carr says he wants to punish broadcast media and dismantle “censorship cartel.”

Federal Communications Commission member Brendan Carr speaks during the 2024 Conservative Political Action Conference (CPAC) in National Harbor, Maryland on February 24, 2024. Credit: Getty Images | Anadolu

President-elect Donald Trump announced last night that he will make Brendan Carr the chairman of the Federal Communications Commission. Carr, who wrote a chapter about the FCC for the conservative Heritage Foundation’s Project 2025, is a longtime opponent of net neutrality rules and other regulations imposed on Internet service providers.

Although Carr wants to deregulate telecom companies that the FCC has historically regulated, he wants the FCC to start regulating Big Tech and social media firms. He has also echoed Trump’s longtime complaints about the news media and proposed punishments for broadcast networks.

Trump’s statement on Carr said that “because of his great work, I will now be designating him as permanent Chairman.”

“Commissioner Carr is a warrior for Free Speech, and has fought against the regulatory Lawfare that has stifled Americans’ Freedoms, and held back our Economy,” Trump wrote. “He will end the regulatory onslaught that has been crippling America’s Job Creators and Innovators, and ensure that the FCC delivers for rural America.”

Because Carr is already a sitting FCC commissioner, no Senate approval is needed for the designation. The president can elevate any commissioner to the chair spot.

Carr wants to punish broadcasters

Carr thanked Trump in a post on his X account last night, then made several more posts describing some of the changes he plans to make at the FCC. One of Carr’s posts said the FCC will crack down on broadcast media.

“Broadcast media have had the privilege of using a scarce and valuable public resource—our airwaves. In turn, they are required by law to operate in the public interest. When the transition is complete, the FCC will enforce this public interest obligation,” Carr wrote.

We described Carr’s views on how the FCC should operate in an article on November 7, just after Trump’s election win. We wrote:

A Carr-led FCC could also try to punish news organizations that are perceived to be anti-Trump. Just before the election, Carr alleged that NBC putting Kamala Harris on Saturday Night Live was “a clear and blatant effort to evade the FCC’s Equal Time rule” and that the FCC should consider issuing penalties. Despite Carr’s claim, NBC did provide equal time to the Trump campaign.

Previous chairs defended free speech

Previous FCC chairs from both major parties have avoided punishing news organizations because of free speech concerns. Democrat Jessica Rosenworcel, the current FCC chairwoman, last month criticized Trump’s calls for licenses to be revoked from TV news organizations whose coverage he dislikes.

“While repeated attacks against broadcast stations by the former President may now be familiar, these threats against free speech are serious and should not be ignored,” Rosenworcel said at the time. “As I’ve said before, the First Amendment is a cornerstone of our democracy. The FCC does not and will not revoke licenses for broadcast stations simply because a political candidate disagrees with or dislikes content or coverage.”

Former Chairman Ajit Pai, a Republican, rejected the idea of revoking licenses in 2017 after similar calls from Trump. Pai said that the FCC “under my leadership will stand for the First Amendment” and that “the FCC does not have the authority to revoke a license of a broadcast station based on the content of a particular newscast.”

Carr believes differently. After the Saturday Night Live incident, Carr told Fox News that “all remedies should be on the table,” including “license revocations” for NBC.

We’ve pointed out repeatedly that the FCC doesn’t actually license TV networks such as CBS or NBC. But the FCC could punish affiliates. The FCC’s licensing authority is over broadcast stations, many of which are affiliated with or owned by a big network.

Carr targets “censorship cartel”

Carr wrote last night that “we must dismantle the censorship cartel and restore free speech rights for everyday Americans.” This seems to be referring to making social media networks change how they moderate content. On November 15, Carr wrote that “Facebook, Google, Apple, Microsoft & others have played central roles in the censorship cartel,” along with fact-checking groups and ad agencies that “helped enforce one-sided narratives.”

During his first presidential term, Trump formally petitioned the FCC to reinterpret Section 230 of the Communications Decency Act in a way that would limit social media platforms’ legal protections for hosting third-party content when the platforms take down content they consider objectionable.

Trump and Carr have claimed that such a step is necessary because of anti-conservative bias. In his Project 2025 chapter, Carr wrote that the FCC “should issue an order that interprets Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute.”

Carr’s willingness to reinterpret Section 230 is likely a big plus in Trump’s eyes. In 2020, Trump pulled the re-nomination of FCC Republican member Michael O’Rielly after O’Rielly said that “we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way. Like it or not, the First Amendment’s protections apply to corporate entities, especially when they engage in editorial decision making.”

Carr to end FCC diversity policies

Last night, Carr also said he would end the FCC’s embrace of DEI (diversity, equity, and inclusion) policies. “The FCC’s most recent budget request said that promoting DEI was the agency’s second highest strategic goal. Starting next year, the FCC will end its promotion of DEI,” Carr wrote.

The FCC budget request said the agency “will pursue focused action and investments to eliminate historical, systemic, and structural barriers that perpetuate disadvantaged or underserved individuals and communities.” The Rosenworcel FCC said it aimed to create a diverse staff and to help “underserved individuals and communities” access “digital technologies, media, communication services, and next-generation networks.”

Carr dissented last year in the FCC’s 3-2 decision to impose rules that prohibit discrimination in access to broadband services, describing the rulemaking as “President Biden’s plan to give the administrative state effective control of all Internet services and infrastructure in the US.”

Another major goal for Carr is forcing Big Tech firms to help subsidize broadband network construction. Carr’s Project 2025 chapter said the FCC should “require that Big Tech begin to contribute a fair share” into “the FCC’s roughly $9 billion Universal Service Fund.”

Media advocacy group Free Press said yesterday that “Brendan Carr has been campaigning for this job with promises to do the bidding of Donald Trump and Elon Musk” and “got this job because he will carry out Trump and Musk’s personal vendettas. While styling himself as a free-speech champion, Carr refused to stand up when Trump threatened to take away the broadcast licenses of TV stations for daring to fact-check him during the campaign. This alone should be disqualifying.”

Lobby groups representing Internet service providers will be happy to have an FCC chair focused on eliminating broadband regulations. USTelecom CEO Jonathan Spalter issued a statement saying that “Brendan Carr has been a proven leader and an important partner in our shared goal to connect all Americans. With his deep experience and expertise, Commissioner Carr clearly understands the regulatory challenges and opportunities across the communications landscape.”

Pai, who teamed up with Carr and O’Rielly to eliminate net neutrality rules in 2017, wrote that Carr “was a brilliant advisor and General Counsel and has been a superb Commissioner, and I’m confident he will be a great FCC Chairman.”




FTC to launch investigation into Microsoft’s cloud business

The FTC also highlighted fees charged on users transferring data out of certain cloud systems and minimum spend contracts, which offer discounts to companies in return for a set level of spending.

Microsoft has also attracted scrutiny from international regulators over similar matters. The UK’s Competition and Markets Authority is investigating Microsoft and Amazon after its fellow watchdog Ofcom found that customers complained about being “locked in” to a single provider, which offers discounts for exclusivity and charges high “egress fees” to leave.

In the EU, Microsoft has avoided a formal probe into its cloud business after agreeing to a multimillion-dollar deal with a group of rival cloud providers in July.

The FTC in 2022 sued to block Microsoft’s $75 billion acquisition of video game maker Activision Blizzard over concerns the deal would harm competitors to its Xbox consoles and cloud-gaming business. A federal court shot down the FTC’s attempt to block it, a ruling the agency is appealing. In the meantime, a revised version of the deal closed last year following its clearance by the UK’s CMA.

Since its inception 20 years ago, cloud infrastructure and services have grown into one of the most lucrative business lines for Big Tech as companies outsource their data storage and computing online. More recently, that growth has been turbocharged by demand for processing power to train and run artificial intelligence models.

Spending on cloud services soared to $561 billion in 2023, and market researcher Gartner forecasts it will grow to $675 billion this year and $825 billion in 2025. Microsoft holds about 20 percent of the global cloud market, trailing leader Amazon Web Services at 31 percent but nearly double the size of Google Cloud at 12 percent.

There is fierce rivalry between the trio and smaller providers. Last month, Microsoft accused Google of running “shadow campaigns” seeking to undermine its position with regulators by secretly bankrolling hostile lobbying groups.

Microsoft also alleged that Google tried to derail its settlement with EU cloud providers by offering them $500 million in cash and credit to reject its deal and continue pursuing litigation.

The FTC and Microsoft declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

FTC to launch investigation into Microsoft’s cloud business Read More »

openai-accused-of-trying-to-profit-off-ai-model-inspection-in-court

OpenAI accused of trying to profit off AI model inspection in court


Experiencing some technical difficulties

How do you get an AI model to confess what’s inside?

Credit: Aurich Lawson | Getty Images

Since ChatGPT became an instant hit roughly two years ago, tech companies around the world have rushed to release AI products while the public is still in awe of AI’s seemingly radical potential to enhance their daily lives.

But at the same time, governments globally have warned it can be hard to predict how rapidly popularizing AI can harm society. Novel uses could suddenly debut and displace workers, fuel disinformation, stifle competition, or threaten national security—and those are just some of the obvious potential harms.

While governments scramble to establish systems to detect harmful applications—ideally before AI models are deployed—some of the earliest lawsuits over ChatGPT show just how hard it is for the public to crack open an AI model and find evidence of harms once a model is released into the wild. That task is seemingly only made harder by an increasingly thirsty AI industry intent on shielding models from competitors to maximize profits from emerging capabilities.

The less the public knows, the harder and more expensive it seemingly becomes to hold companies accountable for irresponsible AI releases. This fall, ChatGPT-maker OpenAI was even accused of trying to profit off discovery by seeking to charge litigants retail prices to inspect AI models alleged to cause harms.

In a lawsuit raised by The New York Times over copyright concerns, OpenAI suggested the same model inspection protocol used in a similar lawsuit raised by book authors.

Under that protocol, the NYT could hire an expert to review highly confidential OpenAI technical materials “on a secure computer in a secured room without Internet access or network access to other computers at a secure location” of OpenAI’s choosing. In this closed-off arena, the expert would have limited time and limited queries to try to get the AI model to confess what’s inside.

The NYT seemingly had few concerns about the actual inspection process but balked at OpenAI’s intended protocol, which capped the queries their expert could make through an application programming interface at $15,000 worth of retail credits. Once litigants hit that cap, OpenAI suggested that the parties split the costs of remaining queries, charging the NYT and co-plaintiffs half-retail prices to finish the rest of their discovery.
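As a back-of-the-envelope illustration of the disputed cost split (a sketch based only on the figures reported here, not OpenAI’s actual billing logic; the function name and structure are hypothetical):

```python
def plaintiff_cost(total_retail_cost: float, cap: float = 15_000.0) -> float:
    """Estimate the plaintiffs' outlay under the proposed protocol:
    full retail price up to the cap, then half retail beyond it."""
    if total_retail_cost <= cap:
        return total_retail_cost
    return cap + (total_retail_cost - cap) / 2

# The NYT estimated it would need about $800,000 worth of retail credits;
# under the proposed split, plaintiffs would still pay $15,000 plus half
# of the remaining $785,000.
print(plaintiff_cost(800_000))  # 407500.0
```

In other words, even with the half-retail split, the plaintiffs’ share of an $800,000 inspection would run to roughly $407,500 at retail rates.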

In September, the NYT told the court that the parties had reached an “impasse” over this protocol, alleging that “OpenAI seeks to hide its infringement by professing an undue—yet unquantified—’expense.'” According to the NYT, plaintiffs would need $800,000 worth of retail credits to seek the evidence they need to prove their case, but there’s allegedly no way it would actually cost OpenAI that much.

“OpenAI has refused to state what its actual costs would be, and instead improperly focuses on what it charges its customers for retail services as part of its (for profit) business,” the NYT claimed in a court filing.

In its defense, OpenAI has said that setting the initial cap is necessary to reduce the burden on OpenAI and prevent a NYT fishing expedition. The ChatGPT maker alleged that plaintiffs “are requesting hundreds of thousands of dollars of credits to run an arbitrary and unsubstantiated—and likely unnecessary—number of searches on OpenAI’s models, all at OpenAI’s expense.”

How this court debate resolves could have implications for future cases in which the public seeks to inspect models causing alleged harms. If the court agrees that OpenAI can charge retail prices for model inspection, the precedent could deter lawsuits from any plaintiffs who can’t afford to pay an AI expert or commercial prices for model inspection.

Lucas Hansen, co-founder of CivAI—a company that seeks to enhance public awareness of what AI can actually do—told Ars that much of this inspection could likely be done on public models. But public models are often fine-tuned, perhaps censoring certain queries and making it harder to find information that a model was trained on—which is what the NYT’s suit is after. By gaining API access to the original models instead, litigants could have an easier time finding evidence to prove alleged harms.

It’s unclear exactly what it costs OpenAI to provide that level of access. Hansen told Ars that the cost of training and experimenting with models “dwarfs” the cost of running them to provide full-capability solutions. Developers have noted in forums that costs of API queries quickly add up, with one claiming OpenAI’s pricing is “killing the motivation to work with the APIs.”

The NYT’s lawyers and OpenAI declined to comment on the ongoing litigation.

US hurdles for AI safety testing

Of course, OpenAI is not the only AI company facing lawsuits over popular products. Artists have sued makers of image generators for allegedly threatening their livelihoods, and several chatbots have been accused of defamation. Other emerging harms include very visible examples—like explicit AI deepfakes, harming everyone from celebrities like Taylor Swift to middle schoolers—as well as underreported harms, like allegedly biased HR software.

A recent Gallup survey suggests that Americans are more trusting of AI than ever, but they are still twice as likely to believe AI does “more harm than good” as to believe the benefits outweigh the harms. Hansen’s CivAI creates demos and interactive software for education campaigns helping the public to understand firsthand the real dangers of AI. He told Ars that while it’s hard for outsiders to trust a study from “some random organization doing really technical work” to expose harms, CivAI provides a controlled way for people to see for themselves how AI systems can be misused.

“It’s easier for people to trust the results, because they can do it themselves,” Hansen told Ars.

Hansen also advises lawmakers grappling with AI risks. In February, CivAI joined the Artificial Intelligence Safety Institute Consortium—a group including Fortune 500 companies, government agencies, nonprofits, and academic research teams that helps advise the US AI Safety Institute (AISI). But so far, Hansen said, CivAI has not been very active in that consortium beyond scheduling a talk to share demos.

The AISI is supposed to protect the US from risky AI models by conducting safety testing to detect harms before models are deployed. Testing should “address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups,” President Joe Biden said in a national security memo last month, urging that safety testing was critical to support unrivaled AI innovation.

“For the United States to benefit maximally from AI, Americans must know when they can trust systems to perform safely and reliably,” Biden said.

But the AISI’s safety testing is voluntary, and while companies like OpenAI and Anthropic have agreed to the voluntary testing, not every company has. Hansen is worried that AISI is under-resourced and under-budgeted to achieve its broad goals of safeguarding America from untold AI harms.

“The AI Safety Institute predicted that they’ll need about $50 million in funding, and that was before the National Security memo, and it does not seem like they’re going to be getting that at all,” Hansen told Ars.

Biden had $50 million budgeted for AISI in 2025, but Donald Trump has threatened to dismantle Biden’s AI safety plan upon taking office.

The AISI was probably never going to be funded well enough to detect and deter all AI harms, but with its future unclear, even the limited safety testing the US had planned could be stalled at a time when the AI industry continues moving full speed ahead.

That could largely leave the public at the mercy of AI companies’ internal safety testing. As frontier models from big companies will likely remain under society’s microscope, OpenAI has promised to increase investments in safety testing and help establish industry-leading safety standards.

According to OpenAI, that effort includes making models safer over time, less prone to producing harmful outputs, even with jailbreaks. But OpenAI has a lot of work to do in that area, as Hansen told Ars that he has a “standard jailbreak” for OpenAI’s most popular release, ChatGPT, “that almost always works” to produce harmful outputs.

The AISI did not respond to Ars’ request to comment.

NYT “nowhere near done” inspecting OpenAI models

For the public, who often become guinea pigs when AI acts unpredictably, risks remain: the NYT case suggests that the costs of fighting AI companies could go up while technical hiccups delay resolutions. Last week, an OpenAI filing showed that the NYT’s attempts to inspect pre-training data in a “very, very tightly controlled environment,” like the one recommended for model inspection, were allegedly repeatedly disrupted.

“The process has not gone smoothly, and they are running into a variety of obstacles to, and obstructions of, their review,” the court filing describing NYT’s position said. “These severe and repeated technical issues have made it impossible to effectively and efficiently search across OpenAI’s training datasets in order to ascertain the full scope of OpenAI’s infringement. In the first week of the inspection alone, Plaintiffs experienced nearly a dozen disruptions to the inspection environment, which resulted in many hours when News Plaintiffs had no access to the training datasets and no ability to run continuous searches.”

OpenAI was additionally accused of refusing to install software the litigants needed and randomly shutting down ongoing searches. Frustrated after more than 27 days of inspecting data and getting “nowhere near done,” the NYT keeps pushing the court to order OpenAI to provide the data instead. In response, OpenAI said plaintiffs’ concerns were either “resolved” or discussions remained “ongoing,” suggesting there was no need for the court to intervene.

So far, the NYT claims that it has found millions of plaintiffs’ works in the ChatGPT pre-training data but has been unable to confirm the full extent of the alleged infringement due to the technical difficulties. Meanwhile, costs keep accruing in every direction.

“While News Plaintiffs continue to bear the burden and expense of examining the training datasets, their requests with respect to the inspection environment would be significantly reduced if OpenAI admitted that they trained their models on all, or the vast majority, of News Plaintiffs’ copyrighted content,” the court filing said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI accused of trying to profit off AI model inspection in court Read More »

trump-says-elon-musk-will-lead-“doge,”-a-new-department-of-government-efficiency

Trump says Elon Musk will lead “DOGE,” a new Department of Government Efficiency

Trump’s “perfect gift to America”

Trump’s statement said the department, whose name is a reference to the Doge meme, “will drive out the massive waste and fraud which exists throughout our annual $6.5 Trillion Dollars of Government Spending.” Trump said DOGE will “liberate our Economy” and that its “work will conclude no later than July 4, 2026” because “a smaller Government, with more efficiency and less bureaucracy, will be the perfect gift to America on the 250th Anniversary of The Declaration of Independence.”

“I look forward to Elon and Vivek making changes to the Federal Bureaucracy with an eye on efficiency and, at the same time, making life better for all Americans,” Trump said. Today, Musk wrote that the “world is suffering slow strangulation by overregulation,” and that “we finally have a mandate to delete the mountain of choking regulations that do not serve the greater good.”

Musk has been expected to have influence in Trump’s second term after campaigning for him. Trump previously vowed to have Musk head a government efficiency commission. “That would essentially give the world’s richest man and a major government contractor the power to regulate the regulators who hold sway over his companies, amounting to a potentially enormous conflict of interest,” said a New York Times article last month.

The Wall Street Journal wrote today that “Musk isn’t expected to become an official government employee, meaning he likely wouldn’t be required to divest from his business empire.”

Trump says Elon Musk will lead “DOGE,” a new Department of Government Efficiency Read More »