Policy

OpenAI thinks Elon Musk funded its biggest critics—who also hate Musk

“We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests,” Ruby-Sachs told NBC News.

Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk’s lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits.

But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group’s “OpenAI Files,” which comprehensively document areas of concern with any plan to shift away from nonprofit governance.

His post came after OpenAI’s chief strategy officer, Jason Kwon, wrote that “several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns” backing Musk’s “opposition to OpenAI’s restructure.”

“What are you talking about?” Johnston wrote. “We were formed 19 months ago. We’ve never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we’ve said he runs xAI so horridly it makes OpenAI ‘saintly in comparison.’”

OpenAI acting like a “cutthroat” corporation?

Johnston complained that OpenAI’s subpoena had already hurt The Midas Project: insurers denied the group coverage, citing news reports about the subpoena. He accused OpenAI not just of trying to silence critics but of possibly trying to shut them down.

“If you wanted to constrain an org’s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that’s what’s happened to us with this subpoena,” Johnston suggested.

Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF’s chief impact officer, told NBC News that her nonprofit was subpoenaed after it spearheaded a petition urging California’s attorney general to block OpenAI’s restructuring. And Encode’s general counsel, Nathan Calvin, was subpoenaed after the group sponsored a California safety regulation meant to make it easier to monitor risks of frontier AI.

Feds seize $15 billion from alleged forced labor scam built on “human suffering”

Federal prosecutors have seized $15 billion from the alleged kingpin of an operation that used imprisoned laborers to trick unsuspecting people into investing in phony funds, often after the scammers spent months faking romantic relationships with their victims.

Such “pig butchering” scams have operated for years. Typically, members of the operation initiate conversations with people on social media and then spend months messaging them, often posing as attractive individuals who feign romantic interest in the victim.

Forced labor, phone farms, and human suffering

Eventually, conversations turn to phony investment funds with the end goal of convincing the victim to transfer large amounts of bitcoin. In many cases, the scammers are trafficked and held against their will in compounds surrounded by fences and barbed wire.

On Tuesday, federal prosecutors unsealed an indictment against Chen Zhi, the founder and chairman of a multinational business conglomerate based in Cambodia. It alleged that Zhi led such a forced-labor scam operation, which, with the help of unnamed co-conspirators, netted billions of dollars from victims.

“The defendant CHEN ZHI and his co-conspirators designed the compounds to maximize profits and personally ensured that they had the necessary infrastructure to reach as many victims as possible,” prosecutors wrote in the court document, filed in US District Court for the Eastern District of New York. The indictment continued:

For example, in or about 2018, Co-Conspirator-1 was involved in procuring millions of mobile telephone numbers and account passwords from an illicit online marketplace. In or about 2019, Co-Conspirator-3 helped oversee construction of the Golden Fortune compound. CHEN himself maintained documents describing and depicting “phone farms,” automated call centers used to facilitate cryptocurrency investment fraud and other cybercrimes, including the below image:

[Image from the indictment depicting a “phone farm.” Credit: Justice Department]

Prosecutors said Zhi is the founder and chairman of Prince Group, a Cambodian corporate conglomerate that ostensibly operated dozens of legitimate business entities in more than 30 countries. In secret, however, Zhi and top executives built Prince Group into one of Asia’s largest transnational criminal organizations. Zhi’s whereabouts are unknown.

Trump admin pressured Facebook into removing ICE-tracking group

Trump slammed Biden for social media “censorship”

Trump and Republicans repeatedly criticized the Biden administration for pressuring social media companies into removing content. In a day-one executive order declaring an end to “federal censorship,” Trump said, “the previous administration trampled free speech rights by censoring Americans’ speech on online platforms, often by exerting substantial coercive pressure on third parties, such as social media companies, to moderate, deplatform, or otherwise suppress speech that the Federal Government did not approve.”

Sen. Ted Cruz (R-Texas) last week held a hearing on his allegation that under Biden, the US government “infringed on the First Amendment by pressuring social media companies to censor Americans that held views different than the Biden administration.” Cruz called the tactic of pressuring social media companies part of the “left-wing playbook,” and said he wants Congress to pass a law “to stop government jawboning and safeguard every American’s right to free speech.”

Shortly before Trump’s January 2025 inauguration, Meta announced it would end the third-party fact-checking program it had introduced in 2016. “Governments and legacy media have pushed to censor more and more. A lot of this is clearly political,” Meta CEO Mark Zuckerberg said at the time. Zuckerberg called the election “a cultural tipping point toward once again prioritizing speech.”

In addition to pressuring Facebook, the Trump administration demanded that Apple remove the ICEBlock app from its App Store. Apple responded by removing the app, which let iPhone users report the locations of Immigration and Customs Enforcement officers. Google removed similar Android apps from the Play Store.

Chicago is a primary target of Trump’s immigration crackdown. The Department of Homeland Security says it launched Operation Midway Blitz in early September to find “criminal illegal aliens who flocked to Chicago and Illinois seeking protection under the sanctuary policies of Governor Pritzker.”

People seeking to avoid ICE officers have used technology to obtain crowdsourced information on the location of agents. While crowdsourced information can vary widely in accuracy, a group called the Illinois Coalition for Immigrant & Refugee Rights says it works to verify reports of ICE sightings and sends text alerts to local residents only when ICE activity is verified.

Last month, an ICE agent shot and killed a man named Silverio Villegas Gonzalez in a Chicago suburb. The Department of Homeland Security alleged that Villegas Gonzalez was “a criminal illegal alien with a history of reckless driving,” and that he “drove his car at law enforcement officers.” The Chicago Tribune said it “found no criminal history for Villegas Gonzalez, who had been living in the Chicago area for the past 18 years.”

OpenAI unveils “wellness” council; suicide prevention expert not included


[Image: doctors examining ChatGPT]

OpenAI reveals which experts are steering ChatGPT mental health upgrades.

Ever since a lawsuit accused ChatGPT of becoming a teen’s “suicide coach,” OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.

In a press release, OpenAI explained that its Expert Council on Wellness and AI began taking shape after the company started informally consulting with experts on parental controls earlier this year. The council has now been formalized, bringing together eight “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health” to help steer ChatGPT updates.

One priority was finding “several council members with backgrounds in understanding how to build technology that supports healthy youth development,” OpenAI said, “because teens use ChatGPT differently than adults.”

That effort includes David Bickham, a research director at Boston Children’s Hospital, who has closely monitored how social media impacts kids’ mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, particularly focused on “how AI intersects with child cognitive and emotional development.”

These experts could seemingly help OpenAI better understand how safeguards can fail kids during extended conversations and ensure kids aren’t left particularly vulnerable to so-called “AI psychosis,” a phenomenon in which longer chats appear to trigger mental health issues.

In January, Bickham noted in an American Psychological Association article on AI in education that “little kids learn from characters” already—as they do things like watch Sesame Street—and form “parasocial relationships” with those characters. AI chatbots could be the next frontier, possibly filling in teaching roles if we know more about the way kids bond with chatbots, Bickham suggested.

“How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?” Bickham posited.

Cerioli closely monitors AI’s influence in kids’ worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to “become unable to handle contradiction,” Le Monde reported, especially “if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities.”

“Children are not mini-adults,” Cerioli said. “Their brains are very different, and the impact of AI is very different.”

Neither expert is focused on suicide prevention in kids. That may disappoint dozens of suicide prevention experts who last month pushed OpenAI to consult with experts deeply familiar with what “decades of research and lived experience” show about “what works in suicide prevention.”

OpenAI experts on suicide risks of chatbots

On a podcast last year, when asked about the earliest reported chatbot-linked teen suicide, Cerioli said that child brain development is the area she’s most “passionate” about. The news didn’t surprise her, she said, noting that her research focuses less on figuring out “why that happened” and more on why it can happen, because kids are “primed” to seek out “human connection.”

She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.

This seems in line with the steps OpenAI took to add parental controls, consulting with experts to design “the notification language for parents when a teen may be in distress,” the company’s press release said. However, on a resources page for parents, OpenAI has confirmed that parents won’t always be notified if a teen is linked to real-world resources after expressing “intent to self-harm,” which may alarm some critics who think the parental controls don’t go far enough.

Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents.

De Choudhury studies computational approaches to improve “the role of online technologies in shaping and improving mental health,” OpenAI noted.

In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect “suicide behaviors” about half the time. The task appeared “unpredictable” and “random” to scholars, she reported.

OpenAI seemingly hopes that the child experts can provide feedback on how ChatGPT is affecting kids’ brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions.

More recently, De Choudhury seemed optimistic about potential AI mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.

“Human connection is valuable,” De Choudhury said. “But when people don’t have that, if they’re able to form parasocial connections with a machine, it can be better than not having any connection at all.”

First council meeting focused on AI benefits

Most of the other experts on OpenAI’s council have backgrounds similar to De Choudhury’s, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University’s Digital Mental Health Clinic), David Mohr (director of Northwestern University’s Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology).

There’s also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.

OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential benefits to mental health that could be achieved if tools were tweaked to be more helpful.

“The council will also help us think about how ChatGPT can have a positive impact on people’s lives and contribute to their well-being,” OpenAI said. “Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life.”

Notably, Przybylski co-authored a 2023 study providing data disputing the idea that Internet access has broadly harmed mental health. He told Mashable that his research provided the “best evidence” so far “on the question of whether Internet access itself is associated with worse emotional and psychological experiences—and may provide a reality check in the ongoing debate on the matter.” He could help OpenAI explore whether the data supports perceptions that AI poses mental health risks, perceptions that are currently stoking a chatbot mental health panic in Congress.

Johansen, too, appears optimistic about companion bots in particular. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply “insights from the impact of social media on youth mental health to emerging technologies like AI companions,” concluding that “AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality.”

Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically “studies how technology can help prevent and treat depression.”

Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can’t get to the therapist’s office.

More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists.

“I don’t think we’re near the point yet where there’s just going to be an AI who acts like a therapist,” Mohr said. “There’s still too many ways it can go off the rails.”

Similarly, although Dennis-Tiwary told Wired last month that she finds the term “AI psychosis” to be “very unhelpful” in most cases that aren’t “clinical,” she has warned that “above all, AI must support the bedrock of human well-being, social connection.”

“While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern,” Dennis-Tiwary wrote last year.

For OpenAI, the wellness council could help the company turn a corner as ChatGPT and Sora continue to be heavily scrutinized. The company also confirmed that it would continue consulting “the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.”

4chan fined $26K for refusing to assess risks under UK Online Safety Act

The risk assessments also seem to unconstitutionally compel speech, they argued, forcing them to share information and “potentially incriminate themselves on demand.” That conflicts with 4chan and Kiwi Farms’ Fourth Amendment rights, as well as “the right against self-incrimination and the due process clause of the Fifth Amendment of the US Constitution,” the suit says.

Additionally, “the First Amendment protects Plaintiffs’ right to permit anonymous use of their platforms,” 4chan and Kiwi Farms argued, opposing Ofcom’s requirements to verify ages of users. (This may be their weakest argument as the US increasingly moves to embrace age gates.)

4chan is hoping a US district court will intervene and ban enforcement of the OSA, arguing that the US must act now to protect all US companies. Failure to act could be a slippery slope, 4chan and Kiwi Farms argued, as the UK is supposedly targeting “the most well-known, but small and, financially speaking, defenseless platforms” in the US before mounting attacks to censor “larger American companies.”

Ofcom has until November 25 to respond to the lawsuit and has maintained that the OSA is not a censorship law.

On Monday, Britain’s technology secretary, Liz Kendall, called the OSA a “lifeline” meant to protect people across the UK “from the darkest corners of the Internet,” the Record reported.

“Services can no longer ignore illegal content, like encouraging self-harm or suicide, circulating online which can devastate young lives and leaves families shattered,” Kendall said. “This fine is a clear warning to those who fail to remove illegal content or protect children from harmful material.”

Whether 4chan and Kiwi Farms can win their fight to create a carveout in the OSA for American companies remains unclear, but the Federal Trade Commission agrees that the UK law is an overreach. In August, FTC Chair Andrew Ferguson warned US tech companies against complying with the OSA, claiming that censoring Americans to comply with UK law is a violation of the FTC Act, the Record reported.

“American consumers do not reasonably expect to be censored to appease a foreign power and may be deceived by such actions,” Ferguson told tech executives in a letter.

Another lawyer backing 4chan, Preston Byrne, seemed to echo Ferguson, telling the BBC, “American citizens do not surrender our constitutional rights just because Ofcom sends us an e-mail.”

“Extremely angry” Trump threatens “massive” tariff on all Chinese exports

The chairman of the House of Representatives’ Select Committee on the Chinese Communist Party (CCP), John Moolenaar (R-Mich.), issued a statement suggesting that, unlike Trump, he’d seen China’s rare earths move coming. He pushed Trump to interpret China’s export controls as “an economic declaration of war against the United States and a slap in the face to President Trump.”

“China has fired a loaded gun at the American economy, seeking to cut off critical minerals used to make the semiconductors that power the American military, economy, and devices we use every day including cars, phones, computers, and TVs,” Moolenaar said. “Every American will be negatively affected by China’s action, and that’s why we must address America’s vulnerabilities and build our own leverage against China.”

To strike back forcefully, Moolenaar suggested passing a law he sponsored that he said would “end preferential trade treatment for China, build a resilient resource reserve of critical minerals, secure American research and campuses from Chinese influence, and strangle China’s technology sector with export controls instead of selling it advanced chips.”

Moolenaar also emphasized steps he recommended back in September that he claimed Trump could take to “create real leverage with China” in the face of its stranglehold on rare earths.

Those included “restricting or suspending Chinese airline landing rights in the US,” “reviewing export control policies governing the sale of commercial aircraft, parts, and maintenance services to China,” and “restricting outbound investment in China’s aviation sector in coordination with key allies.”

“These steps would send a clear message to Beijing that it cannot choke off critical supplies to our defense industries without consequences to its own strategic sectors,” Moolenaar wrote in his September letter to Trump. “By acting together, the US and its allies can strengthen our resilience, reinforce solidarity, and create real leverage with China.”

OpenAI will stop saving most ChatGPT users’ deleted chats

Moving forward, all of the deleted and temporary chats that were previously saved under the preservation order will continue to be accessible to news plaintiffs, who are looking for examples of outputs infringing their articles or attributing misinformation to their publications.

Additionally, OpenAI will continue monitoring certain ChatGPT accounts, saving deleted and temporary chats of any users whose domains have been flagged by news organizations since they began searching through the data. If news plaintiffs flag additional domains during future meetings with OpenAI, more accounts could be roped in.

Ars could not immediately reach OpenAI or the Times’ legal team for comment.

The dispute with news plaintiffs continues to heat up beyond the battle over user logs, most recently with co-defendant Microsoft pushing to keep its AI companion Copilot out of the litigation.

The stakes remain high for both sides. News organizations allege that ChatGPT and other purportedly copyright-infringing tools threaten to replace them in their market while potentially damaging their reputations by attributing false information to them.

OpenAI may be increasingly pressured to settle the lawsuit, not by news organizations but by insurance companies unwilling to provide comprehensive coverage for its AI products while multiple potentially multibillion-dollar lawsuits are pending.

Boring Company cited for almost 800 environmental violations in Las Vegas

Workers have complained of chemical burns from the waste material generated by the tunneling process, and firefighters must decontaminate their equipment after conducting rescues from the project sites. The company was fined more than $112,000 by Nevada’s Occupational Safety and Health Administration in late 2023 after workers complained of “ankle-deep” water in the tunnels, muck spills, and burns. The Boring Co. has contested the violations. Just last month, a construction worker suffered a “crush injury” after being pinned between two 4,000-pound pipes, according to police records. Firefighters used a crane to extract him from the tunnel opening.

After ProPublica and City Cast Las Vegas published their January story, both the CEO and the chairman of the LVCVA board criticized the reporting, arguing the project is well-regulated. As an example, LVCVA CEO Steve Hill cited the delayed opening of a Loop station by local officials who were concerned that fire safety requirements weren’t adequate. Board chair Jim Gibson, who is also a Clark County commissioner, agreed the project is appropriately regulated.

“We wouldn’t have given approvals if we determined things weren’t the way they ought to be and what it needs to be for public safety reasons,” Gibson said, according to the Las Vegas Review Journal. “Our sense is we’ve done what we need to do to protect the public.”

Asked for a response to the new proposed fines, an LVCVA spokesperson said, “We won’t be participating in this story.”

The repeated allegations that the company is violating regulations, including the bespoke regulatory arrangement it agreed to, indicate that officials aren’t keeping the public safe, said Ben Leffel, an assistant public policy professor at the University of Nevada, Las Vegas.

“Not if they’re recommitting almost the exact violation,” Leffel said.

Leffel questioned whether a $250,000 penalty would be significant enough to change operations at The Boring Co., which was valued at $7 billion in 2023. Studies show that fines that don’t put a significant dent in a company’s profit don’t deter future violations, he said.

A state spokesperson disagreed that regulators aren’t keeping the public safe and said the agency believes its penalties will deter “future non-compliance.”

“NDEP is actively monitoring and inspecting the projects,” the spokesperson said.

This story originally appeared on ProPublica.

Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine, along with his disclosure that smoking marijuana on a podcast prompted NASA to require random drug testing, “only enhance” the public’s interest in how his security clearances were vetted, Cote wrote. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to “embarrassment or humiliation.”

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”

Vandals deface ads for AI necklaces that listen to all your conversations

In addition to backlash over feared surveillance capitalism, critics have accused Schiffman of taking advantage of the loneliness epidemic. Conducting a survey last year, researchers with Harvard Graduate School of Education’s Making Caring Common found that people between “30-44 years of age were the loneliest group.” Overall, 73 percent of those surveyed “selected technology as contributing to loneliness in the country.”

But Schiffman rejects these criticisms, telling the NYT that his AI Friend pendant is intended to supplement human friends, not replace them, supposedly helping to raise the “average emotional intelligence” of users “significantly.”

“I don’t view this as dystopian,” Schiffman said, suggesting that “the AI friend is a new category of companionship, one that will coexist alongside traditional friends rather than replace them,” the NYT reported. “We have a cat and a dog and a child and an adult in the same room,” the Friend founder said. “Why not an AI?”

The MTA has not commented on the controversy, but Victoria Mottesheard—a vice president at Outfront Media, which manages MTA advertising—told the NYT that the Friend campaign blew up because AI “is the conversation of 2025.”

Website lets anyone deface Friend ads

So far, the Friend ads have not yielded significant sales; Schiffman told the NYT that only 3,100 pendants have sold. He believes society isn’t yet ready for AI companions to be promoted at such a large scale and that his ad campaign will help normalize AI friends.

In the meantime, critics have rushed to attack Friend on social media, inspiring a website where anyone can vandalize a Friend ad and share it online. That website has received close to 6,000 submissions so far, its creator, Marc Mueller, told the NYT, and visitors can take a tour of these submissions by choosing to “ride train to see more” after creating their own vandalized version.

For visitors to Mueller’s site, riding the train displays a carousel documenting backlash to Friend, as well as “performance art” by visitors poking fun at the ads in less serious ways. One example showed a vandalized ad changing “Friend” to “Fries,” with a crude illustration of McDonald’s French fries, while another transformed the ad into a campaign for “fried chicken.”

Others were seemingly more serious about turning the ad into a warning. One vandal drew a bunch of arrows pointing to the “end” in Friend while turning the pendant into a cry-face emoji, seemingly drawing attention to research on the mental health risks of relying on AI companions—including the alleged suicide risks of products like Character.AI and ChatGPT, which have spawned lawsuits and prompted a Senate hearing.

Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals

SpaceX holds spectrum licenses for the Starlink fixed Internet service for homes and businesses. Adding the EchoStar spectrum will make its holdings suitable for mobile service.

“SpaceX currently holds no terrestrial spectrum authorizations and no license to use spectrum allocated on a primary basis to MSS,” the company’s FCC filing said. “Its only authorization to provide any form of mobile service is an authorization for secondary SCS [Supplemental Coverage from Space] operations in spectrum licensed to T-Mobile.”

Starlink unlikely to dethrone major carriers

SpaceX’s spectrum purchase doesn’t make it likely that Starlink will become a fourth major carrier. Grand claims of that sort are “complete nonsense,” wrote industry analyst Dean Bubley. “Apart from anything else, there’s one very obvious physical obstacle: walls and roofs,” he wrote. “Space-based wireless, even if it’s at frequencies supported in normal smartphones, won’t work properly indoors. And uplink from devices to satellites will be even worse.”

When you’re indoors, “there’s more attenuation of the signal,” resulting in lower data rates, Farrar said. “You might not even get megabits per second indoors, unless you are going to go onto a home Starlink broadband network,” he said. “You might only be able to get hundreds of kilobits per second in an obstructed area.”

The Mach33 analyst firm is more bullish than others regarding Starlink’s potential cellular capabilities. “With AWS-4/H-block and V3 [satellites], Starlink DTC is no longer niche, it’s a path to genuine MNO competition. Watch for retail mobile bundles, handset support, and urban hardware as the signals of that pivot,” the firm said.

Mach33’s optimism is based in part on the expectation that SpaceX will make more deals. “DTC isn’t just a coverage filler, it’s a springboard. It enables alternative growth routes; M&A, spectrum deals, subleasing capacity in denser markets, or technical solutions like mini-towers that extend Starlink into neighborhoods,” the group’s analysis said.

The amount of spectrum SpaceX is buying from EchoStar is just a fraction of what the national carriers control. There is “about 1.1 GHz of licensed spectrum currently allocated to mobile operators,” wireless lobby group CTIA said in a January 2025 report. The group also says the cellular industry has over 432,000 active cell sites around the US.

What Starlink can offer cellular users “is nothing compared to the capacity of today’s 5G networks,” but it would be useful “in less populated areas or where you cannot get coverage,” Rysavy said.

Starlink has about 8,500 satellites in orbit. Rysavy estimated in a July 2025 report that about 280 of them are over the United States at any given time. These satellites are mostly providing fixed Internet service in which an antenna is placed outside a building so that people can use Wi-Fi indoors.

SpaceX’s FCC filing said the EchoStar spectrum’s mix of terrestrial and satellite frequencies will be ideal for Starlink.

“By acquiring EchoStar’s market-access authorization for 2 GHz MSS as well as its terrestrial AWS-4 licenses, SpaceX will be able to deploy a hybrid satellite and terrestrial network, just as the Commission envisioned EchoStar would do,” SpaceX said. “Consistent with the Commission’s finding that potential interference between MSS and terrestrial mobile service can best be managed by enabling a single licensee to control both networks, assignment of the AWS-4 spectrum is critical to enable SpaceX to deploy robust MSS service in this band.”

ICE wants to build a 24/7 social media surveillance team

Together, these teams would operate as intelligence arms of ICE’s Enforcement and Removal Operations division. They would receive tips and incoming cases, research individuals online, and package the results into dossiers that field offices could use to plan arrests.

The scope of information contractors are expected to collect is broad. Draft instructions specify open-source intelligence: public posts, photos, and messages on platforms from Facebook to Reddit to TikTok. Analysts may also be tasked with checking more obscure or foreign-based sites, such as Russia’s VKontakte.

They would also be armed with powerful commercial databases such as LexisNexis Accurint and Thomson Reuters CLEAR, which knit together property records, phone bills, utilities, vehicle registrations, and other personal details into searchable files.

The plan calls for strict turnaround times. Urgent cases, such as suspected national security threats or people on ICE’s Top Ten Most Wanted list, must be researched within 30 minutes. High-priority cases get one hour; lower-priority leads must be completed within the workday. ICE expects at least three-quarters of all cases to meet those deadlines, with top contractors hitting closer to 95 percent.

The plan goes beyond staffing. ICE also wants algorithms, asking contractors to spell out how they might weave artificial intelligence into the hunt—a solicitation that mirrors other recent proposals. The agency has also set aside more than a million dollars a year to arm analysts with the latest surveillance tools.

ICE did not immediately respond to a request for comment.

Earlier this year, The Intercept revealed that ICE had floated plans for a system that could automatically scan social media for “negative sentiment” toward the agency and flag users thought to show a “proclivity for violence.” Procurement records previously reviewed by 404 Media identified software used by the agency to build dossiers on flagged individuals, compiling personal details, family links, and even using facial recognition to connect images across the web. Observers warned it was unclear how such technology could distinguish genuine threats from political speech.
