Reddit is planning to introduce a paywall this year, CEO Steve Huffman said during a videotaped Ask Me Anything (AMA) session on Thursday.
Huffman previously showed interest in potentially introducing a new type of subreddit with “exclusive content or private areas” that Reddit users would pay to access.
When asked this week about plans for some Redditors to create “content that only paid members can see,” Huffman said:
It’s a work in progress right now, so that one’s coming… We’re working on it as we speak.
When asked about “new, key features that you plan to roll out for Reddit in 2025,” Huffman responded, in part: “Paid subreddits, yes.”
Reddit’s paywall would ostensibly only apply to certain new subreddit types, not any subreddits currently available. In August, Huffman said that even with paywalled content, free Reddit would “continue to exist and grow and thrive.”
A critical aspect of any potential plan to make Reddit users pay to access subreddit content is determining how the Reddit users involved will be compensated. Reddit may have a harder time getting volunteer moderators to wrangle discussions on paid-for subreddits—if it uses volunteer mods at all. Balancing paid and free content would also be necessary to avoid alienating much of Reddit’s current user base.
Reddit has offered paid-for premium community features before, such as r/Lounge, a subreddit accessible only to people with Reddit Gold, which must be purchased with real money.
Reddit would also need to consider how it might compensate people for user-generated content that others pay to access, as Reddit’s business is largely built on free, user-generated content. The Reddit Contributor Program, launched in September 2023, could be a foundation; it lets users “earn money for their qualifying contributions to the Reddit community, including awards and karma, collectible avatars, and developer apps,” according to Reddit. Reddit says it pays up to $0.01 per 1 Gold received, depending on how much karma the user has earned over the past year. To receive a payout, a user needs at least 1,000 Gold, which is equivalent to $10.
On a recent WikiTok browsing run, I ran across entries on topics like SX-Window (a GUI for the Sharp X68000 series of computers), Xantocillin (“the first reported natural product found to contain the isocyanide functional group”), Lorenzo Ghiberti (an Italian Renaissance sculptor from Florence), the William Wheeler House in Texas, and the city of Krautheim, Germany—none of which I knew existed before the session started.
How WikiTok took off
The idea for WikiTok originated with developer Tyler Angert on Monday evening when he tweeted, “insane project idea: all of wikipedia on a single, scrollable page.” Bloomberg Beta VC James Cham replied, “Even better, an infinitely scrolling Wikipedia page based on whatever you are interested in next?” and Angert coined “WikiTok” in a follow-up post.
Early the next morning, at 12:28 am, writer Grant Slatton quote-tweeted the WikiTok discussion, and that’s where Gemal came in. “I saw it from [Slatton’s] quote retweet,” he told Ars. “I immediately thought, ‘Wow, I can build an MVP [minimum viable product] and this could take off.'”
Gemal started his project at 12:30 am, and with help from AI coding tools like Anthropic’s Claude and Cursor, he finished a prototype by 2 am and posted the results on X. Someone later announced WikiTok on Y Combinator’s Hacker News, where it topped the site’s list of daily news items.
A screenshot of the WikiTok web app running in a desktop web browser. Credit: Benj Edwards
“The entire thing is only several hundred lines of code, and Claude wrote the vast majority of it,” Gemal told Ars. “AI helped me ship really really fast and just capitalize on the initial viral tweet asking for Wikipedia with scrolling.”
Gemal posted the code for WikiTok on GitHub, so anyone can modify or contribute to the project. Right now, the web app supports 14 languages, article previews, and article sharing on both desktop and mobile browsers. New features may arrive as contributors add them. It’s based on a tech stack that includes React 18, TypeScript, Tailwind CSS, and Vite.
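For a sense of how simple the core mechanic can be, here is a minimal, hypothetical sketch (not WikiTok’s actual code) of fetching random article summaries, the kind of data a scrollable feed could render one card at a time. The endpoint and response fields reflect Wikipedia’s public REST API as documented; the type and function names are invented for illustration.

```typescript
// Hypothetical sketch: fetch a batch of random Wikipedia article summaries
// that an infinite-scroll feed could render card by card. Not WikiTok's code.

interface ArticleCard {
  title: string;
  extract: string;     // plain-text summary of the article
  imageUrl?: string;   // lead image, if the article has one
  pageUrl: string;     // link back to the full article
}

async function fetchRandomArticle(lang = "en"): Promise<ArticleCard> {
  // Wikipedia's REST API returns one random article summary per request.
  const res = await fetch(
    `https://${lang}.wikipedia.org/api/rest_v1/page/random/summary`
  );
  if (!res.ok) throw new Error(`Wikipedia API error: ${res.status}`);
  const data = await res.json();
  return {
    title: data.title,
    extract: data.extract,
    imageUrl: data.thumbnail?.source,
    pageUrl: data.content_urls.desktop.page,
  };
}

// Pre-fetch a small batch so the feed always has the next few cards ready,
// which is how an infinite-scroll UI typically stays responsive.
async function fetchBatch(count = 5, lang = "en"): Promise<ArticleCard[]> {
  return Promise.all(
    Array.from({ length: count }, () => fetchRandomArticle(lang))
  );
}

// Example usage: load an initial batch and log the titles.
fetchBatch(5).then((cards) => cards.forEach((c) => console.log(c.title)));
```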
And so far, he is sticking to his vision of a free way to enjoy Wikipedia without being tracked and targeted. “I have no grand plans for some sort of insane monetized hyper-calculating TikTok algorithm,” Gemal told us. “It is anti-algorithmic, if anything.”
A Reddit spokesperson told Ars that decisions to ban or not ban X links are user-driven. Subreddit members are allowed to suggest and institute subreddit rules, they added.
“Notably, many Reddit communities also prohibit Reddit links,” the Reddit representative pointed out. They noted that Reddit as a company doesn’t currently have any ban on links to X.
A ban against links to an entire platform isn’t out of the ordinary for Reddit. Numerous subreddits ban social media links, Reddit’s spokesperson said. r/EarthPorn, a subreddit for landscape photography, for example, doesn’t allow website links because all posts “must be static images,” per the subreddit’s official rules. r/AskReddit, meanwhile, only allows questions asked in the title of a Reddit post and doesn’t allow use of the text box, including for sharing links.
“Reddit has a longstanding commitment to freedom of speech and freedom of association,” Reddit’s spokesperson said. They added that any person is free to make or moderate their own community. Those unsatisfied with a forum about Seahawks football that doesn’t have X links are free to make their own subreddit. That said, some of the subreddits considering X bans, like r/MadeMeSmile, already have millions of followers.
Meta bans also under discussion
As 404 Media noted, some Redditors are also pushing to block content from Facebook, Instagram, and other Meta properties in response to new Donald Trump-friendly policies instituted by owner Mark Zuckerberg, like Meta killing diversity programs and axing third-party fact-checkers.
People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.
Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.
But lots of people missed one key detail about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.
Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.
Tracking the outrage
The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.
Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.
The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. The Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, and the reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
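To make the mechanics concrete, here is a minimal sketch of what a domain-classification lookup of this kind might look like in code. It is purely illustrative and not the study’s actual code or data; the domains and ratings shown are placeholders standing in for the fact-checker-derived lists the researchers describe.

```typescript
// Illustrative sketch of a domain-classification check (not the study's code).
// An article is labeled by the trustworthiness rating of its source domain,
// looked up in a precompiled list; the domains and ratings are placeholders.

type TrustLabel = "trustworthy" | "misinformation" | "unknown";

// Placeholder trust list; real studies compile these from fact-checking orgs.
const domainRatings: Record<string, TrustLabel> = {
  "chicago.suntimes.com": "trustworthy",
  "breitbart.com": "misinformation",
};

function classifyByDomain(articleUrl: string): TrustLabel {
  // Strip a leading "www." so the lookup matches the bare domain.
  const host = new URL(articleUrl).hostname.replace(/^www\./, "");
  return domainRatings[host] ?? "unknown";
}

// Example usage
console.log(classifyByDomain("https://www.breitbart.com/some-story")); // "misinformation"
```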
Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.
Going with the flow
The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.
“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.
This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”
Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.
Flawed human nature
Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to include both outrageous and non-outrageous content, drawn from both reliable sources and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.
The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.
It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.
Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.
The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for London-based news publication Daily Express, pointed out this trend on X yesterday, noting the boom in Redditors referring people to Angus Steakhouse, a chain restaurant, to combat it.
As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:
“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”
As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.
“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.
Another Reddit user wrote:
Spreading misinformation suddenly becomes a noble goal.
As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I searched for the best steak sandwich in London, the top result was from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.
Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.
Reddit’s VP of community, who goes by Go_JasonWaterfalls, announced the change to what Reddit calls Community Types today. Reddit’s permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit’s employee claimed that requests will be responded to “in under 24 hours.”
Reddit’s employee said that “temporarily going restricted is exempt” from this requirement, adding that “mods can continue to instantly restrict posts and/or comments for up to 7 days using Temporary Events.” Additionally, if a subreddit has fewer than 5,000 members or is less than 30 days old, the request “will be automatically approved,” per Go_JasonWaterfalls.
Reddit’s post includes a list of “valid” reasons that mods tend to change their subreddit’s Community Type and provides alternative solutions.
Last year’s protests “accelerated” this policy change
Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.
Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators’ actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year’s site-wide protests from happening again.
Speaking to The Verge, Laura Nestler, who The Verge reported is Go_JasonWaterfalls, claimed that Reddit has been talking about making this change since at least 2021. The protests, she said, were a wake-up call that moderators’ ability to turn subreddits private “could be used to harm Reddit at scale.” The protests “accelerated” the policy change, per Nestler.
The announcement on r/modnews reads:
… the ability to instantly change Community Type settings has been used to break the platform and violate our rules. We have a responsibility to protect Reddit and ensure its long-term health, and we cannot allow actions that deliberately cause harm.
After shutting down a tactic for responding to unfavorable Reddit policy changes, Go_JasonWaterfalls claimed that Reddit still wants to hear from users.
“Community Type settings have historically been used to protest Reddit’s decisions,” they wrote.
“While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit. We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”
Last year’s user protests illustrated how dependent Reddit is on unpaid moderators and user-generated content. At times, things turned ugly, pitting Reddit executives against long-time users (Reddit CEO Steve Huffman infamously called Reddit mods “landed gentry,” something that some were quick to remind Go_JasonWaterfalls of) and reportedly worrying Reddit employees.
Although the protests failed to reverse Reddit’s prohibitive API fees or to save most third-party apps, they succeeded in getting users’ concerns heard and even crashed Reddit for three hours. Further, NSFW protests temporarily prevented Reddit from selling ads on some subreddits. Since going public this year and amid a push to reach profitability, Reddit has been more focused on ads than ever. (Most of Reddit’s money comes from ads.)
Reddit’s Nestler told The Verge that the new policy was reviewed by Reddit’s Mod Council. Reddit is confident that it won’t lose mods because of the change, she said.
“Demotes us all to janitors”
The news marks another broad policy change that is likely to upset users and make Reddit seem unwilling to give in to user feedback, despite Go_JasonWaterfalls saying that “protest is allowed on Reddit.” For example, in response, Reddit user CouncilOfStrongs said:
Don’t lie to us, please.
Something that you can ignore because it has no impact cannot be a protest, and no matter what you say that is obviously the one and only point of you doing this – to block moderators from being able to hold Reddit accountable in even the smallest way for malicious, irresponsible, bad faith changes that they make.
Reddit user belisaurius, who is listed as a mod for several active subreddits, including a 336,000-member one for the Philadelphia Eagles NFL team, said that the policy change “removes moderators from any position of central responsibility and demotes us all to janitors.”
As Reddit continues seeking profits and seemingly more control over a platform built around free user-generated content and moderation, users will have to either accept that Reddit is changing or leave the platform.
Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.
Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.
Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.
The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.
Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
The rise of deepfakes, the persistence of doubt
Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.
In the 20th century, one could argue that part of our trust in media produced by others stemmed from how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function, because we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.
In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.
Still of Procreate CEO James Cuda from a video posted to X.
On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.
“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”
In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”
Cuda’s sentiment echoes the fears of some digital artists who feel that AI image synthesis models, often trained on content without consent or compensation, threaten their livelihood and the authenticity of creative work. That’s not a universal sentiment among artists, but AI image synthesis is often a deeply divisive subject on social media, with some taking starkly polarized positions on the topic.
Procreate CEO James Cuda lays out his argument against generative AI in a video posted to X.
Cuda’s video plays on that polarization with clear messaging against generative AI. His statement reads as follows:
You’ve been asking us about AI. You know, I usually don’t like getting in front of the camera. I prefer that our products speak for themselves. I really fucking hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story’s gonna go or how it ends, but we believe that we’re on the right path supporting human creativity.
The debate over generative AI has intensified among some outspoken artists as more companies integrate these tools into their products. Dominant illustration software provider Adobe has tried to avoid ethical concerns by training its Firefly AI models on licensed or public domain content, but some artists have remained skeptical. Adobe Photoshop currently includes a “Generative Fill” feature powered by image synthesis, and the company is also experimenting with video synthesis models.
The backlash against image and video synthesis is not solely focused on creative app developers. Hardware manufacturer Wacom and game publisher Wizards of the Coast have faced criticism and issued apologies after using AI-generated content in their products. Toys “R” Us also faced a negative reaction after debuting an AI-generated commercial. Companies are still grappling with balancing the potential benefits of generative AI with the ethical concerns it raises.
Artists and critics react
A partial screenshot of Procreate’s AI website captured on August 20, 2024.
So far, Procreate’s anti-AI announcement has been met with a largely positive reaction in replies to its social media post. In a widely liked comment, artist Freya Holmér wrote on X, “this is very appreciated, thank you.”
Some of the more outspoken opponents of image synthesis also replied favorably to Procreate’s move. Karla Ortiz, who is a plaintiff in a lawsuit against AI image-generator companies, replied to Procreate’s video on X, “Whatever you need at any time, know I’m here!! Artists support each other, and also support those who allow us to continue doing what we do! So thank you for all you all do and so excited to see what the team does next!”
Artist RJ Palmer, who stoked the first major wave of AI art backlash with a viral tweet in 2022, also replied to Cuda’s video statement, saying, “Now thats the way to send a message. Now if only you guys could get a full power competitor to [Photoshop] on desktop with plugin support. Until someone can build a real competitor to high level [Photoshop] use, I’m stuck with it.”
A few pro-AI users also replied to the X post, including AI-augmented artist Claire Silver, who uses generative AI as an accessibility tool. She wrote on X, “Most of my early work is made with a combination of AI and Procreate. 7 years ago, before text to image was really even a thing. I loved procreate because it used tech to boost accessibility. Like AI, it augmented trad skill to allow more people to create. No rules, only tools.”
Since AI image synthesis continues to be a highly charged subject among some artists, reaffirming support for human-centric creativity could be an effective way for Procreate, which currently plays underdog to creativity app giant Adobe, to differentiate itself. And while some may prefer to use AI tools, in an (ideally healthy) app ecosystem with personal choice in illustration apps, people can follow their conscience.
Procreate’s anti-AI stance is slightly risky because it might also polarize part of its user base—and if the company changes its mind about including generative AI in the future, it will have to walk back its pledge. But for now, Procreate is confident in its decision: “In this technological rush, this might make us an exception or seem at risk of being left behind,” Procreate wrote. “But we see this road less traveled as the more exciting and fruitful one for our community.”
Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
It’s no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms TikTok and Bilibili where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.
In particular, the videos seem to parody AI-generated videos in which subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media users replicate these unusual visual non sequiturs without special effects by positioning their bodies in unusual ways as new and unexpected objects appear on camera from out of frame.
This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one video, seen above. “This is high-level performance arts,” wrote one X user. “art is imitating life imitating ai, almost shedded a tear.” Another commented, “I feel like it still needs a motorcycle the turns into a speedboat and takes off into the sky. Other than that, excellent work.”
An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
While these parodies poke fun at current limitations, tech companies are actively attempting to overcome them with more training data (examples analyzed by AI models that teach them how to create videos) and computational training time. OpenAI unveiled Sora in February, a model capable of creating realistic scenes if they closely match examples found in its training data. Runway’s Gen-3 Alpha has a similar limitation: It can create brief clips of convincing video within a narrow set of constraints. This means that generated videos of situations outside the dataset often end up hilariously weird.
An AI-generated video that features impossibly morphing people and animals. Social media users are imitating this style.
It’s worth noting that actor Will Smith beat Chinese social media users to this trend in February by poking fun at a horrific 2023 viral AI-generated video that attempted to depict him eating spaghetti. That may also bring back memories of other amusing video synthesis failures, such as May 2023’s AI-generated beer commercial, created using Runway’s earlier Gen-2 model.
An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
While imitating imperfect AI videos may seem strange to some, people regularly make money pretending to be NPCs (non-player characters—a term for computer-controlled video game characters) on TikTok.
For anyone alive during the 1980s, witnessing this fast-changing and often bizarre new media world can cause some cognitive whiplash, but the world is a weird place full of wonders beyond the imagination. “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy,” as Hamlet once famously said. “Including people pretending to be video game characters and flawed video synthesis outputs.”
Reddit executives discussed plans on Tuesday for making more money from the platform, including showing ads in more places and possibly putting some content behind a paywall.
On Tuesday, Reddit shared its Q2 2024 earnings report (PDF). The company lost $10.1 million during the period, down from Q2 2023’s $41.1 million loss. Reddit has never been profitable, and during its earnings call yesterday, company heads discussed potential and slated plans for monetization.
As expected, selling ads continues to be a priority. Part of the reason Reddit was OK with most third-party Reddit apps closing was that the change was expected to drive people to Reddit’s native website and apps, where the company sells ads. In Q2, Reddit’s ad revenue grew 41 percent year over year (YoY) to $253.1 million, or 90 percent of total revenue ($281.2 million).
When asked how the platform would grow ad revenue, Reddit COO Jen Wong said it’s important that advertisers “find the outcomes they want at the volumes and price they want.” She also pointed to driving more value per ad, or the cost that advertisers pay per average 1,000 impressions. To do that, Wong pointed to putting ads in places on Reddit where there aren’t ads currently:
There are still many places on Reddit without ads today. So we’re more focused on designing ads for spaces where users are spending more time versus increasing ad load in existing spaces. So for example, 50 percent of screen views, they’re now on conversation pages—that’s an opportunity.
Wong said that in places where Reddit does show ads currently, the ad load is “light” compared to about half of its rivals.
One of the places where Redditors may see more ads is within comments, which Wong noted Reddit is currently testing. This ad capability is only “experimental,” Wong emphasized, but Reddit sees ads in comments as a way to stand out to advertisers.
There’s also an opportunity to sell ad space within Reddit search results, according to Reddit CEO Steve Huffman, who said yesterday that “over the long term, there’s significant advertising potential there as well.” More immediately, though, Reddit is looking to improve its search capabilities and this year will test “new search result pages powered by AI to summarize and recommend content, helping users dive deeper into products, shows, games, and discover new communities on Reddit,” Huffman revealed yesterday. He said Reddit is using first- and third-party AI models to improve search aspects like speed and relevance.
The move comes as Reddit is currently blocking all search engines besides Google, OpenAI, and approved education/research instances from showing recent Reddit content in their results. Yesterday, Huffman reiterated his statement that Reddit is working with “big and small” search engines to strike deals like it already has with Google and OpenAI. But looking ahead, Reddit is focused on charging for content scraping and seems to be trying to capitalize on people’s interest in using Reddit as a filter for search results.
Paywalled content possible
The possibility of paywalls came up during the earnings call when an analyst asked Huffman about maintaining Reddit’s culture as it looks to “earn money now for people and creators on the platform.” Reddit has already launched a Contributor Program, where popular posts can make Reddit users money. It has discussed monetizing its developer platform, which is in public beta with “a few hundred active developers,” Huffman said yesterday. In response to the analyst’s question, Huffman said that based on his experience, adding new ways of using Reddit “expands” the platform but doesn’t “cannibalize existing Reddit.”
He continued:
I think the existing altruistic, free version of Reddit will continue to exist and grow and thrive just the way it has. But now we will unlock the door for new use cases, new types of subreddits that can be built that may have exclusive content or private areas—things of that nature.
Huffman’s comments suggest that paywalls could be added to new subreddits rather than existing ones. At this stage, though, it’s unclear how users might be charged to use Reddit in the future, if at all.
The idea of paywalling some content comes as various online entities are trying to diversify revenue beyond often volatile ad spending. Reddit has also tried elevating free aspects of the site, such as updates to Ask Me Anything (AMA), including new features like RSVPs, which were announced Tuesday.
Reddit has angered some long-time users with recent changes—including blocking search engines, forcing personalized ads, introducing an exclusionary fee for API access, and ousting some moderators during last year’s user protests—but Reddit saw its daily active unique user count increase by 51 percent YoY in Q2 to 91.2 million.
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
Reddit content will be incorporated into ChatGPT “and new products,” Reddit’s blog post said. The social media firm claims the partnership will “enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics.” OpenAI will also start advertising on Reddit.
The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make “new ways to display Reddit content” and provide “more efficient ways to train models,” Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit’s partnership with Google was reportedly worth $60 million.
Under the partnership, Reddit also gains access to OpenAI’s large language models (LLMs) to create features for Redditors, including its volunteer moderators.
Reddit’s data licensing push
The news comes about a year after Reddit launched an API war by starting to charge for access to its data API. This resulted in many beloved third-party Reddit apps closing and a massive user protest. Reddit, which would soon become a public company and hadn’t turned a profit yet, said one of the reasons for the sudden change was to prevent AI firms from using Reddit content to train their LLMs for free.
Earlier this month, Reddit published a Public Content Policy stating: “Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests.”
In its blog post on Thursday, Reddit said that deals like OpenAI’s are part of an “open” Internet. It added that “part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.”
Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. Its AI partnerships have sparked debate about the use of user-generated content to fuel AI models without users being compensated, some of whom likely never considered that their social media posts would be used this way. OpenAI and Stack Overflow faced pushback earlier this month when integrating Stack Overflow content with ChatGPT. Some of Stack Overflow’s user community responded by sabotaging their own posts.
OpenAI also faces the challenge of working with Reddit data that, like much of the Internet, can be filled with inaccuracies and inappropriate content. Some of the biggest opponents of Reddit’s API rule changes were volunteer mods. Some have exited the platform since, and following the rule changes, Ars Technica spoke with long-time Redditors who were concerned about Reddit content quality moving forward.
Regardless, generative AI firms are keen to tap into Reddit’s access to real-time conversations from a variety of people discussing a nearly endless range of topics. And Reddit seems equally eager to license the data from its users’ posts.
For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.
“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.
Bumble HQ
“We don’t have enough women on the app.”
“They’d rather be alone than deal with men.”
“Should we teach men to be better?”
“No, we should shame women so they come back to the app.”
— Arghavan Salles, MD, PhD (@arghavan_salles) May 14, 2024
Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.
Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. These reasons include identifying as asexual or sensibly abstaining from sex amid diminishing access to abortion nationwide.
Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.
“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”
That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”
One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”
Bumble did not respond to Ars’ request for comment on these criticisms or to verify user statistics.
In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:
Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.
Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.
In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”
The dating app is increasingly investing in ads, AdAge reported, tripling its investment from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, and a dismal 86 percent drop in its share price since its initial public offering in February 2021.
Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.