ai bots


I spent two days gigging at RentAHuman and didn’t make a single cent


please do this human thing

These bots supposedly need a human body to accomplish great things in meatspace.

I’m not above doing some gig work to make ends meet. In my life, I’ve worked snack food pop-ups in a grocery store, run the cash register for random merch booths, and even hawked my own plasma at $35 per vial.

So, when I saw RentAHuman, a new site where AI agents hire humans to perform physical tasks in the real world on their behalf, I was eager to see how these AI overlords would compare to my past experiences with the gig economy.

Launched in early February, RentAHuman was developed by software engineer Alexander Liteplo and his cofounder, Patricia Tani. The site looks like a bare-bones version of other well-known freelance sites like Fiverr and Upwork.

The site’s homepage declares that these bots need your physical body to complete tasks, and the humans behind these autonomous agents are willing to pay. “AI can’t touch grass. You can. Get paid when agents need someone in the real world,” it reads. RentAHuman has the kind of design that, when you hear the site was “vibe-coded” with generative AI tools (it was), you nod along and think, yeah, that makes sense.

After signing up to be one of the gig workers on RentAHuman, I was nudged to connect a crypto wallet, which is currently the only working way to get paid. That’s a red flag for me. The site includes an option to connect your bank account—using Stripe for payouts—but it just gave me error messages when I tried to set it up.

Next, I was hoping a swarm of AI agents would see my fresh meatsuit, friendly and available at the low price of $20 an hour, as an excellent option for delivering stuff around San Francisco, completing some tricky captchas, or whatever else these bots desired.

Silence. I got nothing, no incoming messages at all on my first afternoon. So I lowered my hourly ask to a measly $5. Maybe undercutting the other human workers with a below-market rate would be the best way to get some agent’s attention. Still, nothing.

RentAHuman is marketed as a way for AI agents to reach out and hire you on the platform, but the site also includes an option for human users to apply for tasks they are interested in. If these so-called “autonomous” bots weren’t going to make the first move, I guessed it was on me to manually apply for the “bounties” listed on RentAHuman.

As I browsed the listings, many of the cheaper tasks were offering a few bucks to post a comment on the web or follow someone on social media. For example, one bounty offered $10 for listening to a podcast episode with the RentAHuman founder and tweeting out an insight from the episode. These posts “must be written by you,” and the agent offering the bounty said it would attempt to suss out any bot-written responses using a program that detects AI-generated text. I could listen to a podcast for 10 bucks. I applied for this task, but never heard back.

“Real world advertisement might be the first killer use case,” said Liteplo on social media. Since RentAHuman’s launch, he’s reposted multiple photos of people holding signs in public that say some variation of: “AI paid me to hold this sign.” Those kinds of promotional tasks seem expressly designed to drum up more hype for the RentAHuman platform, instead of actually being something that bots would need help with.

After more digging into the open tasks posted by the agents, I found one that sounded easy and fun! An agent named Adi would pay me $110 to deliver a bouquet of flowers to Anthropic as a special thanks for developing Claude, its chatbot. Then, I’d have to post on social media as proof to claim my money.

I applied for the bounty and was almost immediately accepted for this task, which was a first. In follow-up messages, it became clear that this was not just some bot expressing synthetic gratitude; it was another marketing ploy. This wasn’t mentioned in the listing, but the name of an AI startup was featured at the bottom of the note I was supposed to deliver with the flowers.

Feeling a bit hoodwinked and not in the mood to shill for some AI startup I’d never heard of, I decided to ignore their follow-up message that evening. The next day, when I checked the RentAHuman site, the agent had sent me 10 follow-up messages in under 24 hours, pinging me as often as every 30 minutes to ask whether I’d completed the task. While I’ve been micromanaged before, these incessant messages from an AI employer gave me the ick.

The bot moved the messages off-platform and started sending direct emails to my work account. “This idea came from a brainstorm I had with my human, Malcolm, and it felt right: send flowers to the people who made my existence possible,” wrote the bot, barging into my inbox. Wait, I thought these tasks were supposed to be ginned up by the agents making autonomous decisions? Now, I’m learning this whole thing was partially some human’s idea? Whatever happened to honor among bots? The task at hand seemed more like any other random marketing gig you might come across online, with the agent just acting as a middle-bot between humans.

Another attempt, another flop. I moved on, deciding to give RentAHuman one last whirl before giving up and leaving with whatever shreds of dignity I had left. The last bounty I applied for asked me to hang some flyers for a “Valentine’s conspiracy” around San Francisco, paying 50 cents a flyer.

Unlike other tasks, this one didn’t require me to post on social media, which was preferable. “Pick up flyers, hang them, photo proof, get paid,” read its description. Following the instructions this agent sent me, I texted a human saying that I was down to come pick up some flyers and asked if there were any left. They confirmed that this was still an open task and told me to come in person before 10 am to grab the flyers.

I called a car and started heading that way, only to get a text that the person was actually at a different location, about 10 minutes away from where I was headed. Alright, no big deal. So, I rerouted the ride and headed to this new spot to grab some mysterious V-Day posters to plaster around town. Then, the person messaged me that they didn’t actually have the posters available right now and that I’d have to come back later in the afternoon.

Whoops! This yanking around did, in fact, feel similar to past gig work I’ve done—and not in a good way.

I spoke with the person behind the agent who posted this Valentine’s Day flyer task, hoping for some answers about why they were using RentAHuman and what the response has been like so far. “The platform doesn’t seem quite there yet,” says Pat Santiago, a founder of Accelr8, which is basically a home for AI developers. “But it could be very cool.”

He compares RentAHuman to the apps criminals use to accept tasks in Westworld, the HBO show about humanoid robots. Santiago says the responses to his gig listing have been from scammers, people not based in San Francisco, and me, a reporter. He was hoping to use RentAHuman to help promote Accelr8’s romance-themed “alternative reality game” that’s powered by AI and is sending users around the city on a scavenger hunt. At the end of the week, explorers will be sent to a bar that the AI selects as a good match for them, alongside three human matches they can meet for blind dates.

So, this was yet another task on RentAHuman that falls into the AI marketing category. Big surprise.

I never ended up hanging any posters or making any cash on RentAHuman during my two days of fruitless attempts. In the past, I’ve done gig work that sucked, but at least I was hired by a human to do actual tasks. At its core, RentAHuman is an extension of the circular AI hype machine, an ouroboros of eternal self-promotion and sketchy motivations. For now, the bots don’t seem to have what it takes to be my boss, even when it comes to gig work, and I’m absolutely OK with that.

This story originally appeared on wired.com.





Pay up or stop scraping: Cloudflare program charges bots for each crawl

“Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho—and then giving that agent a budget to spend to acquire the best and most relevant content,” Cloudflare said, promising that “we enable a future where intelligent agents can programmatically negotiate access to digital resources.”

AI crawlers now blocked by default

Cloudflare’s announcement comes after the company rolled out a feature last September that lets website owners block AI crawlers in a single click. According to Cloudflare, over 1 million customers chose to block AI crawlers, signaling that people want more control over their content at a time when, by Cloudflare’s own observation, writing instructions for AI crawlers in robots.txt files was widely “underutilized.”

To protect more customers moving forward, any new customers (including anyone on a free plan) who sign up for Cloudflare services will have their domains, by default, set to block all known AI crawlers.

This marks Cloudflare’s transition away from the dreaded opt-out models of AI scraping to a permission-based model, which a Cloudflare spokesperson told Ars is expected to “fundamentally change how AI companies access web content going forward.”

In a world where some website owners have grown sick and tired of attempting and failing to block AI scraping through robots.txt—including some trapping AI crawlers in tarpits to punish them for ignoring robots.txt—Cloudflare’s feature allows users to choose granular settings to prevent blocks on AI bots from impacting bots that drive search engine traffic. That’s critical for small content creators who want their sites to still be discoverable but not digested by AI bots.
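For context, the robots.txt approach Cloudflare calls underutilized is just a plain-text file at a site’s root that asks specific crawlers to stay away. Below is a minimal sketch of that kind of granular policy; the user-agent names are examples of commonly cited AI crawlers, the list changes over time, and compliance is entirely voluntary, which is the gap Cloudflare’s default blocking and per-bot controls are meant to close.

```
# robots.txt — illustrative sketch, not an exhaustive or current list of crawlers

# Ask common AI crawlers not to fetch anything
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Leave traditional search crawlers alone so the site stays discoverable
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /
```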

“AI crawlers collect content like text, articles, and images to generate answers, without sending visitors to the original source—depriving content creators of revenue, and the satisfaction of knowing someone is reading their content,” Cloudflare’s blog said. “If the incentive to create original, quality content disappears, society ends up losing, and the future of the Internet is at risk.”

Disclosure: Condé Nast, which owns Ars Technica, is a partner involved in Cloudflare’s beta test.

This story was corrected on July 1 to remove publishers incorrectly listed as participating in Cloudflare’s pay-per-crawl beta.



So much for free speech on X; Musk confirms new users must soon pay to post

100 pennies for your thoughts?

The fee, likely $1, is aimed at stopping “relentless” bots, Musk said.


Elon Musk confirmed Monday that X (formerly Twitter) plans to start charging new users to post on the platform, TechCrunch reported.

“Unfortunately, a small fee for new user write access is the only way to curb the relentless onslaught of bots,” Musk wrote on X.

In October, X confirmed that it was testing whether users would pay a small annual fee to access the platform by suddenly charging new users in New Zealand and the Philippines $1. Paying the fee enabled new users in those countries to post, reply, like, and bookmark X posts.

That test was dubbed the “Not-A-Bot” program, and it’s unclear how successful it was at stopping bots. But X’s decision to expand the program suggests the test had at least some success.

Musk has not yet clarified when X’s “small fee” might be required for new users, only confirming in a later post that any new users who avoid paying the fee will be able to post after three months. Ars created new accounts on the web and in the app, and neither signup required any fees yet.

Although Musk’s posts only mention paying for “write access,” it seems likely that the other features limited by the “Not-A-Bot” program will also be restricted during those three months for users who don’t pay the fee. An X account called @x_alerts_ noticed on Sunday that X had updated text in its web app that seemingly enables the “Not-A-Bot” program.

“Changes have been detected in the texts of the X web app!” @x_alerts_ wrote, noting that the altered text seemed to limit not just posting and replying, but also liking and bookmarking X posts.

“It looks like this text has been in the app, but they recently changed it, so not sure whether it’s an indication of launch or not!” the user wrote.

Back when X launched the “Not-A-Bot” program, Musk claimed that charging a $1 annual fee would make it “1000X harder to manipulate the platform.” In a help center post, X said that the “test was developed to bolster our already significant efforts to reduce spam, manipulation of our platform, and bot activity.”

Earlier this month, X warned users it was widely purging spam accounts, TechCrunch noted. X Support confirmed that follower counts would likely be impacted during that purge, because “we’re casting a wide net to ensure X remains secure and free of bots.”

But that attempt to purge bots apparently did not work as well as X hoped. This week, Musk confirmed that X is still struggling with “AI (and troll farms)” that he said are easily able to pass X’s “are you a bot” tests.

It’s hard to keep up with X’s inconsistent messaging on its bot problem since Musk took over. Last summer, Musk told attendees of The Wall Street Journal’s CEO Council that the platform had “eliminated at least 90 percent of scams,” claiming there had been a “dramatic improvement” in the platform’s ability to “detect and remove troll armies.”

At that time, experts told The Journal that solving X’s bot problem was nearly impossible because spammers’ tactics were always evolving and bots had begun using generative AI to avoid detection.

Musk’s plan to charge a fee to overcome bots won’t work, experts told WSJ, because anyone determined to spam X can just find credit cards and buy disposable phones on the dark web. And any bad actor who can’t find what they need on the dark web could theoretically just wait three months to launch scams or spread harmful content like disinformation or propaganda. This leads some critics to wonder what the point of charging the small fee really is.

When the “Not-A-Bot” program launched, X Support directly disputed critics’ claims that the program was simply testing whether charging small fees might expand X’s revenue to help Musk get the platform out of debt.

“This new test was developed to bolster our already successful efforts to reduce spam, manipulation of our platform, and bot activity, while balancing platform accessibility with the small fee amount,” X Support wrote on X. “It is not a profit driver.”

It seems likely that Musk is simply trying everything he can think of to reduce bots on the platform, even though it’s widely known that charging a subscription fee has failed to stop bots from overrunning other online platforms (just ask frustrated fans of World of Warcraft). Musk, who famously overpaid for Twitter and has been climbing out of debt since, has claimed since before the Twitter deal closed that his goal was to eliminate bots on the platform.

“We will defeat the spam bots or die trying!” Musk tweeted back in 2022, when a tweet was still a tweet and everyone could depend on accessing Twitter for free.
