2024 Olympics

Google pulls its terrible pro-AI “Dear Sydney” ad after backlash

Gemini, write me a fan letter! —

Taking the “human” out of “human communication.”

The Gemini prompt box in the “Dear Sydney” ad.

Google

Have you seen Google’s “Dear Sydney” ad? The one where a young girl wants to write a fan letter to Olympic hurdler Sydney McLaughlin-Levrone? To which the girl’s dad responds that he is “pretty good with words but this has to be just right”? And so, to be just right, he suggests that the daughter get Google’s Gemini AI to write a first draft of the letter?

If you’re watching the Olympics, you have undoubtedly seen it—because the ad has been everywhere. Until today. After a string of negative commentary about the ad’s dystopian implications, Google has pulled the “Dear Sydney” ad from TV. In a statement to The Hollywood Reporter, the company said, “While the ad tested well before airing, given the feedback, we have decided to phase the ad out of our Olympics rotation.”

The backlash was similar to that against Apple’s recent ad in which an enormous hydraulic press crushed TVs, musical instruments, record players, paint cans, sculptures, and even emoji into… the newest model of the iPad. Apple apparently wanted to show just how much creative and entertainment potential the iPad held; critics read the ad as a warning image about the destruction of human creativity in a technological age. Apple apologized soon after.

Now Google has stepped on the same land mine. Not only is AI coming for human creativity, the “Dear Sydney” ad suggests—but it won’t even leave space for the charming imperfections of a child’s fan letter to an athlete. Instead, AI will provide the template, just as it will likely provide the template for the athlete’s response, leading to a nightmare scenario in which huge swathes of human communication have the “human” part stripped right out.

“Very bad”

The generally hostile tone of the commentary on the new ad was captured by Alexandra Petri’s Washington Post column, which labeled the ad “very bad.”

This ad makes me want to throw a sledgehammer into the television every time I see it. Given the choice between watching this ad and watching the ad about how I need to be giving money NOW to make certain that dogs do not perish in the snow, I would have to think long and hard. It’s one of those ads that makes you think, perhaps evolution was a mistake and our ancestor should never have left the sea. This could be slight hyperbole but only slight!

If you haven’t seen this ad, you are leading a blessed existence and I wish to trade places with you.

A TechCrunch piece said that it was “hard to think of anything that communicates heartfelt inspiration less than instructing an AI to tell someone how inspiring they are.”

Shelly Palmer, a Syracuse University professor and marketing consultant, wrote that the ad’s basic mistake was overestimating “AI’s ability to understand and convey the nuances of human emotions and thoughts.” Palmer would rather have a “heartfelt message over a grammatically correct, AI-generated message any day,” he said. He then added:

I received just such a heartfelt message from a reader years ago. It was a single line email about a blog post I had just written: “Shelly, you’re to [sic] stupid to own a smart phone.” I love this painfully ironic email so much, I have it framed on the wall in my office. It was honest, direct, and probably accurate.

But his conclusion was far more serious. “I flatly reject the future that Google is advertising,” Palmer wrote. “I want to live in a culturally diverse world where billions of individuals use AI to amplify their human skills, not in a world where we are used by AI pretending to be human.”

Things got saltier from there. NPR host Linda Holmes wrote on social media:

This commercial showing somebody having a child use AI to write a fan letter to her hero SUCKS. Obviously there are special circumstances and people who need help, but as a general “look how cool, she didn’t even have to write anything herself!” story, it SUCKS. Who wants an AI-written fan letter?? I promise you, if they’re able, the words your kid can put together will be more meaningful than anything a prompt can spit out. And finally: A fan letter is a great way for a kid to learn to write! If you encourage kids to run to AI to spit out words because their writing isn’t great yet, how are they supposed to learn? Sit down with your kid and write the letter with them! I’m just so grossed out by the entire thing.

The Atlantic was more succinct with its headline: “Google Wins the Gold Medal for Worst Olympic Ad.”

All of this largely tracks with our own take on the ad, which Ars Technica’s Kyle Orland called a “grim” vision of the future. “I want AI-powered tools to automate the most boring, mundane tasks in my life, giving me more time to spend on creative, life-affirming moments with my family,” he wrote. “Google’s ad seems to imply that these life-affirming moments are also something to be avoided—or at least made pleasingly more efficient—through the use of AI.”

Getting people excited about their own obsolescence and addiction is a tough sell, so I don’t envy the marketers who have to hawk Big Tech’s biggest products in a climate of suspicion and hostility toward everything from AI to screen time to social media to data collection. I’m sure the marketers will find a way—but clearly “Dear Sydney” isn’t it.

Outsourcing emotion: The horror of Google’s “Dear Sydney” AI ad

Here’s an idea: Don’t be a deadbeat and do it yourself!

If you’ve watched any Olympics coverage this week, you’ve likely been confronted with an ad for Google’s Gemini AI called “Dear Sydney.” In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone.

“I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”

Every time I see this ad, it puts me on edge in a way I’ve had trouble putting into words (though Gemini itself has some helpful thoughts). As someone who writes words for a living, the idea of outsourcing a writing task to a machine brings up some vocational anxiety. And the idea of someone who’s “pretty good with words” doubting his abilities when the writing “has to be just right” sets off alarm bells regarding the superhuman framing of AI capabilities.

But I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.

The “Dear Sydney” ad.

It’s a distressing answer to what’s still an incredibly common question in the AI space: What do you actually use these things for?

Yes, I can help

Marketers have a difficult task when selling the public on their shiny new AI tools. An effective ad for an LLM has to make it seem like a superhuman do-anything machine but also an approachable, friendly helper. An LLM has to be shown as good enough to reliably do things you can’t (or don’t want to) do yourself, but not so good that it will totally replace you.

Microsoft’s 2024 Super Bowl ad for Copilot is a good example of an attempt to thread this needle, featuring a handful of examples of people struggling to follow their dreams in the face of unseen doubters. “Can you help me?” those dreamers ask Copilot with various prompts. “Yes, I can help” is the message Microsoft delivers back, whether through storyboard images, an impromptu organic chemistry quiz, or “code for a 3D open world game.”

Microsoft’s Copilot marketing sells it as a helper for achieving your dreams.

The “Dear Sydney” ad tries to fit itself into this same box, technically. The prompt in the ad starts with “Help my daughter…” and the tagline at the end offers “A little help from Gemini.” If you look closely near the end, you’ll also see Gemini’s response starts with “Here’s a draft to get you started.” And to be clear, there’s nothing inherently wrong with using an LLM as a writing assistant in this way, especially if you have a disability or are writing in a non-native language.

But the subtle shift from Microsoft’s “Help me” to Google’s “Help my daughter” changes the tone of things. Inserting Gemini into a child’s heartfelt request for parental help makes it seem like the parent in question is offloading their responsibilities to a computer in the coldest, most sterile way possible. More than that, it comes across as an attempt to avoid an opportunity to bond with a child over a shared interest in a creative way.

It’s one thing to use AI to help you with the most tedious parts of your job, as people do in recent ads for Salesforce’s Einstein AI. It’s another to tell your daughter to go ask the computer for help pouring her heart out to her idol.

At the Olympics, AI is watching you

“It’s the eyes of the police multiplied” —

New system foreshadows a future where there are too many CCTV cameras for humans to physically watch.

Police observe the Eiffel Tower from Trocadero ahead of the Paris 2024 Olympic Games on July 22, 2024.

On the eve of the Olympics opening ceremony, Paris is a city swamped in security. Forty thousand barriers divide the French capital. Packs of police officers wearing stab vests patrol pretty, cobbled streets. The river Seine is out of bounds to anyone who has not already been vetted and issued a personal QR code. Khaki-clad soldiers, present since the 2015 terrorist attacks, linger near a canal-side boulangerie, wearing berets and clutching large guns to their chests.

French interior minister Gérald Darmanin has spent the past week justifying these measures as vigilance—not overkill. France is facing the “biggest security challenge any country has ever had to organize in a time of peace,” he told reporters on Tuesday. In an interview with weekly newspaper Le Journal du Dimanche, he explained that “potentially dangerous individuals” have been caught applying to work or volunteer at the Olympics, including 257 radical Islamists, 181 members of the far left, and 95 from the far right. Yesterday, he told French news broadcaster BFM that a Russian citizen had been arrested on suspicion of plotting “large scale” acts of “destabilization” during the Games.

Parisians are still grumbling about road closures and bike lanes that abruptly end without warning, while human rights groups are denouncing “unacceptable risks to fundamental rights.” For the Games, this is nothing new. Complaints about dystopian security are almost an Olympics tradition; previous iterations have been characterized as Lockdown London, Fortress Tokyo, and the “arms race” in Rio. This time, it is the least-visible measures that have emerged as some of the most controversial: security in Paris has been turbocharged by a new type of AI, with the city enabling controversial algorithms to crawl CCTV footage of transport stations looking for threats. The system was first tested in Paris back in March at two Depeche Mode concerts.

For critics and supporters alike, algorithmic oversight of CCTV footage offers a glimpse of the security systems of the future, where there is simply too much surveillance footage for human operators to physically watch. “The software is an extension of the police,” says Noémie Levain, a member of the activist group La Quadrature du Net, which opposes AI surveillance. “It’s the eyes of the police multiplied.”

Near the entrance of the Porte de Pantin metro station, surveillance cameras are bolted to the ceiling, encased in an easily overlooked gray metal box. A small sign is pinned to the wall above a bin, informing anyone willing to stop and read that they are part of a “video surveillance analysis experiment.” RATP, the company that runs the Paris metro, “is likely” to use “automated analysis in real time” of the CCTV images “in which you can appear,” the sign explains to the oblivious passengers rushing past. The experiment, it says, runs until March 2025.

Porte de Pantin is on the edge of the park La Villette, home to the Olympics’ Park of Nations, where fans can eat or drink in pavilions dedicated to 15 different countries. The Metro stop is also one of 46 train and metro stations where the CCTV algorithms will be deployed during the Olympics, according to an announcement by the Prefecture du Paris, a unit of the interior ministry. City representatives did not reply to WIRED’s questions on whether there are plans to use AI surveillance outside the transport network. Under a March 2023 law, algorithms are allowed to search CCTV footage in real-time for eight “events,” including crowd surges, abnormally large groups of people, abandoned objects, weapons, or a person falling to the ground.

“What we’re doing is transforming CCTV cameras into a powerful monitoring tool,” says Matthias Houllier, cofounder of Wintics, one of four French companies that won contracts to have their algorithms deployed at the Olympics. “With thousands of cameras, it’s impossible for police officers [to react to every camera].”

New Zealand “deeply shocked” after Canada drone-spied on its Olympic practices—twice

Droned —

Two Canadians have already been sent home over the incident.

Aurich Lawson | Getty Images

On July 22, the New Zealand women’s football (soccer) team was training in Saint-Étienne, France, for its upcoming Olympics matchup against Canada when team officials noticed a drone hovering near the practice pitch. Suspecting skullduggery, the New Zealand squad called the local police, and gendarmes located and then detained the nearby drone operator. He turned out to be one Joseph Lombardi, an “unaccredited analyst with Canada Soccer”—and he was apparently spying on the New Zealand practice and relaying information to a Canadian assistant coach.

On July 23, the New Zealand Olympic Committee put out a statement saying it was “deeply shocked and disappointed by this incident, which occurred just three days before the sides are due to face each other in their opening game of Paris 2024.” It also complained to the official International Olympic Committee integrity unit.

Early today, July 24, the Canadian side issued its own statement saying that it “stands for fair-play and we are shocked and disappointed. We offer our heartfelt apologies to New Zealand Football, to all the players affected, and to the New Zealand Olympic Committee.”

Later in the day, a follow-up Canadian statement revealed that this was actually the second drone-spying incident; the New Zealand side had also been watched by drone at its July 19 practice.

Team Canada announced four responses to these incidents:

  • “Joseph Lombardi, an unaccredited analyst with Canada Soccer, is being removed from the Canadian Olympic Team and will be sent home immediately.
  • Jasmine Mander, an assistant coach to whom Mr. Lombardi report sent [sic], is being removed from the Canadian Olympic Team and will be sent home immediately.
  • [The Canadian Olympic Committee] has accepted the decision of Head Coach Bev Priestman to remove herself from coaching the match against New Zealand on July 25th.
  • Canada Soccer staff will undergo mandatory ethics training.”

Drones are now everywhere—swarming the skies over Ukraine’s battlefields, flying from Houthi-controlled Yemen to Tel Aviv, scouting political assassination attempt options. Disney is running an 800-drone light show in Florida. The roofer who recently showed up to look at my shingles brought a drone with him. My kid owns one.

So, from a technical perspective, stories like this little spying scandal are no surprise at all. But for the Olympics, already awash in high-tech cheating scandals such as years-long state-sponsored doping campaigns, drone spying is just one more depressing example of how humans excel at using our tools to ruin good things in creative new ways.

And it’s a good reminder that every crazy example in those terrible HR training videos your boss makes you watch every year is included for a reason. So if you see “drone ethics” creeping into your compliance program right after sections on “how to avoid being phished” and “don’t let anyone else follow you through the door after you swipe your keycard”… well, now you know why.
