What Gemini and Super Bowl Ads Got Wrong About AI


Are you feeling like your football-scouting operation has been taking a beating lately?

Do you sometimes wonder why your spreadsheets can’t be generated fast enough? Or perhaps your software coding is going slower than you always thought it would?

Most of all, does your kid struggle with not being able to imagine the decor of his bedroom in your new home?

If any of these problems resonate — and really, what could be more universal? — has Silicon Valley got an AI product for you.

You may have noticed Sunday night that these four instances were prime AI use cases per a series of Super Bowl ads from the industry’s biggest players (Microsoft Copilot, unicorn startup GenSpark, OpenAI’s Codex and Google Gemini, respectively), either solving challenges that don’t exist day-to-day for most Americans or, in the last case, solving a challenge that may actually be a good thing. Any parenting expert will tell you that temporary uncertainty or disappointment can healthily prepare a child for adulthood. But why risk that brief bout of questioning when AI can Magic Erase it from their lives?

Of course, we’re acting like the removal of a childhood-development moment is a byproduct of AI adoption and not the whole point. While these ads and the dozen or so more that aired during the game — from both established players like Meta and Anthropic and upstarts like Ramp AI and Artlist — have different visions for how machine thinking will help us, they are nearly all united by a common ideology. Namely: Everyday life is unruly, unknown, hard. Wouldn’t it be nice if a computer happened along to make it easy and guaranteed?

If you arrived unformed into the techno-capitalist parade that is the current iteration of the Super Bowl telecast, you would come to at least one very specific conclusion: technology will soon offload so much of our current toil. “It’ll be whatever we want it to be,” the Gemini mother says to her son about their house — AI is apparently manna now — as onscreen a message flashes “A new kind of help from Google.” A more encapsulating set of credos I cannot imagine. Whatever we want! No limitations or consequences! And new help! Who doesn’t want that? Well, compared to the current kind of Googling — the kind that requires critical thinking — it certainly is new. Better? Less clear.

Tech revolutions at heart change the mechanisms by which humans live. The automobile lessened our reliance on the horse. This new revolution will lessen our need for a brain. Whether we want what this digital Che will wreak is another matter. Yes, on the surface, this ad spate is about AI products, which is about massive capitalizations, and Wall Street valuations, and many other -ations you hear on CNBC. But such talk of companies and products abstracts, purposefully, what’s really being sold.

The abstracting reached its pinnacle (nadir) with an insidious Alexa ad featuring Chris Hemsworth and wife Elsa Pataky. He insisted the smart speaker could go sentient in various wildly extravagant ways and kill him — a classic straw man that paints anyone worried about AI safety as some kind of tinfoil alarmist while cleverly ignoring the actual dangers, like Alexa’s new policy of nonconsensual constant uploading. (See also Amazon’s Super Bowl Ring ad, in which the device saves all the lost dogs while, oh yes, turning on some kind of Big Brother camera for mass surveillance.)

“I would never. I’m just here to help,” Alexa tells Hemsworth, which confoundingly seeks to have it both ways: “An AI can’t have murderous feelings; that’s silly. But it can have feelings of help and love!” (An Alexa ad from earlier this football season literally has Pete Davidson vulnerably telling a computer screen, “I like you, too.”)

To think about any of these tech company ads for more than five seconds is to realize how little they stand up to scrutiny. Which is exactly how the brands generally want it: more feeling, less thinking.

Of course matters aren’t that simple; we’re just not that naive anymore. By now too many of us are wary of what’s being sold — sensitized by two years of deepfakes and soft slop, chastened by two decades of social media and rage-farming. And indeed, in between the shiny sales pitches came little glimpses of self-own. Anthropic went after OpenAI for how the latter’s ad-based chatbot could be compromised, without appearing to realize that confiding sensitive information to any chatbot could be dangerous even when it isn’t trying to sell you something. I’m not sure relying on an LLM to tell you how to navigate your relationship with your mother is so wise even if it refrains from pushing a cougar dating site.

And Artlist.io, a little-known video-generation platform, pitched its tools to the New York and Los Angeles markets with an ad that, the company told us, took less than a week to create thanks to those very tools — or rather, a polar bear reading a voiceover script told us that while, onscreen, dogs roasted marshmallows, horses ate from craft services and a person in a banana costume surveyed a rocker-destroyed stage, all in an attempt to show how our content landscape will be transformed.

“Artlist’s Big Game debut proves that high-end video production is no longer gated by time, budget, or access,” an accompanying press release touted. No doubt such efficiency boasts land with Madison Avenue bean counters, but the rest of us may find ourselves busier trying to recover from the retinal burning brought on by these proto-assaults of slop. Some 25 million people viewed the spot, by the way, but only 15 commented on it, and most were critical. These ads may not have fooled as many of us as their makers seemed to think.

Some brands, at least, respected human intelligence. One standout came from Volkswagen — which went back to its ’90s “Drivers Wanted” slogan and the deeply human vibes of its seminal Nick Drake spot from that era (directed by a pre-Little Miss Sunshine Jonathan Dayton and Valerie Faris!) — with a beautifully on-point third-quarter ad. Soundtracked by House of Pain’s ’90s staple “Jump Around,” the spot showed a young professional guy leaving his laptop life behind to play with his dog; a Gen Z woman dancing in the rain and getting her friends to exit the car to join her; a driver making a U-turn to follow an ice cream truck; and a group in a schoolyard cheering on a besuited corporate worker to kick back their soccer ball over the fence, Messi penalty-kick style, which he eventually does, finding sweet release.
Hardly a smartphone appears in the spot, let alone any AI, and the whole vibe blissfully shrugs its shoulders at the “let a computer tell you what to do” low-key enslavement of so many of the other spots that aired Sunday night (and at the slop; it was shot on film).

“Being so programmed puts us in handcuffs, and we wanted to push back on that,” Rachel Zaluzec, Volkswagen’s chief marketing officer, told me in an interview Friday. The company didn’t even set out to make a Super Bowl ad, she said; it simply wanted to react to all the automation out there before realizing that it had a newly relevant “Drivers Wanted” campaign capturing the essence of those refreshing pre-tech days.

“We see this as a recruiting campaign for an invitation to participate,” she said. “All this tech has its place of course but we should be asking, ‘are we controlling it or is it controlling us?’” Even driving, she added, could be a human act compared to the shut-offery of our Uberized and Waymoified world. “There’s nothing like putting your hands on the wheel and deciding where you want to go,” she said.

One of the slop-makers was trying to temper its message, too. Beverage company Sazerac, which during the game aired the first-ever AI-generated national Super Bowl ad, said that its revival of the Fembot and Brobot characters to sell Svedka vodka was meant to show the folly of turning over so much power to the algorithm. “The entire idea of the campaign is that the robots have returned to remind the humans to be more human,” Sazerac chief marketing officer Sara Saunders told THR before the game.

The best AI ad didn’t mention the technology at all: the Ben Affleck-starring Good Will Dunkin’ spoof of the ’90s film used de-aging to some nifty effect, even if it dipped into the uncanny valley a few times. Of course this was AI as a tool for human vision, not as a replacement for human creativity, skill and decision-making.

It has become fashionable to rag on AI, and for some brands and pundits this is, as you might warily suspect, a pose — a cheap monetization of hipster skepticism more than a carefully considered ideology. But a kernel of meaning sits at the heart of even the most blithe pushback: We should ask what all this technology that has come to automate and convenientize our lives will take away with it.

AI is coming and technology can’t be stopped; about that Bob Iger, who recently made the point to David Muir, is right. But that doesn’t tell the whole story. Sure, as a broad concept the idea of AI is moving forward; there are too many stakeholders for it not to. And too many unassailably positive use cases for it not to. If you as a medical researcher could know a drug’s effects on the genome better, why wouldn’t you? If you’re a climate activist and can push for models that will limit waste, why wouldn’t you do that too? But AI as a consumer deployment is far from assured; AI as something we’ll use to replace or at least seriously augment teachers and writers and designers and therapists is not necessarily an inevitability. We have no idea if people will want to use Sora en masse to animate characters on Disney+, as Iger is betting they will, or whether AI will be deemed safe enough to become all of our assistants, or trustworthy enough to give advice on how to talk to our mothers.

Most important of all — and this was decidedly hidden by the brands on NBC Sunday night — we can do something about the onslaught. If we reject slop, AI video-generation tools get marginalized; if we evince skepticism about using a chatbot for mental health, Claude and ChatGPT see their reach curbed.

It’s telling that even as Google was making the case for how Gemini can think a house into existence and dispel the worry of its new inhabitants, it used a very real Randy Newman singing “Feels Like Home” to make the point, and not, say, an AI trained on “the greatest living songwriters.” A more ironic undercutting of a tech company’s message you will not find: “Machines can address all of our emotional needs, and to convince you of that we’ll draw on the most human of artists.” But Big Tech leaders don’t really do irony, and they’re not worried about undercutting. Just keep pushing products that will make life more efficient, they believe, and hope that, faced with a world made so exhausting and overwhelming (by tech), consumers will grasp at any product that brings them momentary relief — the executives unaware, it seems, that we can simply be recruited not to participate.



