Filed under: We meant to give you clickbait slop

Cartoon of a tabloid newspaper cover with the headline Shocker! across the top

When I was a journalist, one of my jobs was to write good headlines. Rule No. 1: Accurately reflect the story. No one likes bait and switch.

Who among us hasn’t clicked a headline like “Scientists discover the secret to eternal youth!” or “Celebrity shocker: The truth they don’t want you to know!” only to find out the scientists are shilling for face cream and the celebrity is embarrassed they flunked math in grade school? I’d bet you weren’t too thrilled about those minutes of your life you’ll never get back.

These shenanigans have been around a very long time, driven by $$$$$, and they’re a hallmark of tabloids, hucksters, and pseudo-news outlets. Anyone remember the old Weekly World News at the supermarket checkout? At least you could tell it was more entertainment than fact. It’s not so easy when the clickbait is coming from a purportedly trusted source. And it’s especially not easy now that AI is part and parcel of automating your news feed.

We’ve come to expect a degree of exaggeration in what aggregators pick up from various sources that are eager to get eyeballs on their ads. But, until the relatively recent Fake News vs. Actually Fake News controversies, impartial journalistic-style reporting had held a position of trust in society. You’d think if companies inadvertently injected a slop factory into it, they’d be scrambling to minimize the risk and reputational damage.

That’s what happened about a year ago when Apple came under fire because its AI was hallucinating news summaries on people’s iPhones and making media outlets look bad. Apple ended up disabling the feature, though it has since re-enabled it with a warning that — surprise! — AI makes mistakes.

Now it’s Google’s turn. But, upon being caught generating garbage headlines on its Discover search feed, Google has decided this is a feature, not a bug. Sensationalized headlines drive clicks and engagement, even when they misrepresent the target article. So, yes, they’ve adopted the Shenanigans model and are probably overriding the intent of the writer and publisher.

Initially I wondered whether Google simply can’t refine the model enough to do better and doesn’t want to admit it. But that seems like a risky game of chicken with readers’ trust and media companies’ reputations. Google employs a lot of smart people and surely they’ve learned lessons from embarrassing episodes like Glue Pizza.

Fueling the engine

There’s another possibility, though. Google’s role as a news aggregator has long rankled media outlets because readers scan headlines on Google instead of visiting the news site and browsing around with their eyeballs on the ads there, and maybe subscribing. The eyeball deflection has spread even further now that AI tools are changing audience behaviors. Organic search traffic to traditional sites is on a path to becoming secondary to AI search, and many sites are seeing steady declines in what was once a success indicator. (Tangent: You should still monitor site traffic, but if you’re basing success goals on it, it’s time to rethink that.)

Here’s where it gets chicken-and-egg. We’re all painfully aware LLMs were trained on a lot of copyrighted material and the entirety of the Internet, news sites and all. Those same LLMs are now pumping generated content back into the Internet and re-ingesting it. But without a steady supply of fresh human content to train on, models degrade as they feed on their own output — a phenomenon called model collapse — and then all the LLMs break. Which is very, very bad if you’ve changed your business model to rely on LLM-generated content. Like Google has.

Oddly enough, it’s in Google’s best interest right now to drive traffic back to the originating site, because without that monetization, the site and all its human content creators will go bye-bye. Plus, news is perpetually fresh. There’s always something happening to report on, and AI can’t (yet) watch and report on complex situations like humans can.

If it were me, I’d pick a strategy other than clickbait headlines to keep the engine running. Smacks of desperation. But I can see the dilemma. Licensing content doesn’t scale well, and neither does acquiring or funding a plethora of sites. Revenue-sharing might be an option, but it puts the burden on methods of monetization (subscriptions, ad sales, etc.) the AI companies haven’t yet made profitable and, ultimately, falls back on consumers.

With the collective amount of brainpower involved and the steep consequences of failure, someone will find a way to sort this eventually. And/or the technology will evolve and change these dynamics altogether. Whatever happens, my hope is that it doesn’t involve forming even more entangling corporate and political alliances that influence impartiality. It’s hard enough to spot the true headlines as it is.

In the meantime, did you notice how AI needs humans in this situation, and not the other way around? Score one for us.


All opinions here are my own. All text is my own, too, including the em dashes. I welcome constructive comments and discussion on LinkedIn and Bluesky.