I’ve come to the conclusion that a bizarre celestial alignment is behind the slew of questionable AI-related business decisions cropping up in my news feeds.
This isn’t the usual jackassery. You know, the AI-driven layoffs that aren’t AI-driven. The self-cannibalizing cycle of AI investments fueling an inevitable bubble burst. The rising tide of founders and founder wannabes who are trying to one-up each other in the race to see whose livelihood gets torched first by devil-may-care use of OpenClaw. (“Yes, it deleted my bank account and production database, and posted the contents of my password vault on Reddit. But can you believe I didn’t need a team of engineers to make this app??”)
Nope. These are head-scratchers from companies who failed to read the room.
Slop-py search headlines, part 2
A few weeks ago, I wrote about Google’s decision to make content more appealing to you by putting AI rewrites of headlines in your Discover feed instead of the headlines the publishers intended. As you might surmise, AI can’t be bothered to make sure it’s giving you an accurate representation of the content, so who knows what you’ll find when you click that oh-so-enticing link. Not to mention, readers won’t blame Google for the mismatch. They’ll blame the author/publisher.
Welp, Google’s doubling down on the clickbait. There’s an experiment in progress to rewrite headlines in Google Search results, not just the Discover feed. The results are predictably misleading thus far, though Google pinky-swears it’s still just an experiment.
The reputational clickbait issue is bad enough for author/publishers. But it also introduces an unwelcome variable into the metrics they’re using to make editorial decisions. Customer intent is a big deal for content strategy and UX in general. A really big deal. Without understanding intent (what the customer was looking for when they clicked the link or ran the search), you can’t fully understand whether your content is performing up to spec. So if intent is now influenced by a variable completely outside your control … well, that messes with your carefully crafted strategy.
I suspect there’s a bigger story at play here in the difference between Search Engine Optimization and Generative Engine Optimization, but we’ll have to see how the “experiment” pans out.
Just … don’t
Over at Nvidia, they announced DLSS 5, which is techie-speak for the latest iteration of their AI-enhanced video game graphics. It’s meant to provide a more realistic visual experience — better lighting, crisper details, etc. — without slowing your computer to a crawl to process all of it.
Sounds awesome until you notice DLSS 5 is functioning more like a beauty filter, in some cases redrawing things like character faces altogether. Gamers, who’ve been pretty vocal about their disdain for AI-generated anything, certainly did not ask for character glow-ups. Artists, who’ve been pretty vocal about their desire to not have AI water down their expertise, certainly did not ask for this either.
Nvidia’s been in hot water with the gaming crowd for a while now due to its hard skew toward enabling AI data centers instead of human customers. But this newest announcement struck a particularly sensitive nerve. Even if they fix the tech so it doesn’t f*ck over video game artists, it’ll still require a considerable amount of kit to run, the costs of which are skyrocketing.
It seemed like a good idea at the time
Grammarly, the popular writing assistant, also got itself into some really hot water recently. Last August, it released an Expert Review feature that could give you AI-generated advice in the persona of a well-known writer or academic. Think of it like being able to ask Mark Twain for feedback on your semantic wittiness.
Grammarly used the publicly available work of real people — living and dead — to train the feature. Such scraping already gets the side-eye in many circles. But Grammarly took it a step further: it never told the people it was impersonating, never asked their permission, and never offered compensation for the literal brain dump. Predictable outrage ensued when this was discovered, and Grammarly ended up pulling the feature.
This appears to be another case of AI data ingestion pushing the boundaries on fair use of source content. Grammarly got caught, but these sorts of avatars are easy to create with even relatively basic tools. Anyone whose work is publicly available, myself included, faces this risk. That includes your social media posts and personal brand, too.
My two takeaways on this episode are: 1/ As a society, we really need to make sure our laws are catching up with the growing problem of digital impersonation and IP theft; and 2/ It took six months for this to come to light. That’s a sign the real experts aren’t using Grammarly, which seems like … maybe the business problem the company should have been trying to solve.
Slop-py headlines, part 3
Microsoft has taken it on the chin this year for its AI enshittification of Windows 11, the shoving of Copilot into every digital nook and cranny, and the phenomenon that became known as Microslop. They seem to have taken the feedback to heart, and announced plans to de-Copilot some of the more egregious insertions in Windows as well as an overhaul of the internal Copilot org. These are positive signs of (finally) listening to customer feedback.
Then I found this headline on the Microsoft blog. It’s not related to the other announcements in the slightest but is definitely a d’oh moment.
“New tools and guidance: Announcing Zero Trust for AI”
If you’re not familiar with Zero Trust in its context as the name of a core cybersecurity model, you’re probably misreading that headline in a particularly awkward, though somehow apropos, way.
I can’t say with 100% certainty who/what wrote or reviewed it, but the human editors I know probably would have caught the double meaning.
Perhaps Google should have taken a stab at rewriting it.
All opinions here are my own. All text is my own, too, including the em dashes. I welcome constructive comments and discussion on LinkedIn and Bluesky.

