Hot on the heels of the SaaSpocalypse, we have another AI panic a-risin’.
This one started when an AI founder’s lengthy essay about the future of work went viral. I won’t link to it; it’s easy to find. But the gist is he told every knowledge worker in every profession in every corner of the world they’re cooked because AI is now good enough to replace them. (I prefer the term eff’d rather than cooked, but this is a polite blog post.)
Thus far, the commentariat seems to be split into two camps: "Yep, he's right, and your days are numbered unless you immediately and fundamentally change your life," and "Nope, he's full of it (again, I prefer a different term), nothing will ever replace human ingenuity, and the models really aren't that good yet."
What’s striking is that the split isn’t a matter of real experts vs. people pretending to be experts to sell things. I’ve seen legitimate, well-articulated arguments on both sides, from people I consider authoritative and non-agenda-driven.
In my humble estimation, this means the truth is likely somewhere in between, and so I’ll now add my voice to the commentary pile.
Down the dystopian rabbit hole
Attempting to take my own advice, I was thinking last week about how the traditional linear content workflow — planning, writing, reviewing, publishing, maintaining — would change when agents are able to replace humans 100%.
What ensued was a disturbing daydream in which robots create content for other robots, who act on that content and maybe respond back to the first set of robots, who provide an answer while they’re also simultaneously building out and maintaining the meta knowledge base that feeds answers and context to the broader robot network. Because it’s not content anymore. It’s data.
(I swear, AI did not write that paragraph, even with the “it’s not [X]” construct.)
The humans are along for the ride in this scenario. Did they initiate the Doer Robot’s task? Maybe. But it could just as well be part of an automated workflow. Is there somewhere they can search the knowledge base themselves? Probably. But the way the data would need to be structured, they’d have to get it through interpretive AI lookup bots and not static files/pages or SQL queries. And in what situations would humans need to look up the info? Wouldn’t it be easier to task the Doer Robot, who could then consult the Lookup Robot if they got stuck?
I already knew about the content -> data part of this future state. The industry has been on that track for several years with the notion of structured content and componentized content. The disturbing part of the daydream was less about the workflow, and more about the system itself. The knowledge base is making the robots smarter, not the humans. Humans aren’t learning by doing. They’re not getting into the messiness and making the mistakes where true mastery comes from. They’re outsourcing their own intellectual growth for the sake of convenience. And humans are … OK with that? What does that bode for the rest of society?
I go to the dark place pretty quickly. In program management, they call it “anticipating risks.”
Not so fast…
Then I remembered a situation from a few weeks ago when I needed to map a giant dataset to the product categories on the company website. The categories change periodically, so I asked our handy internal bot to pull the latest version from the website’s taxonomy. It dutifully returned a list.
A list that was thoroughly wrong, and made up in places.
I asked it the chat equivalent of WTF, and it said it had also pulled categories from (1) its evaluation of badly outdated published content and (2) the Interwebs as a whole. I think it was trying to be helpful. ("You're a helpful assistant…")
I checked the creativity dial (the temperature setting; yep, low) and asked it again to focus on only the single taxonomy in the product menu on the company website. It dutifully returned a revised list.
Which also was wrong.
We went round like this a few times, where I tried to add context and refine the prompt and the robot was clearly still confused. After realizing I would have to validate the entire mapping myself no matter what, I walked away rather frustrated at the wasted time.
Here's what was tripping everything up: Over time, customized categorizations have cropped up in different sections of the site. (Long story as to why.) The main product menu is as official as it gets. But because the other categories seemed just as authoritative from the robot's perspective, it kept getting confused. Even when I gave it the context I knew.
I don’t blame the model. It works in math and probability, and our unstructured taxonomy salad was far from probable or predictable.
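In hindsight, the step I ended up doing by hand — checking the bot's list against the one official taxonomy — is exactly the kind of guardrail that's easy to script. A minimal sketch of the idea (the category names and both lists below are hypothetical placeholders, not our actual taxonomy):

```python
# Sketch: validate a bot-returned category list against the single
# official taxonomy, flagging made-up or missing entries.
# All category names here are hypothetical, for illustration only.

def validate_categories(bot_list, official_taxonomy):
    """Compare the bot's answer to the canonical category set."""
    official = {c.strip().lower() for c in official_taxonomy}
    returned = {c.strip().lower() for c in bot_list}
    return {
        "hallucinated": sorted(returned - official),  # bot invented these
        "missing": sorted(official - returned),       # bot dropped these
        "valid": sorted(returned & official),         # safe to map
    }

# Hypothetical official product-menu taxonomy vs. a bot's answer:
official_taxonomy = ["Widgets", "Gadgets", "Doohickeys"]
bot_list = ["Widgets", "Gadgets", "Thingamajigs"]  # one made-up entry

report = validate_categories(bot_list, official_taxonomy)
print(report["hallucinated"])  # ['thingamajigs']
```

It wouldn't have made the bot smarter, but it would have turned "validate the entire mapping myself" into a two-second diff instead of a frustrating afternoon.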
Splitting the difference
The nothing-to-see-here commentators are right that AI isn’t replacing knowledge workers tomorrow. Simply put: No matter how sophisticated the models get, the existing systems we’re inserting them into are built for humans, not robots. Humans of the 20th century, at that. Arguably, you could ask the robot to take what’s there and rebuild it, but it would still need a fair amount of human oversight to make sure whatever it built was also ultimately meeting the humans’ needs.
Also standing in the way are pesky things like cybersecurity, hardware infrastructure, safety and privacy regulation, chip shortages, an insufficient power grid, and the inevitable economic bubble burst. Those are all solvable problems, but not in the scary-short time frame the doom-and-gloomers are predicting.
They’re right, however, about the need to upskill yourself pronto. The trajectory of the technology is clear: If you’re a knowledge worker, the way you work will change fundamentally at some point. You’ll need to know how to tell the robots what to do, how to set up the systems that help them do the work, and how to make sure they stay on track.
Don’t believe me? The U.S. government, historically not known for being on the cutting edge, recently released a basic framework for AI literacy in the workplace. It’s more than just knowing how to ask ChatGPT a question.
The urgency comes from how quickly the technology is advancing: As soon as you think you've learned how to work with it, it changes in ways that blow your mind. The speed was intense to begin with but has reached absurd levels, to the point where AI experts and researchers themselves are feeling overwhelmed.
Truth be told, no one knows yet how this will sort itself out, or when. So … don’t panic-buy a kajillion AI subscriptions to give yourself a crash course. Seriously, don’t. It can get expensive and risky, depending on what you’re giving the tool access to. Start small. Find some free online courses or YouTube tutorials first to get yourself oriented.
And … don’t ignore that your job will change. The baseline of AI literacy is rising all the time. New jobs will emerge, too, and it’s prudent to start thinking about how your skills and expertise could plug into what’s next. Your favorite chatbot can be your thought partner here. (And you’re careful about not sharing identifying information with commercial models, right??)
Keep calm and carry on, friends. We’ve got this.
All opinions here are my own. All text is my own, too, including the em dashes. I welcome constructive comments and discussion on LinkedIn and Bluesky.

