Why your org needs to be as AI-literate as you are

[Image: a chalk tablet with a checklist that says: People, Process, Tools]

AI literacy is coming soon to your role guidelines, if it’s not there already.

Software engineers have had this expectation for a while now, and it’s a fair one for the role. For the rest of us, literacy no longer means just knowing how to use a chatbot. You also need to be able to automate tasks, define skills for agents, or, if you’re feeling ambitious, build your own apps with tools like Claude Code or Codex.

Oh, and there isn’t usually formal training. You’ll have to experiment your way up the literacy ladder because … isn’t experimentation fun and cutting-edge?

A lot of folks do enjoy that part, and it can be a bit awe-worthy to sit down and speak (or type) something into existence that you thought wasn’t possible without a team of engineers. But there’s also a sizable contingent who don’t really feel an urge to go out and vibe-train for the hell of it.

For the sake of argument, let’s say we get everyone to the task-automation stage or higher. Shouldn’t efficiency and productivity gains follow?

Turns out, it’s kinda underwhelming so far. Studies do show incremental boosts, but they can be partly offset by spending time dealing with the volume of slop that’s generated as AI use increases.

In order for this to work the way our execs want it to, you need organizational AI literacy in addition to personal AI literacy. Otherwise, you end up with — yes — a bunch of incremental personal gains that don’t add up to big leaps forward. Yet it’s the part of the equation many teams are missing.

What are we really automating here?

Organizational literacy has to be more than putting new role guidelines in place, purchasing a bunch of Copilot subscriptions, and blocking off everyone’s calendars for Experimentation Fridays. These tactics focus on learning how to use the tools, but they miss the part where you also have to apply those tools logically to your work. You can be a Copilot Jedi and still not get very far without the right systems to support it.

Here’s an example from my own work. Last week, I sat down to build a relatively simple automation that puts a non-blocking meeting invite on people’s calendars to remind them of a due date for business updates. In tandem, I wanted to schedule and post extra reminders in Slack on the due date itself, and a week beforehand. A summary of the chat convo:

Me: Hey, here’s this automation I want to build. <uploads process doc>

AI tool: That’s awesome! Let me do that. <thinks and does AI stuff> Here it is, take a look. Also, I can’t preschedule, so you’ll trigger the calendar invite and then need to re-trigger it each time you want the Slack messages.

Me: Wouldn’t that spam-resend the calendar invite, too? And if I’m triggering manually, wouldn’t it be just as fast to copy/paste the reminder myself in Slack?

AI tool: Good points.

There are a few things going on here. First, I probably chose the wrong tool and/or approach. This particular tool does indeed work better with simple, linear tasks that happen once in sequence. So I actually needed three automations: one for the calendar invite, one to post Slack reminders, and a scheduler for the Slacks.

That’s overkill for the Slacks, though, and bordering on overkill for the calendar invite. The time savings I was aiming for was to enter the settings once and then fire and forget. This week, I plan to explore a couple of alternatives that might make it work.
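For what it’s worth, the date logic underneath all this is trivial, which is part of why the overhead stings. Here’s a minimal Python sketch of the reminder-scheduling piece; the function names are my own illustration, not any tool’s API, and actually posting to Slack would need a webhook or API client that I’ve left out:

```python
from datetime import date, timedelta

def reminder_dates(due: date) -> list[date]:
    """Dates on which Slack reminders should fire:
    one week before the due date, and the due date itself."""
    return [due - timedelta(weeks=1), due]

def reminder_message(due: date, today: date) -> str:
    """Compose the reminder text for a given day."""
    if today == due:
        return f"Reminder: business updates are due today ({due.isoformat()})."
    return f"Heads up: business updates are due {due.isoformat()} (one week out)."

# Example: a due date of June 30 yields reminders on the 23rd and 30th.
due = date(2025, 6, 30)
for d in reminder_dates(due):
    print(d.isoformat(), "->", reminder_message(due, d))
```

A dozen lines of logic, wrapped in three automations and a manual trigger. That gap between the simplicity of the task and the ceremony of the tooling is the real point here.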

More importantly, though, even if I can get it to work, it literally only saves me minutes each month. And that’s because of the very human-oriented systems that require this reminder in the first place. To get true efficiency, we need to bypass the humans (sorry!) and set up agentic systems that automatically scan their project files and task trackers, gather the necessary updates, and then submit them to a queue for compiling and human-in-the-looping. Even better, perhaps there’s a daily dashboard summary that keeps everyone up to speed and auto-flags potential issues.

That’s … not possible at the moment. Not because of the technology, but because humans have scattered the project files and task trackers across various tools and laptops and don’t consistently maintain them. Previous attempts to standardize all of this pre-AI fell flat because, well, no one likes to be told how to do their work.

These are organizational problems, and AI magnifies the challenge, as it is wont to do.

Learning the tools is the easy part

Organizational AI literacy means having systems that are designed from the ground up for agent-human collaboration. Which is easier said than done. (Here’s a previous post that describes one approach.) In part, this is because some of the technology pieces are still missing: connected infrastructure, clean and structured knowledge bases, improved LLM reliability, etc.

But that gives us time to work on the people and process changes that are arguably more important. This won’t happen overnight, and two things are key to keep in mind throughout: 1/ Everyone wants to feel like they’re making progress toward something meaningful, and 2/ No one wants to feel like they’re training AI to do their job.

I wish I could give you an exact blueprint to follow, but every org is different, and so the changes that need to be made are different, too. However … because frameworks are cool, here’s a 4-C’s way of thinking about it.

Changes start at the top

Leadership needs to accept, model, and champion the fact that the entire paradigm of work is shifting. I don’t mean simply sharing “here’s how I experimented with AI today,” though that’s important, too. Leaders also need to accept that a lot of their comfort zones are no longer a thing. Annual goals, days-long planning workshops, and extended waterfall timelines aren’t really apropos anymore. KPIs can change or become irrelevant on a dime. Inability to tolerate risk will wreck progress, and siloed fiefdoms will eventually silo themselves into obsolescence.

Clarity in the destination, not the path

In the current environment, it’s impossible to plan with any certainty more than a few weeks or months out. Carefully detailed roadmaps and implementation steps will derail, even if you’re using AI to speed up iterations. This will devolve into constant whiplash without a single, simple, clear vision that keeps everyone focused. How you get there will probably change a thousand times. But getting there is the important part.

Culture of learning

If you want folks to experiment, you can’t punish them when the experiment doesn’t work. You still do want to drive accountability, but framed as lessons learned and subsequent iteration toward success. Grass-roots successes, not mandates, will shape the way your workflows need to be redesigned and will help drive adoption, too. Get in the habit of measuring everything so you can see exactly what’s having the most impact.

Comfort with chaos

We’re in an era of continuous change that, frankly, goes against how our brains are wired. The trick is to work through it, not fight it. Accept that priorities will pivot with no warning. When you have to undo or abandon work, think of it as a learning opportunity and not a waste of time. Focus on making small, iterative decisions rather than one giant one that’s hard to walk back.

There’s a lot more to unpack here, and I’ll do that in future posts. In the meantime, here’s a handy cheat sheet for reference.


All opinions here are my own. All text is my own, too, including the em dashes. I welcome constructive comments and discussion on LinkedIn and Bluesky.