Org AI Literacy Step 1: Changes start at the top

[Image: A group of business leaders balances precariously on a platform atop a pyramid]

Framework-makers are gonna framework, and last week I introduced one for boosting organizational AI literacy. Hint: It’s more than providing shiny new tools.

ICYMI, the upshot is that the most meaningful productivity gains won’t happen until the org redesigns how work gets done in the AI era. And that’s … hard.

This post is aimed at leaders and managers who are caught in the swirl. Changes start at the top, and you’ll need an extra-large dose of fortitude for this one.

Here’s roughly how you’ll see AI literacy progress at an organizational level. It’s similar to the levels of personal literacy, but at a different altitude.

Stage 1: Shiny new tools

Recognizing that AI is a thing, new tools are rolled out and folks are encouraged to use them. The tool choice will be heavily dependent on org size, security risk, and budget. But the general tone is, “Hey, meet your new BFF Claude!”

Stage 2: Experiment

Your BFF Claude is cool, but no one really knows how to work with him, beyond as a glorified search engine and proofreader. So the call for experimentation goes out, and “What did you automate this week?” starts becoming an agenda item in team meetings.

Stage 3: Holy Shit

This is the critical make-or-break point. It’s the moment you realize literally EVERYTHING needs to change for this to work, and if it doesn’t — fast! — you will be left behind, living in a van down by the river. There’s a really high risk of bad, panic-led decisions here. There’s also a really high risk of analysis paralysis because this IS ALL TOO MUCH!

Stage 4: Automating for humans

Time to buckle down. You look at what’s working with the experimentation and start applying it at scale. But you’re still using it with workflows built for humans, and so you’ll start to notice the limitations that also need to be addressed. Janky data connections. Scattered knowledge bases. Misapplied models. And … significant talent gaps that training alone might not be able to cover.

Stage 5: Holy Shit, part 2

Yeah, you didn’t change enough the first time ’round. Here’s where the system redesign kicks in, at every level. Org design, decision chains, job roles, infrastructure, human/agent RACIs, and fundamentally how your business is run. Maybe even your entire product, since your customers are in the same boat you are.

Stage 6: Agents and humans, working in harmony

This is the ideal, but it’s pretty difficult to get to right now at any sort of scale. This is why you’re seeing a lot of one-person-armies touting how AI has revolutionized their productivity, and not really any of the bigger dogs. (I’m not counting the braggy “99.9% of our code is now produced by agents!” PR crap here, because I guarantee you there’s a fleet of humans who are still holding things together.)

The Holy Shit moment

It’s very easy to get stuck at Stage 2, especially when you’re not familiar with the technology yourself. And who has time to go learn a whole new discipline when there’s a business to run?? Vibe-coding with Claude is fun to play with, but you know in your heart of hearts it’s not giving you the right context for bigger-scale strategic decisions. You’re taking what the AI experts (internal or external) say at face value, which feels risky.

Cost and tool selection/upgrades will also start to become factors. Commercial AI tools like Codex and Nano Banana, and all the newly baked-in bots in your SaaS platforms, are getting more sophisticated all the time, but they can also be token-heavy (read: expensive). DIY is also super-expensive and should never, ever be 100% vibe-coded with today’s tools unless you’d like to roll out the welcome mat for every hacker and scammer in the universe.

When enough of these issues pile up and you realize you’re in uncharted problem-solving territory, it will trigger the Stage 3 Holy Shit moment. Your reptile brain will kick in because this is a straight-up legitimate threat to your survival as a business and as a knowledge worker in general. So let’s take a look at the risks of what your brain is telling you to do:

Fight

Hoo boy. This is a recipe for bad decisions. You’ll feel the urge to do something, anything to regain control, and every random AI-centered idea will suddenly become genius. The bolder, the better. Oh, look, here’s where the “let’s lay off the entire customer service department” decisions lie.

Flight

You’re gonna Nope your way out. Humans for humans, and AI be damned! You’ll be over here focusing on what only humans can do, staying out of the fracas while everyone else is bowing to their robot overlords.

Freeze

You’ve seen the corporate-hype show before. The shiny new objects often turn out to be backfiring mufflers, and this one is putting people’s livelihoods at risk. Where’s the data that proves this will work? Plus, what about all the political and socio-economic impacts? It’s best to be really careful with how this plays out. If you wait long enough, it might even go away.

Fawn

You’ll show everyone! You’ll embrace AI like there’s no tomorrow! You’ll be the AI-iest team in all of AI-dom! You’ll get the whole team Claude Code licenses and let them have at it! Full speed ahead!!

The big freeze

To be fair, the Flight and Fawn responses can be viable strategies under the right circumstances.

If authenticity and human connection are core to your brand, then Flight might make sense given the current backlash. But it also might end up only being a short-term boost. The world is changing, and it’s a safe bet your competitors are finding advantages with AI.

Fawn might work, too, since it’s like the Experimentation stage on steroids. In theory it’ll get you to Stage 4 faster, but you have to make sure the right guardrails are in place. An AI free-for-all can lead to things like OpenClaw deleting your production environment. Oopsies!

Fight is definitely a danger zone for self-explanatory reasons. But it’s also recoverable. Business pivots are happening right and left at the moment, which makes it easier to walk back a bad decision and eat a little crow.

The most dangerous response — the one you really need to watch for across your entire team — is Freeze, because it seems like you’re being the rational adult in the room. It starts with typical due diligence: gathering data, exploring options, comparing costs. But no amount of information is ever enough to convince you, and what you’re really doing is asking the ostrich to move over so you can stick your head in the sand. Is AI a backfiring muffler? In certain situations, yep. Is it putting livelihoods at risk? Also yep. Are we ignoring the grave impacts on the world at large? Most definitely yep.

Some tough love, because I want you to succeed: AI is not going away. You can’t wait it out. Yes, do your due diligence, but don’t hold off on making decisions because you need more data or want to see how it works out for others. That’s straight-up analysis paralysis, and you’re putting your team in a tough spot by holding them back as the technology advances.

Tips for leading through it

So now we know what not to do. But how does change truly start from the top?

Keep on being the AI champion you were (are) in Stages 1 and 2. That includes doing some personal development to learn the fundamentals of the technology. There are plenty of free resources available to help (look for titles like “AI for Business Leaders”). Make sure your teams have the tools they need, and encourage experimentation. Also encourage everyone to measure, measure, measure. You won’t know what to take forward if you don’t know what success looks like. Keep putting “what did you automate this week?” on the agenda, and share your own successes. Share your mistakes, too. This helps make it safe when your team’s experiments don’t work out.

When you get to Stage 3, pause and take a breath. There’s no one-size-fits-all way of doing this. Every business and every situation is different, and you’ve also got to be self-aware of your own tendencies. If you’re a Fighter, go consult with cooler heads before doing anything rash. If you tend to Freeze, set a timer on decisions, and ask a colleague or your boss to hold you accountable if every decision is coming out No. Listen to the AI power users on your team and solicit their advice.

Then, come to terms with the fact that this isn’t simply changing the team’s slogan to “We heart AI!” At some point, you will need to change pretty much every aspect of your operational model. Don’t panic and try to do this overnight (Fight!). And also don’t let it fall by the wayside because you’re more concerned about the priority du jour (Freeze!).

Start scaling out the successful experiments, and juggle other priorities around accordingly so there’s time to do it. Some of the pilots won’t actually scale, and that’s OK. Encourage your team to think through the lessons learned and what they can apply to other projects. Do some blue-sky thinking of what an agentic future looks like, and what the barriers are.

This is where Holy Shit, part 2 will kick in, because it will become obvious that many of the barriers aren’t patch-and-move-on. Agents work differently and much faster than humans, and your current systems were absolutely not built for this. You’ll need to invest a lot of time, money, blood, sweat, and tears to redesign and rebuild. Things will probably get worse before they get better, and you’ll also need to strike a balance that keeps your current human-driven outputs moving while you build the new future underneath.

This is pretty scary stuff, but you can take comfort in the fact that you’re not alone. Few folks have made it to Stage 6, especially when they’ve got to turn the Titanic to get there. Your single biggest asset in the crazy times ahead is your mindset. You set the tone for your team, and they need to see you embracing change and ambiguity and all the scary stuff, so that they feel empowered to do the same.

If you’re not feelin’ it, lean on the change agents who do.


All opinions here are my own. All text is my own, too, including the em dashes. I welcome constructive comments and discussion on LinkedIn and Bluesky.