Distinguished Analyst and Research Fellow, Info-Tech Research Group. Insights and recommendations for projects, portfolios and change.
I used to love that sense of accomplishment when my AI chatbot produced a slide deck in a few seconds. The dopamine hit was real: building a great slide deck by hand can cost about an hour per slide. Now, I check major deliverables off my list in a few moments.
Then I realized something was off. The AI content wasn't quite ready to publish, and what looked like productivity was a deferred obligation. The AI didn't do my work; it created more work for me. And that dopamine morphed into cortisol.
That’s when it hit me: I wasn’t saving time—I was creating an obligation for more.
Welcome to the world of cognitive debt: the unpaid obligation to engage with your AI output.
AI output feels like a nearly free asset, but it comes with a hidden liability. Someone has to cough up the brain power to consume, qualify, interpret, improve, classify, analyze, synthesize, scrutinize, store and/or share the document.
Until that work is done, our AI usage is all cost and no benefit, because cognitive debt is repayable only in human attention.
This cold realization is spreading quickly as AI adoption piles up unvetted assumptions, unvalidated conclusions, unexplored implications and unmeasured results. AI is producing cognitive debt faster than Agile ever produced technical debt.
How Cognitive Debt Grows Exponentially
Even when we take the time to review the AI’s work, something still seems off. Why? We think by doing. We usually don’t even understand the problem until we’re deeply frustrated by trying to solve it.
We get frustrated, reassess the problem, imagine a new solution and then feel the satisfaction of having solved it. Rinse, repeat. It's like exercise for the human soul. We don't just iterate on the answer; we continually rethink the question itself. When we finally reach a solution, it's usually answering a different question than the original.
Innovation, if nothing else, usually involves reframing the issue until it’s solving the right problem.
If cognitive debt comes from unexamined outputs, it worsens when we're not sure the question itself was valid. That uncertainty is like interest accruing on the cognitive debt.
A Problem That Grows At Scale
Using an AI output is harder than it first appears. Each artifact comes from a unique prompt, and the models each have their own set of evolving guardrails that can override their training. Because answers can shift over time, each new AI slide deck risks misrepresenting the cited white papers you haven’t read.
It’s not enough to consume the individual responses to your prompts; you have to reconcile them. If you don’t, the collection of AI artifacts produced across different models, guardrails and time frames will drive ambiguity instead of insight.
So, cognitive debt is created when we don't review and reconcile our AI outputs. Interest accrues when we answer the wrong questions and when we scale up the volume of output.
Why Consumption Is Not Enough
It feels at first like the solution is to review our AI outputs and then decide what to do with them, thereby avoiding cognitive debt. But diligent consumption doesn't assure results. The root of the problem isn't the unvetted AI output; it's the suitability of those consuming it.
This is where I got the cold splash of water because I can’t simply use my Copilot license to work in a new field. I’m an applications specialist, and AI isn’t going to help me design an office network or a sales compensation program.
You could give me the questions, but I won’t be able to qualify the answer or iterate with more questions. I might struggle to separate fact from hyperbole, understand competitive threats or avoid alienating clients.
The Zone Of Proximal Inquiry
Let's look at the recently popularized idea of Vygotsky's "Zone of Proximal Development" (ZPD), which describes the distance between what a learner can achieve independently and what they can do with guidance. These insights help people reach their potential by coaching them toward what's achievable.
Sure, we might try to inspire the team by telling them to reach for the moon, but that type of inspiration decays quickly. We push our people to growth and greatness by giving them work within their Zone of Proximal Development, a good coach to guide them and reinforcement when they succeed.
I believe that a corollary exists where the Zone of Proximal Inquiry (or ZPI, as I’m calling it) perfectly overlaps the Zone of Proximal Development. The ZPI is the gap between questions we’re equipped to ask and the answers we’re equipped to assess. Questions are outside of your ZPI when you lack the knowledge or context to judge the response.
Herein lies the ugly side of GenAI. We equip our people with toolsets that are only useful within their ZPI. Outside of their ZPI, they might be creating cognitive debt rather than value. I have a math degree and four decades of solving problems with computer science, but my AI-produced recommendations on cancer treatments are cognitive debt until an oncologist weighs in.
Conclusion
It’s not enough to simply acquire AI tools and set people to work. As leaders, we have to stay grounded in human skills and match the person to the work. AI doesn’t devalue our capability. In fact, it intensifies our need for discernment.
Cognitive debt explodes as we mistake momentum for meaning. To prevent it, engage your experts within their Zone of Proximal Inquiry, where imagination meets insight.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.