Rahul Saluja is a technology and business leader focused on AI-driven enterprise transformation and operating-model innovation.
For years, enterprise AI strategies have centered on a reassuring principle: keep humans in the loop. As algorithms became more capable, organizations relied on people to review recommendations, approve actions and absorb accountability.
That model is struggling. Across many industries, AI is no longer limited to dashboards or decision support. It is increasingly embedded directly into the flow of work itself—triggering actions, orchestrating processes and, in some cases, making operational decisions in real time.
This is not a technology upgrade. It is a fundamental shift in how enterprises operate.
Why Human-In-The-Loop No Longer Scales
Human oversight worked when AI outputs were occasional and decisions were discrete. Today’s enterprise environment looks very different. Operations now span thousands of workflows, continuous data streams and systems that must respond in seconds, not days.
In that context, human-in-the-loop models can introduce friction. Manual approvals can become bottlenecks. Review processes can slow execution. And when volume overwhelms capacity, “review everything” can quietly become “review nothing.”
What I’m seeing across large organizations is a growing recognition that decision support alone cannot keep up with operational complexity. AI must move closer to execution if enterprises expect meaningful impact.
The Rise Of AI-In-The-Flow
AI-in-the-flow is a term I use to describe a shift in how AI is embedded into enterprise operations. Rather than operating as a separate analytical layer that produces insights for humans to interpret, these systems are directly integrated into business processes and authorized to initiate actions within defined boundaries.
The concept builds on existing trends such as autonomous agents and hyperautomation, but the key distinction is that, instead of being external to the process, AI becomes part of the process itself.
Oversight is achieved not through constant human review, but through embedded governance mechanisms—including role-based permissions, policy constraints, monitoring, logging and automated exception handling—that allow AI to act while remaining auditable and reversible.
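To make that concrete, here is a minimal sketch of what a slice of those embedded guardrails could look like in code. The action names, policy sets and rollback step are hypothetical illustrations under assumed business rules, not a reference implementation of any particular platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy sets: which actions the agent may take on its own,
# which must be escalated to a human and which are always blocked.
AUTONOMOUS = {"update_record", "route_ticket"}
REQUIRES_APPROVAL = {"issue_refund"}
PROHIBITED = {"delete_account"}

@dataclass
class AuditEntry:
    timestamp: str
    agent: str
    action: str
    decision: str  # "executed", "escalated" or "blocked"
    payload: dict

@dataclass
class GovernedExecutor:
    """Wraps agent actions in policy checks, audit logging and rollback."""
    audit_log: list = field(default_factory=list)
    rollbacks: dict = field(default_factory=dict)  # action -> compensating step

    def act(self, agent: str, action: str, payload: dict) -> str:
        if action in PROHIBITED:
            decision = "blocked"
        elif action in REQUIRES_APPROVAL or action not in AUTONOMOUS:
            decision = "escalated"   # approval-required and unknown actions go to a human
        else:
            decision = "executed"    # within the agent's defined autonomous boundary
        # Every attempt is logged, whatever the outcome, so actions stay auditable.
        self.audit_log.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            agent=agent, action=action, decision=decision, payload=payload,
        ))
        return decision

    def undo(self, action: str, payload: dict) -> None:
        # Each autonomous action registers a compensating step, keeping it reversible.
        self.rollbacks[action](payload)

# Example: a ticket is routed autonomously, a refund is escalated, and the
# routing can be reversed through its compensating step.
executor = GovernedExecutor(rollbacks={"route_ticket": lambda p: print("re-queued:", p)})
print(executor.act("triage-agent", "route_ticket", {"ticket_id": "T-1042", "queue": "billing"}))
print(executor.act("triage-agent", "issue_refund", {"ticket_id": "T-1042", "amount": 120}))
executor.undo("route_ticket", {"ticket_id": "T-1042"})
```

The point of the pattern is that authorization, logging and reversibility sit in the execution path itself, so every action the agent takes is checked, recorded and undoable by construction rather than reviewed after the fact.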
Consider how, in healthcare, AI systems can automatically generate clinical documentation, update patient records and trigger care workflows. In customer operations, AI agents can classify tickets, route cases and initiate remediation steps across systems.
This model represents a shift from systems that suggest actions to systems that take them. Three forces are driving that shift:
1. Operational complexity continues to grow. Supply chains, digital customer channels and regulated environments create decision surfaces that humans are struggling to manage at scale.
2. AI itself is changing. Agentic systems are emerging that can sequence tasks across applications, reason over context and monitor outcomes without constant supervision.
3. Boards and CEOs are no longer impressed by AI pilots. They want to see cycle times reduced, costs lowered, quality improved and risk contained.
Together, these forces make AI-in-the-flow not just attractive, but inevitable.
AI: An Operating Model Decision, Not A Software Choice
One of the most common mistakes I see is treating AI as a product selection exercise. The real work is not choosing a model but redesigning how decisions are made and actions are executed when software can act independently.
AI-in-the-flow forces leaders to rethink process ownership, exception handling and measurement. Success is no longer about model accuracy alone. It is about cost per outcome, speed to resolution and resilience when things go wrong.
This shift mirrors earlier enterprise transitions—from manual processes to ERP, from on-premises systems to cloud platforms. Each required organizations to rethink roles, controls and accountability. AI-in-the-flow follows the same pattern, but at a faster pace and with higher stakes.
The Question Leaders Avoid
When AI moves into execution, the conversation changes. The critical question is no longer whether AI can act, but: Who is accountable when it does?
In regulated industries, especially, this question cannot be ignored. Leaders must be able to answer, quickly and clearly: Who authorized the AI to act? Under what conditions? Using what data? And how can decisions be audited or reversed?
This is where many AI programs stall. Governance is often discussed at a conceptual level, but rarely engineered into workflows themselves. A recent industry survey found that, while 70% of surveyed companies report having cross-functional AI oversight committees, only 48% have AI governance guardrails in progress. As a result, organizations hesitate to grant autonomy, even when the technology is ready.
To move forward, enterprises need to design trust as part of the operating model.
What Leaders Should Be Doing Now
For CEOs and enterprise leaders, the path forward is less about experimentation and more about design discipline. AI initiatives should be structured around intent, execution and exception management, not just automation. Models should be treated as replaceable components, not long-term assets. And trust must be auditable, not assumed.
Most importantly, AI strategy can no longer live solely within IT. It is an enterprise-wide operating model decision that touches operations, finance, risk and culture. Enterprises should also maintain clear accountability and control when embedding AI into the flow of work. To do so, the focus should be less on model selection and more on operational design:
1. Define decision boundaries. Clearly specify which actions AI is authorized to take autonomously, which require human approval and which are prohibited entirely. Most failures occur when autonomy is implicit rather than explicitly designed.
2. Embed governance into workflows. Oversight mechanisms—identity management, permissions, policy rules, audit logs and rollback procedures—should be built directly into processes, not layered on after deployment.
3. Design for exceptions, not perfection. AI-in-the-flow works best when systems are designed to handle edge cases and fail gracefully. Humans should intervene at exception points, not at every step.
4. Shift performance metrics. Success should be measured by operational outcomes—cycle time, cost per transaction, error rates and recovery speed—rather than model accuracy alone (see the sketch after this list).
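As a rough illustration of point 4, the short sketch below aggregates those outcome measures from hypothetical workflow records. The field names and sample figures are assumptions for demonstration, not benchmarks from any real deployment.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-run record of an automated workflow.
@dataclass
class WorkflowRun:
    cycle_time_s: float     # request received -> outcome delivered
    cost_usd: float         # compute plus any human-review cost attributed to the run
    had_error: bool         # outcome later corrected or reversed
    recovery_time_s: float  # 0 if no error, else time to detect and remediate

def outcome_metrics(runs: list) -> dict:
    """Aggregate operational outcomes rather than model accuracy."""
    errors = [r for r in runs if r.had_error]
    return {
        "avg_cycle_time_s": mean(r.cycle_time_s for r in runs),
        "cost_per_outcome_usd": sum(r.cost_usd for r in runs) / len(runs),
        "error_rate": len(errors) / len(runs),
        "avg_recovery_time_s": mean(r.recovery_time_s for r in errors) if errors else 0.0,
    }

# Example: three automated runs, one of which needed human remediation.
runs = [
    WorkflowRun(42.0, 0.08, False, 0.0),
    WorkflowRun(55.0, 0.11, True, 900.0),
    WorkflowRun(38.0, 0.07, False, 0.0),
]
print(outcome_metrics(runs))
```

Tracking these figures per workflow, rather than per model, keeps the measurement aligned with the operating-model view described above: what matters is the cost, speed and resilience of the outcome, not the score of the component that produced it.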
Leaders often go wrong by assuming that AI oversight means more human review. In reality, scalable oversight requires better system design, not more people. The goal is not to watch AI more closely, but to engineer environments where AI can act safely, transparently and reversibly by default.