If you’re talking AI, you better talk in outcomes.
Hopefully someone has been keeping a meticulous photographic record of the San Francisco billboards over the past few years, because in them, you can trace the AI revolution almost frame by frame.
At first, the message was pure promise and a vision of unparalleled intelligence deployed at scale. Every second company appeared to be announcing that it had crossed some invisible threshold, unlocking a future that would leave slower competitors behind. The language was grand and aspirational, aimed as much at capital markets as at customers.
And as individual signals began to get lost in the noise, the volume rose and the claims sharpened. For a while it felt like every product launch leaned harder into showing what the AI under its hood could do, how many steps it could reason through, how autonomously it could act.
Capability theater had reached full intensity and AI was presented not as a component or a means, but as the organizing principle of the enterprise itself.
When Salesforce introduced Agentforce almost exactly a year ago, it landed squarely within that arc as a confident, well-executed push that spoke directly to the ambition of agentic systems. At the time, it read as a category marker. Now, it seems almost like a high-water mark.
Because a year later, the tone looks noticeably different.
The billboards are still there, but the language has softened and matured. Look closely and you’ll see how the emphasis has shifted away from what systems can do and toward what organizations achieve with them. There is less talk of reasoning, planning, and thinking, and more talk of resolution times, throughput, reduced friction, and outcomes delivered.
Alongside that shift, a quieter mood has taken hold on the customer side. AI fatigue has entered the conversation, born out of sheer saturation. Intelligence has become abundant enough that it no longer commands attention on its own, and executives no longer need persuasion that AI belongs in their strategy.
They want confidence that it will not become another cognitive burden layered onto already complex systems.
That moment marks the beginning of a quiet inversion that will come to claim 2026.
The most effective enterprise leaders are already acting on a future in which AI fades from view. They continue to invest heavily, rebuild architectures, and rethink workflows around intelligent systems, but they deliberately remove AI from the foreground of their messaging. They stop explaining how it works and focus instead on what changes.
They talk about results, without saying a word about the AI.
The exhaustion nobody wants to name
For nearly two years, AI has dominated executive attention, and for good reason.
Earnings calls, investor decks, internal roadmaps, offsites, and product launches all revolved around intelligence, models, and transformation narratives. Early on, that intensity felt productive as organizations learned to recalibrate their assumptions about what software could do.
At the same time, each new AI initiative arrived with an implicit set of obligations. Teams were asked to learn new interfaces, adopt new vocabularies, and take responsibility for outputs whose behavior remained probabilistic rather than deterministic. Even successful pilots produced follow-on questions about governance and accountability that few organizations were structurally prepared to answer.
Dave Osborne, CEO of Conga, has seen this pattern repeat across companies at different stages of maturity. “I’ve witnessed my share of organizations where the executive team doesn’t actually know how to make it to scale,” he told me. “There’s excitement, yes, and now there’s also funding, but there’s rarely shared clarity on what the organization is trying to win at and how AI truly plays into this.”
That lack of clarity becomes especially costly in the context of AI.
Research on cognitive load and decision fatigue helps explain why. When systems add layers instead of removing them, adoption is bound to slow down regardless of technical quality or productivity gains.
Enterprise environments already struggle with tool sprawl, and intelligence that demands attention competes directly with the work it is meant to support.
Osborne has been deliberately cautious in how Conga positions AI for precisely this reason. “We treat AI as a digital assistant,” he said. “We deploy it to our customers to assist them, not to steal the show.” That subtle framing is intentional, with assistance implying support within an existing workflow, not a demand for reorientation around a new center of gravity.
That’s why Osborne has intentionally slowed down his company’s AI narrative rather than accelerating it. His emphasis has been on alignment first. “The executive team has to be on the same page about three things,” he said. “What we’re accomplishing, how we’re going to do it, and what success looks like.” Without that foundation, AI becomes another source of fragmentation rather than leverage.
AI did not underdeliver. It delivered quickly and broadly, and in doing so, it exposed a different constraint. Attention became the scarce resource and confidence became harder to earn. Capability alone no longer persuades buyers, who now want proof of systems that work quietly in the background while results take center stage.
In that environment, the leaders pulling ahead are not those declaring AI at the center of everything. They are the ones ensuring that AI delivers without asking to be admired.
The quiet inversion
Against that backdrop, a distinct leadership pattern has emerged, and Frank Vella, CEO of Constant Contact, articulates this philosophy with unusual clarity.
“I make AI my problem,” he told me. “The client shouldn’t have to worry about it.”
That statement carries more weight than it appears to at first glance. It implies a willingness to take responsibility for uncertainty, performance, and failure modes before customers ever encounter them. It also implies a deliberate refusal to outsource that responsibility by asking users to understand or manage intelligence themselves.
Vella’s perspective is shaped by decades spent selling to small businesses, a group he describes as living some of the most challenging professional lives imaginable. Time scarcity defines their decision-making. Tools are judged less by potential and more by whether they reduce stress in the present.
“SMBs don’t wake up thinking about technology,” Vella said. “They wake up thinking about getting through the day.”
In that environment, presenting AI as a feature or a capability introduces friction. Explaining how a system works consumes attention that customers cannot spare. Vella draws a sharp distinction between internal and external conversations as a result.
“If you hear me talking about AI,” he explained, “it’s probably because a board member is in the room. Customers hear me talk about how the product transforms their lives.”
He returns to the same principle repeatedly. Technology should never be presented as an object of admiration. It should be presented as work accomplished. “Don’t present technology,” Vella said. “Present the results and the work.”
The underlying logic is simple and demanding. Intelligence exposed to customers becomes customer labor. Intelligence embedded into systems becomes leverage.
From demos to disappearance
This pattern extends well beyond small business software.
Marlon Misra, CEO of Assembly, works with professional services firms struggling under fragmented back-office operations. His view runs counter to much of the prevailing AI narrative. As systems grow more capable, AI becomes less central to how value is communicated, not more.
“We’re getting to a place where a two-to-five person firm can operate like a twenty-five person firm,” Misra said. That leverage does not come from showcasing intelligence. It comes from assistants that complete workflows end to end inside environments already trusted with payments, contracts, and accountability.
Misra emphasizes that the most powerful assistants are not those that talk the most, but those that know enough context to act without supervision. “There’s a big opportunity,” he noted, “to bring assistants that are geared toward completing workflows in secure environments, where they already have the context they need.”
As that happens, the AI component recedes. Clients interact with a portal, a process, or a completed task rather than with intelligence directly. The assistant matters because it works.
Ismael Wrixen, CEO of ThriveCart, sees the same inversion unfolding in the creator economy, though the pressures look different on the surface. Early excitement clustered around tools that promised to augment creativity or automate marketing, and capability arrived faster than clarity.
What creators actually struggled with, Wrixen noticed, was not imagination, but execution. Too many systems asked them to stop and manage AI while the momentum it could have generated leaked out of the process.
“We’ve moved much more toward a workflow model,” Wrixen told me. “That’s where the leverage is.” In his view, AI mattered only insofar as it helped close loops. Sales pipelines that finished their own work. Monetization flows that removed manual handoffs. Systems that respected how creators already operated rather than asking them to reorganize around AI.
Wrixen has become increasingly explicit that specialization for creators, not broad knowledge, drives value. “Skills are the new everything,” he said, pointing to the way creators and operators alike win by executing well within narrow lanes rather than juggling sprawling toolkits. AI that amplifies that execution earns adoption. AI that asks users to become systems integrators does not.
The result has been a deliberate shift away from showcasing AI as a feature. ThriveCart's focus has moved toward outcomes that feel almost mundane, such as using AI to improve fraud prevention, approve more legitimate transactions, and deliver more personalized checkout experiences based on each customer's location and purchasing behavior. The intelligence remains embedded, but it no longer asks for attention.
That same logic becomes even more pronounced in high-stakes industrial environments.
Francesco Iorio, founder of Augmenta and one of the pioneers of generative design at Autodesk, works in an industry where visibility actively undermines trust. Construction and engineering teams operate under relentless deadline pressure and novelty carries real cost.
“People don’t have time to look into technological affordances,” Iorio said. “They are buried under work with backlogs stacking up, especially for mission-critical infrastructure like hospitals and data centers.” Any system that requires interpretation, supervision, or validation introduces hesitation. Even small doubts compound quickly when failure carries legal, financial, or safety consequences.
Iorio frames the challenge through three constraints that show up repeatedly. Time remains scarce. Trust must be earned through proof rather than explanation. Expertise is unevenly distributed and often tacit rather than codified. AI that demands engagement struggles on all three fronts.
That is why Augmenta removes AI from the conversation entirely. Engineers do not interact with models, prompts, or agents. They interact with requirements, constraints, and intent. "The conversation should not be about the drawing," Iorio explained. "It should be about how the design achieves the desired goals and the outcomes." The system reasons underneath, processing enormously complex 3D scenarios and handling trade clashes, regulations, and tradeoffs, learning from user feedback over time so that users never need to supervise its intelligence.
In Iorio’s words, adoption accelerates only once proof replaces persuasion. “Nothing works better than showing it works,” he said. Case studies, operational metrics, and most importantly, word of mouth, lower trust barriers far more effectively than explanations of how the system thinks.
Across both ThriveCart's commercial workflows and Augmenta's industrial design systems, the same pattern holds. Trust emerges when intelligence disappears behind consistent performance over time, especially under stress. That is why infrastructure earns trust: it does not demand attention. Electricity, logistics networks, and payment rails fade into the background until they fail. Their success lies in their absence from daily thought.
AI now sits on the edge of that transition.
In regulated, high-stakes, or time-constrained environments, visibility introduces friction, while invisibility shifts the burden back to the system provider. The technology acts and the work progresses, while real humans remain accountable for decisions without becoming system managers.
This shift has created a clear leadership tell: mature AI organizations communicate differently. They announce fewer features and publish more operational metrics. They track time saved, errors avoided, throughput increased, and backlog reduced. They speak fluently about outcomes while rarely naming models.
Less mature organizations remain noisy.
They rebrand frequently, emphasize tools over workflows, and treat intelligence as a headline rather than infrastructure. And soon enough, their messaging will invite scrutiny rather than confidence.
Over the next several years, enterprise AI will continue its migration from category to infrastructure. The most durable companies will stop being described as AI companies altogether. They will simply be organizations that operate with less friction and greater resilience.