
How To Maintain Quality And Trust During AI Adoption



Priya Sawant leads ASAPP’s engineering team, delivering enterprise AI at scale.

Despite productivity gains, many organizations are discovering an uncomfortable truth about AI: As velocity increases, value can quietly erode. Code ships faster, documents multiply and, at first glance, the AI-generated work appears polished. However, on closer inspection, it lacks substance or is flat-out wrong.

A recent BetterUp article about the hidden costs of AI-generated “workslop” suggests that at least 40% of U.S. employees have received workslop from their colleagues. Each incident requires a minimum of two hours of rework, translating into hundreds of dollars of wasted effort per employee each month. These costs compound quickly for an organization. But these direct costs are just the start of the problem. Workslop frustrates employees, erodes trust in teams and creates a dangerous illusion of AI productivity. Velocity metrics trend upward, while output quality declines.
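To make those compounding costs concrete, here is a rough back-of-the-envelope model in Python. The two-hour rework figure and the 40% incidence come from the article cited above; the incident rate, loaded hourly cost and headcount are illustrative assumptions, not data from the study.

```python
# Illustrative workslop cost model. The rework hours and affected share
# are cited above; the other figures are hypothetical assumptions.
incidents_per_month = 3       # assumed workslop incidents per affected employee
rework_hours = 2              # minimum rework per incident (cited above)
hourly_cost = 60              # assumed fully loaded cost per hour, in USD
affected_share = 0.40         # share of employees receiving workslop (cited above)

cost_per_affected_employee = incidents_per_month * rework_hours * hourly_cost
print(f"~${cost_per_affected_employee}/month per affected employee")   # ~$360

org_size = 500                # assumed headcount
monthly_org_cost = org_size * affected_share * cost_per_affected_employee
print(f"~${monthly_org_cost:,.0f}/month organization-wide")            # ~$72,000
```

Even with conservative inputs, the waste scales linearly with headcount, which is why the direct costs alone deserve attention.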

AI slop is the predictable result of several systemic factors:

• Psychological Safety Gap: When engineers (especially junior employees) feel immense pressure to show productivity, they are more likely to use AI as a shield rather than as a collaborator. The result is often output that lacks context and substance.

• Mandate Without Framework: Organizational leaders frequently mandate AI adoption through blanket directives rather than thoughtful frameworks with clear guardrails. Misaligned incentives exacerbate the problem. As engineering leaders, if we mandate tools without defining quality standards or appropriate use cases, we are responsible for the resulting degradation of output.

• Lack Of Agency: Engineers who feel controlled by AI tools, rather than in charge of them, are more likely to use those tools indiscriminately, producing lower-quality work as a result.

• Training Gap: AI does not replace the need for champions and mentors. Without formal training and structured mentorship, junior engineers default to indiscriminate AI use that produces workslop instead of strengthening their engineering judgment.

To maximize AI’s value while avoiding the pitfalls of workslop, organizations must build trust, provide a guided framework for AI adoption, give engineers real agency in how tools are used and invest deeply in strong mentorship, especially for junior talent.

The trust equation developed by David Maister, Charles Green and Robert Galford in The Trusted Advisor defines trustworthiness through four variables: trust = (credibility + reliability + intimacy) / self-orientation.
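Rendered as a formula, with the variables as the authors define them:

$$
\text{Trust} = \frac{\text{Credibility} + \text{Reliability} + \text{Intimacy}}{\text{Self-orientation}}
$$

Because self-orientation is the lone term in the denominator, even a modest rise in self-serving behavior drags trust down faster than gains in the numerator can restore it.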

Psychological safety is paramount. Without it, trust cannot grow. This matters because, fundamentally, workslop is a trust problem. When engineers feel insecure, they ship AI-generated output to protect their image rather than admit gaps in understanding. That damages credibility. Reliability erodes as teams inflate velocity metrics rather than being honest about blockers. Engineers stop sharing rough work and collaborating openly, destroying intimacy. Personal productivity is optimized over team outcomes, inflating self-orientation.

Leaders must be honest with themselves and with the organization. They must ask an uncomfortable but essential question: Are we asking engineers to use AI to augment their work, or to replace them? The answer determines whether you build trust or erode it.

The productivity gains from AI are real. Velocity improvements, reduced cognitive load, faster onboarding and easier prototyping represent substantial increases in engineering capacity. We cannot afford to ignore those gains in a competitive market. However, unguided AI adoption creates serious risks that can compound over time. These risks show up in the form of unmaintainable code, degraded system understanding, accelerating technical debt, shallow investigations and generic documentation.

To tackle this head-on, we need a guided framework that advises what to accelerate and what to gate (a minimal policy sketch follows the list):

• AI-Accelerated, For Well-Defined, Low-Risk Tasks: Boilerplate and scaffolding, test generation, initial documentation drafts, data transformations, prototyping and summarization. These are areas where AI’s speed advantage is highest, and the cost of imperfection is low.

Guardrail: Peer review focused on clarity, maintainability and context.

• AI-Augmented, For Complex But Well-Understood Work: Business logic, integration code, performance optimization and data analysis, such as incident post-mortems and system metrics. Using an LLM as a judge to augment human evaluation of an agentic system’s outputs is an important example (a sketch follows this list). Here, AI can accelerate execution, but human judgment remains essential.

Guardrail: Senior engineer review. The reviewer must fully understand the implementation and confirm that work produced through AI augmentation is context-specific and actionable.

• Human-Led, For High-Stakes Decisions: System architecture, security-critical code, core business differentiators, complex debugging, production incidents, architecture decision records, and strategic investigations like build-versus-buy or platform migrations. AI can inform these decisions, but not make them.

Guardrail: Architecture review board approval. Final decisions must be human-designed with explicit reasoning, including why alternatives were rejected.
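One way to make these tiers operational is to encode them as a lightweight review policy, for example as a CI check or a PR-labeling bot. The sketch below is illustrative only: the tier names and guardrails mirror the list above, but the task categories and the mapping itself are hypothetical assumptions, not a prescribed implementation.

```python
from enum import Enum

class Tier(Enum):
    """The three tiers from the framework above, mapped to their guardrails."""
    AI_ACCELERATED = "peer review: clarity, maintainability, context"
    AI_AUGMENTED = "senior engineer review: context-specific and actionable"
    HUMAN_LED = "architecture review board: explicit reasoning required"

# Hypothetical task-to-tier mapping; adapt the categories to your org.
TASK_TIERS = {
    "boilerplate": Tier.AI_ACCELERATED,
    "test_generation": Tier.AI_ACCELERATED,
    "doc_draft": Tier.AI_ACCELERATED,
    "business_logic": Tier.AI_AUGMENTED,
    "integration_code": Tier.AI_AUGMENTED,
    "incident_postmortem": Tier.AI_AUGMENTED,
    "system_architecture": Tier.HUMAN_LED,
    "security_critical": Tier.HUMAN_LED,
    "platform_migration": Tier.HUMAN_LED,
}

def required_guardrail(task_category: str) -> str:
    """Return the review guardrail for a task, defaulting to the
    strictest tier when the category is unknown."""
    return TASK_TIERS.get(task_category, Tier.HUMAN_LED).value

print(required_guardrail("test_generation"))    # peer review: ...
print(required_guardrail("security_critical"))  # architecture review board: ...
```

Defaulting unknown work to the human-led tier is deliberate: over-reviewing boilerplate is a cheaper failure mode than under-reviewing architecture.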
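For the LLM-as-a-judge pattern mentioned in the AI-Augmented tier, a minimal sketch under stated assumptions follows. The call_llm function is a placeholder for whatever model API you use, and the scoring rubric is illustrative; the point is that the judge pre-screens agent output and routes low-scoring work to a human reviewer rather than replacing that reviewer.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for your model provider's completion API; the name
    and signature are hypothetical, not a specific vendor's SDK."""
    raise NotImplementedError

JUDGE_PROMPT = """You are reviewing the output of an automated agent.
Score it from 1 (poor) to 5 (excellent) on: factual_grounding,
context_specificity, actionability. Return JSON only:
{{"scores": {{"factual_grounding": 0, "context_specificity": 0,
"actionability": 0}}, "rationale": "..."}}

Agent output:
{output}
"""

def judge_agent_output(output: str, escalation_threshold: int = 3) -> dict:
    """Pre-screen agent output with an LLM judge; anything scoring at
    or below the threshold is flagged for mandatory human review."""
    verdict = json.loads(call_llm(JUDGE_PROMPT.format(output=output)))
    verdict["needs_human_review"] = any(
        score <= escalation_threshold for score in verdict["scores"].values()
    )
    return verdict
```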

Simply establishing the above framework is not enough; engineers need to feel ownership of how AI affects their work. To foster that ownership, involve them in the adoption of tools within their domains. For example, QA engineers should evaluate and own debugging tools. Platform engineers should enable the adoption of code generation and PR review tools within the SDLC. And SREs should drive the adoption of AI-powered observability and incident management tools. When engineers choose the tools, they will use them with intention, not merely for compliance.

As an organization becomes AI-native, career ladders must evolve alongside it. The value of senior engineers shifts from writing code to system architecture, peer review and quality oversight.

Hiring, mentorship and coaching need to adapt as well. Organizations should look for candidates who demonstrate AI-augmented productivity in addition to sound technical judgment. Senior engineers need to mentor junior engineers along the traditional learning paths (e.g., good production patterns, coding best practices, unit/integration testing).

Formalize AI adoption through champions, as you would in platform engineering. These champions model good AI use under the framework, call out and prevent workslop, and guide teams toward mature AI adoption.

Ultimately, the gap between AI’s promise and its returns comes down to what engineering leadership incentivizes. Are you upleveling engineers by encouraging thoughtful AI adoption, or are you just issuing blanket mandates that create the illusion of improved productivity and velocity?

It is possible to increase velocity without sacrificing output quality or trust within the organization. With the right framework, investing in AI yields significant value without workslop becoming a perpetual tax on your returns.

