Infrastructure Patterns

Faster Isn't Better When Nobody's Governing

METR's 2025 study dropped a number that should bother anyone leading an AI-enabled engineering team: experienced developers using AI coding tools were 19% slower — while believing they were 24% faster.

That is a 43-point perception gap. The teams think they are accelerating. The measurements say otherwise.

This is not an argument against AI-assisted development. It is an argument against unstructured AI-assisted development. The distinction matters.

The productivity illusion

The surface metrics look great. Commit velocity is up. Pull request cycle times are down. Code is shipping faster than ever. Dashboards are green.

But underneath the velocity metrics, a different picture is forming.

GitClear's 2024 analysis found code duplication increased roughly 8x in AI-assisted codebases. Google's DORA research showed every 25% increase in AI adoption correlated with a 7.2% drop in delivery stability. The Consortium for Information and Software Quality (CISQ) estimated $2.41 trillion in annual costs from poor software quality — a number that predates widespread AI code generation.

The pattern: AI makes it faster to produce code. It does not make it faster to produce good code. Without structure, the speed creates compounding costs that surface weeks or months after the commit.

Speed without governance creates compounding costs

Ajay Pundhir nailed the framing: "Speed without governance creates compounding costs." That sentence should be on a whiteboard in every engineering leadership meeting.

The compounding works like this: an AI assistant generates a function quickly. It works. It ships. But it duplicates logic that already exists elsewhere in the codebase. The next AI-generated function duplicates it again. Three months later, a business rule changes and the team discovers the logic lives in fourteen places, not one. The cost of the original speed advantage has compounded into a maintenance burden the original developer never saw.

This is not a new problem. Ungoverned velocity has always created technical debt. What is new is the rate of accumulation. When a human developer writes duplicated code, they produce it at human speed. When an AI assistant does it, the duplication compounds at machine speed.

What the fix looks like

The answer is not slowing down. It is governing the speed.

Three patterns keep appearing in teams that report sustained productivity gains from AI coding tools:

Architecture-first workflows. The AI needs to understand the system before it writes the code. Teams feeding architectural context — dependency maps, module boundaries, existing patterns — into their AI workflows report fewer duplication and integration issues. The AI is not guessing at system structure. It is operating within known constraints.
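One way to make "feeding architectural context" concrete is a small preamble builder that collects known structure before any prompt goes out. This is a sketch under assumptions: the file names (`docs/dependency-map.json`, `docs/boundaries.md`) and their shapes are hypothetical, not anything the study or tools above prescribe.

```python
import json
from pathlib import Path

def build_context_preamble(repo_root: str) -> str:
    """Assemble architectural context to prepend to an AI coding prompt.

    Assumes two hypothetical files: docs/dependency-map.json mapping each
    module to the modules it depends on, and docs/boundaries.md describing
    module boundaries in prose.
    """
    root = Path(repo_root)
    sections = []

    dep_map = root / "docs" / "dependency-map.json"
    if dep_map.exists():
        deps = json.loads(dep_map.read_text())
        lines = [f"- {mod} depends on: {', '.join(uses)}"
                 for mod, uses in sorted(deps.items())]
        sections.append("Known module dependencies:\n" + "\n".join(lines))

    boundaries = root / "docs" / "boundaries.md"
    if boundaries.exists():
        sections.append("Module boundaries:\n" + boundaries.read_text())

    # The constraint the pattern is after: operate within known structure.
    sections.append("Reuse existing helpers; do not duplicate logic "
                    "that already lives in the modules above.")
    return "\n\n".join(sections)
```

The point is not the format but the habit: the assistant sees the dependency map and boundaries on every request, so it is constrained by the system's actual shape rather than guessing at it.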

Queryable context. When business rules live in stored procedures, tribal knowledge, and outdated wikis, an AI assistant has no choice but to guess. Teams making their business rules and architectural decisions queryable — structured, searchable, and current — give the AI a foundation to build on instead of a void to fill.
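"Queryable" can start very small. Here is a minimal sketch of a rules registry, purely illustrative: the field names and the example rule are invented, and a real deployment would back this with a database or search index rather than a dict.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessRule:
    rule_id: str
    summary: str
    owner: str
    source: str                      # authoritative location of the rule
    tags: list = field(default_factory=list)

class RuleRegistry:
    """Minimal in-memory queryable store for business rules."""

    def __init__(self):
        self._rules = {}

    def add(self, rule: BusinessRule) -> None:
        self._rules[rule.rule_id] = rule

    def search(self, term: str) -> list:
        """Return rules whose summary or tags mention `term`."""
        term = term.lower()
        return [r for r in self._rules.values()
                if term in r.summary.lower()
                or any(term in t.lower() for t in r.tags)]
```

Once rules live in a structure like this, they can be surfaced to an AI assistant (or a new hire) on demand instead of being rediscovered from stored procedures and stale wikis.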

Outcome metrics, not activity metrics. Commit velocity and PR cycle times measure motion, not progress. Teams tracking defect rates, code duplication, rework cycles, and production incidents alongside velocity metrics catch the compounding costs before they compound.
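Duplication, unlike velocity, rarely appears on a dashboard by default, but a crude signal is easy to compute. The sketch below estimates the fraction of repeated fixed-size line blocks in a source string; it is deliberately naive, and real tools such as GitClear use far more sophisticated detection.

```python
import hashlib
from collections import Counter

def duplicate_block_ratio(source: str, window: int = 5) -> float:
    """Estimate duplication: fraction of `window`-line sliding blocks
    (whitespace-normalized) that appear more than once in `source`."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if len(lines) < window:
        return 0.0
    # Hash every consecutive block of `window` normalized lines.
    counts = Counter(
        hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
        for i in range(len(lines) - window + 1)
    )
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total
```

Tracked over time alongside defect rates and rework cycles, even a rough ratio like this turns the compounding cost from a surprise into a trend line.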

The question worth asking

The organizations winning with AI-assisted development will not be the fastest or the slowest. They will be the most deliberate.

If your team is using AI coding tools, the question is not "how much faster are we going?" It is "what are we not measuring that will cost us later?"

The productivity dashboards will not tell you. The codebase will — eventually.


Sources

  • METR 2025 Study: Measuring the Impact of AI Coding Tools — METR

  • GitClear 2024: Coding on Copilot — GitClear

  • Google DORA Research — DORA

  • Cost of Poor Software Quality — CISQ

  • Ajay Pundhir analysis — LinkedIn
