Chapter 5

AI Coding in Established Organisations: Governance, Risk, and the Skills Gap

More than 80% of developers are using or planning to use AI coding tools. But without governance, AI accelerates the feature factory, building the wrong things faster. Three risks established businesses must address.

AI has fundamentally changed how software gets built. Prototyping speed has accelerated dramatically. Code generation is no longer experimental — it is becoming the default workflow. But for established organisations with existing codebases, compliance requirements, and complex team structures, the story is more nuanced than the productivity headlines suggest.

  • 80%+ of developers are using or planning to use AI coding tools
  • 40% productivity jump for PMs using AI tools (McKinsey)
  • ~25% of Google's codebase is now AI-assisted

The question is not whether your teams should use AI tools. They already are. The question is whether your organisation has the structures to capture the upside while managing the risks. For most established businesses, the answer is no.

The Pattern: Three Risks for Established Businesses

AI coding tools create specific risks for organisations that already have complex codebases, regulatory obligations, and mature teams. These risks are different from those facing startups building greenfield products.

1. The Feature Factory Accelerator

AI lets you build the wrong things faster. If your organisation already suffers from feature factory dynamics — shipping features without validating whether they solve customer problems — then AI-accelerated delivery makes the problem worse, not better.

Without disciplined discovery, AI-speed delivery of unvalidated ideas produces tech debt at an unprecedented rate. You can now generate a feature in hours that used to take weeks. But if nobody validated whether the feature should exist, you have simply accelerated the accumulation of code that doesn't serve customers and will need maintaining forever.

The feature factory doesn't need more speed. It needs more discipline. AI without discovery is a faster path to the same dead end.

2. The Governance Gap

Approximately 40–45% of AI-generated code contains potential security vulnerabilities. This is not a theoretical concern. If AI-generated code enters production without the same scrutiny applied to human-written code, you are accepting security exposure that may affect your compliance with GDPR, HIPAA, SOC 2, and similar frameworks.

Most established organisations have code review processes, automated testing pipelines, and security scanning tools. The question is whether AI-generated code goes through these same gates — or whether the speed advantage of AI is being captured by skipping the controls that exist for good reason.

The governance question every product leader should ask:

Does AI-generated code in your organisation go through the same security scanning, code review, and testing processes as human-written code? If you don't know the answer, the answer is probably no.
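One way to make that question operational is a single merge gate that scans every change with the same tooling, regardless of whether the diff came from a person or an AI assistant. The sketch below is a minimal illustration, assuming a Python codebase in a git repository with Bandit installed as the scanner; the branch name, scanner choice, and file filter are placeholders to adapt to your own pipeline.

```python
"""
Minimal CI gate sketch: run the same security scan on every change,
whether the diff was written by a person or generated by an AI tool.
Assumes a git repository, Python files, and Bandit installed
(`pip install bandit`); swap in your organisation's scanner of choice.
"""
import subprocess
import sys

# Base branch to diff against; most CI systems expose something equivalent.
BASE_REF = "origin/main"  # assumption: adjust to your default branch

def changed_python_files(base_ref: str) -> list[str]:
    """Return Python files added or modified relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=ACM", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = changed_python_files(BASE_REF)
    if not files:
        print("No Python changes to scan.")
        return 0
    # Bandit exits non-zero when it reports findings; treat that as a failed gate.
    result = subprocess.run(["bandit", "-q", *files])
    if result.returncode != 0:
        print("Security findings detected; merge blocked pending review.")
        return 1
    print("Security scan passed for all changed files.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Wired in as a required status check, a gate like this removes the temptation to capture AI's speed advantage by skipping the controls that already exist for human-written code.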

3. The Skills Polarisation

The AI skills gap is creating a two-tier workforce. AI-focused specialists command approximately 35% higher salaries than their non-AI counterparts. 71% of leaders now prefer less-experienced candidates with strong AI skills over experienced candidates without them.

For established organisations, this creates a compounding problem. Teams that don't upskill face a growing capability gap. The people who can use AI tools effectively become increasingly valuable. The people who can't become increasingly replaceable. And the gap widens every quarter.

This is not just an engineering concern. Product managers, designers, and QA professionals all face the same dynamic. AI literacy is becoming a core competency across the entire product organisation, not a specialist skill for a subset of engineers.

  • ~35% salary premium for AI-focused specialists
  • 71% of leaders prefer less-experienced candidates with strong AI skills

What This Means for Product Leaders

AI coding tools are not optional. Your teams are already using them, whether or not you have a policy. The choice is not whether to adopt AI — it is whether to adopt it deliberately, with governance, or to let it happen by default, without controls.

For product leaders in established organisations, the stakes are higher than for startups. You have existing compliance obligations. You have complex, legacy codebases where AI-generated code can introduce subtle bugs. You have teams with varying levels of AI literacy. And you have organisational structures that may treat AI adoption as purely an engineering concern when it is, in fact, a product strategy concern.

The Playbook: What the Handbook Covers

Chapter 6 of the Product Leaders handbook provides the implementation guide for governing AI adoption across your product organisation. Here is the outline.

Step 1: Treat All AI-Generated Code as Untrusted Code

Mandate security scanning for all AI output. Write tests before the AI generates the implementation. Require human review before merge. The handbook provides the review checklist and process templates.
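To make "tests before implementation" concrete, here is a minimal test-first sketch: the team commits the tests and a stub first, and any AI-generated implementation is accepted only once it passes them under human review. The function name, discount rules, and values are hypothetical, chosen purely for illustration.

```python
"""
Test-first sketch for Step 1: the tests are committed before any
AI-generated implementation is accepted. Function name, rules, and
values here are hypothetical, purely for illustration.
"""
import pytest

# Stub committed alongside the tests; an AI tool may propose the body,
# but the tests (and a human reviewer) decide whether it is accepted.
def apply_discount(price: float, loyalty_years: int) -> float:
    raise NotImplementedError  # to be filled in, then reviewed before merge

def test_no_discount_for_new_customers():
    assert apply_discount(100.0, 0) == 100.0

def test_five_percent_discount_after_two_years():
    assert apply_discount(100.0, 2) == 95.0

def test_negative_price_rejected():
    with pytest.raises(ValueError):
        apply_discount(-10.0, 2)
```

Run under pytest, this suite fails on the stub by design; the failing state defines what "done" means before the AI writes a line of the implementation.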

Step 2: Use AI Speed for Discovery, Not Just Delivery

Redirect AI-accelerated development toward building prototypes to validate hypotheses. Use the speed advantage for customer experiments, not for shipping unvalidated features faster. The handbook covers the discovery-delivery integration model.

Step 3: Upskill the Entire Team

AI literacy is a core competency for PMs, designers, and engineers — not a specialist skill. The handbook includes the skills assessment framework and learning path by role.

Step 4: Redefine "Senior"

Senior engineers become the guardrails around AI development. Their value shifts from writing code to reviewing AI output, designing system architecture, and ensuring quality. The handbook covers the updated role definitions and career frameworks.

Step 5: Establish AI Governance Deliberately

Define approved tools, review standards, and compliance requirements before adoption spreads further. The handbook provides the governance framework, tool evaluation criteria, and compliance checklist.
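One lightweight way to make that governance explicit is to keep the approved-tool list as policy-as-data that both people and CI checks can read. The sketch below assumes a small Python helper; the tool names, data classifications, and rules are illustrative placeholders, not recommendations.

```python
"""
Policy-as-data sketch for Step 5: a machine-readable record of which
AI tools are approved, and for what data classification. Tool names,
classifications, and rules below are illustrative placeholders.
"""
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolPolicy:
    name: str
    max_data_classification: str   # highest data class the tool may touch
    requires_human_review: bool    # output must pass human review pre-merge

# Ordered from least to most sensitive; adjust to your own scheme.
CLASSIFICATIONS = ["public", "internal", "confidential", "regulated"]

APPROVED_TOOLS = {
    "example-code-assistant": ToolPolicy("example-code-assistant", "internal", True),
    "example-chat-model": ToolPolicy("example-chat-model", "public", True),
}

def is_approved(tool_name: str, data_classification: str) -> bool:
    """Check whether a tool may be used on code handling the given data class."""
    policy = APPROVED_TOOLS.get(tool_name)
    if policy is None:
        return False  # unknown tools are denied by default
    return (CLASSIFICATIONS.index(data_classification)
            <= CLASSIFICATIONS.index(policy.max_data_classification))

if __name__ == "__main__":
    print(is_approved("example-code-assistant", "confidential"))  # False
    print(is_approved("example-code-assistant", "internal"))      # True
```

Keeping the policy in one reviewable file, rather than in a slide deck, means the answer to "which tools are approved, for what" is versioned, auditable, and enforceable before adoption spreads further.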

Watch Out For

  • Using AI to accelerate the feature factory. If you are already shipping features nobody uses, doing it faster is not an improvement. AI-accelerated delivery without discovery discipline produces more waste, not less.
  • Letting AI code skip review. The speed advantage of AI is real. But the speed advantage disappears if AI-generated code introduces security vulnerabilities or subtle bugs that take weeks to diagnose. The review step is not overhead — it is the investment that makes AI adoption safe.
  • Cutting headcount based on theoretical productivity gains. AI augments senior capability; it does not replace it. Reducing team size before understanding how AI changes workflows leads to capability gaps that take months to recover from.

The Full Playbook Is in the Handbook

This article diagnoses the risks. Chapter 6 of the Product Leaders Edition provides the complete implementation guide — AI governance frameworks, review process templates, skills assessment tools, role redefinition models, and compliance checklists.