Leading Engineering Teams in the Age of Agentic AI

AI is accelerating delivery like nothing before it. Here are some things I think we need to pay attention to as leaders.

Something has shifted in engineering teams over the past year. Not just the velocity, though that has changed too. It is the way engineers are working and how the role of the people leading them is shifting alongside it.

Agentic AI is genuinely exciting. A developer can now run multiple workstreams simultaneously, delegate implementation to agents, and ship things at a pace that would have seemed unrealistic eighteen months ago. I use these tools myself. I have led adoption programmes around them, and I have seen what they can do for a team.

This is not really a piece about whether any of that is a good or a bad thing. That debate is for another day. This is just me sharing some thoughts on the questions I think our role now has to answer, and how I think we will keep having to adapt as agentic AI becomes more capable and more embedded in how our teams work.


Accountability Cannot Be Delegated

One pattern I think we need to get ahead of is the "the AI wrote it" framing. Whether it surfaces in a retro, a code review, or a post-incident conversation, in my view it needs to be addressed clearly.

Anything an AI generates is under the developer's control. They review it, they push it, they own it. The accountability does not transfer to the tool.

This is not about being punitive. It is about maintaining the culture of ownership that makes teams resilient. If accountability blurs, quality suffers. If quality suffers, trust breaks down. Getting this expectation embedded early, before it becomes a default, feels to me like one of the most valuable things we can focus on right now.

If accountability blurs | What follows
"The AI wrote it" becomes an accepted excuse | Nobody owns quality, and standards quietly drop
Engineers stop scrutinising AI output | Bugs, security issues and poor decisions ship unchallenged
Incidents get blamed on the tool | Root causes go unfixed and the same problems repeat
Ownership of the codebase weakens | Technical debt accumulates with no one responsible for addressing it
Culture of carelessness takes hold | Trust within the team and with stakeholders erodes over time

Quality, Security, and Guardrails That Have to Evolve

Speed without oversight is where technical debt is born, and agentic AI makes it very easy to move fast without fully understanding what you are shipping.

Security
The concern: AI pulls in packages and dependencies engineers might not have consciously chosen or even noticed.
The response: Automated scanning tools like Snyk and SonarQube are a useful first layer in the pipeline, but they are not a substitute for engineers understanding what is in the codebase. Both are needed.

Technical debt
The concern: Vibe coding produces code that works on the surface but may not be understood beneath it.
The response: Logging and backlogging debt only works if someone understands it well enough to articulate it. The human capability to read and reason about the code has to stay in the team.

QA and testing
The concern: Ask AI to write tests and it will, but without guidance on how you want things tested the results are unpredictable. It might get it right. It might not. You simply do not know unless you have given it clear direction to work from.
The response: The strategy, the guardrails, and the instructions that shape how AI approaches testing need to exist before the agents start. Human review on top of that keeps it honest.

Code review
The concern: Reviews risk becoming a rubber stamp, especially when AI tools are doing the reviewing as well as the building.
The response: A mix of automated review tooling for a first pass and genuine human review is probably where this lands. More than one pair of eyes on work that carries real risk feels right to me, though I would not want to be prescriptive about what that looks like for every team.
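To make the security point concrete, here is a minimal sketch of the kind of pre-merge check a team might add: it flags any dependency an agent introduced that no human has explicitly signed off on. The file contents and function names are hypothetical, and this would sit alongside scanners like Snyk, not replace them.

```python
# Sketch: flag newly introduced dependencies for explicit human review.
# Assumes simple requirements.txt-style "name==version" lines; a real
# pipeline would also run automated scanners such as Snyk or SonarQube.

def parse_requirements(text: str) -> dict[str, str]:
    """Map package name -> pinned version, ignoring blanks and comments."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps[name.lower()] = version
    return deps

def unreviewed_additions(before: str, after: str) -> list[str]:
    """Return dependencies present after a change but not before it."""
    old, new = parse_requirements(before), parse_requirements(after)
    return sorted(name for name in new if name not in old)

if __name__ == "__main__":
    main_branch = "requests==2.31.0\nflask==3.0.0\n"
    agent_branch = "requests==2.31.0\nflask==3.0.0\nleft-pad-py==0.1.0\n"
    added = unreviewed_additions(main_branch, agent_branch)
    if added:
        print("New dependencies need human sign-off:", ", ".join(added))
```

The point of the sketch is the shape of the control, not the code: the diff is surfaced to a person, and the person decides.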

The Pairing Problem

When a developer is working deep in a terminal with an agent doing the heavy lifting, they are not pairing with a colleague. They are not asking questions across the room, not rubber-ducking with a senior, and not picking up the domain knowledge that transfers in those conversations.

That used to happen naturally. It does not any more, at least not as reliably.

The risk is not that AI is a poor learning partner. Used well and validated properly, it can be a useful one. The risk is that the conditions which created organic knowledge sharing have quietly changed. Silos form not because people are being secretive, but because everyone is absorbed in their own context window.

And if we let that go unchecked, I think the consequences are real. Domain knowledge concentrates in individuals rather than spreading across the team. If someone leaves, goes on holiday, or is off sick, that knowledge goes with them. Onboarding new engineers becomes harder because the shared understanding that used to live in the team now lives in a series of private conversations between one developer and their agent. Code decisions get made without the challenge and perspective that comes from another human being involved. And over time, the team starts to feel less like a team and more like a collection of individuals working in parallel.

Some people point to MCP servers as the answer here. If agents have access to shared domain knowledge through a centralised context, does it matter that engineers are not pairing? I think it matters a lot. MCP gives agents a view of the system. It does not give your engineers a shared understanding of each other's thinking, the decisions being made, or the reasoning behind them. And if that context file is not being actively maintained as the codebase shifts, even the agents will start working from a stale picture. Domain knowledge shared between machines is not the same as domain knowledge that lives in a team.

What stops happening | What the risk becomes
Engineers talking through problems together | Knowledge concentrates in individuals and their agents rather than spreading across the team
Seniors challenging junior thinking in the moment | Gaps in understanding go unnoticed until something breaks
Shared context building naturally across the team | Onboarding gets harder and more fragile because the knowledge is not written down anywhere humans actually read
Decisions being tested by a second perspective | Code gets pushed without the challenge that catches poor choices
Team cohesion forming through daily collaboration | The team becomes a group of individuals working in parallel

As leaders, I think we have to be deliberate about recreating those conditions. That might mean reinstating pairing sessions as a protected activity rather than something that happens when there is time. Knowledge sharing probably needs to become a ritual again, not an afterthought.


Keeping Up With the Pace: Scrum and Agile in an Agentic World

I do not think Scrum or agile practices are going anywhere. If anything, the principles behind them matter more when the pace of delivery is accelerating. The question I keep sitting with is how we keep those practices meaningful when the context they were designed for is shifting so quickly.

What I can see is that some of the assumptions our Scrum events were built on are starting to feel less stable than they used to.

Velocity
What is shifting: Velocity still matters. It measures how far a team can travel, a blend of capacity and pace. But with agents amplifying output unpredictably, velocity may become more volatile and harder to rely on as a planning signal.
What needs attention: Stay aware of it but hold it loosely. The trend matters more than the number right now.

Estimation
What is shifting: Engineers are effectively becoming agent managers, juggling their own judgement with prompting skill and agent output. Some will move significantly faster than others. The models themselves keep improving. Estimates become harder to trust because the ground keeps shifting.
What needs attention: Acknowledge the volatility openly with the team. Predictability is one of Scrum's strengths and it is under real pressure.

Definition of Done
What is shifting: The Definition of Done needs to include criteria that AI output has to meet, alongside the human review, testing, and sign-off steps that sit on top.
What needs attention: Revisit it regularly. Retrospectives are the right place to keep it in sync as the tooling evolves.

Definition of Ready
What is shifting: Agents need clear intent and well-formed instructions before they start. Vague tickets produce unpredictable output.
What needs attention: Sharper acceptance criteria and upfront thinking become more valuable as agents take on more of the build.

Sprint rhythm and team size
What is shifting: If one engineer with agents can do the equivalent of what three people once did, do teams get smaller and more focused, or do we adapt to handling much larger volumes of work?
What needs attention: I genuinely do not know yet. This is an open question we are living through right now. Worth watching closely rather than assuming the old model still fits.
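The "trend over the number" point about velocity can be made concrete with a small sketch. The sprint figures below are invented for illustration; the idea is simply that a rolling average is a steadier planning signal than any individual sprint once agents make output volatile.

```python
# Sketch: a rolling trend smooths out sprint-to-sprint volatility.
# The velocity figures are invented, purely for illustration.

def rolling_average(values: list[float], window: int) -> list[float]:
    """Average each consecutive window of sprint velocities."""
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

velocities = [21, 48, 19, 55, 24, 60]  # noisy sprint-by-sprint numbers
trend = rolling_average(velocities, window=3)

# Individual sprints swing wildly; the three-sprint trend moves gently.
print("raw:  ", velocities)
print("trend:", [round(t, 1) for t in trend])
```

Nothing about this is novel; it is the same discipline teams already apply to flaky metrics, applied to a metric that has newly become flaky.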

The honest answer to a lot of this is that we are in it and we do not have all the data yet. One of the things I have always valued about Scrum is its predictability, the ability to say with reasonable confidence what a team can deliver. That predictability is under more pressure than it has been for a while. Not because the framework is broken, but because the environment it is operating in is shifting faster than any of us can fully keep up with. Retrospectives feel more important than ever as a place to keep adapting in response to that.

Scrum.org launched the PSM AI Essentials course in February 2026, specifically designed to help Scrum Masters and Agile Coaches understand and apply AI within their teams and organisations. I am going to work through it and write a follow-up post on what it covers. If you are curious in the meantime, you can find it here: scrum.org/courses/professional-scrum-master-ai-essentials-training.


Documentation, Shared Context, and the Limits of MCP

A brief note for context: MCP (Model Context Protocol) is a way of giving AI agents structured access to information about a system, its architecture, conventions, existing code, so they can work with awareness of what already exists rather than from scratch. It is genuinely useful, particularly for keeping agents aligned as a codebase grows.

But MCP gives context to agents. It is less clear that it keeps the humans in a team equally aligned. If a codebase is shifting constantly under agentic development and the thing keeping agents on track is a shared context file that the team is not actively maintaining, that context will drift. And if each engineer is heads-down in their own agentic workstream, how does the team as a whole stay aware of what everyone else is building?
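One lightweight way a team might at least surface this drift, sketched here with hypothetical paths and a deliberately crude heuristic, is to check whether the shared context document has changed more recently than the code it describes:

```python
# Sketch: detect when a shared agent-context document has fallen behind
# the codebase it describes. Paths are hypothetical, and a real check
# would compare git history rather than filesystem timestamps.

from pathlib import Path

def context_is_stale(context_file: Path, code_root: Path) -> bool:
    """True if any source file was modified after the context document."""
    context_mtime = context_file.stat().st_mtime
    return any(
        p.stat().st_mtime > context_mtime
        for p in code_root.rglob("*.py")
    )
```

Even if a team automated this, it only tells you the document is stale. It says nothing about whether the humans share an understanding, which is the harder half of the problem.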

The communication practices that kept teams aligned, reviews, retros, regular conversation, cannot be outsourced to tooling. They are still the leader's responsibility to protect.


Junior Engineers: The Risk and the Opportunity

This is the area I keep coming back to, because I think the industry is at risk of getting it badly wrong and also missing a genuine opportunity.

LeadDev's AI Impact Report 2025, based on responses from over 880 engineers, found that 54% of respondents expect hiring of junior engineers to decrease as a direct result of AI coding tools. The full report is worth reading: leaddev.com/the-ai-impact-report-2025. A related LeadDev article goes deeper on what this means for early-career engineers and their path to seniority: leaddev.com/hiring/junior-devs-still-have-path-senior-roles.

The short-term logic is understandable. A senior engineer with AI tooling can cover significantly more ground than before. But the long-term arithmetic is troubling. Reduce junior hiring for two or three consecutive years and you have a meaningful gap in your pipeline of future senior engineers. You cannot fast-track that. It takes time, experience, and genuine challenge to build a senior engineer and you cannot buy that time back later.

Inside teams that do have juniors, there is a different but equally real concern. 38% of respondents to LeadDev's AI Impact Report agreed that AI tools have already reduced the amount of direct mentoring junior engineers receive from senior engineers. That matters. When a junior pairs with a senior, they build intuition. They learn why a decision was made, not just what it was.

AI can also mask weak code. What looks like a clean implementation on the surface might be underpinned by a misunderstanding that would have been caught in a proper review. Seniors can mistake polished AI-assisted output for genuine competence and hand over more responsibility too soon. The problems surface later.

But here is the opportunity. Juniors who genuinely learn how to work with agentic AI, with proper mentorship, structured review, and guardrails, are going to be among the most capable engineers in five years. They are the first AI-native generation, and I think that is more significant than it might sound.

I think about my own generation of developers. We wrote raw HTML and CSS, wrestled with browser quirks, and built things from scratch before React, before modern frameworks, before any of the tooling that abstracts all of that away today. Those frameworks are brilliant and I use them. But when something breaks deep in the stack, I know where to look and why, because I lived through the era before the abstraction existed.

The juniors coming through right now are living through the equivalent moment with AI. The tooling is rough around the edges, the patterns are still forming, and nothing is polished yet. That is exactly the right time to be in it. In twenty or thirty years, when AI-assisted development is as normalised as React is today, the engineers who started now will carry a depth of understanding that simply cannot be shortcut. They will know why things go wrong, not just how to ask an agent to fix them.

If we close the pipeline now, we do not just lose the junior engineers of today. We lose the senior engineers and technical leaders of the 2040s. That is the part I think the industry is not thinking about carefully enough.


The Manager's Role Has Not Got Smaller

I want to end on something that gets lost in conversations about AI and productivity.

The things I have outlined in this post (accountability blurring, quality and security risks scaling up, knowledge silos forming quietly, delivery practices under strain, documentation drifting, the junior pipeline narrowing) are not problems an AI will surface for you. They show up in how a team feels, in how people talk in retros, and in how much psychological safety exists to raise a concern.

That is still the manager's domain. And as delivery gets faster, as agents handle more, and as the pace of change accelerates further, that human layer of leadership becomes more important, not less.

My instinct is that the leaders who will navigate this well are the ones who stay curious about the tooling, honest about what it changes, and deliberate about protecting the human things that make teams work. Whether that turns out to be right, only time will tell.

