London

June 2–3, 2026

New York

September 15–16, 2026

Berlin

November 9–10, 2026

Making AI productivity gains stick across your engineering team

How to build a measurement framework for what comes after AI adoption.

Anastasia Zamyshlyaeva, Maxime Najim and Abiodun Olowode

Date & time

17:00

Register for the panel discussion


Analysis of 2,172 developer-weeks across teams using Copilot, Cursor, and Claude Code shows regular AI users improving output roughly 25% year-over-year, with real wins in test coverage and review efficiency. But the data also shows code churn climbing faster than output, duplication expanding, and quality signals that most dashboards don’t surface.

The challenge for engineering leaders isn’t adoption anymore. It’s making sure the gains hold up as AI becomes a permanent part of how teams work.

This session explores how to practically measure AI's impact on productivity, quality, developer experience, and efficiency ratios. You'll walk away with a clear picture of where AI delivers durable value, where the gains are more fragile, and how to build visibility into both.

This panel discussion will cover:

  • Where AI is delivering value beyond lines of code
  • How to get started with an AI impact measurement framework
  • Sustaining AI-driven gains beyond the initial productivity bump

Panelists:

Anastasia Zamyshlyaeva

GitKraken
VP Engineering

Maxime Najim

Target
Distinguished Engineer

Abiodun Olowode

Cleo
Engineering Manager

Moderator: