AI models can’t understand code. Does that matter?

New research highlights how little large language models understand about the code they are churning out. Should we care?
April 25, 2024

As developers find new ways to get more done by using generative AI-powered coding assistants, a recent study has found that large language models (LLMs) are more parrot than 10x developer when it comes to the relatively simple task of summarizing code.

“I’ve talked to a lot of people in the area and their instinct is that you can ask these language models to do any task for you, and they’ll do it,” says Rajarshi Haldar, a researcher at the University of Illinois Urbana-Champaign and co-author of the paper. “It’s great how often they do work, but you also have to know when they don’t work.”
