Key takeaways:
- JetBrains is betting on a future most customers haven’t arrived at.
- The portfolio is growing faster than it’s simplifying: is this a coherent stack or just sprawl? Engineering leaders will need to decide for themselves.
- Trust is the real product. Developers need to know what Central costs and that the terms won’t change mid-contract.
JetBrains’ new Central platform signals a significant shift in how it sees software development. Whether its existing customers agree is another matter.
The IDE maker recently introduced JetBrains Central, a unified production system that “connects tools, agents, and infrastructure, allowing automated work to run, be monitored, and be managed across teams.” It is scheduled to launch this year, with an Early Access Program in Q2.
Reflecting the broader shift in the industry, Central signals a new strategic direction for JetBrains. Its head of agentic platform, Oleg Koverznev, writes in the announcement blog that code generation is cheap and no longer a bottleneck. The real challenge is managing the “growing operational and economic complexity of agent-driven work.”
He has data to back this up. A JetBrains survey of 11,000 developers conducted in January 2026 found that 90% now use AI at work, 22% use AI-coding agents, and 66% of companies plan to adopt agents within the next 12 months.
However, only 13% of developers report using AI across the entire software development lifecycle, with organizations struggling to translate AI use into measurable improvements in software delivery speed, system reliability, or cost efficiency.
This is the gap that Central is designed to close, and also means JetBrains is betting on a transformation that most of its customers haven’t yet experienced.
What’s under the hood?
The product is built around three core capabilities: governance and policy enforcement, cloud infrastructure for running agents reliably, and a semantic context layer that gives agents a system-level understanding of your codebase and organization.
That last claim invites scrutiny. When asked to explain what ‘system-level understanding’ means in practice, Koverznev broke it into two parts.
The first is code intelligence. “Concretely, each program file is mapped to an abstract syntax tree,” he told me. “This allows humans and agents to have a stronger understanding of how the code coheres as a whole, such as cross-file dependencies.”
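JetBrains hasn’t published how its code intelligence is implemented, but the general idea of deriving cross-file dependencies from per-file abstract syntax trees can be sketched with Python’s standard `ast` module. The file names and contents below are invented for illustration.

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Parse one file into an AST and collect the modules it imports."""
    tree = ast.parse(source)
    deps: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps

def dependency_graph(files: dict[str, str]) -> dict[str, set[str]]:
    """Map each file to the modules it depends on, across the whole set."""
    return {name: imported_modules(src) for name, src in files.items()}

graph = dependency_graph({
    "app.py": "import billing\nfrom models import Invoice\n",
    "billing.py": "from models import Invoice\n",
    "models.py": "class Invoice: ...\n",
})
# graph shows app.py depends on billing and models; models.py on nothing
```

A production system would go far beyond imports (symbol references, call graphs, type information), but the shape is the same: per-file trees, aggregated into a whole-project view.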
The second is a broader semantic layer, currently limited to static data, that incorporates code context and will soon extend to natural language artefacts such as issue trackers and agent configuration files. Dynamic data sources, such as traces, logs, and production monitoring data, can provide valuable insights but are not yet covered.
“We perform experiments with agents, like automatic on-call incident resolution, with ingestion through OTel and integration with APMs, but they are not yet being released,” Koverznev said.
The semantic layer automatically indexes every commit as it lands. This means that when Agent A is mid-refactor and Agent B needs to generate tests, Agent B can work against the indexed state of the refactored code, rather than the pre-refactor interface.
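To make the Agent A/Agent B scenario concrete, here is a minimal, hypothetical sketch of an index keyed to the latest commit. Nothing about Central’s actual data model is public; the class and commit IDs below are invented.

```python
class SemanticIndex:
    """Toy index that always reflects the last committed state."""

    def __init__(self) -> None:
        self._files: dict[str, str] = {}
        self.head: str | None = None

    def on_commit(self, commit_id: str, changed: dict[str, str]) -> None:
        """Re-index only the files touched by the commit."""
        self._files.update(changed)
        self.head = commit_id

    def lookup(self, path: str) -> str:
        """Agents query the indexed, post-commit state, never a stale copy."""
        return self._files[path]

index = SemanticIndex()
index.on_commit("c1", {"api.py": "def fetch(url): ..."})
index.on_commit("c2", {"api.py": "def fetch(url, timeout): ..."})  # Agent A's refactor
# Agent B, generating tests after c2 lands, sees the new signature
```

The point is the contract, not the mechanics: whichever agent reads the index after a commit sees the refactored interface, not the pre-refactor one.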
Central features an “intelligent” routing layer that selects the most appropriate models, tools, and execution paths for different tasks. It is not a prerequisite for multi-step autonomous workflows. Claude Code, for example, already executes multi-step workflows, Koverznev noted. “The intelligent routing layer is primarily about completing these multi-step workflows at lower cost and latency by utilizing alternative low-cost providers, such as open-weight models.”
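Cost-aware routing of the kind Koverznev describes can be illustrated with a simple policy: pick the cheapest model whose capability clears the task’s bar, escalating to a frontier model only when necessary. The model names, capability scores, and prices below are entirely invented.

```python
# (name, capability score, $ per 1M tokens) — illustrative numbers only
MODELS = [
    ("open-weight-small", 2, 0.10),
    ("open-weight-large", 3, 0.40),
    ("frontier", 5, 5.00),
]

def route(required_capability: int) -> str:
    """Pick the lowest-cost model that clears the capability bar."""
    eligible = [m for m in MODELS if m[1] >= required_capability]
    return min(eligible, key=lambda m: m[2])[0]

assert route(1) == "open-weight-small"  # trivial step: cheapest model wins
assert route(4) == "frontier"           # hard step: escalate to frontier
```

A real router would estimate capability requirements from the task itself and factor in latency; the economics, though, reduce to exactly this kind of constrained minimization.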
However, there is an issue when the routing decision is wrong. “Today, even an intelligent Large Language Model (LLM) can provide incorrect answers; there is no ‘automatic’ rollback,” he told me.
Modern harnesses are capable of rolling back to a previous point to attempt a different solution, and this capability is currently under construction. Once built, recovery can happen in two ways: a human in the loop will be able to manually select which models and tools to use on a retry, or a plan-level rollback will trigger automatically when a test fails or a human detects an error.
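The retry-with-rollback loop Koverznev describes as under construction can be sketched as follows. This is an assumption about the general pattern, not JetBrains’ design: checkpoint before each attempt, run the tests, and restore the checkpoint before retrying with a different model.

```python
def run_with_rollback(task, candidates, checkpoint, restore, tests_pass):
    """Try each candidate model in turn, rolling back failed attempts."""
    for model in candidates:
        snapshot = checkpoint()       # plan-level checkpoint before the attempt
        result = task(model)
        if tests_pass(result):
            return model, result
        restore(snapshot)             # roll back to the pre-attempt state
    raise RuntimeError("all candidate models failed the tests")

state = {"code": "original"}
attempts = []

def task(model):
    attempts.append(model)
    state["code"] = f"patched-by-{model}"
    return state["code"]

model, result = run_with_rollback(
    task,
    candidates=["cheap-model", "strong-model"],
    checkpoint=lambda: dict(state),
    restore=lambda snap: (state.clear(), state.update(snap)),
    tests_pass=lambda r: "strong" in r,  # pretend only the strong patch passes
)
# cheap-model's failed patch was rolled back; state reflects strong-model
```

Swap the `tests_pass` predicate for a human reviewer and you get the manual variant Koverznev describes; keep it automatic and you get the test-failure-triggered rollback.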
Central supports mixing JetBrains-native agents with Claude Agent, Codex, Gemini CLI, and custom-built solutions, underpinned by the Agent Client Protocol (ACP), an open standard for connecting AI-coding agents to code editors and Integrated Development Environments (IDEs), developed by JetBrains and Zed.
True interoperability, where agents could delegate subtasks to each other, is not what’s on offer; at least, not yet.
“We do not currently support delegating sub-tasks between agent types,” Koverznev confirmed. What does exist are pre-configured sequential workflows, such as an automated code review triggered by the results of a prior agent run. That’s useful, but it is chaining rather than dynamic delegation.
More sophisticated automations are in the pipeline, but Koverznev confirmed that there are no current plans to have one agent type delegate to another as a replacement for each harness’s built-in capability to spawn sub-agents.
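The distinction between chaining and delegation is worth making concrete. In a chained workflow, the pipeline is wired in advance; in true delegation, an agent would decide at runtime to hand a sub-task to a peer. The sketch below shows only the former, with hypothetical agent stubs.

```python
def coding_agent(change: str) -> str:
    """Stub for an agent that produces a patch."""
    return f"patch for {change}"

def review_agent(patch: str) -> str:
    """Stub for an agent that reviews a prior agent's output."""
    return f"review of {patch}"

def chained_workflow(change: str) -> str:
    """Fixed pipeline: each step always feeds the next, in a set order.

    The coding agent cannot choose to invoke the review agent (or any
    other peer) itself — that runtime choice is what delegation would add.
    """
    patch = coding_agent(change)
    review = review_agent(patch)  # triggered by the prior run's result
    return review
```

The control flow lives in `chained_workflow`, not in either agent, which is exactly why this is orchestration rather than delegation.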
What about pricing?
JetBrains was heavily criticized last year after developers on annual AI Ultimate subscriptions found their credits depleted roughly ten times faster than expected, following a mid-contract change affecting credit consumption. JetBrains’ response at the time was unapologetic, arguing it couldn’t subsidize usage the way VC-backed competitors could.
Pricing for Central is yet to be announced, but JetBrains gave me some details about how it will work. It is structured into two distinct components. There is a fixed per-seat subscription covering the AI governance and policy layer. Agentic execution, by contrast, follows a pay-as-you-go model, moving toward a Bring Your Own Key (BYOK) architecture.
“This allows teams to use their existing subscriptions with providers like OpenAI or Google,” Koverznev explained. “It also ensures they maintain full control over their LLM costs and data processing, without being locked into a single provider’s shifting credits.”
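The two-part structure reduces to simple arithmetic: a fixed per-seat governance fee plus metered execution billed at your own provider’s rates. Every number below is invented for illustration; JetBrains has announced no prices.

```python
def monthly_cost(seats: int, seat_price: float,
                 tokens_used: int, provider_rate_per_1m: float):
    """Split a month's bill into the fixed and metered components."""
    fixed = seats * seat_price                            # governance layer
    metered = tokens_used / 1_000_000 * provider_rate_per_1m  # BYOK execution
    return fixed, metered, fixed + metered

fixed, metered, total = monthly_cost(
    seats=50, seat_price=20.0,    # hypothetical per-seat subscription
    tokens_used=120_000_000,      # agent runs billed against your own key
    provider_rate_per_1m=2.50,    # hypothetical provider rate
)
# fixed = 1000.0, metered = 300.0, total = 1300.0
```

The practical consequence of the split is that the metered component tracks your provider contract, not JetBrains’ credit scheme, which is the lock-in concern the BYOK design is meant to address.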
For a single, discrete Application Programming Interface (API) call, cost attribution is straightforward. The shared semantic index is more complex: it is a cost-bearing resource that many agents consume without each one re-incurring its construction cost.
JetBrains confirmed that the semantic layer uses both LLMs and embedding models to generate its representations, and that this incurs real costs. Koverznev said the company uses advanced incremental algorithms to keep those costs manageable, “even for big mono repositories being updated thousands of times in an hour,” with the cost passed through its standard pay-as-you-go credit mechanism.
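One common way to keep such costs incremental, hash each file and re-embed only what changed, can be sketched with the standard library. This is a generic technique, not a description of JetBrains’ algorithms.

```python
import hashlib

def changed_files(previous_hashes: dict[str, str], files: dict[str, str]):
    """Return files that need re-embedding, plus the updated hash map."""
    new_hashes: dict[str, str] = {}
    dirty: list[str] = []
    for path, content in files.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        new_hashes[path] = digest
        if previous_hashes.get(path) != digest:
            dirty.append(path)  # only these incur embedding cost
    return dirty, new_hashes

first = {"a.py": "x = 1", "b.py": "y = 2"}
dirty1, hashes = changed_files({}, first)  # cold start: everything is dirty
dirty2, _ = changed_files(hashes, {"a.py": "x = 1", "b.py": "y = 3"})
# dirty2 == ["b.py"]: a commit touching one file re-embeds one file
```

For a monorepo updated thousands of times an hour, the win is that per-commit cost scales with the size of the diff, not the size of the repository.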
A growing portfolio and the question of where to start
Central doesn’t simplify the question of which JetBrains product to use. Alongside Central, the portfolio now includes Junie, Claude Agent, Koog, Grazie, Mellum, Air, and Air Team. When asked what the recommended agentic stack looks like for an engineering leader, Koverznev leaned into optionality rather than prescription.
“The enterprises we talk to today need flexibility in using a wide selection of tools,” he said. “A single enterprise may comprise 20 teams, where each team is opinionated about their tool choice and selects different tools for their use.”
The recommended approach is to use JetBrains Air as the neutral Agentic Development Environment, with Central sitting above it for governance, visibility, and cross-team coordination. Junie CLI is positioned as the option for teams that want deep project understanding injected into any workflow, including non-JetBrains environments.
In practice, though, engineering leaders need to do their own mapping of which tools address which problems before they can evaluate whether the portfolio is genuinely complementary or just sprawling.
The Code With Me casualty
Alongside the launch of Central, JetBrains announced it is sunsetting Code With Me, its collaborative pair-programming feature, with the 2026.1 release being the last to officially support it. The feature will be available as a standalone plugin until Q1 2027, when the public relay infrastructure will be shut down entirely.
The vendor cited declining demand since the feature’s pandemic-era peak. The response in the blog comments was pointed. One developer described the impact as “devastating” for a two-person remote company that pair-programs full-time. Others argued the value of a tool isn’t measured by average usage frequency, but by how critical it is when you need it, such as for onboarding, debugging with a colleague, or working with students.
Another commenter asked why, if AI agents are supposed to be writing most of the code, JetBrains couldn’t maintain a feature like Code With Me. It’s a fair question. When we put it to Koverznev directly, he framed the decision in terms of a new standard for what earns engineering investment.
“In an agentic-first world, our internal bar for a feature is its ability, like JetBrains Air and Central, to provide professional infrastructure that moves beyond traditional pair programming toward a model where humans and ‘agent co-workers’ coordinate across a unified, code-native ecosystem.”
In the mid-to-late 1990s I worked in management information for an investment bank. We used to talk about the “fact gap” when, in the absence of good information, managers and executives sometimes made intuitive guesses that were disastrously wrong. I’ve found myself reflecting on this idea in the context of JetBrains; for developers whose workflow relies on tools like WebStorm, Rider, or even IntelliJ, what guarantee is there that those tools won’t be sunsetted too? Might our careers go the same way?
Koverznev pushed back on the suggestion that the IDE business is being de-emphasized. “We are doubling down on the IDE as the primary environment for professional editing, navigation, and high-integrity development,” he said. “Our 26 years of IDE expertise remain the foundation of everything we build.” The implied logic is that the new collaboration is agent-to-human, rather than human-to-human.
However, JetBrains, like any company, has finite resources. As it focuses on rebuilding itself around automated coding, some less popular products and features will inevitably be dropped or left to languish. Code With Me joins a growing list of sunsetted JetBrains products and features, including AppCode, Aqua, and CodeCanvas.
What this means for engineering leaders
The blog post’s claim that “AI will not replace software development” sits uneasily alongside a product that automates issue investigation, code generation, test execution, and multi-step workflows. Koverznev sees the new role of the software developer as defined by accountability, rather than implementation.
“Developers are accountable for the quality of the codebase, not the agents,” he said. Day-to-day, that means expressing project intent, designing and enforcing architecture, scoping and delegating work to agents, and reviewing their output.
He argues that Central is not a replacement for developer judgment since it “simply ensures that the AI use within these tools is governed and tracked across the enterprise.” It also provides a cloud execution layer that makes agent runs shareable across teams, and enables hand-offs – “human one to agent one, to human two…” – that would otherwise be difficult to coordinate.
Koverznev frames the broader shift in stark terms. “Developer workflow is undergoing a fundamental shift from manual implementation and ‘chat box sidecars,’ toward a fragmented sprawl of autonomous agents operating across silos,” he said. “Developers are moving past simple code generation and are now tasked with managing AI sprawl and the resulting ‘shadow tech debt’ that occurs when agents lack a deep understanding of a project’s history and architecture.”
As we’ve explored on LeadDev previously, this is a problematic shift. It scales badly, and removes the fun part of the job for many developers.
Surviving a shift you didn’t start
For JetBrains, the underlying vision of AI agents as a distributed production system, rather than a chat interface bolted onto an IDE, may well be the right one.
If your organization is moving toward agentic workflows, as the vendor’s survey data suggests most are planning to, then a governance and orchestration layer that isn’t tied to a single model provider is a reasonable thing to want. Avoiding lock-in matters in a market where the leading models change every few months, and where the cost of replatforming can outstrip the benefit of switching.
It’s also worth acknowledging that this kind of transition is genuinely hard, not only for the developers being asked to change how they work, but for the companies whose livelihoods depend on remaining useful through the change.
JetBrains built its reputation over two decades by making tools that developers wanted to use, and it now faces the uncomfortable task of dismantling parts of that identity to survive a shift it didn’t initiate. Sunsetting beloved features, repricing products mid-cycle, and betting on a future that most customers haven’t arrived at yet are not easy calls, even when they’re arguably the right ones.
Equally though, the developers who stuck with JetBrains through the pricing controversy deserve a straight answer on what Central is going to cost, and the reassurance that the terms won’t change mid-contract.