As the gap between AI enthusiasm and scepticism grows, resistance is increasingly looking like an anchor on career growth.
From the moment ChatGPT became available to the public, corporate executives have raced to hop on board the generative AI hype train.
Workers have been decidedly slower to follow suit, with concerns over diminished opportunities as a result of AI automation, data security risks, environmental concerns, and the overall quality of AI-assisted outputs.
This ambivalence may be especially pronounced in tech, where workers are generally more attuned to the gaps between a new technology’s potential and its near-term limitations.
Now, tech CEOs are alternately urging their workers to embrace AI to stay relevant and warning that AI will make many of those workers obsolete.
“Organizations are heavily invested, financially and reputationally, in AI,” says Sonu Kapoor, a senior front-end engineer and consultant based in Toronto. “Managers naturally want people on those initiatives who are unlikely to slow momentum with skepticism, even if that skepticism is technically valid.”
It’s a climate that leaves software engineers and their colleagues with seemingly little room to voice concern over the AI revolution – at least, not without paying a professional price.
Money talks
There is limited data to quantify whether AI skepticism has had a pronounced negative impact on career prospects. But broader market trends suggest that, for many in tech, speaking ill of AI means biting the hand that feeds you.
In December 2025, 55% of US tech job postings listed AI skills as a requirement, an 84% increase from December 2024, according to a tech jobs report from the recruitment firm Dice. A 2025 global workforce analysis from PricewaterhouseCoopers (PwC) further found that workers with AI skills earned a 56% wage premium over others in the same job without those skills, up from 25% in 2024.
Though he has never known anyone to face direct or career-ending ramifications for being critical of AI, Kapoor has seen several colleagues “get sidelined” for questioning whether the technology delivered real customer value. These workers tended to be relegated to legacy projects that, while important, were rarely celebrated as engines for company growth.
In contrast, engineers who were openly enthusiastic about AI were more likely to be staffed on high-visibility pilots and innovation efforts that offered executive exposure. This translated into stronger performance narratives and better long-term career positioning, not to mention higher bonuses and overall pay.
As Kapoor sees it, the potential blowback has less to do with perceived AI skepticism than being seen as inflexible. “In practice, leadership is looking for people who can de-risk AI adoption, not block it. I’ve personally been careful about how I phrase public criticism.”
Fergal Glynn, Chief Marketing Officer and AI Security Advocate for the AI security platform Mindgard, says that if he were an employee, he too would be careful about how he discusses AI at work and online – especially on platforms like LinkedIn.
“When you become critical about the risks associated with AI or its impact on labor, you may get sidelined, pushed out of big labs, or not considered for big projects and conferences,” Glynn says. “When you question the hype of AI, you may lose promotions or other opportunities for being labelled as negative or ‘not aligned with the vision.’”
Glynn points out that leaders sometimes appreciate when workers provide constructive, evidence-based criticism that both acknowledges the benefits of AI, while offering suggestions to make it safer. But context is important. “Even a well-thought-through constructive criticism has risks when it challenges how the company generates revenue, or the speed of the company to launch a product in the market, or the commitment by the leadership,” Glynn says.
Russell Twilligear, Head Of AI Research and Development at BlogBuster, sums it up plainly. “Job growth and capital are centered around AI, so when someone puts themselves against AI, that can create a career risk,” he says. “People often watch what they are saying not because of criticism or fear of getting into trouble, but because what they say matters when it comes to making money.”
Exceptions to the rule
As potentially damaging as it may be to outright dismiss AI, “blind enthusiasm” can be equally discrediting. “It’s a tool,” says David Kimball, co-founder of the AI marketing agency lilAgents. “If the Web3 wave taught us anything, it’s that systems and tools don’t win based on pure potential alone. Blockchain tech is amazing, and quietly still growing in adoption in the background, but it certainly didn’t catch on the way the cryptobros had predicted.”
James Stanier, CTO for veterinary business at the healthcare software company Nordhealth, agrees that uncritical buy-in to new tech innovation often ages poorly in hindsight. “Plenty of leaders have made bold claims and investments that haven’t materialized. When the dust settles, the people who maintained a thoughtful, evidence-based view – whether that was optimistic or cautious – will look better than those who picked a side and dug in.”
The difference comes down to curiosity. Every new technological shift attracts its evangelists and its haters. Those who support their positions in the spirit of inquiry, and with an open mind, are better positioned to arrive at the other side of the hype wave with their reputations intact.
“There’s a difference between ‘I don’t think AI will replace software engineers, here’s why, and here’s what I think it will actually change’ versus ‘this is all hype and I’m not engaging with it,’” Stanier explains. “The former is a legitimate professional opinion. The latter can be read as someone who’s opted out of thinking critically about a significant shift in the industry, which isn’t a great look regardless of the topic.”
Framing also matters. Kapoor says that, when he opts to express an AI-critical view, he takes care to avoid sweeping or emotionally charged claims that might be read as anti-innovation. Instead, he focuses his remarks on concrete AI-implementation challenges.
“This lets me raise legitimate concerns about cost, maintainability, trust, governance, and long-term impact on teams without sounding dismissive of AI itself,” Kapoor explains. “In today’s market, being anti-AI is often perceived as being anti-business, so nuance really matters.”