8 lessons from tech leadership on scaling teams and AI
What we learned from the first year of Leaders of Code.
January 14, 2026
It’s been nearly a year since we launched Leaders of Code, a segment on the Stack Overflow Podcast where we curate candid, illuminating, and (dare we say) inspiring conversations between senior engineering leaders.
An impressive roster of guests from organizations like Google, Cloudflare, GitLab, JPMorgan Chase, Morgan Stanley, and more joined members of our senior leadership team to compare notes on how they build high-performing teams, how they’re leveraging AI and other rapidly emerging tech, and how they drive innovation in their engineering organizations.
To kick off 2026, we wanted to collect some overarching lessons and common themes that many of our guests touched on last year, from the importance of high-quality training data, to why so many AI initiatives fizzle, to what the trust/adoption gap tells us and how to bridge it.
Read on for the most important insights we heard last year.
AI initiatives need quality data
Poor data quality undermines even the most sophisticated AI initiatives. That was a unifying theme of our show throughout 2025, beginning with the inaugural Leaders of Code episode. In that conversation, Stack Overflow CEO Prashanth Chandrasekar and Don Woodlock, Head of Global Healthcare Solutions at InterSystems, explored how and why a robust data strategy helps organizations realize successful AI projects.
An out-of-tune guitar is an apt metaphor for bad data: no matter how skilled the musician (or how advanced the AI model), if the instrument itself is broken or out of tune, the output will be inherently flawed.
Organizations rushing to implement AI often discover that their data infrastructure is fragmented across siloed systems, inconsistently formatted, and devoid of proper governance. These issues prevent AI tools from delivering meaningful business value or proving themselves to skeptical developers.
In the episode, Prashanth and Don emphasized that maintaining a human-centric approach when automating processes with AI requires building trust among users, which, in turn, starts with clean, well-organized data that AI systems can reliably interpret and effectively use.
Most organizations overestimate data readiness
Too many organizations rush into AI implementation without properly assessing whether their data infrastructure can support it, explained Ram Rai, VP of Platform Engineering at JPMorgan Chase. This overconfidence stems from a fundamental misunderstanding: Having data is not the same as having AI-ready data. A centralized, well-maintained knowledge base is essential for getting AI initiatives off the ground successfully, yet most organizations discover this requirement only after launching poorly conceived pilot projects.
Organizations often fail to evaluate whether their AI projects align with core business values. This can lead to wasted investments in tools that cannot access the internal context necessary for meaningful results. In highly regulated environments with heavy compliance requirements like banking and finance, Ram says his team can’t ignore the productivity benefits offered by AI. At the same time, he says, they must “be surgical about it,” particularly when dealing with critical infrastructure where “we can't entirely trust probabilistic AI.”
Internal knowledge is the antidote to AI hallucinations
Enterprise AI models frequently hallucinate because they lack access to internal company knowledge, as Ram points out: “Why does AI hallucinate? Because it lacks the right context, especially your internal context. AI doesn't know your IDP configuration, token lifetimes, your authentication patterns or your load balance settings, so the training data is thin on this proprietary knowledge.”
This gap between general training data and specific organizational knowledge leads AI tools to make convincing-sounding but fundamentally incorrect suggestions. Grounding AI tools in verified, internal documentation significantly improves accuracy and reliability, helping enterprise users realize the value they need from these new tools.
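To make that grounding concrete, here’s a minimal sketch of the retrieval step: pull the most relevant internal documents and inline them into the prompt, so the model answers from verified context instead of thin training data. Everything here is illustrative, not from the episode; the in-memory doc store and the crude keyword scorer stand in for a real embedding search over a vector store.

```python
# Minimal retrieval-grounding sketch (hypothetical data and names throughout).
# Instead of asking the model cold, we first retrieve relevant internal docs
# and inline them into the prompt, so answers come from verified context.

INTERNAL_DOCS = [
    {"title": "Auth service token lifetimes",
     "body": "Access tokens expire after 15 minutes; refresh tokens after 8 hours."},
    {"title": "Load balancer settings",
     "body": "The edge LB uses least-connections routing with a 30s health check."},
]

def score(query: str, doc: dict) -> int:
    """Crude keyword overlap; a real system would use embeddings + a vector store."""
    terms = set(query.lower().split())
    words = set((doc["title"] + " " + doc["body"]).lower().split())
    return len(terms & words)

def build_grounded_prompt(question: str, top_k: int = 2) -> str:
    """Rank internal docs against the question and inline the best matches."""
    ranked = sorted(INTERNAL_DOCS, key=lambda d: score(question, d), reverse=True)
    context = "\n\n".join(f"## {d['title']}\n{d['body']}" for d in ranked[:top_k])
    return (
        "Answer using ONLY the internal documentation below. "
        "If it does not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What are our token lifetimes?"))
```

The key design choice is the instruction to refuse when the retrieved context doesn’t contain the answer; that constraint is what turns a confident guess into a grounded response.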
The conversation with Ram highlighted how Stack Overflow’s structured Q&A data provides ideal fine-tuning material for next-generation AI models by offering the kind of community-driven, verified knowledge that can bridge this context gap. Organizations that invest in robust internal knowledge systems create a foundation for AI tools that developers can actually trust.
To learn more about how Stack Internal can help you build smarter, more trustworthy AI systems, check out this webinar.
Developers trust AI less than ever
Stack Overflow’s 2025 Developer Survey revealed a striking paradox: more developers actively distrust the accuracy of AI tools (46%) than trust it (33%), while only a tiny fraction (3%) report “highly trusting” the output.
This trust deficit has real consequences for adoption and productivity. The number-one frustration, cited by 66% of developers, is dealing with “AI solutions that are almost right, but not quite,” which often leads directly to the second-biggest frustration: “Debugging AI-generated code.” Many developers find themselves wasting time reviewing and fixing AI-generated code rather than experiencing the promised productivity gains.
Experienced developers are the most skeptical of AI, with the lowest “highly trust” rate (2.6%) and the highest “highly distrust” rate (20%). As Ram Rai of JPMorgan Chase acknowledged, “Many developers distrust AI accuracy—that’s the current reality, and there is a struggle with adoption of AI.”
This decline in trust—down from over 70% positive sentiment in 2023 and 2024 to just 60% in 2025—is a red flag. Organizations must address developers’ valid accuracy and reliability concerns before expecting widespread adoption and the realization of actual business value.
Developers turn to Stack Overflow for human-verified, trusted knowledge, with about 35% reporting that their visits to Stack Overflow are a result of AI-related issues at least some of the time. This pattern reveals a crucial insight: when AI tools fail or produce suspicious results, developers seek validation from community-driven platforms where real humans have vetted the answers through collective scrutiny. By “grounding AI in our internal reality using [a] solid community knowledge system like Stack Overflow,” says JPMorgan Chase’s Ram Rai, his organization can move beyond purely probabilistic AI toward systems that incorporate verified, battle-tested knowledge.
As we mentioned above, the structured nature of community Q&A—with voting, peer review, and iterative refinement—provides exactly the kind of high-quality training data that AI models need to generate trustworthy outputs. Organizations that build or access community-driven knowledge layers provide their AI tools the verified context they need to move from “almost right” to consistently reliable.
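As an illustration of why that structure matters, here’s a hedged sketch of how voted, accepted Q&A pairs might be filtered into fine-tuning data. The record fields, the score threshold, and the JSONL chat format are assumptions for the example, not an actual Stack Overflow export schema.

```python
import json

# Hypothetical Q&A records; field names are illustrative, not a real export schema.
qa_records = [
    {"question": "How do I rotate an API key without downtime?",
     "answer": "Issue the new key, deploy it alongside the old one, then revoke the old key.",
     "score": 42, "is_accepted": True},
    {"question": "Why is my cron job running twice?",
     "answer": "Probably duplicate crontab entries.",
     "score": 1, "is_accepted": False},
]

MIN_SCORE = 5  # community voting acts as the quality filter

def to_training_examples(records):
    """Keep only well-voted, accepted answers and emit chat-style training pairs."""
    for r in records:
        if r["is_accepted"] and r["score"] >= MIN_SCORE:
            yield {"messages": [
                {"role": "user", "content": r["question"]},
                {"role": "assistant", "content": r["answer"]},
            ]}

# Write one JSON object per line, a common fine-tuning input format.
with open("finetune.jsonl", "w") as f:
    for example in to_training_examples(qa_records):
        f.write(json.dumps(example) + "\n")
```

In this sketch, only the first record survives the filter: the voting and acceptance signals do the curation work that makes the resulting dataset trustworthy.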
Understanding AI limitations is crucial
[...]