AI can detect vulnerabilities, but who governs risk?
Anthropic recently announced Claude Code Security, an AI system that detects vulnerabilities and proposes fixes. The market reacted immediately: security stocks dipped as investors questioned whether AI might replace traditional AppSec tools. The question on everyone's mind: If AI can write code and secure it, is application security about to become obsolete?

If security only meant scanning code, the answer might be yes. But enterprise security has never been about detection alone.

Organizations are not asking whether AI can find vulnerabilities. They are asking three much harder questions:

1. Is what we are about to ship safe?
2. Has our risk posture changed as environments evolve and dependencies, third-party services, tools, and infrastructure continuously shift?
3. How do we govern a codebase that is increasingly assembled by AI and third-party sources, and for which we are still accountable?

Those questions require a platform answer: Detection surfaces risk, but governance determines what happens next.

GitLab is the orchestration layer built to govern the software lifecycle end to end. It gives teams the enforcement, visibility, and auditability they need to keep pace with the speed of AI-assisted development.

Trusting AI requires governing risk

AI systems are rapidly getting better at identifying vulnerabilities and suggesting fixes. This is a meaningful and welcome advance, but analysis is not accountability.

AI cannot enforce company policy or define acceptable risk on its own. Humans must set the boundaries, policies, and guardrails that agents operate within: establishing separation of duties, ensuring audit trails, and maintaining consistent controls across thousands of repositories and teams. Trust in agents comes not from autonomy alone, but from clearly defined governance set by people.

In an agentic world, where software is increasingly written and modified by autonomous systems, governance becomes more important, not less.
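Guardrails like these are typically expressed as policy-as-code. A minimal sketch, assuming GitLab Ultimate's scan execution policies; the policy name, branch patterns, and scanner selection here are illustrative, so check the schema against your GitLab version:

```yaml
# Illustrative policy-as-code sketch (names and values are hypothetical).
# A scan execution policy like this forces secret detection and SAST to
# run on every pipeline targeting protected branches, regardless of who,
# or what, authored the change.
scan_execution_policy:
  - name: Baseline scans on protected branches
    description: Human-defined guardrail applied to all contributors, including AI agents
    enabled: true
    rules:
      - type: pipeline
        branches:
          - main
          - release/*
    actions:
      - scan: secret_detection
      - scan: sast
```

Because the policy lives in the platform rather than in any one repository or assistant, it applies uniformly: an AI agent cannot opt out of it any more than a human contributor can.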
The more autonomy organizations grant to AI, the stronger the governance must be.

Governance is not friction. It is the foundation that makes AI-assisted development trustworthy at scale.

LLMs see code, but platforms see context

A large language model (LLM) evaluates code in isolation. An enterprise application security platform understands context. The difference matters because risk decisions are contextual:

- Who authored the change?
- How critical is the application to the business?
- How does it interact with infrastructure and dependencies?
- Does the vulnerability exist in code that is actually reachable in production, or is it buried in a dependency that never executes?
- Is it actually exploitable in production, given how the application runs, its APIs, and the environment around it?

Security decisions depend on this context. Without it, detection produces noisy alerts that slow development down rather than reduce risk. With it, organizations can triage quickly and manage risk effectively. Context evolves continuously as software changes, which means governance cannot be a one-time decision.

Static scans can't keep up with dynamic risk

Software risk is dynamic. Dependencies change, environments evolve, and systems interact in ways no single analysis can fully predict. A clean scan at one moment does not guarantee safety at release.

Enterprise security depends on continuous assurance: controls embedded directly into development workflows that evaluate risk as software is built, tested, and deployed.

Detection provides insight. Governance provides trust. Continuous governance is what allows organizations to ship safely at scale.

Governing the agentic future

AI is reshaping how software is created. The question is no longer whether teams will use AI, but how safely they can scale it.

Software today is assembled as much as it is written, from AI-generated code, open-source libraries, and third-party dependencies that span thousands of projects.
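Continuous assurance of this kind is usually wired into the pipeline itself rather than run as a one-time audit. A minimal sketch of a `.gitlab-ci.yml` that includes GitLab's managed security scan templates, so every change, whether human- or AI-authored, is re-evaluated as code and dependencies shift (template paths are the commonly documented ones; verify against your GitLab version):

```yaml
# Minimal sketch: embed security scanning into the normal development
# workflow by including GitLab's managed CI templates. Each pipeline run
# then re-checks source, dependencies, and secrets as the software evolves,
# rather than relying on a single point-in-time scan.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

stages:
  - test
```

The design point is that the controls travel with the workflow: a dependency bumped next month is scanned under the same rules as the code written today.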
Governing what ships across all of those sources is the hardest and most consequential part of application security, and it is the part that no developer-side tool is built to address.

As an intelligent orchestration platform, GitLab is built for exactly this problem. GitLab Ultimate embeds governance, policy enforcement, security scanning, and auditability directly into the workflows where software is planned, built, and shipped, so security teams can govern at the speed of AI.

AI will accelerate development dramatically. The organizations that benefit most will not be those with the smartest assistants alone, but those that build trust through strong governance.

To learn how GitLab helps organizations govern and ship AI-generated code safely, talk to our team today.

Related reading

- Integrating AI with DevOps for enhanced security
- The GitLab AI Security Framework for security leaders
- Improve AI security in GitLab with composite identities
Source: https://about.gitlab.com/blog/ai-can-detect-vulnerabilities-but-who-governs-risk/