New Delhi | Updated: Feb 22, 2026 01:20 PM IST

Anthropic has unveiled a new AI-powered feature that enables users of its popular AI coding assistant to scan their codebases for security vulnerabilities and generate software patches to address them.
The new feature, called Claude Code Security, has been integrated into the web version of Anthropic’s Claude Code tool. It is designed to let teams find and fix security issues that traditional methods often miss, the AI startup said in a blog post on Friday, February 20.
To start, Claude Code Security will be available only to a limited number of paid Claude Enterprise and Team customers, with expedited access for maintainers of open-source repositories.
The launch of the new cybersecurity feature comes as a growing number of non-coders are using AI vibe-coding tools to create their own websites and apps, even as many may lack the expertise to identify security flaws in the AI-generated code they deploy. A recent report by AI security startup Tenzai found that websites created using AI coding tools from OpenAI, Anthropic, Cursor, Replit, and Devin could be tricked into leaking sensitive data or mistakenly sending money to hackers.
“Existing analysis tools help, but only to a point, as they usually look for known patterns […] Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss,” Anthropic said in a blog post.
How it works
Claude Code Security is built into Anthropic’s Claude Code, allowing users to easily review AI-generated code and iterate on fixes within the same environment. The AI-powered tool analyses programming code and software through a multi-stage verification process, with review by a human analyst as the last step.
“Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call,” Anthropic said.
The code review process also filters out false positives and runs additional verification passes over the tool’s own findings. These findings are then shown to users in a unified dashboard, where developers can inspect the AI-suggested patches.
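Anthropic has not published the internals of this pipeline, but the flow described above (filter false positives, run extra verification, then gate every patch behind human approval) can be sketched roughly as follows. All names, thresholds, and function signatures here are hypothetical illustrations, not Anthropic's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Finding:
    description: str
    severity: Severity
    confidence: float        # model's confidence in the finding, 0..1 (hypothetical field)
    suggested_patch: str
    verified: bool = False

def filter_false_positives(findings, min_confidence=0.5):
    """Drop low-confidence findings before surfacing them to developers."""
    return [f for f in findings if f.confidence >= min_confidence]

def verify(finding):
    """Stand-in for an additional verification pass over the tool's own findings."""
    finding.verified = True
    return finding

def apply_patch(finding, human_approved):
    """Nothing is applied without human approval: developers always make the call."""
    if not human_approved:
        return "patch held for review"
    if not finding.verified:
        return "patch blocked: unverified finding"
    return f"applied: {finding.suggested_patch}"
```

In this sketch, a developer reviewing the dashboard would see only findings that survived `filter_false_positives`, and `apply_patch` refuses to act until the human explicitly approves, mirroring the approval gate Anthropic describes.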
The findings will be graded based on their severity as well as Claude’s confidence in its assessment. “We also use Claude to review our own code,” Anthropic said. Earlier this month, Mike Krieger, Anthropic’s chief product officer, revealed that the company’s AI coding tools are used internally by employees to generate effectively 100 per cent of code.
“Claude is being written by Claude. Claude products and Claude code are being entirely written by Claude,” Krieger had said. In terms of testing and performance, Anthropic said that Claude Code Security has been stress-tested on a collection of competitive Capture-the-Flag events. It also partnered with Pacific Northwest National Laboratory to experiment with using AI to defend critical infrastructure.
The company further said that its team of researchers had successfully found over 500 never-before-detected vulnerabilities in production open-source codebases using the Claude Opus 4.6 model. “We’re working through triage and responsible disclosure with maintainers now, and we plan to expand our security work with the open-source community,” it said.
