
The 2026 State of AI-Era AppSec: Key Findings from Our Survey

Payton O'Neal   |   Jan 22, 2026


AI-driven development has moved from emerging trend to operational reality in record time. But how are AppSec teams actually adapting? What’s working, what’s breaking, and where are organizations investing?

To find out, we surveyed more than 250 AppSec stakeholders in December 2025 through an independent third-party provider. Respondents ranged from individual contributors to C-level executives, with 71% serving as decision-makers for application security. Most support mid-to-large development organizations (84% work on AppSec teams of four or more people), and they span industries from technology and financial services to healthcare and manufacturing.

The findings paint a picture of an industry at an inflection point: AI adoption is nearly universal, AppSec tools are abundant, but the foundational functions needed to excel in this new environment—visibility, prioritization, and risk-based measurement—remain elusive for most organizations.

Based on those survey results, we put together a state of AppSec playbook: The 2026 AppSec Leader’s Guide to Survival in the AI Era. 


Keep reading for a rundown of the survey results, or download the full report.

AI Development Is the New Normal

AI adoption has crossed the tipping point. 87% of organizations surveyed have adopted AI coding assistants such as GitHub Copilot, Cursor, or Claude Code to some extent. More than a third (35%) reported widespread or full adoption, meaning AI-assisted development is already embedded into standard workflows, not confined to pilot programs or select teams.

Chart: To what extent has your organization adopted AI coding assistants? No use at all 4%, evaluating 8%, limited teams 18%, moderate adoption 34%, widespread adoption 19%, fully integrated 16%.

The “should we adopt AI coding assistants?” debate is over. The question now is how to secure development environments where AI is a default participant.

Balancing velocity and security is the #1 challenge. When asked about their biggest challenges for 2026, “keeping up with rapid development velocity and AI-generated code” was the most frequently cited significant challenge. With their existing headcount and tooling, AppSec teams are struggling to keep pace with how fast new code, new applications, and new attack surface get created.

The perception of risk is mixed. The AppSec market has debated whether AI-generated code is inherently less secure, and our responses make clear there is no consensus yet. About half of respondents (53%) view AI coding assistants as a moderate or significant security risk, while the rest see them as neutral, low risk, or even a security benefit.

Chart: How much organizational risk do AI coding assistants pose? Moderate risk 35%, neutral 25%, significant risk 18%.

This split suggests the industry hasn’t reached consensus on the organizational risk of AI-assisted development. But the risk isn’t simply that AI writes vulnerable code. It’s that the shift from writing code to reviewing code fundamentally changes what developers know about their applications, and this context gap compounds over time: applications grow more complex while the humans responsible for them understand less about how they actually work.

Testing Tools Are Abundant, Intelligence Is Scarce

AppSec tool adoption isn’t the challenge. 94% of organizations use at least one application security testing tool, with the majority using two or more categories. The most common: Software Composition Analysis (56%), API Protection (51%), and Dynamic Application Security Testing (48%).

Chart: Application security testing tools in use, led by Software Composition Analysis (SCA) at 56%.

But despite the tooling (plus penetration testing, which 84% of respondents run regularly), risks still make their way to production.

And there are more risks than ever. The survey asked which risks teams focus on through automated testing versus manual efforts like pen testing and bug bounties. The familiar suspects are there, but so are some new ones that didn’t exist two years ago: AI/LLM-specific risks like prompt injection and data leakage are already on the radar for 35% of teams.

The results also reveal interesting methodology trends. Authorization and access control issues top both lists—61% through automated testing and 57% through manual. This makes sense: broken access controls consistently rank among the most exploited vulnerabilities, and teams are throwing both approaches at the problem.

But the divergence is telling. API-specific vulnerabilities see significantly more automated coverage (53%) than manual (42%). Business logic flaws were the only category where manual testing outpaced automated.

Chart: Security risks teams focus on through manual vs. automated testing efforts.

Triage consumes half of AppSec’s time. Despite this tool investment, 50% of respondents report their teams spend 40% or more of their time triaging and prioritizing findings—determining what’s real and what matters before any actual remediation work begins.

Chart: Share of time AppSec teams spend triaging and prioritizing findings.

This is a math problem that doesn’t scale. When AI development increases code volume 5-10x but AppSec headcount stays flat, the triage burden becomes unsustainable. Alert fatigue was cited as a moderate to critical challenge by 71% of respondents.
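The scaling problem described above can be sketched as a back-of-the-envelope model. All the numbers below are illustrative assumptions, not survey figures:

```python
# Hypothetical model: triage demand grows with code volume while triage
# capacity (headcount) stays flat. Every parameter here is an assumption.

def triage_backlog_ratio(findings_per_week: int,
                         team_size: int,
                         triage_minutes_per_finding: int = 10,
                         triage_hours_per_person_week: float = 16.0) -> float:
    """Ratio of triage hours generated per week to triage hours available.

    triage_hours_per_person_week defaults to 16, i.e. 40% of a 40-hour
    week, matching the survey's "40% or more of their time" finding.
    """
    demand_hours = findings_per_week * triage_minutes_per_finding / 60
    capacity_hours = team_size * triage_hours_per_person_week
    return demand_hours / capacity_hours

# Today: 500 findings/week, team of 4 -> demand already exceeds capacity.
print(round(triage_backlog_ratio(500, 4), 2))   # 1.3

# 5x code volume, same team: the backlog grows five times faster.
print(round(triage_backlog_ratio(2500, 4), 2))  # 6.51
```

Any ratio above 1.0 means the backlog grows every week, which is why adding tools without adding prioritization intelligence doesn’t close the gap.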

The Accountability Gap Is Growing

Boards are asking harder questions. 73% of respondents report their board or executive leadership has asked about application attack surface or risk posture in the past 12 months. Nearly a quarter (24%) face these questions frequently, with detailed inquiries about security practices and tooling. 

Chart: Has executive leadership asked about your application attack surface or risk posture in the past 12 months? Most common answer: “Yes, at a high level about strategy” (49%).

But teams are reporting activity, not risk. The most commonly reported metrics tell a different story.

The top metrics—scans performed and vulnerabilities found—measure activity. The metrics that would actually answer board questions about risk posture and attack surface coverage sit lower on the list.

Chart: Top security metrics reported for 2026, including scans performed, vulnerabilities found, time to fix, and coverage percentage.

There is a clear gap between what AppSec teams are reporting and what boards are asking about. Boards want to know: “What’s our risk posture? How is it trending? Are our security investments working?” AppSec teams answer: “We fixed 500 vulnerabilities and ran 10,000 scans.” These aren’t the same conversation. The gap stems from not having the underlying intelligence infrastructure to connect security activity to business risk.

Visibility remains a challenge. Only 30% of respondents are “very confident” that they have visibility into 90% or more of their application attack surface.

Chart: Confidence in application attack surface visibility: very confident 30%, mostly confident 44%, somewhat or not confident 26%.

When asked how they discover APIs and application components, 37% use manual spreadsheets and quarterly surveys, and 42% rely on external attack surface management or production monitoring tools.

The fragmentation suggests most organizations don’t have a single, reliable, continuous method for understanding what they’re protecting. At the same time, more and more organizations are being required to report test coverage metrics to executives (41% already do). But if you’re only measuring coverage against an incomplete inventory, those numbers are misleading. You can achieve “90% test coverage” while leaving significant portions of your actual attack surface completely untested, because you didn’t know those applications existed.
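The inventory-gap math is easy to demonstrate. In this sketch, all application names and counts are made up purely for illustration:

```python
# Hypothetical illustration: coverage measured against a known inventory
# looks high while true attack-surface coverage is much lower.

# What the manually maintained spreadsheet says exists.
known_apps = {"billing-api", "auth-service", "web-app", "admin-portal",
              "reports-api", "mobile-bff", "partner-api", "search-api",
              "notifications", "payments-api"}

# 9 of the 10 known apps get tested.
tested_apps = known_apps - {"partner-api"}

# Apps discovered later (shadow APIs, AI-generated services) that never
# appeared in the manual inventory.
unknown_apps = {"legacy-soap-gw", "ml-inference-api", "staging-export",
                "internal-webhooks", "vendor-sync"}

reported_coverage = len(tested_apps) / len(known_apps)
actual_coverage = len(tested_apps) / len(known_apps | unknown_apps)

print(f"reported: {reported_coverage:.0%}")  # reported: 90%
print(f"actual:   {actual_coverage:.0%}")    # actual:   60%
```

The reported number is accurate against the inventory it was measured on; it is the inventory itself that is wrong, which is why discovery has to come before coverage metrics.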

Where Teams Are Investing in 2026

AI/LLM security is a top priority. 77% of organizations are building LLM or AI components into applications: chatbots, RAG systems, AI-powered features. And many are already running multiple applications with AI features in production or have AI integration as core to their business model. To address this expanding attack surface, 82% of organizations have a specific strategy for securing LLM/AI applications: 41% are using dedicated LLM security testing tools, 27% have comprehensive AI security programs with dedicated resources, and 14% are taking a red team approach.

Chart: What is your current security strategy around LLM/AI applications? Dedicated LLM security testing tools 41%, comprehensive AI security program 27%, no specific strategy 18%, red team approach 14%.

Investment is increasing across the board. Organizations are investing in breadth (more coverage), depth (AI-specific security), and maturity (better metrics and training). When asked about 2026 investment priorities, respondents reported moderate or major increases across initiatives in all three areas; the full breakdown is in the report.

The biggest challenges ahead. Respondents also rated the significance of various AppSec challenges for 2026; the issues rated as moderate to critical by the highest percentage of respondents are detailed in the report.

The pattern: speed, complexity, and visibility. Organizations are trying to move faster, with more tools, against a larger and less-understood attack surface.

What This Means for AppSec Leaders

The survey data points to an industry at a crossroads. The old playbook (comprehensive static analysis, manual asset tracking, activity-based metrics) was designed for a world where humans wrote code at human speed. That world is gone.

Three shifts define the path forward:

  • From testing-first to visibility-first. You can’t secure what you don’t know exists. When only 30% of organizations are confident in their attack surface visibility, and AI development creates new applications faster than manual processes can track, automated discovery is no longer optional.
  • From static testing to runtime testing. When half of AppSec time goes to triage, something has to change. Clear insight into what’s real, what’s exploitable, and what poses actual business risk is crucial, and runtime testing is emerging as the leading way to get it. Static tools also miss the new LLM risks and business logic flaws that pose the greatest danger.
  • From activity metrics to risk metrics. Boards are asking about risk posture and ROI—not scan activity. Closing that gap requires connecting security findings to business context—mapping vulnerabilities to application criticality, exposure, and data sensitivity, then tracking risk reduction over time.
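As a rough illustration of the third shift, a risk-weighted metric might look like the sketch below. The fields, weights, and scoring formula are hypothetical, not a StackHawk or industry-standard formula:

```python
# Hypothetical sketch: weight each finding by business context so the
# program can report one trendable risk number instead of scan counts.
from dataclasses import dataclass

@dataclass
class Finding:
    severity: float           # 0-10, e.g. a CVSS base score
    app_criticality: float    # 1.0 = internal tool, 3.0 = revenue-critical
    internet_exposed: bool
    handles_sensitive_data: bool

def risk_score(f: Finding) -> float:
    """Scale raw severity by application criticality and exposure."""
    score = f.severity * f.app_criticality
    if f.internet_exposed:
        score *= 1.5          # illustrative exposure multiplier
    if f.handles_sensitive_data:
        score *= 1.5          # illustrative data-sensitivity multiplier
    return score

def portfolio_risk(findings: list[Finding]) -> float:
    """Total weighted risk across open findings: track this over time."""
    return sum(risk_score(f) for f in findings)

open_findings = [
    Finding(9.1, 3.0, True, True),    # critical bug, exposed, sensitive app
    Finding(5.0, 1.0, False, False),  # medium bug on an internal tool
]
print(portfolio_risk(open_findings))
```

With a score like this, “we reduced portfolio risk 30% this quarter” replaces “we ran 10,000 scans,” which is the conversation boards are actually asking for.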

Get the Full Playbook

This survey reveals the state of AppSec in the AI era. But knowing the challenges is only half the battle.

The AppSec Leader’s Guide to Survival in the AI Era provides a practical framework for building intelligence-first AppSec programs—covering visibility, runtime testing, prioritization, and measurement. It’s the playbook for adapting your program to the realities this survey reveals.
