StackHawk

4 Best Practices for AI Code Security: A Developer’s Guide

Matt Tanner   |   Aug 21, 2025


Coding with AI assistance has become the new normal. Applications are being generated and pushed into production at unprecedented rates, and so are security vulnerabilities. While AI-generated code isn’t inherently insecure, without proper guardrails and testing, the chances of deploying vulnerable code increase dramatically.

When ChatGPT first emerged and was adopted as part of the software engineering toolkit, it helped us understand what certain functions were doing and assisted in refactoring. Most of us were blown away, and many were equally skeptical. From there, AI-infused coding assistants like GitHub Copilot entered the stage, providing faster ways to code through IDE integration.

Now, less than two years later, AI coding agents have made every developer significantly more productive, turning nearly anyone into a capable application builder. But we cannot afford to let this speed come at the expense of security, or to ignore the risk of vibe-coded security vulnerabilities.

This guide provides four concrete strategies with implementation details that development teams can use today to secure their AI-powered workflows.

The Current State of AI Code Security

According to Stack Overflow’s 2024 developer survey, 76% of developers are using or plan to use AI tools this year, with 62% already working with them daily. GitHub’s research shows Copilot users completed routine programming tasks up to 55% faster, with some studies showing even higher productivity gains for experienced developers.

However, here’s the challenge: speed doesn’t automatically equate to security. In fact, even before AI coding, we had a lot of security issues floating around. AI has simply multiplied this reality. As these tools become standard in development workflows, teams need strategies that capture AI’s productivity benefits without compromising security. 

The core technical challenges include:

  • AI suggests code that works but ignores security best practices
  • Dependency recommendations may include vulnerable or malicious packages
  • Common anti-patterns get replicated across codebases at scale
  • Traditional security reviews can’t keep pace with AI development speed

So, how do you bridge this gap? AI-assisted development is advancing too rapidly to rely on ad hoc precautions. Let’s examine four strategies you can implement to address many of these security concerns.

Strategy 1: Configure AI Tools for Security-First Development

Most developers simply install Cursor or Copilot and start coding, but out of the box these agents come with no security guardrails, making insecure code an easy outcome. Most platforms offer the ability to establish universal rules or provide additional context, guiding the agent or LLM to apply security guardrails.

Of course, you could just remind the AI about security in every chat, but that’s inefficient and can be easily forgotten. Using a Rules config or similar method allows you to integrate these requirements into your project configuration at both the individual and team levels by default.

Setting Up Cursor for Security

Cursor supports customizable context through its Rules configuration, allowing you to embed security requirements directly into your AI assistant’s decision-making process. Instead of relying on developers to remember security practices, these rules make secure coding with Cursor automatic.

Here’s what a simple security-focused setup looks like. You can save this as part of your project’s .cursorrules file or add it to your global Cursor settings. Note that these rules guide AI output but don’t enforce security; developers can still override suggestions, so they work best when combined with automated testing. For complete documentation on Cursor Rules, see the official Cursor documentation:

# Security-First Development Rules

## Code Security Standards
- Always use parameterized queries - never string concatenation for database queries
- Implement proper input validation and sanitization for all user inputs
- Use secure authentication and authorization patterns
- Never hardcode secrets, API keys, or passwords in source code
- Implement proper error handling that doesn't expose sensitive information
- Follow OWASP Top 10 guidelines for web application security

## Dependency Management
- Only suggest well-maintained packages with recent updates
- Prefer packages with strong security track records
- Flag any dependencies that haven't been updated in 12+ months
- Always check for known vulnerabilities before suggesting packages

## Code Review Requirements
- Generate TODO comments for any code that needs security review
- Add inline comments explaining security-relevant decisions
- Flag any code that handles sensitive data for manual review
- Suggest security test cases for authentication and authorization logic

## Error Handling
- Implement fail-secure patterns (deny by default)
- Log security events appropriately without exposing sensitive data
- Use structured error responses that don't leak implementation details

Implement Granular, Project-Specific Security Rules

In general, configurations like the above will effectively establish basic security guardrails. However, different types of applications have different security considerations. For instance, a fintech app has different requirements than a content management system. This is where you’ll want to configure your AI code assistant accordingly. Let’s look at two highly regulated industries with much stricter rules than a typical application.

For Healthcare/HIPAA applications, at a minimum you’d also want to include further rules such as:

## Healthcare Security Requirements
- Encrypt all PHI at rest and in transit using AES-256 and TLS 1.2+
- Log all data access for auditing
- Follow the minimum necessary principle for data access  
- Use cryptographically secure random number generation
- Implement session timeouts for sensitive data access
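
To make a rule like the session-timeout requirement concrete, here is a minimal sketch of the fail-secure check such a rule should steer the AI toward. The 15-minute window is an assumption for illustration; set it to whatever your compliance policy requires:

```python
import time

# 15-minute inactivity window for PHI access -- placeholder value,
# adjust to your compliance policy
SESSION_TIMEOUT_SECONDS = 15 * 60

def session_is_valid(last_activity_ts, now=None):
    """Fail secure: a missing or expired session is treated as invalid."""
    if last_activity_ts is None:
        return False
    now = time.time() if now is None else now
    return (now - last_activity_ts) <= SESSION_TIMEOUT_SECONDS
```

Note the deny-by-default shape: any state the function cannot positively verify is rejected, which matches the fail-secure pattern from the base rules above.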

For Financial Services, you’d likely want to specify rules such as:

## Financial Security Requirements
- Implement transaction isolation and consistency checks
- Follow PCI-DSS requirements for payment data with AES-256 encryption
- Use TLS 1.2+ for all payment data transmission
- Use time-based session management with secure token generation
- Build in rate limiting and fraud detection patterns
- Require multi-factor authentication with FIDO2/WebAuthn standards
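
As an illustration of the rate-limiting bullet, here is a minimal fixed-window limiter sketch. The class name, threshold, and window size are my own placeholders; production systems typically use a shared store such as Redis and a sliding-window or token-bucket algorithm instead of in-process state:

```python
import time

class FixedWindowLimiter:
    """Per-key fixed-window rate limiter (in-memory, single-process only)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._windows = {}  # key -> (window_start, request_count)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self._windows.get(key, (now, 0))
        if now - start >= self.window_seconds:  # window expired: reset
            start, count = now, 0
        if count >= self.max_requests:          # over the limit: deny
            self._windows[key] = (start, count)
            return False
        self._windows[key] = (start, count + 1)
        return True
```

A rule file can then require that every payment endpoint pass through a limiter like this before touching transaction logic.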

By implementing rules, you make specific requirements automatic rather than relying on developers to remember them during code reviews. For this exact reason, prefer AI platforms that support this type of configuration.

Strategy 2: Build Security Testing Into Your AI Workflow

Configuration helps, but it’s not enough. You need automated security testing that runs as fast as your AI generates code. Traditional security reviews often occur too late. By the time a security team reviews AI-generated code, it has already been deployed or integrated into larger features.

Integrating StackHawk for AI Development

StackHawk provides dynamic application security testing (DAST) that complements AI-generated code by testing actual application behavior rather than just source code patterns. The key advantage is that DAST catches vulnerabilities that only appear when code is running, which is particularly important for AI-generated applications where logic flows might be unexpected.

You can integrate StackHawk at multiple points in your AI development workflow. During active development, running local scans helps catch issues before they’re committed to your repository. For comprehensive setup instructions, see the StackHawk Getting Started guide:

# Run local scans while building with AI assistance
hawk scan --config stackhawk.yml

# Get immediate feedback before committing
hawk validate config stackhawk.yml

For continuous integration pipelines, you can automate security scanning on every pull request or deployment. Here’s an example GitHub Actions workflow that runs StackHawk scans. For additional CI/CD integration examples, see the StackHawk CI/CD documentation:

# GitHub Actions example
- name: StackHawk Scan
  uses: stackhawk/hawkscan-action@v2
  with:
    apiKey: ${{ secrets.HAWK_API_KEY }}
    configurationFile: stackhawk.yml

To make StackHawk work seamlessly with Cursor, you can configure specific rules that help your AI assistant understand StackHawk’s configuration format and best practices. This prevents common configuration errors and helps the AI suggest appropriate security testing approaches. For complete configuration options, see the StackHawk Configuration documentation:

{
  "name": "StackHawk Main Config",
  "description": "Rules specific to stackhawk.yml",
  "patterns": [
    "stackhawk.yml",
    "*.sarif"
  ],
  "prompts": [
    "Validate stackhawk.yml using 'hawk validate config stackhawk.yml' before committing.",
    "Use 'filePath' for local OpenAPI files, not 'path'.",
    "Reference documentation via prompts, e.g., 'Update stackhawk.yml using @https://docs.stackhawk.com/hawkscan/scan-discovery/'.",
    "Installation should reference documentation on @https://docs.stackhawk.com/download.html",
    "In order to output sarif you need to use 'export SARIF_ARTIFACT=true' environment variable",
    "Whenever you change authentication run the authentication validation using hawk validate auth"
  ]
}

When working with AI tools that can parse structured security data, SARIF (Static Analysis Results Interchange Format) output becomes valuable. This allows your AI assistant to understand and act on specific vulnerability findings:

export SARIF_ARTIFACT=true
hawk scan --sarif-artifact
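
Once you have SARIF output, a small parser can turn it into data an AI assistant (or a plain script) can act on. This sketch relies only on the standard SARIF 2.1.0 result shape (`runs[].results[]` with `ruleId`, `message.text`, and `locations`); the sample fragment is illustrative, and real scanner output carries many more fields:

```python
import json

def extract_findings(sarif_text):
    """Pull (ruleId, message, location) tuples out of a SARIF report."""
    report = json.loads(sarif_text)
    findings = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            rule_id = result.get("ruleId", "unknown")
            message = result.get("message", {}).get("text", "")
            locations = result.get("locations", [])
            uri = ""
            if locations:
                uri = (locations[0].get("physicalLocation", {})
                       .get("artifactLocation", {})
                       .get("uri", ""))
            findings.append((rule_id, message, uri))
    return findings

# Minimal SARIF fragment for illustration only
sample = json.dumps({
    "version": "2.1.0",
    "runs": [{
        "results": [{
            "ruleId": "40012",
            "message": {"text": "Cross Site Scripting (Reflected)"},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "/search"}}}]
        }]
    }]
})

for rule_id, message, uri in extract_findings(sample):
    print(f"{rule_id}: {message} at {uri}")
```

Feeding a digest like this back into a chat session gives the AI concrete findings to fix rather than a vague "make it secure" instruction.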

Using MCP Servers for Real-Time Security

Model Context Protocol (MCP) servers create direct connections between AI tools and external services, eliminating the need to copy security data between tools manually. This integration allows your AI assistant to access live security data and provide contextual recommendations based on your actual application’s security posture.

StackHawk provides an MCP server that brings comprehensive security integration directly into your AI development workflow. The StackHawk MCP server is available in the official MCP servers repository and provides real-time security analytics integration. Setting it up requires installing the MCP server package and configuring your API credentials:

pip install stackhawk-mcp
export STACKHAWK_API_KEY="your-api-key-here"

Once installed, you need to configure Cursor to recognize and connect to the StackHawk MCP server. This configuration tells Cursor how to launch the server and pass your authentication credentials. Note that environment variables must be available to the AI tool process, not just your shell. In CI/CD systems, these need to be set in the job or container environment. For complete MCP setup instructions, see the Model Context Protocol documentation:

{
  "mcpServers": {
    "stackhawk": {
      "command": "python3",
      "args": ["-m", "stackhawk_mcp.server"],
      "env": {
        "STACKHAWK_API_KEY": "${env:STACKHAWK_API_KEY}"
      }
    }
  }
}

With MCP configured, you can interact with StackHawk directly through your AI assistant using natural language prompts. The AI can validate configurations, retrieve vulnerability data, and suggest fixes without you needing to switch between tools:

  • “Validate this StackHawk YAML config for errors”
  • “Show me the latest vulnerabilities for my application”
  • “Generate a security scan configuration for this API”

This integration provides several advantages over manual security workflows:

  • Real-time security feedback as you write code
  • AI suggests fixes based on actual vulnerability data from your security tools
  • Automated security workflows without leaving your development environment
  • Prevention of AI hallucinations through schema validation against real configurations

Manual Security Review Areas

Despite the power of automation, certain security decisions require human judgment and domain expertise that AI cannot provide. These areas represent the intersection of business logic, regulatory requirements, and security architecture where human oversight remains essential:

  • Security architecture decisions: AI can implement patterns, but architects need to choose the right patterns for their specific context
  • Threat modeling: Understanding how attackers might target your specific application and business logic
  • Compliance requirements: Interpreting regulations and mapping them to technical controls
  • Context-specific risks: Understanding your business logic and data sensitivity requirements

Strategy 3: Monitor Production Applications in Real-Time

AI enables you to ship code faster, but that also means potential security issues reach production more quickly. It is best to catch vulnerabilities before they hit production, but catching every single one is tough. This is where monitoring comes in, ensuring that anything that slips through and is exploited is caught before it can do much damage.

Practical Monitoring Implementation

AI-generated applications require monitoring that can detect security issues as quickly as they are introduced. Rather than building complex custom monitoring systems, most teams benefit from implementing structured logging that works with established monitoring platforms.

The key is to log security-relevant events in a structured format that monitoring tools can automatically parse and alert on. Here’s an example of how to implement security-focused logging that integrates well with monitoring platforms:

// Structure logs for security monitoring tools
const securityLogger = {
  logSuspiciousActivity: (event) => {
    console.log(JSON.stringify({
      timestamp: new Date().toISOString(),
      level: 'SECURITY_ALERT',
      event: event.type,
      details: {
        ip: event.ip,
        user_id: event.userId,
        path: event.path,
        user_agent: event.userAgent
      }
    }));
  }
};

// Example usage in middleware
app.use((req, res, next) => {
  // Note: For production use, consider size limits or sampling for large request bodies
  const bodyStr = typeof req.body === 'string' ? req.body : JSON.stringify(req.body || '');
  if (req.path.includes('../') || bodyStr.includes('<script>')) {
    securityLogger.logSuspiciousActivity({
      type: 'PATH_TRAVERSAL_ATTEMPT',
      ip: req.ip,
      path: req.path,
      userId: req.user?.id
    });
  }
  next();
});

Rather than building custom monitoring solutions, most development teams achieve better results by leveraging established monitoring platforms that provide out-of-the-box security monitoring capabilities. These platforms have mature alerting systems and can automatically detect anomalous patterns that might indicate security issues:

  • Datadog: Provides out-of-the-box security monitoring with AI-generated code detection capabilities and application performance monitoring. See Datadog Security Monitoring
  • Splunk: Offers security information and event management (SIEM) with custom alerting rules and comprehensive log analysis. See Splunk Security Solutions
  • New Relic: Includes application performance monitoring with security event tracking and real-time alerting. See New Relic Security
  • Elastic Stack: Combines logging, monitoring, and security analytics in one platform with powerful search and visualization capabilities. See Elastic Security

These tools can automatically detect patterns like unusual API calls, authentication anomalies, and performance spikes that might indicate security issues with AI-generated code. Here’s a practical example of integrating structured security logging with Datadog for server-side monitoring (using the Winston logger with Datadog’s HTTP log intake), which can automatically detect and alert on security patterns. For complete integration instructions, see the Datadog Node.js logging documentation:

// Server-side security logging with Datadog
const winston = require('winston');

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    new winston.transports.Http({
      host: 'http-intake.logs.datadoghq.com',
      path: `/v1/input/${process.env.DD_API_KEY}?ddsource=nodejs&service=my-ai-app`,
      ssl: true
    })
  ]
});

// Log security events
logger.warn('Authentication failure', {
  user_id: userId,
  ip_address: req.ip,
  failure_count: failureCount,
  tags: ['security', 'auth_failure']
});

Incident Response for AI-Generated Code

When security incidents occur involving AI-generated code, the investigation process needs to account for the unique characteristics of AI development. Unlike traditional incidents where you can trace code back to specific developers and design decisions, AI-generated code requires a different investigative approach.

When analyzing incidents, you need to determine the root cause and whether similar patterns exist elsewhere in your codebase. Note that attributing code to AI generation can be challenging if code has been refactored. Commit metadata or AI usage logs may be needed for accurate analysis:

  1. Root Cause Analysis:
    • Was this vulnerability in AI-generated or human-written code?
    • Are there similar patterns elsewhere in the codebase?
    • Does this indicate a problem with AI configuration?

Once you understand the scope and cause of the incident, immediate containment and remediation actions should focus on both fixing the current issue and preventing similar problems in the future:

  2. Immediate Actions:
    • Isolate affected systems
    • Review similar AI-generated patterns across the application
    • Update AI assistant configurations to prevent similar issues
    • Document lessons learned for future AI-assisted development
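
The "review similar AI-generated patterns" step can be partially automated. This sketch scans a source tree for f-string SQL, one of the most common patterns behind this incident class; the heuristic regex is deliberately narrow and my own choice, so tune it for your codebase:

```python
import re
from pathlib import Path

# Heuristic: f-strings that open with a SQL verb. Narrow on purpose --
# extend the pattern for concatenation, format(), and other languages.
RISKY_SQL = re.compile(r'f["\'](?:SELECT|INSERT|UPDATE|DELETE)\b',
                       re.IGNORECASE)

def scan_for_sql_concat(root):
    """Return (file, line_number, line) for each suspicious match."""
    hits = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if RISKY_SQL.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Running a scan like this across the repository after an incident quickly answers the "are there similar patterns elsewhere?" question from the root cause analysis.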

Strategy 4: Maintain and Improve Developer Security Skills

AI is shifting the focus of security knowledge. Key AI-era skills include evaluating AI suggestions, writing security-focused prompts, and recognizing when AI-generated code lacks proper security controls.

Evaluating AI Suggestions for Security Vulnerabilities

One critical skill is learning to evaluate AI-generated database code for common security vulnerabilities. AI often generates functional code that works in development but contains serious security flaws, especially around input validation. This example demonstrates a common pattern where AI might suggest functional but insecure database queries:

# Example: Reviewing AI-generated database code
# BAD - AI might suggest this (vulnerable to SQL injection)
query = f"SELECT * FROM users WHERE id = {user_id}"

# GOOD - Secure parameterized query
query = "SELECT * FROM users WHERE id = %s"
cursor.execute(query, (user_id,))

Note that string interpolation vulnerabilities can occur even when user_id appears safe, particularly when it’s passed from an API without proper validation.
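
A validation layer at the API boundary closes that gap. This sketch (the `parse_user_id` helper is my own, for illustration) rejects anything that is not a positive integer before the value can reach a query:

```python
def parse_user_id(raw):
    """Validate an id from a request before it reaches the database."""
    try:
        user_id = int(raw)
    except (TypeError, ValueError):
        raise ValueError("user id must be an integer")
    if user_id <= 0:
        raise ValueError("user id must be positive")
    return user_id

# An injection payload now fails validation instead of reaching the query:
# parse_user_id("1 OR 1=1")  -> raises ValueError
```

Paired with the parameterized query above, this gives you two independent layers of defense against injection.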

Writing Security-Focused Prompts

Another essential competency is learning how to write security-focused prompts that guide AI toward secure implementations from the start. The specificity of your prompts directly impacts the security of the generated code:

Instead of: “Create a login function”

Use: “Create a secure login function with proper password hashing, rate limiting, and session management following OWASP guidelines”

The more specific and security-conscious your prompts, the more likely AI is to generate code that follows security best practices.
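
As a concrete illustration of what the more specific prompt should elicit, here is a minimal password-hashing sketch using only the Python standard library (PBKDF2 with a per-password salt and constant-time comparison). This covers only one piece of the prompt; rate limiting and session management are omitted, and production systems often prefer argon2 or bcrypt:

```python
import hashlib
import hmac
import os

# OWASP-recommended order of magnitude for PBKDF2-SHA256 iterations
ITERATIONS = 600_000

def hash_password(password):
    """Hash with a random 16-byte salt; returns salt + digest for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + digest

def verify_password(password, stored):
    """Recompute with the stored salt; compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                    ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

A vague "create a login function" prompt rarely produces salting, slow hashing, or constant-time comparison; the security-specific prompt makes each of those requirements explicit.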

Recognizing Security Gaps in AI Code

Developers also need to develop pattern recognition skills to identify when AI-generated code is functional but lacks proper security controls. AI often creates working endpoints that miss critical security considerations. Here’s a typical example of how AI might generate working but insecure API endpoints:

// AI might generate functional but insecure code like this:
app.get('/api/user/:id', (req, res) => {
  const user = database.getUser(req.params.id);
  res.json(user);  // Exposes all user data
});

// Secure version with proper authorization and data filtering:
app.get('/api/user/:id', authenticate, (req, res) => {
  if (req.user.id !== req.params.id && !req.user.isAdmin) {
    return res.status(403).json({error: 'Unauthorized'});
  }
  
  const user = database.getUser(req.params.id);
  const safeUser = {
    id: user.id,
    name: user.name,
    // Conditionally return sensitive fields
    email: req.user.isAdmin ? user.email : undefined
  };
  res.json(safeUser);
});

The secure version adds authentication middleware, authorization checks, and proper data filtering to prevent unauthorized access and data exposure.

Ongoing Security Education

Effective security education programs for AI-assisted teams should include regular touchpoints and practical learning opportunities. Since security is an ongoing concern, especially with the rapid development of AI technology in the developer workflow, regular knowledge updates and security updates are necessary. Going forward, adding the following tasks and check-ins is a good place to start:

  • Monthly security briefings on new threats relevant to AI-assisted development
  • Reviews of security incidents involving AI-generated code (anonymized)
  • Updates on new security features in AI development tools
  • Training on emerging AI security best practices

The goal is to equip every developer with sufficient knowledge to work effectively with AI while identifying and addressing obvious security issues.

Implementation Roadmap

  1. Week 1-2: Configure AI tools with security-first rules
  2. Week 3-4: Integrate automated security testing into your workflow
  3. Month 2: Implement production monitoring for AI-generated applications
  4. Ongoing: Establish regular security training and updates

Getting Started with StackHawk

To start testing applications built using AI-powered workflows with StackHawk, you’ll need an account. You can sign up for a trial account. If you’re using an AI-coding assistant like Cursor or Claude Code, sign up for our $5/month single-user plan, Vibe, to find and fix vulnerabilities 100% in natural language.

Conclusion

The strategies covered here work best together: configure AI tools for security by default, back that up with automated testing, monitor applications in production, and ensure developers can identify issues that automation might miss.

The future belongs to teams that master the balance between AI speed and security rigor. These implementation details provide a concrete path to achieve that balance while maintaining the productivity benefits that make AI coding tools indispensable.

