Implement Data Loss Prevention (DLP) and "GenAI Firewalls" for AI Tools
Implement dedicated Data Loss Prevention (DLP) solutions and "Generative AI Firewalls" to provide technical, real-time enforcement against data exfiltration via AI prompts. Policies and training (Rec 22) are insufficient to mitigate the risk of "Shadow AI"; organizations must deploy technical controls to monitor and block sensitive data from leaving the network.
Deploy a "GenAI Firewall" or equivalent DLP solution that is capable of monitoring and controlling all generative AI traffic. This solution must be configured to perform real-time content inspection, detect and block sensitive data (PII, secrets, proprietary code) in prompts, and enforce policies that block unauthorized AI tools.
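To make the content-inspection requirement concrete, here is a minimal sketch of the core check such a solution performs: scan each outbound prompt against sensitive-data detectors before it leaves the network. The names and regex patterns (`SENSITIVE_PATTERNS`, `inspect_prompt`) are illustrative assumptions, not any vendor's API; production DLP engines layer regexes with ML classifiers and fingerprints of proprietary code.

```python
import re

# Illustrative detection rules; these names and patterns are assumptions,
# not any vendor's ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of every rule the outbound prompt violates."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def enforce(prompt: str) -> str:
    """Block-and-alert on any violation; otherwise allow the request."""
    violations = inspect_prompt(prompt)
    if violations:
        # A real firewall would raise a high-priority alert to the
        # security team here, not just print.
        print(f"BLOCKED: prompt matched rules {violations}")
        return "blocked"
    return "allowed"

# A prompt containing an AWS-style access key is blocked in real time.
print(enforce("Why does this fail? key=AKIA1234567890ABCDEF"))
```

The same inspection logic applies whether the enforcement point is a forward proxy, a browser extension, or an API gateway in front of a sanctioned tool.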
While standardizing on approved tools (Rec 16) and governing data use (Rec 22) are critical policy measures, they are ineffective without technical enforcement. Developers inevitably take the path of least resistance, which may include pasting sensitive data into unauthorized "Shadow AI" tools. A "Generative AI Firewall" is the most reliable, automated control for this problem: it is the "security guardrail" for prompts and network traffic.
This recommendation should be implemented by any organization that:
- Handles any PII, financial data (PCI), or health data (HIPAA).
- Has proprietary source code or intellectual property that is a core business asset.
- Has standardized on an enterprise AI tool (Rec 16) and now needs to enforce that standard by blocking all others.
- Is building out its security guardrails and recognizes the prompt as a new, ungoverned attack surface.
1. Assess Network/DLP Capabilities: Evaluate your existing network firewall and DLP solutions; they may already have "GenAI" capabilities that can be enabled.
2. Evaluate GenAI Firewall Vendors: If your existing tools are insufficient, evaluate dedicated GenAI firewall vendors using the "Security & Privacy" criteria from the AI Tool Evaluation Matrix (Rec 15).
3. Define and Implement Policies: Work with the cross-functional AI CoP (Rec 12) and Legal (Rec 22) to define policies. These should include:
   - Blocklist: A list of all unauthorized AI tools to be blocked at the network level.
   - Allowlist: The approved AI tools and endpoints (e.g., your enterprise GitHub tenant).
   - Content Policies: DLP rules that inspect outbound traffic to the allowlist, looking for patterns that match PII, API keys, or proprietary code markers.
4. Configure Actions: Define actions for policy violations (a minimal sketch of this policy shape follows this list):
   - Block and Alert: For high-severity violations (e.g., PII in a prompt), block the request and send a high-priority alert to the security team.
   - Log: For sanctioned tools, log all interactions to provide an audit trail for compliance.
5. Develop an Incident Response Plan: Create a specific playbook for "AI data leakage" incidents. What is the process when the firewall blocks a user? How is it escalated? This plan is a key part of your AI governance.
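One way to picture the resulting policy set is as declarative data: a blocklist, an allowlist, and per-rule actions, with a small decision function applying them in order. The sketch below uses an assumed schema, placeholder hostnames, and made-up rule names purely for illustration; real GenAI firewalls express these policies in their own configuration languages.

```python
from enum import Enum

class Action(Enum):
    BLOCK_AND_ALERT = "block_and_alert"  # high severity: block + alert security
    LOG = "log"                          # sanctioned traffic: audit trail only

# Hypothetical policy; the schema, hostnames, and rule names are
# placeholders, not any product's configuration format.
POLICY = {
    # Unauthorized "Shadow AI" endpoints, blocked at the network level.
    "blocklist": {"chat.unsanctioned-ai.example.com"},
    # Approved AI tools and endpoints (e.g., your enterprise tenant).
    "allowlist": {"copilot.enterprise.example.com"},
    # DLP rules applied to outbound traffic bound for allowlisted tools.
    "content_rules": {
        "pii_in_prompt": Action.BLOCK_AND_ALERT,
        "api_key_in_prompt": Action.BLOCK_AND_ALERT,
    },
}

def decide(host: str, violations: list[str]) -> Action:
    """Route one request through the policy: default-deny unknown AI
    endpoints, then apply content rules to traffic to approved tools."""
    if host in POLICY["blocklist"] or host not in POLICY["allowlist"]:
        return Action.BLOCK_AND_ALERT
    for rule in violations:
        if POLICY["content_rules"].get(rule) is Action.BLOCK_AND_ALERT:
            return Action.BLOCK_AND_ALERT
    return Action.LOG  # sanctioned tool, no violations: log for compliance

# PII headed to an approved tool still triggers block-and-alert.
print(decide("copilot.enterprise.example.com", ["pii_in_prompt"]))
# Clean traffic to the approved tool is logged, not blocked.
print(decide("copilot.enterprise.example.com", []))
```

Note the default-deny stance: any AI endpoint not explicitly allowlisted is treated as Shadow AI and blocked, which is what makes the standardization policy (Rec 16) enforceable in practice.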
Workflows that implement or support this recommendation.
- How to Prevent Generative AI Data Leakage - Zscaler - https://www.zscaler.com/blogs/product-insights/how-to-prevent-generative-ai-data-leakage
GenAI Firewall solutions can monitor and control all generative AI traffic, detect and block sensitive data in prompts, and enforce policies that block unauthorized AI tools.
Ready to implement this recommendation?
Explore our workflows and guardrails to learn how teams put this recommendation into practice.