Five Intelligence Agencies Agree: Slow Down Your AI Agents

The Five Eyes alliance issued its first joint advisory on agentic AI security, warning that autonomous AI systems introduce novel attack surfaces enterprises are not prepared for.
Five Intelligence Agencies Agree Slow Down Your AI Agents
Table of Contents
    Add a header to begin generating the table of contents

    For the first time, the intelligence agencies of all five Five Eyes nations have issued a coordinated advisory on the same topic — and that topic is the security risk of artificial intelligence agents, the autonomous systems that enterprises are racing to deploy across their operations.

    What the Five Eyes Advisory Warns About Agentic AI’s Specific Security Risks

    The joint advisory, issued by security agencies from the United States, United Kingdom, Canada, Australia, and New Zealand, is specific about what they mean by “agentic AI”: autonomous systems capable of browsing the internet, writing and executing code, sending emails, accessing cloud services, reading and modifying files, and generally taking actions in the world based on instructions they receive. These are not chatbots that answer questions — they are systems that do things.

    The agencies are not arguing that agentic AI should not be used. The advisory’s core recommendation is that organizations should adopt these systems deliberately and cautiously rather than rapidly and at scale. The gap between what enterprises are currently doing — deploying AI agents quickly to gain competitive advantage — and what the advisory recommends is significant.

    The specific security concerns the agencies enumerate reflect the technical realities of how these systems work:

    Prompt injection is the most novel risk. An AI agent that processes external content — emails in an inbox, documents it retrieves from the web, data from connected APIs — can be manipulated by attacker-controlled content in that external environment. A malicious email could contain instructions that the AI agent reads as legitimate commands, causing it to exfiltrate data, send messages to unintended recipients, or take unauthorized actions. Unlike traditional software, which executes defined instructions, AI agents interpret natural language — and natural language instructions from attackers are indistinguishable from authorized instructions if the system has no mechanism to verify their source.

    Over-privileged access is the second core concern. Most AI agent deployments give the agent the same level of access to systems and data that the user deploying it possesses. An AI agent authorized to read emails, draft responses, access CRM data, and execute calendar changes has significant power over sensitive information and external communications. If that agent is compromised or manipulated, the blast radius of unauthorized actions is determined by what the agent was permitted to do — which is often far more than any given task requires.

    Insufficient human oversight is structural. Agentic systems are valuable precisely because they operate autonomously without requiring human approval for each step. That same autonomy makes their mistakes, whether from errors in reasoning or deliberate manipulation, harder to catch before they cause harm.

    The Enterprise Deployment Reality

    The timing of this advisory is not coincidental. Major enterprises have been deploying AI agents across customer service, software development, financial analysis, and business process automation throughout 2025 and into 2026. The pace of deployment has far outrun the development of security frameworks, testing methodologies, and operational controls for managing agentic systems.

    In many cases, the organizations deploying AI agents have not conducted formal security reviews of those systems before production deployment. The same enterprise that would require months of security testing before deploying a new financial application is deploying AI agents that interact with those financial applications with minimal equivalent scrutiny.

    The Five Eyes advisory represents institutional recognition that this deployment velocity has created risk that needs to be explicitly named and addressed. A joint advisory from five major intelligence agencies carries normative weight that a single vendor’s warning would not.

    Prompt Injection: The Attack That Hasn’t Scaled Yet

    The advisory’s specific emphasis on prompt injection deserves attention because the attack class is not yet widely exploited at the scale these systems enable. Current prompt injection attacks are primarily proof-of-concept demonstrations or limited-scope attacks against specific deployed systems.

    As AI agents proliferate and as attacker awareness of prompt injection techniques grows, the attack surface will expand significantly. An AI agent deployed to process customer support emails is a potential entry point for any customer who chooses to embed malicious instructions in their message. An AI agent authorized to browse the web will encounter attacker-controlled web pages designed to manipulate its behavior.

    The challenge is that there is no complete technical defense against prompt injection. It is a consequence of AI systems that cannot reliably distinguish between instructions from authorized sources and instructions embedded in content they process. Mitigations exist — restricting what information agents expose in their prompts, applying human review gates for high-impact actions, limiting agent permissions to what specific tasks require — but no mitigation eliminates the risk entirely.

    Operationalizing the Five Eyes Agentic AI Security Recommendations

    For CISOs evaluating or managing agentic AI deployments, the Five Eyes advisory provides external validation for what careful security practitioners have been saying internally: the current pace of deployment has outrun the maturity of available security controls.

    Specific actions the advisory supports:

    Apply least-privilege access to AI agents. Define what the agent needs to do for each task and limit permissions to exactly that scope. An agent that summarizes internal reports does not need email-sending authority. An agent that drafts responses does not need to send them autonomously.

    Implement human-in-the-loop review for high-impact actions. Autonomous operation is valuable, but specific action categories — sending external communications, modifying financial records, deleting data — should require explicit human approval before the agent executes them.

    Test agents for prompt injection resilience. Include adversarial content in testing scenarios: emails designed to manipulate agent behavior, web pages with embedded instructions, documents containing conflicting directives. Establish baselines for how your deployed agents respond to manipulation attempts.

    Treat agentic AI as a new attack surface requiring its own security program. Security architecture reviews, supply chain assessment of AI model and tool dependencies, and ongoing monitoring of agent behavior belong in enterprise security programs just as network security and endpoint protection do.

    A joint Five Eyes advisory is a regulatory signal as much as a technical one. Governance requirements around agentic AI security are likely to follow.

    Related Posts