HTTP/1.1 Desync Flaw Leaves 24 Million Websites Open to Complete Takeover

Researchers find 24 million sites reliant on HTTP/1.1 in the proxy chain. Request smuggling enables desync attacks that can steal accounts, poison caches, and fully hijack sites.

    PortSwigger researchers warn that more than 24 million websites still rely on the old HTTP/1.1 protocol somewhere inside their proxy chain. Although sites may present modern TLS and HTTP/2 at the edge, requests often downgrade to HTTP/1.1 as they pass through reverse proxies, load balancers or CDN edges. That hidden downgrade creates a serious attack surface: request smuggling and HTTP desync attacks let an attacker splice malicious data into other users’ requests and take full control of affected sites.

    Why Many Sites Still Use HTTP/1.1 And Why That Matters

    When a browser talks to a website, that HTTP request usually hops through several components before reaching the application server. In many deployments a modern client negotiates HTTP/2 with a CDN or edge proxy, but that component then forwards requests to the origin using HTTP/1.1. Major cloud stacks and middleware still default to HTTP/1.1 internally, and some widely used products do not yet support HTTP/2 upstream.

    That internal downgrade is critical because HTTP/1.1 is text-based and permissive. Its request boundaries are weak: requests are concatenated on the TCP/TLS socket without explicit delimiters, and multiple headers can specify length in different ways. These facts create ambiguity that attackers can exploit.
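    The lack of framing is easy to see in a few lines of Python (a toy illustration, not real proxy code; the host name is hypothetical):

```python
# Two HTTP/1.1 requests share one TCP stream back to back. Nothing on the
# wire marks the boundary; each parser must infer it from the headers, so
# two different parsers can disagree about where one request ends.
stream = (
    b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n"
)

# A GET with no body ends at the blank line after the headers, so splitting
# on the header terminator recovers both requests -- but only because these
# requests carry no body-length headers at all.
requests = [r for r in stream.split(b"\r\n\r\n") if r]
print(len(requests))  # 2
```

    As soon as a body-length header appears, the boundary depends on which header each parser honors, which is exactly the ambiguity the next section exploits.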

    How Desync (Request Smuggling) Works And The Practical Risks

    At the core of desync attacks is a mismatch between how a front-end proxy and a back-end server parse the same request. A simple example:

    • A client request can declare its body length with Content-Length (an exact byte count) or Transfer-Encoding: chunked (a series of length-prefixed chunks terminated by a zero-length chunk).
    • If a proxy and origin interpret those headers differently, one may stop reading while the other keeps waiting.
    • An attacker can craft a request that leaves leftover bytes on the connection. Those leftover bytes become the start of the next user’s request.
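    The classic CL.TE variant of this mismatch can be sketched in a few lines of Python. This is an educational illustration of the parsing discrepancy only, not an attack tool; the request bytes are hypothetical:

```python
# One raw HTTP/1.1 request carrying BOTH length headers. A front-end that
# honors Content-Length and a back-end that honors Transfer-Encoding will
# disagree about where the body ends.
RAW = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"G"   # this final byte becomes the prefix of the NEXT request
)

HEADER_END = RAW.index(b"\r\n\r\n") + 4  # first byte of the body

def read_by_content_length(raw: bytes) -> bytes:
    # Front-end view: body is exactly Content-Length (6) bytes.
    return raw[HEADER_END:HEADER_END + 6]

def read_by_chunked(raw: bytes) -> bytes:
    # Back-end view: body ends at the zero-length chunk ("0\r\n\r\n").
    end = raw.index(b"0\r\n\r\n", HEADER_END) + 5
    return raw[HEADER_END:end]

front = read_by_content_length(RAW)  # b"0\r\n\r\nG" -- 6 bytes
back = read_by_chunked(RAW)          # b"0\r\n\r\n"  -- 5 bytes
leftover = front[len(back):]
print(leftover)  # b"G" stays on the connection, prepended to the next request
```

    In a real CL.TE exploit the leftover bytes are a full smuggled request line and headers rather than a single byte, but the mechanism is the same: whatever the back-end did not consume becomes the start of the next user's request.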

    That tiny parser discrepancy lets attackers “smuggle” a malicious request into the back end. The consequences are severe:

    • Users may be randomly logged into other users’ sessions.
    • Site caches can be poisoned with attacker-controlled JavaScript, giving persistent control of pages.
    • Attackers can redirect users, steal cookies, capture credentials, or inject forms that collect payment data.
    • Full account takeover and mass data disclosure are possible.

    PortSwigger’s lead researcher, James Kettle, described the problem bluntly: “HTTP/1.1 has a fatal, highly-exploitable flaw — the boundaries between individual HTTP requests are very weak.” He added, “If we want a secure web, HTTP/1.1 must die.”

    Researchers have demonstrated the risk in the wild. PortSwigger used request smuggling to compromise PayPal twice and retrieved plaintext passwords during private disclosures, which yielded substantial bounty payments. The scale and simplicity of the technique make it attractive: the protocol is old, lenient and implemented by thousands of different parsers, so finding discrepancies is not hard.

    What Enterprises Should Do Now

    PortSwigger’s guidance is clear and practical. Organizations should assume HTTP/1.1 desync is a real risk if any part of their stack uses HTTP/1.1 between components. Key steps:

    • Migrate upstream connections to HTTP/2 where possible. HTTP/2 defines request framing that removes the ambiguous length rules that enable desync.
    • Harden parsers and reject ambiguous requests. Configure proxies and origin servers to validate headers strictly and to drop or log requests that contain conflicting length indicators.
    • Scan and test regularly. Run automated checks for request smuggling and desync patterns. The researcher released an open-source tool, HTTP Request Smuggler v3.0, that detects and automates many advanced desync techniques.
    • Review CDN and edge configurations. Confirm that the entire chain — client to edge to origin — preserves the same protocol or that each link is configured to avoid ambiguity.
    • Monitor caches and session-handling logic. Look for unexpected entries or cross-user responses that indicate a desync is being exploited.
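    The "reject ambiguous requests" step can be sketched as a strict pre-check on incoming headers. This is a minimal illustration of the policy, not the configuration of any particular proxy; the function name and rules are assumptions modeled on RFC 9112's requirements:

```python
# Hypothetical strictness check: flag any request whose length indicators
# are conflicting, duplicated, or malformed, so it can be dropped or logged.
def is_ambiguous(headers: list[tuple[str, str]]) -> bool:
    cl = [v.strip() for k, v in headers if k.lower() == "content-length"]
    te = [v.strip() for k, v in headers if k.lower() == "transfer-encoding"]
    if cl and te:
        return True          # both mechanisms present: classic desync setup
    if len(set(cl)) > 1:
        return True          # duplicate Content-Length headers that disagree
    if any(not v.isdigit() for v in cl):
        return True          # non-numeric length value
    return False

print(is_ambiguous([("Content-Length", "6"),
                    ("Transfer-Encoding", "chunked")]))  # True
print(is_ambiguous([("Content-Length", "42")]))          # False
```

    Real proxies apply many more normalization rules (header folding, whitespace variants, unknown transfer codings), but the principle is the same: refuse to forward any request whose framing two parsers could read differently.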

    PortSwigger’s work also showed that some large vendors and CDNs either do not support upstream HTTP/2 by default or require manual configuration to avoid internal downgrades. Administrators running Nginx, common CDNs, or legacy load balancers should validate their end-to-end protocol handling.

    The Bottom Line

    HTTP desync and request smuggling are not obscure academic problems. They are practical, repeatable, and able to yield complete site takeover. The root cause is the HTTP/1.1 design and the reality that many infrastructure stacks still use it internally. Patching web apps alone will not stop these attacks — the fix is architectural: prefer framed protocols (HTTP/2) upstream, harden parser behavior, and scan the full proxy chain for parsing mismatches.
