Why Application-Level Security Fails at Code Execution
Egress policies, credential brokers, and URL validators operate in the application process. Code execution containers bypass all of them. The only solution is network-level enforcement at the kernel.
The Layer Problem
You built an egress policy that validates every outbound URL. A credential broker that replaces real tokens with opaque handles. A DNS rebinding defense that validates IPs at connection time. A token scrubber that catches leaked credentials at every output boundary.
All of this runs in the application process. The worker that orchestrates agent tool calls. The Python runtime that manages the agent loop.
Now the agent calls the code execution tool. It writes arbitrary Python. That code runs inside a Docker container. Inside that container, the code makes its own HTTP requests. Using its own libraries. Making its own connections.
Your egress policy? Not running in that container. Your credential broker? Not running in that container. Your DNS rebinding defense? Not running in that container. Your token scrubber? Not running in that container.
Application-level security boundaries exist in the application process. Code execution happens in a different process, in a different container, on a different network interface. Every security boundary you built at the application layer is irrelevant inside the execution container.
If the security boundary and the code execution do not share a process, the code execution bypasses the security boundary. This is not a flaw in the implementation. It is a flaw in the architecture.
What a Code Execution Container Can Do Without Network Isolation
A code execution container on a standard Docker network shares that network with every other container in the deployment. Without isolation, code running inside the container can:
Reach the database directly. The database is another container on the same network. The hostname resolves to a local IP. A plain TCP connection from the exec container reaches the database port. No authentication required at the network level because the connection is "internal."
Access the cache layer. Same network, same story. The cache accepts connections from any container on the network. Code inside the exec container can read cached data, modify it, or inject malicious entries.
Call internal APIs. The application server is on the same network. Its internal API endpoints (the ones not exposed to the internet, the ones that assume all callers are trusted) are reachable from the exec container. No firewall between them. No authentication on internal routes.
Exfiltrate data via POST. The container has internet access. Code can POST any data to any server. The egress policy running in the application process cannot see or intercept requests originating from the exec container. The requests bypass the entire application-level security stack.
Tunnel encrypted traffic. An HTTP CONNECT request opens a raw TCP tunnel; once the TLS handshake inside it completes, the proxy (if there is one) cannot inspect the contents. Data exfiltration over an encrypted tunnel is invisible to any layer that does not hold the decryption key.
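The lateral-movement exposure above can be sketched in a few lines. This is illustrative only: the service names ("db", "cache", "app") and ports are hypothetical stand-ins for whatever a real deployment runs on its shared network.

```python
# Sketch: what code inside an exec container on a shared Docker network
# could do. Service names and ports are hypothetical examples.
import socket

def probe(targets, timeout=0.25):
    """Return the subset of (host, port) pairs that accept a TCP connection."""
    reachable = []
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append((host, port))
        except OSError:
            pass  # refused, timed out, or unresolvable: not reachable
    return reachable

# On a shared network, internal services answer these probes. On an
# isolated network, every one of these connections fails in the kernel.
internal_targets = [("db", 5432), ("cache", 6379), ("app", 8080)]
```

Nothing here is exotic: it is a dozen lines of standard library code, which is exactly why "the agent probably won't write this" is not a defense.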
This is the threat model. Not a hypothetical. Any AI agent platform that runs code execution containers on a shared Docker network has this exposure today.
The Solution: Network-Level Isolation at the Kernel
The fix is not another application-level check. The fix is to move enforcement to a layer that container code cannot bypass: the Linux kernel.
HeartBeatAgents runs code execution containers on an isolated Docker network. The isolation is not advisory. It is enforced by iptables rules in the host kernel. These rules block all traffic from the execution network to any other network interface. All traffic. Not "most traffic." Not "traffic that matches certain patterns." All of it.
The iptables rules cannot be modified by code running inside the container. They run in the host kernel. The container does not have access to the host kernel. Even if the container runs as root (it does not). Even if the container has all Linux capabilities (it does not). The network isolation is structural, enforced at a layer that container code cannot see, reach, or modify.
No internet access. The isolated network has no gateway to the internet. Direct connections to any external host fail at the network layer. Not at the application layer. Not with an error message from a proxy. The TCP SYN packet never leaves the host.
No access to other containers. The database, the cache, the application server, and every other container in the deployment are on a different network. The iptables rules block traffic between the networks. The execution container cannot reach them by name or by IP.
No workaround from inside the container. Code running inside the container operates in a network namespace that has exactly one interface: the isolated network. It does not have a route to any other network. It cannot create one. It cannot modify the routing table. It cannot add a network interface. The isolation is at the namespace level, below anything application code can affect.
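Enforcement of this kind might be expressed as iptables rules along the following lines. This is a sketch only: the chain (DOCKER-USER), subnet, proxy address, and port are illustrative assumptions, not HeartBeatAgents' actual configuration.

```python
# Sketch of host-side iptables rules matching the isolation described
# above. Chain, subnet, proxy address, and port are illustrative.
EXEC_SUBNET = "172.30.0.0/24"   # the isolated execution network
PROXY_IP = "172.30.0.2"         # the proxy's interface on that network

def isolation_rules() -> list[str]:
    """Build the rule set: drop everything from the exec subnet, then
    one exception for the proxy. Applied in order, each -I lands above
    the previous rule, so the ACCEPT ends up first in the chain."""
    return [
        # Baseline: no packet from the exec subnet reaches any other network.
        f"iptables -I DOCKER-USER -s {EXEC_SUBNET} -j DROP",
        # The one exception: exec containers may reach the proxy's port.
        f"iptables -I DOCKER-USER -s {EXEC_SUBNET} -d {PROXY_IP} -p tcp --dport 8080 -j ACCEPT",
    ]
```

The shape matters more than the exact rules: a default DROP with a single narrow ACCEPT, applied on the host, where container code cannot reach it.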
The Proxy: The Only Path Out
Complete network isolation would make code execution useless. Agents need to install packages. They need to download libraries. They need to read from public URLs.
The solution is a dual-homed proxy: a process that has an interface on both the isolated execution network and the main network. It is the single point of egress for all execution container traffic. Every outbound connection from every execution container passes through this proxy.
The proxy enforces its own access control at the request level:
Private IPs are denied for all methods. Any request targeting a private, loopback, link-local, or reserved IP address is blocked. This is the same IP range protection as the application-level DNS rebinding defense, but enforced at the network layer for execution containers.
Encrypted tunneling is restricted to package repositories. HTTP CONNECT tunneling (which pip, npm, cargo, and other package managers use to fetch over HTTPS) is allowed only to known package repository domains. This lets agents install dependencies during code execution. It does not let agents establish encrypted tunnels to arbitrary hosts. The restriction matters because once an encrypted tunnel is established, the proxy cannot inspect the traffic. Unrestricted tunneling is equivalent to unrestricted internet access for data exfiltration.
Read operations are allowed to any public host. GET, HEAD, and OPTIONS requests to public IPs are permitted. Agents can fetch data, read documentation, download files. This preserves code execution utility while restricting what the code can do with what it reads.
Write operations are denied entirely. POST, PUT, PATCH, and DELETE requests are blocked for all destinations. No exceptions.
This is stricter than the application-level egress policy, which allows writes to approved integration hosts. The rationale: code execution containers should compute, not communicate. If an agent needs to write to an external service, it should use the platform's integration tools, which carry the full security stack: credential broker, egress policy, DNS rebinding defense, and token scrubbing. Code that tries to POST directly from an execution container is either a mistake or an attack. Either way, blocking it is correct.
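The request-level policy above can be condensed into a single decision function. This is a sketch under stated assumptions: the allowlisted domains, the listening semantics, and the method sets are illustrative, not the platform's actual configuration.

```python
# Sketch of the proxy's per-request policy as described above.
# ALLOWED_CONNECT_HOSTS entries are example repository domains.
import ipaddress

READ_METHODS = {"GET", "HEAD", "OPTIONS"}
ALLOWED_CONNECT_HOSTS = {"pypi.org", "files.pythonhosted.org", "registry.npmjs.org"}

def is_private(ip: str) -> bool:
    """True for private, loopback, link-local, and reserved addresses."""
    addr = ipaddress.ip_address(ip)
    return addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved

def allow(method: str, host: str, resolved_ip: str) -> bool:
    if is_private(resolved_ip):
        return False                          # private ranges: denied for all methods
    if method == "CONNECT":
        return host in ALLOWED_CONNECT_HOSTS  # tunnels only to package repositories
    return method in READ_METHODS             # reads allowed; POST/PUT/PATCH/DELETE denied
```

Note the ordering: the IP check runs first, so even an allowlisted repository name that resolves to a private address is denied, and the write denial needs no destination list at all.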
Why the Proxy Restriction on Encrypted Tunneling Matters
HTTP CONNECT creates a TCP tunnel through the proxy. Once the TLS handshake completes, the proxy sees only encrypted bytes. It cannot inspect the payload. It cannot determine whether the tunnel carries a legitimate package download or a data exfiltration payload.
Without restriction, CONNECT enables:
Encrypted data exfiltration. Code POSTs sensitive data to an attacker's server over a CONNECT tunnel. The proxy cannot see the POST because it is inside the encrypted stream. To the proxy, it looks like any other HTTPS connection.
Reverse shells. Code establishes a CONNECT tunnel to an attacker's server and upgrades it to an interactive shell. The attacker has a shell inside your execution environment, tunneled through your proxy, invisible to your logging.
Arbitrary protocol tunneling. Any protocol can be tunneled over CONNECT. SSH, database protocols, custom exfiltration protocols. Once the tunnel is open, the proxy is just passing bytes.
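It is worth being concrete about how little the proxy sees. Everything it can inspect is the plaintext CONNECT preamble below; after it replies with a 200, the stream is opaque bytes. A minimal sketch of that preamble:

```python
# The only plaintext a proxy sees for a tunneled connection: the CONNECT
# line and its headers. After the proxy answers "200 Connection
# Established", every subsequent byte is opaque to it (TLS ciphertext,
# SSH, or any other protocol the client chooses).
def connect_request(host: str, port: int = 443) -> bytes:
    return (
        f"CONNECT {host}:{port} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "\r\n"
    ).encode("ascii")
```

The host and port in that first line are therefore the only signal available for policy. That is why the restriction has to be on the CONNECT target itself.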
Restricting CONNECT to package repository domains (PyPI, npm, GitHub, and others) allows legitimate package installation while blocking these attack vectors. An attacker cannot use CONNECT to reach their own server because the proxy only allows CONNECT to the declared package repository domains.
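The allowlist check itself deserves care. A naive substring or bare endswith match is a classic bypass: "evilpypi.org" ends with "pypi.org". A sketch of exact-or-subdomain matching (the repository domains are examples):

```python
# Exact-or-subdomain matching for the CONNECT allowlist. The dot in
# the endswith check is what blocks lookalike registrations such as
# "evilpypi.org". Domains listed here are illustrative examples.
REPO_DOMAINS = ("pypi.org", "files.pythonhosted.org", "registry.npmjs.org", "github.com")

def host_allowed(host: str) -> bool:
    host = host.lower().rstrip(".")  # normalize case and trailing dot
    return any(host == d or host.endswith("." + d) for d in REPO_DOMAINS)
```

Whether subdomains should be allowed at all is itself a policy choice; a stricter deployment might require exact matches only.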
Fail-Closed: Proxy Down Means Zero Access
If the proxy process crashes or stops:
1. The execution container attempts to connect to the proxy. Connection refused.
2. The isolated network has no other route to the internet. There is no fallback gateway. There is no "if proxy is down, try direct" logic.
3. The code execution fails with a network error. The agent reports the failure.
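From the container's point of view, that sequence looks like this sketch (proxy hostname and port are illustrative). The important property is what the code does not contain: there is no fallback branch, because there is nowhere for one to go.

```python
# Sketch of the fail-closed path: with the proxy down, the only route
# out is a connection the kernel refuses. No fallback exists.
import socket

def fetch_via_proxy(proxy_host: str, proxy_port: int) -> bytes:
    # Deliberately no "if proxy is down, go direct" branch: on the
    # isolated network there is no direct route to fall back to.
    with socket.create_connection((proxy_host, proxy_port), timeout=1) as s:
        s.sendall(b"GET http://example.com/ HTTP/1.1\r\nHost: example.com\r\n\r\n")
        return s.recv(4096)

# With no proxy listening, create_connection raises ConnectionRefusedError,
# and the code execution surfaces a network error to the agent.
```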
This is fail-closed behavior. A missing proxy does not mean unrestricted access. It means zero access. The network isolation ensures this: without the proxy, the execution container has no path to any other network. Not to the internet. Not to other containers. Not to the host.
Compare this to application-level security that degrades to "allow all" when a component fails. A missing egress policy in an application process might mean no validation occurs. A missing proxy in a network-isolated architecture means no connections succeed. The failure mode is safe by default because the isolation is structural, not procedural.
The Proxy Itself Is Hardened
The proxy is a security-critical component. It is the single point of egress for all code execution traffic. If it is compromised, the attacker controls what leaves the execution environment.
The proxy container runs with:
Read-only root filesystem. The proxy binary and its configuration are read-only. Code that somehow reaches the proxy (unlikely, given network isolation) cannot modify its behavior by writing to its filesystem.
No privilege escalation. The kernel blocks any attempt to gain additional privileges. Setuid binaries are ignored. Capability escalation is impossible.
Minimal capabilities. All Linux capabilities are dropped except the minimum required for the proxy to function: binding to its listening port and running worker processes. It cannot modify network interfaces. It cannot load kernel modules. It cannot mount filesystems.
Ephemeral storage. Logs, cache, and temporary files use size-limited ephemeral storage that is discarded on restart. No persistent state accumulates. A restart returns the proxy to a known-good state.
No caching. The proxy does not cache responses. This prevents cache poisoning attacks where an attacker poisons a cached response that is later served to a different execution container.
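The hardening properties above map onto standard container runtime flags. The sketch below builds a hypothetical `docker run` invocation; the image name, tmpfs size, and the specific capability are illustrative assumptions, not the platform's actual deployment.

```python
# Sketch: container flags matching the hardening properties described
# above. Image name, tmpfs size, and capability choice are illustrative.
def proxy_run_args(image: str = "egress-proxy:latest") -> list[str]:
    return [
        "docker", "run", "--detach",
        "--read-only",                          # read-only root filesystem
        "--security-opt", "no-new-privileges",  # setuid and capability escalation blocked
        "--cap-drop", "ALL",                    # start from zero capabilities
        "--cap-add", "NET_BIND_SERVICE",        # needed only if listening below port 1024
        "--tmpfs", "/tmp:size=64m",             # ephemeral, size-limited scratch space
        image,
    ]
```

The pattern is deny-by-default again, applied to the container runtime instead of the network: drop everything, then add back the minimum the process demonstrably needs.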
The Architecture in Full
Here is the complete picture of what happens when an agent executes code that tries to make a network request:
Code calls an HTTP library. The library attempts to connect to the destination. The operating system routes the connection through the isolated network. The only reachable host on the isolated network (other than the container itself) is the proxy.
Proxy evaluates the request. Is the destination a private IP? Blocked. Is it a CONNECT to a non-package-repository host? Blocked. Is it a POST/PUT/PATCH/DELETE? Blocked. Is it a GET to a public host? Allowed.
If allowed, the proxy forwards the request. The proxy has an interface on the main network. It makes the outbound connection on behalf of the execution container. The response flows back through the proxy to the container.
If denied, the proxy returns an error. The code receives an HTTP error response. The agent sees the error in the tool result and can reason about it or retry with a different approach.
At no point does the execution container have direct internet access. At no point can it reach other containers on the deployment network. The proxy is the only path out, and the proxy enforces the rules.
Application-Level vs. Network-Level: Different Threats, Same Architecture
HeartBeatAgents maintains both application-level and network-level egress controls. They are not redundant. They operate in different places and counter different threats:
Application-level (egress policy, credential broker, DNS defense): Protects the agent's integration tool calls. These are HTTP requests made by the platform on behalf of the agent, using OAuth tokens, through the platform's HTTP client. The threats are prompt injection, token exposure, and SSRF. The defenses operate in the worker process.
Network-level (isolated network, proxy, iptables): Protects against arbitrary code running in execution containers. These are HTTP requests made by user-authored or LLM-generated code, using whatever libraries the code imports, through whatever connections the code opens. The threats are infrastructure access, data exfiltration, and lateral movement. The defenses operate in the kernel.
Neither layer alone is sufficient. Application-level controls do not reach into execution containers. Network-level controls do not understand application semantics like OAuth tokens and integration bindings. Both layers, together, close the gap.
This is defense in depth applied to a real architectural boundary: the process boundary between the platform and its code execution environment. The boundary exists because code execution is inherently untrusted. The defense exists at the layer where the boundary is enforced.