Configuration

outcalld is configured through command-line flags. There is no config file on the daemon side — the only persistent state is the rule YAML in the configured --rules-dir.

Daemon flags

| Flag | Default | Purpose |
| --- | --- | --- |
| `--socket <path>` | `/run/outcall/host.sock` | Operator API Unix socket (host CLI + UI). |
| `--bridge <name>` | `outcall0` | Linux bridge interface name. Created if missing. |
| `--rules-dir <path>` | `/etc/outcall/rules.d` | Directory of YAML rule files. |
| `--dns-listen <ip>` | `0.0.0.0` | DNS filter bind address (IP only). |
| `--dns-port <port>` | `53` | DNS filter bind port. |
| `--dns-upstream <list>` | from `/etc/resolv.conf` | Comma-separated upstream resolvers (`IP[:port]`). |
| `--proxy-addr <host:port>` | `0.0.0.0:8080` | HTTP proxy bind address. |
| `--no-proxy` | off | Disable the HTTP proxy entirely. |
| `--agent-socket-host-path <path>` | `/run/outcall/agent.sock` | Agent API Unix socket (one shared socket per host). |
| `--shim-host-path <path>` | `/usr/local/bin/outcall-agent` | Path to the `outcall-agent` shim binary; bind-mounted into agent containers. |
| `--agent-timeout-secs <n>` | `5` | Server-side rule-evaluation timeout for agent permission checks. |
| `--agent-perm-rate <count/seconds>` | `100/10` | Sliding-window rate limit for permission checks per container. |
| `--agent-rule-rate <count/seconds>` | `10/60` | Sliding-window rate limit for rule submissions per container. |
| `--subnet-block <cidr>` | `10.200.0.0/16` | RFC 1918 supernet for per-network `/24` allocation. |

If port 8080 is already bound on the host, pass --no-proxy or pick a free port with --proxy-addr 0.0.0.0:18080. Containers then need their HTTP_PROXY environment variables updated to match.
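
Putting it together, a sketch of an invocation that applies the workaround above (paths, ports, and resolver IPs are illustrative):

outcalld \
  --rules-dir /etc/outcall/rules.d \
  --proxy-addr 0.0.0.0:18080 \
  --dns-upstream 1.1.1.1,9.9.9.9:53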

Optional: TLS interception (S011)

The HTTP proxy can be put into a per-rule TLS-interception mode for L7 matching on encrypted traffic. Interception is enabled only when both --ca-cert and --ca-key are present and readable:

| Flag | Default | Purpose |
| --- | --- | --- |
| `--ca-cert <path>` | unset | PEM-encoded root CA certificate the proxy uses to sign per-host leaf certs. |
| `--ca-key <path>` | unset | PEM-encoded root CA private key. Daemon refuses to start if permissions are broader than `0600`. |
| `--intercept-leaf-ttl-secs <n>` | `86400` | Validity window of generated leaf certificates. |
| `--intercept-body-cap-bytes <n>` | `1048576` | Maximum bytes the proxy will buffer for `http.body` matching. |

When neither --ca-cert nor --ca-key is supplied, interception is disabled and any rule with egress.mode: intercept is rejected at reload time. See Writing rules — TLS interception and S011 for the full spec.
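
A minimal startup sketch for interception mode, assuming an existing CA at the illustrative paths /etc/outcall/ca.pem and /etc/outcall/ca.key:

# Key must be 0600 or tighter, or the daemon refuses to start
chmod 0600 /etc/outcall/ca.key
outcalld \
  --ca-cert /etc/outcall/ca.pem \
  --ca-key /etc/outcall/ca.key \
  --intercept-leaf-ttl-secs 3600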

Capability requirements

| Requirement | Why |
| --- | --- |
| `NET_ADMIN` capability | Manage the bridge, configure interfaces, install nftables rules. |
| `--network host` | The daemon must operate in the host network namespace. |
| `/var/run/docker.sock` mount | Manage Docker networks; resolve PIDs to containers. |

The daemon does not require SYS_ADMIN for current code paths (verified against application/outcalld/src/bridge.rs), but some kernels are stricter about netlink operations and may require it. Add SYS_ADMIN only if the daemon fails to bring up the bridge with EPERM.
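
If you run the daemon itself as a container, a sketch of granting the three requirements above (the image name and the rules-dir mount are assumptions, not part of this guide's contract):

docker run -d --name outcalld \
  --network host \
  --cap-add NET_ADMIN \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/outcall/rules.d:/etc/outcall/rules.d \
  example/outcalld:latest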

Logging

The daemon emits structured logs via tracing-subscriber to stderr. The log level is controlled by the RUST_LOG environment variable, not a flag:

RUST_LOG=info outcalld                       # default
RUST_LOG=outcalld=debug,hyper=warn outcalld  # per-target
RUST_LOG=trace outcalld                      # everything

Each subsystem logs under a stable target name (bridge, network, rule_engine, dns, proxy, agent_api, host_api, docker_manager). Filter by target when shipping to your log backend.
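
For example, to turn up only the packet-path subsystems while keeping everything else quiet (an illustrative filter built from the targets above):

RUST_LOG=dns=debug,proxy=debug,rule_engine=debug,warn outcalld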

State and persistence

| Path | What lives there |
| --- | --- |
| `--rules-dir` (default `/etc/outcall/rules.d/`) | Rule YAML, edited by operators. |
| `/var/lib/outcall/` | Persisted state (rule requests, dynamic rules). |
| `/run/outcall/host.sock` | Operator socket (recreated on each daemon start). |
| `/run/outcall/agent.sock` | Agent socket (recreated on each daemon start). |

Networks and containers outlive the daemon: if outcalld exits, the bridge and its nftables table are torn down, but Docker networks remain. When the daemon restarts, it re-attaches and re-applies the ruleset. During the gap, traffic on the bridge is unfiltered — design your deploys around this.
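
One way to keep that gap short is a supervisor that restarts the daemon immediately. A minimal systemd unit sketch, assuming the binary lives at /usr/local/bin/outcalld:

[Unit]
Description=outcall egress filtering daemon
After=network-online.target docker.service
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/outcalld
Restart=always
RestartSec=1
Environment=RUST_LOG=info

[Install]
WantedBy=multi-user.target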

Daemon lifecycle

outcalld initialises in dependency order:

1. Bridge          create / attach + base nftables
2. Rule engine     load YAML, compile CEL conditions
3. Network         create default network, assign gateway IP
4. DNS + Proxy     bind on gateway IP (parallel)
5. Docker manager  ready to attach containers
6. Host API        operator socket goes live
7. Agent API       agent socket goes live

If any P1 step fails, startup aborts. The full sequence is documented in the spec index.
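
Because the Host API comes up sixth, a responding operator socket implies steps 1 through 6 completed. A shell sketch for blocking until the daemon is up (add your own timeout):

until curl -sf --unix-socket /run/outcall/host.sock \
    http://localhost/api/v1/bridge >/dev/null; do
  sleep 0.2
done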

Hot reload

Rule changes are picked up by POSTing to the host API:

curl --unix-socket /run/outcall/host.sock \
  -X POST http://localhost/api/v1/rules/reload | jq .

The daemon validates the new ruleset, then atomically swaps it in. Failed validation keeps the old set active and returns the error in the response.
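
A typical edit-and-reload loop looks like the following; the assumption that a rejected ruleset surfaces as a non-2xx status (so curl -f exits non-zero) is ours, and the rule filename is hypothetical:

cp allow-github.yaml /etc/outcall/rules.d/
curl -sf --unix-socket /run/outcall/host.sock \
    -X POST http://localhost/api/v1/rules/reload \
  || echo "reload rejected; previous ruleset still active"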

Health and readiness

There is no dedicated /api/v1/health or /api/v1/ready endpoint today. For now, supervisors can probe /api/v1/bridge (bridge_status) — a healthy daemon will return JSON with up: true and nftables: active. A proper liveness/readiness pair is on the roadmap.
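
A probe sketch built on that endpoint, assuming bridge_status exposes up as a boolean and nftables as the string "active" (verify the exact field shapes against your daemon version):

curl -sf --unix-socket /run/outcall/host.sock \
    http://localhost/api/v1/bridge \
  | jq -e '.up == true and .nftables == "active"' >/dev/null \
  && echo healthy || echo unhealthy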
