
Architecture Overview

SCC is a provider-neutral governed runtime for AI coding agents. It separates shared infrastructure from provider-specific adapters, so the same governance, safety, and runtime surfaces work regardless of which agent you use.

(Diagram: SCC Architecture Overview)
  1. Provider-neutral core: Safety, egress, audit, and config are shared. Provider-specific surfaces (auth, settings format, container image) live in adapters.
  2. Security by default: Container isolation, fail-closed safety engine, topology-enforced web egress.
  3. Hierarchical config: Org → Team → Project inheritance with immutable security blocks.
  4. Governance: Centralized policy with delegated customization.
  5. Offline capable: Cached configs for disconnected operation.
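The Org → Team → Project inheritance with immutable security blocks can be sketched as a deep merge in which a child layer may override anything except the security block. This is an illustrative sketch; the function and key names (`merge_configs`, `"security"`) are assumptions, not SCC's actual API:

```python
# Sketch of Org -> Team -> Project config inheritance.
# Names (merge_configs, "security") are illustrative, not SCC's real API.

def merge_configs(parent: dict, child: dict) -> dict:
    """Child overrides parent, except the immutable 'security' block."""
    merged = dict(parent)
    for key, value in child.items():
        if key == "security":
            continue  # security blocks are inherited, never overridden
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

org = {"security": {"egress": "enforced"}, "model": "default"}
team = {"model": "fast", "security": {"egress": "open"}}  # override attempt ignored
effective = merge_configs(org, team)
```

Note how the team's attempt to loosen egress is silently discarded: the security block always comes from the layer above.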

SCC’s core is provider-agnostic. The ProviderRuntimeSpec registry maps each provider to its runtime constants:

| Provider | Image | Config Dir | Settings Path | Data Volume |
|---|---|---|---|---|
| Claude Code | scc-agent-claude | .claude | .claude/settings.json (home-scoped) | docker-claude-sandbox-data |
| Codex | scc-agent-codex | .codex | .codex/config.toml (workspace-scoped) | docker-codex-sandbox-data |
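A registry like this is naturally expressed as a frozen dataclass keyed by provider ID. The field names below are assumptions inferred from the table, not SCC's exact definitions:

```python
# Illustrative shape of the ProviderRuntimeSpec registry; field names
# are assumptions based on the table above, not SCC's exact code.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderRuntimeSpec:
    image: str
    config_dir: str
    settings_path: str
    data_volume: str
    settings_scope: str  # "home" or "workspace"

PROVIDER_RUNTIME_SPECS = {
    "claude": ProviderRuntimeSpec(
        image="scc-agent-claude",
        config_dir=".claude",
        settings_path=".claude/settings.json",
        data_volume="docker-claude-sandbox-data",
        settings_scope="home",
    ),
    "codex": ProviderRuntimeSpec(
        image="scc-agent-codex",
        config_dir=".codex",
        settings_path=".codex/config.toml",
        data_volume="docker-codex-sandbox-data",
        settings_scope="workspace",
    ),
}
```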

Organizations can allow one provider, the other, or both. Teams keep the same governance model, safety engine, network policy, and audit surfaces either way. Developers then choose the provider that fits the task, or keep SCC in ask mode and decide session by session.

Each provider implements the AgentProvider protocol:

  • Capability metadata — what the provider supports
  • Auth check — verify credential readiness (auth_check())
  • Launch spec — provider-specific container configuration
  • Bootstrap auth — trigger browser sign-in when credentials are missing
  • Settings rendering — produce provider-native config files (rendered_bytes, not shared dicts)
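The five surfaces above suggest a protocol along these lines. Method signatures are hedged guesses; only `auth_check` and `rendered_bytes` are named in the text, and `StubProvider` is a hypothetical adapter for illustration:

```python
# Sketch of the AgentProvider protocol surface described above;
# signatures are assumptions, only auth_check comes from the text.
import json
from typing import Protocol, runtime_checkable

@runtime_checkable
class AgentProvider(Protocol):
    def capabilities(self) -> dict: ...
    def auth_check(self) -> bool: ...
    def launch_spec(self, workspace: str) -> dict: ...
    def bootstrap_auth(self) -> None: ...
    def render_settings(self, config: dict) -> bytes: ...

class StubProvider:
    """Toy adapter showing the shape; not a real provider."""
    def capabilities(self) -> dict:
        return {"web_egress": True}
    def auth_check(self) -> bool:
        return False  # pretend credentials are missing
    def launch_spec(self, workspace: str) -> dict:
        return {"image": "scc-agent-stub", "workdir": workspace}
    def bootstrap_auth(self) -> None:
        pass  # would open a browser sign-in flow
    def render_settings(self, config: dict) -> bytes:
        return json.dumps(config).encode()  # rendered_bytes, not shared dicts
```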

The AgentRunner protocol handles settings serialization in provider-native format — Claude uses JSON, Codex uses TOML. Renderers produce fragment dicts for caller-owned merge; they never write shared config files directly.
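The caller-owned merge pattern can be sketched as follows: renderers return fragment dicts, the caller merges them, and only then serializes to the provider's native format (JSON shown here; Codex would serialize to TOML). All function names are illustrative:

```python
# Sketch of the caller-owned merge pattern: renderers return fragment
# dicts; the caller merges and serializes. Names are illustrative.
import json

def render_model_fragment() -> dict:
    return {"model": "default"}

def render_safety_fragment() -> dict:
    return {"hooks": {"pre_tool": "scc-safety-eval"}}

fragments = [render_model_fragment(), render_safety_fragment()]
merged: dict = {}
for frag in fragments:
    merged.update(frag)  # caller owns the merge; renderers never write files

rendered_bytes = json.dumps(merged, indent=2).encode()
```

Keeping serialization at the edge means a renderer never needs to know whether its output lands in JSON or TOML.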

Shared core (every provider benefits):

  • Safety engine (shell tokenizer, git rules, network tool rules)
  • Web egress topology (internal Docker network + Squid proxy sidecar)
  • Durable audit sink (JSONL for launch and safety events)
  • Bundle resolver and renderer pipeline
  • Preflight readiness checks (image + auth)
  • Session management and workspace context

Provider-specific adapters:

  • Container image and Dockerfile
  • Auth flow (browser sign-in URL, callback mechanism)
  • Settings serialization format
  • Config directory and scoping (home vs workspace)
  • Credential volume and persistence
  • Agent process argv

Adding a provider requires:

  1. One ProviderRuntimeSpec entry in the registry
  2. One AgentProvider adapter implementation
  3. One AgentRunner for settings rendering
  4. One Dockerfile in images/scc-agent-<provider>/

The core, safety engine, egress, audit, and governance are untouched.

SCC owns a layered image hierarchy:

| Image | Purpose |
|---|---|
| scc-base | Safety wrappers, standalone safety evaluator, shared tooling |
| scc-agent-claude | Claude Code agent (extends scc-base) |
| scc-agent-codex | Codex agent (extends scc-base) |
| scc-egress-proxy | Squid proxy sidecar for web-egress-enforced policy |

Why SCC owns images: The safety evaluator and shell wrappers must be inside the container to intercept commands. SCC cannot rely on the agent’s own plugin system for hard enforcement — wrappers are defense-in-depth that work even if the agent ignores hooks.

First-run build: SCC auto-builds the provider image from bundled Dockerfiles on first start. Manual build:

```shell
docker build -t scc-agent-claude:latest images/scc-agent-claude/
docker build -t scc-agent-codex:latest images/scc-agent-codex/
```

SCC uses a portable OCI runtime path that works with any Docker-compatible engine:

| Runtime | Status |
|---|---|
| Docker Engine | ✅ Supported |
| OrbStack | ✅ Supported |
| Colima | ✅ Supported |
| Docker Desktop | ✅ Supported |
| Podman | 🔄 Planned (not fully validated) |

SCC auto-detects the runtime via docker info. The OciSandboxRuntime adapter uses standard Docker CLI commands (docker create, docker start, docker exec) — no Docker Desktop-specific APIs.

Each provider gets a persistent named volume for credential and data persistence. Container names are deterministic, derived from the workspace path and provider ID — running Claude and Codex in the same workspace produces separate, identity-isolated containers.
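One way to derive deterministic, identity-isolated names is to hash the workspace path and prefix the provider ID. The hashing scheme below is an assumption for illustration, not SCC's actual naming code:

```python
# Hypothetical derivation of deterministic container names from
# workspace path + provider ID (hash scheme is an assumption).
import hashlib

def container_name(workspace: str, provider: str) -> str:
    digest = hashlib.sha256(workspace.encode()).hexdigest()[:12]
    return f"scc-{provider}-{digest}"

a = container_name("/home/dev/proj", "claude")
b = container_name("/home/dev/proj", "codex")  # same workspace, separate container
```

The same workspace always maps to the same name per provider, so restarting a session reattaches to the right container.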

The built-in safety engine is a three-layer system:

  1. Shell tokenizer (core/shell_tokenizer.py) — parses command strings into individual commands, handling pipes, subshells, and compound operators
  2. Git safety rules (core/git_safety_rules.py) — analyzes git commands for destructive operations (force push, hard reset, branch force delete, etc.)
  3. Network tool rules (core/network_tool_rules.py) — analyzes network commands (curl, wget, ssh, scp, sftp, rsync)

The engine is orchestrated by DefaultSafetyEngine which loads policy from org config (fail-closed: parse failure → default block). Safety verdicts are provider-neutral — both Claude and Codex adapters consume the same engine.
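The fail-closed behavior can be sketched like this: a parse failure or an incomplete policy yields a default-block policy rather than an open one. Function and key names are illustrative, not SCC's actual schema:

```python
# Sketch of fail-closed policy loading: parse failure -> default block.
# Names ("default_verdict", load_policy) are illustrative.
import json

DEFAULT_BLOCK = {"default_verdict": "block"}

def load_policy(raw: str) -> dict:
    try:
        policy = json.loads(raw)
    except json.JSONDecodeError:
        return DEFAULT_BLOCK  # fail closed on parse failure
    if "default_verdict" not in policy:
        return DEFAULT_BLOCK  # fail closed on incomplete policy
    return policy

good = load_policy('{"default_verdict": "allow"}')
bad = load_policy('{broken json')
```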

Runtime wrappers in images/scc-base/wrappers/bin/ intercept commands inside the container. Each wrapper calls the standalone scc_safety_eval package (stdlib-only, no external dependencies) before forwarding to the real binary.

When network_policy is web-egress-enforced:

```
┌──────────────┐  internal-only   ┌───────────────┐   bridge
│    agent     │ ───────────────▶ │   scc-proxy   │ ─────────▶ Internet
│  container   │  scc-egress-{id} │ (Squid 3128)  │  (default)
└──────────────┘                  └───────────────┘
```
  • The agent container is on an internal-only Docker network (no direct external access)
  • The Squid proxy sidecar is dual-homed: internal network + default bridge
  • The proxy enforces an ACL compiled from the team’s allowed destinations
  • Even if the agent ignores HTTP_PROXY env vars, it physically cannot bypass the proxy
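Compiling a team's allowed destinations into a Squid ACL might look like the sketch below. The directive layout is a plausible Squid configuration, but the function and ACL names are assumptions, not SCC's actual output:

```python
# Illustrative compilation of allowed destinations into Squid ACL lines
# (directive layout is a plausible sketch, not SCC's exact output).
def compile_acl(allowed: list[str]) -> str:
    lines = [f"acl allowed_dst dstdomain {domain}" for domain in allowed]
    lines += [
        "http_access allow allowed_dst",
        "http_access deny all",  # default-deny: anything not listed is blocked
    ]
    return "\n".join(lines)

acl = compile_acl([".github.com", ".pypi.org"])
```

Because the deny rule comes last and catches everything, adding a destination is purely additive: there is no way for a team entry to widen access beyond its own list.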

All five launch paths (start command, wizard, worktree create, dashboard start, dashboard resume) use the same three-function preflight sequence:

  1. resolve_launch_provider() — determine which provider to use
  2. collect_launch_readiness() — check image availability and auth status
  3. ensure_launch_ready() — auto-build image or trigger auth bootstrap if needed

This ensures consistent behavior regardless of how you start a session.