Architecture Overview
SCC is a provider-neutral governed runtime for AI coding agents. It separates shared infrastructure from provider-specific adapters, so the same governance, safety, and runtime surfaces work regardless of which agent you use.
System Components
Key Design Principles
- Provider-neutral core: Safety, egress, audit, and config are shared. Provider-specific surfaces (auth, settings format, container image) live in adapters.
- Security by default: Container isolation, fail-closed safety engine, topology-enforced web egress.
- Hierarchical config: Org → Team → Project inheritance with immutable security blocks.
- Governance: Centralized policy with delegated customization.
- Offline capable: Cached configs for disconnected operation.
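The hierarchical-config principle can be sketched in Python. This is a minimal illustration, not SCC's actual API: the function name, key names, and merge strategy are assumptions; only the Org → Team → Project precedence and the immutable security block come from the text.

```python
# Hypothetical sketch: project overrides team, team overrides org,
# but the org-level "security" block is immutable and always wins.
def merge_config(org: dict, team: dict, project: dict) -> dict:
    merged = {**org, **team, **project}       # later layers override earlier ones
    if "security" in org:
        merged["security"] = org["security"]  # lower layers cannot override security
    return merged
```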
Provider-Neutral Architecture
SCC’s core is provider-agnostic. The ProviderRuntimeSpec registry maps each provider to its runtime constants:
| Provider | Image | Config Dir | Settings Path | Data Volume |
|---|---|---|---|---|
| Claude Code | scc-agent-claude | .claude | .claude/settings.json (home-scoped) | docker-claude-sandbox-data |
| Codex | scc-agent-codex | .codex | .codex/config.toml (workspace-scoped) | docker-codex-sandbox-data |
Organizations can allow one provider, the other, or both. Teams keep the same governance model, safety engine, network policy, and audit surfaces either way. Developers then choose the provider that fits the task, or keep SCC in ask mode and decide session by session.
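The runtime-constant table above could be modeled as a small registry. This is a hedged sketch: the field names are inferred from the table columns, not taken from SCC's source.

```python
from dataclasses import dataclass

# Illustrative registry of per-provider runtime constants; field names
# mirror the table columns above and are assumptions about the real spec.
@dataclass(frozen=True)
class ProviderRuntimeSpec:
    image: str
    config_dir: str
    settings_path: str
    settings_scope: str  # "home" or "workspace"
    data_volume: str

PROVIDER_RUNTIME_SPECS = {
    "claude": ProviderRuntimeSpec(
        image="scc-agent-claude",
        config_dir=".claude",
        settings_path=".claude/settings.json",
        settings_scope="home",
        data_volume="docker-claude-sandbox-data",
    ),
    "codex": ProviderRuntimeSpec(
        image="scc-agent-codex",
        config_dir=".codex",
        settings_path=".codex/config.toml",
        settings_scope="workspace",
        data_volume="docker-codex-sandbox-data",
    ),
}
```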
How Providers Plug In
Each provider implements the AgentProvider protocol:
- Capability metadata — what the provider supports
- Auth check — verify credential readiness (auth_check())
- Launch spec — provider-specific container configuration
- Bootstrap auth — trigger browser sign-in when credentials are missing
- Settings rendering — produce provider-native config files (rendered_bytes, not shared dicts)
The AgentRunner protocol handles settings serialization in provider-native format — Claude uses JSON, Codex uses TOML. Renderers produce fragment dicts for caller-owned merge; they never write shared config files directly.
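The protocol surface described above might look like the following Python Protocol. Only auth_check() is named in the text; the other method names and signatures are assumptions for illustration.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class AgentProvider(Protocol):
    """Sketch of the per-provider adapter surface (names are illustrative)."""
    def capabilities(self) -> dict: ...                 # capability metadata
    def auth_check(self) -> bool: ...                   # verify credential readiness
    def launch_spec(self, workspace: str) -> dict: ...  # container configuration
    def bootstrap_auth(self) -> None: ...               # trigger browser sign-in
    def render_settings(self) -> bytes: ...             # provider-native rendered bytes
```

Structural typing keeps the core decoupled: any adapter with these methods satisfies the protocol without inheriting from a shared base class.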
What’s Shared vs Provider-Specific
Shared core (every provider benefits):
- Safety engine (shell tokenizer, git rules, network tool rules)
- Web egress topology (internal Docker network + Squid proxy sidecar)
- Durable audit sink (JSONL for launch and safety events)
- Bundle resolver and renderer pipeline
- Preflight readiness checks (image + auth)
- Session management and workspace context
Provider-specific adapters:
- Container image and Dockerfile
- Auth flow (browser sign-in URL, callback mechanism)
- Settings serialization format
- Config directory and scoping (home vs workspace)
- Credential volume and persistence
- Agent process argv
Adding a New Provider
Adding a provider requires:
- One ProviderRuntimeSpec entry in the registry
- One AgentProvider adapter implementation
- One AgentRunner for settings rendering
- One Dockerfile in images/scc-agent-&lt;provider&gt;/
The core, safety engine, egress, audit, and governance are untouched.
Container Images
SCC owns a layered image hierarchy:
| Image | Purpose |
|---|---|
| scc-base | Safety wrappers, standalone safety evaluator, shared tooling |
| scc-agent-claude | Claude Code agent (extends scc-base) |
| scc-agent-codex | Codex agent (extends scc-base) |
| scc-egress-proxy | Squid proxy sidecar for web-egress-enforced policy |
Why SCC owns images: The safety evaluator and shell wrappers must be inside the container to intercept commands. SCC cannot rely on the agent’s own plugin system for hard enforcement — wrappers are defense-in-depth that work even if the agent ignores hooks.
First-run build: SCC auto-builds the provider image from bundled Dockerfiles on first start. Manual build:
docker build -t scc-agent-claude:latest images/scc-agent-claude/
docker build -t scc-agent-codex:latest images/scc-agent-codex/

OCI Runtime Path
SCC uses a portable OCI runtime path that works with any Docker-compatible engine:
| Runtime | Status |
|---|---|
| Docker Engine | ✅ Supported |
| OrbStack | ✅ Supported |
| Colima | ✅ Supported |
| Docker Desktop | ✅ Supported |
| Podman | 🔄 Planned (not fully validated) |
SCC auto-detects the runtime via docker info. The OciSandboxRuntime adapter uses standard Docker CLI commands (docker create, docker start, docker exec) — no Docker Desktop-specific APIs.
Each provider gets a persistent named volume for credential and data persistence. Container names are deterministic, derived from the workspace path and provider ID — running Claude and Codex in the same workspace produces separate, identity-isolated containers.
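Deterministic naming of this kind can be sketched as a hash of the workspace path plus the provider ID. The scheme below is an assumption for illustration; SCC's actual derivation may differ.

```python
import hashlib
from pathlib import PurePosixPath

def container_name(workspace: str, provider: str) -> str:
    # Hash the normalized workspace path so the same workspace always maps
    # to the same container; including the provider ID keeps Claude and
    # Codex containers identity-isolated within one workspace.
    norm = str(PurePosixPath(workspace))
    digest = hashlib.sha256(norm.encode()).hexdigest()[:12]
    return f"scc-{provider}-{digest}"
```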
Safety Engine Architecture
The built-in safety engine is a three-layer system:
- Shell tokenizer (core/shell_tokenizer.py) — parses command strings into individual commands, handling pipes, subshells, and compound operators
- Git safety rules (core/git_safety_rules.py) — analyzes git commands for destructive operations (force push, hard reset, branch force delete, etc.)
- Network tool rules (core/network_tool_rules.py) — analyzes network commands (curl, wget, ssh, scp, sftp, rsync)
The engine is orchestrated by DefaultSafetyEngine, which loads policy from org config (fail-closed: a parse failure yields a default block). Safety verdicts are provider-neutral — both the Claude and Codex adapters consume the same engine.
Runtime wrappers in images/scc-base/wrappers/bin/ intercept commands inside the container. Each wrapper calls the standalone scc_safety_eval package (stdlib-only, no external dependencies) before forwarding to the real binary.
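As a rough illustration of the git-rules layer, a standalone check for destructive flags might look like the sketch below. The rule set and function name are hypothetical, not SCC's actual core/git_safety_rules.py; the real tokenizer also splits pipes, subshells, and compound operators.

```python
import shlex

# Hypothetical destructive-operation patterns: (subcommand, flag)
DESTRUCTIVE_GIT = {("push", "--force"), ("reset", "--hard"), ("branch", "-D")}

def git_verdict(command: str) -> str:
    """Return 'block' for destructive git invocations, else 'allow'."""
    tokens = shlex.split(command)
    if tokens[:1] != ["git"] or len(tokens) < 2:
        return "allow"
    subcommand, flags = tokens[1], set(tokens[2:])
    if any(subcommand == sub and flag in flags for sub, flag in DESTRUCTIVE_GIT):
        return "block"
    return "allow"
```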
Web Egress Topology
When network_policy is web-egress-enforced:
```
┌──────────────┐   internal-only    ┌───────────────┐    bridge
│    agent     │ ─────────────────▶ │   scc-proxy   │ ──────────▶ Internet
│  container   │  scc-egress-{id}   │ (Squid 3128)  │   (default)
└──────────────┘                    └───────────────┘
```

- The agent container is on an internal-only Docker network (no direct external access)
- The Squid proxy sidecar is dual-homed: internal network + default bridge
- The proxy enforces an ACL compiled from the team’s allowed destinations
- Even if the agent ignores HTTP_PROXY env vars, it physically cannot bypass the proxy
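Compiling a team's allowed destinations into proxy configuration could look like the sketch below. The acl dstdomain and http_access directives are standard Squid syntax, but the function itself is an assumption about SCC's internals, not its actual compiler.

```python
def compile_squid_acl(allowed_domains: list[str]) -> str:
    # One dstdomain ACL line per allowed destination, then default-deny:
    # anything not on the allow-list is refused by the proxy.
    lines = [f"acl allowed_dst dstdomain {domain}" for domain in allowed_domains]
    lines += ["http_access allow allowed_dst", "http_access deny all"]
    return "\n".join(lines)
```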
Launch Preflight
All five launch paths (start command, wizard, worktree create, dashboard start, dashboard resume) use the same three-function preflight sequence:
- resolve_launch_provider() — determine which provider to use
- collect_launch_readiness() — check image availability and auth status
- ensure_launch_ready() — auto-build image or trigger auth bootstrap if needed
This ensures consistent behavior regardless of how you start a session.