Enterprise Deployment Architecture¶
CloudTaser's enterprise deployment model uses the CLI as an orchestrator that bridges the gap between the vault (secret source) and Kubernetes clusters (targets). The vault never needs WAN exposure. The plan file is the only artifact that moves between environments.
The Problem¶
Enterprise environments have constraints that prevent simple vault-to-cluster connectivity:
- Air-gapped networks. The vault is in a secure EU network segment. Target clusters may be in different VPCs, clouds, or even on-premises.
- No direct WAN access. Security teams prohibit exposing the vault endpoint to the internet or to cross-cloud peering.
- Multi-cluster deployments. A single vault serves dozens of Kubernetes clusters across regions and cloud providers.
- Separation of duties. The platform team manages vault, the security team reviews changes, and application teams own their workloads.
CLI-as-Orchestrator Model¶
The CloudTaser CLI runs on an operator's workstation (or in a CI/CD runner) and acts as the bridge. It connects to each system independently -- it never requires the vault and the cluster to be able to reach each other at plan time.
```
          Operator Workstation
           (or CI/CD runner)
                  |
     +------------+------------+
     |                         |
cloudtaser source *     cloudtaser target *
     |                         |
     v                         v
+-------------------+   +--------------------+
| EU Vault/OpenBao  |   | Target Kubernetes  |
| (private network) |   |      Cluster       |
+-------------------+   +--------------------+
```
The CLI connects to the vault for source commands and to the Kubernetes cluster for target commands. The plan file is the handoff artifact:
1. `target discover -o plan.yaml` -- connects to the cluster, writes a plan file
2. Security team reviews `plan.yaml` -- no system access needed, just a YAML file
3. `source apply-plan plan.yaml` -- connects to vault, creates policies and roles
4. User populates secrets in vault -- uses the `bao`/`vault` CLI or UI
5. `source verify-plan plan.yaml` -- connects to vault, checks secret existence
6. `target protect --plan plan.yaml` -- connects to the cluster, patches workloads
Steps 1 and 6 only need cluster access. Steps 3, 4, and 5 only need vault access. Step 2 needs access to neither system. This means:
- The vault never needs to be reachable from the cluster network during the plan workflow.
- The plan file can be transferred between networks via secure file transfer, Git, or even email.
- Different people can execute different steps from different machines.
Air-Gap Friendly Design¶
In fully air-gapped environments, the workflow adapts naturally:
```
Network A (cluster)             Secure Transfer        Network B (vault)
=====================           ===============        ===================
target discover -o plan.yaml
        |
        v
    plan.yaml  -------------------------------------->  plan.yaml
                                                            |
                                                            v
                                             source apply-plan plan.yaml
                                             (populate secrets)
                                             source verify-plan plan.yaml
                                                            |
                                                            v
    plan.yaml  <--------------------------------------  plan.yaml (unchanged)
        |
        v
target protect --plan plan.yaml
```
The plan file is the only thing that crosses the air gap. It contains no secret values -- only metadata about which workloads need which vault paths. It is safe to store in version control or compliance systems.
Multi-Cluster Topology¶
A single EU vault can serve multiple Kubernetes clusters. Each cluster gets its own plan file:
```
               EU Vault / OpenBao
            (single source of truth)
                      |
        +-------------+-------------+
        |             |             |
   apply-plan    apply-plan    apply-plan
   verify-plan   verify-plan   verify-plan
        |             |             |
        v             v             v
  +-----------+ +-----------+ +-----------+
  | Cluster A | | Cluster B | | Cluster C |
  | (GKE EU)  | | (EKS EU)  | | (AKS EU)  |
  +-----------+ +-----------+ +-----------+
        |             |             |
   discover -o   discover -o   discover -o
   protect       protect       protect
     --plan        --plan        --plan
```
Each cluster generates its own plan. All plans point to the same vault. The vault policies and roles are per-tenant (per-namespace), so clusters can share vault paths or have isolated paths depending on the naming convention.
Per-cluster workflow¶
```shell
# Cluster A
cloudtaser target discover --kubeconfig ~/.kube/cluster-a -o plan-a.yaml
cloudtaser source apply-plan plan-a.yaml --openbao-addr https://vault.eu.example.com --token hvs.TOKEN
# (populate secrets, verify)
cloudtaser target protect --plan plan-a.yaml --kubeconfig ~/.kube/cluster-a --vault-address https://vault.eu.example.com --interactive

# Cluster B
cloudtaser target discover --kubeconfig ~/.kube/cluster-b -o plan-b.yaml
cloudtaser source apply-plan plan-b.yaml --openbao-addr https://vault.eu.example.com --token hvs.TOKEN
# (populate secrets, verify)
cloudtaser target protect --plan plan-b.yaml --kubeconfig ~/.kube/cluster-b --vault-address https://vault.eu.example.com --interactive
```
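With more than a couple of clusters, the same per-cluster sequence is easier to drive with a loop. A sketch using only the flags shown above -- the cluster names and the kubeconfig layout are assumptions, not a prescribed convention:

```shell
# Assumed: one kubeconfig per cluster, all plans pointing at the same vault.
VAULT_ADDR=https://vault.eu.example.com
for cluster in cluster-a cluster-b cluster-c; do
  cloudtaser target discover --kubeconfig ~/.kube/"$cluster" -o "plan-$cluster.yaml"
  cloudtaser source apply-plan "plan-$cluster.yaml" --openbao-addr "$VAULT_ADDR" --token hvs.TOKEN
  # (populate secrets, verify, then protect -- per cluster, as above)
done
```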
Plan File as Compliance Artifact¶
The migration plan is designed to be reviewed and approved before execution:
| Property | Benefit |
|---|---|
| Human-readable YAML | Security teams can review without CLI access |
| No secret values | Safe to store in Git, Jira, or compliance platforms |
| Versioned schema | `apiVersion: cloudtaser.io/v1` ensures forward compatibility |
| Audit fields | `createdAt`, `createdBy`, `cluster` provide provenance |
| Status tracking | Each workload's migration status is recorded in the file |
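The properties above constrain what a plan file must contain. A hypothetical excerpt for illustration -- only `apiVersion`, `createdAt`, `createdBy`, `cluster`, and the per-workload status are named in this document; every other field name here is an assumption:

```yaml
# Hypothetical plan file excerpt -- shape implied by the table above.
apiVersion: cloudtaser.io/v1
kind: MigrationPlan            # assumed kind name
metadata:
  createdAt: "2024-06-01T09:00:00Z"
  createdBy: platform-team@example.com
  cluster: cluster-a
workloads:                     # assumed workload-entry fields
  - name: api-server
    namespace: payments
    vaultPath: secret/payments/api-server   # path metadata only -- no values
    status: pending            # later set to migrated or skipped
```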
Typical compliance workflow:
1. Platform team generates the plan and opens a review ticket
2. Security team reviews vault paths, field mappings, and tenant boundaries
3. Security team approves the plan (the YAML file is the approval artifact)
4. Platform team executes `apply-plan`, populates secrets, runs `verify-plan`
5. Application team confirms readiness
6. Platform team executes `target protect --plan` with interactive approval
7. The final plan file (with all statuses set to `migrated`) is archived as evidence
Interactive vs Automated Migration¶
The `target protect --plan` command supports two modes:
Interactive (--interactive)¶
Each workload is presented with its full secret mapping. The operator confirms injection and rolling restart individually. Recommended for:
- First migration of production workloads
- Environments where application teams need to confirm each workload
- Regulated environments requiring per-workload approval
Auto-approve (--yes)¶
All workloads are migrated without prompting. Recommended for:
- CI/CD pipelines
- Development and staging environments
- Re-running after a partial migration where remaining workloads are already approved
Both modes are resumable. If interrupted, re-run the same command -- workloads already marked as migrated or skipped are not reprocessed.
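The resume rule is simple: anything already marked `migrated` or `skipped` is filtered out before reprocessing. A minimal stand-alone sketch of that filter -- the one-line-per-workload status listing here is invented for illustration and is not the plan file's real format:

```shell
# Invented status listing -- stands in for the per-workload status
# field that the plan file records.
cat > /tmp/plan-status.txt <<'EOF'
web-frontend migrated
api-server pending
batch-worker skipped
EOF

# A re-run reprocesses only workloads not already migrated or skipped.
awk '$2 != "migrated" && $2 != "skipped" { print $1 }' /tmp/plan-status.txt
# -> api-server
```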
Runtime Connectivity¶
While the plan workflow does not require vault-to-cluster connectivity during migration setup, the workloads themselves need vault access at runtime. Once CloudTaser annotations are applied and pods restart:
- The CloudTaser init container injects the wrapper binary
- The wrapper authenticates to vault using the pod's ServiceAccount JWT (Kubernetes auth)
- The wrapper fetches secrets from vault into process memory
- The wrapper fork+execs the original process with secrets in the child environment
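The fetch-then-exec pattern in the last two steps can be sketched in plain shell. A placeholder stands in for the vault fetch; the real wrapper first authenticates with the pod's ServiceAccount JWT and fetches over TLS:

```shell
# Sketch of the wrapper's final steps, not the actual implementation.
DB_PASSWORD=$(printf 'hunter2')   # placeholder for the vault fetch

# exec replaces the wrapper process with the original one; the secret
# exists only in the child's environment, never on disk.
exec env DB_PASSWORD="$DB_PASSWORD" sh -c 'echo "child sees: $DB_PASSWORD"'
# prints: child sees: hunter2
```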
This means the vault endpoint must be reachable from the cluster's pod network at runtime. This is typically achieved via:
- Private peering between the vault VPC and the cluster VPC
- VPN or WireGuard tunnel for cross-cloud connectivity
- Internal load balancer if vault is in the same cloud region
- Ingress with mTLS for external access, authenticating clients by certificate
The `cloudtaser target connect` command configures the Kubernetes auth method in vault to accept ServiceAccount JWTs from the target cluster.
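For orientation, what `target connect` automates corresponds roughly to the stock `vault`/`bao` Kubernetes auth setup below. The role, policy, and namespace names are illustrative, not CloudTaser's actual naming:

```shell
# Enable and point the Kubernetes auth method at the target cluster.
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://<cluster-api-endpoint>" \
    kubernetes_ca_cert=@ca.crt

# Bind a ServiceAccount to a vault policy via a role.
vault write auth/kubernetes/role/my-app \
    bound_service_account_names=my-app \
    bound_service_account_namespaces=my-namespace \
    policies=my-app-policy \
    ttl=1h
```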
Related¶
- Migration Plan Workflow -- step-by-step guide
- Plan File Format Reference -- full YAML schema
- Zero Kubernetes Secrets Architecture -- how the operator itself avoids K8s Secrets
- Security Model -- trust boundaries and threat model