Self-Bootstrap: How CloudTaser Connects a Cluster to the EU Vault¶
CloudTaser uses a P2P beacon relay to connect Kubernetes clusters to an EU-hosted secret store without direct network access. The entire process requires zero K8s Secrets, zero vault tokens for day-2 operations, and only one Helm parameter to deploy.
Overview¶
┌─────────────────┐ ┌──────────────┐ ┌─────────────────┐
│ Target Cluster │ │ Beacon │ │ EU Secret Store │
│ (K8s + Operator)│────▶│ (TCP relay) │◀────│ (Vault + Bridge)│
│ │ │ port 443 │ │ │
└─────────────────┘ └──────────────┘ └─────────────────┘
outbound only stateless outbound only
no inbound ports no secrets no inbound ports
Both sides connect outbound to the beacon. Neither needs inbound ports or direct network paths to the other.
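The rendezvous itself can be sketched as a table keyed by info hash: the beacon pairs the first two connections that present the same hash, then relays bytes between them. This is an illustrative model only (the `Beacon` class and its names are hypothetical, not CloudTaser's code); the real beacon relays raw TCP streams on port 443:

```python
class Beacon:
    """Minimal sketch of a stateless rendezvous relay."""

    def __init__(self):
        self.waiting = {}  # info_hash -> the peer waiting for a match

    def connect(self, info_hash, peer):
        """Return the matched peer, or None if this side must wait."""
        other = self.waiting.pop(info_hash, None)
        if other is None:
            self.waiting[info_hash] = peer  # first side waits in the "room"
            return None
        return other  # second side gets paired immediately

beacon = Beacon()
assert beacon.connect("abc123", "bridge") is None        # bridge waits
assert beacon.connect("abc123", "operator") == "bridge"  # operator paired
```

Because the table holds only opaque hashes and open sockets, the beacon stores no secrets and learns nothing about either side.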
Complete Walkthrough¶
Phase 1: Get Cluster Fingerprint (Target Side)¶
$ cloudtaser target fingerprint
CloudTaser Cluster Fingerprint
Cluster ID: 0ad50c05-1153-4508-9237-4cb2912e2c23
Register this cluster on your secret store:
cloudtaser source register --fingerprint 0ad50c05-1153-4508-9237-4cb2912e2c23
The fingerprint is the kube-system namespace UID — unique per cluster, stable across restarts. This is the only value exchanged between the two sides. No files, no certs, no tokens.
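Because the fingerprint is a namespace UID, it is an RFC 4122 UUID string. A quick sanity check before handing it to `source register` might look like this (the helper name is illustrative, not part of CloudTaser):

```python
import uuid

def is_valid_fingerprint(value: str) -> bool:
    """A cluster fingerprint is a namespace UID, i.e. a UUID string."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except ValueError:
        return False

assert is_valid_fingerprint("0ad50c05-1153-4508-9237-4cb2912e2c23")
assert not is_valid_fingerprint("not-a-uuid")
```

The same value can also be read directly with `kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'`.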
Phase 2: Register Cluster (Secret Store Side)¶
On the vault side (different network, different VPN — doesn't matter):
$ cloudtaser source register \
--fingerprint 0ad50c05-1153-4508-9237-4cb2912e2c23 \
--vault-address https://vault.cloudtaser.io \
--vault-token <admin-token>
This command does four things:
1. Generates a CA and two cert pairs (broker cert + bridge client cert) — all in memory
2. Stores certs in vault at cloudtaser/data/system/bridge-*
3. Registers the fingerprint at cloudtaser/data/clusters/<id>/registered
4. Computes info hashes — one for init (from fingerprint), one for operational (from CA cert)
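Assuming the hash scheme described in Phase 4 (the hex encoding of the digest is an assumption), the two info hashes can be sketched as:

```python
import hashlib

def init_hash(fingerprint: str) -> str:
    # Init room: derived from the fingerprint alone, so both sides can
    # compute it before any certificates exist.
    return hashlib.sha256(f"cloudtaser-init:{fingerprint}".encode()).hexdigest()

def operational_hash(ca_cert_der: bytes) -> str:
    # Operational room: derived from the CA cert, so only a party that
    # holds the CA can find it.
    return hashlib.sha256(ca_cert_der).hexdigest()

h = init_hash("0ad50c05-1153-4508-9237-4cb2912e2c23")
assert len(h) == 64  # hex-encoded SHA-256
```

Both sides derive the same hashes independently, which is what lets them find each other on the beacon without ever exchanging a secret.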
The bridge (running alongside vault) detects the new registration and connects to the beacon with the init hash.
Phase 3: Deploy Operator (Target Side)¶
Back on the K8s cluster — just ONE parameter:
$ helm install cloudtaser cloudtaser/cloudtaser \
--set operator.beacon.address=beacon.cloudtaser.io:443
Or with ArgoCD:
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
source:
chart: cloudtaser
repoURL: https://charts.cloudtaser.io
helm:
values: |
operator:
beacon:
address: beacon.cloudtaser.io:443
No certificates, no secrets, no tokens in the manifest. Safe to store in git.
Phase 4: Self-Bootstrap (Automatic)¶
The operator starts and self-bootstraps through the beacon relay:
┌──────────┐ ┌────────┐ ┌────────┐
│ Operator │ │ Beacon │ │ Bridge │
│ (broker) │ │ (relay)│ │ (vault)│
└────┬─────┘ └───┬────┘ └───┬────┘
│ │ │
│ 1. Connect (init hash) │ │
│──────────────────────────▶│ │
│ │◀─────────────────────────│
│ │ Bridge already waiting │
│ │ (detected registration) │
│ │ │
│ 2. Beacon matches │ │
│◀─────── relay ──────────▶│ │
│ │ │
│ 3. Send fingerprint │ relay │
│──────────────────────────────────────────────────────▶│
│ │ │
│ │ 4. Verify fingerprint │
│ │ Check vault: │
│ │ registered? ✓ │
│ │ initialized? ✗ │
│ │ │
│ 5. Receive certs │ relay │
│◀──────────────────────────────────────────────────────│
│ (CA + broker cert + key) │ │
│ │ 6. Mark initialized │
│ │ (one-time use) │
│ │ │
│ 7. Disconnect │ │
│──────── close ──────────▶│ │
│ │ │
│ 8. Reconnect │ │
│ (operational hash) │ │
│──────────────────────────▶│ │
│ │◀─────────────────────────│
│ │ Bridge in operational │
│ │ room (same CA hash) │
│ │ │
│ 9. mTLS handshake │ relay │
│◀═══════════════════════════════════════════════════▶│
│ │ │
│ OPERATIONAL │ │
│ Secrets flow via mTLS │ │
│ through beacon relay │ │
Steps explained:
1. Operator reads the kube-system namespace UID and computes SHA256("cloudtaser-init:" + fingerprint), then connects to the beacon with this init hash
2. Beacon matches the operator with the bridge (both registered with the same init hash)
3. Operator sends its fingerprint through the relay (protected by the beacon's outer TLS)
4. Bridge checks vault: is this fingerprint registered? Yes. Already initialized? No.
5. Bridge sends the pre-generated certificates: CA cert, broker cert, broker private key
6. Bridge marks the fingerprint as "initialized" in vault (one-time use; it can't be replayed)
7. Operator disconnects from the init "room"
8. Operator computes SHA256(CA cert DER) and reconnects to the beacon with the operational hash
9. Both sides perform a full mTLS handshake through the relay; the connection is now authenticated and encrypted end to end
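The exchange hinges on the one-time-use check in steps 4 and 6. A sketch of that replay protection, with a plain dict standing in for vault's KV store (the `initialized` path mirrors the `registered` path from Phase 2 but is an assumption, as is the function name):

```python
def redeem_certs(store: dict, fingerprint: str, certs: bytes) -> bytes:
    """Hand out certificates for a fingerprint exactly once."""
    base = f"cloudtaser/data/clusters/{fingerprint}"
    if not store.get(f"{base}/registered"):
        raise PermissionError("fingerprint not registered")
    if store.get(f"{base}/initialized"):
        raise PermissionError("already initialized; replay rejected")
    store[f"{base}/initialized"] = True  # step 6: mark one-time use
    return certs

store = {"cloudtaser/data/clusters/abc/registered": True}
assert redeem_certs(store, "abc", b"certs") == b"certs"  # first use succeeds
try:
    redeem_certs(store, "abc", b"certs")  # second use is a replay
    assert False, "replay should be rejected"
except PermissionError:
    pass
```

An attacker who later learns the fingerprint gains nothing: the init room only ever pays out once.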
Phase 5: Operational¶
Secrets now flow through the relay. Each pod authenticates individually through vault's Kubernetes auth, so vault RBAC policies apply per pod, not per bridge.
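Vault's Kubernetes auth method takes a POST to /v1/auth/kubernetes/login carrying the pod's service-account JWT and a role name; in CloudTaser's setup this request simply travels through the relay instead of a direct network path. A sketch of the request shape (the role name is illustrative):

```python
import json

def k8s_login_request(vault_addr: str, role: str, jwt: str):
    """Build vault's Kubernetes auth login call (standard vault API shape)."""
    url = f"{vault_addr}/v1/auth/kubernetes/login"
    body = json.dumps({"role": role, "jwt": jwt})
    return url, body

url, body = k8s_login_request("https://vault.cloudtaser.io", "my-app", "<sa-token>")
assert url.endswith("/v1/auth/kubernetes/login")
assert json.loads(body)["role"] == "my-app"
```

Vault answers with a token scoped to the policies attached to that role, which is why per-pod RBAC survives the relay unchanged.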
Phase 6: Migrate Existing Secrets¶
No vault token needed — the bridge authenticates with its own credentials. Secrets are read from K8s, written to vault through the relay, and deleted from K8s.
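The per-secret transform is worth spelling out: K8s Secrets carry base64-encoded values under .data, while vault's KV v2 write expects plain strings under a data key. A sketch (the helper name is illustrative; CloudTaser's actual migration command is not shown here):

```python
import base64

def k8s_secret_to_vault_kv(secret: dict) -> dict:
    """Decode a K8s Secret's base64 .data fields into a vault KV v2
    write payload ({"data": {...}})."""
    return {"data": {key: base64.b64decode(val).decode()
                     for key, val in secret["data"].items()}}

k8s_secret = {"metadata": {"name": "db-creds"},
              "data": {"password": base64.b64encode(b"hunter2").decode()}}
assert k8s_secret_to_vault_kv(k8s_secret) == {"data": {"password": "hunter2"}}
```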
Security Properties¶
| Property | How |
|---|---|
| Zero K8s Secrets | All certs in operator process memory |
| Zero vault tokens for day-2 | Bridge authenticates admin proxy requests |
| One-time cert exchange | Fingerprint marked as initialized after first use |
| Vault RBAC preserved | Each pod authenticates via K8s auth, scoped token |
| No inbound ports | Both sides connect outbound to beacon (TCP 443) |
| No certs in git/Helm | Only beacon address in Helm values |
| Beacon sees nothing | Encrypted mTLS bytes only, no key material |
Deregistering a Cluster¶
$ cloudtaser source deregister \
--fingerprint 0ad50c05-1153-4508-9237-4cb2912e2c23 \
--vault-address https://vault.cloudtaser.io \
--vault-token <admin-token>
Removes the registration and initialization records. The bridge stops accepting connections from this cluster.
Automating with Terraform¶
# Assumes a GKE cluster resource (google_container_cluster.main) and a
# VAULT_TOKEN environment variable are defined elsewhere.
data "kubernetes_namespace" "kube_system" {
  metadata { name = "kube-system" }
  depends_on = [google_container_cluster.main]
}

# Register the new cluster's fingerprint (the kube-system UID) with the
# secret store as soon as the cluster exists.
resource "null_resource" "register_cluster" {
  provisioner "local-exec" {
    command = <<-EOT
      cloudtaser source register \
        --fingerprint ${data.kubernetes_namespace.kube_system.metadata[0].uid} \
        --vault-address https://vault.cloudtaser.io \
        --vault-token $VAULT_TOKEN
    EOT
  }
}
Every new GKE/AKS/EKS cluster gets auto-registered. ArgoCD deploys the Helm chart with one parameter. Operator self-bootstraps.