Sovereign Deployment Decision Guide

CloudTaser's sovereignty guarantee depends on two deployment choices the installer makes before any manifest is applied:

  1. Where OpenBao is hosted -- the secret source of truth.
  2. What compute substrate runs the target workload -- whether memory is hypervisor-sovereign at runtime.

Get either wrong and the cryptographic story still technically works -- the secret still bypasses etcd, the wrapper still injects via memfd_secret, the eBPF layer still blocks the usual exfiltration paths -- but the sovereignty claim you make to auditors, customers, and regulators will not hold under scrutiny. Hosting OpenBao in AWS Frankfurt does not make OpenBao EU-sovereign. Running protected workloads on standard GCE n2 does not make runtime memory hypervisor-sovereign.

This page gives you two decision trees, a provider comparison table, a scope breakdown of what "sovereign" actually covers in a CloudTaser deployment, and a list of silent-failure anti-patterns to look out for.

Silent failure is reputational risk

A careless deployment on AWS eu-central-1 satisfies every CI check, ticks every Schrems II box, and silently undermines the whole sovereignty story. CloudTaser cannot detect the substrate choice for you -- you must make it deliberately.


Decision tree -- OpenBao hosting

OpenBao holds the long-lived material: root keys, wrap tokens, the audit trail of every secret read, and the metadata about which cluster holds which fingerprint. If the host provider is subject to US extraterritorial compulsion (CLOUD Act, FISA 702, NSL), sovereignty is not established regardless of region label.

Is OpenBao hosted on a US-owned cloud provider?
|
+-- Yes, commodity compute (e.g. AWS eu-central-1 EC2 t3.medium)
|     -> SOVEREIGNTY NOT ESTABLISHED. Move.
|     The CLOUD Act reaches the provider regardless of region.
|     Region labels are a billing concept, not a jurisdictional one.
|
+-- Yes, confidential-compute SKU with attestation
|   (e.g. AWS Nitro Enclaves + remote attestation, Azure DCsv5 + MAA)
|     -> Acceptable. Document the attestation chain.
|     You are trusting the CC boundary, not the provider. Audit evidence
|     must include attestation quotes and a documented key-release policy
|     tying unseal material to measured state.
|
+-- No -- verify substrate:
    |
    +-- Hetzner / OVH / Scaleway / IONOS / Exoscale / UpCloud
    |     -> Acceptable. EU-headquartered, EU-operated, no US parent.
    |
    +-- SecNumCloud-qualified provider (OVH SecNumCloud, Outscale, Bleu)
    |     -> Acceptable and preferred for French regulated workloads.
    |     ANSSI qualification requires EU ownership and immunity-by-design
    |     against non-EU extraterritorial law.
    |
    +-- On-prem in EU (customer datacenter)
    |     -> Acceptable. Document the physical access policy and key
    |     custody. Among the strongest postures for OpenBao hosting.
    |
    +-- Self-hosted hardware in EU (bare metal, customer-owned)
        -> Strongest. Full control of substrate, network, and lifecycle.

What to check before picking a provider

  • Corporate ownership. Is the operating entity a direct subsidiary of a US or non-EU parent? Wholly-owned EU subsidiaries of US companies remain in scope for US extraterritorial compulsion.
  • Operational control. Are support staff, root access, and incident response entirely within EU jurisdiction? "EU region, US support" is a common footgun.
  • Governing law. Is the master services agreement governed by EU law (member state law) or by the laws of a non-EU country?
  • Attestation. For CC-based deployments, the attestation evidence and the key-release policy are the audit artefacts. Record them.

Decision tree -- target-cluster compute

The target cluster is where the wrapped workload runs. The wrapper puts secrets into memfd_secret and the eBPF layer blocks guest-root exfiltration -- but hypervisor-level access is out of scope for guest-side controls. Whether hypervisor access is sovereign depends entirely on compute SKU.

Does the workload use runtime secrets (wrapper + memfd_secret)?
|
+-- Yes -> Confidential-compute (CC) cluster REQUIRED for the full claim:
|     |
|     +-- GKE c3 / n2d + AMD SEV-SNP or Intel TDX + attestation
|     |     Managed by Google; customer verifies attestation at node join.
|     |
|     +-- EKS with Nitro Enclaves (parent + enclave topology)
|     |     Isolates secret-handling enclave inside standard instance.
|     |
|     +-- AKS DCdv5 / ECdv5 / DC*asv6 (SEV-SNP or TDX)
|     |     Often free or near-free uplift over standard DS instances.
|     |
|     +-- Self-managed AMD EPYC SEV-SNP or Intel TDX on bare metal
|           Full sovereign posture. Customer provides attestation harness.
|
+-- No -- workload is at-rest-only (S3 encryption proxy, DB proxy, offline
    batch that reads secrets once and writes ciphertext) ->
          Commodity compute is acceptable.
          Keys never decrypt user data on the protected node -- they
          decrypt on the proxy-hosting node, which can itself be CC if
          you need a stronger claim for the proxy tier.

Why CC matters for runtime-secret workloads

Without CC, the hypervisor running your VM has theoretical read access to guest physical memory. On a shared public cloud this means the provider (and, via US extraterritorial compulsion, a foreign government) has a legal path to your process memory that no guest-side control can close. CC SKUs move the confidentiality boundary inside the CPU package via measured encryption and attestation, so the hypervisor holds only ciphertext.

The wrapper's memfd_secret and the eBPF layer defend against guest-root adversaries (a compromised container, a privileged DaemonSet, a noisy-neighbour tenant). They do not and cannot defend against hypervisor-root adversaries. CC is what closes that second gap.
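A quick in-guest sanity check can confirm which substrate a node actually provides. A minimal shell sketch, assuming a Linux guest where the standard kernel drivers expose /dev/sev-guest (AMD SEV-SNP) and /dev/tdx_guest (Intel TDX) when the corresponding CC mode is active:

```shell
# Detect a confidential-compute substrate from inside the guest.
# Assumption: stock Linux guest drivers, which create /dev/sev-guest
# (AMD SEV-SNP) and /dev/tdx_guest (Intel TDX) only on CC hardware.
cc_substrate="none"
[ -e /dev/sev-guest ] && cc_substrate="sev-snp"
[ -e /dev/tdx_guest ] && cc_substrate="tdx"
echo "detected confidential-compute substrate: ${cc_substrate}"
```

On commodity compute this reports none; for runtime-secret workloads, treat that as a blocking finding rather than a warning.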


Provider comparison

| Substrate | CC SKU available | Attestation | CLOUD Act exposure | Price delta vs. commodity | Notes |
|---|---|---|---|---|---|
| GKE (Google-owned) | c3, c3d, n2d with SEV-SNP; c3 with TDX (preview) | Google Confidential Space, vTPM at node boot | Full -- US parent | +6 to +14% | Attestation tooling mature; widely deployed. |
| EKS (AWS-owned) | Nitro Enclaves on parent instances; Graviton CC in preview | Nitro attestation documents, KMS-integrated | Full -- US parent | +0 to +10% (enclave overhead) | Enclave model is topologically different; the parent instance is not CC. |
| AKS (Microsoft-owned) | DCsv3, DCdv5, ECdv5, DC*asv6 (SEV-SNP), DCsv4 (TDX) | Microsoft Azure Attestation (MAA) | Full -- US parent | Often no uplift, sometimes cheaper | Broadest GA CC catalogue today. |
| Hetzner (EU) | CAX + dedicated EPYC bare metal (SEV-SNP possible, self-managed) | Self-managed (customer harness) | None | Baseline (commodity pricing) | Best commodity price in the EU; CC requires bare metal and self-management. |
| OVH (EU) | Advance / Scale dedicated + SecNumCloud | Self-managed or ANSSI-qualified | None | Baseline to +20% for SecNumCloud | SecNumCloud tier is the reference French sovereign posture. |
| Scaleway (EU) | Elastic Metal with AMD EPYC | Self-managed | None | Baseline | EU-only; good DX for self-managed Kubernetes. |
| IONOS (EU, Germany) | Dedicated Core / Enterprise on AMD EPYC | Self-managed | None | Baseline | BSI C5-audited; SecNumCloud tier in partnership. |
| Exoscale (Switzerland/Austria) | Commodity + dedicated | Self-managed | None (CH/AT governing law) | Baseline | Non-EU jurisdiction (CH) but sovereign-aligned. |
| UpCloud (EU, Finland) | Commodity | Self-managed | None | Baseline | Finnish jurisdiction. |
| SecNumCloud providers (OVH, Outscale, Bleu, Cloud Temple) | Varies | ANSSI-qualified | None (legally immune by qualification criteria) | +10 to +30% | Strongest regulated posture; mandatory for certain French public-sector workloads. |
| On-prem (customer-owned) | Any (customer choice) | Customer TPM/vTPM harness | None | Capex model | Full sovereign posture; highest ops burden. |

Pricing deltas are order-of-magnitude indicators as of this document's revision; verify with your provider account team for current rates and committed-use discounts.


What sovereignty means here -- scope breakdown

The word "sovereignty" gets used loosely. In a CloudTaser deployment, four distinct properties are relevant, and each has its own sovereign-or-not answer tied to a specific condition.

Data at rest

Sovereign IF OpenBao is hosted on a sovereign substrate per the OpenBao decision tree above.

Encryption keys, wrapped secrets, and the OpenBao audit log live on OpenBao's durable storage. Whoever controls that storage controls the cleartext -- directly if unsealed, via legal compulsion to the host if sealed. Non-EU substrate breaks this.

Data in transit

Sovereign via endpoint-terminated TLS between OpenBao and the wrapper, relayed through the beacon.

The beacon is a stateless SHA-256-info-hash switchboard; it cannot see plaintext. TLS terminates at the wrapper's memfd_secret on one end and OpenBao's listener on the other. The one caveat is a beacon operator colluding in an active MitM with a forged certificate: the pinned beacon cert protects against this in managed deployments, while self-hosted beacons require customer-managed pinning.
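For self-hosted beacons, the pinning comparison itself is simple; the operational work is distributing and rotating the pin. A shell sketch of the comparison step -- the endpoint, pin value, and pin_matches helper are illustrative, not CloudTaser defaults:

```shell
# Compare a presented certificate fingerprint against the recorded pin.
# pin_matches is an illustrative helper, not part of the CloudTaser tooling.
pin_matches() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Obtaining the presented fingerprint takes a network round trip, e.g.
# (illustrative endpoint, not executed here):
#   openssl s_client -connect beacon.example.internal:8443 </dev/null 2>/dev/null \
#     | openssl x509 -outform der | openssl dgst -sha256 -r | cut -d' ' -f1
pinned="0000000000000000000000000000000000000000000000000000000000000000"  # placeholder pin
presented="$pinned"  # stand-in for the fingerprint fetched over the wire
if pin_matches "$pinned" "$presented"; then
  echo "beacon pin: OK"
else
  echo "beacon pin: MISMATCH -- abort the connection"
fi
```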

Data in use

Sovereign IF the target cluster runs on a CC SKU with attestation per the target-cluster decision tree above. Otherwise the protection boundary covers only compromised guest-root adversaries, not hypervisor-root or provider-operator adversaries.

Be explicit about this in compliance documentation. Claiming "end-to-end sovereign" on commodity compute is the single most common factual error in CloudTaser deployments.

Control-plane metadata

NOT sovereign -- reflect this in the DPIA.

Kubernetes API-server objects that describe CloudTaser-protected workloads (pod manifests with annotations, image references, scheduler events, audit logs) live in the managed control plane of the cluster provider. These do not contain secret material but do contain metadata about which workloads exist, when they run, and which secrets they reference by name. This metadata is visible to the cluster provider.

For most threat models this is acceptable; for workloads where metadata itself is sensitive (e.g. regulatory intent, merger activity), either deploy on a sovereign Kubernetes distribution (self-managed k3s/kubeadm on sovereign substrate) or accept the residual exposure and document it.
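The exposure is easy to demonstrate: secret names sit in plain text in the pod spec that the managed control plane stores, even though secret values never do. A toy extraction over an inline manifest fragment (the manifest content is illustrative):

```shell
# Secret *names* are visible in any pod manifest held by the managed
# control plane; secret *values* are not. Illustrative manifest fragment:
manifest='{"spec":{"volumes":[{"name":"creds","secret":{"secretName":"db-creds"}}]}}'
visible=$(printf '%s' "$manifest" | grep -o '"secretName":"[^"]*"' | cut -d'"' -f4)
echo "secret names visible to the control-plane operator: ${visible}"
```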


Silent failure modes

These are the anti-patterns we see most often. Each silently defeats the sovereignty claim without producing any CI or runtime failure.

1. OpenBao on AWS Frankfurt

Pattern: "We picked eu-central-1 so OpenBao is in the EU."

Why it fails: AWS is a US corporate entity. Region label controls billing and latency, not jurisdiction. CLOUD Act, FISA 702, and NSLs reach AWS's corporate parent regardless of which region the bits sit in.

Remediation: Move OpenBao to an EU-owned provider (Hetzner, OVH, Scaleway, IONOS, Exoscale, UpCloud) or to SecNumCloud-qualified infrastructure. See the OpenBao decision tree above.

2. Non-CC nodes expecting hypervisor protection

Pattern: "We use CloudTaser so our workload is hypervisor-protected."

Why it fails: The wrapper + eBPF layer close guest-side exfiltration paths. They do not touch the hypervisor boundary. On commodity compute the hypervisor can read guest memory pages as plaintext.

Remediation: Move runtime-secret workloads to CC SKUs (GKE c3d/n2d, AKS DCdv5/ECdv5, EKS Nitro Enclaves, or self-managed SEV-SNP/TDX). For at-rest-only workloads (S3 encryption proxy, DB proxy), commodity compute remains acceptable -- document the scope difference.

3. Compliance mapping as audit evidence

Pattern: A checkbox says "Schrems II supplementary measures: yes" and the deployment is declared audit-ready.

Why it fails: Compliance framework mappings say what a control can satisfy if deployed correctly. They do not verify the control is deployed correctly. Auditors ask for evidence -- attestation quotes, substrate agreements, OpenBao access logs, eBPF enforcement events -- not framework tables.

Remediation: Treat docs/compliance/ pages as a map of what you need to produce, not as the evidence itself. Every claim needs an artefact: substrate MSA, CC attestation quote, kernel CONFIG, policy deployment log.

4. CONFIG_BPF_KPROBE_OVERRIDE off on node OS

Pattern: The node OS is built without CONFIG_BPF_KPROBE_OVERRIDE=y, or kprobe override is disabled at runtime.

Why it fails: Without kprobe override the eBPF layer can only observe a forbidden syscall and react (SIGKILL after the fact); it cannot block the syscall synchronously. For most of the 20+ enforcement vectors this means the forbidden read has already returned bytes to the attacker by the time the reaction fires.

Remediation: Use a node OS that ships with kprobe override enabled (COS, recent Bottlerocket, Ubuntu 22.04+ by default) and verify at daemonset startup. The CloudTaser eBPF daemonset logs its detected capability set at boot -- check kubectl logs -n cloudtaser-system -l app=cloudtaser-ebpf | grep -i kprobe_override. See Kernel Compatibility.
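The kernel config can also be checked directly on a node, independent of the daemonset log. A hedged shell sketch; config locations vary by distro, so both common paths are tried:

```shell
# Find a readable kernel config and check for synchronous-block support.
cfg=""
for c in /proc/config.gz "/boot/config-$(uname -r)"; do
  [ -r "$c" ] && cfg="$c" && break
done
if [ -z "$cfg" ]; then
  result="unknown (no readable kernel config; inspect the node OS image)"
elif { case "$cfg" in *.gz) zcat "$cfg" ;; *) cat "$cfg" ;; esac; } \
    | grep -q '^CONFIG_BPF_KPROBE_OVERRIDE=y'; then
  result="enabled"
else
  result="MISSING -- eBPF layer degrades to observe-and-react"
fi
echo "CONFIG_BPF_KPROBE_OVERRIDE: ${result}"
```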


Where this fits in your deployment plan

  1. Before Helm install -- answer both decision trees and record the answers in the deployment plan document.
  2. In the DPIA -- document which of the four scope properties above are in scope and which are out of scope, with remediations for any out-of-scope property the organisation is unwilling to accept.
  3. In the runbook -- record substrate ownership, governing law, attestation method, and kernel CONFIG as audit artefacts. Review on the same cadence as MSA renewals.
  4. After deployment -- run cloudtaser-cli target audit to collect runtime evidence (eBPF capability set, node labels, CC attestation where supported) and file the output with your compliance team.

Related reading: