
Shared Responsibility Model

Once a secret is fetched from your EU-hosted OpenBao into a wrapper's memfd_secret region, you might think CloudTaser has "solved" data sovereignty for that workload. It has -- but only for that secret, and only up to the boundary of your process. Inside the process, your application code can still log a token, send a DB row to a webhook, or hand a JWT to a third-party SDK. CloudTaser does not stop you from doing any of that, and could not without blocking normal application behaviour.

This page draws the boundary.


CloudTaser's responsibility

CloudTaser (the product) guarantees:

  • Secret delivery -- the wrapper fetches from your EU OpenBao into memfd_secret without passing through etcd, K8s Secrets, disk, or env vars readable from /proc.
  • In-memory exfiltration blocking -- the eBPF daemonset blocks /proc/PID/mem, /proc/PID/environ, ptrace, process_vm_readv, io_uring, userfaultfd, /dev/mem, and /proc writes.
  • Object-storage encryption -- the S3/DB proxies do AES-GCM encrypt/decrypt with keys held in EU OpenBao; the provider stores ciphertext only.
  • Compute-time sovereignty -- when deployed on confidential-compute substrate (SEV-SNP, TDX, Nitro Enclaves), runtime memory is not readable by the hypervisor.
  • Supply-chain evidence -- CloudTaser's own images and binaries are Cosign-signed with per-release SBOMs (see Supply-Chain Evidence).
  • Signed-image enforcement -- an optional Kyverno/VAP admission policy rejects unsigned images at admission time (chart admission.enforceSignature).

That is the set of problems CloudTaser is built to close. Anything outside that table is your responsibility.


What memfd_secret protects -- and what it doesn't

The marketing line "the provider stores only ciphertext it cannot decrypt" is true for secrets and S3-backed objects. It is not true for the personal data your pod is actively processing.

memfd_secret physically removes secret pages from the kernel's direct memory map. No root, no kernel module, no /dev/mem, no eBPF program can read those pages. On commodity compute the one remaining access path is the hypervisor's physical-memory view; on confidential-compute substrate that path is closed as well.

But memfd_secret only protects what's in the memfd. Your application's ordinary working memory -- the Go heap, the stack of the goroutine processing a DB row, the .bss and .data sections, the Postgres driver's response buffer after TLS decryption -- is not in memfd_secret. It is in ordinary guest RAM.

Concrete example

Your payments-api pod authenticates to the EU Postgres using a DB password CloudTaser fetched. The password lives in memfd_secret. Sovereign.

Your app then runs:

SELECT email, dob FROM customers WHERE id = 123;

Postgres returns the row over TLS to the pod. The Go driver:

  1. Decrypts the TLS record -- cleartext bytes land in an ordinary heap-allocated buffer.
  2. Hands the decoded email and dob string to your handler.
  3. Your handler assembles a response, maybe logs a trace attribute, maybe serialises to JSON.

Every one of those buffers lives in the ordinary heap of your process. They share an address space with the memfd_secret-backed secret, but they are not in the memfd_secret region -- they are in guest RAM. On commodity compute, a compelled hypervisor could read them; memfd_secret does not change that.

To close this gap for processed data, you need confidential-compute substrate -- SEV-SNP, TDX, or Nitro Enclaves with attestation. See the Sovereign Deployment Decision Guide for the decision tree and Security Model for the runtime-memory story.

Rule of thumb

  • CloudTaser's cryptographic story holds end-to-end for the secrets themselves (API keys, DB passwords, JWTs, TLS private keys) -- even on commodity compute.
  • For the data those secrets unlock (DB rows, file contents, user records, decrypted payloads), the sovereignty guarantee requires CC substrate.

Both claims are honest, testable, and scoped. Do not let anyone conflate them.


Customer's responsibility

Once a secret has been delivered into your process, the following are still yours:

1. Application code

You can still write:

log.Info("authenticated with key", "key", os.Getenv("API_KEY"))

CloudTaser cannot stop you from logging your own memory. The getenv() interposer returns the memfd-backed value -- your logger then happily ships it to stdout, to Loki, to a SaaS log aggregator in us-east-1. That is a DLP / code-review problem, not a CloudTaser problem.

Mitigation: structured-logging redaction libraries (e.g. zap with sampling + field redaction, logrus with hooks, OpenTelemetry attribute processors), code review, static analysis (grep for log.*Secret|log.*Token|log.*Password in CI).

2. Webhook and SDK egress

If your app hands a token to a third-party SDK (Stripe, Twilio, Datadog, PagerDuty), the SDK makes the outbound call using that token. CloudTaser does not see or control the call. Your egress policy does.

Mitigation: NetworkPolicy (or Cilium CiliumNetworkPolicy) scoping egress to known destinations; service mesh egress gateways; eBPF-based DNS-rule enforcement at the node level; audit-grade network logs.

3. Downstream systems

The moment your pod sends a DB query, that query runs against a system CloudTaser does not protect. If the DB is hosted on a US provider, those rows are in US-controlled memory even if your app is sovereign. The DB, the object store, the upstream API -- all are their own trust domains.

Mitigation: S3 proxy for object storage; DB proxy (roadmap) for relational data; host the DB on an EU-sovereign substrate; CC compute at the DB tier as well if processed data matters (see above).

4. Container image trust chain

CloudTaser signs its own init container, operator, eBPF agent, wrapper, and S3/DB proxy images. Your application image is built and published by you. If the provider's build pipeline is compromised and pushes a malicious layer to your image, CloudTaser has no visibility.

Mitigation: sign your own images (Cosign, Sigstore), admission policies (Kyverno, OPA Gatekeeper, native ValidatingAdmissionPolicy) that reject unsigned images, reproducible builds, SBOM publication per release, supply-chain review (SLSA-level aspiration). See Supply-Chain Evidence for how CloudTaser signs its own artefacts -- you should hold yourself to the same bar.
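A sketch of what such an admission policy can look like for your own images, modelled on Kyverno's verifyImages rule -- the policy name, registry path, and key are placeholders, and this is not the chart-rendered CloudTaser policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-app-images              # illustrative name
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "registry.example.eu/payments/*"   # your registry, not CloudTaser's
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your Cosign public key>
                      -----END PUBLIC KEY-----
```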

5. Workload-specific egress controls

You know which destinations your workload legitimately talks to. CloudTaser does not. Default-deny egress and allow-list what's needed.

Mitigation examples:

  • NetworkPolicy with egress: [] as default, then per-namespace allow-lists.
  • Kyverno validate rules rejecting images not from approved registries.
  • OPA Gatekeeper constraint templates on namespaces, service accounts, privileged mounts.
  • Service mesh (Istio, Linkerd, Cilium) with egress gateways and mTLS between services.
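The default-deny pattern from the first bullet, sketched as two policies -- namespace, CIDR, and ports are placeholders for your own environment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments              # illustrative namespace
spec:
  podSelector: {}                  # all pods in the namespace
  policyTypes: ["Egress"]
  egress: []                       # deny everything by default
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-and-db
  namespace: payments
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:                          # DNS, so lookups still resolve
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
    - to:                          # your EU Postgres, and nothing else
        - ipBlock:
            cidr: 10.20.0.0/24     # placeholder CIDR
      ports:
        - protocol: TCP
          port: 5432
```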

Shared mitigations

Some attack surfaces are split:

eBPF content-matching for common patterns

The eBPF agent can detect common exfiltration patterns in real time -- base64-encoded API keys in outbound write() syscalls, known-shape JWT payloads in HTTP bodies, credential-shaped strings in curl | bash pipelines. This is a safety net, not a primary defence: a determined exfiltrator can split, XOR, or compress their payload past a pattern matcher. Treat it as an augmentation of your SIEM, not a replacement.

CloudTaser catches a known-pattern slice. You catch the rest via your SIEM, your egress monitoring, and your audit-review process.

Logging & audit

CloudTaser emits audit events (secret fetch, fingerprint verification, attestation result, admission decision) you forward to your SIEM. You correlate them against the application-layer logs we can't see (request logs, auth events, outbound API calls) to build the full incident picture.


Threats to CloudTaser itself

Everything above draws the boundary between CloudTaser's scope and yours. But CloudTaser is software too, and anyone evaluating it for a regulated workload should understand the attack surface of the product -- not just the product's claims. This section names each component, the class of compromise, what CloudTaser ships today as mitigation, and what's on the roadmap.

1. Mutating webhook / operator compromise

What an attacker controls if they own the operator image: the operator runs the mutating admission webhook. Every annotated pod's container spec passes through it at admission time. An attacker who substitutes the operator image can inject arbitrary entrypoints into every new pod (not just the cloudtaser-wrapper -- anything: a reverse shell, a log-scraper, a secondary sidecar). Existing pods are unaffected until they restart; new pods are fully under the attacker's control.

What CloudTaser provides today:

  • Cosign-signed operator images published to GAR and GHCR with signatures registered in the public Rekor transparency log (Supply-Chain Evidence).
  • admission.enforceSignature opt-in (chart 1.0.45+) that renders a Kyverno verifyImages rule rejecting any CloudTaser image whose Cosign signature does not verify against the published key. Includes a VAP-rendered fallback that enforces digest-pinned pulls even without Kyverno installed.
  • Minimised operator ServiceAccount RBAC -- the operator's ClusterRole is scoped to exactly the API resources it needs (webhook configurations, the CloudTaser CRDs, pod reads for reconciliation). No broad cluster-admin, no namespace-level catch-alls.
  • CC-substrate compatibility -- running the operator on a confidential-compute node (SEV-SNP, TDX) closes the hypervisor-memory-read path, so the binary in memory cannot be tampered with from the host.

What's on the roadmap:

  • Reproducible builds for the operator so any customer can locally rebuild from a Git SHA and verify the result byte-matches the signed image we publish. Partial today (Go binaries are reproducible with the standard flags; container layer ordering is not fully pinned). Tracker in the cloudtaser-pipeline repo.
  • SLSA level 3 provenance attestation on every image, beyond the Cosign signature. Depends on build-environment isolation work currently underway in the pipeline.
  • Operator runtime integrity via eBPF -- self-monitoring that would detect if the in-memory operator binary is modified post-start. Scoping.

2. Wrapper-as-PID-1 compromise

What an attacker controls if they own the wrapper image: the wrapper runs as PID 1 inside the customer's pod. It fetches secrets from the EU OpenBao into memfd_secret, then exec's the customer application as a child. A compromised wrapper could log the secret material it fetches, exfiltrate to the attacker's endpoint, or swap the customer-app entrypoint.

What CloudTaser provides today:

  • Cosign-signed wrapper image -- same signing flow as the operator, same public key.
  • admission.enforceSignature applies to the wrapper image too -- an unsigned or mis-signed wrapper fails admission.
  • eBPF enforcement on the node prevents the usual exfiltration paths (/proc/PID/mem, /proc/PID/environ, ptrace, process_vm_readv, io_uring, userfaultfd, /dev/mem) from reading the wrapper's memfd. A compromised wrapper can still actively exfiltrate -- egress is the customer's responsibility via NetworkPolicy -- but passive memory-read paths are closed.
  • Init-phase trust chain -- the wrapper's identity is attested during the bootstrap handshake (fingerprint + nonce signed by the beacon); the bridge refuses to release a token to an un-attested fingerprint. A substituted wrapper image won't pass the attestation unless the attacker has also compromised the bridge or the enrollment flow.

What's on the roadmap:

  • Reproducible wrapper builds -- same effort as the operator.
  • Linked from Supply-Chain Evidence -- the cloudtaser-wrapper signing story is part of the common signing pipeline.

3. eBPF DaemonSet compromise

What an attacker controls if they own the eBPF daemonset image: the DaemonSet runs privileged on every node, loads eBPF programs into the kernel, and enforces the runtime memory-protection policy. A compromised daemon can disable enforcement for specific PIDs, selectively whitelist exfiltration paths, or simply exit and leave pods unprotected.

What CloudTaser provides today:

  • Cosign-signed eBPF image -- same pipeline, same key.
  • Admission-policy enforcement of Cosign signatures -- if you enable admission.enforceSignature: true, an unsigned eBPF image cannot land on any node.
  • Minimised capabilities -- the DaemonSet runs with only the capabilities eBPF loading requires (CAP_BPF, CAP_PERFMON, CAP_SYS_ADMIN on older kernels where CAP_BPF isn't yet available), not full privileged root.
  • Audit-log surface -- the daemon emits events for every eBPF attachment and detachment. Feeding these into your SIEM means a silent detach (daemon exiting mid-run) is detectable by absence of the heartbeat, not just by policy violations.

What's on the roadmap:

  • Kernel-side integrity verification -- cryptographically signed eBPF programs with kernel-verified signatures (requires kernels that ship BPF signature verification; not yet universal).
  • DaemonSet spec drift detection -- active watcher on the DaemonSet object itself, alerting if its image, args, or volume mounts are mutated post-install. The right place for this is your GitOps controller (Argo, Flux) with drift alerting enabled on the CloudTaser namespace.

4. Beacon relay compromise

What an attacker controls if they own the beacon relay operator: the beacon is a stateless TCP relay that brokers the P2P vault-to-cluster bridge. Its operator (CloudTaser-the-company in the default public setup; you, if you self-host) could theoretically be compelled (subpoena, national-security letter) to refuse service or tamper with traffic.

Why plaintext exposure is not on the table:

  • mTLS is endpoint-terminated -- the cluster-side bridge and the vault-side bridge establish mTLS through the beacon; the beacon sees only ciphertext TCP bytes. It cannot read secret material in transit, and it cannot MITM because both ends pin each other's certificates (cert-pinning tied to the attestation handshake).
  • Denial-of-service is the realistic compulsion outcome -- a hostile beacon operator can refuse to relay, which takes your cluster offline. It cannot read your secrets.

What CloudTaser provides today:

  • Self-host option -- the beacon is a single-binary relay you can run on your own infrastructure, removing CloudTaser-the-company from the trust path entirely. See cloudtaser-beacon for the deployment.
  • Public-beacon jurisdiction documented -- the public enroll.cloudtaser.io beacon runs in an EU jurisdiction; the Beacon Trust Model page documents where and under whose legal regime.
What's on the roadmap:

  • Multi-beacon failover -- currently a single-beacon deployment is the default.
  • Beacon health-probe metrics -- external liveness checks so a silently-refused-service beacon is detected promptly.
  • Customer-self-hosted beacon quickstart -- one-command deploy of a sovereign beacon on the customer's substrate.

5. OpenBao compromise

What an attacker controls if they own your OpenBao: everything. The KEK lives in OpenBao's Transit engine; the secret material lives in OpenBao's KV engine; the admin tokens that mint wrapper identities live in OpenBao's token store. A compromised OpenBao is a game-over event for the sovereignty claim.

Why this is explicitly "your problem": OpenBao is the sovereign substrate. CloudTaser delegates to it intentionally because the trust-shift (from US hyperscaler to customer-owned OpenBao) is the value proposition. CloudTaser cannot mitigate an OpenBao compromise for you -- it can only make sure the rest of the architecture doesn't amplify it.

What CloudTaser recommends:

  • Hardware-backed unseal -- Shamir with offline custodians (OpenBao's default when you initialise without auto-unseal), or auto-unseal via a CC-attested KMS (confidential-compute node running your KMS substrate).
  • Sovereign substrate for OpenBao -- not AWS Frankfurt. See Sovereign Deployment Decision Guide for the hosting-selection decision tree.
  • Audit log forwarding -- every OpenBao read/write forwarded to a SIEM you control, with alerts on anomalous token-creation patterns.
  • Root token destruction -- after initial setup, the root token should be destroyed; all admin access goes through named-identity tokens with limited policies and short TTLs.
  • Quorum operations -- production changes to OpenBao policy or engine mounts should require two-person quorum via the Sentinel / policy-approval flow.
  • Backup encryption -- OpenBao snapshots are encrypted with a separate backup key held offline (paper, HSM). A backup accidentally left in a US bucket unencrypted would recreate the compromise you're trying to avoid.

What CloudTaser provides today:

  • Bridge auth uses its own vault token -- the bridge authenticates admin-proxy requests with its own OpenBao token rather than requiring per-call provisioner tokens in the request path. This reduces the number of high-privilege tokens the customer has to manage.
  • CloudTaserConfig CRD enforces a named Transit key per engine -- so misuse ("use the root KEK for everything") is harder to fall into by default.

What's on the roadmap:

  • Opinionated sovereign-OpenBao Helm chart -- deploy OpenBao with sovereign defaults (Shamir, audit-forwarding, CC node affinity) in one command.
  • OpenBao HSM-unseal quickstart -- documented integration with Nitrokey HSM, YubiHSM, and other EU-sovereign HSM vendors.

Putting the threat model together

Five components, five classes of compromise, five mitigation sets. The combined story:

  • Supply chain is addressed via Cosign signing + admission enforcement. That's the common mitigation across components 1-3. Turn on admission.enforceSignature: true in every production cluster and you've closed the "attacker substitutes our image" vector.
  • Runtime compromise of any one CloudTaser component (operator, wrapper, eBPF daemon) is blast-radius-limited by the minimised RBAC and the eBPF enforcement on the node. Adding CC-substrate nodes closes the remaining hypervisor-memory-read path.
  • Beacon compromise is bounded to denial-of-service by the end-to-end mTLS. Self-hosting the beacon removes CloudTaser-the-company from the trust path.
  • OpenBao compromise is yours to prevent. It's the substrate; CloudTaser delegates to it. Harden it, forward its audit logs, hold its unseal keys offline, run it on sovereign infra.

CloudTaser is neither magic nor opaque. Every one of the five compromise classes has a named mitigation today and a tracker on the roadmap for the gaps. If a would-be customer's threat model demands stronger guarantees than what this section lists, that's a legitimate conversation to have -- not something to paper over.



NOT a substitute for

CloudTaser complements but does not replace:

  • Code review -- a human reading the diff that added log.Debug("token=%s", t).
  • Data Loss Prevention (DLP) tooling -- Forcepoint, Symantec, native AWS Macie / GCP DLP.
  • Dynamic application security testing (DAST) -- OWASP ZAP, Burp, Nuclei templates.
  • Static analysis (SAST) -- Semgrep, CodeQL, Checkmarx, gosec.
  • Threat modelling -- STRIDE, PASTA, attack-tree exercises before code is shipped.
  • Security training -- your engineers still need to know what a secret is and what not to do with one.

CloudTaser closes the specific gap between a compliant secret source (EU OpenBao) and runtime use (in-guest memory), plus the object-storage encryption gap. It does not claim to solve application security. Pretending otherwise is how you end up with a GDPR-compliant pipeline feeding plaintext PII into a Slack webhook.