kubectl exec into a wrapped pod — what you see, what you don't¶
CloudTaser changes what ordinary Kubernetes debug commands reveal about a pod's runtime state. This page documents the exact behaviour so SREs, platform engineers, and security reviewers can:
- Understand what kubectl exec, kubectl debug, kubectl attach, and kubectl logs actually expose against a wrapped pod.
- Plan debugging workflows that don't require reaching the secret material.
- Verify the eBPF enforcement policy works as documented against an in-container adversary (not just external compromise).
Everything below applies to pods injected by the CloudTaser operator (annotation cloudtaser.io/inject: "true"). Pods without the injection annotation see stock Kubernetes behaviour.
Related pages
- Sovereign Deployment Decision Guide — the substrate decisions that make the guarantees below meaningful.
- eBPF Runtime Enforcement — authoritative page on the 23 vectors and their kernel requirements.
- Wrapper Design — how the wrapper delivers secrets via execve() env-var inheritance.
Two things to separate in your head¶
Most confusion here comes from conflating two distinct mechanisms:
- Env-var inheritance via execve(). When the wrapper (PID 1) fork+execs the application, the Unix execve() syscall carries the wrapper's environment — including the secret values — into the child process's initial envp[]. That environment is then readable via /proc/<child_pid>/environ for the lifetime of the child.
- Env for new processes created via the container runtime. kubectl exec, kubectl debug, and ephemeral containers all go through the CRI (the ExecSync / Exec APIs exposed by containerd / CRI-O). The runtime spawns a new process in the container's PID namespace using the container spec's env / envFrom — not a copy of any currently running process's environment. The new process inherits no env from PID 1, no env from the app child, nothing beyond what the PodSpec declared at admission time.
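The distinction can be demonstrated outside Kubernetes with plain Linux processes. A minimal Python sketch (Linux-only; APP_SECRET is a placeholder variable for illustration, not a CloudTaser name):

```python
import os
import subprocess
import time

# Fork+exec a child with an extra env var -- the analogue of the
# wrapper handing secrets to the app via execve().
child = subprocess.Popen(
    ["sleep", "30"],
    env={**os.environ, "APP_SECRET": "hunter2"},  # placeholder secret
)
time.sleep(0.2)  # give execve() time to complete

# Mechanism 1: the inherited environment is visible in the child's
# /proc/<pid>/environ -- exactly the read the eBPF layer blocks.
with open(f"/proc/{child.pid}/environ", "rb") as f:
    entries = f.read().split(b"\0")
assert b"APP_SECRET=hunter2" in entries

# Mechanism 2: a *new* process (the kubectl-exec analogue) gets only
# the env it was explicitly started with -- nothing from its sibling.
out = subprocess.run(
    ["env"], env={"PATH": os.environ["PATH"]},
    capture_output=True, text=True,
).stdout
assert "APP_SECRET" not in out

child.kill()
```

Both assertions pass on a stock kernel: the first read succeeds precisely because nothing is enforcing it, which is the gap the kprobes close for monitored PIDs.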
The CloudTaser operator's mutating webhook rewrites the container spec's env to include configuration variables — CLOUDTASER_ORIGINAL_CMD, CLOUDTASER_SECRET_PATHS, VAULT_ADDR, CLOUDTASER_ENV_MAP, etc. — but not any secret values. Secrets are fetched at runtime by the wrapper and live only in the wrapper's (then the app's) own environment, reachable only through /proc/<pid>/environ.
This means a kubectl exec'd shell sees the routing metadata (where OpenBao lives, which paths are fetched, which env-var names the app expects) but not the secret values.
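Illustratively, the post-mutation container spec looks something like the fragment below. The field values and the wrapper path are hypothetical; the real webhook output may differ:

```yaml
# Hypothetical mutated container spec fragment -- routing metadata
# only, no secret values.
containers:
  - name: postgres
    image: postgres:16
    command: ["/cloudtaser/wrapper"]   # assumed wrapper entrypoint path
    env:
      - name: CLOUDTASER_ORIGINAL_CMD
        value: docker-entrypoint.sh
      - name: CLOUDTASER_ORIGINAL_ARGS
        value: postgres
      - name: CLOUDTASER_SECRET_PATHS
        value: secret/data/demo/postgres-credentials
      - name: CLOUDTASER_ENV_MAP
        value: password=PGPASSWORD,username=PGUSER
      - name: VAULT_ADDR
        value: https://vault.eu.example.com:8200
      # No PGPASSWORD or PGUSER entries -- the wrapper fetches those
      # at runtime and they never enter the PodSpec.
```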
The four debug commands, one at a time¶
kubectl exec <pod> -- env¶
Shows: the container spec env — CLOUDTASER_* config variables, VAULT_ADDR, PATH, HOSTNAME, standard k8s service discovery variables, plus any non-secret env the PodSpec declared.
Does not show: any secret value fetched by the wrapper. The secrets were never in the PodSpec — they exist only in the running app's environ.
Example output fragment:
$ kubectl exec postgres-demo -- env
CLOUDTASER_ORIGINAL_CMD=docker-entrypoint.sh
CLOUDTASER_ORIGINAL_ARGS=postgres
CLOUDTASER_SECRET_PATHS=secret/data/demo/postgres-credentials
CLOUDTASER_ENV_MAP=password=PGPASSWORD,username=PGUSER
VAULT_ADDR=https://vault.eu.example.com:8200
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=postgres-demo
# NO PGPASSWORD here. NO PGUSER value here.
An attacker with exec permissions learns the plumbing (which vault, which paths, which env-var names the app wants) but not the keys.
kubectl exec <pod> -- cat /proc/1/environ¶
This is the interesting one. The running wrapper process (PID 1) has the secrets in its environ (briefly, before it strips the CLOUDTASER_* control vars and fork+execs the app). The app child has the secrets in its environ for the lifetime of the process.
Reading /proc/<pid>/environ of a monitored PID is blocked by the eBPF enforcement layer. The exact behaviour depends on the kernel:
On kernels with CONFIG_BPF_KPROBE_OVERRIDE=y (the expected production substrate — current GKE COS, current EKS AL2023, current AKS Ubuntu 22.04+, current Bottlerocket, current Flatcar):
$ kubectl exec postgres-demo -- cat /proc/1/environ
cat: /proc/1/environ: Permission denied
command terminated with exit code 1
The openat("/proc/1/environ", O_RDONLY) syscall is intercepted by a kprobe before it executes. The eBPF program checks that the calling PID is inside a protected container and that the target PID is a wrapper-monitored process, then returns -EACCES synchronously. The file is never opened; no data is returned. The event is logged as ENVIRON_READ in the audit stream.
On kernels without kprobe-override (older node images, e.g. kernels < 5.7 without the build flag), the eBPF layer falls back to reactive-kill mode: the openat() succeeds, the read attempt trips a tracepoint, and the calling process receives SIGKILL before the read() syscall completes.
The theoretical gap — between openat() returning an FD and the read() executing — is named explicitly in the eBPF enforcement page. In practice the SIGKILL delivers before read() returns, but this is a weaker guarantee than synchronous blocking. If you're on a regulated workload, run on a kernel with kprobe-override.
The same enforcement applies to the app's PID (not just PID 1), every /proc/<pid>/mem read, process_vm_readv, ptrace, /proc/<pid>/maps, and the other 17 vectors. See the full enforcement table.
kubectl exec <pod> -- /bin/sh (interactive shell)¶
Shows: exactly the same env as kubectl exec -- env — container-spec variables only. No secrets.
Caveats for an SRE in the shell:
- ps -ef works and shows all processes (wrapper, app, shell). No secret material leaks from ps — command-line args from the PodSpec are visible, but secrets don't go through args, they go through env.
- ls /proc/<pid>/ works for listing directory contents.
- Attempting to cat any of environ, mem, maps, smaps, pagemap of the wrapper or app is blocked as described above.
- Attempting to attach a debugger (gdb -p <pid>, strace -p <pid>) is blocked via the ptrace kprobe (vector #8).
- ss -tnp, ss -unp, netstat -tnp work — you can see that the wrapper has an outbound mTLS connection to the beacon, but you cannot inspect the connection contents from inside the pod (TLS).
- kill -TERM 1 will terminate the wrapper, which propagates to the app and ends the pod. This is not a secret-extraction path.
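The args-vs-env point can be checked on any Linux box: /proc/<pid>/cmdline is what ps renders and never carries env values, while environ is the file the kprobes guard. A sketch with no CloudTaser involved (DEMO_VALUE is a placeholder):

```python
import subprocess
import time

# Start a process whose argv is public but whose env carries a value.
child = subprocess.Popen(
    ["sleep", "30"],
    env={"PATH": "/usr/bin:/bin", "DEMO_VALUE": "sensitive"},
)
time.sleep(0.2)  # give execve() time to complete

# /proc/<pid>/cmdline: what ps -ef shows -- argv only, no env.
with open(f"/proc/{child.pid}/cmdline", "rb") as f:
    argv = f.read().split(b"\0")
assert argv[:2] == [b"sleep", b"30"]
assert not any(b"sensitive" in a for a in argv)

# /proc/<pid>/environ: where the value actually lives -- and the
# read that enforcement blocks when the target is a monitored PID.
with open(f"/proc/{child.pid}/environ", "rb") as f:
    environ = f.read()
assert b"DEMO_VALUE=sensitive" in environ

child.kill()
```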
kubectl attach <pod>¶
Shows: the stdin/stdout/stderr of the main container's PID 1 (the wrapper) — which, after fork+exec, is wired to the application's stdio. Equivalent to kubectl logs -f <pod> plus stdin write access.
Secrets exposure: none, unless the application itself writes a secret to stdout or stderr (an application-level bug caught by eBPF content-matching on write / sendto, vectors #9–14).
kubectl logs <pod>¶
Shows: captured stdout/stderr of the main container.
Secrets exposure: none unless the application logs a secret. If it does:
- The eBPF layer content-matches on write() and sendto() against known fetched secret values (vector #9). On a match, the write is blocked (kprobe-override) or the process is killed (reactive-kill).
- Content-matching is not a substitute for not logging secrets — it catches exact matches of known strings, not encodings or hashes. Treat the eBPF catch as a backstop, not a policy.
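Why the backstop only catches exact matches is visible in a toy version of the check (a userspace sketch for illustration, not the actual eBPF program):

```python
import base64
import hashlib

# Exact-substring match against known fetched secrets -- the shape of
# the vector #9 check, reimplemented in Python for demonstration.
def matches_known_secret(buf: bytes, secrets: set) -> bool:
    return any(s in buf for s in secrets)

known = {b"hunter2"}  # placeholder fetched secret value

# Caught: the literal secret appears in the write buffer.
assert matches_known_secret(b"ERROR: password=hunter2 rejected\n", known)

# Missed: a base64-encoded copy of the same secret.
assert not matches_known_secret(base64.b64encode(b"hunter2"), known)

# Missed: a hash of the secret.
digest = hashlib.sha256(b"hunter2").hexdigest().encode()
assert not matches_known_secret(digest, known)
```

An application that logs a transformed secret walks straight past exact matching, which is why the page treats the catch as a backstop rather than a policy.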
Ephemeral containers and kubectl debug¶
Ephemeral containers (GA in Kubernetes 1.25+) and kubectl debug share the same runtime path as kubectl exec: the new container is started via CRI with its own spec env, and the new processes run inside the pod's PID namespace.
What this means for the secret:
- The ephemeral container's processes see their own container spec env — which does not contain any CloudTaser configuration unless explicitly declared in the debug spec. They do not inherit anything from the main container's wrapper or app.
- Attempts to read /proc/<app_pid>/environ or /proc/<app_pid>/mem from inside the ephemeral container are blocked by the same eBPF kprobes — the enforcement key is the target PID (the monitored app), not the source.
- Attempts to ptrace or process_vm_readv against the monitored PIDs are blocked.
- The ephemeral container can read its own memory, run diagnostic tools, and operate inside the pod's network namespace — everything you'd want for debugging except the secret material itself.
This is the intended security boundary. If you find a path where an ephemeral container can read secrets from a monitored PID on a kprobe-override kernel, file it as a security issue — we want to know.
The debuggability trade-off — named explicitly¶
You cannot inspect, via any standard Kubernetes mechanism, what secret values a wrapped app currently holds. That's the point. It's also the primary debuggability regression vs. a traditional secrets workflow (Vault Agent writing to a tmpfs file, K8s Secrets mounted as env).
Traditional debug flow: "The app can't connect to the database. Let me kubectl exec in and echo $PGPASSWORD."
CloudTaser debug flow: same command returns empty. You need a different path:
- Check the secret was fetched. The wrapper's stdout (captured in kubectl logs) reports each fetch by path, with success or an error. If the wrapper couldn't reach OpenBao, this is where you see it. No secret material is logged — only the path and success/failure.
- Check the secret contents in OpenBao directly. If you have OpenBao access and the relevant policy, vault kv get secret/data/<path> or bao kv get against the source of truth is the authoritative answer. The wrapper reads what OpenBao returns; if what OpenBao has is wrong, that's where you fix it.
- Check the rotation state. Secrets rotate on lease renewal or explicit rotation. Check OpenBao's lease view (vault token lookup, vault list sys/leases/lookup/...) to understand whether the wrapper has an active lease and when it last renewed.
- Check wrapper audit logs. The wrapper emits structured logs for every authentication, secret fetch, lease renewal, and rotation event. These land in kubectl logs and, if configured, in your log aggregator.
- Use OpenBao audit logging. OpenBao's audit backend logs every secret read with a timestamp, the requester's identity (the pod's ServiceAccount token), and the path. This is the authoritative answer to "did this pod fetch this secret at this time?" — data the eBPF layer doesn't have.
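The first step above is easy to script against captured logs. A sketch assuming hypothetical JSON field names (the real wrapper log schema may differ); the point is that fetch health is diagnosable from logs that never contain secret values:

```python
import json

# Hypothetical wrapper log lines, as they might appear in kubectl logs.
lines = [
    '{"event":"secret_fetch","path":"secret/data/demo/postgres-credentials","status":"ok"}',
    '{"event":"lease_renew","path":"secret/data/demo/postgres-credentials","status":"ok"}',
]

def failed_fetches(log_lines):
    """Return the secret paths whose fetch did not succeed."""
    events = (json.loads(l) for l in log_lines)
    return [e["path"] for e in events
            if e.get("event") == "secret_fetch" and e.get("status") != "ok"]

# An empty list means every declared path was fetched successfully --
# the credential problem, if any, is upstream in OpenBao.
assert failed_fetches(lines) == []
```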
What you cannot do:
- Read the secret value from within the pod. By design.
- Attach a debugger to the app and read its memory. Blocked.
- Inspect the wrapper's memfd_secret pages. Physically unmapped from the kernel's direct map — invisible to root, kernel modules, and eBPF itself. No SRE bypass exists.
This trade-off is the one that makes the sovereignty claim hold. "If the SRE can see it, so can a compromised SRE credential, which is the attack path we're closing." CloudTaser deliberately chooses that side of the trade-off.
The specific debuggability improvements we recommend on your side¶
If your operations team is used to the "read the env var to diagnose" workflow, plan for these adjustments:
- Standardize on OpenBao audit logs as the source of truth for "did this pod receive a secret". OpenBao's audit backend is free, open-source, and produces exactly the structured data your incident-response process needs.
- Write operational runbooks that don't require reading secret values. Verify the secret's plumbing (OpenBao has it, the wrapper fetched it, the fetch event appears in the logs) rather than the content. Most bad-credential incidents are fixable from plumbing state alone.
- If content inspection is genuinely necessary (e.g., cross-referencing the exact value an app saw against the source of truth), do it on the OpenBao side with a signed audit request — not from inside the cluster. The principle: if a human needs to see a secret value, that access belongs in OpenBao's audit trail, not in a pod's /proc/<pid>/environ.
- For pre-production or staging clusters, consider running pods without the injection annotation so traditional env-var debugging is available. Production clusters keep the annotation on and accept the trade-off.
What a security reviewer should check¶
If you're auditing a CloudTaser deployment, these are the specific assertions to verify against a live cluster:
- [ ] kubectl exec <wrapped-pod> -- cat /proc/1/environ returns EACCES (or the calling process is killed).
- [ ] kubectl exec <wrapped-pod> -- cat /proc/<app-pid>/environ likewise returns EACCES (PID 1 is the wrapper; the app has a different PID).
- [ ] kubectl exec <wrapped-pod> -- cat /proc/1/mem returns EACCES.
- [ ] kubectl exec <wrapped-pod> -- strace -p 1 fails with ptrace: Operation not permitted, or the strace process is killed.
- [ ] kubectl exec <wrapped-pod> -- env shows only container-spec variables, no secret values.
- [ ] The eBPF agent's event log (kubectl logs -n cloudtaser-system cloudtaser-ebpf-<...>) records an ENVIRON_READ event for each attempt above.
- [ ] kubectl debug <wrapped-pod> --image=busybox -- /bin/sh into an ephemeral container, then all of the above — same results.
- [ ] The node's kernel was built with CONFIG_BPF_KPROBE_OVERRIDE=y: grep BPF_KPROBE_OVERRIDE /boot/config-$(uname -r), or check the eBPF agent's startup log for the self-verification line. If the kernel doesn't have it, enforcement is reactive-kill — acceptable for many workloads, weaker for regulated ones.
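The last item can be checked mechanically. A small sketch that parses a kernel config dump — feed it the contents of /boot/config-$(uname -r), or a decompressed /proc/config.gz where available:

```python
def has_kprobe_override(config_text: str) -> bool:
    """True if the config enables CONFIG_BPF_KPROBE_OVERRIDE.

    Kernel configs disable options with a comment line
    ("# CONFIG_... is not set"), so an exact line match is required
    rather than a substring search.
    """
    return any(line.strip() == "CONFIG_BPF_KPROBE_OVERRIDE=y"
               for line in config_text.splitlines())

# Enabled: synchronous blocking (kprobe-override) is available.
assert has_kprobe_override("CONFIG_BPF=y\nCONFIG_BPF_KPROBE_OVERRIDE=y\n")

# Disabled: a substring check would wrongly accept this line.
assert not has_kprobe_override("# CONFIG_BPF_KPROBE_OVERRIDE is not set\n")
```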
If any of the above don't hold, open an issue on cloudtaser-ebpf.
Related¶
- eBPF Runtime Enforcement — the 23 vectors in detail, event types, kernel-config requirements.
- Sovereign Deployment Decision Guide — the substrate decisions that determine whether the guarantees above map to a regulatory posture or just guest-root protection.
- Wrapper Design — the exact fork+exec chain and how the env-var handoff works.
- Security Model — threat-model framing, including the "kill the daemon first" chain.
- Operational Readiness — blast radius, failure modes, SLA, backout.