GKE Deployment Guide¶
This guide covers deploying CloudTaser on GKE with the recommended configuration for maximum protection. With Ubuntu nodes and Confidential Computing enabled, you can achieve the full 115/115 protection score.
Why GKE Ubuntu + Confidential Nodes¶
GKE offers two Linux node image types: Container-Optimized OS (COS) and Ubuntu. For CloudTaser, Ubuntu is the recommended choice:
| Feature | Ubuntu | COS |
|---|---|---|
| `CONFIG_BPF_KPROBE_OVERRIDE` | Yes | No |
| `memfd_secret` (kernel 5.14+) | Yes | Yes |
| Synchronous syscall blocking | Yes | Reactive kill only |
| Full eBPF enforcement | 36 kprobes | 21 tracepoints (fallback) |
Ubuntu gives you kprobes. The CONFIG_BPF_KPROBE_OVERRIDE=y kernel config option enables synchronous syscall blocking -- the eBPF agent can abort a syscall before it completes. The COS kernel is not built with this option, so the agent falls back to reactive kill (SIGKILL after detection), which leaves a theoretical gap between syscall execution and process termination.
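You can spot-check this on a running node with a node debug pod (a minimal sketch; <node-name> is a placeholder and the config file path assumes an Ubuntu node image):
# Read the node's kernel config from a debug pod (illustrative)
kubectl debug node/<node-name> -it --image=ubuntu -- \
sh -c 'grep CONFIG_BPF_KPROBE_OVERRIDE /host/boot/config-$(uname -r)'
# Expected on Ubuntu nodes: CONFIG_BPF_KPROBE_OVERRIDE=y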
Confidential nodes give you hardware memory encryption. GKE Confidential Nodes use AMD SEV-SNP to encrypt VM memory at the hardware level. The hypervisor and cloud provider cannot read the memory contents. This closes the last remaining attack surface after all software protections are in place.
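You can confirm that SEV is active on a node from the guest kernel log (illustrative; requires SSH to the node or a privileged debug pod, and the exact message varies by kernel version):
# On a confidential node, the guest kernel reports the active AMD memory encryption features
sudo dmesg | grep -i sev
# Expect output similar to: AMD Memory Encryption Features active: SEV SEV-ES SEV-SNP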
Step 1: Create the GKE Cluster¶
Create a cluster with Ubuntu nodes and Confidential Computing:
gcloud container clusters create cloudtaser-prod \
--region europe-west4 \
--num-nodes 3 \
--image-type UBUNTU_CONTAINERD \
--enable-confidential-nodes \
--machine-type n2d-standard-2 \
--workload-pool "$(gcloud config get-value project).svc.id.goog" \
--release-channel regular
Key flags:
| Flag | Purpose |
|---|---|
| `--image-type UBUNTU_CONTAINERD` | Ubuntu nodes with kprobe override support |
| `--enable-confidential-nodes` | AMD SEV-SNP memory encryption on all nodes |
| `--machine-type n2d-standard-2` | N2D instances required for Confidential Computing (AMD EPYC) |
| `--workload-pool` | Workload Identity for GCP service account binding |
| `--region europe-west4` | EU region for data residency |
N2D machine type required
Confidential Computing on GKE requires N2D (AMD EPYC) instances. Other machine families (N2, E2, C3) do not support AMD SEV-SNP.
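After creation, you can read the setting back from the cluster (a sketch; the field name follows the current GKE API):
# Confirm Confidential Nodes are enabled at the cluster level
gcloud container clusters describe cloudtaser-prod \
--region europe-west4 \
--format="value(confidentialNodes.enabled)"
# Expected output: True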
Step 2: Connect the Cluster to Your Vault¶
Use the CloudTaser CLI to configure Kubernetes auth on your EU-hosted vault:
# Connect to the cluster
gcloud container clusters get-credentials cloudtaser-prod --region europe-west4
# Connect the cluster to your vault
cloudtaser target connect \
--vault-address https://vault.eu.example.com \
--vault-token hvs.YOUR_ROOT_TOKEN \
--auth-path kubernetes/gke-prod
This configures the vault's Kubernetes auth method to accept ServiceAccount JWTs from the GKE cluster.
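Under the hood this is roughly equivalent to enabling and configuring the auth method with the OpenBao CLI yourself (a sketch with placeholder endpoint and CA file, not the exact commands CloudTaser runs):
# Roughly what the connect step configures on the vault (illustrative values)
bao auth enable -path=kubernetes/gke-prod kubernetes
bao write auth/kubernetes/gke-prod/config \
kubernetes_host="https://<gke-cluster-endpoint>" \
kubernetes_ca_cert=@gke-cluster-ca.crt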
Step 3: Install CloudTaser¶
Install the operator and eBPF daemonset via Helm:
helm install cloudtaser oci://europe-west4-docker.pkg.dev/skipopsmain/cloudtaser/cloudtaser \
--namespace cloudtaser-system \
--create-namespace \
--set operator.vaultAddress=https://vault.eu.example.com \
--set ebpf.enabled=true \
--set ebpf.enforceMode=true
Or use the CLI:
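cloudtaser target install \
--vault-address https://vault.eu.example.com \
--ebpf --enforce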
Verify the installation:
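kubectl get pods -n cloudtaser-system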
Expected: operator and eBPF daemonset pods in Running state.
Step 4: Deploy a Protected Workload¶
Annotate your deployment with CloudTaser annotations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      annotations:
        cloudtaser.io/inject: "true"
        cloudtaser.io/ebpf: "true"
        cloudtaser.io/vault-address: "https://vault.eu.example.com"
        cloudtaser.io/vault-role: "cloudtaser"
        cloudtaser.io/secret-paths: "secret/data/myapp/config"
        cloudtaser.io/env-map: "db_password=PGPASSWORD,api_key=API_KEY"
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:v1.2.3
Annotation Reference¶
| Annotation | Required | Description |
|---|---|---|
| `cloudtaser.io/inject` | Yes | Enables CloudTaser injection (`"true"`) |
| `cloudtaser.io/ebpf` | No | Enables eBPF runtime enforcement (`"true"`) |
| `cloudtaser.io/vault-address` | Yes | URL of the EU-hosted vault |
| `cloudtaser.io/vault-role` | Yes | Vault Kubernetes auth role name |
| `cloudtaser.io/secret-paths` | Yes | Comma-separated vault secret paths |
| `cloudtaser.io/env-map` | Yes | Maps vault fields to environment variable names |
Apply the deployment:
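# File name is illustrative -- save the manifest above and apply it
kubectl apply -f myapp-deployment.yaml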
Step 5: Verify the Protection Score¶
Check the wrapper logs to confirm the protection score:
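# The wrapper writes to the workload's log stream with a [cloudtaser-wrapper] prefix
kubectl logs deploy/myapp -n production | grep cloudtaser-wrapper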
With Ubuntu + Confidential Nodes + eBPF enforcement, you should see:
[cloudtaser-wrapper] Protection score: 115/115
[cloudtaser-wrapper] memfd_secret: OK (+15)
[cloudtaser-wrapper] mlock: OK (+10)
[cloudtaser-wrapper] core_dump_exclusion: OK (+5)
[cloudtaser-wrapper] dumpable_disabled: OK (+5)
[cloudtaser-wrapper] token_protected: OK (+10)
[cloudtaser-wrapper] environ_scrubbed: OK (+5)
[cloudtaser-wrapper] getenv_interposer: OK (+10)
[cloudtaser-wrapper] ebpf_agent_connected: OK (+10)
[cloudtaser-wrapper] cpu_mitigations: OK (+5)
[cloudtaser-wrapper] ebpf_enforce_mode: OK (+15)
[cloudtaser-wrapper] ebpf_kprobes: OK (+15)
[cloudtaser-wrapper] confidential_vm: OK (+10)
No nodeSelector Needed¶
When all nodes in the cluster have the cloud.google.com/gke-confidential-nodes=true label (which they do when --enable-confidential-nodes is set at cluster creation), the operator auto-detects confidential node support. There is no need to add a nodeSelector to your workloads.
If you have a mixed cluster with both confidential and non-confidential node pools, the operator still detects the capability per-node and reports the confidential_vm check accordingly.
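To audit which nodes carry the label in a mixed cluster:
# Show the confidential-nodes label as a column next to each node
kubectl get nodes -L cloud.google.com/gke-confidential-nodes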
Container Image Requirements¶
The getenv_interposer check (10 points) requires a glibc-based container image. The LD_PRELOAD interposer that blocks getenv() from returning secrets on the heap does not work with musl or statically linked binaries.
| Base Image | getenv_interposer | Recommendation |
|---|---|---|
| Debian / Ubuntu | Supported | Recommended |
| Red Hat / Fedora | Supported | Recommended |
| Alpine (musl) | Not supported | Use debian-slim instead |
| Distroless (glibc) | Supported | Works |
| Distroless (static) | Not supported | Use glibc variant |
| Scratch (static binary) | Not supported | Fallback to env injection |
Switch from Alpine to Debian slim
If your application uses alpine as the base image, switch to debian:bookworm-slim or ubuntu:24.04 for a comparably small footprint with glibc support. This enables the getenv interposer and adds 10 points to your protection score.
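If you are unsure which libc an image ships, a quick heuristic (assuming the image contains ldd; the image reference is illustrative):
# glibc's ldd prints a GNU libc/GLIBC version banner; musl's ldd identifies itself as musl.
# Images without ldd (distroless, scratch) fail here -- inspect the base image documentation instead.
docker run --rm --entrypoint ldd myorg/myapp:v1.2.3 --version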
Full Workflow Example¶
End-to-end deployment from scratch:
# 1. Create the cluster
gcloud container clusters create cloudtaser-prod \
--region europe-west4 \
--num-nodes 3 \
--image-type UBUNTU_CONTAINERD \
--enable-confidential-nodes \
--machine-type n2d-standard-2 \
--workload-pool "$(gcloud config get-value project).svc.id.goog"
# 2. Get credentials
gcloud container clusters get-credentials cloudtaser-prod --region europe-west4
# 3. Connect to vault
cloudtaser target connect \
--vault-address https://vault.eu.example.com \
--vault-token hvs.YOUR_ROOT_TOKEN
# 4. Install CloudTaser
cloudtaser target install \
--vault-address https://vault.eu.example.com \
--ebpf --enforce
# 5. Discover workloads and generate migration plan
cloudtaser target discover -o plan.yaml
# 6. Apply plan to vault (provision policies and roles)
cloudtaser source apply-plan plan.yaml \
--openbao-addr https://vault.eu.example.com \
--token hvs.YOUR_ROOT_TOKEN
# 7. Populate secrets in vault
bao kv put secret/myapp/config db_password=supersecret api_key=sk-live-xxx
# 8. Verify secrets exist
cloudtaser source verify-plan plan.yaml \
--openbao-addr https://vault.eu.example.com \
--token hvs.YOUR_ROOT_TOKEN
# 9. Migrate workloads
cloudtaser target protect --plan plan.yaml \
--vault-address https://vault.eu.example.com \
--interactive
# 10. Verify protection scores
cloudtaser target status --namespace production
Troubleshooting¶
| Symptom | Cause | Fix |
|---|---|---|
| `confidential_vm: FAIL` | Non-N2D machine type | Recreate node pool with `--machine-type n2d-standard-2 --enable-confidential-nodes` |
| `ebpf_kprobes: FAIL` | COS node image | Recreate node pool with `--image-type UBUNTU_CONTAINERD` |
| `getenv_interposer: FAIL` | Alpine or musl-based image | Switch to a debian/ubuntu-based container image |
| `ebpf_agent_connected: FAIL` | eBPF daemonset not running | Check `kubectl get ds -n cloudtaser-system` |
| `ebpf_enforce_mode: FAIL` | Enforce mode not enabled | Set `ebpf.enforceMode=true` in Helm values |
Related¶
- Protection Score Reference -- all 12 checks explained
- Reverse-Connect Architecture -- deploying without exposing your vault
- Kubernetes Compatibility -- full distribution matrix
- Enterprise Deployment -- multi-cluster topology