S3 Encryption Proxy Installation¶
The CloudTaser S3 proxy is a sidecar container that provides client-side encryption for S3-compatible object storage. It intercepts S3 API calls on localhost:8190, encrypts object bodies using envelope encryption with an EU-hosted Vault Transit key, and forwards requests to the upstream storage endpoint. Data is encrypted before it leaves the pod -- the cloud provider never sees plaintext object contents.
Purpose¶
While CloudTaser's core components protect secrets in process memory, many workloads also store sensitive data in object storage (S3, GCS, Azure Blob). The S3 proxy extends data sovereignty to data at rest:
- Client-side encryption -- Objects are encrypted inside the pod before upload; the cloud provider stores only ciphertext
- EU key control -- Encryption keys are managed by the EU-hosted Vault Transit engine; the cloud provider never holds decryption keys
- Transparent to applications -- The proxy runs as a sidecar; applications point their S3 SDK to `localhost:8190` and the proxy handles encryption/decryption transparently
- Envelope encryption -- Each object gets a unique data encryption key (DEK), wrapped by the Vault Transit key (KEK)
S3-compatible
The proxy works with any S3-compatible endpoint: AWS S3, GCS (S3-compatible API), MinIO, Ceph, and others.
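Conceptually, the envelope scheme maps onto Vault Transit's data-key API. The sketch below shows the two Transit calls involved (key name taken from the setup section later on this page; the proxy's actual wire format is documented in S3 Proxy Protocol):

```shell
# Ask Transit for a fresh data key: the response contains the DEK in
# plaintext (base64) and the same DEK wrapped by the cloudtaser-s3 KEK.
vault write -f transit/datakey/plaintext/cloudtaser-s3

# The object body is encrypted locally with the plaintext DEK (AES-256-GCM);
# only the wrapped DEK is stored alongside the ciphertext object.
# To read the object later, unwrap the stored DEK first:
# vault write transit/decrypt/cloudtaser-s3 ciphertext=<wrapped-DEK>
```

The plaintext DEK never leaves the pod, and the KEK never leaves the EU-hosted vault.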
Prerequisites¶
Before enabling the S3 proxy, ensure:
- CloudTaser operator is installed -- The S3 proxy sidecar is injected by the operator's mutating webhook. See Operator Installation.
- Vault Transit engine is configured -- The proxy requires an EU-hosted Vault Transit secret engine with at least one encryption key.
- Upstream S3 credentials -- The proxy needs credentials to access the upstream S3-compatible endpoint. See S3 Proxy Credentials for detailed credential scenarios.
Vault Transit Engine Setup¶
The S3 proxy uses Vault's Transit secret engine for envelope encryption. Configure it on your EU-hosted vault:
# Enable Transit secret engine
vault secrets enable -path=transit transit
# Create an encryption key for S3 proxy
vault write -f transit/keys/cloudtaser-s3 \
type=aes256-gcm96
Key location matters
The Transit key must be hosted on your EU vault instance. This is what ensures that the cloud provider cannot decrypt your stored data -- the key never leaves EU jurisdiction.
Grant the CloudTaser vault role access to the Transit key:
# Create policy for S3 proxy Transit operations
vault policy write cloudtaser-s3-proxy - <<EOF
path "transit/encrypt/cloudtaser-s3" {
capabilities = ["update"]
}
path "transit/decrypt/cloudtaser-s3" {
capabilities = ["update"]
}
EOF
# Attach policy to the CloudTaser Kubernetes auth role
vault write auth/kubernetes/role/cloudtaser \
bound_service_account_names="*" \
bound_service_account_namespaces="*" \
policies="cloudtaser,cloudtaser-s3-proxy" \
ttl=1h
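Before moving on, it can be useful to smoke-test the key and policy paths by round-tripping a string through the Transit engine (assumes `VAULT_ADDR`/`VAULT_TOKEN` point at the EU vault with a token permitted on these paths):

```shell
# Encrypt a test string with the cloudtaser-s3 key
CIPHERTEXT=$(vault write -field=ciphertext transit/encrypt/cloudtaser-s3 \
  plaintext="$(echo -n 'smoke test' | base64)")

# Decrypt it again; Transit returns base64-encoded plaintext
vault write -field=plaintext transit/decrypt/cloudtaser-s3 \
  ciphertext="$CIPHERTEXT" | base64 --decode
# Should print: smoke test
```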
Enable S3 Proxy on a Workload¶
The S3 proxy is enabled per-workload via pod annotations. Add the following annotations alongside the standard CloudTaser injection annotations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Standard CloudTaser injection
        cloudtaser.io/inject: "true"
        cloudtaser.io/vault-address: "https://vault.eu.example.com"
        cloudtaser.io/vault-role: "cloudtaser"
        cloudtaser.io/secret-paths: "secret/data/myapp/config"
        cloudtaser.io/env-map: "db_password=PGPASSWORD"
        # S3 proxy annotations
        cloudtaser.io/s3-proxy: "true"
        cloudtaser.io/s3-proxy-endpoint: "https://s3.eu-central-1.amazonaws.com"
        cloudtaser.io/s3-proxy-region: "eu-central-1"
        cloudtaser.io/s3-proxy-transit-key: "cloudtaser-s3"
        cloudtaser.io/s3-proxy-transit-mount: "transit"
    spec:
      containers:
        - name: myapp
          image: myapp:latest
Annotation Reference¶
| Annotation | Required | Default | Description |
|---|---|---|---|
| `cloudtaser.io/s3-proxy` | Yes | -- | Set to `"true"` to inject the S3 proxy sidecar |
| `cloudtaser.io/s3-proxy-endpoint` | Yes | -- | Upstream S3-compatible endpoint URL |
| `cloudtaser.io/s3-proxy-region` | Yes | -- | AWS region for SigV4 signing (use `auto` for GCS) |
| `cloudtaser.io/s3-proxy-transit-key` | Yes | -- | Vault Transit key name for envelope encryption |
| `cloudtaser.io/s3-proxy-transit-mount` | No | `transit` | Vault Transit engine mount path |
When the operator detects cloudtaser.io/s3-proxy: "true", it injects:
- An S3 proxy sidecar container listening on `localhost:8190`
- The `AWS_ENDPOINT_URL_S3=http://localhost:8190` environment variable into the workload container
Application SDK Changes¶
Your application must point its S3 client to the proxy sidecar at localhost:8190. In most cases, the operator injects AWS_ENDPOINT_URL_S3 automatically, and SDKs that support this environment variable require no code changes.
import boto3
from botocore.config import Config

# Option 1: Automatic via AWS_ENDPOINT_URL_S3 (injected by operator)
s3 = boto3.client('s3')

# Option 2: Explicit endpoint with path-style addressing
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:8190',
    config=Config(s3={'addressing_style': 'path'})
)

# Use normally -- encryption is transparent
s3.put_object(Bucket='my-bucket', Key='data.json', Body=b'{"secret": "value"}')
response = s3.get_object(Bucket='my-bucket', Key='data.json')
Path-style addressing required
The proxy expects path-style URLs (http://localhost:8190/bucket/key). Most S3 SDKs default to virtual-hosted style (bucket.s3.amazonaws.com/key), which does not work with localhost. Configure your SDK to use path-style addressing.
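With the AWS CLI, for example, path-style addressing can be forced via configuration (the bucket and file names here are illustrative):

```shell
# Force path-style URLs so requests hit http://localhost:8190/<bucket>/<key>
aws configure set default.s3.addressing_style path

# Upload through the proxy; encryption happens transparently
aws s3api put-object --endpoint-url http://localhost:8190 \
  --bucket my-bucket --key data.json --body ./data.json
```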
Cloud Provider Configuration¶
With IRSA or EKS Pod Identity, the proxy automatically inherits the pod's IAM role. No explicit credentials are needed.
# Service account with IRSA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/myapp-role
The IAM role needs s3:PutObject, s3:GetObject, s3:DeleteObject, and s3:ListBucket permissions on the target bucket; HeadObject calls are authorized by s3:GetObject (there is no separate s3:HeadObject action).
GCS S3-compatible API requires HMAC keys. Create them for the GCP service account bound via Workload Identity:
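For example, assuming a Workload Identity-bound service account named `myapp-gsa` in project `my-project` (both names illustrative):

```shell
# Create HMAC credentials (access key ID + secret) for the GCP service account
gcloud storage hmac create myapp-gsa@my-project.iam.gserviceaccount.com
```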
Set the proxy annotations for GCS:
annotations:
  cloudtaser.io/s3-proxy-endpoint: "https://storage.googleapis.com"
  cloudtaser.io/s3-proxy-region: "auto"
See S3 Proxy Credentials for full GCS HMAC setup.
For Azure Blob Storage with S3-compatible access, use the storage account's access keys or configure the proxy to connect to a MinIO gateway.
For detailed credential configuration across all cloud providers, see S3 Proxy Credentials.
Verify Installation¶
After deploying a workload with the S3 proxy annotation, verify that the sidecar was injected by listing the pod's containers -- the output should include the `cloudtaser-s3-proxy` sidecar alongside your application container. Then check the sidecar's logs for a line confirming the proxy is listening on port 8190 and has connected to the Vault Transit engine.
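For example, assuming the `myapp` Deployment from earlier in the `default` namespace:

```shell
# Container names should include cloudtaser-s3-proxy next to myapp
kubectl get pod -n default -l app=myapp \
  -o jsonpath='{.items[0].spec.containers[*].name}'

# Tail the sidecar logs for the listen/Transit-connection lines
kubectl logs -n default deploy/myapp -c cloudtaser-s3-proxy --tail=50
```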
Helm Values¶
The S3 proxy image is configured in the unified Helm chart:
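The exact keys depend on your chart version; a hypothetical sketch of the relevant values (key names are illustrative -- consult the chart's values.yaml):

```yaml
# Hypothetical values keys -- verify against the chart's values.yaml
s3Proxy:
  image:
    repository: registry.example.com/cloudtaser/s3-proxy
    tag: latest
```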
The proxy sidecar is only injected into pods that carry the cloudtaser.io/s3-proxy: "true" annotation. There is no cluster-wide S3 proxy deployment.
Troubleshooting¶
| Symptom | Cause | Fix |
|---|---|---|
| No `cloudtaser-s3-proxy` container in pod | Missing `cloudtaser.io/s3-proxy: "true"` annotation | Add the annotation to the pod template |
| Proxy sidecar in `CrashLoopBackOff` | Cannot reach Vault Transit engine | Verify Vault address and Transit mount path |
| `AccessDenied` on S3 operations | Proxy lacks upstream S3 credentials | Configure IRSA, Pod Identity, or HMAC keys per your cloud provider |
| `SignatureDoesNotMatch` errors | Region mismatch or wrong endpoint | Verify `s3-proxy-region` matches the upstream endpoint region |
| Objects uploaded but cannot be decrypted | Transit key rotated or different key used | Ensure the same Transit key name is used for encrypt and decrypt |
Next Steps¶
- S3 Proxy Credentials -- Detailed credential setup for EKS, GKE, and AKS
- S3 Proxy Configuration -- Advanced proxy configuration options
- S3 Proxy Protocol -- How envelope encryption works under the hood
- Production Guide -- Production hardening and monitoring