Kubernetes via Helm Chart
Learn how to deploy Hanzo KMS on Kubernetes using the official Helm chart. This method is ideal for production environments that require scalability, high availability, and integration with existing Kubernetes infrastructure.
Prerequisites
- A running Kubernetes cluster (version 1.23+)
- Helm package manager (version 3.11.3+)
- kubectl installed and configured to access your cluster
- Basic understanding of Kubernetes concepts (pods, services, secrets, ingress)
This guide assumes familiarity with Kubernetes. If you are new to Kubernetes, consider starting with the Docker Compose guide for a simpler deployment.
System Requirements
The following are minimum requirements for running Hanzo KMS on Kubernetes:
| Component | Minimum | Recommended (Production) |
|---|---|---|
| Nodes | 1 node | 3+ nodes (for HA) |
| CPU per node | 2 cores | 4 cores |
| RAM per node | 4 GB | 8 GB |
| Disk per node | 20 GB | 50 GB+ (SSD recommended) |
Per-pod resource defaults (configurable in values.yaml):
| Pod | CPU Request | Memory Limit |
|---|---|---|
| Hanzo KMS | 350m | 1000Mi |
| PostgreSQL | 250m | 512Mi |
| Redis | 100m | 256Mi |
For production deployments with many users or secrets, increase these values accordingly.
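As a sketch of how the per-pod defaults above can be raised, the following values.yaml fragment doubles the application pod's resources (the key names follow the Full Values Reference at the end of this guide; tune the numbers to your workload):

```yaml
kms:
  resources:
    limits:
      memory: 2000Mi   # double the default 1000Mi memory limit
    requests:
      cpu: 700m        # double the default 350m CPU request
```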
Deployment Steps
Create a dedicated namespace for Hanzo KMS to isolate resources:
```bash
kubectl create namespace kms
```

All subsequent commands use this namespace. You can also pass -n kms to each kubectl command if you prefer not to set a default context.
Add the Hanzo KMS Helm charts repository and update your local cache:
```bash
helm repo add kms-helm-charts 'https://dl.cloudsmith.io/public/kms/helm-charts/helm/charts/'
helm repo update
```

Hanzo KMS requires a Kubernetes secret named kms-secrets containing essential configuration. Create this secret in the same namespace where you'll deploy the chart.
For testing or proof-of-concept deployments, the Helm chart automatically provisions in-cluster PostgreSQL and Redis instances. You only need to provide the core secrets:
```bash
kubectl create secret generic kms-secrets \
  --namespace kms \
  --from-literal=AUTH_SECRET="$(openssl rand -base64 32)" \
  --from-literal=ENCRYPTION_KEY="$(openssl rand -hex 16)" \
  --from-literal=SITE_URL="http://localhost"
```

The in-cluster PostgreSQL and Redis are not configured for high availability. Use them for testing purposes only.
For production environments, use external managed services for PostgreSQL and Redis to ensure high availability:
```bash
kubectl create secret generic kms-secrets \
  --namespace kms \
  --from-literal=AUTH_SECRET="$(openssl rand -base64 32)" \
  --from-literal=ENCRYPTION_KEY="$(openssl rand -hex 16)" \
  --from-literal=DB_CONNECTION_URI="postgresql://user:password@your-postgres-host:5432/kms" \
  --from-literal=REDIS_URL="redis://:password@your-redis-host:6379" \
  --from-literal=SITE_URL="https://kms.example.com"
```

Store your ENCRYPTION_KEY securely outside the cluster. Without this key, you cannot decrypt your secrets even if you restore the database.
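If you want to sanity-check the generated values before handing them to kubectl, a quick shell sketch (plain openssl, no cluster required):

```bash
# Generate the two core secrets the same way the commands above do
AUTH_SECRET="$(openssl rand -base64 32)"     # 32 random bytes, base64-encoded
ENCRYPTION_KEY="$(openssl rand -hex 16)"     # 16 random bytes, hex-encoded

# base64 of 32 bytes is 44 characters; hex of 16 bytes is 32 characters
echo "${#AUTH_SECRET} ${#ENCRYPTION_KEY}"    # prints: 44 32
```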
For AWS RDS with SSL, add the DB_ROOT_CERT environment variable. See environment variables documentation for details.
Create a values.yaml file to configure your deployment. Start with a minimal configuration:
```yaml
kms:
  image:
    repository: kms/kms
    tag: "v0.151.0" # Check https://hub.docker.com/r/kms/kms/tags for latest
    pullPolicy: IfNotPresent
  replicaCount: 2

ingress:
  enabled: true
  hostName: "kms.example.com" # Replace with your domain
  ingressClassName: nginx
  nginx:
    enabled: true
```

Do not use the latest tag in production. Always pin a specific version to avoid unexpected changes during upgrades.
For all available configuration options, see the [full values.yaml reference](https://raw.githubusercontent.com/Hanzo KMS/kms/main/helm-charts/kms-standalone-postgres/values.yaml).
Deploy Hanzo KMS using Helm:
```bash
helm upgrade --install kms kms-helm-charts/kms-standalone \
  --namespace kms \
  --values values.yaml
```

This command installs Hanzo KMS if it isn't already present, or upgrades it if it is.
Check that all pods are running:
```bash
kubectl get pods -n kms
```

You should see output similar to:
```
NAME                  READY   STATUS    RESTARTS   AGE
kms-5d4f8b7c9-abc12   1/1     Running   0          2m
kms-5d4f8b7c9-def34   1/1     Running   0          2m
postgresql-0          1/1     Running   0          2m
redis-master-0        1/1     Running   0          2m
```

Verify the ingress is configured:

```bash
kubectl get ingress -n kms
```

Test the health endpoint (port-forward if the ingress isn't ready yet):

```bash
kubectl port-forward -n kms svc/kms 8080:8080 &
curl http://localhost:8080/api/status
```

The first user to sign up becomes the instance administrator. Complete this step before exposing Hanzo KMS to others.
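The status check can also be scripted. Here is a small polling sketch (plain curl; the URL and retry budget are whatever fits your setup) that waits until the endpoint responds or gives up:

```bash
# Poll a URL until it answers or the retry budget is exhausted
wait_for() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out" >&2
  return 1
}

# Usage after port-forwarding:
# wait_for http://localhost:8080/api/status
```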
Managing Your Deployment
Viewing Pod Logs
To view logs from Hanzo KMS pods:
```bash
# View logs from all Hanzo KMS pods
kubectl logs -n kms -l component=kms -f

# View logs from a specific pod
kubectl logs -n kms <pod-name> -f

# View the last 100 lines
kubectl logs -n kms <pod-name> --tail=100

# View logs from the previous container instance (useful after crashes)
kubectl logs -n kms <pod-name> --previous
```

To view logs from PostgreSQL or Redis:

```bash
kubectl logs -n kms -l app.kubernetes.io/name=postgresql -f
kubectl logs -n kms -l app.kubernetes.io/name=redis -f
```

Scaling the Deployment
Hanzo KMS's application layer is stateless, so you can scale horizontally:
```bash
# Scale to 4 replicas
kubectl scale deployment -n kms kms --replicas=4
```

Or update your values.yaml and re-apply:

```yaml
kms:
  replicaCount: 4
```

```bash
helm upgrade kms kms-helm-charts/kms-standalone \
  --namespace kms \
  --values values.yaml
```

Upgrading Hanzo KMS
To upgrade to a new version:
1. Back up your database before upgrading:

```bash
kubectl exec -n kms postgresql-0 -- pg_dump -U kms kmsDB > backup_$(date +%Y%m%d).sql
```

2. Update the image tag in your values.yaml:

```yaml
kms:
  image:
    tag: "v0.152.0" # New version
```

3. Apply the upgrade:

```bash
helm upgrade kms kms-helm-charts/kms-standalone \
  --namespace kms \
  --values values.yaml
```

4. Monitor the rollout:

```bash
kubectl rollout status deployment/kms -n kms
```
Uninstalling Hanzo KMS
To completely remove Hanzo KMS from your cluster:
```bash
# Uninstall the Helm release
helm uninstall kms -n kms

# Delete the namespace (removes all resources including secrets and PVCs)
kubectl delete namespace kms
```

Deleting the namespace removes all data, including persistent volume claims. Back up your database before uninstalling if you need to preserve data.
To uninstall but preserve data:
```bash
# Uninstall only the Helm release (keeps PVCs and secrets)
helm uninstall kms -n kms

# Verify PVCs are retained
kubectl get pvc -n kms
```

Persistent Volume Claims
The Helm chart creates Persistent Volume Claims (PVCs) for PostgreSQL and Redis data storage when using in-cluster databases.
Default PVCs
| PVC Name | Purpose | Default Size |
|---|---|---|
| data-postgresql-0 | PostgreSQL data | 8Gi |
| redis-data-redis-master-0 | Redis data | 8Gi |
Viewing PVCs
```bash
kubectl get pvc -n kms
```

Customizing Storage
To customize storage in your values.yaml:
```yaml
postgresql:
  primary:
    persistence:
      size: 20Gi
      storageClass: "your-storage-class"

redis:
  master:
    persistence:
      size: 10Gi
      storageClass: "your-storage-class"
```

The PostgreSQL PVC contains all your encrypted secrets. Never delete this PVC unless you intend to lose all data, and always back up before any maintenance operation.
Additional Configuration
Hanzo KMS uses email for user invitations, password resets, and notifications. Add SMTP configuration to your secrets:
```bash
kubectl create secret generic kms-secrets \
  --namespace kms \
  --from-literal=AUTH_SECRET="your-auth-secret" \
  --from-literal=ENCRYPTION_KEY="your-encryption-key" \
  --from-literal=SITE_URL="https://kms.example.com" \
  --from-literal=SMTP_HOST="smtp.example.com" \
  --from-literal=SMTP_PORT="587" \
  --from-literal=SMTP_USERNAME="your-smtp-username" \
  --from-literal=SMTP_PASSWORD="your-smtp-password" \
  --from-literal=SMTP_FROM_ADDRESS="kms@example.com" \
  --from-literal=SMTP_FROM_NAME="Hanzo KMS" \
  --dry-run=client -o yaml | kubectl apply -f -
```

Common SMTP providers:
| Provider | Host | Port |
|---|---|---|
| AWS SES | email-smtp.{region}.amazonaws.com | 587 |
| SendGrid | smtp.sendgrid.net | 587 |
| Gmail | smtp.gmail.com | 587 |
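Before wiring SMTP into the secret, you can check that the provider's host is reachable from inside the cluster. A throwaway-pod sketch (assumes the busybox image is pullable; replace smtp.example.com with your provider's host):

```bash
# Launch a temporary pod and test the SMTP port from within the cluster network
kubectl run smtp-test --rm -it --restart=Never -n kms \
  --image=busybox -- nc -zv smtp.example.com 587
```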
After updating secrets, restart the Hanzo KMS pods:
```bash
kubectl rollout restart deployment/kms -n kms
```

To configure a custom domain with HTTPS:
1. Using cert-manager (recommended):
First, install cert-manager if not already installed:
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.yaml
```

Create a ClusterIssuer for Let's Encrypt:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```

```bash
kubectl apply -f cluster-issuer.yaml
```

Update your values.yaml:
```yaml
ingress:
  enabled: true
  hostName: "kms.example.com"
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  tls:
    - secretName: kms-tls
      hosts:
        - kms.example.com
```

2. Using an existing TLS certificate:
Create a TLS secret with your certificate:
```bash
kubectl create secret tls kms-tls \
  --namespace kms \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```

Update your values.yaml:
```yaml
ingress:
  enabled: true
  hostName: "kms.example.com"
  ingressClassName: nginx
  tls:
    - secretName: kms-tls
      hosts:
        - kms.example.com
```

Apply the changes:
```bash
helm upgrade kms kms-helm-charts/kms-standalone \
  --namespace kms \
  --values values.yaml
```

For enhanced security, implement network policies to restrict traffic between pods:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kms-network-policy
  namespace: kms
spec:
  podSelector:
    matchLabels:
      component: kms
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: postgresql
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 587
```

```bash
kubectl apply -f network-policy.yaml
```

Network policies require a CNI plugin that supports them (e.g., Calico, Cilium, Weave Net). Verify your cluster supports network policies before applying.
For production, use external managed services instead of in-cluster databases.
Disable in-cluster databases in values.yaml:
```yaml
postgresql:
  enabled: false

redis:
  enabled: false
```

Add connection strings to your secrets:
```bash
kubectl create secret generic kms-secrets \
  --namespace kms \
  --from-literal=AUTH_SECRET="your-auth-secret" \
  --from-literal=ENCRYPTION_KEY="your-encryption-key" \
  --from-literal=DB_CONNECTION_URI="postgresql://user:password@your-rds-endpoint:5432/kms?sslmode=require" \
  --from-literal=REDIS_URL="rediss://:password@your-elasticache-endpoint:6379" \
  --from-literal=SITE_URL="https://kms.example.com" \
  --dry-run=client -o yaml | kubectl apply -f -
```

Recommended managed services:
| Cloud | PostgreSQL | Redis |
|---|---|---|
| AWS | RDS for PostgreSQL | ElastiCache |
| GCP | Cloud SQL | Memorystore |
| Azure | Azure Database for PostgreSQL | Azure Cache for Redis |
Hanzo KMS exposes Prometheus metrics when enabled.
1. Add telemetry configuration to your secrets:

Include these in your kms-secrets:

```bash
--from-literal=OTEL_TELEMETRY_COLLECTION_ENABLED="true" \
--from-literal=OTEL_EXPORT_TYPE="prometheus"
```

2. Create a ServiceMonitor (if using Prometheus Operator):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kms
  namespace: kms
spec:
  selector:
    matchLabels:
      component: kms
  endpoints:
    - port: metrics
      interval: 30s
```

```bash
kubectl apply -f servicemonitor.yaml
```

See the Monitoring Guide for full setup instructions.
For production high availability:
1. Multiple Hanzo KMS replicas:
```yaml
kms:
  replicaCount: 3
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          component: kms
```

2. Pod Disruption Budget:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kms-pdb
  namespace: kms
spec:
  minAvailable: 1
  selector:
    matchLabels:
      component: kms
```

```bash
kubectl apply -f pdb.yaml
```

3. External HA database:
Use managed PostgreSQL with multi-AZ deployment (e.g., AWS RDS Multi-AZ, GCP Cloud SQL HA).
4. External HA Redis:
Use managed Redis with replication (e.g., AWS ElastiCache with cluster mode, GCP Memorystore).
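Beyond a fixed replica count, the stateless application layer can also scale automatically. A hedged sketch of a HorizontalPodAutoscaler for the deployment (assumes metrics-server is installed and that the deployment is named kms, as in this guide's default release):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kms
  namespace: kms
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kms
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```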
Troubleshooting
Pods Not Starting

Check pod events:

```bash
kubectl describe pod -n kms <pod-name>
```

Common causes:

- Insufficient cluster resources: check node capacity with kubectl describe nodes
- PVC not bound: check PVC status with kubectl get pvc -n kms
- Image pull errors: verify the image name and look for ImagePullBackOff errors

Solutions:

- Scale up your cluster or reduce resource requests
- Ensure a StorageClass is available for dynamic provisioning
- Check image registry credentials if using a private registry
Pods Crashing or Restarting

View pod logs:

```bash
kubectl logs -n kms <pod-name> --previous
```

Common causes:

- Missing or invalid secrets: verify kms-secrets exists and contains the required keys
- Database connection failed: check that DB_CONNECTION_URI is correct and accessible
- Invalid configuration: check for typos in environment variables
Verify secrets:
```bash
kubectl get secret kms-secrets -n kms -o yaml
```

Cannot Access the Web UI

Check ingress status:
```bash
kubectl get ingress -n kms
kubectl describe ingress -n kms kms
```

Check ingress controller logs:

```bash
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
```

Verify the service is accessible:

```bash
kubectl port-forward -n kms svc/kms 8080:8080
curl http://localhost:8080/api/status
```

Common causes:
- Ingress controller not installed
- DNS not pointing to ingress IP
- TLS certificate issues
Database Connection Issues

Check the PostgreSQL pod:

```bash
kubectl get pods -n kms -l app.kubernetes.io/name=postgresql
kubectl logs -n kms postgresql-0
```

Test database connectivity:

```bash
kubectl exec -it -n kms postgresql-0 -- psql -U kms -d kmsDB -c "SELECT 1"
```

For external databases:

- Verify the connection string in kms-secrets
- Check that network policies and security groups allow traffic
- Ensure SSL certificates are configured if required
Redis Connection Issues

Check the Redis pod:

```bash
kubectl get pods -n kms -l app.kubernetes.io/name=redis
kubectl logs -n kms redis-master-0
```

Test Redis connectivity:

```bash
kubectl exec -it -n kms redis-master-0 -- redis-cli ping
```

For external Redis:

- Verify the REDIS_URL in kms-secrets
- Check whether TLS is required (use rediss:// instead of redis://)
Helm Upgrade Failures

Check the Helm release status:

```bash
helm status kms -n kms
helm history kms -n kms
```

Roll back to the previous version:

```bash
helm rollback kms -n kms
```

Force the upgrade (use with caution):

```bash
helm upgrade kms kms-helm-charts/kms-standalone \
  --namespace kms \
  --values values.yaml \
  --force
```

Performance Issues

Check resource usage:
```bash
kubectl top pods -n kms
kubectl top nodes
```

Check for resource throttling:

```bash
kubectl describe pod -n kms <pod-name> | grep -A5 "Limits\|Requests"
```

Solutions:

- Increase resource limits in values.yaml
- Scale horizontally by increasing replicaCount
- Use external managed databases for better performance
- Enable connection pooling for PostgreSQL
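The first two adjustments can be expressed together in values.yaml. A sketch (key names follow the Full Values Reference; the numbers are illustrative starting points, not benchmarked values):

```yaml
kms:
  replicaCount: 4        # scale out the stateless application layer
  resources:
    limits:
      memory: 2000Mi     # raise the per-pod memory ceiling
    requests:
      cpu: 700m          # give the scheduler a larger CPU reservation
```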
Full Values Reference
```yaml
# -- Overrides the default release name
nameOverride: ""
# -- Overrides the full name of the release, affecting resource names
fullnameOverride: ""

kms:
  # -- Enable Hanzo KMS chart deployment
  enabled: true
  # -- Sets the name of the deployment within this chart
  name: kms
  autoBootstrap:
    # -- Enable auto-bootstrap of the Hanzo KMS instance
    enabled: false
    image:
      # -- Hanzo KMS CLI image tag version
      tag: "0.41.86"
    # -- Template for the data/stringData section of the Kubernetes secret. Available functions: encodeBase64
    secretTemplate: '{"data":{"token":"{{.Identity.Credentials.Token}}"}}'
    secretDestination:
      # -- Name of the bootstrap secret to create in the Kubernetes cluster which will store the formatted root identity credentials
      name: "kms-bootstrap-secret"
      # -- Namespace to create the bootstrap secret in. If not provided, the secret will be created in the same namespace as the release.
      namespace: "default"
    # -- Hanzo KMS organization to create in the Hanzo KMS instance during auto-bootstrap
    organization: "default-org"
    credentialSecret:
      # -- Name of the Kubernetes secret containing the credentials for the auto-bootstrap workflow
      name: "kms-bootstrap-credentials"
  databaseSchemaMigrationJob:
    image:
      # -- Image repository for migration wait job
      repository: ghcr.io/groundnuty/k8s-wait-for
      # -- Image tag version
      tag: no-root-v2.0
      # -- Pulls image only if not present on the node
      pullPolicy: IfNotPresent
  serviceAccount:
    # -- Creates a new service account if true, with necessary permissions for this chart. If false and `serviceAccount.name` is not defined, the chart will attempt to use the Default service account
    create: true
    # -- Custom annotations for the auto-created service account
    annotations: {}
    # -- Optional custom service account name, if existing service account is used
    name: null
  # -- Override for the full name of Hanzo KMS resources in this deployment
  fullnameOverride: ""
  # -- Custom annotations for Hanzo KMS pods
  podAnnotations: {}
  # -- Custom annotations for Hanzo KMS deployment
  deploymentAnnotations: {}
  # -- Number of pod replicas for high availability
  replicaCount: 2
  image:
    # -- Image repository for the Hanzo KMS service
    repository: kms/kms
    # -- Specific version tag of the Hanzo KMS image. View the latest version here https://hub.docker.com/r/kms/kms
    tag: "v0.151.0"
    # -- Pulls image only if not already present on the node
    pullPolicy: IfNotPresent
  # -- Secret references for pulling the image, if needed
  imagePullSecrets: []
  # -- Node affinity settings for pod placement
  affinity: {}
  # -- Tolerations definitions
  tolerations: []
  # -- Node selector for pod placement
  nodeSelector: {}
  # -- Topology spread constraints for multi-zone deployments
  # -- Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
  topologySpreadConstraints: []
  # -- Kubernetes Secret reference containing Hanzo KMS root credentials
  kubeSecretRef: "kms-secrets"
  service:
    # -- Custom annotations for Hanzo KMS service
    annotations: {}
    # -- Service type, can be changed based on exposure needs (e.g., LoadBalancer)
    type: ClusterIP
    # -- Optional node port for service when using NodePort type
    nodePort: ""
  resources:
    limits:
      # -- Memory limit for Hanzo KMS container
      memory: 1000Mi
    requests:
      # -- CPU request for Hanzo KMS container
      cpu: 350m

ingress:
  # -- Enable or disable ingress configuration
  enabled: true
  # -- Hostname for ingress access, e.g., app.example.com
  hostName: ""
  # -- Specifies the ingress class, useful for multi-ingress setups
  ingressClassName: nginx
  nginx:
    # -- Enable NGINX-specific settings, if using NGINX ingress controller
    enabled: true
  # -- Custom annotations for ingress resource
  annotations: {}
  # -- TLS settings for HTTPS access
  tls: []
  #  -- TLS secret name for HTTPS
  #  - secretName: letsencrypt-prod
  #    -- Domain name to associate with the TLS certificate
  #    hosts:
  #      - some.domain.com

postgresql:
  # -- Enables an in-cluster PostgreSQL deployment. To achieve HA for Postgres, we recommend deploying https://github.com/zalando/postgres-operator instead.
  enabled: true
  # -- PostgreSQL resource name
  name: "postgresql"
  # -- Full name override for PostgreSQL resources
  fullnameOverride: "postgresql"
  image:
    # -- Image registry for PostgreSQL
    registry: mirror.gcr.io
    # -- Image repository for PostgreSQL
    repository: bitnamilegacy/postgresql
  auth:
    # -- Database username for PostgreSQL
    username: kms
    # -- Password for PostgreSQL database access
    password: root
    # -- Database name for Hanzo KMS
    database: kmsDB
  useExistingPostgresSecret:
    # -- Set to true if using an existing Kubernetes secret that contains the PostgreSQL connection string
    enabled: false
    existingConnectionStringSecret:
      # -- Kubernetes secret name containing the PostgreSQL connection string
      name: ""
      # -- Key name in the Kubernetes secret that holds the connection string
      key: ""

redis:
  # -- Enables an in-cluster Redis deployment
  enabled: true
  # -- Redis resource name
  name: "redis"
  # -- Full name override for Redis resources
  fullnameOverride: "redis"
  image:
    # -- Image registry for Redis
    registry: mirror.gcr.io
    # -- Image repository for Redis
    repository: bitnamilegacy/redis
  cluster:
    # -- Clustered Redis deployment
    enabled: false
  # -- Requires a password for Redis authentication
  usePassword: true
  auth:
    # -- Redis password
    password: "mysecretpassword"
  # -- Redis deployment type (e.g., standalone or cluster)
  architecture: standalone
```

Your Hanzo KMS instance should now be running on Kubernetes. Access it via the ingress hostname you configured, or use kubectl port-forward for local testing.
