GCP (GKE with Cloud SQL & Memorystore)
Deploy Hanzo KMS securely on Google Cloud Platform using GKE, Cloud SQL, and Memorystore.
Learn how to deploy Hanzo KMS on Google Cloud Platform using Google Kubernetes Engine (GKE) for container orchestration. This guide covers setting up Hanzo KMS in a production-ready GCP environment using Cloud SQL (PostgreSQL) for the database, Memorystore (Redis) for caching, and Google Cloud Load Balancing for routing traffic.
Prerequisites
- A Google Cloud Platform account with permissions to create VPCs, GKE clusters, Cloud SQL instances, Memorystore instances, and Load Balancers
- Basic knowledge of GCP networking (VPC, subnets, firewall rules) and Kubernetes concepts
- gcloud CLI installed and configured
- kubectl installed for interacting with your GKE cluster
- Helm installed (version 3.x) for deploying the Hanzo KMS Helm chart
- A Hanzo KMS Docker image tag from Docker Hub
Do not use the latest tag in production. Always pin to a specific version to avoid unexpected changes during upgrades.
System Requirements
The following are minimum requirements for running Hanzo KMS on GCP GKE:
| Component | Minimum | Recommended (Production) |
|---|---|---|
| GKE Node Machine Type | e2-small | n2-standard-2 or larger |
| GKE Nodes per Zone | 1 | 2+ |
| Cloud SQL Instance | db-f1-micro | db-n1-standard-2 or larger |
| Memorystore Capacity | 1 GB | 2 GB or larger |
| Hanzo KMS Pod Memory | 512 MB | 1 GB |
| Hanzo KMS Pod CPU | 500m | 1000m |
For production deployments with many users or secrets, increase these values accordingly.
Deployment Steps
Create a VPC network for hosting Hanzo KMS:
VPC & Subnets:
- Create a VPC-native network that will host your GKE cluster, Cloud SQL instance, and Memorystore instance
- Create a subnet for your GKE cluster with an appropriate IP range
- Define secondary IP ranges for Kubernetes pods and services:
  - Primary range (nodes): 10.0.0.0/20
  - Secondary range for pods: 10.4.0.0/14
  - Secondary range for services: 10.8.0.0/20
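When sizing these ranges, it helps to know how many addresses each prefix length yields. A quick sketch (the `cidr_size` helper is hypothetical, not part of any gcloud tooling):

```shell
# Number of addresses in an IPv4 CIDR block with prefix length P is 2^(32-P).
cidr_size() { echo $((1 << (32 - $1))); }

cidr_size 20   # node range:    4096 addresses
cidr_size 14   # pod range:   262144 addresses
cidr_size 20   # service range: 4096 addresses
```

The /14 pod range is deliberately large: by default GKE reserves a /24 (256 addresses) per node for pods, so the pod range caps how many nodes the cluster can ever hold.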
# Create VPC
gcloud compute networks create kms-vpc --subnet-mode=custom
# Create subnet with secondary ranges
gcloud compute networks subnets create kms-subnet \
--network=kms-vpc \
--region=us-central1 \
--range=10.0.0.0/20 \
--secondary-range=pods=10.4.0.0/14,services=10.8.0.0/20
Cloud Router & Cloud NAT:
- Deploy a Cloud Router and NAT gateway for outbound internet access from private GKE nodes
# Create Cloud Router
gcloud compute routers create kms-router \
--network=kms-vpc \
--region=us-central1
# Create Cloud NAT
gcloud compute routers nats create kms-nat \
--router=kms-router \
--region=us-central1 \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips
Firewall Rules:
| Rule | Source | Destination | Ports | Purpose |
|---|---|---|---|---|
| Allow internal | VPC CIDR | VPC CIDR | All | Internal communication |
| Allow health checks | 130.211.0.0/22, 35.191.0.0/16 | GKE nodes | 8080 | Load balancer health checks |
| Allow GKE to Cloud SQL | GKE pods | Cloud SQL | 5432 | Database access |
| Allow GKE to Memorystore | GKE pods | Memorystore | 6379 | Redis access |
# Allow health check traffic
gcloud compute firewall-rules create allow-health-checks \
--network=kms-vpc \
--allow=tcp:8080 \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=gke-kms
Enable Private Google Access:
gcloud compute networks subnets update kms-subnet \
--region=us-central1 \
--enable-private-ip-google-access
Verify: Confirm your network infrastructure is created:
# Verify VPC and subnet
gcloud compute networks describe kms-vpc
gcloud compute networks subnets describe kms-subnet --region=us-central1
# Verify NAT gateway
gcloud compute routers nats describe kms-nat --router=kms-router --region=us-central1
Create a GKE cluster to host your Hanzo KMS deployment:
gcloud container clusters create kms-cluster \
--region us-central1 \
--machine-type n2-standard-2 \
--num-nodes 1 \
--enable-ip-alias \
--network kms-vpc \
--subnetwork kms-subnet \
--cluster-secondary-range-name pods \
--services-secondary-range-name services \
--enable-private-nodes \
--master-ipv4-cidr 172.16.0.0/28 \
--no-enable-basic-auth \
--no-issue-client-certificate \
--enable-stackdriver-kubernetes \
--enable-autoscaling \
--min-nodes 1 \
--max-nodes 5 \
--enable-autorepair \
--enable-autoupgrade \
--workload-pool=<YOUR_PROJECT_ID>.svc.id.goog
Connect to the cluster:
gcloud container clusters get-credentials kms-cluster --region us-central1
Verify: Confirm cluster is ready:
# Check nodes are ready
kubectl get nodes
# Verify cluster info
kubectl cluster-info
You should see your nodes listed and in a Ready state.
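If you script this check (in CI, for example), count the Ready nodes rather than eyeballing the output. A sketch, demonstrated against captured sample output so it runs without a live cluster; with a real cluster, pipe `kubectl get nodes --no-headers` in instead:

```shell
# Sample of `kubectl get nodes --no-headers` output (captured for illustration)
sample='gke-kms-node-1   Ready      <none>   5m   v1.27.3
gke-kms-node-2   Ready      <none>   5m   v1.27.3
gke-kms-node-3   NotReady   <none>   1m   v1.27.3'

# Column 2 is the node status; count the rows where it is exactly "Ready"
ready=$(echo "$sample" | awk '$2 == "Ready"' | wc -l)
echo "$ready node(s) Ready"
```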
For private GKE clusters, you'll need to access the cluster from within the VPC (via a bastion host or Cloud Shell) or configure authorized networks to allow your IP to access the control plane endpoint.
Set up the PostgreSQL database for Hanzo KMS:
# Enable required APIs
gcloud services enable sqladmin.googleapis.com
gcloud services enable servicenetworking.googleapis.com
# Allocate IP range for private services
gcloud compute addresses create google-managed-services-kms-vpc \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=kms-vpc
# Create private connection
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-kms-vpc \
--network=kms-vpc
# Create Cloud SQL instance
gcloud sql instances create kms-db \
--database-version=POSTGRES_15 \
--tier=db-n1-standard-2 \
--region=us-central1 \
--network=kms-vpc \
--no-assign-ip \
--availability-type=REGIONAL \
--storage-type=SSD \
--storage-size=20GB \
--storage-auto-increase \
--backup-start-time=03:00 \
--enable-point-in-time-recovery \
--retained-backups-count=7
Create database and user:
# Set root password
gcloud sql users set-password postgres \
--instance=kms-db \
--password=<your-secure-password>
# Create database
gcloud sql databases create kms --instance=kms-db
# Create user
gcloud sql users create kms_user \
--instance=kms-db \
--password=<your-secure-password>
Verify: Confirm Cloud SQL is ready:
# Check instance status
gcloud sql instances describe kms-db --format="value(state)"
# Get private IP address
gcloud sql instances describe kms-db --format="value(ipAddresses[0].ipAddress)"
Note the private IP address for your connection string:
postgresql://kms_user:<password>@<private-ip>:5432/kms
Set up Redis caching for Hanzo KMS:
# Enable Memorystore API
gcloud services enable redis.googleapis.com
# Create Memorystore instance
gcloud redis instances create kms-redis \
--size=1 \
--region=us-central1 \
--network=kms-vpc \
--tier=STANDARD_HA \
--redis-version=redis_7_0
Verify: Confirm Memorystore is ready:
# Check instance status
gcloud redis instances describe kms-redis --region=us-central1 --format="value(state)"
# Get host IP
gcloud redis instances describe kms-redis --region=us-central1 --format="value(host)"
Note the host IP for your connection string:
redis://<memorystore-ip>:6379
By default, Memorystore for Redis does not require AUTH; security relies on VPC isolation and firewall rules. Ensure only your GKE cluster's pods can reach the Memorystore IP.
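One gotcha when assembling either connection string: special characters in the password (`@`, `:`, `/`, and so on) must be percent-encoded or the URI will parse incorrectly. A minimal bash sketch (the `urlencode` function is a hypothetical helper, shown only to illustrate the encoding):

```shell
# Percent-encode a string for safe use inside a connection URI (requires bash)
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;                 # unreserved: keep as-is
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;   # everything else: %XX
    esac
  done
  echo "$out"
}

urlencode 'p@ss:w/rd'   # -> p%40ss%3Aw%2Frd
```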
Generate and store the required secrets:
Generate secrets:
# Generate ENCRYPTION_KEY (16-byte hex string)
ENCRYPTION_KEY=$(openssl rand -hex 16)
echo "ENCRYPTION_KEY: $ENCRYPTION_KEY"
# Generate AUTH_SECRET (32-byte base64 string)
AUTH_SECRET=$(openssl rand -base64 32)
echo "AUTH_SECRET: $AUTH_SECRET"
Store your ENCRYPTION_KEY securely outside of GCP as well. Without this key, you cannot decrypt your secrets even if you restore the database.
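Before storing the values, a quick sanity check on their shape catches a mangled generation step early. A defensive sketch; the expected lengths follow directly from the openssl invocations above:

```shell
ENCRYPTION_KEY=$(openssl rand -hex 16)
AUTH_SECRET=$(openssl rand -base64 32)

# 16 random bytes hex-encode to exactly 32 characters
[ "${#ENCRYPTION_KEY}" -eq 32 ] || { echo "unexpected ENCRYPTION_KEY length" >&2; exit 1; }
# 32 random bytes base64-encode to exactly 44 characters (including '=' padding)
[ "${#AUTH_SECRET}" -eq 44 ] || { echo "unexpected AUTH_SECRET length" >&2; exit 1; }
echo "generated values look well-formed"
```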
# Enable the Secret Manager API
gcloud services enable secretmanager.googleapis.com
# Store each secret
echo -n "$ENCRYPTION_KEY" | gcloud secrets create kms-encryption-key --data-file=-
echo -n "$AUTH_SECRET" | gcloud secrets create kms-auth-secret --data-file=-
echo -n "postgresql://kms_user:<password>@<cloud-sql-ip>:5432/kms" | gcloud secrets create kms-db-uri --data-file=-
echo -n "redis://<memorystore-ip>:6379" | gcloud secrets create kms-redis-url --data-file=-
Verify: Confirm secrets are stored:
gcloud secrets list --filter="name:kms"
# Create namespace
kubectl create namespace kms
# Create the secret
kubectl create secret generic kms-secrets \
--from-literal=ENCRYPTION_KEY="$ENCRYPTION_KEY" \
--from-literal=AUTH_SECRET="$AUTH_SECRET" \
--from-literal=DB_CONNECTION_URI="postgresql://kms_user:<password>@<cloud-sql-ip>:5432/kms" \
--from-literal=REDIS_URL="redis://<memorystore-ip>:6379" \
--from-literal=SITE_URL="https://kms.example.com" \
-n kms
Verify: Confirm secret is created:
kubectl get secrets -n kms
Configure IAM for Secret Access (if using Secret Manager with Workload Identity):
# Create Google Cloud IAM service account
gcloud iam service-accounts create kms-gsa \
--display-name="Hanzo KMS GKE Service Account"
# Grant access to secrets
for secret in kms-encryption-key kms-auth-secret kms-db-uri kms-redis-url; do
gcloud secrets add-iam-policy-binding $secret \
--member="serviceAccount:kms-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
done
# Bind to Kubernetes service account
gcloud iam service-accounts add-iam-policy-binding \
kms-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:<YOUR_PROJECT_ID>.svc.id.goog[kms/kms]"
Deploy Hanzo KMS to your GKE cluster using the official Helm chart:
Add the Hanzo KMS Helm Repository:
helm repo add kms-helm-charts https://dl.cloudsmith.io/public/kms/helm-charts/helm/charts/
helm repo update
Create a Helm Values File:
Create a file named kms-values.yaml:
# kms-values.yaml
replicaCount: 2

image:
  repository: kms/kms
  tag: "v0.46.2-postgres" # Use a specific version
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  className: "gce"
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "kms-ip"
    networking.gke.io/managed-certificates: "kms-cert"
  hosts:
    - host: kms.example.com
      paths:
        - path: /
          pathType: Prefix

env:
  - name: SITE_URL
    value: "https://kms.example.com"
  - name: HOST
    value: "0.0.0.0"
  - name: PORT
    value: "8080"

envFrom:
  - secretRef:
      name: kms-secrets

resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1000m"

livenessProbe:
  httpGet:
    path: /api/status
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /api/status
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

podDisruptionBudget:
  enabled: true
  minAvailable: 1

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

serviceAccount:
  create: true
  name: kms
  annotations:
    iam.gke.io/gcp-service-account: kms-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com

podSecurityContext:
  fsGroup: 1000
  runAsNonRoot: true
  runAsUser: 1000

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - kms
          topologyKey: topology.kubernetes.io/zone

Reserve a Static IP Address:
gcloud compute addresses create kms-ip --global
gcloud compute addresses describe kms-ip --global --format="value(address)"
Deploy Hanzo KMS:
helm install kms kms-helm-charts/kms \
--namespace kms \
--create-namespace \
--values kms-values.yaml
Verify: Confirm deployment is successful:
# Check pods are running
kubectl get pods -n kms
# Check service
kubectl get svc -n kms
# Check ingress
kubectl get ingress -n kms
# Check pod logs
kubectl logs -l app.kubernetes.io/name=kms -n kms --tail=50
Wait for all pods to be in Running state.
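In scripts (CI/CD pipelines, for example) that wait is easier with a small retry helper. A hypothetical sketch; pass it any command that succeeds once the condition holds, e.g. `wait_for 30 10 kubectl -n kms rollout status deployment/kms`:

```shell
# wait_for ATTEMPTS DELAY CMD... : retry CMD until it succeeds or attempts run out
wait_for() {
  local attempts="$1" delay="$2" i
  shift 2
  for (( i = 1; i <= attempts; i++ )); do
    "$@" && return 0
    sleep "$delay"
  done
  echo "gave up after $attempts attempts: $*" >&2
  return 1
}
```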
Secure your Hanzo KMS deployment with HTTPS:
Create a ManagedCertificate Resource:
# managed-cert.yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: kms-cert
  namespace: kms
spec:
  domains:
    - kms.example.com

kubectl apply -f managed-cert.yaml
Update DNS:
Create an A record in your DNS provider pointing kms.example.com to the static IP address.
Verify: Check certificate status:
kubectl describe managedcertificate kms-cert -n kms
Certificate provisioning can take 15-60 minutes.
Alternatively, you can use cert-manager with Let's Encrypt instead of a Google-managed certificate. Install cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
Create a ClusterIssuer:
# letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: gce

kubectl apply -f letsencrypt-prod.yaml
Update your ingress annotations to use cert-manager:
ingress:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

Force HTTPS Redirect:
# frontend-config.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
  namespace: kms
spec:
  redirectToHttps:
    enabled: true

kubectl apply -f frontend-config.yaml
Add the annotation to your ingress:
annotations:
  networking.gke.io/v1beta1.FrontendConfig: "ssl-redirect"

Verify: Test HTTPS access:
curl -I https://kms.example.com/api/status
After completing the above steps, your Hanzo KMS instance should be up and running on GCP. You can now proceed with creating an admin account and configuring additional features.
Additional Configuration
Configure email sending for Hanzo KMS notifications and invitations:
Using SendGrid:
kubectl create secret generic kms-smtp \
--from-literal=SMTP_HOST="smtp.sendgrid.net" \
--from-literal=SMTP_PORT="587" \
--from-literal=SMTP_USERNAME="apikey" \
--from-literal=SMTP_PASSWORD="<your-sendgrid-api-key>" \
--from-literal=SMTP_FROM_ADDRESS="noreply@example.com" \
--from-literal=SMTP_FROM_NAME="Hanzo KMS" \
-n kms
Using Gmail:
kubectl create secret generic kms-smtp \
--from-literal=SMTP_HOST="smtp.gmail.com" \
--from-literal=SMTP_PORT="587" \
--from-literal=SMTP_USERNAME="your-email@gmail.com" \
--from-literal=SMTP_PASSWORD="<app-password>" \
--from-literal=SMTP_FROM_ADDRESS="your-email@gmail.com" \
--from-literal=SMTP_FROM_NAME="Hanzo KMS" \
-n kms
Update your Helm values to include the SMTP secret:
envFrom:
  - secretRef:
      name: kms-secrets
  - secretRef:
      name: kms-smtp

Upgrade the deployment:
helm upgrade kms kms-helm-charts/kms \
--namespace kms \
--values kms-values.yaml
Verify: Check logs for SMTP configuration:
kubectl logs -l app.kubernetes.io/name=kms -n kms | grep -i smtp
Access running containers for debugging:
Exec into a Hanzo KMS pod:
# Get pod name
kubectl get pods -n kms
# Exec into the pod
kubectl exec -it <pod-name> -n kms -- /bin/sh
Common debugging commands:
# Check environment variables
kubectl exec -it <pod-name> -n kms -- env | grep -E "(DB_|REDIS_|SITE_)"
# Test database connectivity
kubectl exec -it <pod-name> -n kms -- nc -zv <cloud-sql-ip> 5432
# Test Redis connectivity
kubectl exec -it <pod-name> -n kms -- nc -zv <memorystore-ip> 6379
# View application logs
kubectl logs <pod-name> -n kms --tail=100 -f
Run a debug pod:
kubectl run debug-pod --rm -it --image=busybox -n kms -- /bin/sh
Hanzo KMS automatically runs database migrations on startup. To manually manage migrations:
Check migration status:
kubectl logs -l app.kubernetes.io/name=kms -n kms | grep -i migration
Run migrations manually:
# Exec into a pod and run migrations
kubectl exec -it <pod-name> -n kms -- npm run migration:latest
Rollback migrations (if needed):
kubectl exec -it <pod-name> -n kms -- npm run migration:rollback
Always back up your database before running manual migrations. Use Cloud SQL automated backups or create a manual snapshot first.
Database Backups:
Cloud SQL automated backups are enabled by default. To create a manual backup:
gcloud sql backups create --instance=kms-db
To restore from a backup:
gcloud sql backups restore <backup-id> \
--restore-instance=kms-db \
--backup-instance=kms-db
Encryption Key Backup:
The ENCRYPTION_KEY is critical. Store it securely:
- In Google Secret Manager with restricted IAM permissions
- In an offline encrypted backup in a secure physical location
- Never store it in version control
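For the offline copy, one illustrative approach is to encrypt the exported file with a passphrase using openssl before moving it to offline storage (the file name, key material, and passphrase below are placeholders):

```shell
# Create a sample key file (in practice this is the export from Secret Manager)
echo -n "0123456789abcdef0123456789abcdef" > encryption-key-backup.txt

# Encrypt with AES-256-CBC and a PBKDF2-derived key; passphrase comes from an env var
BACKUP_PASSPHRASE="choose-a-strong-passphrase" \
  openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in encryption-key-backup.txt -out encryption-key-backup.enc -pass env:BACKUP_PASSPHRASE

# Round-trip check: decrypting must reproduce the original bytes
BACKUP_PASSPHRASE="choose-a-strong-passphrase" \
  openssl enc -d -aes-256-cbc -pbkdf2 \
  -in encryption-key-backup.enc -pass env:BACKUP_PASSPHRASE

# Remove the plaintext once the encrypted copy is verified
rm encryption-key-backup.txt
```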
Export secrets for backup:
gcloud secrets versions access latest --secret=kms-encryption-key > encryption-key-backup.txt
# Encrypt and store this file securely offline
Pre-upgrade checklist:
- Review Hanzo KMS release notes for breaking changes
- Backup your database
- Verify your ENCRYPTION_KEY is backed up
Upgrade process:
# Update Helm repo
helm repo update
# Update image tag in values file
# Edit kms-values.yaml: image.tag: "v0.X.X"
# Upgrade deployment
helm upgrade kms kms-helm-charts/kms \
--namespace kms \
--values kms-values.yaml
# Monitor rollout
kubectl rollout status deployment/kms -n kms
Rollback if needed:
helm rollback kms -n kms
Google Cloud Logging:
View Hanzo KMS logs in Cloud Logging:
- Navigate to Logging > Logs Explorer in the GCP Console
- Filter:
resource.type="k8s_container" resource.labels.namespace_name="kms"
Set up Cloud Monitoring alerts:
# Create alert for high CPU usage
gcloud alpha monitoring policies create \
--notification-channels=<channel-id> \
--display-name="Hanzo KMS High CPU" \
--condition-display-name="Pod CPU > 80%" \
--condition-threshold-value=0.8 \
--condition-threshold-duration=300s \
--condition-filter='resource.type="k8s_pod" AND resource.labels.namespace_name="kms"'
Uptime checks:
- Navigate to Monitoring > Uptime checks in the GCP Console
- Create a check for https://kms.example.com/api/status
- Set check frequency (e.g., every 1 minute)
Enable OpenTelemetry:
env:
  - name: OTEL_TELEMETRY_COLLECTION_ENABLED
    value: "true"
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.monitoring.svc.cluster.local:4317"

Horizontal Pod Autoscaler:
The HPA is configured in the Helm values. To verify:
kubectl get hpa -n kms
kubectl describe hpa kms -n kms
GKE Cluster Autoscaler:
Already enabled during cluster creation. To verify:
gcloud container clusters describe kms-cluster \
--region=us-central1 \
--format="value(autoscaling)"
Adjust scaling parameters:
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

To completely remove Hanzo KMS and associated resources:
Delete Helm release:
helm uninstall kms -n kms
kubectl delete namespace kms
Delete GCP resources:
# Delete Memorystore
gcloud redis instances delete kms-redis --region=us-central1
# Delete Cloud SQL (WARNING: This deletes all data!)
gcloud sql instances delete kms-db
# Delete GKE cluster
gcloud container clusters delete kms-cluster --region=us-central1
# Delete static IP
gcloud compute addresses delete kms-ip --global
# Delete NAT and router
gcloud compute routers nats delete kms-nat --router=kms-router --region=us-central1
gcloud compute routers delete kms-router --region=us-central1
# Delete firewall rules
gcloud compute firewall-rules delete allow-health-checks
# Delete VPC (after all resources are removed)
gcloud compute networks subnets delete kms-subnet --region=us-central1
gcloud compute networks delete kms-vpc
# Delete secrets
for secret in kms-encryption-key kms-auth-secret kms-db-uri kms-redis-url; do
gcloud secrets delete $secret
done
# Delete service account
gcloud iam service-accounts delete kms-gsa@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
Deleting Cloud SQL will permanently delete all data. Ensure you have backups before proceeding.
Infrastructure as Code
A Terraform configuration for deploying Hanzo KMS infrastructure on GCP:
# main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

variable "project_id" {
  description = "GCP Project ID"
  type        = string
}

variable "region" {
  description = "GCP Region"
  type        = string
  default     = "us-central1"
}

# VPC Network
resource "google_compute_network" "kms_vpc" {
  name                    = "kms-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "kms_subnet" {
  name          = "kms-subnet"
  ip_cidr_range = "10.0.0.0/20"
  region        = var.region
  network       = google_compute_network.kms_vpc.id

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.4.0.0/14"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.8.0.0/20"
  }

  private_ip_google_access = true
}

# Cloud Router and NAT
resource "google_compute_router" "kms_router" {
  name    = "kms-router"
  region  = var.region
  network = google_compute_network.kms_vpc.id
}

resource "google_compute_router_nat" "kms_nat" {
  name                               = "kms-nat"
  router                             = google_compute_router.kms_router.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

# GKE Cluster
resource "google_container_cluster" "kms_cluster" {
  name       = "kms-cluster"
  location   = var.region
  network    = google_compute_network.kms_vpc.name
  subnetwork = google_compute_subnetwork.kms_subnet.name

  remove_default_node_pool = true
  initial_node_count       = 1

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  logging_service    = "logging.googleapis.com/kubernetes"
  monitoring_service = "monitoring.googleapis.com/kubernetes"
}

resource "google_container_node_pool" "kms_nodes" {
  name       = "kms-node-pool"
  location   = var.region
  cluster    = google_container_cluster.kms_cluster.name
  node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }

  node_config {
    machine_type = "n2-standard-2"
    disk_size_gb = 50

    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]

    workload_metadata_config {
      mode = "GKE_METADATA"
    }
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }
}

# Cloud SQL
resource "google_sql_database_instance" "kms_db" {
  name             = "kms-db"
  database_version = "POSTGRES_15"
  region           = var.region

  settings {
    tier              = "db-n1-standard-2"
    availability_type = "REGIONAL"
    disk_type         = "PD_SSD"
    disk_size         = 20
    disk_autoresize   = true

    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.kms_vpc.id
    }

    backup_configuration {
      enabled                        = true
      start_time                     = "03:00"
      point_in_time_recovery_enabled = true
      transaction_log_retention_days = 7
    }
  }

  deletion_protection = true
  depends_on          = [google_service_networking_connection.private_vpc_connection]
}

# Private VPC Connection for Cloud SQL
resource "google_compute_global_address" "private_ip_address" {
  name          = "google-managed-services-kms-vpc"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.kms_vpc.id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.kms_vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

# Memorystore Redis
resource "google_redis_instance" "kms_redis" {
  name               = "kms-redis"
  tier               = "STANDARD_HA"
  memory_size_gb     = 1
  region             = var.region
  authorized_network = google_compute_network.kms_vpc.id
  redis_version      = "REDIS_7_0"
}

# Outputs
output "gke_cluster_name" {
  value = google_container_cluster.kms_cluster.name
}

output "cloud_sql_private_ip" {
  value = google_sql_database_instance.kms_db.private_ip_address
}

output "redis_host" {
  value = google_redis_instance.kms_redis.host
}

This is a simplified example to get you started. For a complete deployment, you'll need to add Secret Manager resources, IAM bindings, and Kubernetes resources. Adapt this example to your infrastructure standards.
Troubleshooting
Symptoms: Pods stuck in Pending, CrashLoopBackOff, or Error state.
Check pod status:
kubectl describe pod <pod-name> -n kms
kubectl logs <pod-name> -n kms --previous
Common causes:
- Insufficient resources: Check node capacity and resource requests
- Image pull errors: Verify image tag and registry access
- Secret not found: Ensure kms-secrets exists in the namespace
- Database connection failed: Verify Cloud SQL private IP and credentials
Symptoms: Database connection errors in logs.
Verify connectivity:
# Check Cloud SQL instance status
gcloud sql instances describe kms-db
# Test from a debug pod
kubectl run debug --rm -it --image=postgres:15 -n kms -- \
psql "postgresql://kms_user:<password>@<cloud-sql-ip>:5432/kms"
Common causes:
- VPC peering not established: Check private service connection
- Firewall rules blocking traffic: Verify firewall allows port 5432
- Wrong credentials: Verify username and password
- Cloud SQL not in same VPC: Ensure private IP is configured
Symptoms: Redis connection errors in logs.
Verify connectivity:
# Check Memorystore status
gcloud redis instances describe kms-redis --region=us-central1
# Test from a debug pod
kubectl run debug --rm -it --image=redis:7 -n kms -- \
redis-cli -h <memorystore-ip> ping
Common causes:
- Memorystore not in same VPC: Verify network configuration
- Firewall rules blocking traffic: Verify firewall allows port 6379
- Wrong IP address: Verify the Memorystore host IP
Symptoms: Cannot access Hanzo KMS via the external URL.
Check ingress status:
kubectl describe ingress -n kms
kubectl get events -n kms --sort-by='.lastTimestamp'Common causes:
- DNS not configured: Verify A record points to static IP
- Certificate not ready: Check ManagedCertificate status
- Backend unhealthy: Verify pods are passing health checks
- Static IP not reserved: Ensure kms-ip exists
Symptoms: ManagedCertificate stuck in Provisioning state.
Check certificate status:
kubectl describe managedcertificate kms-cert -n kms
Common causes:
- DNS not propagated: Wait for DNS propagation (can take up to 48 hours)
- Domain verification failed: Ensure A record is correct
- Rate limiting: Let's Encrypt has rate limits for certificate issuance
- Ingress not ready: Ensure ingress has an external IP assigned
Symptoms: Pods being OOMKilled or throttled.
Check resource usage:
kubectl top pods -n kms
kubectl describe pod <pod-name> -n kms | grep -A5 "Limits\|Requests"
Solutions:
- Increase resource limits in Helm values
- Enable HPA for automatic scaling
- Check for memory leaks in application logs
- Review Cloud Monitoring dashboards for trends
Symptoms: Errors about decryption or invalid encryption key.
Common causes:
- Wrong ENCRYPTION_KEY: Verify the key matches what was used to encrypt data
- Key not set: Ensure the secret contains ENCRYPTION_KEY
- Key changed: The encryption key cannot be changed after initial setup
Verify key is set:
kubectl get secret kms-secrets -n kms -o jsonpath='{.data.ENCRYPTION_KEY}' | base64 -d
If you've lost your encryption key, encrypted data cannot be recovered. Always maintain secure backups of your encryption key.
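The `base64 -d` in that command is only undoing the base64 encoding Kubernetes applies to Secret data, not any encryption. A quick local illustration of the round trip (the key value here is an example):

```shell
KEY="3f9c2a1d4e5b6c7d8e9f0a1b2c3d4e5f"   # example 32-character hex key
ENCODED=$(echo -n "$KEY" | base64)        # how the value appears inside the Secret
DECODED=$(echo "$ENCODED" | base64 -d)    # what `base64 -d` recovers

[ "$DECODED" = "$KEY" ] && echo "round-trip ok"
```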