Linux (HA)
Hanzo KMS High Availability Deployment architecture for Linux
This guide describes how to deploy Hanzo KMS with high availability on Linux machines without containerization. The architecture below is a baseline for minimal high availability; scale it to match your specific requirements.
Architecture Overview

The deployment consists of the following key components:
| Service | Nodes | Recommended Specs | GCP Instance | AWS Instance |
|---|---|---|---|---|
| External Load Balancer | 1 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| Internal Load Balancer | 1 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| Etcd Cluster | 3 | 4 vCPU, 4 GB memory | n1-highcpu-4 | c5n.xlarge |
| PostgreSQL Cluster | 3 | 2 vCPU, 8 GB memory | n1-standard-2 | m5.large |
| Redis + Sentinel | 3+3 | 2 vCPU, 8 GB memory | n1-standard-2 | m5.large |
| Hanzo KMS Core | 3 | 2 vCPU, 4 GB memory | n1-highcpu-2 | c5.large |
Network Architecture
All servers operate within the 52.1.0.0/24 network range (example addresses; substitute a private range such as one from 10.0.0.0/8 in your own deployment) with the following IP assignments:
| Service | IP Address |
|---|---|
| External Load Balancer | 52.1.0.1 |
| Internal Load Balancer | 52.1.0.2 |
| Etcd Node 1 | 52.1.0.3 |
| Etcd Node 2 | 52.1.0.4 |
| Etcd Node 3 | 52.1.0.5 |
| PostgreSQL Node 1 | 52.1.0.6 |
| PostgreSQL Node 2 | 52.1.0.7 |
| PostgreSQL Node 3 | 52.1.0.8 |
| Redis Node 1 | 52.1.0.9 |
| Redis Node 2 | 52.1.0.10 |
| Redis Node 3 | 52.1.0.11 |
| Sentinel Node 1 | 52.1.0.12 |
| Sentinel Node 2 | 52.1.0.13 |
| Sentinel Node 3 | 52.1.0.14 |
| Hanzo KMS Core 1 | 52.1.0.15 |
| Hanzo KMS Core 2 | 52.1.0.16 |
| Hanzo KMS Core 3 | 52.1.0.17 |
Component Setup Guide
Configure Etcd Cluster
The Etcd cluster is needed for leader election in the PostgreSQL HA setup. Skip this step if using managed PostgreSQL.
- Install Etcd on each node:
```shell
sudo apt update
sudo apt install etcd
```

- Configure each node with unique identifiers and cluster membership. Example configuration for Node 1 (/etc/etcd/etcd.conf):

```yaml
name: etcd1
data-dir: /var/lib/etcd
initial-cluster-state: new
initial-cluster-token: etcd-cluster-1
initial-cluster: etcd1=http://52.1.0.3:2380,etcd2=http://52.1.0.4:2380,etcd3=http://52.1.0.5:2380
initial-advertise-peer-urls: http://52.1.0.3:2380
listen-peer-urls: http://52.1.0.3:2380
listen-client-urls: http://52.1.0.3:2379,http://127.0.0.1:2379
advertise-client-urls: http://52.1.0.3:2379
```

Nodes 2 and 3 use the same layout; change name and every URL that contains the node's own IP address.
Configure PostgreSQL
For production deployments, you have two options for highly available PostgreSQL:
Option A: Managed PostgreSQL Service (Recommended for Most Users)
Use cloud provider managed services:
- AWS: Amazon RDS for PostgreSQL with Multi-AZ
- GCP: Cloud SQL for PostgreSQL with HA configuration
- Azure: Azure Database for PostgreSQL with zone redundant HA
These services handle replication, failover, and maintenance automatically.
Option B: Self-Managed PostgreSQL Cluster
A full PostgreSQL HA installation guide is beyond the scope of this document, but the overview and code snippets below should guide your deployment.
Required Components:
- PostgreSQL 14+ on each node
- Patroni for cluster management
- Etcd for distributed consensus
Key Steps Overview:
```shell
# 1. Install requirements on each PostgreSQL node
sudo apt update
sudo apt install -y postgresql-14 postgresql-contrib-14 python3-pip
pip3 install 'patroni[etcd]' psycopg2-binary

# 2. Create Patroni config directory
sudo mkdir /etc/patroni
sudo chown postgres:postgres /etc/patroni

# 3. Create Patroni configuration (example for first node)
# /etc/patroni/config.yml - requires careful customization
```

```yaml
scope: kms-cluster
namespace: /db/
name: postgresql1

restapi:
  listen: 52.1.0.6:8008
  connect_address: 52.1.0.6:8008

etcd:
  hosts: 52.1.0.3:2379,52.1.0.4:2379,52.1.0.5:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      parameters:
        max_connections: 1000
        shared_buffers: 2GB
        work_mem: 8MB
        max_worker_processes: 8
        max_parallel_workers_per_gather: 4
        max_parallel_workers: 8
        wal_level: replica
        hot_standby: "on"
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby_feedback: "on"
```
Important considerations:
- Proper disk configuration for WAL and data directories
- Network latency between nodes
- Backup strategy and point-in-time recovery
- Monitoring and alerting setup
- Connection pooling configuration
- Security and network access controls
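Patroni is typically supervised by systemd so it restarts on failure. A minimal unit sketch (the binary path assumes a pip install; adjust to your environment):

```
# /etc/systemd/system/patroni.service
[Unit]
Description=Patroni PostgreSQL HA
After=network-online.target

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /etc/patroni/config.yml
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```

Enable it on each node with sudo systemctl enable --now patroni.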
Configure Redis and Sentinel
Similar to PostgreSQL, a full HA Redis setup guide is beyond the scope of this document. Below are the key steps and considerations for your deployment.
Option A: Managed Redis Service (Recommended for Most Users)
Use cloud provider managed Redis services:
- AWS: ElastiCache for Redis with Multi-AZ
- GCP: Memorystore for Redis with HA
- Azure: Azure Cache for Redis with zone redundancy
Option B: Self-Managed Redis Cluster
Setting up a production Redis HA cluster requires understanding several components, in particular Redis replication and Sentinel-based failover.
Key Steps Overview:
```shell
# 1. Install Redis on all nodes
sudo apt update
sudo apt install redis-server
```

2. Configure the master node (52.1.0.9) in /etc/redis/redis.conf:

```
bind 52.1.0.9
port 6379
dir /var/lib/redis
maxmemory 3gb
maxmemory-policy noeviction
requirepass "your_redis_password"
masterauth "your_redis_password"
```

3. Configure the replica nodes (52.1.0.10, 52.1.0.11):

```
bind 52.1.0.10 # Change for each replica
port 6379
dir /var/lib/redis
replicaof 52.1.0.9 6379
masterauth "your_redis_password"
requirepass "your_redis_password"
```

4. Configure the Sentinel nodes (52.1.0.12, 52.1.0.13, 52.1.0.14), e.g. in /etc/redis/sentinel.conf:

```
port 26379
sentinel monitor mymaster 52.1.0.9 6379 2
sentinel auth-pass mymaster "your_redis_password"
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```
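The per-node files above differ only in an address line or two, so they can be generated rather than hand-edited. A sketch for the replica configs, assuming the example IPs and placeholder password used throughout this guide:

```shell
# Generate a redis.conf per replica; only the bind address varies.
# Assumption: IPs and password match the example deployment above.
for ip in 52.1.0.10 52.1.0.11; do
  cat > "redis-${ip}.conf" <<EOF
bind ${ip}
port 6379
dir /var/lib/redis
replicaof 52.1.0.9 6379
masterauth "your_redis_password"
requirepass "your_redis_password"
EOF
done
ls redis-*.conf
```

Copy each generated file to /etc/redis/redis.conf on the matching node.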
Configure HAProxy Load Balancer
Install and configure HAProxy for internal load balancing:
```
global
    maxconn 10000
    log stdout format raw local0

defaults
    log global
    mode tcp
    retries 3
    timeout client 30m
    timeout connect 10s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

frontend postgres_master
    bind *:5000
    default_backend postgres_master_backend

frontend postgres_replicas
    bind *:5001
    default_backend postgres_replica_backend

backend postgres_master_backend
    option httpchk GET /master
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-1 52.1.0.6:5432 check port 8008
    server postgres-2 52.1.0.7:5432 check port 8008
    server postgres-3 52.1.0.8:5432 check port 8008

backend postgres_replica_backend
    option httpchk GET /replica
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server postgres-1 52.1.0.6:5432 check port 8008
    server postgres-2 52.1.0.7:5432 check port 8008
    server postgres-3 52.1.0.8:5432 check port 8008

frontend redis_master_frontend
    bind *:6379
    default_backend redis_master_backend

backend redis_master_backend
    option tcp-check
    tcp-check send AUTH\ your_redis_password\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-1 52.1.0.9:6379 check inter 1s
    server redis-2 52.1.0.10:6379 check inter 1s
    server redis-3 52.1.0.11:6379 check inter 1s

frontend kms_frontend
    bind *:80
    default_backend kms_backend

backend kms_backend
    option httpchk GET /api/status
    http-check expect status 200
    server kms-1 52.1.0.15:8080 check inter 1s
    server kms-2 52.1.0.16:8080 check inter 1s
    server kms-3 52.1.0.17:8080 check inter 1s
```

The AUTH value in the Redis health check must match the requirepass set in the Redis configuration.
Deploy Hanzo KMS Core
On Debian/Ubuntu, first add the Hanzo KMS repository:

```shell
curl -1sLf 'https://artifacts-kms-core.kms.hanzo.ai/setup.deb.sh' | sudo -E bash
```

Then install Hanzo KMS:

```shell
sudo apt-get update && sudo apt-get install -y kms-core
```

On RHEL-based systems, add the repository:

```shell
curl -1sLf 'https://artifacts-kms-core.kms.hanzo.ai/setup.rpm.sh' | sudo -E bash
```

Then install Hanzo KMS:

```shell
sudo yum install kms-core
```

For production environments, we strongly recommend installing a specific version of the package to maintain consistency across reinstalls. View available versions.
Next, create the configuration file /etc/kms/kms.rb with the following:

```ruby
kms_core['ENCRYPTION_KEY'] = 'your-secure-encryption-key'
kms_core['AUTH_SECRET'] = 'your-secure-auth-secret'
kms_core['DB_CONNECTION_URI'] = 'postgres://user:pass@52.1.0.2:5000/kms'
kms_core['REDIS_URL'] = 'redis://52.1.0.2:6379'
kms_core['PORT'] = 8080
```

To generate ENCRYPTION_KEY and AUTH_SECRET, see the configuration documentation.
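If you just need strong random values for these settings, openssl can produce them; a sketch, assuming hex-encoded 32-byte secrets are an acceptable format (confirm against the configuration documentation):

```shell
# Generate two independent 32-byte secrets, hex-encoded (64 characters each).
# Assumption: hex strings are a valid format for these settings.
ENCRYPTION_KEY=$(openssl rand -hex 32)
AUTH_SECRET=$(openssl rand -hex 32)
echo "ENCRYPTION_KEY=${ENCRYPTION_KEY}"
echo "AUTH_SECRET=${AUTH_SECRET}"
```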
If you are using managed services for either Postgres or Redis, please replace the values of the secrets accordingly.
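When pointing DB_CONNECTION_URI at a managed endpoint, remember that special characters in the password must be URL-encoded. A sketch, where the hostname, user, password, and database name are placeholders:

```shell
# Compose DB_CONNECTION_URI from parts, URL-encoding the password.
# Assumption: all values below are placeholders for your managed service.
DB_HOST=my-cluster.example.internal
DB_USER=kms
DB_PASS='p@ss/word'   # placeholder
DB_NAME=kms
ENC_PASS=$(python3 -c "import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=''))" "$DB_PASS")
DB_CONNECTION_URI="postgres://${DB_USER}:${ENC_PASS}@${DB_HOST}:5432/${DB_NAME}"
echo "$DB_CONNECTION_URI"
```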
Lastly, start and verify each node running kms-core:

```shell
sudo kms-ctl reconfigure
sudo kms-ctl status
```
Monitoring and Maintenance
- Monitor HAProxy stats: http://52.1.0.2:7000/haproxy?stats
- Monitor Hanzo KMS logs: sudo kms-ctl tail
- Check cluster health:
  - Etcd: etcdctl cluster-health
  - PostgreSQL: patronictl list
  - Redis: redis-cli info replication