A proof of concept for using K3s to create a Kubernetes cluster whose nodes run on different public cloud providers (AWS, GCP, Azure), connected by Kilo.

What does this enable?

- Automatic failover, even across availability zones and clouds
- A cloud-agnostic setup that lets you mix and match the services and offerings that fit best
```shell
# create RSA key
ssh-keygen -b 4096 -t rsa -f ~/.ssh/cloud-key
```

Copy the contents of the public key `~/.ssh/cloud-key.pub` into `.auto.tfvars` as `public_ssh_key` (see `.auto.tfvars.example`). Terraform will automatically pick up this file.
You can also override it as follows:

- Using the CLI `-var` option: `terraform apply -var="public_ssh_key=..."`
- Using an environment variable: `export TF_VAR_public_ssh_key="..."`
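For reference, the relevant part of `.auto.tfvars` could look like this (a sketch; only `public_ssh_key` is confirmed by this README — check `.auto.tfvars.example` for the full variable set):

```hcl
# .auto.tfvars — sketch; the key material shown is a placeholder
public_ssh_key = "ssh-rsa AAAAB3NzaC1yc2E... user@host"
```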
```shell
# init, plan, and apply infrastructure
# use `-target=module.gcp_us_central1` to target specific modules
terraform init
terraform plan
terraform apply

# show resources and details
terraform output
terraform state list
terraform state show module.aws_us_east_1.aws_instance.node
```
```shell
# destroy infrastructure
terraform destroy
```

- Ensure all nodes use Debian 11
- Open port UDP 51820 for WireGuard (inbound and outbound)
- Install WireGuard on all nodes (docs)
- Configure WireGuard network interface on all nodes (docs)
- Install K3s on all nodes (Conceptual Overview, Quick Start)
- Specify topology (annotating location and optionally region)
- Deploy Kilo on all nodes
- Figure out how to join the Azure node
- Deploy traefik/whoami services to test connectivity
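The K3s install step above might be sketched as follows, using the official install script. Disabling the flannel backend so Kilo can provide the pod network is an assumption based on Kilo's K3s docs; the server IP and token are placeholders:

```shell
# on the server node: install K3s without flannel (Kilo handles networking)
curl -sfL https://get.k3s.io | sh -s - server --flannel-backend none

# on each agent node: join using the server's address and node token (placeholders)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```

The node token can be read on the server from `/var/lib/rancher/k3s/server/node-token`.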
- Look into Cloud-init for cloud instance initialisation
- Enable cgroups v2 on the Azure node
- Annotate `location` and `force-endpoint` in order to make Kilo aware of the topology
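The topology annotations above could be applied like this (a sketch assuming Kilo's `kilo.squat.ai/location` and `kilo.squat.ai/force-endpoint` annotation keys; node names and the endpoint address are placeholders):

```shell
# group nodes by cloud/region so Kilo builds one WireGuard link per location
kubectl annotate node aws-node-1 kilo.squat.ai/location="aws-us-east-1"
kubectl annotate node gcp-node-1 kilo.squat.ai/location="gcp-us-central1"

# for a node behind NAT (e.g. the Azure node), pin its public endpoint explicitly
kubectl annotate node azure-node-1 kilo.squat.ai/force-endpoint="203.0.113.10:51820"
```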