# Write the Kustomize Patches

## What You Will Learn
- How to write a Deployment that runs your game container
- How to write a Service that gives the Deployment a stable internal address
- How to write an Ingress that exposes the Service to the internet with TLS
- How these three resources connect to form the request path
## How the Pieces Connect
Before writing any YAML, here is the big picture of how a web request reaches your game:
```
Internet
   |
   v
Ingress (055-ingress.yaml)       -- routes "game.godot-demo.junovy.com" traffic
   |
   v
Service (050-service.yaml)       -- forwards traffic to matching Pods
   |
   v
Deployment (040-deployment.yaml) -- runs and manages the Pod(s)
   |
   v
Pod (created automatically)      -- the running container with nginx + your game
```
In Godot terms: the Ingress is like an `InputEvent` arriving at the `SceneTree`. The Service routes it to the right node. The Deployment is the node that processes it.
## The Deployment (040-deployment.yaml)
The Deployment tells Kubernetes what container to run and how many copies.
Create the file `clients/com.junovy.godot-demo/040-deployment.yaml`:
```yaml
# 040-deployment.yaml
# Runs the Godot game container using nginx:alpine + our exported files
apiVersion: apps/v1
kind: Deployment
metadata:
  # The name of this Deployment resource
  name: godot-demo
  # Must match the tenant namespace
  namespace: hst-godot-demo
  labels:
    # Labels are key-value pairs used for selection and filtering
    app: godot-demo
spec:
  # How many copies of the Pod to run
  # 1 is fine for a demo; production apps typically use 2-3
  replicas: 1
  selector:
    # The Deployment manages Pods that have this label
    # This MUST match the labels in the Pod template below
    matchLabels:
      app: godot-demo
  template:
    metadata:
      # These labels are applied to each Pod created by this Deployment
      labels:
        app: godot-demo
    spec:
      containers:
        # The container definition
        - name: godot-demo
          # Full ECR image path -- replace <account-id> with the real value
          image: <account-id>.dkr.ecr.eu-central-1.amazonaws.com/godot-demo:v1.0.0
          ports:
            # The port nginx listens on inside the container
            - containerPort: 80
              protocol: TCP
          resources:
            # Resource requests: guaranteed minimum allocation
            requests:
              cpu: 50m      # 50 millicores (0.05 of a CPU core)
              memory: 64Mi  # 64 mebibytes of RAM
            # Resource limits: maximum allowed usage
            limits:
              cpu: 200m     # 200 millicores (0.2 of a CPU core)
              memory: 128Mi # 128 mebibytes of RAM
          # Liveness probe: Kubernetes restarts the container if this fails
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 30
          # Readiness probe: traffic is only sent when this passes
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 10
```
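Once Flux applies this manifest, you can watch it come up from the command line. This is a sketch assuming you have `kubectl` access to the cluster; the namespace and names come straight from the manifest above:

```shell
# Wait for the rollout to finish (blocks until READY shows 1/1)
kubectl -n hst-godot-demo rollout status deployment/godot-demo

# List the Pod the Deployment created, including its labels
kubectl -n hst-godot-demo get pods -l app=godot-demo --show-labels
```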
### Key Fields Explained
| Field | What It Does |
|---|---|
| `replicas: 1` | Runs one copy of the Pod. Kubernetes will restart it if it crashes. |
| `selector.matchLabels` | Links the Deployment to its Pods. Must match `template.metadata.labels`. |
| `resources.requests` | The minimum CPU and memory guaranteed to the container. |
| `resources.limits` | The maximum CPU and memory the container is allowed to use. |
| `livenessProbe` | Kubernetes checks this endpoint. If it fails, the Pod is restarted. |
| `readinessProbe` | Kubernetes checks this endpoint. If it fails, traffic stops flowing to the Pod. |
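To see how Kubernetes acts on these fields at runtime, describe the Pod. A sketch, again assuming `kubectl` access:

```shell
# Shows the Limits/Requests block, the Liveness/Readiness probe settings,
# and any recent probe failures under Events
kubectl -n hst-godot-demo describe pod -l app=godot-demo
```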
## The Service (050-service.yaml)
The Service gives your Deployment a stable internal DNS name and IP address. Without it, the Ingress would have no way to find your Pods.
Create the file `clients/com.junovy.godot-demo/050-service.yaml`:
```yaml
# 050-service.yaml
# Creates a stable internal endpoint that routes traffic to the Deployment's Pods
apiVersion: v1
kind: Service
metadata:
  name: godot-demo
  namespace: hst-godot-demo
spec:
  # ClusterIP is the default type -- only accessible inside the cluster
  # The Ingress will be the external entry point
  type: ClusterIP
  selector:
    # Route traffic to Pods with this label
    # This MUST match the labels on the Deployment's Pod template
    app: godot-demo
  ports:
    - protocol: TCP
      # The port the Service listens on (other resources use this)
      port: 80
      # The port on the container to forward to (must match containerPort)
      targetPort: 80
```
The Service works by label matching. It finds all Pods with `app: godot-demo` and distributes traffic across them. This is the same label you set in the Deployment.
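You can confirm that the selector actually matched the Pod by listing the Service's endpoints. A sketch, assuming `kubectl` access; an empty ENDPOINTS column means the Service selector and the Pod labels disagree:

```shell
# ENDPOINTS should show the Pod's IP on port 80
kubectl -n hst-godot-demo get endpoints godot-demo

# Inside the cluster the Service is reachable at this DNS name:
#   godot-demo.hst-godot-demo.svc.cluster.local
```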
## The Ingress (055-ingress.yaml)
The Ingress is the front door. It maps an external hostname to the internal Service and handles TLS termination.
Create the file `clients/com.junovy.godot-demo/055-ingress.yaml`:
```yaml
# 055-ingress.yaml
# Exposes the Service to the internet via a hostname with automatic TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: godot-demo
  namespace: hst-godot-demo
  annotations:
    # Use Traefik's built-in ACME resolver for automatic TLS
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    # Automatically create a DNS record for this domain
    external-dns.alpha.kubernetes.io/hostname: game.godot-demo.junovy.com
spec:
  # Use the external Traefik ingress controller
  ingressClassName: traefik-external
  tls:
    # TLS configuration: which hostnames get certificates
    - hosts:
        - game.godot-demo.junovy.com
      # Traefik ACME manages certificates internally
      # This secret is managed by Traefik -- you do not need to create it
      secretName: godot-demo-tls
  rules:
    # Route traffic for this hostname to the Service
    - host: game.godot-demo.junovy.com
      http:
        paths:
          - path: /
            # Prefix means "match this path and anything below it"
            pathType: Prefix
            backend:
              service:
                # The name of the Service resource (from 050-service.yaml)
                name: godot-demo
                port:
                  # The port on the Service to forward to
                  number: 80
```
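After Flux applies the manifest, a quick way to confirm the Ingress was admitted (a sketch, assuming `kubectl` access):

```shell
# HOSTS should show game.godot-demo.junovy.com
# CLASS should show traefik-external
kubectl -n hst-godot-demo get ingress godot-demo
```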
### How TLS Works Here
You do not need to generate a certificate manually. Here is what happens automatically:
- Flux applies the Ingress manifest
- external-dns sees the hostname annotation and creates a DNS A record
- Traefik sees the `certresolver: letsencrypt` annotation
- Traefik requests a certificate from Let's Encrypt via the ACME protocol
- Let's Encrypt verifies domain ownership
- Traefik stores the certificate internally and serves HTTPS
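Once DNS has propagated, you can verify the chain from outside the cluster. A sketch, assuming `dig` and `openssl` are available on your machine:

```shell
# Confirm the DNS record external-dns created
dig +short game.godot-demo.junovy.com

# Inspect the certificate Traefik obtained; the issuer should be Let's Encrypt
echo | openssl s_client -connect game.godot-demo.junovy.com:443 \
  -servername game.godot-demo.junovy.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```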
## Update the Kustomization
Now add the three new files to `kustomization.yaml`:
```yaml
# kustomization.yaml
# Complete resource list for the hst-godot-demo tenant
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Applied in order: namespace first, then workload, then networking
  - 010-namespace.yaml
  - 040-deployment.yaml
  - 050-service.yaml
  - 055-ingress.yaml
```
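Before committing, you can render the whole tenant locally to catch YAML mistakes early. A sketch, assuming you run it from the repository root with a recent `kubectl` (which bundles Kustomize):

```shell
# Render all four manifests exactly as Flux will apply them
kubectl kustomize clients/com.junovy.godot-demo

# Or, with the standalone Kustomize CLI:
kustomize build clients/com.junovy.godot-demo
```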
## The Complete Directory
Your tenant directory now has everything needed for deployment:
```
dds-k8s-cluster/
  clients/
    com.junovy.godot-demo/
      kustomization.yaml    # Lists all resources for Kustomize
      010-namespace.yaml    # Creates the namespace
      040-deployment.yaml   # Runs the game container
      050-service.yaml      # Internal routing to the Pods
      055-ingress.yaml      # External access with TLS
```
Five files. That is a complete tenant for a static web application on Junovy.
## The Request Path, One More Time
Here is the full chain from a player's browser to your game:
| Step | What Happens |
|---|---|
| 1 | Player opens https://game.godot-demo.junovy.com |
| 2 | DNS resolves to the cluster's Ingress controller IP |
| 3 | Traefik matches the hostname to your Ingress rule |
| 4 | TLS is terminated using the Traefik-provisioned certificate |
| 5 | The request is forwarded to the godot-demo Service on port 80 |
| 6 | The Service routes to a Pod managed by the godot-demo Deployment |
| 7 | nginx inside the Pod serves index.html with the correct headers |
| 8 | The browser loads the WASM engine and PCK file, and the game starts |
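Steps 1 through 7 can be spot-checked with a single request. A sketch, assuming the site is already live and `curl` is installed:

```shell
# -I fetches only the response headers; look for a 200 status
# and the headers mentioned in step 7
curl -I https://game.godot-demo.junovy.com/
```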
## Key Takeaways
- The Deployment defines what container to run and how many replicas
- The Service gives the Deployment a stable internal address using label selectors
- The Ingress maps an external hostname to the Service and handles TLS via Traefik's built-in ACME resolver
- All three connect through labels (`app: godot-demo`) and names
- The `kustomization.yaml` must list every resource file or Flux will not apply it
## What Is Next
Next up: Deploy and Verify, where you will push everything to Git, trigger Flux, and see your game live.