Introduction
- Terminology:
- “Kubernetes” (“K8s”) is the entire orchestration system
- The `kubectl` (“cube control”) CLI is the main way to configure K8s and manage apps
- The “control plane” is a set of “master” containers that manage the cluster
- API server
- scheduler
- controller manager
- etcd (the cluster’s database)
- CoreDNS
- possible to add more containers for storage, etc
- A “cluster” is made up of nodes and pods
- A “node” is a single server in a K8s cluster
- e.g. locally, Docker Desktop is a node
- A “kubelet” is a K8s agent running on a node that communicates between the control plane and the container runtime
- `kube-proxy` also runs on each node and handles the networking that routes traffic to pods
- A “pod” controls one or more containers running together on a node
- It’s the basic unit of deployment
- Containers are always in and controlled by pods
- A pod can have one or more containers
- It’s not a concrete thing; it’s an abstract K8s concept
- Even though Docker runs containers, with K8s you can only deploy “pods”
- K8s then tells the container runtime (via the kubelet) to create the actual containers for you
- Every type of resource that creates containers uses this pod concept
- I create pods, K8s creates the containers
- More specifically, I use YAML or `kubectl` to tell the control plane I want to create a pod; the kubelet sees a request for a pod on its node and determines how many containers are needed; the kubelet then tells Docker (the container runtime) to create the necessary containers
- A “controller” creates and updates pods and other K8s objects
- There are many types of controllers, e.g. Deployment, ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, etc
- By combining pods with various controllers, you can create many different kinds of deployments
- A “service” is a network endpoint you use to connect to a set of pods
- A persistent endpoint in the cluster at a specific DNS name and port
- A “namespace” is a filtered group of objects in a cluster (optional)
- Not a security feature
- Just a way to filter my `kubectl` views to only include what I want to see
- e.g. the `default` namespace filters out system resources
- Q: A “resource” is…
- Resources use the Labels and Selectors keys to find their child resources
- e.g. `Labels: app=my-app`
- A resource’s Selector lists the Labels it needs to match in child resources (i.e. a resource’s Labels are how a parent resource finds it); see the YAML sketch below
- There’s also secrets, ConfigMaps, etc
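- To make Labels and Selectors concrete, here is a minimal sketch (all names are hypothetical): a Deployment’s `selector` must match the labels in its pod template, and that match is how the controller finds and manages the pods it creates

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app              # the Deployment/ReplicaSet looks for pods with this label
  template:
    metadata:
      labels:
        app: my-app            # labels stamped onto every pod the controller creates
    spec:
      containers:
        - name: web
          image: nginx:1.27    # hypothetical image
          ports:
            - containerPort: 80
```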
- Installing locally:
- many ways to install, each of which includes different containers
- easiest way is to enable it in Docker Desktop
- Play with Kubernetes - a free online K8s playground
- Docker Mastery: The Complete Toolset From a Docker Captain • Includes an extensive introduction to K8s after introducing Docker, Compose and Swarm • Bret Fisher 🧑🎓
- Kubernetes 101, part I, the fundamentals • Leandro Proença 📖
- Kubernetes Explained in 15 Minutes | Hands On (2024 Edition) • Travis Media 📺
- Inbox:
- Course: Docker Mastery: with Kubernetes +Swarm from a Docker Captain | Udemy
- purchased
- The Illustrated Children’s Guide to Kubernetes • Cloud Native Computing Foundation 📖
- Kubernetes for Frontend Developers • Benjamin Ajibade 📖
- Learn Kubernetes Basics • This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts, and includes an interactive online tutorial. These interactive tutorials let you manage a simple cluster and its containerized applications for yourself. Using the interactive tutorials, you can learn to: Deploy a containerized application on a cluster. Scale the deployment. Update the containerized application with a new software version. • Kubernetes Docs 📚
- Kubernetes: Hello World on Google Cloud Platform • Data Stream 📺
- Kubernetes for Sysadmins – PuppetConf 2016 • Kelsey Hightower 📺
- Getting Started with Containers and Google Kubernetes Engine (Cloud Next ‘18) • Google Cloud Tech 📺
- Deploy Your Next Application to Google Kubernetes Engine (Cloud Next ‘19) • Google Cloud Tech 📺
- Kubernetes 110: Your First Deployment • Daniel Sanche 📖
- Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] • TechWorld with Nana 📺
- Kubernetes Course - Full Beginners Tutorial (Containerize Your Apps!) • freeCodeCamp 📺
- The container orchestrator landscape • Jordan Webb 📖
General
- Inbox:
- A few things I’ve learned about Kubernetes • Julia Evans 📖
- Reasons Kubernetes is cool • Julia Evans 📖
- Kubernetes docs
- Google Kubernetes Engine (GKE) • Google Cloud docs 📚
- Google Cloud blog posts about containers and Kubernetes
- Kubernetes in Action, Second Edition • Marko Luksa 📕
Pods
- Kubernetes 101, part II, pods • Leandro Proença 📖
Deployments & ReplicaSets
- A Deployment is the most common K8s resource type for most purposes
- other controller types like CronJob have very specialized uses, but most apps will end up using a Deployment controller
- A Deployment contains one or more ReplicaSets, which contain one or more Pods
- The Deployment controls the ReplicaSet
- The ReplicaSet controls the pods
- Two ways to deploy pods = `kubectl create deploy` or YAML
- Creating a deployment with `kubectl`: `kubectl create deploy <name> --image <image name>`
- `deploy` can be rephrased as `deployment` or `deployments`
- the command asks the control plane for a new deployment for your changes
- the control plane then triggers a new ReplicaSet to be created alongside the existing ReplicaSet
- the kubelet creates the new ReplicaSet’s pods
- the old ReplicaSet remains in place while the new ReplicaSet comes online (in case something is wrong with the new deployment)
- see `kubectl create deploy --help` for all options
- `kubectl get all` = after creating a deployment, check what resources now exist (likely one or more pods, then the service, then the deployment, then the replicaset)
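- A rough sketch of what that looks like (the `my-web` name and `nginx` image are placeholders; your output will differ):

```sh
$ kubectl create deploy my-web --image nginx
deployment.apps/my-web created

$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-web-5b7f9c6d8-abcde    1/1     Running   0          10s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5d

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-web   1/1     1            1           10s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/my-web-5b7f9c6d8   1         1         1       10s
```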
- Scaling ReplicaSets:
- `kubectl scale deploy/<name> --replicas <number>` = change the number of desired total pods in the deployment
- `deployment` can be rephrased as `deployments` or `deploy`
- `deploy <name>` can be rephrased as `deploy/<name>`
- does not create a new deployment or new replicaset; just updates the existing deployment record with the new desired number of pods
- the scheduler will see a new pod has been requested and assigns a node to it
- the kubelet sees the new pod and tells Docker to start the container
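- A sketch of scaling the hypothetical `my-web` deployment:

```sh
$ kubectl scale deploy/my-web --replicas 3
deployment.apps/my-web scaled

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
my-web-5b7f9c6d8-abcde   1/1     Running   0          5m
my-web-5b7f9c6d8-fghij   1/1     Running   0          8s
my-web-5b7f9c6d8-klmno   1/1     Running   0          8s
```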
- Self-healing:
- Kubernetes 101, part III, controllers and self-healing • Leandro Proença 📖
- Deployments vs ReplicaSets:
- Compared to creating a `ReplicaSet` directly, a Deployment adds the ability to easily scale the number of replicas and roll out/roll back changes with no downtime
- `kubectl rollout status deploy/<name>` shows real-time updates during a rollout
- `kubectl rollout undo deploy/<name>` is a faster way to roll back changes that caused issues than `git revert`, since it doesn’t risk getting slowed down by merge conflicts or CI pipelines. So, it’s a good first step before doing that `git revert` as a follow-up.
- `kubectl describe deploy/<name> | grep revision` to get the revision number of the current deployment (to compare it to the available revisions listed by `kubectl rollout history deploy/<name>`, and roll back to one of them with `kubectl rollout undo deploy/<name> --to-revision=<number>`)
- Kubernetes 101, part IV, deployments • Leandro Proença 📖
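- A sketch of checking revisions and rolling back (hypothetical `my-web` deployment; output abbreviated):

```sh
$ kubectl rollout history deploy/my-web
deployment.apps/my-web
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

$ kubectl describe deploy/my-web | grep revision
Annotations:  deployment.kubernetes.io/revision: 2

$ kubectl rollout undo deploy/my-web --to-revision=1
deployment.apps/my-web rolled back
```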
- Inbox:
- Overview of deploying workloads • Google Kubernetes Engine Docs 📚
- Deploying a containerized web application • Google Kubernetes Engine Docs 📚
Services
- A “service” is a stable IP address for pods
- If we want to connect to pods (and their containers), we need a service
- CoreDNS allows us to resolve services by name (i.e. a friendly hostname)
- There are four available service types:
- ClusterIP = the default; always available; only available in the cluster; one set of pods talking to another set of pods; only reachable from within the cluster (nodes and pods); pods can reach the service on the app’s port number; doesn’t need any special firewall rules since it only works in the cluster
- Creating a ClusterIP service (by exposing a port):
- `kubectl get pods --watch` = watch what happens with the following commands in a separate window
- `kubectl create deploy <name> --image <image name>` = create deployment
- `kubectl scale deploy/<name> --replicas 5` = scale up to 5 pods
- `kubectl expose deploy/<name> --port 8888` = create a ClusterIP service (by default) with the same name as the deployment (by default)
- `kubectl get service` = inspect running services
- shows the type, cluster IP, external IP (if any), ports and age
- cURL a ClusterIP service:
- Remember, this service type isn’t accessible from outside the cluster, so we’re going to need to hop into the cluster by creating a pod and connecting to its shell
- `kubectl run tmp-shell --rm -it --image bretfisher/netshoot` = create one pod with one container using the netshoot image (which includes terminal troubleshooting utilities), open a shell (`-it`), run the default `CMD` (`zsh`), and delete itself when I `exit` the shell (`--rm`)
- `curl <service name>:8888` = connect to the ClusterIP service by its name and port
- Each time you run the command, there’s a chance the request will be handled by a different pod (indicated by the `HOSTNAME` in the cURL response)
- Run the command a bunch of times to see the `HOSTNAME` change
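- Putting the ClusterIP steps together (a sketch; `my-web` is made up and I’m assuming an image that serves on port 8888, e.g. `bretfisher/httpenv` from the course):

```sh
$ kubectl create deploy my-web --image bretfisher/httpenv
$ kubectl scale deploy/my-web --replicas 5
$ kubectl expose deploy/my-web --port 8888
$ kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    5d
my-web       ClusterIP   10.97.123.45   <none>        8888/TCP   15s

# ClusterIP isn't reachable from the laptop, so hop into the cluster first
$ kubectl run tmp-shell --rm -it --image bretfisher/netshoot
tmp-shell> curl my-web:8888
{"HOSTNAME":"my-web-5b7f9c6d8-abcde", ...}   # rerun it and the HOSTNAME (pod) changes
```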
- NodePort = always available; different from ClusterIP in that it’s designed to allow anything outside the cluster to talk to the service through a high port number that’s allocated on each node; that port is open on every node’s IP; anyone can connect through it; other pods need to be updated to use this port; automatically creates a ClusterIP service to use inside the cluster (if one doesn’t already exist)
- Creating a NodePort service (by exposing a port of type NodePort):
- `kubectl get pods --watch` = watch what happens with the following commands in a separate window
- `kubectl create deploy <name> --image <image name>` = create deployment
- `kubectl scale deploy/<name> --replicas 5` = scale up to 5 pods
- `kubectl expose deploy/<name> --port 8888 --name <service-name> --type NodePort` = create a NodePort service with a custom name
- `kubectl get service` = inspect running services
- shows the type, cluster IP, external IP (if any), ports (the node port is to the right of the `:`) and age
- cURL a NodePort service:
- This service is accessible externally (e.g. from my laptop environment), so I can curl via the IP of the host (`localhost` if working locally) and the node port shown in `kubectl get services`
- `curl localhost:<node port>` = connect to the node via its exposed port (always a high number)
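- A NodePort sketch continuing the hypothetical `my-web` example:

```sh
$ kubectl expose deploy/my-web --port 8888 --name my-web-np --type NodePort
$ kubectl get service my-web-np
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
my-web-np   NodePort   10.98.200.10   <none>        8888:31234/TCP   10s

# 31234 is the node port (always a high number, 30000-32767 by default)
$ curl localhost:31234
{"HOSTNAME":"my-web-5b7f9c6d8-fghij", ...}
```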
- LoadBalancer = mostly used in the cloud; controls an external load balancer endpoint through the command line; creates a cluster IP and node port, then uses a third-party load balancer solution to automatically set up load balancing; the load balancer passes requests from its exposed port to the node port; only for traffic coming into the cluster from outside; requires an external provider that K8s can talk to remotely; automatically creates a ClusterIP service to use inside the cluster (if one doesn’t already exist) and a NodePort service to expose the cluster externally (if one doesn’t already exist)
- `kubectl expose deploy/<name> --port 8888 --name <service name> --type LoadBalancer` = if using Docker Desktop, a load balancer resource is included (otherwise you need a cloud provider)
- `curl localhost:8888` (can also use the node port directly)
- ExternalName = used less often; relates to things in the cluster talking to things outside the cluster; adds a CNAME DNS record to CoreDNS only, to allow pods to reach an external service; not used for pods, but for giving pods a DNS name to use for something outside K8s; useful when you don’t have control over the external DNS
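- A sketch of an ExternalName service (the `my-db` name and hostname are made up); it simply becomes a CNAME record in CoreDNS:

```sh
$ kubectl create service externalname my-db --external-name db.example.com
service/my-db created

# pods can now connect to "my-db" (or my-db.<namespace>.svc.cluster.local)
# and CoreDNS resolves it to db.example.com
```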
- Multiple services can be running at a time, but they need to have unique names
- DNS for K8s services:
- A DNS server is an optional add-on
- CoreDNS is the default
- DNS-based service discovery = when you create a service, you get a hostname to access the service (`curl <hostname>`)
- But that only works for Services in the same namespace (`kubectl get namespaces`)
- Since you can have resources with the same names in different namespaces, to talk to a service in another namespace, you need to use the FQDN instead: `curl <hostname>.<namespace>.svc.cluster.local`
- K8s Ingress:
- A separate resource type (not one of the four Service types) designed for routing HTTP traffic into the cluster; see the manifest sketch below
- Exposing pods (and their containers) with services:
- `kubectl expose` = create a service for existing pods
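- A minimal Ingress manifest might look like this (a sketch with made-up names; it also requires an ingress controller, e.g. ingress-nginx, to be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                      # hypothetical name
spec:
  rules:
    - host: my-app.example.com      # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # routes to an existing ClusterIP service
                port:
                  number: 8888
```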
YAML config
- A declarative replacement for `kubectl` commands
- Similar to how `docker-compose.yaml` can be a declarative replacement for `docker run`
- Nice in that it can be easily version controlled and reused
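- A sketch of the declarative workflow (assuming a Deployment manifest saved as a hypothetical `deploy.yaml`):

```sh
$ kubectl apply -f deploy.yaml       # create or update whatever the file describes
deployment.apps/my-web created

# edit the file (e.g. bump the image tag or replica count), then re-apply;
# K8s reconciles the cluster to match the file
$ kubectl apply -f deploy.yaml
deployment.apps/my-web configured
```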
kubectl
- kubectl is a command-line tool for running commands against Kubernetes clusters
- In general, it offers less functionality than using YAML
- Q: when to use `kubectl` (what is it better at)…?
- Three main commands:
- `kubectl run` = start a single pod (similar to how `docker run` creates a single container)
- Unlikely to be used in production (where `kubectl create deployment` is the norm) unless testing or debugging a particular container
- `kubectl create` = create resources via CLI or YAML
- e.g. `kubectl create deploy`
- `kubectl apply` = create/update something via YAML
- General:
- `kubectl version` = test your connection to the API
- `kubectl get all` = list common resources in namespace (many types are hidden by default)
- The output always includes the `kubernetes` service
- Inspecting resources with `kubectl get <resource>`:
- `--help` = see all options
- `-o wide` = show more info columns
- `-o yaml` = show all info in YAML format
- Examples:
- `kubectl get all` = list the common resources in the current namespace
- `kubectl get deploy/<name>` = list a deployment
- `kubectl get node/<name>`
- Inspecting resources with `kubectl describe <resource>`:
- `kubectl describe <resource type>/<name>` = show a detailed, human-readable summary of that resource
- A good first step when debugging, before diving into container or system logs
- The main advantage over `get` is that it shows much more info about the resource and its recent events
- Watching running resources with `--watch` (`-w`):
- e.g. `kubectl get pods -w` = start a long-running command to watch all pods in the namespace (e.g. will update if a pod is deleted)
- `kubectl get events -w` = watch all events as they occur
- Inspecting container logs with `logs`:
- all K8s logs are container logs (that’s where the actually running processes are)
- by default, container logs are stored in each node’s Docker runtime
- generally, you’d want to centralize your container logs using a third-party solution that makes them searchable and usable for alerting and compiling metrics
- otherwise, when viewing the logs of multiple containers at a time using `kubectl logs`, the logs will be jumbled together, not in sequence, not colored, etc
- a good alternative to `kubectl logs` is stern/stern: ⎈ Multi pod and container log tailing for Kubernetes
- Example commands:
- `kubectl logs <resource type>/<name>` = show some container logs for that resource
- `kubectl logs deploy/<name>` = get logs from the first container in a random pod in the deployment
- `kubectl logs -l app=<name>` = show logs from all pods in a deployment matching a given label
- view available labels by running `kubectl describe deploy/<name>` and searching the output for “Labels:”
- labels trickle down, so labels on a Deployment will also be present on its ReplicaSets and Pods (which may gain additional labels of their own)
- `kubectl logs pod/<name> --all-containers=true` = show logs for all containers in a specific pod
- `kubectl logs pod/my-pod-xxx-yyy -c <container name>` = show logs for a specific container in a specific pod
- can get the container name by running `kubectl describe pod/<name>` and searching the output for “Created container” or “Started container”
- `kubectl logs --follow --tail 1` = show the last log, then watch for new logs
- useful when looking for any bad behaviour in any container
- Cleanup commands:
- `kubectl delete pod/<name>` = delete a pod
- `kubectl delete deploy/<name>` = delete a deployment
- `kubectl get all` = see what resources still remain
- Pods:
- `kubectl run <container name> --image <image name>` = deploy a pod
- `kubectl get pods` = show list of running pods
- Commands:
- `kubectl config view` = view my local kubectl config
- `kubectl config current-context` = see just the current context (i.e. which project I’m connected to locally)
- `kubectl config use-context <context name>` = switch to a different context
- `kubectl config set-context ...` = edit the properties of a context
- `kubectl get pods --all-namespaces` (or `-A`) = list all pods in all namespaces of a context; useful when you aren’t sure which namespace you need; rerun with `-n <namespace>` once you know
- `kubectl get namespaces` = list all namespaces in a context
- Inbox:
- kubectl Cheat Sheet
- kubectl Reference Docs
- Kubectl Config Set-Context Tutorial | Airplane
- kubectl for Docker Users | Kubernetes
- Deploy a React app to Kubernetes using Docker • LogRocket Blog 📖
- Kubernetes (k8s) Cheat Sheet by gauravpandey44 - Download free from Cheatography - Cheatography.com: Cheat Sheets For Every Occasion
K8s Management
- …
Lens
k9s
- Tool for inspecting and interacting with Kubernetes clusters from the command line
- Commands:
- `?` = show all currently-available commands
- `hjkl` = move cursor around
- `enter` = move in (view details)
- `esc` = move out (go back up a level)
- `l` = view logs
- `p` = view previous logs
- K9s • K9s docs 📚
- Maximizing Productivity with Kubernetes: The Benefits of Using K9s • Kai Hoffman 📖
Argo
- Argo CD • Argo CD docs 📚
- How to kick off a cron job in the Argo UI:
- go to the deployment and make sure the manifest has been updated (i.e. synced)
- find the cron job’s container’s rectangle, open its ”…” menu, and choose “create job”
- validate by watching the job go in grafana’s explore view with `{namespace="abc", container="cron job container"}`
k3s
- Using HA Kubernetes at home, was never so simple! • Detailed K3s setup walkthrough • Christian Lempa 📺
Inbox
- Q: If I trigger a pod rollout and restart, what if the pod is actively handling a request? Does the whole orchestration process know that? Will it wait? Or does it take down the pod? How is that handled?
- How does the Kubernetes scheduler work? • Julia Evans 📖
- `kubectl exec -it <pod name> -n <namespace> -- /bin/bash` = open a shell in a running container (may require elevated permissions)
- kubectl: Get a Shell to a Running Container | Kubernetes - how to use `kubectl exec` to ssh into a container
- Configure Liveness, Readiness and Startup Probes | Kubernetes
- KEDA | Kubernetes Event-driven Autoscaling - KEDA is a tool for horizontally scaling (i.e. adding/removing) pods based on how many events a container needs to process (automatically add/remove pods as needed to handle workload)
- argo: what does it do? what does it add to k8s?
- Switch Between Multiple Kubernetes Clusters With Ease - use `kubectl config get-contexts` to view the available clusters, `current-context` to see which one is active, and `use-context NAME` to switch clusters
- For the Love of God, Stop Using CPU Limits on Kubernetes (Updated) | Robusta
- PerfectScale | Govern, right-size and scale Kubernetes the easy way - a tool to help set pod cpu and memory resource requests appropriately based on past usage
- robusta-dev/krr: Prometheus-based Kubernetes Resource Recommendations - another tool that recommends appropriate pod cpu and memory resource settings
- Kubernetes Metrics Reference | Kubernetes - use `prober_probe_total{probe_type="Liveness", result="failed"}` in Grafana to create dashboards and alerts for liveness, readiness or startup probe failures in a k8s container (that’s a Thanos metric query, rather than a Loki query, and can be scoped to a specific cluster, app, etc as usual)
- keep this here (as related to k8s) or move to observability (as related to metrics)?
- `kubens` = switch between available namespaces via fzf
- `kubectx` = switch between available contexts via fzf
- Skaffold - tool for testing out your local changes in a hosted K8s environment
- Listing all resources in a namespace • How to use `kubectl` to list all resources in a namespace • Stack Overflow 💬
- Is it possible to trigger a kubernetes cronjob also upon deployment? • How to use `kubectl` to start a cron job; also here and here • Stack Overflow 💬
- Retrying a process that fails:
- If an exception is encountered and handled, re-raising it will (helpfully) result in a pod status of ‘failed’ instead of ‘success’ (i.e. if try/except, call `raise` inside the exception block)
- Kubernetes has a `backoffLimit` setting that allows X number of retries when “failed” occurs
- The combo provides an automatic way to retry jobs when failures can happen for flaky/ephemeral reasons that are fixed by simply retrying
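- A sketch of how the combo might look in a Job manifest (`backoffLimit` is the real field; the name, image and command are made up):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-import                         # hypothetical job name
spec:
  backoffLimit: 3                            # retry up to 3 times when a pod exits non-zero ("failed")
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          image: my-registry/importer:latest # hypothetical image
          # the script should re-raise handled exceptions so the pod actually exits non-zero
          command: ["python", "import.py"]
```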
- kubernetes: health checks:
- `ErrImagePull` typically means the image does exist, but it’s unable to reach the registry or auth into the registry
- ArgoCD Tutorial for Beginners | GitOps CD for Kubernetes - TechWorld with Nana
- sarub0b0/kubetui: An intuitive Terminal User Interface (TUI) tool for real-time monitoring and exploration of Kubernetes resources - Looks like an interesting k9s alternative with a nice log query syntax
- kdash-rs/kdash: A simple and fast dashboard for Kubernetes - k9s alternative
- ahmetb/kubectx: Faster way to switch between clusters and namespaces in kubectl
- kubermatic/fubectl: Reduces repetitive interactions with kubectl - collection of aliases for common kubectl commands
- Manage Docker and Kubernetes in VSCode • Demo of why VS Code can replace Argo CD, k9s, kubectl, etc as a way to manage Kubernetes and Docker in production • Christian Lempa 📺