class: title, self-paced Kubernetes 101
.nav[*Self-paced version*] .debug[ These slides have been built from commit: 55180b0 [shared/title.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/title.md)] --- class: title, in-person Kubernetes 101
.footnote[ **Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) https://ryaxtech.github.io/kube.training/** ] .debug[[shared/title.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/title.md)] --- name: toc-part-1 ## Part 1 - [Kubernetes concepts](#toc-kubernetes-concepts) - [Declarative vs imperative](#toc-declarative-vs-imperative) - [Kubernetes network model](#toc-kubernetes-network-model) - [First contact with `kubectl`](#toc-first-contact-with-kubectl) - [Setting up Kubernetes](#toc-setting-up-kubernetes) .debug[(auto-generated TOC)] --- name: toc-part-2 ## Part 2 - [Running our first containers on Kubernetes](#toc-running-our-first-containers-on-kubernetes) - [Revisiting `kubectl logs`](#toc-revisiting-kubectl-logs) - [Exposing containers](#toc-exposing-containers) - [Shipping images with a registry](#toc-shipping-images-with-a-registry) .debug[(auto-generated TOC)] --- name: toc-part-3 ## Part 3 - [The Kubernetes dashboard](#toc-the-kubernetes-dashboard) - [Security implications of `kubectl apply`](#toc-security-implications-of-kubectl-apply) - [Daemon sets](#toc-daemon-sets) - [Labels and selectors](#toc-labels-and-selectors) - [Rolling updates](#toc-rolling-updates) .debug[(auto-generated TOC)] --- name: toc-part-4 ## Part 4 - [Accessing logs from the CLI](#toc-accessing-logs-from-the-cli) - [Namespaces](#toc-namespaces) - [Next steps](#toc-next-steps) - [Links and resources](#toc-links-and-resources) .debug[(auto-generated TOC)] .debug[[shared/toc.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/toc.md)] --- ## Versions installed - Kubernetes 1.19.2 - Docker Engine 19.03.13 - Docker Compose 1.25.4 .exercise[ - Check all installed versions: ```bash kubectl version docker version docker-compose -v ``` ] .debug[[k8s/versions-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/versions-k8s.md)] --- class: extra-details ## Kubernetes and Docker compatibility - Kubernetes 1.17 validates Docker Engine version [up to 19.03](https://github.com/kubernetes/kubernetes/pull/84476) *however ...* - Kubernetes 1.15 validates Docker Engine versions [up to 18.09](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#dependencies)
(the latest version when Kubernetes 1.14 was released) - Kubernetes 1.13 only validates Docker Engine versions [up to 18.06](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.13.md#external-dependencies) - Is it a problem if I use Kubernetes with a "too recent" Docker Engine? -- class: extra-details - No! - "Validates" = continuous integration builds with very extensive (and expensive) testing - The Docker API is versioned, and offers strong backward-compatibility
(if a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way) .debug[[k8s/versions-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/versions-k8s.md)] --- ## Kubernetes versioning and cadence - Kubernetes versions are expressed using *semantic versioning* (a Kubernetes version is expressed as MAJOR.MINOR.PATCH) - There is a new *patch* release whenever needed (generally, there are about [2 to 4 weeks](https://github.com/kubernetes/sig-release/blob/master/release-engineering/role-handbooks/patch-release-team.md#release-timing) between patch releases, except when a critical bug or vulnerability is found: in that case, a patch release will follow as fast as possible) - There is a new *minor* release approximately every 3 months - At any given time, 3 *minor* releases are maintained (in other words, a given *minor* release is maintained about 9 months) .debug[[k8s/versions-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/versions-k8s.md)] --- ## Kubernetes version compatibility *Should my version of `kubectl` match exactly my cluster version?* - `kubectl` can be up to one minor version older or newer than the cluster (if cluster version is 1.15.X, `kubectl` can be 1.14.Y, 1.15.Y, or 1.16.Y) - Things *might* work with larger version differences (but they will probably fail randomly, so be careful) - This is an example of an error indicating version compatibility issues: ``` error: SchemaError(io.k8s.api.autoscaling.v2beta1.ExternalMetricStatus): invalid object doesn't have additional properties ``` - Check [the documentation](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl) for the whole story about compatibility ??? :EN:- Kubernetes versioning and compatibility :FR:- Les versions de Kubernetes et leur compatibilité .debug[[k8s/versions-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/versions-k8s.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous part](#toc-) | [Back to table of contents](#toc-part-1) | [Next part](#toc-declarative-vs-imperative) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster -- - What does that really mean? .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## What can we do with Kubernetes? - Let's imagine that we have a 3-tier e-commerce app: - web frontend - API backend - database (that we will keep out of Kubernetes for now) - We have built images for our frontend and backend components (e.g. with Dockerfiles and `docker build`) - We are running them successfully with a local environment (e.g. with Docker Compose) - Let's see how we would deploy our app on Kubernetes! 
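- As a sneak peek, the requests listed on the next slide map roughly to `kubectl` commands like these (a simplified sketch; the `atseashop/...` images and port 80 are made up for this example, and each step could also be done declaratively with YAML manifests):

```bash
# Start containers by creating Deployments
kubectl create deployment api --image=atseashop/api:v1.3
kubectl create deployment webfront --image=atseashop/webfront:v1.3
# Scale them to the desired number of replicas
kubectl scale deployment api --replicas=5
kubectl scale deployment webfront --replicas=10
# Put load balancers in front of them (internal, then public)
kubectl expose deployment api --port=80
kubectl expose deployment webfront --port=80 --type=LoadBalancer
# Roll out a new version, one pod at a time
# (the container is named after its image by `kubectl create deployment`)
kubectl set image deployment webfront webfront=atseashop/webfront:v1.4
```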
.debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers -- - New release! Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Autoscaling (straightforward on CPU; more complex on other metrics) - Resource management and scheduling (reserve CPU/RAM for containers; placement constraints) - Advanced rollout patterns (blue/green deployment, canary deployment) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## More things that Kubernetes can do for us - Batch jobs (one-off; parallel; also cron-style periodic execution) - Fine-grained access control (defining *what* can be done by *whom* on *which* resources) - Stateful services (databases, message queues, etc.) - Automating complex tasks with *operators* (e.g. database replication, failover, etc.) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![haha only kidding](images/k8s-arch1.png) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that ❤️ .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![that one is more like the real thing](images/k8s-arch2.png) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Kubernetes architecture: the nodes - The nodes executing our containers run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Kubernetes 
architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane on special nodes - It is common to reserve a dedicated node for the control plane (Except for single-node development clusters, like when using minikube) - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?) - Normal applications are restricted from running on this node (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)) - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Running the control plane outside containers - The services of the control plane can run in or out of containers - For instance: since `etcd` is a critical service, some people deploy it directly on a dedicated cluster (without containers) (This is illustrated on the first "super complicated" schema) - In some hosted Kubernetes offerings (e.g. 
AKS, GKE, EKS), the control plane is invisible (We only "see" a Kubernetes API endpoint) - In that case, there is no "master node" *For this reason, it is more accurate to say "control plane" rather than "master."* .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/single-node-dev.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/managed-kubernetes.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/single-control-and-workers.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/stacked-control-plane.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/non-dedicated-stacked-nodes.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/advanced-control-plane.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![](images/control-planes/advanced-control-plane-split-events.svg) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## How many nodes should a cluster have? - There is no particular constraint (no need to have an odd number of nodes for quorum) - A cluster can have zero node (but then it won't be able to start any pods) - For testing and development, having a single node is fine - For production, make sure that you have extra capacity (so that your workload still fits if you lose a node or a group of nodes) - Kubernetes is tested with [up to 5000 nodes](https://kubernetes.io/docs/setup/best-practices/cluster-large/) (however, running a cluster of that size requires a lot of tuning) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? No! -- - By default, Kubernetes uses the Docker Engine to run containers - We can leverage other pluggable runtimes through the *Container Runtime Interface* -
We could also use `rkt` ("Rocket") from CoreOS
(deprecated) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Some runtimes available through CRI - [containerd](https://github.com/containerd/containerd/blob/master/README.md) - maintained by Docker, IBM, and community - used by Docker Engine, microk8s, k3s, GKE; also standalone - comes with its own CLI, `ctr` - [CRI-O](https://github.com/cri-o/cri-o/blob/master/README.md): - maintained by Red Hat, SUSE, and community - used by OpenShift and Kubic - designed specifically as a minimal runtime for Kubernetes - [And more](https://kubernetes.io/docs/setup/production-environment/container-runtimes/) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? Yes! -- - In this workshop, we run our app on a single node first - We will need to build images and ship them around - We can do these things without Docker
(and get diagnosed with NIH¹ syndrome) - Docker is still the most stable container engine today
(but other options are maturing very quickly) .footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)] .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? - On our development environments, CI pipelines ... : *Yes, almost certainly* - On our production servers: *Yes (today)* *Probably not (in the future)* .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Interacting with Kubernetes - We will interact with our Kubernetes cluster through the Kubernetes API - The Kubernetes API is (mostly) RESTful - It allows us to create, read, update, delete *resources* - A few common resource types are: - node (a machine — physical or virtual — in our cluster) - pod (group of containers running together on a node) - service (stable network endpoint to connect to one or multiple containers) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic ![Node, pod, container](images/k8s-arch3-thanks-weave.png) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Scaling - How would we scale the pod shown on the previous slide? - **Do** create additional pods - each pod can be on a different node - each pod will have its own IP address - **Do not** add more NGINX containers in the pod - all the NGINX containers would be on the same node - they would all have the same IP address
(resulting in `Address already in use` errors) .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Together or separate - Should we put e.g. a web application server and a cache together?
("cache" being something like e.g. Memcached or Redis) - Putting them **in the same pod** means: - they have to be scaled together - they can communicate very efficiently over `localhost` - Putting them **in different pods** means: - they can be scaled separately - they must communicate over remote IP addresses
(incurring more latency, lower performance) - Both scenarios can make sense, depending on our goals .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- ## Credits - The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha) - it's one of the best Kubernetes architecture diagrams available! - The second diagram is courtesy of Weave Works - a *pod* can have multiple containers working together - IP addresses are associated with *pods*, not with individual containers Both diagrams used with permission. ??? :EN:- Kubernetes concepts :FR:- Kubernetes en théorie .debug[[k8s/concepts-k8s.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/concepts-k8s.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-declarative-vs-imperative class: title Declarative vs imperative .nav[ [Previous part](#toc-kubernetes-concepts) | [Back to table of contents](#toc-part-1) | [Next part](#toc-kubernetes-network-model) ] .debug[(automatically generated title slide)] --- # Declarative vs imperative - Our container orchestrator puts a very strong emphasis on being *declarative* - Declarative: *I would like a cup of tea.* - Imperative: *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.* -- - Declarative seems simpler at first ... -- - ... As long as you know how to brew tea .debug[[shared/declarative.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/declarative.md)] --- ## Declarative vs imperative - What declarative would really be: *I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.* -- *¹An infusion is obtained by letting the object steep a few minutes in hot² water.* -- *²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.* -- *³Ah, finally, containers! Something we know about. Let's get to work, shall we?* -- .footnote[Did you know there was an [ISO standard](https://en.wikipedia.org/wiki/ISO_3103) specifying how to brew tea?] .debug[[shared/declarative.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/declarative.md)] --- ## Declarative vs imperative - Imperative systems: - simpler - if a task is interrupted, we have to restart from scratch - Declarative systems: - if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary - we need to be able to *observe* the system - ... and compute a "diff" between *what we have* and *what we want* .debug[[shared/declarative.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/declarative.md)] --- ## Declarative vs imperative in Kubernetes - With Kubernetes, we cannot say: "run this container" - All we can do is write a *spec* and push it to the API server (by creating a resource like e.g. 
a Pod or a Deployment) - The API server will validate that spec (and reject it if it's invalid) - Then it will store it in etcd - A *controller* will "notice" that spec and act upon it .debug[[k8s/declarative.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/declarative.md)] --- ## Reconciling state - Watch for the `spec` fields in the YAML files later! - The *spec* describes *how we want the thing to be* - Kubernetes will *reconcile* the current state with the spec
(technically, this is done by a number of *controllers*) - When we want to change some resource, we update the *spec* - Kubernetes will then *converge* that resource ??? :EN:- Declarative vs imperative models :FR:- Modèles déclaratifs et impératifs .debug[[k8s/declarative.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/declarative.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-kubernetes-network-model class: title Kubernetes network model .nav[ [Previous part](#toc-declarative-vs-imperative) | [Back to table of contents](#toc-part-1) | [Next part](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- # Kubernetes network model - TL,DR: *Our cluster (nodes and pods) is one big flat IP network.* -- - In detail: - all nodes must be able to reach each other, without NAT - all pods must be able to reach each other, without NAT - pods and nodes must be able to reach each other, without NAT - each pod is aware of its IP address (no NAT) - pod IP addresses are assigned by the network implementation - Kubernetes doesn't mandate any particular implementation .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the good - Everything can reach everything - No address translation - No port translation - No new protocol - The network implementation can decide how to allocate addresses - IP addresses don't have to be "portable" from a node to another (We can use e.g. a subnet per node and use a simple routed topology) - The specification is simple enough to allow many various implementations .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the less good - Everything can reach everything - if you want security, you need to add network policies - the network implementation that you use needs to support them - There are literally dozens of implementations out there (https://github.com/containernetworking/cni/ lists more than 25 plugins) - Pods have level 3 (IP) connectivity, but *services* are level 4 (TCP or UDP) (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets) - `kube-proxy` is on the data path when connecting to a pod or container,
and it's not particularly fast (relies on userland proxying or iptables) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- ## Kubernetes network model: in practice - The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave) - We don't endorse Weave in a particular way, it just Works For Us - Don't worry about the warning about `kube-proxy` performance - Unless you: - routinely saturate 10G network interfaces - count packet rates in millions per second - run high-traffic VOIP or gaming platforms - do weird things that involve millions of simultaneous connections
(in which case you're already familiar with kernel tuning) - If necessary, there are alternatives to `kube-proxy`; e.g. [`kube-router`](https://www.kube-router.io) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: extra-details ## The Container Network Interface (CNI) - Most Kubernetes clusters use CNI "plugins" to implement networking - When a pod is created, Kubernetes delegates the network setup to these plugins (it can be a single plugin, or a combination of plugins, each doing one task) - Typically, CNI plugins will: - allocate an IP address (by calling an IPAM plugin) - add a network interface into the pod's network namespace - configure the interface as well as required routes etc. .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: extra-details ## Multiple moving parts - The "pod-to-pod network" or "pod network": - provides communication between pods and nodes - is generally implemented with CNI plugins - The "pod-to-service network": - provides internal communication and load balancing - is generally implemented with kube-proxy (or e.g. kube-router) - Network policies: - provide firewalling and isolation - can be bundled with the "pod network" or provided by another component .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic ![Overview of the three Kubernetes network layers](images/k8s-net-0-overview.svg) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic ![Pod-to-pod network](images/k8s-net-1-pod-to-pod.svg) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic ![Pod-to-service network](images/k8s-net-2-pod-to-svc.svg) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic ![Network policies](images/k8s-net-3-netpol.svg) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic ![View with all the layers again](images/k8s-net-4-overview.svg) .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: extra-details ## Even more moving parts - Inbound traffic can be handled by multiple components: - something like kube-proxy or kube-router (for NodePort services) - load balancers (ideally, connected to the pod network) - It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus) - Some solutions can fill multiple roles (e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy) ??? 
:EN:- The Kubernetes network model :FR:- Le modèle réseau de Kubernetes .debug[[k8s/kubenet.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubenet.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous part](#toc-kubernetes-network-model) | [Back to table of contents](#toc-part-1) | [Next part](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## `kubectl` is the new SSH - We often start managing servers with SSH (installing packages, troubleshooting ...) - At scale, it becomes tedious, repetitive, error-prone - Instead, we use config management, central logging, etc. - In many cases, we still need SSH: - as the underlying access method (e.g. Ansible) - to debug tricky scenarios - to inspect and poke at things .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## The parallel with `kubectl` - We often start managing Kubernetes clusters with `kubectl` (deploying applications, troubleshooting ...) - At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone - Instead, we use automated pipelines, observability tooling, etc. - In many cases, we still need `kubectl`: - to debug tricky scenarios - to inspect and poke at things - The Kubernetes API is always the underlying access method .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! .exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result! 
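- We can also extract specific fields with JSONPath (an optional extra; here we only grab the node names):

```bash
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
```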
] .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring types and definitions - We can list all available resource types by running `kubectl api-resources`
(In Kubernetes 1.10 and prior, this command used to be `kubectl get`) - We can view the definition for a resource type with: ```bash kubectl explain type ``` - We can view the definition of a field in a resource, for instance: ```bash kubectl explain node.spec ``` - Or get the full definition of all fields and sub-fields: ```bash kubectl explain node --recursive ``` .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## Introspection vs. documentation - We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference) - The API documentation is usually easier to read, but: - it won't show custom types (like Custom Resource Definitions) - we need to make sure that we look at the correct version - `kubectl api-resources` and `kubectl explain` perform *introspection* (they communicate with the API server and obtain the exact type definitions) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Type names - The most common resource names have three forms: - singular (e.g. `node`, `service`, `deployment`) - plural (e.g. `nodes`, `services`, `deployments`) - short (e.g. `no`, `svc`, `deploy`) - Some resources do not have a short name - `Endpoints` only have a plural form (because even a single `Endpoints` resource is actually a list of endpoints) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Viewing details - We can use `kubectl get -o yaml` to see all available details - However, YAML output is often simultaneously too much and not enough - For instance, `kubectl get node node1 -o yaml` is: - too much information (e.g.: list of images available on this node) - not enough information (e.g.: doesn't show pods running on this node) - difficult to read for a human operator - For a comprehensive overview, we can use `kubectl describe` instead .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## `kubectl describe` - `kubectl describe` needs a resource type and (optionally) a resource name - It is possible to provide a resource name *prefix* (all matching objects will be displayed) - `kubectl describe` will retrieve some extra information about the resource .exercise[ - Look at the information available for `node1` with one of the following commands: ```bash kubectl describe node/node1 kubectl describe node node1 ``` ] (We should notice a bunch of control plane pods.) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *Where are the pods that we saw just a moment earlier?!?* .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... 
This `kube-system` thing looks suspicious.* *In fact, I'm pretty sure it showed up earlier, when we did:* `kubectl describe node node1` .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can see resources in all namespaces with `--all-namespaces` .exercise[ - List the pods in all namespaces: ```bash kubectl get pods --all-namespaces ``` - Since Kubernetes 1.14, we can also use `-A` as a shorter version: ```bash kubectl get pods -A ``` ] *Here are our system pods!* .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## What are all these control plane pods? - `etcd` is our etcd server - `kube-apiserver` is the API server - `kube-controller-manager` and `kube-scheduler` are other control plane components - `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)) - `kube-proxy` is the (per-node) component managing port mappings and such - `weave` is the (per-node) component managing the network overlay - the `READY` column indicates the number of containers in each pod (1 for most pods, but `weave` has 2, for instance) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Scoping another namespace - We can also look at a different namespace (other than `default`) .exercise[ - List only the pods in the `kube-system` namespace: ```bash kubectl get pods --namespace=kube-system kubectl get pods -n kube-system ``` ] .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Namespaces and other `kubectl` commands - We can use `-n`/`--namespace` with almost every `kubectl` command - Example: - `kubectl create --namespace=X` to create something in namespace X - We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects - Examples: - `kubectl delete` can delete resources across multiple namespaces - `kubectl label` can add/remove/update labels across multiple namespaces .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-public`? .exercise[ - List the pods in the `kube-public` namespace: ```bash kubectl -n kube-public get pods ``` ] Nothing! `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters). .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring `kube-public` - The only interesting object in `kube-public` is a ConfigMap named `cluster-info` .exercise[ - List ConfigMap objects: ```bash kubectl -n kube-public get configmaps ``` - Inspect `cluster-info`: ```bash kubectl -n kube-public get configmap cluster-info -o yaml ``` ] Note the `selfLink` URI: `/api/v1/namespaces/kube-public/configmaps/cluster-info` We can use that! 
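If we just want the embedded kubeconfig, `kubectl` can also extract it for us (a convenience sketch; the `curl`-based method on the next slides shows that this even works without any credentials):

```bash
kubectl -n kube-public get configmap cluster-info \
        -o jsonpath='{.data.kubeconfig}'
```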
.debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## Accessing `cluster-info` - Earlier, when trying to access the API server, we got a `Forbidden` message - But `cluster-info` is readable by everyone (even without authentication) .exercise[ - Retrieve `cluster-info`: ```bash curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info ``` ] - We were able to access `cluster-info` (without auth) - It contains a `kubeconfig` file .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## Retrieving `kubeconfig` - We can easily extract the `kubeconfig` file from this ConfigMap .exercise[ - Display the content of `kubeconfig`: ```bash curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \ | jq -r .data.kubeconfig ``` ] - This file holds the canonical address of the API server, and the public key of the CA - This file *does not* hold client keys or tokens - This is not sensitive information, but allows us to establish trust .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-node-lease`? - Starting with Kubernetes 1.14, there is a `kube-node-lease` namespace (or in Kubernetes 1.13 if the NodeLease feature gate is enabled) - That namespace contains one Lease object per node - *Node leases* are a new way to implement node heartbeats (i.e. node regularly pinging the control plane to say "I'm alive!") - For more details, see [KEP-0009] or the [node controller documentation] [KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md [node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] The command above should either time out, or show an authentication error. Why? .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Time out - Connections to ClusterIP services only work *from within the cluster* - If we are outside the cluster, the `curl` command will probably time out (Because the IP address, e.g. 
10.96.0.1, isn't routed properly outside the cluster) - This is the case with most "real" Kubernetes clusters - To try the connection from within the cluster, we can use [shpod](https://github.com/jpetazzo/shpod) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Authentication error This is what we should see when connecting from within the cluster: ```json $ curl -k https://10.96.0.1 { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 } ``` .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## Explanations - We can see `kind`, `apiVersion`, `metadata` - These are typical of a Kubernetes API reply - Because we *are* talking to the Kubernetes API - The Kubernetes API tells us "Forbidden" (because it requires authentication) - The Kubernetes API is reachable from within the cluster (many apps integrating with Kubernetes will use this) .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- ## DNS integration - Each service also gets a DNS record - The Kubernetes DNS resolver is available *from within pods* (and sometimes, from within nodes, depending on configuration) - Code running in pods can connect to services using their name (e.g. https://kubernetes/...) ??? :EN:- Getting started with kubectl :FR:- Se familiariser avec kubectl .debug[[k8s/kubectlget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlget.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous part](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-part-1) | [Next part](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes - Kubernetes is made of many components that require careful configuration - Secure operation typically requires TLS certificates and a local CA (certificate authority) - Setting up everything manually is possible, but rarely done (except for learning purposes) - Let's do a quick overview of available options! .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Local development - Are you writing code that will eventually run on Kubernetes? - Then it's a good idea to have a development cluster! - Instead of shipping container images, we can test them on Kubernetes - Extremely useful when authoring or testing Kubernetes-specific objects (ConfigMaps, Secrets, StatefulSets, Jobs, RBAC, etc.) - Extremely convenient to quickly test/check what a particular thing looks like (e.g. what are the fields of a Deployment spec?) .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## One-node clusters - It's perfectly fine to work with a cluster that has only one node - It simplifies a lot of things: - pod networking doesn't even need CNI plugins, overlay networks, etc. 
- these clusters can be fully contained (no pun intended) in an easy-to-ship VM or container image - some of the security aspects may be simplified (different threat model) - images can be built directly on the node (we don't need to ship them with a registry) - Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube (some of these also support clusters with multiple nodes) .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Managed clusters ("Turnkey Solutions") - Many cloud providers and hosting providers offer "managed Kubernetes" - The deployment and maintenance of the *control plane* is entirely managed by the provider (ideally, clusters can be spun up automatically through an API, CLI, or web interface) - Given the complexity of Kubernetes, this approach is *strongly recommended* (at least for your first production clusters) - After working for a while with Kubernetes, you will be better equipped to decide: - whether to operate it yourself or use a managed offering - which offering or which distribution works best for you and your needs .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Node management - Most "Turnkey Solutions" offer fully managed control planes (including control plane upgrades, sometimes done automatically) - However, with most providers, we still need to take care of *nodes* (provisioning, upgrading, scaling the nodes) - Example with Amazon EKS ["managed node groups"](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html): *...when bugs or issues are reported [...] you're responsible for deploying these patched AMI versions to your managed node groups.* .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Managed clusters differences - Most providers let you pick which Kubernetes version you want - some providers offer up-to-date versions - others lag significantly (sometimes by 2 or 3 minor versions) - Some providers offer multiple networking or storage options - Others will only support one, tied to their infrastructure (changing that is in theory possible, but might be complex or unsupported) - Some providers let you configure or customize the control plane (generally through Kubernetes "feature gates") .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Choosing a provider - Pricing models differ from one provider to another - nodes are generally charged at their usual price - control plane may be free or incur a small nominal fee - Beyond pricing, there are *huge* differences in features between providers - The "major" providers are not always the best ones! - See [this page](https://kubernetes.io/docs/setup/production-environment/turnkey-solutions/) for a list of available providers .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Kubernetes distributions and installers - If you want to run Kubernetes yourselves, there are many options (free, commercial, proprietary, open source ...) - Some of them are installers, while some are complete platforms - Some of them leverage other well-known deployment tools (like Puppet, Terraform ...) 
- There are too many options to list them all (check [this page](https://kubernetes.io/partners/#conformance) for an overview!) .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## kubeadm - kubeadm is a tool part of Kubernetes to facilitate cluster setup - Many other installers and distributions use it (but not all of them) - It can also be used by itself - Excellent starting point to install Kubernetes on your own machines (virtual, physical, it doesn't matter) - It even supports highly available control planes, or "multi-master" (this is more complex, though, because it introduces the need for an API load balancer) .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## Manual setup - The resources below are mainly for educational purposes! - [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by Kelsey Hightower - step by step guide to install Kubernetes on Google Cloud - covers certificates, high availability ... - *“Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”* - [Deep Dive into Kubernetes Internals for Builders and Operators](https://www.youtube.com/watch?v=3KtEAa7_duA) - conference presentation showing step-by-step control plane setup - emphasis on simplicity, not on security and availability .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## About our training clusters - How did we set up these Kubernetes clusters that we're using? -- - We used `kubeadm` on freshly installed VM instances running Ubuntu LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the first node (it deploys the control plane on that node) 4. Set up Weave (the overlay network) with a single `kubectl apply` command 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- ## `kubeadm` "drawbacks" - Doesn't set up Docker or any other container engine (this is by design, to give us choice) - Doesn't set up the overlay network (this is also by design, for the same reasons) - HA control plane requires [some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) - Note that HA control plane also requires setting up a specific API load balancer (which is beyond the scope of kubeadm) ??? 
:EN:- Various ways to install Kubernetes :FR:- Survol des techniques d'installation de Kubernetes .debug[[k8s/setup-overview.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/setup-overview.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous part](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-part-2) | [Next part](#toc-revisiting-kubectl-logs) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command -- - Sounds simple enough, right? -- - Except ... that the `kubectl run` command changed in Kubernetes 1.18! - We'll explain what has changed, and why .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Choose your own adventure - First, let's check which version of Kubernetes we're running .exercise[ - Check our API server version: ```bash kubectl version ``` - Look at the **Server Version** in the second part of the output ] - In the following slides, we will talk about 1.17- or 1.18+ (to indicate "up to Kubernetes 1.17" and "from Kubernetes 1.18") .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Starting a simple pod with `kubectl run` - `kubectl run` is convenient to start a single pod - We need to specify at least a *name* and the image we want to use - Optionally, we can specify the command to run in the pod .exercise[ - Let's ping the address of `localhost`, the loopback interface: ```bash kubectl run pingpong --image alpine ping 127.0.0.1 ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## What do we see? - In Kubernetes 1.18+, the output tells us that a Pod is created: ``` pod/pingpong created ``` - In Kubernetes 1.17-, the output is much more verbose: ``` kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. deployment.apps/pingpong created ``` - There is a deprecation warning ... - ... And a Deployment was created instead of a Pod 🤔 What does that mean? .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Show me all you got! - What resources were created by `kubectl run`? .exercise[ - Let's ask Kubernetes to show us *all* the resources: ```bash kubectl get all ``` ] Note: `kubectl get all` is a lie. It doesn't show everything. (But it shows a lot of "usual suspects", i.e. commonly used resources.) .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## The situation with Kubernetes 1.18+ ``` NAME READY STATUS RESTARTS AGE pod/pingpong 1/1 Running 0 9s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1
<none> 443/TCP 3h30m ``` We wanted a pod, we got a pod, named `pingpong`. Great! (We can ignore `service/kubernetes`, it was already there before.) .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## The situation with Kubernetes 1.17- ``` NAME READY STATUS RESTARTS AGE pod/pingpong-6ccbc77f68-kmgfn 1/1 Running 0 11s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none>
443/TCP 3h45 NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1/1 1 1 11s NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-6ccbc77f68 1 1 1 11s ``` Our pod is not named `pingpong`, but `pingpong-xxxxxxxxxxx-yyyyy`. We have a Deployment named `pingpong`, and an extra Replica Set, too. What's going on? .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## From Deployment to Pod We have the following resources: - `deployment.apps/pingpong` This is the Deployment that we just created. - `replicaset.apps/pingpong-xxxxxxxxxx` This is a Replica Set created by this Deployment. - `pod/pingpong-xxxxxxxxxx-yyyyy` This is a *pod* created by the Replica Set. Let's explain what these things are. .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Pod - Can have one or multiple containers - Runs on a single node (Pod cannot "straddle" multiple nodes) - Pods cannot be moved (e.g. in case of node outage) - Pods cannot be scaled (except by manually creating more Pods) .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- class: extra-details ## Pod details - A Pod is not a process; it's an environment for containers - it cannot be "restarted" - it cannot "crash" - The containers in a Pod can crash - They may or may not get restarted (depending on Pod's restart policy) - If all containers exit successfully, the Pod ends in "Succeeded" phase - If some containers fail and don't get restarted, the Pod ends in "Failed" phase .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Replica Set - Set of identical (replicated) Pods - Defined by a pod template + number of desired replicas - If there are not enough Pods, the Replica Set creates more (e.g. in case of node outage; or simply when scaling up) - If there are too many Pods, the Replica Set deletes some (e.g. if a node was disconnected and comes back; or when scaling down) - We can scale up/down a Replica Set - we update the manifest of the Replica Set - as a consequence, the Replica Set controller creates/deletes Pods .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Deployment - Replica Sets control *identical* Pods - Deployments are used to roll out different Pods (different image, command, environment variables, ...) 
- When we update a Deployment with a new Pod definition: - a new Replica Set is created with the new Pod definition - that new Replica Set is progressively scaled up - meanwhile, the old Replica Set(s) is(are) scaled down - This is a *rolling update*, minimizing application downtime - When we scale up/down a Deployment, it scales up/down its Replica Set .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## `kubectl run` through the ages - When we want to run an app on Kubernetes, we *generally* want a Deployment - Up to Kubernetes 1.17, `kubectl run` created a Deployment - it could also create other things, by using special flags - this was powerful, but potentially confusing - creating a single Pod was done with `kubectl run --restart=Never` - other resources could also be created with `kubectl create ...` - From Kubernetes 1.18, `kubectl run` creates a Pod - other kinds of resources can still be created with `kubectl create` .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Creating a Deployment the proper way - Let's destroy that `pingpong` app that we created - Then we will use `kubectl create deployment` to re-create it .exercise[ - On Kubernetes 1.18+, delete the Pod named `pingpong`: ```bash kubectl delete pod pingpong ``` - On Kubernetes 1.17-, delete the Deployment named `pingpong`: ```bash kubectl delete deployment pingpong ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Running `ping` in a Deployment - When using `kubectl create deployment`, we cannot indicate the command to execute (at least, not in Kubernetes 1.18; but that changed in Kubernetes 1.19) - We can: - write a custom YAML manifest for our Deployment -- - (yeah right ... too soon!) -- - use an image that has the command to execute baked in - (much easier!) -- - We will use the image `jpetazzo/ping` (it has a default command of `ping 127.0.0.1`) .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Creating a Deployment running `ping` - Let's create a Deployment named `pingpong` - It will use the image `jpetazzo/ping` .exercise[ - Create the Deployment: ```bash kubectl create deployment pingpong --image=jpetazzo/ping ``` - Check the resources that were created: ```bash kubectl get all ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- class: extra-details ## In Kubernetes 1.19 - Since Kubernetes 1.19, we can specify the command to run - The command must be passed after two dashes: ```bash kubectl create deployment pingpong --image=alpine -- ping 127.1 ``` .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) 
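- If a pod has several containers, we can pick one explicitly with the `-c` flag

  (a minimal sketch; the pod and container names below are just placeholders)

```bash
# Hypothetical multi-container pod: show the logs of one specific container
kubectl logs pod/my-pod -c my-container
```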
.exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (à la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` - Stop it with Ctrl-C ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 3 ``` - Note that this command does exactly the same thing: ```bash kubectl scale deployment pingpong --replicas 3 ``` - Check that we now have multiple pods: ```bash kubectl get pods ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- class: extra-details ## Scaling a Replica Set - What if we scale the Replica Set instead of the Deployment? - The Deployment would notice it right away and scale back to the initial level - The Replica Set makes sure that we have the right numbers of Pods - The Deployment makes sure that the Replica Set has the right size (conceptually, it delegates the management of the Pods to the Replica Set) - This might seem weird (why this extra layer?) but will soon make sense (when we will look at how rolling updates work!) .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Streaming logs of multiple pods - What happens if we try `kubectl logs` now that we have multiple pods? .exercise[ ```bash kubectl logs deploy/pingpong --tail 3 ``` ] `kubectl logs` will warn us that multiple pods were found. It is showing us only one of them. We'll see later how to address that shortcoming. .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? .exercise[ - In a separate window, watch the list of pods: ```bash watch kubectl get pods ``` - Destroy the pod currently shown by `kubectl logs`: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` ] .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- ## What happened? - `kubectl delete pod` terminates the pod gracefully (sending it the TERM signal and waiting for it to shutdown) - As soon as the pod is in "Terminating" state, the Replica Set replaces it - But we can still see the output of the "Terminating" pod in `kubectl logs` - Until 30 seconds later, when the grace period expires - The pod is then killed, and `kubectl logs` exits ??? 
:EN:- Running pods and deployments :FR:- Créer un pod et un déploiement .debug[[k8s/kubectl-run.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-run.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-revisiting-kubectl-logs class: title Revisiting `kubectl logs` .nav[ [Previous part](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-part-2) | [Next part](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- # Revisiting `kubectl logs` - In this section, we assume that we have a Deployment with multiple Pods (e.g. `pingpong` that we scaled to at least 3 pods) - We will highlight some of the limitations of `kubectl logs` .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- ## Streaming logs of multiple pods - By default, `kubectl logs` shows us the output of a single Pod .exercise[ - Try to check the output of the Pods related to a Deployment: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` ] `kubectl logs` only shows us the logs of one of the Pods. .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- ## Viewing logs of multiple pods - When we specify a deployment name, only a single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - If we check the pods created by the deployment, they all have the label `app=pingpong` (this is just a default label that gets added when using `kubectl create deployment`) .exercise[ - View the last line of log from all pods with the `app=pingpong` label: ```bash kubectl logs -l app=pingpong --tail 1 ``` ] .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- ## Streaming logs of multiple pods - Can we stream the logs of all our `pingpong` pods? .exercise[ - Combine `-l` and `-f` flags: ```bash kubectl logs -l app=pingpong --tail 1 -f ``` ] *Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!* *Let's try to understand why ...* .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- class: extra-details ## Streaming logs of many pods - Let's see what happens if we try to stream the logs for more than 5 pods .exercise[ - Scale up our deployment: ```bash kubectl scale deployment pingpong --replicas=8 ``` - Stream the logs: ```bash kubectl logs -l app=pingpong --tail 1 -f ``` ] We see a message like the following one: ``` error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit ``` .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- class: extra-details ## Why can't we stream the logs of many pods? 
- `kubectl` opens one connection to the API server per pod - For each pod, the API server opens one extra connection to the corresponding kubelet - If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server - This could easily put a lot of stress on the API server - Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections - From Kubernetes 1.14, it is allowed, but limited to 5 connections (this can be changed with `--max-log-requests`) - For more details about the rationale, see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573) .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- ## Shortcomings of `kubectl logs` - We don't see which pod sent which log line - If pods are restarted / replaced, the log stream stops - If new pods are added, we don't see their logs - To stream the logs of multiple pods, we need to write a selector - There are external tools to address these shortcomings (e.g.: [Stern](https://github.com/wercker/stern)) .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- class: extra-details ## `kubectl logs -l ... --tail N` - If we run this with Kubernetes 1.12, the last command shows multiple lines - This is a regression when `--tail` is used together with `-l`/`--selector` - It always shows the last 10 lines of output for each container (instead of the number of lines specified on the command line) - The problem was fixed in Kubernetes 1.13 *See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.* ??? :EN:- Viewing logs with "kubectl logs" :FR:- Consulter les logs avec "kubectl logs" .debug[[k8s/kubectl-logs.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectl-logs.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." 
The following 19 slides show what really happens when we run: ```bash kubectl create deployment web --image=nginx ``` .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/01.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/02.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/03.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/04.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/05.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/06.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/07.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/08.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/09.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/10.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/11.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/12.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/13.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/14.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/15.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic 
![](images/kubectl-create-deployment-slideshow/16.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/17.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/18.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic ![](images/kubectl-create-deployment-slideshow/19.svg) .debug[[k8s/deploymentslideshow.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous part](#toc-revisiting-kubectl-logs) | [Back to table of contents](#toc-part-2) | [Next part](#toc-shipping-images-with-a-registry) ] .debug[(automatically generated title slide)] --- # Exposing containers - We can connect to our pods using their IP address - Then we need to figure out a lot of things: - how do we look up the IP address of the pod(s)? - how do we connect from outside the cluster? - how do we load balance traffic? - what if a pod fails? - Kubernetes has a resource type named *Service* - Services address all these questions! .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Services in a nutshell - Services give us a *stable endpoint* to connect to a pod or a group of pods - An easy way to create a service is to use `kubectl expose` - If we have a deployment named `my-little-deploy`, we can run: `kubectl expose deployment my-little-deploy --port=80` ... and this will create a service with the same name (`my-little-deploy`) - Services are automatically added to an internal DNS zone (in the example above, our code can now connect to http://my-little-deploy/) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Advantages of services - We don't need to look up the IP address of the pod(s) (we resolve the IP address of the service using DNS) - There are multiple service types; some of them allow external traffic (e.g. `LoadBalancer` and `NodePort`) - Services provide load balancing (for both internal and external traffic) - Service addresses are independent from pods' addresses (when a pod fails, the service seamlessly sends traffic to its replacement) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Many kinds and flavors of service - There are different types of services: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` - There are also *headless services* - Services can also have optional *external IPs* - There is also another resource type called *Ingress* (specifically for HTTP services) - Wow, that's a lot! Let's start with the basics ... 
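- As a teaser, the service type is just a flag away on `kubectl expose`

  (a sketch reusing the hypothetical `my-little-deploy` from a few slides back; we'll stick with the default `ClusterIP` type for now)

```bash
# Same command as before, but requesting a NodePort service instead of a ClusterIP
kubectl expose deployment my-little-deploy --port=80 --type=NodePort
```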
.debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## `ClusterIP` - It's the default service type - A virtual IP address is allocated for the service (in an internal, private range; e.g. 10.96.0.0/12) - This IP address is reachable only from within the cluster (nodes and pods) - Our code can connect to the service using the original port number - Perfect for internal communication, within the cluster .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/11-CIP-by-addr.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/12-CIP-by-name.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/13-CIP-both.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/14-CIP-headless.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## `LoadBalancer` - An external load balancer is allocated for the service (typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...) - This is available only when the underlying infrastructure provides some kind of "load balancer as a service" - Each service of that type will typically cost a little bit of money (e.g. a few cents per hour on AWS or GCE) - Ideally, traffic would flow directly from the load balancer to the pods - In practice, it will often flow through a `NodePort` first .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/31-LB-no-service.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/32-LB-plus-cip.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/33-LB-plus-lb.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/34-LB-internal-traffic.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/35-LB-pending.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/36-LB-ccm.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/37-LB-externalip.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic 
![](images/kubernetes-services/38-LB-external-traffic.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/39-LB-all-traffic.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/41-NP-why.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/42-NP-how-1.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/43-NP-how-2.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/44-NP-how-3.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/45-NP-how-4.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/46-NP-how-5.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/47-NP-only.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## `NodePort` - A port number is allocated for the service (by default, in the 30000-32767 range) - That port is made available *on all our nodes* and anybody can connect to it (we can connect to any node on that port to reach the service) - Our code needs to be changed to connect to that new port number - Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes - Sometimes, it's the only available option for external traffic (e.g. most clusters deployed with kubeadm or on-premises) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else - We could use the `nginx` official image, but ... ... we wouldn't be able to tell the backends from each other! 
- We are going to use `jpetazzo/color`, a tiny HTTP server written in Go - `jpetazzo/color` listens on port 80 - It serves a page showing the pod's name (this will be useful when checking load balancing behavior) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Creating a deployment for our HTTP server - We will create a deployment with `kubectl create deployment` - Then we will scale it with `kubectl scale` .exercise[ - In another window, watch the pods (to see when they are created): ```bash kubectl get pods -w ``` - Create a deployment for this very lightweight HTTP server: ```bash kubectl create deployment blue --image=jpetazzo/color ``` - Scale it to 10 replicas: ```bash kubectl scale deployment blue --replicas=10 ``` ] .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the HTTP port of our server: ```bash kubectl expose deployment blue --port=80 ``` - Look up which IP address was allocated: ```bash kubectl get service ``` ] .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service (with some exceptions, like `ExternalName` or headless services, covered later) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our pods .exercise[ - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc blue -o go-template --template '{{ .spec.clusterIP }}') ``` - Send a few requests: ```bash curl http://$IP:80/ ``` ] -- Try it a few times! Our requests are load balanced across multiple pods. 
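To see the load balancing even more clearly, we can ask for a batch of responses in one go (a small sketch, assuming the `IP` variable set in the exercise above):

```bash
# Each response shows the name of the pod that served it
for i in $(seq 10); do
  curl -s http://$IP/
done
```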
.debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## `ExternalName` - Services of type `ExternalName` are quite different - No load balancer (internal or external) is created - Only a DNS entry gets added to the DNS managed by Kubernetes - That DNS entry will just be a `CNAME` to a provided record Example: ```bash kubectl create service externalname k8s --external-name kubernetes.io ``` *Creates a CNAME `k8s` pointing to `kubernetes.io`* .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## External IPs - We can add an External IP to a service, e.g.: ```bash kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4 ``` - `1.2.3.4` should be the address of one of our nodes (it could also be a virtual address, service address, or VIP, shared by multiple nodes) - Connections to `1.2.3.4:80` will be sent to our service - External IPs will also show up on services of type `LoadBalancer` (they will be added automatically by the process provisioning the load balancer) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Headless services - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over another protocol than UDP or TCP - if we want to decide how to balance the requests client-side - ... - In that case, we can use a "headless service" .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Creating a headless services - A headless service is obtained by setting the `clusterIP` field to `None` (Either with `--cluster-ip=None`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - CoreDNS will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `blue` service: ```bash kubectl describe service blue ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. 
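If we only want the addresses themselves (e.g. for scripting), one possible approach is a JSONPath query (a sketch, assuming the `blue` service created earlier):

```bash
# Print the IP addresses currently backing the 'blue' service
kubectl get endpoints blue -o jsonpath='{.subsets[*].addresses[*].ip}'
```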
.debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use one of the following commands: ```bash kubectl describe endpoints blue kubectl get endpoints blue -o yaml ``` - These commands will show us a list of IP addresses - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l app=blue -o wide ``` .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## The DNS zone - In the `kube-system` namespace, there should be a service named `kube-dns` - This is the internal DNS server that can resolve service names - The default domain name for the service we created is `default.svc.cluster.local` .exercise[ - Get the IP address of the internal DNS server: ```bash IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP}) ``` - Resolve the cluster IP for the `blue` service: ```bash host blue.default.svc.cluster.local $IP ``` ] .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: extra-details ## `Ingress` - Ingresses are another type (kind) of resource - They are specifically for HTTP services (not TCP or UDP) - They can also handle TLS certificates, URL rewriting ... - They require an *Ingress Controller* to function .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/61-ING.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/62-ING-path.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/63-ING-policy.png) .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic ![](images/kubernetes-services/64-ING-nolocal.png) ??? 
:EN:- Service discovery and load balancing :EN:- Accessing pods through services :EN:- Service types: ClusterIP, NodePort, LoadBalancer :FR:- Exposer un service :FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer :FR:- Utiliser CoreDNS pour la *service discovery* .debug[[k8s/kubectlexpose.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/kubectlexpose.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-shipping-images-with-a-registry class: title Shipping images with a registry .nav[ [Previous part](#toc-exposing-containers) | [Back to table of contents](#toc-part-2) | [Next part](#toc-the-kubernetes-dashboard) ] .debug[(automatically generated title slide)] --- # Shipping images with a registry - Initially, our app was running on a single node - We could *build* and *run* in the same place - Therefore, we did not need to *ship* anything - Now that we want to run on a cluster, things are different - The easiest way to ship container images is to use a registry .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## How Docker registries work (a reminder) - What happens when we execute `docker run alpine` ? - If the Engine needs to pull the `alpine` image, it expands it into `library/alpine` - `library/alpine` is expanded into `index.docker.io/library/alpine` - The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest` - To use something else than `index.docker.io`, we specify it in the image name - Examples: ```bash docker pull gcr.io/google-containers/alpine-with-bash:1.0 docker build -t registry.mycompany.io:5000/myimage:awesome . docker push registry.mycompany.io:5000/myimage:awesome ``` .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## Running DockerCoins on Kubernetes - Create one deployment for each component (hasher, redis, rng, webui, worker) - Expose deployments that need to accept connections (hasher, redis, rng, webui) - For redis, we can use the official redis image - For the 4 others, we need to build images and push them to some registry .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## Building and shipping images - There are *many* options! - Manually: - build locally (with `docker build` or otherwise) - push to the registry - Automatically: - build and test locally - when ready, commit and push a code repository - the code repository notifies an automated build system - that system gets the code, builds it, pushes the image to the registry .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## Which registry do we want to use? - There are SAAS products like Docker Hub, Quay ... - Each major cloud provider has an option as well (ACR on Azure, ECR on AWS, GCR on Google Cloud...) - There are also commercial products to run our own registry (Docker EE, Quay...) - And open source options, too! 
- When picking a registry, pay attention to its build system (when it has one) .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## Building on the fly - Conceptually, it is possible to build images on the fly from a repository - Example: [ctr.run](https://ctr.run/) (deprecated in August 2020, after being aquired by Datadog) - It did allow something like this: ```bash docker run ctr.run/github.com/jpetazzo/container.training/dockercoins/hasher ``` - No alternative yet (free startup idea, anyone?) ??? :EN:- Shipping images to Kubernetes :FR:- Déployer des images sur notre cluster .debug[[k8s/shippingimages.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/shippingimages.md)] --- ## Using images from the Docker Hub - For everyone's convenience, we took care of building DockerCoins images - We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user - These images are *tagged* with a version number, `v0.1` - The full image names are therefore: - `dockercoins/hasher:v0.1` - `dockercoins/rng:v0.1` - `dockercoins/webui:v0.1` - `dockercoins/worker:v0.1` .debug[[k8s/buildshiprun-dockerhub.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/buildshiprun-dockerhub.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-the-kubernetes-dashboard class: title The Kubernetes dashboard .nav[ [Previous part](#toc-shipping-images-with-a-registry) | [Back to table of contents](#toc-part-3) | [Next part](#toc-security-implications-of-kubectl-apply) ] .debug[(automatically generated title slide)] --- # The Kubernetes dashboard - Kubernetes resources can also be viewed with a web dashboard - Dashboard users need to authenticate (typically with a token) - The dashboard should be exposed over HTTPS (to prevent interception of the aforementioned token) - Ideally, this requires obtaining a proper TLS certificate (for instance, with Let's Encrypt) .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Three ways to install the dashboard - Our `k8s` directory has no less than three manifests! - `dashboard-recommended.yaml` (purely internal dashboard; user must be created manually) - `dashboard-with-token.yaml` (dashboard exposed with NodePort; creates an admin user for us) - `dashboard-insecure.yaml` aka *YOLO* (dashboard exposed over HTTP; gives root access to anonymous users) .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## `dashboard-insecure.yaml` - This will allow anyone to deploy anything on your cluster (without any authentication whatsoever) - **Do not** use this, except maybe on a local cluster (or a cluster that you will destroy a few minutes later) - On "normal" clusters, use `dashboard-with-token.yaml` instead! .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## What's in the manifest? 
- The dashboard itself - An HTTP/HTTPS unwrapper (using `socat`) - The guest/admin account .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f ~/container.training/k8s/dashboard-insecure.yaml ``` ] .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl get svc dashboard ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Dashboard authentication - We have three authentication options at this point: - token (associated with a role that has appropriate permissions) - kubeconfig (e.g. using the `~/.kube/config` file from `node1`) - "skip" (use the dashboard "service account") - Let's use "skip": we're logged in! -- .warning[Remember, we just added a backdoor to our Kubernetes cluster!] .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Closing the backdoor - Seriously, don't leave that thing running! .exercise[ - Remove what we just created: ```bash kubectl delete -f ~/container.training/k8s/dashboard-insecure.yaml ``` ] .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## The risks - The steps that we just showed you are *for educational purposes only!* - If you do that on your production cluster, people [can and will abuse it](https://redlock.io/blog/cryptojacking-tesla) - For an in-depth discussion about securing the dashboard,
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca) .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## `dashboard-with-token.yaml` - This is a less risky way to deploy the dashboard - It's not completely secure, either: - we're using a self-signed certificate - this is subject to eavesdropping attacks - Using `kubectl port-forward` or `kubectl proxy` is even better .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## What's in the manifest? - The dashboard itself (but exposed with a `NodePort`) - A ServiceAccount with `cluster-admin` privileges (named `kubernetes-dashboard:cluster-admin`) .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f ~/container.training/k8s/dashboard-with-token.yaml ``` ] .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Obtaining the token - The manifest creates a ServiceAccount - Kubernetes will automatically generate a token for that ServiceAccount .exercise[ - Display the token: ```bash kubectl --namespace=kubernetes-dashboard \ describe secret cluster-admin-token ``` ] The token should start with `eyJ...` (it's a JSON Web Token). Note that the secret name will actually be `cluster-admin-token-xxxxx`.
(But `kubectl` prefix matches are great!) .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl get svc --namespace=kubernetes-dashboard ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Dashboard authentication - Select "token" authentication - Copy paste the token (starting with `eyJ...`) obtained earlier - We're logged in! .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## Other dashboards - [Kube Web View](https://codeberg.org/hjacobs/kube-web-view) - read-only dashboard - optimized for "troubleshooting and incident response" - see [vision and goals](https://kube-web-view.readthedocs.io/en/latest/vision.html#vision) for details - [Kube Ops View](https://codeberg.org/hjacobs/kube-ops-view) - "provides a common operational picture for multiple Kubernetes clusters" .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-security-implications-of-kubectl-apply class: title Security implications of `kubectl apply` .nav[ [Previous part](#toc-the-kubernetes-dashboard) | [Back to table of contents](#toc-part-3) | [Next part](#toc-daemon-sets) ] .debug[(automatically generated title slide)] --- # Security implications of `kubectl apply` - When we do `kubectl apply -f
<URL>`
`, we create arbitrary resources - Resources can be evil; imagine a `deployment` that ... -- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - ☠️☠️☠️ .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (for instance, if you try to apply YAML from a link that's no longer valid) ??? :EN:- The Kubernetes dashboard :FR:- Le *dashboard* Kubernetes .debug[[k8s/dashboard.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/dashboard.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-daemon-sets class: title Daemon sets .nav[ [Previous part](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-part-3) | [Next part](#toc-labels-and-selectors) ] .debug[(automatically generated title slide)] --- # Daemon sets - We want to scale `rng` in a way that is different from how we scaled `worker` - We want one (and exactly one) instance of `rng` per node - We *do not want* two instances of `rng` on the same node - We will do that with a *daemon set* .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Why not a deployment? - Can't we just do `kubectl scale deployment rng --replicas=...`? -- - Nothing guarantees that the `rng` containers will be distributed evenly - If we add nodes later, they will not automatically run a copy of `rng` - If we remove (or reboot) a node, one `rng` container will restart elsewhere (and we will end up with two instances `rng` on the same node) - By contrast, a daemon set will start one pod per node and keep it that way (as nodes are added or removed) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Daemon sets in practice - Daemon sets are great for cluster-wide, per-node processes: - `kube-proxy` - `weave` (our overlay network) - monitoring agents - hardware management tools (e.g. SCSI/FC HBA agents) - etc. - They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Creating a daemon set - Unfortunately, as of Kubernetes 1.19, the CLI cannot create daemon sets -- - More precisely: it doesn't have a subcommand to create a daemon set -- - But any kind of resource can always be created by providing a YAML description: ```bash kubectl apply -f foo.yaml ``` -- - How do we create the YAML file for our daemon set? 
-- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml >rng.yml ``` - Edit `rng.yml` ] .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` - Save, quit - Try to create our new resource: ```bash kubectl apply -f rng.yml ``` ] -- We all knew this couldn't be that easy, right! .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `progressDeadlineSeconds` field (also used by the rollout mechanism) - remove the `status: {}` line at the end -- - Or, we could also ... .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- 🎩✨🐇 -- Wait ... Now, can it be *that* easy? .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods.
(The pod corresponding to the *deployment* still exists.) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2
<none>
9s ``` .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Too many pods - If we check with `kubectl get pods`, we see: - *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`) - *one pod per node* for the daemon set (named `rng-zzzzz`) ``` NAME READY STATUS RESTARTS AGE rng-54f57d4d49-7pt82 1/1 Running 0 11m rng-b85tm 1/1 Running 0 25s rng-hfbrr 1/1 Running 0 25s [...] ``` -- The daemon set created one pod per node, except on the master node. The master node has [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there. (To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).) .footnote[(Off by one? We don't run these pods on the node hosting the control plane.)] .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Is this working? - Look at the web UI -- - The graph should now go above 10 hashes per second! -- - It looks like the newly created pods are serving traffic correctly - How and why did this happen? (We didn't do anything special to add them to the `rng` service load balancer!) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-labels-and-selectors class: title Labels and selectors .nav[ [Previous part](#toc-daemon-sets) | [Back to table of contents](#toc-part-3) | [Next part](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Labels and selectors - The `rng` *service* is load balancing requests to a set of pods - That set of pods is defined by the *selector* of the `rng` service .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] - The selector is `app=rng` - It means "all the pods having the label `app=rng`" (They can have additional labels as well, that's OK!) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Selector evaluation - We can use selectors with many `kubectl` commands - For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more .exercise[ - Get the list of pods matching selector `app=rng`: ```bash kubectl get pods -l app=rng kubectl get pods --selector app=rng ``` ] But ... why do these pods (in particular, the *new* ones) have this `app=rng` label? .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Where do labels come from? - When we create a deployment with `kubectl create deployment rng`,
this deployment gets the label `app=rng` - The replica sets created by this deployment also get the label `app=rng` - The pods created by these replica sets also get the label `app=rng` - When we created the daemon set from the deployment, we re-used the same spec - Therefore, the pods created by the daemon set get the same labels .footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.] .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Updating load balancer configuration - We would like to remove a pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- It would be re-created immediately (by the replica set or the daemon set) -- - What would happen if we removed the `app=rng` label from that pod? -- It would *also* be re-created immediately -- Why?!? .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Selectors for replica sets and daemon sets - The "mission" of a replica set is: "Make sure that there is the right number of pods matching this spec!" - The "mission" of a daemon set is: "Make sure that there is a pod matching this spec on each node!" -- - *In fact,* replica sets and daemon sets do not check pod specifications - They merely have a *selector*, and they look for pods matching that selector - Yes, we can fool them by manually creating pods with the "right" labels - Bottom line: if we remove our `app=rng` label ... ... The pod "disappears" for its parent, which re-creates another pod to replace it .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Isolation of replica sets and daemon sets - Since both the `rng` daemon set and the `rng` replica set use `app=rng` ... ... Why don't they "find" each other's pods? -- - *Replica sets* have a more specific selector, visible with `kubectl describe` (It looks like `app=rng,pod-template-hash=abcd1234`) - *Daemon sets* also have a more specific selector, but it's invisible (It looks like `app=rng,controller-revision-hash=abcd1234`) - As a result, each controller only "sees" the pods it manages .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer - Currently, the `rng` service is defined by the `app=rng` selector - The only way to remove a pod is to remove or change the `app` label - ... But that will cause another pod to be created instead! - What's the solution? -- - We need to change the selector of the `rng` service! - Let's add another label to that selector (e.g. `active=yes`) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Selectors with multiple labels - If a selector specifies multiple labels, they are understood as a logical *AND* (in other words: the pods must match all the labels) - We cannot have a logical *OR* (e.g. 
`app=api AND (release=prod OR release=preprod)`) - We can, however, apply as many extra labels as we want to our pods: - use selector `app=api AND prod-or-preprod=yes` - add `prod-or-preprod=yes` to both sets of pods - We will see later that in other places, we can use more advanced selectors .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## The plan 1. Add the label `active=yes` to all our `rng` pods 2. Update the selector for the `rng` service to also include `active=yes` 3. Toggle traffic to a pod by manually adding/removing the `active` label 4. Profit! *Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.* .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Adding labels to pods - We want to add the label `active=yes` to all pods that have `app=rng` - We could edit each pod one by one with `kubectl edit` ... - ... Or we could use `kubectl label` to label them all - `kubectl label` can use selectors itself .exercise[ - Add `active=yes` to all pods that have `app=rng`: ```bash kubectl label pods -l app=rng active=yes ``` ] .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Updating the service selector - We need to edit the service specification - Reminder: in the service definition, we will see `app: rng` in two places - the label of the service itself (we don't need to touch that one) - the selector of the service (that's the one we want to change) .exercise[ - Update the service to add `active: yes` to its selector: ```bash kubectl edit service rng ``` ] -- ... And then we get *the weirdest error ever.* Why? .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## When the YAML parser is being too smart - YAML parsers try to help us: - `xyz` is the string `"xyz"` - `42` is the integer `42` - `yes` is the boolean value `true` - If we want the string `"42"` or the string `"yes"`, we have to quote them - So we have to use `active: "yes"` .footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!] .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Updating the service selector, take 2 .exercise[ - Update the YAML manifest of the service - Add `active: "yes"` to its selector ] This time it should work! If we did everything correctly, the web UI shouldn't show any change. .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Updating labels - We want to disable the pod that was created by the deployment - All we have to do is remove the `active` label from that pod - To identify that pod, we can use its name - ... Or rely on the fact that it's the only one with a `pod-template-hash` label - Good to know: - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string) - to remove label `foo`, use `kubectl label ...
foo-` - to change an existing label, we would need to add `--overwrite` .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer .exercise[ - In one window, check the logs of that pod: ```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD ``` (We should see a steady stream of HTTP logs) - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash active- ``` (The stream of HTTP logs should stop immediately) ] There might be a slight change in the web UI (since we removed a bit of capacity from the `rng` service). If we remove more pods, the effect should be more visible. .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Updating the daemon set - If we scale up our cluster by adding new nodes, the daemon set will create more pods - These pods won't have the `active=yes` label - If we want these pods to have that label, we need to edit the daemon set spec - We can do that with e.g. `kubectl edit daemonset rng` .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## We've put resources in your resources - Reminder: a daemon set is a resource that creates more resources! - There is a difference between: - the label(s) of a resource (in the `metadata` block in the beginning) - the selector of a resource (in the `spec` block) - the label(s) of the resource(s) created by the first resource (in the `template` block) - We would need to update the selector and the template (metadata labels are not mandatory) - The template must match the selector (i.e. the resource will refuse to create resources that it will not select) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Labels and debugging - When a pod is misbehaving, we can delete it: another one will be recreated - But we can also change its labels - It will be removed from the load balancer (it won't receive traffic anymore) - Another pod will be recreated immediately - But the problematic pod is still here, and we can inspect and debug it - We can even re-add it to the rotation if necessary (Very useful to troubleshoot intermittent and elusive bugs) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- ## Labels and advanced rollout control - Conversely, we can add pods matching a service's selector - These pods will then receive requests and serve traffic - Examples: - one-shot pod with all debug flags enabled, to collect logs - pods created automatically, but added to rotation in a second step
(by setting their label accordingly) - This gives us building blocks for canary and blue/green deployments .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Advanced label selectors - As indicated earlier, service selectors are limited to a logical `AND` - But in many other places in the Kubernetes API, we can use complex selectors (e.g. Deployment, ReplicaSet, DaemonSet, NetworkPolicy ...) - These allow extra operations; specifically: - checking for presence (or absence) of a label - checking if a label is (or is not) in a given set - Relevant documentation: [Service spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#servicespec-v1-core), [LabelSelector spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#labelselector-v1-meta), [label selector doc](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Example of advanced selector ```yaml theSelector: matchLabels: app: portal component: api matchExpressions: - key: release operator: In values: [ production, preproduction ] - key: signed-off-by operator: Exists ``` This selector matches pods that meet *all* the indicated conditions. `operator` can be `In`, `NotIn`, `Exists`, `DoesNotExist`. A `nil` selector matches *nothing*, a `{}` selector matches *everything*.
(Because that means "match all pods that meet at least zero condition".) .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Services and Endpoints - Each Service has a corresponding Endpoints resource (see `kubectl get endpoints` or `kubectl get ep`) - That Endpoints resource is used by various controllers (e.g. `kube-proxy` when setting up `iptables` rules for ClusterIP services) - These Endpoints are populated (and updated) with the Service selector - We can update the Endpoints manually, but our changes will get overwritten - ... Except if the Service selector is empty! .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: extra-details ## Empty Service selector - If a service selector is empty, Endpoints don't get updated automatically (but we can still set them manually) - This lets us create Services pointing to arbitrary destinations (potentially outside the cluster; or things that are not in pods) - Another use-case: the `kubernetes` service in the `default` namespace (its Endpoints are maintained automatically by the API server) ??? :EN:- Scaling with Daemon Sets :FR:- Utilisation de Daemon Sets .debug[[k8s/daemonset.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/daemonset.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous part](#toc-labels-and-selectors) | [Back to table of contents](#toc-part-3) | [Next part](#toc-accessing-logs-from-the-cli) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, ¯\\\_(ツ)\_/¯ .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling updates - With rolling updates, when a Deployment is updated, it happens progressively - The Deployment controls multiple Replica Sets - Each Replica Set is a group of identical Pods (with the same image, arguments, parameters ...) - During the rolling update, we have at least two Replica Sets: - the "new" set (corresponding to the "target" version) - at least one "old" set - We can have multiple "old" sets (if we start another update before the first one is done) .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Update strategy - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` - They can be specified in absolute number of pods, or percentage of the `replicas` count - At any given time ... - there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated - We have the possibility of rolling back to the previous version
(if the update fails or is unsatisfactory in any way) .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=dockercoins/worker:v0.2 ``` ] -- That rollout should be pretty quick. What shows in the web UI? .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash kubectl set image deploy worker worker=dockercoins/worker:v0.3 ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` ] -- Our rollout is stuck. However, the app is not dead. (After a minute, it will stabilize to be 20-25% slower.) .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## What's going on with our rollout? - Why is our app a bit slower? - Because `MaxUnavailable=25%` ... So the rollout terminated 2 replicas out of 10 available - Okay, but why do we see 5 new replicas being rolled out? - Because `MaxSurge=25%` ... So in addition to replacing 2 replicas, the rollout is also starting 3 more - It rounded down the number of MaxUnavailable pods conservatively,
but rounded up the number of MaxSurge pods, so up to 25%+25% = 50% of the replicas can be involved in the rollout at the same time .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- class: extra-details ## The nitty-gritty details - We start with 10 pods running for the `worker` deployment - Current settings: MaxUnavailable=25% and MaxSurge=25% - When we start the rollout: - two replicas are taken down (as per MaxUnavailable=25%) - two others are created (with the new version) to replace them - three others are created with the new version (as per MaxSurge=25%) - Now we have 8 replicas up and running, and 5 being deployed - Our rollout is stuck at this point! .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Checking the dashboard during the bad rollout If you didn't deploy the Kubernetes dashboard earlier, just skip this slide. .exercise[ - Connect to the dashboard that we deployed earlier - Check that we have failures in Deployments, Pods, and Replica Sets - Can we see the reason for the failure? ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Recovering from a bad rollout - We could push some `v0.3` image (the pod retry logic will eventually catch it and the rollout will proceed) - Or we could invoke a manual rollback .exercise[ - Cancel the deployment and wait for the dust to settle: ```bash kubectl rollout undo deploy worker kubectl rollout status deploy worker ``` ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling back to an older version - We reverted to `v0.2` - But this version still has a performance problem - How can we get back to the previous version? .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Multiple "undos" - What happens if we try `kubectl rollout undo` again? .exercise[ - Try it: ```bash kubectl rollout undo deployment worker ``` - Check the web UI, the list of pods ... ] 🤔 That didn't work. .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Multiple "undos" don't work - If we see successive versions as a stack: - `kubectl rollout undo` doesn't "pop" the last element from the stack - it copies the N-1th element to the top - Multiple "undos" just swap back and forth between the last two versions! .exercise[ - Go back to v0.2 again: ```bash kubectl rollout undo deployment worker ``` ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## In this specific scenario - Our version numbers are easy to guess - What if we had used git hashes? - What if we had changed other parameters in the Pod spec? .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Listing versions - We can list successive versions of a Deployment with `kubectl rollout history` .exercise[ - Look at our successive versions: ```bash kubectl rollout history deployment worker ``` ] We don't see *all* revisions. We might see something like 1, 4, 5. (Depending on how many "undos" we did before.)
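- To see what a given revision actually contains (for instance, which image it uses), we can ask `kubectl rollout history` for the details of a single revision; here is a quick example, assuming that revision 1 appears in the list above:
```bash
# Show the Pod template recorded for revision 1 of the worker deployment
# (use any revision number that shows up in the history)
kubectl rollout history deployment worker --revision=1
```
The output should include the container image for that revision, which helps map revision numbers back to application versions.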
.debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Explaining deployment revisions - These revisions correspond to our Replica Sets - This information is stored in the Replica Set annotations .exercise[ - Check the annotations for our replica sets: ```bash kubectl describe replicasets -l app=worker | grep -A3 ^Annotations ``` ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- class: extra-details ## What about the missing revisions? - The missing revisions are stored in another annotation: `deployment.kubernetes.io/revision-history` - These are not shown in `kubectl rollout history` - We could easily reconstruct the full list with a script (if we wanted to!) .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- ## Rolling back to an older version - `kubectl rollout undo` can work with a revision number .exercise[ - Roll back to the "known good" deployment version: ```bash kubectl rollout undo deployment worker --to-revision=1 ``` - Check the web UI or the list of pods ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- class: extra-details ## Changing rollout parameters - We want to: - revert to `v0.1` - be conservative on availability (always have desired number of available workers) - go slow on rollout speed (update only one pod at a time) - give some time to our workers to "warm up" before starting more The corresponding changes can be expressed in the following YAML snippet: .small[ ```yaml spec: template: spec: containers: - name: worker image: dockercoins/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 ``` ] .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- class: extra-details ## Applying changes through a YAML patch - We could use `kubectl edit deployment worker` - But we could also use `kubectl patch` with the exact YAML shown before .exercise[ .small[ - Apply all our changes and wait for them to take effect: ```bash kubectl patch deployment worker -p " spec: template: spec: containers: - name: worker image: dockercoins/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 " kubectl rollout status deployment worker kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] ] ??? 
:EN:- Rolling updates :EN:- Rolling back a bad deployment :FR:- Mettre à jour un déploiement :FR:- Concept de *rolling update* et *rollback* :FR:- Paramétrer la vitesse de déploiement .debug[[k8s/rollout.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/rollout.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg)] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous part](#toc-rolling-updates) | [Back to table of contents](#toc-part-4) | [Next part](#toc-namespaces) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/wercker/stern) is an open source project by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Checking if Stern is installed - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it's missing, let's see how to install it .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Installing Stern - Stern is written in Go, and Go programs are usually shipped as a single binary - We just need to download that binary and put it in our `PATH`! 
- Binary releases are available [here](https://github.com/wercker/stern/releases) on GitHub - The following commands will install Stern on a Linux Intel 64 bit machine: ```bash sudo curl -L -o /usr/local/bin/stern \ https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 sudo chmod +x /usr/local/bin/stern ``` - On macOS, we can also `brew install stern` or `sudo port install stern` .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. a regex used to match pod names - These two ways can be combined if necessary .exercise[ - View the logs for all the pingpong containers: ```bash stern pingpong ``` ] .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Stern convenient options - The `--tail N` flag shows the last `N` lines for each container (Instead of showing the logs since the creation of the container) - The `-t` / `--timestamps` flag shows timestamps - The `--all-namespaces` flag is self-explanatory .exercise[ - View what's up with the `weave` system containers: ```bash stern --tail 1 --timestamps --all-namespaces weave ``` ] .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- ## Using Stern with a selector - When specifying a selector, we can omit the value for a label - This will match all objects having that label (regardless of the value) - Everything created with `kubectl run` has a label `run` - Everything created with `kubectl create deployment` has a label `app` - We can use that property to view the logs of all the pods created with `kubectl create deployment` .exercise[ - View the logs for all the things started with `kubectl create deployment`: ```bash stern -l app ``` ] ??? :EN:- Viewing pod logs from the CLI :FR:- Consulter les logs des pods depuis la CLI .debug[[k8s/logs-cli.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/logs-cli.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-namespaces class: title Namespaces .nav[ [Previous part](#toc-accessing-logs-from-the-cli) | [Back to table of contents](#toc-part-4) | [Next part](#toc-next-steps) ] .debug[(automatically generated title slide)] --- # Namespaces - We would like to deploy another copy of DockerCoins on our cluster - We could rename all our deployments and services: hasher → hasher2, redis → redis2, rng → rng2, etc. - That would require updating the code - There has to be a better way! -- - As hinted by the title of this section, we will use *namespaces* .debug[[k8s/namespaces.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/namespaces.md)] --- ## Identifying a resource - We cannot have two resources with the same name (or can we...?) -- - We cannot have two resources *of the same kind* with the same name (but it's OK to have an `rng` service, an `rng` deployment, and an `rng` daemon set) -- - We cannot have two resources of the same kind with the same name *in the same namespace* (but it's OK to have e.g. 
two `rng` services in different namespaces) -- - Except for resources that exist at the *cluster scope* (these do not belong to a namespace) .debug[[k8s/namespaces.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/namespaces.md)] --- ## Uniquely identifying a resource - For *namespaced* resources: the tuple *(kind, name, namespace)* needs to be unique - For resources at the *cluster scope*: the tuple *(kind, name)* needs to be unique .exercise[ - List resource types again, and check the NAMESPACED column: ```bash kubectl api-resources ``` ] .debug[[k8s/namespaces.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/namespaces.md)] --- ## Pre-existing namespaces - If we deploy a cluster with `kubeadm`, we have three or four namespaces: - `default` (for our applications) - `kube-system` (for the control plane) - `kube-public` (contains one ConfigMap for cluster discovery) - `kube-node-lease` (in Kubernetes 1.14 and later; contains Lease objects) - If we deploy differently, we may have different namespaces .debug[[k8s/namespaces.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/namespaces.md)] --- ## Creating namespaces - Let's see two equivalent methods to create a namespace .exercise[ - We can use `kubectl create namespace`: ```bash kubectl create namespace blue ``` - Or we can construct a very minimal YAML snippet: ```bash kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
``` ]
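- Either way, we can quickly confirm that the new namespace exists:
```bash
# List all namespaces; "blue" should appear alongside default, kube-system, etc.
kubectl get namespaces
```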
(`redis.blue.svc.cluster.local` will be a `CNAME` record) - `ClusterIP` services with explicit `Endpoints`
(instead of letting Kubernetes generate the endpoints from a selector) - Ambassador services
(application-level proxies that can provide credentials injection and more) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Stateful services (second take) - If we want to host stateful services on Kubernetes, we can use: - a storage provider - persistent volumes, persistent volume claims - stateful sets - Good questions to ask: - what's the *operational cost* of running this service ourselves? - what do we gain by deploying this stateful service on Kubernetes? - Relevant sections: [Volumes](kube-selfpaced.yml.html#toc-volumes) | [Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets) | [Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes) - Excellent [blog post](http://www.databasesoup.com/2018/07/should-i-run-postgres-on-kubernetes.html) tackling the question: “Should I run Postgres on Kubernetes?” .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## HTTP traffic handling - *Services* are layer 4 constructs - HTTP is a layer 7 protocol - It is handled by *ingresses* (a different resource kind) - *Ingresses* allow: - virtual host routing - session stickiness - URI mapping - and much more! - [This section](kube-selfpaced.yml.html#toc-exposing-http-services-with-ingress-resources) shows how to expose multiple HTTP apps using [Træfik](https://docs.traefik.io/user-guide/kubernetes/) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Logging - Logging is delegated to the container engine - Logs are exposed through the API - Logs are also accessible through local files (`/var/log/containers`) - Log shipping to a central platform is usually done through these files (e.g. 
with an agent bind-mounting the log directory) - [This section](kube-selfpaced.yml.html#toc-centralized-logging) shows how to do that with [Fluentd](https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd) and the EFK stack .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Metrics - The kubelet embeds [cAdvisor](https://github.com/google/cadvisor), which exposes container metrics (cAdvisor might be separated in the future for more flexibility) - It is a good idea to start with [Prometheus](https://prometheus.io/) (even if you end up using something else) - Starting from Kubernetes 1.8, we can use the [Metrics API](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/) - [Heapster](https://github.com/kubernetes/heapster) was a popular add-on (but is being [deprecated](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md) starting with Kubernetes 1.11) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Managing the configuration of our applications - Two constructs are particularly useful: secrets and config maps - They allow to expose arbitrary information to our containers - **Avoid** storing configuration in container images (There are some exceptions to that rule, but it's generally a Bad Idea) - **Never** store sensitive information in container images (It's the container equivalent of the password on a post-it note on your screen) - [This section](kube-selfpaced.yml.html#toc-managing-configuration) shows how to manage app config with config maps (among others) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Managing stack deployments - Applications are made of many resources (Deployments, Services, and much more) - We need to automate the creation / update / management of these resources - There is no "absolute best" tool or method; it depends on: - the size and complexity of our stack(s) - how often we change it (i.e. add/remove components) - the size and skills of our team .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## A few tools to manage stacks - Shell scripts invoking `kubectl` - YAML resource manifests committed to a repo - [Kustomize](https://github.com/kubernetes-sigs/kustomize) (YAML manifests + patches applied on top) - [Helm](https://github.com/kubernetes/helm) (YAML manifests + templating engine) - [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform) - [Brigade](https://brigade.sh/) (event-driven scripting; no YAML) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Cluster federation -- ![Star Trek Federation](images/startrek-federation.jpg) -- Sorry Star Trek fans, this is not the federation you're looking for! -- (If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!) .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- ## Cluster federation - Kubernetes master operation relies on etcd - etcd uses the [Raft](https://raft.github.io/) protocol - Raft recommends low latency between nodes - What if our cluster spreads to multiple regions? 
-- - Break it down into local clusters - Regroup them in a *cluster federation* - Synchronize resources across clusters - Discover resources across clusters .debug[[k8s/whatsnext.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/whatsnext.md)] --- class: pic .interstitial[![Image separating from the next part](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous part](#toc-next-steps) | [Back to table of contents](#toc-part-4) | [Next part](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Microsoft Learn](https://docs.microsoft.com/learn/) - [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/) - [Cloud Developer Advocates](https://developer.microsoft.com/advocates/) - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Local meetups](https://www.meetup.com/) - [devopsdays](https://www.devopsdays.org/) .debug[[k8s/links-bridget.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/k8s/links-bridget.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Questions? ![end](images/end.jpg) .debug[[shared/thankyou.md](git@gitlab.com:ryax-tech/training/training-slides-docker-kube.git/tree/main/slides/shared/thankyou.md)]