class: title, self-paced Deploying and Scaling Microservices
with Kubernetes
.nav[*Self-paced version*] .debug[These slides have been built from commit: 4d35b81 [shared/title.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/title.md)] --- class: title, in-person Deploying and Scaling Microservices
with Kubernetes
.footnote[ **Be kind to the WiFi!**
*Don't use your hotspot.*
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*
*Thank you!* **Slides: http://container.training/** ] .debug[[shared/title.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/title.md)] --- ## Intros - This slide should be customized by the tutorial instructor(s). - Hello! We are: - .emoji[👨🏾🎓] Paul Czarkowski ([@pczarkowski](https://twitter.com/pczarkowski), Pivotal Software) - .emoji[👨🏾🎓] Tyler Britten ([@tybritten](https://twitter.com/tybritten), Pivotal Software) - The workshop will run from ... - There will be a lunch break at ... (And coffee breaks!) - Feel free to interrupt for questions at any time - *Especially when you see full screen container pictures!* - Live feedback, questions, help: In person! .debug[[pks/logistics.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/logistics.md)] --- ## A brief introduction - This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person, instructor-led workshops and tutorials - Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you! - You can also follow along on your own, at your own pace - We included as much information as possible in these slides - We recommend having a mentor to help you ... - ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ... - ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets .debug[[k8s/intro.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/intro.md)] --- class: self-paced ## Hands on, you shall practice - Nobody ever became a Jedi by spending their lives reading Wookiepedia - Likewise, it will take more than merely *reading* these slides to make you an expert - These slides include *tons* of exercises and examples - They assume that you have access to a Kubernetes cluster - If you are attending a workshop or tutorial:
you will be given specific instructions to access your cluster - If you are doing this on your own:
the first chapter will give you various options to get your own cluster .debug[[k8s/intro.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/intro.md)] --- ## About these slides - All the content is available in a public GitHub repository: https://github.com/jpetazzo/container.training - You can get updated "builds" of the slides there: http://container.training/ -- - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] .debug[[shared/about-slides.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you ☺ .debug[[shared/about-slides.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/about-slides.md)] --- name: toc-chapter-1 ## Chapter 1 - [Pre-requirements](#toc-pre-requirements) - [Kubernetes concepts](#toc-kubernetes-concepts) - [First contact with `kubectl`](#toc-first-contact-with-kubectl) .debug[(auto-generated TOC)] --- name: toc-chapter-2 ## Chapter 2 - [Running our first containers on Kubernetes](#toc-running-our-first-containers-on-kubernetes) - [Accessing logs from the CLI](#toc-accessing-logs-from-the-cli) - [Declarative vs imperative](#toc-declarative-vs-imperative) - [Kubernetes network model](#toc-kubernetes-network-model) - [Exposing containers](#toc-exposing-containers) - [Shipping images with a registry](#toc-shipping-images-with-a-registry) - [Running our application on Kubernetes](#toc-running-our-application-on-kubernetes) .debug[(auto-generated TOC)] --- name: toc-chapter-3 ## Chapter 3 - [Setting up Kubernetes](#toc-setting-up-kubernetes) - [Setting up Kubernetes](#toc-setting-up-kubernetes) - [The Kubernetes dashboard](#toc-the-kubernetes-dashboard) - [Security implications of `kubectl apply`](#toc-security-implications-of-kubectl-apply) - [Octant](#toc-octant) - [Scaling our demo app](#toc-scaling-our-demo-app) - [Daemon sets](#toc-daemon-sets) - [Labels and selectors](#toc-labels-and-selectors) - [Rolling updates](#toc-rolling-updates) .debug[(auto-generated TOC)] --- name: toc-chapter-4 ## Chapter 4 - [Exposing HTTP services with Ingress resources](#toc-exposing-http-services-with-ingress-resources) - [Let's do some housekeeping](#toc-lets-do-some-housekeeping) - [Volumes](#toc-volumes) - [Managing configuration](#toc-managing-configuration) - [Managing stacks with Helm](#toc-managing-stacks-with-helm) .debug[(auto-generated TOC)] --- name: toc-chapter-5 ## Chapter 5 - [Next steps](#toc-next-steps) - [Links and resources](#toc-links-and-resources) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/toc.md)] --- class: pic .interstitial[] --- name: toc-pre-requirements class: title Pre-requirements .nav[ [Previous section](#toc-) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-kubernetes-concepts) ] .debug[(automatically generated title slide)] --- # Pre-requirements - Be comfortable with the UNIX command line - navigating directories - 
editing files - a little bit of bash-fu (environment variables, loops) - Some Docker knowledge - `docker run`, `docker ps`, `docker build` - ideally, you know how to write a Dockerfile and build it
(even if it's a `FROM` line and a couple of `RUN` commands) - It's totally OK if you are not a Docker expert! .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- ## software pre-requirements - You'll need the following software installed on your local laptop: * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) * [helm](https://helm.sh/docs/using_helm/#installing-helm) - Bonus tools * [octant](https://github.com/vmware/octant#installation) * [stern](https://github.com/wercker/stern/releases/tag/1.11.0) * [jq](https://stedolan.github.io/jq/download/) .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- class: title *Tell me and I forget.*
*Teach me and I remember.*
*Involve me and I learn.* Misattributed to Benjamin Franklin [(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/) .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- ## Hands-on sections - The whole workshop is hands-on - You are invited to reproduce all the demos - You will be using conference WiFi and a shared Kubernetes cluster. Please be kind to both. - All hands-on sections are clearly identified, like the gray rectangle below .exercise[ - This is the stuff you're supposed to do! - Go to http://container.training/ to view these slides - Join the chat room: In person! ] .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- class: in-person ## Where are we going to run our containers? .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- class: in-person ## A shared cluster dedicated to this workshop - A large Pivotal Container Service (PKS) cluster deployed to Google Cloud. - It will remain up for the duration of the workshop - You should have a little card with login+password+URL - Logging into this URL will give you a downloadable kubeconfig file. .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- class: in-person ## Why don't we run containers locally? - Installing this stuff can be hard on some machines (32-bit CPU or OS... Laptops without administrator access... etc.) - *"The whole team downloaded all these container images from the WiFi!
... and it went great!"* (Literally no-one ever) - All you need is a computer (or even a phone or tablet!), with: - an internet connection - a web browser - kubectl - helm .debug[[pks/prereqs.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/prereqs.md)] --- class: in-person ## Connecting to our lab environment .exercise[ - Log into https://workshop.paulczar.wtf with your provided credentials - Follow the instructions on the auth portal to set up a `kubeconfig` file. - Check that you can connect to the cluster with `kubectl get nodes`: ```bash $ kubectl get nodes NAME STATUS ROLES AGE VERSION vm-0f2b473c-5ae6-4af3-4e80-f0a068b03abe Ready
<none> 23h v1.14.5 vm-25cfc8d6-88c0-45f6-4305-05e859af7f2c Ready
<none> 23h v1.14.5 ... ... ``` ] If anything goes wrong — ask for help! .debug[[pks/connecting.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/connecting.md)] --- ## Doing or re-doing the workshop on your own? - Use something like [Play-With-Docker](http://play-with-docker.com/) or [Play-With-Kubernetes](https://training.play-with-kubernetes.com/) Zero setup effort; but environments are short-lived and might have limited resources - Create your own cluster (local or cloud VMs) Small setup effort; small cost; flexible environments - Create a bunch of clusters for you and your friends ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms)) Bigger setup effort; ideal for group training .debug[[pks/connecting.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/connecting.md)] --- class: self-paced ## Get your own Docker nodes - If you already have some Docker nodes: great! - If not: let's get some, thanks to Play-With-Docker .exercise[ - Go to http://www.play-with-docker.com/ - Log in - Create your first node ] You will need a Docker ID to use Play-With-Docker. (Creating a Docker ID is free.) .debug[[pks/connecting.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/connecting.md)] --- ## Terminals Once in a while, the instructions will say:
"Open a new terminal." There are multiple ways to do this: - create a new window or tab on your machine, and SSH into the VM; - use screen or tmux on the VM and open a new window from there. You are welcome to use the method that you feel the most comfortable with. .debug[[pks/connecting.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/connecting.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous section](#toc-pre-requirements) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster -- - What does that really mean? .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers -- - New release! Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Basic autoscaling - Blue/green deployment, canary deployment - Long running services, but also batch (one-off) jobs - Overcommit our cluster and *evict* low-priority jobs - Run services with *stateful* data (databases etc.) 
- Fine-grained access control defining *what* can be done by *whom* on *which* resources - Integrating third party services (*service catalog*) - Automating complex tasks (*operators*) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: pic  .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that ❤️ .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: pic  .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Kubernetes architecture: the data plane - The data plane is a collection of nodes that execute our containers - These nodes run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Kubernetes architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: pic  .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: extra-details ## Running the control plane on special nodes - PKS reserves dedicated node[s] for the control plane - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?) - Normal applications are restricted from running on this node - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? No! 
-- - By default, Kubernetes uses the Docker Engine to run containers - We could also use `rkt` ("Rocket") from CoreOS - Or leverage other pluggable runtimes through the *Container Runtime Interface* (like CRI-O, or containerd) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? Yes! -- - Our Kubernetes cluster is using Docker as the container engine - We still use it to build images and ship them around - We can do these things without Docker
(and get diagnosed with NIH¹ syndrome) - Docker is still the most stable container engine today
(but other options are maturing very quickly) .footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)] .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? - On our development environments, CI pipelines ... : *Yes, almost certainly* - On our production servers: *Yes (today)* *Probably not (in the future)* .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Interacting with Kubernetes - We will interact with our Kubernetes cluster through the Kubernetes API - The Kubernetes API is (mostly) RESTful - It allows us to create, read, update, delete *resources* - A few common resource types are: - node (a machine — physical or virtual — in our cluster) - pod (group of containers running together on a node) - service (stable network endpoint to connect to one or multiple containers) .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: pic  .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- ## Credits - The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha) - it's one of the best Kubernetes architecture diagrams available! - The second diagram is courtesy of Weave Works - a *pod* can have multiple containers working together - IP addresses are associated with *pods*, not with individual containers Both diagrams used with permission. .debug[[pks/concepts-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/concepts-k8s.md)] --- class: pic .interstitial[] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous section](#toc-kubernetes-concepts) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! 
.exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result! ] .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- class: extra-details ## Exploring types and definitions - We can list all available resource types by running `kubectl api-resources`
(In Kubernetes 1.10 and prior, this command used to be `kubectl get`) - We can view the definition for a resource type with: ```bash kubectl explain type ``` - We can view the definition of a field in a resource, for instance: ```bash kubectl explain node.spec ``` - Or get the full definition of all fields and sub-fields: ```bash kubectl explain node --recursive ``` .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- class: extra-details ## Introspection vs. documentation - We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference) - The API documentation is usually easier to read, but: - it won't show custom types (like Custom Resource Definitions) - we need to make sure that we look at the correct version - `kubectl api-resources` and `kubectl explain` perform *introspection* (they communicate with the API server and obtain the exact type definitions) .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Type names - The most common resource names have three forms: - singular (e.g. `node`, `service`, `deployment`) - plural (e.g. `nodes`, `services`, `deployments`) - short (e.g. `no`, `svc`, `deploy`) - Some resources do not have a short name - `Endpoints` only have a plural form (because even a single `Endpoints` resource is actually a list of endpoints) .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Viewing details - We can use `kubectl get -o yaml` to see all available details - However, YAML output is often simultaneously too much and not enough - For instance, `kubectl get node node1 -o yaml` is: - too much information (e.g.: list of images available on this node) - not enough information (e.g.: doesn't show pods running on this node) - difficult to read for a human operator - For a comprehensive overview, we can use `kubectl describe` instead .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## `kubectl describe` - `kubectl describe` needs a resource type and (optionally) a resource name - It is possible to provide a resource name *prefix* (all matching objects will be displayed) - `kubectl describe` will retrieve some extra information about the resource .exercise[ - Look at the information available for `node1` with one of the following commands: ```bash kubectl describe node/node1 kubectl describe node node1 ``` ] (We should notice a bunch of control plane pods.) .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There should be no services. This is because you're not running anything yet. But there are some services running in other namespaces. 
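Side note (not part of the original exercise): to double-check which namespace your kubeconfig is currently scoped to, the following introspection command should work with any recent `kubectl`:

```bash
# Show the namespace set in the current context (empty output means "default")
kubectl config view --minify --output 'jsonpath={..namespace}'; echo
```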
.debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services --all-namespaces kubectl get svc --all-namespaces ``` ] -- There's a bunch of services already running that are used in the operations of the Kubernetes cluster. .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.100.200.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.100.200.1 with the CLUSTER-IP for the `kubernetes` service shown by `kubectl get svc` ] -- The Cluster IP is only accessible from inside the cluster. We'll explore other ways to expose a service later. .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *Where are the pods that we saw just a moment earlier?!?* .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... This `kube-system` thing looks suspicious.* *In fact, I'm pretty sure it showed up earlier, when we did:* `kubectl describe node node1` .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can see resources in all namespaces with `--all-namespaces` .exercise[ - List the pods in all namespaces: ```bash kubectl get pods --all-namespaces ``` - Since Kubernetes 1.14, we can also use `-A` as a shorter version: ```bash kubectl get pods -A ``` ] *Here are our system pods!* .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## What are all these control plane pods? 
- `kube-apiserver` is the API server - `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)) - the `READY` column indicates the number of containers in each pod (1 for most pods, but `coredns` has 3, for instance) .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Scoping another namespace - We can also look at a different namespace (other than `default`) .exercise[ - List only the pods in the `kube-system` namespace: ```bash kubectl get pods --namespace=kube-system kubectl get pods -n kube-system ``` ] .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- ## Namespaces and other `kubectl` commands - We can use `-n`/`--namespace` with almost every `kubectl` command - Example: - `kubectl create --namespace=X` to create something in namespace X - We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects - Examples: - `kubectl delete` can delete resources across multiple namespaces - `kubectl label` can add/remove/update labels across multiple namespaces -- **These commands will not work for you, as you are restricted by Role Based Authentication to only have write access inside your own namespace.** .debug[[pks/kubectlget.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlget.md)] --- class: pic .interstitial[] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous section](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-accessing-logs-from-the-cli) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command - Then we are going to start additional copies of the pod .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Starting a simple pod with `kubectl run` - We need to specify at least a *name* and the image we want to use .exercise[ - Let's ping `1.1.1.1`, Cloudflare's [public DNS resolver](https://blog.cloudflare.com/announcing-1111/): ```bash kubectl run pingpong --image alpine ping 1.1.1.1 ``` ] -- (Starting with Kubernetes 1.12, we get a message telling us that `kubectl run` is deprecated. Let's ignore it for now.) .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Behind the scenes of `kubectl run` - Let's look at the resources that were created by `kubectl run` .exercise[ - List most resource types: ```bash kubectl get all ``` ] -- We should see the following things: - `deployment.apps/pingpong` (the *deployment* that we just created) - `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment) - `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set) Note: as of 1.10.1, resource types are displayed in more detail. .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## What are these different things? 
- A *deployment* is a high-level construct - allows scaling, rolling updates, rollbacks - multiple deployments can be used together to implement a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) - delegates pods management to *replica sets* - A *replica set* is a low-level construct - makes sure that a given number of identical pods are running - allows scaling - rarely used directly - A *replication controller* is the (deprecated) predecessor of a replica set .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Our `pingpong` deployment - `kubectl run` created a *deployment*, `deployment.apps/pingpong` ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1 1 1 1 10m ``` - That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx` ``` NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m ``` - That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy` ``` NAME READY STATUS RESTARTS AGE pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m ``` - We'll see later how these folks play together for: - scaling, high availability, rolling updates .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) .exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (à la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` ] .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 3 ``` - Note that this command does exactly the same thing: ```bash kubectl scale deployment pingpong --replicas 3 ``` ] Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`? We could! But the *deployment* would notice it right away, and scale back to the initial level. .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? 
.exercise[ - In a separate window, list pods, and keep watching them: ```bash kubectl get pods -w ``` - Destroy a pod: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` ] .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## What if we wanted something different? - What if we wanted to start a "one-shot" container that *doesn't* get restarted? - We could use `kubectl run --restart=OnFailure` or `kubectl run --restart=Never` - These commands would create *jobs* or *pods* instead of *deployments* - Under the hood, `kubectl run` invokes "generators" to create resource descriptions - We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with `kubectl apply -f` (discussed later) - With `kubectl run --schedule=...`, we can also create *cronjobs* .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## What about that deprecation warning? - As we can see from the previous slide, `kubectl run` can do many things - The exact type of resource created is not obvious - To make things more explicit, it is better to use `kubectl create`: - `kubectl create deployment` to create a deployment - `kubectl create job` to create a job - `kubectl create cronjob` to run a job periodically
(since Kubernetes 1.14) - Eventually, `kubectl run` will be used only to start one-shot pods (see https://github.com/kubernetes/kubernetes/pull/68132) .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Various ways of creating resources - `kubectl run` - easy way to get started - versatile - `kubectl create
` - explicit, but lacks some features - can't create a CronJob before Kubernetes 1.14 - can't pass command-line arguments to deployments - `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml` - all features are available - requires writing YAML .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Viewing logs of multiple pods - When we specify a deployment name, only one single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - A selector is a logic expression using *labels* - Conveniently, when you `kubectl run somename`, the associated objects have a `run=somename` label .exercise[ - View the last line of log from all pods with the `run=pingpong` label: ```bash kubectl logs -l run=pingpong --tail 1 ``` ] .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ### Streaming logs of multiple pods - Can we stream the logs of all our `pingpong` pods? .exercise[ - Combine `-l` and `-f` flags: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] *Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!* *Let's try to understand why ...* .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- class: extra-details ### Streaming logs of many pods - Let's see what happens if we try to stream the logs for more than 5 pods .exercise[ - Scale up our deployment: ```bash kubectl scale deployment pingpong --replicas=8 ``` - Stream the logs: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` ] We see a message like the following one: ``` error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit ``` .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- class: extra-details ## Why can't we stream the logs of many pods? - `kubectl` opens one connection to the API server per pod - For each pod, the API server opens one extra connection to the corresponding kubelet - If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server - This could easily put a lot of stress on the API server - Prior Kubernetes 1.14, it was decided to *not* allow multiple connections - From Kubernetes 1.14, it is allowed, but limited to 5 connections (this can be changed with `--max-log-requests`) - For more details about the rationale, see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573) .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Shortcomings of `kubectl logs` - We don't see which pod sent which log line - If pods are restarted / replaced, the log stream stops - If new pods are added, we don't see their logs - To stream the logs of multiple pods, we need to write a selector - There are external tools to address these shortcomings (e.g.: [Stern](https://github.com/wercker/stern)) .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- class: extra-details ## `kubectl logs -l ... 
--tail N` - If we run this with Kubernetes 1.12, the last command shows multiple lines - This is a regression when `--tail` is used together with `-l`/`--selector` - It always shows the last 10 lines of output for each container (instead of the number of lines specified on the command line) - The problem was fixed in Kubernetes 1.13 *See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.* .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- ## Aren't we flooding 1.1.1.1? - If you're wondering this, good question! - Don't worry, though: *APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.* (Source: https://blog.cloudflare.com/announcing-1111/) - It's very unlikely that our concerted pings manage to produce even a modest blip at Cloudflare's NOC! .debug[[k8s/kubectlrun.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubectlrun.md)] --- class: pic .interstitial[] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous section](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-declarative-vs-imperative) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/wercker/stern) is an open source project by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! 
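For reference, here is a minimal shell sketch of the "manual" approach described on the previous slide; it assumes pods labeled `run=pingpong` and is no substitute for Stern:

```bash
# Fork one "kubectl logs" per matching pod, prefixing each line with the pod name
for pod in $(kubectl get pods -l run=pingpong -o name); do
  kubectl logs --follow --timestamps "$pod" | sed "s|^|$pod |" &
done
wait  # block while the background streams are running
```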
.debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Installing Stern - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases) - The following commands will install Stern on a Linux Intel 64 bit machine: ```bash sudo curl -L -o /usr/local/bin/stern \ https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 sudo chmod +x /usr/local/bin/stern ``` .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. a regex used to match pod names - These two ways can be combined if necessary .exercise[ - View the logs for all the rng containers: ```bash stern ping ``` ] .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Stern convenient options - The `--tail N` flag shows the last `N` lines for each container (Instead of showing the logs since the creation of the container) - The `-t` / `--timestamps` flag shows timestamps - The `--all-namespaces` flag is self-explanatory .exercise[ - View what's up with the `weave` system containers: ```bash stern --tail 1 --timestamps --all-namespaces weave ``` ] .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Using Stern with a selector - When specifying a selector, we can omit the value for a label - This will match all objects having that label (regardless of the value) - Everything created with `kubectl run` has a label `run` - We can use that property to view the logs of all the pods created with `kubectl run` - Similarly, everything created with `kubectl create deployment` has a label `app` .exercise[ - View the logs for all the things started with `kubectl run`: ```bash stern -l run ``` ] .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- ## Cleanup ping pong deployment - Time to clean up pingpong and move on .exercise[ - delete the pingpong deployment ```bash kubectl delete deployment pingpong ``` ] .debug[[k8s/logs-cli.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/logs-cli.md)] --- class: pic .interstitial[] --- name: toc-declarative-vs-imperative class: title Declarative vs imperative .nav[ [Previous section](#toc-accessing-logs-from-the-cli) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-kubernetes-network-model) ] .debug[(automatically generated title slide)] --- # Declarative vs imperative - Our container orchestrator puts a very strong emphasis on being *declarative* - Declarative: *I would like a cup of tea.* - Imperative: *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.* -- - Declarative seems simpler at first ... -- - ... 
As long as you know how to brew tea .debug[[shared/declarative.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/declarative.md)] --- ## Declarative vs imperative - What declarative would really be: *I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.* -- *¹An infusion is obtained by letting the object steep a few minutes in hot² water.* -- *²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.* -- *³Ah, finally, containers! Something we know about. Let's get to work, shall we?* -- .footnote[Did you know there was an [ISO standard](https://en.wikipedia.org/wiki/ISO_3103) specifying how to brew tea?] .debug[[shared/declarative.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/declarative.md)] --- ## Declarative vs imperative - Imperative systems: - simpler - if a task is interrupted, we have to restart from scratch - Declarative systems: - if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary - we need to be able to *observe* the system - ... and compute a "diff" between *what we have* and *what we want* .debug[[shared/declarative.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/declarative.md)] --- ## Declarative vs imperative in Kubernetes - With Kubernetes, we cannot say: "run this container" - All we can do is write a *spec* and push it to the API server (by creating a resource like e.g. a Pod or a Deployment) - The API server will validate that spec (and reject it if it's invalid) - Then it will store it in etcd - A *controller* will "notice" that spec and act upon it .debug[[k8s/declarative.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/declarative.md)] --- ## Reconciling state - Watch for the `spec` fields in the YAML files later! - The *spec* describes *how we want the thing to be* - Kubernetes will *reconcile* the current state with the spec
(technically, this is done by a number of *controllers*) - When we want to change some resource, we update the *spec* - Kubernetes will then *converge* that resource .debug[[k8s/declarative.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/declarative.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." The following 19 slides show what really happens when we run: ```bash kubectl run web --image=nginx --replicas=3 ``` .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-network-model class: title Kubernetes network model .nav[ [Previous 
section](#toc-declarative-vs-imperative) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- # Kubernetes network model - TL,DR: *Our cluster (nodes and pods) is one big flat IP network.* -- - In detail: - all nodes must be able to reach each other, without NAT - all pods must be able to reach each other, without NAT - pods and nodes must be able to reach each other, without NAT - each pod is aware of its IP address (no NAT) - pod IP addresses are assigned by the network implementation - Kubernetes doesn't mandate any particular implementation .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the good - Everything can reach everything - No address translation - No port translation - No new protocol - The network implementation can decide how to allocate addresses - IP addresses don't have to be "portable" from a node to another (We can use e.g. a subnet per node and use a simple routed topology) - The specification is simple enough to allow many various implementations .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the less good - Everything can reach everything - if you want security, you need to add network policies - the network implementation that you use needs to support them - There are literally dozens of implementations out there (15 are listed in the Kubernetes documentation) - Pods have level 3 (IP) connectivity, but *services* are level 4 (TCP or UDP) (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets) - `kube-proxy` is on the data path when connecting to a pod or container,
and it's not particularly fast (relies on userland proxying or iptables) .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- ## Kubernetes network model: in practice - The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave) - We don't endorse Weave in a particular way, it just Works For Us - Don't worry about the warning about `kube-proxy` performance - Unless you: - routinely saturate 10G network interfaces - count packet rates in millions per second - run high-traffic VOIP or gaming platforms - do weird things that involve millions of simultaneous connections
(in which case you're already familiar with kernel tuning) - If necessary, there are alternatives to `kube-proxy`; e.g. [`kube-router`](https://www.kube-router.io) .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- class: extra-details ## The Container Network Interface (CNI) - Most Kubernetes clusters use CNI "plugins" to implement networking - When a pod is created, Kubernetes delegates the network setup to these plugins (it can be a single plugin, or a combination of plugins, each doing one task) - Typically, CNI plugins will: - allocate an IP address (by calling an IPAM plugin) - add a network interface into the pod's network namespace - configure the interface as well as required routes etc. .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- class: extra-details ## Multiple moving parts - The "pod-to-pod network" or "pod network": - provides communication between pods and nodes - is generally implemented with CNI plugins - The "pod-to-service network": - provides internal communication and load balancing - is generally implemented with kube-proxy (or e.g. kube-router) - Network policies: - provide firewalling and isolation - can be bundled with the "pod network" or provided by another component .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- class: extra-details ## Even more moving parts - Inbound traffic can be handled by multiple components: - something like kube-proxy or kube-router (for NodePort services) - load balancers (ideally, connected to the pod network) - It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus) - Some solutions can fill multiple roles (e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy) .debug[[k8s/kubenet.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/kubenet.md)] --- class: pic .interstitial[] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous section](#toc-kubernetes-network-model) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-shipping-images-with-a-registry) ] .debug[(automatically generated title slide)] --- # Exposing containers - `kubectl expose` creates a *service* for existing pods - A *service* is a stable address for a pod (or a bunch of pods) - If we want to connect to our pod(s), we need to create a *service* - Once a service is created, CoreDNS will allow us to resolve it by name (i.e. after creating service `hello`, the name `hello` will resolve to something) - There are different types of services, detailed on the following slides: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Basic service types - `ClusterIP` (default type) - a virtual IP address is allocated for the service (in an internal, private range) - this IP address is reachable only from within the cluster (nodes and pods) - our code can connect to the service using the original port number - `NodePort` - a port is allocated for the service (by default, in the 30000-32768 range) - that port is made available *on all our nodes* and anybody can connect to it - our code must be changed to connect to that new port number These service types are always available. 
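As a quick illustration of how each type could be created (with a hypothetical `web` deployment; the exercises below use the default `ClusterIP` type):

```bash
# Hypothetical examples, not part of the exercises
kubectl expose deployment web --port 80                  # ClusterIP (default)
kubectl expose deployment web --port 80 --type=NodePort  # NodePort (30000-32768)
```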
Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables` rules. .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## More service types - `LoadBalancer` - an external load balancer is allocated for the service - the load balancer is configured accordingly
(e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port) - available only when the underlying infrastructure provides some "load balancer as a service"
(e.g. AWS, Azure, GCE, OpenStack...) - `ExternalName` - the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record - no port, no IP address, no nothing else is allocated .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else - We could use the `nginx` official image, but ... ... we wouldn't be able to tell the backends from each other! - We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go - `jpetazzo/httpenv` listens on port 8888 - It serves its environment variables in JSON format - The environment variables will include `HOSTNAME`, which will be the pod name (and therefore, will be different on each backend) .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Creating a deployment for our HTTP server - We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ... - But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead .exercise[ - In another window, watch the pods (to see when they are created): ```bash kubectl get pods -w ``` - Create a deployment for this very lightweight HTTP server: ```bash kubectl create deployment httpenv --image=jpetazzo/httpenv ``` - Scale it to 3 replicas: ```bash kubectl scale deployment httpenv --replicas=3 ``` ] .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the HTTP port of our server: ```bash kubectl expose deployment httpenv --port 8888 ``` - Look up which IP address was allocated: ```bash kubectl get service ``` ] .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service - Running services with arbitrary port (or port ranges) requires hacks (e.g. host networking mode) .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our pods .exercise[ - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}') ``` - Send a few requests: ```bash curl http://$IP:8888/ ``` - Too much output? Filter it with `jq`: ```bash curl -s http://$IP:8888/ | jq .HOSTNAME ``` ] -- Oh right, that doesn't work, its a `cluster-ip`. We need another way to access it. 
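(As an aside: the ClusterIP *does* work from inside the cluster. If you are curious, you could verify it with a throwaway pod; this is just a sketch, assuming the public `curlimages/curl` image and using the service's DNS name:)

```bash
# One-off pod, deleted automatically after the command completes
kubectl run curltest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://httpenv:8888/
```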
.debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## port forwarding - You can forward a local port from your machine into a pod .exercise[ - Forward a port into your deployment: ```bash kubectl port-forward service/httpenv 8888:8888 ``` - In a new window run curl a few times: ```bash curl localhost:8888 curl localhost:8888 curl localhost:8888 ``` - Hit `ctrl-c` in the original window to terminate the port-forward ] -- The response was the same from each request. This is because `kubectl port-forward` forwards to a specific pod, not to the cluster-ip. .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: extra-details ## If we don't need a clusterIP load balancer - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over another protocol than UDP or TCP - if we want to decide how to balance the requests client-side - ... - In that case, we can use a "headless service" .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: extra-details ## Headless services - A headless service is obtained by setting the `clusterIP` field to `None` (Either with `--cluster-ip=None`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - CoreDNS will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `httpenv` service: ```bash kubectl describe service httpenv ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. 
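If we only want the addresses themselves (e.g. for scripting), a `jsonpath` query along these lines also works (a sketch, assuming the `httpenv` service still exists):

```bash
kubectl get endpoints httpenv \
  -o jsonpath='{range .subsets[*].addresses[*]}{.ip}{"\n"}{end}'
```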
.debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use one of the following commands: ```bash kubectl describe endpoints httpenv kubectl get endpoints httpenv -o yaml ``` - These commands will show us a list of IP addresses - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l app=httpenv -o wide ``` .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Exposing services to the outside world - The default type (ClusterIP) only works for internal traffic - If we want to accept external traffic, we can use one of these: - NodePort (expose a service on a TCP port between 30000-32768) - LoadBalancer (provision a cloud load balancer for our service) - ExternalIP (use one node's external IP address) - Ingress (a special mechanism for HTTP services) *We'll see NodePorts and Ingresses more in detail later.* .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Exposing services to the outside world .exercise[ - Set the service to be of type `Loadbalancer`: ```bash kubectl patch svc httpenv -p '{"spec": {"type": "LoadBalancer"}}' ``` - Check for the IP of the loadbalancer: ```bash kubectl get svc httpenv ``` - Test access via the loadbalancer: ```bash curl
:8888 ``` ] -- The `kubectl patch` command lets you patch a kubernetes resource to make minor changes like the above modification of the service type. .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- ## Cleanup .exercise[ - Delete the service ```bash kubectl delete svc httpenv ``` - Delete the deployment ```bash kubectl delete deployment httpenv ``` ] .debug[[pks/kubectlexpose.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/kubectlexpose.md)] --- class: pic .interstitial[] --- name: toc-shipping-images-with-a-registry class: title Shipping images with a registry .nav[ [Previous section](#toc-exposing-containers) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-running-our-application-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Shipping images with a registry - Initially, our app was running on a single node - We could *build* and *run* in the same place - Therefore, we did not need to *ship* anything - Now that we want to run on a cluster, things are different - The easiest way to ship container images is to use a registry .debug[[k8s/shippingimages.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/shippingimages.md)] --- ## How Docker registries work (a reminder) - What happens when we execute `docker run alpine` ? - If the Engine needs to pull the `alpine` image, it expands it into `library/alpine` - `library/alpine` is expanded into `index.docker.io/library/alpine` - The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest` - To use something else than `index.docker.io`, we specify it in the image name - Examples: ```bash docker pull gcr.io/google-containers/alpine-with-bash:1.0 docker build -t registry.mycompany.io:5000/myimage:awesome . docker push registry.mycompany.io:5000/myimage:awesome ``` .debug[[k8s/shippingimages.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/shippingimages.md)] --- ## Running DockerCoins on Kubernetes - Create one deployment for each component (hasher, redis, rng, webui, worker) - Expose deployments that need to accept connections (hasher, redis, rng, webui) - For redis, we can use the official redis image - For the 4 others, we need to build images and push them to some registry .debug[[k8s/shippingimages.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/shippingimages.md)] --- ## Building and shipping images - There are *many* options! - Manually: - build locally (with `docker build` or otherwise) - push to the registry - Automatically: - build and test locally - when ready, commit and push a code repository - the code repository notifies an automated build system - that system gets the code, builds it, pushes the image to the registry .debug[[k8s/shippingimages.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/shippingimages.md)] --- ## Which registry do we want to use? - There are SAAS products like Docker Hub, Quay ... - Each major cloud provider has an option as well (ACR on Azure, ECR on AWS, GCR on Google Cloud...) - There are also commercial products to run our own registry (Docker EE, Quay...) - And open source options, too! 
- When picking a registry, pay attention to its build system (when it has one) .debug[[k8s/shippingimages.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/shippingimages.md)] --- ## Using images from the Docker Hub - For everyone's convenience, we took care of building DockerCoins images - We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user - These images are *tagged* with a version number, `v0.1` - The full image names are therefore: - `dockercoins/hasher:v0.1` - `dockercoins/rng:v0.1` - `dockercoins/webui:v0.1` - `dockercoins/worker:v0.1` .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/buildshiprun-dockerhub.md)] --- ## Setting `$REGISTRY` and `$TAG` - In the upcoming exercises and labs, we use a couple of environment variables: - `$REGISTRY` as a prefix to all image names - `$TAG` as the image version tag - For example, the worker image is `$REGISTRY/worker:$TAG` - If you copy-paste the commands in these exercises: **make sure that you set `$REGISTRY` and `$TAG` first!** - For example: ``` export REGISTRY=dockercoins TAG=v0.1 ``` (this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`) .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/buildshiprun-dockerhub.md)] --- class: pic .interstitial[] --- name: toc-running-our-application-on-kubernetes class: title Running our application on Kubernetes .nav[ [Previous section](#toc-shipping-images-with-a-registry) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # Running our application on Kubernetes - We can now deploy our code (as well as a redis instance) .exercise[ - Deploy `redis`: ```bash kubectl create deployment redis --image=redis ``` - Deploy everything else: ```bash for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG done ``` ] .debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- ## Is this working? - After waiting for the deployment to complete, let's look at the logs! (Hint: use `kubectl get deploy -w` to watch deployment events) .exercise[ - Look at some logs: ```bash kubectl logs deploy/rng kubectl logs deploy/worker ``` ] -- 🤔 `rng` is fine ... But not `worker`. -- 💡 Oh right! We forgot to `expose`. .debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- ## Connecting containers together - Three deployments need to be reachable by others: `hasher`, `redis`, `rng` - `worker` doesn't need to be exposed - `webui` will be dealt with later .exercise[ - Expose each deployment, specifying the right port: ```bash kubectl expose deployment redis --port 6379 kubectl expose deployment rng --port 80 kubectl expose deployment hasher --port 80 ``` ] .debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- ## Is this working yet? - The `worker` has an infinite loop, that retries 10 seconds after an error .exercise[ - Stream the worker's logs: ```bash kubectl logs deploy/worker --follow ``` (Give it about 10 seconds to recover) ] -- We should now see the `worker`, well, working happily. 
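For the record, the `kubectl expose` commands we just ran are roughly equivalent to applying a small manifest for each service; here is a sketch of what the `redis` one could look like (not the literal output of the command):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis          # matches the label set by `kubectl create deployment redis`
  ports:
  - port: 6379          # same port for the service and the container
    targetPort: 6379
```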
.debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- ## Exposing services for external access - Now we would like to access the Web UI - We will use `kubectl port-forward` because we don't want the whole world to see it. .exercise[ - Create a port forward for the Web UI: ```bash kubectl port-forward deploy/webui 8888:80 ``` - In a new terminal check you can access it: ```bash curl localhost:8888 ``` ] -- The output `Found. Redirecting to /index.html` tells us the port forward worked. .debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- ## Accessing the web UI - We can now access the web UI from the port-forward. But nobody else can. .exercise[ - Open the web UI in your browser (http://localhost:8888/) ] -- *Alright, we're back to where we started, when we were running on a single node!* .debug[[pks/ourapponkube.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ourapponkube.md)] --- class: pic .interstitial[] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous section](#toc-running-our-application-on-kubernetes) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- class: pic .interstitial[] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous section](#toc-running-our-application-on-kubernetes) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes How did we set up these Kubernetes clusters that we're using? -- - We used Pivotal Container Service (PKS) a multicloud Kubernetes broker. - But first we Created a GKE Kubernetes cluster - We installed the Google Cloud Operator on GKE - We installed PKS using the GCP Operator - We installed this Kubernetes cluster using PKS .debug[[pks/setup-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/setup-k8s.md)] --- # Setting up Kubernetes - How can I set up a basic Kubernetes lab at home? -- - Run `kubeadm` on freshly installed VM instances running Ubuntu LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the first node (it deploys the control plane on that node) 4. Set up Weave (the overlay network)
(that step is just one `kubectl apply` command; discussed later) 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[pks/setup-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/setup-k8s.md)] --- ## `kubeadm` drawbacks - Doesn't set up Docker or any other container engine - Doesn't set up the overlay network - Doesn't set up multi-master (no high availability) -- (At least ... not yet! Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).) -- - "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme .debug[[pks/setup-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/setup-k8s.md)] --- ## Other deployment options - [AKS](https://azure.microsoft.com/services/kubernetes-service/): managed Kubernetes on Azure - [GKE](https://cloud.google.com/kubernetes-engine/): managed Kubernetes on Google Cloud - [EKS](https://aws.amazon.com/eks/), [eksctl](https://eksctl.io/): managed Kubernetes on AWS - [kops](https://github.com/kubernetes/kops): customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha) - [minikube](https://kubernetes.io/docs/setup/minikube/), [kubespawn](https://github.com/kinvolk/kube-spawn), [Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/): for local development - [kubicorn](https://github.com/kubicorn/kubicorn), the [Cluster API](https://blogs.vmware.com/cloudnative/2019/03/14/what-and-why-of-cluster-api/): deploy your clusters declaratively, "the Kubernetes way" .debug[[pks/setup-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/setup-k8s.md)] --- ## Even more deployment options - If you like Ansible: [kubespray](https://github.com/kubernetes-incubator/kubespray) - If you like Terraform: [typhoon](https://github.com/poseidon/typhoon) - If you like Terraform and Puppet: [tarmak](https://github.com/jetstack/tarmak) - You can also learn how to install every component manually, with the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) *Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.* - There are also many commercial options available! - For a longer list, check the Kubernetes documentation:
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes. .debug[[pks/setup-k8s.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/setup-k8s.md)] --- class: pic .interstitial[] --- name: toc-the-kubernetes-dashboard class: title The Kubernetes dashboard .nav[ [Previous section](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-security-implications-of-kubectl-apply) ] .debug[(automatically generated title slide)] --- # The Kubernetes dashboard - Kubernetes resources can also be viewed with a web dashboard - That dashboard is usually exposed over HTTPS (this requires obtaining a proper TLS certificate) - Dashboard users need to authenticate - Most people just YOLO it into their cluster and then get hacked .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## Stop the madness You know what, this is all a very bad idea. Let's not run the Kubernetes dashboard at all ... ever. The following slides are informational. Do not run them. .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## The insecure method - We could (and should) use [Let's Encrypt](https://letsencrypt.org/) ... - ... but we don't want to deal with TLS certificates - We could (and should) learn how authentication and authorization work ... - ... but we will use a guest account with admin access instead .footnote[.warning[Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.]] .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## Running a very insecure dashboard - We are going to deploy that dashboard with *one single command* - This command will create all the necessary resources (the dashboard itself, the HTTP wrapper, the admin/guest account) - All these resources are defined in a YAML file - All we have to do is load that YAML file with with `kubectl apply -f` .exercise[ - Create all the dashboard resources, with the following command: ```bash kubectl apply -f ~/container.training/k8s/insecure-dashboard.yaml ``` ] .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## Connecting to the dashboard .exercise[ - Check which port the dashboard is on: ```bash kubectl get svc dashboard ``` ] You'll want the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] The dashboard will then ask you which authentication you want to use. .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## Dashboard authentication - We have three authentication options at this point: - token (associated with a role that has appropriate permissions) - kubeconfig (e.g. using the `~/.kube/config` file from `node1`) - "skip" (use the dashboard "service account") - Let's use "skip": we're logged in! -- .warning[By the way, we just added a backdoor to our Kubernetes cluster!] 
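If you ever *do* want to log in with a token instead of "skip", a read-only account can be set up along these lines (a sketch; the `dashboard-viewer` name is made up, and recent Kubernetes versions obtain tokens differently):

```bash
# Hypothetical read-only account, bound to the built-in "view" ClusterRole
kubectl create serviceaccount dashboard-viewer
kubectl create clusterrolebinding dashboard-viewer \
  --clusterrole=view --serviceaccount=default:dashboard-viewer
# Pre-1.24 clusters auto-create a token Secret for the service account
SECRET=$(kubectl get serviceaccount dashboard-viewer -o jsonpath='{.secrets[0].name}')
kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 --decode
```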
.debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## Running the Kubernetes dashboard securely - The steps that we just showed you are *for educational purposes only!* - If you do that on your production cluster, people [can and will abuse it](https://redlock.io/blog/cryptojacking-tesla) - For an in-depth discussion about securing the dashboard,
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca) .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- class: pic .interstitial[] --- name: toc-security-implications-of-kubectl-apply class: title Security implications of `kubectl apply` .nav[ [Previous section](#toc-the-kubernetes-dashboard) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-octant) ] .debug[(automatically generated title slide)] --- # Security implications of `kubectl apply` - When we do `kubectl apply -f <URL>
`, we create arbitrary resources - Resources can be evil; imagine a `deployment` that ... -- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - ☠️☠️☠️ .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (for instance, if you try to apply YAML from a link that's no longer valid) .debug[[pks/dashboard.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/dashboard.md)] --- class: pic .interstitial[] --- name: toc-octant class: title Octant .nav[ [Previous section](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-scaling-our-demo-app) ] .debug[(automatically generated title slide)] --- # Octant Octant is an open source tool from VMWare which is designed to be a Kubernetes workload visualization tool that runs locally and uses your Kubeconfig to connect to the Kubernetes cluster. Octant only ever performs list and read style requests and does not create/modify/delete resources. This makes it a much safer tool to use than the Kubernetes Dashboard. .exercise[ - Run octant and browse through your resources: ```bash octant ``` ] .debug[[pks/octant.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/octant.md)] --- class: pic .interstitial[] --- name: toc-scaling-our-demo-app class: title Scaling our demo app .nav[ [Previous section](#toc-octant) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-daemon-sets) ] .debug[(automatically generated title slide)] --- # Scaling our demo app - Our ultimate goal is to get more DockerCoins (i.e. increase the number of loops per second shown on the web UI) - Let's look at the architecture again:  - The loop is done in the worker; perhaps we could try adding more workers? .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Adding another worker - All we have to do is scale the `worker` Deployment .exercise[ - Open two new terminals to check what's going on with pods and deployments: ```bash kubectl get pods -w kubectl get deployments -w ``` - Now, create more `worker` replicas: ```bash kubectl scale deployment worker --replicas=2 ``` ] After a few seconds, the graph in the web UI should show up. .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Adding more workers - If 2 workers give us 2x speed, what about 3 workers? .exercise[ - Scale the `worker` Deployment further: ```bash kubectl scale deployment worker --replicas=3 ``` ] The graph in the web UI should go up again. (This is looking great! We're gonna be RICH!) .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Adding even more workers - Let's see if 10 workers give us 10x speed! 
.exercise[ - Scale the `worker` Deployment to a bigger number: ```bash kubectl scale deployment worker --replicas=10 ``` ] -- The graph will peak at 10 hashes/second. (We can add as many workers as we want: we will never go past 10 hashes/second.) .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- class: extra-details ## Didn't we briefly exceed 10 hashes/second? - It may *look like it*, because the web UI shows instant speed - The instant speed can briefly exceed 10 hashes/second - The average speed cannot - The instant speed can be biased because of how it's computed .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- class: extra-details ## Why instant speed is misleading - The instant speed is computed client-side by the web UI - The web UI checks the hash counter once per second
(and does a classic (h2-h1)/(t2-t1) speed computation) - The counter is updated once per second by the workers - These timings are not exact
(e.g. the web UI check interval is client-side JavaScript) - Sometimes, between two web UI counter measurements,
the workers are able to update the counter *twice* - During that cycle, the instant speed will appear to be much bigger
(but it will be compensated by lower instant speed before and after) .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Why are we stuck at 10 hashes per second? - If this was high-quality, production code, we would have instrumentation (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...) - It's not! - Perhaps we could benchmark our web services? (with tools like `ab`, or even simpler, `httping`) .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Benchmarking our web services - We want to check `hasher` and `rng` - We are going to use `httping` - It's just like `ping`, but using HTTP `GET` requests (it measures how long it takes to perform one `GET` request) - It's used like this: ``` httping [-c count] http://host:port/path ``` - Or even simpler: ``` httping ip.ad.dr.ess ``` - We will use `httping` on the ClusterIP addresses of our services .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Running a debug pod We don't have direct access to ClusterIP services, nor do we want to run a bunch of port-forwards. Instead we can run a Pod containing `httping` and then use `kubectl exec` to perform our debugging. .exercise[ - Run a debug pod ```bash kubectl run debug --image=paulczar/debug \ --restart=Never -- sleep 6000 ``` ] -- This runs a debug pod containing tools like `httping`; it will self-destruct after 6000 seconds. .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ### Executing a command in a running pod - You may occasionally need to run a command inside a pod. Rather than trying to run `SSH` inside a container, you can use the `kubectl exec` command. .exercise[ - Run curl inside your debug pod: ```bash kubectl exec debug -- curl -s https://google.com ``` ] -- ```html
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>
. ``` .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Service Discovery - Each of our services has a Cluster IP which we could get using `kubectl get services` - Or do it programmatically, like so: ```bash HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}}) RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}}) ``` - However Kubernetes has an in-cluster DNS server which means if you're inside the cluster you can simple use the service name as an endpoint. .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Checking `hasher` and `rng` response times .exercise[ - Check the response times for both services: ```bash kubectl exec debug -- httping -c 3 hasher kubectl exec debug -- httping -c 3 rng ``` ] -- - `hasher` is fine (it should take a few milliseconds to reply) - `rng` is not (it should take about 700 milliseconds if there are 10 workers) - Something is wrong with `rng`, but ... what? .debug[[pks/scalingdockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/scalingdockercoins.md)] --- ## Let's draw hasty conclusions - The bottleneck seems to be `rng` - *What if* we don't have enough entropy and can't generate enough random numbers? - We need to scale out the `rng` service on multiple machines! Note: this is a fiction! We have enough entropy. But we need a pretext to scale out. (In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).) .debug[[shared/hastyconclusions.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/hastyconclusions.md)] --- class: pic .interstitial[] --- name: toc-daemon-sets class: title Daemon sets .nav[ [Previous section](#toc-scaling-our-demo-app) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-labels-and-selectors) ] .debug[(automatically generated title slide)] --- # Daemon sets - We want to scale `rng` in a way that is different from how we scaled `worker` - We want one (and exactly one) instance of `rng` per node - We *do not want* two instances of `rng` on the same node - We will do that with a *daemon set* .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Why not a deployment? - Can't we just do `kubectl scale deployment rng --replicas=...`? -- - Nothing guarantees that the `rng` containers will be distributed evenly - If we add nodes later, they will not automatically run a copy of `rng` - If we remove (or reboot) a node, one `rng` container will restart elsewhere (and we will end up with two instances `rng` on the same node) - By contrast, a daemon set will start one pod per node and keep it that way (as nodes are added or removed) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Daemon sets in practice - Daemon sets are great for cluster-wide, per-node processes: - `kube-proxy` - `weave` (our overlay network) - monitoring agents - hardware management tools (e.g. SCSI/FC HBA agents) - etc. - They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Creating a daemon set - Unfortunately, as of Kubernetes 1.15, the CLI cannot create daemon sets -- - More precisely: it doesn't have a subcommand to create a daemon set -- - But any kind of resource can always be created by providing a YAML description: ```bash kubectl apply -f foo.yaml ``` -- - How do we create the YAML file for our daemon set? -- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml >rng.yml ``` - Edit `rng.yml` ] .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` - Save, quit - Try to create our new resource: ``` kubectl apply -f rng.yml ``` ] -- We all knew this couldn't be that easy, right! 
.debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `progressDeadlineSeconds` field (also used by the rollout mechanism) - remove the `status: {}` line at the end -- - Or, we could also ... .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- 🎩✨🐇 -- Wait ... Now, can it be *that* easy? .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods.
(The pod corresponding to the *deployment* still exists.) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2 <none>
9s ``` .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Too many pods - If we check with `kubectl get pods`, we see: - *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`) - *one pod per node* for the daemon set (named `rng-zzzzz`) ``` NAME READY STATUS RESTARTS AGE rng-54f57d4d49-7pt82 1/1 Running 0 11m rng-b85tm 1/1 Running 0 25s rng-hfbrr 1/1 Running 0 25s [...] ``` -- The daemon set created one pod per node, except on the master node. The master node has [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there. (To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).) .footnote[(Off by one? We don't run these pods on the node hosting the control plane.)] .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Is this working? - Look at the web UI -- - The graph should now go above 10 hashes per second! -- - It looks like the newly created pods are serving traffic correctly - How and why did this happen? (We didn't do anything special to add them to the `rng` service load balancer!) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-labels-and-selectors class: title Labels and selectors .nav[ [Previous section](#toc-daemon-sets) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Labels and selectors - The `rng` *service* is load balancing requests to a set of pods - That set of pods is defined by the *selector* of the `rng` service .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] - The selector is `app=rng` - It means "all the pods having the label `app=rng`" (They can have additional labels as well, that's OK!) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Selector evaluation - We can use selectors with many `kubectl` commands - For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more .exercise[ - Get the list of pods matching selector `app=rng`: ```bash kubectl get pods -l app=rng kubectl get pods --selector app=rng ``` ] But ... why do these pods (in particular, the *new* ones) have this `app=rng` label? .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Where do labels come from? - When we create a deployment with `kubectl create deployment rng`,
this deployment gets the label `app=rng` - The replica sets created by this deployment also get the label `app=rng` - The pods created by these replica sets also get the label `app=rng` - When we created the daemon set from the deployment, we re-used the same spec - Therefore, the pods created by the daemon set get the same labels .footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.] .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Updating load balancer configuration - We would like to remove a pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- It would be re-created immediately (by the replica set or the daemon set) -- - What would happen if we removed the `app=rng` label from that pod? -- It would *also* be re-created immediately -- Why?!? .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Selectors for replica sets and daemon sets - The "mission" of a replica set is: "Make sure that there is the right number of pods matching this spec!" - The "mission" of a daemon set is: "Make sure that there is a pod matching this spec on each node!" -- - *In fact,* replica sets and daemon sets do not check pod specifications - They merely have a *selector*, and they look for pods matching that selector - Yes, we can fool them by manually creating pods with the "right" labels - Bottom line: if we remove our `app=rng` label ... ... The pod "disappears" for its parent, which re-creates another pod to replace it .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- class: extra-details ## Isolation of replica sets and daemon sets - Since both the `rng` daemon set and the `rng` replica set use `app=rng` ... ... Why don't they "find" each other's pods? -- - *Replica sets* have a more specific selector, visible with `kubectl describe` (It looks like `app=rng,pod-template-hash=abcd1234`) - *Daemon sets* also have a more specific selector, but it's invisible (It looks like `app=rng,controller-revision-hash=abcd1234`) - As a result, each controller only "sees" the pods it manages .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer - Currently, the `rng` service is defined by the `app=rng` selector - The only way to remove a pod is to remove or change the `app` label - ... But that will cause another pod to be created instead! - What's the solution? -- - We need to change the selector of the `rng` service! - Let's add another label to that selector (e.g. `enabled=yes`) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Complex selectors - If a selector specifies multiple labels, they are understood as a logical *AND* (In other words: the pods must match all the labels) - Kubernetes has support for advanced, set-based selectors (But these cannot be used with services, at least not yet!) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## The plan 1. Add the label `enabled=yes` to all our `rng` pods 2. Update the selector for the `rng` service to also include `enabled=yes` 3. Toggle traffic to a pod by manually adding/removing the `enabled` label 4. Profit! 
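The upcoming slides walk through these steps with `kubectl label` and `kubectl edit`. As a reference, steps 1 and 2 could also be scripted with `kubectl patch` (a sketch; note that `"yes"` is quoted so it stays a string):

```bash
# Step 1: add the enabled=yes label to all rng pods
kubectl label pods -l app=rng enabled=yes
# Step 2: add enabled=yes to the service selector (strategic merge patch)
kubectl patch service rng -p '{"spec":{"selector":{"app":"rng","enabled":"yes"}}}'
```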
*Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.* .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Adding labels to pods - We want to add the label `enabled=yes` to all pods that have `app=rng` - We could edit each pod one by one with `kubectl edit` ... - ... Or we could use `kubectl label` to label them all - `kubectl label` can use selectors itself .exercise[ - Add `enabled=yes` to all pods that have `app=rng`: ```bash kubectl label pods -l app=rng enabled=yes ``` ] .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Updating the service selector - We need to edit the service specification - Reminder: in the service definition, we will see `app: rng` in two places - the label of the service itself (we don't need to touch that one) - the selector of the service (that's the one we want to change) .exercise[ - Update the service to add `enabled: yes` to its selector: ```bash kubectl edit service rng ``` ] -- ... And then we get *the weirdest error ever.* Why? .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## When the YAML parser is being too smart - YAML parsers try to help us: - `xyz` is the string `"xyz"` - `42` is the integer `42` - `yes` is the boolean value `true` - If we want the string `"42"` or the string `"yes"`, we have to quote them - So we have to use `enabled: "yes"` .footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!] .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Updating the service selector, take 2 .exercise[ - Update the service to add `enabled: "yes"` to its selector: ```bash kubectl edit service rng ``` ] This time it should work! If we did everything correctly, the web UI shouldn't show any change. .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Updating labels - We want to disable the pod that was created by the deployment - All we have to do, is remove the `enabled` label from that pod - To identify that pod, we can use its name - ... Or rely on the fact that it's the only one with a `pod-template-hash` label - Good to know: - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string) - to remove label `foo`, use `kubectl label ... foo-` - to change an existing label, we would need to add `--overwrite` .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer .exercise[ - In one window, check the logs of that pod: ```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD ``` (We should see a steady stream of HTTP logs) - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash enabled- ``` (The stream of HTTP logs should stop immediately) ] There might be a slight change in the web UI (since we removed a bit of capacity from the `rng` service). If we remove more pods, the effect should be more visible. 
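To put that pod back into rotation later, we would simply restore the label, e.g.:

```bash
# Re-add the enabled=yes label to the pod created by the deployment
kubectl label pod -l app=rng,pod-template-hash enabled=yes
```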
.debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- class: extra-details ## Updating the daemon set - If we scale up our cluster by adding new nodes, the daemon set will create more pods - These pods won't have the `enabled=yes` label - If we want these pods to have that label, we need to edit the daemon set spec - We can do that with e.g. `kubectl edit daemonset rng` .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- class: extra-details ## We've put resources in your resources - Reminder: a daemon set is a resource that creates more resources! - There is a difference between: - the label(s) of a resource (in the `metadata` block in the beginning) - the selector of a resource (in the `spec` block) - the label(s) of the resource(s) created by the first resource (in the `template` block) - We would need to update the selector and the template (metadata labels are not mandatory) - The template must match the selector (i.e. the resource will refuse to create resources that it will not select) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Labels and debugging - When a pod is misbehaving, we can delete it: another one will be recreated - But we can also change its labels - It will be removed from the load balancer (it won't receive traffic anymore) - Another pod will be recreated immediately - But the problematic pod is still here, and we can inspect and debug it - We can even re-add it to the rotation if necessary (Very useful to troubleshoot intermittent and elusive bugs) .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- ## Labels and advanced rollout control - Conversely, we can add pods matching a service's selector - These pods will then receive requests and serve traffic - Examples: - one-shot pod with all debug flags enabled, to collect logs - pods created automatically, but added to rotation in a second step
(by setting their label accordingly) - This gives us building blocks for canary and blue/green deployments .debug[[k8s/daemonset.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous section](#toc-labels-and-selectors) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-exposing-http-services-with-ingress-resources) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, ¯\\\_(ツ)\_/¯ .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Rolling updates - With rolling updates, when a resource is updated, it happens progressively - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` - They can be specified in absolute number of pods, or percentage of the `replicas` count - At any given time ... - there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated - We have the possibility of rolling back to the previous version
(if the update fails or is unsatisfactory in any way) .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Building a new version of the `worker` service .warning[ Only run these commands if you have built and pushed DockerCoins to a local registry.
If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this. ] .exercise[ - Go to the `stacks` directory (`~/container.training/stacks`) - Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second - Build a new tag and push it to the registry: ```bash #export REGISTRY=localhost:3xxxx export TAG=v0.2 docker-compose -f dockercoins.yml build docker-compose -f dockercoins.yml push ``` ] .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` ] -- That rollout should be pretty quick. What shows in the web UI? .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash export TAG=v0.3 kubectl set image deploy worker worker=$REGISTRY/worker:$TAG ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` ] -- Our rollout is stuck. However, the app is not dead. (After a minute, it will stabilize to be 20-25% slower.) .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## What's going on with our rollout? - Why is our app a bit slower? - Because `MaxUnavailable=25%` ... So the rollout terminated 2 replicas out of 10 available - Okay, but why do we see 5 new replicas being rolled out? - Because `MaxSurge=25%` ... So in addition to replacing 2 replicas, the rollout is also starting 3 more - It rounded down the number of MaxUnavailable pods conservatively,
but the total number of pods being rolled out is allowed to be 25+25=50% .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- class: extra-details ## The nitty-gritty details - We start with 10 pods running for the `worker` deployment - Current settings: MaxUnavailable=25% and MaxSurge=25% - When we start the rollout: - two replicas are taken down (as per MaxUnavailable=25%) - two others are created (with the new version) to replace them - three others are created (with the new version) per MaxSurge=25%) - Now we have 8 replicas up and running, and 5 being deployed - Our rollout is stuck at this point! .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Checking the dashboard during the bad rollout If you didn't deploy the Kubernetes dashboard earlier, just skip this slide. .exercise[ - Check which port the dashboard is on: ```bash kubectl -n kube-system get svc socat ``` ] Note the `3xxxx` port. .exercise[ - Connect to http://oneofournodes:3xxxx/ ] -- - We have failures in Deployments, Pods, and Replica Sets .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- ## Recovering from a bad rollout - We could push some `v0.3` image (the pod retry logic will eventually catch it and the rollout will proceed) - Or we could invoke a manual rollback .exercise[ - Cancel the deployment and wait for the dust to settle: ```bash kubectl rollout undo deploy worker kubectl rollout status deploy worker ``` ] .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- class: extra-details ## Changing rollout parameters - We want to: - revert to `v0.1` - be conservative on availability (always have desired number of available workers) - go slow on rollout speed (update only one pod at a time) - give some time to our workers to "warm up" before starting more The corresponding changes can be expressed in the following YAML snippet: .small[ ```yaml spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 ``` ] .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- class: extra-details ## Applying changes through a YAML patch - We could use `kubectl edit deployment worker` - But we could also use `kubectl patch` with the exact YAML shown before .exercise[ .small[ - Apply all our changes and wait for them to take effect: ```bash kubectl patch deployment worker -p " spec: template: spec: containers: - name: worker image: $REGISTRY/worker:v0.1 strategy: rollingUpdate: maxUnavailable: 0 maxSurge: 1 minReadySeconds: 10 " kubectl rollout status deployment worker kubectl get deploy -o json worker | jq "{name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] ] .debug[[k8s/rollout.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/rollout.md)] --- class: pic .interstitial[] --- name: toc-exposing-http-services-with-ingress-resources class: title Exposing HTTP services with Ingress resources .nav[ [Previous section](#toc-rolling-updates) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-lets-do-some-housekeeping) ] .debug[(automatically generated title slide)] --- # Exposing HTTP services with Ingress resources - *Services* give us a way to access a pod or a set of pods - Services can be exposed to 
the outside world: - with type `NodePort` (on a port >30000) - with type `LoadBalancer` (allocating an external load balancer) - What about HTTP services? - how can we expose `webui`, `rng`, `hasher`? - the Kubernetes dashboard? - a new version of `webui`? .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Exposing HTTP services - If we use `NodePort` services, clients have to specify port numbers (i.e. http://xxxxx:31234 instead of just http://xxxxx) - `LoadBalancer` services are nice, but: - they are not available in all environments - they often carry an additional cost (e.g. they provision an ELB) - they require one extra step for DNS integration
(waiting for the `LoadBalancer` to be provisioned; then adding it to DNS) - We could build our own reverse proxy .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Building a custom reverse proxy - There are many options available: Apache, HAProxy, Hipache, NGINX, Traefik, ... (look at [jpetazzo/aiguillage](https://github.com/jpetazzo/aiguillage) for a minimal reverse proxy configuration using NGINX) - Most of these options require us to update/edit configuration files after each change - Some of them can pick up virtual hosts and backends from a configuration store - Wouldn't it be nice if this configuration could be managed with the Kubernetes API? -- - Enter.red[¹] *Ingress* resources! .footnote[.red[¹] Pun maybe intended.] .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Ingress resources - Kubernetes API resource (`kubectl get ingress`/`ingresses`/`ing`) - Designed to expose HTTP services - Basic features: - load balancing - SSL termination - name-based virtual hosting - Can also route to different services depending on: - URI path (e.g. `/api`→`api-service`, `/static`→`assets-service`) - Client headers, including cookies (for A/B testing, canary deployment...) - and more! .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Principle of operation - Step 1: deploy an *ingress controller* - ingress controller = load balancer + control loop - the control loop watches over ingress resources, and configures the LB accordingly - Step 2: set up DNS - associate DNS entries with the load balancer address - Step 3: create *ingress resources* - the ingress controller picks up these resources and configures the LB - Step 4: profit! .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Ingress in action - We already have an nginx-ingress controller deployed - For DNS, we have a wildcard set up pointing at our ingress LB - `*.ingress.workshop.paulczar.wtf` - We will create ingress resources for various HTTP services .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Checking that nginx-ingress runs correctly - If the nginx-ingress controller started correctly, we now have a web server answering on our wildcard domain .exercise[ - Check that nginx is serving 80/tcp: ```bash curl test.ingress.workshop.paulczar.wtf ``` ] We should get a `404 page not found` error. This is normal: we haven't provided any ingress rule yet. 
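If you want a bit more detail, `curl -i` also shows the status line and the response headers returned by the controller (the exact headers and the 404 body vary between nginx-ingress versions, so treat this output as a sketch):

```bash
# Show the HTTP status line and response headers from the ingress controller
curl -i test.ingress.workshop.paulczar.wtf

# Expect something like:
#   HTTP/1.1 404 Not Found
#   (plus a Server header identifying nginx, depending on the controller version)
#   404 page not found
```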
.debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Expose that webui - Before we can enable the ingress, we need to create a service for the webui .exercise[ - create a service for the webui deployment ```bash kubectl expose deployment webui --port 80 ``` ] .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Setting up host-based routing ingress rules - We are going to create an ingress rule for our webui .exercise[ - Write this to `~/workshop/ingress.yaml` and change the host prefix ] ```yaml apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: webui spec: rules: - host: user1.ingress.workshop.paulczar.wtf http: paths: - path: / backend: serviceName: webui servicePort: 80 ``` .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Creating our ingress resources .exercise[ - Apply the ingress manifest ```bash kubectl apply -f ~/workshop/ingress.yaml ``` ] -- ```bash $ curl user1.ingress.workshop.paulczar.wtf Found. Redirecting to /index.html ``` .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Using multiple ingress controllers - You can have multiple ingress controllers active simultaneously (e.g. Traefik and NGINX) - You can even have multiple instances of the same controller (e.g. one for internal, another for external traffic) - The `kubernetes.io/ingress.class` annotation can be used to tell which one to use - It's OK if multiple ingress controllers configure the same resource (it just means that the service will be accessible through multiple paths) .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Ingress: the good - The traffic flows directly from the ingress load balancer to the backends - it doesn't need to go through the `ClusterIP` - in fact, we don't even need a `ClusterIP` (we can use a headless service) - The load balancer can be outside of Kubernetes (as long as it has access to the cluster subnet) - This allows the use of external (hardware, physical machines...) load balancers - Annotations can encode special features (rate-limiting, A/B testing, session stickiness, etc.) 
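For example, with the NGINX ingress controller, such features are switched on with annotations on the Ingress resource. The annotation names below come from ingress-nginx and may differ across controller versions, so treat this as a sketch rather than a reference:

```yaml
metadata:
  name: webui
  annotations:
    # cookie-based session stickiness (ingress-nginx specific)
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # rough rate limiting: max requests per second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "10"
```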
.debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- ## Ingress: the bad - Aforementioned "special features" are not standardized yet - Some controllers will support them; some won't - Even relatively common features (stripping a path prefix) can differ: - [traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip](https://docs.traefik.io/user-guide/kubernetes/#path-based-routing) - [ingress.kubernetes.io/rewrite-target: /](https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite) - This should eventually stabilize (remember that ingresses are currently `apiVersion: networking.k8s.io/v1beta1`) .debug[[pks/ingress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/ingress.md)] --- class: pic .interstitial[] --- name: toc-lets-do-some-housekeeping class: title Let's do some housekeeping .nav[ [Previous section](#toc-exposing-http-services-with-ingress-resources) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-volumes) ] .debug[(automatically generated title slide)] --- # Let's do some housekeeping - We've created a lot of resources; let's clean them up. .exercise[ - Delete resources: ```bash kubectl delete deployment,svc hasher redis rng webui kubectl delete deployment worker kubectl delete ingress webui kubectl delete daemonset rng ``` ] .debug[[pks/cleanup-dockercoins.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/cleanup-dockercoins.md)] --- class: pic .interstitial[] --- name: toc-volumes class: title Volumes .nav[ [Previous section](#toc-lets-do-some-housekeeping) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-managing-configuration) ] .debug[(automatically generated title slide)] --- # Volumes - Volumes are special directories that are mounted in containers - Volumes can have many different purposes: - share files and directories between containers running on the same machine - share files and directories between containers and their host - centralize configuration information in Kubernetes and expose it to containers - manage credentials and secrets and expose them securely to containers - store persistent data for stateful services - access storage systems (like Ceph, EBS, NFS, Portworx, and many others) .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- class: extra-details ## Kubernetes volumes vs. Docker volumes - Kubernetes and Docker volumes are very similar (the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) says otherwise
but it refers to Docker 1.7, which was released in 2015!) - Docker volumes allow us to share data between containers running on the same host - Kubernetes volumes allow us to share data between containers in the same pod - Both Docker and Kubernetes volumes enable access to storage systems - Kubernetes volumes are also used to expose configuration and secrets - Docker has specific concepts for configuration and secrets
(but under the hood, the technical implementation is similar) - If you're not familiar with Docker volumes, you can safely ignore this slide! .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## Volumes ≠ Persistent Volumes - Volumes and Persistent Volumes are related, but very different! - *Volumes*: - appear in Pod specifications (see next slide) - do not exist as API resources (**cannot** do `kubectl get volumes`) - *Persistent Volumes*: - are API resources (**can** do `kubectl get persistentvolumes`) - correspond to concrete volumes (e.g. on a SAN, EBS, etc.) - cannot be associated with a Pod directly; but through a Persistent Volume Claim - won't be discussed further in this section .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## A simple volume example ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ ``` .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## A simple volume example, explained - We define a standalone `Pod` named `nginx-with-volume` - In that pod, there is a volume named `www` - No type is specified, so it will default to `emptyDir` (as the name implies, it will be initialized as an empty directory at pod creation) - In that pod, there is also a container named `nginx` - That container mounts the volume `www` to path `/usr/share/nginx/html/` .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## A volume shared between two containers .small[ ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ - name: git image: alpine command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ] volumeMounts: - name: www mountPath: /www/ restartPolicy: OnFailure ``` ] .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## Sharing a volume, explained - We added another container to the pod - That container mounts the `www` volume on a different path (`/www`) - It uses the `alpine` image - When started, it installs `git` and clones the `octocat/Spoon-Knife` repository (that repository contains a tiny HTML website) - As a result, NGINX now serves this website .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## Sharing a volume, in action - Let's try it! .exercise[ - Create the pod by applying the YAML file: ```bash kubectl apply -f ~/container.training/k8s/nginx-with-volume.yaml ``` - Check the IP address that was allocated to our pod: ```bash kubectl get pod nginx-with-volume -o wide IP=$(kubectl get pod nginx-with-volume -o json | jq -r .status.podIP) ``` - Access the web server: ```bash curl $IP ``` ] .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## The devil is in the details - The default `restartPolicy` is `Always` - This would cause our `git` container to run again ... and again ... 
and again (with an exponential back-off delay, as explained [in the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)) - That's why we specified `restartPolicy: OnFailure` - There is a short period of time during which the website is not available (because the `git` container hasn't done its job yet) - This could be avoided by using [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) (we will see a live example in a few sections) .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- ## Volume lifecycle - The lifecycle of a volume is linked to the pod's lifecycle - This means that a volume is created when the pod is created - This is mostly relevant for `emptyDir` volumes (other volumes, like remote storage, are not "created" but rather "attached" ) - A volume survives across container restarts - A volume is destroyed (or, for remote storage, detached) when the pod is destroyed .debug[[k8s/volumes.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/volumes.md)] --- class: pic .interstitial[] --- name: toc-managing-configuration class: title Managing configuration .nav[ [Previous section](#toc-volumes) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # Managing configuration - Some applications need to be configured (obviously!) - There are many ways for our code to pick up configuration: - command-line arguments - environment variables - configuration files - configuration servers (getting configuration from a database, an API...) - ... and more (because programmers can be very creative!) - How can we do these things with containers and Kubernetes? .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Passing configuration to containers - There are many ways to pass configuration to code running in a container: - baking it into a custom image - command-line arguments - environment variables - injecting configuration files - exposing it over the Kubernetes API - configuration servers - Let's review these different strategies! .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Baking custom images - Put the configuration in the image (it can be in a configuration file, but also `ENV` or `CMD` actions) - It's easy! It's simple! - Unfortunately, it also has downsides: - multiplication of images - different images for dev, staging, prod ... - minor reconfigurations require a whole build/push/pull cycle - Avoid doing it unless you don't have the time to figure out other options .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Command-line arguments - Pass options to `args` array in the container specification - Example ([source](https://github.com/coreos/pods/blob/master/kubernetes.yaml#L29)): ```yaml args: - "--data-dir=/var/lib/etcd" - "--advertise-client-urls=http://127.0.0.1:2379" - "--listen-client-urls=http://127.0.0.1:2379" - "--listen-peer-urls=http://127.0.0.1:2380" - "--name=etcd" ``` - The options can be passed directly to the program that we run ... ... or to a wrapper script that will use them to e.g. 
generate a config file .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Command-line arguments, pros & cons - Works great when options are passed directly to the running program (otherwise, a wrapper script can work around the issue) - Works great when there aren't too many parameters (to avoid a 20-lines `args` array) - Requires documentation and/or understanding of the underlying program ("which parameters and flags do I need, again?") - Well-suited for mandatory parameters (without default values) - Not ideal when we need to pass a real configuration file anyway .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Environment variables - Pass options through the `env` map in the container specification - Example: ```yaml env: - name: ADMIN_PORT value: "8080" - name: ADMIN_AUTH value: Basic - name: ADMIN_CRED value: "admin:0pensesame!" ``` .warning[`value` must be a string! Make sure that numbers and fancy strings are quoted.] 🤔 Why this weird `{name: xxx, value: yyy}` scheme? It will be revealed soon! .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## The downward API - In the previous example, environment variables have fixed values - We can also use a mechanism called the *downward API* - The downward API allows exposing pod or container information - either through special files (we won't show that for now) - or through environment variables - The value of these environment variables is computed when the container is started - Remember: environment variables won't (can't) change after container start - Let's see a few concrete examples! 
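As a preview, here is a minimal complete Pod manifest using the downward API; the next slides break down the individual fields. The pod name, image, and variable names are just placeholders for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: alpine
    # print the environment (so we can see the injected values), then exit
    command: [ "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
```

Once the pod has run, `kubectl logs downward-demo` should show the injected values.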
.debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Exposing the pod's namespace ```yaml - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ``` - Useful to generate FQDN of services (in some contexts, a short name is not enough) - For instance, the two commands should be equivalent: ``` curl api-backend curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local ``` .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Exposing the pod's IP address ```yaml - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP ``` - Useful if we need to know our IP address (we could also read it from `eth0`, but this is more solid) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Exposing the container's resource limits ```yaml - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory ``` - Useful for runtimes where memory is garbage collected - Example: the JVM (the memory available to the JVM should be set with the `-Xmx ` flag) - Best practice: set a memory limit, and pass it to the runtime (see [this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/) for a detailed example) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## More about the downward API - [This documentation page](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) tells more about these environment variables - And [this one](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) explains the other way to use the downward API (through files that get created in the container filesystem) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Environment variables, pros and cons - Works great when the running program expects these variables - Works great for optional parameters with reasonable defaults (since the container image can provide these defaults) - Sort of auto-documented (we can see which environment variables are defined in the image, and their values) - Can be (ab)used with longer values ... - ... You *can* put an entire Tomcat configuration file in an environment ... - ... But *should* you? (Do it if you really need to, we're not judging! But we'll see better ways.) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Injecting configuration files - Sometimes, there is no way around it: we need to inject a full config file - Kubernetes provides a mechanism for that purpose: `configmaps` - A configmap is a Kubernetes resource that exists in a namespace - Conceptually, it's a key/value map (values are arbitrary strings) - We can think about them in (at least) two different ways: - as holding entire configuration file(s) - as holding individual configuration parameters *Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. 
We'll cover them just after!* .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Configmaps storing entire files - In this case, each key/value pair corresponds to a configuration file - Key = name of the file - Value = content of the file - There can be one key/value pair, or as many as necessary (for complex apps with multiple configuration files) - Examples: ``` # Create a configmap with a single key, "app.conf" kubectl create configmap my-app-config --from-file=app.conf # Create a configmap with a single key, "app.conf" but another file kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf # Create a configmap with multiple keys (one per file in the config.d directory) kubectl create configmap my-app-config --from-file=config.d/ ``` .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Configmaps storing individual parameters - In this case, each key/value pair corresponds to a parameter - Key = name of the parameter - Value = value of the parameter - Examples: ``` # Create a configmap with two keys kubectl create cm my-app-config \ --from-literal=foreground=red \ --from-literal=background=blue # Create a configmap from a file containing key=val pairs kubectl create cm my-app-config \ --from-env-file=app.conf ``` .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Exposing configmaps to containers - Configmaps can be exposed as plain files in the filesystem of a container - this is achieved by declaring a volume and mounting it in the container - this is particularly effective for configmaps containing whole files - Configmaps can be exposed as environment variables in the container - this is achieved with the downward API - this is particularly effective for configmaps containing individual parameters - Let's see how to do both! 
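As an aside, configmaps (like any other resource) can also be written declaratively. Here is a minimal sketch equivalent to the `--from-literal` example above (the key names are only illustrations):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  foreground: red
  background: blue
```

Applying this with `kubectl apply -f` gives the same result as the imperative `kubectl create cm` command.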
.debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Passing a configuration file with a configmap - We will start a load balancer powered by HAProxy - We will use the [official `haproxy` image](https://hub.docker.com/_/haproxy/) - It expects to find its configuration in `/usr/local/etc/haproxy/haproxy.cfg` - We will provide a simple HAproxy configuration, `k8s/haproxy.cfg` - It listens on port 80, and load balances connections between IBM and Google .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Go to the `k8s` directory in the repository: ```bash cd ~/container.training/k8s ``` - Create a configmap named `haproxy` and holding the configuration file: ```bash kubectl create configmap haproxy --from-file=haproxy.cfg ``` - Check what our configmap looks like: ```bash kubectl get configmap haproxy -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: haproxy spec: volumes: - name: config configMap: name: haproxy containers: - name: haproxy image: haproxy volumeMounts: - name: config mountPath: /usr/local/etc/haproxy/ ``` .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/haproxy.yaml` .exercise[ - Create the HAProxy pod: ```bash kubectl apply -f ~/container.training/k8s/haproxy.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod haproxy -o wide IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP) ``` ] .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Testing our load balancer - The load balancer will send: - half of the connections to Google - the other half to IBM .exercise[ - Access the load balancer a few times: ```bash curl $IP curl $IP curl $IP ``` ] We should see connections served by Google, and others served by IBM.
(Each server sends us a redirect page. Look at the URL that they send us to!) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Exposing configmaps with the downward API - We are going to run a Docker registry on a custom port - By default, the registry listens on port 5000 - This can be changed by setting environment variable `REGISTRY_HTTP_ADDR` - We are going to store the port number in a configmap - Then we will expose that configmap as a container environment variable .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Creating the configmap .exercise[ - Our configmap will have a single key, `http.addr`: ```bash kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80 ``` - Check our configmap: ```bash kubectl get configmap registry -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: registry spec: containers: - name: registry image: registry env: - name: REGISTRY_HTTP_ADDR valueFrom: configMapKeyRef: name: registry key: http.addr ``` .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in `k8s/registry.yaml` .exercise[ - Create the registry pod: ```bash kubectl apply -f ~/container.training/k8s/registry.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod registry -o wide IP=$(kubectl get pod registry -o json | jq -r .status.podIP) ``` - Confirm that the registry is available on port 80: ```bash curl $IP/v2/_catalog ``` ] .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Passwords, tokens, sensitive information - For sensitive information, there is another special resource: *Secrets* - Secrets and Configmaps work almost the same way (we'll expose the differences on the next slide) - The *intent* is different, though: *"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."* *"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."* (Source: [the author of both features](https://stackoverflow.com/a/36925553/580281 )) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- ## Differences between configmaps and secrets - Secrets are base64-encoded when shown with `kubectl get secrets -o yaml` - keep in mind that this is just *encoding*, not *encryption* - it is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3) - [Secrets can be encrypted at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) - With RBAC, we can authorize a user to access configmaps, but not secrets (since they are two different kinds of resources) .debug[[k8s/configuration.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/configuration.md)] --- class: pic .interstitial[] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm 
.nav[ [Previous section](#toc-managing-configuration) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-next-steps) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - We created our first resources with `kubectl run`, `kubectl expose` ... - We have also created resources by loading YAML files with `kubectl apply -f` - For larger stacks, managing thousands of lines of YAML is unreasonable - These YAML bundles need to be customized with variable parameters (E.g.: number of replicas, image version to use ...) - It would be nice to have an organized, versioned collection of bundles - It would be nice to be able to upgrade/rollback these bundles carefully - [Helm](https://helm.sh/) is an open source project offering all these things! .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Helm concepts - `helm` is a CLI tool - `tiller` is its companion server-side component - A "chart" is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories -- *We're going to use the beta of Helm 3, as it does not require `tiller`, making things simpler and more secure for us.* .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Installing Helm - If the `helm` 3 CLI is not installed in your environment, [install it](https://github.com/helm/helm/releases/tag/v3.0.0-beta.1) .exercise[ - Check if `helm` is installed: ```bash helm version ``` ] -- ```bash version.BuildInfo{Version:"v3.0.0-beta.1", GitCommit:"f76b5f21adb53a85de8925f4a9d4f9bd99f185b5", GitTreeState:"clean", GoVersion:"go1.12.9"} ``` .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Oops, you accidentally a Helm 2 If `helm version` gives you a result like the one below, it means you have Helm 2, which requires the `tiller` server-side component. ``` Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"} Error: forwarding ports: error upgrading connection: pods "tiller-deploy-6fd87785-x8sxk" is forbidden: User "user1" cannot create resource "pods/portforward" in API group "" in the namespace "kube-system" ``` Run `export TILLER_NAMESPACE=
` and try again. We've pre-installed `tiller` for you in your namespace just in case. -- Some of the commands in the following may not work in helm 2. Good luck! .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Installing Tiller *If you were running Helm 2 you would need to install Tiller. We can skip this.* - Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace - They can be managed (installed, upgraded...) with the `helm` CLI .exercise[ - Deploy Tiller: ```bash helm init ``` ] If Tiller was already installed, don't worry: this won't break it. At the end of the install process, you will see: ``` Happy Helming! ``` .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Fix account permissions *If you were running Helm 2 you would need to install Tiller. We can skip this.* - Helm permission model requires us to tweak permissions - In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings .exercise[ - Grant `cluster-admin` role to `kube-system:default` service account: ```bash kubectl create clusterrolebinding add-on-cluster-admin \ --clusterrole=cluster-admin --serviceaccount=kube-system:default ``` ] (Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.) .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## View available charts - A public repo is pre-configured when installing Helm - We can view available charts with `helm search` (and an optional keyword) .exercise[ - View all available charts: ```bash helm search hub ``` - View charts related to `prometheus`: ```bash helm search hub prometheus ``` ] .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Add the stable chart repository - Helm 3 does not come configured with any repositories, so we need to start by adding the stable repo. .exercise[ - Add the stable repo ```bash helm repo add stable https://kubernetes-charts.storage.googleapis.com/ helm repo update ``` ] .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Install a chart - Most charts use `LoadBalancer` service types by default - Most charts require persistent volumes to store data - We can relax these requirements a bit .exercise[ - Install on our cluster: ```bash helm install wp stable/wordpress \ --set service.type=ClusterIP \ --set persistence.enabled=false \ --set mariadb.master.persistence.enabled=false ``` ] Where do these `--set` options come from? .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Inspecting a chart - `helm inspect` shows details about a chart (including available options) .exercise[ - See the metadata and all available options for `stable/wordpress`: ```bash helm inspect stable/wordpress ``` ] The chart's metadata includes a URL to the project's home page. (Sometimes it conveniently points to the documentation for the chart.) 
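If we only want the chart's default values (the ones that `--set` and `-f` override), there is a dedicated subcommand: `helm show values` in Helm 3 (`helm inspect values` in Helm 2). A common pattern, sketched below, is to dump them to a file that we can then edit and pass back:

```bash
# Save the chart's default values so we can edit them and reuse them with -f
helm show values stable/wordpress > wordpress-defaults.yaml
```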
.debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Viewing installed charts - Helm keeps track of what we've installed .exercise[ - List installed Helm charts: ```bash helm list ``` ] .debug[[pks/helm.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm.md)] --- ## Why wordpress its 2019?!?! I know ... funny right :) .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- ## Helm install notes - You'll notice a helpful message after running `helm install` that looks something like this: ``` NOTES: 1. Get the WordPress URL: echo "WordPress URL: http://127.0.0.1:8080/" echo "WordPress Admin URL: http://127.0.0.1:8080/admin" kubectl port-forward --namespace user1 svc/wp-wordpress 8080:80 2. Login with the following credentials to see your blog echo Username: user echo Password: $(kubectl get secret --namespace user1 wp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode) ``` -- Helm charts generally have a `NOTES.txt` template that is rendered out and displayed after helm commands are run. Pretty neat. .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- ## What did helm install ? - Run `kubectl get all` to check what resources helm installed .exercise[ - Run `kubectl get all`: ```bash kubectl get all ``` ] .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- ## What did helm install ? ``` NAME READY STATUS RESTARTS AGE pod/wp-mariadb-0 1/1 Running 0 11m pod/wp-wordpress-6cb9cfc94-chbr6 1/1 Running 0 11m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/wp-mariadb ClusterIP 10.100.200.87
3306/TCP 11m service/wp-wordpress ClusterIP 10.100.200.131
80/TCP,443/TCP 11m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/wp-wordpress 1/1 1 1 11m NAME DESIRED CURRENT READY AGE replicaset.apps/tiller-deploy-6487f7bfd8 1 1 1 2d6h replicaset.apps/tiller-deploy-75ccf68856 0 0 0 2d6h replicaset.apps/wp-wordpress-6cb9cfc94 1 1 1 11m NAME READY AGE statefulset.apps/wp-mariadb 1/1 11m ``` .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- ## Check if WordPress is working - Using the notes printed by Helm, check that you can access your WordPress site and log in as `user` .exercise[ - Run the commands provided in the Helm notes: ```bash echo Username: user echo Password: $(kubectl get secret --namespace user1 wp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode) kubectl port-forward --namespace user1 svc/wp-wordpress 8080:80 ``` ] -- Yay? You have a 2003-era blog .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- ## Helm Chart Values Setting values on the command line is okay for a demonstration, but we should really be creating a `~/workshop/values.yaml` file for our chart. Let's do that now. > The values file is a bit long to copy/paste from here, so let's `wget` it. .exercise[ - Download the values.yaml file and edit it, changing the URL prefix to be `
-wp`: ```bash wget -O ~/workshop/values.yaml \ https://raw.githubusercontent.com/paulczar/container.training/pks/slides/pks/wp/values.yaml vim ~/workshop/values.yaml helm upgrade wp stable/wordpress -f ~/workshop/values.yaml ``` ] --- .debug[[pks/helm-wordpress.md](https://github.com/paulczar/container.training.git/tree/pks/slides/pks/helm-wordpress.md)] --- class: pic .interstitial[] --- name: toc-next-steps class: title Next steps .nav[ [Previous section](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Next steps *Alright, how do I get started and containerize my apps?* -- Suggested containerization checklist: .checklist[ - write a Dockerfile for one service in one app - write Dockerfiles for the other (buildable) services - write a Compose file for that whole app - make sure that devs are empowered to run the app in containers - set up automated builds of container images from the code repo - set up a CI pipeline using these container images - set up a CD pipeline (for staging/QA) using these images ] And *then* it is time to look at orchestration! .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Options for our first production cluster - Get a managed cluster from a major cloud provider (AKS, EKS, GKE...) (price: $, difficulty: medium) - Hire someone to deploy it for us (price: $$, difficulty: easy) - Do it ourselves (price: $-$$$, difficulty: hard) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## One big cluster vs. multiple small ones - Yes, it is possible to have prod+dev in a single cluster (and implement good isolation and security with RBAC, network policies...) - But it is not a good idea to do that for our first deployment - Start with a production cluster + at least a test cluster - Implement and check RBAC and isolation on the test cluster (e.g. deploy multiple test versions side-by-side) - Make sure that all our devs have usable dev clusters (whether it's a local minikube or a full-blown multi-node cluster) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Namespaces - Namespaces let you run multiple identical stacks side by side - Two namespaces (e.g. `blue` and `green`) can each have their own `redis` service - Each of the two `redis` services has its own `ClusterIP` - CoreDNS creates two entries, mapping to these two `ClusterIP` addresses: `redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local` - Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local` - As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis` .warning[This does not provide *isolation*! That would be the job of network policies.] .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Relevant sections - [Namespaces](kube-selfpaced.yml.html#toc-namespaces) - [Network Policies](kube-selfpaced.yml.html#toc-network-policies) - [Role-Based Access Control](kube-selfpaced.yml.html#toc-authentication-and-authorization) (covers permissions model, user and service accounts management ...) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Stateful services (databases etc.) 
- As a first step, it is wiser to keep stateful services *outside* of the cluster - Exposing them to pods can be done with multiple solutions: - `ExternalName` services
(`redis.blue.svc.cluster.local` will be a `CNAME` record) - `ClusterIP` services with explicit `Endpoints`
(instead of letting Kubernetes generate the endpoints from a selector) - Ambassador services
(application-level proxies that can provide credentials injection and more) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Stateful services (second take) - If we want to host stateful services on Kubernetes, we can use: - a storage provider - persistent volumes, persistent volume claims - stateful sets - Good questions to ask: - what's the *operational cost* of running this service ourselves? - what do we gain by deploying this stateful service on Kubernetes? - Relevant sections: [Volumes](kube-selfpaced.yml.html#toc-volumes) | [Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets) | [Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes) - Excellent [blog post](http://www.databasesoup.com/2018/07/should-i-run-postgres-on-kubernetes.html) tackling the question: “Should I run Postgres on Kubernetes?” .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## HTTP traffic handling - *Services* are layer 4 constructs - HTTP is a layer 7 protocol - It is handled by *ingresses* (a different resource kind) - *Ingresses* allow: - virtual host routing - session stickiness - URI mapping - and much more! - [This section](kube-selfpaced.yml.html#toc-exposing-http-services-with-ingress-resources) shows how to expose multiple HTTP apps using [Træfik](https://docs.traefik.io/user-guide/kubernetes/) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Logging - Logging is delegated to the container engine - Logs are exposed through the API - Logs are also accessible through local files (`/var/log/containers`) - Log shipping to a central platform is usually done through these files (e.g. 
with an agent bind-mounting the log directory) - [This section](kube-selfpaced.yml.html#toc-centralized-logging) shows how to do that with [Fluentd](https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd) and the EFK stack .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Metrics - The kubelet embeds [cAdvisor](https://github.com/google/cadvisor), which exposes container metrics (cAdvisor might be separated in the future for more flexibility) - It is a good idea to start with [Prometheus](https://prometheus.io/) (even if you end up using something else) - Starting from Kubernetes 1.8, we can use the [Metrics API](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/) - [Heapster](https://github.com/kubernetes/heapster) was a popular add-on (but is being [deprecated](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md) starting with Kubernetes 1.11) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Managing the configuration of our applications - Two constructs are particularly useful: secrets and config maps - They let us expose arbitrary information to our containers - **Avoid** storing configuration in container images (There are some exceptions to that rule, but it's generally a Bad Idea) - **Never** store sensitive information in container images (It's the container equivalent of the password on a post-it note on your screen) - [This section](kube-selfpaced.yml.html#toc-managing-configuration) shows how to manage app config with config maps (among others) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Managing stack deployments - The best deployment tool will vary, depending on: - the size and complexity of your stack(s) - how often you change it (e.g. add/remove components) - the size and skills of your team - A few examples: - shell scripts invoking `kubectl` - YAML resource descriptions committed to a repo - [Helm](https://github.com/kubernetes/helm) (~package manager) - [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform) - [Brigade](https://brigade.sh/) (event-driven scripting; no YAML) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Cluster federation --  -- Sorry Star Trek fans, this is not the federation you're looking for! -- (If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!) .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Cluster federation - Kubernetes master operation relies on etcd - etcd uses the [Raft](https://raft.github.io/) protocol - Raft recommends low latency between nodes - What if our cluster spreads to multiple regions? -- - Break it down into local clusters - Regroup them in a *cluster federation* - Synchronize resources across clusters - Discover resources across clusters .debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- ## Developer experience *We've put this last, but it's pretty important!* - How do you on-board a new developer? - What do they need to install to get a dev stack? - How does a code change make it from dev to prod? - How does someone add a component to a stack? 
.debug[[k8s/whatsnext.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/whatsnext.md)] --- class: pic .interstitial[] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous section](#toc-next-steps) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources All things Kubernetes: - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes) - [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b) All things Docker: - [Docker documentation](http://docs.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Play With Docker Hands-On Labs](http://training.play-with-docker.com/) Everything else: - [Local meetups](https://www.meetup.com/) .footnote[These slides (and future updates) are on → http://container.training/] .debug[[k8s/links.md](https://github.com/paulczar/container.training.git/tree/pks/slides/k8s/links.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Questions?  .debug[[shared/thankyou.md](https://github.com/paulczar/container.training.git/tree/pks/slides/shared/thankyou.md)]