

Showing posts with the label Kubernetes

Viewing and tailing multiple Kubernetes container logs concurrently

Why
Often I need to look at multiple pod logs at the same time. For example, the nginx ingress controller deployment or daemonset usually has at least a handful of pods running to share the load and provide additional redundancy. To troubleshoot problems, I need to see them all.

Options
The trusted kubectl command (I am a "kube cuttle" guy) has options to view or tail multiple containers based on a selector, like this:

$ kubectl logs -n nginx-ingress -l 'app.kubernetes.io/name=fluent-bit' -f --max-log-requests 60 --tail=1 --prefix=true

However, if the pods in question come and go frequently, I recommend stern instead: https://github.com/wercker/stern
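The excerpt does not show a stern invocation, so purely as a hedged sketch (flag names taken from the stern CLI help; the selector mirrors the kubectl example above), the equivalent stern command might look like this:

$ stern -n nginx-ingress -l app.kubernetes.io/name=fluent-bit --tail 1 -t
# -l follows every pod matching the selector, including pods created after startup
# --tail 1 starts with the last line of each container, -t prefixes timestamps

The advantage over kubectl logs is that stern keeps watching the selector, so pods that come and go are picked up automatically.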

Deprecating Networking Ingress API version in Kubernetes 1.22

Intro
Kubernetes deprecates API versions over time. Usually this affects alpha and beta versions and only requires changing the apiVersion: line in your resource file to make it work. However, with this Ingress object version change, additional changes are necessary.

Basics
For this post I am quickly creating a new cluster via kind (Kubernetes in Docker). Once done, we can see which API versions are supported by this cluster (version v1.21.1):

$ kubectl api-versions | grep networking
networking.k8s.io/v1
networking.k8s.io/v1beta1

Kubernetes automatically converts existing resources internally into the other supported API versions. So if we create a new Ingress object with version v1beta1 on a recent cluster version, you will receive a deprecation warning - and the same Ingress object will exist both in version v1beta1 and v1.

Create
$ cat ingress_beta.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: clusterpirate-ingress
spec:
  rules:
  - http:
      path
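The excerpt cuts off before the manifest is complete, but as a hedged sketch of what the same object looks like after moving to networking.k8s.io/v1 (the backend service name and port are placeholders, since they are not shown above; pathType is the field that v1 newly requires):

$ cat ingress_v1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: clusterpirate-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix              # new mandatory field in v1
        backend:
          service:                    # v1 nests the backend under 'service'
            name: clusterpirate-svc   # placeholder service name
            port:
              number: 80              # placeholder port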

Create a Kubernetes cluster using kind (Kubernetes in Docker) in less than 2 minutes

Why
Sometimes I just need to quickly test a K8s resource or compare a cluster with a near vanilla version. This is where kind comes in handy, as it can create a clean and fresh Kubernetes cluster in under 2 minutes.

Requirements
You have a working docker environment.

Step 1
Download the kind binary (less than 4 MB).

curl -Lso ./kind https://kind.sigs.k8s.io/dl/v0.11.0/kind-linux-amd64 && chmod 755 kind

Step 2
Create the actual cluster.

$ time ./kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community

real    1m55.934s
user    0m1.014s
sys     0m0.970s

Step 3
That's it really - just use kubectl (ideally
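Beyond the excerpt, and only as a hedged follow-on sketch (subcommands and flags per the kind CLI help for v0.11.0), kind can also run several named clusters side by side and clean them up again:

$ ./kind create cluster --name scratch    # a second, independent cluster
$ ./kind get clusters                     # lists: kind, scratch
$ ./kind delete cluster --name scratch    # removes it again in a few seconds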

Building Kubernetes Clusters using Kubespray

Disclaimer
I have published this post on my work blog https://reece.tech previously.

Preface
We are hosting our workloads in Docker containers within various Kubernetes clusters. To ensure consistency and repeatability across environments, we are using idempotent configuration management tools like Ansible. Kubespray is an Ansible playbook used to manage Kubernetes clusters, covering the initial build as well as the lifecycle of the cluster (adding or removing nodes, version upgrades etc.).

Requirements
We automatically provision new VMs in our vSphere environment using Ansible. Once the new node is up and running, Kubespray runs across it to install the required Kubernetes services. In this example we are using a root user ssh key for passwordless authentication.

Ansible 2.5
Kubespray git repository

Steps
Getting Kubespray
The following commands will download Kubespray, select the latest release version and install the necessary Python modules.

git clone https://github.com/kubernetes-sigs/kubespray
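The excerpt stops after the clone; as a hedged sketch of the usual follow-up steps from the Kubespray README (the release branch, inventory name and inventory file layout vary between Kubespray versions and are placeholders here):

cd kubespray
git checkout release-2.16                      # placeholder: pick the latest release branch
pip install -r requirements.txt                # installs Ansible and the other Python modules
cp -r inventory/sample inventory/mycluster     # 'mycluster' is a placeholder inventory name
# fill in the node IPs/hostnames in the copied inventory, then run the playbook as root:
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml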

Migrating Kubernetes from Docker to Containerd

Disclaimer
I have published this post on my work blog https://reece.tech previously.

Overview
I have operated multiple on-premise and cloud hosted K8s clusters for many years, and we heavily utilise docker as our container runtime for master and worker nodes. As most readers will be aware by now, the Kubernetes 1.20 release also announced the deprecation and future removal of the much loved docker interface. This post documents our journey from docker to a suitable replacement option.

Options
The two most obvious alternatives are cri-o and containerd. As containerd is the default for many cloud based K8s environments, and containerd was already used behind the scenes by our K8s docker layer anyway, the choice was quite easy.

Changes required
The main change (for K8s 1.19.5) was to install containerd instead of dockerd and then start kubelet with the additional --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock command line options.
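As a hedged illustration of where those kubelet options might land (this assumes a kubeadm-style install that reads KUBELET_EXTRA_ARGS from /etc/sysconfig/kubelet on RHEL; the post does not show its exact mechanism):

# /etc/sysconfig/kubelet  (Debian/Ubuntu kubeadm installs use /etc/default/kubelet instead)
KUBELET_EXTRA_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"

# restart kubelet and confirm the node now reports containerd:
systemctl restart kubelet
kubectl get nodes -o wide   # CONTAINER-RUNTIME column should show containerd://<version>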

Building EKS (Amazon hosted Kubernetes) clusters using eksctl

Disclaimer
I have published this post on my work blog https://reece.tech previously.

Overview
Eksctl acts as a wrapper around CloudFormation templates. Creating a cluster will add one stack for the control plane (the EKS master servers) and one stack for each node group configured (a node group is a group of workers sharing the same networking, sizing and IAM permissions). However, certain actions, such as upgrading the Kubernetes master or worker version or scaling out the number of workers in a node group, do not always update the CloudFormation stacks associated with them.

Preparation
Download and install the latest version of eksctl. Follow the Weaveworks installation guide: https://eksctl.io/introduction/installation/

Download eksctl (Linux)
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Install eksctl (Linux)
sudo mv /tmp/eksctl /usr/local/bin

Provide AWS credentials
Ensure th
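Past the point where the excerpt is cut off, a minimal hedged sketch of a first cluster build (flags per the eksctl documentation; the cluster name, region, instance type and node count are placeholders):

$ eksctl version                        # confirm the binary is on the PATH
$ eksctl create cluster \
    --name demo-cluster \
    --region ap-southeast-2 \
    --nodegroup-name workers \
    --node-type m5.large \
    --nodes 3
# this creates the control plane CloudFormation stack plus one stack for the 'workers' node group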

Docker Container Size Quota within Kubernetes

Disclaimer
I have published this post on my work blog https://reece.tech previously.

Intro
We are running an on-premise Kubernetes cluster on Red Hat Linux 7.5 (in VMware). The /var/lib/docker file-system is a separate partition, formatted with ext4, and we used overlay as the storage provider for docker, which was recommended for earlier RHEL 7 releases.

What happened
One fine day, one of our containers started creating core dumps - about 1 GB per minute worth - causing /var/lib/docker (100 GB in size) to fill up in less than 90 minutes. Existing pods crashed, and new pods could not pull their image or start up. We deleted the existing pods on one of the Kubernetes worker nodes manually, however the container in question migrated to a different worker and continued its mission.

Investigation
We believed there was a 10 GB size limit for each running container by default, however this did not seem to be the case. After consulting the relevant documentation it became clear that the
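Not part of the excerpt, but as a hedged sketch of how one might confirm which container is filling the disk during such an incident (standard docker CLI subcommands; the overlay path applies to the overlay storage driver used above):

$ docker ps --size                                  # per-container writable-layer size
$ docker system df -v                               # overall image/container/volume usage
$ du -sh /var/lib/docker/overlay/* | sort -h | tail # largest layer directories on the node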

How to check if the Kubernetes control plane is healthy

Disclaimer
I have published this post on my work blog https://reece.tech previously.

Why is this important
We are running an on-premise Kubernetes cluster (currently version 1.11.6) on Red Hat Linux 7.5 (in VMware). Most documentation (especially when it comes to master version upgrades) mentions checking that the control plane is healthy prior to performing any cluster changes. Obviously this is an important step to ensure consistency and repeatability - and it also matters during day-to-day management of your cluster - but how exactly do we do this?

Our approach
Our (multi master) Kubernetes control plane consists of a few different services / parts, such as etcd, kube-apiserver, scheduler, controller-manager and so on. Each component should be verified during this process.

Starting simple
Run kubectl get nodes -o wide to ensure all nodes are Ready, and check that the master servers have the master role. Running kubectl get cs will show you the status of vital control plan
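As a hedged sketch of those first checks (kubectl get cs is the shortcut for componentstatuses, still available on a 1.11 cluster; output trimmed to the interesting columns):

$ kubectl get nodes -o wide     # every node should be Ready; masters should show the master role
$ kubectl get cs                # scheduler, controller-manager and etcd-* should report Healthy
$ kubectl -n kube-system get pods -o wide | grep -E 'apiserver|scheduler|controller|etcd'
# the static control plane pods on each master should all be Running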