
Posts

Showing posts with the label docker

Exporting and importing docker images manually

Why

Sometimes it can be handy to have a local copy of a container image, or to manually copy a docker image from one computer to another. Recently I had an issue where newly built Kubernetes worker nodes did not work properly because the flannel pod image was hosted on quay.io, which was unavailable at the time. The "fix" was to manually export the image from a server where flannel was running just fine and import it on the new worker nodes (and restart the flannel pods).

Export

Assuming we want to save / export the image below:

$ docker images
REPOSITORY     TAG       IMAGE ID       CREATED       SIZE
kindest/node   <none>    af39c553b6de   2 weeks ago   1.12GB

We run docker save with the image ID and redirect the output into a new local file.

$ docker save af39c553b6de > kindest-node.tar

Once done, we end up with a new tar fi...
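The import side of this round-trip can be sketched as follows (the target hostname is illustrative; note that saving by image ID strips the repository and tag, so re-tagging after the load is assumed):

```shell
# Copy the tarball to the new worker node (hostname is illustrative)
scp kindest-node.tar worker-02:/tmp/

# On the target node: load the image from the tar archive
docker load < /tmp/kindest-node.tar

# Saving by image ID drops the repository/tag, so re-tag it manually
docker tag af39c553b6de kindest/node:latest
```

Alternatively, saving by repository:tag instead of image ID (`docker save kindest/node:latest`) preserves the tag in the archive, so no re-tagging is needed after the load.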

Create a Kubernetes cluster using kind (Kubernetes in Docker) in less than 2 minutes

Why

Sometimes I just need to quickly test a K8s resource or compare a cluster with a near-vanilla version. This is where kind comes in handy, as it can create a clean and fresh Kubernetes cluster in under 2 minutes.

Requirements

You have a working docker environment.

Step 1

Download the kind binary (less than 4 MB).

curl -Lso ./kind https://kind.sigs.k8s.io/dl/v0.11.0/kind-linux-amd64 && chmod 755 kind

Step 2

Create the actual cluster.

$ time ./kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community

real    1m55.934s
user    0m1.014s
sys     0m0.970s

Step ...
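For comparisons closer to a production topology, kind can also build multi-node clusters from a small config file. A minimal sketch (node counts and the file name are illustrative):

```yaml
# kind-config.yaml - one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

The cluster is then created with `./kind create cluster --config kind-config.yaml`, and torn down again with `./kind delete cluster` when no longer needed.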

Migrating Kubernetes from Docker to Containerd

Disclaimer

I have published this post on my work blog https://reece.tech previously.

Overview

I have operated multiple on-premise and cloud-hosted K8s clusters for many years, and we heavily utilise docker as our container runtime for master and worker nodes. As most readers will be aware by now, the Kubernetes 1.20 release also announced the deprecation and future removal of the much-loved docker interface. This post documents our journey from docker to a suitable replacement option.

Options

The two most obvious alternatives are cri-o and containerd. As containerd is the default for many cloud-based K8s environments, and containerd was already used behind the scenes by our K8s docker layer anyway, the choice was quite easy.

Changes required

The main change (for K8s 1.19.5) was to install containerd instead of dockerd and then start kubelet with additional --container-runtime=remote --co...
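As a hedged sketch of what a typical containerd kubelet configuration looks like (the socket path assumes containerd's default install location; check your own distribution before copying):

```shell
# Point kubelet at containerd's CRI socket instead of the docker shim
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

On such nodes, day-to-day container inspection moves from the docker CLI to crictl, e.g. `crictl ps` in place of `docker ps`.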

Docker Container Size Quota within Kubernetes

Disclaimer

I have published this post on my work blog https://reece.tech previously.

Intro

We are running an on-premise Kubernetes cluster on Red Hat Linux 7.5 (in VMware). The /var/lib/docker file-system is a separate partition, formatted with ext4, and we used overlay as the storage driver for docker, which was recommended for earlier RHEL 7 releases.

What happened

One fine day, one of our containers started creating core dumps - about 1 GB per minute worth - causing /var/lib/docker (100 GB in size) to fill up in less than 90 minutes. Existing pods crashed, and new pods could not pull their image or start up. We deleted the existing pods on one of the Kubernetes worker nodes manually; however, the container in question migrated to a different worker and continued its mission.

Investigation

We believed there was a 10 GB size limit for each running container by default; however, this did not seem to be the case. After consulting the relevant d...
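For context, a sketch of how a per-container size quota can actually be enforced: docker's `--storage-opt size=` option works with the overlay2 driver, but only when the backing filesystem is xfs mounted with project quotas (pquota) - not with overlay on ext4 as in our setup. Device names, paths and sizes below are illustrative:

```shell
# /var/lib/docker must live on xfs mounted with project quotas, e.g. in /etc/fstab:
#   /dev/sdb1  /var/lib/docker  xfs  defaults,pquota  0 0

# Daemon-wide default via /etc/docker/daemon.json:
#   {
#     "storage-driver": "overlay2",
#     "storage-opts": ["overlay2.size=10G"]
#   }

# Or per container:
docker run --storage-opt size=10G alpine sh
```

With that in place, a runaway container filling its writable layer hits its own quota instead of exhausting /var/lib/docker for every container on the node.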

Upgrading Kubernetes to 1.16 and decommissioned API versions

Disclaimer

I have published this post on my work blog https://reece.tech previously.

Overview

I like to upgrade our Kubernetes clusters quite frequently. Recently I started the upgrade journey to 1.16. Some upgrades are rather uneventful and completed within a few minutes (we run 5 master nodes per cluster); however, this particular upgrade was different.

Preparation

The biggest change in 1.16 is that certain (and commonly used) API versions have been removed completely. Yes, there were mentions and deprecation warnings here and there in the past, but now it's for real. For example, you will not be able to create or upgrade deployments or daemonsets created with the extensions/v1beta1 API version without changing your resource manifests. We upgraded the API versions of Kubernetes internal services like Grafana, Prometheus, dashboards and our logging services prior to upgrading our clusters to 1.16.

API version changes

Here is a list of all changes (removed APIs in Kube...
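As a sketch of the manifest change involved (resource names and image are illustrative): moving a Deployment from extensions/v1beta1 to apps/v1 means updating apiVersion and, because apps/v1 makes it mandatory, adding an explicit spec.selector that matches the pod template labels:

```yaml
apiVersion: apps/v1        # was: extensions/v1beta1
kind: Deployment
metadata:
  name: example-app        # illustrative name
spec:
  replicas: 2
  selector:                # required in apps/v1
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.21
```

The same apiVersion move (to apps/v1) applies to DaemonSets, StatefulSets and ReplicaSets previously created under extensions/v1beta1 or apps/v1beta1/v1beta2.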