Posts

Comparing common docker and cri commands

  Comparing docker and crictl command line options Docker and CRI (Container Runtime Interface) are two popular ways to manage and run containers on a Linux system. Both technologies offer a set of commands and tools for working with containers, but there are some key differences between the two. In this blog post, we will compare some common Docker and CRI commands to help you understand the similarities and differences between the two technologies. First, let's take a look at the docker run command. The docker run command is used to start a new container from an image. The command takes a number of options, such as the image name, ports to expose, and environment variables to set. The docker run command also allows you to specify a command to run inside the container, which is useful for running a specific application or service. In contrast, the equivalent command in CRI is the crictl run command. The crictl run command also starts a new container from an image, but it take...
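As a sketch of the comparison (assuming Docker and a CRI runtime with crictl are installed; the image name, container name, and config file names are illustrative, not from the post):

```shell
# Docker: a single command creates and starts the container
docker run -d --name web -p 8080:80 -e APP_ENV=prod nginx:latest

# crictl: containers live inside a pod sandbox, so crictl run takes
# a container config plus a pod sandbox config (YAML or JSON files)
crictl run container-config.yaml pod-sandbox-config.yaml
```

The extra config file reflects CRI's pod-centric model: even a single container must belong to a pod sandbox.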

Introduction to docker

 Introduction to docker     Docker is a powerful tool for building, deploying and running containerized applications. It allows developers to package their applications and dependencies into a single container, which can then be easily deployed and run on any platform that supports Docker. With Docker, developers can build and test their applications on their local machines and then deploy the same exact container to different environments such as production or staging, without worrying about inconsistencies or compatibility issues. This ensures consistency and reproducibility across different environments. Docker also makes it easy to scale and manage applications, as containers can be easily started, stopped, and moved between hosts. It also allows for efficient resource utilization, as containers share the host operating system kernel, reducing the need for multiple copies of the operating system. In this blog post, we will dive deeper into the world of Docker, explori...
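A minimal sketch of packaging an application and its dependencies into a single image (the base image, file names, and app entry point are illustrative assumptions, not from the post):

```dockerfile
# Illustrative: bundle a Python app and its dependencies into one image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the same image can then be run unchanged in staging and production with `docker run myapp`, which is what gives the consistency described above.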

Advanced usage of docker

 Advanced usage of docker Docker is a powerful tool for building and deploying containerized applications. It allows developers to package their applications and dependencies into a single container, which can then be easily deployed and run on any platform that supports Docker. One of the key benefits of using Docker is its ability to provide consistency and reproducibility across different environments. With Docker, developers can build and test their applications on their local machines, and then deploy the same exact container to different environments, such as production or staging, without worrying about inconsistencies or compatibility issues. However, as with any powerful tool, there are advanced usage patterns that can help developers to make the most out of Docker. In this blog post, we will explore some of the more advanced usage patterns of Docker, including multi-stage builds, volume management, and network isolation. Multi-stage Builds A common pattern when building a...
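The multi-stage build pattern mentioned above can be sketched like this (a hypothetical Go application; image tags and paths are illustrative):

```dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: copy only the compiled binary into a minimal runtime image,
# leaving the compiler and sources out of the shipped image
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the final stage becomes the published image, so the heavyweight build toolchain never reaches production.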

Introduction to Kubernetes

Introduction to Kubernetes

Kubernetes is a powerful platform for managing containerized applications. It is an open-source project that was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). One of the key benefits of Kubernetes is its ability to automate the deployment, scaling, and management of containerized applications. This makes it an ideal platform for running microservices, which are small, modular, and independently deployable units of software. Kubernetes is built on top of a number of core components, including:

- The API server, which exposes the Kubernetes API and handles communication between the different components of the system.
- The etcd datastore, which stores the configuration data for the Kubernetes cluster.
- The controller manager and the scheduler, which handle the orchestration of the containerized applications and the scheduling of resources, respectively.
- The kubelet, which runs on each node in the cluster...
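To make these components concrete, here is a minimal Deployment manifest as a sketch (names and image are illustrative): the API server accepts it, the scheduler places its pods onto nodes, and the kubelet on each node runs the containers.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 2               # the controller manager keeps two pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest # illustrative image
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the cluster then continuously reconciles the running state toward this declared state.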

How to check for open TCP ports in Linux using netcat, ssh, nmap, telnet and even just cat

There are many ways to check for open TCP ports. Usually I prefer to use netcat or telnet; however, in some cases (especially within Docker containers) these tools are not installed or available. This post shows the most common ways to check whether a remote port is open.

telnet

Even though the telnet client is meant for the telnet protocol (i.e. remotely logging in to a Unix computer before we had ssh), it is also a handy tool for checking an open port. For example, we can use it to check whether we can reach www.google.com via HTTPS:

$ telnet www.google.com 443
Trying 142.250.70.196...
Connected to www.google.com.

If we see the "Connected" message, we can deduce that the port is open, even though there are protocol differences. Furthermore, if the service is unencrypted, telnet will show us status messages, protocol hints, versions, etc. The following connects to a Google mail / SMTP server:

$ telnet smtp.google.com 25
Trying 74.125.24.27...
Co...
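For the cat-only approach hinted at in the title, bash's /dev/tcp pseudo-device works even when netcat and telnet are absent; this is a sketch (host, port, and function name are illustrative), and it requires bash, since /dev/tcp is a bash feature rather than a real file:

```shell
#!/usr/bin/env bash
# port_open HOST PORT -> exit 0 if the TCP port accepts a connection.
# bash interprets /dev/tcp/HOST/PORT in redirections and opens a socket.
port_open() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

port_open www.google.com 443 && echo "open" || echo "closed or unreachable"
```

The `timeout` wrapper matters: against a filtered port the connection attempt would otherwise hang until the TCP timeout.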

Viewing and tailing multiple Kubernetes container logs concurrently

Why

Often I need to look at multiple pod logs at the same time. For example, the nginx ingress controller deployment or daemonset usually has at least a handful of pods running to share the load and for additional redundancy. To troubleshoot problems, I need to see them all.

Options

The trusted kubectl (I am a kube cuttle guy) command has an option to view or tail multiple containers based on a selector, like this:

$ kubectl logs -n nginx-ingress -l 'app.kubernetes.io/name=fluent-bit' -f --max-log-requests 60 --tail=1 --prefix=true

However, if the pods in question come and go frequently, I recommend stern instead: https://github.com/wercker/stern
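A hedged sketch of the stern alternative (the namespace and pod-name query are illustrative; stern matches pods by a name regex and keeps following as pods are replaced):

```shell
# Tail matching pods in the namespace, picking up new pods automatically
stern -n nginx-ingress fluent-bit --tail 1 --timestamps
```

Unlike the kubectl invocation above, this keeps working when pods are recreated, which is exactly the frequent-churn case.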

Migrate CentOS8 to Rocky Linux 8

The following steps will migrate your CentOS 8 server to Rocky Linux 8.

dnf -y install wget
wget https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh
chmod a+x migrate2rocky.sh
./migrate2rocky.sh -r
rm -rf /etc/yum.repos.d/backups /etc/yum.repos.d/CentOS-Linux-AppStream.repo.rpmsave /etc/yum.repos.d/CentOS-Linux-BaseOS.repo.rpmsave
sync && init 6

That's it.