
Posts

Showing posts with the label Linux

Exporting and importing docker images manually

Why

Sometimes it can be handy to have a local copy of a container image, or to be able to manually copy a docker image from one computer to another. Recently I had an issue where newly built Kubernetes worker nodes did not work properly because the flannel pod image was hosted on quay.io, which was not available at the time. The "fix" was to manually export the image from a server where flannel was running fine and import it on the new worker nodes (and restart the flannel pods).

Export

Assuming we want to save / export the image below:

$ docker images
REPOSITORY     TAG       IMAGE ID       CREATED       SIZE
kindest/node   <none>    af39c553b6de   2 weeks ago   1.12GB

We run docker save with the image id and redirect the output into a new local file.

$ docker save af39c553b6de > kindest-node.tar

Once done, we end up with a new tar file, which can optionally be compressed.

$ ls -lah kindest-node.tar
-rw-rw-r-- 1 user user 1.1G Jun  6 12:11 kindest-node.tar
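
The excerpt is cut off before the import side of the process. The lines below are a minimal sketch of that step, assuming the tar file is copied to a hypothetical worker node (worker-node-01) with scp; they are not taken from the original post.

# Copy the exported image to the target node (host name is a placeholder)
scp kindest-node.tar worker-node-01:/tmp/

# On the target node, load the tar back into the local Docker image store
ssh worker-node-01 'docker load < /tmp/kindest-node.tar'

# Images saved by id come back untagged, so re-tag if a tag is needed
ssh worker-node-01 'docker tag af39c553b6de kindest/node:latest'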

Quick and simple parallel ssh command shell script

Why

Sometimes it is necessary to run a command across a whole range of servers. Yes, there are Ansible, Puppet, SaltStack and the like, however in some cases these are overkill (and they usually require Python, Ruby or other languages to be installed). The following shell script runs commands (via sudo if required) on a group of hosts in parallel. It is quite old and not very elegant, but it does the trick and hopefully helps somebody in the future. Please don't comment on syntax and bugs.

Configuration

The .run file in your home directory contains one line for user: and one line for pass:. Should this file not exist, the script will ask for the user and password interactively. The servers file contains one line per host (or includes for additional host files); if the file does not exist, its name is treated as a single host name.

Usage

./run.sh server "uptime"

Script

#!/bin/bash
#
# About: Script to concurrently run commands on a list of servers (with or without sudo)
# Author: Lonestarr
# Las
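
The excerpt cuts the script off right after its header. The block below is not the original run.sh, but a minimal sketch of the same approach, assuming key-based ssh authentication instead of the user/password handling the original reads from the ~/.run file.

#!/bin/bash
# Minimal sketch (not the original run.sh): run one command on many hosts in parallel.
# Assumes key-based ssh auth; the first argument may also be a single host name.

HOSTFILE="$1"
CMD="$2"

# Build the host list: one host per line, or treat the argument itself as a host
if [ -f "$HOSTFILE" ]; then
    HOSTS=$(grep -v '^#' "$HOSTFILE")
else
    HOSTS="$HOSTFILE"
fi

for host in $HOSTS; do
    (
        echo "=== $host ==="
        ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" "$CMD"
    ) &     # run each ssh in the background so hosts are processed concurrently
done

wait        # block until all background jobs have finished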

Hosting CentOS7 and CentOS8 yum repositories in AWS S3

Disclaimer

I previously published this post on my work blog, https://reece.tech.

Overview

We are utilising compute instances in different cloud environments as well as traditional data centres. On-premises virtual machines usually run RHEL 7/8 and CentOS 7/8.

Scope

This post explains how to create and host your own yum repositories in an S3 bucket and how to maintain secure, consistent and reliable server builds. This method also allows for a controlled package version and patch-level life cycle across environments.

The problem

Using externally hosted yum repositories or mirrors is very convenient and easy for end users installing and updating a single workstation, however it is not the best option in an enterprise environment where many new, identical virtual machines could be built every day in an automated fashion.

Issues

The main problems with publicly hosted repositories are:

Security (who has access to the mirror or DNS and can alter packages?)
Consistency (packages get upd
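
The excerpt is truncated at this point. As a rough sketch of the approach the post describes, the commands below build repository metadata with createrepo_c and sync it to an S3 bucket; the bucket name, paths and repo id are hypothetical and not taken from the original post.

# Build yum metadata for a directory of RPMs (paths and bucket name are hypothetical)
mkdir -p /srv/repos/centos7/x86_64
cp ~/rpms/*.rpm /srv/repos/centos7/x86_64/
createrepo_c /srv/repos/centos7/x86_64/

# Push the repository tree to the S3 bucket
aws s3 sync /srv/repos/centos7/ s3://example-yum-repo/centos7/ --delete

# Client-side repo definition pointing at the bucket (gpgcheck left off for brevity)
cat > /etc/yum.repos.d/internal.repo <<'EOF'
[internal-centos7]
name=Internal CentOS 7 repository
baseurl=https://example-yum-repo.s3.amazonaws.com/centos7/x86_64/
enabled=1
gpgcheck=0
EOF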