Tag Archives: Docker

Backup and Restore your Ubuntu Linux Workstations

This is a mechanism I invented and have been using for decades to migrate or clone my Linux Desktops to other physical servers.

This script is focused on doing the job for Ubuntu, but I was doing this already 30 years ago for X Window, when I was responsible for the Linux platform of an ISP (Internet Service Provider). So it is compatible with any Linux Desktop or Server.

It has the advantage of being a very lightweight backup. You don't need to back up /etc or /var as long as you install a fresh OS and restore the folders that you backed up. You can back up and restore Wine (Windows Emulator) programs completely, and to/from VMs and Instances as well.

It’s based on user/s rather than machine.

And it does the backup using the Timestamp, so you keep all the different versions modified over time. You can merge the backups into the same folder if you prefer to avoid time versions and keep only the latest backup. If that's your case, then replace s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/" with s_PATH_BACKUP_NOW="${s_PATH_BACKUP}", for instance, as shown below. You can also add a folder per machine if you prefer, for example if you use the same userid across several Desktops/Servers.
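
These are the two variations, sketched as assignments (the per-machine variant uses $(hostname), which is an assumption of mine and not part of the original script):

# Keep only the latest backup, without time versions
s_PATH_BACKUP_NOW="${s_PATH_BACKUP}"

# Or a folder per machine and per timestamp (hostname-based, assumed)
s_PATH_BACKUP_NOW="${s_PATH_BACKUP}$(hostname)/${s_DATETIME}/"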

I offer you a much simplified version of my scripts, but it can serve your needs very well.

#!/usr/bin/env bash

# Author: Carles Mateo
# Last Update: 2022-10-23 10:48 Irish Time

# User we want to backup data for
s_USER="carles"
# Target PATH for the Backups
s_PATH_BACKUP="/home/${s_USER}/Desktop/Bck/"

s_DATE=$(date +"%Y-%m-%d")
s_DATETIME=$(date +"%Y-%m-%d-%H-%M-%S")

s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/"

echo "Creating path $s_PATH_BACKUP and $s_PATH_BACKUP_NOW"
mkdir -p "$s_PATH_BACKUP"
mkdir -p "$s_PATH_BACKUP_NOW"

s_PATH_KEY="/home/${s_USER}/Desktop/keys/2007-01-07-cloud23.pem"
s_DOCKER_IMG_JENKINS_EXPORT=${s_DATE}-jenkins-base.tar
s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT=${s_DATE}-jenkins-blueocean2.tar
s_PGP_FILE=${s_DATETIME}-pgp.zip

# Version the PGP files
echo "Compressing the PGP files as ${s_PGP_FILE}"
zip -r ${s_PATH_BACKUP_NOW}${s_PGP_FILE} /home/${s_USER}/Desktop/PGP/*

# Copy to BCK folder, or ZFS or to an external drive Locally as defined in: s_PATH_BACKUP_NOW
echo "Copying Data to ${s_PATH_BACKUP_NOW}/Data"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" "${s_PATH_BACKUP_NOW}data/"
rsync -a --exclude={'Desktop','Downloads','.local/share/Trash/','.local/lib/python2.7/','.local/lib/python3.6/','.local/lib/python3.8/','.local/lib/python3.10/','.cache/JetBrains/'} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/" "${s_PATH_BACKUP_NOW}home/${s_USER}/"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/code/" "${s_PATH_BACKUP_NOW}code/"


echo "Showing backup dir ${s_PATH_BACKUP_NOW}"
ls -hal ${s_PATH_BACKUP_NOW}

df -h /

See how I exclude certain folders like the Desktop or Downloads with --exclude.

It relies on the very useful rsync program. It also relies on zip to compress entire folders (the PGP Keys in the example).

If you use the second part, to compress Docker Images (Jenkins in this example), you will run it with sudo and you will also need gzip.

# continuation... sudo running required.

# Save Docker Images
echo "Saving Docker Jenkins /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}"
sudo docker save jenkins:base --output /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
echo "Saving Docker Jenkins /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}"
# Note: adjust the image name to your Blue Ocean image; jenkins:blueocean2 here is an assumption
sudo docker save jenkins:blueocean2 --output /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}
echo "Setting permissions"
sudo chown ${s_USER}:${s_USER} /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
sudo chown ${s_USER}:${s_USER} /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}
echo "Compressing /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}"
gzip /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
gzip /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}

rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/Docker_Images/" "${s_PATH_BACKUP_NOW}Docker_Images/"
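
To restore those Images on the new machine, a minimal sketch would be (the paths and file names follow the ones used above; gzip added the .gz extension):

gunzip /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}.gz
sudo docker load --input /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}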

There is a final part, if you want to back up to one or more remote Servers using ssh:

# continuation... to copy to a remote Server.

s_PATH_REMOTE="bck7@cloubbck11.carlesmateo.com:/Bck/Desktop/${s_USER}/data/"

# Copy to the other Server
rsync -e "ssh -i $s_PATH_KEY" -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" ${s_PATH_REMOTE}

I recommend using the same methodology on all your Desktops, like, for example, having a data/ folder on the Desktop for each user.
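
To restore on a freshly installed OS, a minimal sketch, assuming the same user name and pointing to the timestamped backup folder you want to recover, could be:

#!/usr/bin/env bash

# Adjust the user and the timestamped backup folder you want to restore from
s_USER="carles"
s_PATH_BACKUP_NOW="/home/${s_USER}/Desktop/Bck/2022-10-23-10-48-00/"

rsync -a --acls --xattrs --stats --human-readable --progress "${s_PATH_BACKUP_NOW}home/${s_USER}/" "/home/${s_USER}/"
rsync -a --acls --xattrs --stats --human-readable --progress "${s_PATH_BACKUP_NOW}data/" "/home/${s_USER}/Desktop/data/"
rsync -a --acls --xattrs --stats --human-readable --progress "${s_PATH_BACKUP_NOW}code/" "/home/${s_USER}/Desktop/code/"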

You can use Erasure Coding to split the Backups into blocks and store the pieces across different Cloud Providers.

You can also store your Backups long-term, with services like Amazon Glacier.

Other ideas are storing certain files in git and in Hadoop HDFS.

If you want, you can checksum your files before copying them to another device or server, and verify them after the copy.

You can use tools like sha512sum or md5sum, as in the sketch below.
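
A minimal sketch with sha512sum (the folders are just examples, following the ones used above):

# Generate checksums for every file under data/ (run from the folder that contains data/)
cd /home/${s_USER}/Desktop/
find data/ -type f -exec sha512sum {} \; > /tmp/data.sha512

# After the copy, run this from the folder that contains the copied data/ to verify it
sha512sum --check /tmp/data.sha512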

Docker with Ubuntu with telnet server and Python code to access via telnet

Explanation of building the Container and running the Python code

Here you can see the Python code to connect via Telnet and execute some commands on a Server:

File: telnet_demo.py

#!/usr/bin/env python3
# Note: telnetlib is deprecated since Python 3.11 and removed in Python 3.13
import telnetlib

s_host = "localhost"
s_user = "telnet"
s_password = "telnet"

o_tn = telnetlib.Telnet(s_host)

o_tn.read_until(b"login: ")
o_tn.write(s_user.encode('ascii') + b"\n")

o_tn.read_until(b"Password: ")

o_tn.write(s_password.encode('ascii') + b"\n")

o_tn.write(b"hostname\n")
o_tn.write(b"uname -a\n")
o_tn.write(b"ls -hal /\n")
o_tn.write(b"exit\n")

print(o_tn.read_all().decode('ascii'))

File: Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

# This will make sure printing to the Screen when running in detached mode
ENV PYTHONUNBUFFERED=1

RUN apt-get update -y && apt install -y sudo telnetd vim systemctl  && apt-get clean

RUN adduser --gecos "" --disabled-password --shell /bin/bash telnet

RUN echo "telnet:telnet" | chpasswd

EXPOSE 23

CMD systemctl start inetd; while [ true ]; do sleep 60; done

You can see that I use the chpasswd command to change the password for the user telnet and set it to telnet. That deals with the complexity of setting the encrypted password.

File: build_docker.sh

#!/bin/bash

s_DOCKER_IMAGE_NAME="ubuntu_telnet"

echo "We will build the Docker Image and name it: ${s_DOCKER_IMAGE_NAME}"
echo "After, we will be able to run a Docker Container based on it."

printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"

printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker build -t ${s_DOCKER_IMAGE_NAME} .
# If you don't want to use cache this is your line
# sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache

i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit $i_EXIT_CODE
fi

echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run in type: sudo docker run -it -p 23:23 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"

When you run sudo ./build_docker.sh the image will be built. Then run it with:

sudo docker run -it -p 23:23 --name ubuntu_telnet ubuntu_telnet

If you get an error indicating that the port is in use, then your computer already has a process listening on port 23; use another port, as in the example below.
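
For instance, mapping the host port 2323 (an arbitrary choice) to the Container's port 23:

sudo docker run -it -p 2323:23 --name ubuntu_telnet ubuntu_telnet

In that case you would also connect to port 2323 from the Python code, for example with telnetlib.Telnet(s_host, 2323).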

You will be able to stop the Container by pressing CTRL + C

From another terminal run the Python program:

python3 ./telnet_demo.py

Creating a RabbitMQ Docker Container accessed with Python and pika

In this video, which I streamed on Twitch, I demonstrate the code shown here.

I launch the Docker Container and operate it a bit, so you can learn a few tricks.

I created the RabbitMQ Docker installation based on the official RabbitMQ installation instructions for Ubuntu/Debian:

https://www.rabbitmq.com/install-debian.html#apt-cloudsmith

One interesting aspect is that I cover how the messages are delivered as a byte sequence. I show this by sending Unicode characters.

Files in the project

Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

# This will make sure printing to the Screen when running in detached mode
ENV PYTHONUNBUFFERED=1

ARG PATH_RABBIT_INSTALL=/tmp/rabbit_install/

ARG PATH_RABBIT_APP_PYTHON=/opt/rabbit_python/

RUN mkdir $PATH_RABBIT_INSTALL

COPY cloudsmith.sh $PATH_RABBIT_INSTALL

RUN chmod +x ${PATH_RABBIT_INSTALL}cloudsmith.sh

RUN apt-get update -y && apt install -y sudo python3 python3-pip mc htop less strace zip gzip lynx && apt-get clean

RUN ${PATH_RABBIT_INSTALL}cloudsmith.sh

RUN service rabbitmq-server start

RUN mkdir $PATH_RABBIT_APP_PYTHON

COPY requirements.txt $PATH_RABBIT_APP_PYTHON

WORKDIR $PATH_RABBIT_APP_PYTHON

RUN pwd

RUN pip install -r requirements.txt

COPY *.py $PATH_RABBIT_APP_PYTHON

COPY loop_send_get_messages.sh $PATH_RABBIT_APP_PYTHON

RUN chmod +x loop_send_get_messages.sh

CMD ./loop_send_get_messages.sh

cloudsmith.sh

#!/usr/bin/sh
# From: https://www.rabbitmq.com/install-debian.html#apt-cloudsmith

sudo apt-get update -y && apt-get install curl gnupg apt-transport-https -y

## Team RabbitMQ's main signing key
curl -1sLf "https://keys.openpgp.org/vks/v1/by-fingerprint/0A9AF2115F4687BD29803A206B73A36E6026DFCA" | sudo gpg --dearmor | sudo tee /usr/share/keyrings/com.rabbitmq.team.gpg > /dev/null
## Cloudsmith: modern Erlang repository
curl -1sLf https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/gpg.E495BB49CC4BBE5B.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/io.cloudsmith.rabbitmq.E495BB49CC4BBE5B.gpg > /dev/null
## Cloudsmith: RabbitMQ repository
curl -1sLf https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/gpg.9F4587F226208342.key | sudo gpg --dearmor | sudo tee /usr/share/keyrings/io.cloudsmith.rabbitmq.9F4587F226208342.gpg > /dev/null

## Add apt repositories maintained by Team RabbitMQ
sudo tee /etc/apt/sources.list.d/rabbitmq.list <<EOF
## Provides modern Erlang/OTP releases
##
deb [signed-by=/usr/share/keyrings/io.cloudsmith.rabbitmq.E495BB49CC4BBE5B.gpg] https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu bionic main
deb-src [signed-by=/usr/share/keyrings/io.cloudsmith.rabbitmq.E495BB49CC4BBE5B.gpg] https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-erlang/deb/ubuntu bionic main

## Provides RabbitMQ
##
deb [signed-by=/usr/share/keyrings/io.cloudsmith.rabbitmq.9F4587F226208342.gpg] https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu bionic main
deb-src [signed-by=/usr/share/keyrings/io.cloudsmith.rabbitmq.9F4587F226208342.gpg] https://dl.cloudsmith.io/public/rabbitmq/rabbitmq-server/deb/ubuntu bionic main
EOF

## Update package indices
sudo apt-get update -y

## Install Erlang packages
sudo apt-get install -y erlang-base \
                        erlang-asn1 erlang-crypto erlang-eldap erlang-ftp erlang-inets \
                        erlang-mnesia erlang-os-mon erlang-parsetools erlang-public-key \
                        erlang-runtime-tools erlang-snmp erlang-ssl \
                        erlang-syntax-tools erlang-tftp erlang-tools erlang-xmerl

## Install rabbitmq-server and its dependencies
sudo apt-get install rabbitmq-server -y --fix-missing

build_docker.sh

#!/bin/bash

s_DOCKER_IMAGE_NAME="rabbitmq"

echo "We will build the Docker Image and name it: ${s_DOCKER_IMAGE_NAME}"
echo "After, we will be able to run a Docker Container based on it."

printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"

printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache

i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit $i_EXIT_CODE
fi

echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run in type: sudo docker run -it --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"

requirements.txt

pika

loop_send_get_messages.sh

#!/bin/bash

echo "Starting RabbitMQ"
service rabbitmq-server start

echo "Launching consumer in background which will be listening and executing the callback function"
python3 rabbitmq_getfrom.py &

while true; do

    i_MESSAGES=$(( RANDOM % 10 ))

    echo "Sending $i_MESSAGES messages"
    for i_MESSAGE in $(seq 1 $i_MESSAGES); do
        python3 rabbitmq_sendto.py
    done

    echo "Sleeping 5 seconds"
    sleep 5

done

echo "Exiting loop"

rabbitmq_sendto.py

#!/usr/bin/env python3
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))

channel = connection.channel()

channel.queue_declare(queue="hello")

s_now = str(time.time())

s_message = "Hello World! " + s_now + " Testing Unicode: çÇ àá😀"
channel.basic_publish(exchange="", routing_key="hello", body=s_message)
print(" [x] Sent '" + s_message + "'")
connection.close()

rabbitmq_getfrom.py

#!/usr/bin/env python3
import pika


def callback(ch, method, properties, body):
    # print(f" [x] Received in channel: {ch} method: {method} properties: {properties} body: {body}")
    print(f" [x] Received body: {body}")


connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))

channel = connection.channel()

channel.queue_declare(queue="hello")

print(" [*] Waiting for messages. To exit press Ctrl+C")

# This will loop
channel.basic_consume(queue="hello", on_message_callback=callback)
channel.start_consuming()

print("Finishing consumer")

Video: How to create a Docker Container for LAMPP step by step

How to create a Docker Container for Linux Apache MySQL PHP and Python for beginners.

Note: Containers are not persistent. Use this for tests only. If you want to keep persistent information, use Volumes, as in the sketch below.
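
As a minimal sketch, the Container could be run with a named Volume for the MySQL data (the volume name lampp_mysql is my own choice, not part of the original sources; /var/lib/mysql is MySQL's default data directory):

sudo docker run -d -p 80:80 -v lampp_mysql:/var/lib/mysql --name lampp lampp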

Sources: https://gitlab.com/carles.mateo/blog.carlesmateo.com-source-code/-/tree/master/twitch/live_20220708_dockerfile_lamp

File: Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y vim python3-pip &&  \
    apt install -y net-tools mc vim htop less strace zip gzip lynx && \
    apt install -y apache2 mysql-server ntpdate libapache2-mod-php7.4 php7.4-mysql php-dev libmcrypt-dev php-pear && \
    apt install -y git && apt autoremove && apt clean && \
    pip3 install pytest

RUN a2enmod rewrite

RUN echo "Europe/Ireland" | tee /etc/timezone

ENV APACHE_RUN_USER  www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR   /var/log/apache2
ENV APACHE_PID_FILE  /var/run/apache2/apache2.pid
ENV APACHE_RUN_DIR   /var/run/apache2
ENV APACHE_LOCK_DIR  /var/lock/apache2
ENV APACHE_LOG_DIR   /var/log/apache2

COPY phpinfo.php /var/www/html/

RUN service apache2 restart

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

File: phpinfo.php

<html>
<?php

// Show all information, defaults to INFO_ALL
phpinfo();

// Show just the module information.
// phpinfo(8) yields identical results.
phpinfo(INFO_MODULES);
?>
</html>

File: build_docker.sh

#!/bin/bash

s_DOCKER_IMAGE_NAME="lampp"

echo "We will build the Docker Image and name it: ${s_DOCKER_IMAGE_NAME}"
echo "After, we will be able to run a Docker Container based on it."

printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"

printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
# sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache
sudo docker build -t ${s_DOCKER_IMAGE_NAME} .

i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit $i_EXIT_CODE
fi

echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run in type: sudo docker run -p 80:80 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"

echo
echo "If you want to debug do:"
echo "docker exec -i -t ${s_DOCKER_IMAGE_NAME} /bin/bash"

Install Jenkins on Docker with Blue Ocean and persistent Volumes in Ubuntu 20.04 LTS in 4 minutes

Following the official documentation:

https://www.jenkins.io/doc/book/installing/docker/#setup-wizard

The steps are:

Create the network bridge named jenkins

docker network create jenkins

To execute Docker commands inside the Jenkins nodes, we will use docker:dind:

docker run \
  --name jenkins-docker \
  --rm \
  --detach \
  --privileged \
  --network jenkins \
  --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --publish 2376:2376 \
  docker:dind \
  --storage-driver overlay2

Create a Dockerfile with these contents:

FROM jenkins/jenkins:2.346.1-jdk11
USER root
RUN apt-get update && apt-get install -y lsb-release
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
  https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
  https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce-cli
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.5 docker-workflow:1.28"

Build it:

docker build -t myjenkins-blueocean:2.346.1-1 .

Run the Container:

docker run \
  --name jenkins-blueocean \
  --restart=on-failure \
  --detach \
  --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client \
  --env DOCKER_TLS_VERIFY=1 \
  --publish 8080:8080 \
  --publish 50000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  myjenkins-blueocean:2.346.1-1

See the Id of the running Containers:

docker ps

In my case my Jenkins container Id is 77b6a5a7ae8d, so in order to know the Jenkins administrator password I check the logs for my Jenkins Container with docker logs 77b6a5a7ae8d:

docker logs 77b6a5a7ae8d
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-06-26 21:02:05.492+0000 [id=1]	INFO	org.eclipse.jetty.util.log.Log#initialized: Logging initialized @549ms to org.eclipse.jetty.util.log.JavaUtilLog
2022-06-26 21:02:05.583+0000 [id=1]	INFO	winstone.Logger#logInternal: Beginning extraction from war file
2022-06-26 21:02:05.613+0000 [id=1]	WARNING	o.e.j.s.handler.ContextHandler#setContextPath: Empty contextPath
2022-06-26 21:02:05.674+0000 [id=1]	INFO	org.eclipse.jetty.server.Server#doStart: jetty-9.4.45.v20220203; built: 2022-02-03T09:14:34.105Z; git: 4a0c91c0be53805e3fcffdcdcc9587d5301863db; jvm 11.0.15+10
2022-06-26 21:02:05.986+0000 [id=1]	INFO	o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2022-06-26 21:02:06.020+0000 [id=1]	INFO	o.e.j.s.s.DefaultSessionIdManager#doStart: DefaultSessionIdManager workerName=node0
2022-06-26 21:02:06.020+0000 [id=1]	INFO	o.e.j.s.s.DefaultSessionIdManager#doStart: No SessionScavenger set, using defaults
2022-06-26 21:02:06.021+0000 [id=1]	INFO	o.e.j.server.session.HouseKeeper#startScavenging: node0 Scavenging every 600000ms
2022-06-26 21:02:06.463+0000 [id=1]	INFO	hudson.WebAppMain#contextInitialized: Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-06-26 21:02:06.647+0000 [id=1]	INFO	o.e.j.s.handler.ContextHandler#doStart: Started w.@7cf7aee{Jenkins v2.346.1,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
2022-06-26 21:02:06.668+0000 [id=1]	INFO	o.e.j.server.AbstractConnector#doStart: Started ServerConnector@4c402120{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2022-06-26 21:02:06.669+0000 [id=1]	INFO	org.eclipse.jetty.server.Server#doStart: Started @1727ms
2022-06-26 21:02:06.669+0000 [id=25]	INFO	winstone.Logger#logInternal: Winstone Servlet Engine running: controlPort=disabled
2022-06-26 21:02:06.925+0000 [id=32]	INFO	jenkins.InitReactorRunner$1#onAttained: Started initialization
2022-06-26 21:02:07.214+0000 [id=39]	INFO	jenkins.InitReactorRunner$1#onAttained: Listed all plugins
2022-06-26 21:02:10.781+0000 [id=47]	INFO	jenkins.InitReactorRunner$1#onAttained: Prepared all plugins
2022-06-26 21:02:10.794+0000 [id=35]	INFO	jenkins.InitReactorRunner$1#onAttained: Started all plugins
2022-06-26 21:02:10.803+0000 [id=42]	INFO	jenkins.InitReactorRunner$1#onAttained: Augmented all extensions
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.vmplugin.v7.Java7$1 (file:/var/jenkins_home/war/WEB-INF/lib/groovy-all-2.4.21.jar) to constructor java.lang.invoke.MethodHandles$Lookup(java.lang.Class,int)
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.vmplugin.v7.Java7$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-06-26 21:02:11.634+0000 [id=30]	INFO	jenkins.InitReactorRunner$1#onAttained: System config loaded
2022-06-26 21:02:11.635+0000 [id=30]	INFO	jenkins.InitReactorRunner$1#onAttained: System config adapted
2022-06-26 21:02:11.642+0000 [id=48]	INFO	jenkins.InitReactorRunner$1#onAttained: Loaded all jobs
2022-06-26 21:02:11.645+0000 [id=46]	INFO	jenkins.InitReactorRunner$1#onAttained: Configuration for all jobs updated
2022-06-26 21:02:11.668+0000 [id=67]	INFO	hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started Download metadata
2022-06-26 21:02:11.675+0000 [id=67]	INFO	hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Download metadata. 4 ms
2022-06-26 21:02:11.733+0000 [id=52]	INFO	jenkins.install.SetupWizard#init: 

*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

3de0910b83894b9294989552e6fa9773

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************

2022-06-26 21:02:22.901+0000 [id=52]	INFO	jenkins.InitReactorRunner$1#onAttained: Completed initialization
2022-06-26 21:02:23.013+0000 [id=24]	INFO	hudson.lifecycle.Lifecycle#onReady: Jenkins is fully up and running

In my case the password is at the bottom, between the stars: 3de0910b83894b9294989552e6fa9773
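
Since the log also indicates that the password is stored in /var/jenkins_home/secrets/initialAdminPassword, another way to get it is to read that file inside the running Container (named jenkins-blueocean above):

docker exec jenkins-blueocean cat /var/jenkins_home/secrets/initialAdminPassword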

Go with your browser to: http://localhost:8080

Using Docker in Windows 10 without Windows Desktop with Docker Engine and without WSL

I added this article to my Docker Combat Guide book.

The change of license of Docker Desktop for Windows has been a low punch, a dirty one.

Many big companies use Windows as for the laptops and workstations, we like it or not.

You can set up a Linux development computer or Virtual Machine, you may argue, but things are not that easy.

Big companies have Software licenses assigned to corporate machines, so you may not be able to use your PyCharm license in a Linux VM.

You may not be able to use Docker Desktop either, if your company did not license it.

And finally you may need access to internal resources, like Artifactory, or Servers where access is granted via ACL, so only you, from your Development machine, can access them. So you have to be able to run Docker locally.

After Docker introduced this change of license I was using VirtualBox with NAT attached to the VPN Virtual Ethernet, and I port forwarded to be able to SSH, deploy, test, etc… from outside to my Linux VM. It was working for a while, until the last VirtualBox update and some Windows updates were pushed to my Windows box, and my VirtualBox VMs stopped booting most of the time and started having random problems.

I configured a new Linux VM on a Development Server, and I opened the Docker API so my PyCharm workstation was able to deploy there and I was able to test. But the Dev IPs do not have access to the same Test Servers that I need my Python Automation projects to reach (and I quickly used 50 GB of space), so I tried WSL. I like PyCharm and I didn't want to switch to VS Code, despite its good Docker extensions; in any case I could not run my code locally with venv because some of the packages were not available for Windows, so I needed Linux to run the Unit Testing, see the Code Coverage, run the code, etc…

I tried Hyper-V, tried with NAT External, but it was incompatible with my VPN.

Note: WSL can be used, but I wanted to use Docker Engine, not docker in WSL.

Installing Docker Command line binaries

The first thing I checked was the Docker downloads page.

I found the standalone binary.

https://docs.docker.com/engine/install/binaries/#install-server-and-client-binaries-on-windows

In order to install it:

  1. Download the zip file from the page, in my case docker-20.10.12.zip
  2. Open PowerShell as Administrator
  3. Run: Expand-Archive C:\Users\carlesmateo\Downloads\docker-20.10.12.zip  -DestinationPath $Env:ProgramFiles\DockerCLI
  4. Run: cd $Env:ProgramFiles\DockerCLI\docker
  5. Run: .\dockerd.exe --register-service
  6. Run: Start-Service docker
  7. Check that Docker lists the running Containers (no errors) with: docker ps
  8. Check that the Service is running with: Get-Service docker
    You should expect something like:
Status   Name     DisplayName
------   ----     -----------
Running  docker   Docker Engine

Attempt to pull an Image with: docker pull ubuntu or docker pull php

If it works, you’re done, but most probably the pull will start and then you will get this error:

Error response from daemon: unsupported os linux

or this other error:

no matching manifest for windows/amd64 10.0.19042 in the manifest list entries

Depending on your system you may need to do certain things:

Turn Windows features on or off

I would make sure that the following are enabled:

  • Containers
  • Hyper-V
  • Virtual Machine Platform
  • Windows Hypervisor Platform
As you can see, WSL is not enabled.

Press OK, and restart your computer.

Try again to docker pull ubuntu.

Enable Experimental Mode

Edit this file to enable experimental mode; you can open it from PowerShell with:

notepad C:\ProgramData\Docker\config\daemon.json
Change experimental from false to: true
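
If your daemon.json only had the default settings, after the change it should contain something like this (a minimal example; keep any other keys your file already has):

{
    "experimental": true
}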

Save the file and restart the Service:

Restart-Service docker

Check if it works

Get-Service docker
Status   Name               DisplayName
------   ----               -----------
Running  docker             Docker Engine

Try again and see if it works now.

Switch Daemon

If it is not working, try running:

cd "C:\Program Files\Docker\Docker\"
.\DockerCli.exe -SwitchDaemon

Give it two minutes and try to pull an image.

If it is still not working reboot, and try again:

cd "C:\Program Files\Docker\Docker\"
.\DockerCli.exe -SwitchDaemon

After it is working

I recommend adding the new standalone docker to the PATH, so you can call it from the terminal at any moment.

Edit the variable PATH of your user profile (not System wide)

I recommend you to have it on top after Python.

News from the Blog 2022-01-22

News for the Blog

It has been 9 years since I created the blog, and some articles with old content still get many visitors. To make sure they get a clear picture and not obsolete information, all the articles of the taxonomy POST will include the Date and Time when the article was first published.

I also removed the annoying link “Leave a comment” on top. I think it influences some people to leave comments before actually having read the article.

It is still possible to add comments, but they are at the bottom of the page. I believe it makes more sense this way. This is the way.

Technically that involved modifying the files of my template:

  • functions.php
  • content.php

My Books

Automating and Provisioning to Amazon AWS using boto3 Amazon’s SDK for Python 3

I finished my book about Automating and Provisioning to Amazon AWS using boto3 Amazon’s SDK for Python 3.

It’s 128 pages in full-size DIN-A4, DRM-Free, and comes with a link to the code samples of a real CLI-Menu-based project.

Docker Combat Guide

I have updated my book Docker Combat Guide and I added a completely new section, including source code, to work with Docker’s Python SDK.

I show the Docker SDK by showing the code of an actual CLI program I wrote in Python 3.

Here you can see a video that demonstrates how I launched a project with three Docker Containers via Docker Compose. The Containers have Python, a Flask webserver, and Redis as a bridge between the two Python Containers.

All the source code is downloadable with the book.

Watch at 1080p at Full Screen for better experience

My Classes

In January I resumed the coding classes. I have new students, and a few spots free in my agenda, as some of my students graduated and others have been hired as Software Developers. I could not be more proud. :)

Free Training

Symfony is one of the most popular PHP Frameworks.

You can learn it with these free videos:

https://symfonycasts.com/screencast/symfony/

Software Licenses I Purchased

Before leaving 2021, I registered WinRAR for Windows.

WinRAR is a compressing Software that has been with us for 19 years.

I’m pretty sure I registered it in the past, but these holidays I was out with only two Windows laptops and I had to do some work for the university and WinRAR came in handy, so I decided to register it again.

I create Software and Books, and I earn my living with this, so it makes a lot of sense to pay others for their good work crafting Software.

Books I Purchased

I bought this book and so far it is really good. I wanted to buy some updated books, as all my Linux books are some years old already. Also, I keep my skills sharp by reading, reading, reading.

Hardware I purchased

So I bought a cheap car power inverter.

The ones I saw on Amazon were €120+ and they were not very well rated, so I opted to buy a cheap one in the supermarket and keep it in the car just in case one day I need it. (My new Asus Zenbook laptop has 18 hours of autonomy and I don’t charge it for days, but you never know.)

For those that don’t know, a power inverter allows you to get a 220V (120V in the US) plug from your car’s lighter socket. You can also get the energy from an external car battery. This comes in really handy to charge the laptop, cameras, your drone… if you are out in nature and you don’t have any plug nearby.

I bought one years ago to power up Raspberry Pi’s when I was doing Research for a project I was studying to launch.

Fun

Many friends are using Starlink as a substitute for fiber for their rural homes, and they are super happy with it.

One of them sent me a very fun article.

It is in Italian, but you can google translate it.

https://leganerd.com/2022/01/10/starlink-ha-un-piccolo-e-adorabile-problema-con-i-gatti/

Anyway, you can get the idea of what’s going on from the picture. :)

So tell me… so your speed with Starlink drops 80% in winter uh… aha…

Random news about Software

I tried the voice recognition in Slack huddles, and it works pretty well. Zoom also has this feature, and they are great, especially when you are in a group call or in a class.

My health

I was experiencing some problems, so I scheduled an appointment to get a blood analysis and to be checked. Just in case.

TL;DR: I could have died.

The doctors saw my analysis and sent me to the hospital urgently, where they found something that was going to be lethal. For hours they were checking me and doing several more analyses and tests to discard false positives, etc… They precisely found the issue, provided urgent treatment and confirmed that I could have died at any moment.

Basically I dodged a bullet.

I was doing certain healthy things that helped me in a situation that could have been deadly or extremely dangerous to my health.

With the treatment and my strict discipline, I reverted the situation really quickly, and now I have more health and more energy than before. I feel rejuvenated.

I’m feeling lucky that with my work, the classes, the books I wrote, etc… I didn’t have to worry about the expensive medicines, the transport, etc… It was a bad moment, during Christmas, with so many people on holidays and pharmacies and GPs closed, so I had to spend more time searching and traveling, and to pay more than would have been strictly necessary. Despite all the time I devoted to my health, I managed to finish my university duties on time, I didn’t miss my duties at work after the hospital, nor did I have to cancel any programming classes or mentoring sessions. Nobody outside my closest circle knew what was going on, with the exception of my boss, whom I kept informed in real time, just in case there was any problem, as I didn’t want to let down the company and leave my duties at work unattended if something major happened.

I was not afraid to die. Unfortunately I’ve lost very significant people since I was a child. Relatives, very appreciated bosses and colleagues whom I considered my friends, and great friends from different circles. Illnesses, accidents, a friend of mine who committed suicide years ago, and some of my partners who attempted it (before we knew each other). When you see people that are so good leaving, it brings a sadness that cannot be explained with words. I have had a tough life.

We have limited time, and we have freedom to make choices. Some people choose to be miserable, to mistreat others, to lie, to cheat, to be unfaithful, to lack ethics and integrity. Those are their choices. Their wasted time will not come back.

Some of my friends are doctors. I admire them. They save lives and improve the quality of life of people with health problems.

I like being an Engineer cause I can create things, I can build instead of destroy, I can help to improve the world, and I can help users to have a good time and to avoid the frustration of services being down. I chose to do positive work. So many times I’ve been offered much bigger salaries to do something I didn’t like, or by companies that I don’t admire, and I refused. Cause I wanted to make a better world. I know many people don’t think like that, and they only take, take, take. They are even unable to understand my choices, or even to believe that I’m like that. But it’s enough that I know what I’m doing, that it makes sense for me, and that I know that I’m doing well. And then, one day, you realize that by doing well, by being fair and nice even if other people stabbed you in the back, you got to know fantastic people like you, and people that adore you and love to have you in their lives, in their companies… So I’m really fortunate. To all the good-hearted people around, that give without expecting anything in return, that try to make the world a better place, thank you.

Communicating with Docker Containers via Linux Signals and Python

Normally, if we need to refresh a config in a Container, we will spawn a new one, or we will get a shell with sudo docker exec -it mycontainer /bin/sh for instance and force a reload, or we will have to restart the Container.

What if we want to be able to reload the config at any moment without restarting the process, or to trigger an action in our Container (like a dump or a flush) in some way other than implementing an API?

An unexplored way, for many, to communicate with your Container’s main process is to send Signals.

So basically I will show you how you can trap Signals within a Python process which is the main process of your Docker Container, and send them from the host with a command like:

sudo docker kill --signal=SIGUSR1 <container_name>

I chose to use SIGUSR1 as it is reserved for user-defined Signals.

You can clone the project or get the source code from:

https://gitlab.com/carles.mateo/docker-signal

The Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

RUN apt update && apt install -y python3 python3-pip vim less && apt-get clean

# This will make sure printing to the Screen when running in detached mode
ENV PYTHONUNBUFFERED=1

ENV DOCKERSIGNAL /var/dockersignal

RUN mkdir -p $DOCKERSIGNAL

COPY *.py $DOCKERSIGNAL

WORKDIR $DOCKERSIGNAL

# Again to enforce printing to the Screen when running detached
CMD ["python3", "-u", "/var/dockersignal/dockersignal.py"]

The dockersignal.py file

# By Carles Mateo https://blog.carlesmateo.com

import signal
import time


def handler(signum, frame):
    print('Signal handler called with signal', signum)

    if signum == 10:
        # 10 is the equivalent to SIGUSR1 for most x86/ARM (not for Alpha/Sparc, MIPS, PARISC)
        print("Simulated action: Reload config")


if __name__ == "__main__":

    print("Waiting for a Signal")
    # Listen for this signal; you can register handlers for more signals the same way
    signal.signal(signal.SIGUSR1, handler)

    while True:
        # Do Whatever
        time.sleep(1)

A shell file to build and run the Container like a pro

#!/bin/bash

DOCKER_CONTAINER_NAME="docker-signal"
DOCKER_IMAGE_NAME="docker-signal"

printf "Removing old Container %s\n" "${DOCKER_CONTAINER_NAME}"
sudo docker rm "${DOCKER_IMAGE_NAME}"

printf "Removing old Image %s\n" "${DOCKER_IMAGE_NAME}"
sudo docker image rm "${DOCKER_IMAGE_NAME}"

echo "Creating Docker Image"
sudo docker build -t ${DOCKER_IMAGE_NAME} . --no-cache

retVal=$?
if [ $retVal -ne 0 ]; then
    printf "Error. Exit code %s\n" ${retVal}
    exit $retVal
fi

echo "Running Docker Container ${DOCKER_CONTAINER_NAME} based in image ${DOCKER_IMAGE_NAME}"
sudo docker run --cpus="1.0" --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_NAME}

See it in action
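
A quick way to see it in action: run the Container with the script above and, from another terminal, send the Signal (the Container name docker-signal comes from the script):

sudo docker kill --signal=SIGUSR1 docker-signal

You should then see the handler's message, Simulated action: Reload config, printed in the Container's output.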

References

https://docs.docker.com/engine/reference/commandline/kill/

https://man7.org/linux/man-pages/man7/signal.7.html

https://docs.python.org/3/library/signal.html

If you want you can buy my Docker Combat Guide book.

News from the blog 2021-09-20

  • I’ve published a very simple game, Tic Tac Toe, that I created for my Python 3 Exercises for Beginners book.
  • I’ve raised the price of my books back to normal levels.
    I’ve been keeping the price to the minimum to help people who wanted to learn during covid-19. I consider that whoever wanted to learn has already done so.

I still have bundles with a somewhat reduced price, and I authorized the LeanPub platform to do discounts of up to 50% at their discretion.

Bundle of four books in https://leanpub.com/b/python3-exercises-zfs-assemble-computer

  • I’ve been deleting AMIs, Snapshots, Volumes and backups from Amazon instances I’ll no longer use.

I’ve migrated some sites and WordPress sites to Docker, and now I’m CSP (Cloud Service Provider) agnostic. I can deploy wherever I want.

We pay per GB of storage used, so my money will get better usage.

As I said in my old article from 2013, The Cloud is for Scaling. For Startups and for Enterprises. It is too expensive for small and medium companies.

  • For those studying Python there is a Virtual Meetup about Data Analysis, in Spanish, on the 23rd of September:

https://www.meetup.com/tech-barcelona/events/280791310/

More meetups:

https://www.meetup.com/tech-barcelona/

Have a cheap Ubuntu in your Windows or Mac with Docker

I had this idea after one of my Python and Linux students, with two laptops, a Mac OS X one and a Windows one, explained to me that the Mac OS X is often taken by her daughters, and that the Windows 10 laptop does not have enough memory to run PyCharm and VirtualBox fluently. She wanted to have a Linux VM to practice Linux and do the Bash exercises.

So this article explains how to create an Ubuntu 20.04 LTS Docker Container and execute a shell where you can practice Linux, Ubuntu, and Bash, and you can use it to run Python, Apache, PHP, MySQL… as well, if you want.

You need to install Docker for Windows or for Mac:

Docker for Windows is very handy and visual

Just pay attention to your type of processor: Mac with an Intel chip or Mac with an Apple chip.

The first thing is to create the Dockerfile.

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y vim python3-pip &&  \
    apt install -y net-tools mc htop less strace zip gzip lynx && \
    pip3 install pytest && \
    apt-get clean

RUN echo "#!/bin/bash\nwhile [ true ]; do sleep 60; done" > /root/loop.sh; chmod +x /root/loop.sh

CMD ["/root/loop.sh"]

So basically the file named Dockerfile contains all the blueprints for our Docker Container to be created.

You see that I do all the installs and clean-ups in one single line. That’s because Docker generates a layer of virtual disk per line in the Dockerfile. The layers are persistent, so even if in the next line we delete the temporary files, the space used will not be recovered.

You also see that I generate a Bash file with an infinite loop that sleeps 60 seconds on each iteration, and save it as /root/loop.sh. This is the file that is later called with CMD, so basically when the Container is started it will execute this infinite loop. Basically we give the Container a never-ending task to keep it running and prevent it from exiting.
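
As an aside, a single long-running command achieves the same keep-alive effect; for example this CMD, which is my own variation and not part of the Dockerfile above:

CMD ["sleep", "infinity"]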

Now that you have the Dockerfile, it is time to build the Image.

For Mac open a terminal and type this command inside the directory where you have the Dockerfile file:

sudo docker build -t cheap_ubuntu .

I called the image cheap_ubuntu but you can set the name that you prefer.

For Windows 10 open a Command Prompt with Administrative rights and then change directory (cd) to the one that has your Dockerfile file.

docker.exe build -t cheap_ubuntu .
Image being built… (some data has been covered in white)

Now that you have the image built, you can create a Container based on it.

For Mac:

sudo docker run -d --name cheap_ubuntu cheap_ubuntu

For Windows (you can use docker.exe or just docker):

docker.exe run -d --name cheap_ubuntu cheap_ubuntu

Now you have a Container named cheap_ubuntu based on the image cheap_ubuntu.

It’s time to execute an interactive shell and be able to play:

sudo docker exec -it cheap_ubuntu /bin/bash

For Windows:

docker.exe exec -it cheap_ubuntu /bin/bash
Our Ubuntu terminal inside Windows

Now you have an interactive shell, as root, to your cheap_ubuntu Ubuntu 20.04 LTS Container.

You’ll not be able to run the graphical interface, but you have a complete Ubuntu to learn to program in Bash and to use Linux from the Command Line.

You will exit the interactive Bash session in the container with:

exit

If you want to stop the Container:

sudo docker stop cheap_ubuntu

Or for Windows:

docker.exe stop cheap_ubuntu

If you want to see what Containers are running do:

sudo docker ps