Tag Archives: Docker

News from the Blog 2022-01-22

News for the Blog

It has been 9 years since I created the blog, and some articles with old content still get many visitors. To make sure they get a clear picture and not obsolete information, all the articles of the taxonomy POST will now include the Date and Time when the article was first published.

I also removed the annoying link “Leave a comment” on top. I think it influences some people to leave comments before actually having read the article.

It is still possible to add comments, but they are at the bottom of the page. I believe it makes more sense this way. This is the way.

Technically that involved modifying the files of my template:

  • functions.php
  • content.php

My Books

Automating and Provisioning to Amazon AWS using boto3 Amazon’s SDK for Python 3

I finished my book about Automating and Provisioning to Amazon AWS using boto3 Amazon’s SDK for Python 3.

It’s 128 pages in full-size DIN-A4, DRM-Free, and comes with a link to the code samples of a real, CLI menu based project.

Docker Combat Guide

I have updated my book Docker Combat Guide and added a completely new section, including source code, about working with Docker’s Python SDK.

I demonstrate the Docker SDK with the code of an actual CLI program I wrote in Python 3.
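
If you have never used the Docker SDK for Python, here is a minimal sketch (my own example, not an excerpt from the book) of what it looks like; it assumes you have installed the SDK with pip3 install docker and that your user can talk to the Docker daemon:

import docker

# Connect to the local Docker daemon using the environment configuration
o_client = docker.from_env()

# Print name, short id and status of every running Container
for o_container in o_client.containers.list():
    print(o_container.name, o_container.short_id, o_container.status)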

Here you can see a video that demonstrates how I launched a project with three Docker Containers via Docker Compose. The Containers run Python, the Flask webserver, and Redis as a bridge between the two Python Containers.

All the source code is downloadable with the book.

Watch at 1080p in full screen for a better experience

My Classes

In January I resumed the coding classes. I have new students, and a few free spots in my agenda, as some of my students graduated and others have been hired as Software Developers. I could not be more proud. :)

Free Training

Symfony is one of the most popular PHP Frameworks.

You can learn it with these free videos:

https://symfonycasts.com/screencast/symfony/

Software Licenses I Purchased

Before leaving 2021, I registered WinRAR for Windows.

WinRAR is compression Software that has been with us for 19 years.

I’m pretty sure I registered it in the past, but these holidays I was out with only two Windows laptops, I had to do some work for the university, and WinRAR came in handy, so I decided to register it again.

I create Software and Books, and I earn my living with this, so it makes a lot of sense to pay others for their good work crafting Software.

Books I Purchased

I bought this book and, so far, it is really good. I wanted to buy some updated books, as all my Linux books are a few years old already. Also, I keep my skills sharp by reading, reading, reading.

Hardware I purchased

So I bought a cheap car power inverter.

The ones I saw on Amazon were €120+ and did not have very good ratings, so I opted to buy a cheap one at the supermarket and keep it in the car just in case one day I need it. (My new Asus Zenbook laptop has 18 hours of autonomy and I don’t charge it for days, but you never know.)

For those who don’t know, a power inverter gives you a 220V (120V in the US) plug from your car’s lighter socket. You can also draw the energy from an external car battery. This comes in really handy to charge the laptop, cameras, your drone… if you are out in nature and there is no outlet nearby.

I bought one years ago to power Raspberry Pis when I was doing Research for a project I was considering launching.

Fun

Many friends are using Starlink as a substitute for fiber for their rural homes, and they are super happy with it.

One of them sent me a very fun article.

It is in Italian, but you can google translate it.

https://leganerd.com/2022/01/10/starlink-ha-un-piccolo-e-adorabile-problema-con-i-gatti/

Anyway, you can get the idea of what’s going on from the picture :)

So tell me… so your speed with Starlink drops 80% in winter uh… aha…

Random news about Software

I tried the voice recognition in Slack huddles, and it works pretty well. Zoom also has this feature, and both are great, especially when you are in a group call or in a class.

My health

I was experiencing some problems, so I scheduled an appointment to get a blood analysis and to be checked. Just in case.

TL;DR: I could have died.

The doctors saw my analysis and sent me to the hospital urgently, where they found something that was going to be lethal. For hours they were checking me and doing several more analyses and tests to rule out false positives, etc… They pinpointed the issue, provided urgent treatment, and confirmed that I could have died at any moment.

Basically I dodged a bullet.

I was doing certain healthy things that helped me in a situation that could have been deadly or extremely dangerous to my health.

With the treatment and my strict discipline, I reversed the situation really quickly and now I have more health and more energy than before. I feel rejuvenated.

I’m feeling lucky that with my work, the classes, the books I wrote, etc… I didn’t have to worry about the expensive medicines, the transport, etc… It was a bad moment, during Christmas, with so many people on holidays and pharmacies and GPs closed, so I had to spend more time searching and traveling, and pay more, than would have been strictly necessary. Despite all the time I devoted to my health, I managed to finish my university duties on time, I didn’t miss my duties at work after the hospital, nor did I have to cancel any programming classes or mentoring sessions. Nobody outside my closest circle knew what was going on, with the exception of my boss, whom I kept informed in real time, just in case there was any problem, as I didn’t want to let down the company and leave my duties at work unattended if something major happened.

I was not afraid to die. Unfortunately I’ve lost very significant people since I was a child. Relatives, very appreciated bosses and colleagues whom I considered friends, and great friends from different circles. Illnesses, accidents, a friend of mine who committed suicide years ago, and some of my partners who attempted it (before we knew each other). When you see people that good leaving, it brings a sadness that cannot be explained with words. I have had a tough life.

We have limited time, and we have the freedom to make choices. Some people choose to be miserable, to mistreat others, to lie, to cheat, to be unfaithful, to lack ethics and integrity. Those are their choices. Their wasted time will not come back.

Some of my friends are doctors. I admire them. They save lives and improve the quality of life of people with health problems.

I like being an Engineer because I can create things, I can build instead of destroy, I can help to improve the world, and I can help users to have a good time and to avoid the frustration of services being down. I chose to do positive work. So many times I’ve been offered much bigger salaries to do something I didn’t like, or by companies that I don’t admire, and I refused. Because I wanted to make a better world. I know many people don’t think like that, and they only take, take, take. They are even unable to understand my choices, even to believe that I’m like that. But it’s enough that I know what I’m doing, that it makes sense for me, and that I know that I’m doing well. And then, one day, you realize that by doing well, by being fair and nice even when other people stabbed you in the back, you got to know fantastic people like you, and people that adore you and love to have you in their lives, in their companies… So I’m really fortunate. To all the good-hearted people around, who give without expecting anything in return, who try to make the world a better place: thank you.

Communicating with Docker Containers via Linux Signals and Python

Normally, if we need to refresh a config in a Container, we will spawn a new one, or we will get a shell with, for instance, sudo docker exec -it mycontainer /bin/sh and force a reload, or we will have to restart the Container.

What if we want to be able to reload the config at any moment without restarting the process, or to trigger a process in our Container (like a dump or a flush) in another way than implementing an API?

An unexplored way, for many, to communicate with your Container’s main process is to send Signals.

So basically I will show you how you can trap Signals within a Python process which is the main process for your Docker Container, and send them from your Hypervisor with the command:

sudo docker kill --signal=SIGUSR1 docker-signal

I chose to use SIGUSR1 as it is reserved for user-defined Signals.
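
If you prefer to send the Signal from Python instead of from the command line, this is a small sketch using the Docker SDK for Python (assuming the Container is named docker-signal, as in the project, and that the SDK is installed with pip3 install docker):

import docker

# Connect to the local Docker daemon
o_client = docker.from_env()

# Locate the running Container by name and send SIGUSR1 to its main process
o_container = o_client.containers.get("docker-signal")
o_container.kill(signal="SIGUSR1")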

You can clone the project or get the source code from:

https://gitlab.com/carles.mateo/docker-signal

The Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

RUN apt update && apt install -y python3 python3-pip vim less && apt-get clean

# This makes sure output is printed to the screen when running in detached mode
ENV PYTHONUNBUFFERED=1

ENV DOCKERSIGNAL /var/dockersignal

RUN mkdir -p $DOCKERSIGNAL

COPY *.py $DOCKERSIGNAL

WORKDIR $DOCKERSIGNAL

# Again, to enforce printing to the screen when running detached
CMD ["python3", "-u", "/var/dockersignal/dockersignal.py"]

The dockersignal.py file

# By Carles Mateo https://blog.carlesmateo.com

import signal
import time


def handler(signum, frame):
    print('Signal handler called with signal', signum)

    if signum == 10:
        # 10 is the equivalent to SIGUSR1 for most x86/ARM (not for Alpha/Sparc, MIPS, PARISC)
        print("Simulated action: Reload config")


if __name__ == "__main__":

    print("Waiting for a Signal")
    # Listen for this signal; you can register handlers for more signals the same way
    signal.signal(signal.SIGUSR1, handler)

    while True:
        # Do Whatever
        time.sleep(1)
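
A slightly more portable variant of the handler, just as a sketch, is to compare against signal.SIGUSR1 itself instead of the hardcoded number 10, so the check keeps working on architectures where SIGUSR1 has a different number:

import signal


def handler(signum, frame):
    print('Signal handler called with signal', signum)

    # Comparing against the constant avoids depending on the numeric value,
    # which differs on Alpha/Sparc, MIPS and PARISC
    if signum == signal.SIGUSR1:
        print("Simulated action: Reload config")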

A shell file to build and run the Container like a pro

#!/bin/bash

DOCKER_CONTAINER_NAME="docker-signal"
DOCKER_IMAGE_NAME="docker-signal"

printf "Removing old Container %s\n" "${DOCKER_CONTAINER_NAME}"
sudo docker rm "${DOCKER_CONTAINER_NAME}"

printf "Removing old Image %s\n" "${DOCKER_IMAGE_NAME}"
sudo docker image rm "${DOCKER_IMAGE_NAME}"

echo "Creating Docker Image"
sudo docker build -t ${DOCKER_IMAGE_NAME} . --no-cache

retVal=$?
if [ $retVal -ne 0 ]; then
    printf "Error. Exit code %s\n" ${retVal}
    exit
fi

echo "Running Docker Container ${DOCKER_CONTAINER_NAME} based in image ${DOCKER_IMAGE_NAME}"
sudo docker run --cpus="1.0" --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_NAME}

See it in action

References

https://docs.docker.com/engine/reference/commandline/kill/

https://man7.org/linux/man-pages/man7/signal.7.html

https://docs.python.org/3/library/signal.html

If you want you can buy my Docker Combat Guide book.

News from the blog 2021-09-20

  • I’ve published a very simple game, Tic Tac Toe, that I created for my Python 3 Exercises for Beginners book.
  • I’ve raised back the price for my books to normal levels.
    I had been keeping the price at the minimum to help people who wanted to learn during covid-19. I consider that whoever wanted to learn has already done it.

I still have bundles at a somewhat reduced price, and I authorized the LeanPub platform to apply discounts of up to 50% at their discretion.

Bundle of four books in https://leanpub.com/b/python3-exercises-zfs-assemble-computer

  • I’ve been deleting AMIs, Snapshots, Volumes and backups from Amazon instances I’ll no longer use.

I’ve migrated some sites and WordPress sites to Docker, and now I’m CSP (Cloud Service Provider) agnostic. I can deploy wherever I want.

We pay per GB of storage used, so my money will be put to better use.

As I said in my old article from 2013, The Cloud is for Scaling. For Startups and for Enterprises. It is too expensive for small and medium companies.

  • For those studying Python there is a Virtual Meetup about Data Analysis, in Spanish, on the 23rd of September:

https://www.meetup.com/tech-barcelona/events/280791310/

More meetups:

https://www.meetup.com/tech-barcelona/

Have a cheap Ubuntu in your Windows or Mac with Docker

I had this idea after one of my Python and Linux students, who has two laptops (a Mac OS X one and a Windows one), explained to me that the Mac OS X is often taken by her daughters, and that the Windows 10 laptop does not have enough memory to run PyCharm and VirtualBox fluently. She wanted to have a Linux VM to practice Linux and do the Bash exercises.

So this article explains how to create an Ubuntu 20.04 LTS Docker Container and execute a shell where you can practice Linux, Ubuntu, and Bash, and which you can also use to run Python, Apache, PHP, MySQL… if you want.

You need to install Docker for Windows or for Mac:

Docker for Windows is very handy and visual

Just pay attention to your type of processor: Mac with Intel chip or Mac with Apple chip.

The first thing is to create the Dockerfile.

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y vim python3-pip &&  \
    apt install -y net-tools mc htop less strace zip gzip lynx && \
    pip3 install pytest && \
    apt-get clean

RUN echo "#!/bin/bash\nwhile [ true ]; do sleep 60; done" > /root/loop.sh; chmod +x /root/loop.sh

CMD ["/root/loop.sh"]

So basically the file named Dockerfile contains all the blueprints for our Docker Container to be created.

You see that I do all the installs and clean-ups in one single line. That’s because Docker generates a layer of virtual disk for each line in the Dockerfile. The layers are persistent, so even if in the next line we delete the temporary files, the space used will not be recovered.

You also see that I generate a Bash file with an infinite loop that sleeps 60 seconds per iteration, and save it as /root/loop.sh. This is the file that later is called with CMD, so when the Container is created it will execute this infinite loop. Basically we give the Container a never-ending task to prevent it from finishing its work and exiting.

Now that you have the Dockerfile, it is time to build the Image.

For Mac open a terminal and type this command inside the directory where you have the Dockerfile file:

sudo docker build -t cheap_ubuntu .

I called the image cheap_ubuntu but you can set the name that you prefer.

For Windows 10 open a Command Prompt with Administrative rights and then change directory (cd) to the one that has your Dockerfile file.

docker.exe build -t cheap_ubuntu .
Image being built… (some data has been covered in white)

Now that you have the image built, you can create a Container based on it.

For Mac:

sudo docker run -d --name cheap_ubuntu cheap_ubuntu

For Windows (you can use docker.exe or just docker):

docker.exe run -d --name cheap_ubuntu cheap_ubuntu

Now you have a Container named cheap_ubuntu based on the image cheap_ubuntu.

It’s time to execute an interactive shell and be able to play:

sudo docker exec -it cheap_ubuntu /bin/bash

For Windows:

docker.exe exec -it cheap_ubuntu /bin/bash
Our Ubuntu terminal inside Windows

Now you have an interactive shell, as root, to your cheap_ubuntu Ubuntu 20.04 LTS Container.

You won’t be able to run the graphical interface, but you have a complete Ubuntu to learn to program in Bash and to use Linux from the Command Line.

You will exit the interactive Bash session in the container with:

exit

If you want to stop the Container:

sudo docker stop cheap_ubuntu

Or for Windows:

docker.exe stop cheap_ubuntu

If you want to see what Containers are running do:

sudo docker ps

News of the blog 2021-08-16

  • I completed my ZFS on Ubuntu 20.04 LTS book.
    I had an error on an actual hard drive, so I added a Troubleshooting section explaining how I fixed it.
  • I paused for a while the progress of my book Python: basic exercises for beginners, as my colleague Michela is translating it into Italian. She is a great Engineer and I could not be happier to have her help.
  • I added a new article about how to create a simple web Star Wars game using Flask.
    As always, I use Docker and a Dockerfile to automate the deployment, so you can test it without messing with your local system.
    The code is very simple and easy to understand.
mysql> UPDATE wp_options set option_value='blog.carlesmateo.local' WHERE option_name='siteurl';
Query OK, 1 row affected (0.02 sec)
Rows matched: 1 Changed: 1 Warnings: 0

This way I set an entry in /etc/hosts and I can do all the tests I want.

  • I added a new section to the blog; it is a link where you can see all the articles published, ordered by number of views.
    /posts_and_views.php

It is on the main page, just after the recommended articles.
Here you can see the source code.

  • I removed the Categories:
    • Storage
      • ZFS
  • In favor of:
    • Hardware
      • Storage
        • ZFS
  • So the articles with Categories from the deleted group were reassigned to the Categories in the second group.
  • Visually:
    • I removed some annoying lines from the Quick Selection access.
      They came from inherited CSS properties of my long-customized WordPress theme, so I created new styles for this section.
    • I adjusted the line-height to avoid too much separation between the lines.
  • I added a link, in the section of Other Engineering Blogs that I like, to the great https://github.com/lesterchan site, by the author of many super cool WordPress plugins.

Migrating some Services from Amazon to Digital Ocean

Analyzing the needs

I start with a VM, to learn about the providers and the migration project as I go.

My VM has been running in Amazon AWS for years.

It has 3.5GB of RAM and 1 Core. However, it uses only 580MB of RAM. I’m paying around $85/month for this with Amazon.

I need to migrate:

  • DNS Server
  • Email
  • Web
  • Database

I don’t need the DNS Server anymore: each Domain provider now includes DNS Service for free, so I no longer need to run my two DNS Servers.

For the email I find myself in the same scenario: most providers offer 3 email accounts for your domain, and some aliases, for free.

I’ll run the Service as a Docker Container in the new CSP, so I will make it work on my computer first, locally, and I will be able to move it easily in the future.

Note: exporting big Docker images is not how I intend to make backups.

I locate a Digital Ocean droplet with 1GB of RAM, 1 core and SSD disks for $5; for $6 I can have an NVMe version, which is the one I choose.

Disk Space for the Static Files

The first thing I do is to analyze the disk space needs of the service.

In this old AWS CentOS based image I have:

[root@ip-10-xxx-yyy-zzz ec2-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       79G   11G   69G  14% /
devtmpfs        1.8G   12K  1.8G   1% /dev
tmpfs           1.8G     0  1.8G   0% /dev/shm

Ok, so if I keep the same content I have, I need 11GB.

I have plenty of space on this server so I do a zip of all the contents of the blog:

cd /var/www/wordpress
zip -r /home/ec2-user/wp_siteZ.zip wp_siteZ

Database dump

I need a dump of the databases I want to migrate.

I check what databases are in this Server.

mysql -u root -p

mysql> show databases;

I do a dump of the databases that I want:

sudo mysqldump --password='XXXXXXXX' --databases wp_mysiteZ > wp_mysiteZ.sql

I get an error, meaning MySQL needs repair:

mysqldump: Got error: 145: Table './wp_mysiteZ/wp_visitor_maps_wo' is marked as crashed and should be repaired when using LOCK TABLES

So I launch a repair:

sudo mysqlcheck --password='XXXXXXXX' --repair --all-databases

And after that, the dump works.

My dump takes 88MB, not much, but I compress it with gzip.

gzip wp_mysiteZ.sql

It takes only 15MB compressed.

Do not forget the parameter --databases even if only one database is exported, otherwise the CREATE DATABASE and USE `wp_mysiteZ`; statements will not be added to your dump.

I will need to take some data from the mysql database, referring to the user used for accessing the blog’s database.

I always keep the CREATE USER and the GRANT permission statements; if you don’t have them, check the wp-config.php file for the credentials. Note that the SQL syntax to create users and grant permissions may differ from one MySQL version to another.

I create a file named mysql.sql with this part and I compress it with gzip.

Checking PHP version

php -v
PHP 7.3.23 (cli) (built: Oct 21 2020 20:24:49) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.3.23, Copyright (c) 1998-2018 Zend Technologies

WordPress is updated, and PHP is not that old.

The new Ubuntu 20.04 LTS comes with PHP 7.4. It will work:

php -v
PHP 7.4.3 (cli) (built: Jul  5 2021 15:13:35) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
    with Zend OPcache v7.4.3, Copyright (c), by Zend Technologies

The Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

# RUN echo "nameserver 8.8.8.8" > /etc/resolv.conf

RUN echo "Europe/Ireland" | tee /etc/timezone

# Note: You should install everything in a single line concatenated with
#       && and finalizing with 
# apt autoremove && apt clean

#       In order to use the less space possible, as every command 
#       is a layer

RUN apt update && apt install -y apache2 ntpdate libapache2-mod-php7.4 mysql-server php7.4-mysql php-dev libmcrypt-dev php-pear git less zip vim mc && apt autoremove && apt clean

RUN a2enmod rewrite

RUN mkdir -p /www

# If you want to activate Debug
# RUN sed -i "s/display_errors = Off/display_errors = On/" /etc/php/7.2/apache2/php.ini 
# RUN sed -i "s/error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT/error_reporting = E_ALL/" /etc/php/7.2/apache2/php.ini 
# RUN sed -i "s/display_startup_errors = Off/display_startup_errors = On/" /etc/php/7.2/apache2/php.ini 
# To Debug remember to change:
# config/{production.php|preproduction.php|devel.php|docker.php} 
# in order to avoid Error Reporting being set to 0.

ENV PATH_WP_MYSITEZ /var/www/wordpress/wp_mysitez/
ENV PATH_WORDPRESS_SITES /var/www/wordpress/

ENV APACHE_RUN_USER  www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR   /var/log/apache2
ENV APACHE_PID_FILE  /var/run/apache2/apache2.pid
ENV APACHE_RUN_DIR   /var/run/apache2
ENV APACHE_LOCK_DIR  /var/lock/apache2
ENV APACHE_LOG_DIR   /var/log/apache2

RUN mkdir -p $APACHE_RUN_DIR
RUN mkdir -p $APACHE_LOCK_DIR
RUN mkdir -p $APACHE_LOG_DIR
RUN mkdir -p $PATH_WP_MYSITEZ

# Remove the default Server
RUN sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/{/<\/Directory>/ s/.*/# var-www commented/; t; d}' /etc/apache2/apache2.conf 

RUN rm /etc/apache2/sites-enabled/000-default.conf

COPY wp_mysitez.conf /etc/apache2/sites-available/

RUN chown --recursive $APACHE_RUN_USER.$APACHE_RUN_GROUP $PATH_WP_MYSITEZ

RUN ln -s /etc/apache2/sites-available/wp_mysitez.conf /etc/apache2/sites-enabled/

# Please note: It would be better to git clone from another location and
# gunzip and delete temporary files in the same line, 
# to save space in the layer.
COPY *.sql.gz /tmp/

RUN gunzip /tmp/*.sql.gz; echo "Starting MySQL"; service mysql start && mysql -u root < /tmp/wp_mysitez.sql && mysql -u root < /tmp/mysql.sql; rm -f /tmp/*.sql; rm -f /tmp/*.gz
# After this root will have password assigned

COPY *.zip /tmp/

COPY services_up.sh $PATH_WORDPRESS_SITES

RUN echo "Unzipping..."; cd /var/www/wordpress/; unzip /tmp/*.zip; rm /tmp/*.zip

RUN chown --recursive $APACHE_RUN_USER.$APACHE_RUN_GROUP $PATH_WP_MYSITEZ

EXPOSE 80

CMD ["/var/www/wordpress/services_up.sh"]

Services up

For starting MySQL and Apache I rely on the services_up.sh script.

#!/bin/bash
echo "Starting MySql"
service mysql start

echo "Starting Apache"
service apache2 start
# /usr/sbin/apache2 -D FOREGROUND

while [ true ];
do
    ps ax | grep mysql | grep -v "grep "
    if [ $? -gt 0 ];
    then
        service mysql start
    fi
    sleep 10
done

You see that instead of launching apache2 in FOREGROUND, what keeps my Container from exiting is a while [ true ]; loop that keeps checking whether MySQL is up, and restarts it otherwise.

MySQL shutting down

Some of my sites receive DoS attacks. More than trying to shut down my sites, they are spammers trying to publish comments advertising fake glasses, or medicines for impotence, etc… Also, some try to hack into the Server to gain control of it with dictionary attacks or by trying to exploit vulnerabilities.

The downside of those attacks is that sometimes the Database is under pressure, and uses more and more memory until it crashes.

More memory alleviates the problem and buys time, but I decided not to invest more than $6 USD per month on this old site. I’m just keeping the contents alive, and the site still receives many visits. Restarting MySQL if it dies is enough for me.

As you have seen in my Dockerfile, I only have one Docker Container that runs both Apache and MySQL. One of the advantages of doing it like that is that if MySQL dies, the Container does not exit. However, I could have had two Containers, each with its own script with the while [ true ]; loop.

When planning I decided to have just one single Container, all-in-one, as when I export the image for a Backup, I’ll be dealing only with a single image, not two.

Building and Running the Container

I created a Bash script named build_docker.sh that does the build for me, stopping and cleaning previous Containers:

#!/bin/bash

# Execute with sudo

s_DOCKER_IMAGE_NAME="wp_sitez"

printf "Stopping old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker stop "${s_DOCKER_IMAGE_NAME}"

printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"

printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
# sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache
sudo docker build -t ${s_DOCKER_IMAGE_NAME} .

i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit
fi

echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run type: sudo docker run -d -p 80:80 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"
echo
echo "Debug running Docker:"
echo "docker exec -it ${s_DOCKER_IMAGE_NAME} /bin/bash"
echo

I assign the same name to the image and to the running Container.

Running in Production

Once it works locally, I set the Firewall rules, I deploy the Droplet (VM) with Digital Ocean, I upload the files via SFTP, and then I just run my script build_docker.sh.

And assuming everything went well, I run it:

sudo docker run -d -p 80:80 --name wp_mysitez wp_mysitez

I check that the page works, and here we go.

Some improvements

This could also have been put in a private Git repository. You only have to be careful not to store the passwords in it (like the MySQL grants).

The build from the Git repository can be validated with Jenkins. Here you have an article about setting up Jenkins for yourself.

A sample Flask application

Today I bring you a game made with Python and Flask extracted from my book Python 3 Combat Guide.

It is a very simple game where you have to choose which Star Wars robot you prefer.

Then an internal counter, kept in a global variable, is updated.

I display the time as well, to show the use of an import and of dynamic content being printed.

I added a Dockerfile and a bash script to build the Docker Image, so you can run the Docker Container without installing anything on your computer.

You can download the code from here:

https://gitlab.com/carles.mateo/python-flask-r2d2

Or clone the project:

git clone https://gitlab.com/carles.mateo/python-flask-r2d2.git

Then build the image with the script I provided:

sudo ./build_docker.sh 

After the Docker Image flask_app is built, you can run a Docker Container based on it with:

sudo docker run -d -p 5000:5000 --name flask_app flask_app

After you’re done, in order to stop the Container type:

sudo docker stop flask_app

Here is the source code of the Python file flask_app.py:

#
# flask_app.py
#
# Author: Carles Mateo
# Creation Date: 2020-05-10 20:50 GMT+1
# Description: A simple Flask Web Application
#              Part of the samples of https://leanpub.com/pythoncombatguide
#              More source code for the book at https://gitlab.com/carles.mateo/python_combat_guide
#

from flask import Flask
import datetime


def get_datetime(b_milliseconds=False):
    """
    Return the datetime with milliseconds in format YYYY-MM-DD HH:MM:SS.xxxxx
    or without milliseconds as YYYY-MM-DD HH:MM:SS
    """
    if b_milliseconds is True:
        s_now = str(datetime.datetime.now())
    else:
        s_now = str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))

    return s_now


app = Flask(__name__)

# Those variables will keep their value as long as Flask is running
i_votes_r2d2 = 0
i_votes_bb8 = 0


@app.route('/')
def page_root():
    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>Who is more sexy?</h2>
<a href="r2d2"><img src="static/r2d2.png"></a> <a href="bb8"><img width="250" src="static/bb8.jpg"></a>"""
    s_page += "</body>"
    s_page += "</html>"

    return s_page


@app.route('/bb8')
def page_bb8():
    global i_votes_bb8

    i_votes_bb8 = i_votes_bb8 + 1

    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>BB8 Is more sexy!</h2>
                <img width="250" src="static/bb8.jpg">"""
    s_page += "<p>I have: " + str(i_votes_bb8) + "</p>"
    s_page += "</body>"
    s_page += "</html>"

    return s_page


@app.route('/r2d2')
def page_r2d2():
    global i_votes_r2d2

    i_votes_r2d2 = i_votes_r2d2 + 1

    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>R2D2 Is more sexy!</h2>
                <img src="static/r2d2.png">"""
    s_page += "<p>I have: " + str(i_votes_r2d2) + "</p>"
    s_page += "</body>"
    s_page += "</html>"

    return s_page


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)

As always, the naming of the variables is based on MT Notation.
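
For readers not familiar with it, this is roughly how the prefixes look in practice (my own summary, not an excerpt from the book): the prefix of each variable encodes its type.

import datetime

# MT Notation: the prefix encodes the type of the variable
s_name = "R2D2"                      # s_ for strings
i_votes = 0                          # i_ for integers
b_success = True                     # b_ for booleans
a_rows = ["r2d2", "bb8"]             # a_ for arrays/lists
o_now = datetime.datetime.now()      # o_ for object instances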

The Dockerfile is very straightforward:

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install -y vim python3-pip &&  pip3 install pytest && \
    apt-get clean

ENV PYTHON_COMBAT_GUIDE /var/python_combat_guide

RUN mkdir -p $PYTHON_COMBAT_GUIDE

COPY ./ $PYTHON_COMBAT_GUIDE

ENV PYTHONPATH "${PYTHONPATH}:$PYTHON_COMBAT_GUIDE/src/:$PYTHON_COMBAT_GUIDE/src/lib"

RUN pip3 install -r $PYTHON_COMBAT_GUIDE/requirements.txt

# This would be important so that, when executing python3 -m, the current directory is added to sys.path
# It is not necessary here, as we already added the paths to PYTHONPATH
#WORKDIR $PYTHON_COMBAT_GUIDE/src/lib

EXPOSE 5000

# Launch our Flask Application
CMD ["/usr/bin/python3", "/var/python_combat_guide/src/flask_app.py"]

News from the blog 2021-07-23

  • I’ve released v. 0.99 of carleslibs package
    This package includes utilities for:
    • Files and Directories handling
    • Date/Time retrieval
    • Python version detection

You can install it with:

pip install carleslibs

The minimum requirement declared is Python 3.6; although the utilities also work with Python 3.5 and Python 2.7, I want to drop support for versions that are no longer maintained.

Instructions can be found here: carleslibs page.

A small Python + MySql + Docker program as a sample

This article can be found in my book Python Combat Guide.

I wrote this code and article in order to help my Python students to mix together Object Oriented Programming, MySql, and Docker.

You can have everything in action just by downloading the code and running the build_docker.sh and docker_run.sh scripts.

You can download the source code from:

https://gitlab.com/carles.mateo/python-mysql-example

and clone with:

git clone https://gitlab.com/carles.mateo/python-mysql-example.git

Installing the MySql driver

We are going to use Oracle’s official MySql driver for Python.

All the documentation is here:

https://dev.mysql.com/doc/connector-python/en/

In order to install we will use pip.

To install it in Ubuntu:

pip install mysql-connector-python

On Mac OS X you have to use pip3 instead of pip.

However, we are going to run everything from a Docker Container, so the only thing you need is to have Docker installed.

If you prefer to install MySql in your computer (or Virtual Box instance) directly, skip the Docker steps.

Dockerfile

The Dockerfile is the file that Docker uses to build the Docker Image.

Ours is like that:

FROM ubuntu:20.04

MAINTAINER Carles Mateo

ARG DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y python3 python3-pip mysql-server vim mc wget curl && apt-get clean
RUN pip3 install mysql-connector-python

EXPOSE 3306

ENV FOLDER_PROJECT /var/mysql_carles

RUN mkdir -p $FOLDER_PROJECT

COPY docker_run_mysql.sh $FOLDER_PROJECT
COPY start.sql $FOLDER_PROJECT
COPY src $FOLDER_PROJECT

RUN chmod +x /var/mysql_carles/docker_run_mysql.sh

CMD ["/var/mysql_carles/docker_run_mysql.sh"]

The first line defines that we are going to use Ubuntu 20.04 (it’s an LTS version).

We install all the apt packages in a single line because Docker works in layers, and the disk space used in a previous layer is not recovered even if we later delete the files, so we want to run apt update, install all the packages, and clean the temporary files in one single step.

I also install some useful tools like: vim, mc, less, wget and curl.

We expose port 3306 to the outside, in case you want to run the Python code from your computer while keeping MySql in the Container.

The last line executes a script that starts the MySql service, creates the table and the user, adds two rows, and runs an infinite loop so the Docker Container does not finish.

build_docker.sh

build_docker.sh is a Bash script that builds the Docker Image for you very easily.

It stops the container and removes the previous image, so your hard drive does not fill with Docker images if you do modifications.

It checks for errors when building and it also reminds you how to run and debug the Docker Container.

#!/bin/bash

# Execute with sudo

s_DOCKER_IMAGE_NAME="blog_carlesmateo_com_mysql"

printf "Stopping old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker stop "${s_DOCKER_IMAGE_NAME}"

printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"

printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache

i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
printf "Error. Exit code %s\n" ${i_EXIT_CODE}
exit
fi

echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run type: sudo docker run -d -p 3306:3306 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"
echo
echo "Debug running Docker:"
echo "docker exec -it ${s_DOCKER_IMAGE_NAME} /bin/bash"
echo

docker_run.sh

I also provide a script named docker_run.sh that runs your Container easily, exposing the MySql port.

#!/bin/bash

# Execute with sudo

s_DOCKER_IMAGE_NAME="blog_carlesmateo_com_mysql"

docker run -d -p 3306:3306 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}

echo "Showing running Instances"
docker ps

As you saw before, I named the image blog_carlesmateo_com_mysql.

I did that basically because I wanted to make sure the name was unique: as build_docker.sh deletes any image with the name I chose, I didn’t want to use a generic name like “mysql” that could lead you to delete a Docker Image inadvertently.

docker_run_mysql.sh

This script will run when the Docker Container is launched for the first time:

#!/bin/bash

# Allow to be queried from outside
sed -i '31 s/bind-address/#bind-address/' /etc/mysql/mysql.conf.d/mysqld.cnf

service mysql start

# Create a Database, a user with password, and permissions
cd /var/mysql_carles
mysql -u root < start.sql

while [ true ]; do sleep 60; done

With the sed command we modify line 31 of the MySQL config file (bind-address = 127.0.0.1) so we can connect from outside the Docker Instance.

As you can see, it starts MySql and then executes, as root, the SQL contained in the file start.sql.

Please note: our MySql installation has no password set for root. It is only for Development purposes.

start.sql

The SQL file that will be run inside our Docker Container.

CREATE DATABASE carles_database;


CREATE USER 'python'@'localhost' IDENTIFIED BY 'blog.carlesmateo.com-db-password';
CREATE USER 'python'@'%' IDENTIFIED BY 'blog.carlesmateo.com-db-password';
GRANT ALL PRIVILEGES ON carles_database.* TO 'python'@'localhost';
GRANT ALL PRIVILEGES ON carles_database.* TO 'python'@'%';


USE carles_database;


CREATE TABLE car_queue (
i_id_car int,
s_model_code varchar(25),
s_color_code varchar(25),
s_extras varchar(100),
i_right_side int,
s_city_to_ship varchar(25)
);

INSERT INTO car_queue (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) VALUES (1, "GOLF2021", "BLUE7", "COND_AIR, GPS, MULTIMEDIA_V3", 0, "Barcelona");
INSERT INTO car_queue (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) VALUES (2, "GOLF2021_PLUGIN_HYBRID", "BLUEMETAL_5", "COND_AIR, GPS, MULTIMEDIA_V3, SECURITY_V5", 1, "Cork");

As you can see, it creates the user “python” with the password ‘blog.carlesmateo.com-db-password’ for local and remote (%) access.

It also creates a Database named carles_database and grants all the permissions to the user “python”, for local and remote.

This is the user we will use to authenticate from our Python code.

Then we switch to use the carles_database and we create the car_queue table.

We insert two rows, as an example.

select_values_example.py

Finally the Python code that will query the Database.

import mysql.connector

if __name__ == "__main__":
    o_conn = mysql.connector.connect(user='python', password='blog.carlesmateo.com-db-password', database='carles_database')
    o_cursor = o_conn.cursor()

    s_query = "SELECT * FROM car_queue"

    o_cursor.execute(s_query)

    for a_row in o_cursor:
        print(a_row)

    o_cursor.close()
    o_conn.close()

Nothing special: we open a connection to MySql, perform a query, and iterate over the cursor to get the rows as tuples.

Please note: Error control is disabled so you may see any exception.

Executing the Container

First step is to build the Container.

From the directory where you cloned the project, execute:

sudo ./build_docker.sh

Then run the Docker Container:

sudo ./docker_run.sh

The script also performs a docker ps command, so you can see that it’s running.

Entering the Container and running the code

Now you can enter inside the Docker Container:

docker exec -it blog_carlesmateo_com_mysql /bin/bash

Then change to the directory where I installed the sample files:

cd /var/mysql_carles

And execute the Python 3 example:

python3 select_values_example.py

Tying together MySql and a Python Menu with Object Oriented Programming

In order to tie it all together, and especially to give a consistent view to my students, to show a complete program instead of only pieces, and to show a bit of Object Oriented Programming in action, I developed a small program which simulates the handling of a production queue for Volkswagen.

MySQL Library

First I created a library to handle MySQL operations.

lib/mysqllib.py

import mysql.connector


class MySql():

    def __init__(self, s_user, s_password, s_database, s_host="127.0.0.1", i_port=3306):
        self.s_user = s_user
        self.s_password = s_password
        self.s_database = s_database
        self.s_host = s_host
        self.i_port = i_port

        o_conn = mysql.connector.connect(host=s_host, port=i_port, user=s_user, password=s_password, database=s_database)
        self.o_conn = o_conn

    def query(self, s_query):
        a_rows = []

        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)

        for a_row in o_cursor:
            a_rows.append(a_row)

        o_cursor.close()

        return a_rows

    def insert(self, s_query):

        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)
        i_inserted_row_count = o_cursor.rowcount

        # Make sure data is committed to the database
        self.o_conn.commit()

        return i_inserted_row_count

    def delete(self, s_query):

        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)
        i_deleted_row_count = o_cursor.rowcount

        # Make sure data is committed to the database
        self.o_conn.commit()

        return i_deleted_row_count


    def close(self):

        self.o_conn.close()

Basically when this class is instantiated, a new connection to the MySQL specified in the Constructor is established.

We have a method query() to send SELECT queries.

We have an insert method, to send INSERT or UPDATE queries, which returns the number of rows affected.

This method performs a commit to make sure the changes persist.

We have a delete method, to send DELETE SQL queries, which returns the number of rows deleted.

We have a close method which closes the MySql connection.
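
As a quick usage sketch (my own example; the credentials and the table come from start.sql, the inserted values are made up, and it assumes the Docker Container is running and exposing port 3306):

from lib.mysqllib import MySql

o_mysql = MySql(s_user="python", s_password="blog.carlesmateo.com-db-password",
                s_database="carles_database", s_host="127.0.0.1", i_port=3306)

# SELECT queries return a list of rows
a_rows = o_mysql.query("SELECT * FROM car_queue")
for a_row in a_rows:
    print(a_row)

# insert() commits and returns the number of rows affected
i_rows = o_mysql.insert("INSERT INTO car_queue "
                        "(i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) "
                        "VALUES (3, 'ID2021', 'RED1', 'GPS', 0, 'Dublin')")
print(i_rows, "row/s inserted")

o_mysql.close()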

A Data Object: CarDO

Then I’ve defined a class, to deal with Data and interactions of the cars.

do/cardo.py


class CarDO():

    def __init__(self, i_id_car=0, s_model_code="", s_color_code="", s_extras="", i_right_side=0, s_city_to_ship=""):
        self.i_id_car = i_id_car
        self.s_model_code = s_model_code
        self.s_color_code = s_color_code
        self.s_extras = s_extras
        self.i_right_side = i_right_side
        self.s_city_to_ship = s_city_to_ship

        # Sizes for render
        self.i_width_id_car = 6
        self.i_width_model_code = 25
        self.i_width_color_code = 25
        self.i_width_extras = 50
        self.i_width_side = 5
        self.i_width_city_to_ship = 15

    def print_car_info(self):
        print("Id:", self.i_id_car)
        print("Model Code:", self.s_model_code)
        print("Color Code:", self.s_color_code)
        print("Extras:", self.s_extras)
        s_side = self.get_word_for_driving_side()
        print("Drive by side:", s_side)
        print("City to ship:", self.s_city_to_ship)

    def get_word_for_driving_side(self):
        if self.i_right_side == 1:
            s_side = "Right"
        else:
            s_side = "Left"

        return s_side

    def get_car_info_for_list(self):

        s_output = str(self.i_id_car).rjust(self.i_width_id_car) + " "
        s_output += self.s_model_code.rjust(self.i_width_model_code) + " "
        s_output += self.s_color_code.rjust(self.i_width_color_code) + " "
        s_output += self.s_extras.rjust(self.i_width_extras) + " "
        s_output += self.get_word_for_driving_side().rjust(self.i_width_side) + " "
        s_output += self.get_s_city_to_ship().rjust(self.i_width_city_to_ship)

        return s_output

    def get_car_header_for_list(self):
        s_output = str("Id Car").rjust(self.i_width_id_car) + " "
        s_output += "Model Code".rjust(self.i_width_model_code) + " "
        s_output += "Color Code".rjust(self.i_width_color_code) + " "
        s_output += "Extras".rjust(self.i_width_extras) + " "
        s_output += "Drive".rjust(self.i_width_side) + " "
        s_output += "City to Ship".rjust(self.i_width_city_to_ship)

        i_total_length = self.i_width_id_car + self.i_width_model_code + self.i_width_color_code + self.i_width_extras + self.i_width_side + self.i_width_city_to_ship
        # Add the space between fields
        i_total_length = i_total_length + 5

        s_output += "\n"
        s_output += "=" * i_total_length

        return s_output

    def get_i_id_car(self):
        return self.i_id_car

    def get_s_model_code(self):
        return self.s_model_code

    def get_s_color_code(self):
        return self.s_color_code

    def get_s_extras(self):
        return self.s_extras

    def get_i_right_side(self):
        return self.i_right_side

    def get_s_city_to_ship(self):
        return self.s_city_to_ship

Initially I was going to have a CarDO Object without any logic. Only with Data.

In OOP the variables of the Instance are called Properties, and the functions Methods.

Then I decided to add some logic, so I can show the typical use of the objects.

So I will use CarDO as a Data Object, but also to provide a few functions, like printing the info of a Car.
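
A tiny sketch of using CarDO on its own (the values are taken from the first row inserted by start.sql):

from do.cardo import CarDO

o_car = CarDO(i_id_car=1, s_model_code="GOLF2021", s_color_code="BLUE7",
              s_extras="COND_AIR, GPS, MULTIMEDIA_V3", i_right_side=0,
              s_city_to_ship="Barcelona")

# The Data Object also knows how to render itself
o_car.print_car_info()
print(o_car.get_car_header_for_list())
print(o_car.get_car_info_for_list())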

Queue Manager

Finally the main program.

We also use Object Oriented Programming, and we use Dependency Injection to inject the MySQL Instance. That’s very practical for Unit Testing (see the small testing sketch after the listing).

from lib.mysqllib import MySql
from do.cardo import CarDO


class QueueManager():

    def __init__(self, o_mysql):
        self.o_mysql = o_mysql

    def exit(self):
        exit(0)

    def main_menu(self):
        while True:
            print("Main Menu")
            print("=========")
            print("")
            print("1. Add new car to queue")
            print("2. List all cars to queue")
            print("3. View car by Id")
            print("4. Delete car from queue by Id")
            print("")
            print("0. Exit")
            print("")

            s_option = input("Choose your option:")
            if s_option == "1":
                self.add_new_car()
            if s_option == "2":
                self.see_all_cars()
            if s_option == "3":
                self.see_car_by_id()
            if s_option == "4":
                self.delete_by_id()

            if s_option == "0":
                self.exit()

    def get_all_cars(self):
        s_query = "SELECT * FROM car_queue"

        a_rows = self.o_mysql.query(s_query)
        a_o_cars = []

        for a_row in a_rows:
            i_id_car = a_row[0]
            s_model_code = a_row[1]
            s_color_code = a_row[2]
            s_extras = a_row[3]
            i_right_side = a_row[4]
            s_city_to_ship = a_row[5]

            o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
            a_o_cars.append(o_car)

        return a_o_cars

    def get_car_by_id(self, i_id_car):
        b_success = False
        o_car = None

        s_query = "SELECT * FROM car_queue WHERE i_id_car=" + str(i_id_car)

        a_rows = self.o_mysql.query(s_query)

        if len(a_rows) == 0:
            # False, None
            return b_success, o_car

        i_id_car = a_rows[0][0]
        s_model_code = a_rows[0][1]
        s_color_code = a_rows[0][2]
        s_extras = a_rows[0][3]
        i_right_side = a_rows[0][4]
        s_city_to_ship = a_rows[0][5]

        o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
        b_success = True

        return b_success, o_car

    def replace_apostrophe(self, s_text):
        return s_text.replace("'", "´")

    def insert_car(self, o_car):

        s_sql = """INSERT INTO car_queue 
                                (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) 
                         VALUES 
                                (""" + str(o_car.get_i_id_car()) + ", '" + o_car.get_s_model_code() + "', '" + o_car.get_s_color_code() + "', '" + o_car.get_s_extras() + "', " + str(o_car.get_i_right_side()) + ", '" + o_car.get_s_city_to_ship() + "');"

        i_inserted_row_count = self.o_mysql.insert(s_sql)

        if i_inserted_row_count > 0:
            print("Inserted", i_inserted_row_count, " row/s")
            b_success = True
        else:
            print("It was impossible to insert the row")
            b_success = False

        return b_success

    def add_new_car(self):
        print("Add new car")
        print("===========")

        while True:
            s_id_car = input("Enter new ID: ")
            if s_id_car == "":
                print("A numeric Id is needed")
                continue

            i_id_car = int(s_id_car)

            if i_id_car < 1:
                continue

            # Check if that id existed already
            b_success, o_car = self.get_car_by_id(i_id_car=i_id_car)
            if b_success is False:
                # Does not exist
                break

            print("Sorry, this Id already exists")

        s_model_code = input("Enter Model Code:")
        s_color_code = input("Enter Color Code:")
        s_extras = input("Enter extras comma separated:")
        s_right_side = input("Enter R for Right side driven:")
        if s_right_side.upper() == "R":
            i_right_side = 1
        else:
            i_right_side = 0
        s_city_to_ship = input("Enter the city to ship the car:")

        # Sanitize SQL replacing apostrophe
        s_model_code = self.replace_apostrophe(s_model_code)
        s_color_code = self.replace_apostrophe(s_color_code)
        s_extras = self.replace_apostrophe(s_extras)
        s_city_to_ship = self.replace_apostrophe(s_city_to_ship)

        o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
        b_success = self.insert_car(o_car)

    def see_all_cars(self):
        print("")

        a_o_cars = self.get_all_cars()

        if len(a_o_cars) > 0:
            print(a_o_cars[0].get_car_header_for_list())
        else:
            print("No cars in queue")
            print("")
            return

        for o_car in a_o_cars:
            print(o_car.get_car_info_for_list())

        print("")

    def see_car_by_id(self, i_id_car=0):
        if i_id_car == 0:
            s_id = input("Car Id:")
            i_id_car = int(s_id)

        s_id_car = str(i_id_car)

        b_success, o_car = self.get_car_by_id(i_id_car=i_id_car)
        if b_success is False:
            print("Error, car id: " + s_id_car + " not located.")
            return False

        print("")
        o_car.print_car_info()
        print("")

        return True

    def delete_by_id(self):

        s_id = input("Enter Id of car to delete:")
        i_id_car = int(s_id)

        if i_id_car == 0:
            print("Invalid Id")
            return

        # reuse see_car_by_id
        b_found = self.see_car_by_id(i_id_car=i_id_car)
        if b_found is False:
            return

        s_delete = input("Are you sure you want to DELETE. Type Y to delete: ")
        if s_delete.upper() == "Y":
            s_sql = "DELETE FROM car_queue WHERE i_id_car=" + str(i_id_car)
            i_num = self.o_mysql.delete(s_sql)

            print(i_num, " Rows deleted")

            # if b_success is True:
            #     print("Car deleted successfully from the queue")


if __name__ == "__main__":

    try:

        o_mysql = MySql(s_user="python", s_password="blog.carlesmateo.com-db-password", s_database="carles_database", s_host="127.0.0.1", i_port=3306)

        o_queue_manager = QueueManager(o_mysql=o_mysql)
        o_queue_manager.main_menu()
    except KeyboardInterrupt:
        print("Detected CTRL + C. Exiting")

This program talks to the MySQL that we started in Docker previously.

We have access from inside the Docker Container, or from outside.

The idea of this simple program is to use a library for dealing with MySql, and objects for dealing with the Cars. The class CarDO contributes to rendering its data on the screen.
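
Because the MySql instance is injected into QueueManager, it is easy to replace it with a fake in a Unit Test. This is only a sketch using unittest.mock, not part of the project (it assumes the main program file is named queue_manager.py, as used below):

from unittest.mock import MagicMock

from queue_manager import QueueManager

# A fake MySql object that returns one fixed row, so no Database is needed
o_mysql_fake = MagicMock()
o_mysql_fake.query.return_value = [(1, "GOLF2021", "BLUE7", "COND_AIR, GPS, MULTIMEDIA_V3", 0, "Barcelona")]

o_queue_manager = QueueManager(o_mysql=o_mysql_fake)

b_success, o_car = o_queue_manager.get_car_by_id(i_id_car=1)
assert b_success is True
assert o_car.get_s_model_code() == "GOLF2021"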

To enter the Docker Container once you have generated it and it is running, do:

docker exec -it blog_carlesmateo_com_mysql /bin/bash

Then:

cd /var/mysql_carles 
python3 queue_manager.py

Bonus

I added a file called queue_manager.php so you can see how easy it is to render an HTML page, with data coming from the Database, from PHP.

Creating Jenkins configurations for your projects

Obviously for companies it is a must, but even if you work on your own projects, it is super useful to configure Jenkins, so you have continuous feedback when something breaks.

I’ll show you how to configure Jenkins for several projects using only your main computer/laptop.

Check my past article about setting up Jenkins in Docker.

Adding a new Freestyle project

Click on top left: New item.

Then give it an appropriate name and choose Freestyle Project.

Take into account that the name given will be used as the name of the workspace, so you may want to avoid special characters.

It is very convenient to let Jenkins deal with your repository changes instead of using shell commands, so I’m going to fill in this section.

I also provided credentials, so Jenkins can log in to my GitLab.

This kind of project is the simplest, and we will use the same Docker Container where Jenkins resides to run the Unit Testing of our code.

We are going to select to Build periodically.

If your Server is on the Internet, you can activate the Web Hooks so your Jenkins is notified via a web connection from GitLab, GitHub or your CVS provider. As I’m strictly running this at home, Jenkins will periodically check for changes in the repository and do nothing if there are no changes.

I’ll set H * * * * so Jenkins will try every hour.

Go down and select Add Build Step:

Select Execute shell.

Then add a basic echo command to print to the Console Output, and an ls command so you can see what is in the default directory your shell script executes in.

Now save your project.

And go back to Dashboard.

Click on Neurona.cat to view the Project’s Dashboard.

Click: Build Now. And then click on the Build task (Apr 5, 2021, 10:31 AM)

Click on Console Output.

You’ll see a verbose log of everything that happened.

You’ll see, for example, that Jenkins has executed the script in the path of the git project folder that we configured it to clone/pull.

This example doesn’t have tests. Let’s see one with Unit Tests.

Running Unit Testing with pytest

If we enter the project CTOP and then select Configure, you’ll see the steps I defined for making it run the Unit Testing.

In my case I wanted to have several steps, one for each Unit Test file.

In each one of them I have to enter the right directory before launching any test.

If you open the last successful build and select Console Output, you’ll see all the tests passing.

If a test goes wrong, pytest will exit with an Exit Code different from 0, so Jenkins will detect it and show that the Build failed.
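
For instance, a minimal test file (made up for illustration) for one of those steps could be:

# test_example.py - minimal pytest example

def test_sum():
    # If this assert fails, pytest exits with a non-zero Exit Code
    # and Jenkins marks the Build as failed
    assert 2 + 2 == 4

Running it with python3 -m pytest test_example.py returns Exit Code 0 when every assert passes.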

Building a Project from Pipeline

Pipeline is the set of plugins that allows us to do Continuous Deployment.

Fill in the information about your git project.

Then, in your GitLab or GitHub project, create a file named Jenkinsfile.

Jenkins will look for it when it clones your repo, to build the Pipeline.

Here is my Jenkinsfile in https://gitlab.com/carles.mateo/python_combat_guide/-/blob/master/Jenkinsfile

pipeline {
    agent any
    stages {
        stage('Show Environment') {
            steps {
                echo 'Showing the environment'
                sh 'ls -hal'
            }
        }
        stage('Updating from repository') {
            steps {
                echo 'Grabbing from repository'
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'git clone https://gitlab.com/carles.mateo/python_combat_guide.git; cd python_combat_guide; git pull'"
                    }
                }
            }
        }
        stage('Build Docker Image') {
            steps {
                echo 'Building Docker Container'
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'cd python_combat_guide; docker build -t python_combat_guide .'"
                    }
                }
            }
        }
        stage('Run the Tests') {
            steps {
                echo "Running the tests from the Container"
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'cd python_combat_guide; docker run  python_combat_guide'"
                    }
                }
            }
        }
    }
}

My Jenkins Docker installation has the sshpass command, and I use it to connect via SSH, with username and Password, to the server defined by the ip_fast environment variable.

We defined the variable ip_fast in Manage Jenkins > Configure System.

There, in Global Properties, Environment Variables, I defined ip_fast:

On the Build Server I’ll create a new user and allow it to use Docker:

sudo adduser jenkins_build

sudo usermod -aG docker jenkins_build

The Credentials can be managed from Manage Jenkins > Manage Credentials.

You can see how I use all this combined in the Jenkinsfile so I don’t have to store credentials in the CVS, and Jenkins (the Docker Container) will connect via SSH to the computer behind the ip_fast IP, to build and run another Container. That Container will run a command to do the Unit Testing. If something goes wrong, that is, if any program returns an Exit Code different from 0, Jenkins will consider the build failed.

Take into account that $? only stores the Exit Code of the last program, so be careful if you pass multiple commands in one single line, as this may mask an error.

Separating the execution in multiple Stages helps to save time, as after a failure, execution will not continue.

Also, visually, it is easy to see where the error is.