Only 2% of the viewers donate, so I answered the call every time it was made.
This is my 5th donation to Wikimedia.
I consider Freedom to be very important.
I bought these new books
One of my secrets to staying on top is that I’m always studying.
I study all the time, at work and in my free time.
I use Linux Academy and I buy paper books. I don’t connect with reading on tablets; I think information is retained better when read on paper. I also use a highlighter and page markers to keep direct access to the most interesting points in the books.
And I study all kinds of subjects. Obviously I know a lot about Web Scraping, but there is always room to learn more. And whatever new things I learn help me to be better with my students and clearer when writing my books.
I’ve never been a Front End developer, but I’ve been able to fix bugs in the Front End engines of the companies I worked for, like Privalia. I was passed a bug that prevented Internet Explorer users from buying, just one hour before we launched a massive campaign. I debugged and I found a variable named “value”, so the html looked like <input name="value" value="">. In less than 30 minutes I proved to the incredulous Head of Development and the CTO that a bug in Internet Explorer was causing a conflict when fetching the value from the input named value. We deployed the update to Production and the campaign was a total success. So I consider knowing Javascript and Front End a need too, even if I don’t work directly with it. I want to be able to understand all the requirements, possibilities and weaknesses, so I can fix bugs and save the day. That allowed me to fix scalability problems in Node.js and PhantomJS projects too. (They are Server Side, event-driven Javascript projects.)
It seems that Amazon.co.uk works well again for Ireland. My last two orders arrived on time and apparently I had no problems with border taxes.
Nice Python article
I enjoyed this article a lot, because it explains part of what I did with my student and friend Albert, in a project that analyzes the Apache access logs looking for patterns of exploit attempts, feeds a database, and then blocks the offending IP Addresses in the Firewall.
The article only covers the Pandas part, reading the access.log file and working with it, but it is a very well written article:
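As a taste of that approach, this is a minimal sketch of reading an Apache access.log with Pandas; the regex and the column names are my own assumptions, not the article’s:

import re
import pandas as pd

# Assumes an access.log in Apache Combined Log Format
s_pattern = r'^(?P<s_ip>\S+) \S+ \S+ \[(?P<s_datetime>[^\]]+)\] "(?P<s_request>[^"]*)" (?P<i_status>\d{3}) \S+'

a_rows = []
with open("access.log") as o_file:
    for s_line in o_file:
        o_match = re.match(s_pattern, s_line)
        if o_match:
            a_rows.append(o_match.groupdict())

df_log = pd.DataFrame(a_rows)
df_log["i_status"] = df_log["i_status"].astype(int)

# IPs generating many 404s are a typical signal of exploit scanning
sr_offenders = df_log[df_log["i_status"] == 404]["s_ip"].value_counts()
print(sr_offenders.head(10))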
Normally if we need to refresh a config in a Container we will spawn a new one, or we will connect with sudo docker exec -it mycontainer /bin/sh for instance and force a reload, or we will have to restart the Container.
What if we want to be able to reload the config at any moment without restarting the process, or to trigger a process in our Container (like a dump or a flush) in another way than implementing an API?
A way to communicate with your Container’s main process that is unexplored by many is to send Signals.
So basically I will show you how you can trap Signals within a Python process which is the main process of your Docker Container, and send them from the host with the command:
sudo docker kill --signal=SIGUSR1 docker-signal
I chose to use SIGUSR1 as it is reserved for user-defined Signals.
You can clone the project or get the source code from:
FROM ubuntu:20.04
MAINTAINER Carles Mateo
RUN apt update && apt install -y python3 python3-pip vim less && apt-get clean
# This will make sure printing to the Screen works when running in detached mode
ENV PYTHONUNBUFFERED=1
ENV DOCKERSIGNAL /var/dockersignal
RUN mkdir -p $DOCKERSIGNAL
COPY *.py $DOCKERSIGNAL
WORKDIR $DOCKERSIGNAL
# Again, to enforce printing to the Screen when running detached
CMD ["python3", "-u", "/var/dockersignal/dockersignal.py"]
The dockersignal.py file
# By Carles Mateo https://blog.carlesmateo.com

import signal
import time


def handler(signum, frame):
    print('Signal handler called with signal', signum)
    if signum == 10:
        # 10 is the equivalent of SIGUSR1 for most x86/ARM (not for Alpha/Sparc, MIPS, PARISC)
        print("Simulated action: Reload config")


if __name__ == "__main__":
    print("Waiting for a Signal")
    # Listen for this signal; you can listen for more signals the same way
    signal.signal(signal.SIGUSR1, handler)
    while True:
        # Do whatever work the Container has to do
        time.sleep(1)
A shell file to build and run the Container like a pro
#!/bin/bash

DOCKER_CONTAINER_NAME="docker-signal"
DOCKER_IMAGE_NAME="docker-signal"

printf "Removing old Container %s\n" "${DOCKER_CONTAINER_NAME}"
sudo docker rm "${DOCKER_CONTAINER_NAME}"

printf "Removing old Image %s\n" "${DOCKER_IMAGE_NAME}"
sudo docker image rm "${DOCKER_IMAGE_NAME}"

echo "Creating Docker Image"
sudo docker build -t ${DOCKER_IMAGE_NAME} . --no-cache

retVal=$?
if [ $retVal -ne 0 ]; then
    printf "Error. Exit code %s\n" ${retVal}
    exit $retVal
fi

echo "Running Docker Container ${DOCKER_CONTAINER_NAME} based on image ${DOCKER_IMAGE_NAME}"
sudo docker run --cpus="1.0" --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_NAME}
Here is the complete story of why I migrated all the services from my 11-year-old Amazon account to other CSPs.
Some lessons can be learned from my adventure.
I migrated my last services from Amazon to GCP
Amazon sent me an email on October 6th of this year, 2021, telling me that they will disable EC2-Classic by August 2022. I thought I would not be able to keep my Static IPs, as in the past VPC IPs and EC2-Classic IPs were not transferable, so considering that I would lose my Static IPs anyway, I started to migrate some services to other providers like Digital Ocean.
It is not cool losing Static IP (Elastic IP in AWS) Addresses, as this is bad for SEO, so given that I thought I would lose my Static IPs that have been with me for years, I started to migrate certain services to much more economical providers.
Amazon is terrible at communicating, and I talked with some product managers in the past about that, when they lost one of my Volumes, and the email was so cold and terrible that it actually hurt more than Amazon losing my Data. I believed it was a poorly made Scam, and when I realized it was true I reached out to one of my friends who is a manager there, as I know they care about doing things right, and he organized a meeting with two PMs so I could pass on my feedback.
The Cloud providers are changing things very fast, and nobody is able to stay up to date with the changes, unless their work position allows plenty of time for getting updated. Even if pages of documentation are provided, you have to react to an event that they externally generated, forcing you to action. Action to read all the documentation about EC2-Classic migrations, action to prepare to have migrated by August 2022.
So August 2022… I was counting on having plenty of time, but I’m writing a new book about using the Amazon SDK for Python, boto3, and I was doing some API calls and they started to fail in a very unusual way: Exceptions with timeouts, but only for the only region where I had EC2-Classic.
urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7f0347d545e0>: Failed to establish a new connection: [Errno -2] Name or service not known
But if I switched to another region name, it would work:
region_name='us-west-2',
I made a mistake here: the region name is “us-east-1” and not “us-east-1a“. “us-east-1a” is the availability zone. So the SDK was giving a timeout because, in order to connect to the endpoint, it uses the region name as part of the hostname. So it doesn’t find that endpoint, because it doesn’t exist.
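To illustrate the difference (a quick sketch I add here, not part of the original post):

import boto3

# "us-east-1" is a region; "us-east-1a" is an Availability Zone inside it.
# boto3 builds the endpoint hostname from the region name, e.g.
# ec2.us-east-1.amazonaws.com, so this client works:
o_ec2 = boto3.client("ec2", region_name="us-east-1")

# Passing the Availability Zone by mistake makes boto3 target
# ec2.us-east-1a.amazonaws.com, which does not exist, so any API call
# will fail with EndpointConnectionError ("Name or service not known"):
o_ec2_wrong = boto3.client("ec2", region_name="us-east-1a")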
I never understood why a company like Amazon is unable to provide the SDK with a sample project, or projects, that are 100% working, with the source code, so people have a working base to build upon.
Every API that I have created, I have provided with documentation, but also with examples in several languages showing how to use it.
In 2013 I was CTO of an online travel agency, and we had meta-searchers consuming our API, with several hundreds of thousands of requests per second. Everything was perfectly documented, examples were provided for several languages, and the document and the SDK had version numbers…
Everybody forgets about Developers, and companies throw terrible, cold products at the poor Developers, so difficult to use. How many Developers would like to say: Listen, Mr. President of the big Cloud Company XXXX, I only want to spawn a VM that works, and fast, with easy wizards. I don’t want to learn for 50 hours before being able to use your overpriced platform, doing 20 things before your IPs reflect your Microservices-based infrastructure. Modern JavaScript frameworks can create nice, gentle wizards even if you have super-cold APIs.
Honestly, I didn’t realize my typo in the region, so I connected to the Amazon Console to investigate, and I saw this.
Honestly, when I read it I understood that they were going to end my EC2 Networking on the 30th of October. It was the 29th. I misunderstood.
It was my fault for not reading it well to the end; I was shocked by the first part, about the shutdown, and I didn’t fully understand that they were going to shut down EC2-Classic only for the zones where I didn’t have anything running.
From the long errors (3 exceptions chained) I didn’t realize that the endpoint is built with the region name. (And I was passing the availability zone.)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"
This is where I say that a good SDM would have thought about and cared for the Developers more, and would have made the SDK check if the region exists. How difficult is it to create an SDK a bit more clever, that detects an invalid region id? It is not difficult.
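As a sketch of the kind of check I mean, boto3 itself ships the list of valid regions in its endpoint data, so validating the parameter is almost a one-liner:

import boto3

def is_valid_region(s_region, s_service="ec2"):
    # boto3 already knows all the regions for a service,
    # so the SDK could validate the region itself
    return s_region in boto3.session.Session().get_available_regions(s_service)

print(is_valid_region("us-east-1"))   # True
print(is_valid_region("us-east-1a"))  # False: it is an Availability Zone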
It is true that it was late in the evening and I was tired from the whole day, and on two days of the week, between work and zoom university classes, I work 15 hours and 13 hours respectively, not counting the assignments, so by the end of the week I am very tired. But that’s why it is very important to follow methodology and to read well. I think Amazon has 50% of the fault in the way they do things: how they created the SDK, how they communicate, and the errors that the console returned when I tried to create a VPC instance from an EC2-Classic AMI (they seem related to the fact that I had old VPC Network objects with a shorter hash than the current one they use). The other 50% was my fault, for not identifying the source of the error and not reading the message on their website well.
But the fact that I was getting those errors and timeouts in the APIs made me believe they were going to cut the EC2-Classic Networking the next day.
All the mistakes fell together in a perfect storm.
I checked the documentation and I saw it was possible to migrate my Static IPs to VPC Static IPs.
It was Friday evening, and I cancelled my plans in order to migrate the Blog to VPC, in an attempt to keep it running with Amazon.
As a Cloud Architect, I like to have running instances in several CSPs, as it allows me to stay up to date with the changes they make.
I checked the documentation for the migration. Disassociating the Static IP (Elastic IP in AWS jargon) was easy. Turning it into a VPC one as well.
As I progressed, what should have been easy turned into a nightmare, as I was getting many errors from the Amazon API, without any information, and my Instances were not created.
I figured out that their API could have problems with old VPC objects I had created some time ago, so I had to create new objects for several things.
I managed to spawn my instances, but they were being launched and terminated instantly, without information. Frustrating.
When launching a new instance from the AMI (a Snapshot of the blog), I was shown options to add more volumes that made no sense. My Instance was using 16GB of a 20GB total Space, and I was shown different volume configs depending on the instance: in some cases an additional 20GB volume, in others a small, ephemeral SSD plus 10 GB for the AMI (which requires at least 16GB).
After some fighting I managed to make it work, after deleting the volumes that made no sense and keeping only one of 20GB, the same size as my AMI.
But then my nightmare continued with making the VPC Instance have Internet access and be reachable from outside. I had to create a new Internet Gateway, NAT, Network, etc…
As mentioned, the old objects I was trying to reuse were making the process fail.
I was running out of time, and I thought that they were going to shut down the EC2-Classic network shortly (as I had not read the message correctly), so I decided to download everything and migrate to another provider. To do that, first I blocked all the traffic, except for my IP.
I worked in parallel, creating the new config in Google Cloud, just in case I had forgotten something. I had created a document for the migration and it was accurate.
I managed to do everything fast enough. The slowest part was downloading all the Data, as I keep entire VMs for projects like the Cassandra Universal Driver.
Then I powered off my Amazon Instance for the Blog forever.
In GCP I blocked all the traffic in the firewall, except for my IP, so I could work calmly.
When everything was ready, I had to point the DNS to the new static IP from Google.
The DNS provider I used had implemented some changes in their API, so I was getting errors replacing my old entry ‘.’ (their JSON calls returned Internal Server Error). Finally I figured out how to work around it, and I was able to confirm that the first service was up and running.
I did some tests to make sure there were no unexpected permission problems, entries in the logs, etc…
Only then did I open the Google Firewall. I have a second firewall in each instance, where I block or open what I want at the iptables level. Basically abusive bots’ IPs trying to find exploits or brute-force passwords by dictionary.
I checked with my phone, without WiFi, that the Firewall was all good. (It is always a good idea to use another external IP, different from the management one, to check.)
TL;DR: I’m undergoing Maintenance on all my sites.
The main reason was that I was getting unexpected API Exceptions on the AWS SDK for Python (boto3), so I connected to the AWS Console to get more information.
Then I saw a message indicating that they would stop EC2-Classic today, the 30th of October. (Please read the Update in the Postmortem analysis, as I understood that banner message incorrectly.)
I had already started migrating my Services; some I moved to other providers like Digital Ocean, others I had planned to keep in Amazon.
EOL (End of Life) was scheduled for August 2022, so when I saw the message from Amazon on the evening of the 29th, I decided to migrate my EC2-Classic Public IPs and Compute to VPC. Trying to deploy from an AMI, the Amazon APIs were returning many internal errors, and as I figured out where their failures were, I was able to get instances launched without them being Terminated immediately without an explanation. Still, I had many problems with the Internet Gateway, VPC NAT, etc… After hours fighting with their errors and their console, which is more a bunch of pages to manage Infrastructure than a user/developer friendly Cloud Tool, I decided that I’d had enough.
After 11 years using Amazon AWS, including a trip to Dublin to be hired as Manager for CloudWatch, and giving them the idea to add AutoScaling (I was told the project was too easy for me and that I would get bored in a year or two, so I was not hired), I decided to move my Services to Google Cloud and to Digital Ocean.
I’m very polite, and I saw that when I told one Manager that the User Interface was terrible he didn’t like it, but I have to speak up and say that tools for developers cannot be cold as your evil girlfriend. They cannot be API-like, stand-alone pages to manage infinite parts of the Architecture. Webs providing services for developers cannot be created in cold SysAdmin style. If the infrastructure is hard to manage and internally you use APIs, build nice Wizards in Javascript. I was leading a Team of Developers with infinitely fewer resources than Amazon or Google and we wrote a Multi-Cloud product, with nice, clever, and easy to use Wizards, and they were infinitely better than those of the giant CSPs. We won a prize at European level at that time. But it was 2013.
I’ve migrated everything and moved all the data, statics, VMs… but I’m completing the adjustments for certain services like Cassandra nodes and web sites, bootstrapping some of my sites based on my PHP Catalonia Framework, adding Firewall rules to GCP, making changes for Ansible provisioning, deploying the Server scripts from IaC, Docker, etc…
I published this book to help developers understand and use Docker.
It is not targeted at SysAdmins; it is aimed at Developers who want to get operative know-how through examples, very quickly, in an easy to read format.
My other books also have updates not yet published; however, an update has been published for the Python 3 exercises for beginners book.
University classes have restarted, and I fixed my tower.
For the Cloud Computing degree, VMware is used intensively this semester.
I have a dedicated tower with an AMD Ryzen 7 processor and a Samsung NVMe PCIe 4.0 drive, which provides me a throughput of 6GB/second (six Gigabytes, so 48 Gbit/second), plus SAS and SATA drives. It’s a little monster with 64 GB of RAM and a 2.5 Gbps NIC.
It was not starting.
The problem was the Video card, which was making loose contact with the motherboard.
I had to disconnect everything until I found what it was, but after moving the video card to another PCI slot, it worked.
I knew it was some sort of short circuit / bad contact, as the fans were spinning for a second and turning off immediately.
After this, the computer works fine, but it would power off after roughly 4 to 12 hours. I’ve been testing and removing each component until I came to believe it is the PSU. I’ve ordered a new one from a Dutch provider with a web store in Ireland that my former colleague Thomas showed me a year and a half ago.
Since the UK left the EU, it is impossible to buy from amazon.co.uk without experiencing border problems and delays.
If you want to learn how to assemble a PC, fix the problems and upgrade your laptop, I wrote this book:
If you are curious about what I use in my day to day:
A tower for developing and reading my email, with Linux, an Intel i7 7800X (12 threads) and 64 GB of RAM, with an Nvidia graphics card
A tower for holding Virtual Machines, with Linux, an AMD Ryzen 7 3700X (16 threads) and 64 GB of RAM, with an Nvidia graphics card
An upgraded HP laptop for programming in the cafe, running Windows 10, with 16 GB of RAM
A Raspberry Pi 4 and a Raspberry Pi 3, from time to time
A laptop for programming, for Work, 16 GB of RAM
A tower for programming, for Work, at the office, 32 GB of RAM
I also had a Dell computer whose battery inflated, lifting the touchpad, and a very lightweight 11.6″ Acer laptop whose screen apparently died cracked (it’s a mystery to me how this happened, as I removed it from the bag and it was cracked). That little laptop accompanied me for years, to many countries, as for a while I carried it with me 100% of the time. At that time, if the companies I worked for had outages, they were losing thousands of euros per hour, so as CTO I fixed broken stuff even in a restaurant. Believe me when I recommend you and your teams to use Unit Testing. And I had a 15.6″ Acer with 16GB of RAM that was part of the payment from a Start-up I was CTO for, whose screen flickers intermittently; I managed to fix it by applying a pressure point to a connector, so I used it as a fixed computer at the beginning of my time in Ireland. I was not using it much, as I had two laptops from work when working for Sanmina: a Dell with 16 GB of RAM and a Core i7, with two external monitors, and an Intel Xeon with 32GB of RAM, heavyweight, but very useful for my job (programming, doing demos, having VMs…).
I’ve assembled all my PCs from scratch, piece by piece, and I force myself to do it so I keep up to date with the upcoming technologies, buses, etc…
My students are doing well. Congrats to Albert for getting 8.67 out of 10 in his university programming course exams!
Diablo 2 Resurrected is published and I am in the credits :)
I’m in the credits of all our games since I joined, but I’m happy every time I see myself and my colleagues in them. :)
This release includes SubProcessUtils, which is a class that allows you to execute commands in the shell (or without a shell) and capture the STDOUT, STDERR, and Exit Code very easily.
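The real implementation is in my carleslibs; just to illustrate the idea, a minimal sketch based on Python’s subprocess could look like this (the helper name here is hypothetical, not the actual class API):

import shlex
import subprocess

def execute_command_for_output(s_command, b_shell=False):
    # Run a command with or without a shell and capture everything
    a_command = s_command if b_shell else shlex.split(s_command)
    o_result = subprocess.run(a_command, shell=b_shell,
                              capture_output=True, text=True)
    return o_result.returncode, o_result.stdout, o_result.stderr

i_exit_code, s_stdout, s_stderr = execute_command_for_output("uname -a")
print(i_exit_code, s_stdout)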
I’ve used my libraries for a hackathon PoC at work, for Monitoring one aspect of one of our top games, and I coded it super quickly. :)
They loved it and we have a meeting scheduled to create a Service from my PoC. :)
I’ve raised the price of my books back to normal levels. I had been keeping the price at the minimum to help people who wanted to learn during covid-19. I consider that those who wanted to learn have already done it.
I still have bundles with a somewhat reduced price, and I authorized the LeanPub platform to apply discounts of up to 50% at their discretion.
I had this idea after one of my Python and Linux students, with two laptops, a Mac OS X one and a Windows one, explained to me that the Mac OS X laptop is often taken by her daughters, and that the Windows 10 laptop does not have enough memory to run PyCharm and VirtualBox fluently. She wanted to have a Linux VM to practice Linux and do the Bash exercises.
So this article explains how to create an Ubuntu 20.04 LTS Docker Container and execute a shell where you can practice Linux, Ubuntu and Bash, and you can use it to run Python, Apache, PHP, MySQL… as well, if you want.
You need to install Docker for Windows or for Mac:
Just pay attention to your type of processor: Mac with an Intel chip or Mac with an Apple chip.
The first thing is to create the Dockerfile.
FROM ubuntu:20.04
MAINTAINER Carles Mateo
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && \
    apt install -y vim python3-pip && \
    apt install -y net-tools mc htop less strace zip gzip lynx && \
    pip3 install pytest && \
    apt-get clean
RUN echo "#!/bin/bash\nwhile [ true ]; do sleep 60; done" > /root/loop.sh; chmod +x /root/loop.sh
CMD ["/root/loop.sh"]
So basically the file named Dockerfile contains the blueprint for our Docker Container to be created.
You see that I do all the installs and clean-ups in one single line. That’s because Docker generates a layer of virtual disk per each line in the Dockerfile. The layers are persistent, so even if in the next line we deleted the temporary files, the space used would not be recovered.
You also see that I generate a Bash file with an infinite loop that sleeps 60 seconds per iteration, and save it as /root/loop.sh. This is the file that is later called with CMD, so basically when the Container is created it will execute this infinite loop. Basically we give the Container a never-ending task to keep it running and prevent it from exiting.
Now that you have the Dockerfile, it is time to build the Image.
For Mac open a terminal and type this command inside the directory where you have the Dockerfile file:
sudo docker build -t cheap_ubuntu .
I called the image cheap_ubuntu, but you can use the name that you prefer.
For Windows 10 open a Command Prompt with Administrative rights and then change directory (cd) to the one that has your Dockerfile file.
docker.exe build -t cheap_ubuntu .
Image being built… (some data has been covered in white)
Now that you have the image built, you can create a Container based on it.
For Mac:
sudo docker run -d --name cheap_ubuntu cheap_ubuntu
For Windows (you can use docker.exe or just docker):
docker.exe run -d --name cheap_ubuntu cheap_ubuntu
Now you have a Container named cheap_ubuntu, based on the image cheap_ubuntu.
It’s time to execute an interactive shell and be able to play:
sudo docker exec -it cheap_ubuntu /bin/bash
For Windows:
docker.exe exec -it cheap_ubuntu /bin/bash
Our Ubuntu terminal inside Windows
Now you have an interactive shell, as root, to your cheap_ubuntu Ubuntu 20.04 LTS Container.
You’ll not be able to run the graphical interface, but you have a complete Ubuntu to learn to program in Bash and to use Linux from the Command Line.
You will exit the interactive Bash session in the container with:
exit
If you want to stop the Container:
sudo docker stop cheap_ubuntu
Or for Windows:
docker.exe stop cheap_ubuntu
If you want to see which Containers are running, do:
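sudo docker ps
Or for Windows:
docker.exe ps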
I completed my ZFS on Ubuntu 20.04 LTS book. I had an error on an actual hard drive, so I added a Troubleshooting section explaining how I fixed it.
I paused for a while the progress of my book Python: basic exercises for beginners, as my colleague Michela is translating it to Italian. She is a great Engineer and I could not be happier to have her help.
I added a new article about how to create a simple Star Wars web game using Flask. As always, I use Docker and a Dockerfile to automate the deployment, so you can test it without messing with your local system. The code is very simple and easy to understand.
This way I set an entry in /etc/hosts and I can do all the tests I want.
I added a new section to the blog; it is a page where you can see all the articles published, ordered by number of views: /posts_and_views.php
It is on the main page, just after the recommended articles. Here you can see the source code.
I removed the Categories:
Storage
ZFS
In favor of:
Hardware
Storage
ZFS
So the articles with Categories from the deleted group were reassigned the Categories in the second group.
I removed some annoying lines from the Quick Selection access. They came from inherited CSS properties from my long-customized WordPress, and I created new styles for this section.
I adjusted the line-height to avoid too much separation between lines.
I added a link in the section of Other Engineering Blogs that I like, to the great https://github.com/lesterchan site, author of many super cool WordPress plugins.
After the Docker Image flask_app is built, you can run a Docker Container based on it with:
sudo docker run -d -p 5000:5000 --name flask_app flask_app
After you’re done, in order to stop the Container type:
sudo docker stop flask_app
Here is the source code of the Python file flask_app.py:
#
# flask_app.py
#
# Author: Carles Mateo
# Creation Date: 2020-05-10 20:50 GMT+1
# Description: A simple Flask Web Application
# Part of the samples of https://leanpub.com/pythoncombatguide
# More source code for the book at https://gitlab.com/carles.mateo/python_combat_guide
#

from flask import Flask
import datetime


def get_datetime(b_milliseconds=False):
    """
    Return the datetime with milliseconds in format YYYY-MM-DD HH:MM:SS.xxxxx
    or without milliseconds as YYYY-MM-DD HH:MM:SS
    """
    if b_milliseconds is True:
        s_now = str(datetime.datetime.now())
    else:
        s_now = str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))

    return s_now


app = Flask(__name__)

# Those variables will keep their value as long as Flask is running
i_votes_r2d2 = 0
i_votes_bb8 = 0


@app.route('/')
def page_root():
    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>Who is more sexy?</h2>
<a href="r2d2"><img src="static/r2d2.png"></a> <a href="bb8"><img width="250" src="static/bb8.jpg"></a>"""
    s_page += "</body>"
    s_page += "</html>"

    return s_page


@app.route('/bb8')
def page_bb8():
    global i_votes_bb8
    i_votes_bb8 = i_votes_bb8 + 1

    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>BB8 Is more sexy!</h2>
<img width="250" src="static/bb8.jpg">"""
    s_page += "<p>I have: " + str(i_votes_bb8) + "</p>"
    s_page += "</body>"
    s_page += "</html>"

    return s_page


@app.route('/r2d2')
def page_r2d2():
    global i_votes_r2d2
    i_votes_r2d2 = i_votes_r2d2 + 1

    s_page = "<html>"
    s_page += "<title>My Web Page!</title>"
    s_page += "<body>"
    s_page += "<h1>Time now is: " + get_datetime() + "</h1>"
    s_page += """<h2>R2D2 Is more sexy!</h2>
<img src="static/r2d2.png">"""
    s_page += "<p>I have: " + str(i_votes_r2d2) + "</p>"
    s_page += "</body>"
    s_page += "</html>"

    return s_page


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
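Once the Container is running with the port mapped, a quick sanity check from Python could be this sketch (using the requests library, which is an extra dependency I add here for illustration, not part of the project):

import requests

# Assumes the Container was run with -p 5000:5000 as shown above
o_response = requests.get("http://127.0.0.1:5000/")
print(o_response.status_code)            # Expect 200
print("Time now is" in o_response.text)  # The page prints the current time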
As always, the naming of the variables is based on MT Notation.
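For reference, this is my own quick summary of the prefixes MT Notation uses in this code:

# MT Notation prefixes as used across the samples:
s_name = "GOLF2021"    # s_ for Strings
i_votes = 0            # i_ for Integers
b_ready = True         # b_ for Booleans
a_rows = [(1, "x")]    # a_ for Arrays/Lists
o_conn = None          # o_ for Objects/Instances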
The Dockerfile is very straightforward:
FROM ubuntu:20.04
MAINTAINER Carles Mateo
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && \
    apt install -y vim python3-pip && pip3 install pytest && \
    apt-get clean
ENV PYTHON_COMBAT_GUIDE /var/python_combat_guide
RUN mkdir -p $PYTHON_COMBAT_GUIDE
COPY ./ $PYTHON_COMBAT_GUIDE
ENV PYTHONPATH "${PYTHONPATH}:$PYTHON_COMBAT_GUIDE/src/:$PYTHON_COMBAT_GUIDE/src/lib"
RUN pip3 install -r $PYTHON_COMBAT_GUIDE/requirements.txt
# Setting WORKDIR would make sure that, when executing with python3 -m, the current directory is added to sys.path
# It is not necessary here, as we already added the paths to PYTHONPATH
#WORKDIR $PYTHON_COMBAT_GUIDE/src/lib
EXPOSE 5000
# Launch our Flask Application
CMD ["/usr/bin/python3", "/var/python_combat_guide/src/flask_app.py"]
However, we are going to run everything from a Docker Container, so the only thing you need is to have Docker installed.
If you prefer to install MySQL on your computer (or a VirtualBox instance) directly, skip the Docker steps.
Dockerfile
The Dockerfile is the file that Docker uses to build the Docker Container.
Ours is like that:
FROM ubuntu:20.04
MAINTAINER Carles Mateo
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y python3 python3-pip mysql-server vim mc wget curl && apt-get clean
RUN pip3 install mysql-connector-python
EXPOSE 3306
ENV FOLDER_PROJECT /var/mysql_carles
RUN mkdir -p $FOLDER_PROJECT
COPY docker_run_mysql.sh $FOLDER_PROJECT
COPY start.sql $FOLDER_PROJECT
COPY src $FOLDER_PROJECT
RUN chmod +x /var/mysql_carles/docker_run_mysql.sh
CMD ["/var/mysql_carles/docker_run_mysql.sh"]
The first line defines that we are going to use Ubuntu 20.04 (an LTS version).
We install all the apt packages in a single line, as Docker works in layers, and what is used as disk space in a previous layer is not freed even if we delete the files, so we want to run apt update, install all the packages, and clean the temporary files in one single step.
I also install some useful tools like: vim, mc, wget and curl.
We expose port 3306 to the outside, in case you want to run the Python code from your computer, while having MySQL in the Container.
The last line executes a script that starts the MySQL service, creates the table and the user, adds two rows, and runs an infinite loop so the Container does not finish.
build_docker.sh
build_docker.sh is a Bash script that builds the Docker Image for you very easily.
It stops the container and removes the previous image, so your hard drive does not fill up with Docker images if you make modifications.
It checks for build errors, and it also reminds you how to run and debug the Docker Container.
#!/bin/bash
# Execute with sudo
s_DOCKER_IMAGE_NAME="blog_carlesmateo_com_mysql"
printf "Stopping old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker stop "${s_DOCKER_IMAGE_NAME}"
printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"
printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache
i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit $i_EXIT_CODE
fi
echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run type: sudo docker run -d -p 3306:3306 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"
echo
echo "Debug running Docker:"
echo "docker exec -it ${s_DOCKER_IMAGE_NAME} /bin/bash"
echo
docker_run.sh
I also provide a script named docker_run.sh that runs your Container easily, exposing the MySql port.
#!/bin/bash
# Execute with sudo
s_DOCKER_IMAGE_NAME="blog_carlesmateo_com_mysql"
docker run -d -p 3306:3306 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}
echo "Showing running Instances"
docker ps
As you saw before, I named the image blog_carlesmateo_com_mysql.
I did that basically because I wanted to make sure that the name was unique; as build_docker.sh deletes any image named like the name I chose, I didn’t want to use a generic name like “mysql” that might lead you to delete another Docker Image inadvertently.
docker_run_mysql.sh
This script will run when the Docker Container is launched for the first time:
#!/bin/bash
# Allow to be queried from outside
sed -i '31 s/bind-address/#bind-address/' /etc/mysql/mysql.conf.d/mysqld.cnf
service mysql start
# Create a Database, a user with password, and permissions
cd /var/mysql_carles
mysql -u root < start.sql
while [ true ]; do sleep 60; done
With the sed command we modify line 31 of the MySQL config file, commenting out bind-address = 127.0.0.1, so we can connect from outside the Docker Instance.
As you can see, it starts MySQL and then executes the SQL contained in the file start.sql as root.
Please note: our MySQL installation has no password set for root. It is only for Development purposes.
start.sql
The SQL file that will be run inside our Docker Container.
CREATE DATABASE carles_database;
CREATE USER 'python'@'localhost' IDENTIFIED BY 'blog.carlesmateo.com-db-password';
CREATE USER 'python'@'%' IDENTIFIED BY 'blog.carlesmateo.com-db-password';
GRANT ALL PRIVILEGES ON carles_database.* TO 'python'@'localhost';
GRANT ALL PRIVILEGES ON carles_database.* TO 'python'@'%';
USE carles_database;
CREATE TABLE car_queue (
    i_id_car int,
    s_model_code varchar(25),
    s_color_code varchar(25),
    s_extras varchar(100),
    i_right_side int,
    s_city_to_ship varchar(25)
);
INSERT INTO car_queue (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) VALUES (1, "GOLF2021", "BLUE7", "COND_AIR, GPS, MULTIMEDIA_V3", 0, "Barcelona");
INSERT INTO car_queue (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship) VALUES (2, "GOLF2021_PLUGIN_HYBRID", "BLUEMETAL_5", "COND_AIR, GPS, MULTIMEDIA_V3, SECURITY_V5", 1, "Cork");
As you can see, it creates the user “python” with the password ‘blog.carlesmateo.com-db-password’, for local and remote (%) access.
It also creates a Database named carles_database and grants all the permissions to the user “python”, for local and remote.
This is the user we will use to authenticate from our Python code.
Then we switch to the carles_database and we create the car_queue table.
We insert two rows, as an example.
select_values_example.py
Finally the Python code that will query the Database.
import mysql.connector

if __name__ == "__main__":
    o_conn = mysql.connector.connect(user='python', password='blog.carlesmateo.com-db-password', database='carles_database')
    o_cursor = o_conn.cursor()

    s_query = "SELECT * FROM car_queue"

    o_cursor.execute(s_query)

    for a_row in o_cursor:
        print(a_row)

    o_cursor.close()
    o_conn.close()
Nothing special: we open a connection to MySQL, perform a query, and iterate over the cursor getting the rows as tuples.
Please note: Error control is disabled, so you may see any exception.
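If you want to add error control, a minimal pattern would be the following sketch; mysql.connector raises subclasses of mysql.connector.Error:

import mysql.connector

try:
    o_conn = mysql.connector.connect(user='python',
                                     password='blog.carlesmateo.com-db-password',
                                     database='carles_database')
    o_cursor = o_conn.cursor()
    o_cursor.execute("SELECT * FROM car_queue")
    for a_row in o_cursor:
        print(a_row)
except mysql.connector.Error as e:
    # Connection refused, bad credentials, SQL errors... all end here
    print("MySQL error:", e)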
Executing the Container
First step is to build the Container.
From the directory where you cloned the project, execute:
sudo ./build_docker.sh
Then run the Docker Container:
sudo ./docker_run.sh
The script also performs a docker ps command, so you can see that it’s running.
Then change to the directory where I installed the sample files:
cd /var/mysql_carles
And execute the Python 3 example:
python3 select_values_example.py
Tying together MySql and a Python Menu with Object Oriented Programming
In order to tie it all together, and especially to give a consistent view to my students, to avoid showing only pieces instead of a complete program, and to show a bit of Object Oriented Programming in action, I developed a small program which simulates the handling of a production queue for Volkswagen.
MySQL Library
First I created a library to handle MySQL operations.
lib/mysqllib.py
import mysql.connector


class MySql():

    def __init__(self, s_user, s_password, s_database, s_host="127.0.0.1", i_port=3306):
        self.s_user = s_user
        self.s_password = s_password
        self.s_database = s_database
        self.s_host = s_host
        self.i_port = i_port

        o_conn = mysql.connector.connect(host=s_host, port=i_port, user=s_user, password=s_password, database=s_database)
        self.o_conn = o_conn

    def query(self, s_query):
        a_rows = []
        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)

        for a_row in o_cursor:
            a_rows.append(a_row)

        o_cursor.close()

        return a_rows

    def insert(self, s_query):
        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)
        i_inserted_row_count = o_cursor.rowcount

        # Make sure data is committed to the database
        self.o_conn.commit()

        return i_inserted_row_count

    def delete(self, s_query):
        o_cursor = self.o_conn.cursor()

        o_cursor.execute(s_query)
        i_deleted_row_count = o_cursor.rowcount

        # Make sure data is committed to the database
        self.o_conn.commit()

        return i_deleted_row_count

    def close(self):
        self.o_conn.close()
Basically, when this class is instantiated, a new connection to the MySQL server specified in the Constructor is established.
We have a query() method to send SELECT queries.
We have an insert() method, to send INSERT or UPDATE queries, which returns the number of rows affected.
This method makes sure to perform a commit so the changes persist.
We have a delete() method, to send DELETE SQL queries, which returns the number of rows deleted.
We have a close() method, which closes the MySQL connection.
A Data Object: CarDO
Then I defined a class to deal with the Data and the interactions of the cars.
Initially I was going to have a CarDO Object without any logic, only with Data.
In OOP the variables of the Instance are called Properties, and the functions Methods.
Then I decided to add some logic, so I can show what the typical use of objects is.
So I will use CarDO as a Data Object, but also to do a few functions, like printing the info of a Car.
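The CarDO source is not listed in this article. As a minimal sketch, this is what do/cardo.py could look like, assuming only the getters and print helpers that the Queue Manager below calls (the exact formatting is my assumption):

class CarDO():

    def __init__(self, i_id_car=0, s_model_code="", s_color_code="", s_extras="", i_right_side=0, s_city_to_ship=""):
        self.i_id_car = i_id_car
        self.s_model_code = s_model_code
        self.s_color_code = s_color_code
        self.s_extras = s_extras
        self.i_right_side = i_right_side
        self.s_city_to_ship = s_city_to_ship

    def get_i_id_car(self):
        return self.i_id_car

    def get_s_model_code(self):
        return self.s_model_code

    def get_s_color_code(self):
        return self.s_color_code

    def get_s_extras(self):
        return self.s_extras

    def get_i_right_side(self):
        return self.i_right_side

    def get_s_city_to_ship(self):
        return self.s_city_to_ship

    def get_car_header_for_list(self):
        # Fixed-width header matching get_car_info_for_list()
        return "Id    Model                  Color        Extras                         Side City"

    def get_car_info_for_list(self):
        s_side = "R" if self.i_right_side == 1 else "L"
        return "%-5s %-22s %-12s %-30s %-4s %s" % (self.i_id_car, self.s_model_code, self.s_color_code, self.s_extras, s_side, self.s_city_to_ship)

    def print_car_info(self):
        print("Id:", self.i_id_car)
        print("Model:", self.s_model_code)
        print("Color:", self.s_color_code)
        print("Extras:", self.s_extras)
        print("Right side driven:", self.i_right_side == 1)
        print("City to ship:", self.s_city_to_ship)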
Queue Manager
Finally, the main program.
We also use Object Oriented Programming, and we use Dependency Injection to inject the MySQL Instance. That’s very practical for doing Unit Testing (see the test sketch after the listing).
from lib.mysqllib import MySql
from do.cardo import CarDO


class QueueManager():

    def __init__(self, o_mysql):
        self.o_mysql = o_mysql

    def exit(self):
        exit(0)

    def main_menu(self):
        while True:
            print("Main Menu")
            print("=========")
            print("")
            print("1. Add new car to queue")
            print("2. List all cars to queue")
            print("3. View car by Id")
            print("4. Delete car from queue by Id")
            print("")
            print("0. Exit")
            print("")

            s_option = input("Choose your option:")
            if s_option == "1":
                self.add_new_car()
            if s_option == "2":
                self.see_all_cars()
            if s_option == "3":
                self.see_car_by_id()
            if s_option == "4":
                self.delete_by_id()
            if s_option == "0":
                self.exit()

    def get_all_cars(self):
        s_query = "SELECT * FROM car_queue"

        a_rows = self.o_mysql.query(s_query)
        a_o_cars = []

        for a_row in a_rows:
            i_id_car = a_row[0]
            s_model_code = a_row[1]
            s_color_code = a_row[2]
            s_extras = a_row[3]
            i_right_side = a_row[4]
            s_city_to_ship = a_row[5]

            o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
            a_o_cars.append(o_car)

        return a_o_cars

    def get_car_by_id(self, i_id_car):
        b_success = False
        o_car = None

        s_query = "SELECT * FROM car_queue WHERE i_id_car=" + str(i_id_car)
        a_rows = self.o_mysql.query(s_query)

        if len(a_rows) == 0:
            # False, None
            return b_success, o_car

        i_id_car = a_rows[0][0]
        s_model_code = a_rows[0][1]
        s_color_code = a_rows[0][2]
        s_extras = a_rows[0][3]
        i_right_side = a_rows[0][4]
        s_city_to_ship = a_rows[0][5]

        o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
        b_success = True

        return b_success, o_car

    def replace_apostrophe(self, s_text):
        return s_text.replace("'", "´")

    def insert_car(self, o_car):
        s_sql = """INSERT INTO car_queue
                       (i_id_car, s_model_code, s_color_code, s_extras, i_right_side, s_city_to_ship)
                   VALUES
                       (""" + str(o_car.get_i_id_car()) + ", '" + o_car.get_s_model_code() + "', '" + o_car.get_s_color_code() + "', '" + o_car.get_s_extras() + "', " + str(o_car.get_i_right_side()) + ", '" + o_car.get_s_city_to_ship() + "');"

        i_inserted_row_count = self.o_mysql.insert(s_sql)
        if i_inserted_row_count > 0:
            print("Inserted", i_inserted_row_count, " row/s")
            b_success = True
        else:
            print("It was impossible to insert the row")
            b_success = False

        return b_success

    def add_new_car(self):
        print("Add new car")
        print("===========")
        while True:
            s_id_car = input("Enter new ID: ")
            if s_id_car == "":
                print("A numeric Id is needed")
                continue

            i_id_car = int(s_id_car)

            if i_id_car < 1:
                continue

            # Check if that id existed already
            b_success, o_car = self.get_car_by_id(i_id_car=i_id_car)
            if b_success is False:
                # Does not exist
                break

            print("Sorry, this Id already exists")

        s_model_code = input("Enter Model Code:")
        s_color_code = input("Enter Color Code:")
        s_extras = input("Enter extras comma separated:")
        s_right_side = input("Enter R for Right side driven:")
        if s_right_side.upper() == "R":
            i_right_side = 1
        else:
            i_right_side = 0
        s_city_to_ship = input("Enter the city to ship the car:")

        # Sanitize SQL replacing apostrophe
        s_model_code = self.replace_apostrophe(s_model_code)
        s_color_code = self.replace_apostrophe(s_color_code)
        s_extras = self.replace_apostrophe(s_extras)
        s_city_to_ship = self.replace_apostrophe(s_city_to_ship)

        o_car = CarDO(i_id_car=i_id_car, s_model_code=s_model_code, s_color_code=s_color_code, s_extras=s_extras, i_right_side=i_right_side, s_city_to_ship=s_city_to_ship)
        b_success = self.insert_car(o_car)

    def see_all_cars(self):
        print("")

        a_o_cars = self.get_all_cars()

        if len(a_o_cars) > 0:
            print(a_o_cars[0].get_car_header_for_list())
        else:
            print("No cars in queue")
            print("")
            return

        for o_car in a_o_cars:
            print(o_car.get_car_info_for_list())

        print("")

    def see_car_by_id(self, i_id_car=0):
        if i_id_car == 0:
            s_id = input("Car Id:")
            i_id_car = int(s_id)

        s_id_car = str(i_id_car)

        b_success, o_car = self.get_car_by_id(i_id_car=i_id_car)
        if b_success is False:
            print("Error, car id: " + s_id_car + " not located.")
            return False

        print("")
        o_car.print_car_info()
        print("")

        return True

    def delete_by_id(self):
        s_id = input("Enter Id of car to delete:")
        i_id_car = int(s_id)

        if i_id_car == 0:
            print("Invalid Id")
            return

        # reuse see_car_by_id
        b_found = self.see_car_by_id(i_id_car=i_id_car)
        if b_found is False:
            return

        s_delete = input("Are you sure you want to DELETE. Type Y to delete: ")
        if s_delete.upper() == "Y":
            s_sql = "DELETE FROM car_queue WHERE i_id_car=" + str(i_id_car)
            i_num = self.o_mysql.delete(s_sql)

            print(i_num, " Rows deleted")

            # if b_success is True:
            #     print("Car deleted successfully from the queue")


if __name__ == "__main__":

    try:
        o_mysql = MySql(s_user="python", s_password="blog.carlesmateo.com-db-password", s_database="carles_database", s_host="127.0.0.1", i_port=3306)

        o_queue_manager = QueueManager(o_mysql=o_mysql)
        o_queue_manager.main_menu()
    except KeyboardInterrupt:
        print("Detected CTRL + C. Exiting")
This program talks to the MySQL server that we started in Docker previously.
We have access from inside the Docker Container, or from outside.
The idea of this simple program is to use a library for dealing with MySQL, and objects for dealing with the Cars. The class CarDO contributes the rendering of its own data to the screen.
To get inside the Docker Container, once you have generated it and it is running, do:
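sudo docker exec -it blog_carlesmateo_com_mysql /bin/bash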