Category Archives: Operations

News from the Blog 2022-06-22

For the first part of June I’ve been quiet on Social Media, as I was on holidays and taking some scheduled health tests at the hospital.

Carles in the Media/Press/Streaming

Twitch

I started streaming live Python coding sessions on Twitch. I’m giving it a try to see if coders engage.

The Software I use to broadcast from Linux is OBS.

I started with my Open Source project ctop.

I had a very long and interesting session on 2022-06-06 about OpenZFS, Data Centers, NVMe, iSCSI, Hard Drives, Storage and performance.

More funny things happened, like when I was installing a VirtualBox VM live and the ZFS pool became unresponsive due to hardware errors in one SATA spinning drive.

Things from broadcasting live…

Some of the feedback I got from talented Engineers is that even if the original topic was interesting, seeing everything fall apart live due to unexpected hardware problems, and me troubleshooting it live, was the best part of the show… which I found very amusing.

RAB Radio the new digital world

I keep doing my radio segment for Radio America Barcelona once per week, addressed to the Catalan community across the world and to expats.

This radio program, also streamed via Twitch, is available in Catalan only. RAB.

Open Source

carleslibs

I’ve been working on the 1.0.8 branch, and after a refactoring session on Twitch where I found a bug in the MenuUtils class, I fixed it and released v. 1.0.8. You can see the video on the link.

Now I’m working on the v. 1.0.9 branch.

ctop

I’ve been working on the 0.8.9 branch.

My first Twitch broadcast was about adding Unit Testing to MemUtils class.

You can see all my videos:

http://www.youtube.com/channel/UCYzY-2wJ9W_ooR64-QzEdJg

Infrastructure

OpenStack

I recommend the videos on this page about Operating OpenStack at Scale.

Some of my Blizzard colleagues speak in them.

https://www.openstack.org/videos/summits/denver-2019/how-blizzard-entertainment-uses-autoscaling-with-overwatch

My last physical server in a Data Center

This week I decommissioned my last physical server in a Data Center.

It has been a long journey since I created my company to launch my own projects and started having my own infrastructure, back in 2000.

I was offering VPS at that time, with VMware as the Hypervisor.

This last Rack Server served me well for 21 years.

Now everything is Cloud, and it is not viable to host and maintain servers unless that is your main occupation. Servers’ motherboards die, hard drives die, and they need to be replaced. Maintaining infrastructure is a full-time job and you need somebody to do it. Also, owning fixed servers locks in a lot of money, prevents you from moving fast, and stops you from spawning more compute capacity.

If you are curious, this Rack Server is a Super Micro with an Intel Xeon processor and SCSI drives.

Security

Firewall

I keep blocking thousands of IP Addresses every day.

When I see a pattern of an IP trying attacks against the Server, I look at the IP, and if it belongs to a hosting provider I just block the entire range.

I keep blocking any IP Address coming from Russia or Belarus since they invaded Ukraine.
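As a reference, this is what a range block looks like at the iptables level, which is how I block on my instances. The range below is a documentation placeholder, not a real offender:

# Drop all traffic from the provider's whole range
iptables -A INPUT -s 203.0.113.0/24 -j DROP

# Verify the rule is in place
iptables -L INPUT -n | grep 203.0.113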

My Health

I visited the hospital for a scheduled follow-up on my health.

The analysis results are super good, and it’s super clear that I’ve improved radically. My discipline with the diet, taking the medicines and doing exercise regularly has been crucial.

My Doctor is confident that I’ll have a full recovery, but to do so I need to lose a lot of weight in a year or two.

So, I need to focus on my health, on doing exercise, on being happy, and on avoiding any kind of negative stress.

The cost of the travels and the medicines has put some stress on my finances, but I’m fortunate that I can handle it.

Entertainment / Life / Reflections

Star Wars and racism

I’m really enjoying the new Star Wars series Obi-Wan Kenobi, and I’ve been profoundly shocked to read that there are fans being racist against the black characters.

https://www.theverge.com/2022/5/31/23148468/star-wars-obi-wan-moses-ingram-third-sister

So I’m just writing here to show my support to human beings of all races, genders including transgender, LGBT+, conditions and preferences.

Twitch Stream about ZFS, zpool scrubbing, Hard drives, Data Centers, NVMe, Rack Servers…

Twitch stream on 2022-06-06 10:50 IST

In this very long session we went through actual errors in a ZFS pool: we checked the Kernel, removed and reinserted the drive, and conducted a zpool scrub… in the meantime I talked about Racks, Rack Servers, PSUs, redundant components, ECC RAM…

Renewing a SSL Certificate for Apache2 in Ubuntu 20.04

First you have to generate new CSR and key files.

It is not recommended to reuse your old CSR file.

openssl req -new -newkey rsa:2048 -nodes -keyout blog_carlesmateo_com_2022.key -out blog_carlesmateo_com_2022.csr

As you can see, I used the domain name and the year in the names of the generated files, to easily distinguish them.

When you’re asked for the password in the additional fields, keep that password safe in case you need the Cert to be reissued to you.

You’ll need to submit the CSR file to your SSL provider. They will return the CRT and the CA-BUNDLE files.
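Before submitting it, you can double-check that the CSR is well formed, and review its fields, with:

openssl req -noout -text -verify -in blog_carlesmateo_com_2022.csr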

Edit your Apache config file for the SSL site.

For example:

/etc/apache2/sites-enabled/11-https-blog-carlesmateo-com.conf

Your conf file will look similar to this:

<VirtualHost *:443>
    ServerAdmin webmaster@yourdomain.cat

    DocumentRoot /opt/sites/www/blog.carlesmateo.com
    ServerName blog.carlesmateo.com
    SSLEngine on
    SSLCertificateFile /opt/sites/certs/2022/blog_carlesmateo_com_2022.crt
    SSLCertificateKeyFile /opt/sites/certs/2022/blog_carlesmateo_com_2022.key
    SSLCertificateChainFile /opt/sites/certs/2022/blog_carlesmateo_com_2022.ca-bundle
...

Before restarting Apache2, test the configuration for syntax errors with:

apache2ctl -t

If all is good, restart your Web Server with:

service apache2 restart

With a browser, verify that the information for the domain is right. I recommend checking at least with Firefox and Chrome.
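You can also check, from the command line, the Certificate that is actually being served and its expiration dates:

echo | openssl s_client -connect blog.carlesmateo.com:443 -servername blog.carlesmateo.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates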

News from the Blog 2021-11-11

New Articles

How to communicate with your Python program running inside a Docker Container, using Linux Signals

Hope you’ll have fun reading this article:

Communicating with Docker Containers via Linux Signals and Python

I migrated my last services from Amazon and the blog to Google Compute Engine (GCE / GCP)

I wrote a Postmortem analysis about the process of migrating my last services away from my 11-year-old Amazon account.

Updates

Updates to articles

I updated the article about Python’s weird things that you may not know, adding the Ellipsis …

I’ve been working on some Cassandra examples. I may publish an article soon about using it from Python and Docker.

Updates to My Books

I updated my Python and Docker books.

I’m currently writing a book about using Amazon AWS Python SDK (boto3).

Updates to Open Source projects

I have updated ctop, fixed two bugs and increased Code Coverage.

I made a new tag and released the latest Stable Version:

https://gitlab.com/carles.mateo/ctop/-/tags/0.8.7

On top of my local Unit Testing, I have Jenkins checking that I don’t commit anything that breaks the Tests.

Some time ago I wrote some articles about how you can set up Jenkins in a Docker Container.

Miscellaneous

Charity

I’ve donated to Wikipedia.

Only 2% of the viewers donate, so I answered the call every time it was made.

This is my 5th donation to Wikimedia.

I consider that Freedom is very important.

I bought these new books

One of my secrets to be on top is that I’m always studying.

I study all the time, at work and in my free time.

I use Linux Academy and I buy books on paper. I don’t connect with reading on tablets; I think information is retained better when read on paper. I also use a marker and pointers to keep direct access to the most interesting points in the books.

And I study all kinds of topics. Obviously I know a lot about Web Scraping, but there is always room for learning more. And whatever new I learn helps me be better with my students and clearer when writing my books.

I’ve never been a Front End developer, but I’ve been able to fix bugs in the Front End engines of the companies I worked for, like Privalia. I was passed a bug that prevented Internet Explorer users from buying, just one hour before we launched a massive campaign. I debugged it and found a variable named “value”, so the html looked like <input name="value" value="">. In less than 30 minutes I proved to the incredulous Head of Development and the CTO that a bug in Internet Explorer was causing a conflict when fetching the value from the input named value. We deployed the update to Production and the campaign was a total success.

So I consider knowing Javascript and the Front End a need too, even if I don’t work directly with it. I want to be able to understand all the requirements, possibilities and weaknesses, so I can fix bugs and save the day. That also allowed me to fix scalability problems in Node.js and PhantomJS projects. (They are Server Side, event driven, Javascript projects.)

It seems that Amazon.co.uk works well again for Ireland. My two last orders arrived on time, and apparently I had no problems with border taxes.

Nice Python article

I enjoyed this article a lot, because it explains part of what I did with my student and friend Albert, in a project that analyzes the Apache access logs for patterns of exploit attempts, then feeds a database, and then blocks the offending IP Addresses in the Firewall.

The article only covers the Pandas part, reading the access.log file and working with it, but it is a very well written article:

https://mmas.github.io/read-apache-access-log-pandas

Nice Virtual Volumes article from VMware

I prefer Open Source, but there are very good commercial products too.

I liked this article about Virtual Volumes from VMware:

Understanding Virtual Volumes (vVols) in VMware vSphere 6.7/7.0 (2113013)

https://kb.vmware.com/s/article/2113013

Thanks Blizzard (again)

There is a very nice initiative where we can nominate 4 colleagues a year that we think deserve a recognition.

My colleagues voted for me, so I received a gift voucher that I can spend in Ireland in stores like Ikea, PC World, Argos, Adidas, App Store & iTunes…

So thanks a million buds. :)

Migrating my 11-year-old Amazon AWS account services (Postmortem Analysis)

I started to explain that I was migrating some services from Amazon, that some of my sites were under Maintenance, and that I would provide more information.

Here is the complete story of why I migrated all the services from my 11-year-old Amazon account to other CSPs.

Some lessons can be learned from my adventure.

I migrated my last services from Amazon to GCP

Amazon sent me an email on October 6th of this year, 2021, telling me that they will disable EC2-Classic by August 2022. I thought I would not be able to keep my Static Ip’s, as in the past VPC Ip’s and EC2-Classic Ip’s were not transferable, so considering that I would lose my Static Ip’s anyway, I started to migrate some services to other providers like Digital Ocean.

It is not cool to lose Static Ip (Elastic Ip in AWS) Addresses, as this is bad for SEO, so given that I thought I would lose the Static Ips that have been with me for years, I started to migrate certain services to much more economic providers.

Amazon is terrible at communicating, and I talked with some product managers in the past about that, when they lost one of my Volumes and the email was so cold and terrible that it actually hurt more than Amazon losing my Data. I believed it was a poorly made Scam, and when I realized it was true I reached out to one of my friends who is a manager there, as I know they care about doing things right, and he organized a meeting with two PMs so I could pass on my feedback.

The Cloud providers change things very fast, and nobody is able to stay up to date with the changes unless their work position allows plenty of time for it. Even if pages of documentation are provided, you have to react to an event that they generated externally, forcing you to action: action to read all the documentation about EC2-Classic migrations, action to prepare to have migrated by August 2022.

So, August 2022… I was counting on having plenty of time, but I’m writing a new book about using the Amazon SDK for Python, boto3, and while doing some API calls they started to fail in a very unusual way: Exceptions with timeouts, but only for the one region where I had EC2-Classic.

urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7f0347d545e0>: Failed to establish a new connection: [Errno -2] Name or service not known

My config was:

        o_config = Config(
            region_name="us-east-1a",
            signature_version="v4",
            retries={
                'max_attempts': 10,
                'mode': 'standard'
            }
        )

But if I switched to another region name, it would work:

            region_name='us-west-2',

I made a mistake here: the region name is “us-east-1”, not “us-east-1a”. “us-east-1a” is the availability zone. The SDK was giving a timeout because it uses the region name as part of the endpoint hostname, so it could not find that endpoint, because it doesn’t exist.
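If you are unsure which region names are valid, a quick way to list them is with the AWS CLI, assuming you have it installed and configured:

aws ec2 describe-regions --query "Regions[].RegionName" --output text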

I never understood why a company like Amazon is unable to provide the SDK with a sample project, or projects, 100% working, with source code, so people have a working base to build on.

Every API that I have created, I have provided with documentation and also with examples in several languages of how to use it.

In 2013 I was CTO of an online travel agency, and we had meta-searchers consuming our API at several hundreds of thousands of requests per second. Everything was perfectly documented, examples were provided in several languages, and the documents and the SDK had version numbers…

Everybody forgets about Developers, and companies throw terrible and cold products at the poor Developers, so difficult to use. How many Developers would like to say: Listen, Mr. President of big Cloud Company XXXX, I only want to spawn a VM that works, and fast, with easy wizards. I don’t want to learn for 50 hours before being able to use your overpriced platform, doing 20 things first because your Ip’s are a reflection of your infrastructure being based on Microservices. Modern JavaScript frameworks can create nice, gentle wizards even if you have super-cold APIs.

Honestly, I didn’t realize my typo in the region, so I connected to the Amazon Console to investigate, and I saw this.

Honestly, when I read it I understood that they were going to end my EC2 Networking on the 30th of October. It was the 29th. I misunderstood.

It was my fault for not reading it well to the end; I got shocked by the first part telling about the shutdown and I didn’t fully understand that they were going to shut down EC2-Classic only for the zones where I had nothing running.

From the long errors (3 chained exceptions) I didn’t realize that the endpoint is built with the region name (and I was passing the availability zone).

botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"

Here is where I say that a good SDM would have thought about and cared for the Developers more, and would have made the SDK check whether the region exists. How difficult is it to make the SDK a bit more clever, so it detects an invalid region id? It is not difficult.

It is true that it was late in the evening and I was tired from the whole day; two days a week, between work and zoom university classes, I work 15 and 13 hours respectively, not counting assignments, so by the end of the week I am very tired. But that’s why it is very important to follow methodology and to read well. I think Amazon has 50% of the fault for the way they do things: how they created the SDK, how they communicate, and the errors the console returned when I tried to create a VPC instance from an EC2-Classic AMI (they seem related to the fact that I had old VPC Network objects with a shorter hash than the current ones they use). The other 50% was my fault, for not identifying the source of the error and not reading the message on their website well.

But the fact that they were having those errors in the APIs, and the timeouts, made me believe they were going to cut the EC2-Classic Networking the next day.

All the mistakes fell together in a perfect storm.

I checked for documentation and I saw it was possible to migrate my Static Ip’s to VPC Static Ip’s.

It was Friday evening, and I cancelled my plans in order to migrate the Blog to VPC, in an attempt to keep it running on Amazon.

As a Cloud Architect, I like to have running instances in several CSPs, as it allows me to stay up to date with the changes they make.

I checked the documentation for the migration. Disassociating the Static Ip (Elastic Ip in AWS jargon) was easy. Moving to VPC as well.

As I progressed, what should have been easy turned into a nightmare, as I was getting many errors from the Amazon API, without any information, and my Instances were not being created.

I figured out that their API could have problems with old VPC objects I had created long ago, so I had to create new objects for several things.

I managed to spawn my instances, but they were being launched and Terminated instantly, without information. Frustrating.

When launching a new instance from the AMI (a Snapshot of the blog), I was shown options to add more volumes that made no sense. My Instance was using 16GB of a 20GB total Space, and I was shown different volume configs depending on the instance: in some cases an additional 20GB volume, in others a small, ephemeral SSD and 10 GB for the AMI (which requires at least 16GB).

After some fighting I managed to make it work, after deleting the volumes that made no sense and keeping only one of 20GB, the same size as my AMI.

But then my nightmare was making the VPC Instance have Internet access and be reachable from outside. I had to create a new Internet Gateway, NAT, Network, etc…

As mentioned, the old objects I was trying to reuse were making the process fail.

I was running out of time, and I thought they were going to shut down the EC2-Classic network soon (as I had not read correctly), so I decided to download everything and migrate to another provider. To do that, first I blocked all the traffic except for my Ip.

I worked in parallel, creating the new config in Google Cloud, just in case I had forgotten something. I had created a document for the migration and it was accurate.

I managed to do everything fast enough. The slowest part was downloading all the Data, as I keep entire VMs for projects like Cassandra Universal Driver.

Then I powered off my Amazon Instance for the Blog forever.

In GCP I blocked all the traffic in the firewall, except for my Ip, so I could work calmly.

When everything was ready, I had to redirect the DNS to the new static Ip from Google.

The DNS provider I used had implemented some changes in their API, so I was getting errors replacing my old entry ‘.’ (their JSON calls returned Internal Server Error). Finally I figured out how to work around it and I was able to confirm that the first service was up and running.

I did some tests to make sure there were no unexpected permission problems, entries in the logs, etc…

Only then did I open the Google Firewall. I have a second firewall in each instance, where I block or open, at iptables level, what I want. Basically abusive bots’ IPs trying to find exploits or brute-force passwords by dictionary.

I checked with my phone, without Wifi, that the Firewall was all good. (It is always a good idea to use another external Ip, different from the management one, for checking.)

I added a post explaining that I was migrating some of my Services and that they were under maintenance.

I mentioned in the blog that some of my services were being migrated from Amazon to Digital Ocean.

For some reason, one user was lost in the Backup of the Database, so I created it in MySQL with the typical commands:

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON mydatabase.* TO 'username'@'localhost';
FLUSH PRIVILEGES;
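You can verify that the user and its privileges were recreated correctly with:

mysql -u root -p -e "SHOW GRANTS FOR 'username'@'localhost';"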

My Sites are under Maintenance

2021-11-08 Update: There is a Postmortem analysis of what happened with Amazon here.

TL;DR: I’m undergoing Maintenance on all my sites.

The main reason was that I was getting unexpected API Exceptions on the AWS SDK for Python (boto3), so I connected to the AWS Console to get more information.

Then I saw a message indicating that they would stop EC2-Classic today, the 30th of October. (Please read the Update in the Postmortem analysis, as I understood that banner message incorrectly.)

I already started migrating my Services. Some I moved to other providers like Digital Ocean; others I had planned to keep in Amazon.

EOL (End of Life) was scheduled for August 2022, so when I saw the message from Amazon on the evening of the 29th, I decided to migrate my EC2-Classic Public Ip’s and Compute to VPC. Trying to deploy from an AMI, the Amazon APIs were returning many internal errors, and as I figured out where their failures were I managed to get instances launched without them being Terminated immediately without an explanation. Still, I had many problems with the Internet Gateway, VPC NAT, etc… After hours fighting with their errors and their console, which is more a bunch of pages to manage Infrastructure than a user/developer friendly Cloud Tool, I decided that I had had enough.

After 11 years using Amazon AWS, including a trip to Dublin to be hired as Manager for CloudWatch, and giving them the idea to add AutoScaling (I was told the project was too easy for me and that I would get bored in a year or two, so I was not hired), I decided to move my Services to Google Cloud and to Digital Ocean.

I’m very polite, and I saw that when I told one Manager that the User Interface was terrible he didn’t like it, but I have to speak up and say that tools for developers cannot be cold as your evil girlfriend. They cannot be API-alike, standalone pages to manage infinite parts of the Architecture. Webs providing services for developers cannot be created in cold SysAdmin style. If the infrastructure is hard to manage and internally you use APIs, build nice Wizards in Javascript. I was leading a Team of Developers with infinitely fewer resources than Amazon or Google and we wrote a Multi-Cloud product, with nice, clever, easy to use Wizards, and they were infinitely better than those of the giant CSPs. We won a prize at European level at that time. But it was 2013.

I’ve migrated everything and moved all the data, statics, VMs… but I’m completing the adjustments for certain services like Cassandra nodes and web sites, bootstrapping some of my sites based on my PHP Catalonia Framework, adding Firewall rules to GCP, making changes for Ansible provisioning, deploying the Server scripts from IaC, Docker, etc…

I’ll be posting updates in Twitter.

Upgrading Amazon AWS EC2 Ubuntu 18.04 LTS to Ubuntu 20.04 LTS

I’ve upgraded one of my AWS machines from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS.

The process was really straightforward, basically run:

sudo apt update
sudo apt upgrade

Then reboot in order to load the latest kernel.

Then execute:

sudo do-release-upgrade

It will ask you two or three questions at different moments.

Afterwards, reboot, and that’s it.

All my Firewall rules were kept, and the services were restarted as they became available, or deferred to run when a dependency was reinstalled (like PHP, which was upgraded before Apache). I’ve not found anything out of place for the moment. The Kernels were special, with Amazon customization, too.
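To confirm that the upgrade completed, you can check the release description and the running Kernel:

lsb_release -a
uname -r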

I also recommend you make sure to disable your Apache directory browsing, if you had it like that, as a new software install may have enabled it:

a2dismod autoindex
systemctl restart apache2

I always recommend, for Production, to run the Ubuntu LTS version.

Creating Jenkins configurations for your projects

Obviously for companies it is a must, but if you work on your own projects it is super great to configure Jenkins, so you have continuous feedback about whether something breaks.

I’ll show you how to configure Jenkins for several projects using only your main computer/laptop.

Check my past article about setting up Jenkins in Docker.

Adding a new Freestyle project

Click on top left: New item.

Then give it an appropriate name and choose Freestyle Project.

Take into account that the given name will be used as the name of the workspace, so you may want to avoid special characters.

It is very convenient to let Jenkins deal with your repository changes instead of using shell commands, so I’m going to fill in this section.

I also provided credentials, so Jenkins can log in to my Gitlab.

This kind of project is the simplest, and we will use the same Docker Container where Jenkins resides to run the Unit Testing of our code.

We are going to select Build periodically.

If your Server is on the Internet, you can activate the Web Hooks so your Jenkins is notified via a web connection from GitLab, GitHub or your CVS provider. As I’m strictly running this at home, Jenkins will periodically check for changes in the repository and do nothing if there are no changes.

I’ll set H * * * * so Jenkins will try every hour.

Go down and select Add Build Step:

Select Execute shell.

Then add a basic echo command to print to the Console Output, and an ls command so you see what is in the default directory your shell script is executing in.
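As a minimal sketch, the content of that step can be something like this (JOB_NAME and BUILD_NUMBER are standard environment variables that Jenkins injects):

echo "Running ${JOB_NAME} build number ${BUILD_NUMBER}"
ls -hal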

Now save your project.

And go back to Dashboard.

Click inside of Neurona.cat to view Project’s Dashboard.

Click: Build Now. And then click on the Build task (Apr 5, 2021, 10:31 AM)

Click on Console Output.

You’ll see a verbose log of everything that happened.

You’ll see, for example, that Jenkins has put the script in the path of the git project folder that we instructed it to clone/pull before.

This example doesn’t have tests. Let’s see one with Unit Tests.

Running Unit Testing with pytest

If we enter the project CTOP and then select Configure, you’ll see the steps I did for making it do the Unit Testing.

In my case I wanted to have several steps, one for each Unit Test file.

In each one of them I have to enter the right directory before launching any test.

If you open the last successful build and select Console Output, you’ll see all the tests going well.

If a test goes wrong, pytest will exit with an Exit Code different from 0, and so Jenkins will detect it and show that the Build Fails.
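Each of those Execute shell steps follows a pattern like this sketch (the directory and file name here are hypothetical):

cd tests/
python3 -m pytest test_memutils.py -v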

Building a Project from Pipeline

Pipeline is the set of plugins that allow us to do Continuous Deployment.

Fill in the information about your git project.

Then in your gitlab or github project create a file named Jenkinsfile.

Jenkins will look for it when it clones your repo, to build the Pipeline.

Here is my Jenkinsfile in https://gitlab.com/carles.mateo/python_combat_guide/-/blob/master/Jenkinsfile

pipeline {
    agent any
    stages {
        stage('Show Environment') {
            steps {
                echo 'Showing the environment'
                sh 'ls -hal'
            }
        }
        stage('Updating from repository') {
            steps {
                echo 'Grabbing from repository'
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'git clone https://gitlab.com/carles.mateo/python_combat_guide.git; cd python_combat_guide; git pull'"
                    }
                }
            }
        }
        stage('Build Docker Image') {
            steps {
                echo 'Building Docker Container'
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'cd python_combat_guide; docker build -t python_combat_guide .'"
                    }
                }
            }
        }
        stage('Run the Tests') {
            steps {
                echo "Running the tests from the Container"
                withCredentials([usernamePassword(credentialsId: 'ssh-fast', usernameVariable: 'USERNAME', passwordVariable: 'USERPASS')]) {
                    script {
                        sh "sshpass -p '$USERPASS' -v ssh -o StrictHostKeyChecking=no $USERNAME@$ip_fast 'cd python_combat_guide; docker run  python_combat_guide'"
                    }
                }
            }
        }
    }
}

My Jenkins Docker installation has the sshpass command, and I use it to connect via SSH, with username and Password, to the server defined by the ip_fast environment variable.

We defined the variable ip_fast in Manage Jenkins > Configure System.

There, under Global Properties, Environment Variables, I defined ip_fast:

In the Build Server I’ll make a new user and allow it to build Docker:

sudo adduser jenkins_build

sudo usermod -aG docker jenkins_build

The Credentials can be managed from Manage Jenkins > Manage Credentials.

You can see how I use all this combined in the Jenkinsfile, so I don’t have to store credentials in the CVS; Jenkins (in its Docker Container) will connect via SSH to the computer behind the ip_fast Ip, to build and run another Container. That Container runs with a command to do the Unit Testing. If something goes wrong, that is, if any program returns an Exit Code different from 0, Jenkins will consider the build failed.

Take into account that $? only stores the Exit Code of the last program. So be careful if you pass multiple commands in one single line, as this may mask an error.
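A quick demonstration of that masking in Bash, using false, which always exits with Exit Code 1:

false; echo "cleanup"
echo $?    # Prints 0, the Exit Code of echo; the failure of false is masked

false && echo "cleanup"
echo $?    # Prints 1; && stops at the first failure and preserves its Exit Code

That is a good reason to chain commands with && instead of ; inside the sh steps.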

Separating the execution into multiple Stages helps to save time, as after a failure execution will not continue.

Also, visually it is easy to see where the error is.

A base Dockerfile for my Jenkins deployments

Update: I’ve created a video and article about how to install jenkins in Docker with docker CLI and Blue Ocean plugins following the official Documentation. You may prefer to follow that one.

Update: Second part of this article: Creating Jenkins configurations for your projects

So I share with you my base Jenkins Dockerfile, so you can spawn a new Jenkins for your projects.

The Dockerfile uses Ubuntu 20.04 LTS as the base image and adds the required packages to run Jenkins, but also Development and Testing tools to use inside the Container, for example to run Unit Testing on your code. So you don’t need external Servers.

You will need 3 files:

  • Dockerfile
  • docker_run_jenkins.sh
  • requirements.txt

The requirements.txt file contains your PIP3 dependencies. In my case I only have pytest version 4.6.9, which is the default installed with Ubuntu 20.04; however, this way I enforce that this version, and not any later one, will be installed.

File requirements.txt:

pytest==4.6.9

The file docker_run_jenkins.sh starts Jenkins when the Container is run, waits until the initial Admin password is generated, and then displays it.

File docker_run_jenkins.sh:

#!/bin/bash

echo "Starting Jenkins..."

service jenkins start

echo "Configure jenkins in http://127.0.0.1:8080"

s_JENKINS_PASSWORD_FILE="/var/lib/jenkins/secrets/initialAdminPassword"

i_PASSWORD_PRINTED=0

while [ true ];
do
    sleep 1
    if [ $i_PASSWORD_PRINTED -eq 1 ];
    then
        # We are nice with multitasking
        sleep 60
        continue
    fi

    if [ ! -f "$s_JENKINS_PASSWORD_FILE" ];
    then
        echo "File $s_FILE_ORIGIN does not exist"
    else
        echo "Password for Admin is:"
        cat $s_JENKINS_PASSWORD_FILE
        i_PASSWORD_PRINTED=1
    fi
done

The objective of that file is to show you the default admin password, but you don’t have to do it that way: you can just start a shell into the Container and check it manually yourself.

However I added it to make it easier for you.

And finally you have the Dockerfile:

FROM ubuntu:20.04

LABEL Author="Carles Mateo" \
      Email="jenkins@carlesmateo.com" \
      MAINTAINER="Carles Mateo"

# Build this file with:
# sudo docker build -f Dockerfile -t jenkins:base .
# Run detached:
# sudo docker run --name jenkins_base -d -p 8080:8080 jenkins:base
# Run seeing the password:
# sudo docker run --name jenkins_base -p 8080:8080 -i -t jenkins:base
# After you CTRL + C you will continue with:
# sudo docker start jenkins_base
# To debug:
# sudo docker run --name jenkins_base -p 8080:8080 -i -t jenkins:base /bin/bash

ARG DEBIAN_FRONTEND=noninteractive

ENV SERVICE jenkins

RUN set -ex

RUN echo "Creating directories and copying code" \
    && mkdir -p /opt/${SERVICE}

COPY requirements.txt \
    docker_run_jenkins.sh \
    /opt/${SERVICE}/

# Java with Ubuntu 20.04 LTS is 11, which is compatible with Jenkins.
RUN apt update \
    && apt install -y default-jdk \
    && apt install -y wget curl gnupg2 \
    && apt install -y git \
    && apt install -y python3 python3.8-venv python3-pip \
    && apt install -y python3-dev libsasl2-dev libldap2-dev libssl-dev \
    && apt install -y python3-venv \
    && apt install -y python3-pytest \
    && apt install -y sshpass \
    && wget -qO - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add - \
    && echo "deb http://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list \
    && apt update \
    && apt -y install jenkins \
    && apt-get clean

RUN echo "Setting work directory and listening port"
WORKDIR /opt/${SERVICE}

RUN chmod +x docker_run_jenkins.sh

RUN pip3 install --upgrade pip \
    && pip3 install -r requirements.txt


EXPOSE 8080


ENTRYPOINT ["./docker_run_jenkins.sh"]

Build the Container

docker build -f Dockerfile -t jenkins:base .

Run the Container displaying the password

sudo docker run --name jenkins_base -p 8080:8080 -i -t jenkins:base

You need this password for starting the configuration process through the web.

Visit http://127.0.0.1:8080 to configure Jenkins.

Configure as usual

Resuming after CTRL + C

After you configured it, on the terminal, press CTRL + C.

And continue, detached, by running:

sudo docker start jenkins_base

The image is 1.2GB in size, and will allow you to run Python3, Virtual Environments and Unit Testing with pytest. It has Java 11 (not all versions of Java are compatible with Jenkins), and sshpass to access other Servers via SSH with Username and Password…
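If you are curious, you can confirm the size of the built image with:

sudo docker images jenkins:base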

Solving the problem when running a Docker Container: standard_init_linux.go:190: exec user process caused “no such file or directory”

When you see this error for the first time it can be pretty hard to figure out why it happens.

On a personal level I use only Linux for my computers, with the exception of a Windows laptop that I keep for specific tasks. But my employers often provide me with Windows laptops.

I suffered this error for the first time when I inherited a project at a company I had joined some time before. I suffered it again later, for the same reason, so I decided to explain it simply.

In the project I inherited, the build process was broken, so I had to fix it, and when this was done I got the mentioned error when trying to run the Container:

standard_init_linux.go:190: exec user process caused "no such file or directory"

The Dockerfile was something like this:

FROM docker-io.battle.net/alpine:3.10.0

LABEL Author="Carles Mateo" \
      Email="docker@carlesmateo.com" \
      MAINTAINER="Carles Mateo"

ENV SERVICE cservice

RUN set -ex

RUN echo "Creating directories and copying code" \
    && mkdir -p /opt/${SERVICE}
    
COPY config.prod \
    config.dev \
    config.st \
    requirements.txt \
    utils.py \
    cservice.py \
    tests/test_cservice.py \
    run_cservice.sh \
    /opt/${SERVICE}/

RUN echo "Setting work directory and listening port"
WORKDIR /opt/${SERVICE}
EXPOSE 7000

RUN echo "Installing dependencies" \
    && apk add build-base openldap-dev python3-dev py-pip \
    && pip3 install --upgrade pip \
    && pip3 install -r requirements.txt \
    && pip3 install pytest

ENTRYPOINT ["./run_cservice.sh"]

So the project was executing a Bash script, run_cservice.sh, via the Dockerfile ENTRYPOINT.

That script would make the necessary adjustments depending on whether the Container is launched with the prod, dev, or staging parameter.

I debugged until I saw that the Container never executed this script in the expected way.

An echo “Debug” at the top of the Bash Script was enough to know that even that very basic call was never executed. The error came first.

After much troubleshooting of the Container, I found that the problem was that the Bash script, copied to the container with COPY in the Dockerfile from a Windows Machine, contained CRLF Windows line endings, while for Linux and Mac OS X the line ending is just one character, LF.

In that company we all used Windows. Building the Container worked either way, but the Bash script with CRLF was causing that problem when run.

When I replaced the CRLF with Unix-style LF, rebuilt the image, and ran the container, it worked lovely.

A very easy, manual way to do this in Windows is opening your file with Notepad++ and setting LF as the line ending. Save the file, rebuild, and you’ll see your Container working.
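On Linux you can detect and fix the line endings from the shell as well. file will report “with CRLF line terminators” when the script is affected:

file run_cservice.sh

# Convert CRLF to LF in place
sed -i 's/\r$//' run_cservice.sh

# Or, if you have it installed:
dos2unix run_cservice.sh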

Please note that in the Dockerfile provided I install the pytest Framework and a file called tests/test_cservice.py. That was not in the original Dockerfile, but I wanted to share with you that I provide Unit Testing that can be run from a Linux Container for all my projects.

What I normally do is have two Dockerfiles: one for the Production version to be deployed, and another for running Unit Testing, and sometimes functional testing as well, from inside the Docker Container. So, strictly speaking, for the production version I would not copy tests/test_cservice.py or install pytest.

A different question is internal Automation Tools, where it may be interesting to provide an All-in-One image that can run the Unit Testing before starting the service. It is interesting to provide some debugging tools in our Internal Automation Tools, so we can troubleshoot what’s going on in case of problems. Take a look at my previous article about Python versions for Docker and Automation tools for more considerations.