
News from the Blog 2022-01-22

News for the Blog

It has been 9 years since I created the blog, and some articles with old content still get many visitors. To make sure they get a clear picture and not obsolete information, all the articles of the taxonomy POST will include the date and time when the article was first published.

I also removed the annoying “Leave a comment” link at the top. I think it influences some people to leave comments before actually having read the article.

It is still possible to add comments, but they are at the bottom of the page. I believe it makes more sense this way. This is the way.

Technically that involved modifying the files of my template:

  • functions.php
  • content.php

My Books

Automating and Provisioning to Amazon AWS using boto3, Amazon’s SDK for Python 3

I finished my book about Automating and Provisioning to Amazon AWS using boto3, Amazon’s SDK for Python 3.

It’s 128 pages in full-size DIN-A4, DRM-free, and comes with a link to the code samples of a real CLI menu-based project.

Docker Combat Guide

I have updated my book Docker Combat Guide, adding a completely new section, including source code, on working with Docker’s Python SDK.

I demonstrate the Docker SDK by walking through the code of an actual CLI program I wrote in Python 3.

Here you can see a video that demonstrates how I launched a project with three Docker Containers via Docker Compose. The Containers run Python, a Flask web server, and Redis as a bridge between the two Python Containers.

All the source code is downloadable with the book.

Watch at 1080p in full screen for a better experience.
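If you have never used Docker’s Python SDK, here is a minimal sketch, not from the book, of what it looks like (it assumes you have installed the SDK with pip install docker):

import docker

# Connect to the local Docker daemon and list the running Containers
o_client = docker.from_env()
for o_container in o_client.containers.list():
    print(o_container.name, o_container.status)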

My Classes

In January I resumed the coding classes. I have new students, and a few spots free in my agenda, as some of my students graduated and others have been hired as Software Developers. I cannot be more proud. :)

Free Training

Symfony is one of the most popular PHP Frameworks.

You can learn it with these free videos:

https://symfonycasts.com/screencast/symfony/

Software Licenses I Purchased

Before leaving 2021, I registered WinRAR for Windows.

WinRAR is compression Software that has been with us for 19 years.

I’m pretty sure I registered it in the past, but these holidays I was out with only two Windows laptops and I had to do some work for the university, and WinRAR came in handy, so I decided to register it again.

I create Software and Books, and I earn my living with this, so it makes a lot of sense to pay others for their good work crafting Software.

Books I Purchased

I bought this book and so far it is really good. I wanted to buy some updated books, as all my Linux books are a few years old already. Also, I keep my skills sharp by reading, reading, reading.

Hardware I purchased

So I bought a cheap car power inverter.

The ones I saw on Amazon were €120+ and they were not very well rated, so I opted to buy a cheap one in the supermarket and keep it in the car just in case one day I need it. (My new Asus Zenbook laptop has 18 hours of autonomy and I don’t charge it for days, but you never know.)

For those who don’t know, a power inverter allows you to get a 220V plug (120V in the US) from your car’s lighter socket. You can also get the energy from an external car battery. This comes in really handy to charge the laptop, cameras, your drone… if you are out in nature and there is no plug nearby.

I bought one years ago to power up Raspberry Pis when I was doing research for a project I was considering launching.

Fun

Many friends are using Starlink as a substitute for fiber in their rural homes, and they are super happy with it.

One of them sent me a very fun article.

It is in Italian, but you can google translate it.

https://leganerd.com/2022/01/10/starlink-ha-un-piccolo-e-adorabile-problema-con-i-gatti/

Anyway, you can get the idea of what’s going on from the picture :)

So tell me… your speed with Starlink drops 80% in winter, huh… aha…

Random news about Software

I tried the voice recognition in Slack huddles, and it works pretty well. Zoom also has this feature, and it is great, especially when you are in a group call or in a class.

My health

I was experiencing some problems, so I scheduled an appointment to get a blood analysis and to be checked. Just in case.

TL;DR: I could have died.

The doctors saw my analysis and sent me to the hospital urgently, where they found something that was going to be lethal. For hours they were checking me and doing several more analyses and tests to rule out false positives, etc… and they pinpointed the issue, provided urgent treatment, and confirmed that I could have died at any moment.

Basically I dodged a bullet.

I was doing certain healthy things that helped me in a situation that could have been deadly or extremely dangerous to my health.

With the treatment and my strict discipline, I turned the situation around really quickly, and now I have more health and more energy than before. I feel rejuvenated.

I’m feeling lucky that with my work, the classes, the books I wrote, etc… I didn’t have to worry about the expensive medicines, the transport, etc… It was a bad moment, during Christmas, with so many people on holidays and pharmacies and GPs closed, so I had to spend more time searching and traveling, and pay more, than would have been strictly necessary. Despite all the time I devoted to my health, I managed to finish my university duties on time, I didn’t miss my duties at work after the hospital, nor did I have to cancel any programming classes or mentoring sessions. Nobody outside my closest circle knew what was going on, with the exception of my boss, whom I kept informed in real time, just in case there was any problem, as I didn’t want to let the company down and have my duties at work unattended if something major happened.

I was not afraid to die. Unfortunately, I’ve lost very significant people since I was a child. Relatives, much appreciated bosses and colleagues whom I considered my friends, and great friends from different circles. Illnesses, accidents, a friend of mine who committed suicide years ago, and some of my partners who attempted it (before we knew each other). When you see people who are so good leaving, it brings a sadness that cannot be explained with words. I have had a tough life.

We have limited time, and we have the freedom to make choices. Some people choose to be miserable, to mistreat others, to lie, to cheat, to be unfaithful, to lack ethics and integrity. Those are their choices. Their wasted time will not come back.

Some of my friends are doctors. I admire them. They save lives and improve the quality of life of people with health problems.

I like being an Engineer because I can create things, I can build instead of destroy, I can help to improve the world, and I can help users to have a good time and to avoid the frustration of services being down. I chose to do positive work. So many times I’ve been offered much bigger salaries to do something I didn’t like, or by companies that I don’t admire, and I refused. Because I wanted to make a better world. I know many people don’t think like that, and they only take, take, take. They are even unable to understand my choices, even to believe that I’m like that. But it’s enough that I know what I’m doing, that it makes sense for me, and that I know that I’m doing well. And then, one day, you realize that by doing well, by being fair and nice even if other people stabbed you in the back, you got to know fantastic people like you, and people that adore you and love to have you in their life, in their companies… So I’m really fortunate. To all the good-hearted people around, who give without expecting anything in return, who try to make the world a better place: thank you.

Solving problems when updating to GNS3 last version, running with VirtualBox

Last Updated: 2022-01-19 12:05 Irish Time

So here I explain how to solve a problem that was happening to a friend.

He uses GNS3 for the university, and after installing the latest version, which in this case is 2.2.29, it stopped working.

He had it configured to use the local Server and VirtualBox in Windows 10.

The first thing to check and fix is the Ip address for Host Only.

If you use Linux or Mac, only certain Ip ranges can be used, or you’ll have to edit a config file inside /etc/vbox

So the first thing is to set an Ip Address in the VirtualBox VM that will make you worry-free.

So start the VirtualBox VM directly, and when the VM boots, use the text menu application to configure a valid Ip from the range defined for Host Only.

You can check this in VirtualBox in File > Host Network Manager

In my initial test I picked this Ip for the VM:

192.168.56.100

But using 192.168.56.100 can bring problems, as the default DHCP Server is defined with this Ip, so I switched to:

192.168.56.10

Press CTRL + X to save and exit.

The VM will reboot automatically. Wait until it has booted.

Now, open a Windows Command Prompt or a Linux/Mac Terminal on your computer and ping 192.168.56.10:

You should also be able to see the web interface by going to:

http://192.168.56.10
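If you prefer to script this check, here is a minimal Python 3 sketch (assuming the VM answers HTTP on port 80, as in this setup):

import urllib.request

# Check that the GNS3 VM web interface answers on the Ip we configured
try:
    with urllib.request.urlopen("http://192.168.56.10", timeout=5) as o_response:
        print("GNS3 VM web interface is up, HTTP status:", o_response.status)
except Exception as e:
    print("GNS3 VM not reachable:", e)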

If it works, then power off the VM, as we will start it automatically when running the main GNS3 program (not from VirtualBox).

Now launch the GNS3 program. Wait 30 seconds until it initializes, and go to Edit > Preferences.

Make sure you have the configuration like this:

Pay special attention to the Port for the GNS3 VM.

It seems the main problem for my friend was that he was using a previous version, he updated, and the settings from the previous version were kept. In his previous version he had configured port 3080, but the new GNS3 Server version 2.2.29 in the VM was using port 80, as you saw in my previous screenshots. So GNS3 was unable to connect to the VM.

After fixing this, restart GNS3, stop the VM if it was not automatically stopped, and start GNS3 again.

After approximately one minute connecting, you’ll see it working fine.

2021-12-22 News from the blog

Open Source

I released v1.0.3 of carleslibs, my Python Open Source libraries for rapid application development.

I updated the page of my 2013 PHP Framework, Catalonia Framework, linking to the blog. This is a very nice and lightweight PHP Framework that I created back then, and it worked so well that I have not had the need to update it since 2014.

Donations

Blizzard offered the employees the opportunity to donate in the name of Blizzard, up to USD $100. It is not the first time; it is a regular thing. Also, in the past the company matched any donation we employees made to help people from diverse backgrounds with fewer resources study at university.

I chose to donate to Lions in Cork, Ireland.

When they were alive, my grandparents were members of and donors to the Lions in Catalonia, so I trust them.

Security

Log4j Java vulnerability

A critical vulnerability named Log4Shell was found in Apache Log4j Java Open Source logging Library.

https://www.engadget.com/log4shell-vulnerability-log4j-155543990.html

The fix has already been released in v2.15.0: https://logging.apache.org/log4j/2.x/security.html

NSO zero-click iPhone exploit

An interesting read about NSO zero-click iPhone exploit: https://www.engadget.com/google-researchers-nso-zero-click-iphone-imessage-exploit-143213776.html

https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html

News from the blog 2021-12-07

Charity

I’ve donated to Equitas Health.

Equitas Health helps thousands of HIV-positive people in Ohio, Dayton and Columbus.

From their website: “Thousands more are reached with our prevention, testing, and other services. We are excited about embracing our expanded mission as a strategic step to further that legacy and its reach by providing care for all – with a focus on a safe and open space and highest quality healthcare for the LGBTQ community and others who are medically underserved.”

https://equitashealth.com/get-involved/give/donate-now/

I made my donation following a post by Terra Field, a former colleague at Blizzard who later led Netflix’s Trans* ERG, but I didn’t see at first that she had organized a GoFundMe campaign, so I donated again :)

If you want to help them:

https://www.gofundme.com/f/transphobia-is-not-a-joke?utm_source=customer&utm_medium=copy_link_all&utm_campaign=m_pd+share-sheet

https://equitashealth.com/get-involved/give/donate-now/

Articles

I created an article about provisioning to Amazon AWS EC2 and running playbooks (recipes) using Ansible, and using Dynamic Inventory to store the public Ips or public DNS names in an inventory, as I saw a lack of clarity in the existing articles about this topic.

I also provided two alternative ways: one pure Python 3, and the other Bash based (grep, awk, tr).

Books

The books I publish on LeanPub have two prices: the suggested price, which is the price I consider right for the book, and the minimum price, which is the minimum I authorize a reader to pay to have it.

You can buy it for the minimum price. You know your economy better than anyone.

So when a reader buys one of my books for the suggested price, instead of the minimum price, it really shows how much they appreciate my work.

So thanks for all the support and appreciation you show! :)

One of the motives I chose the Leanpub platform is that I think it is fair. No DRM, no BS. And the reader can ask for a refund within 45 days if they don’t like the book. It also makes me very happy to see that I don’t have any refunds. I appreciate it as a token of the usefulness of my work. Thanks. :)

Updates to Docker Combat Guide book (v.16 2021-11-24)

I added a nice trick to reverse engineer the original Dockerfile from a running Image.

I also added another typical copy-and-paste error to the Troubleshooting section.

https://leanpub.com/docker-combat-guide

Automating and Provisioning Amazon AWS (EC2, EBS, S3, CloudWatch) with boto3 (Amazon’s SDK for Python 3) and Python 3 book

I’m writing a book about how to automate your Amazon AWS tasks using boto3, Amazon’s AWS SDK for Python 3: provisioning new instances, stopping and starting them, creating volumes, creating/deleting buckets in S3, uploading/downloading files from S3…

It is currently 20% complete. With 43 pages, it already covers the EC2 section.

https://leanpub.com/amazon-aws-boto3
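As a small taste of what boto3 code looks like, here is a minimal sketch, not from the book, that lists the S3 buckets of the account (it assumes your credentials are already configured in the environment):

import boto3

# List the S3 buckets in the account
o_s3 = boto3.client("s3")
d_response = o_s3.list_buckets()
for d_bucket in d_response["Buckets"]:
    print(d_bucket["Name"])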

Open Source

I’ve been working on carleslibs v1.0.3. I added the MenuUtils class, which allows you to assemble menus super quickly that execute the code referenced in the menu array. Ideal for building CLI applications very fast.

I also added the KeyboardUtils class, which allows you to ask the user for Strings within certain lengths, allowing or disallowing spaces and/or underscores, and to ask the user for Integer values within a certain min and max, with 0 to go back.

The plan is to release the new version of carleslibs as soon as I’ve tested it properly.

Social part

For those who follow my recommendations, as always, I have updated the list of new movies I watched and the list of new videogames I played.

Provisioning AWS EC2 Instances with Ansible and Automating Apache deployment with or without using Ansible Dynamic Inventory from Ubuntu 20.04 LTS

This article is being included in my book Provisioning to Amazon AWS using boto3 SDK for Python 3.

Pre-requisites

I’ll use Ubuntu 20.04 LTS.

Python 2 is required for Ansible’s ec2.py Dynamic Inventory script.

Python 3 is required for our programs.

sudo apt install python2 python3 python3-pip
# Install boto for Python 2 for Ansible (alternative way if pip install boto doesn't work for you)
python2 -m pip install boto
# Install Ansible
sudo apt install ansible

If you want to use Dynamic Inventory

You can use the Python 2 ec2.py and ec2.ini files, adding them to /etc/ansible with execution permission (+x), to use the Dynamic Inventory.

You will need to have your credentials set.

I use Environment variables:

#!/bin/bash

export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=AKIXXXXXXXXXXXXXOS
export AWS_SECRET_KEY=e4dXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXY6F

Then use the calls inside the shell script, or, assuming that the previous file was named credentials.sh, run source credentials.sh.

ec2.py is written in Python 2, so it will probably fail for you, as it is invoked with python and your default interpreter will be Python 3.

So edit /etc/ansible/ec2.py and set the first line to:

#!/bin/env python2

Once credentials.sh is sourced, you can just invoke ec2.py to get the list of your Instances, dumped by ec2.py in JSON format:

/etc/ansible/ec2.py --list

You can get that JSON file and load it and get the information you need, filtering by group.

You can call:

/etc/ansible/ec2.py --list > instances.json

Or you can run a Python program that escapes to the shell, executes ec2.py --list, and loads the Output as JSON.

I use my carleslibs here to escape to the shell, using my class SubProcessUtils. You can install them (they are Open Source), or you can code it manually if you prefer, importing the subprocess Python library and catching stdout and stderr (see the sketch after the code below).

import json
from carleslibs import SubProcessUtils

if __name__ == "__main__":
    s_command = "/etc/ansible/ec2.py --list"

    a_hostnames = []

    o_subprocess = SubProcessUtils()
    i_error_code, s_output, s_error = o_subprocess.execute_command_for_output(s_command, b_shell=True, b_convert_to_ascii=True, b_convert_to_utf8=False)
    if i_error_code != 0:
        print("Error escaping to shell!", i_error_code)
        print(s_error)
        exit(1)

    d_json = json.loads(s_output)

    d_hosts = d_json["_meta"]["hostvars"]

    for s_host in d_hosts:
        # You'll get an ip/hostname in s_host, which is the key
        # You have to check for the group and the value for the key Name, in order to get the Name of the group
        # As an exercise, print(d_hosts[s_host]) and look at the fields; the exact
        # field names depend on your ec2.py version (ec2_tag_Name and ec2_ip_address are assumptions)
        s_group_name = d_hosts[s_host].get("ec2_tag_Name", "")
        s_address = d_hosts[s_host].get("ec2_ip_address", s_host)
        if s_group_name == "yourgroup":
            # This filters only the instances with your group name, as you want to create an inventory file just for them
            # That's because you don't want to launch the playbook for all the instances, but only for those in your group name in the inventory file.
            a_hostnames.append(s_address)

    # After this you can iterate over your list a_hostnames and generate an inventory file yourinventoryfile
    # The [ec2hosts] in your inventory file must match the hosts section in your yaml files
    # You'll execute your playbook with:
    # ansible-playbook -i yourinventoryfile youryamlfile.yaml
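If you prefer not to install carleslibs, a minimal equivalent sketch using only the standard library would be (same assumptions as above):

import json
import subprocess

# Escape to the shell with subprocess instead of carleslibs
o_result = subprocess.run(["/etc/ansible/ec2.py", "--list"],
                          capture_output=True, text=True)
if o_result.returncode != 0:
    print("Error escaping to shell!", o_result.returncode)
    print(o_result.stderr)
    exit(1)

d_json = json.loads(o_result.stdout)
d_hosts = d_json["_meta"]["hostvars"]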

So an example of a YAML playbook to install Apache2 on the Ubuntu 20.04 LTS spawned instances, let’s call it install_apache2.yaml, would be:

---
- name: Update web servers
  hosts: ec2hosts
  remote_user: ubuntu

  tasks:
  - name: Ensure Apache is at the latest version
    apt:
      name: apache2
      state: latest
      update_cache: yes
    become: yes

As you can see, the hosts: section in the YAML playbook matches the [ec2hosts] in your inventory file.

You can choose to have your private key certificate .pem file set in /etc/ansible/ansible.cfg, or, if you want to have different certificates per host, add them after the ip/address in your inventory file, like in this example:

[localhost]
127.0.0.1
[ec2hosts]
63.35.186.109	ansible_ssh_private_key_file=ansible.pem

The ansible.pem certificate must have restricted permissions, for example: chmod 600 ansible.pem

Then you end by running:

ansible-playbook -i yourinventoryfile install_apache2.yaml

If you don’t want to use Dynamic Inventory

The first method is to use add_host to print on the screen the properties of the EC2 Instances provisioned.

The trick is to escape to the shell executing ansible-playbook, capture the output, and parse the text looking for ‘public_ip:’.

This is the Python 3 code I created:

class AwesomeAnsible:

    def extract_public_ips_from_text(self, s_text=""):
        """
        Extracts the addresses returned by Ansible
        :param s_text:
        :return: Boolean for success, Array with the Ip's
        """

        b_found = False
        a_ips = []

        i_count = 0
        while True:
            i_count += 1
            if i_count > 20:
                print("Breaking look")
                break
            s_substr = "'public_ip': '"
            i_first_pos = s_text.find(s_substr)
            if i_first_pos > -1:
                s_text_sub = s_text[i_first_pos + len(s_substr):]
                # Find the ending delimiter
                i_second_pos = s_text_sub.find("'")
                if i_second_pos > -1:
                    b_found = True
                    s_ip = s_text_sub[0:i_second_pos]
                    a_ips.append(s_ip)
                    s_text_sub = s_text_sub[i_second_pos:]
                    s_text = s_text_sub
                    continue

            # No more Ip's
            break

        return b_found, a_ips

Then you’ll use it with something like:

        # Catching the Ip's from the output
        b_success, a_ips = self.extract_public_ips_from_text(s_output)
        if b_success is True:
            print("Public Ips:")
            s_ips = ""
            for s_ip in a_ips:
                print(s_ip)
                s_ips = s_ips + self.get_ip_text_line_for_inventory(s_ip)
            print("Adding Ips to group1_inventory file")
            self.o_fileutils.append_to_file("group1_inventory", s_ips)
            print() 

The get_ip_text_line_for_inventory() method returns a line for the inventory file, with the Ip and the key to use, separated by a tab (\t):

    def get_ip_text_line_for_inventory(self, s_ip, s_key_path="ansible.pem"):
        """
        Returns the line to add to the inventory, with the Ip and the keypath
        """
        return s_ip + "\tansible_ssh_private_key_file=" + s_key_path + "\n"

Once you have the inventory file, like the one below, you can execute the playbook for your group of hosts:

[localhost]
127.0.0.1
[ec2hosts]
63.35.186.109	ansible_ssh_private_key_file=ansible.pem

ansible-playbook -i yourinventoryfile install_apache2.yaml

Alternative way parsing with awk and grep

You can run this Bash Shell Script to get only the public Ips when you provision the Instances to Amazon AWS EC2 from your group, named group1 in this case:

./launch_aws_instances-group1.sh | grep "public_ip" | awk '{ print $13; }' | tr -d "',"
52.213.232.199

In this example, 52.213.232.199 is the Ip of the Instance I provisioned.

So, to put it all together: from a Python file I generate this Bash file, and I escape to the shell to execute it:

#!/bin/bash

export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=AKIXXXXXXXXXXXXXOS
export AWS_SECRET_KEY=e4dXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXY6F

# Generate a new Inventory File
echo "[localhost]" > group1_inventory
echo "127.0.0.1" >> group1_inventory
echo "[ec2hosts]" >> group1_inventory

ansible-playbook -i group1_inventory launch_aws_instances-group1.yaml

I set the credentials again because, as this Bash Shell Script is invoked from Python, they are not sourced.

The trick here is that the launch_aws_instances-group1.yaml file has a task to add the hosts to Ansible’s in-memory inventory, and to print the information.

That output is what I scrape from Python, and then I use the extract_public_ips_from_text() shown before.

So my launch_aws_instances-group1.yaml (which I generate from Python, customizing the parameters) looks like this:

# launch_aws_instances.yaml

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
      s_keypair_name: "ansible"
      s_instance_type: "t1.micro"
      s_image: "ami-08edbb0e85d6a0a07"
      s_group: "ansible"
      s_region: "eu-west-1"
      i_exact_count: 1
      s_tag_name: "ansible_group1"
  tasks:

    - name: Provision a set of instances
      ec2:
         key_name: "{{ s_keypair_name }}"
         region: "{{ s_region }}"
         group: "{{ s_group }}"
         instance_type: "{{ s_instance_type }}"
         image: "{{ s_image }}"
         wait: true
         exact_count: "{{ i_exact_count }}"
         count_tag:
            Name: "{{ s_tag_name }}"
         instance_tags:
            Name: "{{ s_tag_name }}"
      register: ec2_ips
      
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      loop: "{{ ec2_ips.instances }}"

In this case I use t1.micro because I provision to EC2-Classic and not to the default VPC; otherwise I would use t2.micro.

So I have a Security Group named ansible, created in the Amazon AWS EC2 console as EC2-Classic, and not as VPC.

In this Security Group I opened the Inbound HTTP Port, and the SSH port for the Ip I’m provisioning from, so Ansible can SSH using the Key ansible.pem.

The Key Pair has been created and named ansible as well (section key_name under ec2).

The Image used is Ubuntu 20.04 LTS (free tier) for the region eu-west-1 which is my wonderful Ireland.

For the variables (vars) I use the MT Notation, so the prefixes show exactly what we are expecting: s_ for Strings, i_ for Integers; and I never have collisions with reserved names.

It is very important to use count_tag and instance_tags with the name of the group, as the actions will be using that group name. Remember idempotency.

The task with add_host is the one that makes the information for the instances be displayed, like in this screenshot.

Python Programming class for beginners 2021-11-11

Suggested optional pre-requisites to have before the class

  • Install Python 3 (3.8 recommended, any version from Python 3.6 is fine)
  • Install PyCharm Community Edition or Professional Edition (the Professional Edition is free for 30 days only)

Suggested optional reading material

Examples from today’s class have been uploaded here

Material from some other classes

In this class, recorded in the video (1h 48m):

Why there are different types of variables

  • Memory
    • All are 0 and 1’s
  • How big a variable can be (float, integer, string)
  • Operations performed, like +
  • MT Notation I use

The order in which the instructions are performed (top to bottom).

  • Declaring a variable
  • We can call things that were previously defined above (like functions)
  • Loops will send the execution pointer up
  • Operations with integer variables
    • Additions
  • Operations with Strings (see the little example below)
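A tiny sketch of these basics, using the MT Notation for the variable names (illustrative, not the exact code from the class):

# i_ prefix: Integer, f_ prefix: Float, s_ prefix: String
i_total = 0
i_total = i_total + 5
f_price = 9.99
s_name = "Carles"

# The + operation behaves differently per type: it adds Integers, but concatenates Strings
print(i_total + 1)        # 6
print(s_name + " Mateo")  # Carles Mateo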

Language syntax and tricks (write a Notebook with your own notes)

  • Pre-created solutions, like reversing a string with for
  • Open a file, read the contents line by line into strings, split the strings into arrays by a separator, like tab or space, and get what you want by position (completed in the sketch after the notes below).
with open(s_filename) as file:
    for s_line in file:
        print(s_line)

# Show Exceptions

# Show hexadecimal of the text:

# cd /home/carles/Desktop/code/carles/python-classes/2021-11-11

# hexdump -c text.txt

# remove newlines and spaces at the end of the line with s_line.rstrip()
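Completing the snippet above, a minimal sketch that also splits each line by a separator and picks a field by position (the tab separator and the field index are just examples):

with open(s_filename) as file:
    for s_line in file:
        s_line = s_line.rstrip()       # remove the trailing newline and spaces
        a_fields = s_line.split("\t")  # split into an array by the separator
        if len(a_fields) > 1:
            print(a_fields[1])         # get what you want by position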

The blocks and functions

  • Indentation
  • Typical error missing :
  • Functions are for reusing code and reducing errors
    • As they are for reusing, they are flexible (parameters)

The while loop

  • The condition True
  • The condition in a variable
  • The condition in an if
  • Break
  • Pattern counter inside a loop

Building a Menu for user selection in text mode

  • Input
  • Validation (a minimal sketch follows)
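A minimal sketch of the idea, not the exact code from the class: a text menu inside a while loop, reading the user input and validating it:

# A simple text menu with input and validation
while True:
    print("1. Say hello")
    print("0. Exit")
    s_option = input("Select an option: ")
    if s_option == "1":
        print("Hello!")
    elif s_option == "0":
        break
    else:
        print("Invalid option, try again")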

Questions

  • As part of the questions, a question was raised about the mental process to build the solution.
    • TODO: Was explained
    • The importance of keeping Notes with snippets of code that you normally use one time and another
    • How to search in Google to find explanations in Python sites
  • A question was raised about how menus could be implemented using OOP
    • A parent Menu class with an inheriting MenuAdmin class was demonstrated. The MenuAdmin inherits the menu options from the parent Menu and adds Admin-only options (a minimal sketch below).
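The sketch below illustrates the idea (again, not the exact code demonstrated in the class):

class Menu:
    """Parent class holding the common menu options."""
    def __init__(self):
        self.a_options = ["1. List items", "0. Exit"]

    def render(self):
        for s_option in self.a_options:
            print(s_option)


class MenuAdmin(Menu):
    """Inherits the options from Menu and adds Admin-only ones."""
    def __init__(self):
        super().__init__()
        self.a_options.insert(1, "2. Delete items (Admin only)")


o_menu = MenuAdmin()
o_menu.render()  # Prints the inherited options plus the Admin-only option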

News from the Blog 2021-11-11

New Articles

How to communicate with your Python program running inside a Docker Container, using Linux Signals

I hope you’ll have fun reading this article:

Communicating with Docker Containers via Linux Signals and Python

I migrated my last services from Amazon and the blog to Google Compute Engine (GCE / GCP)

I wrote a Postmortem analysis about the process of migrating my last services away from my 11-year-old Amazon account.

Updates

Updates to articles

I updated the article about Python weird things that you may not know, adding the Ellipsis …

I’ve been working on some Cassandra examples. I may publish an article soon about using it from Python and Docker.

Updates to My Books

I updated my Python and Docker books.

I’m currently writing a book about using Amazon AWS Python SDK (boto3).

Updates to Open Source projects

I have updated ctop, fixing two bugs and increasing the Code Coverage.

I made a new tag and released the latest Stable Version:

https://gitlab.com/carles.mateo/ctop/-/tags/0.8.7

On top of my local Unit Testing, I have Jenkins checking that I don’t commit anything that breaks the Tests.

Some time ago I wrote some articles about how you can set up Jenkins in a Docker Container.

Miscellaneous

Charity

I’ve donated to Wikipedia.

Only 2% of the viewers donate, so I answered the call every time it was made.

This is my 5th donation to Wikimedia.

I consider that Freedom is very important.

I bought these new books

One of my secrets to be on top is that I’m always studying.

I study all the time, at work and in my free time.

I use Linux Academy and I buy books on paper. I don’t connect with reading on tablets. I think information is retained better when read on paper. I also use a marker and pointers to keep direct access to the most interesting points in the books.

And I study all kinds of subjects. Obviously I know a lot about Web Scraping, but there is always room for learning more. And whatever new I learn helps me be better with my students and clearer when writing my books.

I’ve never been a Front End developer, but I’ve been able to fix bugs in the Front End engines of the companies I worked for, like Privalia. I was passed a bug that prevented Internet Explorer users from buying, just one hour before we launched a massive campaign. I debugged it and found a variable named “value”, so the html looked like <input name="value" value="">. In less than 30 minutes I proved to the incredulous Head of Development and the CTO that a bug in Internet Explorer was causing a conflict when fetching the value from the input named value. We deployed the update to Production and the campaign was a total success. So I consider knowing Javascript and Front End a need as well, even if I don’t work directly with it. I want to be able to understand all the requirements and possibilities, and weaknesses, so I can fix bugs and save the day. That allowed me to fix scalability problems in Node.js and PhantomJS projects too. (They are Javascript Server Side, event driven, projects.)

It seems that Amazon.co.uk works well again for Ireland. My last two orders arrived on time, and apparently I had no problems with border taxes.

Nice Python article

I enjoyed this article a lot, because it explains part of what I did with my student and friend Albert, in a project that analyzes the Apache access logs for patterns of exploit attempts, then feeds a database, and then blocks those offending Ip Addresses in the Firewall.

The article only covers the Pandas part, reading the access.log file and working with it, but it is a very well written article:

https://mmas.github.io/read-apache-access-log-pandas
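Along the same lines, here is a minimal sketch of that first step (it assumes the Apache combined log format; the file path and column choices are just examples, not the article’s exact code):

import pandas as pd

# Apache combined log format:
# ip identd user [date tz] "request" status size "referer" "user agent"
df = pd.read_csv("access.log", sep=" ", header=None, usecols=[0, 5, 6])
df.columns = ["s_ip", "s_request", "i_status"]

# Top 10 Ips by number of requests, a typical first step to spot offenders
print(df.groupby("s_ip").size().sort_values(ascending=False).head(10))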

Nice Virtual Volumes article from VMware

I prefer Open Source, but there are very good commercial products too.

I liked this article about Virtual Volumes from VMWare:

Understanding Virtual Volumes (vVols) in VMware vSphere 6.7/7.0 (2113013)

https://kb.vmware.com/s/article/2113013

Thanks Blizzard (again)

There is a very nice initiative where we can nominate 4 colleagues a year that we think deserve recognition.

My colleagues voted for me, so I received a gift voucher that I can spend in Irish stores like Ikea, PC World, Argos, Adidas, App Store & iTunes…

So thanks a million buds. :)

Communicating with Docker Containers via Linux Signals and Python

Normally, if we need to refresh a config in a Container, we will spawn a new one, or we will access it with, for instance, sudo docker exec -it mycontainer /bin/sh and force a reload, or we will have to restart the Container.

What if we want to be able to reload the config at any moment without restarting the process, or to trigger a process in our Container (like a dump or a flush) in another way than implementing an API?

An unexplored way, for many, to communicate with your Container’s main process is to send Signals.

So basically I will show you how you can trap Signals within a Python process, which is the main process of your Docker Container, and send them from your Hypervisor with a command like this (docker-signal being the name of the Container, as in the build script below):

sudo docker kill --signal=SIGUSR1 docker-signal

I chose to use SIGUSR1 as it is reserved for user-defined Signals.
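By the way, if you prefer to send the Signal from Python instead of from the command line, a minimal sketch with Docker’s Python SDK (assuming the Container is named docker-signal, as below) would be:

import docker

# Send SIGUSR1 to the running Container
o_client = docker.from_env()
o_client.containers.get("docker-signal").kill(signal="SIGUSR1")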

You can clone the project or get the source code from:

https://gitlab.com/carles.mateo/docker-signal

The Dockerfile

FROM ubuntu:20.04

MAINTAINER Carles Mateo

RUN apt update && apt install -y python3 python3-pip vim less && apt-get clean

# This will make sure printing to the Screen works when running in detached mode
ENV PYTHONUNBUFFERED=1

ENV DOCKERSIGNAL /var/dockersignal

RUN mkdir -p $DOCKERSIGNAL

COPY *.py $DOCKERSIGNAL

WORKDIR $DOCKERSIGNAL

# Again, to enforce printing to the Screen when running detached
CMD ["python3", "-u", "/var/dockersignal/dockersignal.py"]

The dockersignal.py file

# By Carles Mateo https://blog.carlesmateo.com

import signal
import time


def handler(signum, frame):
    print('Signal handler called with signal', signum)

    if signum == 10:
        # 10 is the equivalent to SIGUSR1 for most x86/ARM (not for Alpha/Sparc, MIPS, PARISC)
        print("Simulated action: Reload config")


if __name__ == "__main__":

    print("Waiting for a Signal")
    # Listen for this signal; more signals can be registered the same way
    signal.signal(signal.SIGUSR1, handler)

    while True:
        # Do Whatever
        time.sleep(1)

A shell file to build and run the Container like a pro

#!/bin/bash

DOCKER_CONTAINER_NAME="docker-signal"
DOCKER_IMAGE_NAME="docker-signal"

printf "Removing old Container %s\n" "${DOCKER_CONTAINER_NAME}"
sudo docker rm "${DOCKER_CONTAINER_NAME}"

printf "Removing old Image %s\n" "${DOCKER_IMAGE_NAME}"
sudo docker image rm "${DOCKER_IMAGE_NAME}"

echo "Creating Docker Image"
sudo docker build -t ${DOCKER_IMAGE_NAME} . --no-cache

retVal=$?
if [ $retVal -ne 0 ]; then
    printf "Error. Exit code %s\n" ${retVal}
    exit ${retVal}
fi

echo "Running Docker Container ${DOCKER_CONTAINER_NAME} based in image ${DOCKER_IMAGE_NAME}"
sudo docker run --cpus="1.0" --name ${DOCKER_CONTAINER_NAME} ${DOCKER_IMAGE_NAME}

See it in action

References

https://docs.docker.com/engine/reference/commandline/kill/

https://man7.org/linux/man-pages/man7/signal.7.html

https://docs.python.org/3/library/signal.html

If you want, you can buy my Docker Combat Guide book.

Migrating my 11-year-old Amazon AWS account services (Postmortem Analysis)

I started to explain that I was migrating some services from Amazon and that some of my sites were under Maintenance and that I would provide more information.

Here is the complete story of why I migrated all the services from my 11-year-old Amazon account to other CSPs.

Some lessons can be learned from my adventure.

I migrated my last services from Amazon to GCP

Amazon sent me an email on October 6th of this year, 2021, telling me that they will disable EC2-Classic by August 2022. I thought I would not be able to keep my Static Ips, as in the past VPC Ips and EC2-Classic Ips were not transferable, so considering that I would lose my Static Ips anyway, I started to migrate some services to other providers like Digital Ocean.

It is not cool losing Static Ip (Elastic Ip in AWS) Addresses, as this is bad for SEO, so given that I thought I would lose my Static Ips that have been with me for years, I started to migrate certain services to much more economical providers.

Amazon is terrible at communicating, and I talked with some product managers in the past about that, when they lost one of my Volumes, and the email was so cold and terrible that it actually hurt more than Amazon losing my Data. I believed it was a poorly made Scam, and when I realized it was true I reached out to one of my friends, who is a manager there, as I know they care about doing things right, and he organized a meeting with two PMs so I could pass on my feedback.

The Cloud providers are changing things very fast, and nobody is able to stay up to date with the changes unless their work position allows plenty of time to get updated. Even if pages of documentation are provided, you have to react to an event that they externally generated, forcing you to action: action to read all the documentation about EC2-Classic migrations, action to prepare to have migrated by August 2022.

So, August 2022… I was counting on having plenty of time, but I’m writing a new book about using the Amazon SDK for Python, boto3, and I was doing some API calls and they started to fail in a very unusual way: Exceptions with timeouts, but only for the only region where I had EC2-Classic.

urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7f0347d545e0>: Failed to establish a new connection: [Errno -2] Name or service not known

My config was:

        o_config = Config(
            region_name="us-east-1a",
            signature_version="v4",
            retries={
                'max_attempts': 10,
                'mode': 'standard'
            }
        )

But if I switched to another region name, it would work:

            region_name='us-west-2',

I made a mistake here: the region name is “us-east-1”, not “us-east-1a”. “us-east-1a” is the availability zone. The SDK was giving a timeout because it uses the region name as part of the endpoint’s hostname, so it was trying to connect to an endpoint that doesn’t exist.
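For clarity, the corrected config, with the same options as above but with the region instead of the availability zone, would be:

from botocore.config import Config

o_config = Config(
    region_name="us-east-1",   # "us-east-1a" is an availability zone, not a region
    signature_version="v4",
    retries={
        'max_attempts': 10,
        'mode': 'standard'
    }
)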

I never understood why a company like Amazon is unable to provide the SDK with a sample project, or projects, that are 100% working, with the source code, so people have a working base to build on.

Every API that I have created, I have provided with documentation, but also with examples in several languages showing how to use it.

In 2013 I was CTO of an online travel agency, and we had meta-searchers consuming our API at several hundred thousand requests per second. Everything was perfectly documented, examples were provided for several languages, and the documents and the SDK had version numbers…

Everybody forgets about Developers, and companies throw terrible and cold products at the poor Developers, so difficult to use. How many Developers would like to say: Listen, Mr. President of the big Cloud Company XXXX, I only want to spawn a VM that works, and fast, with easy wizards. I don’t want to spend 50 hours learning before being able to use your overpriced platform, doing 20 things first because your Ips are reflections of your infrastructure and based on Microservices. Modern JavaScript frameworks can create nice, gentle wizards even if you have supercold APIs.

Honestly, I didn’t realize my typo in the region, so I connected to the Amazon Console to investigate, and I saw this.

Honestly, when I read it I understood that they were going to end my EC2 Networking on the 30th of October. It was the 29th. I misunderstood.

It was my fault for not reading it well to the end; I was shocked by the first part about the shutdown, and I didn’t fully understand that they were only going to shut down EC2-Classic for the zones where I didn’t have anything running.

From the long errors (3 exceptions chained) I didn’t realize that the endpoint is built with the region name. (And I was passing the availability zone.)

botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"

Here is where I say that a good SDM would have thought about and cared more for the Developers, and would have made the SDK check whether the region exists. How difficult is it to make an SDK a bit more clever, one that detects an invalid region id? It is not difficult.
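As an illustration, such a check is a few lines with boto3 itself (a sketch; a real client wrapper would do this before building the endpoint):

import boto3

s_region = "us-east-1a"  # the availability zone typo from above

# boto3 can list the valid regions for a service, so an invalid id is easy to detect
a_valid_regions = boto3.session.Session().get_available_regions("ec2")
if s_region not in a_valid_regions:
    print("Invalid region:", s_region)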

It is true that it was late in the evening and I was tired from the whole day; on two days of the week, between work and Zoom university classes, I work 15 and 13 hours respectively, not counting the assignments, so by the end of the week I am very tired. But that’s why it is very important to follow methodology and to read well. I think Amazon has 50% of the fault for the way they do things: how they created the SDK, how they communicate, and for the errors the console returned when I tried to create a VPC instance from an EC2-Classic AMI (they seem related to the fact that I had old VPC Network objects with a shorter hash than the one they currently use). The other 50% was my fault, for not identifying the source of the error and not reading the message on their website well.

But the fact that they were having those errors in the APIs, and the timeouts, made me believe they were going to cut the EC2-Classic Networking the next day.

All the mistakes fall together in a perfect storm.

I checked the documentation and saw it was possible to migrate my Static Ips to VPC Static Ips.

It was Friday evening, and I cancelled my plans in order to migrate the Blog to VPC, in an attempt to keep running it with Amazon.

As Cloud Architect, I like to have running instances in several CSP as it allows me to stay up to date with the changes they do.

I checked the documentation for the migration. Disassociating the Static Ip (Elastic Ip in AWS jargon) was easy. Turning it into VPC as well.

As I progressed, what should have been easy turned into a nightmare, as I was getting many errors from the Amazon API, without any information, and my Instances were not created.

I figured out that their API could have problems with old VPC objects I had created long ago, so I had to create new objects for several things.

I managed to spawn my instances, but they were being launched and terminated instantly, without information. Frustrating.

When launching a new instance from the AMI (a Snapshot of the blog), I was shown options to add more volumes that made no sense. My Instance was using 16GB of a 20GB total space, and I was shown different volume configs depending on the instance: in some cases an additional 20GB volume, in others a small SSD, ephemeral storage, and 10GB for the AMI (which requires at least 16GB).

After some fighting I managed to make it work, after deleting the volumes that made no sense and keeping only one of 20GB, the same size as my AMI.

But then my nightmare continued: making the VPC Instance have Internet access and be reachable from outside. I had to create a new Internet Gateway, NAT, Network, etc…

As mentioned, the old objects I was trying to reuse were making the process fail.

I was running out of time, and I thought they were going to shut down the EC2-Classic network shortly (as I had not read correctly), so I decided to download everything and migrate to another provider. To do that, I first blocked all the traffic, except for my Ip.

I worked in parallel, creating the new config in Google Cloud, just in case I had forgotten something. I had created a document for the migration, and it was accurate.

I managed to do everything fast enough. The slowest part was downloading all the Data, as I hold entire VMs for projects like Cassandra Universal Driver.

Then I powered off my Amazon Instance for the Blog forever.

In GCP I blocked all the traffic in the firewall, except for my Ip, so I could work calmly.

When everything was ready, I had to redirect the DNS to the new static Ip from Google.

The DNS provider I used had implemented some changes in their API, so I was getting errors replacing my old entry ‘.’ (their JSON calls returned Internal Server Error). Finally I figured out how to work around it, and I was able to confirm that the first service was up and running.

I did some tests to make sure there were not unexpected permission problems, entries in the logs, etc…

Only then did I open the Google Firewall. I have a second firewall in each instance, where I block or open at iptables level what I want: basically abusive bots’ Ips trying to find exploits or to brute-force passwords by dictionary.

I checked with my phone, without Wifi, that the Firewall was all good. (It is always a good idea to use another external Ip, different from the management one, to check.)

I added a post explaining that I was migrating some of my Services and that they were under maintenance.

I mentioned in the blog that some of my services were being migrated from Amazon to Digital Ocean.

For some reason, in the Backup of the Database one user was lost, so I created it in MySQL with the typical commands:

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON mydatabase.* TO 'username'@'localhost';
FLUSH PRIVILEGES;

My Sites are under Maintenance

2021-11-08 Update: There is a Postmortem analysis of what happened with Amazon here.

TL;DR: I’m undergoing Maintenance on all my sites.

The main reason was that I was getting unexpected API Exceptions on the AWS SDK for Python (boto3), so I connected to the AWS Console to get more information.

Then I saw a message indicating that they would stop EC2-Classic that very day, the 30th of October. (Please read the Update in the Postmortem analysis, as I understood that banner message incorrectly.)

I had already started migrating my Services; some I moved to other providers like Digital Ocean, others I had planned to keep in Amazon.

EOL (End of Life) was scheduled for August 2022, so when I saw the message from Amazon on the evening of the 29th, I decided to migrate my EC2-Classic Public Ips and Compute to VPC. Trying to deploy from an AMI, Amazon APIs were returning many internal errors, and as I figured out where their failures were, I was able to get instances to launch without being Terminated immediately with no explanation. Still, I had many problems with the Internet Gateway, VPC NAT, etc… After hours fighting with their errors and their console, which is more a bunch of pages to manage Infrastructure than a user/developer-friendly Cloud Tool, I decided that I had had enough.

After 11 years using Amazon AWS, including a trip to Dublin to be hired as Manager for CloudWatch, and giving them the idea to add AutoScaling (I was told the project was too easy for me and that I would get bored in a year or two, so I was not hired), I decided to move my Services to Google Cloud and to Digital Ocean.

I’m very polite, and I saw that when I told one Manager that the User Interface was terrible he didn’t like it, but I have to speak up and say that tools for developers cannot be cold as your evil girlfriend. They cannot be API-like, standalone pages to manage infinite parts of the Architecture. Websites providing services for developers cannot be created in cold SysAdmin style. If the infrastructure is hard to manage and internally you use APIs, build nice Wizards in Javascript. I was leading a Team of Developers with infinitely fewer resources than Amazon or Google, and we wrote a Multi-Cloud product with nice, clever, and easy-to-use Wizards, and they were infinitely better than those of the giant CSPs. We won a prize at European level at that time. But it was 2013.

I’ve migrated everything and moved all the data, statics, VMs… but I’m completing the adjustments for certain services like Cassandra nodes and web sites, bootstrapping some of my sites based on my PHP Catalonia Framework, adding Firewall rules to GCP, making changes for Ansible provisioning, deploying the Server scripts from IaC, Docker, etc…

I’ll be posting updates in Twitter.