Tag Archives: Python 3

Testing length of Linux ext3/ext4 and ZFS file names with Python 3

Last Update: 2022-04-16 15:22

So, I was working on a project and I wanted to test how long a file name can be.

The resources I checked, and also the Kernel and C source I reviewed, pointed to an effective limit of 255 characters.

But I had one doubt… given that Python 3 Strings are Unicode, and that the Linux default encoding is UTF-8, will I be able to use the full 255 characters, or will this be reduced when the Strings are encoded to UTF-8?

So I did a small Proof of Concept.

For the filenames I grabbed a fragment of the English translation of the epic book “Tirant lo Blanc” (The White Knight), one of the first books written in the Catalan language and the first chivalry novel known in the world. You can download it for free at:

https://onlinebooks.library.upenn.edu/webbin/gutbook/lookup?num=378

Python Code:

from carleslibs.fileutils import FileUtils
from colorama import Fore, Back, Style , init


class ColorUtils:

    def __init__(self):
        # For Colorama on Windows
        init()

    def print_error(self, s_text, s_end="\n"):
        """
        Prints errors in Red.
        :param s_text:
        :return:
        """
        print(Fore.RED + s_text)
        print(Style.RESET_ALL, end=s_end)

    def print_success(self, s_text, s_end="\n"):
        """
        Prints success messages in Green.
        :param s_text:
        :return:
        """
        print(Fore.GREEN + s_text)
        print(Style.RESET_ALL, end=s_end)

    def print_label(self, s_text, s_end="\n"):
        """
        Prints a label and not the end line
        :param s_text:
        :return:
        """
        print(Fore.BLUE + s_text, end="")
        print(Style.RESET_ALL, end=s_end)

    def return_text_blue(self, s_text):
        """
        Returns the given text in Blue.
        :param s_text:
        :return: String
        """
        s_text_return = Fore.BLUE + s_text + Style.RESET_ALL
        return s_text_return


if __name__ == "__main__":

    o_color = ColorUtils()
    o_file = FileUtils()

    s_text = "In the fertile, rich and lovely island of England there lived a most valiant knight, noble by his lineage and much more for his "
    s_text += "courage.  In his great wisdom and ingenuity he had served the profession of chivalry for many years and with a great deal of honor, "
    s_text += "and his fame was widely known throughout the world.  His name was Count William of Warwick.  This was a very strong knight "
    s_text += "who, in his virile youth, had practiced the use of arms, following wars on sea as well as land, and he had brought many "
    s_text += "battles to a successful conclusion."

    o_color.print_label("Erasure Code project by Carles Mateo")
    print()
    print("Task 237 - Proof of Concep of Long File Names, encoded is ASCii, in Linux ext3 and ext4 Filesystems")
    print()
    print("Using as a sample a text based on the translation of Tirant Lo Blanc")
    print("Sample text used:")
    print("-"*30)
    print(s_text)
    print("-" * 30)
    print()
    print("This test uses the OpenSource libraries from Carles Mateo carleslibs")
    print()
    print("Initiating tests")
    print("================")

    s_dir = "task237_tests"

    if o_file.folder_exists(s_dir) is True:
        o_color.print_success("Directory " + s_dir + " already existed, skipping creation")
    else:
        b_success = o_file.create_folder(s_dir)
        if b_success is True:
            o_color.print_success("Directory " + s_dir + " created successfully")
        else:
            o_color.print_error("Directory " + s_dir + " creation failed")
            exit(1)

    for i_length in range(200, 512, 1):
        s_filename = s_dir + "/" + s_text[0:i_length]
        b_success = o_file.write(s_file=s_filename, s_text=s_text)
        s_output = "Writing file length: "
        print(s_output, end="")
        o_color.print_label(str(i_length).rjust(3), s_end="")
        print(" file name: ", end="")
        o_color.print_label(s_filename, s_end="")
        print(": ", end="")
        if b_success is False:
            o_color.print_error("failed", s_end="\n")
            exit(0)
        else:
            o_color.print_success("success", s_end="\n")

    # Note: up to 255 work, 256 fails

I tried this in an Ubuntu VirtualBox VM.

As part of my tests I tried to add a character that is not typical in English, with a value above 127, like Ç.

When I use ASCII characters below 128 (0 to 127) for the filenames in ext3/ext4 and in a ZFS Pool, I can use 255 positions. But when I add characters that are not typical in English, like the Catalan ç and Ç or accented letters like À and í, the space available is reduced, because the limit is really 255 bytes and those characters take more than one byte when encoded as UTF-8:

Ç has value 128 in the extended ASCII table (code page 437) and ç has value 135.
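
You can check the effect quickly from a Python 3 shell, as non-ASCII characters take more than one byte once encoded to UTF-8:

len("a".encode("utf-8"))  # 1 byte
len("Ç".encode("utf-8"))  # 2 bytes as UTF-8
len("ç".encode("utf-8"))  # 2 bytes as UTF-8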

I used my Open Source carleslibs and the package colorama.

Install them with:

pip3 install carleslibs colorama

News from the blog 2022-03-22

Support to Ukraine

I’m Catalan. In 1936 the fascist military, led by Franco, rose up in arms against the elected government of the Spanish Republic. The Italian and Nazi German fascists in power bombed the Catalan population. Hundreds of thousands of innocent citizens were assassinated and millions of Catalans and Spaniards had to go into exile. The sons of those who ruled with the dictator have insisted on calling it a “civil war”, but it was the military, led by a fascist, revolting against the legitimate Republic and ending a democracy.

The dictatorship lasted until 1975, when the dictator died in his bed. The effects of the repression never left Catalonia, and nowadays people in Catalonia are still detained by the Spanish police for speaking the Catalan language in front of them, and our Parliament’s decisions are cancelled by the Spanish courts, for example to force the exit of a President of Catalonia that they didn’t like, or to force the Catalan schools to teach 25% of the time in Spanish, attacking the Catalan teaching system.

During WW2 millions of Jews were mass murdered, and people from all nations were assassinated.

The Russian population also suffered a lot fighting the Nazis.

Now we have to watch how Russia’s army invades Ukraine and murders innocent citizens.

That’s horrible.

I know Engineers from Ukraine. Those guys were doing great, building wealth based on knowledge and working well for companies across the world. Now these people are being killed, or Engineers, amongst all the brave population, are arming themselves to fight the invasion. Shells destroy beautiful cities, the population is starving, and young soldiers from both sides will never be seen again by their mothers.

I wrote a small article on how to identify and block, in the Ubuntu firewall, the IPs from Russia and Belarus until Russia leaves Ukraine.

Let music play in solidarity with Ukraine. The first is from a Catalan group. The second is from a famous Irish band, an epic song dedicated to the brave International Brigades, the volunteers that fought fascism in Spain and in Catalonia trying to make a better world.

The Blog

I’ve updated the SSL Certificate. The previous one I bought was issued for two years, and I renewed as it was due to expire.

I wrote a short article about how to update the SSL Certificates for Apache 2 in Ubuntu 20.04.

Articles

I published a small Python script to show the local datetime and the Unix Epoch Time.

Open Source

carleslibs

On the 6th of January I released carleslibs v.1.0.7

https://pypi.org/project/carleslibs/1.0.7/

The new version contains these improvements:

  • Modified OsUtils.get_total_and_free_space_in_gib() to return float instead of Integer.
  • Added HashUtils class with md5 for unicode Strings (see the hashlib sketch after this list).
    • Produces the same as the md5sum Linux tool.
  • Created FileUtils.create_folders(), which creates all the subfolders in the path, recursively.
  • Unit Testing:
    • Added test_get_inodes_in_use_and_free() to test_osutils.py
    • Added two tests more to test_osutils.py
    • Added test for version.py
    • Tests for HashUtils class.
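
For reference, the same digest as the md5sum Linux tool can be obtained with the standard library hashlib. This is just a sketch of the underlying idea, not the exact carleslibs API:

import hashlib

s_text = "hello world\n"
# md5 of the UTF-8 encoded bytes of the unicode String
s_md5 = hashlib.md5(s_text.encode("utf-8")).hexdigest()
print(s_md5)  # Same output as: printf 'hello world\n' | md5sum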

My books

Python 3 Exercises for Beginners

I have updated the book, offering solutions to exercises 11.1 and 11.2 (simple and encapsulated in a function), and I’ve created exercise 11.3.

If you purchased the book before, you can download any update for free.

Amazon AWS

I got an offer from a major publisher to publish my book Automating and Provisioning with Amazon Python 3 SDK boto3.

Honestly, my ego was flattered. It is a lot of reputation.

In the past I got an offer from another monstrously big publisher to publish my book Python 3 Combat Guide worldwide, which I also rejected, as well as an offer from a digital learning platform to create an interactive course from that same book.

I’ve rejected it again this time.

If you are curious, this is what I answered to them:

Hi XXXX,

I'm well, thank you. I hope you are doing well too.

Thanks for taking the time to explain your conditions to me.

I feel flattered by your editorial thinking about me. I respect your brand, as I mentioned, as I own several of your titles.

However, I have to refuse your offer.

It is not the first time a publisher has offered to publish one or more of my books all over the world, with much higher economic expectations.

I'll tell you why I love being at LeanPub:

1- I own the rights. All of them.
2- I can publish updates, and my readers get them for free. As I add new materials, the value is maximized for my readers.
3- I get 80% of the royalties.
4- If a reader is not happy, they can return the book for 60 days.
5- I can create vouchers and give a discount to certain readers, or give for free to people that are poor and are trying to get a career in Engineering.

The community of readers is very honest, and I only got 2 returns. One of them, I think, was from a publisher that purchased the book, evaluated it, contacted me to publish it, and after I rejected the offer they applied for the refund.

I teach classes, and I charge 125 EUR per hour. I can make much more on my own than with the one-time payment you offer. The compensation for the video seems really obsolete.

Also, I could be using Amazon self-publishing, which also brings bigger margins than yours.

So many thanks for your offer. I thought about it because of the reputation, but I already have a reputation. I get thousands of visits to my tech blog, and because of the higher royalties, even if I sell fewer books through LeanPub it is much more rewarding.

Thanks again and have a lovely day and rest of the week.


Best,
Carles

The provisioning in Amazon AWS through their SDK is a book I’m particularly proud of, as it empowers the developers so much. And I provide source code so they can go from zero to hero in a moment. Amazon should provide a sample project as I do, not difficult-to-follow documentation.

Teaching / Mentoring

As I was requested, I’ve been offering advice and having virtual coffees with some people that recently started their journey to become Software Engineers and wanted some guidance and advice.

It has been great seeing people putting passion and studying hard to make a better future for themselves and for their families.

I’ll probably add to the blog more contents for beginners, although it will continue being a blog dedicated to extreme IT, and to super cool Engineering skills and troubleshooting.

For my regular students I have a discord space where we can talk and they can meet new friends studying or working in Engineering.

Free Resources

This github link provides many free books in multiple languages:

https://github.com/EbookFoundation/free-programming-books

Tricks

Zoom can zoom the view. So if someone is sharing their screen and the font is too small, you can give your eyes a rest by using Zoom’s zoom feature. It is located in the View menu.

My health

After being in the hospital in December 2021, with risk to my life, and after my incredible recuperation, I’ve got the good news that I no longer need 2 of the 3 medicines I was taking on a daily basis. It looks like I’m on the way to a complete recovery, thanks to my discipline, doing sport several times every day, and the fantastic Catalan doctors that are supporting me so well.

Since they found what was failing in me, and after the emergency treatments, I started to sleep really well. All night. That’s a privilege that I didn’t have for a long, long time.

Humor

Sad but true story. How many super talented Engineers have been hired and then given a shitty, super slow laptop/workstation? That happened to me when I was hired by Volkswagen IT: gedas. I was creating projects for very big companies and I calculated that I was wasting 2 hours of my time compiling. The computer did not have enough RAM and was using swap.

JavaScript fun (or not)

Yes, this works like this.

You can try yourself:

<html>
<body>
<script>
    console.log("11" + 1)
    console.log("11" - 1)
</script>
</body>
</html>

As you can see if you open the Browser Developer Tools (in Linux and Windows, press the F12 key):

A simple Python script to get the date and time in Unix Epoch and in local time

The Unix Epoch is the time, in seconds, that has passed since 1970-01-01 00:00:00 UTC.

#!/usr/bin/env python3

#
# Date time methods
#
# Author: Carles Mateo
# Creation Date: 2019-11-20 17:23 IST
# Description: Class to return Date, datetime, Unix EPOCH timestamp
#

import datetime
import time


class DateTimeUtils:

    def get_unix_epoch(self):
        """
        Will return the EPOCH Time. For convenience it is returned as a String.
        :return:
        """
        s_now_epoch = str(int(time.time()))

        return s_now_epoch

    def get_datetime(self, b_milliseconds=False):
        """
        Returns the datetime, with microseconds, in format YYYY-MM-DD HH:MM:SS.xxxxxx
        or without them as YYYY-MM-DD HH:MM:SS"""
        if b_milliseconds is True:
            s_now = str(datetime.datetime.now())
        else:
            s_now = str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))

        return s_now


def main():
    o_datetime = DateTimeUtils()
    s_now_epoch = o_datetime.get_unix_epoch()
    s_now_date = o_datetime.get_datetime()

    print(s_now_epoch)
    print(s_now_date)


if __name__ == "__main__":
    main()
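
Running it prints the Epoch and the local datetime, something like this (illustrative values from the moment it is run):

1650101234
2022-04-16 10:27:14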

You can also get the code from:

https://gitlab.com/carles.mateo/blog.carlesmateo.com-source-code/

Using Docker in Windows 10 without Windows Desktop with Docker Engine and without WSL

I added this article to my Docker Combat Guide book.

The change of license of Docker Desktop for Windows has been a low blow, a dirty one.

Many big companies use Windows for their laptops and workstations, whether we like it or not.

You can set up a Linux development computer or Virtual Machine, you may argue, but things are not that easy.

Big companies have Software licenses assigned to corporate machines, so you may not be able to use your PyCharm license in a Linux VM.

You may not be able to use Docker Desktop either, if your company did not license it.

And finally you may need access to internal resources, like Artifactory, or Servers where access is granted via ACL, so that only you, from your Development machine, can access them. So you have to be able to run Docker locally.

After Docker introduced this change of license I was using VirtualBox with NAT attached to the VPN Virtual Ethernet, and I used port forwarding to be able to SSH, deploy, test, etc… from outside into my Linux VM. It was working for a while, until the last VirtualBox update and some Windows updates were pushed to my Windows box, and my VirtualBox VMs stopped booting most of the time and started having random problems.

I configured a new Linux VM on a Development Server, and I opened the Docker API so my PyCharm workstation was able to deploy there and I was able to test. But the Dev IPs do not have access to the same Test Servers that my Python Automation projects need to reach (and I quickly used 50 GB of space), so I tried WSL. I like PyCharm and didn’t want to switch to VS Code, despite its good Docker extensions. In any case I could not run my code locally with venv, because some of the packages were not available for Windows, so I needed Linux to run the Unit Testing and see the Code Coverage, run the code, etc…

I tried Hyper-V, with NAT and External networks, but it was incompatible with my VPN.

Note: WSL can be used, but I wanted to use Docker Engine, not docker in WSL.

Installing Docker Command line binaries

The first thing I checked was the Docker downloads page.

I found the standalone binary.

https://docs.docker.com/engine/install/binaries/#install-server-and-client-binaries-on-windows

In order to install it:

  1. Download the zip file from the page, in my case docker-20.10.12.zip
  2. Open PowerShell as Administrator
  3. Run: Expand-Archive C:\Users\carlesmateo\Downloads\docker-20.10.12.zip  -DestinationPath $Env:ProgramFiles\DockerCLI
  4. Run: cd $Env:ProgramFiles\DockerCLI\docker
  5. Run: .\dockerd.exe --register-service
  6. Run: Start-Service docker
  7. Check that Docker lists the running Containers (no errors) with: docker ps
  8. Check that the Service is running with: Get-Service docker
    You should expect something like:
Status   Name               DisplayName
------   ----               -----------
Running  docker             Docker Engine

Attempt to pull an Image with: docker pull ubuntu or docker pull php

If it works, you’re done, but most probably the daemon will start and the pull will fail with this error:

Error response from daemon: unsupported os linux

or this other error:

no matching manifest for windows/amd64 10.0.19042 in the manifest list entries

Depending on your system you may need to do certain things:

Turn Windows features on or off

I would make sure that the following are enabled:

  • Containers
  • Hyper-V
  • Virtual Machine Platform
  • Windows Hypervisor Platform
As you can see, WSL is not enabled.

Press OK, and restart your computer.

Try again: docker pull ubuntu

Enable Experimental Mode

Edit this file to enable experimental mode; you can run this from the PowerShell:

notepad C:\ProgramData\Docker\config\daemon.json
Change experimental from false to true.
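
After the change, the file should contain at least something like this (your daemon.json may have more keys; the relevant one here is experimental):

{
    "experimental": true
}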

Save the file and restart the Service:

Restart-Service docker

Check if it works

Get-Service docker
Status   Name               DisplayName
------   ----               -----------
Running  docker             Docker Engine

Try the pull again to see if it works now.

Switch Daemon

If it is not working, try running:

cd "C:\Program Files\Docker\Docker\"
.\DockerCli.exe -SwitchDaemon

Give it two minutes and try to pull an image.

If it is still not working reboot, and try again:

cd "C:\Program Files\Docker\Docker\"
.\DockerCli.exe -SwitchDaemon

After it is working

I recommend you add the new standalone docker to the PATH, so you can call it from the terminal at any moment.

Edit the PATH variable of your user profile (not the System-wide one).

I recommend you place it near the top, after Python.
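
If you prefer to do it from PowerShell instead of the graphical dialog, something like this should work, assuming you expanded the binaries to $Env:ProgramFiles\DockerCLI\docker as above:

# Prepend the Docker CLI folder to the user (not System wide) PATH
$s_old_path = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$Env:ProgramFiles\DockerCLI\docker;" + $s_old_path, "User")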

A simple example to grab the title of a page using Python and beautifulsoup4

A really simple piece of code I added to my Python 3 Exercises for Beginners book, to grab the title of a Web page.

from urllib import request
from bs4 import BeautifulSoup

s_url = "https://blog.carlesmateo.com/movies-i-saw/"
s_html = request.urlopen(s_url).read().decode('utf8')

o_soup = BeautifulSoup(s_html, 'html.parser')
o_title = o_soup.find('title')

print(o_title.string) # Prints the tag string content

# Another possible way
if o_soup.title is not None:
    s_title = o_soup.title.string
else:
    s_title = ""  # No title tag was found
print(s_title)

I also included this code in the code repository for Python 3 Combat Guide book.

https://gitlab.com/carles.mateo/python_combat_guide/-/blob/master/src/html_parse_beautifulsoup4.py

Provisioning AWS EC2 Instances with Ansible and Automating Apache deployment with or without using Ansible Dynamic Inventory from Ubuntu 20.04 LTS

This article is being included in my book Provisioning to Amazon AWS using boto3 SDK for Python 3.

Pre-requisites

I’ll use Ubuntu 20.04 LTS.

Python 2 is required for Ansible’s ec2.py Dynamic Inventory script.

Python 3 is required for our programs.

sudo apt install python2 python3 python3-pip
# Install boto for Python 2 for Ansible (alternative way if pip install boto doesn't work for you)
python2 -m pip install boto
# Install Ansible
sudo apt install ansible

If you want to use Dynamic Inventory

This way you can use the Python 2 ec2.py and ec2.ini files, adding them to /etc/ansible with execution permissions (chmod +x), to use the Dynamic Inventory.

You will need to have your credentials set.

I use Environment variables:

#!/bin/bash

export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=AKIXXXXXXXXXXXXXOS
export AWS_SECRET_KEY=e4dXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXY6F

Then use the calls inside the shell script, or, assuming that the previous file was named credentials.sh, use source credentials.sh

ec2.py is written in Python 2, so it will probably fail for you, as it is invoked by python and your default interpreter will be Python 3.

So edit the first line of /etc/ansible/ec2.py so it reads:

#!/bin/env python2

Once credentials.sh is sourced, you can just invoke ec2.py to get the list of your Instances in JSON format, dumped by ec2.py:

/etc/ansible/ec2.py --list

You can get that JSON file and load it and get the information you need, filtering by group.

You can call:

/etc/ansible/ec2.py --list > instances.json

Or you can run a Python program that escapes to the shell, executes ec2.py --list, and loads the Output as JSON.

I use my carleslibs here to escape to the shell, using my class SubProcessUtils. You can install them, they are Open Source, or you can code it manually if you prefer, importing the subprocess Python library and catching the stdout and stderr.

import json
from carleslibs import SubProcessUtils

if __name__ == "__main__":
    s_command = "/etc/ansible/ec2.py"

    o_subprocess = SubProcessUtils()
    i_error_code, s_output, s_error = o_subprocess.execute_command_for_output(s_command, b_shell=True, b_convert_to_ascii=True, b_convert_to_utf8=False)
    if i_error_code != 0:
        print("Error escaping to shell!", i_error_code)
        print(s_error)
        exit(1)

    # Do not shadow the json module: keep the parsed dict in its own variable
    d_json = json.loads(s_output)

    d_hosts = d_json["_meta"]["hostvars"]

    a_hostnames = []

    for s_host in d_hosts:
        # You'll get an ip/hostname in s_host, which is the key
        # You have to check for groups and the value for the key Name, in order to get the Name of the group
        # As an exercise, print(d_hosts[s_host]) and look for:
        # @TODO: Capture the s_group_name
        # @TODO: Capture the s_addres
        if s_group_name == "yourgroup":
            # This filters only the instances with your group name, as you want to create an inventory file just for them
            # That's because you don't want to launch the playbook for all the instances, but for those in your group name in the inventory file.
            a_hostnames.append(s_address)

    # After this you can parse your list a_hostnames and generate an inventory file yourinventoryfile
    # The [ec2hosts] in your inventory file must match the hosts section in your yaml files
    # You'll execute your playbook with:
    # ansible-playbook -i yourinventoryfile youryamlfile.yaml

So an example of a yaml to install Apache2 in Ubuntu 20.04 LTS spawned instances, let’s call it install_apache2.yaml, would be:

---
- name: Update web servers
  hosts: ec2hosts
  remote_user: ubuntu

  tasks:
  - name: Ensure Apache is at the latest version
    apt:
      name: apache2
      state: latest
      update_cache: yes
    become: yes

As you can see the section hosts: in the YAML playbook matches the [ec2hosts] in your inventory file.

You can choose to set the path to your .pem private key certificate file in /etc/ansible/ansible.cfg, or, if you want to have different certificates per host, add them after the ip/address in your inventory file, like in this example:

[localhost]
127.0.0.1
[ec2hosts]
63.35.186.109	ansible_ssh_private_key_file=ansible.pem

The ansible.pem certificate must have restricted permissions, for example chmod 600 ansible.pem

Then you end by running:

ansible-playbook -i yourinventoryfile install_apache2.yaml

If you don’t want to use Dynamic Inventory

The first method is to use add_host to print on the screen the properties from the ec2 Instances provisioned.

The trick is to escape to the shell, executing ansible-playbook and capturing the output, then parsing the text looking for ‘public_ip:’

This is the Python 3 code I created:

class AwesomeAnsible:

    def extract_public_ips_from_text(self, s_text=""):
        """
        Extracts the addresses returned by Ansible
        :param s_text:
        :return: Boolean for success, Array with the Ip's
        """

        b_found = False
        a_ips = []

        i_count = 0
        while True:
            i_count += 1
            if i_count > 20:
                print("Breaking look")
                break
            s_substr = "'public_ip': '"
            i_first_pos = s_text.find(s_substr)
            if i_first_pos > -1:
                s_text_sub = s_text[i_first_pos + len(s_substr):]
                # Find the ending delimiter
                i_second_pos = s_text_sub.find("'")
                if i_second_pos > -1:
                    b_found = True
                    s_ip = s_text_sub[0:i_second_pos]
                    a_ips.append(s_ip)
                    s_text_sub = s_text_sub[i_second_pos:]
                    s_text = s_text_sub
                    continue

            # No more Ip's
            break

        return b_found, a_ips

Then you’ll use it with something like:

        # Catching the Ip's from the output
        b_success, a_ips = self.extract_public_ips_from_text(s_output)
        if b_success is True:
            print("Public Ips:")
            s_ips = ""
            for s_ip in a_ips:
                print(s_ip)
                s_ips = s_ips + self.get_ip_text_line_for_inventory(s_ip)
            print("Adding Ips to group1_inventory file")
            self.o_fileutils.append_to_file("group1_inventory", s_ips)
            print() 

The get_ip_text_line_for_inventory() method returns a line for the inventory file, with the ip and the key to use, separated by a tab (\t):

    def get_ip_text_line_for_inventory(self, s_ip, s_key_path="ansible.pem"):
        """
        Returns the line to add to the inventory, with the Ip and the keypath
        """
        return s_ip + "\tansible_ssh_private_key_file=" + s_key_path + "\n"

Once you have the inventory file, like this below, you can execute the playbook for your group of hosts:

[localhost]
127.0.0.1
[ec2hosts]
63.35.186.109	ansible_ssh_private_key_file=ansible.pem
ansible-playbook -i yourinventoryfile install_apache2.yaml

Alternative way parsing with awk and grep

You can run this Bash Shell Script to get only the public IPs when you provision the Instances to Amazon AWS EC2, from your group named group1 in this case:

./launch_aws_instances-group1.sh | grep "public_ip" | awk '{ print $13; }' | tr -d "',"
52.213.232.199

In this example 52.213.232.199 is the Ip from the Instance I provisioned.

So to put it all together: from a Python file I generate this Bash file and I escape to the shell to execute it:

#!/bin/bash

export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=AKIXXXXXXXXXXXXXOS
export AWS_SECRET_KEY=e4dXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXY6F

# Generate a new Inventory File
echo "[localhost]" > group1_inventory
echo "127.0.0.1" >> group1_inventory
echo "[ec2hosts]" >> group1_inventory

ansible-playbook -i group1_inventory launch_aws_instances-group1.yaml

I set the credentials again because, as this Bash Shell Script is invoked from Python, they are not sourced.

The trick here is that the launch_aws_instances-group1.yaml file has a task to add the hosts to Ansible’s in-memory inventory, and to print the information.

That output is what I scrape from Python, and then I use the extract_public_ips_from_text() shown before.

So my launch_aws_instances-group1.yaml (which I generate from Python, customizing the parameters) looks like this:

# launch_aws_instances.yaml

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
      s_keypair_name: "ansible"
      s_instance_type: "t1.micro"
      s_image: "ami-08edbb0e85d6a0a07"
      s_group: "ansible"
      s_region: "eu-west-1"
      i_exact_count: 1
      s_tag_name: "ansible_group1"
  tasks:

    - name: Provision a set of instances
      ec2:
         key_name: "{{ s_keypair_name }}"
         region: "{{ s_region }}"
         group: "{{ s_group }}"
         instance_type: "{{ s_instance_type }}"
         image: "{{ s_image }}"
         wait: true
         exact_count: "{{ i_exact_count }}"
         count_tag:
            Name: "{{ s_tag_name }}"
         instance_tags:
            Name: "{{ s_tag_name }}"
      register: ec2_ips
      
    - name: Add all instance public IPs to host group
      add_host: hostname={{ item.public_ip }} groups=ec2hosts
      loop: "{{ ec2_ips.instances }}"

In this case I use t1.micro because I provision to EC2-Classic and not to the default VPC; otherwise I would use t2.micro.

So I have a Security Group named ansible created in Amazon AWS EC2 console as EC2-Classic, and not as VPC.

In this Security Group I opened the Inbound HTTP Port, and the SSH port for the IP from which I’m provisioning, so Ansible can SSH using the Key ansible.pem

The Key Pair has been created and named ansible as well (section key_name under ec2).

The Image used is Ubuntu 20.04 LTS (free tier) for the region eu-west-1 which is my wonderful Ireland.

For the variables (vars) I use the MT Notation, so the prefixes show exactly what we are expecting: s_ for Strings, i_ for Integers, and I never have collisions with reserved names.

It is very important to use the count_tag and instance_tags with the name of the group, as the actions will be using that group name. Remember the idempotency.

The task with the add_host is the one that makes the information for the instances to be displayed, like in this screenshot.

Python Programming class for beginners 2021-11-11

Suggested optional pre-requisites to have before the class

  • Install Python 3 (3.8 recommended, any version from Python 3.6 is fine)
  • Install PyCharm Community Edition or Professional Edition (the Professional is free for 30 days only)

Suggested optional reading material

Examples from today’s class have been uploaded here

Material from some other classes

In this class, recorded on video (1h 48m)

Why there are different types of variables

  • Memory
    • All are 0 and 1’s
  • How big a variable can be (float, integer, string)
  • Operations performed, like +
  • MT Notation I use

The order in which the instructions are performed (top to bottom).

  • Declaring a variable
  • We can call things that were previously defined above (like functions)
  • Loops will send the execution pointer up
  • Operations with integer variables
    • Additions
  • Operations with Strings

Language syntax and tricks (write a Notebook with your own notes)

  • Pre-created solutions, like reversing a string with for (see the example below)
  • Open a file, read the contents line by line into a string, split the strings into arrays by a separator, like tab or space, and get what you want by position.
with open(s_filename) as file:
    for s_line in file:
        print(s_line)

# Show Exceptions

# Show hexadecimal of the text:

# cd /home/carles/Desktop/code/carles/python-classes/2021-11-11

# hexdump -c text.txt

# remove enters and spaces at the end of the lines with s_line.rstrip()
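
A minimal example of the pre-created solution mentioned above, reversing a string with for:

s_text = "Tirant lo Blanc"
s_reversed = ""
for s_char in s_text:
    # Prepend each character, so the last one processed ends up first
    s_reversed = s_char + s_reversed
print(s_reversed)  # cnalB ol tnariT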

The blocks and functions

  • Indentation
  • Typical error: missing :
  • Functions are for reusing code and reducing errors
    • As they are for reusing, they are flexible (parameters)

The while loop

  • The condition True
  • The condition in a variable
  • The condition in an if
  • Break
  • Pattern counter inside a loop (see the sketch below)
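
A small sketch putting these pieces together, with the condition in a variable and the counter pattern inside the loop:

i_counter = 0
b_continue = True
while b_continue:
    i_counter += 1
    print("Iteration: " + str(i_counter))
    if i_counter >= 3:
        # The condition lives in a variable, evaluated by the while
        b_continue = False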

Building a Menu for user selection in text mode

  • Input
  • Validation

Questions

  • As part of the questions, a question about the mental process to Build the solution was raised.
    • This was explained during the class
    • The importance of keeping Notes with snippets of code that you normally use one time and another
    • How to search in Google to find explanations in Python sites
  • A question was raised about how menus could be implemented using OOP
    • A parent Menu class, with a MenuAdmin class inheriting from it, was demonstrated (see the sketch below). The MenuAdmin inherits the menu options from the parent Menu and adds Admin-only options.
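
A minimal sketch of the idea (the class and option names here are hypothetical, just for illustration):

class Menu:

    def __init__(self):
        self.a_options = ["1. List articles", "0. Exit"]

    def print_options(self):
        for s_option in self.a_options:
            print(s_option)


class MenuAdmin(Menu):

    def __init__(self):
        # Inherit the regular options, then add the Admin only ones
        super().__init__()
        self.a_options.append("9. Delete user")


o_menu = MenuAdmin()
o_menu.print_options()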

News from the Blog 2021-11-11

New Articles

How to communicate with your Python program running inside a Docker Container, using Linux Signals

Hope you’ll have fun reading this article:

Communicating with Docker Containers via Linux Signals and Python

I migrated my last services from Amazon and the blog to Google Compute Engine (GCE / GCP)

I wrote a Postmortem analysis about the process of migrating my last services from my 11 year old Amazon account.

Updates

Updates to articles

I updated the article about Python weird things that you may not know adding the Ellipsis …

I’ve been working on some Cassandra examples. I may publish an article soon about using it from Python and Docker.

Updates to My Books

I updated my Python and Docker books.

I’m currently writing a book about using Amazon AWS Python SDK (boto3).

Updates to Open Source projects

I have updated ctop, fixed two bugs and increased Code Coverage.

I made a new tag and released the last Stable Version:

https://gitlab.com/carles.mateo/ctop/-/tags/0.8.7

On top of my local Unit Testing, I have Jenkins checking that I don’t commit anything that breaks the Tests.

Some time ago I wrote some articles about how you can set up Jenkins in a Docker Container.

Miscellaneous

Charity

I’ve donated to Wikipedia.

Only 2% of the viewers donate, so I answered the call every time it was made.

This is my 5th donation to Wikimedia.

I consider that Freedom is very important.

I bought these new books

One of my secrets to be on top is that I’m always studying.

I study all the time, at work and in my free time.

I use Linux Academy and I buy books on paper. I don’t connect with reading on tablets. I think information is stored better when read on paper. I also use a marker and pointers to keep direct access to the most interesting points in the books.

And I study all kinds of themes. Obviously I know a lot of Web Scraping, but there is always room for learning more. And whatever new I learn helps me to be better with my students and clearer when writing my books.

I’ve never been a Front End developer, but I’ve been able to fix bugs in the Front End engines of the companies I worked for, like Privalia. I was passed a bug that prevented Internet Explorer users from buying, just one hour before we launched a massive campaign. I debugged it and found a variable named “value”, so the html looked like <input name="value" value="">. In less than 30 minutes I proved to the incredulous Head of Development and the CTO that a bug in Internet Explorer was causing a conflict when fetching the value from the input named value. We deployed the update to Production and the campaign was a total success. So I consider knowing Javascript and Front End a need as well, even if I don’t work directly with it. I want to be able to understand all the requirements and possibilities, and weaknesses, so I can fix bugs and save the day. That allowed me to fix scalability problems in Node.js and PhantomJS projects too. (They are Javascript Server Side, event driven, projects)

It seems that Amazon.co.uk works well again for Ireland. My two last orders arrived on time, and apparently I had no problems with border taxes.

Nice Python article

I enjoyed this article a lot, because it explains part of what I did with my student and friend Albert, in a project that analyzes the access logs from Apache for patterns of attempted exploits, then feeds a database, and then blocks those offending IP Addresses in the Firewall.

The article only covers the Pandas part, of reading the access.log file and working with it, but it is a very well written article:

https://mmas.github.io/read-apache-access-log-pandas

Nice Virtual Volumes article from VMware

I prefer Open Source, but there are very good commercial products too.

I liked this article about Virtual Volumes from VMware:

Understanding Virtual Volumes (vVols) in VMware vSphere 6.7/7.0 (2113013)

https://kb.vmware.com/s/article/2113013

Thanks Blizzard (again)

There is a very nice initiative where we can nominate 4 colleagues a year that we think deserve a recognition.

My colleagues voted for me, so I received a gift voucher that I can spend in Irish stores like Ikea, PC World, Argos, Adidas, App Store & iTunes…

So thanks a million buds. :)

Migrating my 11 years Amazon AWS account services (Postmortem Analysis)

I had started to explain that I was migrating some services from Amazon, that some of my sites were under Maintenance, and that I would provide more information.

Here is the complete story of why I migrated all the services from my 11 year old Amazon account to other CSPs.

Some lessons can be learned from my adventure.

I migrated my last services from Amazon to GCP

Amazon sent me an email on October 6th of this year, 2021, telling me that they would disable EC2-Classic by August 2022. I thought I would not be able to keep my Static IPs, as in the past VPC IPs and EC2-Classic IPs were not transferable, so considering that I would lose my Static IPs anyway, I started to migrate some services to other providers like Digital Ocean.

It is not cool to lose Static IP (Elastic IP in AWS) Addresses, as this is bad for SEO, so given that I thought I would lose the Static IPs that had been with me for years, I started to migrate certain services to much more economic providers.

Amazon is terrible at communicating, and I talked with some product managers in the past about that, when they lost one of my Volumes, and the email was so cold and terrible that it actually hurt more than Amazon losing my Data. I believed that it was a poorly made Scam, and when I realized it was true I reached out to one of my friends, who is a manager there, as I know they care about doing things right, and he organized a meeting with two PMs so I could pass on my feedback.

The Cloud providers are changing things very fast, and nobody is able to stay up to date with the changes, unless their work position allows plenty of time to get updated. Even if pages of documentation are provided, you have to react to an event that they externally generated, forcing you into action: action to read all the documentation about EC2-Classic migrations, action to prepare to have migrated by August 2022.

So, August 2022… I was counting on having plenty of time, but I’m writing a new book about using the Amazon SDK for Python, boto3, and I was doing some API calls and they started to fail in a very unusual way, Exceptions with timeouts, but only for the only region where I had EC2-Classic.

urllib3.exceptions.NewConnectionError: <botocore.awsrequest.AWSHTTPSConnection object at 0x7f0347d545e0>: Failed to establish a new connection: [Errno -2] Name or service not known

My config was:

        o_config = Config(
            region_name="us-east-1a",
            signature_version="v4",
            retries={
                'max_attempts': 10,
                'mode': 'standard'
            }
        )

But if I switched to another region name, it would work:

            region_name='us-west-2',

I made a mistake here: the region name is “us-east-1” and not “us-east-1a“. “us-east-1a” is the availability zone. So the SDK was giving a timeout because, in order to connect to the endpoint, it uses the region name as part of the hostname. It doesn’t find that endpoint because it doesn’t exist.

I never understood why a company like Amazon is unable to provide the SDK with a sample project or projects 100% working, with the source code, so people have a base that works to build upon.

Every API that I have created, I have provided with documentation, and also with examples for several languages showing how to use it.

In 2013 I was CTO of an online travel agency, and we had meta-searchers consuming our API and we were receiving several hundreds of thousands of requests per second. Everything was perfectly documented, examples were provided for several languages, the documents and the SDK had version numbers…

Everybody forgets about Developers, and companies throw terrible and cold products at the poor Developers, so difficult to use. How many Developers would like to say: Listen, Mr. President of the big Cloud Company XXXX, I only want to spawn a VM that works, and fast, with easy wizards. I don’t want to learn for 50 hours before being able to use your overpriced platform, doing 20 things first, because your APIs are reflections of your infrastructure and based on Microservices. Modern JavaScript frameworks can create nice gentle wizards even if you have supercold APIs.

Honestly, I didn’t realize my typo in the region and I connected to the Amazon Console to investigate and I saw this.

Honestly, when I read it I understood that they were going to end my EC2 Networking on the 30th of October. It was the 29th. I misunderstood.

It was my fault for not reading it well to the end; I got shocked by the first part telling about the shutdown, and I didn’t fully understand that they were only going to shut down EC2-Classic for the zones where I didn’t have anything running.

From the long errors (3 exceptions chained) I didn’t realize that the endpoint is built with the region name. (And I was passing the availability zone)

botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"

Here is where I say that a good SDM would have thought about and cared for the Developers more, and would have made the SDK check whether that region exists. How difficult is it to create an SDK a bit more clever, one that detects an invalid region id? It is not difficult.

It is true that it was late in the evening and I was tired after the whole day; two days a week, between work and Zoom university classes, I work 15 hours and 13 hours respectively, not counting the assignments, so by the end of the week I am very tired. But that’s why it is very important to follow methodology and to read well. I think Amazon has 50% of the fault for the way they do things: how they created the SDK, how they communicate, and for the errors that the console returned me when I tried to create a VPC instance of an EC2-Classic AMI (they seem related to the fact that I had old VPC Network objects with a shorter hash than the current ones they use). The other 50% was my fault, for not identifying the source of the error, and for not reading the message on their website well.

But the fact that I was having those errors in the APIs and timeouts made me believe they were going to cut the EC2-Classic Networking the next day.

All the mistakes fell together in a perfect storm.

I checked the documentation and I saw it was possible to migrate my Static IPs to VPC Static IPs.

It was Friday evening, and I cancelled my plans in order to migrate the Blog to VPC, in an attempt to keep it running with Amazon.

As Cloud Architect, I like to have running instances in several CSP as it allows me to stay up to date with the changes they do.

I checked the documentation for the migration. Disassociating the Static Ip (Elastic Ip in AWS jargon) was easy. Turning into VPC as well.

As I progressed, what had to be easy turned into a nightmare, as I was getting many errors from the Amazon API, without any information, and my Instances were not created.

I figured out that their API could have problems with old VPC objects I had created a long time ago, so I had to create new objects for several things.

I managed to spawn my instances, but they were being launched and terminated instantly, without information. Frustrating.

When launching a new instance from the AMI (a Snapshot of the blog), I was shown options to add more volumes that made no sense. My Instance was using 16GB of a 20GB total Space, and I was shown different volume configs depending on the instance: in some cases an additional 20GB volume, in others a small, ephemeral SSD plus 10 GB for the AMI (which requires at least 16GB).

After some fighting I managed to make it work, after deleting the volumes that made no sense and keeping only one of 20GB, the same size as my AMI.

But then my nightmare continued with making the VPC Instance have Internet access and be reachable from outside. I had to create a new Internet Gateway, NAT, Network, etc…

As mentioned, the old objects I was trying to reuse were making the process fail.

I was running out of time, and I thought they were going to shut down the EC2-Classic network very soon (as I had not read it correctly), so I decided to download everything and migrate to another provider. For doing that, first I blocked all the traffic, except for my IP.

I worked in parallel, creating the new config in Google Cloud, just in case I had forgotten something. I had created a document for the migration and it was accurate.

I managed to do everything fast enough. The slowest part was downloading all the Data, as I hold entire VMs for projects like the Cassandra Universal Driver.

Then I powered off my Amazon Instance for the Blog forever.

In GCP I blocked all the traffic in the firewall, except for my Ip, so I could work calmly.

When everything was ready, I had to redirect the DNS to the new static Ip from Google.

The DNS provider I used had implemented some changes in their API, so I was getting errors replacing my old entry ‘.’ (their JSON calls returned Internal Server Error). Finally I figured out how to work around it, and I was able to confirm that the first service was up and running.

I did some tests to make sure there were no unexpected permission problems, entries in the logs, etc…

Only then did I open the Google Firewall. I have a second firewall in each instance, where I block or open at the iptables level what I want. Basically abusive bots’ IPs trying to find exploits or brute-force passwords by dictionary.

I checked with my phone, without Wifi, that the Firewall was all good. (It is always a good idea to use another external IP, different from the management one, to check.)

I added a post explaining that I was migrating some of my Services and that they were under maintenance.

I mentioned in the blog that some of my services were being migrated from Amazon to Digital Ocean.

For some reason, in the Backup of the Database one user was lost, so I created it in MySQL with the typical commands:

CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON mydatabase.* TO 'username'@'localhost';
FLUSH PRIVILEGES;

News from the blog 2021-10-21

  • I made a Donation to The Document Foundation, which makes LibreOffice.

I use their office suite for writing my books and other documents, so I think it’s fair to contribute to their operating costs.

  • I’ve installed a plugin to add Code Highlighting

It also allows me to add blocks of Code, like this:

if CodeHighlighting.b_is_installed is True:
    return VisualImprovement.update_to(10), "It's easy to read"
else:
    return VisualImprovement.get_still_the_same_difficult_to_read(), "The blog lives in the medieval age"

Or Inline Code like print(self.awareness) which is also great

  • I’ve improved a bit, visually, the blog

I modified my template a bit. The changes consist of adding an id attribute to the table for the Quick Selection of the articles, modifying the styles in the file css/blocks.css, and updating the version in functions.php to reflect the new timestamp.

I also made it so that when the mouse goes over a link it is displayed in blue, and the already visited ones in a slightly darker blue.

#articles_selection a:hover {color: #2222FF;}
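
The rule for the already visited links is similar (the exact darker blue is a matter of taste; this value is just an example):

#articles_selection a:visited {color: #1A1ACC;}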

In the images below you can see the before, the intermediate, and the final.

I’ve also added a button to hide or show the Quick Selection

If you have a WordPress site and jQuery does not work for you, with the error:

TypeError: $ is not a function

$(document).ready(function(){

This is because, for compatibility reasons, you have to do it differently in WordPress:

jQuery(document).ready(function($){
  • I created several 5-minute videos to learn Unit Testing in Python 3 with pytest

I also use my package carleslibs to execute the command from shell.

Web CTOP in this case :)

As I did this I discovered a bug (bug #47) in CTOP for setting the number of rows.

  • I fixed the bug #47 and the bug #48 in CTOP and started version 0.8.7 (available in Master).

The changelog.txt file details all the changes for each version.

Here CTOP is displayed with a fixed width and height, as launched with:
ctop.py --rows=50 --columns=170
  • The new PSU arrived and I replaced it on Saturday 16th

After 5 days working nonstop, with no problems, it seems clear that the failing item was the expensive, 850W, Corsair PSU. Sometimes it happens that a new component comes defective, but I paid a premium expecting quality, and it seems that the PSU was defective. Since the beginning the computer powered off every few hours at most, so I finally have to assume that it was indeed the PSU. Disappointed with Corsair.

  • Firewall. This month I’ve blocked around 2,000 visitors that were mainly bots searching for exploits

I review the logs several times every day.

Actually I’ve blocked many more IPs in the firewall, as when I identify a company that is a source of bots, I block their whole range (as I block entire class C ranges, each /24 contains 256 IPs). This has translated into 2,000 fewer visitors per month to the blog, which were offenders.
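
As an example, blocking an entire /24 at the iptables level looks like this (203.0.113.0/24 is a reserved documentation range, used here as a placeholder):

iptables -A INPUT -s 203.0.113.0/24 -j DROP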

  • I added some rules / guidelines to the Leave a Reply section

I moderate all the comments to keep the blog a useful and healthy place.

And I don’t publish Spam, or Marketing messages.

Abusive comments are blocked. Competent Engineers and nice human beings share their points and doubts with data, with technical arguments, with education, in a respectful and polite way. People that cannot observe a minimum of decorum are not welcome.