
News from the blog 2020-08-10

  • Atlassian tells employees they can work from home indefinitely. Which is nice, and follows in the steps of other giants before it.

https://www.cnbc.com/2020/08/07/atlassian-tells-employees-they-can-work-from-home-indefinitely.html

This new scenario challenges the old companies and is creating internal tensions in the companies that resist allowing employees to freely work remotely.

  • I published this script to read the combined bandwidth, and the peak maximum, of all drives.

https://blog.carlesmateo.com/2020/08/06/iostat_bandwitdth-sh-utility-to-calculate-the-bandwidth-used-by-all-your-drives/

  • I got new sales of my book on LeanPub, and I have to say that this really makes me happy

I've been working on adding new information and I released a new update this week, version 0.77, talking about mutable and immutable objects when passed to a function, and about references.
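
As a quick, minimal illustration of that topic (a sketch of mine, not taken verbatim from the book): mutable objects like lists are seen modified by the caller, while rebinding an immutable object like an int inside a function does not affect the caller.

def append_item(l_items, s_item):
    # Lists are mutable: the caller will see this change
    l_items.append(s_item)

def increment(i_number):
    # Integers are immutable: this only rebinds the local name
    i_number += 1
    return i_number

l_colours = ["red"]
append_item(l_colours, "blue")
print(l_colours)   # ['red', 'blue']

i_count = 1
increment(i_count)
print(i_count)     # still 1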

  • Bought new books and resumed my routine of studying daily in the Coffee Shop (recently reopened)
  • I bought some new hardware
    • An arm to support the laptop and a VESA monitor. That's my desk, actually

  • I also bought an HDMI switch with 3 inputs and PinP
The coolest feature is the PinP. It is a simple model with a reduced feature set, but it does what I wanted perfectly and it was cheap. Worth the price.
  • I also bought this Mic/headphone USB dongle with a single jack. Very cool
It does exactly what I wanted, adding a new audio device. I'm using it on a Windows 10 Enterprise box.
  • I've bought a stationary bicycle for the WFH / covid-19 lockdown.

A simple trick to find your Git Submodules imports in Python by adding them to sys.path

If you are using Git Submodules, it is very probable that at some point you will create your own libraries. Those libraries will probably have their own structure, even their own tests/ folder; you add them as a subfolder of your new project, and maybe you have problems using relative imports.

This is a trick you can use to add the relevant root folder of your project to the System Path, so the libraries are found, especially when you call the script from the command line from anywhere in the filesystem. This works for Python 2 and Python 3.

#!/usr/bin/env python3

import sys
import os

# Add the project root (two levels up from this script) to sys.path
s_path_program = os.path.dirname(__file__)
sys.path.append(os.path.join(s_path_program, '../../'))

from clib.src.argsutils import ArgsUtils
from clib.src.datetimeutils import DateTimeUtils
from clib.src.fileutils import FileUtils

This sample can be found in my book Python Combat Guide.
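
For context, the '../../' in the snippet assumes a hypothetical layout where the script lives two levels below the project root and clib is the Git Submodule checked out at the root (the folder names here are just an example):

myproject/
    clib/                  <- the Git Submodule
        src/
            argsutils.py
            datetimeutils.py
            fileutils.py
    tools/
        reports/
            my_report.py   <- the script using the sys.path trick

Since the path is appended relative to the script's own folder, the project root ends up in sys.path and the clib package resolves no matter from where you invoke the script.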

iostat_bandwidth.sh – Utility to calculate the bandwidth used by all your drives

This is a shell script I made a long time ago and I use it to monitor, in real time, the total or individual bandwidth and the maximum bandwidth achieved, for READ and WRITE, on Hard drives and NVMe devices.

It uses iostat to capture the metrics, and then processes the maximum values and the combined speed of all the drives… It also has an interesting feature to leave out the boot device. That's very handy for Rack Servers where you boot from an SSD card or an SD, and you want to monitor the speed of the other (probably SAS) devices.

I used it to monitor the total bandwidth achieved by our 4U60 and 4U90 Servers, the 2U All-Flash-Arrays and the 1U NVMe units at Sanmina, and the real throughput of the IOCs (Input Output Controllers).

I also used it to compare the real data written to ZFS and mdraid RAID systems, and to disks, and the combined speed with different pool configurations, as well as the efficiency of iSCSI and NFS from clients to the Servers.

You can specify how many times the information will be printed, whether you want to keep the max speed of each device separately, and a drive to exclude. Normally that will be the boot drive.
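
For example, a run that excludes the boot drive, loops 60 times and shows each drive separately would look like this (the device name is just an example), matching the parameters parsed by the script below:

./iostat_bandwidth.sh --boot_device=sda --loop_times=60 --all_separatedly=1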

If you want to test performance metrics you should make sure that other programs are not running or using the swap, to prevent bias. You should exclude the boot drive if it is not part of your tests (as in the 4U60, with an SSD boot drive on a card and 60 SAS or SATA hard drive bays).

You may find useful tools like iotop.

You can find the code here, and in my gitlab repo:

https://gitlab.com/carles.mateo/blog.carlesmateo.com-source-code/-/blob/master/iostat_bandwidth.sh

#!/usr/bin/env bash

AUTHOR="Carles Mateo"
VERSION="1.4"

# Changelog
# 1.4
# Added support for NVMe drives
# 1.3
# Fixed Decimals in KB count that were causing errors
# 1.2
# Added new parameter to output per drive stats
# Counting is performed in KB

# Leave the boot device empty if you want to add its activity to the results
# Especially thinking about booting SD card or SSD devices versus SAS drives bandwidth calculation.
# Otherwise use e.g.: s_BOOT_DEVICE="sdcv"
s_BOOT_DEVICE=""
# If this value is positive the loop will run n times
# If it is negative, e.g.: -1, it will loop forever
i_LOOP_TIMES=-1
# Display all drives separately
i_ALL_SEPARATEDLY=0
# Display in KB or MB
s_DISPLAY_UNIT="M"

# Init variables
i_READ_MAX=0
i_WRITE_MAX=0
# Initialize the MB values too, so the first display does not print an empty Max
i_READ_MAX_MB=0
i_WRITE_MAX_MB=0
s_READ_MAX_DATE=""
s_WRITE_MAX_DATE=""
i_IOSTAT_READ_KB=0
i_IOSTAT_WRITE_KB=0

# Internal variables
i_NUMBER_OF_DRIVES=0
s_LIST_OF_DRIVES=""
i_UNKNOWN_OPTION=0

# So if you run in screen you see colors :)
export TERM=xterm

# ANSI colors
s_COLOR_RED='\033[0;31m'
s_COLOR_BLUE='\033[0;34m'
s_COLOR_NONE='\033[0m'

for i in "$@"
do
    case $i in
        -b=*|--boot_device=*)
        s_BOOT_DEVICE="${i#*=}"
        shift # past argument=value
        ;;
        -l=*|--loop_times=*)
        i_LOOP_TIMES="${i#*=}"
        shift # past argument=value
        ;;
        -a=*|--all_separatedly=*)
        i_ALL_SEPARATEDLY="${i#*=}"
        shift # past argument=value
        ;;
        *)
        # unknown option
        i_UNKNOWN_OPTION=1
        ;;
    esac
done

if [[ "${i_UNKNOWN_OPTION}" -eq 1 ]]; then
    echo -e "${s_COLOR_RED}Unknown option${s_COLOR_NONE}"
    echo "Use: [-b|--boot_device=sda -l|--loop_times=-1 -a|--all-separatedly=1]"
    exit 1
fi

if [ -z "${s_BOOT_DEVICE}" ]; then
    i_NUMBER_OF_DRIVES=`iostat -d -m | grep "sd\|nvm" | wc --lines`
    s_LIST_OF_DRIVES=`iostat -d -m | grep "sd\|nvm" | awk '{printf $1" ";}'`
else
    echo -e "${s_COLOR_BLUE}Excluding Boot Device:${s_COLOR_NONE} ${s_BOOT_DEVICE}"
    # Add a space after the name of the device to prevent something like booting with sda leaving out drives like sdaa sdab sdac...
    i_NUMBER_OF_DRIVES=`iostat -d -m | grep "sd\|nvm" | grep -v "${s_BOOT_DEVICE} " | wc --lines`
    s_LIST_OF_DRIVES=`iostat -d -m | grep "sd\|nvm" | grep -v "${s_BOOT_DEVICE} " | awk '{printf $1" ";}'`
fi

AR_DRIVES=(${s_LIST_OF_DRIVES})
i_COUNTER_LOOP=0
for s_DRIVE in "${AR_DRIVES[@]}";
do
    AR_DRIVES_VALUES_AVG[i_COUNTER_LOOP]=0
    AR_DRIVES_VALUES_READ_MAX[i_COUNTER_LOOP]=0
    AR_DRIVES_VALUES_WRITE_MAX[i_COUNTER_LOOP]=0
    i_COUNTER_LOOP=$((i_COUNTER_LOOP+1))
done


echo -e "${s_COLOR_BLUE}Bandwidth for drives:${s_COLOR_NONE} ${i_NUMBER_OF_DRIVES}"
echo -e "${s_COLOR_BLUE}Devices:${s_COLOR_NONE} ${s_LIST_OF_DRIVES}"
echo ""

while [ "${i_LOOP_TIMES}" -lt 0 ] || [ "${i_LOOP_TIMES}" -gt 0 ] ;
do
    s_READ_PRE_COLOR=""
    s_READ_POS_COLOR=""
    s_WRITE_PRE_COLOR=""
    s_WRITE_POS_COLOR=""
    # In MB
    # s_IOSTAT_OUTPUT_ALL_DRIVES=`iostat -d -m -y 1 1 | grep "sd\|nvm"`
    # In KB
    s_IOSTAT_OUTPUT_ALL_DRIVES=`iostat -d -y 1 1 | grep "sd\|nvm"`
    if [ -z "${s_BOOT_DEVICE}" ]; then
        s_IOSTAT_OUTPUT=`printf "${s_IOSTAT_OUTPUT_ALL_DRIVES}" | awk '{sum_read += $3} {sum_write += $4} END {printf sum_read"|"sum_write"\n"}'`
    else
        # Add a space after the name of the device to prevent something like booting with sda leaving out drives like sdaa sdab sdac...
        s_IOSTAT_OUTPUT=`printf "${s_IOSTAT_OUTPUT_ALL_DRIVES}" | grep -v "${s_BOOT_DEVICE} " | awk '{sum_read += $3} {sum_write += $4} END {printf sum_read"|"sum_write"\n"}'`
    fi

    if [ "${i_ALL_SEPARATEDLY}" -eq 1 ]; then
        i_COUNTER_LOOP=0
        for s_DRIVE in "${AR_DRIVES[@]}";
        do
            s_IOSTAT_DRIVE=`printf "${s_IOSTAT_OUTPUT_ALL_DRIVES}" | grep $s_DRIVE | head --lines=1 | awk '{sum_read += $3} {sum_write += $4} END {printf sum_read"|"sum_write"\n"}'`
            i_IOSTAT_READ_KB=`printf "%s" "${s_IOSTAT_DRIVE}" | awk -F '|' '{print $1;}'`
            i_IOSTAT_WRITE_KB=`printf "%s" "${s_IOSTAT_DRIVE}" | awk -F '|' '{print $2;}'`
            if [ "${i_IOSTAT_READ_KB%.*}" -gt ${AR_DRIVES_VALUES_READ_MAX[i_COUNTER_LOOP]%.*} ]; then
                AR_DRIVES_VALUES_READ_MAX[i_COUNTER_LOOP]=${i_IOSTAT_READ_KB}
                echo -e "New Max Speed Reading for ${s_COLOR_BLUE}$s_DRIVE${s_COLOR_NONE} at ${s_COLOR_RED}${i_IOSTAT_READ_KB} KB/s${s_COLOR_NONE}"
                echo
            fi
            if [ "${i_IOSTAT_WRITE_KB%.*}" -gt ${AR_DRIVES_VALUES_WRITE_MAX[i_COUNTER_LOOP]%.*} ]; then
                AR_DRIVES_VALUES_WRITE_MAX[i_COUNTER_LOOP]=${i_IOSTAT_WRITE_KB}
                echo -e "New Max Speed Writing for ${s_COLOR_BLUE}$s_DRIVE${s_COLOR_NONE} at ${s_COLOR_RED}${i_IOSTAT_WRITE_KB} KB/s${s_COLOR_NONE}"
            fi

            i_COUNTER_LOOP=$((i_COUNTER_LOOP+1))
        done
    fi

    i_IOSTAT_READ_KB=`printf "%s" "${s_IOSTAT_OUTPUT}" | awk -F '|' '{print $1;}'`
    i_IOSTAT_WRITE_KB=`printf "%s" "${s_IOSTAT_OUTPUT}" | awk -F '|' '{print $2;}'`

    # CAST to Integer
    if [ "${i_IOSTAT_READ_KB%.*}" -gt ${i_READ_MAX%.*} ]; then
        i_READ_MAX=${i_IOSTAT_READ_KB%.*}
        s_READ_PRE_COLOR="${s_COLOR_RED}"
        s_READ_POS_COLOR="${s_COLOR_NONE}"
        s_READ_MAX_DATE=`date`
        i_READ_MAX_MB=$((i_READ_MAX/1024))
    fi
    # CAST to Integer
    if [ "${i_IOSTAT_WRITE_KB%.*}" -gt ${i_WRITE_MAX%.*} ]; then
        i_WRITE_MAX=${i_IOSTAT_WRITE_KB%.*}
        s_WRITE_PRE_COLOR="${s_COLOR_RED}"
        s_WRITE_POS_COLOR="${s_COLOR_NONE}"
        s_WRITE_MAX_DATE=`date`
        i_WRITE_MAX_MB=$((i_WRITE_MAX/1024))
    fi

    if [ "${s_DISPLAY_UNIT}" == "M" ]; then
        # Get MB
        i_IOSTAT_READ_UNIT=${i_IOSTAT_READ_KB%.*}
        i_IOSTAT_WRITE_UNIT=${i_IOSTAT_WRITE_KB%.*}
        i_IOSTAT_READ_UNIT=$((i_IOSTAT_READ_UNIT/1024))
        i_IOSTAT_WRITE_UNIT=$((i_IOSTAT_WRITE_UNIT/1024))
    fi

    # When a MAX is detected it will be displayed in RED
    echo -e "READ  ${s_READ_PRE_COLOR}${i_IOSTAT_READ_UNIT} MB/s ${s_READ_POS_COLOR} (${i_IOSTAT_READ_KB} KB/s) Max: ${i_READ_MAX_MB} MB/s (${i_READ_MAX} KB/s) (${s_READ_MAX_DATE})"
    echo -e "WRITE ${s_WRITE_PRE_COLOR}${i_IOSTAT_WRITE_UNIT} MB/s ${s_WRITE_POS_COLOR} (${i_IOSTAT_WRITE_KB} KB/s) Max: ${i_WRITE_MAX_MB} MB/s (${i_WRITE_MAX} KB/s) (${s_WRITE_MAX_DATE})"
    if [ "$i_LOOP_TIMES" -gt 0 ]; then
        i_LOOP_TIMES=$((i_LOOP_TIMES-1))
    fi
done

News from the Blog 2020-07-19

I'm starting this section to mention things that are worth sharing.

  1. I read this article about the Slack outage in May. It is really nice:

https://slack.engineering/a-terrible-horrible-no-good-very-bad-day-at-slack-dfe05b485f82

2. I read this interesting article about Netflix and Hexagonal Architecture:

https://netflixtechblog.com/ready-for-changes-with-hexagonal-architecture-b315ec967749

3. This news from Amazon is interesting for everybody in the sector:

https://www.livemint.com/companies/news/amazon-extends-work-from-home-policy-for-corporate-employees-until-2021-11594874878195.html

4. I updated CTOP.py to v.0.7.6 adding some command line switches to:

  • Limit the number of loops (like just getting the info and exiting, for scripting)
  • Work mode in black and white
  • Sleep time between retraces, to avoid flickering over SSH, and reduce bandwidth

5. I've been asked by a publisher to print my books on paper, for all the world. I'm thinking about it.

6. I’ve been asked to talk in a DevOps event.

7. I’ve created a new section in the blog, for adding the score of the videogames I played.

LDAPGUI, an LDAP GUI program in Python and Tkinter

I wanted to automate certain operations that we do very often, and so I decided to do a PoC of how handy it would be to create GUI applications that can automate tasks.

As locating information in several repositories (LDAP, databases, websites, etc…) can be tedious, I decided to create a small program that queries LDAP for the information I'm interested in, in this case a Location. This small program can very easily be extended to launch the VPN, to query a Database after querying LDAP if no results are found, etc…

I share with you the basic application, as you may find it interesting for creating GUI applications in Python that are compatible with Windows, Linux and Mac.

I'm a super Linux fan, but this is important, as many multinationals still use Windows or Mac, even for Engineer and SRE positions.

With the article I provide a Dockerfile and a docker-compose.yml file that will launch an OpenLDAP Docker Container preloaded with very basic information, and a phpLDAPadmin Container.
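
Once you have both files (shown further below), bringing the environment up is a single command:

sudo docker-compose up -d

The docker-compose.yml maps port 8090 to the phpLDAPadmin container and 389 to the OpenLDAP server, so you can browse the directory at http://127.0.0.1:8090 while LDAPGUI talks to 127.0.0.1:389.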

Installation of the dependencies

Ubuntu:

sudo apt-get install python3.8 python3-tk
pip install ldap3

Windows and Mac:

Install Python 3.6 or greater, and then from the command line:

For Mac, install pip if you don't have it.

pip install ldap3

Python Code

#!/bin/env python3

import tkinter as tk
from ldap3 import Server, Connection, MODIFY_ADD, MODIFY_REPLACE, ALL_ATTRIBUTES, ObjectDef, Reader
from ldap3.utils.dn import safe_rdn
from lib.fileutils import FileUtils

# sudo apt-get install python3.8 python3-tk
# pip install ldap3


class LDAPGUI:

    s_config_file = "config.cfg"

    s_ldap_server = "ldapslave01"
    s_connection = "uid=%USERNAME%, cn=users, cn=accounts, dc=demo1, dc=carlesmateo, dc=com"
    s_username = "carlesmateo"
    s_password = "Secret123"
    s_query = "location=%LOCATION%,dc=demo1,dc=carlesmateo,dc=com"

    i_window_width = 610
    i_window_height = 650
    i_frame_width = i_window_width-5
    i_frame_height = 30
    i_frame_results_height = 400

    # Graphical objects
    o_window = None
    o_entry_ldap = None
    o_entry_connection = None
    o_entry_results = None
    o_entry_username = None
    o_entry_location = None
    o_button_search = None

    def __init__(self, o_fileutils):
        self.o_fileutils = o_fileutils

    def replace_connection_with_username(self):
        s_connection_raw = self.o_entry_username.get()
        s_connection = self.s_connection.replace("%USERNAME%", s_connection_raw)
        return s_connection

    def replace_query_with_location(self):
        s_query_raw = self.o_entry_location.get()
        s_query = self.s_query.replace("%LOCATION%", s_query_raw)
        return s_query

    def clear_results(self):
        self.o_entry_results.delete("1.0", tk.END)

    def enable_button(self):
        self.o_button_search["state"] = tk.NORMAL

    def disable_button(self):
        self.o_button_search["state"] = tk.DISABLED

    def ldap_run_query(self):

        self.disable_button()
        self.clear_results()

        s_ldap_server = self.o_entry_ldap.get()
        self.o_entry_results.insert(tk.END, "Connecting to: " + s_ldap_server + "...\n")
        try:

            o_server = Server(s_ldap_server)
            o_conn = Connection(o_server,
                                self.replace_connection_with_username(),
                                self.s_password,
                                auto_bind=False)
            o_conn.bind()

            if isinstance(o_conn.last_error, str):
                s_last_error = o_conn.last_error
                self.o_entry_results.insert(tk.END, "Last error: " + s_last_error + "\n")

            if isinstance(o_conn.result, str):
                s_conn_result = o_conn.result
                self.o_entry_results.insert(tk.END, "Connection result: " + s_conn_result + "\n")

            s_query = self.replace_query_with_location()
            self.o_entry_results.insert(tk.END, "Performing Query: " + s_query + "\n")

            obj_location = ObjectDef('organizationalUnit', o_conn)
            r = Reader(o_conn, obj_location, s_query)
            r.search()
            self.o_entry_results.insert(tk.END, "Results: \n" + str(r) + "\n")
            #print(r)

            #print(0, o_conn.extend.standard.who_am_i())
            # https://github.com/cannatag/ldap3/blob/master/tut1.py
            # https://github.com/cannatag/ldap3/blob/master/tut2.py

        except:
            self.o_entry_results.insert(tk.END, "There has been a problem" + "\n")

            if isinstance(o_conn.last_error, str):
                s_last_error = o_conn.last_error
                self.o_entry_results.insert(tk.END, "Last error: " + s_last_error + "\n")

        try:
            self.o_entry_results.insert(tk.END, "Closing connection\n")
            o_conn.unbind()
        except:
            self.o_entry_results.insert(tk.END, "Problem closing connection" + "\n")

        self.enable_button()

    def create_frame_location(self, o_window, i_width, i_height):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_lbl_location = tk.Label(master=o_frame,
                                  text="Location:",
                                  width=10,
                                  height=1)
        o_lbl_location.place(x=0, y=0)

        o_entry = tk.Entry(master=o_frame, fg="yellow", bg="blue", width=50)
        o_entry.insert(tk.END, "")
        o_entry.place(x=100, y=0)

        return o_frame, o_entry

    def create_frame_results(self, o_window, i_width, i_height):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_entry = tk.Text(master=o_frame, fg="grey", bg="white", width=75, height=20)
        o_entry.insert(tk.END, "")
        o_entry.place(x=0, y=0)

        return o_frame, o_entry

    def create_frame_username(self, o_window, i_width, i_height, s_username):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_lbl_location = tk.Label(master=o_frame,
                                  text="Username:",
                                  width=10,
                                  height=1)
        o_lbl_location.place(x=0, y=0)

        o_entry_username = tk.Entry(master=o_frame, fg="yellow", bg="blue", width=50)
        o_entry_username.insert(tk.END, s_username)
        o_entry_username.place(x=100, y=0)

        return o_frame, o_entry_username

    def create_frame_ldapserver(self, o_window, i_width, i_height, s_server):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_lbl_ldapserver = tk.Label(master=o_frame,
                                    text="LDAP Server:",
                                    width=10,
                                    height=1)
        o_lbl_ldapserver.place(x=0, y=0)
        # We don't need to pack the label, as it is inside a Frame, already packed
        # o_lbl_ldapserver.pack()

        o_entry = tk.Entry(master=o_frame, fg="yellow", bg="blue", width=50)
        o_entry.insert(tk.END, s_server)
        o_entry.place(x=100, y=0)

        return o_frame, o_entry

    def create_frame_connection(self, o_window, i_width, i_height, s_connection):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_lbl_connection = tk.Label(master=o_frame,
                                    text="Connection:",
                                    width=10,
                                    height=1)
        o_lbl_connection.place(x=0, y=0)
        # We don't need to pack the label, as it is inside a Frame, already packed
        # o_lbl_ldapserver.pack()

        o_entry_connection = tk.Entry(master=o_frame, fg="yellow", bg="blue", width=50)
        o_entry_connection.insert(tk.END, s_connection)
        o_entry_connection.place(x=100, y=0)

        return o_frame, o_entry_connection

    def create_frame_query(self, o_window, i_width, i_height, s_query):
        o_frame = tk.Frame(master=o_window, width=i_width, height=i_height)
        o_frame.pack()

        o_lbl_query = tk.Label(master=o_frame,
                               text="Query:",
                               width=10,
                               height=1)
        o_lbl_query.place(x=0, y=0)

        o_entry_query = tk.Entry(master=o_frame, fg="yellow", bg="blue", width=50)
        o_entry_query.insert(tk.END, s_query)
        o_entry_query.place(x=100, y=0)

        return o_frame, o_entry_query

    def create_button(self):
        o_button = tk.Button(
            text="Search",
            width=25,
            height=1,
            bg="blue",
            fg="yellow",
            command=self.ldap_run_query
        )

        o_button.pack()

        return o_button

    def render_screen(self):
        o_window = tk.Tk()
        o_window.title("LDAPGUI by Carles Mateo")
        self.o_window = o_window
        self.o_window.geometry(str(self.i_window_width) + 'x' + str(self.i_window_height))

        o_frame_ldap, o_entry_ldap = self.create_frame_ldapserver(o_window=o_window,
                                                                  i_width=self.i_frame_width,
                                                                  i_height=self.i_frame_height,
                                                                  s_server=self.s_ldap_server)
        self.o_entry_ldap = o_entry_ldap
        o_frame_connection, o_entry_connection = self.create_frame_connection(o_window=o_window,
                                                                              i_width=self.i_frame_width,
                                                                              i_height=self.i_frame_height,
                                                                              s_connection=self.s_connection)
        self.o_entry_connection = o_entry_connection

        o_frame_user, o_entry_user = self.create_frame_username(o_window=o_window,
                                                                i_width=self.i_frame_width,
                                                                i_height=self.i_frame_height,
                                                                s_username=self.s_username)
        self.o_entry_username = o_entry_user

        o_frame_query, o_entry_query = self.create_frame_query(o_window=o_window,
                                                               i_width=self.i_frame_width,
                                                               i_height=self.i_frame_height,
                                                               s_query=self.s_query)
        self.o_entry_query = o_entry_query


        o_frame_location, o_entry_location = self.create_frame_location(o_window=o_window,
                                                                        i_width=self.i_frame_width,
                                                                        i_height=self.i_frame_height)

        self.o_entry_location = o_entry_location

        o_button_search = self.create_button()
        self.o_button_search = o_button_search

        o_frame_results, o_entry_results = self.create_frame_results(o_window=o_window,
                                                                     i_width=self.i_frame_width,
                                                                     i_height=self.i_frame_results_height)
        self.o_entry_results = o_entry_results

        o_window.mainloop()

    def load_config_values(self):
        b_success, d_s_config = self.o_fileutils.read_config_file_values(self.s_config_file)
        if b_success is True:
            if 'server' in d_s_config:
                self.s_ldap_server = d_s_config['server']
            if 'connection' in d_s_config:
                self.s_connection = d_s_config['connection']
            if 'username' in d_s_config:
                self.s_username = d_s_config['username']
            if 'password' in d_s_config:
                self.s_password = d_s_config['password']
            if 'query' in d_s_config:
                self.s_query = d_s_config['query']


def main():
    o_fileutils = FileUtils()

    o_ldapgui = LDAPGUI(o_fileutils=o_fileutils)
    o_ldapgui.load_config_values()

    o_ldapgui.render_screen()


if __name__ == "__main__":
    main()

Dockerfile

FROM osixia/openldap

# Note: Docker compose will generate the Docker Image.
# Run:  sudo docker-compose up -d

LABEL maintainer="Carles Mateo"

ENV LDAP_ORGANISATION="Carles Mateo Test Org" \
    LDAP_DOMAIN="carlesmateo.com"

COPY bootstrap.ldif /container/service/slapd/assets/config/bootstrap/ldif/50-bootstrap.ldif

docker-compose.yml

version: '3.3'
services:
  ldap_server:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        LDAP_ADMIN_PASSWORD: test1234
        LDAP_BASE_DN: dc=carlesmateo,dc=com
      ports:
        - 389:389
      volumes:
        - ldap_data:/var/lib/ldap
        - ldap_config:/etc/ldap/slapd.d
  ldap_server_admin:
      image: osixia/phpldapadmin:0.7.2
      ports:
        - 8090:80
      environment:
        PHPLDAPADMIN_LDAP_HOSTS: ldap_server
        PHPLDAPADMIN_HTTPS: 'false'
volumes:
  ldap_data:
  ldap_config:

config.cfg

# Config File for LDAPGUI by Carles Mateo
# https://blog.carlesmateo.com
#

# Configuration for working with the prepared Docker
server=127.0.0.1:389
connection=cn=%USERNAME%,dc=carlesmateo,dc=com
username=admin_gh
password=admin_gh_pass
query=cn=%LOCATION%,dc=carlesmateo,dc=com

# Carles other tests
#server=ldapslave.carlesmateo.com
#connection=uid=%USERNAME%, cn=users, cn=accounts, dc=demo1, dc=carlesmateo, dc=com
#username=carlesmateo
#password=Secret123
#query=location=%LOCATION%,dc=demo1,dc=carlesmateo,dc=com

bootstrap.ldif

In order to bootstrap the Data on the LDAP Server I use a bootstrap.ldif file copied into the Server.

Credits: I learned this trick on this page: https://medium.com/better-programming/ldap-docker-image-with-populated-users-3a5b4d090aa4

dn: cn=developer,dc=carlesmateo,dc=com
changetype: add
objectclass: Engineer
cn: developer
givenname: developer
sn: Developer
displayname: Developer User
mail: developer@carlesmateo.com
userpassword: developer_pass

dn: cn=maintainer,dc=carlesmateo,dc=com
changetype: add
objectclass: Engineer
cn: maintainer
givenname: maintainer
sn: Maintainer
displayname: Maintainer User
mail: maintainer@carlesmateo.com
userpassword: maintainer_pass

dn: cn=admin,dc=carlesmateo,dc=com
changetype: add
objectclass: Engineer
cn: admin
givenname: admin
sn: Admin
displayname: Admin
mail: admin@carlesmateo.com
userpassword: admin_pass

dn: ou=Groups,dc=carlesmateo,dc=com
changetype: add
objectclass: organizationalUnit
ou: Groups

dn: ou=Users,dc=carlesmateo,dc=com
changetype: add
objectclass: organizationalUnit
ou: Users

dn: cn=Admins,ou=Groups,dc=carlesmateo,dc=com
changetype: add
cn: Admins
objectclass: groupOfUniqueNames
uniqueMember: cn=admin,dc=carlesmateo,dc=com

dn: cn=Maintaners,ou=Groups,dc=carlesmateo,dc=com
changetype: add
cn: Maintaners
objectclass: groupOfUniqueNames
uniqueMember: cn=maintainer,dc=carlesmateo,dc=com
uniqueMember: cn=developer,dc=carlesmateo,dc=com
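
If you want to verify that the bootstrap entries were loaded, a quick ldapsearch against the container should work (a sketch, assuming the default admin DN of the osixia image together with the LDAP_ADMIN_PASSWORD and LDAP_BASE_DN defined in docker-compose.yml):

ldapsearch -x -H ldap://127.0.0.1:389 -b "dc=carlesmateo,dc=com" -D "cn=admin,dc=carlesmateo,dc=com" -w test1234 "(cn=developer)"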

lib/fileutils.py

You can download this file from the lib folder in my project CTOP.py
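
If you don't want to pull the whole library, here is a minimal sketch of what read_config_file_values is assumed to do, judging by how load_config_values uses it and by the key=value format of config.cfg (this is not the original implementation):

class FileUtils:

    def read_config_file_values(self, s_config_file):
        """Returns (b_success, d_values) from a key=value file, ignoring blank lines and # comments."""
        d_values = {}
        try:
            with open(s_config_file) as o_file:
                for s_line in o_file:
                    s_line = s_line.strip()
                    if s_line == "" or s_line.startswith("#"):
                        continue
                    if "=" in s_line:
                        s_key, s_value = s_line.split("=", 1)
                        d_values[s_key.strip()] = s_value.strip()
            return True, d_values
        except IOError:
            return False, d_values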

More information about programming with tkinter:

https://realpython.com/python-gui-tkinter/#getting-user-input-with-entry-widgets

https://www.tutorialspoint.com/python/python_gui_programming.htm

https://effbot.org/tkinterbook/entry.htm

About LDAP:

https://ldap3.readthedocs.io/en/latest/

A script to backup your partition compressed and a game to learn about dd and pipes

This article is more of an exercise, like a game, so you get to know certain things about Linux and follow my mental process to uncover them. It is nothing mysterious for Senior Engineers, but Junior Sys Admins may enjoy the reading. :)

Ok, so the first thing is that I wrote a script in order to completely back up my NVMe hard drive to a gzipped file, and then I will use this as a motivation to dig deep and understand what is going on.

Ok, so the first script would be like this:

#!/bin/bash
SOURCE_DRIVE="/dev/nvme0n1"
TARGET_PATH="/media/carles/Seagate\ Backup\ Plus\ Drive/BCK/"
TARGET_FILE="nvme.img"
sudo bash -c "dd if=${SOURCE_DRIVE} | gzip > ${TARGET_PATH}${TARGET_FILE}.gz"

So basically, we are going to restart the computer, boot with a Linux Live USB Key, mount the Seagate Hard Drive, and run the script.

We are booting with a Live Linux CD in order to have our partition unmounted and unmodified while we do the backup. This is to avoid corruption or data loss, as a live Filesystem keeps getting modifications as we read it.

The problem with this first script is that it will generate a big gzip file.

By big I mean much bigger than 2GB. Not all physical media support files bigger than 2GB or 4GB, but even if they do, it's a pain to transfer them over the Network or on USB drives, so we are going to make a slight modification.

#!/bin/bash
SOURCE_DRIVE="/dev/nvme0n1"
TARGET_PATH="/media/carles/Seagate\ Backup\ Plus\ Drive/BCK/"
TARGET_FILE="nvme.img"
sudo bash -c "dd if=${SOURCE_DRIVE} | gzip | split -b 1024MiB - ${TARGET_PATH}${TARGET_FILE}-split.gz_"

Ok, so we will use pipes and split in order to generate many files, each 1GB big.

If we ls we will get:

-rwxrwxrwx 1 carles carles 1073741824 May 24 14:57 nvme.img-split.gz_aa
-rwxrwxrwx 1 carles carles 1073741824 May 24 14:58 nvme.img-split.gz_ab
-rwxrwxrwx 1 carles carles 1073741824 May 24 14:59 nvme.img-split.gz_ac

Then one may say: Ok, this is working, but how do I know the progress?

For old versions of dd you can use pv, which stands for Pipe Viewer and allows you to see the data transferred between processes through pipes.
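
For example, inserting pv between dd and gzip in the same backup command shows the throughput and the total amount of data that has gone through the pipe (a sketch using the variables defined above):

sudo bash -c "dd if=${SOURCE_DRIVE} | pv | gzip > ${TARGET_PATH}${TARGET_FILE}.gz"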

For more recent versions of dd you can use status=progress.

So the script updated with status=progress is:

#!/bin/bash
SOURCE_DRIVE="/dev/nvme0n1"
TARGET_PATH="/media/carles/Seagate\ Backup\ Plus\ Drive/BCK/"
TARGET_FILE="nvme.img"
sudo bash -c "dd if=${SOURCE_DRIVE} status=progress | gzip | split -b 1024MiB - ${TARGET_PATH}${TARGET_FILE}-split.gz_"

You can also download the code from:

https://gitlab.com/carles.mateo/blog.carlesmateo.com-source-code/-/blob/master/backup_partition_in_files.sh

Then one may ask himself: wait, if pipes use STDOUT and STDIN and dd is displaying on the screen, then will our gz file get corrupted?

I like when people question things, and investigate, so let’s answer this question.

If it were a young member of my Team I would say:

  • Ok, try it. Check the output file to see if it is corrupted.

So they can use zcat or zless to inspect the file, see if it has errors, and to make sure:

gzip -v -t nvme.img.gz
nvme.img.gz:        OK

Ok, so what happened? Because we were seeing output on the screen.

Assuming the young Engineer does not know the answer, I would have told them:

  • Ok, so you know that if dd printed to STDOUT, you wouldn't see it, because it would be sent to the pipe, so there is something more you're missing. Let's check the source code of dd to see what status=progress does

And then look for “progress”.

Soon you'll find things like this everywhere:

  if (progress_time)
    fputc ('\r', stderr);

Ok, pay attention to where the data is written: stderr. So basically the answer is: dd status=progress does not corrupt STDOUT, and it prints to the screen because it uses STDERR.

Other fun ways to see the progress would be to use:

watch -n10 "ls -alh /BCK/ | grep nvme | wc --lines"

So you would see in real time how it advanced; in the end, 512GB were compressed to around 336GB, in 336 files of 1GB each (except the last one).

Another fun way would have been to send the USR1 signal to the dd process:
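
For example, from another terminal (assuming a single dd process is running), GNU dd prints its I/O statistics to STDERR when it receives that signal:

sudo kill -USR1 $(pidof dd)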

Hope you enjoyed this little exercise about the importance of going deep, to the end, to understand what's going on in the system. :)

Instead of gzip you can use bzip2 or pixz. pixz is very handy if you want to just compress a file, as it uses multiple processors in parallel for the task.

xz or lrzip are other compressors. lrzip aims at compressing very large files, especially source code.
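
Swapping the compressor is just a matter of replacing the gzip stage in the pipeline; for instance, a sketch using multi-threaded xz (-T 0 uses all available cores):

sudo bash -c "dd if=${SOURCE_DRIVE} status=progress | xz -T 0 | split -b 1024MiB - ${TARGET_PATH}${TARGET_FILE}-split.xz_"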

Some advice for WFH

These are crazy times in which it is difficult to handle working from home and the lockdown…

But they are also very nice times in which people help each other. Doctors and health personnel fight on the front line, truck drivers and supermarket staff are doing extra hours to provide for society, researchers are working hard to get a vaccine…

It is beautiful that so many people are helping and contributing to society.

I want to share my humble experience of working remotely, so you avoid going bananas. It is very easy to get depressed or anxious… So here is my advice.

  1. Stick to a routine
    Respect the working times, as if you were going to the office.
    Dress as you would on a normal day in the office. Don't spend all day in pajamas or sportswear. Switch to sportswear when you finish your daily work at 6PM, if you want, but not before.
    If you talk via Slack, I always keep the video turned on, as it is a way to force myself to dress and take care of my appearance.
    I take a shower, dress like an Engineer at work, have my breaks for breakfast and lunch, and when I finish work I study for 30 minutes and exercise for 30 minutes or more. Then I take another shower and consider myself free.
    Note: The only exception I make to the dressing is the shoes. I don't wear any shoes, as my feet really enjoy walking freely over the parquet.
  2. Take care of yourself
    Shave, cut your nails… be presentable. The same way you would if you had to go to the office.
  3. Stick to the working clock
    As said in the stick-to-a-routine advice, work your hours, from 9AM to 6PM, and don't get lost. The weekends are weekends; don't work, so you can free your mind.
    For the weekends I have my side projects, like writing books.
  4. Walk
    If you can, walk and enjoy nature as much as you can. This clears the mind and keeps up your mental health and wellness.
  5. Do exercise
    It may be walking, but if you have a home bicycle, use it.
    Try to set a goal, like 15 minutes of cycling daily, and grow from there, or keep it like that. Doing even 15 minutes of cardio every day will be highly beneficial for your mind and body.
  6. Stretch
    After doing exercise stretch your muscles.
  7. See the light
    As much as you can, see the sunlight. Try to go for walks too and see nature.
    Daylight and nature are amazingly good for your mental health and morale.
  8. Study/Learn new things
    I keep this as part of my routine. I study every day on Linux Academy or read a book for at least half an hour. I've done this for years.
  9. Keep yourself hydrated
    Drink lots of water.
  10. Take your breaks.
    After 1 hour, try to walk a bit around the house, and focus your view on distant points to relax your eye muscles.
  11. Ergonomics and light
    Try to have correct lighting in the working area, a comfortable chair, and the right height for the keyboard and the monitor, so that neither your neck nor your hands hurt.
  12. Do like in the office: discipline
    In the office you don’t drink, you don’t smoke at your desk.
    So do the same. If you want to smoke one cigarette after 2 hours of work, Ok, but don't lose yourself in self-indulgence. Set strict rules regarding alcohol if you love beer:
    No alcohol during working hours.
    Years ago I was CTO of a company with Teams across half the world and I had a Team in Belarus. The Team Lead would be drunk on the sofa during business hours and start talking nonsense. If you have weak points, you don't want to lose yourself. Be disciplined.
  13. Set boundaries with your family and pets
    If you close the door of the room where you work, your cat is not going to die.
    It is used to being alone when you are in the office.
    So if it's excessively demanding and wants you to pet it, or jumps over your laptop's keyboard while you are typing commands as root, set boundaries. Close the door. It can take it.
    The same goes for the kids, the wife, the mother:
    Please, do not disturb me while I'm working. We will play afterwards.
  14. During covid-19 lockdown, do videoconference
    Do video conferences with your family and friends.
    You can use Slack, Skype, Zoom, WhatsApp, Facebook Messenger…
    I schedule a daily WhatsApp video conference with my family, and a weekly one with my cool friend Alex :)
  15. Be very patient with your colleagues and Team members
    We are all human and each one has their own situation.
    Some people may feel depressed or alone; others may have problems with their partners or with the parents living with them.
    Others may have a cat trolling them all the time and stopping their work.
    Others may have hyperactive children, or just a poor chair, a poor desk, and only the laptop with no external monitor.
    Others may have their family far away, in another country, and suffer for them.
    Be patient and understanding. Be Human.
  16. Have a good Internet connection and use cable over Wifi
    I have 360 Mbit at home with Virgin and I connect the laptops with Gigabit Ethernet cables. That gives me the best and most stable connection.
    In case of emergency I can do tethering with the phone (share the connection using the phone as a Wifi Hotspot)
  17. Have spare Hardware and cables
    In this time of lockdown, it is good to have spare cables and adapters for everything, just in case they die.
    If you can have a spare monitor and spare laptops, this is great too.
    I have 3 laptops, plus one tower, plus several raspberry pis, plus the tablet, plus my working laptop. If one dies, I can use the others.
    I always have all kinds of Hardware and cables as spares (Gigabit switches, power adapters, international adapters, Ethernet cables, USB cables, headphones…). Even if I can't buy on Amazon, nothing can stop me.
    One day we had an incident with Virgin, which affected all of Ireland.
    I was offline, as the Fiber was not working and the phone was with Virgin too, but I have two additional SIM cards and spare phones, from Vodafone and Tesco Mobile.
    So I'm well protected. :)
    As per the comment from Jordi Soler, I'm updating the list of gadgets, mentioning my HP LaserJet Color Printer, very handy to print documents that I have to sign, scan and send back by email, and the UPS.
    The UPS is cool, as if the electricity goes down I don't lose Fiber Internet.
    Imagine, a router connected to the UPS can last hours! However, it is very infrequent that we lose electricity in Ireland, and the issues I experienced in 3 years were quickly resolved (unlike Barcelona, where once more than 300,000 people were 3 days without electricity).
  18. Keep doing backups
    If we are talking about work things, you can upload them to the Corporate Google Drive or Microsoft OneDrive.
A practical sense of humor

Refreshing settings in a Docker immutable image with Python and Flask

This is a trick to restart a Service that is running on an immutable Docker container, with some change applied, when you need to refresh the values very quickly without having to run the CI/CD Jenkins Pipeline and upload a new image.

So why would you need to do that?

I can think about possible scenarios like:

  • Need to roll out an urgent fix in a time critical manner
  • Jenkins is broken
  • Somebody screwed up the git master branch
  • Docker Hub is down
  • GitHub is down
  • Your artifactory is down
  • The lines between your jumpbox or workstation and the secure Server are down and you have very little bandwidth
  • You have to fix something critical and you only have a phone with you and SSH only
  • Maybe the Dockerfile used latest, and the latest image has changed
FROM os:latest

The ideal is that if you work with immutable images, you roll out a new immutable image and that’s it.

But if for whatever reason you need to update this super fast, this trick may become really handy.

Let's go for it!

Normally you’ll start your container with a command similar to this:

docker run -d --rm -p 5000:5000 api_carlesmateo_com:v7 prod 

The first thing we have to do is to stop the container.

So:

docker ps

Locate your container in the list of running containers, stop it, and then restart it without the --rm:

docker stop container_name
docker run -d -p 5000:5000 api_carlesmateo_com:v7 prod

The --rm flag makes Docker clean up the container when it exits. By default a container's file system persists even after the container exits, so don't start it with --rm.

Ok, so log in to the container:

docker exec -it container_name /bin/sh 

Edit the config file you need to change, for example config.yml

If what you have to update is a password, and it is stored encoded in base64, encode it:

echo -n "ThePassword" | base64
VGhlUGFzc3dvcmQ=
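
You can double-check that the value decodes back to what you expect before pasting it into the config file:

echo -n "VGhlUGFzc3dvcmQ=" | base64 -d
ThePassword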

Stop the container. You can do it with docker stop, or from inside the container, by killing the listening process, probably a Python Flask application.

If your Dockerfile ends with something like:

ENTRYPOINT ["./webservice.py"]

And webservice.py has Python Flask code similar to this:

#!/usr/bin/python3
#
# webservice.py
#
# Author: Carles Mateo
# Creation Date: 2020-05-10 20:50 GMT+1
# Description: A simple Flask Web Application
#              Part of the samples of https://leanpub.com/pythoncombatguide
#              More source code for the book at https://gitlab.com/carles.mateo/python_combat_guide
#


from flask import Flask, request
import logging

# Send the logs to debug.log (the file read later with cat debug.log)
logging.basicConfig(filename="debug.log", level=logging.DEBUG,
                    format="%(asctime)s %(message)s")

# Initialize Flask
app = Flask(__name__)


# Sample route so http://127.0.0.1/carles
@app.route('/carles', methods=['GET'])
def carles():
    logging.critical("A connection was established")
    return "200"

logging.info("Initialized...")

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, debug=True)

Then you can kill the process, and so end the container, from inside the container by doing:

ps -ax | grep webservice
 5750 root     56:31 {webservice.py} /usr/bin/python /opt/webservice/webservice.py
kill -9 5750

This will finish the container the same way as docker stop container_name.

Then start the container (not run)

docker start container_name

You can now test from outside or from inside the container. If from inside:

/opt/webservice # wget localhost:5000/carles
Connecting to localhost:5000 (127.0.0.1:5000)
carles               100% |**************************************************************************************************************|     3  0:00:00 ETA
/opt/webservice # cat debug.log
2020-05-06 20:46:24,349 Initialized...
2020-05-06 20:46:24,359  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2020-05-06 20:46:24,360  * Restarting with stat
2020-05-06 20:46:24,764 Initialized...
2020-05-06 20:46:24,771  * Debugger is active!
2020-05-06 20:46:24,772  * Debugger PIN: 123-456-789
2020-05-07 13:18:43,890 127.0.0.1 - - [07/May/2020 13:18:43] "GET /carles HTTP/1.1" 200 -

If you don't use YAML files, or what you need is to change the code, all this can be avoided, as when you update the Python code, Flask notices and reloads it. See this line in the logs:

2020-05-07 13:18:40,431  * Detected change in '/opt/webservice/wwebservice.py', reloading

Python Combat Guide published

After some work reviewing it and ensuring it has the expected quality, I finally published my book Python Combat Guide.

It is an atypical creation. It is more a Master Class for my best friend, who could be an SDM or TL leading a small Software Development department, a Coder, or a Scientist wanting to join IT as a programmer and learn a lot of stuff very quickly, rather than a formal Python Book for learning. It is absolutely not for beginners.

If you want to buy it, to explore the TOC, the extended description…

https://leanpub.com/pythoncombatguide

Bash Script: Count repeated lines in the logs

This small script will count repeated patterns in the Logs.

Ideal for checking if there are errors that you’re missing while developing.

#!/usr/bin/env bash
# count_repeated_pattern_in_logs.sh
# By Carles Mateo
# Helps to find repeated lines in Logs
LOGFILE_MESSAGES="/var/log/messages"
LOGFILE_SYSLOG="/var/log/syslog"
if [[ -f "${LOGFILE_MESSAGES}" ]]; then
    LOGFILE=${LOGFILE_MESSAGES}
else
    LOGFILE=${LOGFILE_SYSLOG}
fi
echo "Using Logfile: ${LOGFILE}"
CMD_OUTPUT=`cat ${LOGFILE} | awk '{ $1=$2=$3=$4=""; print $0 }' | sort | uniq --count | sort --ignore-case --reverse --numeric-sort`
echo -e "$CMD_OUTPUT"

Basically it strips out the non-relevant fields that would prevent detecting repetition, like the timestamp, and prints the rest.
Then you will launch it like this:

count_repeated_pattern_in_logs.sh | head -n20

If you are checking a machine with Ubuntu UFW (Firewall) and want to skip those lines:

./count_repeated_pattern_in_logs.sh | grep -v "UFW BLOCK" | head -n20