This article describes how Riot Games uses Slack. Slack is a really powerful tool, and its approach, with the fun icons and /giphy, makes communication in companies more human. I'm very serious when it comes to work, but I recognize the friendly, warm, human and lovely touch these kinds of animated icons bring to conversations.
Remember that the lifespan of an SSD is different from that of spinning drives. I recommend keeping your backups on external spinning drives that stay disconnected most of the time.
I updated it on Nov-01, as I normally do, adding more content.
I've been paid the royalties for the past two months and I reinvested everything (and more from my own pocket) in hardware for working with ZFS.
A publisher in the States offered to publish Python Combat Guide and others of my books worldwide. I thought about it for a while. It was very good money, translation to multiple languages and platforms, marketing and a lot of promotion, but I would have lost the rights and the freedom I have now, like the possibility to offer discount coupons to whomever I want and to update the contents often. So, to celebrate my decision, for you, readers of the blog, during September I offer a discounted price of $5 USD for the first 100 sales instead of the $25 USD suggested price. Use the following link:
As part of my effort to contribute nice Open Source products to the Community, I have made some investments to keep contributing to:
OpenZFS
My old tool for managing ZFS and Network shares easily
I'm also writing a new book about managing ZFS for Small Business, in which I show how to operate on this hardware, with its good points and downsides.
I'm assembling a new PC for ZFS with plenty of disk storage, using a mix of:
SAS Enterprise grade SSD 2.5″
SATA 12Gb Enterprise grade SSD 2.5″
SATA SSD 2.5″
SATA HDD 2TB 2.5″
SATA HDD 2TB 3.5″
I'm a big fan of Intel, but this time I have chosen AMD. Specifically, an AMD Ryzen 7 3700X (AM4), 8 cores / 16 threads, 3.6 GHz base and up to 4.4 GHz with Turbo. The reason I chose this CPU is that it only uses 65W but still has 8 cores / 16 threads.
I also want to see the performance of this AMD Ryzen with CMIPS, and another important reason is that AMD motherboards support PCIe 4.0. I have bought an NVMe SSD, a Samsung 980 PRO PCIe 4.0 (x4), able to read at 6,400 MB/s. I will use this AMD box for running VMs as well, basically VirtualBox and Docker.
I was surprised that for 169.99 GBP I could get a very good ASUS motherboard with 2.5 Gb Ethernet: ASUS ROG STRIX B550-F GAMING, AMD B550, AM4, DDR4, PCIe 4.0, SATA3, Dual M.2, CrossFire, 2.5GbE, USB 3.2 Gen2 A+C, ATX.
In order to have an ASUS motherboard with 2.5 Gb Ethernet for Intel I had to jump to a 254 GBP motherboard, and Intel is still on PCIe 3.0. Actually, there are PCIe 10 Gb NICs at 80 GBP, so at some point I'll upgrade my home network from Gigabit to 10 Gb. That will come slowly, but if the new equipment I assemble has 2.5 Gb, then when I upgrade the main switches to 10 Gb, at least I'll be able to communicate at 2.5 Gb without any additional change.
Also, memory at 3200 MHz, a speed the AMD motherboard supports, is more than affordable.
This new server will have 64 GB of RAM (Corsair DDR4 Vengeance PC4-25600 (3200)), as I plan to run VMs and use Volumes mounted via iSCSI and locally as block devices to improve my Software. I've bought a new UPS to keep it running in case the power goes down. That's something that doesn't happen often in my city in Ireland, honestly, but I never forget that this happens in Barcelona two or three times per year, and that a high-voltage spike can burn your motherboard, drives, or electronics like the TV or the fridge. I've also bought a new KVM Switch, a 4K HDMI and USB one, so I don't have to have so many keyboards. My Logitech M720 lets me use it with three computers, but I still wanted something more operational. The KVM I bought lets me switch with a button or with a keyboard hotkey.
I bought a new Icy Box for housing six 2.5″ drives in just one bay of the tower, and an 850 Watt Corsair PSU that will be able to power the many drives I want at the same time.
ZFS on Ubuntu 20.04.1 LTS: A guide for Small/Medium Business and power users to work with ZFS. https://leanpub.com/zfs-ubuntu
These can be purchased while I'm still working on them; you get the updates as I publish them and can keep in touch with me about doubts or improvements.
Halloween Software Offers
I saw some Halloween offers and purchased licenses for Software I use.
I contribute a lot to Open Source, and many years ago, before the term Open Source existed, I was creating Freeware. But I think that good commercial Software deserves to be supported. Like everything in life, if they are doing good work that is useful to me, why not support them? It is also a way to make sure they will continue producing amazing Software. On the other hand, I create Software myself, sometimes commercial Software, and I like to be paid, so I apply the same principle.
I wanted to automate certain operations that we do very often, so I decided to do a PoC of how handy it would be to create GUI applications that can automate tasks.
As locating information across several repositories (LDAP, databases, websites, etc.) can be tedious, I decided to create a small program that queries LDAP for the information I'm interested in, in this case a Location. This small program can very easily be extended to launch the VPN, to query a Database after querying LDAP if no results are found, etc.
I share the basic application with you, as you may find it interesting for creating GUI applications in Python, compatible with Windows, Linux and Mac.
I'm a huge Linux fan, but this is important, as many multinationals still use Windows or Mac even for Engineering and SRE positions.
With the article I provide a Dockerfile and a docker-compose.yml file that will launch an OpenLDAP Docker Container preloaded with very basic information and a phpLDAPadmin Container. Below is a minimal sketch of the LDAP-query part.
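This is not the exact program from the article, just a minimal sketch of the idea: a tiny Tkinter window that queries LDAP with the ldap3 library and shows the Location of a user. The connection values (host, base DN, admin credentials) are hypothetical placeholders; adjust them to your OpenLDAP Container.
#!/usr/bin/python3
# ldap_location_lookup.py - minimal sketch, not the original program
import tkinter as tk
from ldap3 import Server, Connection, ALL

# Hypothetical values: adapt them to your OpenLDAP container
LDAP_URI = "ldap://localhost:389"
BIND_DN = "cn=admin,dc=example,dc=org"
BIND_PASSWORD = "admin"
BASE_DN = "dc=example,dc=org"


def search_location():
    """Query LDAP for the uid typed in the box and show its Location (attribute l)."""
    uid = entry_uid.get().strip()
    server = Server(LDAP_URI, get_info=ALL)
    conn = Connection(server, user=BIND_DN, password=BIND_PASSWORD, auto_bind=True)
    conn.search(BASE_DN, "(uid={})".format(uid), attributes=["cn", "l"])
    if conn.entries:
        entry = conn.entries[0]
        result = "{}: {}".format(entry.cn, entry.l)
    else:
        # Here the program could fall back to querying a Database, launching the VPN, etc.
        result = "Not found in LDAP"
    label_result.config(text=result)
    conn.unbind()


# A very small window: one input, one button, one result label
window = tk.Tk()
window.title("LDAP Location lookup")
entry_uid = tk.Entry(window, width=30)
entry_uid.pack(padx=10, pady=5)
tk.Button(window, text="Search", command=search_location).pack(pady=5)
label_result = tk.Label(window, text="")
label_result.pack(padx=10, pady=5)
window.mainloop()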
This is a trick to restart a Service that is running in an immutable Docker container, with some change, when you need to refresh the values very quickly without having to run the CI/CD Jenkins Pipeline and upload a new image.
So why would you need to do that?
I can think of possible scenarios like:
Need to roll out an urgent fix in a time-critical manner
Jenkins is broken
Somebody screwed up the git master branch
Docker Hub is down
GitHub is down
Your artifactory is down
The lines between your jumpbox or workstation and the secure Server are down and you have very little bandwidth
You have to fix something critical and you only have a phone with you, with SSH only
Maybe the Dockerfile used latest, and the latest image has changed:
FROM os:latest
Ideally, if you work with immutable images, you roll out a new immutable image and that's it.
But if for whatever reason you need to update this super fast, this trick may become really handy.
Let's go for it!
Normally you’ll start your container with a command similar to this:
docker run -d --rm -p 5000:5000 api_carlesmateo_com:v7 prod
The first thing we have to do is to stop the container.
So:
docker ps
Locate your container in the list of running containers, stop it, and then restart it without the --rm flag:
docker stop container_name
docker run -d -p 5000:5000 api_carlesmateo_com:v7 prod
The --rm flag makes Docker clean up the container when it exits. By default a container's file system persists even after the container exits, so don't start it with --rm this time.
OK, so log in to the container:
docker exec -it container_name /bin/sh
Edit the config file you need to change, for example config.yml.
If what you have to update is a password, and it is stored encoded in base64, encode it:
echo -n "ThePassword" | base64
VGhlUGFzc3dvcmQ=
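To double-check, you can decode it back (base64 -d works with GNU coreutils and BusyBox):
echo -n "VGhlUGFzc3dvcmQ=" | base64 -d
ThePassword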
Stop the container. You can do it with docker stop, or from inside the container, by killing the listening process, probably a Python Flask application.
If your Dockerfile ends with something like:
ENTRYPOINT ["./webservice.py"]
And webservice.py has Python Flask code similar to this:
#!/usr/bin/python3
#
# webservice.py
#
# Author: Carles Mateo
# Creation Date: 2020-05-10 20:50 GMT+1
# Description: A simple Flask Web Application
# Part of the samples of https://leanpub.com/pythoncombatguide
# More source code for the book at https://gitlab.com/carles.mateo/python_combat_guide
#
from flask import Flask, request
import logging
# Initialize Flask
app = Flask(__name__)
# Sample route so http://127.0.0.1/carles
@app.route('/carles', methods=['GET'])
def carles():
    logging.critical("A connection was established")
    return "200"

logging.info("Initialized...")

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, debug=True)
Then you can kill the process, and so end the container, from inside the container, by doing:
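For example, a minimal sketch assuming pgrep is available inside the image (if the process is PID 1 and ignores the signal, fall back to docker stop from the host):
kill $(pgrep -f webservice.py)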
This will finish the container the same way as docker stop container_name.
Then start the container (start, not run):
docker start container_name
You can now test from outside or from inside the container (an example of testing from outside follows the log below). If from inside:
/opt/webservice # wget localhost:5000/carles
Connecting to localhost:5000 (127.0.0.1:5000)
carles 100% |**************************************************************************************************************| 3 0:00:00 ETA
/opt/webservice # cat debug.log
2020-05-06 20:46:24,349 Initialized...
2020-05-06 20:46:24,359 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2020-05-06 20:46:24,360 * Restarting with stat
2020-05-06 20:46:24,764 Initialized...
2020-05-06 20:46:24,771 * Debugger is active!
2020-05-06 20:46:24,772 * Debugger PIN: 123-456-789
2020-05-07 13:18:43,890 127.0.0.1 - - [07/May/2020 13:18:43] "GET /carles HTTP/1.1" 200 -
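To test from outside the container, from the host (since port 5000 is mapped), something like this should work:
wget -qO- http://localhost:5000/carles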
If you don't use YAML files, or what you need is to change the code, all this can be avoided: when you update the Python code, Flask (running in debug mode) detects the change and reloads. See this line in the logs:
2020-05-07 13:18:40,431 * Detected change in '/opt/webservice/wwebservice.py', reloading
You can also start a container with a shell directly:
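For example, a sketch that overrides the entrypoint so you get a shell instead of the web service:
docker run -it --rm --entrypoint /bin/sh api_carlesmateo_com:v7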
First you have to understand that Python, Java and PHP are completely different worlds.
In Python you'll probably use Flask, listening on the port you want, inside a Docker Container.
In PHP you'll use a Framework like Laravel, Symfony, or Catalonia Framework (my Framework) :) and one repo or many (as the idea is that a change in one microservice cannot break another, it is recommended to have one git repo per Service), and you'll split the requests with an API Gateway and Filters (so /billing/ goes to the right path on the right Server, like rewriting URLs). You'll rely on Software to split your microservices. Usually you'll use Docker, but you have to add a Web Server and any other tools, as the source code is not packaged with a Web Server and other Dependencies like it is in Java Spring Boot.
In Java you'll use Spring Cloud and Spring Boot, and every Service will be self-contained in its own JAR file, which includes Apache Tomcat and all other Dependencies, normally running inside a Docker container. The TCP/IP listening port will be set at start via the command line, or through the environment. You'll have many git repositories, one per Service.
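For instance, with Spring Boot the listening port can be set at start time through the standard server.port property (the JAR name here is just a hypothetical example), or via the SERVER_PORT environment variable:
java -jar billing-service.jar --server.port=8081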
Using many repos, one per Service, also allows deploying only that repository, and gives better security, with independent deployment tokens.
It is not unlikely that you'll use one language for some of your Services and another for others, as well as one Database or another, as each Service is the owner of its data.
In any case, you will be using CI/CD, and your pipeline will be something like this (a rough command-level sketch follows the list):
Pull the latest code for the Service from the git repository
Compile the code (if needed)
Run the Unit and Integration Tests
Compile the service into an executable artifact (e.g. a Java JAR with the Tomcat server and other dependencies)
Generate a Machine image with your JAR deployed (for Java, look at the Spotify Docker Plugin to run Docker builds from Maven), or with Apache, PHP, other dependencies, and the code. Normally this will be a Docker image. This image will be immutable. You will probably use Docker Hub.
The Machine image is started and Platform tests are run.
If platform tests pass, the service is promoted to the next environment (for example Dev -> Test -> PreProd -> Prod), the exact same machine is started in the next environment and platform tests are repeated.
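A very rough sketch of those steps at the command level, assuming a hypothetical Java/Maven Service and a hypothetical image name:
git pull origin master
mvn test
mvn package
docker build -t mycompany/myservice:1.2.3 .
docker push mycompany/myservice:1.2.3
docker run -d -p 8080:8080 mycompany/myservice:1.2.3
Then the platform tests are run against the started container.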
Before deploying the new Service to Production, I recommend running special Application Tests / Behaviour-driven tests. By this I mean conducting tests that really test the functionality of everything, using a real browser and emulating the actions of a user (for example with Behat, Cucumber or JMeter). I recommend this especially because Microservices are end-points, independent of the implementation, but normally they are APIs that serve a whole application. In an Application there are several components, and often a change in the Front End can break the application. Imagine a change in the Javascript Front End that results in a slightly different call, for example with a space before a name. Imagine that the Unit Tests for the Service do not test that, that it was not causing a problem in the old version of the Service, and so it will crash when the new Service is deployed. Or another example: imagine that our Service for paying with Visa cards generates IDs for the Payment Gateway, and as a result of the new implementation the IDs generated are different. With mocked objects everything works, but when we deploy for real is when we are going to use the actual Bank Payment. This is also why it is a good idea to have a PreProduction environment, with PreProduction versions of the actual Services we use (all banks, and the GDSs for flight/hotel reservations like Galileo or Amadeus, have a Test Gateway exactly like Production).
If you work with Microsoft .NET, you’ll probably use Azure DevOps.
We IT Engineers, CTOs and Architects serve the Business. We have to develop the most flexible approaches, enabling the business to release as fast as they need.
Take into account that Microservices are a tool, a pattern. We use them to bring more flexibility and speed to development, resilience to the services, and speed and independence to deployments. However, this comes at the cost of complexity.
Microservices are more about giving flexibility to the Business, and developing according to the Business Domains, normally oriented to serving an API. If you have an API that is consumed by third parties you will want things like independence of Services (if one is down the others still function), graceful degradation, being able to scale only the Services that have more load, being able to deploy a new version of a Service independently of the rest of the Services, etc. The complexity in the technical solution comes from all this resilience and flexibility.
If your Dev Team is up to 10 Developers, or you are writing just a CRUD Web Application or a PoC, or you are a Startup with a critical Time to Market, you probably will not want to use the Microservices approach. It is like killing flies with laser cannons. You can use the typical Web services approach: do everything in a single HTTPS request, have transactions, a single Database, etc.
But if your team is 100 Developers, as in a big eCommerce, you'll have multiple Teams of between 5 and 10 Developers per Business Domain, and you need independence for each Service, with less interdependence. Each Service will own its own Data; that is normally around 5 to 7 tables. Each Service will serve a Business Domain. You'll benefit from using different technologies for different needs; however, be careful to avoid having Teams with such different knowledge that rotation is hard and projects become difficult to continue when the only 2 or 3 Devs who know that technology leave. Typical beneficial scenarios are having MySQL for the Billing Services, but a NoSQL Database for the image catalog, or for storing logs of account activity. With Microservices, some Services will call other Services, often asynchronously, using Queues or Streams; you'll have Callbacks, Databases for reading, and you'll probably want gradual and graceful failure of your applications, client-side load balancing, caches and read-only/in-memory databases. This complexity exists in order to protect each Service from the failure of the others and to give it the necessary speed under heavy load.
Here you can find a PDF Document of the typical resources I use for Microservice Projects.
This was interesting from the point of view of dealing with Elastic IPs, Amazon AWS Volumes, etc., but it was a basically manual process. I could have generated an immutable image to start from next time, but that is another discussion, especially because that Server Instance has different base Software, including a MySQL Database.
This time I want to explain, step by step, how to containerize my Server, so I can port it to different platforms and be independent of what the Server Operating System is. It will always work, as we define the Operating System for the Docker Container.
So we start to use IaC (Infrastructure as Code).
So first you need to install Docker.
Basically, if your laptop runs Ubuntu 18.04 LTS, you have to:
sudo apt install docker.io
Start and Automate Docker
The Docker service needs to be set up to run at startup. To do so, type each command followed by Enter:
sudo systemctl start docker
sudo systemctl enable docker
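Optionally, you can verify that the service is running:
sudo systemctl status docker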
Create the Dockerfile
To do this you can use any text editor, but as we are working with IaC, why not use a Code Editor?
You can use the versatile PyCharm, which has plugins for understanding Docker, so you can use Version Control like git too.
This is the Dockerfile:
FROM ubuntu:19.04
MAINTAINER Carles <carles@carlesmateo.com>
ARG DEBIAN_FRONTEND=noninteractive
#RUN echo "nameserver 8.8.8.8" > /etc/resolv.conf
RUN echo "Europe/Ireland" | tee /etc/timezone
# Note: You should install everything in a single line concatenated with
# && and finalising with apt autoremove && apt clean
# In order to use the least space possible, as every command is a layer
RUN apt-get update && apt-get install -y apache2 ntpdate libapache2-mod-php7.2 \
mysql-server php7.2-mysql php-dev libmcrypt-dev php-pear git && \
apt autoremove && apt clean
RUN a2enmod rewrite
RUN mkdir -p /www
# In order to activate Debug
# RUN sed -i "s/display_errors = Off/display_errors = On/" /etc/php/7.2/apache2/php.ini
# RUN sed -i "s/error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT/error_reporting = E_ALL/" /etc/php/7.2/apache2/php.ini
# RUN sed -i "s/display_startup_errors = Off/display_startup_errors = On/" /etc/php/7.2/apache2/php.ini
# To Debug remember to change:
# config/{production.php|preproduction.php|devel.php|docker.php}
# in order to avoid Error Reporting being set to 0.
ENV PATH_CATALONIA_CACHE /www/www.cataloniaframework.com/cache/
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_LOG_DIR /var/log/apache2
RUN mkdir -p $APACHE_RUN_DIR
RUN mkdir -p $APACHE_LOCK_DIR
RUN mkdir -p $APACHE_LOG_DIR
# Remove the default Server
RUN sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/{/<\/Directory>/ s/.*/# var-www commented/; t; d}' /etc/apache2/apache2.conf
RUN rm /etc/apache2/sites-enabled/000-default.conf
COPY www.cataloniaframework.com.conf /etc/apache2/sites-available/
RUN chmod 777 $PATH_CATALONIA_CACHE
RUN chmod 777 $PATH_CATALONIA_CACHE.
RUN chown --recursive $APACHE_RUN_USER.$APACHE_RUN_GROUP $PATH_CATALONIA_CACHE
RUN ln -s /etc/apache2/sites-available/www.cataloniaframework.com.conf /etc/apache2/sites-enabled/
# Note: You should clone locally and COPY to the Docker Image
# Also you should add the .git directory to your .dockerignore file
# I made this way to show you and for simplicity, having everything
# in a single file
RUN git clone https://github.com/cataloniaframework/cataloniaframework_v1_sample_website /www/www.cataloniaframework.com
RUN cd /www/www.cataloniaframework.com && git checkout tags/v.1.16-web-1.0
# In order to change profile to Production
# RUN sed -i "s/define('ENVIRONMENT', DOCKER)/define('ENVIRONMENT', PRODUCTION)/" /var/www/www.cataloniaframework.com/config/general.php # for debugging
#RUN apt-get install -y vim
RUN service apache2 restart
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
This will copy the file www.cataloniaframework.com.conf, which must be in the same directory as the Dockerfile, to the /etc/apache2/sites-available/ folder in the container. This is the content of that file:
<VirtualHost *:80>
ServerAdmin webmaster@cataloniaframework.com
# Uncomment to use a DNS name in a multiple VirtualHost Environment
#ServerName www.cataloniaframework.com
#ServerAlias cataloniaframework.com
DocumentRoot /www/www.cataloniaframework.com/www
<Directory /www/www.cataloniaframework.com/www/>
Options -Indexes +FollowSymLinks +MultiViews
AllowOverride All
Order allow,deny
allow from all
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/www-cataloniaframework-com-error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/www-cataloniaframework-com-access.log combined
</VirtualHost>
Stopping and starting the Docker Service, and creating the Catalonia image
service docker stop && service docker start
To build the Docker Image we will do:
docker build -t catalonia . --no-cache
I use --no-cache so git is pulled again and everything is rebuilt, not kept from the cache.
Now we can run the Catalonia Docker, mapping the 80 port.
docker run -d -p 80:80 catalonia
If you want to check what's going on inside the Docker container, you'll do:
docker ps
And so in this case, we will do:
docker exec -i -t distracted_wing /bin/bash
Finally I would like to check that the web page works, and I'll use my preferred browser. In this case I will use lynx, the text browser, because I don't want Firefox to save things in its cache.
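For example (assuming port 80 of the container is mapped to port 80 of the host, as in the docker run above):
lynx http://localhost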