I was using Amazon a lot, sending parcels to my previous job's offices, and now to Blizzard's offices, so I subscribed to Amazon Prime. With the COVID-19 virus we were sent to do Remote Work, and now with the lockdown I'm basically at home 99.99% of the time.
I did a test to see how sending to home works during the pandemic.
I chose two different items and reviewed the order; they were going to be delivered separately, one day apart.
I chose two items that would fit in my mailbox, separately or together: a 3 m USB 3.0 male-to-female cable and a Blu-ray movie.
My surprise came when I went to the mailbox a day early and found a note from An Post telling me that they had passed by to deliver my parcel, but did not leave it because it didn't fit in the mailbox and they didn't want to leave it in a common space. To my surprise, both Amazon parcels had been grouped and sent ahead of time, maybe in a bigger box. But the mailman did not ring my door.
The note tells me to collect my parcel in the middle of the city, during the lockdown. No way! I'm not going to risk my health, and especially that of the elderly, just to grab a cable and a movie.
I had the chance to request re-delivery from An Post, so I did. I filled in all the info: my phone number, my email, which door to ring. And two days later, as promised… another note from An Post!
They did not even ring my bell again.
I went to Amazon to cancel the order, but the process only exists for when you actually got the items.
Fuck it. I'm not going to order anything else from Amazon until this COVID-19 passes.
I don't know if the postman just avoids people for fear of contagion, or if An Post's process is awful and he didn't get any of the information. Either way, I'll not buy anything, even if I cannot buy in other places because of the lockdown.
I was going to keep my Amazon Prime subscription, even knowing that I'll not use it much during the lockdown, but it makes no sense. Also:
I use Netflix and my Raspberry Pi 4, so I was not using Amazon Prime Video.
I use Spotify, so I was not using Amazon Prime Music.
I like to read on paper, not eBooks, so I was not using the eReader options.
I’ve been working for years within Data centers, with D&R strategies, and then in the middle of COVID-19, with huge demands on increments of bandwidth and compute, some DCs decided to do not allow in the Engineers of their customers.
As somebody that had my own Startup and CSP and had infrastructure in DCs and servers from customers in colocation, and has replaced Hw components at 1AM, replaced drives from broken RAIDs, and fixed systems so many times inside so many Datacenters across the world, I’m shocked about that.
I understand health reasons can be argued, but I still have Servers in Datacenters because we all believed they were the most safe place, prepared for disaster and recovery, with security, 24×7… and now, one realise that cannot enter to fix or upgrade the own machines.
Please note, you can still use the remote hands from the DC, although many times this is not a good idea, and I'm not sure it will remain an available option when the lockdown in those countries becomes stricter.
I'm wondering if the DCs' current model has any future at all.
I think most D&R strategies from now on will be in the cloud, in different regions, with different providers, so companies can survive a provider or a government letting them down.
Compressing an unmounted partition to an image file, compressing on the fly and breaking it into 1 GB gz files.
It also explains, in a funny way, STDIN, STDOUT and STDERR, and a methodology for investigating in depth.
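As a sketch of the technique (the device name and file prefix here are hypothetical examples, adjust them to your system), the whole thing is a single pipeline of dd, gzip and split:

```shell
#!/bin/bash
# backup_partition: read an UNMOUNTED partition with dd, compress on the fly
# with gzip, and cut the compressed stream into 1 GB pieces with split.
backup_partition() {
    local device="$1" prefix="$2"
    dd if="$device" bs=4M status=progress | gzip -c | split -b 1G - "$prefix"
}

# restore_partition: concatenate the pieces in order, decompress, write back.
restore_partition() {
    local prefix="$1" device="$2"
    cat "${prefix}"* | gunzip -c | dd of="$device" bs=4M status=progress
}

# Example usage (the partition must be unmounted!):
# backup_partition /dev/sdb1 /backups/sdb1.img.gz.
# restore_partition /backups/sdb1.img.gz. /dev/sdb1
```

Because dd, gzip and split are chained through STDIN/STDOUT, the data streams between the three processes and you never need disk space for an intermediate uncompressed image.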
Just installed a media player on my Raspberry Pi 4
So, I mentioned it was one of my pending tasks to do while I'm confined here at home, helping the Irish government stop the quick spread of the coronavirus.
I'm happy that the situation in Ireland has stabilised, unlike in Spain, where the historical lack of discipline, the selfishness and the super ego of believing Madrid is the capital of the world, and so deciding not to close it for quarantine, will cause a lot of pain. I hope the closing of borders in Catalonia works.
They have a very nice SD image writer for Linux, Mac and Windows that will install the proper image on the micro-SD for your ARM device.
This Raspberry Pi 4 comes with integrated WiFi and a Gigabit Ethernet port.
When I was in Barcelona, I had Kodi with the Raspberry Pi 2 and version 3.
This model, v4, is much cooler. I bought the 4 GB version, and it has 2×HDMI with 4K support.
So it is great to connect to any modern TV.
In Barcelona, I have a Linux tower as an NFS Server sharing my files with the Pi. It works well, even with the 100 Mbit NIC of the version 3, but at that time I was only playing Full HD, as the Pi didn't support greater resolutions, and I only had that resolution on my displays too.
For now, I'm going to explore reading from a USB 3.0 stick. Let's see if it's able to play smoothly.
The cool thing also is that I have SSH access, and so I can use the Pi for many more things. :)
I have my first update: I noticed that copying to that USB was not the best for me, as I tried to copy a 4.9 GB .MKV file and hit the 4 GB file-size limit of FAT32. I could have formatted the USB as ext4, but what I did instead was SSH into the box; I saw that I have two partitions on the SD that boots the Pi, the second one an ext4 called storage. So I copied the file I wanted to the SD, through the network, using sftp.
The Gigabit connection was fast, but when the buffer filled up it started to show the real write speed of the SD, which is 15 MB/s.
Ext4 has no problem holding a 4.9 GB file, so I'm watching my movie now. I will think about setting up an NFS share for the Pi, as it will be very convenient. :)
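For reference, a minimal sketch of what that NFS setup could look like; the share path, the network range and the server IP are hypothetical placeholders, and package names vary by distribution:

```shell
# On the Linux tower (Debian/Ubuntu style packages assumed):
sudo apt install nfs-kernel-server
echo '/media/movies 192.168.1.0/24(ro,all_squash,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra   # re-read /etc/exports and apply the new share

# On the Pi (LibreELEC/Kodi can also add NFS sources directly from the UI):
# mount -t nfs 192.168.1.10:/media/movies /storage/movies
```

Exporting read-only (ro) is enough for playback and keeps the library safe from the player side.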
I have an external wireless Logitech keyboard, but it turns out that LibreELEC recognises my Sony remote, from the television, so I don't need the keyboard/mouse. Nice.
Here you can see my Raspberry Pi 4, connected to the TV, in “combat mode”, naked, as a PoC, before setting it in its definitive place behind the TV.
Playing from the external USB 3.0 stick was also fluid, handling 4K perfectly.
The only problem I had was when I was pushing movies to the USB through the network while playing from the SD at the same time. It seems the Raspberry reached its limits doing this, and playback stuttered frequently.
After years in which many Engineers asked their companies to be allowed to do Remote Work, with most of the answers being No, it now happens that it is not only good for the company: it is the only way to ensure the continuity of business, of many businesses.
One of my colleagues from Denmark, whose government has shut down the country by sending all the public servants home in order to prevent the spread of the coronavirus, told me:
“Yes, remote working is here, but it took the four horsemen of the apocalypse”
It is curious how Remote Working has arrived: not because it was obvious, but due to external emergencies. And I'm glad that my company was prepared for business continuity.
I'll be staying home, working remotely, in order to avoid spreading the virus, especially among old people. I'm perfectly healthy, but that's exactly the use case: many people will not develop the symptoms and will still be able to spread it to others.
So I have some technology plans to work on at home, including a few improvements to the blog. What are your plans?
Update: 2020-03-13 23:16 UTC I'm thinking of all those businesses which are forced to close, and all the employees that will not get a salary, or will be fired, or will get a salary while the business owner maybe ends up bankrupt, paying the salaries while no income is being generated.
Update: 2020-03-19 10:58 UTC Some of my friends, even in Human Resources/Recruiting, are starting to work remotely for the first time. So here is some advice:
I would recommend getting an external monitor, at least 22″, so your neck is not forced into looking down and your eyes don't suffer; good light (don't work in the dark); a Nespresso can be a good friend in the morning; and keep your hands and arms aligned correctly so you don't suffer from a bad position. Watch the position of your wrists: your arms should rest comfortably at the same level as the table, like an L, and your eyes should be aligned with the top of your monitor. Finally, I would recommend following a routine, as if you were going to the office, so dress like you would. Don't stay at home all day in pyjamas! ;)
2020-03-06 Heya, I’m doing a set of improvements to the blog.
One you can already see: I added a new section to the CSS @media, so screens wider than 1,800 px will now use that width for rendering the page. The original WordPress theme at 960 px was too small for our current screens. I will add a new CSS @media rule for 4K screens promptly.
The other is about the organisation of the content. I want to separate the contents a bit: right now the articles are sequential, and it is difficult to discover nice content once it has 2 or more newer articles on top, so I will group articles by topic and provide a small index on the top page. I will also provide more areas for Operations and SRE, where it will be easy to locate code, scripts, tricks… things that are useful in our day to day. I also want to make visible the articles about living in different cities, for IT Engineers, with useful tricks and tips, and keep the more complex and more interesting Engineering matters on the main page.
2020-03-13 15:49 Added SSL to the blog
Later than I wanted, I bought an SSL certificate, configured Apache, and after a few changes to the blog it has been set up. One very annoying thing is that WordPress linked the images statically, pointing to http://blog.carlesmateo.com, so I changed the latest articles' images to point to relative paths, so they work nicely with both http and https.
My reflection is that everything negative can have its positive output. With this coronavirus thing, I decided to focus on improving things. And so I'm doing. :)
ctop.py is an Open Source tool for Linux System Administration that I've written in Python 3. It uses only the System (/proc), and no third-party libraries, to get all the information required. I use only these modules, so it's ideal to run across a whole farm of Servers and Dockers:
shutil (for getting the Terminal width and height)
The purpose of this tool is to help troubleshoot and identify problems, with a single view in a single tool that has all the typical indicators.
It provides in a single view information that is typically provided by many programs:
top, htop for the CPU usage, process list, memory usage
df to see the free space in / and the free inodes
iftop to see real-time bandwidth usage
ip addr list to see the main IP for the interfaces
netstat or lsof to see the list of listening TCP Ports
uname -a to see the Kernel version
Other cool things it does:
Identifies if you're inside an Amazon VM, VirtualBox, Docker or lxc
Uses colors, marking warnings in yellow and errors in red, for problems like little disk space remaining or high CPU usage relative to the available cores and CPUs.
Redraws the screen and adjusts to the size of the Terminal; a bigger terminal displays more information
It doesn't use external libraries, and does not escape to the shell. It reads everything from /proc, /sys or /etc files.
Identifies the Linux distribution
Shows the most repeated binaries, so you can identify DDoS attacks (like having 5,000 Apache instances where you normally have 500, or many instances of Python)
Indicates if an interface has the cable connected or disconnected
Shows the Speed of the Network Connection (useful for Mellanox cards, which can operate at 200 Gbit/sec, 100, 50, 40, 25, 10…)
It displays the local time and the Linux Epoch Time, which is universal (very useful for logs and for detecting when there was an issue; for example, if your system restarted, your SSH Session would keep the latest Epoch captured)
No root required
Displays recent errors like NFS timeouts or Memory Read Errors.
It only works on Linux, not on Mac or Windows. The idea is to help with Linux Server Administration and Troubleshooting, and Mac and Windows do not have /proc.
The list of processes of the System is read every 30 seconds, to avoid adding much overhead to the System; the other info is read every second.
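To give a flavour of the approach (these one-liners are my illustration here, not code from ctop.py itself), almost everything the tool shows can be read straight from /proc, without root and without external libraries:

```shell
#!/bin/bash
# Load average: the first three fields of /proc/loadavg
read load1 load5 load15 rest < /proc/loadavg
echo "Load average (1m 5m 15m): $load1 $load5 $load15"

# Memory: parse /proc/meminfo (values are in kB)
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo "Memory: $((mem_avail_kb / 1024)) MB available of $((mem_total_kb / 1024)) MB"

# Most repeated process names (useful to spot anomalies like a DDoS):
cat /proc/[0-9]*/comm 2>/dev/null | sort | uniq -c | sort -rn | head -5
```

The same idea extends to /proc/net/dev for bandwidth, /proc/diskstats for I/O, and /sys/class/net for link state and speed.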
I decided to code-name version 0.7 “Catalan Republic” to support the dreams and hopes and democratic requests of the Catalan people to become an independent republic.
I created this tool as Open Source, and if you want to help I need people to test it under different versions of:
Atypical Linux distributions
If you are a Cloud Provider and want me to implement the detection of your VMs, so the tool knows it is an instance from Amazon, Google, Azure, CloudSigma, Digital Ocean… contact me through my LinkedIn.
Some of the features I'm working on are parsing the logs, checking for errors, kernel panics, processes killed due to lack of memory, iSCSI disconnects and NFS errors, and checking the logs of MySQL and Oracle databases to locate errors.
First you have to understand that Python, Java and PHP are completely different worlds.
In Python you'll probably use Flask, listening on the port you want, inside a Docker Container.
In PHP you’ll use a Frameworks like Laravel, or Symfony, or Catalonia Framework (my Framework) :) and a repo or many (as the idea is that the change in one microservice cannot break another it is recommended to have one git repo per Service) and split the requests with the API Gateway and Filters (so /billing/ goes to the right path in the right Server, is like rewriting URLs). You’ll rely in Software to split your microservices. Usually you’ll use Docker, but you have to add a Web Server and any other tools, as the source code is not packet with a Web Server and other Dependencies like it is in Java Spring Boot.
In Java you’ll use Spring Cloud and Spring Boot, and every Service will be auto-contained in its own JAR file, that includes Apache Tomcat and all other Dependencies and normally running inside a Docker. Tcp/Ip listening port will be set at start via command line, or through environment. You’ll have many git repositories, one per each Service.
Using many repos, one per Service, also allows to deploy only that repository and to have better security, with independent deployment tokens.
It is not unlikely that you’ll use one language for some of your Services and another for other, as well as a Database or another, as each Service is owner of their data.
In any case, you will be using CI/CD and your pipeline will be something like this:
Pull the latest code for the Service from the git repository
Compile the code (if needed)
Run the Unit and Integration Tests
Package the service as an executable artifact (e.g. a Java JAR with the Tomcat server and other dependencies)
Generate a Machine image with your JAR deployed (for Java, look at the Spotify Docker Plugin to run docker build from Maven), or with Apache, PHP, other dependencies, and the code. Normally it will be a Docker image. This image will be immutable. You will probably use Docker Hub.
The Machine image is started. Platform tests are run.
If the platform tests pass, the service is promoted to the next environment (for example Dev -> Test -> PreProd -> Prod): the exact same machine image is started in the next environment and the platform tests are repeated.
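As an illustration only (the repository, image name and build commands are placeholders for whatever your stack uses), the stages above can be sketched as a script, with each stage's real command shown as a comment:

```shell
#!/bin/bash
# Hypothetical CI/CD pipeline sketch for a Java Service.
pipeline() {
    echo "==> 1. Pull latest code"               # git clone https://git.example.com/billing-service.git
    echo "==> 2. Run unit/integration tests"     # mvn test
    echo "==> 3. Package executable artifact"    # mvn package  (JAR with embedded Tomcat)
    echo "==> 4. Build immutable Docker image"   # docker build -t billing-service:"$BUILD_NUMBER" .
    echo "==> 5. Start the image, run platform tests"   # docker run ... && ./platform-tests.sh
    echo "==> 6. Promote the same image to the next environment"   # Dev -> Test -> PreProd -> Prod
}
pipeline
```

The key property is in stage 6: the image is never rebuilt between environments, so what you tested in Dev is bit-for-bit what runs in Prod.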
If you work with Microsoft .NET, you’ll probably use Azure DevOps.
We IT Engineers, CTOs and Architects serve the Business. We have to develop the most flexible approaches, enabling the business to release as fast as they need.
Take into account that Microservices are a tool, a pattern. We use them to bring more flexibility and speed to development, resilience of the services, and speed and independence in deployment. However, this comes at a cost in complexity.
Microservices are more related to giving flexibility to the Business, and to developing according to the Business Domains. Normally they are oriented to serving an API. If you have an API that is consumed by third parties you will want things like independence of Services (if one is down the others still function), graceful degradation, being able to scale only the Services that have more load, being able to deploy a new version of a Service independently of the rest of the Services, etc. The complexity in the technical solution comes from all this resilience and flexibility.
If your Dev Team is up to 10 Developers, or you are writing just a CRUD Web Application, a PoC, or you are a Startup with a critical Time to Market, you probably will not want to use the Microservices approach. It is like killing flies with laser cannons. You can use the typical Web services approach: do everything in one single HTTPS request, have transactions, a single Database, etc.
But if your team is 100 Developers, like in a big eCommerce, you'll have multiple Teams of between 5 and 10 Developers per Business Domain, and you need independence for each Service, with less interdependence. Each Service will own its own Data; that is normally around 5 to 7 tables. Each Service will serve a Business Domain. You'll benefit from having different technologies for the different needs; however, be careful to avoid having Teams with knowledge so different that there can hardly be rotation, making projects difficult to continue when the only 2 or 3 Devs that know that technology leave. Typical benefit scenarios can be having MySQL for the Billing Services, but a NoSQL Database for the image catalog, or for storing logs of account activity. With Microservices, some Services will be calling other Services, often asynchronously, using Queues or Streams; you'll have Callbacks, Databases for reading, and you'll probably want gradual and graceful failure of your applications, client-side load balancing, caches and read-only/in-memory databases… This complexity exists in order to protect one Service from the failure of others and to bring it the necessary speed under heavy load.
Here you can find a PDF Document of the typical resources I use for Microservice Projects.
As the company I was working for, Sanmina, decided to move all Software Development to Colorado, US, and close the offices in Bishopstown, Cork, Ireland, I found myself needing to get a new laptop. At work I was using two Dell laptops: one very powerful and heavy, equipped with an Intel Xeon processor and 32 GB of RAM; the other a lightweight one that I upgraded to 32 GB of RAM.
I had an accident around 8 months ago that damaged my spine, so I cannot carry much weight.
My personal laptops at home in Ireland are a 15″ with 16 GB of RAM, too heavy, and an Acer 11.6″ with 8 GB of RAM and an SSD (I upgraded it), but unfortunately its screen broke. I still use it through the HDMI port. My main computer is a tower with a Core i7, 64 GB of RAM and a Samsung NVMe SSD drive. And a few Raspberry Pi 4s and 3s :)
I was thinking about which ultra-lightweight laptop to buy, but I wanted to buy it in Barcelona, as I wanted a Catalan keyboard (the layout with the broken ç and accents). I tried Amazon.es, but I had problems getting the laptops with the Catalan keyboard layout shipped to my address in Ireland.
I was trying to find the best laptop for me.
While I was investigating I found that none of the laptops on the market convinced me.
The ones at around 1 Kg, which was my initial target, were too big and lacked a proper full-size HDMI port and Gigabit Ethernet. Honestly, some models get the HDMI or the Ethernet from a USB 3.1 adapter, or have mini-HDMI, and many lack the Gigabit port, which is very annoying. Also, most models come with only 8 GB of RAM and were impossible to upgrade. I enrolled my best friend in my quest, in the research, and he reached the same conclusions.
I don’t want to have to carry adapters with me to just plug to a monitor or projector. I don’t even want to carry the power charger. I want a laptop that can work with me for a complete day, a full work session, without needing to recharge.
So while this investigation was going on, I decided to buy a cheap laptop with a good trade off of weight and cost, in order to be able to work on the coffee. I needed it for writing documents in Google Docs, creating microservices architectures, programming in Java and PHP, and writing articles in my blog. I also decided that this would be my only laptop with Windows, as honestly I missed playing Star Craft 2, and my attempts with Wine and Linux did not success.
Not also, for playing games :) , there are tools that are only available for Windows or for Mac Os X and Windows, like: POSTMAN, Kitematic for managing dockers visually, vSphere…
(Please note, as I reviewed the article I realized that POSTMAN is available for Linux too)
Please note: although I use mainly Linux everywhere (Ubuntu, CentOS, and RedHat mainly) and I contribute to Open Source projects, I do have Windows machines.
I created my Startup in 2004, and I still have Windows Servers, physical machines in a Data Center in Barcelona, and I still have VMs and Instances in Public Clouds with Windows Servers. I also programmed some tools using Visual Studio and Visual Basic .NET, ASP.NET and C#, but when I needed to do this I found it more convenient to spawn an instance in Amazon or Azure and pay for its use.
When I created my Startup I offered my infrastructure as a way to get funding too, and I offered VMs with VMware. I found that having my Mail Servers in VMs was much more convenient for Backups, cloning, scaling up, avoiding disruption, and for Disaster and Recovery.
I wanted a cheap laptop, so that I would not feel bad if, carrying it around on a daily basis, it gets a hit and breaks; or if it rains (and this happens more than often in Ireland) and it breaks, it is not super-hurtful; or even if it gets stolen. Yes, I'm from a big city, as Barcelona, Catalonia, is, and thieves are a real problem. I travel, so I want a laptop decent enough that I can take it travelling, and go for a coffee and code anything, and feel comfortable enough that if something happens to it, it is not the end of the world.
Cork is not a big city, so the options were reduced. I found a laptop that meets my needs.
It is equipped with an Intel® Core™ i3-6006U (2 GHz, 3 MB cache, 2 cores), a 500 GB SATA HDD, and 4 GB of DDR4 RAM.
The information on the HP webpage is really scarce, but checking other pages I was able to see that the motherboard has 2 memory banks, accepting a max of 16 GB of RAM.
I saw that there was a slot, unclear if it supported NVMe SSD drives, but supporting M.2 SSDs for sure.
So I bought on Amazon 2×8 GB of RAM and an M.2 500 GB drive.
Since I was 5 years old I've been upgrading and assembling all my computers myself. And this is something that I want to keep doing. It keeps me sharp, knowing the new ports, CPUs and motherboard architectures, and keeps me in contact with the Hardware. All my life I've thought that separating Software Engineers and Systems Engineers, as if computers were something separate, is a mistake, so I push myself to stay up to date with the news in all the fields.
I removed the spinning 500 GB SATA HDD, because it's slow and it consumes a lot of energy. With the M.2 SSD the battery lasts forever.
The interesting part is how I cloned the drive from the Spinning HDD to the new M.2.
Open the computer (see pics below) and insert the new M.2 drive
Boot with a USB Linux Rescue distribution (to do that I had to enable Legacy Boot in the BIOS and boot from the USB)
Use the lsblk command to identify the HDD drive; it was easy, as it was the one with partitions
dd from if=/dev/sda to of=/dev/sdb, with status=progress to see the live status and speed (around 70 MB/s) and the estimated time to complete
Please note that the new drive should be bigger, or at least have the same number of bytes, to avoid problems with the last partition.
I removed the HDD drive; this reduces the weight of my laptop by 100 grams
Disable Legacy Boot, and boot the computer. Windows started perfectly :)
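Put together as commands, the clone step looks like this (a sketch: in my case the HDD was /dev/sda and the M.2 /dev/sdb, but always confirm with lsblk first, as dd will happily overwrite the wrong disk):

```shell
#!/bin/bash
# clone_disk: byte-for-byte copy from the old drive to the new one,
# as run from the rescue USB with both filesystems unmounted.
clone_disk() {
    local src="$1" dst="$2"
    dd if="$src" of="$dst" bs=4M conv=fsync status=progress
    sync    # make sure everything hit the destination before rebooting
}

# lsblk                          # identify the drives first
# clone_disk /dev/sda /dev/sdb   # HDD -> M.2
```

A large bs like 4M keeps dd from issuing millions of tiny reads, and conv=fsync forces the data to be physically written before dd exits.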
I found so little information about this model that I wanted to share the pictures with the Community. Here are the pictures of the upgrade process.
A few months ago I encountered a problem with the RHEL installer and some of the M.2 drives.
I had productized my Product to be released with 128 GB M.2 SATA boot drives.
The procedure for preparing the Servers (90 and 60 drives, Cold Storage) was based on installing RHEL on the 128 GB M.2 drive. Then the drives are cloned.
A few days before mass delivery, the company requested changing the booting M.2 drives for others of our own, 512 GB drives.
I've tested many different M.2 drives, and all of them were slightly different.
Those 512 GB M.2 drives had one problem… the Red Hat installer was failing with a Python error.
We were running out of time, so I decided to clone directly from the working 128 GB M.2 card, with everything installed, to the 512 GB card. Doing that is as easy as booting with a Rescue Linux USB disk, and then doing a dd from the 128 GB drive to the 512 GB drive.
Booting from a live USB system is important, as the filesystems should not be mounted, to prevent corruption when cloning.
Then, the next operation is booting from the 512 GB drive and instructing Linux to claim the additional space.
Here is the procedure for doing it (note, the OS installed in the M.2 was CentOS in this case):
Determine the device that needs to be operated on (this will usually be the boot drive); in this example it is /dev/sdae
Resize the LVM physical volume to cover the new space (pvresize), then extend the desired LVM logical volume (lvextend command)
# pvdisplay
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/sdae2
  VG Name               centos_4602c
  PV Size               118.24 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              30269
  Free PE               0
  Allocated PE          30269
  PV UUID               yvHO6t-cYHM-CCCm-2hOO-mJWf-6NUI-zgxzwc

# pvresize /dev/sdae2
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  Physical volume "/dev/sdae2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

# pvdisplay
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/sdae2
  VG Name               centos_4602c
  PV Size               <475.84 GiB / not usable 3.25 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              121813
  Free PE               91544
  Allocated PE          30269
  PV UUID               yvHO6t-cYHM-CCCm-2hOO-mJWf-6NUI-zgxzwc

# vgdisplay
  --- Volume group ---
  VG Name               centos_4602c
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <475.93 GiB
  PE Size               4.00 MiB
  Total PE              121838
  Alloc PE / Size       30269 / <118.24 GiB
  Free  PE / Size       91569 / 357.69 GiB
  VG UUID               ORcp2t-ntwQ-CNSX-NeXL-Udd9-htt9-kLfvRc

# lvextend -l +91569 /dev/centos_4602c/root
  Size of logical volume centos_4602c/root changed from 50.00 GiB (12800 extents) to <407.69 GiB (104369 extents).
  Logical volume centos_4602c/root successfully resized.
Extend the xfs file system to use the extended space
The xfs file system for the root partition will need to be extended to use the extra space; this is done using the xfs_growfs command as shown below.
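The final step is a single command; note that xfs_growfs operates on the mounted mount point (here /, for the root filesystem), not on the device, and that xfs can only be grown, never shrunk:

```shell
# Grow the xfs filesystem on the already-extended logical volume to fill
# all the available space, then verify the new size:
xfs_growfs /
df -h /
```

With no size argument, xfs_growfs simply expands the filesystem to the full size of the underlying logical volume, which is exactly what we want after the lvextend above.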