Here I show how I launched a fresh Ubuntu 24.04 instance in Google Cloud, on 2026-05-04, and demonstrate the Copy Fail privilege escalation exploit (CVE-2026-31431), which allows you to become root from a regular user account on almost any Linux released since 2017.
The exploit consists of executing a Python 3 program of only 732 bytes.
I also show how I fixed it by upgrading the kernel and rebooting.
If you are running your instances in Google Cloud Compute Engine and you want to increase the size of a disk without having to reboot, this video explains step by step how you can do it.
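For reference, a minimal sketch of the procedure from the command line, assuming an ext4 root filesystem on /dev/sda1 and a disk named my-disk in zone europe-west1-b (adjust the names to your setup):

# Grow the disk from your workstation or from Cloud Shell
gcloud compute disks resize my-disk --size=200GB --zone=europe-west1-b
# Then, inside the instance, grow the partition and the filesystem online, no reboot needed
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1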
This is a mechanism I invented and have been using for decades to migrate or clone my Linux Desktops to other physical servers.
This script is focused on doing the job for Ubuntu, but I was doing this already 30 years ago for X Window, when I was responsible for the Linux platform of an ISP (Internet Service Provider). So it is compatible with any Linux Desktop or Server.
It has the advantage of being a very lightweight backup. You don't need to back up /etc or /var, as long as you install a new OS and restore the folders that you backed up. You can back up and restore Wine (the Windows compatibility layer) programs completely, and to/from VMs and Instances as well.
It's based on users rather than machines.
And it does the backup using a timestamp, so you keep all the different versions, modified over time. You can merge the backups into the same folder if you prefer to avoid timestamped versions and keep only the latest backup. If that's your case, then replace s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/" with s_PATH_BACKUP_NOW="${s_PATH_BACKUP}". You can also add a folder per machine if you prefer, for example if you use the same userid across several Desktops/Servers.
I offer you a much simplified version of my scripts, but it may well serve your needs.
#!/usr/bin/env bash
# Author: Carles Mateo
# Last Update: 2022-10-23 10:48 Irish Time
# User we want to backup data for
s_USER="carles"
# Target PATH for the Backups
s_PATH_BACKUP="/home/${s_USER}/Desktop/Bck/"
s_DATE=$(date +"%Y-%m-%d")
s_DATETIME=$(date +"%Y-%m-%d-%H-%M-%S")
s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/"
echo "Creating path $s_PATH_BACKUP and $s_PATH_BACKUP_NOW"
mkdir $s_PATH_BACKUP
mkdir $s_PATH_BACKUP_NOW
s_PATH_KEY="/home/${s_USER}/Desktop/keys/2007-01-07-cloud23.pem"
s_DOCKER_IMG_JENKINS_EXPORT=${s_DATE}-jenkins-base.tar
s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT=${s_DATE}-jenkins-blueocean2.tar
s_PGP_FILE=${s_DATETIME}-pgp.zip
# Version the PGP files
echo "Compressing the PGP files as ${s_PGP_FILE}"
zip -r "${s_PATH_BACKUP_NOW}${s_PGP_FILE}" /home/${s_USER}/Desktop/PGP/*
# Copy to BCK folder, or ZFS or to an external drive Locally as defined in: s_PATH_BACKUP_NOW
echo "Copying Data to ${s_PATH_BACKUP_NOW}/Data"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" "${s_PATH_BACKUP_NOW}data/"
rsync -a --exclude={'Desktop','Downloads','.local/share/Trash/','.local/lib/python2.7/','.local/lib/python3.6/','.local/lib/python3.8/','.local/lib/python3.10/','.cache/JetBrains/'} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/" "${s_PATH_BACKUP_NOW}home/${s_USER}/"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/code/" "${s_PATH_BACKUP_NOW}code/"
echo "Showing backup dir ${s_PATH_BACKUP_NOW}"
ls -hal "${s_PATH_BACKUP_NOW}"
df -h /
See how I exclude certain folders like the Desktop or Downloads with --exclude.
It relies on the very useful rsync program. It also relies on zip to compress entire folders (the PGP keys in the example).
If you use the second part, to compress Docker Images (Jenkins in this example), you will need to run it with sudo and you will also need gzip.
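That second part is not included above; a minimal sketch of it, assuming the images are named jenkins-base and jenkins-blueocean2 (names matching the tar files defined in the script, not confirmed by the original):

# Run with sudo: docker save exports each image, gzip compresses it
docker save jenkins-base -o "${s_PATH_BACKUP_NOW}${s_DOCKER_IMG_JENKINS_EXPORT}"
gzip "${s_PATH_BACKUP_NOW}${s_DOCKER_IMG_JENKINS_EXPORT}"
docker save jenkins-blueocean2 -o "${s_PATH_BACKUP_NOW}${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}"
gzip "${s_PATH_BACKUP_NOW}${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}"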
There is a final part, if you want to back up to one or more remote Servers using ssh:
# continuation... to copy to a remote Server.
s_PATH_REMOTE="bck7@cloubbck11.carlesmateo.com:/Bck/Desktop/${s_USER}/data/"
# Copy to the other Server
rsync -e "ssh -i $s_PATH_KEY" -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" ${s_PATH_REMOTE}
I recommend using the same methodology on all your Desktops, for example having a data/ folder on the Desktop for each user.
You can use Erasure Coding to split the Backups into blocks and store the pieces across different Cloud Providers.
You can also store your Backups long-term, with services like Amazon Glacier.
Other ideas are storing certain files in git and in Hadoop HDFS.
If you want, you can checksum (CRC) your files before copying them to another device or server, and verify them after, as sketched below.
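A minimal sketch of that verification, using sha256sum (cksum, which computes an actual CRC, would work too); the paths are examples:

# On the source, record a checksum per file
cd /home/carles/Desktop/data/
find . -type f -exec sha256sum {} \; > /tmp/checksums.txt
# On the destination, after copying, verify every file
cd /path/to/destination/data/
sha256sum -c /tmp/checksums.txt

File: Dockerfile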
FROM ubuntu:20.04
LABEL maintainer="Carles Mateo"
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && \
apt install -y vim python3-pip && \
apt install -y net-tools mc vim htop less strace zip gzip lynx && \
apt install -y apache2 mysql-server ntpdate libapache2-mod-php7.4 mysql-server php7.4-mysql php-dev libmcrypt-dev php-pear && \
apt install -y git && apt autoremove && apt clean && \
pip3 install pytest
RUN a2enmod rewrite
RUN echo "Europe/Ireland" | tee /etc/timezone
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
COPY phpinfo.php /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]
File: phpinfo.php
<html>
<?php
// Show all information, defaults to INFO_ALL
phpinfo();
// Show just the module information.
// phpinfo(8) yields identical results.
phpinfo(INFO_MODULES);
?>
</html>
File: build_docker.sh
#!/bin/bash
s_DOCKER_IMAGE_NAME="lampp"
echo "We will build the Docker Image and name it: ${s_DOCKER_IMAGE_NAME}"
echo "After, we will be able to run a Docker Container based on it."
printf "Removing old image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"
printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
# sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache
sudo docker build -t "${s_DOCKER_IMAGE_NAME}" .
i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit ${i_EXIT_CODE}
fi
echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run in type: sudo docker run -p 80:80 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
echo "or just use run_in_docker.sh"
echo
echo "If you want to debug do:"
echo "docker exec -i -t ${s_DOCKER_IMAGE_NAME} /bin/bash"
Can't locate IPC/Run.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at ./check_ipmi_sensor line 35.
BEGIN failed--compilation aborted at ./check_ipmi_sensor line 35.
The solution is simple:
sudo yum makecache
sudo yum install perl-IPC-Run
You'll see the list of mirrors and then the usual yum install output.
I’ve upgraded one of my AWS machines from Ubuntu 18.04 LTS to Ubuntu 20.04 LTS.
The process was really straightforward; basically, run:
sudo apt update
sudo apt upgrade
Then reboot in order to load the latest kernel.
Then execute:
sudo do-release-upgrade
It will ask two or three questions at different moments.
Afterwards, reboot, and that's it.
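If you want to verify which version you are running after the upgrade:

lsb_release -a
# Or:
cat /etc/os-release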
All my Firewall rules were kept, the services were restarted as they became available, or deferred until their dependency was reinstalled (like PHP, which was upgraded before Apache), and I have not found anything out of place so far. The Kernels were special, with Amazon customization, too.
I also recommend making sure that Apache directory browsing is disabled, if you had it disabled before, as newly installed software may have enabled it again:
sudo a2dismod autoindex
sudo systemctl restart apache2
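To check that directory browsing is really off, request a folder that has no index file; it should now return 403 Forbidden (the path here is an example):

curl -I http://localhost/images/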
I always recommend, for Production, to run the Ubuntu LTS version.
These are the programs that, more or less, I always use on all my Linux workstations.
I use Ubuntu LTS. I like how they maintain the packages.
I like to run the same base version on my Desktops as I have on the Servers. So if I have my Servers deployed with Ubuntu 20.04 LTS, then my Desktops will run the same version. It is a way to get to know the distribution and its compatibilities better, to run faster than the problems, and an easy way to test things. If I have several deployments with several versions (so LTS not upgraded), I may run VMs with that version as Desktop or Server, to ensure compatibility. And obviously I use Docker and a lot of Command Line Tools, which I covered in another article.
Audacity sound recorder and editor
Charles Proxy
Chromium web browser
The Chrome Extension of LastPass for Teams
Filezilla
Firefox
GIMP
IntelliJ
LibreOffice
OpenShot Video Editor
PAC
Packet Tracer from Cisco
PHPStorm
PyCharm
Slack
Skype (usage going down in favor of Zoom and Slack)
If that fails, it is very probable that creating a new configuration, for a new user, will make things right.
Update 2022-01-05: Take into account that you will be copying the Windows registry when doing this. I use this trick to clone applications that are no longer downloadable from the Internet. I clone Wine to dedicated Virtual Machines. You may need different Virtual Machines for different programs if the Windows registry differs between them.
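A minimal sketch of the cloning itself: the whole Wine prefix, including its registry files (system.reg, user.reg), lives under ~/.wine, so an rsync of that folder is enough (the user and hostname here are examples):

# Copy the complete Wine prefix to a dedicated VM
rsync -a "/home/carles/.wine/" "carles@vm-wine-legacy:/home/carles/.wine/"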
You know I love Linux. I was compiling my own Kernels back in 1995, when it took more than 24 hours on a 386, and working at the first ISPs in Barcelona managing the Linux Systems.
For my computers I prefer Linux, no doubt about it, but many multinationals I worked for offer only Windows for their Laptops and Desktops.
For years I had to deal with sending files to Linux or Unix (HP-UX, Sun Solaris…) to process them and getting back the result. Some sort of ETL and Map Reduce in the prehistory of personal computers, taking into account aspects like Network speed, available space, and splitting files for processing.
When I was working as Senior Project Manager at Winterthur Insurance, now Axa, I had to run a lot of ETL (Extract Transform Load) jobs on considerably big files, and likewise when I was a project manager and later head of department at Volkswagen gedas, or later when helping Start-ups like Privalia. I can tell you that Windows didn't like you opening editors to work with a 1 GB text or CSV file, and it still doesn't, even if your computer has 16 GB of Memory. Meanwhile, the simplicity of Bash scripts and of using pipes, grep, awk… is so powerful that it is very convenient to have those files processed using Linux.
And honestly it is a pain to send files back and forth to a UNIX System just for Data Crunching. A VM will be slow and use memory, and you have to enable some sort of sharing with it so it can access the Data. Not to mention if you need to split the data files into blocks to be processed in parallel by several computers; a small sketch of that kind of processing follows below.
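For example, a minimal sketch of this kind of Bash processing (big.csv and its column layout are made-up examples):

# Sum the 3rd CSV column for the rows of one year, streaming, without loading the file into memory
grep "2021-" big.csv | awk -F',' '{ sum += $3 } END { print sum }'
# Split the file into blocks of 1,000,000 lines to process them in parallel on several computers
split -l 1000000 big.csv block_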
There are many solutions, like using Virtual Machines, Docker, external Servers, etc.
WSL allows you to run Linux command line tools inside Windows.
Having WSL allows things to be done much more straightforwardly, processing the files on your local Windows hard drives.
Please note: maybe using GitBash is enough for you.
Error installing: WslRegisterDistribution failed with error: 0x80070057
When I installed it I found this error and looked for an answer online. I found no solutions, and many people suffering from the same problem, so I decided to publish an article on how to make it work.
Microsoft uses PowerShell to activate the features disabled in Windows; I did the same with the Command Line, which I find more convenient for most not-extremely-technical people.
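These are the standard DISM commands to enable the required features from an elevated CMD (Run as Administrator); reboot after running them. The second feature is only required if you plan to use WSL 2:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart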
You will need:
For x64 systems: Version 1903 or higher, with Build 18362 or higher.
For ARM64 systems: Version 2004 or higher, with Build 19041 or higher.
I'm not covering installing WSL for ARM, only for Intel/AMD Desktops/Laptops with Windows 10.
If you're unsure of your version of Windows, you can open a Terminal (CMD.exe) and type:
winver
The recommended way to install Ubuntu on WSL is through the Microsoft Store.
The following Ubuntu releases are available as apps on the Microsoft Store:
Ubuntu 16.04 LTS (Xenial) is the first release available for WSL. It supports the x64 architecture only. (offline installer: x64)
Ubuntu 18.04 LTS (Bionic) is the second LTS release and the first one supporting ARM64 systems, too. (offline installers: x64, ARM64)
Ubuntu 20.04 LTS (Focal) is the current LTS release, supporting both x64 and ARM64 architecture.
Ubuntu (without the release version) always follows the recommended release, switching over to the next one when it gets the first point release. Right now it installs Ubuntu 20.04 LTS.
Each app creates a separate root file system in which Ubuntu shells are opened, but app updates don't change the root file system afterwards. Installing a different app in parallel creates a different root file system, allowing you to have both Ubuntu LTS releases installed and running in case you need to keep compatibility with other external systems. You can also upgrade your Ubuntu 16.04 to 18.04 by running 'do-release-upgrade' and have three different systems running in parallel, separating production and sandboxes for experiments.
But if you prefer, instead of using the Microsoft Store, you can download the appx.
On the same page mentioned, you can do it for several versions; I attach the link for Ubuntu 20.04 LTS: https://aka.ms/wslubuntu2004
Assuming you used the Microsoft Store: if you did not reboot and now try to execute it for the first time, whether you go to the Command Line and write bash, or open Ubuntu from the Windows menu, whatever method you use, you'll get the abovementioned error.
If that happens to you, just reboot, and when you open it again it will work, start the installation, and ask for a user and password.
From here you're able to update the system, execute the text commands available in Linux, access the Windows drives, launch htop, git, Python 3, apt, wget…, copy and paste between the Windows and Linux terminals, share the PATH…
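For example (WSL mounts the Windows drives automatically under /mnt):

# Update the Ubuntu running inside WSL
sudo apt update && sudo apt upgrade
# C:\ is reachable as /mnt/c
cd /mnt/c/Users
ls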
And of course you can run CTOP.py
Take into account that the space reported for the / partition is not real, and that you have a 4 GB swap.