A step-by-step guide, with some tricks and troubleshooting.
Category Archives: Ubuntu Linux
Sudo problems in Ubuntu 26.04 LTS with Google Cloud: I’m sorry user. I’m afraid I can’t do that
I show in the video how, shortly after using sudo, it stops working.
I did this proof of concept and got the same problem:
sleep 300 && sudo cat /etc/lsb-release
Checking with:
id -nG
clearly showed that my user is part of google-sudoers. But then:
journalctl -u google-guest-agent -f
Displays:
Apr 26 17:54:07 ubuntu26-04 google_guest_agent[851]: Adding existing user carles_mateo to google-sudoers group.
Apr 26 17:54:07 ubuntu26-04 gpasswd[89648]: user carles_mateo added by root to group google-sudoers
Apr 26 17:54:07 ubuntu26-04 google_guest_agent[851]: Updating keys for user carles_mateo.
Apr 26 17:54:11 ubuntu26-04 google_guest_agent[851]: Updating keys for user carles_mateo.
Apr 26 17:57:11 ubuntu26-04 google_guest_agent[851]: ERROR non_windows_accounts.go:219 invalid ssh key entry - expired key: carles_mateo:...google-ssh {"userName":"carles.mateo@gmail.com","expireOn":"2026-04-26T17:57:05+0000"}
Apr 26 17:57:11 ubuntu26-04 google_guest_agent[851]: ERROR non_windows_accounts.go:219 invalid ssh key entry - expired key: carles_mateo:ssh-rsa...
Apr 26 17:57:11 ubuntu26-04 google_guest_agent[851]: Removing user carles_mateo.
Apr 26 17:57:11 ubuntu26-04 gpasswd[89736]: user carles_mateo removed by root from group google-sudoers
Installing Ubuntu 26.04 LTS in Google Cloud Compute Engine
The video shows step by step how to create an Instance in Google Cloud Compute Engine of the type e2, increase the size of the disk, and install Ubuntu 26.04 LTS Server.
It also shows how the new htop looks, with its new IO options.
You know that the utilities from coreutils have been rewritten in Rust, and so has sudo. I was wondering if it would work well. I thought I was encountering the first problems, after I experienced that when launching sudo, for example sudo apt install package, sudo then stops working and I have to exit the shell and relogin.

I found that it is Google Cloud that removes my user from google-sudoers after 3 minutes.
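To watch this happening live, you can log your group membership in a loop. This is a minimal sketch; the iteration count and sleep interval are placeholders, raise them (e.g. sleep 30 over 10 iterations) to cover the 3-minute window on a real instance:

```shell
#!/usr/bin/env bash
# Sketch: log group membership periodically so you can see the exact
# moment google-sudoers disappears from the list.
for i_ITERATION in 1 2 3; do
    s_GROUPS=$(id -nG)
    echo "$(date +"%H:%M:%S") groups: ${s_GROUPS}"
    sleep 1   # raise to 30 on a real instance to cover the 3-minute window
done
```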
Solving silent exit error on eZ Launchpad
You have installed eZ Launchpad, and you can execute the binary ez from your home folder or other paths; however, when you execute it from a project folder you cloned with git (with its .platform.app.yaml file), ez returns to the prompt without any error message.
The exit code is 255, but even if you strace the process you can't find the exact problem.
Inside your project you run ez without any argument, on a clean install of Ubuntu 24.04 LTS with PHP 8.3 or PHP 8.4, without Xdebug, without opcache, without memory limit… nothing works, and there is no visible error message in the logs or in the error output. However, if you run it outside the project folder, it works and displays the typical help messages.
I reproduced this behavior on several Ubuntu computers. The fix I found is to execute ez with PHP 8.1.
You can install PHP 8.1 from the ondrej repository and then update the alternatives to execute PHP 8.1 by default in your system, or you can create the project by invoking ez with PHP 8.1 explicitly with:
php8.1 ~/ez create
This will kickstart the creation of your ez project based on Docker containers.
Backup and Restore your Ubuntu Linux Workstations
This is a mechanism I invented and have been using for decades to migrate or clone my Linux Desktops to other physical servers.
This script is focused on doing the job for Ubuntu, but I was doing this already 30 years ago for X Window, when I was responsible for the Linux platform of an ISP (Internet Service Provider). So it is compatible with any Linux Desktop or Server.
It has the advantage of being a very lightweight backup. You don't need to back up /etc or /var, as long as you install a new OS and restore the folders that you backed up. You can back up and restore Wine (the Windows compatibility layer) programs completely, and to/from VMs and Instances as well.
It's based on the user(s) rather than on the machine.
And it does the backup using the Timestamp, so you keep all the different versions, modified over time. You can merge the backups into the same folder if you prefer to avoid time versions and keep only the latest backup. If that's your case, then replace s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/" with s_PATH_BACKUP_NOW="${s_PATH_BACKUP}". You can also add a folder per machine if you prefer, for example if you use the same userid across several Desktops/Servers.
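As a minimal sketch of the two layouts (using /tmp/Bck/ as a stand-in for the real backup target):

```shell
#!/usr/bin/env bash
# Sketch of the timestamped vs merged backup paths; /tmp/Bck/ is a
# throwaway stand-in target, not the path from the real script.
s_PATH_BACKUP="/tmp/Bck/"
s_DATETIME=$(date +"%Y-%m-%d-%H-%M-%S")
# Timestamped layout: a new folder per run, so every version is kept
s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/"
# Merged layout: uncomment to write in place and keep only the latest copy
# s_PATH_BACKUP_NOW="${s_PATH_BACKUP}"
mkdir -p "${s_PATH_BACKUP_NOW}"
echo "Backing up into: ${s_PATH_BACKUP_NOW}"
```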
I offer you a much simplified version of my scripts, but they may well serve your needs.
#!/usr/bin/env bash
# Author: Carles Mateo
# Last Update: 2022-10-23 10:48 Irish Time
# User we want to backup data for
s_USER="carles"
# Target PATH for the Backups
s_PATH_BACKUP="/home/${s_USER}/Desktop/Bck/"
s_DATE=$(date +"%Y-%m-%d")
s_DATETIME=$(date +"%Y-%m-%d-%H-%M-%S")
s_PATH_BACKUP_NOW="${s_PATH_BACKUP}${s_DATETIME}/"
echo "Creating path $s_PATH_BACKUP and $s_PATH_BACKUP_NOW"
mkdir -p "$s_PATH_BACKUP"
mkdir -p "$s_PATH_BACKUP_NOW"
s_PATH_KEY="/home/${s_USER}/Desktop/keys/2007-01-07-cloud23.pem"
s_DOCKER_IMG_JENKINS_EXPORT=${s_DATE}-jenkins-base.tar
s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT=${s_DATE}-jenkins-blueocean2.tar
s_PGP_FILE=${s_DATETIME}-pgp.zip
# Version the PGP files
echo "Compressing the PGP files as ${s_PGP_FILE}"
zip -r ${s_PATH_BACKUP_NOW}${s_PGP_FILE} /home/${s_USER}/Desktop/PGP/*
# Copy to BCK folder, or ZFS or to an external drive Locally as defined in: s_PATH_BACKUP_NOW
echo "Copying Data to ${s_PATH_BACKUP_NOW}/Data"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" "${s_PATH_BACKUP_NOW}data/"
rsync -a --exclude={'Desktop','Downloads','.local/share/Trash/','.local/lib/python2.7/','.local/lib/python3.6/','.local/lib/python3.8/','.local/lib/python3.10/','.cache/JetBrains/'} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/" "${s_PATH_BACKUP_NOW}home/${s_USER}/"
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/code/" "${s_PATH_BACKUP_NOW}code/"
echo "Showing backup dir ${s_PATH_BACKUP_NOW}"
ls -hal "${s_PATH_BACKUP_NOW}"
df -h /
See how I exclude certain folders, like the Desktop or Downloads, with --exclude.
It relies on the very useful rsync program. It also relies on zip to compress entire folders (the PGP keys in the example).
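As a tiny self-contained demo of the --exclude behaviour (the /tmp paths are throwaway examples, not part of the backup script):

```shell
#!/usr/bin/env bash
# Demo: rsync copies data/ but skips the excluded Downloads/ folder.
mkdir -p /tmp/demo_src/Downloads /tmp/demo_src/data
echo "keep me" > /tmp/demo_src/data/file.txt
echo "skip me" > /tmp/demo_src/Downloads/big.iso
# Archive mode, excluding the Downloads folder by name
rsync -a --exclude='Downloads' /tmp/demo_src/ /tmp/demo_dst/
ls /tmp/demo_dst
```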
If you use the second part, to compress Docker Images (Jenkins in this example), you will run it as sudo and you will also need gzip.
# continuation... sudo running required.
# Save Docker Images
echo "Saving Docker Jenkins /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}"
sudo docker save jenkins:base --output /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
echo "Saving Docker Jenkins /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}"
sudo docker save jenkins:blueocean2 --output /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}
echo "Setting permissions"
sudo chown ${s_USER}:${s_USER} /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
sudo chown ${s_USER}:${s_USER} /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}
echo "Compressing /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}"
gzip /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_EXPORT}
gzip /home/${s_USER}/Desktop/Docker_Images/${s_DOCKER_IMG_JENKINS_BLUEOCEAN2_EXPORT}
rsync -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/Docker_Images/" "${s_PATH_BACKUP_NOW}Docker_Images/"
There is a final part, if you want to back up to one or more remote Servers using ssh:
# continuation... to copy to a remote Server.
s_PATH_REMOTE="bck7@cloubbck11.carlesmateo.com:/Bck/Desktop/${s_USER}/data/"
# Copy to the other Server
rsync -e "ssh -i $s_PATH_KEY" -a --exclude={} --acls --xattrs --owner --group --times --stats --human-readable --progress -z "/home/${s_USER}/Desktop/data/" ${s_PATH_REMOTE}
I recommend using the same methodology on all your Desktops, like for example having a data/ folder on the Desktop for each user.
You can use Erasure Coding to split the Backups into blocks and store each piece with a different Cloud Provider.
You can also store your Backups long-term with services like Amazon Glacier.
Other ideas are storing certain files in git or in Hadoop HDFS.
If you want, you can checksum your files before copying them to another device or server, using tools like sha512sum or md5sum.
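For example, with sha512sum you can write the checksums before copying and verify them on the destination (the /tmp file here is just an illustration):

```shell
#!/usr/bin/env bash
# Checksum a file before copying, then verify it afterwards.
echo "important data" > /tmp/file_to_backup.txt
sha512sum /tmp/file_to_backup.txt > /tmp/file_to_backup.txt.sha512
# On the destination (after copying both files), verify the integrity:
sha512sum -c /tmp/file_to_backup.txt.sha512
```

If the file is intact, the last command prints the file name followed by "OK".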
Docker with Ubuntu with telnet server and Python code to access via telnet
Here you can see the Python code to connect via Telnet and execute commands on a Server:
File: telnet_demo.py
#!/usr/bin/env python3
import telnetlib
s_host = "localhost"
s_user = "telnet"
s_password = "telnet"
o_tn = telnetlib.Telnet(s_host)
o_tn.read_until(b"login: ")
o_tn.write(s_user.encode('ascii') + b"\n")
o_tn.read_until(b"Password: ")
o_tn.write(s_password.encode('ascii') + b"\n")
o_tn.write(b"hostname\n")
o_tn.write(b"uname -a\n")
o_tn.write(b"ls -hal /\n")
o_tn.write(b"exit\n")
print(o_tn.read_all().decode('ascii'))
File: Dockerfile
FROM ubuntu:20.04
MAINTAINER Carles Mateo
ARG DEBIAN_FRONTEND=noninteractive
# This will make sure printing in the Screen when running in detached mode
ENV PYTHONUNBUFFERED=1
RUN apt-get update -y && apt-get install -y sudo telnetd vim systemctl && apt-get clean
RUN adduser --gecos "" --disabled-password --shell /bin/bash telnet
RUN echo "telnet:telnet" | chpasswd
EXPOSE 23
CMD systemctl start inetd; while [ true ]; do sleep 60; done
You can see that I use the chpasswd command to change the password for the user telnet and set it to telnet. That deals with the complexity of setting an encrypted password.
File: build_docker.sh
#!/bin/bash
s_DOCKER_IMAGE_NAME="ubuntu_telnet"
echo "We will build the Docker Image and name it: ${s_DOCKER_IMAGE_NAME}"
echo "After, we will be able to run a Docker Container based on it."
printf "Removing old container %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker rm "${s_DOCKER_IMAGE_NAME}"
printf "Creating Docker Image %s\n" "${s_DOCKER_IMAGE_NAME}"
sudo docker build -t ${s_DOCKER_IMAGE_NAME} .
# If you don't want to use cache this is your line
# sudo docker build -t ${s_DOCKER_IMAGE_NAME} . --no-cache
i_EXIT_CODE=$?
if [ $i_EXIT_CODE -ne 0 ]; then
    printf "Error. Exit code %s\n" ${i_EXIT_CODE}
    exit $i_EXIT_CODE
fi
echo "Ready to run ${s_DOCKER_IMAGE_NAME} Docker Container"
echo "To run it, type: sudo docker run -it -p 23:23 --name ${s_DOCKER_IMAGE_NAME} ${s_DOCKER_IMAGE_NAME}"
When you run sudo ./build_docker.sh the image will be built. Then run it with:
sudo docker run -it -p 23:23 --name ubuntu_telnet ubuntu_telnet
If you get an error indicating that the port is in use, then your computer already has a process listening on port 23; map another host port instead.
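A quick way to check whether port 23 is taken before starting the Container. This is a bash-only sketch using the /dev/tcp feature (not a real file); if the port is busy, map a different host port such as -p 2323:23:

```shell
#!/usr/bin/env bash
# Probe localhost port 23; /dev/tcp/... is interpreted by bash itself.
if (echo > /dev/tcp/127.0.0.1/23) 2>/dev/null; then
    s_PORT23="busy"
else
    s_PORT23="free"
fi
echo "Port 23 is ${s_PORT23}"
# If busy: sudo docker run -it -p 2323:23 --name ubuntu_telnet ubuntu_telnet
```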
You will be able to stop the Container by pressing CTRL + C.
From another terminal run the Python program:
python3 ./telnet_demo.py

See where is the space used in your Android phone from Ubuntu Terminal
So, your Android phone may be full and you don't know where the space has gone.
You may have tried Apps for Android, but none shows the information in the detail you would like. Linux to the rescue.
First of all you need a cable capable of transferring Data. It is a cable that will be connected to your computer, normally with USB 3.0, and to your smartphone, normally with USB-C.
Sometimes the phone's connectors are dirty and don't allow a stable connection; in that case the transfer will be interrupted in the middle.
Once you connect the Android smartphone to the computer, unlock the phone and authorize the data connection.
You’ll see that your computer recognizes the phone:

Open the terminal and enter this directory:
cd /run/user/1000/gvfs/
Here you will see your device and the name is very evident.
Usually you will have just one device listed, but if you have several Android devices attached you may want to query first, in order to identify the right one.
Android devices use a protocol named Media Transfer Protocol (MTP) when connecting to the USB port, which is different from the typical way of accessing a USB drive.
Run this command to see all the devices connected to the USB:
usb-devices | grep "Manufacturer=" -B 3
You may see Manufacturer=SAMSUNG or Manufacturer=OnePlus etc…
The information returned will allow you to identify your device in /run/user/1000/gvfs/
You may get different types of output, but if you get:
T: Bus=02 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 13 Spd=480 MxCh= 0
your device can be accessed inside:
cd mtp\:host\=%5Busb%3A002%2C013%5D/
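The folder name is just a URL-encoded version of the Bus and Dev# numbers: [usb:002,013] with the characters [ : , ] percent-encoded. A small sketch to build it from the usb-devices output:

```shell
#!/usr/bin/env bash
# Build the gvfs MTP folder name from the Bus and Dev# shown by usb-devices.
# Encoding: %5B = [   %3A = :   %2C = ,   %5D = ]
s_BUS="002"
s_DEV="013"
s_MTP_DIR="mtp:host=%5Busb%3A${s_BUS}%2C${s_DEV}%5D"
echo "/run/user/1000/gvfs/${s_MTP_DIR}"
```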
There you’ll see Card and Phone.
You can enter the Phone internal storage or the SD Card storage directory:
cd Phone
To see how the space is distributed nicely, I recommend using the program ncdu. If you don't have it, you can install it with:
sudo apt install ncdu
Then run ncdu:
ncdu
It will calculate the space…

… and let you know, sorted from largest to smallest, and will allow you to browse the sub-directories with the keyboard arrow keys and Enter, for greater clarity.
For example, in my case I saw 8.5 GB in the folder Movies on the phone, where I didn’t download any movie, so I checked.

I entered inside by pressing Enter:

So the Instagram App was keeping a copy of all the videos I uploaded, in total 8.5 GB of my Phone's space, and never releasing them!
Example for the SD card, where the usage was as expected:

Install Jenkins on Docker with Blue Ocean and persistent Volumes in Ubuntu 20.04 LTS in 4 minutes
Following the official documentation:
https://www.jenkins.io/doc/book/installing/docker/#setup-wizard
The steps are:
Create the network bridge named jenkins
docker network create jenkins
To execute Docker commands inside the jenkins nodes we will use docker:dind:
docker run \
  --name jenkins-docker \
  --rm \
  --detach \
  --privileged \
  --network jenkins \
  --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --publish 2376:2376 \
  docker:dind \
  --storage-driver overlay2
Create a Dockerfile with these contents:
FROM jenkins/jenkins:2.346.1-jdk11
USER root
RUN apt-get update && apt-get install -y lsb-release
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
    https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
    https://download.docker.com/linux/debian \
    $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce-cli
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.5 docker-workflow:1.28"
Build it:
docker build -t myjenkins-blueocean:2.346.1-1 .
Run the Container:
docker run \
  --name jenkins-blueocean \
  --restart=on-failure \
  --detach \
  --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client \
  --env DOCKER_TLS_VERIFY=1 \
  --publish 8080:8080 \
  --publish 50000:50000 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  myjenkins-blueocean:2.346.1-1
See the Id of the running Containers:
docker ps

As my jenkins container Id is 77b6a5a7ae8d, in order to know the jenkins administrator password I check the logs for my jenkins Container:
docker logs 77b6a5a7ae8d
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-06-26 21:02:05.492+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized @549ms to org.eclipse.jetty.util.log.JavaUtilLog
2022-06-26 21:02:05.583+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
2022-06-26 21:02:05.613+0000 [id=1] WARNING o.e.j.s.handler.ContextHandler#setContextPath: Empty contextPath
2022-06-26 21:02:05.674+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: jetty-9.4.45.v20220203; built: 2022-02-03T09:14:34.105Z; git: 4a0c91c0be53805e3fcffdcdcc9587d5301863db; jvm 11.0.15+10
2022-06-26 21:02:05.986+0000 [id=1] INFO o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2022-06-26 21:02:06.020+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: DefaultSessionIdManager workerName=node0
2022-06-26 21:02:06.020+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: No SessionScavenger set, using defaults
2022-06-26 21:02:06.021+0000 [id=1] INFO o.e.j.server.session.HouseKeeper#startScavenging: node0 Scavenging every 600000ms
2022-06-26 21:02:06.463+0000 [id=1] INFO hudson.WebAppMain#contextInitialized: Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-06-26 21:02:06.647+0000 [id=1] INFO o.e.j.s.handler.ContextHandler#doStart: Started w.@7cf7aee{Jenkins v2.346.1,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
2022-06-26 21:02:06.668+0000 [id=1] INFO o.e.j.server.AbstractConnector#doStart: Started ServerConnector@4c402120{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2022-06-26 21:02:06.669+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: Started @1727ms
2022-06-26 21:02:06.669+0000 [id=25] INFO winstone.Logger#logInternal: Winstone Servlet Engine running: controlPort=disabled
2022-06-26 21:02:06.925+0000 [id=32] INFO jenkins.InitReactorRunner$1#onAttained: Started initialization
2022-06-26 21:02:07.214+0000 [id=39] INFO jenkins.InitReactorRunner$1#onAttained: Listed all plugins
2022-06-26 21:02:10.781+0000 [id=47] INFO jenkins.InitReactorRunner$1#onAttained: Prepared all plugins
2022-06-26 21:02:10.794+0000 [id=35] INFO jenkins.InitReactorRunner$1#onAttained: Started all plugins
2022-06-26 21:02:10.803+0000 [id=42] INFO jenkins.InitReactorRunner$1#onAttained: Augmented all extensions
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.vmplugin.v7.Java7$1 (file:/var/jenkins_home/war/WEB-INF/lib/groovy-all-2.4.21.jar) to constructor java.lang.invoke.MethodHandles$Lookup(java.lang.Class,int)
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.vmplugin.v7.Java7$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-06-26 21:02:11.634+0000 [id=30] INFO jenkins.InitReactorRunner$1#onAttained: System config loaded
2022-06-26 21:02:11.635+0000 [id=30] INFO jenkins.InitReactorRunner$1#onAttained: System config adapted
2022-06-26 21:02:11.642+0000 [id=48] INFO jenkins.InitReactorRunner$1#onAttained: Loaded all jobs
2022-06-26 21:02:11.645+0000 [id=46] INFO jenkins.InitReactorRunner$1#onAttained: Configuration for all jobs updated
2022-06-26 21:02:11.668+0000 [id=67] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Started Download metadata
2022-06-26 21:02:11.675+0000 [id=67] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$1: Finished Download metadata. 4 ms
2022-06-26 21:02:11.733+0000 [id=52] INFO jenkins.install.SetupWizard#init:
*************************************************************
*************************************************************
*************************************************************
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
3de0910b83894b9294989552e6fa9773
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
*************************************************************
*************************************************************
*************************************************************
2022-06-26 21:02:22.901+0000 [id=52] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
2022-06-26 21:02:23.013+0000 [id=24] INFO hudson.lifecycle.Lifecycle#onReady: Jenkins is fully up and running
In my case the password is at the bottom, between the stars: 3de0910b83894b9294989552e6fa9773
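Instead of scanning the whole log by eye, you can filter for the line of exactly 32 hex characters. This sketch runs the filter on a captured excerpt; with a live container you would pipe sudo docker logs jenkins-blueocean 2>&1 into the grep instead of the echo:

```shell
#!/usr/bin/env bash
# Filter the Jenkins initial admin password: a line of exactly 32 hex chars.
s_LOG_EXCERPT="Please use the following password to proceed to installation:
3de0910b83894b9294989552e6fa9773
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword"
s_PASSWORD=$(echo "${s_LOG_EXCERPT}" | grep -oE '^[0-9a-f]{32}$')
echo "Initial admin password: ${s_PASSWORD}"
```

With a running container you can also read the same value directly from the file the logs mention: sudo docker exec jenkins-blueocean cat /var/jenkins_home/secrets/initialAdminPassword (jenkins-blueocean being the container name used in the run command above).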
Go with your browser to: http://localhost:8080

Twitch Stream about ZFS, zpool scrubbing, Hard drives, Data Centers, NVMe, Rack Servers…
Twitch stream on 2022-06-06 10:50 IST
In this very long session we went through actual errors in a ZFS pool, we checked the Kernel, removed and reinserted the drive, and ran zpool scrub… in the meantime I talked about Racks, Rack Servers, PSUs, redundant components, ECC RAM…
