Category Archives: DevOps

Installing Red Hat Linux on an M.2 that crashes the installer

A few months ago I encountered a problem with the RHEL installer and some M.2 drives.

I had productized my Product, to be released with 128 GB M.2 SATA boot drives.

The procedure for preparing the Servers (90 and 60 drives, Cold Storage) was based on installing RHEL on the 128 GB M.2 drive. Then the drives are cloned.

A few days before mass delivery the company requested to change the booting M.2 drives for others of our own, 512 GB drives.

I had tested many different M.2 drives and all of them were slightly different.

Those 512 GB M.2 drives had one problem… the Red Hat installer was failing with a Python error.

We were running out of time, so I decided to clone directly from the working 128 GB M.2 card, with everything installed, to the 512 GB card. Doing that is as easy as booting with a Rescue Linux USB disk and then doing a dd from the 128 GB drive to the 512 GB drive.

Booting with a live USB system is important, as the filesystems must not be mounted, to prevent corruption while cloning.
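As a minimal sketch of the cloning step (the device names are illustrative; double-check them with lsblk first, as dd overwrites the target completely):

# Source: the 128GB drive; target: the 512GB drive (example device names)
dd if=/dev/sdX of=/dev/sdY bs=64M status=progress
sync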

Then the next operation is booting from the 512 GB drive and instructing Linux to claim the additional space.

Here is the procedure for doing it (note: the OS installed on the M.2 was CentOS in this case):

Determine the device that needs to be operated on (this will usually be the boot drive); in this example it is /dev/sdae

# df -h 
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/centos_4602c-root           50G  2.4G   47G   1% /
devtmpfs                                16G     0   16G   0% /dev
tmpfs                                   16G     0   16G   0% /dev/shm
tmpfs                                   16G  395M   16G   3% /run
tmpfs                                   16G     0   16G   0% /sys/fs/cgroup
/dev/sdae1                            1014M  146M  869M  15% /boot
/dev/mapper/centos_4602c-home           57G   33M   57G   1% /home
tmpfs                                  3.2G     0  3.2G   0% /run/user/0
logs                                    68G  7.4M   68G   1% /logs
mysql                                  481G  128K  481G   1% /mysql
N58-C3-D16-P3-S1                       491T  334G  490T   1% /N58-C3-D16-P3-S1

Extend the OS partition using Parted

# parted /dev/sdae
print
resizepart PART_NUMBER END
quit

Where:

  • PART_NUMBER: the partition number obtained from the “print” command
  • END: the end of the drive; for example, for a 50 GB drive, enter 50000
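For this example, a concrete session would be (partition 2 comes from the lsblk output below; parted also accepts percentages, so 100% means extend to the end of the drive):

# parted /dev/sdae
print
resizepart 2 100%
quit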

Examining the LVM Partitions

The centos_4602c-root LVM partition is the one we want to extend.

# lsblk /dev/sdae
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdae                           65:224  0   477G  0 disk 
├─sdae1                        65:225  0     1G  0 part /boot
└─sdae2                        65:226  0 475.9G  0 part 
  ├─centos_4602c-root         253:0    0    50G  0 lvm  /
  ├─centos_4602c-swap         253:1    0  11.9G  0 lvm  [SWAP]
  └─centos_4602c-home         253:2    0  56.3G  0 lvm  /home

Using LVM Commands

The following commands will:

  • Display the LVM volumes on the system
  • Resize a volume (device)
  • Re-display the updated LVM volumes
  • Extend the desired LVM partition (lvextend command)

# pvdisplay
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/sdae2
  VG Name               centos_4602c
  PV Size               118.24 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              30269
  Free PE               0
  Allocated PE          30269
  PV UUID               yvHO6t-cYHM-CCCm-2hOO-mJWf-6NUI-zgxzwc
# pvresize /dev/sdae2
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  Physical volume "/dev/sdae2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
# pvdisplay
  /dev/sdbm: open failed: No medium found
  /dev/sdbn: open failed: No medium found
  /dev/sdbj: open failed: No medium found
  /dev/sdbk: open failed: No medium found
  /dev/sdbl: open failed: No medium found
  --- Physical volume ---
  PV Name               /dev/sdae2
  VG Name               centos_4602c
  PV Size               <475.84 GiB / not usable 3.25 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              121813
  Free PE               91544
  Allocated PE          30269
  PV UUID               yvHO6t-cYHM-CCCm-2hOO-mJWf-6NUI-zgxzwc
# vgdisplay
  --- Volume group ---
  VG Name               centos_4602c
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <475.93 GiB
  PE Size               4.00 MiB
  Total PE              121838
  Alloc PE / Size       30269 / <118.24 GiB
  Free  PE / Size       91569 / 357.69 GiB
  VG UUID               ORcp2t-ntwQ-CNSX-NeXL-Udd9-htt9-kLfvRc
# lvextend -l +91569 /dev/centos_4602c/root 
  Size of logical volume centos_4602c/root changed from 50.00 GiB (12800 extents) to <407.69 GiB (104369 extents).
  Logical volume centos_4602c/root successfully resized.

Extend the xfs file system to use the extended space

The xfs file system for the root partition will need to be extended to use the extra space; this is done using the xfs_growfs command as shown below.

# xfs_growfs /dev/centos_4602c/root  
meta-data=/dev/mapper/centos_4602c-root isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 13107200 to 106873856

Verify the results

Note that the centos_4602c-root LVM partition is now 408 GB.

# df -h 
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/centos_4602c-root          408G  2.4G  406G   1% /
devtmpfs                                16G     0   16G   0% /dev
tmpfs                                   16G     0   16G   0% /dev/shm
tmpfs                                   16G  395M   16G   3% /run
tmpfs                                   16G     0   16G   0% /sys/fs/cgroup
/dev/sdae1                            1014M  146M  869M  15% /boot
/dev/mapper/centos_4602c-home           57G   33M   57G   1% /home
tmpfs                                  3.2G     0  3.2G   0% /run/user/0
logs                                    68G  7.4M   68G   1% /logs
mysql                                  481G  128K  481G   1% /mysql
N58-C3-D16-P3-S1                       491T  334G  490T   1% /N58-C3-D16-P3-S1

So now we are able to clone directly from one 512 GB drive to another.

You may be interested in taking a look at these commands:

growpart
resize2fs
xfs_growfs (from xfsprogs package)
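For instance, a minimal sketch with growpart (from the cloud-utils package; device and partition number are from the example above, and if the partition holds an LVM PV, as here, the pvresize and lvextend steps are still needed):

# Grow partition 2 of /dev/sdae to fill the drive
growpart /dev/sdae 2
# Grow an XFS filesystem via its mount point
xfs_growfs /
# The ext4 equivalent would be: resize2fs /dev/sdae2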

If you want to do this on an instance in Amazon, AWS has very good documentation about it.

A handy command-line trick to get the usages of our Python methods in the code

We all use powerful code analysis tools, but sometimes you’re presented with a problem and you have just… the terminal.

This Bash code is handy.

grep "def " /home/carles/code/gitlab/cloud/terraform/src/scale/lib/iscsi.py | tr "()" "  " | awk '{ print $2; }' |  grep -v "__init" > ~/function_names_iscsi.txt

So this basically gets all the methods (lines with “def ”), strips the parentheses with tr, and takes the second column with awk: the method name.

Then I cd to the src directory and execute the second part:

cd /home/carles/code/gitlab/cloud/terraform/src/
for fname in $(cat ~/function_names_iscsi.txt); do printf "%s: %s\n" "$fname" "$(grep -r "$fname" * | grep -v 'def ' -c)"; done > ~/functions_being_used.txt

That will produce a nice list in the form of:

method_name: occurrences

That’s the equivalent of doing Find Usages in PyCharm.

It’s easy to identify dead code then, with method_name: 0.

You can also run this in your Jenkins to warn when there is dead code in your repository, as in the sketch below.
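A minimal sketch of such a check (file locations are assumptions carried over from the commands above; a non-zero exit code is what makes a Jenkins stage fail):

#!/bin/bash
# Fail the build if any method has zero usages outside its definition
i_dead=0
while read -r s_line; do
    s_count="${s_line##*: }"
    if [ "$s_count" -eq 0 ]; then
        echo "Dead code: ${s_line%%:*}"
        i_dead=1
    fi
done < ~/functions_being_used.txt
exit $i_dead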

Adding my Server as Docker, with PHP Catalonia Framework, explained

The previous day I explained how I migrated my old Server (an Amazon Instance) to a more powerful model, with a more recent OS, Web Server, etc…

This was interesting from the point of view of dealing with Elastic IPs, Amazon AWS Volumes, etc., but it was a basically manual process. I could have generated an immutable image to start from next time, but this is another discussion, especially because that Server Instance has different base Software, including a MySQL Database.

This time I want to explain, step by step, how to containerize my Server, so I can port it to different platforms and be independent of what the Server Operating System is. It will always work, as we define the Operating System for the Docker Container.

So we start to use IaC (Infrastructure as Code).

So first you need to install docker.

So basically if your laptop is an Ubuntu 18.04 LTS you have to:

sudo apt install docker.io

Start and Automate Docker

The Docker service needs to be set up to run at startup. To do so, type in each command followed by Enter:

sudo systemctl start docker
sudo systemctl enable docker

Create the Dockerfile

For doing this you can use any text editor, but as we are working with IaC, why not use a Code Editor?

You can use the versatile PyCharm, which has modules for understanding Docker, and so you can use version control like git too.

This is the Dockerfile:

FROM ubuntu:19.04

MAINTAINER Carles <carles@carlesmateo.com>

ARG DEBIAN_FRONTEND=noninteractive

#RUN echo "nameserver 8.8.8.8" > /etc/resolv.conf

RUN echo "Europe/Ireland" | tee /etc/timezone

# Note: You should install everything in a single line concatenated with
#       && and finishing with apt autoremove && apt clean,
#       in order to use the least space possible, as every command is a layer
RUN apt-get update && apt-get install -y apache2 ntpdate libapache2-mod-php7.2 \
mysql-server php7.2-mysql php-dev libmcrypt-dev php-pear git && \
apt autoremove && apt clean

RUN a2enmod rewrite

RUN mkdir -p /www

# In order to activate Debug
# RUN sed -i "s/display_errors = Off/display_errors = On/" /etc/php/7.2/apache2/php.ini 
# RUN sed -i "s/error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT/error_reporting = E_ALL/" /etc/php/7.2/apache2/php.ini 
# RUN sed -i "s/display_startup_errors = Off/display_startup_errors = On/" /etc/php/7.2/apache2/php.ini 
# To Debug remember to change:
# config/{production.php|preproduction.php|devel.php|docker.php} 
# in order to avoid Error Reporting being set to 0.

ENV PATH_CATALONIA_CACHE /www/www.cataloniaframework.com/cache/

ENV APACHE_RUN_USER  www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR   /var/log/apache2
ENV APACHE_PID_FILE  /var/run/apache2/apache2.pid
ENV APACHE_RUN_DIR   /var/run/apache2
ENV APACHE_LOCK_DIR  /var/lock/apache2
ENV APACHE_LOG_DIR   /var/log/apache2

RUN mkdir -p $APACHE_RUN_DIR
RUN mkdir -p $APACHE_LOCK_DIR
RUN mkdir -p $APACHE_LOG_DIR

# Remove the default Server
RUN sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/{/<\/Directory>/ s/.*/# var-www commented/; t; d}' /etc/apache2/apache2.conf 

RUN rm /etc/apache2/sites-enabled/000-default.conf

COPY www.cataloniaframework.com.conf /etc/apache2/sites-available/

# Note: the cache directory does not exist yet at this point (it comes with
#       the git clone below), so its permissions are set after the clone

RUN ln -s /etc/apache2/sites-available/www.cataloniaframework.com.conf /etc/apache2/sites-enabled/

# Note: You should clone locally and COPY to the Docker Image
#       Also you should add the .git directory to your .dockerignore file
#       I made this way to show you and for simplicity, having everything
#       in a single file
RUN git clone https://github.com/cataloniaframework/cataloniaframework_v1_sample_website /www/www.cataloniaframework.com
RUN git -C /www/www.cataloniaframework.com checkout tags/v.1.16-web-1.0
RUN mkdir -p $PATH_CATALONIA_CACHE
RUN chmod 777 $PATH_CATALONIA_CACHE
RUN chown --recursive $APACHE_RUN_USER:$APACHE_RUN_GROUP $PATH_CATALONIA_CACHE
# In order to change profile to Production
# RUN sed -i "s/define('ENVIRONMENT', DOCKER)/define('ENVIRONMENT', PRODUCTION)/" /var/www/www.cataloniaframework.com/config/general.php 

# for debugging
#RUN apt-get install -y vim

RUN service apache2 restart

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

The www.cataloniaframework.com.conf file

As you saw in the Dockerfile you have the line:

COPY www.cataloniaframework.com.conf /etc/apache2/sites-available/

This will copy the file www.cataloniaframework.com.conf, which must be in the same directory as the Dockerfile, to the /etc/apache2/sites-available/ folder in the container.

<VirtualHost *:80>
    ServerAdmin webmaster@cataloniaframework.com
    # Uncomment to use a DNS name in a multiple VirtualHost Environment
    #ServerName www.cataloniaframework.com
    #ServerAlias cataloniaframework.com
    DocumentRoot /www/www.cataloniaframework.com/www
    <Directory /www/www.cataloniaframework.com/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
            Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/www-cataloniaframework-com-error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/www-cataloniaframework-com-access.log combined
</VirtualHost>

Stopping and starting the Docker service and creating the Catalonia image

service docker stop && service docker start

To build the Docker Image we will do:

docker build -t catalonia . --no-cache

I use --no-cache so git is pulled fresh and everything is rebuilt, not kept from the cache.

Now we can run the Catalonia Docker, mapping port 80.

docker run -d -p 80:80 catalonia

If you want to check what’s going on inside the container, you’ll do:

docker ps

And so in this case, we will do:

docker exec -i -t distracted_wing /bin/bash
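If you don’t want to copy the random container name (distracted_wing here) from the docker ps output, a handy variant is filtering by the image the container runs (assuming only one catalonia container is running):

docker exec -i -t $(docker ps -q --filter ancestor=catalonia) /bin/bash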

Finally I would like to check that the web page works, and I’ll use my preferred browser. In this case I will use lynx, the text browser, because I don’t want Firefox to keep things in its cache.

Upgrading the Blog after 5 years, AWS Amazon Web Services, under DoS and Spam attacks

A few days ago I was under a heavy DoS attack.

Nothing new: zombie computers, hackers, pirates, networks of computers… trying to abuse the system and hack into it. Why? There could be many reasons, from storing pirate movies and trying to use your Server for sending Spam, to attempting phishing or hosting Ransomware pages…

Most of those guys don’t know that it is almost impossible to send Spam from Amazon. Only a few emails per hour can come out of the Server unless you explicitly request that limit to be removed and configure everything.

But I thought it was a great opportunity to force myself to update the Operating System, core tools, and versions of PHP and MySQL.

Forensics / Postmortem of the incident

The task was divided into several parts:

  • Understanding the origin of the attack
  • Blocking the offending IP addresses or disabling XMLRPC
  • Making the VM boot again (problems with Amazon AWS)
    • At first I didn’t know why it was not booting
  • Upgrading the OS

I disabled access to the site while I was working, using the Amazon Web Services Firewall. Basically I allowed access from my IP only. Example: 8.8.8.8/32

I changed 0.0.0.0/0, the worldwide mask, to my_Ip/32.

That way the logs reflected only what I was doing from my IP.

Dealing with Snapshots and Volumes in AWS

Well, the first thing was doing a Snapshot.

Afterwards I tried to boot the original Blog Server (so I wouldn’t stop offering service), but no way: the Server appeared to be dead.

So then I attached the Volume to a new Server with the same base OS, in order to extract (dump) the database. Later I would attach the same Volume to a new Server with the most recent OS and base Software.

Something that is a bit annoying is that the new generation instances run only in a VPC, not in Amazon EC2 Classic. But my static IP addresses were created for Amazon EC2 Classic, so I could not use them with new generation instances.

I chose the option to see all the generations.

Upgrading the system base Software had its own challenges too.

Upgrading the OS / Base Software

My approach was to install an Ubuntu 18.04 LTS, install the base Software clean, and add any modifications I might need.

I wanted to have all the supported packages, a recent version of PHP 7, and the latest Software pieces like Apache or MySQL.

sudo apt update

sudo apt install apache2

sudo apt install mysql-server

sudo apt install php libapache2-mod-php php-mysql

Apache2

Config files that were working before stopped working, as the new Apache version requires the files or symlinks under /etc/apache2/sites-enabled/ to end with the .conf extension.

Also some directives changed, so some websites would not be able to work properly.

Those projects using my Catalonia Framework were affected, although I have this very well documented to make it easy to work with both versions of the Apache HTTP Server, so it was a very straightforward change.

From the previous version I had to change my www.cataloniaframework.com.conf file and enable:

    <Directory /www/www.cataloniaframework.com>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

Then open the ports for the Web Server (443 and 80).

sudo ufw allow in "Apache Full"

Then: service apache2 restart

(Screenshot: the Catalonia Framework web site, which is itself created with Catalonia Framework, once restored.)

MySQL

The problem was using the most up-to-date version of the Database. I could have used one of the backups I keep from last week, but I wanted fresher data.

I had the .db files and it should have been very straightforward to copy them to /var/lib/mysql/ … if they were the same version. But they weren’t. So I launched an instance with the same base Software as the old machine had, installed mysql-server, stopped it, copied the .db files, started it, and then made a dump with mysqldump --all-databases > 2019-04-29-all-databases.sql

Note: I copied the .db files using the mythical mc (Midnight Commander), which is a clone of Norton Commander.

Then I stopped that instance and I detached that volume and attached it to the new Blog Instance.

I did a Backup of my original /var/lib/mysql/ files for faster restore in case something went wrong.

I mounted it under /mnt/blog_old and did: mysql -u root -p < /mnt/blog_old/home/ubuntu/2019-04-29-all-databases.sql

That worked well; I had restored the blog. But as I was watching /var/log/mysql/error.log I noticed some columns were not where they should be. That’s because I had inadvertently overwritten the mysql system database as well, which in MySQL 5.7 has a different structure than in MySQL 5.5. So I was screwed. As I had foreseen this possibility, I restored from the backup in seconds.

So basically I then edited my .sql files and removed everything that was for the mysql database.

I started MySQL and ran the import procedure again. It worked, but I had to recreate the users for all the Databases and grant them permissions.

GRANT ALL PRIVILEGES ON db_mysqlproxycache.* TO 'wp_dbuser_mysqlproxy'@'localhost' IDENTIFIED BY 'XWy$&{yS@qlC|<¡!?;:-ç';

PHP7

Some modules in my blogs were returning errors in /var/log/apache2/mysite-error.log, so I checked and found it was due to lack of support for the latest PHP versions, and I either patched the code manually or just disabled the offending plugin.

WordPress

As seen checking /var/log/apache2/blog.carlesmateo.com-error.log, some URLs were not located by WordPress.

For example:

The requested URL /wordpress/wp-json/ was not found on this server

I had to activate mod_rewrite and then restart Apache.

a2enmod rewrite; service apache2 restart

Making the site more secure

Checking the Apache logs, /var/log/apache2/blog.carlesmateo.com-access.log, I looked for IPs accessing Admin areas, for 404 errors pointing to attempts to exploit an unsafe WP Plugin, and for POST requests as well.

I added the offending IPs to the Ubuntu Uncomplicated Firewall (UFW) and patched the xmlrpc.php file to exit always.
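As a sketch of both actions (the IP address is an example, and the sed line assumes xmlrpc.php starts with a <?php line, so the exit runs before anything else; keep a backup of the file first):

# Block an offending IP with UFW, before the allow rules
sudo ufw insert 1 deny from 203.0.113.66 to any
# Make xmlrpc.php exit always
sudo cp xmlrpc.php xmlrpc.php.bak
sudo sed -i '1a exit;' xmlrpc.php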

Dropping caches in Linux, to check if memory is actually being used

There was that Server, a Xeon with 128 GB of RAM, with those 58 spinning drives of 10 TB and 2 SSDs of 2 TB each, where I was testing the latest version of my Software.

Monitoring long-term tests, data validation, checking for memory leaks…
I noticed the Server was using 70 GB of RAM. Only 5.5 GB were used for buffers according to the usual tools (top, htop, free, cat /proc/meminfo, ps aux…) and no programs were eating that amount, so where was the RAM?
The rest of the Servers were working well, including similar models: a 4U60 with 64 GB of RAM, a 4U90 with 128 GB, and an All-Flash-Array with 256 GB of RAM, all only using around 8 GB of RAM even under load.
iSCSI shares were being used, with I/O, iSCSI initiators trying to connect and getting rejected, several requests per second, drive pulling, and the usual stuff. And this was the only unit using so much memory, so what?
I checked some modules to see memory consumption, but nothing was clear.
Ok, after a bit of investigation one member of the Team said: “Oh, while you were on holidays we created a Ramdisk and filled it for some validations; we deleted that already but never rebooted the Server”.
Ok. The easy solution would be to reboot, but that would have hidden a memory leak if that was the cause.
No, I had to find the real cause.

I requested the assistance of one of my colleagues, a specialist Kernel Engineer.
He confirmed that processes were not taking that memory, and asked me to try dropping the caches.

So I did:

sync
echo 3 > /proc/sys/vm/drop_caches

Then the memory usage dropped to 11.4 GB and stayed like that while I kept the load sustained.

That’s more normal, taking into account that we have 16 Volumes shared, one host is attempting like crazy to connect to Volumes that no longer exist, Services and Cronjobs run in the background, and we conduct tests degrading the pool, removing drives, etc.

After the tests concluded, memory dropped to 2 GB, which is what we use when we’re not under load.

Note: in order to see the memory being used by the Kernel slab cache in real time, you can use the command:

 slabtop

You can also check:

sudo vmstat -m
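For a quick one-shot view, the slab counters in /proc/meminfo summarize the same information (and, for reference, the drop_caches values mean: 1 frees the page cache, 2 frees dentries and inodes, 3 frees both):

grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo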

Solving a persistent MD Array problem in RHEL7.4

Ok, so I lent one of my Servers to two of my colleagues in The States, who needed to prepare some tests for a customer. I always try to be nice and to stimulate sales.

I work with Declustered RAID, DRAID, and ZFS.

The Server was a 4U90, so a 4U Server with 90 SAS3 drives and 4 SSDs. The drives are dual ported, and two Controllers (motherboard + CPU) have simultaneous access to the drives, for HA.

After their tests my colleagues returned me the Server, and I needed to use it. My surprise came when I tried to provision with ZFS and encountered problems. There was not much in the logs.

I checked:

cat /proc/mdstat

And that was the thing: 8 MD Arrays were there.

[root@4u90-B ~]# cat /proc/mdstat 
Personalities : 
md2 : inactive sdba1[9](S) sdag1[7](S) sdaf1[3](S)
11720629248 blocks super 1.2

md1 : inactive sdax1[7](S) sdad1[5](S) sdac1[1](S) sdae1[9](S)
12056071168 blocks super 1.2

md0 : inactive sdat1[1](S) sdav1[9](S) sdau1[5](S) sdab1[7](S) sdaa1[3](S)
19534382080 blocks super 1.2

md4 : inactive sdbf1[9](S) sdbe1[5](S) sdbd1[1](S) sdal1[7](S) sdak1[3](S)
19534382080 blocks super 1.2

md5 : inactive sdam1[1](S) sdan1[5](S) sdao1[9](S)
11720629248 blocks super 1.2

md8 : inactive sdcq1[7](S) sdz1[2](S)
7813752832 blocks super 1.2

md7 : inactive sdbm1[7](S) sdar1[1](S) sdy1[9](S) sdx1[5](S)
15627505664 blocks super 1.2

md3 : inactive sdaj1[9](S) sdai1[5](S) sdah1[1](S)
11720629248 blocks super 1.2

md6 : inactive sdaq1[7](S) sdap1[3](S) sdr1[8](S) sdp1[0](S)
15627505664 blocks super 1.2

Ok. So I stopped the Arrays:

mdadm --stop /dev/md127

And then I zeroed the superblock:

mdadm --zero-superblock /dev/sdb1

After doing this for all of them, I tried to provision and… surprise! It did not work. /dev/md127 had respawned, like the monsters in the old Doom video game.

I checked the mdmonitor service and even disabled it:

systemctl disable mdmonitor

I repeated the process.

And /dev/md127 appeared again, using another device.

At this point, just in case, I checked the other controller, which should have been powered off.

Ok, it was on. I launched the poweroff command, repeated the process, and got the same result!

I saw that the poweroff command on the second Controller was doing a reboot, so I launched the halt command, which made it stop responding to ping.

I repeated the process, and still the ghost md array appeared there and blocked me from doing my zpool create.

The /etc/mdadm.conf file did not exist (by default it is not created).

I tried a more aggressive approach:

DRIVES=`cat /proc/partitions | grep 3907018584 | awk '{ print $4; }'`

for DRIVE in $DRIVES; do echo "Trying /dev/${DRIVE}1"; mdadm --examine /dev/${DRIVE}1; done

Ok. And destruction time:

for DRIVE in $DRIVES; do echo "Trying /dev/${DRIVE}"; wipefs -a -f /dev/${DRIVE}; done

for DRIVE in $DRIVES; do echo "Trying /dev/${DRIVE}1"; mdadm --zero-superblock /dev/${DRIVE}1; done

Apparently the system was clean, but I still could not provision, and /dev/md127 respawned and reappeared all the time.

After googling and not finding anything about this problem, and with my colleagues having no clue about what was causing it, I just proceeded with a simple solution, as I needed the Server to complete the tests for my company within the next 24 hours.

So I created the file /etc/mdadm.conf with this content:

[root@draid-08 ~]# cat /etc/mdadm.conf 
AUTO -all

After that I rebooted the Server, saw that the infamous /dev/md127 was not there any more, and was able to provision.
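A quick check after the reboot, with the output you would expect if nothing was auto-assembled:

[root@draid-08 ~]# cat /proc/mdstat 
Personalities : 
unused devices: <none>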

I share the solution as it may help other people.

Troubleshooting upgrading and loading a ZFS module in RHEL7.4

I illustrate this troubleshooting as it will be useful for some of you.

I asked one of the members of my Team to compile and install ZFS 0.7.9 on some of the Servers loaded with drives, which were running the older ZFS 0.7.4.

Those systems were running RHEL7.4.

The compilation and install were fine; however, the module was not able to load.

My Team member reported that, when trying to run “modprobe zfs”, it was giving the error:

modprobe: ERROR: could not insert 'zfs': Invalid argument

Also, when trying to use a zpool command, it gave the error:

Failed to initialize the libzfs library

It was failing only on one of the Servers, not on the others.

My Engineer ran dmesg and found:

zfs: `' invalid for parameter `metaslab_debug_unload

He thought it was a compilation error, but I knew that metaslab_debug_unload is an optional parameter that you can set in /etc/zfs.conf.

So I ran:

 modprobe -v zfs

And that confirmed my suspicion, so I edited /etc/zfs.conf, commented out the parameter, and tried again. And it failed.

As I ran modprobe -v zfs (verbose) it returned the verbose info, and I saw that it was still trying to load those parameters, so I knew it was reading them from some file.
I could have grepped all the files in the filesystem looking for the failing parameter, or found all the files in the system named zfs.conf. That looked inefficient to me, as it would be slow and might not bring any result (I didn’t know exactly how my team member had compiled the code), even though I expected it would eventually. And what if I found 5 or 7 zfs.conf files? Slow.
I used strace. It was not installed, but the RHEL license was active so I simply did:

 yum install strace

strace is for System Trace, and it records all the System Calls that the programs do.
That’s a pro trick that will accompany you for all your career.

So I did: strace modprobe zfs

I did not use -v here because all the verbose output would have been logged as System Calls and made my search more difficult.
I got the output of all the System Calls and I just had to look for which files were being read.
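A sketch of how to narrow that output down, filtering the trace to the open/openat System Calls and keeping only the .conf files:

strace -e trace=open,openat modprobe zfs 2>&1 | grep '\.conf'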

Then I found the zfs.conf being read: /etc/modprobe.d/zfs.conf
So I commented out the line, tried modprobe zfs again, and it worked perfectly. :)

 

Creating a Content Filter for Postfix in PHP

In this article I want to explain how I created a content filter for Postfix, in PHP.

The basic idea is to examine all the incoming messages looking for a Credit Card pattern, and to send those emails to another Server that, for instance, is PCI compliant, while sending an email to the original receiver telling them that they received an email with a CC, which is stored in a safe Server.

I chose the pipe mechanism because it is the last one in the chain of content filters, and first I want the mail to pass the antivirus (Amavis), antispam and other content filters.

Then I inject the emails into sendmail, with the params -G -i, guaranteeing that the email will not be reprocessed and enter an infinite loop.

/usr/sbin/sendmail -G -i

A quick refresher on the SMTP protocol will help, as I’ll mention it later.

Edit the file /etc/postfix/master.cf to add these lines:

# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
smtp inet n - n - - smtpd
 -o content_filter=filter:dummy



# Other external delivery methods.

filter unix - n n - 10 pipe
 flags=Rq user=filter argv=/var/filtermails/filtercard.php -f ${sender} -- ${size} ${recipient}

The last parameter, ${recipient}, will expand to as many recipients (RCPT TO:) as the mail has.
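For instance, for a mail with two recipients, the script would be invoked roughly like this (hypothetical values), which is why the PHP code below takes the sender from $argv[2] and the first recipient from $argv[5]:

/var/filtermails/filtercard.php -f sender@example.com -- 2048 first@example.com second@example.com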

Now the code for the PHP filter. You can check the Postfix FILTER_README for a simple content filter example.

The file /var/filtermails/filtercard.php

#!/usr/bin/php
<?php

/*
 * Carles Mateo
 */

date_default_timezone_set('Europe/Andorra');

$s_dest_mail_secure = 'secure@pciserver.carlesmateo.com';

$b_regex_found = false;
$st_emails_rcpt_to = array();

// All major credit cards regex
// The CC anywhere
$s_cc_regex = '/(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6011[0-9]{12}|622((12[6-9]|1[3-9][0-9])|([2-8][0-9][0-9])|(9(([0-1][0-9])|(2[0-5]))))[0-9]{10}|64[4-9][0-9]{13}|65[0-9]{14}|3(?:0[0-5]|[68][0-9])[0-9]{11}|3[47][0-9]{13})/';

function log_event($s_message) {
 syslog(LOG_WARNING, $s_message);
}

function save_message_to_file($s_file, $s_message) {
 $o_file = fopen($s_file, "a");
 fwrite($o_file, $s_message);
 fclose($o_file);
}

function read_file($s_file) {
 $s_contents = file_get_contents($s_file);
 
 if ($s_contents === false) {
 return '';
 }
 
 return $s_contents;
}

function get_all_rcpt_to($st_emails_input) {
 // First email is pos 5 of the array
 $st_emails = $st_emails_input;
 
 unset($st_emails[0]);
 unset($st_emails[1]);
 unset($st_emails[2]);
 unset($st_emails[3]);
 unset($st_emails[4]);
 
 asort($st_emails);
 
 return $st_emails;
}

/*
 * Returns a @secure. email, from the original email
 */
function get_secure_email($s_email) {
 $i_pos = strpos($s_email, '@');
 
 $s_email_new = $s_email;
 
 if ($i_pos > 0) {
 $s_email_new = substr($s_email, 0, $i_pos);
 $s_email_new .= 'secure.';
 $s_email_new .= substr($s_email, $i_pos +1);
 }
 
 return $s_email_new;
}

function replace_tpl_variables($s_text, $s_sender_original) {

 // TODO: Replace static values
 
 $s_date_sent = date('r'); // RFC 2822 formatted date

 $s_text = str_replace('#DATE_NOW#', $s_date_sent, $s_text);
 $s_text = str_replace('#FROM_NAME#', 'Carles Mateo', $s_text);
 $s_text = str_replace('#FROM_EMAIL#', 'mateo@blog.carlesmateo.com', $s_text);
 $s_text = str_replace('#EMAIL_SENDER_ORIGINAL#', $s_sender_original, $s_text);

 return $s_text;
}

function delete_file($s_file) {
 unlink($s_file);
}

// Read the RCPT TO: fields ${recipient}
$st_emails_rcpt_to = get_all_rcpt_to($argv);

// Read the email
$email = '';
$fd = fopen("php://stdin", "r");
while (!feof($fd)) {
 $line = fread($fd, 1024);
 $email .= $line;
}
fclose($fd);

// Get the portion of the email without headers (to avoid id's being detected as CC numbers)
$i_pos_subject = strpos($email, 'Subject:');
if ($i_pos_subject !== false) {
 // Found (strpos() returns false when not found, so don't compare with > 0)
 $email_sanitized = substr($email, $i_pos_subject);
} else {
 // If we don't locate Subject: we look for From:
 $i_pos_from = strpos($email, 'From:');
 if ($i_pos_from !== false) {
 $email_sanitized = substr($email, $i_pos_from);
 } else {
 // Impossible email, but continue
 $email_sanitized = $email;
 }
}

// Remove spaces, and points so we find 4111.1111.1111.111 and so
$email_sanitized = str_replace(' ', '', $email_sanitized);
$email_sanitized = str_replace('.', '', $email_sanitized);
$email_sanitized = str_replace('-', '', $email_sanitized);

$s_message = "Script filtercard.php successfully ran\n";

log_event('Arguments: '.serialize($argv));

$i_result = preg_match($s_cc_regex, $email_sanitized, $s_matches);
if ($i_result == 1) {
 $b_regex_found = true;
 $s_message .= 'Card found'."\n";
 log_event($s_message);
} else {
 // No credit card
 $s_message .= 'No credit card found'."\n";
 log_event($s_message);
}

$s_dest_mail_original = $argv[5];
$s_sender_original = $argv[2];

// Generate a unique id
$i_unique_id = time().'-'.rand(0,99999).'-'.rand(0,99999);

$INSPECT_DIR='/var/spool/filter/';
// NEVER NEVER NEVER use "-t" here.
$SENDMAIL="/usr/sbin/sendmail -G -i";

$s_file_unique = $INSPECT_DIR.$i_unique_id;

# Exit codes from <sysexits.h>
$EX_TEMPFAIL=75;
$EX_UNAVAILABLE=69;

// Save the file
save_message_to_file($s_file_unique, $email);

$st_output = Array();

if ($b_regex_found == false) {
 // Send normally
 
 foreach ($st_emails_rcpt_to as $i_key=>$s_email_rcpt_to) {
 $s_sendmail = $SENDMAIL.' "'.$s_email_rcpt_to.'" <'.$s_file_unique;
 exec($s_sendmail, $st_output, $i_status); // the third parameter captures the exit code
 
 log_event('Status Sendmail (original mail): '.$i_status.' to: '.$s_email_rcpt_to);
 }
 
 delete_file($s_file_unique);
 
 exit();
}

// Send secure email
$s_sendmail = $SENDMAIL.' "'.$s_dest_mail_secure.'" <'.$s_file_unique;
exec($s_sendmail, $st_output, $i_status);

log_event('Status Sendmail (secure email): '.$i_status.' to: '.$s_dest_mail_secure);

$s_email_tpl = read_file('/usr/share/secure/smtpfilter_email.txt');

if ($s_email_tpl == '') {
 // Generic message
 
 $s_date_sent = date('r'); // RFC 2822 formatted date
 
 $s_email_tpl = <<<EOT
Date: $s_date_sent
From: secure <noreply@secure.carlesmateo.com>
Subject: Message with a Credit Card from $s_sender_original
You received a message with a Credit Card
EOT;
 
}

$s_email_tpl = replace_tpl_variables($s_email_tpl, $s_sender_original);

save_message_to_file($s_file_unique.'-tpl', $s_email_tpl);

// Send the replacement email
foreach ($st_emails_rcpt_to as $i_key=>$s_email_rcpt_to) {
 $st_output = Array();
 $s_sendmail = $SENDMAIL.' "'.$s_email_rcpt_to.'" <'.$s_file_unique.'-tpl';
 
 exec($s_sendmail, $st_output, $i_status);
 log_event('Status Sendmail (TPL): '.$i_status.' to: '.$s_email_rcpt_to);
 
}

delete_file($s_file_unique);
delete_file($s_file_unique.'-tpl');

/* Headers:

From: Carles Mateo <mateo@carlesmateo.com>
To: "carles2@carlesmateo.com" <carles2@carlesmateo.com>, Secure
 <secure@secure.carlesmateo.com>
CC: "test@carlesmateo.com" <test@carlesmateo.com>
Subject: Test with several emails and CCs
Thread-Topic: Test with several emails and CCs
Thread-Index: AQHRt1tmO/z+TpI64UiniKm7I56onw==
Date: Thu, 25 May 2016 14:32:15 +0000
 */

You can test it by connecting via telnet to port 25 and issuing the SMTP commands (HELO, MAIL FROM, RCPT TO, DATA, ended by a line with a single dot):

HELO mycomputer.com
MAIL FROM: test@carlesmateo.com
RCPT TO: just@asample.com
RCPT TO: another@different.com
DATA
Date: Mon, 30 May 2016 14:07:56 +0000
From: Carles Mateo <mateo@blog.carlesmateo.com>
To: Undisclosed recipients
Subject: Test with CC
This is just a test with a Visa CC 4111 1111 11-11-1111.
.
QUIT

You can use the nc command for convenience. For example (assuming you run it on the mail Server itself):
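nc localhost 25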

When you’re all set, I recommend testing it by sending real emails from real servers.