
A sample way to return in Python not-to-do

Today I was checking the code, the latest push to the git repo, as I always do, and I saw something that was wrong.

Often Engineers can be confused by the ways different languages treat similar operations, so, similarly to POSIX, I try to use a standard way to program in any language that makes the code very clear and easy to understand, no matter if it’s C, Java, Python, PHP…

My code and the code of my Teams will be clear and easy to understand. And as good Engineers jump from language to language as needed, it is better for everyone to proceed like this to avoid confusion.

In this case I want to cover a simple mistake that I detected: a wrong usage of return.

The code was returning True on success, and otherwise executing a bare return.

Here is a simple demonstration that a bare return actually returns None.

# Proof of Concept for avoiding return without the type
# Author: Carles Mateo
# Creation Date: 2018-03-27
#

from pprint import pprint


def boolean_test(b_value):
    if b_value is False:
        return

    return True


b_true = boolean_test(True)
b_false = boolean_test(False)

pprint(b_true)
pprint(b_false)

if b_false is False:
    print("I detect it as False (even if it's None)")

if b_false is True:
    print("I detect it as True (even if it's None)")

if b_false is None:
    print("It is None!")

print("Be careful")
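
For comparison, here is a minimal sketch (my suggestion, not the code from that push) of the pattern I ask for instead: always return an explicit boolean, so checks like is True and is False behave as the reader expects.

def boolean_test_explicit(b_value):
    # Always returns a boolean, never an implicit None
    if b_value is False:
        return False

    return True

print(boolean_test_explicit(False))  # prints False, not None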

Variables use the MT Notation. I include tips like this, and guidelines, in the programming guide for my Teams.

See: Wiki Python Programming/Data Types

CSort multithread versus QuickSort with Java Source Code

Updated on 2017-04-04 12:58 Barcelona Time 1491303515:

  • A method writeValuesFromArrayListToDisk(String sFilename) has been introduced as per a request, to easily check that the data is properly sorted.
  • A silly bug in the final ArrayList generation has been solved. It was storing iCounter, which was always 1, as this is not the compressed version of the algorithm (the one that supports repeated numbers). I introduced this method for the article, as it is not necessary for the algorithm itself since the data is already sorted, and unfortunately I didn’t do a final test on the output. My fault.
  • Some JavaDoc has been updated

Past Friday I was discussing algorithms with my best friend, and he told me that Hadoop is not fast enough. We also talked about when I was at Amazon: as part of the test they asked me to define an S3 system from scratch, and I did, using Java and multiple streams per file and per node (replication factor), and they told me that what I had just created was the exact way their system works. We ended up talking about my sorting algorithm CSort, and he asked me if it could run in MultiThread. Yes, it is one of its advantages over QuickSort. It can also run in multinode, across different computers. So he asked me how much faster a MultiThread version of CSort would be versus a regular QuickSort.

Well, here is the answer with 500 million registers, with values from 1 to 1,000,000, and deduplicating.

2017-03-26 18:50:41 CSort time in seconds:0.189129089
2017-03-26 18:51:47 QuickSort cost in seconds:61.853190885

That’s it: CSort is 327 times faster than QuickSort! In this example, and on my busy 4-core laptop. On my 8-core computer it is more than 525 times faster. Imagine on an Intel Xeon Server with 64 cores!

How is it possible? The answer is easy: it has O(n) complexity. I use linear access.
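
As a rough illustration of what linear access means (a minimal Python sketch of a presence-array approach, my own simplification and not the Java code below nor my production CSort): one linear pass over the n values to mark them, and one linear pass over the value range to collect them, already sorted and deduplicated.

def csort_like(ai_values, i_max_value):
    # One linear pass to mark which values exist...
    ab_present = bytearray(i_max_value + 1)
    for i_value in ai_values:
        ab_present[i_value] = 1
    # ...and one linear pass over the range to emit them sorted, without duplicates
    return [i for i in range(i_max_value + 1) if ab_present[i]]

print(csort_like([5, 3, 5, 1, 1000000, 0], 1000000))  # [0, 1, 3, 5, 1000000]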

This depends on your universe of Data. Please read my original posting about CSort here, which explains it in detail.

Please note that CSort with compression is also available, for keeping duplicated values while also saving memory (space) and time, with equally astonishing results.

Please note that in this sample I first load the values into an Array and then I work from it. This is just to avoid bias, by discarding the time of loading the data from disk; in the other article you have samples where CSort sorts at the same time as it loads the data from disk. I have a much more advanced algorithm that self-allocates the memory needed for handling an enormous universe of numbers (big numbers and small, with no memory penalty), but I’m looking forward to discussing this with a tech giant when it hires me. ;) Yes, I’m looking for a job.

In my original article I demonstrated it in C and PHP; this time here is the code in Java. It uses the MT Notation.

You can download the file I used for the tests from here:

Obviously it runs much faster than hashing. I should note that hashing and CSorting with .containsKey() is faster than QuickSorting. (Another day I will talk about sorting Strings faster.) ;)

/*
 * (c) Carles Mateo blog.carlesmateo.com
 * Proof of concept of CSort, with multithread, versus QuickSort
 * For the variables notation MT Notation is used.
 */
package com.carlesmateo.blog;

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;

/**
 * @author carles mateo
 */
public class CSortMultiThread extends Thread {
    // Download the original file from the blog
    public static final int piNUM_REGISTERS_IN_FILE = 50000000;
    // Max value found, to optimize the memory used for CSort
    // Note that CSort can be implemented in the read from file mechanism
    // in order to save memory (space)
    public static int piMaxValue = 0;
    // The array containing the numbers read from disk
    public static int[] paiNumbers;
    // The array used by CSort (if not using direct loading from disk)
    public static int[] paiNumbersCsorted;
    // Final ArrayList Sorted. CSort and QuickSort finally fulfill this
    public static ArrayList<Integer> pliNumbers = new ArrayList<>();
   
    // For the Threads
    private Thread oT;
    private String sThreadName;
    private int piStart;
    private int piEnd;
    private boolean bFinished = false;
   
    CSortMultiThread (String name, int iStart, int iEnd) {
        sThreadName = name;
        piStart = iStart;
        piEnd = iEnd;
        writeWithDateTime("Creating " +  sThreadName );
    }
   
    public void run() {
        writeWithDateTime("Running " +  sThreadName + " to sort from " + piStart + " to " + piEnd);
        int iCounter;
        int iNumber;        

        for (iCounter=piStart; iCounter <= piEnd; iCounter++) { // <= because piEnd is an inclusive index
            iNumber = paiNumbers[iCounter];
            paiNumbersCsorted[iNumber] = 1;
        }          

      System.out.println("Thread " +  sThreadName + " exiting.");
      bFinished = true;
    }
   
    public void start () {
        writeWithDateTime("Starting " +  sThreadName );
        if (oT == null) {
            oT = new Thread (this, sThreadName);
            oT.start ();
        }
    }
    /**
     * Write values to Disk to demonstrate that they are sorted ;)
     * @param sFilenameData 
     */
    private static void writeValuesFromArrayListToDisk(String sFilenameData) {
        ObjectOutputStream out = null;
        int iCounter;
        try {
            out = new ObjectOutputStream(new FileOutputStream(sFilenameData));
            
            for (iCounter=0; iCounter<pliNumbers.size(); iCounter++) {
                out.writeChars(pliNumbers.get(iCounter).toString() + "\n");
            }
            
            // To store the object instead
            //out.writeObject(pliNumbers);
            
        } catch (IOException e) {
            System.out.println("I/O Error!");
            e.printStackTrace();
            displayHelpAndQuit(10);
        } finally {
            if (out != null) {
                try {
                    out.close();    
                } catch (IOException e) {
                    System.out.println("I/O Error!");
                    e.printStackTrace();
                    displayHelpAndQuit(10);
                }   
            }
        }
    }
   
    /** 
     *  Reads the data from the disk. The file has 50M and we will be duplicating
     *  to get 500M registers.
     *  @param sFilenameData 
     */
    private static void readValuesFromFileToArray(String sFilenameData) {

        BufferedReader oBR = null;
        String sLine;
        int iCounter = 0;
        int iRepeat;
        int iNumber;
        
        // We will be using 500,000,000 items, so dimension the array accordingly
        paiNumbers = new int[piNUM_REGISTERS_IN_FILE * 10];

        try {

            oBR = new BufferedReader(new FileReader(sFilenameData));

            while ((sLine = oBR.readLine()) != null) {
                for (iRepeat = 0; iRepeat < 10; iRepeat++) {
                    int iPointer = (piNUM_REGISTERS_IN_FILE * iRepeat) + iCounter;
                    iNumber = Integer.parseInt(sLine);
                    paiNumbers[iPointer] = iNumber;
                    if (iNumber > piMaxValue) {
                        piMaxValue = iNumber;
                    }
                }                
                
                iCounter++;
            }
            
            if (iCounter < piNUM_REGISTERS_IN_FILE) {
                write("Warning... only " + iCounter + " values were read");
            }

        } catch (FileNotFoundException e) {
            System.out.println("File not found! " + sFilenameData);
            displayHelpAndQuit(100);
        } catch (IOException e) {
            System.out.println("I/O Error!");
            e.printStackTrace();
            displayHelpAndQuit(10);
        } finally {
            if (oBR != null) {
                try {
                    oBR.close();
                } catch (IOException e) {
                    System.out.println("I/O Error!");
                    e.printStackTrace();
                    displayHelpAndQuit(10);
                }
            }
        }        
    }
    
    private static String displayHelp() {
        String sHelp = "Help\n" +
                "====\n" +
                "Csort from Carles Mateo blog.carlesmateo.com\n" +
                "\n" +
                "Proof of concept of the fast load algorithm\n" +
                "\n";
        
        return sHelp;
    }
    
    /**
     * Displays Help Message and Quits with an Error Level
     * Errors:
     *  - 1   - Wrong number of parameters
     *  - 10  - I/O Error
     *  - 100 - File not found
     * @param iErrorLevel 
     */
    private static void displayHelpAndQuit(int iErrorLevel) {
        System.out.println(displayHelp());
        System.exit(iErrorLevel);
        
    }
    
    // This is QuickSort from vogella http://www.vogella.com/tutorials/JavaAlgorithmsQuicksort/article.html
    public static void sort() {
        int piNumber = paiNumbers.length;
        quicksort(0, piNumber - 1);        
    }
    
    private static void quicksort(int low, int high) {
        int i = low, j = high;
        // Get the pivot element from the middle of the list
        int pivot = paiNumbers[low + (high-low)/2];

        // Divide into two lists
        while (i <= j) {
            // If the current value from the left list is smaller than the pivot
            // element, then get the next element from the left list
            while (paiNumbers[i] < pivot) {
                i++;
            }
            // If the current value from the right list is larger than the pivot
            // element, then get the next element from the right list
            while (paiNumbers[j] > pivot) {
                j--;
            }

            // If we have found a value in the left list which is larger than
            // the pivot element, and if we have found a value in the right list
            // which is smaller than the pivot element, then we exchange the
            // values.
            // As we are done we can increase i and j
            if (i <= j) {
                exchange(i, j);
                i++;
                j--;
            }
        }
        // Recursion
        if (low < j)
            quicksort(low, j);
        if (i < high)
            quicksort(i, high);
    }

    private static void exchange(int i, int j) {
        int temp = paiNumbers[i];
        paiNumbers[i] = paiNumbers[j];
        paiNumbers[j] = temp;
    }
    
    /** 
     * We want to remove duplicated values
     */
    private static void removeDuplicatesFromQuicksort() {
        int iCounter;
        int iOldValue=-1;
        int iNewValue;
        
        for (iCounter=0; iCounter<paiNumbers.length; iCounter++) {
            iNewValue = paiNumbers[iCounter];
            if (iNewValue != iOldValue) {
                iOldValue = iNewValue;
                pliNumbers.add(iNewValue);
            }
        }
    }
// End of vogella QuickSort code

    /**
     * Generate the final Array
     */
    private static void copyFromCSortToArrayList() {
        int iCounter;
        int iNewValue;
        
        for (iCounter=0; iCounter<=piMaxValue; iCounter++) {
            iNewValue = paiNumbersCsorted[iCounter];
            if (iNewValue > 0) {
                pliNumbers.add(iCounter);
            }
        }
    }

    /**
     * Write with the date
     * @param sText 
     */
    private static void writeWithDateTime(String sText) {
        DateFormat oDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date oDate = new Date();
        String sDate = oDateFormat.format(oDate);
        write(sDate + " " + sText);
    }
    
    /**
     * Write with \n
     * @param sText 
     */
    private static void write(String sText) {
        System.out.println(sText + "\n");
    }    
   
    public static void main(String args[]) throws InterruptedException {
        // For Profiling
        long lStartTime;
        double dSeconds;
        long lElapsedTime;
        
        int iThreads = 8;
        int iRegistersPerThread = piNUM_REGISTERS_IN_FILE / iThreads;
        CSortMultiThread[] aoCSortThreads = new CSortMultiThread[iThreads];
        
        writeWithDateTime("CSort MultiThread proof of concept by Carles Mateo");
        writeWithDateTime("Reading values from Disk...");
        readValuesFromFileToArray("/home/carles/Desktop/codi/java/carles_sort.txt");
        writeWithDateTime("Readed");

        writeWithDateTime("Total values to sort de deduplicate " + paiNumbers.length);
        writeWithDateTime("The max value between the Data is " + piMaxValue);
        paiNumbersCsorted = new int[piMaxValue + 1];
                
        writeWithDateTime("Performing CSort with removal of duplicates");
        lStartTime = System.nanoTime();
        for (int iThread=0; iThread < iThreads; iThread++) {
            int iStart = iThread * iRegistersPerThread;
            int iEnd = ((iThread + 1) * iRegistersPerThread) - 1;
            if (iThread == (iThreads -1)) {
                // Last thread grabs the remaining. 
                // For instance 100/8 = 12, so each Thread sorts 12 registers,
                // but the last thread sorts 12 + 4 = 16
                iEnd = piNUM_REGISTERS_IN_FILE -1 ;
            }
            CSortMultiThread oThread = new CSortMultiThread("Thread-" + iThread, iStart, iEnd);
            oThread.start();
            
            aoCSortThreads[iThread] = oThread;
        }       
        
        boolean bExit = false;
        while (bExit == false) {
            bExit = true;
            for (int iThread=0; iThread < iThreads; iThread++) {
                if (aoCSortThreads[iThread].bFinished == false) {
                    bExit = false;
                    // Note: 10 milliseconds. This takes some CPU cycles, but we need
                    // to ensure that all the threads did finish.
                    sleep(10);
                    continue;
                }
            }
        }
        writeWithDateTime("Main loop ended");
                
        writeWithDateTime("Copy to the ArrayList");
        copyFromCSortToArrayList();
        
        writeWithDateTime("The final array contains " + pliNumbers.size());
        
        lElapsedTime = System.nanoTime() - lStartTime;
        dSeconds = (double)lElapsedTime / 1000000000.0;
        writeWithDateTime("CSort time in seconds:" + dSeconds);

        writeWithDateTime("Writing values to Disk...");
        writeValuesFromArrayListToDisk("/home/carles/Desktop/codi/java/carles_sort-csorted.txt");

        // Reset the ArrayList
        pliNumbers = new ArrayList<>();
        
        lStartTime = System.nanoTime();
        /** QuickSort begin **/
        writeWithDateTime("Sorting with QuickSort");
        sort();
        writeWithDateTime("Finished QuickSort");
        
        writeWithDateTime("Removing duplicates from QuickSort");
        removeDuplicatesFromQuicksort();
        writeWithDateTime("The final array contains " + pliNumbers.size());
        lElapsedTime = System.nanoTime() - lStartTime;
        dSeconds = (double)lElapsedTime / 1000000000.0;
        
        writeWithDateTime("QuickSort cost in seconds:" + dSeconds);
    }
}

The complete traces:

run:
2017-03-26 19:28:13 CSort MultiThread proof of concept by Carles Mateo
2017-03-26 19:28:13 Reading values from Disk...
2017-03-26 19:28:39 Readed
2017-03-26 19:28:39 Total values to sort de deduplicate 500000000
2017-03-26 19:28:39 The max value between the Data is 1000000
2017-03-26 19:28:39 Performing CSort with removal of duplicates
2017-03-26 19:28:39 Creating Thread-0
2017-03-26 19:28:39 Starting Thread-0
2017-03-26 19:28:39 Creating Thread-1
2017-03-26 19:28:39 Starting Thread-1
2017-03-26 19:28:39 Running Thread-0 to sort from 0 to 6249999
2017-03-26 19:28:39 Creating Thread-2
2017-03-26 19:28:39 Starting Thread-2
2017-03-26 19:28:39 Running Thread-1 to sort from 6250000 to 12499999
2017-03-26 19:28:39 Creating Thread-3
2017-03-26 19:28:39 Running Thread-2 to sort from 12500000 to 18749999
2017-03-26 19:28:39 Starting Thread-3
2017-03-26 19:28:39 Creating Thread-4
2017-03-26 19:28:39 Starting Thread-4
2017-03-26 19:28:39 Running Thread-3 to sort from 18750000 to 24999999
2017-03-26 19:28:39 Creating Thread-5
2017-03-26 19:28:39 Running Thread-4 to sort from 25000000 to 31249999
2017-03-26 19:28:39 Starting Thread-5
2017-03-26 19:28:39 Creating Thread-6
2017-03-26 19:28:39 Starting Thread-6
2017-03-26 19:28:39 Running Thread-5 to sort from 31250000 to 37499999
2017-03-26 19:28:39 Creating Thread-7
2017-03-26 19:28:39 Starting Thread-7
2017-03-26 19:28:39 Running Thread-6 to sort from 37500000 to 43749999
2017-03-26 19:28:39 Running Thread-7 to sort from 43750000 to 49999999

Thread Thread-0 exiting.
Thread Thread-2 exiting.
Thread Thread-1 exiting.
Thread Thread-6 exiting.
Thread Thread-7 exiting.
Thread Thread-5 exiting.
Thread Thread-4 exiting.
Thread Thread-3 exiting.
2017-03-26 19:28:39 Main loop ended

2017-03-26 19:28:39 Copy to the ArrayList
2017-03-26 19:28:39 The final array contains 1000001
2017-03-26 19:28:39 CSort time in seconds:0.189129089

2017-03-26 19:28:39 Sorting with QuickSort
2017-03-26 19:29:40 Finished QuickSort
2017-03-26 19:29:40 Removing duplicates from QuickSort
2017-03-26 19:29:41 The final array contains 1000001
2017-03-26 19:29:41 QuickSort cost in seconds:61.853190885

BUILD SUCCESSFUL (total time: 1 minute 28 seconds)

Creating a Content Filter for Postfix in PHP

In this article I want to explain how I created a content filter for Postfix, in PHP.

The basic idea is to examine all the incoming messages looking for a Credit Card pattern. The emails that match are sent to another Server, one that is, for instance, PCI compliant, and an email is sent to the original receiver telling them that they received an email with a CC, and that it is stored in a safe Server.
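
To make the detection step concrete before going into the Postfix part, here is a minimal Python sketch (my own illustration, not a piece of the filter) of the same idea the PHP code below implements: strip the separators and match a credit card pattern.

import re

# Same all-major-cards pattern used by the PHP filter below (PCRE delimiters removed)
CC_PATTERN = re.compile(r"(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6011[0-9]{12}|622((12[6-9]|1[3-9][0-9])|([2-8][0-9][0-9])|(9(([0-1][0-9])|(2[0-5]))))[0-9]{10}|64[4-9][0-9]{13}|65[0-9]{14}|3(?:0[0-5]|[68][0-9])[0-9]{11}|3[47][0-9]{13})")

def contains_credit_card(s_body):
    # Remove separators so numbers like 4111 1111-1111.1111 are also caught
    s_sanitized = s_body.replace(" ", "").replace(".", "").replace("-", "")
    return CC_PATTERN.search(s_sanitized) is not None

print(contains_credit_card("This is just a test with a Visa CC 4111 1111 1111 1111"))  # True
print(contains_credit_card("Invoice number 12345, no card inside"))                    # False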

I chose the pipe mechanism because it is the last one in the chain of content filters, and I first want the mail to pass through the antivirus (Amavis), antispam and the other content filters.

Then I inject the emails into sendmail with the params -G -i, ensuring that the email will not be reprocessed and enter an infinite loop.

/usr/sbin/sendmail -G -i

A reminder about the SMTP protocol, which I’ll mention later. There is another link on Wikipedia.

Edit the file /etc/postfix/master.cf to add these lines:

# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
smtp inet n - n - - smtpd
 -o content_filter=filter:dummy



# Other external delivery methods.

filter unix - n n - 10 pipe
 flags=Rq user=filter argv=/var/filtermails/filtercard.php -f ${sender} -- ${size} ${recipient}

The last parameter ${recipient} will expand with as many recipients (RCPT TO:) as the mail has.

Now the code for the PHP filter. Check a simple content filter example here.

The file /var/filtermails/filtercard.php

#!/usr/bin/php
<?php

/*
 * Carles Mateo
 */

date_default_timezone_set('Europe/Andorra');

$s_dest_mail_secure = 'secure@pciserver.carlesmateo.com';

$b_regex_found = false;
$b_emails_rcpt_to = Array();

// All major credit cards regex
// The CC anywhere
$s_cc_regex = '/(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6011[0-9]{12}|622((12[6-9]|1[3-9][0-9])|([2-8][0-9][0-9])|(9(([0-1][0-9])|(2[0-5]))))[0-9]{10}|64[4-9][0-9]{13}|65[0-9]{14}|3(?:0[0-5]|[68][0-9])[0-9]{11}|3[47][0-9]{13})/';

function log_event($s_message) {
 syslog(LOG_WARNING, $s_message);
}

function save_message_to_file($s_file, $s_message) {
 $o_file = fopen($s_file, "a");
 fwrite($o_file, $s_message);
 fclose($o_file);
}

function read_file($s_file) {
 $s_contents = file_get_contents($s_file);
 
 if ($s_contents === false) {
 return '';
 }
 
 return $s_contents;
}

function get_all_rcpt_to($st_emails_input) {
 // First email is pos 5 of the array
 $st_emails = $st_emails_input;
 
 unset($st_emails[0]);
 unset($st_emails[1]);
 unset($st_emails[2]);
 unset($st_emails[3]);
 unset($st_emails[4]);
 
 asort($st_emails);
 
 return $st_emails;
}

/*
 * Returns a @secure. email, from the original email
 */
function get_secure_email($s_email) {
 $i_pos = strpos($s_email, '@');
 
 $s_email_new = $s_email;
 
 if ($i_pos > 0) {
 $s_email_new = substr($s_email, 0, $i_pos);
 $s_email_new .= 'secure.';
 $s_email_new .= substr($s_email, $i_pos +1);
 }
 
 return $s_email_new;
}

function replace_tpl_variables($s_text, $s_sender_original) {

 // TODO: Replace static values
 
 $s_date_sent = date('r'); // RFC 2822 formatted date

 $s_text = str_replace('#DATE_NOW#', $s_date_sent, $s_text);
 $s_text = str_replace('#FROM_NAME#', 'Carles Mateo', $s_text);
 $s_text = str_replace('#FROM_EMAIL#', 'mateo@blog.carlesmateo.com', $s_text);
 $s_text = str_replace('#EMAIL_SENDER_ORIGINAL#', $s_sender_original, $s_text);

 return $s_text;
}

function delete_file($s_file) {
 unlink($s_file);
}

// Read the RCPT TO: fields ${recipient}
$st_emails_rcpt_to = get_all_rcpt_to($argv);

// Read the email
$email = '';
$fd = fopen("php://stdin", "r");
while (!feof($fd)) {
 $line = fread($fd, 1024);
 $email .= $line;
}
fclose($fd);

// Get the portion of the email without headers (to avoid id's being detected as CC numbers)
$i_pos_subject = strpos($email, 'Subject:');
if ($i_pos_subject > 0) {
 // Found
 $email_sanitized = substr($email, $i_pos_subject);
} else {
 // If we don't locate subject we look for From:
 $i_pos_from = strpos($email, 'From:');
 if ($i_pos_from > 0) {
 $email_sanitized = substr($email, $i_pos_from);
 } else {
 // Impossible email, but continue
 $email_sanitized = $email;
 }
}

// Remove spaces, dots and dashes so we find 4111.1111.1111.1111 and the like
$email_sanitized = str_replace(' ', '', $email_sanitized);
$email_sanitized = str_replace('.', '', $email_sanitized);
$email_sanitized = str_replace('-', '', $email_sanitized);

$s_message = "Script filtercard.php successfully ran\n";

log_event('Arguments: '.serialize($argv));

$i_result = preg_match($s_cc_regex, $email_sanitized, $s_matches);
if ($i_result == 1) {
 $b_regex_found = true;
 $s_message .= 'Card found'."\n";
 log_event($s_message);
} else {
 // No credit card
 $s_message .= 'No credit card found'."\n";
 log_event($s_message);
}

$s_dest_mail_original = $argv[5];
$s_sender_original = $argv[2];

// Generate a unique id
$i_unique_id = time().'-'.rand(0,99999).'-'.rand(0,99999);

$INSPECT_DIR='/var/spool/filter/';
// NEVER NEVER NEVER use "-t" here.
$SENDMAIL="/usr/sbin/sendmail -G -i";

$s_file_unique = $INSPECT_DIR.$i_unique_id;

# Exit codes from <sysexits.h>
$EX_TEMPFAIL=75;
$EX_UNAVAILABLE=69;

// Save the file
save_message_to_file($s_file_unique, $email);

$st_output = Array();

if ($b_regex_found == false) {
 // Send normally
 
 foreach ($st_emails_rcpt_to as $i_key=>$s_email_rcpt_to) {
 $s_sendmail = $SENDMAIL.' "'.$s_email_rcpt_to.'" <'.$s_file_unique;
 $i_status = exec($s_sendmail, $st_output);
 
 log_event('Status Sendmail (original mail): '.$i_status.' to: '.$s_email_rcpt_to);
 }
 
 delete_file($s_file_unique);
 
 exit();
}

// Send secure email
$s_sendmail = $SENDMAIL.' "'.$s_dest_mail_secure.'" <'.$s_file_unique;
$i_status = exec($s_sendmail, $st_output);

log_event('Status Sendmail (secure email): '.$i_status.' to: '.$s_dest_mail_secure);

$s_email_tpl = read_file('/usr/share/secure/smtpfilter_email.txt');

if ($s_email_tpl == '') {
 // Generic message
 
 $s_date_sent = date('r'); // RFC 2822 formatted date
 
 $s_email_tpl = <<<EOT
Date: $s_date_sent
From: secure <noreply@secure.carlesmateo.com>
Subject: Message with a Credit Card from $s_sender_original
You received a message with a Credit Card
EOT;
 
}

$s_email_tpl = replace_tpl_variables($s_email_tpl, $s_sender_original);

save_message_to_file($s_file_unique.'-tpl', $s_email_tpl);

// Send the replacement email
foreach ($st_emails_rcpt_to as $i_key=>$s_email_rcpt_to) {
 $st_output = Array();
 $s_sendmail = $SENDMAIL.' "'.$s_email_rcpt_to.'" <'.$s_file_unique.'-tpl';
 
 $i_status = exec($s_sendmail, $st_output);
 log_event('Status Sendmail (TPL): '.$i_status.' to: '.$s_email_rcpt_to);
 
}

delete_file($s_file_unique);
delete_file($s_file_unique.'-tpl');

/* Headers:

From: Carles Mateo <mateo@carlesmateo.com>
To: "carles2@carlesmateo.com" <carles2@carlesmateo.com>, Secure
 <secure@secure.carlesmateo.com>
CC: "test@carlesmateo.com" <test@carlesmateo.com>
Subject: Test with several emails and CCs
Thread-Topic: Test with several emails and CCs
Thread-Index: AQHRt1tmO/z+TpI64UiniKm7I56onw==
Date: Thu, 25 May 2016 14:32:15 +0000
 */

You can test it by connecting via telnet to port 25 and issuing these SMTP commands:

HELO mycomputer.com
MAIL FROM: test@carlesmateo.com
RCPT TO: just@asample.com
RCPT TO: another@different.com
DATA
Date: Mon, 30 May 2016 14:07:56 +0000
From: Carles Mateo <mateo@blog.carlesmateo.com>
To: Undisclosed recipients
Subject: Test with CC
This is just a test with a Visa CC 4111 1111 11-11-1111.
.

You can use the nc command for convenience.

When you’re all set, I recommend you test it by sending real emails from real servers.

Linux command-line tools I usually install

Some additional command-line tools that I usually install and use on my text-client Systems. Commands that ship with every Linux are not listed here; these are the additional tools I install on every Workstation or Server.

Apache benchmarks (ab)

To stress a Web Server

atop

A good complement to htop, iftop… monitoring tools

bzip2

A cool compressor, better than gzip, that also accepts streams.

cfdisk

Nice tool to work with partitions through modern menus.

ctop

Command line / text based Linux Containers monitoring tool.

If you use docker stats, ctop is much better.

ctop.py

My own monitoring tool.

dmidecode

It is the DMI table decoder.

For example: dmidecode -t memory

dstat

Very nice System Stats tool. You can specify individual stats, like a drive.

edac-util

Error reporting utility

edac-util --verbose
mc0: 0 Uncorrected Errors with no DIMM info
mc0: 0 Corrected Errors with no DIMM info
mc0: csrow0: 0 Uncorrected Errors
mc0: csrow0: CPU_SrcID#0_MC#0_Chan#0_DIMM#0: 0 Corrected Errors
mc0: csrow0: CPU_SrcID#0_MC#0_Chan#1_DIMM#0: 0 Corrected Errors
mc0: csrow0: CPU_SrcID#0_MC#0_Chan#2_DIMM#0: 0 Corrected Errors
mc1: 0 Uncorrected Errors with no DIMM info
mc1: 0 Corrected Errors with no DIMM info
mc1: csrow0: 0 Uncorrected Errors
mc1: csrow0: CPU_SrcID#0_MC#1_Chan#0_DIMM#0: 0 Corrected Errors
mc1: csrow0: CPU_SrcID#0_MC#1_Chan#1_DIMM#0: 0 Corrected Errors
mc1: csrow0: CPU_SrcID#0_MC#1_Chan#2_DIMM#0: 0 Corrected Errors
mc2: 0 Uncorrected Errors with no DIMM info
mc2: 0 Corrected Errors with no DIMM info
mc2: csrow0: 0 Uncorrected Errors
mc2: csrow0: CPU_SrcID#1_MC#0_Chan#0_DIMM#0: 0 Corrected Errors
mc2: csrow0: CPU_SrcID#1_MC#0_Chan#1_DIMM#0: 0 Corrected Errors
mc2: csrow0: CPU_SrcID#1_MC#0_Chan#2_DIMM#0: 0 Corrected Errors
mc3: 0 Uncorrected Errors with no DIMM info
mc3: 0 Corrected Errors with no DIMM info
mc3: csrow0: 0 Uncorrected Errors
mc3: csrow0: CPU_SrcID#1_MC#1_Chan#0_DIMM#0: 0 Corrected Errors
mc3: csrow0: CPU_SrcID#1_MC#1_Chan#1_DIMM#0: 0 Corrected Errors
mc3: csrow0: CPU_SrcID#1_MC#1_Chan#2_DIMM#0: 0 Corrected Errors

ethtool

ethtool

fatrace
Reports file access events from all running processes in real time.

flock
With flock several processes can have a shared lock at the same time, or be waiting to acquire a write lock. With lslocks from util-linux package you can get a list of these processes.

fstrim

Discard unused blocks on a mounted filesystem (local or remote). It is useful for freeing blocks no longer used in ZFS zvols. That can also be achieved by mount -o discard.

fuser
Show which processes use the named files, sockets, or filesystems.

git

glances

A tool for monitoring similar to ctop and htop.

hdparm
Get/set SATA/IDE device parameters

htop

An improved top

httpie

An HTTP client.

id (configure to query OpenLDAP)

ifmetrics

To set the metrics of all IPV4 routes attached to a given network interface

ifstat

Pretty network interfaces stats.

iftop

To watch metrics for a network interface (or wireless)


iostat

CPU and IO devices stats. I modified some collectors for telegraf and influxdb consumed by grafana for fetching the Write KB/s, Read KB/s, Bandwidth of the Magnetic Spinning drives and SSD during declustered rebuild.

iotop

iperf

Perform network throughput tests

ipmitool

iptables

iscsiadm

java (jre Oracle and OpenJDK)

journalctl

ldap-utils

ldapsearch and the other tools to work with LDAP.

less
According to manpages, the opposite of more. :)
What it does is display a file, and you can scroll up/down, you can search for patterns…
Examples:
cat /etc/passwd | less
less /etc/passwd
# -n doesn’t count the lines, to save time
# For a specific Offset
less -n +500000000P /var/log/apache2/giant.log
# For 50% point
less -n +50p /var/log/apache2/giant.log

lrzip /lrztar

Compressor that compresses big files very efficiently, especially GBs of source code.

lrzsz (Zmodem)

A utility to send files to the Server through a terminal.

Very useful when you don’t want to scp or rsftp, for example because that requires MFA  (Multi Factor Authentication) to be performed again and you already have a session open.

Moba xTerm for Windows is one of the Terminal clients that accepts Upload/Download of Z-modem


apt install lrzsz

lsblk

List the block devices. blkid is also handy, but you can get that information from /dev/disk/by-id/

lynx

Text browser. Really handy.

ltrace
To trace library calls.

mc

Midnight Commander


 

md5sum

memtester

Basically for testing Memory.

This will allocate 4GB of RAM and run the test 10 times.

sudo memtester 4096 10

mtr

Network tool mix between ping and traceroute.

mytop

To see in real time queries and slow queries to mysql

ncdu

Show the space used by any directory and subdirectory


nginx (fpm-php) and apache

The webservers

nfs client

nmon

Offers monitoring of different aspects: Network, Disk, Processes…

open-vpn

openssh-server

parted

Partition manipulation

perf

Performance profiler.

Ie:
perf top
perf stat ls

PHP + curl + mysql (hhvm)

pixz

A parallel, multiprocessor variant of xz that can leverage several processors to speed up the compression of files.

If the input looks like a tar archive, it also creates an index of all the files in the archive. This allows the extraction of only a small segment of the tarball, without needing to decompress the entire archive.

postcat

postcat -q ID shows the details of a message in the queue

python-pip and pypy

pv
Pipe Viewer – is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.

dd if=/dev/urandom | pv | dd of=/dev/null

Output:

1,74MB 0:00:09 [ 198kB/s] [      <=>                               ]

Probably you’ll prefer to use dd with the status=progress option; this is just a sample.

qshape

It shows the composition of the mail queue.

screen

sdparm
Access SCSI modes pages; read VPD pages; send simple SCSI commands

sfdisk

Utility to work with partitions that can export and import configs through STDIN and STDOUT to automate partitions operations.

slabtop
Displays Kernel slab cache information in real time.

smartctl

Utility for dealing with the S.M.A.R.T. features of the disks, knowing errors…

split
Split a file into several, based on text lines or, for binary files, on a number of bytes per file.

sha512sum

sshfs

Mount a directory from a remote Server locally, by using SSH.

sshpass
SSH without typing the password. -f for reading it from a file.
sshpass -p "mypassword" ssh -o StrictHostKeyChecking=no root@10.251.35.251

In this sample we pass the command ls, so it will be executed and then logout.
sshpass -p "mypassword" ssh -o StrictHostKeyChecking=no root@10.251.35.251 ls

sshuttle

A poor man’s VPN through SSH that is available for Linux and Mac OS X.

E.g.: sudo sshuttle -r carles@8.8.1.234:8275 172.30.0.0/16

Sshuttle forwards TCP and DNS but does not forward UDP or ICMP. So ping or ipmi protocol won’t work. But it does work for http, https, ssh…

Nice article on tunneling only certain things here.

strace

To trace the system calls and signals.
To redirect the output to another process use:
strace zpool status 2>&1 | less

stress-ng

Basically to stress Servers.

strings

Shows only strings from a file (normally a Binary)

From package binutils.

subversion svn

systemd-cgtop and systemd-analyze

tcpdump

To see the traffic to your NIC

tcptraceroute

TCP/IP Traceroute. It targets ports, so you can use it to see if a port is open too.

tee

Reads from stdin and sends to a file, and outputs to stdout as well.
For example:

find . -mtime -1 -print | grep -v "/logs/" 2>&1 | tee /var/log/results.log

timeout
Kills the process after the indicated timeout.

timeout 1000 dd if=/dev/urandom of=/dev/zvol/carles-N51-C5-D8-P2-S1/gb1000

tracepath

traceroute

tree

Tree simply shows the directory hierarchy in a graphical (text mode) way. Useful to see where files and subfolders are.

xxd
Make a hexdump or do the reverse

sudo xxd /dev/nvme0n1p1 | less

zcat
Just like cat, but for compressed files.

zcat logs.tar.gz | grep "Error"
zcat logs.tar.gz | less

zless

zram-config

Sergey Davidoff stumbled upon a project called compcache that creates a RAM based block device which acts as a swap disk, but is compressed and stored in memory instead of swap disk (which is slow), allowing very fast I/O and increasing the amount of memory available before the system starts swapping to disk. compcache was later re-written under the name zRam and is now integrated into the Linux kernel.

http://www.webupd8.org/2011/10/increased-performance-in-linux-with.html

watch

watch allows you to execute a command and watch it (refresh it) at a given interval, for example every two seconds.
For example, if normally I would do:

 while [ true ]; do zpool status | head -n10 ; sleep 10; done 
 while [ true ]; do df -h; sleep 60; done 
while [ true ]; do ls -al /tmp | head -n5 ; sleep 2; done 

Then I can do:

watch -n10 zpool status

Or:

watch -n60 df -h

Or

watch "ls -al /tmp | head -n5"

For my Dockers, and cloudinit with Ubuntu, my defaults are:

apt update; apt install htop mc ncdu strace git binutils

Using Windows 10 Appliance in Ubuntu Virtual Box 4.3.10

Microsoft has released Windows 10, and with it the possibility to download a Windows 10 Appliance to run under Virtual Box, VMWare Player, HyperV (for Windows) or Parallels (Mac). Their idea is to allow you to test the new Microsoft Edge browser, in addition to being able to test the older browsers in older VM images.

I wanted to use Windows 10 to check compatibility with my messenger c-client.

Also I wanted to know how Java behaves.

The Windows 10 VM image will work for 90 days. You can download it from here (http://dev.modern.ie/tools/vms/linux/).

The instructions are very sparse and they don’t specify a minimum version; however, if you use Virtual Box under Ubuntu 14.04, so Virtual Box 4.3.10, you’ll not be able to import the Appliance, as you’ll get an error.

Update: Thanks to Razvan and Eric, readers who reported that this also works on Mac OS 10.9.5 with Virtual Box 4.3.12, and with VirtualBox 4.3.20 running under Windows 7, respectively.

‘Windows10_64’ is not a valid Guest OS type.

Result Code: NS_ERROR_INVALID_ARG (0x80070057)
Component: VirtualBox
Interface: IVirtualBox {fafa4e17-1ee2-4905-a10e-fe7c18bf5554}
Callee: IAppliance {3059cf9e-25c7-4f0b-9fa5-3c42e441670b}


I looked for a solution and found none on the Internet, so I decided to give it a chance and try to fix it by myself.

The error is: ‘Windows10_64’ is not a valid Guest OS type. So obviously Windows10_64 is not on VirtualBox’s list yet; it is a pretty new release. Microsoft could have shipped it with OS Type Windows 64 Other, or Windows 8 64 bits, but they didn’t. I wondered if I could edit the image to trick it into appearing as a recognized image.

I edited the file (MSEdge – Win10.ova) with Bless Hex Editor, a hexadecimal editor.

I looked for the String “Windows10_64” and found two occurrences.

I had to replace the string and leave it with the exact number of bytes it had, so the same length (do not insert additional bytes). I searched the list of supported OSes and found that "WindowsXP_64" would be a perfect match. I replaced that 10 with XP, twice.
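
If you prefer to script the edit instead of using a hex editor, this is a minimal Python sketch of the same byte-for-byte replacement (adjust the .ova file names to your own; I assume a plain replace is equivalent to the manual hex edit):

# Both strings are 12 bytes, so the file length does not change
by_old = b"Windows10_64"
by_new = b"WindowsXP_64"
assert len(by_old) == len(by_new)

with open("MSEdge - Win10.ova", "rb") as o_file:
    by_data = o_file.read()

print("Occurrences found:", by_data.count(by_old))  # expected: 2

with open("MSEdge - Win10-patched.ova", "wb") as o_file:
    o_file.write(by_data.replace(by_old, by_new))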

Then I tried to import the Appliance, and it worked.

I tried to run it like that, but it froze on boot, with the new blue logo of Windows.

I figured that Windows XP would probably not be the closest architecture, so I edited the config and set Windows 8.1 (64 bit). I also increased the RAM to 4096 MB and set 32 MB of memory for the video card.


Then I just started the VM and everything worked.


Ok, a funny note: just after starting, it installed an update without asking ;)

Scaling phantomjs with PHP

One of my clients had a problem with a piece of Phantomjs Software.

I was asked to help in their project, which was relying on one of its features.

Phantomjs is an interesting project, but unfortunately it has not had enough maintenance and it suffers a terrible lack of documentation. The last contributions to the repo are from mid May, and infrequent. (The latest releases are from Feb 2015; see the Phantomjs releases on github.)

The Software from my client ran well for certain requests, but not for others, and after a random time, seconds or minutes, it became unresponsive.

My client wanted to fix that, or to use nodejs to scale their phantom code, or in the worst case to rewrite the code in nodejs. And it was urgent, because they were losing a lot of money due to their programs malfunctioning.

I began to investigate. That’s the story of how I fixed it…

Connections becoming unresponsive

My client was using the Phantomjs webserver.

The problem with Phantom’s webserver is that it has a hard limit of 10 concurrent connections. After that, all the following http connections are queued until one becomes free.

So if you do a telnet to that port, the connection is accepted, but nothing happens. Even sending malformed GET requests.

My guess was that something in the process of parsing the requests was wrong, and then some of those 10 connections became frozen. I started to debug.

I implemented a timeout that quits the worker after some time:

mTimerExit = setTimeout(forceExitByTimeout, DEFAULT_TIME_TO_EXIT);

Before exiting it is important to clear the timers:

clearTimeout(mTimerExit);

I also implemented a debug mode, to see what was going on, with a method consoleDebug that basically did console.log depending on whether a debug parameter was set to true.

My quick-win system was working, but many urls were still not being parsed by the phantomjs Engine.

Connecting with nodejs

My client had had the bad experience of previous versions of Phantomjs crashing a lot.

So they had the idea of running nodejs as the main webserver, for scaling, and invoking Phantomjs from it.

I did several pieces of work along this line.

I tried to link it with nodejs using products like:

1) https://github.com/alexscheelmeyer/node-phantom

Unfortunately those packages are no longer maintained; the last update is from 2013.

It doesn’t work. I found no documentation, and no traces of the errors.

I also got errors like:

XMLHttpRequest cannot load http://localhost:8888/start Origin file:// is not allowed by Access-Control-Allow-Origin

And I had to figure out what parameters to tune. I did it by starting phantomjs with the param:

 --web-security=false 

In the js scene, products and packages are changing very fast and sadly often breaking retrocompatibility.

So you’d better have a very well defined package.json that installs exactly the software versions that you need, or soon, when you deploy to another server, it will be a disaster.

2) https://www.npmjs.com/package/ghost-town

Ghost Town is a product that allows you to run phantomjs from inside nodejs.

It is a company-maintained product, by a contributor, Teddy.

He was very nice replying to my questions, but it didn’t help.

The process was failing with no debug, no info.

The package really lacks documentation, and has only the same sample across all the web.

I provide this ghost-town code sample, in case it is useful for people looking for more:

var phantomClusterOptions = require("./phantomClusterOptions");
var town = require("ghost-town")(phantomClusterOptions);
var alerts = require("./qualitynodephantom"); // Do not add .js
var PORT = 8080;

if (town.isMaster) {
    var express = require('express');
    var app = express();
    app.get('/', function(req, res) {
        // Every request comes here
        var data = {url:req.query.url,device:req.query.quality};

        town.queue(data, function(err,result) {
            res.set('Content-Type', 'text/plain');
            if (!err) {
                res.send(result);
            } else {
                res.send(err);
            }
        }, phantomClusterOptions.pageTries);

    });
    app.listen(PORT);
    console.log('App running');
} else {
    town.on("queue", function(page, data, callback) {
        town.phantom.set('onError', function(msg,trace){});
        // quality is the exported method, you pass the useful page object as parameter
        quality(page, data, function(str){
            callback(null, str);
        });
    });
    town.on("error", function(err) {console.log("error");});
}

And the file phantomClusterOptions has:

//Options here https://github.com/buzzvil/ghost-town
phantomClusterOptions = {
  //phantomBinary:'./phantomjs', //if you want to use a different phantomjs version
  //phantomBinary:'/usr/bin/phantomjs', 
  workerDeath: 3, //number of times that instance of phantom will be reused
  pageTries:5, //tries to the page before rejecting
  pageCount: 1, //number of pages analysed concurrently by the same phantom instance (1 is recommended)
  // This is for versions 1.9 and older of ghost-town
  //phantomFlags:['--load-images=no', '--local-to-remote-url-access=yes', '--ignore-ssl-errors=true', '--web-security=false', '--debug=true'] //flags (http://phantomjs.org/api/command-line.html)
// For v.2 and newer versions
  phantomFlags: {"load-images" : false, "local-to-remote-url-access" : true, "ignore-ssl-errors" : true, "web-security" : false, "debug" : true}
}
module.exports = phantomClusterOptions;

3) Other products

https://www.npmjs.com/package/node-phantom-simple

https://github.com/sgentle/phantomjs-node

I tried to debug with node debugger from command-line:

 

node debug myapp.js


And with node-debug (very nice integration with Chrome):

node-debug myapp.js


But I was unable to see what was failing. The nodejs App was up, and the ghost-town queue was increasing, but apparently the worker processing the queue was not working or was unable to execute phantomjs. But I saw no errors. When I switched the params for ghost-town to v.2, I got some exceptions, and it really looks like it is unable to execute Phantom, or perhaps phantomjs could not exec the .js due to some dependencies problem.

(throw err and error spawn EACCES)

Also:

Error: /mypath/node_modules/ghost-town/node_modules/phantom/node_modules/dnode/node_modules/weak/build/Release/weakref.node: undefined symbol: node_module_register
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at bindings (/mypath/node_modules/ghost-town/node_modules/phantom/node_modules/dnode/node_modules/weak/node_modules/bindings/bindings.js:76:44)
    at Object.<anonymous> (/mypath/node_modules/ghost-town/node_modules/phantom/node_modules/dnode/node_modules/weak/lib/weak.js:7:35)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)

/mypath/node_modules/ghost-town/node_modules/phantom/node_modules/dnode/node_modules/weak/node_modules/bindings/bindings.js:83
        throw e

But I was unable to find more info on the net. I tried to install additional modules and I even straced the processes, but I didn’t find the origin of the problem.

I was using:

npm install browserify express ghost-town phantom socket.io URIjs
async dnode forever node-phantom request underscore.string waitfor

About CentOs and Ubuntu

Some SysAdmins love CentOs. I’m in love with Ubuntu.

Basically, it is because of the packages system. They are really well maintained.

Ubuntu has LTS (Long Term Support) versions, which last for 5 years.

And on the other hand, they release a new version every 6 months, so if you install a modern server, you have the latest stable packages of Software.

Working with Open Source, this is a really important point, as I get access to modern versions of PHP, Apache, Tomcat, etc…

To use phantomjs with CentOS you have to download the sources and compile it; it took like an hour on a commodity Cloud Virtual Server, and there were problems with dependencies. Also, a phantomjs compiled on one CentOS system didn’t work on a Server with a different CentOS version. So it was a bit painful to distribute across heterogeneous machines.

With an Ubuntu 14.04 LTS, just:

sudo apt-get install phantomjs

did the trick installing phantomjs (1.9.0-1)

Scaling with PHP

So we had a decision to make between:

  • rewriting the application completely in nodejs, which would certainly take time
  • investing more time trying to determine why the workers freeze under phantomjs

Phantomjs is a scriptable headless WebKit, so it was very convenient.

Nodejs is built on Chrome’s Javascript runtime, so it would do what we want.

We had a time constraint, and for my client it was very important to have the system working asap.

So I decided to debug a bit more.

I found that urls were stopping loading at the event page.onNavigationRequested.

So I could keep the url, and after a timeout force a page.open(url) inside the event if it had stopped (timed out):

mPage.onNavigationRequested = function(url, type, willNavigate, main) {

That was working, finally, but it was not my favourite solution. I wanted to understand why it was failing initially.

The lack of documentation was frustrating, but debugging the problematic urls I found that they were doing several redirections, and after some of them I was getting an SSL certificate error on one of the destination urls.

The problem had to be with badly configured chain certificates.

As nowadays there are many cheap SSL certificate providers based on chain certificates, and many sites configure them wrong, phantomjs was sensitive to that and stopped following the urls.

I already had the param:

--ignore-ssl-errors=true

But investigating I found a very interesting contribution on stackoverflow from user Micah:

http://stackoverflow.com/questions/12021578/phantomjs-failing-to-open-https-site

Note that as of 2014-10-16, PhantomJS defaults to using SSLv3 to open HTTPS connections. With the POODLE vulnerability recently announced, many servers are disabling SSLv3 support.

To get around that, you should be able to run PhantomJS with:

phantomjs --ssl-protocol=tlsv1

Hopefully, PhantomJS will be updated soon to make TLSv1 the default instead of SSLv3.

I decided to try forcing the SSL version to TLSv1:

--ssl-protocol=tlsv1

And it worked. It did the trick. All the urls were now being parsed right, following the redirects to the end (or to my timeout).

The problem and the solution have been there since October 2015, and the use of tlsv1 has still not been made the default in Phantomjs. I found that lack of maintenance disappointing.

That is why, when a multinational recently interviewed me and asked me about technologies like nodejs, I told them that I’m conservative until it is clear that the version has been proven stable. And I told them that, in any case, a member of the company should be a core member of the contributors to the technology. They were surprised, but they shouldn’t have been; they should have known what I was telling them! I explained to them that if you use a new technology in production, at least you should have a member of your staff in the core of that product. So you pay a guy to build an Open Source technology, basically. This guarantees that if a heavy bug or security flaw appears, you’ll not be screwed waiting for the next release: your guy can fix it immediately and share the solution with the community.

Companies like Google, Facebook or Amazon do that.

That conservativeness is what I drew on in an interview with Facebook Operations, where I was asked about a scenario in which some Developers and DevOps requested that I upgrade the Load Balancers Software. They were more for the action, and I said that LBs are critical; the reply was that everything in FB is critical. I argued that if a chat component fails, only the chat fails, but if the Load Balancers fail, everything will fail, as they are the entrance point. I had the confirmation that I was right when, some months ago, they had an outage for hours.

Sometimes you have to stay strong and defend your point, because you know you’re right. Even if you are in front of a person that doesn’t see things the way you do, and who will take a decision that will leave you out. Being honest is priceless.

Scaling Phantomjs with PHP

So cool, the system was working fine.

But there was something that could be improved.

As Phantomjs had the limit of 10 connections in its webserver, that was the maximum number of concurrent connections it could serve at the same time, and so it was a bottleneck.

// Sample code to create a webserver from PhantomJS
mWebserver = require('webserver');
mServer = mWebserver.create();
console.log("Server created");
//consoleDebug('Debug enabled');
mService = mServer.listen(8080,{'keepAlive': true}, function(request, response) {
    //consoleDebug('URL:' + request.url);
    s_params = request.url;
    doRender(s_params, function(res) {
        //consoleDebug('Response from URL:' + request.url + ' (processed)');
        writeStringResponse(response,res);
    });
    //consoleDebug('URL:' + request.url + ' ready for processing');
});

I decided to propose to the company one of my tricks:

To launch phantomjs from PHP.

This means writing a wrapper that launches Phantomjs from the command line and gets the response. I did the same in my CQLSÍ Cassandra wrapper around cqlsh, before Cassandra drivers for PHP were available. I also did this to connect the payment gateway of a bank, written in C, with the Java libraries from Ticketing Solutions in 1999.

That way the server would be able to process as many concurrent Phantomjs instances as we want, as each one would be running in its own process.

I modified the js code to remove the webserver functionality and to get parameters from command line.

var system = require('system');
var args = system.args;
var b_debug_write = false;

if (args.length < 3) { // args[0] is the script name, so url and quality make length 3
    console.log("Minimum 2 parameters");
    console.log("call with: phantomjs program.js http://myurl.com quality");
    console.log("Parameter debug is optional");
    args.forEach(function(arg, i) {
            console.log(i + ': ' + arg);
        });
    // Exit with error level 1
    phantom.exit(1);
}

var s_url = args[1];
var s_quality = args[2];

if (args.length > 3) {
    // Enable debug
    b_debug_write = true;
}

consoleDebug("Starting with url:" + s_url + " and quality:" + s_quality);

Then the PHP code:

<?php
/**
 * Creator: Carles Mateo
 * Date: 2015-05-11 11:56
 */

// Report all PHP errors
error_reporting(E_ALL);

$b_debug = false;

if (!isset($_GET['url']) || !isset($_GET['quality'])) {
    echo 'Invalid parameters';
    exit();
}

if (isset($_GET['debug'])) {
    $b_debug = true;
}

$s_url = $_GET['url'];
$s_quality = $_GET['quality'];

// Just in case is not decoded by the PHP installed
$s_url = urldecode($s_url);
// reencode url
$s_url = urlencode($s_url);

$s_script = '/mypath/myapp_commandline.sh';

// escapeshellarg() prevents shell injection through the GET parameters
$s_script_with_params = $s_script.' '.escapeshellarg($s_url).' '.escapeshellarg($s_quality);

if ($b_debug == true) {
    $s_script_with_params .= ' debug';
    echo 'Executing '.$s_script_with_params."<br />\n";
}

//$message=shell_exec("/var/www/scripts/testscript 2>&1");
$s_message = shell_exec($s_script_with_params);

header("Content-Type: text/plain");
echo $s_message;

And finally the bash script myapp_commandline.sh:

#!/bin/bash

PATH_QUALITY=/mypath/
#tlsv1 is recommended to avoid problems with certificates
PARAMETERS="--local-to-remote-url-access=yes --ignore-ssl-errors=true --web-security=false --ssl-protocol=tlsv1"
cd $PATH_QUALITY

#echo "Debug param1=$1 param2=$2 param3=$3"

if [ -z "$3" ]
  then
    phantomjs $PARAMETERS quality.js $1 $2
  else
    echo "Launching phantomjs with debug. url=$1 quality=$2"
    phantomjs $PARAMETERS quality.js $1 $2 $3
fi

If you don’t need to load the images, you can speed things up with the parameter:

--load-images=false

So finally we were able to use only 285 MB of RAM to handle more than 20 concurrent phantomjs processes.


csort, my algorithm that heavily beats quicksort

Here is my proposal for sorting much, much faster than quicksort.

Time and memory consumption in C, for my algorithm csort, versus quicksort, with a universe of 50 Million registers, with values duplicated from 0 to 1,000,000. Less is better.

All the algorithms must be checked against different universes of data, and their utility is limited to certain situations. I explain the algorithm here, show the logic behind the code in C, PHP and Java, give the metrics, push the boundaries of the computer… and test against different sets of data looking for the big-O. I prove how much more performant it is and explain everything.

This article is complex; it is intended for very Senior Engineers at huge companies that need to handle very big data.

The code is written in C to be sure that we compare everything in the same order of magnitude and to avoid optimizations from the compilers / precompilers / Just in Time engines that could bias the conclusions, so we know we are playing fair. But there is a port to PHP so everyone can understand the algorithm, and there are some utilities written in PHP as well. I’ll also explain certain interesting things about PHP internal functions and how it handles memory, and other goodies about C. Finally the code is also available in Java, and it is possible to observe how clearly my csort beats quicksort and how fast the JIT compiler performs. You may also be interested in reading my previous article about the performance of several languages.

We need a huge amount of data in order to be able to demonstrate the time differences clearly.

Some of the PHP tests and proofs require you to have a decent computer with at least 16 GB of RAM. You can skip those, reduce the magnitude of the set of data, or simply use C.

Take this article as intellectual entertainment, although I think you will discover that this algorithm / technique of mine will allow you to sort much faster than quicksort in most of the cases I can imagine, and so you will have another tool in your set of resources for sorting fast and dealing with heavy data structures.

Problem

An initial intellectual problem description to enter into matter:

Having a set of 50,000,000 numbers, with values possible between 0 and 1,000,000:

  1. Sort them and eliminate duplicates as fast as possible

To be fair and trustworthy the numbers will be generated and saved to disk, and the same set of numbers will be used for comparing between all the algorithms and tests and when checking with different languages.

Generating the numbers

For this test I created an array of numeric items.

I created a .txt file filled with the numbers, and then I load the same set of numbers for all the tests, so the numbers are always the same and results cannot be biased by a random set that happens to be very lucky for the algorithm to sort.

I’ve generated several universes of tests, that you can download.

 

Code in PHP to generate the random numbers and save to a file

If you prefer to generate a new set of numbers, you can do it by yourselves.

Please note: my computers have a lot of memory, but if you have little memory you can easily modify this program to save the values one by one, and so save memory by not having the whole array in memory. Although I recommend generating big arrays and passing them to a function while measuring the memory consumption, to really understand how PHP manages passing Arrays to functions and memory. I have found many Senior devs wrong about it.

<?php 
/**
 * Created by   : carles.mateo
 * Creation Date: 2015-05-22 18:36
 */
error_reporting(E_ALL);
// To avoid raising warnings from HHVM
date_default_timezone_set('Europe/Andorra');

// This freezes the server, using more than 35 GB RAM
//define('i_NUM_VALUES', 500000000);
// 50M uses ~ 10 GB RAM
define('i_NUM_VALUES', 50000000);
define('i_MAX_NUMBER', 500000000);

function get_random_values() {

    $st_values = null;

    $s_values_filename = 'carles_sort.txt';

    if (file_exists($s_values_filename)) {
        print_with_datetime("Reading values...");
        $st_values = file($s_values_filename);
        print_with_datetime("Ensuring numeric values...");
        foreach($st_values as $i_key=>$s_value) {
            $i_value = intval($s_value);
            $st_values[$i_key] = $i_value;
        }
    } else {
        print_with_datetime("Generating values...");
        $st_values = generate_random_values();
        print_with_datetime("Writing values to disk...");
        save_values_to_disk($s_values_filename, $st_values);
        print_with_datetime("Finished writing values to disk");
    }

    return $st_values;
}

function generate_random_values() {

    $i_num_values = i_NUM_VALUES;
    $i_max_number = i_MAX_NUMBER;

    $st_values = array();

    for($i_counter = 0; $i_counter<$i_num_values; $i_counter++) {
        $i_random = rand(0, $i_max_number);
        $st_values[] = $i_random;
    }

    return $st_values;

}

function save_values_to_disk($s_values_filename, $st_values) {
    $i_counter = 0;
    // Total number of items, needed below to compute the progress percentage
    $i_max_values = count($st_values);

    $o_file = fopen($s_values_filename, 'w');
    foreach ($st_values as $i_key => $s_value) {
        $i_counter++;
        fwrite($o_file, $s_value.PHP_EOL);
        if (($i_counter % 100000) == 0) {
            // every 100K numbers we show a status message
            $f_advance_percent = ($i_counter / $i_max_values)*100;
            print_with_datetime("Item num: $i_counter ($f_advance_percent%) value: $s_value written to disk");
        }
    }
    fclose($o_file);
}

function get_datetime() {
    return date('Y-m-d H:i:s');
}

function print_with_datetime($s_string) {
    echo get_datetime().' '.$s_string."\n";
}
print_with_datetime('Starting number generator...');

$st_test_array = get_random_values();

First comments about this code:

  • Obviously to generate an array of 50M items in PHP you need enough RAM. In PHP this takes ~9 GB; in C you need much less: just 50M * 4 bytes per item = 200 MB (with 4-byte int values; double it for the 8-byte long used in the sample code), although as my program is multipurpose and uses several arrays you will realistically need >2 GB of RAM.
  • Playing with memory_get_usage() you will discover that returning a big array from the function to the main code does not work by copying it to the stack, and that passing the array to the function by value (not specifying by reference) does not copy the values either. Something like a pointer is passed, unless the passed data is modified. It uses a technique called copy-on-write. Other modern languages use similar techniques as well to save memory. So it is not true that “A copy of the original $array is created within the function scope”. As the PHP.net section PHP at the Core: A Hacker’s Guide defines: “PHP is a dynamic, loosely typed language, that uses copy-on-write and reference counting”.
  • Please note that when we read from disk we do:
            foreach($st_values as $i_key=>$s_value) {
                $i_value = intval($s_value);
                $st_values[$i_key] = $i_value;
            }

    This forced conversion to int is because when PHP reads the values from disk it reads them as String (and it keeps a line break at the end as well). We convert them to make sure we use PHP integer values.

    To know the max value of an Integer in your platform do:

    echo PHP_INT_MAX;

    For a 64 bits is: 9223372036854775807

    blog-carlesmateo-com-php_int_max-64-bit

    I limited the number of registers sorted in these tests to 50,000,000 because it makes PHP use 12.6 GB of RAM, and my computer has 32 GB. I tried with 500,000,000 but the Linux computer started to swap like crazy and I had to kill the process via ssh (yes, the keyboard and mouse were unresponsive, so I had to log in via ssh from another computer and kill the process). Although 500M is a reasonable number for the code in C or Java.

  • So the fourth conclusion is that for working with big Data Structures PHP is not recommended, because it uses a lot of RAM. You can use it if its flexibility brings you more value than performance and memory consumption.
  • Everything that you can do in PHP will be much, much faster if done in C.

Writing the 50M values to disk

Sample progress output of the process of writing to disk from PHP (which is slow), for 50M values in the range 0 to 500M.

My solution 1: simple csort

I’ve developed several implementations, each with its own utility. This is the first case.

Please take a look at this table of the cost of sorting an Array by using different algorithms from http://bigocheatsheet.com/ :

blog-carlesmateo-com-big-o-of-array-sorting-algorithms

It is interesting to see the complexity of each algorithm in all the cases.

So Quicksort has a complexity of O(n log(n)) for the best case, and O(n^2) for the worst.

For a 50 Million Array, so n=50M, then 50M * log2(50M) = 1,278,771,237.95 ~ 1.28 Billion

blog-carlesmateo-com-50M-log2_50M-at-google-cat
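
If you want to double-check that arithmetic, here is a quick C snippet (not part of the benchmark code of this article) that prints the expected n·log2(n) count:

#include <stdio.h>
#include <math.h>

int main(void) {
    double d_n = 50000000.0;                     /* n = 50M items                    */
    double d_comparisons = d_n * log2(d_n);      /* best case estimate for quicksort */
    printf("n*log2(n) = %.2f\n", d_comparisons); /* prints ~1278771237.95            */
    return 0;
}

Compile it with gcc and the -lm flag, as log2() lives in the math library.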

My algorithm approach is this: having the unsorted array loaded into the array st_values, I do a main loop like this (remember that we want to sort and remove duplicates):

for (l_counter=0; l_counter < l_max_values; l_counter++) {
    st_values_sorted[st_values[l_counter]] = st_values[l_counter];
}

So, I copy to the destination array st_values_sorted the values, assigning the value as the index of the Array st_values_sorted.

With this:

  1. We are sure we will have no duplicates, as if a value is duplicated n times it will simply be reassigned to the same position n times, and so it will appear only once, as st_values_sorted[value], in the final array.
  2. We have an array whose index is the same as the value. So even if a foreach loop would return the values unordered, they’re sorted by their key (the numeric index).

Finally, if you want to generate a sorted sequential array or a linked list, we do:

        // Loop again to create an ordered list
        l_max_range = l_max_values_sorted + 1;
        for (l_counter=0; l_counter < l_max_range; l_counter++) {
            if (st_values_sorted[l_counter] > -1 ) {
                // If the value existed
                st_values_sorted_renum[l_counter_hit] = st_values_sorted[l_counter];
                l_counter_hit++;
            }
        }        

The magic of this:

  1. In the first loop, we read each item of the array just one time. So  O(n)
  2. In the second loop, if we really want to generate a sorted list, we simply loop the sorted and deduplicated Array by index, as seen in the code before, from the initial possible value (0) to the maximum value we can have (in this sample m=1M)
  3. So we loop, for 1,000,001 items
  4. So in total we do n+m cycles
    The cost in the Best and in the Worst case is O(n+m), which behaves as O(n) when m is not bigger than n (as in this universe)
  5. This is fucking awesome: way faster than quicksort
    For quicksort the time complexity is: Best O(n log n). Worst O(n^2)
  6. After quicksort you would also have to delete duplicates, adding some cost (in the case of PHP, the array_unique() function is really costly)
  7. An optimization is that we can also associate the index, in the process of loading the values from disk, so we save the first loop (step 1) and so we save O(n).
        while( b_exit == 0 ) {
            i_fitems_read = fscanf(o_fp, "%li", &l_number);
            if (i_fitems_read < 1) {
                printf("Unable to read item %li from file.\n", l_counter);
                exit(EXIT_FAILURE);            
            }
            st_values[l_number] = l_number;
                           
            l_counter++;
            if (l_counter > l_max_values -1) break;
        }

    And we will have the Array sorted (by key) without any sort. That’s why I say…
    The fastest sort… is no sorting at all!.

    This is the recommended way of using the algorithm because it saves a lot of memory: instead of having an Array of 50M for loading the data to sort, and another of 1M to hold the csorted data, we will have only the 1M Array. We will be adding the data using csort, eliminating duplicates and sorting along the way.

  8. If in the process of loading we already assign the values to their index, we will not do the first loop O(n), so our cost will be O(m) where m<n
  9. Finally, if our array is sorted (by index) there is no need to do the second loop to produce a sorted list, as the array is sorted by key. If we do need it, the cost is m; otherwise we have no cost at all, as it was sorted and deduplicated when loaded from disk!
  10. Even doing both loops, this is way faster than quicksort (even without deduplicating), for the universes tested:
    1. 50M items, possible values 0-1M
    2. 50M items, possible values 0-50M
    3. 50M items, possible values 0-500M
    4. The same proportions with less items

Comparing the performance for the first case

Notes:

  • Please note the code has been compiled with -O0 to disable optimizations, because of what I think is a bug I found in gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 that manifests in the csort with compression. With the parameter -O2 the times for the sorting are half or better (0.1 seconds for csort basic, around 4.7 for quicksort).
  • To use arrays of 500M, so using more than 2 GB of RAM, you should pass the parameter -mcmodel=medium to gcc or the compiler will return the error “relocation truncated to fit: R_X86_64_32S against .bss”
  • Computer used is Intel(R) Core(TM) i7-4770S CPU @ 3.10GHz, as computers nowadays use turbo, times vary a bit from test to test, so tests are done many times and an average value is provided
  • Please understand how your language passes parameters to functions before trying to port this code to another language (passing a pure by-value string of 50M items to a function would not be a good idea, and even worse in the case of recursive functions)

As you can see in the C code below, I have used two quicksort implementations; both take between 8.7 and 10 seconds (the second algorithm is a bit slower).

blog-carlesmateo-com-quicksort-1-50M-items-500M-random-values-9seconds

blog-carlesmateo-com-quicksort-3-50M-items-500M-random-values-9seconds

This time does not change, no matter if the random numbers were for ranges 0-1M, 0-50M or 0-500M.

My implementation of quicksort with duplicate removal takes just a bit more, as I use the same base as csort for removing the duplicates.

 

With the csort basic, doing the two loops (so no optimizing on load) the time for sorting with removing duplicates and generating a final array like a list, depends on the maximum possible value:

  • Universe of 50M items, random values between 0-1M, time: 0.5 to 0.7 seconds

    blog-carlesmateo-com-csort-basic-50M-items-1M-random-values-0_7seconds

    Here csort basic is invoked from the command line. Firefox was open and heavily loaded with tabs, so it was a bit slower: it takes 785 ms.

    blog-carlesmateo-com-c-csort-basic-553ms-0_5s

    Launching csort basic from the NetBeans IDE, Release config but -O0, and with Firefox closed, takes 554 milliseconds / 0.554 seconds.

  • Universe of 50M items, random values between 0-50M, 1.7-2 seconds
    blog-carlesmateo-com-csort-basic-50M-items-50M-random-values-2seconds
  • Universe of 50M items, random values between 0-500M, 3.7 seconds

blog-carlesmateo-com-csort-basic-50M-items-500M-random-values-3_7seconds

Advantages of csort (for its purpose, so for compatible universes, that is, when the range of values m is smaller than, or not many times bigger than, the number of items n):

  • Is super-fast
  • It saves a lot of memory
  • Can be used to remove duplicates or to keep duplicates
  • It is possible to use it with hybrid universes (see additional optimizations) and have significant performance gains respect using a single sorting algorithm
  • It is very easy to understand and to implement

Disadvantages of csort basic:

  • It is intended for the universe of max(m) < n. If max(m) (the maximum of the random values) is much, much higher than n (the number of items), then the sorted-by-key array will use more memory (a bigger array), even if it holds few values. (* see csort with compression below and the Additional optimizations sections)
  • In certain edge cases, where n is many times smaller than the difference between min(m) and max(m), the algorithm could be slower than quicksort. (This is when the data does not fulfil max(m) < n.)
  • The sorted array uses memory. Other algorithms, starting from an initial array, do not need additional arrays in order to sort. Although we can generate a final linked list and remove the arrays from memory during the process, we are still using that memory meanwhile. But this was supposing we have an original, unsorted array; actually we don’t need a starting array at all: just csort the Data into the destination Array as you receive it, and then you save memory because an Array of 1M takes less memory than an Array of 50M. (See My solution 2: optimized when reading)

 

My solution 2: optimized when reading

This is basically csort, but using the read-from-disk process to assign the right index, saving not only CPU but also a lot of memory.

That way the complexity is totally reduced as data is sorted (by index) and deduplicated when loading, with no cost.

In the case of the optimized read, with an universe of 50M items, ranges 0-1M, it takes the astonishing time of.. 3 ms! So 0.003 seconds.

blog-carlesmateo-com-c-csort-read-opt-3ms

Probably this represents better the idea of beating quicksort: csort 0.003 seconds vs ~10 seconds quicksort.

My solution 3: supporting linked data

Imagine we have a numeric array with unique values, for example ids, and another String array with the names.

For example:

st_values[0] = 1500000; st_names[0] = “Larry Page”;

st_values[1] = 731207; st_names[1] = “Sergey Brin”;

st_values[2] = 23788889; st_names[2] = “Steve Wozniak”;

st_values[3] = 5127890; st_names[3] = “Bill Gates”;

st_values[1000] = 5127890; st_names[1000] = “Alan Turing”;

st_values[10000] = 73215; st_names[10000] = “Grace Hopper”;

st_values[100000] = 218919; st_names[100000] = “Jean E. Sammet”;

Instead of doing:

st_sorted[1500000] = 1500000;

st_sorted[731207] = 731207;

We will do:

st_sorted[1500000] = 0;

st_sorted[731207] = 1;

st_sorted[23788889] = 2;

So, we will have csorted by id, but the stored value will point to the linked data in st_names[].

This can also be used with csort optimized reading, saving the memory used by the array st_values[]. Obviously st_names[] must be kept.
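
Below is a minimal, self-contained C sketch of this linked-data variant. It uses a tiny illustrative id range and made-up sample data so it runs in a few KB; with the real ids from the examples above the index array would simply be bigger.

#include <stdio.h>

#define l_MAX_ID 100   /* illustrative maximum id for this sketch */

int main(void) {
    /* Linked data: the ids and their names, in arrival (unsorted) order */
    long        st_ids[]   = {15, 7, 73};
    const char *st_names[] = {"Larry Page", "Sergey Brin", "Steve Wozniak"};
    long l_total = 3;

    /* Index csorted by id; -1 means "no record with this id" */
    long st_sorted[l_MAX_ID + 1];
    long l_counter;

    for (l_counter = 0; l_counter <= l_MAX_ID; l_counter++) {
        st_sorted[l_counter] = -1;
    }

    /* csort by id, but store the position of the linked data instead of the id */
    for (l_counter = 0; l_counter < l_total; l_counter++) {
        st_sorted[st_ids[l_counter]] = l_counter;
    }

    /* Walking the index in order returns the names sorted by id */
    for (l_counter = 0; l_counter <= l_MAX_ID; l_counter++) {
        if (st_sorted[l_counter] > -1) {
            printf("id %li -> %s\n", l_counter, st_names[st_sorted[l_counter]]);
        }
    }

    return 0;
}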

My solution 4: csort with compression and supporting duplicates

csort is very quick for sorting and removing duplicates. But what happens if we want to sort and keep the duplicates?

The algorithm csort with compression what does is:

    if (i_algorithm == i_ALGORITHM_CSORT_COMP) {
        for (l_counter=0; l_counter < l_max_values; l_counter++) {
            l_value = st_values[l_counter];
            st_values_sorted[l_value] = st_values_sorted[l_value] + 1;
        }
        l_counter_hit = l_max_values_sorted + 1;
    }   

So, it assigns the index as the other versions of csort do, but instead of assigning the same value as the index, it assigns the number of times that this value has been found.

So, assuming we have a list like this:

{0,10,15,0,3,5,21,5,100000,0,7,5}

It will produce:

st_values_sorted[0] = 3;
st_values_sorted[1] = 0;
st_values_sorted[2] = 0;
st_values_sorted[3] = 1;
st_values_sorted[4] = 0;
st_values_sorted[5] = 3;
st_values_sorted[6] = 0;
st_values_sorted[7] = 1;
…
st_values_sorted[100000] = 1;

From this Array it is possible to get the original linked list sorted easily:
{0,0,0,3,5,5,5,7,10,15,21,100000}
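
A minimal, self-contained C sketch of that expansion (with the small compressed array above hard-coded; the variable names here are illustrative, not the ones of the complete program further below):

#include <stdio.h>

#define l_MAX_VALUE 100000

/* Compressed array: the index is the value, the content is how many times it appears */
static long st_counts[l_MAX_VALUE + 1];

int main(void) {
    long l_value, l_times;

    st_counts[0] = 3;  st_counts[3] = 1;   st_counts[5] = 3;
    st_counts[7] = 1;  st_counts[10] = 1;  st_counts[15] = 1;
    st_counts[21] = 1; st_counts[100000] = 1;

    /* Walking the indexes in order and repeating each value st_counts[l_value] times
       rebuilds the original list, sorted and keeping the duplicates */
    for (l_value = 0; l_value <= l_MAX_VALUE; l_value++) {
        for (l_times = 0; l_times < st_counts[l_value]; l_times++) {
            printf("%li ", l_value);
        }
    }
    printf("\n");

    return 0;
}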

Now, take the first universe of data, 50M items with ranges 0-1M, and you get that you have compressed an originally unsorted array of 50M items into a sorted array of 1M items (1,000,001 to be exact), and you can still access it by index. So a compression of 50 to 1 for this universe of numbers. (As you may be thinking, it won’t compress but use more space if the range of numbers is bigger than the number of items.)

If your values repeat a lot and you have a huge set of items to sort, like in Big Data, this algorithm is wonderful: it brings huge memory savings and amazingly speedy random access to the data.

The idea, as mentioned before, is not to load the data into an Array and then sort, but to assign with csort while reading the data from disk. That way, instead of having an Array of 50M and another of 1M, we would have only a single Array of 1M.

This is the relation of times for the universe 50M, 0-1M:

  • csort basic: 0.553 seconds
    Please note: this sample includes the time to generate an array like a list, st_values_sorted_renum, so the time just to create the csorted array is ~3 ms less
  • csort opt read: 0.003 seconds
  • csort with compression and duplicates support: 0.883 seconds

blog-carlesmateo-com-c-csort-compressed

But the awesome news is that csort with compression can also be implemented at load time, like csort opt read.

Additional improvements

1) Using overlay to save Memory and run even faster

Ideally min(m) and max(m) should be calculated.

Imagine that you have a universe of data composed of driver license ids. Say that those numbers range from 140,000,000 to 190,000,000.

There is no need to create an array of 190M items. Just create an array of 50M items (max(m) – min(m)) and save an overlay in a variable.

l_overlay = 140000000;

Then simply do your loops from 0 to max(m) – l_overlay and when you’re building the final linked list or the sequentially ordered final array add l_overlay to each value.

You will save memory and increase speed.
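
A minimal, self-contained C sketch of the overlay idea (with tiny illustrative ids so it runs in a few bytes; think of 140 as 140,000,000):

#include <stdio.h>

int main(void) {
    /* Illustrative "driver license ids" between 140 and 190 */
    long st_input[] = {187, 140, 155, 155, 170};
    long l_items    = 5;

    long l_overlay  = 140;              /* min(m)              */
    long l_range    = 190 - 140 + 1;    /* max(m) - min(m) + 1 */
    long st_sorted[51];
    long l_counter;

    for (l_counter = 0; l_counter < l_range; l_counter++) {
        st_sorted[l_counter] = -1;
    }

    /* csort shifted by the overlay, so the array only needs l_range positions */
    for (l_counter = 0; l_counter < l_items; l_counter++) {
        st_sorted[st_input[l_counter] - l_overlay] = st_input[l_counter] - l_overlay;
    }

    /* Rebuild the ordered, deduplicated list adding the overlay back */
    for (l_counter = 0; l_counter < l_range; l_counter++) {
        if (st_sorted[l_counter] > -1) {
            printf("%li ", st_sorted[l_counter] + l_overlay);
        }
    }
    printf("\n");

    return 0;
}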

2) Hybrid universes / spaces

Say you have a hybrid universe, composed of, let’s say, 75% of your values in the range 0-1M and 25% of your values being frontier values, e.g. between 35M and 900M.

    // Last value covered by the csort
    long l_last_frontier = 1000000;

    long l_counter_unsorted_array = 0;

    while( b_exit == 0 ) {
        i_fitems_read = fscanf(o_fp, "%li", &l_number);
        if (i_fitems_read < 1) {
            printf("Unable to read item %li from file.\n", l_counter);
            exit(EXIT_FAILURE);            
        }

        if (l_number > l_last_frontier) {
            st_values_unsorted[l_counter_unsorted_array] = l_number;
            l_counter_unsorted_array++;
        } else {
            st_values[l_number] = l_number;
        }
                       
        l_counter++;
        if (l_counter > l_max_values -1) break;
    }

You can then sort the 25% with another algorithm and merge the csorted array (it goes first) with the now-sorted 25% (bigger values, which go after), and you’ll have benefited from super-fast sorting for 75% of the data.

3) Hybrid sort big-O

For my tests, as long as you have memory, you can csort up to m = n * 10 and still have significant time savings with respect to Quicksort. So if you have a really huge universe you can implement a hybrid algorithm that csorts up to m = n * 10 (l_last_frontier = n * 10) and uses another algorithm to sort the rest, >m. Finally just append the sorted second block after the sorted results from the first block.

4) Hybrid sorting, with undetermined n and m

If you don’t know how big your space will be, like in a mapreduce, or what the max value is, you can also use the hybrid approach and define an array big enough, like 10M. If your arrays are based on 4-byte integers, it will take only ~38 MBytes of RAM. Peanuts. (Use the l_last_frontier seen in the previous point just in case you get bigger values.)

When you’re loading the data, and csorting using csort opt read, keep variables with the max and min values found, so you won’t have to parse all the 10M positions, just from min to max, in order to generate the final sorted, deduplicated, list or array.

It’s up to you to fine tune and adjust the size for the minimum memory consumption and maximum efficiency according to the nature of your data.
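
A minimal, self-contained C sketch of that min/max tracking (the incoming values are hard-coded here instead of read from disk; the “big enough” array is the 10M positions mentioned above, ~80 MB with 8-byte longs or ~38 MB with 4-byte ints):

#include <stdio.h>
#include <limits.h>

#define l_ARRAY_SIZE 10000000   /* "big enough" array of positions */

static long st_values[l_ARRAY_SIZE];

int main(void) {
    /* Illustrative incoming data; in a real case this comes from disk or a stream */
    long st_input[] = {501, 8, 733, 8, 12};
    long l_items    = 5;

    long l_min_found = LONG_MAX;
    long l_max_found = -1;
    long l_counter, l_number;

    for (l_counter = 0; l_counter < l_ARRAY_SIZE; l_counter++) {
        st_values[l_counter] = -1;
    }

    /* csort opt read, keeping track of the smallest and biggest values seen */
    for (l_counter = 0; l_counter < l_items; l_counter++) {
        l_number = st_input[l_counter];
        st_values[l_number] = l_number;
        if (l_number < l_min_found) l_min_found = l_number;
        if (l_number > l_max_found) l_max_found = l_number;
    }

    /* Only the min-max window needs to be walked to emit the sorted, deduplicated list */
    for (l_counter = l_min_found; l_counter <= l_max_found; l_counter++) {
        if (st_values[l_counter] > -1) {
            printf("%li ", st_values[l_counter]);
        }
    }
    printf("\n");

    return 0;
}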

5) Using less memory

If you have stored the data to sort in a Collection, you can shrink it one item at a time and so save memory for huge amounts of Data: add each item to the csort resulting array and remove it from the source Collection/LinkedList, one by one. With the sample of 50M registers with 1M different random values, it goes from 400 MB* of RAM to 4 MB of RAM in C. (* I’m calculating here a Linked List structure of 2 Long data types, one for the value and another for the next item, in C. In other languages the gain can be even more spectacular.)
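
A minimal, self-contained C sketch of this consume-as-you-csort idea (the node struct and the tiny data set are made up for illustration; the point is that each source node is freed as soon as its value has been placed in the csort array):

#include <stdio.h>
#include <stdlib.h>

#define l_MAX_VALUE 1000000

/* Hypothetical source structure: a simple singly linked list of values */
struct s_node {
    long l_value;
    struct s_node *o_next;
};

static long st_values_sorted[l_MAX_VALUE + 1];

int main(void) {
    long st_input[] = {5, 999999, 5, 42};
    struct s_node *o_head = NULL;
    struct s_node *o_node;
    long l_counter;

    /* Build a small source list (in a real case this is your incoming Collection) */
    for (l_counter = 3; l_counter >= 0; l_counter--) {
        o_node = malloc(sizeof(struct s_node));
        o_node->l_value = st_input[l_counter];
        o_node->o_next  = o_head;
        o_head = o_node;
    }

    for (l_counter = 0; l_counter <= l_MAX_VALUE; l_counter++) {
        st_values_sorted[l_counter] = -1;
    }

    /* Consume the list one node at a time: csort the value, then free the node,
       so the source memory shrinks while the destination array is being filled */
    while (o_head != NULL) {
        o_node = o_head;
        st_values_sorted[o_node->l_value] = o_node->l_value;
        o_head = o_node->o_next;
        free(o_node);
    }

    for (l_counter = 0; l_counter <= l_MAX_VALUE; l_counter++) {
        if (st_values_sorted[l_counter] > -1) {
            printf("%li ", st_values_sorted[l_counter]);
        }
    }
    printf("\n");

    return 0;
}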

csort and Big Data

With the progression of Big Data we find ourselves with seas of data. Those universes of data tend to have a much bigger number of registers (n) than range of values (m).

For example, in the case of devices monitoring temperatures every second, we will have 3,600 rows per hour, 86,400 rows per day, 31,536,000 per year, while the range of values read will be a very stable subset, like from 19.0 to 24.0 C. In this example, multiplying the float value by 10 so we always work with Integers, we would be dealing with a subset of 190-240, so m would be 51 if optimized, or 0-240 if not. If we want to get the unique temperatures for the past year, csort will generate just an Array of 51 items, crushing any ordering algorithm in space and time.
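
A minimal, self-contained C sketch of that temperature example, combining the ×10 integer trick, the overlay and csort with compression (the readings are made up; a real year would have 31,536,000 of them):

#include <stdio.h>

#define i_MIN_TEMP_X10 190   /* 19.0 C * 10, used as overlay */
#define i_MAX_TEMP_X10 240   /* 24.0 C * 10                  */
#define i_RANGE (i_MAX_TEMP_X10 - i_MIN_TEMP_X10 + 1)   /* 51 positions */

int main(void) {
    /* Illustrative readings */
    double st_readings[] = {21.5, 19.0, 21.5, 23.9, 21.5, 20.0};
    long l_items = 6;

    /* csort with compression: index = temp*10 - overlay, content = times seen */
    long st_counts[i_RANGE] = {0};
    long l_counter, l_index;

    for (l_counter = 0; l_counter < l_items; l_counter++) {
        /* +0.5 rounds the float before truncating, e.g. 23.9 -> 239 */
        l_index = (long)(st_readings[l_counter] * 10 + 0.5) - i_MIN_TEMP_X10;
        st_counts[l_index]++;
    }

    /* The unique temperatures of the period, already sorted, in a 51-item array */
    for (l_counter = 0; l_counter < i_RANGE; l_counter++) {
        if (st_counts[l_counter] > 0) {
            printf("%.1f C seen %li times\n",
                   (l_counter + i_MIN_TEMP_X10) / 10.0, st_counts[l_counter]);
        }
    }

    return 0;
}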

The bigger the Data is, the faster csort will perform compared to other sorting algorithms.

The complete code in C

blog-carlesmateo-com-csort-instructions

This is the complete source code. It supports all the cases and you can invoke it from command line with a parameter for the case number you want to test.

/* 
 * File:   main.c
 * Author: Carles Mateo
 *
 * Created on 2015-05-14 16:45
 */

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
#include <string.h>

/*
 * Change these values for your tests, and recompile it
 */
#define l_MAX_ITEMS 50000000
#define l_MAX_VALUE 1000000


static int const i_ALGORITHM_QUICKSORT        = 1;
static int const i_ALGORITHM_QUICKSORT_UNIQUE = 2;
static int const i_ALGORITHM_QUICKSORT2       = 3;
static int const i_ALGORITHM_CSORT_BASIC      = 7;
static int const i_ALGORITHM_CSORT_OPT1       = 8;
static int const i_ALGORITHM_CSORT_COMP       = 9;


#define s_HELP  "CSort by Carles Mateo\n"\
                "=====================\n"\
                "\n"\
                "This demonstrates a sort algorithm faster than quicksort,\n"\
                "as proposed in the article.\n"\
                "Please read the complete article at:\n"\
                "http://blog.carlesmateo.com\n"\
                "\n"\
                "Please invoke the program with one of these parameters:\n"\
                "\n"\
                "--help or without parameters for this help message\n"\
                "1 To test using quicksort\n"\
                "  Only sorts, does not deletes duplicates\n"\
                "2 To test using quicksort and removing duplicates\n"\
                "3 To test using another implementation of quicksort\n"\
                "  Only sorts, does not deletes duplicates\n"\
                "7 To test using csort (basic example, sorts and deletes duplicates)\n"\
                "8 To test using csort, improvement with indexes assigned when reading\n"\
                "  Use to watch the time saving from basic csort\n"\
                "9 To test using csort, with indexes assigned when reading and counting\n"\
                "  the number of coincidences (compressed output array)\n"\
                "\n"\
                "Please, make sure that the file carles_sort.txt is present in the\n"\
                "current directory.\n"\
                "\n"


// Allocated in the BSS segment (static storage), not on the stack
static long st_values[l_MAX_ITEMS];
// Only the max value is required (+1 because the value l_MAX_VALUE itself is a valid index)
static long st_values_sorted[l_MAX_VALUE + 1];
static long st_values_sorted_renum[l_MAX_VALUE + 1];

void init_values(long l_max_values);
void init_values_sorted(long l_max_values, long l_num);
void quicksort(long x[l_MAX_ITEMS], long first, long last);
void quicksort2(long arr[], long beg, long end);
void swap(long *a, long *b);
void printwdt(char s_string[]);

void init_values(long l_max_values) {
    long l_counter = 0;
    for (l_counter = 0; l_counter < l_max_values; l_counter++) {
        // Init to -1
        st_values[l_counter] = -1;
    }
}

void init_values_sorted(long l_max_values, long l_num) {
    long l_counter = 0;
    for (l_counter = 0; l_counter < l_max_values; l_counter++) {
        // Init to -1
        st_values_sorted[l_counter] = l_num;
    }
}

// Quicksort modified from original sample from http://www.cquestions.com/2008/01/c-program-for-quick-sort.html
 void quicksort(long x[l_MAX_ITEMS], long first, long last){
    long pivot,j,temp,i;

     if (first<last) {
         pivot=first;
         i=first;
         j=last;

         while(i<j){
             while(x[i]<=x[pivot]&&i<last)
                 i++;
             while(x[j]>x[pivot])
                 j--;
             if(i<j){
                 temp=x[i];
                  x[i]=x[j];
                  x[j]=temp;
             }
         }

         temp=x[pivot];
         x[pivot]=x[j];
         x[j]=temp;
         quicksort(x,first,j-1);
         quicksort(x,j+1,last);

    }
}

// Modified by Carles Mateo to use long based on http://alienryderflex.com/quicksort/
void quicksort2(long arr[], long beg, long end)
{
  if (end > beg + 1)
  {
    long piv = arr[beg], l = beg + 1, r = end;
    while (l < r)
    {
      if (arr[l] <= piv)
        l++;
      else
        swap(&arr[l], &arr[--r]);
    }
    swap(&arr[--l], &arr[beg]);
    quicksort2(arr, beg, l);
    quicksort2(arr, r, end);
  }
} 

void swap(long *a, long *b)
{
  long t=*a; *a=*b; *b=t;
}

 /* print with date and time
  */
 void printwdt(char s_string[]) {
    time_t timer;
    char buffer[26];
    struct tm* tm_info;

    time(&timer);
    tm_info = localtime(&timer);

    strftime(buffer, 26, "%Y-%m-%d %H:%M:%S", tm_info);
    
    printf("%s %s\n", buffer, s_string);
  
 }
 
int main(int argc, char** argv) {
    
    char s_filename[] = "carles_sort.txt";
    long l_max_values = l_MAX_ITEMS;
    long l_max_values_sorted = l_MAX_VALUE;
    char s_ch;
    FILE *o_fp;
    long l_counter=0;
    long l_counter_hit=0;
    long l_number;
    int b_exit = 0;
    int i_fitems_read = 0;
    int i_algorithm = 0;
    // Small counter for printing some final results
    int i_counter=0;
    // temp value for the final print
    long l_value=0;
    // Used for counting the number of results on optimized array
    long l_total_items_sorted = 0;
    long l_max_range;
    long l_items_found = 0;
    
    if (argc < 2) {
        printf(s_HELP);
        exit(EXIT_SUCCESS);
    }
        
    printwdt("Starting program...");
    if (strcmp(argv[1], "1") == 0) {
        i_algorithm = i_ALGORITHM_QUICKSORT;
        printwdt("Algorithm: Quicksort with duplicates");
    } else if (strcmp(argv[1], "2") == 0) {
        i_algorithm = i_ALGORITHM_QUICKSORT_UNIQUE;
        printwdt("Algorithm: Quicksort removing duplicates");
    } else if (strcmp(argv[1], "3") == 0) {
        i_algorithm = i_ALGORITHM_QUICKSORT2;
        printwdt("Algorithm: Quicksort alternative");
    } else if (strcmp(argv[1], "7") == 0) {
        i_algorithm = i_ALGORITHM_CSORT_BASIC;
        printwdt("Algorithm: csort basic");
    } else if (strcmp(argv[1], "8") == 0) {
        i_algorithm = i_ALGORITHM_CSORT_OPT1;
        printwdt("Algorithm: csort optimized read");
    } else if (strcmp(argv[1], "9") == 0) {
        i_algorithm = i_ALGORITHM_CSORT_COMP;
        printwdt("Algorithm: csort compress");
    }
    
    if (i_algorithm == 0) {
        printf(s_HELP);
        exit(EXIT_SUCCESS);        
    }
    
    if (i_algorithm == i_ALGORITHM_QUICKSORT || i_algorithm == i_ALGORITHM_QUICKSORT2) {
        // Use this if you want to make tests with partial data to find the end of the values
        //printf("Initializing %li values to -1...\n", l_max_values);
        //init_values(l_max_values);        
    }
    
    if (i_algorithm == i_ALGORITHM_CSORT_BASIC || i_algorithm == i_ALGORITHM_CSORT_OPT1 || i_algorithm == i_ALGORITHM_QUICKSORT_UNIQUE) {
        // We init to -1 as it could happen that some indexes have not a value,
        // and we want to know what's the case.
        printf("Initializing %li values to -1...\n", l_max_values_sorted + 1);
        init_values_sorted(l_max_values_sorted + 1, -1);        
    } 
    
    if (i_algorithm == i_ALGORITHM_CSORT_COMP) {
        printf("Initializing %li values_sorted to 0...\n", l_max_values_sorted + 1);
        init_values_sorted(l_max_values_sorted + 1, 0);        
    }

    
    o_fp = fopen(s_filename,"r"); // read mode
    
    if( o_fp == NULL ) {
        printf("File %s not found or I/O error.\n", s_filename);  
        perror("Error while opening the file.\n");
        exit(EXIT_FAILURE);
    }
    
    printf("Reading file %s :\n", s_filename);
    while( b_exit == 0 ) {
        i_fitems_read = fscanf(o_fp, "%li", &l_number);
        if (i_fitems_read < 1) {
            printf("Unable to read item %li from file.\n", l_counter);
            perror("File ended before reading all the items.\n");
            exit(EXIT_FAILURE);            
        }
        if (i_algorithm == i_ALGORITHM_QUICKSORT || i_algorithm == i_ALGORITHM_QUICKSORT_UNIQUE ||
            i_algorithm == i_ALGORITHM_QUICKSORT2 ||
            i_algorithm == i_ALGORITHM_CSORT_BASIC || i_algorithm == i_ALGORITHM_CSORT_COMP) {
            st_values[l_counter] = l_number;
        }
        if (i_algorithm == i_ALGORITHM_CSORT_OPT1) {
            st_values[l_number] = l_number;
            l_total_items_sorted++;
        }
        
        if (l_counter < 10) {
            // Echo the reads from the first 10 items, for debugging
            printf("Item %li: read %li\n", l_counter, l_number);            
        }

        l_counter++;
        if (l_counter > l_max_values -1) break;
    }
    
    printf("Total read values: %li\n", l_counter);
    printf("Sample values read from file: First item: %li Last item: %li\n", st_values[0], st_values[l_max_values -1]);
 
    fclose(o_fp);        
    
    printwdt("Sorting...");
    
    struct timeval time;
    gettimeofday(&time, NULL); //This actually returns a struct that has microsecond precision.
    long l_microsec_ini = ((unsigned long long)time.tv_sec * 1000000) + time.tv_usec;

    if (i_algorithm == i_ALGORITHM_QUICKSORT || i_algorithm == i_ALGORITHM_QUICKSORT_UNIQUE) {
        quicksort(st_values, 0, l_MAX_ITEMS-1);
        l_counter_hit = l_MAX_ITEMS;
    }
    
    if (i_algorithm == i_ALGORITHM_QUICKSORT2) {
        quicksort2(st_values, 0, l_MAX_ITEMS-1);
        l_counter_hit = l_MAX_ITEMS;
    }
    
    if (i_algorithm == i_ALGORITHM_QUICKSORT_UNIQUE) {        
        printwdt("Removing duplicates...");
        l_counter_hit = 0;
        long l_last_known_value = -1;
        for (l_counter=0; l_counter < l_max_values; l_counter++) {
            if (l_last_known_value != st_values[l_counter]) {
                l_last_known_value = st_values[l_counter];
                st_values_sorted_renum[l_counter_hit] = st_values[l_counter];
                l_counter_hit++;
            }
        }            
    }
    
    if (i_algorithm == i_ALGORITHM_CSORT_BASIC) {
        // Shrink and set the index
        for (l_counter=0; l_counter < l_max_values; l_counter++) {
            st_values_sorted[st_values[l_counter]] = st_values[l_counter];
        }

        // Resort for forearch
        l_max_range = l_max_values_sorted + 1;
        for (l_counter=0; l_counter < l_max_range; l_counter++) {
            if (st_values_sorted[l_counter] > -1 ) {
                st_values_sorted_renum[l_counter_hit] = st_values_sorted[l_counter];
                l_counter_hit++;
            }
        }        
    }
    
    if (i_algorithm == i_ALGORITHM_CSORT_OPT1) {
        l_max_range = l_max_values_sorted + 1;
        for (l_counter=0; l_counter < l_max_range; l_counter++) {
            if (st_values[l_counter] > -1 ) {
                st_values_sorted_renum[l_counter_hit] = st_values[l_counter];
                l_counter_hit++;
            }
        }        
    }
    
    if (i_algorithm == i_ALGORITHM_CSORT_COMP) {
        for (l_counter=0; l_counter < l_max_values; l_counter++) {
            l_value = st_values[l_counter];
            st_values_sorted[l_value] = st_values_sorted[l_value] + 1;
        }
        l_counter_hit = l_max_values_sorted + 1;
    }   
    
    
    struct timeval time_end;
    gettimeofday(&time_end, NULL); //This actually returns a struct that has microsecond precision.
    long l_microsec_end = ((unsigned long long)time_end.tv_sec * 1000000) + time_end.tv_usec;
    
    long l_microsec_total = l_microsec_end - l_microsec_ini;
    long l_milliseconds_total = l_microsec_total / 1000;
    double f_seconds = (double)l_milliseconds_total / 1000;
    printf("Time used: %li microseconds = %li milliseconds = %f seconds \n", l_microsec_total, l_milliseconds_total, f_seconds);
    
    printf("Total items in the final array %li\n", l_counter_hit);
    printf("Sample items in the final array:\n");

    if (i_algorithm == i_ALGORITHM_CSORT_COMP) {
        l_max_range = l_max_values_sorted + 1;        
        l_counter_hit = 0;

        for (l_counter=0; l_counter < l_max_range; l_counter++) { 
            if (st_values_sorted[l_counter] > 0) {
                l_counter_hit++;
                l_items_found = l_items_found + st_values_sorted[l_counter];                
            }            
        }
        printf("A final array of %li significant items, contains sorted lossless %li\n", l_counter_hit, l_items_found);
        
    } else {
        for (i_counter=0; i_counter<10000; i_counter=i_counter+1000) {
            /* if (st_values_sorted[i_counter] > -1) {
                // If you enjoy looking at values being printed or want to debug :)
                printf("item at position %i : %li\n", i_counter, st_values_sorted[i_counter]);
            }*/
            if (i_algorithm == i_ALGORITHM_QUICKSORT || i_algorithm == i_ALGORITHM_QUICKSORT2) {
                l_value = st_values[i_counter];
            } else {
                l_value = st_values_sorted_renum[i_counter];
            }
            printf("item at position %i : %li\n", i_counter, l_value);

        }
        
    }
    
    return (EXIT_SUCCESS);
}

My solution in PHP

As told before, I have implemented my algorithm in PHP, and it outperforms by far the PHP native functions (PHP sort() is a quicksort algorithm written in C, and so is the sort inside array_unique()). I explain everything below for the PHP lovers; it is the same as in C, but with special considerations for the PHP language. Even in PHP the efficiency of the algorithm is clearly demonstrated.

Warning: do not attempt to run the PHP code as-is on your machine unless you have at least 16 GB of RAM. If you have 8 GB of RAM you should reduce the scope to avoid your computer hanging or swapping very badly.

 

 

Printing of datetime is done by those functions:

function get_datetime() {
    return date('Y-m-d H:i:s');
}

function print_with_datetime($s_string) {
    echo get_datetime().' '.$s_string."\n";
}

So once we have our numbers generated we do a sort($st_test_array);

And we calculate the time elapsed with microtime for profiling:

$f_microtime_ini = microtime(true);

print_algorithm_type('PHP sort() method');
sort($st_test_array);

$f_microtime_end = microtime(true);

$f_microtime_resulting = $f_microtime_end - $f_microtime_ini;

echo "Microtime elapsed: $f_microtime_resulting\n";

blog-carlesmateo-com-php-sort-quicksort

In this case we did a PHP sort(), we didn’t even remove the duplicates, and it takes 88.5 seconds.

 

If we do a sort() and array_unique() to get the unique results sorted it takes: 741.27 seconds

blog-carlesmateo-com-php-sort-array_unique

We do the opposite, array_unique() and then sort(), and the result is: 402.45 seconds

blog-carlesmateo-com-php-array_unique-sort-slow

It takes a lot of time.

Please note that doing sort() and then array_unique() takes more time than doing it in the opposite order.

This is curious, as one may think that removing duplicates and then sorting should be much faster than sorting and then removing duplicates, given that just sorting takes 88 seconds. I found the explanation looking at the Source Code of PHP’s array_unique(). Internally it sorts the array (zend_qsort, line 2813) in order to remove the duplicates.

blog-carlesmateo-com-php-array_c-zend_qsort

So we were sorting twice, and sorting an already sorted array. This is the worst case for that quicksort, which explains the slowness.

Then comes what I engineered. PHP works with arrays that can have a numeric index. So my point is: the fastest way to sort an array is… not sorting it at all.

We know that the range of values we have is much smaller than the number of registers:

  • We have values between 0 and 1,000,000
  • We have 50,000,000 of registers

The PHP sort() function is written in C, so it should be faster than anything we program in PHP. Should be… really?

Knowing that the values are 0-1M and the registers are 50M, and knowing the number of operations that a quicksort will do, I figured out another way to improve this.

Look at this code:

function csort($st_test_array) {

    $s_algorithm = 'Carles_Sort';
    echo "Testing $s_algorithm\n";

    $i_loop_counter = 0;
    $st_result_array = Array();

    $i_ending_pos = count($st_test_array);
    for($i_pos = 0; $i_pos < $i_ending_pos; $i_pos++) {
        $i_loop_counter++;
        $i_value = $st_test_array[$i_pos];
        $st_result_array[$i_value] = $i_value;
    }

    return $st_result_array;

}

So, the only thing I’m doing is looping over the 50M array and, in a new array $st_result_array, setting the key to the same value as the integer value. So if the original $st_test_array[0] is equal to 574145, then I set $st_result_array[574145] = 574145. Easy.

Think about it.

With this we achieve three objectives:

  1. You eliminate duplicates. With 50M items, with values ranging from 0 to 1M, you’ll set a value many times. No problem, it will just reuse its index.
  2. We have the key assigned, matching the value, so you can reference every single value
  3. It’s amazingly fast. Much faster than any sorting, even in C

If you do a var_export, var_dump, print_r or a foreach, you will notice that the first item of the array has key 574145 and value 574145, the second 262240, the third 153341, the fourth 656736 and the fifth 879329. They are not ordered in the way one would expect of an ordered array parsed with foreach(), but the thing is: do we need to sort it? Not really. Every key is a hash that PHP will access very quickly. So we can do something like this to get the results ordered:

    $i_max_value = max($st_result_array);

    for($i_pos = 0; $i_pos <= $i_max_value; $i_pos++) {
        if (isset($st_result_array[$i_pos])) {
            $st_result_array2[] = $i_pos;
        }
    }

In this code we create a new array, with the values sorted in the sense that a foreach will return them sorted. But instead of traversing 50M registers, we only go up to the max value (as values and indexes are the same, the max value also tells us the max index), because we no longer have 50M registers, as we eliminated the duplicates, but 1M. And in a universe of 50M values from a range 0-1M, many are duplicates.

Even doing this second step, this sorting algorithm written in PHP is still much more efficient than the quicksort written in C.

I ported the code to C, with the same data file, and it takes 0.674 seconds (674 milliseconds) and it uses only ~1 GB of RAM.

blog-carlesmateo-com-csort-0_674-seconds

Take into consideration that if the data is directly loaded assigning the right index, as in the next code, it will be blazing fast:

    o_fp = fopen(s_filename,"r"); // read mode
    
    if( o_fp == NULL ) {
        printf("File %s not found or I/O error.\n", s_filename);  
        perror("Error while opening the file.\n");
        exit(EXIT_FAILURE);
    }
    
    printf("Reading file %s :\n", s_filename);
    while( 1 ) {
        fscanf(o_fp, "%li", &l_number);
        // The key is here ;)
        st_values[l_number] = l_number;
        l_counter++;
        if (l_counter > l_max_values -1) break;
    }
    
    printf("First item: %li Last item: %li\n", st_values[0], st_values[l_counter-1]);
 
    fclose(o_fp);

So if we only re-sort into another array to have the values sorted, eliminating the non-existing values (half the tasks we do in the previous program), it takes on the order of milliseconds.

blog-carlesmateo-com-csort-second-part-fast-0_1seconds

Using csort opt read with a busy computer (Firefox using 16 GB of RAM and many tabs) it takes 108 ms; with an idle computer, 3 ms.

Now imagine you don’t want to eliminate the duplicates.

You can create a memory-optimized array where the index is the number and the value represents the number of occurrences in the original file:

$st_example = Array( 0 => 1,
                     3 => 5,
                     4 => 1);

That would mean that the original sorted array, in form of list would be:

{0, 3, 3, 3, 3, 3, 4}

For the case of C, where the arrays have static size, a value of 0 would indicate there is no presence of that number in the ordered list, so:

st_example[0] = 1;
st_example[1] = 0;
st_example[2] = 0;
st_example[3] = 5;
st_example[4] = 1;

Even in the case of the resulting C array, it will be much smaller than the original 50M Array, as it will only have the size corresponding to the max value. In our sample, 1M.

In PHP you can get an Array similar to the csort compressed one by using a function called array_count_values(). It will not include the non-existing items as 0, so it would not generate st_example[1] = 0 nor st_example[2] = 0, but it would generate the others.

So conditions for the algorithm to work blazing fast:

  • The max number (max(m)) must ideally be lower than the number of registers (this is true for languages like C or Java. Languages like PHP grow the Array size dynamically and work with Arrays as Hashes, so it’s not strictly necessary, as we can have st_sorted[0] and st_sorted[1000000] and the array size will be only 2 items. In this case we can use even less memory than the csort algorithm and be faster by using foreach() instead of a for() from 0 to 1M)
  • The more the numbers repeat, the faster the algorithm will work

Key conclusions and Advantages:

  • If you directly load the values from disk assigning the value as the key as well, you’ll save one loop per item (n=50M in this universe) and make it faster (having only to parse count(m); in PHP it doesn’t have to be 0-m).
  • The fastest way to sort it is no sorting at all. You can have the values stored per key, and even if a foreach doesn’t start with 0, 1, 2… simply accessing the value index you need will be super-fast.

Source code in PHP

<?php
/**
 * User: Carles Mateo
 * Date: 2015-04-05 14:19
 */

error_reporting(E_ALL);
// To avoid raising warnings from HHVM
date_default_timezone_set('Europe/Andorra');

// This freezes the server, using more than 35 GB RAM
//define('i_NUM_VALUES', 500000000);
// 50M uses ~ 10 GB RAM
define('i_NUM_VALUES', 50000000);
define('i_MAX_NUMBER', 1000000);

define('i_EXIT_SUCCESS', 0);
define('i_EXIT_ERROR', 1);

abstract class CsortDemo {

    const i_ALGORITHM_QUICKSORT        = 1;
    const i_ALGORITHM_PHP_QSORT        = 4;
    const i_ALGORITHM_PHP_ARRAY_FLIP   = 5;
    const i_ALGORITHM_PHP_ARRAY_UNIQUE = 6;
    const i_ALGORITHM_CSORT_BASIC      = 7;
    const i_ALGORITHM_CSORT_OPT1       = 8;


    public static function displayHelp() {
        $s_help = <<<EOT
CSort.php by Carles Mateo
=========================

This demonstrates a sort algorithm faster than quicksort,
as proposed in the article.
Please read the complete article at:
http://blog.carlesmateo.com

Please invoke the program with one of these parameters:

--help or without parameters for this help message
1 To test using a quicksort algorithm written in PHP
4 To test using PHP sort() (is a quicksort algorithm)
5 To test using PHP array_flip()
6 To test using PHP unique() (is a quicksort algorithm)
7 To test using csort (basic example, sorts and deletes duplicates)
8 To test using csort, improvement with indexes assigned when reading
  Use to watch the time saving from basic csort


9 To test using csort, with indexes assigned when reading and counting
  the number of coincidences (compressed output array)

Please, make sure that the file carles_sort.txt is present in the
current directory.

EOT;

        echo $s_help;

    }

    public static function get_random_values($i_algorithm) {

        $st_values = Array();
        $i_value = 0;

        $s_values_filename = 'carles_sort.txt';

        if (file_exists($s_values_filename)) {
            self::print_with_datetime("Reading values...");
            if ($i_algorithm == CsortDemo::i_ALGORITHM_CSORT_OPT1) {
                $o_fp = fopen($s_values_filename, 'r');
                if ($o_fp) {
                    while(($s_line = fgets($o_fp)) !== false) {
                        $i_value = intval($s_line);
                        $st_values[$i_value] = $i_value;
                    }
                }
            } else {
                $st_values = file($s_values_filename);
                self::print_with_datetime("Ensuring numeric values...");
                foreach($st_values as $i_key=>$s_value) {
                    $i_value = intval($s_value);
                    $st_values[$i_key] = $i_value;
                }

            }
        } else {
            self::print_with_datetime("Generating values...");
            $st_values = self::generate_random_values();
            self::print_with_datetime("Writing values to disk...");
            self::save_values_to_disk($s_values_filename, $st_values);
            self::print_with_datetime("Finished writing values to disk");
        }

        return $st_values;
    }

    public static function save_values_to_disk($s_values_filename, $st_values) {
        $i_counter = 0;
        $f_advance_percent = 0;
        $i_max_values = 0;

        $i_max_values = count($st_values);

        $o_file = fopen($s_values_filename, 'w');
        foreach ($st_values as $i_key => $s_value) {
            $i_counter++;
            fwrite($o_file, $s_value.PHP_EOL);
            if (($i_counter % 100000) == 0) {
                // every 100K numbers we show a status message
                $f_advance_percent = ($i_counter / $i_max_values)*100;
                self::print_with_datetime("Item num: $i_counter ($f_advance_percent%) value: $s_value written to disk");
            }
        }
        fclose($o_file);
    }

    public static function generate_random_values() {

        $i_num_values = i_NUM_VALUES;
        $i_max_number = i_MAX_NUMBER;

        $st_values = array();

        for($i_counter = 0; $i_counter<$i_num_values; $i_counter++) {
            $i_random = rand(0, $i_max_number);
            $st_values[] = $i_random;
        }

        return $st_values;

    }

    public static function get_datetime() {
        return date('Y-m-d H:i:s');
    }

    public static function print_with_datetime($s_string) {
        echo self::get_datetime().' '.$s_string."\n";
    }

    public static function print_algorithm_type($s_algorithm) {
        echo "Testing $s_algorithm\n";
    }

    public static function print_results($st_values, $i_initial_pos_to_display, $i_number_of_results_to_display) {

        self::print_with_datetime("Showing results from $i_initial_pos_to_display, $i_number_of_results_to_display items :");

        $i_counter = 0;
        $i_counter_found = 0;
        for($i_counter=0; ($i_counter_found<$i_number_of_results_to_display && (($i_counter + $i_initial_pos_to_display) < i_MAX_NUMBER)); $i_counter++) {
            $i_pos = $i_initial_pos_to_display + $i_counter;
            if (isset($st_values[$i_pos])) {
                echo '['.$i_pos.'] '.$st_values[$i_pos]."\n";
                $i_counter_found++;
            }
        }

    }

    // Found on the web
    public static function quicksort( $array ) {
        if( count( $array ) < 2 ) {
            return $array;
        }
        $left = $right = array( );
        reset( $array );
        $pivot_key  = key( $array );
        $pivot  = array_shift( $array );
        foreach( $array as $k => $v ) {
            if( $v < $pivot )
                $left[$k] = $v;
            else
                $right[$k] = $v;
        }
        return array_merge(self::quicksort($left), array($pivot_key => $pivot), self::quicksort($right));
    }

    public static function csort_basic($st_test_array) {
        // Note that even if we do not explicitly tell PHP to pass the array by reference, it behaves like that (copy-on-write),
        // and we do not use more memory unless we try to write to the original array


        $s_algorithm = 'Carles_sort_basic';
        echo "Testing $s_algorithm\n";

        $st_result_array = Array();
        $st_result_array2 = Array();

        $i_min_value_pos = null;
        $i_max_value     = null;
        $i_max_value_pos = null;

        $i_temp_value = null;
        $i_loop_counter = 0;

        $i_num_items = count($st_test_array);

        $i_initial_pos = 0;
        $i_ending_pos  = $i_num_items;

        for($i_pos = $i_initial_pos; $i_pos < $i_ending_pos; $i_pos++) {
            $i_loop_counter++;

            $i_value = $st_test_array[$i_pos];
            $st_result_array[$i_value] = $i_value;

            // If you want to debug a bit
            //var_export($st_result_array); sleep(3);

            // If you want to debug...
            //if ($i_loop_counter % 1000 == 0) {
                //echo "Loop: $i_loop_counter i_pos: $i_pos Value: $i_value\n";
                //print_results($st_test_array, 0, 10);
                //print_results($st_test_array, $i_num_items - 2, 2);
            //}

        }

        // Return in the way of a foreach sorted Array
        $i_max_value = max($st_result_array);

        for($i_pos = 0; $i_pos <= $i_max_value; $i_pos++) {
            if (isset($st_result_array[$i_pos])) {
                $st_result_array2[] = $i_pos;
            }
        }

        return $st_result_array2;

    }

}

    // Checking parameters
    if (count($argv) < 2) {
        // Display help and exit
        CsortDemo::displayHelp();
        exit(i_EXIT_SUCCESS);
    }

    $i_algorithm = intval($argv[1]);

    CsortDemo::print_with_datetime('Starting program...');

    $st_test_array = CsortDemo::get_random_values($i_algorithm);

    $i_num_items = count($st_test_array);
    CsortDemo::print_with_datetime("Starting sorting with $i_num_items items, i_MAX_NUMBER was ".i_MAX_NUMBER);

    $i_loop_counter = 0;

    $i_initial_pos = 0;
    $i_ending_pos = $i_num_items - 1;

    $i_min_value     = i_MAX_NUMBER * 2; // A high number for the tests
    $i_min_value_pos = null;
    $i_max_value     = null;
    $i_max_value_pos = null;

    $i_temp_value = null;

    $b_changes_done = false;
    $b_exit = false;


    // TODO Test array_count_values
    $f_microtime_ini = microtime(true);

    if ($i_algorithm == CsortDemo::i_ALGORITHM_QUICKSORT) {
        $s_algorithm = 'Quicksort written in PHP';
        CsortDemo::print_algorithm_type($s_algorithm);
        CsortDemo::quicksort($st_test_array);
    }

    if ($i_algorithm == CsortDemo::i_ALGORITHM_CSORT_BASIC) {
        $st_test_array = CsortDemo::csort_basic($st_test_array);
    }

    if ($i_algorithm == CsortDemo::i_ALGORITHM_PHP_QSORT) {
        CsortDemo::print_algorithm_type('PHP sort() method');
        sort($st_test_array);
    }

    if ($i_algorithm == CsortDemo::i_ALGORITHM_PHP_ARRAY_FLIP) {
        CsortDemo::print_algorithm_type('PHP array_flip() method');
        $st_test_array = array_flip($st_test_array);
    }

    if ($i_algorithm == CsortDemo::i_ALGORITHM_PHP_ARRAY_UNIQUE) {
        CsortDemo::print_algorithm_type('PHP array_unique() method');
        $st_test_array = array_unique($st_test_array);
    }

    $f_microtime_end = microtime(true);

    $f_microtime_resulting = $f_microtime_end - $f_microtime_ini;

    echo "Microtime elapsed: $f_microtime_resulting\n";

    CsortDemo::print_with_datetime('Results:');
    //print_r($st_test_array);
    $i_total_items = count($st_test_array);
    echo 'Total items:'.$i_total_items."\n";
    CsortDemo::print_results($st_test_array, 0, 10);

    $i_last_number  = array_pop($st_test_array);
    $i_last_number1 = array_pop($st_test_array);
    CSortDemo::print_with_datetime('Last values:');
    echo $i_last_number1."\n";
    echo $i_last_number."\n";

    CsortDemo::print_with_datetime('Finished program');

PHP under HHVM

Additionally I executed csort.php in Facebook’s HipHop Virtual Machine, version 3.4.0-dev, and as expected everything is much faster than with standard PHP.

For example:

  • Using PHP sort() 88-104 seconds, using HHVM PHP sort() 15.8 seconds
  • Using csort basic from PHP 34 to 40 seconds, using csort basic from HHVM 3.4 to 5.13 seconds

The version of PHP used for these tests was:

PHP 5.5.9-1ubuntu4.9 (cli) (built: Apr 17 2015 11:44:57)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
with Zend OPcache v7.0.3, Copyright (c) 1999-2014, by Zend Technologies

The version of hhvm used for the test was:

HipHop VM 3.4.0-dev (rel)
Compiler: heads/master-0-g96dec459bb84f606df2fe15c8c82df684b20ed83
Repo schema: 0100f1aaeeacd7b9d9a369b34ae5160acb0d1163
Extension API: 20140829

I must also mention that some operations, like initializing the arrays, are super-fast with HHVM.

blog-carlesmateo-com-hhvm-executing-csort-php-5s

The solution in Java

The code has been executed with Oracle’s JDK 1.7.

I love Just In Time compilers. They are amazingly fast.

For this case, the results with Java, for the universe of 50M items, range 0-1M of values:

  • quicksort in Java: 3555 ms = 3.555 seconds, versus ~10 seconds in C
    Important: in C I had to disable the -O2 optimization because of what looks like a bug in gcc (only detected in csort with compression). With -O2, times were around 4.7 seconds
  • csort.java basic: 26 ms to 120 ms = 0.026 seconds, versus 0.5 seconds for csort in C
  • csort.java opt read: 2-3 ms = 0.002-0.003 seconds, versus 3 ms = 0.003 seconds in C
  • csort.java with compression and duplicates support: 160 ms = 0.16 seconds, versus 0.272-0.277 seconds in C
  • The execution in those languages (Java and C) is so fast that it's hard to measure accurately, as any random background process in Ubuntu will slightly increase the times (see the timing sketch below)


Quicksort in Java, 3555 ms = 3.555 seconds


Csort basic with Java, 26 milliseconds


Csort read opt with Java, 5 milliseconds
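
Because the runs are so short, I repeat each measurement several times and keep the best run. Below is a minimal timing harness illustrating that approach; it is only a sketch, not part of csort.java, and the class and method names (TimingHarness, timeBestMs) are mine:

// TimingHarness.java - a small sketch for reducing measurement noise:
// repeat the run several times and keep the best (minimum) time,
// using System.nanoTime() for better resolution than currentTimeMillis().
public class TimingHarness {

    // Runs o_task i_runs times and returns the best time in milliseconds
    public static double timeBestMs(Runnable o_task, int i_runs) {
        long l_best_ns = Long.MAX_VALUE;
        for (int i_run = 0; i_run < i_runs; i_run++) {
            long l_ini_ns = System.nanoTime();
            o_task.run();
            long l_elapsed_ns = System.nanoTime() - l_ini_ns;
            if (l_elapsed_ns < l_best_ns) {
                l_best_ns = l_elapsed_ns;
            }
        }
        return l_best_ns / 1000000.0;
    }

    public static void main(String[] args) {
        // Example: time a dummy workload 5 times and print the best run
        final int[] st_data = new int[10000000];
        double f_best_ms = timeBestMs(new Runnable() {
            public void run() {
                for (int i = 0; i < st_data.length; i++) {
                    st_data[i] = i % 1000;
                }
            }
        }, 5);
        System.out.println("Best run: " + f_best_ms + " ms");
    }
}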

The code in Java:

/*
 * This code, except quicksort implementation, is written by Carles Mateo, 
 * and published as Open Source.
 */
package csort;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

/**
 *
 * @author carles
 */
public class Csort {

    public static final int i_MAX_REGISTERS = 50000000;
    public static final int i_MAX_VALUE     = 1000000;
    public static final String s_FILE = "/home/carles/Desktop/codi/php/tests/carles_sort.txt";
    
    public static final int i_ERROR_SUCCESS = 0;
    public static final int i_ERROR_FAILURE = 1;
    
    public static final int i_ALGORITHM_QUICKSORT        = 1;
    public static final int i_ALGORITHM_CSORT_BASIC      = 7;
    public static final int i_ALGORITHM_CSORT_OPT1       = 8;
    public static final int i_ALGORITHM_CSORT_COMP       = 9;

    // Original values read
    public int[] st_values; 
    // Our Csorted array
    public int[] st_values_sorted;
    // To re-sort into a list-like array
    public int[] st_values_sorted_renum;
    
    // For quicksort
    private int[] numbers;
    private int number;    
    
    public void systemExit(String s_reason, int i_error_code) {
        printWithDatetime("Exiting to system, cause: " + s_reason);
        System.exit(i_error_code);
    }
    
    public static String getDateTime() {
        DateFormat o_dateformat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        Date o_date = new Date();
        
        return o_dateformat.format(o_date);
    }
    
    public static void printWithDatetime(String s_message) {
        System.out.println(getDateTime() + " " + s_message);
    }
    
    public void printAlgorith(String s_algorithm) {
        Csort.printWithDatetime("Using algorithm: " + s_algorithm);
    }
    
    public static void showHelp() {
        
        String s_help = "" + 
            "CSort.java by Carles Mateo\n" +
            "==========================\n" +
            "\n" +
            "This demonstrates a sort algorithm faster than quicksort,\n" +
            "as proposed in the article.\n" +
            "Please read the complete article at:\n" +
            "http://blog.carlesmateo.com\n" +
            "\n" +
            "Please invoke the program with one of these parameters:\n" +
            "\n" +
            "--help or without parameters for this help message\n" +
            "1 To test using quicksort\n" +
            "  Only sorts, does not deletes duplicates\n" +
            "7 To test using csort (basic example, sorts and deletes duplicates)\n" +
            "8 To test using csort, improvement with indexes assigned when reading\n" +
            "  Use to watch the time saving from basic csort\n" +
            "\n" +
            "Please, make sure that the file carles_sort.txt is present in the\n" +
            "current directory.\n" +
            "\n";
        
        System.out.println(s_help);
        
    }
    
    public int readFile(int i_algorithm) {
        BufferedReader br = null;

        int i_value = 0;
        int i_counter = 0;        
        
        try {

                String sCurrentLine;

                br = new BufferedReader(new FileReader(Csort.s_FILE));

                while ((sCurrentLine = br.readLine()) != null) {
                    i_value = Integer.parseInt(sCurrentLine);
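                    // Read-optimized variant: place each value directly at its own index,
                    // so the array is already "csorted" by the time reading finishes.
                    // Otherwise, store values sequentially in read order.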
                    if (i_algorithm == Csort.i_ALGORITHM_CSORT_OPT1) {
                        this.st_values[i_value] = i_value;
                    } else {
                        this.st_values[i_counter] = i_value;
                    }
                    
                    if (i_counter % 500000 == 0) {
                        float f_percent = ((float) i_counter / Csort.i_MAX_REGISTERS) * 100;
                        String s_percent_msg = "Read register " + i_counter + " value " + i_value + " " + f_percent + "% read";
                        System.out.println(s_percent_msg);
                    }
                    i_counter++;
                }

        } catch (IOException e) {
                e.printStackTrace();
        } finally {
                try {
                    if (br != null) br.close();
                } catch (IOException ex) {
                    ex.printStackTrace();
                    this.systemExit("Error processing the file: " + Csort.s_FILE, Csort.i_ERROR_FAILURE);
                }
        }
        
        return i_counter;
    }
    
    // http://www.vogella.com/tutorials/JavaAlgorithmsQuicksort/article.html
    public void qsort(int[] values) {
        // check for empty or null array
        if (values == null || values.length == 0){
            return;
        }
        this.numbers = values;
        number = values.length;
        quicksort(0, number - 1);
    }    
    
    private void quicksort(int low, int high) {
        int i = low, j = high;
        // Get the pivot element from the middle of the list
        int pivot = numbers[low + (high-low)/2];

        // Divide into two lists
        while (i <= j) {
          // If the current value from the left list is smaller than the pivot
          // element, then get the next element from the left list
          while (numbers[i] < pivot) {
            i++;
          }
          // If the current value from the right list is larger than the pivot
          // element, then get the next element from the right list
          while (numbers[j] > pivot) {
            j--;
          }

          // If we have found a value in the left list which is larger than
          // the pivot element, and if we have found a value in the right list
          // which is smaller than the pivot element, then we exchange the
          // values.
          // As we are done we can increase i and j
          if (i <= j) {
            exchange(i, j);
            i++;
            j--;
          }
        }
        // Recursion
        if (low < j)
          quicksort(low, j);
        if (i < high)
          quicksort(i, high);
    }

    private void exchange(int i, int j) {
        int temp = numbers[i];
        numbers[i] = numbers[j];
        numbers[j] = temp;
    }
    // End of http://www.vogella.com/tutorials/JavaAlgorithmsQuicksort/article.html
  
    public void csortBasic() {
        int i_counter = 0;
        int i_value = 0;
        int i_max_range = 0;
        int i_counter_hit = 0;
        
        try {
            for (i_counter=0; i_counter < i_MAX_REGISTERS; i_counter++) {
                i_value = this.st_values[i_counter];
                this.st_values_sorted[i_value] = i_value;
            }

            // Re-sort into a compact, foreach-friendly array
            // Values go from 0 to i_MAX_VALUE, so iterate over i_MAX_VALUE + 1 positions
            i_max_range = i_MAX_VALUE + 1;
            for (i_counter=0; i_counter < i_max_range; i_counter++) {
                if (this.st_values_sorted[i_counter] > -1 ) {
                    this.st_values_sorted_renum[i_counter_hit] = this.st_values_sorted[i_counter];
                    i_counter_hit++;
                }
            }
        } catch (Exception e) {
            printWithDatetime("Exception captured at i_counter: " + i_counter);
            this.systemExit("Exception, so ending", i_ERROR_FAILURE);
        }        
        
    }
    
    public void csortOptimized() {
        int i_counter = 0;
        int i_value = 0;
        int i_max_range = 0;
        int i_counter_hit = 0;
        
        try {
            // It was sorted while reading :)

            // Re-sort into a compact array for foreach/list-style lovers
            // Values go from 0 to i_MAX_VALUE, so iterate over i_MAX_VALUE + 1 positions
            i_max_range = i_MAX_VALUE + 1;
            for (i_counter=0; i_counter < i_max_range; i_counter++) {
                if (this.st_values[i_counter] > -1 ) {
                    this.st_values_sorted_renum[i_counter_hit] = this.st_values[i_counter];
                    i_counter_hit++;
                }
            }
        } catch (Exception e) {
            printWithDatetime("Exception captured at i_counter: " + i_counter);
            this.systemExit("Exception, so ending", i_ERROR_FAILURE);
        }        
        
    }    
    
    public void csortCompress() {
        int i_counter = 0;
        int i_value = 0;
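        // "Compressed" variant: instead of storing each value, count how many times it
        // appears, so duplicates are preserved while keeping a single linear pass.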
        
        for (i_counter=0; i_counter < i_MAX_REGISTERS; i_counter++) {
            i_value = this.st_values[i_counter];
            this.st_values_sorted[i_value] = this.st_values_sorted[i_value] + 1;
        }

    }
    
    public void printItems(int i_num_items, int[] st_array) {
        int i_counter = 0;
        int i_value   = 0;
        
        for(i_counter=0;i_counter<i_num_items;i_counter++) {
            i_value = st_array[i_counter];
            System.out.println("Item[" + i_counter + "] value: " + i_value);
        }
    }
        
    public Csort(int i_registers, int i_max_value) {
        
        // int range is -2,147,483,648 to 2,147,483,647
        this.st_values = new int[i_registers];
        // If max value is 10 then we could have values from 0 to 10, so a 11 items Array
        this.st_values_sorted = new int[i_max_value + 1];
        this.st_values_sorted_renum = new int[i_max_value + 1];
        
    }
    
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
                
        long l_milliseconds_ini   = 0;
        long l_milliseconds_end   = 0;
        long l_milliseconds_total = 0;
        
        int i_items_read = 0;
        
        int i_algorithm = 0;
        String s_algorithm = "";
        
        int i_counter = 0;
        
        /* Check parameter consistency */
        // Display help unless we got exactly one numeric, non-zero parameter (covers --help too)
        if (args.length != 1 || !args[0].matches("[0-9]+") || Integer.valueOf(args[0]) == 0) {
            showHelp();
            System.exit(i_ERROR_SUCCESS);
        }
        
        i_algorithm = Integer.valueOf(args[0]);
        
        Csort o_csort = new Csort(Csort.i_MAX_REGISTERS, Csort.i_MAX_VALUE);
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_BASIC || i_algorithm == Csort.i_ALGORITHM_CSORT_OPT1 || i_algorithm == Csort.i_ALGORITHM_QUICKSORT) {
            // We init empty positions to -1 to differentiate empty items from the value 0 (that is valid)
            Csort.printWithDatetime("Initializing values to -1...");
            // The sorted arrays have i_MAX_VALUE + 1 positions (values 0..i_MAX_VALUE), so use <=
            for(i_counter=0; i_counter <= Csort.i_MAX_VALUE; i_counter++) {
                o_csort.st_values_sorted[i_counter] = -1;
                o_csort.st_values_sorted_renum[i_counter] = -1;
            }            
        } else if (i_algorithm == Csort.i_ALGORITHM_CSORT_COMP) {
            // For the compressed version the positions store counters, so we init them to 0 (no occurrences yet)
            Csort.printWithDatetime("Initializing values to 0...");
            for(i_counter=0; i_counter <= Csort.i_MAX_VALUE; i_counter++) {
                o_csort.st_values_sorted[i_counter] = 0;
            }            
        } else {
            showHelp();
            System.exit(i_ERROR_SUCCESS);            
        }        
        
        Csort.printWithDatetime("Starting csort.java...");
        
        Csort.printWithDatetime("Reading file...");
        i_items_read = o_csort.readFile(i_algorithm);
        Csort.printWithDatetime("Registers read:" + i_items_read);
        
        l_milliseconds_ini = System.currentTimeMillis();
        
        if (i_algorithm == Csort.i_ALGORITHM_QUICKSORT) {            
            s_algorithm = "Quicksort";
            o_csort.printAlgorith(s_algorithm);
            
            o_csort.qsort(o_csort.st_values);
        }
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_BASIC) {
            s_algorithm = "Csort basic";
            o_csort.printAlgorith(s_algorithm);
            
            o_csort.csortBasic();
        }
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_OPT1) {
            s_algorithm = "Csort Read optimized";
            o_csort.printAlgorith(s_algorithm);
            
            o_csort.csortOptimized();            
        }
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_COMP) {
            s_algorithm = "Csort compress with duplicates";
            o_csort.printAlgorith(s_algorithm);
            
            o_csort.csortCompress();
        }        
                
        l_milliseconds_end = System.currentTimeMillis();
        
        l_milliseconds_total = l_milliseconds_end - l_milliseconds_ini;
        
        Csort.printWithDatetime("Algorithm execution time: " + l_milliseconds_total + " milliseconds");
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_BASIC || i_algorithm == Csort.i_ALGORITHM_CSORT_OPT1) {
            o_csort.printItems(10, o_csort.st_values_sorted_renum);
        }
        
        if (i_algorithm == Csort.i_ALGORITHM_QUICKSORT) {
            o_csort.printItems(10, o_csort.numbers);
        }
        
        if (i_algorithm == Csort.i_ALGORITHM_CSORT_COMP) {
            o_csort.printItems(10, o_csort.st_values_sorted);
        }
                        
    }
    
}
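
The listing above expects the input file carles_sort.txt to already exist at the path hard-coded in Csort.s_FILE. In case you want to reproduce the test on its own, here is a small, hypothetical generator (the class name GenerateInput is mine, it is not part of the original code) that writes i_MAX_REGISTERS random values in the range 0 to i_MAX_VALUE, one per line:

package csort;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Random;

// GenerateInput.java - hypothetical helper to create the test input file:
// one random integer per line, in the range 0..i_MAX_VALUE, i_MAX_REGISTERS lines.
public class GenerateInput {

    public static void main(String[] args) {
        Random o_random = new Random();
        BufferedWriter o_writer = null;

        try {
            o_writer = new BufferedWriter(new FileWriter(Csort.s_FILE));
            for (int i_counter = 0; i_counter < Csort.i_MAX_REGISTERS; i_counter++) {
                int i_value = o_random.nextInt(Csort.i_MAX_VALUE + 1);
                o_writer.write(Integer.toString(i_value));
                o_writer.newLine();
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (o_writer != null) o_writer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}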

 

Note: this article is growing. I'll be expanding it, adding more samples, detailing timings from different universes I've tested, and adding screenshots, especially in the PHP section, to add more tricks and whys, but I wanted to lay the groundwork to be able to discuss it with some colleagues.

What is PrototypeC and how you can DIY

The C with the external keyboard+touchpad and PHPStorm

It all started with my best friend, and the best Engineer I know, E. H. We often meet to do hackathons and we explore node, go, patterns, technologies… Sometimes we just meet and improvise a hackathon.

I've been carrying a laptop with me almost every single day of my life for many years (that's why I came up with some jokes about the gym and IT to have fun with my friends, like: we IT guys go to the gym to get strong enough to carry a laptop everywhere, all the time, without effort).

But some of my friends suffer from back problems and can't carry weight, so they don't bring a laptop with them unless it's necessary.

There are some other friends, Sysadmins in the Operations field, who sometimes get alerts, so they have to carry a laptop with them when they're on call; but sometimes an alert comes when they're off call, nobody else can fix it, and they don't have a computer with them.

I'm also concerned about privacy, and sadly most smartphone makers install crappy Software that spies on users, sends keystrokes to third-party companies, etc… So having a portable device, as small as a credit card, where you can type your business ideas or your thoughts with some privacy, or that you can use to fix remote servers when they break, looked appealing. This also fits with my private messenger c-client.

So, when I configured my Raspberry Pi 2 with OSMC and started to install Software, program crons, etc., I realized it is a tiny computer, capable enough. And then an idea came to me.

What if I could make this lightweight motherboard a wearable computer?

And I started to play with this.

OS

I first needed an interesting enough OS, that I or my DevOps friends would like to use.

I chose Debian Jessie for ARM (armhf). The Raspberry Pi 2 uses ARMv7, which is a very important improvement with respect to previous Raspberry models, not only in performance but also in floating point. That simplifies many things, and many more Software packages are available (like Snappy Ubuntu Core or Windows 10).

Using Debian Jessie provided me with a very basic system that really uses little RAM: only 65 MB, with all the supported Wifi firmwares.

Then I installed X-Windows, with LXDE as Desktop Manager. Everything uses only 120 MB RAM.

I installed several packages, to highlight the epiphany web browser, Open Java Runtime Environment, and PHPStorm IDE.

I also use Ubuntu 15.04 Desktop for ARM.

Battery

Unlike the first-generation Raspberry Pi, the Raspberry Pi 2 uses a standard 5V micro-USB input, like most Android phones. So I decided to do some tests about sustained energy input with batteries.

First I calculated the energy consumption of the Pi 2, plus Ethernet and plus some USB devices I would like to attach.

I found that common power banks, those you use to charge your smartphone when its battery is depleted and you're on the run, provide a continuous energy output that works perfectly well. I tested it by running the Raspberry Pi 2 for hours, playing video and doing live updates, flawlessly.

Then the first version of the Prototype C was born.

Costs:

  • $40 Raspberry Pi 2
  • $10 Powerbank 2000 mAh
  • $15 USB Wifi-N card

It weighs 160 grams / 0.3527 pounds / 5.64 ounces.

Some of the powerbanks are so cool that they allow you to charge them while they are providing energy to the output at the same time. So you can have the Prototype C continuously running, and if the battery is about to deplete you can just plug it into a power socket at a bar, and the C will continue running non-stop while the battery charges, so in a while you can continue your walk without stopping the C.

Prototype C-2 Spartan: 5000 mAh solar powered Levin battery, Raspberry Pi 2, blue micro-USB connector, Ziron blue Z-cable

Later I bought a Levin solar-powered battery, which is lightweight, protected against impacts, has two outputs (1 A and 2 A) and charges while exposed to the sun, so I can have the battery on the outside of my bag, and while I walk in the city it is charged at a max rate of 200 mA. This is ideal for people hiking in the mountains or traveling the world, and it can save lives if a problem comes up and the phone's battery is depleted.

I also think of children in Africa or in poor countries who have to walk a long way to school. That walk could help charge the battery, so they can study at school and continue studying at home.

With the C, a solar-powered battery also allows blogging from anywhere. :)

You don't even need a cable, just a small USB-micro-USB connector

Note: this Levin solar battery cannot be charged and provide energy to the device at the same time.

Note: you can also power it from the car lighter socket with an adapter.

Monitor

That part is not easy.

The monitors I was interested in were not shipped to Catalonia. Finally I was able to find some that interested me on Amazon.es.

Most of those monitors work with the first-generation Raspberry Pi, or with Arduino, Banana Pi, etc… but not with the Raspberry Pi 2. So this is the first thing to be careful about.

Others of those monitors get the signal from the Raspberry Pi GPIO (General Purpose Input/Output). They need some kernel patching and binaries, usually not Open Source, and that also means that certain kinds of direct drawing in X are not displayed on a monitor attached via GPIO.

[Image: Raspberry Pi GPIO pins]

Other monitors require an external power source, often 12 V, and so are very bulky. They are not a fit for our purpose.

Normally those monitors are touch screens; some take power from the GPIO and also handle the touch input from there, but others require two USB connections: one for power and another for the touch controller.

You also have to consider whether they have an external power on/off button.

Also, those monitors have extremely low resolutions, like 320×480. I only found one model supporting 800×600, and it is a 5″.

And finally you have to take the power consumption into consideration.

I solved this by buying power banks that have a 2 or 2.5 A output.

Simple power banks provide a 1 A output; better ones provide two outputs: 1 A for smartphones, and 2 A, normally for tablets.

Part of the cool thing is the possibility of having the monitor in one place, and the Pi and the battery in another, for example to create wearables and other cool stuff.

As I wanted to provide a DIY option for people, and an elegant, cheap, lightweight solution, the difficulty of finding those monitors was a gap, so I explored several ways to use cheaper screens until I asked myself: what does every engineer (and most people) carry every day? The smartphone.

USB Wifi

As I wanted to have Wifi, I needed a USB Wifi adapter that works with the Raspberry Pi 2 and Debian Jessie.

I had some problems at the beginning (caused by a bug in wicd), but I managed to make everything work. I have made it work flawlessly with two different Wifi card models: TP-Link Nano and Asus Nano USB-N10.

Nothing stops you from adding two USB Wifi adapters to the device. In fact that is my preferred option, as I use one to stay connected/controlled/displayed on the phone, and another to connect to the Wifi of the place I am in. That way I don't use my mobile data plan that much when I'm outside. In this case I recommend using different cards (with different supported chipsets) to save yourself headaches configuring which one does what (unless you like to fight with configurations until you master everything :).

Some USB Wifi devices also include Bluetooth, which is very interesting for future prototypes that control other devices (car radio, headsets, remote commands…).

Controlling the C from the smartphone

I started to test certain things to overcome the display difficulties, and finally I had a cool idea and found a very nice option.

Here, configuring the VNC client to access display :1

Sharing the Internet from my iPhone creates an internal network of the type 172.20.10.x.

Sharing the Internet connection from the phone and connecting the C automatically creates a private network, with full network visibility between the devices.

So with an ssh app for iPhone like Server Auditor I was able to connect to the C through ssh, and with a VNC client, like the RealVNC client, I was able to connect to the Desktop of the Prototype C, control it, send keystrokes, and zoom in and out. Both apps are free to download and use.

It works the same for Android: sharing the Internet connection creates an internal network with another range, and there are also free apps for the purpose.

The ip assigned is normally the same, 172.20.10.9 in my case.

The only thing required was to set up the C to auto-connect via Wifi to my phone's shared connection, and then connect to the C device's IP.

This is very easily done with wicd from LXDE.

Mark Automatically connect to this Network

I configured the VNC server so that I can access the same display as the HDMI output, so I can switch easily between working from the smartphone and from HDMI, and I also configured the VNC server on another display at a 1024×768 resolution. That was more comfortable for working only with the small screen of my iPhone 4, although setting a lower resolution, like 800×600, makes it even easier to work with tiny screens.

With the VNC client you can zoom in and out with the classic gestures and use the keyboard.

Not only this: I allowed several of my friends to have their own independent sessions on the same C. So with only one C, several Engineers can develop or do DevOps stuff.

I must say that with the Nexus and other smartphones with bigger screens, or with tablets, the experience is amazing, much better than with my tiny iPhone 4 screen.

And of course you can control the C also from a laptop.

Nailing it

I added an external Bluetooth keyboard+touchpad to the set.

Prototype C with the bluetooth keyboard. The motherboard is hidden under the battery

If you use VNC on display :0 it's perfect; obviously the external keyboard won't work if you use different displays (as keystrokes are sent only to the device and not to the different X VNC sessions, of course).

With an external keyboard, the worries about spyware in the iPhone were mitigated, as all the keystrokes are on the physical external bluetooth keyboard and not on the iPhone’s screen.

The C with a smaller battery

I used a Rii mini i8, and a Logitech.

Some photos of real use examples

Sometimes I just connect the raw motherboard to the solar charger in the bag and only use the smartphone over the table.

We were having dinner and going to the cinema with some IT Engineer friends when suddenly one of the SysAdmins got an alert and needed a Linux box.

No space? Do you need a sandbox and have no memory for a virtual one?

Opening PHPStorm at the bar. Under LXDE


The C at the bar displaying epiphany web browser with the address http://www.prototypec.com

C over a jacket in a restaurant

Waiting time? Get C!

Mounting a USB mini webcam in a glass, while at a bar. The battery is charging the iPhone and powering the Prototype C

Honestly, I love to carry the raw Raspberry Pi 2 motherboard with me (in a cardboard box) and to connect it as is, but you can use a plastic box. It is also very useful for hanging on walls, furniture, or in the car.

Raspberry Pi and osmc

There is something that fascinates me about the new Raspberry Pi, and about using it as a media center.
It is the fact that it is a really small board.
That it is powered by a standard 5V micro-USB supply at 1000 mA.
That it runs Linux.

I had other media centers before, but they were magnetic hard disks locked into a proprietary system.
The media center I installed on the Raspberry Pi is OSMC, that is, Open Source Media Center.

So I have full access via ssh to the Raspberry, and as it uses so little energy I have it up all day.
Then, as it is a Linux box and I have full access, and I have around 546 MB of RAM free, I can run as many background processes as I want.
Do I want it to be a jump point for my VPN? Let's go.
Do I want to have some monitoring processes over a few websites? Let's do it!

I'm really happy about having such a tiny, low-energy-consuming, full Linux box as my media center, doing whatever else I want it to do.

I must say that it is wonderful having SSH and a network interface. OK, it's 10/100 Mbps, not Gigabit, but it is enough to let me copy new files to the USB stick via SFTP in the background while playing FullHD Bluray MKV files just fine. It also allows mounting network folders via NFS or SMB and playing from them. Copying via SFTP to the USB device is generally very slow -don't be surprised to upload at 30 KB/s- so I recommend setting up an NFS share on the computer, with read access for the IP of the Raspberry. It's very cool and plays totally smoothly over the 100 Mbit ethernet connection. You can also configure an FTP server on the Pi, which will be much faster than SFTP.

The Raspberry's micro SD card has a performance of ~22 MB/s, which is enough to boot very quickly and to load programs quite fast. I have other microSD cards with Debian Jessie, and I load PHPStorm (a Java-based PHP IDE) quite fast.

It boots really fast, in case you stop and start it frequently.

It accepts my wired Mouse and Keyboard, and also wireless Bluetooth ones.

I’m really in love with this small motherboard. :)

This tiny Raspberry Pi 2 has 4 cores at 900 MHz.

The CPU announces (cat /proc/cpuinfo):

processor    : 3
model name    : ARMv7 Processor rev 5 (v7l)
BogoMIPS    : 38.40
Features    : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer    : 0x41
CPU architecture: 7
CPU variant    : 0x0
CPU part    : 0xc07
CPU revision    : 5

As you see, it scores only 38.40 bogomips, compared to my tower desktop's 6384.59 and my old laptop's 2593.45, but it's still beautiful.

Note: you cannot trust bogomips as a performance measurement, and in addition my computers are Intel based -so CISC architecture- while the Raspberry uses ARM processors, which are RISC, a completely different architecture. I notice a very fluid speed; I only sense a bit of slowness when I install new packages. When unpacking it feels slow, although that could perfectly well be caused by the SD card I/O, so I installed iotop and monitored the I/O while I was installing PHP5 :) . I got small writes of up to 1,000 KB/sec, so 1 MB/s, with an average of ~30-50 KB per write operation and no iowait, while htop showed that the core doing the unpacking was at 100% CPU and the other 3 were free, so my initial conclusion is that the bottleneck was the CPU. Still happy about my little gadget. :)

The osmc image I installed comes with python 2.7.9 and Linux kernel 3.18.9 as uname -a shows:

Linux osmc 3.18.9-5-osmc #1 SMP PREEMPT Wed Mar 11 18:59:35 UTC 2015 armv7l GNU/Linux

It also comes with wget 1.16 and curl 7.38.0.

In fact the OSMC is based on the Debian Jessie distro.

The OSMC software also has upgrades, and Debian upgrades, that keep the Linux box up to date.

So that brings a lot of possibilities.

After a sudo apt-get update I was able to install htop, mc and apache2.

sudo apt-get install htop
sudo apt-get install iotop
sudo apt-get install iftop
sudo apt-get install mc
sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install ncdu

So it’s a lot of fun. :)

Note: although 1000 mA is enough (the Raspberry Pi 2 needs around 700 mA), if you plan to plug in a cheap 2.5″ hard disk enclosure without external power -just USB- it will not be enough. In this case I recommend buying a 2000 mA power supply for the Pi, or an externally powered USB hub (2000 mA, otherwise you risk the energy for the Raspberry + USB hub not being sufficient). If the disk has external power, then you'll have no problem. Personally I use USB sticks.

When I had my incubator of Start ups some years ago, one of my Start up projects was embedding motherboards within screens, offering the ability to play videos, images, even Flash games and animations, and to manage and update everything, including the content for groups of players, from the Internet or based on time triggers. I was a finalist for selling my product to an enormous multinational; it was close, but finally a Korean company with a cheaper (and less powerful) solution won. At that time, 2004, motherboards were huge compared to this tiny piece of hardware, and I had to deal with different voltages, power consumption, heat dissipation, safety, etc., so I'm really in love with this tiny piece of hardware that doesn't even need a fan or a big heat-dissipation mechanism.

A quick note about symmetric encryption and entropy

I had a discussion with some friends about symmetric encryption, and about the security problem posed by the fact that the encryption of a block of data, for the same password, will always be the same. I write here the explanations I gave them and what I do, so I can just send this link instead of explaining the same thing every time. :)

I am using different kinds of encryption for my new 2014’s Messenger C-Client.

The main advantage of symmetric encryption, for me, is that it is cheap in terms of CPU usage.

When I send files through my messenger all the data blocks have to be encrypted, so if I send the Ubuntu 14.10 ISO image that is a lot of data to encrypt, and in order to get good throughput and performance I needed to find an optimal way to do it that is at the same time secure.

Many encryption techniques and algorithms can be used, even all at the same time over the same packets to make them more secure. In this article I will describe only an improvement to symmetric encryption that makes it non-predictable while remaining very fast, so cheap to use.

So I introduce the concept here of noise, or entropy, in symmetric data packets.

Imagine that you want to send a chunk of data that is:

SGVsbG8sIEkgc2VlIHRoYXQgeW91IHRyeSB0aGluZ3MuIDopIENvbmdyYXRz

For clarity, only base64 is used in this sample, no binary data.

The problem is that every time you send this chunk of data, you'll get the same encrypted string. This is predictable and can help someone reverse engineer your password.

So take into consideration this sample Java 1.7 source code (note that variables use MT notation) that just encrypts for a given key:

import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import javax.xml.bind.DatatypeConverter;
import java.util.UUID;
import javax.crypto.Cipher;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.KeyGenerator;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;

/**
 * @author Carles Mateo
 */
public abstract class Security {

    public static String encrypt(String s_key, String s_str) {
        try {
            
            /** encryption cypher */
            Cipher o_ecipher; 
          
            SecureRandom seed = SecureRandom.getInstance("SHA1PRNG");

            seed.setSeed(s_key.getBytes());

            KeyGenerator keygen = KeyGenerator.getInstance("DES");

            keygen.init(seed);

            SecretKey key = keygen.generateKey();

            o_ecipher = Cipher.getInstance("DES");
            o_ecipher.init(Cipher.ENCRYPT_MODE, key);
            
            // Encode the string into bytes using utf-8
            byte[] utf8 = s_str.getBytes("UTF8");

            // Encrypt
            byte[] enc = o_ecipher.doFinal(utf8);
            
            // Encode bytes to base64 to get a string
            String s_encrypted_encoded = encodeBase64(enc);
            
            return s_encrypted_encoded;
        } catch (javax.crypto.BadPaddingException e) {
        } catch (NoSuchAlgorithmException e) { 
        } catch (NoSuchPaddingException e) { 
        } catch (IllegalBlockSizeException e) {
        } catch (java.io.IOException e) {
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }

        return null;
    }

    // Helper used above: encode a byte array into a base64 String (Java 7's javax.xml.bind)
    public static String encodeBase64(byte[] st_bytes) {
        return DatatypeConverter.printBase64Binary(st_bytes);
    }
}
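
To see the problem in practice, here is a tiny, hypothetical test (the class name EntropyDemo is mine) that assumes the Security class above is compiled and on the classpath; it encrypts the same chunk twice with the same password and shows that both ciphertexts are identical:

// EntropyDemo.java - a small sketch, not part of the original code.
// It assumes the Security class above is available on the classpath.
public class EntropyDemo {

    public static void main(String[] args) {
        String s_key  = "my-secret-password";
        String s_data = "SGVsbG8sIEkgc2VlIHRoYXQgeW91IHRyeSB0aGluZ3MuIDopIENvbmdyYXRz";

        // Encrypt the exact same plaintext twice with the same password
        String s_encrypted1 = Security.encrypt(s_key, s_data);
        String s_encrypted2 = Security.encrypt(s_key, s_data);

        System.out.println("First : " + s_encrypted1);
        System.out.println("Second: " + s_encrypted2);

        // With a deterministic key derivation and no IV/entropy, both ciphertexts match,
        // which is what makes the traffic predictable.
        System.out.println("Identical ciphertexts: " + s_encrypted1.equals(s_encrypted2));
    }
}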

Then here is the cheap way I came up with, which consists in making the original data always different.

Instead of sending just the data, add a bit of entropy or noise, like a UUID, so our packet would be:

f47ac10b-58cc-4372-a567-0e02b2c3d479|SGVsbG8sIEkgc2VlIHRoYXQgeW91IHRyeSB0aGluZ3MuIDopIENvbmdyYXRz

If you're reading this article you already know, but just in case: f47ac10b-58cc-4372-a567-0e02b2c3d479 is a UUID.

The pipe | is for clarity; as the UUID has a fixed length, it is not really needed to manipulate the string.

Having this UUID here is also cool, as it allows us to do things like using this UUID as the password for the next packet. So if our messenger starts with the user's password, from there the next packet could use this UUID as password (and the next one, the previously random generated UUID, and so on…) so every packet has a new password, which must be known from the previous history in order to decrypt all the packets.

So if a man-in-the-middle is capturing all the data sent, hoping to be able to break the encryption in the near future, he will have to be sure to record all the packets to decode the sequence easily; otherwise he will have to fight to break every single packet.

So the UUID is random, but it is not useless noise at all. It is useful entropy.

Another thing we can do is set a timestamp on the packet; that way, even if a man-in-the-middle is able to clone the initial packet and later send it to the Server, it will be discarded.

For example:

f47ac10b-58cc-4372-a567-0e02b2c3d479|1428002755|SGVsbG8sIEkgc2VlIHRoYXQgeW91IHRyeSB0aGluZ3MuIDopIENvbmdyYXRz

The second field is the unix timestamp, so our Server, which has tracked the time of the Client's computer and knows its offset, will discard the packet and end the connection if the timestamp in the packet differs by more than 2 seconds. So no one in the middle will be able to succeed in cloning a login packet and injecting it into the Server.
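
As a rough sketch of the two ideas above (the UUID as entropy, plus the timestamp check), here is a hypothetical example; the class and method names (PacketWrapper, wrap, isTimestampFresh) are mine and not part of the messenger's real code. It reuses Security.encrypt from the listing above and assumes the receiving side has already decrypted the envelope before checking the timestamp:

import java.util.UUID;

// PacketWrapper.java - a sketch of the envelope format uuid|timestamp|payload,
// encrypted with the symmetric Security.encrypt method from the previous listing.
public class PacketWrapper {

    public static final long l_MAX_SKEW_SECONDS = 2;

    // Builds uuid|timestamp|payload, encrypts it, and returns both the ciphertext
    // and the UUID (so the caller can use the UUID as the password of the next packet).
    public static String[] wrap(String s_key, String s_payload) {
        String s_uuid      = UUID.randomUUID().toString();
        long l_timestamp   = System.currentTimeMillis() / 1000;
        String s_envelope  = s_uuid + "|" + l_timestamp + "|" + s_payload;

        String s_encrypted = Security.encrypt(s_key, s_envelope);

        return new String[] { s_encrypted, s_uuid };
    }

    // Server-side check on the decrypted envelope: discard the packet if its
    // timestamp differs from the server clock by more than the allowed window.
    public static boolean isTimestampFresh(String s_envelope) {
        String[] st_fields = s_envelope.split("\\|", 3);
        if (st_fields.length < 3) {
            return false;
        }
        long l_packet_time;
        try {
            l_packet_time = Long.parseLong(st_fields[1]);
        } catch (NumberFormatException e) {
            return false;
        }
        long l_now = System.currentTimeMillis() / 1000;

        return Math.abs(l_now - l_packet_time) <= l_MAX_SKEW_SECONDS;
    }
}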

So, having a basic encapsulation that is cheap for transport, the data inside can in addition be encrypted with asymmetric and other mechanisms that guarantee that only Client1 and Client2 can decrypt it, and not the Server, while the Server ensures packets are compliant and no noise/attacks are sent through the Server to the Clients.