# Upgrading the Blog after 5 years on Amazon Web Services, under DoS and Spam attacks

A few days ago I was under a heavy DoS attack.

Nothing new: zombie computers, hackers, pirates, botnets… trying to abuse the system and hack into it. Why? There could be many reasons, from storing pirated movies, to using your Server for sending Spam, phishing, or hosting Ransomware pages…

Most of those guys don't know that it is almost impossible to Spam from Amazon. Only a few emails per hour can leave the Server unless you explicitly request that limit to be raised and configure everything.

But I thought it was a great opportunity to force myself to update the Operating System, core tools, versions of PHP and MySql.

## Forensics / Postmortem of the incident

The task was divided into several parts:

• Understanding the origin of the attack
• Blocking the offending IP addresses or disabling XMLRPC
• Making the VM boot again (there were problems with Amazon AWS, and at first I didn't know why it was not booting)

While I was working I disabled access to the site using the Amazon Web Services Firewall (Security Groups). Basically I allowed access from my IP only, for example 8.8.8.8/32.
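If you prefer the AWS CLI to the Console, a minimal sketch of that lockdown would look like this (the security group ID is hypothetical, and it assumes the group originally allowed web traffic from anywhere):

    SG_ID=sg-0123456789abcdef0                      # hypothetical Security Group ID
    MY_IP="$(curl -s https://checkip.amazonaws.com)/32"

    # Remove the world-open web rules...
    aws ec2 revoke-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80  --cidr 0.0.0.0/0
    aws ec2 revoke-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0
    # ...and allow only my own address while I work
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80  --cidr "$MY_IP"
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 443 --cidr "$MY_IP"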

That way the logs were reflecting only what I was doing from my Ip.

### Dealing with Snapshots and Volumes in AWS

Well, the first thing was taking a Snapshot.

After that, I tried to boot the original Blog Server (so I would not stop offering service), but no way: the Server appeared to be dead.

So then I attached the Volume to a new Server with the same base OS, in order to extract (dump) the database. Later I would attach the same Volume to a new Server with the most recent OS and base Software.
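I did this from the Console, but roughly the same flow with the AWS CLI would be (all IDs are hypothetical, and the device name seen inside the instance may differ):

    # Snapshot the data volume first, just in case
    aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa --description "blog before upgrade"
    # Attach the old volume to the helper instance...
    aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf
    # ...and, inside the instance, mount it to reach the files
    sudo mkdir -p /mnt/blog_old
    sudo mount /dev/xvdf1 /mnt/blog_old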

Something that is a bit annoying is that the new generation Instances run only in a VPC, not in Amazon EC2 Classic. My static IP addresses were created for Amazon EC2 Classic, so I could not use them on new generation instances.

I chose the option to see all the generations.

### Upgrading the OS / Base Software

My approach was to install an Ubuntu 18.04 LTS, and install the base Software clean, and add any modification I may need.

I wanted to have all the supported packages, a recent version of PHP 7, and the latest Software pieces like Apache and MySQL.

    sudo apt update
    sudo apt install apache2
    sudo apt install mysql-server
    sudo apt install php libapache2-mod-php php-mysql
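A quick sanity check after the install, to confirm the versions and leave the services enabled at boot:

    apache2 -v
    mysql --version
    php -v
    sudo systemctl enable --now apache2 mysql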


#### Apache2

Config files that were working before stopped working, as the new Apache version requires the files or symlinks under /etc/apache2/sites-enabled/ to end with the .conf extension.
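So for a vhost that was previously enabled without the extension, the fix is just renaming and re-enabling it. A sketch, assuming the file was named after the site (the .conf name used below):

    cd /etc/apache2/sites-available
    # Apache 2.4 only picks up *.conf from sites-enabled
    sudo mv www.cataloniaframework.com www.cataloniaframework.com.conf
    sudo a2ensite www.cataloniaframework.com.conf
    sudo apachectl configtest && sudo systemctl reload apache2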

Also some directives changed, so some websites would not work properly until updated. For example, the Apache 2.2 access control directives Order allow,deny and allow from all are replaced in Apache 2.4 by Require all granted.

Those projects using my Catalonia Framework were affected, although I have this very well documented to make it easy to work with both versions of Apache Http Server, so it was a very straightforward change.

From the previous version I had to change my www.cataloniaframework.com.conf file and enable:

    <Directory /www/www.cataloniaframework.com>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

Then Open the ports for the Web Server (443 and 80).

sudo ufw allow in "Apache Full"

Then restart the service: service apache2 restart

#### MySQL

The problem was getting the most up-to-date version of the Database. I could have used one of the backups I keep, from last week, but I wanted fresher data.

I had the .db files and it should have been very straightforward to copy them to /var/lib/mysql/… if they were the same version. But they weren't. So I launched an instance with the same base Software as the old machine, installed mysql-server, stopped it, copied the .db files over, started it, and then made a dump with mysqldump --all-databases > 2019-04-29-all-databases.sql
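The whole sequence on that helper instance was more or less the following (paths are illustrative, and on that older Ubuntu I use service instead of systemctl):

    sudo service mysql stop
    # Put the copied data files in place of the fresh datadir
    sudo rsync -a /mnt/blog_old/var/lib/mysql/ /var/lib/mysql/
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo service mysql start
    # Export everything in a version-independent, importable format
    mysqldump -u root -p --all-databases > 2019-04-29-all-databases.sql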

Note: I copied the .db files using the mythical mc (Midnight Commander), a clone of Norton Commander.

Then I stopped that instance and I detached that volume and attached it to the new Blog Instance.

I did a Backup of my original /var/lib/mysql/ files for the purpose of faster restoring if something went wrong.
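Something as simple as this is enough for that quick-restore safety net:

    sudo service mysql stop
    sudo cp -a /var/lib/mysql /var/lib/mysql.backup-2019-04-29
    sudo service mysql start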

I mounted it under /mnt/blog_old and did mysql -u root -p < /mnt/blog_old/home/ubuntu/2019-04-29-all-databases.sql

That worked well and I had restored the blog. But as I was watching /var/log/mysql/error.log I noticed some columns were not where they should be. That's because I had inadvertently overwritten the mysql system database as well, which in MySQL 5.7 has a different structure than in MySQL 5.5. So I had screwed it up. As I had foreseen this possibility, I restored from the backup in seconds.

So basically I edited my .sql file and removed everything that belonged to the mysql system database.
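An alternative, to avoid touching the dump by hand, is to export only the application databases so the mysql system schema is never included. A sketch (the database names here are illustrative):

    # Dump only the application schemas; the new server's mysql schema stays untouched
    mysqldump -u root -p --databases wp_blog db_mysqlproxycache > 2019-04-29-app-databases.sql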

I started MySQL and ran the import procedure again. It worked, but I had to recreate the users for all the Databases and grant them permissions, for example:

    GRANT ALL PRIVILEGES ON db_mysqlproxycache.* TO 'wp_dbuser_mysqlproxy'@'localhost' IDENTIFIED BY 'XWy$&{yS@qlC|<¡!?;:-ç';

#### PHP7

Some modules in my blogs were returning errors in /var/log/apache2/mysite-error.log, so I checked and it was due to lack of support for the latest PHP versions, so I patched the code manually or simply disabled the offending plugin.

#### WordPress

As seen checking /var/log/apache2/blog.carlesmateo.com-error.log, some URLs were not located by WordPress. For example:

    The requested URL /wordpress/wp-json/ was not found on this server

I had to activate mod_rewrite and then restart Apache:

    a2enmod rewrite; service apache2 restart

### Making the site more secure

Checking the Apache logs, /var/log/apache2/blog.carlesmateo.com-access.log, I checked for IPs accessing Admin areas, I looked for 404 Errors pointing to attempts to exploit an unsafe WP Plugin, and I checked the POST requests as well.

I added the offending IPs to the Ubuntu Uncomplicated Firewall (UFW) and patched the xmlrpc.php file to always exit.

# Google Compute Engine Talk for Group Google Developers Cork

My talk at the Google Developers Cork Group. It is about deploying an Instance in GCE and grows in complexity until deploying a Load Balancer with AutoScaling for a group of LAMP Webservers.

Join the group at: https://www.meetup.com/GDG-Cork/

The videos:

Keshan Sodimana: Tensors

# Using Windows 10 Appliance in Ubuntu Virtual Box 4.3.10

Microsoft has released Windows 10, and with it the possibility to download a Windows 10 Appliance to run under Virtual Box, VMware Player, Hyper-V (for Windows) or Parallels (Mac). Their idea is to allow you to test the new Microsoft Edge browser, in addition to being able to test the older browsers in older VM images.

I wanted to use Windows 10 to check compatibility with my messenger c-client. Also I wanted to know how Java behaves.

The Windows 10 VM image will work for 90 days. You can download it from here (http://dev.modern.ie/tools/vms/linux/).

Instructions are very sparse and they don't specify a minimum version; however, if you use Virtual Box under Ubuntu 14.04, so Virtual Box 4.3.10, you'll not be able to import the Appliance, as you'll get an error.

Update: Thanks to Razvan and Eric!, readers who reported that this also works for Mac OS 10.9.5 + Virtual Box 4.3.12 and for VirtualBox 4.3.20 running under Windows 7, respectively.

    'Windows10_64' is not a valid Guest OS type.
    Result Code: NS_ERROR_INVALID_ARG (0x80070057)
    Component: VirtualBox
    Interface: IVirtualBox {fafa4e17-1ee2-4905-a10e-fe7c18bf5554}
    Callee: IAppliance {3059cf9e-25c7-4f0b-9fa5-3c42e441670b}

I looked for a solution and found none on the Internet, so I decided to give it a chance and try to fix it by myself.

The error is: 'Windows10_64' is not a valid Guest OS type. So obviously Windows10_64 is not on the list of OS types known to VirtualBox yet; it is a pretty new release. Microsoft could have shipped it with OS Type "Windows 64 Other" or "Windows 8 64 bits", but they didn't.

I wondered if I could edit the image to trick it into appearing as a recognized image.

I edited the file (MSEdge – Win10.ova) with Bless Hex Editor, a hexadecimal editor.

I looked for the String "Windows10_64" and found two occurrences. I had to replace the string and leave it with the exact number of bytes it had, so the same length (do not insert additional bytes). I searched the list of supported OSes and found that "WindowsXP_64" would be a perfect match. I replaced that 10 for XP twice.
Then I tried to import the Appliance and it worked. I tried to run it like that, but it froze on boot, with the new blue Windows logo. I figured out that Windows XP would probably not be the closest architecture, so I edited the config and set Windows 8.1 (64 bit). I also increased the RAM to 4096 MB and set 32 MB of memory for the video card. Then I just started the VM and everything worked. Ok, a funny note: just started, it installed an update without asking ;)

# Performance of several languages

Notes on 2017-03-26 18:57 CEST – Unix time: 1490547518:

1. As several of you have noted, it would be much better to use a random value, for example read from disk. This will be an improvement in the next benchmark. Good suggestion, thanks.
2. Due to my lack of time it took longer than expected to update the article. I was in a long process with Google, and now I'm looking for a new job.
3. I notice that most people don't read the article and comment about things that are clearly indicated in it. Please read before posting; otherwise don't be surprised if the comment is not published. I have to keep the blog clean of trash.
4. I've left out a few comments because they were disrespectful. Mediocrity is present in society, so I simply avoid publishing comments that lack the basics of respect and good education. If a comment brings a point, from an Engineering point of view, it is always published. Thanks.

(This article was last updated on 2015-08-26 15:45 CEST – Unix time: 1440596711. See the changelog at the bottom.)

One may think that Assembler is always the fastest, but is that true? If I write code in Assembler in 32 bit instead of 64 bit, so it can run on both 32 and 64 bit, will it be faster than the code that a dynamic compiler optimizes at execution time to benefit from the architecture of my computer? What if a future JIT compiler is able to use all the cores to execute a single-threaded program?

Are PHP, Python, or Ruby fast compared to C++? Does Facebook's HipHop Virtual Machine really speed up PHP execution?

This article shows some results and shares my conclusions. It is a base for discussion with my colleagues. It is not an end; we are always doing tests, looking for the edge, and looking at the root of things in detail. And often things change from one version to another. This article does not show an absolute truth, but brings some light into interesting aspects.

It shows the performance for the particular case used in the test, although generic core instructions have been selected. Many more tests are necessary, and some functions differ in performance. But this article is a necessary start for the discussion with my IT-extreme-lover friends and a necessary step for the next upcoming tests.

It brings very important data for Managers and Decision Makers, as choosing a language with adequate performance can save millions in hardware (especially when you use the Cloud and pay per hour of use) or thousands of hours in Map Reduce processes.

## Acknowledgements and thanks

Credit to the great Eduard Heredia, for porting my C source code to:

• Go
• Ruby
• Node.js

And for the nice discussions of the results, and on the optimizations and dynamic vs. static compilers.

Thanks to Juan Carlos Moreno, CTO of ECManaged Cloud Software, for suggesting adding Python and Ruby to the languages tested when we discussed my initial results.

Thanks to Joel Molins for the interesting discussions on Java performance and garbage collection.
Thanks to Cliff Click for his wonderful article on Java vs. C performance, which I found when I wanted to confirm some of my results and findings.

I was inspired to do my own comparisons by the benchmarks comparing different frameworks by TechEmpower. It is amazing to see the results of the tests, like how C++ can serialize JSON 1,057,793 times per second and raw PHP only 180,147 (17%).

# For the impatient

I present the results of the tests, and the conclusions, for those that don't want to read about the details. For those that want to examine the code, the versions of every compiler, and more in-depth conclusions, this information is provided below.

# Results

This image shows the results of the tests with every language and compiler.

All the tests are invoked from the command line. All the tests use only one core. No tests for the web or frameworks have been made; those are other scenarios worth their own article.

More seconds means a worse result. The worst is Bash, which I deleted from the graphics, as the bar was crazily high compared to the others.

* As discussed later, my initial Assembler code was outperformed by the C binary because the final Assembler code that the compiler generated was better than mine. After knowing why (it is explained in detail later in this article) I could have reduced it to the same time as the C version, as I understood the improvements made by the compiler.

Table of times:

| Seconds executing | Language | Compiler used | Version |
|---|---|---|---|
| 6 s. | Java | Oracle Java | Java JDK 8 |
| 6 s. | Java | Oracle Java | Java JDK 7 |
| 6 s. | Java | Open JDK | OpenJDK 7 |
| 6 s. | Java | Open JDK | OpenJDK 6 |
| 7 s. | Go | Go | Go v.1.3.1 linux/amd64 |
| 7 s. | Go | Go | Go v.1.3.3 linux/amd64 |
| 8 s. | Lua | LuaJit | Luajit 2.0.2 |
| 10 s. | C++ | g++ | g++ (Ubuntu 4.8.2-19ubuntu1) 4.8.2 |
| 10 s. | C | gcc | gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 |
| 10 s. (* first version was 13 s. and then was optimized) | Assembler | nasm | NASM version 2.10.09 compiled on Dec 29 2013 |
| 10 s. | Nodejs | nodejs | Nodejs v0.12.4 |
| 14 s. | Nodejs | nodejs | Nodejs v0.10.25 |
| 18 s. | Go | Go | go version xgcc (Ubuntu 4.9-20140406-0ubuntu1) 4.9.0 20140405 (experimental) [trunk revision 209157] linux/amd64 |
| 20 s. | Phantomjs | Phantomjs | phantomjs 1.9.0 |
| 21 s. | Phantomjs | Phantomjs | phantomjs 2.0.1-development |
| 38 s. | PHP | Facebook HHVM | HipHop VM 3.4.0-dev (rel) |
| 44 s. | Python | Pypy | Pypy 2.2.1 (Python 2.7.3 (2.2.1+dfsg-1, Nov 28 2013, 05:13:10)) |
| 52 s. | PHP | Facebook HHVM | HipHop VM 3.9.0-dev (rel) |
| 52 s. | PHP | Facebook HHVM | HipHop VM 3.7.3 (rel) |
| 128 s. | PHP | PHP | PHP 7.0.0alpha2 (cli) (built: Jul 3 2015 15:30:23) |
| 278 s. | Lua | Lua | Lua 5.2.3 |
| 294 s. | Gambas3 | Gambas3 | 3.7.0 |
| 316 s. | PHP | PHP | PHP 5.5.9-1ubuntu4.3 (cli) (built: Jul 7 2014 16:36:58) |
| 317 s. | PHP | PHP | PHP 5.6.10 (cli) (built: Jul 3 2015 16:13:11) |
| 323 s. | PHP | PHP | PHP 5.4.42 (cli) (built: Jul 3 2015 16:24:16) |
| 436 s. | Perl | Perl | Perl 5.18.2 |
| 523 s. | Ruby | Ruby | ruby 1.9.3p484 (2013-11-22 revision 43786) [x86_64-linux] |
| 694 s. | Python | Python | Python 2.7.6 |
| 807 s. | Python | Python | Python 3.4.0 |
| 47630 s. | Bash | GNU bash | version 4.3.11(1)-release (x86_64-pc-linux-gnu) |

## Conclusions and Lessons Learnt

1. There are languages that will execute faster than a native Assembler program, thanks to the JIT Compiler and to the ability to optimize the program at runtime for the architecture of the computer running it (even if there is a small initial penalty of around two seconds from the JIT while the program is being analysed, it is more than worth it in our example).
2. Modern Java can be really fast in certain operations; it is the fastest in this test, thanks to the use of JIT Compiler technology and a very good implementation of it.
3. Oracle's Java and OpenJDK show no difference in performance in this test.
4. Scripting languages really suck in performance. Python, Perl and Ruby are terribly slow. That costs a lot of money if you Scale, as you need more Servers in the Cloud.
5. JIT compilers for Python (Pypy) and for Lua (LuaJit) make them really fly. The difference is truly amazing.
6. The same language can offer very different performance from one version to another: for example the go that comes from the Ubuntu packages versus the latest version from the official page, which is faster, or Python 3.4, which is much slower than Python 2.7 in this test.
7. Bash is the worst language for doing the loop and increment operations of the test, lasting more than 13 hours.
8. From the command line PHP is much faster than Python, Perl and Ruby.
9. Facebook's HipHop Virtual Machine (HHVM) improves PHP's speed a lot.
10. It looks like the future of compilers is JIT.
11. Assembler is not always the fastest when executed. If you write a generic Assembler program with the purpose of being able to run on many platforms, you will not use the most powerful instructions specific to one architecture, and so a JIT compiler can outperform your code. A static compiler can also outperform your code with very clever optimizations. The people that write the compilers are really good. Unless you're really brilliant with Assembler, C/C++ code will probably beat the performance of your code. Even if you're fantastic with Assembler, it could happen that a JIT compiler notices that some executions can be avoided (like code not really used) and brings magnificent runtime optimizations. (For example a near JMP is much less costly than a far JMP Assembler instruction. Avoiding dead code could result in a far JMP being executed as a near JMP, saving many cycles per loop.)
12. Optimization really needs people dedicated just to optimization, checking the speed of newly added code on the target platforms.
13. Node.js was a big surprise. It really performed well. It is promising. The new version performs even faster.
14. Go is promising. Similar to C, but performance is much better thanks to deciding at runtime if the architecture of the computer is 32 or 64 bit, a very quick compilation at launch time, and compiling to very good assembler (that uses the 64 bit instructions efficiently, for example).
15. Gambas 3 performed surprisingly fast. Better than PHP.
16. You should be careful when using the C/C++ optimizations -O3 (and -O2), as sometimes they don't work well (bugs) or as you may expect, for example by completely removing blocks of code if the compiler believes they have no utility (like loops).
17. Perl performance really changes depending on the for style used. (See the Perl section below.)
18. Modern CPUs change their frequency to save energy. To run the tests it is strictly recommended to use a dedicated machine, disabling the CPU governor and setting a fixed frequency for all the cores, booting with a text-only live system, without background services, not mounting disks, no swap, no network. (Please, before commenting, read the article completely.)

# Explanations in detail

Obviously a statically compiled language binary should be faster than an interpreted language.

C or C++ are much faster than PHP. And good machine code is much faster, of course.
But there are also other languages that are not compiled as binary and have really fast execution. For example, good Web Java Application Servers generate compiled code after the first request. Then it really flies. For web C# or .NET in general, does the same, the IIS Application Server creates a native DLL after the first call to the script. And after this, as is compiled, the page is really fast. With C statically linked you could generate binary code for a particular processor, but then it won’t work in other processors, so normally we write code that will work in all the processors at the cost of not using all the performance of the different CPUs or use another approach and we provide a set of different binaries for the different architectures. A set of directives doing one thing or other depending on the platform detected can also be done, but is hard, long and tedious job with a lot of special cases treatment. There is another approach that is dynamic linking, where certain things will be decided at run time and optimized for the computer that is running the program by the JIT (Just-in-time) Compiler. Java, with JIT is able to offer optimizations for the CPU that is running the code with awesome results. And it is able to optimize loops and mathematics operations and outperform C/C++ and Assembler code in some cases (like in our tests) or to be really near in others. It sounds crazy but nowadays the JIT is able to know the result of several times executed blocks of code and to optimize that with several strategies, speeding the things incredible and to outperform a code written in Assembler. Demonstrations with code is provided later. A new generation has grown knowing only how to program for the Web. Many of them never saw Assembler, neither or barely programmed in C++. None of my Senior friends would assert that a technology is better than another without doing many investigations before. We are serious. There is so much to take in count, so much to learn always, that one has to be sure that is not missing things before affirming such things categorically. If you want to be taken seriously, you have to take many things in count. # Environment for the tests ## Hardware and OS Intel(R) Core(TM) i7-4770S CPU @ 3.10GHz with 32 GB RAM and SSD Disk. Ubuntu Desktop 14.04 LTS 64 bit ## Software base and compilers ### PHP versions Shipped with my Ubuntu distribution: php -v PHP 5.5.9-1ubuntu4.3 (cli) (built: Jul 7 2014 16:36:58) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies with Zend OPcache v7.0.3, Copyright (c) 1999-2014, by Zend Technologies Compiled from sources: PHP 5.6.10 (cli) (built: Jul 3 2015 16:13:11) Copyright (c) 1997-2015 The PHP Group Zend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologies PHP 5.4.42 (cli) (built: Jul 3 2015 16:24:16) Copyright (c) 1997-2014 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2014 Zend Technologies ### Java 8 version java -showversion java version "1.8.0_05" Java(TM) SE Runtime Environment (build 1.8.0_05-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode) ### C++ version g++ -v Using built-in specs. 
COLLECT_GCC=g++ COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.8.2-19ubuntu1' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-libmudflap --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ### Gambas 3 gbr3 --version 3.7.0 ### Go (downloaded from google) go version go version go1.3.1 linux/amd64 ### Go (Ubuntu packages) go version go version xgcc (Ubuntu 4.9-20140406-0ubuntu1) 4.9.0 20140405 (experimental) [trunk revision 209157] linux/amd64 ### Nasm nasm -v NASM version 2.10.09 compiled on Dec 29 2013 ### Lua lua -v Lua 5.2.3 Copyright (C) 1994-2013 Lua.org, PUC-Rio ### Luajit luajit -v LuaJIT 2.0.2 -- Copyright (C) 2005-2013 Mike Pall. http://luajit.org/ ### Nodejs Installed with apt-get install nodejs: nodejs --version v0.10.25 Installed by compiling the sources: node --version v0.12.4 ## Phantomjs Installed with apt-get install phantomjs: phantomjs --version 1.9.0 Compiled from sources: /path/phantomjs --version 2.0.1-development ### Python 2.7 python --version Python 2.7.6 ### Python 3 python3 --version Python 3.4.0 ## Perl perl -version This is perl 5, version 18, subversion 2 (v5.18.2) built for x86_64-linux-gnu-thread-multi (with 41 registered patches, see perl -V for more detail) ## Bash bash --version GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu) Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. ## Test: Time required for nested loops This is the first sample. It is an easy-one. The main idea is to generate a set of nested loops, with a simple counter inside. When the counter reaches 51 it is set to 0. This is done for: 1. Preventing overflow of the integer if growing without control 2. Preventing the compiler from optimizing the code (clever compilers like Java or gcc with -O3 flag for optimization, if it sees that the var is never used, it will see that the whole block is unnecessary and simply never execute it) Doing only loops, the increment of a variable and an if, provides us with basic structures of the language that are easily transformed to Assembler. We want to avoid System calls also. This is the base for the metrics on my Cloud Analysis of Performance cmips.net project. 
Here I present the times for each language; later I analyze the details and the code.

Take into account that this code only executes in one thread / core.

## C++

The C++ result: it takes 10 seconds.

Code for the C++:

    /*
     * File:   main.cpp
     * Author: Carles Mateo
     *
     * Created on August 27, 2014, 1:53 PM
     */

    #include <cstdlib>
    #include <iostream>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <ctime>

    using namespace std;

    typedef unsigned long long timestamp_t;

    static timestamp_t get_timestamp()
    {
        struct timeval now;
        gettimeofday(&now, NULL);
        return now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
    }

    int main(int argc, char** argv)
    {
        timestamp_t t0 = get_timestamp();

        // current date/time based on current system
        time_t now = time(0);
        // convert now to string form
        char* dt_now = ctime(&now);

        printf("Starting at %s\n", dt_now);

        int i_loop1 = 0;
        int i_loop2 = 0;
        int i_loop3 = 0;
        int i_counter = 0;

        for (i_loop1 = 0; i_loop1 < 10; i_loop1++) {
            for (i_loop2 = 0; i_loop2 < 32000; i_loop2++) {
                for (i_loop3 = 0; i_loop3 < 32000; i_loop3++) {
                    i_counter++;

                    if (i_counter > 50) {
                        i_counter = 0;
                    }
                }
                // If you want to test how the compiler optimizes that, remove the comment
                //i_counter = 0;
            }
        }

        // This is another trick to avoid compiler's optimization. To use the var somewhere
        printf("Counter: %i\n", i_counter);

        timestamp_t t1 = get_timestamp();
        double secs = (t1 - t0) / 1000000.0L;

        time_t now_end = time(0);
        // convert now to string form
        char* dt_now_end = ctime(&now_end);

        printf("End time: %s\n", dt_now_end);

        return 0;
    }

You can try to remove the part of the code that makes the checks:

    /* if (i_counter > 50) {
        i_counter = 0;
    }*/

And the use of the var, later:

    //printf("Counter: %i\n", i_counter);

Note: you also need to add an i_counter = 0; at the beginning of the loop to make sure that the counter doesn't overflow. Then the C or C++ compiler will notice that this result is never used, so it will eliminate the code from the program, giving as a result an execution time of 0.0 seconds.

## Java

The code in Java:

    package cpu;

    /**
     *
     * @author carles.mateo
     */
    public class Cpu {

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {

            int i_loop1 = 0;
            //int i_loop_main = 0;
            int i_loop2 = 0;
            int i_loop3 = 0;
            int i_counter = 0;

            String s_version = System.getProperty("java.version");

            System.out.println("Java Version: " + s_version);
            System.out.println("Starting cpu.java...");

            for (i_loop1 = 0; i_loop1 < 10; i_loop1++) {
                for (i_loop2 = 0; i_loop2 < 32000; i_loop2++) {
                    for (i_loop3 = 0; i_loop3 < 32000; i_loop3++) {
                        i_counter++;

                        if (i_counter > 50) {
                            i_counter = 0;
                        }
                    }
                }
            }

            System.out.println(i_counter);
            System.out.println("End");
        }
    }

It is really interesting how Java, with JIT, outperforms C++ and Assembler. It takes only 6 seconds.

## Go

The case of Go is interesting because I saw a big difference between the go shipped with Ubuntu and the go I downloaded from http://golang.org/dl/. I downloaded 1.3.1 and 1.3.3, both offering the same performance: 7 seconds.

Source code for nested_loops.go:

    package main

    import ("fmt"
            "time")

    func main() {
        fmt.Printf("Starting: %s", time.Now().Local())

        var i_counter = 0;

        for i_loop1 := 0; i_loop1 < 10; i_loop1++ {
            for i_loop2 := 0; i_loop2 < 32000; i_loop2++ {
                for i_loop3 := 0; i_loop3 < 32000; i_loop3++ {
                    i_counter++;
                    if i_counter > 50 {
                        i_counter = 0;
                    }
                }
            }
        }

        fmt.Printf("\nCounter: %#v", i_counter)
        fmt.Printf("\nEnd: %s\n", time.Now().Local())
    }

### Assembler

Here is the Assembler for Linux code, with SASM, that I created initially (below it is optimized).
    %include "io.inc"

    section .text
    global CMAIN
    CMAIN:
        ;mov rbp, rsp; for correct debugging
        ; Set to 0, the faster way
        xor esi, esi

    DO_LOOP1:
        mov ecx, 10
    LOOP1:
        mov ebx, ecx
        jmp DO_LOOP2
    LOOP1_CONTINUE:
        mov ecx, ebx
        loop LOOP1
        jmp QUIT

    DO_LOOP2:
        mov ecx, 32000
    LOOP2:
        mov eax, ecx
        ;call DO_LOOP3
        jmp DO_LOOP3
    LOOP2_CONTINUE:
        mov ecx, eax
        loop LOOP2
        jmp LOOP1_CONTINUE

    DO_LOOP3:
        ; Set to 32000 loops
        MOV ecx, 32000
    LOOP3:
        inc esi
        cmp esi, 50
        jg COUNTER_TO_0
    LOOP3_CONTINUE:
        loop LOOP3
        ;ret
        jmp LOOP2_CONTINUE

    COUNTER_TO_0:
        ; Set to 0
        xor esi, esi
        jmp LOOP3_CONTINUE
        ; jmp QUIT

    QUIT:
        xor eax, eax
        ret

It took 13 seconds to complete.

One interesting explanation of why binary C or C++ code is faster than Assembler is that the C compiler generates better Assembler/binary code at the end. For example, the use of JMP is expensive in terms of CPU cycles, and the compiler can apply other optimizations and tricks that I'm not aware of, like using faster registers, while in my code I use ebx, ecx, esi, etc. (for example, imagine that using cx is cheaper than using ecx or rcx and I'm not aware of it, but the guys that created the GNU C compiler are).

To be sure of what's going on, I swapped, in LOOP3, the JE and the JMP of the code for groups of 50 instructions, INC ESI, one after the other, and the time was reduced to 1 second. (In C it also was reduced, even a bit more, when doing the same.)

To know what the translation of the C code into Assembler is when compiled, you can do:

    objdump --disassemble nested_loops

Look for the section main and you'll get something like:

    0000000000400470 <main>:
      400470:  bf 0a 00 00 00          mov    $0xa,%edi
      400475:  31 c9                   xor    %ecx,%ecx
      400477:  be 00 7d 00 00          mov    $0x7d00,%esi
      40047c:  0f 1f 40 00             nopl   0x0(%rax)
      400480:  b8 00 7d 00 00          mov    $0x7d00,%eax
      400485:  0f 1f 00                nopl   (%rax)
      400488:  83 c2 01                add    $0x1,%edx
      40048b:  83 fa 33                cmp    $0x33,%edx
      40048e:  0f 4d d1                cmovge %ecx,%edx
      400491:  83 e8 01                sub    $0x1,%eax
      400494:  75 f2                   jne    400488 <main+0x18>
      400496:  83 ee 01                sub    $0x1,%esi
      400499:  75 e5                   jne    400480 <main+0x10>
      40049b:  83 ef 01                sub    $0x1,%edi
      40049e:  75 d7                   jne    400477 <main+0x7>
      4004a0:  48 83 ec 08             sub    $0x8,%rsp
      4004a4:  be 34 06 40 00          mov    $0x400634,%esi
      4004a9:  bf 01 00 00 00          mov    $0x1,%edi
      4004ae:  31 c0                   xor    %eax,%eax
      4004b0:  e8 ab ff ff ff          callq  400460 <__printf_chk@plt>
      4004b5:  31 c0                   xor    %eax,%eax
      4004b7:  48 83 c4 08             add    $0x8,%rsp
      4004bb:  c3                      retq

Note: this is in the AT&T syntax, not the Intel one. That means that add $0x1,%edx is adding 1 to the EDX register (source, destination order).

As you can see, the C compiler has created a very different Assembler version with respect to what I created.
For example at 400470 it uses EDI register to store 10, so to control the number of the outer loop.
It uses ESI to store 32000 (Hexadecimal 0x7D00), so the second loop.
And EAX for the inner loop, at 400480.
It uses EDX for the counter, and compares to 50 (Hexa 0x33) at 40048B.
At 40048E it uses CMOVGE (Move if Greater or Equal), an instruction that was introduced with the P6 family of processors, to move the contents of ECX into EDX if (in the CMP) it was greater than or equal to 50. As a XOR ECX, ECX was performed at 400475, ECX contained 0.
And it cleverly used SUB and JNE (JNE means Jump if not equal and it jumps if ZF = 0, it is equivalent to JNZ Jump if not Zero).
It uses between 4 and 16 clocks, and the jump must be -128 to +127 bytes of the next instruction. As you see Jump is very costly.

Looks like the biggest improvement comes from the use of CMOVGE, so it saves two jumps that my original Assembler code was performing.
Those two jumps multiplied per 32000 x 32000 x 10 times, are a lot of Cpu clocks.

So, with this in mind, as this Assembler code takes 10 seconds, I updated the graph from 13 seconds to 10 seconds.
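If you want to reproduce the comparison yourself, this is roughly the procedure (file names are assumed from the examples above):

    # Build the C version with optimizations and inspect what the compiler emitted for main
    gcc -O2 -o nested_loops nested_loops.c
    objdump --disassemble nested_loops | sed -n '/<main>:/,/retq/p'
    # The Intel syntax is often easier to read than the default AT&T output
    objdump -d -M intel nested_loops | less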

### Lua

This is the initial code:

    local i_counter = 0

    local i_time_start = os.clock()

    for i_loop1=0,9 do
        for i_loop2=0,31999 do
            for i_loop3=0,31999 do
                i_counter = i_counter + 1
                if i_counter > 50 then
                    i_counter = 0
                end
            end
        end
    end

    local i_time_end = os.clock()
    print(string.format("Counter: %i\n", i_counter))
    print(string.format("Total seconds: %.2f\n", i_time_end - i_time_start))

In the case of Lua theoretically one could take great advantage of the use of local inside a loop, so I tried the benchmark with modifications to the loop:

    for i_loop1=0,9 do
        for i_loop2=0,31999 do
            local l_i_counter = i_counter
            for i_loop3=0,31999 do
                l_i_counter = l_i_counter + 1
                if l_i_counter > 50 then
                    l_i_counter = 0
                end
            end
            i_counter = l_i_counter
        end
    end

I ran it with LuaJit and saw no improvements on the performance.
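For reference, both interpreters were simply timed from the command line, something along these lines (script name assumed):

    time lua nested_loops.lua
    time luajit nested_loops.lua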

### Node.js

    var s_date_time = new Date();
    console.log('Starting: ' + s_date_time);

    var i_counter = 0;

    for (var $i_loop1 = 0; $i_loop1 < 10; $i_loop1++) {
        for (var $i_loop2 = 0; $i_loop2 < 32000; $i_loop2++) {
            for (var $i_loop3 = 0; $i_loop3 < 32000; $i_loop3++) {
                i_counter++;
                if (i_counter > 50) {
                    i_counter = 0;
                }
            }
        }
    }

    var s_date_time_end = new Date();
    console.log('Counter: ' + i_counter + '\n');
    console.log('End: ' + s_date_time_end + '\n');

Execute with:

    nodejs nested_loops.js

### Phantomjs

The same code as nodejs, adding to the end:

    phantom.exit(0);

In the case of Phantomjs it performs the same in both versions, 1.9.0 and 2.0.1-development compiled from sources.

### PHP

The interesting thing about PHP is that you can write your own extensions in C, so you can have the ease of use of PHP and create functions in C that bring really fast performance, and invoke them from PHP.

    <?php

    $s_date_time = date('Y-m-d H:i:s');

    echo 'Starting: '.$s_date_time."\n";

    $i_counter = 0;

    for ($i_loop1 = 0; $i_loop1 < 10; $i_loop1++) {
        for ($i_loop2 = 0; $i_loop2 < 32000; $i_loop2++) {
            for ($i_loop3 = 0; $i_loop3 < 32000; $i_loop3++) {
                $i_counter++;
                if ($i_counter > 50) {
                    $i_counter = 0;
                }
            }
        }
    }

    $s_date_time_end = date('Y-m-d H:i:s');
    echo 'End: '.$s_date_time_end."\n";

Facebook's HipHop Virtual Machine (HHVM) is a very powerful, JIT-powered alternative.

git clone https://github.com/facebook/hhvm.git
cd hhvm
rm -r third-party
git submodule update --init --recursive
./configure
make

Or just grab precompiled packages from https://github.com/facebook/hhvm/wiki/Prebuilt%20Packages%20for%20HHVM
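Once installed, comparing it with the standard interpreter is just a matter of timing both from the command line (script name assumed):

    time php nested_loops.php
    time hhvm nested_loops.php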

### Python

    from datetime import datetime
    import time

    print ("Starting at: " + str(datetime.now()))
    s_unixtime_start = str(time.time())

    i_counter = 0

    # From 0 to 31999
    for i_loop1 in range(0, 10):
        for i_loop2 in range(0,32000):
            for i_loop3 in range(0,32000):
                i_counter += 1
                if ( i_counter > 50 ) :
                    i_counter = 0

    print ("Ending at: " + str(datetime.now()))
    s_unixtime_end = str(time.time())

    # time.time() returns a float, so convert via float; this also runs on Python 3
    i_seconds = int(float(s_unixtime_end)) - int(float(s_unixtime_start))
    s_seconds = str(i_seconds)

    print ("Total seconds:" + s_seconds)

### Ruby

#!/usr/bin/ruby -w

time1 = Time.new

puts "Starting : " + time1.inspect

i_counter = 0;

for i_loop1 in 0..9
for i_loop2 in 0..31999
for i_loop3 in 0..31999
i_counter = i_counter + 1
if i_counter > 50
i_counter = 0
end
end
end
end

time1 = Time.new

puts "End : " + time1.inspect

### Perl

The case of Perl was a very interesting one.

This is the current code:

    #!/usr/bin/env perl

    print "$s_datetime Starting calculations...\n";

    $i_counter=0;

    $i_unixtime_start=time();

    for my $i_loop1 (0 .. 9) {
        for my $i_loop2 (0 .. 31999) {
            for my $i_loop3 (0 .. 31999) {
                $i_counter++;
                if ($i_counter > 50) {
                    $i_counter = 0;
                }
            }
        }
    }

    $i_unixtime_end=time();
    $i_seconds=$i_unixtime_end-$i_unixtime_start;
    print "Counter: $i_counter\n";
    print "Total seconds: $i_seconds";

But before, I had created one slightly different, with the for loops in the C style:

    #!/usr/bin/env perl

    $i_counter=0;

    $i_unixtime_start=time();

    for (my $i_loop1=0; $i_loop1 < 10; $i_loop1++) {
        for (my $i_loop2=0; $i_loop2 < 32000; $i_loop2++) {
            for (my $i_loop3=0; $i_loop3 < 32000; $i_loop3++) {
                $i_counter++;
                if ($i_counter > 50) {
                    $i_counter = 0;
                }
            }
        }
    }

    $i_unixtime_end=time();
    $i_seconds=$i_unixtime_end-$i_unixtime_start;
    print "Total seconds: $i_seconds";

I repeated this test, with the same version of Perl, due to the comment of a reader (thanks mpapec) who said:

In this particular case perl style loops are about 45% faster than original code (v5.20)

And effectively and surprisingly the time passed from 796 seconds to 436 seconds.

So graphics are updated to reflect the result of 436 seconds.

### Bash

    #!/bin/bash

    echo "Bash version ${BASH_VERSION}..."
    date
    let "s_time_start=$(date +%s)"
    let "i_counter=0"

    for i_loop1 in {0..9}
    do
        echo "."
        date
        for i_loop2 in {0..31999}
        do
            for i_loop3 in {0..31999}
            do
                ((i_counter++))
                if [[ $i_counter > 50 ]]
                then
                    let "i_counter=0"
                fi
            done
            #((var+=1))
            #((var=var+1))
            #((var++))
            #let "var=var+1"
            #let "var+=1"
            #let "var++"
        done
    done

    let "s_time_end=$(date +%s)"

    let "s_seconds = s_time_end - s_time_start"
    echo "Total seconds: $s_seconds"

    # Just in case it overflows
    date

### Gambas 3

Gambas is a language and an IDE to create GUI applications for Linux. It is very similar to Visual Basic, but better, and it is not a clone.

I created a command line application and it performed better than PHP. An excellent job has been done with the compiler.

Note: in the screenshot the first test ran for a few seconds more than the second. This was because I deliberately put the machine under some load and I/O during that test. The valid value for the test, confirmed with more iterations, is the second one, done under the same conditions (no load) as the previous tests.

    ' Gambas module file MMain.module

    Public Sub Main()
        ' @author Carles Mateo http://blog.carlesmateo.com

        Dim i_loop1 As Integer
        Dim i_loop2 As Integer
        Dim i_loop3 As Integer
        Dim i_counter As Integer
        Dim s_version As String

        i_loop1 = 0
        i_loop2 = 0
        i_loop3 = 0
        i_counter = 0

        s_version = System.Version

        Print "Performance Test by Carles Mateo blog.carlesmateo.com"
        Print "Gambas Version: " & s_version

        Print "Starting..." & Now()

        For i_loop1 = 0 To 9
            For i_loop2 = 0 To 31999
                For i_loop3 = 0 To 31999
                    i_counter = i_counter + 1

                    If (i_counter > 50) Then
                        i_counter = 0
                    Endif
                Next
            Next
        Next

        Print i_counter
        Print "End " & Now()

    End

## Changelog

2015-08-26 15:45

Thanks to the comment of a reader (thanks Daniel) pointing out a mistake. The phrase I mentioned was in the conclusions, point 14, and was inaccurate. The original phrase said "go is promising. Similar to C, but performance is much better thanks to the use of JIT". The allusion to JIT is incorrect and has been replaced by: "thanks to deciding at runtime if the architecture of the computer is 32 or 64 bit, a very quick compilation at launch time, and compiling to very good assembler (that uses the 64 bit instructions efficiently, for example)".

2015-07-17 17:46

Benchmarked Facebook HHVM 3.9 (dev., the release date is August 3 2015) and HHVM 3.7.3; they take 52 seconds.

Re-benchmarked Facebook HHVM 3.4; before it was 72 seconds, now it takes 38 seconds. I checked the screen captures from 2014 to discard a human error. It looks like a turbo frequency issue on the test computer, with the CPU governor making it work below the optimal speed, or a CPU-hungry/IO process that triggered during the tests and that I didn't detect. I am thinking about forcing a fixed CPU speed for all the cores for the tests, like 2.4 GHz, and booting a text-only live system without disk access and network, to prevent Ubuntu launching processes in the background.

2015-07-05 13:16

Added performance of Phantomjs 1.9.0 installed via apt-get install phantomjs in Ubuntu, and of Phantomjs 2.0.1-development compiled from sources.

Added performance of nodejs 0.12.4 (compiled).

Added bash to the graphic. It has such bad performance that I had to edit the graphic (color pink) to fit it in, in order to prevent breaking the scale.

2015-07-03 18:32

Added benchmarks for PHP 7 alpha 2, PHP 5.6.10 and PHP 5.4.42.

2015-07-03 15:13

Thanks to the contribution of a reader (thanks mpapec!) I tried the Perl for style, resulting in passing from 796 seconds to 436 seconds. (I used the same Perl version: Perl 5.18.2.) Updated the test value for Perl. Added new graphics showing the updated value.

Thanks to the contribution of a reader (thanks junk0xc0de!) added some additional warnings and explanations about the dangers of using -O3 (and -O2) in C/C++.

Updated the Lua code to print i_counter and do the if i_counter > 50. This makes it take a bit longer, a few hundredths of a second, passing from 7.8 to 8.2 seconds.
Updated graphics.

# Stopping and investigating a WordPress xmlrpc.php attack

One of my Servers got heavily attacked for several days. I describe here the steps I took to stop this.

The attack consisted of several connections per second to the Server, to the path /xmlrpc.php. This is a WordPress file to control the pingback, when someone links to you.

My Server is a small Amazon instance, an m1.small with only one core and 1.6 GB RAM, magnetic disks, and it scores a discrete 203 CMIPS (my slow laptop scores 460 CMIPS).

Those massive connections caused the server to use more and more RAM, and as the xmlrpc requests were taking many seconds to reply, more and more Apache processes were spawned. That led to more memory consumption, to using all the available RAM and starting to use swap, with a heavy performance impact, until all the memory was exhausted and the mysql processes stopped.

I saw that I was suffering an attack after the shutdown of MySql. I checked the CloudWatch Statistics from Amazon AWS and it was clear that I was receiving many -out of normal- requests. The I/O was really high too.

These statistics go from three days ago to today; look at the spikes when the attack was hitting hard and how relaxed the Server is now (the flat line).

First I decided to simply rename the xmlrpc.php file as a quick solution to stop the attack, but the number of http connections kept growing, and then I saw very suspicious queries to the database. Those queries, in addition to what I had seen in Apache's error log, suggested to me that maybe the Server had been hacked through a WordPress/plugin bug and that now they were trying to hide from the database's logs. (Especially the DELETE FROM wp_useronline WHERE user_ip = the IP of the attacker.)

    [Tue Aug 26 11:47:08 2014] [error] [client 94.102.49.179] Error in WordPress Database Lost connection to MySQL server during query a la consulta SELECT option_value FROM wp_options WHERE option_name = 'uninstall_plugins' LIMIT 1 feta per include('wp-load.php'), require_once('wp-config.php'), require_once('wp-settings.php'), include_once('/plugins/captcha/captcha.php'), register_uninstall_hook, get_option
    [Tue Aug 26 11:47:09 2014] [error] [client 94.102.49.179] Error in WordPress Database Lost connection to MySQL server during query a la consulta SELECT option_value FROM wp_options WHERE option_name = 'uninstall_plugins' LIMIT 1 feta per include('wp-load.php'), require_once('wp-config.php'), require_once('wp-settings.php'), include_once('/plugins/captcha/captcha.php'), register_uninstall_hook, get_option
    [Tue Aug 26 11:47:10 2014] [error] [client 94.102.49.179] Error in WordPress Database Lost connection to MySQL server during query a la consulta SELECT option_value FROM wp_options WHERE option_name = 'widget_wppp' LIMIT 1 feta per include('wp-load.php'), require_once('wp-config.php'), require_once('wp-settings.php'), do_action('plugins_loaded'), call_user_func_array, wppp_check_upgrade, get_option

The error log was very ugly.
The access log was not reassuring, as it shown many attacks like that: 94.102.49.179 - - [26/Aug/2014:10:34:58 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" 94.102.49.179 - - [26/Aug/2014:10:34:59 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" 127.0.0.1 - - [26/Aug/2014:10:35:09 +0000] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 94.102.49.179 - - [26/Aug/2014:10:34:59 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" 94.102.49.179 - - [26/Aug/2014:10:34:59 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" 94.102.49.179 - - [26/Aug/2014:10:35:00 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" 94.102.49.179 - - [26/Aug/2014:10:34:59 +0000] "POST /xmlrpc.php HTTP/1.0" 200 598 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" Was difficult to determine if the Server was receiving SQL injections so I wanted to be sure. Note: The connection from 127.0.0.1 with OPTIONS is created by Apache when spawns another Apache. As I had super fresh backups in another Server I was not afraid of the attack dropping the database. I was a bit suspicious also because the /readme.html file mentioned that the version of WordPress is 3.6. In other installations it tells correctly that the version is the 3.9.2 and this file is updated with the auto-update. I was thinking about a possible very sophisticated trojan attack able to modify wp-includes/version.php and set fake$wp_version = ‘3.9.2’;
Later I realized that this blog had WordPress in Catalan, my native language, and discovered that the guys that do the translations had forgotten to update this file (in new installations it comes not updated, and so it shows 3.6). I have alerted them.

In fact, later I did a diff of all the files of my WordPress installation against the official WordPress 3.9.2-ca, and then a diff between WordPress 3.9.2-ca and WordPress 3.9.2 (English, the default), and found no differences. My Server was Ok. But at this point, at the beginning of the investigation, I didn't know that yet.

With the info I had (queries, times, the attack, the readme telling v. 3.6…) I weighed the possibility of being in front of something, and I decided that I had a unique opportunity to discover how they managed to inject that SQL, or to discover whether my Server was compromised and how. The bad point is that it was the same Amazon Server where this blog resides, and I wanted the attack to continue so I could get more information, so for two days I was recording logs and doing some investigation; so sorry if you visited my blog and the database was down, or the Server was going extremely slow. I needed that info. It was worth it.

First I changed the Apache config so the massive connections impacted a bit less the Server and so I could work on it while the attack was going on.

I informed my group of Senior friends on what’s going on and two SysAdmins gave me some good suggestions on other logs to watch and on how to stop the attack, and later a Developer joined me to look at the logs and pointed possible solutions to stop the attack. But basically all of them suggested on how to block the incoming connections with iptables and to do things like reinstalling WordPress, disabling xmlrpc.php in .htaccess, changing passwords or moving wp-admin/ to another place, but the point is that I wanted to understand exactly what was going on and how.

I checked the logs, certificates, etc… and no one other than me was accessing the Server. I also double-checked the Amazon’s Firewall to be sure that no unnecessary ports were left open. Everything was Ok.
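My notes do not keep the exact commands, but the checks were along these lines:

    # Recent logins and accepted SSH authentications
    last -a | head
    sudo grep 'Accepted' /var/log/auth.log | tail
    # What is actually listening on the box
    sudo netstat -tlnp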

I took a look at the Apache logs for the site and all the attacks were coming from the same Ip:

94.102.49.179

It is an IP from a dedicated Server company called ecatel.net. I reported the abuse to the abuse address indicated in the ripe.net database for that range.

I found that many people have complaints about this provider, and reports of them ignoring requests to stop the spam coming from their servers, so I decided that after my tests I would block their entire network from being able to access my sites.
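When the moment came, the block itself can be done at the iptables level; a minimal example (the prefix is illustrative, verify the provider's exact ranges in the RIPE database before dropping them):

    # Drop everything coming from the provider's range (example prefix)
    sudo iptables -A INPUT -s 94.102.48.0/22 -j DROP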

All the requests shown in the access.log pointed to /xmlrpc.php. It was the only path requested by the attacker, so apparently that IP did nothing else.
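This is easy to confirm from the access log itself, for example (adjust the log path to the attacked vhost):

    # Top requesting IPs and top requested paths
    awk '{print $1}' /var/log/apache2/blog.carlesmateo.com-access.log | sort | uniq -c | sort -rn | head
    awk '{print $7}' /var/log/apache2/blog.carlesmateo.com-access.log | sort | uniq -c | sort -rn | head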

I added some logging to WordPress xmlrpc.php file:

    if ($_SERVER['REMOTE_ADDR'] == '94.102.49.179') {
        error_log('XML POST: '.serialize($_POST));
        error_log('XML GET: '.serialize($_GET));
        error_log('XML REQUEST: '.serialize($_REQUEST));
        error_log('XML SERVER: '.serialize($_SERVER));
        error_log('XML FILES: '.serialize($_FILES));
        error_log('XML ENV: '.serialize($_ENV));
        error_log('XML RAW: '.$HTTP_RAW_POST_DATA);
    }

This was the result, it is always the same:

[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML POST: a:0:{}
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML GET: a:0:{}
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML REQUEST: a:0:{}
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML FILES: a:0:{}
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML ENV: a:0:{}
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML RAW: <?xmlversion="1.0"?><methodCall><methodName>pingback.ping</methodName><params><param><value><string>http://seretil.me/</string></value></param><param><value><string>http://barcelona.afterstart.com/2013/09/27/afterstart-barcelona-2013-09-26/</string></value></param></params></methodCall>
[Fri Aug 29 19:02:54 2014] [error] [client 94.102.49.179] XML ALL_HEADERS: a:5:{s:4:"Host";s:24:"barcelona.afterstart.com";s:12:"Content-type";s:8:"text/xml";s:14:"Content-length";s:3:"287";s:10:"User-agent";s:50:"Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)";s:10:"Connection";s:5:"close";}

So nothing in $_POST, nothing in$_GET, nothing in $_REQUEST, nothing in$_SERVER, no files submitted, but a text/xml Posted (that was logged by storing: $HTTP_RAW_POST_DATA): <?xmlversion="1.0"?><methodCall><methodName>pingback.ping</methodName><params><param><value><string>http://seretil.me/</string></value></param><param><value><string>http://barcelona.afterstart.com/2013/09/27/afterstart-barcelona-2013-09-26/</string></value></param></params></methodCall> I show you in a nicer formatted aspect:So basically they were trying to register a link to seretil dot me. I tried and this page, hosted in CloudFare, is not working. The problem is that responding to this spam xmlrpc request took around 16 seconds to the Server. And I was receiving several each second. I granted access to my Ip only on the port 80 in the Firewall, restarted Apache, restarted MySql and submitted the same malicious request to the Server, and it even took 16 seconds in all my tests: cat http_post.txt | nc barcelona.afterstart.com 80 I checked and confirmed that the logs from the attacker were showing the same Content-Length and http code. Other guys tried xml request as well but did one time or two and leaved. The problem was that this robot was, and still sending many requests per second for days. May be the idea was to knock down my Server, but I doubted it as the address selected is the blog of one Social Event for Senior Internet Talents that I organize: afterstart.com. It has not special interest, I do not see a political, hateful or other motivation to attack the blog from this project. Ok, at this point it was clear that the Ip address was a robot, probably running from an infected or hacked Server, and was trying to publish a Spam link to a site (that was down). I had to clarify those strange queries in the logs. I reviewed the WPUsersOnline plugin and I saw that the strange queries (and inefficient) that I saw belonged to WPUsersOnline plugin. The thing was that when I renamed the xmlrpc.php the spamrobot was still posting to that file. According to WordPress .htaccess file any file that is not found on the filesystem is redirected to index.php. So what was happening is that all the massive requests sent to xmlrpc.php were being attended by index.php, then showing an error message that page not found, but the WPUsersOnline plugin was deleting those connections. And was doing it many times, overloading also the Database. Also I was able to reproduce the behaviour by myself, isolating by firewalling the WebServer from other Ips other than mine and doing the same post by myself many times per second. I checked against a friend’s blog but in his Server xmlrpc.php responds in 1,5 seconds. My friend’s Server is a Digital Ocean Virtual Server with 2 cores and SSD Disks. My magnetic disks on Amazon only bring around 40 MB/second. I’ve to check in detail why my friend’s Server responds so much faster. Checked the integrity of my databases, just in case, and were perfect. Nothing estrange with collations and the only errors in the /var/log/mysql/error.log was due to MySql crashing when the Server ran out of memory. Rechecked in my Server, now it takes 12 seconds. I disabled 80% of the plugins but the times were the same. The Statistics show how the things changed -see the spikes before I definitively patched the Server to block request from that Spam-robot ip, to the left-. I checked against another WordPress that I have in the same Server and it only takes 1,5 seconds to reply. 
So I decided to continue investigating why this WordPress took so long to reply. As I said before I checked that the files from my WordPress installation were the same as the original distribution, and they were. Having discarded different files the thing had to be in the database. Even when I checked the MySql it told me that all the tables were OK, having seen that the WPUserOnline deletes all the registers older than 5 minutes, I guessed that this could lead to fragmentation, so I decided to do OPTIMIZE TABLE on all the tables of the database for the WordPress failing, with InnoDb it is basically recreating the Tables and the Indexes. I tried then the call via RPC and my Server replied in three seconds. Much better. Looking with htop, when I call the xmlrpc.php the CPU uses between 50% and 100%. I checked the logs and the robot was gone. He leaved or the provider finally blocked the Server. I don’t know. Everything became clear, it was nothing more than a sort of coincidences together. Deactivating the plugin the DELETE queries disappeared, even under heavy load of the Server. It only was remain to clarify why when I send a call to xmlrpc to this blog, it replies in 1,5 seconds, and when I request to the Barcelona.afterstart.com it takes 3 seconds. I activated the log of queries in mysql. To do that edit /etc/mysql/my.cnf and uncomment: general_log_file = /var/log/mysql/mysql.log general_log = 1 Then I checked the queries, and in the case of my blog it performs many less queries, as I was requesting to pingback to an url that was not existing, and WordPress does this query: SELECT wp_posts.* FROM wp_posts WHERE 1=1 AND ( ( YEAR( post_date ) = 2013 AND MONTH( post_date ) = 9 AND DAYOFMONTH( post_date ) = 27 ) ) AND wp_posts.post_name = 'afterstart-barcelona-2013-09-26-meet' AND wp_posts.post_type = 'post' ORDER BY wp_posts.post_date DESC As the url afterstart-barcelona-2013-09-26-meet with the dates indicated does not exist in my other blog, the execution ends there and does not perform the rest of the queries, that in the case of Afterstart blog were: 40 Query SELECT post_id, meta_key, meta_value FROM wp_postmeta WHERE post_id IN (81) ORDER BY meta_id ASC 40 Query SELECT ID, post_name, post_parent, post_type FROM wp_posts WHERE post_name IN ('http%3a','','seretil-me') AND post_type IN ('page','attachment') 40 Query SELECT wp_posts.* FROM wp_posts WHERE 1=1 AND (wp_posts.ID = '0') AND wp_posts.post_type = 'page' ORDER BY wp_posts.post_date DESC 40 Query SELECT * FROM wp_comments WHERE comment_post_ID = 81 AND comment_author_url = 'http://seretil.me/' To confirm my theory I tried the request to my blog, with a valid url, and it lasted for 3-4 seconds, the same than Afterstart’s blog. Finally I double-checked with the blog of my friend and was slower than before. I got between 1,5 and 6 seconds, with a lot of 2 seconds response. (he has PHP 5.5 and OpCache that improves a bit, but the problem is in the queries to the database) Honestly, the guys creating WordPress should cache this queries instead of performing 20 live queries, that are always the same, before returning the error message. Using Cache Lite or Stash, or creating an InMemory table for using as Cache, or of course allowing the use of Memcached would eradicate the DoS component of this kind of attacks. As the xmlrpc pingback feature hits the database with a lot of queries to end not allowing the publishing. 
While I was finishing those tests (remember that the attacker ip had gone), another attacker from the same network tried, but I had patched the Server to ignore it:

94.102.52.157 - - [31/Aug/2014:02:06:16 +0000] "POST /xmlrpc.php HTTP/1.0" 200 189 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"

This one was trying to get a link published to a domain called socksland dot net, a domain registered in Russia whose page is not working.

As I had all the information I wanted, I finally blocked that provider's network from ever accessing my Server again.

Unfortunately Amazon's Firewall does not allow blocking a certain Ip or range. So you can block at the Iptables level, in the .htaccess file, or in the code.

I do not recommend blocking at code level, because sadly WordPress has many files accessible from the outside, so you would have to add your code at the beginning of all those files, and because when there is a WordPress version update you'll lose all your customizations.

But I do recommend patching your code to reject certain Ip's if you use a CDN. As the POST will be sent directly to your Server, the Ip's you see are Ip's from the CDN -and you can't block them-. You have to look at the Header X-Forwarded-For, which indicates the Ip's of the proxies the request has passed through, and also the Client's Ip.

I designed a program that is able to patch any PHP project to check for blacklisted Ip's (even through a proxy) with minimal performance impact. It works with WordPress, drupal, joomla, ezpublish and Frameworks like Zend, Symfony, Catalonia… and I patched my code to block those unwanted robot's requests.

A solution that will probably work for you is to disable the pingback functionality; there are several plugins that do that. Disabling xmlrpc completely is not recommended, as WordPress uses it for several things (JetPack, mobile, validation…).

The same effect as adding the plugin that disables the xmlrpc pingback can be achieved by editing the functions.php of your Theme and adding:

add_filter( 'xmlrpc_methods', 'remove_xmlrpc_pingback_ping' );

function remove_xmlrpc_pingback_ping( $methods ) {
    unset( $methods['pingback.ping'] );

    return $methods;
}
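The patching program itself is not shown here, but a minimal sketch of the idea, placed at a common entry point of the project (for example at the top of index.php), could look like the following; the variable names and the blacklist contents are illustrative assumptions:

<?php
// Hedged sketch: reject blacklisted Ip's even when the request arrives through
// a CDN/proxy, by also checking the X-Forwarded-For header.
$st_blacklisted_ips = array('94.102.52.157');  // add here the Ip's you want to ban

// X-Forwarded-For contains "client, proxy1, proxy2", so check every hop plus REMOTE_ADDR
$st_candidate_ips = array(isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '');
if (isset($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    foreach (explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']) as $s_forwarded_ip) {
        $st_candidate_ips[] = trim($s_forwarded_ip);
    }
}

foreach ($st_candidate_ips as $s_ip) {
    if (in_array($s_ip, $st_blacklisted_ips, true)) {
        header('HTTP/1.1 403 Forbidden');
        exit();
    }
}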

Update: 2016-02-24 14:40 CEST
I also got a heavy dictionary attack against wp-login.php. Despite having a Captcha plugin, which makes it hard to hack, it was generating some load on the system.
What I did was to rename wp-login.php to another name, like wp-login-carles.php, and create a new wp-login.php containing simply an exit():

<?php
exit();


# Improving performance in PHP

This year I was invited to speak at the PHP Conference at Berlin 2014.

It was really nice, but I had to decline as I was working hard in a Start up, and I didn't have the time required to prepare the nice conference I wanted and that people deserve.

However, having time now, I decided to write an article about what I would have spoken about at the conference.

I will cover improving performance on a single server, and Scaling out to a multi-Server architecture, focusing on the needs of growing and Start up projects. Many of those techniques can be used to improve performance with other languages, not just with PHP.

Many of my friends are very good at Developing, but know nothing about Architecture and Scaling. I hope this brings the two worlds, Development and Operations, closer together into a DevOps bridge.

# Improving performance on a single server

## Hosting

Choose a good hosting. And if you can afford it choose a dedicated server.

Shared hostings are really bad. Some of them kill your http and mysql instances if you reach a certain CPU usage (a really low one), while others share the same hardware between 100+ users, serving your pages sloooooow. Others cap the amount of queries that your MySql will handle per hour at such a ridiculously low amount that even Drupal or WordPress are unable to complete a request in development.

Other ISPs (Internet Service Providers) have poor Internet bandwidth, and so your web will load slowly for your users.

Some companies invest hundreds of thousands in developing a web, and then spend 20 € a year on the hosting. Less than the cost of a dinner.

You can use a decent dedicated server from 50 to 99 €/month and you will celebrate this decision every day.

Take into account that virtualization wastes between 20% and 30% of the CPU power. And if there are several virtual machines the loss will be bigger, because you lose the benefits of the CPU caching for optimizing parallel instruction execution and prediction. Also, if the hypervisor host allows allocating more RAM than is physically available and at some point it swaps, the performance of all the VM's will be much worse.

If you have a VM and it swaps, in most providers the swap goes over the network so there is an additional bottleneck and performance penalty.

To compare the performance of dedicated servers and instances from different Cloud Providers you can take a look at my project cmips.net

If your Server has little RAM, add more. And if your project is running slow and you can afford a better Server, do it.

Using SSD disks will incredibly improve the performance of I/O operations and of swap operations. (But please, do backups and keep them in another place.)

If you use a CMS like ezpublish with http_cache enabled you will probably prefer a Server with faster cores, rather than a Server with one or more CPU's with plenty of cores but slower ones, which take longer to render the page to the http cache.

That may seem obvious, but often companies invest 320 hours in optimizing the code by 2%, at a cost of, let's say, 50 €/h * 320 hours = 16.000 €, while hiring a better Server would have brought between a 20% and 1000% improvement at a cost of only 50 € more per month, or at the cost of 100 € for increasing the RAM memory.

The point here is that the hardware is cheap, while the time of the Engineers is expensive. And good Engineers are really hard to find.

And you probably, as a CEO or PO, prefer to use the talent to guarantee a nice time to market for your project, or to add more features, rather than wasting this time on refactoring.

Even with the most optimal code in the universe, if your project is successful, at a certain point you'll have to scale, so you will be adding more Servers anyway. Saving a Server now at the cost of slowing the business down makes no sense.

Many projects still use PHP 5.3, and 5.4.

The latest versions of PHP bring more and more performance. If you use old versions of PHP you can have a Quick Win by just upgrading to the latest PHP version.

## Use OpCache (or other cache accelerator)

OpCache is shipped with PHP 5.5 by default now, so it is the recommended option. It is intended to substitute APC.

To activate OpCache edit php.ini and add:

Linux/Unix:

zend_extension=/path/to/opcache.so

Windows:

zend_extension=C:\path\to\php_opcache.dll

It will greatly improve your PHP performance.

Ensure that OpCache in Production has the optimal config for Production, which will be different from the Development Environment.
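As an illustrative starting point (not an official recommendation; tune the values to your project), a Production php.ini could contain something like:

; Example OpCache settings for Production (values are illustrative)
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
; In Production you can avoid re-checking file timestamps on every request
opcache.validate_timestamps=0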

Note: If you plan to use it with XDebug in Development environments, load OpCache before XDebug.

## Disable Profiling and xdebug in Production

In Production disable the profiling, xdebug, and if you use a Framework ensure the Development/Debug features are disabled in Production.

## Ensure your logs are not full of warnings

Check that Production logs are not full of warnings.

I've seen systems where 200 warnings were written to the logs every second, the same ones all the time, and that obviously was slowing down the system.

Typical warnings like this can be easily fixed:

Message: date() [function.date]: It is not safe to rely on the system’s timezone settings. You are required to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected ‘UTC’ for ‘8.0/no DST’ instead
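For example, this particular warning disappears if you set date.timezone in php.ini, or if you set the timezone explicitly at the beginning of your code ('UTC' here is just an example; use the timezone of your project):

<?php
// Set a default timezone explicitly so date() does not raise the warning
date_default_timezone_set('UTC');

echo date('Y-m-d H:i:s');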

## Profile in Development

To detect where your slow code is, profile it in Development to see where the most CPU/time is spent.

Check the slow-queries if you use MySql.
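As a hedged example, in recent MySql versions you can enable the slow query log in /etc/mysql/my.cnf (the 2 seconds threshold and the file path are illustrative):

slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2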

## Cache html to disk

Imagine you have a sort of craigslist and you are displaying all the categories, and the number of new messages, on the landing page. To do that you are performing many queries to the database, SELECT COUNTs, etc… every time a user visits your page. That will certainly overload your database with just a few concurrent visitors.

Instead of querying the Database all the time, do cache the generated page for a while.

This can be achieved by checking if the cache html file exists, and checking the TTL, and generating a new page if needed.

A simple sample would be:

<?php
// Cache pages for 5 minutes
$i_cache_TTL = 300;

$b_generate_cache = false;

$s_cache_file = '/tmp/index.cache.html';

if (file_exists($s_cache_file)) {
    // Get creation date
    $i_file_timestamp = filemtime($s_cache_file);
    $i_time_now = microtime(true);
    if ($i_time_now > ($i_file_timestamp + $i_cache_TTL)) {
        $b_generate_cache = true;
    } else {
        // Up to date, get from the disk
        $o_fh = fopen($s_cache_file, "rb");
        $s_html = stream_get_contents($o_fh);
        fclose($o_fh);

        // If the file was empty something went wrong (disk full?), so don't use it
        if (strlen($s_html) == 0) {
            $b_generate_cache = true;
        } else {
            // Print the page and exit
            echo $s_html;
            exit();
        }
    }
} else {
    $b_generate_cache = true;
}

ob_start();

// Render your page normally here
// ....

$s_html = ob_get_clean();

if ($b_generate_cache == true) {
    // Create the file with fresh contents
    $o_fp = fopen($s_cache_file, 'w');
    if (fwrite($o_fp, $s_html) === false) {
        // Error. Impossible to write to disk
        // throw new Exception('CacheCantWrite');
    }
    fclose($o_fp);
}

// Send the page to the browser
echo $s_html;


This sample is simple, and works for many cases, but presents problems.

Imagine for example that the page takes 5 seconds to be generated with a single request, and you have high traffic in that page, let’s say 500 requests per second.

What will happen when the cache expires is that the first user will trigger the cache generation, and the second, and the third…. so all of the 500 requests * 5 seconds will be hitting the database to generate the cache, but… if creating the page for one request takes 5 seconds, doing this 2,500 times will not last 5 seconds… so your process will enter a vicious state where the first queries have not ended after minutes, and more and more queries are being added to the queue, until:

a) Apache runs out of children/processes, per configuration

b) Mysql runs out of connections, per configuration

c) Linux runs out of memory, and processes crash/are killed

Not to mention the users or the API client, waiting infinitely for the http request to complete, and other processes reading a partial file (size bigger than 0 but incomplete).

Different strategies can be used to prevent that, like:

a) using semaphores to lock access to the cache generation (only one process at a time)

b) using a .lock file to indicate that the file is being generated, so the next requests serve from the (stale) cache until the cache generation process ends its task; also writing to a buffer like acachefile.buffer (to prevent incomplete content from being read) and finally, when it is complete, renaming it to the final name and removing the .lock (see the sketch after this list)

c) using memcached, or similar, to keep an index in memory of which pages are being generated now, and, why not, keeping the cached files there instead of on a filesystem

d) using crons to generate the cache files, so they run hourly and you ensure only one process generates the cache files
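A minimal sketch of strategy b), with a .lock file and a buffer file (paths, TTL handling and error handling are simplified assumptions):

<?php
$s_cache_file  = '/tmp/index.cache.html';
$s_buffer_file = $s_cache_file . '.buffer';
$s_lock_file   = $s_cache_file . '.lock';

$b_cache_expired = !file_exists($s_cache_file) || (time() - filemtime($s_cache_file)) > 300;

if ($b_cache_expired && !file_exists($s_lock_file)) {
    // This request is the one elected to regenerate the cache
    touch($s_lock_file);

    ob_start();
    // ... render the page normally here ...
    $s_html = ob_get_clean();

    // Write to a buffer file so nobody reads a partial cache, then rename it atomically
    file_put_contents($s_buffer_file, $s_html);
    rename($s_buffer_file, $s_cache_file);
    unlink($s_lock_file);

    echo $s_html;
} else {
    // Another process is regenerating it, or the cache is still valid: serve the (possibly stale) cache
    echo file_exists($s_cache_file) ? file_get_contents($s_cache_file) : '';
}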

If you use crons, a cheap way to generate the .html content is to have the cron curl/wget your webpage. I don't recommend this, as it has some problems: for example, if that web request fails for any reason, you'll have cached an error instead of the content.

I prefer preparing my projects to be able to render the content whether invoked from HTTP/S or from the command line. But if you use curl because it is cheap and easy and time to market is important for your project, then be sure to check that your backend code writes a Status OK in the HTML that the cron can check, to ensure that the content has been properly generated. (Some crons only check the http status, like 200, but if your database or an xml gateway you use fails you will likely still get a 200 and won't detect that you're caching pages with "error I can't connect to the database" instead.)

Many Frameworks have their own cache implementation that prevents the corruption that could come from several processes writing to the same file at the same time, or from PHP dying in the middle of the render.

You can see a more complex MVC implementation, with Views, from my Framework Catalonia here:

By serving .html files instead of executing PHP with logic and performing queries to the database, you will be able to serve hundreds of thousands of requests per day with a single machine, and really fast -which is also important for SEO-.

I've done this in several Start ups with wonderful results, and my Framework Catalonia also incorporates this functionality in a way that is very easy to use.

Note: This is only one of the techniques to save the load of the Database Servers. Many more come later.

## Cache languages to disk

If you have an application that is multi-language, or if the place where Marketing edits the Strings (sections, pages, campaigns..) is the Database, there is no need to query it all the time.

Simply provide a tool to "generate language files".

Your language files can be Javascript files loaded by the page, or generated PHP files.

For example, the file common_footer_en.php could be generated reading from Database and be like that:

<?php
/* Autogenerated English translations file common_footer_en.php
   on 2014-08-10 02:22 from the database */
$st_translations['seconds']                 = 'seconds';
$st_translations['Time']                    = 'Time';
$st_translations['Vars used']               = 'Vars used in these templates';
$st_translations['Total Var replacements']  = 'Total replaced';
$st_translations['Exec time']               = 'Execution time';
$st_translations['Cached controller']       = 'Cached controller';


So the PHP file is going to be regenerated whenever someone at your organization updates the languages, and your code includes it normally, like any other PHP file.
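A minimal sketch of such a generator, assuming a hypothetical table translations(s_key, s_value, s_lang) and a PDO connection (names and paths are illustrative):

<?php
// Hypothetical generator: reads the translations from the Database and
// writes the PHP language file that the application will include
$o_db = new PDO('mysql:host=localhost;dbname=blog;charset=utf8', 'user', 'password');

$o_stmt = $o_db->prepare('SELECT s_key, s_value FROM translations WHERE s_lang = ?');
$o_stmt->execute(array('en'));

$s_code = "<?php\n/* Autogenerated translations file. Do not edit by hand. */\n";
foreach ($o_stmt->fetchAll(PDO::FETCH_ASSOC) as $st_row) {
    $s_code .= '$st_translations[' . var_export($st_row['s_key'], true) . '] = '
             . var_export($st_row['s_value'], true) . ";\n";
}

file_put_contents('/var/www/languages/common_footer_en.php', $s_code);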

## Use the Crons

You can set cron jobs to do many operations, like map reduce, counting in the database or effectively deleting the data that the user selected to delete.

Imagine that you have a classifieds portal, and you want to display the number of announces for each category. You can have a table NUM_ANNOUNCES to store the number of announces, and update it hourly. Then your database will only do the counting once per hour, and your application will read the number from the table NUM_ANNOUNCES.

The Cron can also be used to expire old announces. That way you avoid a user having to wait for that clean-up process during an http request to PHP.
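A hedged sketch of such an hourly cron (only the NUM_ANNOUNCES table comes from the example above; the announces table, its columns and the connection details are assumptions):

<?php
// cron_update_num_announces.php - run hourly from the crontab
$o_db = new PDO('mysql:host=localhost;dbname=portal;charset=utf8', 'user', 'password');

// Count the active announces per category (table and column names are hypothetical)
$o_counts = $o_db->query('SELECT category_id, COUNT(*) AS total FROM announces WHERE active = 1 GROUP BY category_id');

// NUM_ANNOUNCES is assumed to have a unique key on category_id
$o_update = $o_db->prepare('REPLACE INTO NUM_ANNOUNCES (category_id, total) VALUES (?, ?)');
foreach ($o_counts as $st_row) {
    $o_update->execute(array($st_row['category_id'], $st_row['total']));
}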

A cron file can be invoked by:

php -f cron.php

Or by:

./cron.php

If you give it execution permissions with chmod +x and set the first line of cron.php to:

#!/usr/bin/env php

Or you can do a trick, which is emulating an http request from bash, by invoking a url with curl or with wget. Set the .htaccess so the folder for the cron tasks can only be executed from localhost, for added security.

This last trick has the inconvenience that the call has the same problems as any http request: restarting Apache will kill the process, the connection can be closed by a timeout (e.g. if the process takes more seconds than the max. execution time), etc…

## Use Ramdisk for PHP files

With Linux it is very easy to set up a RamDisk.

You can set up a RamDisk and rsync all your web .PHP files to it at system boot time, and when deploying changes, and configure Apache to use the Ramdisk folder for the website.

That way, for every request to the web, PHP files will be served directly from RAM, saving the slow disk access. Even with OpCache active, it is a great improvement.

In these times when one Gigabyte of memory is really cheap there is a huge difference between reading the files from disk and getting them from memory. (Reading from and writing to RAM is many, many times faster than magnetic disks, and many times faster than SSD disks.)

Also .js, .css, images… can be served from a Ram disk folder, depending on how big your web is.

## Ramdisk for /tmp

If your project does operations on disk, like resizing images, compressing files, reading/writing large CSV files, etcetera you can greatly improve the performance by setting the /tmp folder to a Ramdisk.

If your PHP project receives file uploads they will also benefit (a bit) from storing the temporary files in RAM instead of on the disk.

## Use Cache Lite

Cache Lite is a Pear extension that allows you to keep data in a local cache of the Web Server.

You can cache .html pages, or you can cache Queries and their result.

<?php
require_once "Cache/Lite.php";

$options = array(
    'cacheDir' => '/tmp/',
    'lifeTime' => 7200,
    'pearErrorMode' => CACHE_LITE_ERROR_DIE
);

$cache = new Cache_Lite($options);

if ($data = $cache->get('id_of_the_page')) {
    // Cache hit !
    // Content is in $data
    echo $data;
} else {
    // No valid cache found (you have to make and save the page)
    $data = '<html><head><title>test</title></head><body><p>this is a test</p></body></html>';
    echo $data;
    $cache->save($data);
}

It is nice that Cache Lite handles the TTL and keeps the info stored in different sub-directories in order to keep a decent performance. (As you may know, many files in the same directory slow down the access a lot.)

## Use HHVM (HipHop Virtual Machine) from Facebook

Facebook Engineers are always trying to optimize what is run on the Servers.

Faster code means fewer machines. Even a 1% improvement in CPU use means a lot fewer Servers. Fewer Servers to maintain, less money wasted, less space in the Data Centers…

So they created the HHVM HipHop Virtual Machine, which is able to run PHP code much, much faster than PHP. And it is compatible with most of the Frameworks and Open Source projects.

They also created the Hack language, which is an improved PHP, with type hinting.

So you can use HHVM to make your code run faster with the same Server and without investing a single penny.

## Use C extensions

You can create and use your own C extensions.

C extensions will bring really fast execution. Just to get the idea:

I built a PHP extension to compare the performance of calculating the Bernoulli number with PHP and with the .so extension created in C.

On my Core i7 the times were:

PHP: Computed in 13.872583150864 s

PHP calling the C compiled extension: Computed in 0.038495063781738 s

That's 360.37 times faster using the C extension. Not bad.

## Use Zephir

Zephir is an Open Source language, very similar to PHP, that allows you to easily create and maintain extensions for PHP.

## Use Phalcon

Phalcon is a Web MVC Framework implemented as a C extension, so it offers high performance.

The views syntax is very, very similar to Twig.

## Check if you're using the correct Engine for MySql

Many Developers create the tables and never worry about this. And many are using MyIsam by default. It was the default Engine prior to MySql 5.5.

While MyIsam can bring good performance in certain cases, my recommendation is to use InnoDb.

Normally you'll have a gain in performance with MyIsam if you have a table where you only write or only read, but in all the other cases InnoDb is expected to be much more performant and safe.

MyIsam tables also get corrupted from time to time and need manual fixing, and writes to disk are not as reliable as with InnoDb.

As MyIsam uses table-locking for updates and deletes to any existing row, it is easy to see that if you're in a web environment with multiple users, blocking the table -so the other operations have to wait- will make things slow.

If you have to use Joins you will clearly benefit from using InnoDb as well.

## Use InMemory Engine from MySql

MySql has a very powerful in-memory Engine called MEMORY.

The MEMORY Engine stores things in RAM and loses the data when MySql is restarted. However it is very fast and very easy to use.

Imagine that you have a travel application that constantly looks up which country the city specified by the user belongs to. A Quickwin would be to INSERT all this data into the MEMORY Engine of MySql when it is started, and do just one change in your code: to use that Table. Really easy. Quick improvement.

## Use curl asynchronously

If your PHP has to communicate with other systems using curl, you can do the http/s call and, instead of waiting for a response, let your PHP do more things in the meantime, and then check the results.

You can also run multiple curl calls in parallel, and so avoid doing them one by one in serial.

## Serialize

Imagine that you have a query that returns 1000 results and then you add them one by one to an array.
You're probably going to have a substantial gain if you keep a single row in the database, with the array serialized. So an array like:

$st_places = Array('Barcelona', 'Dublin', 'Edinburgh', 'San Francisco', 'London', 'Berlin', 'Andorra la Vella', 'Prats de Lluçanès');

Would be serialized to a string like:

a:8:{i:0;s:9:"Barcelona";i:1;s:6:"Dublin";i:2;s:9:"Edinburgh";i:3;s:13:"San Francisco";i:4;s:6:"London";i:5;s:6:"Berlin";i:6;s:16:"Andorra la Vella";i:7;s:19:"Prats de Lluçanès";}

This can easily be stored as a String and later unserialized back into an array.
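For example, a minimal sketch:

<?php
$st_places = Array('Barcelona', 'Dublin', 'Edinburgh', 'San Francisco',
                   'London', 'Berlin', 'Andorra la Vella', 'Prats de Lluçanès');

// Store a single row with the serialized array instead of 1000 rows
$s_serialized = serialize($st_places);

// ... save $s_serialized in a TEXT/BLOB column, fetch it later, and then ...
$st_places_again = unserialize($s_serialized);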

Note: On the Internet we deal with a lot of encodings and languages, Hebrew, Japanese… Be careful with encodings when serializing, using JSON, XML, or storing in databases without UTF support, etc…

## Use Memcached to store common things

Memcached is a NoSql in-memory database that can run in a cluster.

The idea is to keep things there, in order to offload the load of the database. And as everything is in RAM it really runs fast.

You can use Memcached to cache Queries and their results also.

For example:

You have query SELECT * FROM translations WHERE section=’MAIN’.

Then you check if that String exists as a key in Memcached: if it exists you fetch the results (which are serialized) and you avoid the query. If it doesn't exist, you do the query to the database normally, serialize the array and store it in Memcached with a TTL (Time To Live), using the Query (String) as the key. For security you may prefer to hash the query with MD5 or SHA-1 and use the hash as the key instead of using it plain.

When the TTL is reached the validity of the data would have expired and so it’s time to reinsert the contents in the next query.

Be careful, I’ve seen projects that were caching private data from users without isolating the key properly, so other users were getting the info from other users.

For example, if the key used was ‘Name’ and the value ‘Carles Mateo’ obviously the next user that fetch the key ‘Name’ would get my name and not theirs.

If you store private data of users in Memcache, it is a nice idea to append the owner of that info to the hash. E.g. using key: 10701577-FFADCEDBCCDFFFA10C

Where ‘10701577’ would be the user_id of the owner of the info, and ‘FFADCEDBCCDFFFA10C’ a hash of the query.
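A minimal sketch of this pattern with the PECL Memcached extension (connection details, TTL and the PDO connection are illustrative assumptions; the client serializes the array for you):

<?php
$o_db = new PDO('mysql:host=localhost;dbname=blog;charset=utf8', 'user', 'password');

$o_cache = new Memcached();
$o_cache->addServer('127.0.0.1', 11211);

$s_query = "SELECT * FROM translations WHERE section='MAIN'";
// Hash the query to use it as the key; for private data, prefix the key with the user_id as above
$s_key = sha1($s_query);

$st_results = $o_cache->get($s_key);
if ($st_results === false) {
    // Cache miss: query the database and store the results for 10 minutes
    $st_results = $o_db->query($s_query)->fetchAll(PDO::FETCH_ASSOC);
    $o_cache->set($s_key, $st_results, 600);
}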

Before, I suggested that you can keep a counting table for the announces in a classifieds portal. This number can be stored in Memcached instead.

You can also store common things, like translations, or cities as in the example before, or the exchange rate for a currency exchange website…

The most common way to store things there is serialized or Json encoded.

Be aware of the memory limits of Memcached and control the cache hit ratio, to avoid constantly inserting data and then losing it because it is rarely used and Memcached has too little memory.

You can also use Redis.

## Use jQuery for Production (small file) and minimized files for js

Use the Production jQuery library in Production; I mean, do not use the bigger Development jQuery file in Production.

There are products that eliminate all the unnecessary spaces in .js and .css files, so they are served much faster. This process is called minification (minify).

It is important to know that in many emerging markets in the world, like Brazil, they have slow DSL lines. Many are 512 Kbit/second, and there are even modem connections!

## Activate compression in the Server

If you send large text files, or JSON, you'll benefit from activating the compression at the Server.

It consumes some CPU, but many times it brings an important improvement in speed serving the pages to the users.
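For example, with Apache and mod_deflate enabled, a hedged starting point would be:

<IfModule mod_deflate.c>
    # Compress text-based responses; images and videos are already compressed
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript application/json
</IfModule>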

## Use a CDN

You can use a Content Delivery Network to offload your Servers from sending plain texts, html, images, videos, js, css…

You can delegate this to the CDN; they have very speedy Internet lines and Servers, so your Servers can concentrate on doing only BackEnd operations.

The most well known are Akamai and Amazon Cloud Front.

Please pay attention to the documentation: a common mistake is to send Cache Headers to the CDN servers, as they'll use these headers to set the cache TTL and ignore their web configuration parameters. (For example s-maxage, like: Cache-Control: public, s-maxage=600)

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 20 Aug 2014 10:50:21 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Vary: Accept-Encoding
Cache-Control: max-age=0, public, s-maxage=10800
Vary: X-User-Hash,Accept-Encoding
X-Location-Id: 2
X-Content-Digest: ezlocation/2/end5139244ced4b25606ef0a39235982b1662d01cc
Content-Length: 68250
Age: 3

You can take a look at the headers of any website by telnetting to port 80 and doing the request manually, or easily by using lynx.

## Do you need a Framework?

If you're processing only BackEnd requests, like in the video games industry, serving APIs, RESTful services, etc… you probably don't need a Framework.

Frameworks are generic and use many more resources than you really need for a fast reply.

Many times using a heavy Framework costs several times more than using plain PHP.

## Save database connections until really needed

Many Frameworks create a connection to the Database Server by default. But certain parts of your application do not require connecting to the database.

For example, validating the data from a form. If there are missing fields, the PHP will not operate with the Database, it will just return an error via JSON or by refreshing the page, informing that the required field is missing.

If a not-logged-in user is requesting the dashboard page, there is no need to open a connection to the database (unless you want to write the access attempt to an error log in the database).

In fact, opening connections by default makes it easier for attackers to do DoS attacks.

With a Singleton pattern you can easily implement a Db class that handles this transparently for you.
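A minimal sketch of that pattern, using PDO (the connection details are illustrative):

<?php
class Db
{
    private static $o_instance = null;
    private $o_pdo = null;

    private function __construct() { }

    public static function getInstance()
    {
        if (self::$o_instance === null) {
            self::$o_instance = new Db();
        }
        return self::$o_instance;
    }

    // The real connection is only opened the first time it is actually needed
    public function getConnection()
    {
        if ($this->o_pdo === null) {
            $this->o_pdo = new PDO('mysql:host=localhost;dbname=blog;charset=utf8', 'user', 'password');
        }
        return $this->o_pdo;
    }
}

// Validating a form never touches the database; only a call like this opens the connection:
$o_result = Db::getInstance()->getConnection()->query('SELECT 1');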

## Memcached session

When you have several Web Servers you’ll need something more flexible than the default PHP handler (that stores to a file in the Web Server).

The most common is to store the Session, serialized, in a Memcached Cluster.
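For example, with the PECL memcached extension installed, the php.ini of every Web Server can point to the same Memcached pool (host names are illustrative; the older memcache extension uses a slightly different syntax):

session.save_handler = memcached
session.save_path    = "mc1.example.com:11211,mc2.example.com:11211"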

## Use Cassandra

Apache Cassandra is a NoSql database that allows you to Scale out very easily.

The main advantage is that it scales linearly. If you have 4 nodes and add 4 more, your performance will be doubled. It has no single point of failure, it is resilient to node failures, it replicates the data among the nodes, splits the load over the nodes automatically and supports distributed datacenter architectures.

To know more about NoSql and Cassandra, read my article: Upgrade your scalability with NoSql. And to start developing with Cassandra in PHP, python or Java read my contributed article: Begin developing with Cassandra.

## Use MySql primary and secondaries

An easy way to split the load is to have a MySql primary Server, which handles the writes, and MySql secondary (or Slave) Servers handling the reads.

Every write sent to the primary is replicated to the secondaries. Then your application reads from the secondaries.

You have to tell your code to send the writes to the primary Server, and the reads to the secondaries. You can have a Load Balancer so your code always asks the Load Balancer for the reads and it makes the connection to the least used server.

## Do Database sharding

Sharding the data consists of splitting the data according to a criteria.

For example, imagine we have 8 MySql Servers, named mysql0 to mysql7. If we want to insert or read data for user 1714, then the Server will be chosen by dividing the user_id, so 1714, by the number of Servers and taking the MOD (the remainder).

So 1714 % 8 gives 2. This means that the MySql Server to use is mysql2.

For the user_id 16: 16 % 8 gives 0, so we would use mysql0. And so on.

You can shard according to the email, or other fields as well. And you can have the same master and secondaries for the shards also.
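A minimal sketch of the shard selection (8 Servers as in the example; the crc32 hashing for emails is an assumption, any stable hash works):

<?php
$i_num_shards = 8;

// Shard by numeric user_id
function get_shard_for_user($i_user_id, $i_num_shards)
{
    return 'mysql' . ($i_user_id % $i_num_shards);
}

// Shard by email (or any other string field) using a stable hash
function get_shard_for_email($s_email, $i_num_shards)
{
    return 'mysql' . (abs(crc32(strtolower($s_email))) % $i_num_shards);
}

echo get_shard_for_user(1714, $i_num_shards); // mysql2
echo get_shard_for_user(16, $i_num_shards);   // mysql0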

When doing sharding in MySql you cannot do joins with data in other Servers. (But you can replicate all the data from the several shards into one big server in house, in your offices, and query it and do the joins there if you need that for marketing purposes.)

I always use my own sharding, but there is a very nice product from CodeFutures called dbshards. It handles the traffic transparently. I used it when I was in a video games Start up, with very satisfying results.

## Use Cassandra assync queries

Cassandra supports asynchronous queries. That means you can send the query to the Server and, instead of waiting, do other jobs, and check for the result later, when it has finished.

## Consider using Hadoop + HBASE

A Cluster alternative to Cassandra.

## Use a Load Balancer

You can put a Load Balancer or a Reverse Proxy in front of your Web Servers. The Load Balancer knows the state of the Web Servers, so it will remove a Web Server from the pool if it stops responding, and everything will continue being served to the users transparently.

There are many ways to do Load Balancing: Round Robin, based on the load on the Web Servers, on the number of connections to each Web Server, by cookie…

Using a Cookie based Load Balancer is a very easy way to split the load for WordPress and Drupal Servers.

Imagine you have 10 Web Servers. In the .htaccess they set a rule to set a Cookie like:

SERVER_ID=WEB01

That was in the case of the first Web Server.

SERVER_ID=WEB02

Etcetera

When a user connects to the Load Balancer for the first time, it sends the user to one of the 10 Web Servers. Then that Web Server sends its cookie to the Client's browser. E.g. WEB07.

After that, the next requests from the client will be redirected by the Load Balancer to the Server that set the Cookie, so in this example WEB07.

The nice thing about this way of splitting the traffic is that you don't have to change your code, nor handle the Sessions differently.

If you use two Load Balancers you can have a heartbeat process on them and a Virtual Ip, and so in case your main Load Balancer becomes unresponsive the Virtual Ip will be remapped to the second Load Balancer in milliseconds. That provides HA.

## Use http accelerators

Nginx, varnish, squid… to serve static content and offload the PHP Web Servers.

## Auto-Scale in the Cloud

If you use the Cloud you can easily set Auto-Scaling for different parts of your core.

A quick win is to Scale the Web Servers.

As in the Cloud you pay per hour of using a computer, you will benefit from cost reductions if you stop using the servers when you don't need them, and add more Servers when more users are coming to your sites.

Video game companies are a good example, with hours of heavy use and valleys with few users, although as users come from all over the planet the pattern is more and more diluted.

Some cool tools to Auto-Scaling are: ECManaged, RightScale, Amazon CloudWatch.

Actually the ability of the Google Cloud to Scale is unprecedented and great.

Opposite to other Clouds that are based on instances, Google Cloud offers the platform, which will spawn your code across as many servers as needed, transparently to you. It's a black box.

## Schedule operations with RabbitMQ

Or other Queue Manager.

The idea is to send the jobs to the Queue Manager, the PHP continues working, and the jobs are performed asynchronously, notifying you when they end.

RabbitMQ is also cool because it can work in a cluster and with HA.

## Use GlusterFs for NAS

GlusterFs (and other products) allows you to have a Distributed File System, which splits the load and the data across the Servers and resists node failures.

If you have to have a shared folder for the users' uploads, for example for the profile pictures, having the PHP and general files locally on the Servers and the Shared folder on GlusterFs is a nice option.

## Avoid NFS for PHP files and config files

As mentioned before, try to have the PHP files in a RAM disk or on the local disk (Linux caches well, and there is also OpCache), and try not to write code that reads files from disk to determine the config setup.

I remember a Start up incubator that had a very nice Server, but the PHP files were read from a mounted NFS folder.

That meant that on every request, the Server had to go over the network to fetch the files.

Sadly for the project’s performance the PHP was reading a file called ENVIRONMENT that contained “PROD” or “DEVEL”. And this was done in every single request.

Even worse, I discovered that the switch connecting the Web Server and the NFS Server was a cheap 10 Mbit one. So all the traffic was going at 10 Mbit/s. Nice bottleneck.

You can use 10 GbE (10 Gigabit Ethernet) to connect the Servers. The Web Servers to the Databases, Memcached Cluster, Load Balancers, Storage, etc…

You will need 10 GbE cards and 10 GbE switches supporting bonding.

Use bonding to aggregate 10 + 10, so you have 20 Gigabit.

You can also use Fibre Channel, for example 10 Gb, and aggregate the links, like 10 + 10 so 20 Gbit, for the connection between the Servers and the Storage.

The performance improvements that your infrastructure will experience are amazing.

# The Cloud is for Scaling

The Cloud is for Startups, and for Scaling. Nothing more.

In the future it will be used by phone operators, to re-dimension their infrastructure and bandwidth in real time according to demand, but nowadays the Cloud is for Startups.

Examine the prices in my post on cmips, take a look, and examine also the performance of the different CPUs. You will see that, according to CMIPS v.1.03, a Desktop Processor Intel i7-4770S, worth USD $300, performs better than an Amazon M2 High Memory Quadruple Extra Large and than a Rackspace First gen. 30 GB RAM 8 Cores.

Today the public cost of an Amazon M2 High Memory Quadruple Extra Large running for a month is USD $1,180.80, so USD $1.64 per hour, and the Rackspace First Generation 30 GB RAM 8 Cores 1200 GB of disk costs USD $1,425.60, so USD $1.98 per hour running.

And that's the key, the cost per hour.

Because the greatness, the majesty of the Cloud is that you pay per hour, you pay as you need, or as you go. No binding contracts. All on demand.

I had my company at a time when the hosting companies and the Data Centers were forcing customers to sign yearly contracts. What if a company only needs to host their Servers for three months? What if they have to close? No options. You take it or you leave it. Even renting a dedicated server was for at least a month or more, and what if the latency was not good? What if the bandwidth of the provider was not enough?

Amazon irrupted into the market with strength. I really like that company because they grew the best eCommerce company for buying books, they built a system that really worked and was able to recommend very useful computer books, and the delivery and logistics were so good, as was the post-sales service. They simply started to rent the same infrastructure they were using to attend to their millions of customers, and it was a total success.

And for a while few people knew about Amazon's deep technologies and functionalities, but later it became a fashion. Now people use Amazon or whatever provider/Service that contains the word "Cloud", because the Cloud is in the mouth of everyone. Magazines and newspapers speak about the Cloud, so many, many companies use it simply because everyone is talking about the Cloud. And those ISPs that didn't have a Cloud have invested heavily to create one, just because they didn't want to be the ones without a Cloud, since everyone was asking for it and all the ISP companies were offering their "Clouds".

Every company claims to have a "Cloud" when the only thing many of them have is Vmware servers, Xen servers, Open Stack… running the tenants or instances of the customers always on the same host servers. No real Cloud, no professional Cloud, abstracted in layers in a Professional way like Amazon, only the traditional "shared hosting" with another name, sharing CPU and RAM and Disk storage using virtual machines called instances.

So, the Cloud fashion has become a confusing craziness where no one knows why they are in the Cloud but they believe they have to be in it.

But do companies need the Cloud? Cloud instances? It depends. The best thing would be to ask those companies: why did you choose the Cloud?

If you compare the cost of having an instance in the Cloud, it is much, much more expensive than having a dedicated server. And for that high cost you don't get more performance. Virtualization is always slower, and disk speed is always an issue with Cloud providers, where all the data travels via network from the NAS disk cabins to the Host servers running the guest instances. Data cannot be on local disks, since every time you start an instance the resources like CPU and RAM are provisioned, and your instance runs on totally different hardware. Only your data remains in the NAS (Network Attached Storage).
So unless you run your in-the-Cloud instance in a special provider that offers local disks, like DigitalOcean that offers SSD but with monthly payment (and so you pay the price of losing the hardware abstraction capability, because you're attached to the CPU that has the disk connected, and you also lose the flexibility of paying per hour of use, as you go), then you'll face a bottleneck that is the hard disk performance (which, for real, takes all the data from the NAS, where it is stored, through the local network).

So what are the motivations to use the Cloud? I'll try to give some examples; beyond these, it does not make much sense, I think. You can send me your happy-in-Cloud scenarios if you find other good uses.

Example A) Saving initial costs, avoiding binding contracts and growing easily, on your own

Imagine a Developer that starts his own project. Maybe it works, maybe not, but instead of having a monthly contract for a dedicated server, he starts with an Amazon Free Tier (better not; use a Small instance at least) and runs a web. If it does not work, he simply stops the instance and pays no more. If the project works and has more and more users he can re-dimension the server with a click. Just stop the instance, change the type of instance, start it again with more RAM and more CPU power. Fast.

Hiring a dedicated server implies at least monthly contracts, an average of USD $100 per month, and it is not easy to move to a bigger server: it is not fast and it is expensive, as it requires the ISP tech guys to move the data, to migrate from one Server to another.

Also the available bandwidth has to be taken into consideration. Bandwidth is expensive, and Amazon can offer 150 Mbit to smaller machines. Not all the Internet Service Providers can offer that bandwidth, even with their most advanced packages.

If the project keeps growing, with a click, in seconds, 20 instances with a lot of bandwidth can be deployed and serving traffic to your customers very quickly.

You save the initial costs of buying Servers and the time to deal with hardware and bandwidth limitations, and you avoid contracts, but you pay an hourly rate that is a lot more expensive. So in the long run using Amazon is much, much more expensive and less powerful than having dedicated servers. That happened to Zynga, which was paying $63M annually to Amazon and decided to step back from Amazon to their own Data Centers again. (another fortune tech link)

The limited CPU power was also a deal breaker for many companies that needed a really powerful CPU and gigs of RAM for their Database Servers. Now this situation is much better with the introduction of the new Servers.

This developer can benefit from doing backups with a click, cloning, starting instances from an image, having more static ip's with a click, deploying built-in (from the Cloud provider) load balancers, using monitoring services like CloudWatch, creating Volumes and attaching them to the servers for additional space…

Example B) A Startup with a fluctuating number of users and hopes of growing

Imagine a Startup with a wonderful Facebook Application. During 80% of the day it has few visits and may only need 3 Servers, but during 20% of the hours of the day, from 10:00 to 15:00, users connect like hell, so they need 20 servers to attend this traffic and workload, and maybe tomorrow they will need 30 servers.

With the Cloud they pay for 3 servers 24 hours per day, and for the other 17 servers they only pay the hours they are on, that's 5 hours per day. Doing that they save money and they have an unlimited * amount of power. (* There are limits for real: you have to specially request authorisation to run more than the default max. servers for the zone, which is normally 20 instances for Amazon. Also it can theoretically happen that when you request new instances the Zone has no instances available.)

So, for a growing Startup, avoiding hiring 20 dedicated servers and instead running in the Cloud as many as they need, for just the time they need, Auto-Scaling up and down, and being able to use the servers NOW and pay the next month with a Visa card, all of that can make a difference for a growing Startup.

If the servers chosen are not powerful enough, that is solved with a click, changing the instance type. So fast. A minute. It's only a matter of money.

Example C) e-Learning companies and online universities

e-Learning platforms also get benefits from the Auto-Scaling for the full occupation hours.

The built-in functionality of the Cloud to clone instances is very useful to deploy new web servers, or new environments for students doing practices, in the case of teaching Information Technology subjects, where the users need to practice against a real server (Linux or windows). Those servers can be created and destroyed, cloned from the main -ready to go- template. And servers can also be scheduled to stop at a certain hour and to start again, so saving the money from the hours not needed.

Example D) Digital agencies, sports and other events

When there is a Special event, like a motorcycle race, when a Football Team scores, when there is a spot on tv announcing a product… At those moments the traffic to the site can multiply, so more servers and more bandwidth have to be deployed instantly. That cannot be done with physical servers, hardware, but it is very easy to provision instances from the Cloud.

Mass mailing email campaigns can also benefit from creating new Servers when needed.

Example E) Proximity and SEO

Cloud providers have Data Centres everywhere.
If you want to have servers in Asia, or in South-America, or in Europe, or if you want static content to be deployed faster, the Cloud providers have plenty of Data Centers all over the world.

Example F) Game aficionados and friends sharing contents

People that love cooperative games can find the hungry bandwidth they need at a moderate price. If they run their private server a few hours, at night, from 22:00 to 01:00 for example, they will benefit from the great bandwidth of the big Cloud provider and pay only 3 hours per day (the excess traffic usually has to be paid for in most providers, but the price per additional GB is usually really, really competitive).

Friends sharing contents over an Ftp can also benefit from these Cloud servers, but probably they will find it easier to use services like Dropbox.

Example G) Startup serving contents

A Startup serving videos, images, or books can benefit not only from the great bandwidth of big Cloud providers (this has been covered before), but also from a very cheap price per exceeding Gigabyte transferred. Local ISPs can't offer 150 Mbit for an instance of USD $20 and USD $0.12 per additional GB transferred.

Many Cloud providers also allow unlimited incoming traffic from the Internet, and from Server to Server through private ip’s.

Other cases

For other cases Dedicated Servers are much more powerful, faster and cheaper, at the price of being "static" in the sense of being attached, not abstracted in layers, but all the aspects of your Project have to be taken into account before deciding to step into or out of the Cloud.

In general terms I would say that the Cloud is for Scaling.

# NAS and Gigabit

I've found this problem in several companies, and I've had to show their error and convince experienced SysAdmins, CTOs and CEOs about the erroneous approach. Many of them made heavy investments in NAS that they are really wasting, and that offer very poor performance.

Normally the rack servers have their local disks, but for professional solutions, like virtual machines, blade servers, and hundreds of servers, the local disks are not used.

NAS – Network Attached Storage- Servers are used instead.

These NAS Servers, when they are powerful (and expensive), offer very interesting features like hot backups, hot backups that do not slow the system (the most advanced ones), hot disk replacement, hot increase of the total available space; the Enterprise solutions can replicate and copy data between different NAS in different countries, etc…

Smaller NAS are also used in configurations like Webserver Webfarms, where all the nodes have to have the same information replicated, so that when a user uploads a new profile image, for example, it is available to all the webservers.

In these configurations the servers save and retrieve the needed data from the NAS Servers, through the LAN (Local Area Network).

The main error I have seen is that no one ever considers the pipe where all the data is travelling, so most configurations are simply Gigabit, and so are a bottleneck.

Imagine a Dell blade server enclosure.

This enclosure hosts 16 hot-pluggable servers, with up to two CPU's per blade; we also call those blade servers "pizza" (like we called the rack servers before).

A common use is to use those servers to have Vmware, OpenStack, Xen or other virtualization software, so the servers run instances of customers. In this scenario the virtual disks (the hard disk of the virtual machines) are stored in the NAS Server.

So if a customer shuts down his virtual server and starts it later, the physical server where the virtual machine runs may be another one, but the data (the disk of the virtual server) is stored in the NAS, and all the data is saved to and retrieved from the NAS.

The enclosure is connected to the NAS through a Gigabit connection, as 10 Gigabit connections are still too expensive and not yet supported in many servers.

Once we have explained that, imagine those 16 servers, each with 4 or 5 virtual machines, accessing their disks through a Gigabit connection.

If only one of these 80 virtual machines is accessing the disk, there will be no problem, but if more than one is, the Gigabit connection, that's a maximum of 125 MB (Megabytes) per second, will be shared among all the virtual machines.

So imagine, 70 virtual machines are accessing the NAS to serve web pages, with not much traffic, OK, but the other 10 virtual machines are doing heavy data transmission: for example one is serving data through an FTP server, another is broadcasting video, another is copying heavy log files, and so on… Imagine that scenario.

The 125 MB per second are divided between the 80 servers, so those 10 servers using the disk extensively will monopolize the bandwidth, but even those 10 servers will have around 12.5 MB/s each, that is 100 Mbit each, which is very slow.

Imagine that one of the virtual machines broadcasts video. To broadcast video, first it has to get it from the NAS (the chunks of data), so this node serving video will only be able to serve different videos to a few customers, as the network will not provide more than 12.5 MB/s under the circumstances described.

This is a simplified scenario, as many other things have to be taken into account, like the fact that SATA, SCSI and SAS disks do not provide sustained speeds, speed depends on locating the info, fragmentation, etc… It also has to be considered that NAS use the iSCSI protocol, a sort of SCSI commands sent over the Ethernet, and that Tcp/Ip uses verifications and headers in its protocol. That is also an overhead. I've considered traffic in one direction only, so the servers downloading from the NAS, assuming Gigabit full duplex, so Gigabit for sending and Gigabit for receiving.

So instead of 125 MB per second we have available around 100 MB per second with a Gigabit or even less.

Also the virtualization servers try to handle the disk access a bit better, by keeping a cache in memory and not writing immediately to disk.

So you can't do dd tests in virtual machines like you would in any Linux with local disks, and if you do, go for big files, like 10 GB with random data (not just zeros, as they have optimizations for that).

Let’s recalculate it now:

70 virtual machines using as little as 0.10 MB/second each, that's 7 MB/second. That's really optimistic, as most webservers running PHP read many big files to attend a simple request, and webservers serve a lot of big images.

10 virtual machines using extensively the NAS, so sharing 100 MB – 7 MB = 93 MB. That is 9.3 MB each.

So under these circumstances for a virtual machine trying to read from disk a file of 1 GB (1000 MB), this operation will take 107 seconds, so 1:47 minutes.

So with these considerations in mind, you can imagine that the performance of the virtual machines under those configurations is left to luck. The luck that none of the other guests on the servers is abusing the disk I/O.

I've explained this on a theoretical level. Sadly, reality is worse. A lot worse. Those 70 virtual machines with webservers will be so slow that they will leave your customers very disappointed, and the other 10 will not be any happier.

One of the principal problems of Amazon EC2 has always been disk performance. A few months ago they released IOPS, high performance disks, which are more expensive, but faster.

It has to be recognized that in Amazon they are always improving.

They also have connections between your servers at 10 Gbit/second.

Returning to the Blades and the NAS, an easy improvement is to aggregate two Gigabits, so creating a connection of 2 Gbit. This helps a bit. It is not the solution, but it helps.

Probably different physical servers with few virtual machines and a dedicated 1 Gbit connection (or 2 Gbit by 1+1 aggregated if possible) to the NAS, and using local disks as much as possible would be much better (harder to maintain at big scale, but much much better performance).

But if you provide Infrastructure as a Service (IaaS), go with 2 x 10 Gbit Fibre aggregated, so 20 Gigabit, or better, aggregate 2 x 20 Gbit Fibre. It's expensive, but crucial.

Now compare the 9.3 MB per second, or even the theoretical 125 MB/s of Gigabit, with the average real sequential read of 50 MB/second that a SATA disk can offer when connected locally, or nearly double that for modern SAS 15.000 rpm disks… (writing is always slower)

… and with the 550 MB/s for reading and 550 MB/s for writing that some SSD disks offer when connected locally. (I own two OCZ SSD disks that perform at 550 MB/s.)

I've also seen better configurations for local disks, like a good disk controller with Raid 5 and SSD disks. With my dd tests I got more than 900 MB per second for writing!

So if you are going to spend 30.000 € on your NAS with SATA disks (a really bad solution, as SATA is consumer technology not aimed at working 24×7 and not even fast) or SAS disks, and 30.000 € more on your blade servers, think very carefully about what you need and what configuration you will use. Contact experts, but real experts, not supposedly real experts.

Otherwise you'll waste your money and your customers will have very, very poor performance in these times where applications on the Internet demand more and more performance.