Category Archives: Data Centers

Datacenters, D&R and coronavirus

I’ve been working for years within Data Centers, with D&R strategies, and then in the middle of COVID-19, with huge demands for more bandwidth and compute, some DCs decided not to allow in the Engineers of their customers.

As somebody who had his own Startup and CSP, with infrastructure in DCs and customers’ servers in colocation, who has replaced Hw components at 1 AM, replaced drives in broken RAIDs, and fixed systems so many times inside so many Data Centers across the world, I’m shocked by that.

I understand health reasons can be argued, but I still have Servers in Data Centers because we all believed they were the safest place, prepared for disaster and recovery, with security, 24×7… and now one realises that one cannot enter to fix or upgrade one’s own machines.
Please note: you can still use the remote hands of the DC, although many times this is not a good idea, and I’m not sure this will still be an available option when the lockdown in those countries becomes more strict.

I’m wondering if the DCs’ current model has any future at all.

I think most of the D&R strategies from now on will be in the cloud, in different regions, with different providers, so companies can survive a provider or a government letting them down.

CTOP.py

For updated information visit the main page for CTOP.py

Current stable version is v.0.8.9 updated on 2022-07-03.

Current branch under development is v.0.8.10 updated on 2022-07-03.

Version 0.8.0 added compatibility with Python 2, for older Systems.

Find the source code at: https://gitlab.com/carles.mateo/ctop

Clone it with:

git clone https://gitlab.com/carles.mateo/ctop.git

ctop.py is an Open Source tool for Linux System Administration that I’ve written in Python 3. It uses only the System (/proc) and no third-party libraries in order to get all the information required.
I use only these modules, so it’s ideal to run across the whole farm of Servers and Docker containers (a minimal illustration of this /proc-only approach follows the module list):

  • os
  • sys
  • time
  • shutil (for getting the Terminal width and height)
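As a minimal standalone sketch of that approach (illustrative only, this is not the actual ctop.py code), here is how some of those indicators can be read straight from /proc with only the Standard Library:

#!/usr/bin/env python3
# Standalone sketch of the ctop.py approach: read indicators
# directly from /proc, with no third-party libraries.
# Illustrative only; this is not ctop.py's internal code.
import shutil
import time

def get_uptime_seconds():
    # /proc/uptime contains: "<seconds up> <seconds idle>"
    with open("/proc/uptime") as f:
        return float(f.read().split()[0])

def get_meminfo_kb():
    # /proc/meminfo lines look like: "MemTotal:    16384000 kB"
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

if __name__ == "__main__":
    columns, rows = shutil.get_terminal_size()
    mem = get_meminfo_kb()
    print("Terminal: %dx%d" % (columns, rows))
    print("Uptime: %d seconds" % get_uptime_seconds())
    print("MemTotal: %d kB  MemAvailable: %d kB"
          % (mem["MemTotal"], mem.get("MemAvailable", 0)))
    print("Local time: %s  Epoch: %d" % (time.ctime(), int(time.time())))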

The purpose of this tool is to help to troubleshoot and identify problems with a single view, in a single tool that has all the typical indicators.

It provides in a single view information that is typically provided by many programs:

  • top, htop for the CPU usage, process list, memory usage
  • meminfo
  • cpuinfo
  • hostname
  • uptime
  • df to see the free space in / and the free inodes
  • iftop to see real-time bandwidth usage
  • ip addr list to see the main IP for the interfaces
  • netstat or lsof to see the list of listening TCP Ports
  • uname -a to see the Kernel version

Other cool things it does:

  • Identifies if you’re inside an Amazon VM, a Google GCP VM, an OpenStack VM, a VirtualBox VM, a Docker Container or LXC.
  • Compatible with Raspberry Pi (tested on 3 and 4, on Raspbian and Ubuntu 20.04 LTS)
  • Uses colors, marking warnings in yellow and errors in red: problems like little free disk space remaining or high CPU usage relative to the available cores and CPUs.
  • Redraws the screen and adjusts to the size of the Terminal; a bigger terminal displays more information
  • It doesn’t use external libraries, and does not escape to the shell. It reads everything from /proc, /sys or /etc files.
  • Identifies the Linux distribution
  • Supports Plugins loaded on demand.
  • Shows the most repeated binaries, so you can identify DDoS attacks (like having 5,000 Apache instances where you normally have 500, or many instances of Python); see the sketch after this list
  • Indicates if an interface has the cable connected or disconnected
  • Shows the Speed of the Network Connection (useful for Mellanox cards that can operate at 200 Gbit/sec, 100, 50, 40, 25, 10…)
  • It displays the local time and the Linux Epoch Time, which is universal (very useful for logs and to detect when there was an issue; for example, if your system restarted, your SSH Session would keep the latest Epoch captured)
  • No root required
  • Displays recent errors like NFS timeouts or Memory Read Errors.
  • You can enforce the output to a fixed number of columns and rows, for data scraping.
  • You can specify the number of loops (1 for scraping; by default it is infinite)
  • You can specify the time between screen refreshes, for long-lived SSH sessions
  • You can specify to see the output in b/w or in color (default)
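As an illustration of the “most repeated binaries” idea (a standalone sketch, not ctop.py’s actual code), you can count the command names of all running processes straight from /proc:

#!/usr/bin/env python3
# Standalone sketch: count how many processes share the same
# binary name, reading /proc directly (no external libraries).
# Illustrative only; this is not ctop.py's internal code.
import os
from collections import Counter

def get_process_names():
    names = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue  # only numeric entries are PIDs
        try:
            with open("/proc/%s/comm" % entry) as f:
                names.append(f.read().strip())
        except OSError:
            pass  # the process may have exited meanwhile
    return names

if __name__ == "__main__":
    # An unusually high count for one binary (e.g. thousands of
    # apache2 workers) can indicate a DDoS or a runaway service.
    for name, count in Counter(get_process_names()).most_common(10):
        print("%6d  %s" % (count, name))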

Plugins allow you to extend the functionality effortlessly, without having to learn all the code. I provide a Plugin sample that turns lights on and off on a Raspberry Pi depending on the CPU Load, and plays a message: “The system is healthy” or “Warning. The CPU is at 80%”.
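The idea behind that sample, sketched as a standalone script (this is not ctop.py’s real plugin interface, just an illustration of reacting to CPU load; the GPIO and audio calls are stubbed out as prints):

#!/usr/bin/env python3
# Standalone illustration of the Raspberry Pi plugin idea:
# read the load average from /proc and react to it.
# Not ctop.py's plugin API; light/audio actions are stubbed.
import os

def get_load_per_core():
    with open("/proc/loadavg") as f:
        load_1min = float(f.read().split()[0])
    return load_1min / (os.cpu_count() or 1)

if __name__ == "__main__":
    load = get_load_per_core()
    if load < 0.8:
        print("The system is healthy")  # here: turn the green light on
    else:
        # here: turn the red light on and play the audio warning
        print("Warning. The CPU is at %d%%" % int(load * 100))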

Limitations:

  • It only works on Linux, not on Mac or Windows. The idea is to help with Linux Server Administration and Troubleshooting anyway, and Mac and Windows do not have /proc
  • The list of processes of the System is read every 30 seconds, to avoid adding too much overhead on the System; other info is refreshed every second
  • Before version 0.8.0 it did not run in Python 2.x and required Python 3 (tested on 3.5, 3.6, 3.7, 3.8, 3.9)

I decided to code name version 0.7 “Catalan Republic” to support the dreams and hopes and democratic requests of the Catalan people to become an independent republic.

I created this tool as Open Source, and if you want to help I need people to test it under different versions of:

  • Atypical Linux distributions

If you are a Cloud Provider and want me to implement detection of your VMs, so the tool knows that it is an instance from Amazon, Google, Azure, CloudSigma, DigitalOcean… contact me through my LinkedIn.

(Image: monitoring an Amazon Instance; take a look at the amount of traffic sent and received.)

Some of the features I’m working on are parsing the logs to check for errors: kernel panics, processes killed due to lack of memory, iSCSI disconnects, NFS errors, and checking the logs of MySQL and Oracle databases to locate errors.

Erasure Coding, simple, for huge Storage needs

According to Wikipedia Erasure Coding means:

In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols. The fraction r = k/n is called the code rate. The fraction k’/k, where k’ denotes the number of symbols required for recovery, is called reception efficiency.

So RAID systems applied to drives are Erasure Codes too.

But I want to talk about Erasure Coding for the needs of organizations like Instagram, that need to store huge amounts of files and cannot afford to lose data simply because several drives, or a whole Server, fail.

So what is the way to ensure this if you have thousands of Servers?

Many Startups that need to host files cannot afford to have every file duplicated or triplicated in other systems.

So how can this be done in a cheap and efficient way?

Here is where Erasure Coding comes to play.

Erasure Coding works as simply as this:

  1. Take a file, for example a 10 MB video
  2. We apply the Erasure Coding to encode the file
  3. We select, for example, to generate 3 additional chunks
  4. So our original 10 MB file will be split into 13 blocks (13 new files), and each block will have approx. 1 MB
  5. We can rebuild the original file by combining any 10 of those 13 files

That means that we can afford to lose 3 blocks (1 MB files) and still be able to reconstruct the original file.
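To make the mechanics tangible, here is a deliberately simplified sketch in Python with a single XOR parity block (k data blocks plus 1 parity, surviving the loss of any one block). Real k+m configs like 10+3 use Reed-Solomon style codes (libraries such as zfec or PyECLib implement them); this toy version only shows the principle:

#!/usr/bin/env python3
# Toy erasure coding: k data blocks + 1 XOR parity block.
# Survives the loss of ANY single block. Real 10+3 configs use
# Reed-Solomon codes (e.g. zfec, PyECLib), not plain XOR.

def xor_blocks(blocks):
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def encode(data, k):
    block_size = -(-len(data) // k)             # ceiling division
    padded = data.ljust(k * block_size, b"\0")  # pad the last block
    blocks = [padded[i * block_size:(i + 1) * block_size] for i in range(k)]
    blocks.append(xor_blocks(blocks))           # parity block
    return blocks

def recover(blocks, lost_index):
    # XOR of all the surviving blocks rebuilds the missing one
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

if __name__ == "__main__":
    blocks = encode(b"my 10 MB video, pretend this is big", k=4)
    lost = 2    # imagine the Server holding block 2 died
    assert recover(blocks, lost) == blocks[lost]
    print("Block %d rebuilt from the %d survivors" % (lost, len(blocks) - 1))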

Examples:

  1. OK, so now imagine we have 13 identical Servers, and we encode all our files using Erasure Coding. Imagine that we store each block in a different Server. That means that we can lose 3 Servers and still have all our information intact.
  2. Imagine we have 100 Servers, and we spread all those blocks to the Servers that have the most free space available. We could lose 3 Servers and still not have lost any information. If we are really lucky (or the SDS – Software Defined Storage – is very clever) we could lose more than 3 Servers.
  3. Now imagine we have 100 Racks full of Servers. Our SDS selects the Rack that has the most free space and places one of the blocks there, and the same for the other 12 blocks. We could afford to lose 3 Racks without losing any Data. That’s more manageable for Google or Yahoo than managing at Server level.

We can use Erasure Coding with different configs like 8+3, or 10+4… The sample I chose, 10+3, is easy to understand, as we clearly see that it will occupy only 30% of additional space.

Those blocks can conveniently be stored in different Servers, and across different regions too. For example, using a config of 9+3 you can have 4 different Cloud Providers in different geographic regions, each holding 25% of the blocks, so 3 blocks each. Then you only require 3 Cloud Providers to rebuild the original file (you only need 9 surviving blocks, not all 12). The possibilities are infinite.

When one Rack is down, you can rebalance all the blocks that were there to another rack.

Also you can have different Servers, with different capacities… Your SDS should be clever enough to accommodate the blocks for protection and space efficiency, and to checksum them to ensure no corruption happened while the block was stored or transported over the network. Your SDS Software should be clever enough to add new nodes and Racks, to subtract nodes, to Rebalance, to checksum the blocks in the Servers… and to store the information effectively on the local Servers (not too many files per folder…), to use Commodity Hardware with low memory, or even VMs… If your System is good enough it will even put to sleep, to save energy, the Servers that are not in use (typically the Servers that are full), until they are required.

Also, when a file needs to be recovered, the clever SDS Software, using multithreading, will ask the 9 locations at the same time, in parallel, using all the available bandwidth, in order to fetch the blocks and rebuild the original file really quickly. This can also be implemented with no single point of failure, with all the nodes being able to act as the head node.

That’s exactly what my Erasure Coding solution did.

I have invented a lot of technologies to scale out since I created my messenger in 1996.

You can do it yourself, or use existing Erasure Coding solutions. The best known is OpenStack Swift, although in my opinion it is a pain to configure and maintain.

Dealing with Performance degradation on ZFS (DRAID) Rebuilds when migrating from a single processor to a multiprocessor platform

This is a story that happened to me some time ago, with the commands I used to troubleshoot it. The purpose is to share knowledge in an interactive way. There are some hidden gems that you’ll acquire if you have the patience to go over the whole document and read it all…

I had qualified an Intel Xeon single-processor platform to run my DRAID (ZFS Declustered RAID) project for my employer.

The platforms I qualified were:

1) Single processor, for Cold Storage (SAS Spinning drives): the 4U60, newest model the 4602

2) Multiprocessor: the 4U90 (90 Spinning drives) and Flash: All-Flash Arrays.

The amounts of RAM I was using for my tests ranged from 64 GB to 384 GB.

Somebody in the company, at executive level, assembled an experimental config that was totally new for us and wanted to try it on their own: the 4602 with two processors and 32 GB of RAM.

When they were unable to make it work at the expected speed, they asked me to troubleshoot it and make it work.

The 4602 single processor had two IOCs (Input Output Controller: LSI Logic / Symbios Logic SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)), while the 4602 dual processor had four IOCs. Given that each of those IOCs can perform at peaks of 6 GB/s, with a maximum total of 24 GB/s, the performance when reading/writing from all the drives should be better.

But this Server was taking twice as long for Rebuilding compared to the single processor version, which didn’t make any sense.

I had to check everything. These were the commands I ran:

Check the upgrade of the CPU:

htop
lscpu

Changing the Zoning.

Those Servers use dual-ported SAS drives, which means that two different computers can be connected to the same drive and operate at the same time. It is up to you to make sure you don’t introduce corruption. Those systems are used mainly for HA (High Availability).

Those Systems can be configured in different zoning modes. The zoning is the way each of the two servers (Controllers) sees the disks. In one zoning mode each Controller sees only 30 drives; in another, each IOC sees all the drives (for redundancy, but with performance constrained to the speed of 1 IOC).

The config I set is that each IOC sees 15 drives, so each one of the 4 IOCs will have 6 GB/s for 15 drives. Given that these spinning drives deliver 265 MB/s on the outermost tracks, at maximum speed one IOC will be using 3.97 GB/s, let’s say 4 GB/s. Plenty of bandwidth.
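The arithmetic, as a quick sanity check (a trivial sketch using only the figures mentioned above):

#!/usr/bin/env python3
# Quick sanity check of the zoning bandwidth budget, using the
# figures from the text: 4 IOCs at 6 GB/s peak each, 15 drives
# per IOC, 265 MB/s per drive on the outer tracks.
IOC_PEAK_GBS = 6.0
IOCS = 4
DRIVES_PER_IOC = 15
DRIVE_OUTER_MBS = 265

demand_gbs = DRIVES_PER_IOC * DRIVE_OUTER_MBS / 1000.0
print("Per IOC: %.2f GB/s needed of %.1f GB/s available" % (demand_gbs, IOC_PEAK_GBS))
print("Total:   %.2f GB/s needed of %.1f GB/s available"
      % (IOCS * demand_gbs, IOCS * IOC_PEAK_GBS))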

Note: Spinning drives have different performance depending on which part of the platter is being read (the outermost tracks are the fastest). On the innermost tracks they go under 145 MB/s, and if you read the whole drive sequentially with dd it will return an average speed of 145 MB/s.

With this command you can see live how it performs, and the average read speed in real time. Use skip to jump to a given position (relative to bs) in the drive, so you can directly test the speed at the innermost part of the drive:

dd if=/dev/sda of=/dev/null bs=1M status=progress

I saw that the zoning was not the right one, so I set it correctly:

[root@4602Carles ~]# sg_map -i | grep NEWISYS
/dev/sg30  NEWISYS   NDS-4602-CS       0112
/dev/sg61  NEWISYS   NDS-4602-CS       0112
/dev/sg63  NEWISYS   NDS-4602-CS       0112
/dev/sg64  NEWISYS   NDS-4602-CS       0112
[root@4602Carles10 ~]# sg_senddiag /dev/sg30  --pf --raw=04,00,00,01,53
[root@4602Carles10 ~]# sleep 50
[root@4602Carles10 ~]# sg_senddiag /dev/sg30 --pf -r 04,00,00,01,43
[root@4602Carles10 ~]# sleep 50
[root@4602Carles10 ~]# reboot

The sleeps after rebooting the expanders are recommended, and so is rebooting the Operating System, to avoid problems with some Software, as the expanders changed live.

If you have ZFS pools or workloads, stop them and export the pool before messing with the expanders.

In order to check which drives are connected to each IOC:

[root@4602Carles10 ~]# sg_map -i -x
/dev/sg0  0 0 0 0  0  /dev/sda  TOSHIBA   MG07SCA14TA       0101
/dev/sg1  0 0 1 0  0  /dev/sdb  TOSHIBA   MG07SCA14TA       0101
/dev/sg2  0 0 2 0  0  /dev/sdc  TOSHIBA   MG07SCA14TA       0101
/dev/sg3  0 0 3 0  0  /dev/sdd  TOSHIBA   MG07SCA14TA       0101
/dev/sg4  0 0 4 0  0  /dev/sde  TOSHIBA   MG07SCA14TA       0101
/dev/sg5  0 0 5 0  0  /dev/sdf  TOSHIBA   MG07SCA14TA       0101
/dev/sg6  0 0 6 0  0  /dev/sdg  TOSHIBA   MG07SCA14TA       0101
/dev/sg7  0 0 7 0  0  /dev/sdh  TOSHIBA   MG07SCA14TA       0101
/dev/sg8  1 0 8 0  0  /dev/sdi  TOSHIBA   MG07SCA14TA       0101
/dev/sg9  1 0 9 0  0  /dev/sdj  TOSHIBA   MG07SCA14TA       0101
/dev/sg10  1 0 10 0  0  /dev/sdk  TOSHIBA   MG07SCA14TA       0101
/dev/sg11  1 0 11 0  0  /dev/sdl  TOSHIBA   MG07SCA14TA       0101
[...]
/dev/sg16  4 0 16 0  0  /dev/sdq  TOSHIBA   MG07SCA14TA       0101
/dev/sg17  4 0 17 0  0  /dev/sdr  TOSHIBA   MG07SCA14TA       0101
[...]
/dev/sg30  0 0 30 0  13  NEWISYS   NDS-4602-CS       0112
[...]

Still, after setting the right zoning, the Rebuilds were slow: the scan rate was half of the one obtained with a single processor.

I tested that the system was able to provide the expected performance by reading from all the drives at the same time. This is done with:

dd if=/dev/sda of=/dev/null bs=1M status=progress &
dd if=/dev/sdb of=/dev/null bs=1M status=progress &
dd if=/dev/sdc of=/dev/null bs=1M status=progress &
dd if=/dev/sdd of=/dev/null bs=1M status=progress &
[...]

I do this for all the drives at the same time, and watch the aggregate with iostat (a sketch for scripting this follows the command):

iostat -y 1 1
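Typing one dd per drive gets old with 60 or 90 drives. A small sketch that spawns one sequential reader per disk (the device selection is an assumption; adapt it to your enclosure):

#!/usr/bin/env python3
# Spawn a sequential reader (dd) per whole-disk device to verify
# that the platform can feed all the drives at once. Watch
# 'iostat -y 1' in another terminal for the aggregate throughput.
# Sketch only: adjust the device selection to your system.
import glob
import re
import subprocess

# whole disks only (/dev/sda, /dev/sdab...), no partitions
devices = [d for d in sorted(glob.glob("/dev/sd*"))
           if re.fullmatch(r"/dev/sd[a-z]+", d)]

readers = []
for dev in devices:
    p = subprocess.Popen(
        ["dd", "if=%s" % dev, "of=/dev/null", "bs=1M"],
        stderr=subprocess.DEVNULL)  # throughput comes from iostat instead
    readers.append(p)

print("Started %d readers" % len(readers))
for p in readers:
    p.wait()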

I check the status of the memory with:

slabtop
free
htop

I checked the memory and htop during a Rebuild. Memory was more than enough. However, CPU usage was higher than expected.

The red bars in the image correspond to kernel processes, in this case the DRAID Rebuild. I saw that the load was higher than usual compared with a single processor.

I capture all the parameters from ZFS with:

zfs get all

All this information is logged into my forensics document, so later it can be checked by my Team, or I can share it with other Architects or other members of the company. I started this methodology after I learned how Google do their SRE forensics / postmortem documents. It is also useful for myself in the future, to have a log of the commands I executed and a verbose output of the results.

I install smp_utils:

yum install smp_utils

Check things:

ls -al  /dev/bsg/
total 0
drwxr-xr-x.  2 root root     3020 May 22 10:16 .
drwxr-xr-x. 20 root root     8680 May 22 10:16 ..
crw-------.  1 root root 248,  76 May 22 10:00 1:0:0:0
crw-------.  1 root root 248, 126 May 22 10:00 10:0:0:0
crw-------.  1 root root 248, 127 May 22 10:00 10:0:1:0
crw-------.  1 root root 248, 136 May 22 10:00 10:0:10:0
crw-------.  1 root root 248, 137 May 22 10:00 10:0:11:0
crw-------.  1 root root 248, 138 May 22 10:00 10:0:12:0
crw-------.  1 root root 248, 139 May 22 10:00 10:0:13:0
[...]
[root@4602Carles10 ~]# smp_discover /dev/bsg/expander-1:0
[...]
[root@4602Carles10 ~]# smp_discover /dev/bsg/expander-1:1

I check for errors in the expanders that could justify the performance problems:

for i in `seq 0 64`; do smp_rep_phy_err_log -p $i /dev/bsg/expander-1\:0 ; done
Report phy error log response:
  Expander change count: 567
  phy identifier: 0
  invalid dword count: 0
  running disparity error count: 0
  loss of dword synchronization count: 0
  phy reset problem count: 0
[...]
Report phy error log response:
  Expander change count: 567
  phy identifier: 52
  invalid dword count: 168
  running disparity error count: 172
  loss of dword synchronization count: 5
  phy reset problem count: 0
Report phy error log response:
  Expander change count: 567
  phy identifier: 53
  invalid dword count: 6
  running disparity error count: 6
  loss of dword synchronization count: 0
  phy reset problem count: 0
Report phy error log response:
  Expander change count: 567
  phy identifier: 54
  invalid dword count: 267
  running disparity error count: 270
  loss of dword synchronization count: 4
  phy reset problem count: 0
Report phy error log response:
  Expander change count: 567
  phy identifier: 55
  invalid dword count: 127
  running disparity error count: 131
  loss of dword synchronization count: 5
  phy reset problem count: 0
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant
Report phy error log result: Phy vacant

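With 64 phys per expander, eyeballing that output is error-prone. Here is a small sketch that prints only the phys with non-zero error counters (it assumes the exact field names shown above and that smp_utils is installed):

#!/usr/bin/env python3
# Parse 'smp_rep_phy_err_log' for every phy and print only the
# ones with non-zero error counters. Sketch: assumes the field
# names shown in the output above and smp_utils installed.
import subprocess

EXPANDER = "/dev/bsg/expander-1:0"
FIELDS = ("invalid dword count", "running disparity error count",
          "loss of dword synchronization count", "phy reset problem count")

for phy in range(65):
    out = subprocess.run(["smp_rep_phy_err_log", "-p", str(phy), EXPANDER],
                         capture_output=True, text=True).stdout
    errors = {}
    for line in out.splitlines():
        key, _, value = line.strip().partition(":")
        if key in FIELDS and value.strip().isdigit() and int(value) > 0:
            errors[key] = int(value)
    if errors:
        print("phy %d: %s" % (phy, errors))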
There are some errors, so I check with the Hardware Team, who run a battery of tests on the machine and say that the machine passes. They tell me that if the error counts were in the order of millions then it would be a problem, but having a few of them is usual.

My colleagues had previously reported that the memory was performing well, and the CPU too. They told me that the speed was exactly double compared with a platform with one single CPU of the same kind.

Even though they told me that, I ran cmips tests to make sure.

git clone https://github.com/cmips/cmips_bin

It scored 16,000. The performance was OK in general terms, but the problem was that I didn’t have a baseline for that processor in a single-processor config, so I could not be sure that the memory bandwidth was OK. The performance was less than an Amazon c3.8xlarge. The system I was testing was a two-processor system, but each CPU is cheap, around USD $400.

Still, my gut feeling was telling me that this dual-processor server should score more.

[root@DRAID-1135-14TB-2CPU ~]# lscpu
 Architecture:          x86_64
 CPU op-mode(s):        32-bit, 64-bit
 Byte Order:            Little Endian
 CPU(s):                32
 On-line CPU(s) list:   0-31
 Thread(s) per core:    2
 Core(s) per socket:    8
 Socket(s):             2
 NUMA node(s):          2
 Vendor ID:             GenuineIntel
 CPU family:            6
 Model:                 79
 Model name:            Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
 Stepping:              1
 CPU MHz:               2299.951
 CPU max MHz:           3000.0000
 CPU min MHz:           1200.0000
 BogoMIPS:              4199.73
 Virtualization:        VT-x
 L1d cache:             32K
 L1i cache:             32K
 L2 cache:              256K
 L3 cache:              20480K
 NUMA node0 CPU(s):     0-7,16-23
 NUMA node1 CPU(s):     8-15,24-31
 Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts spec_ctrl intel_stibp

I check the memory configuration with:

dmidecode -t memory

I examined the results and saw that the processor can only operate the DDR4 ECC 2400 Memory at 2133, and… I saw something! This Controller before was a single processor with 2 Memory Sticks of 16 GB each, dual rank.

I saw that now I had the same number of sticks in the machine, but two CPUs! So 2 Memory Sticks in total, for 2 CPUs.

That’s no good. The memory must be in pairs, and in the right slots, to get the maximum performance.

1 memory module per CPU doesn’t allow Dual Channel and was probably affecting the performance. Many Servers will not even boot if you add an odd number of memory sticks per CPU.

And many Servers can operate at full speed only if all the banks are filled.
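A quick way to spot this situation from the Operating System (a sketch; it assumes the standard dmidecode output fields and needs root):

#!/usr/bin/env python3
# Count populated vs empty memory slots from dmidecode.
# Sketch: assumes the standard 'dmidecode -t memory' output
# ('Size: 16384 MB' vs 'Size: No Module Installed'); needs root.
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True).stdout
populated = empty = 0
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Size:"):
        if "No Module Installed" in line:
            empty += 1
        else:
            populated += 1
print("Populated slots: %d, empty: %d" % (populated, empty))
# With 2 CPUs, fewer than 2 modules per CPU means no Dual Channel.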

I requested the Engineers in Silicon Valley to add 4 modules in the right slots. They did, I repeated the tests, and the performance was then doubled.

Some days later, when I had more time with the machine, I repeated the test and got a CMIPS Score of around 20,000.

The multiprocessor world is far more complicated than single processor. Sometimes things do not work as expected, and the reasons are not evident; for example, the cache pipeline can behave differently for a program running on a multiprocessor versus a single processor. Or the QPI (the interconnect between the two CPUs) could be saturated.

After this I shared my forensics document with as many Engineers as I could, so they could learn how I troubleshot the problem and what its origin was, and I asked them to do the same, so we can track their steps and progress if something needs to be troubleshot.

After proper intensive testing the Server was qualified. The lesson here is that changes cannot be committed quickly; they need their time.

The Cloud is for Scaling

Last Update: 2022-08-03. I added Example H, about Enterprises, as more and more services have been provided by Amazon and the other major Cloud Providers, and they now offer a suitable solution for Enterprises requiring high availability in services managed by Amazon, without the need to configure and maintain their own setups manually.

The Cloud is for Startups, and for Scaling. Nothing more.

In the future it will be used by phone operators, to re-dimension their infrastructure and bandwidth in real time according to demand, but nowadays the Cloud is for Startups.

Examine the prices in my post in cmips, and take a look also at the performance of the different CPUs. You will see that, according to CMIPS v.1.03, a Desktop Processor Intel i7-4770S, worth USD $300, performs better than an Amazon M2 High Memory Quadruple Extra Large and than a Rackspace First gen. 30 GB RAM 8 Cores.

Today the public cost of an Amazon M2 High Memory Quadruple Extra Large running for a month is USD $1,180.80, so USD $1.64 per hour, and the Rackspace First Generation 30 GB RAM 8 Cores 1200 GB of disk costs USD $1,425.60, so USD $1.98 per hour running.

And that’s the key, the cost per hour.

Because the greatness, the majesty, of the Cloud is that you pay per hour, you pay as you need, as you go. No binding contracts. All on demand.

I had my company at a time when the hosting companies and the Data Centers were forcing customers to sign yearly contracts. What if a company only needs to host their Servers for three months? What if they have to close? No options. You take it or you leave it.

Even renting a dedicated server required at least a month or more, and what if the latency was not good? What if the bandwidth of the provider was not enough?

Amazon burst into the market with strength. I really like that company because they grew the best eCommerce company for buying books, they built a system that really worked and was able to recommend very useful computer books, and the delivery and logistics were so good, also the post-sales service. They simply started to rent the same infrastructure they were using to attend their millions of customers, and it was a total success.

And for a while few people knew about Amazon’s deep technologies and functionalities, but later it became a fashion.

Now people use Amazon or whatever provider/Service that contains the word “Cloud”, because the Cloud is in the mouth of everyone. Magazines and newspapers speak about the Cloud, so many companies use it simply because everyone is talking about the Cloud. And those ISPs that didn’t have a Cloud have invested heavily to create one, just because they didn’t want to be the ones without a Cloud, since everyone was asking for it and all the ISP companies were offering their “Clouds”.

Every company claims to have a “Cloud”, when all many of them have is VMware servers, Xen servers, OpenStack… running the tenants or instances of the customers always on the same host servers. No real Cloud, no professional Cloud, abstracted and layered in a professional way like Amazon; only the traditional “shared hosting” with another name, sharing CPU, RAM and Disk storage using virtual machines called instances.

So the Cloud fashion has become a confusing craziness where no one knows why they are in the Cloud, but they believe they have to be in it.

But do companies need the Cloud? Cloud instances?

It depends. The best thing would be to ask those companies: why did you choose the Cloud?

If you compare the cost of having an instance in the Cloud, it is much, much more expensive than having a dedicated server. And for that high cost you don’t get more performance.

Virtualization is always slower, and disk speed is always an issue with Cloud providers, where all the data travels via network from the disk cabinets (NAS) to the Host servers running the guest instances. Data cannot be on local disks, since every time you start an instance the resources like CPU and RAM are provisioned anew, and your instance runs on totally different hardware. Only your data remains in the NAS (Network Attached Storage).

So unless you run your in-the-Cloud instance with a special provider that offers local disks, like DigitalOcean which offers SSD but with monthly payment (and so you pay the price by losing the hardware abstraction capability, because you’re attached to the CPU that has the disk connected, and you also lose the flexibility of paying per hour of use, as you go), you’ll face a bottleneck in hard disk performance (which in reality brings all the data from the NAS, where it is stored, through the local network).

So what are the motivations to use the Cloud? I try to give some examples; outside of these it doesn’t make much sense, I think. You can send me your happy-in-Cloud scenarios if you have found other good uses.

Example A) Saving initial costs, avoiding contract lock-in, and growing easily on your own

Imagine a Developer starting his own project. Maybe it works, maybe not, but instead of having a monthly contract for a dedicated server, he starts with an Amazon Free Tier (better not; use a Small instance at least) and runs a web. If it does not work, he simply stops the instance and pays no more. If the project works and has more and more users, he can re-dimension the server with a click: just stop the instance, change the type of instance, and start it again with more RAM and more CPU power. Fast.

Hiring a dedicated server implies at least monthly contracts, an average of USD $100 per month, and it is not easy to move to a bigger server: it is not fast, and it is expensive, as it requires the ISP tech guys to move the data, to migrate from one Server to another.

The available bandwidth also has to be taken into consideration. Bandwidth is expensive, and Amazon can offer 150 Mbit even to smaller machines. Not all Internet Service Providers can offer that bandwidth, even with their most advanced packages.

If the project still grows, with a click, in seconds, 20 instances with a lot of bandwidth can be deployed and serving traffic to your customers very quickly.

You save the initial costs of buying Servers and the time dealing with hardware and bandwidth limitations, and you avoid contracts, but you pay an hourly rate that is a lot more expensive. So in the long run it is much, much more expensive to use Amazon, and less powerful than having dedicated servers. That is what happened to Zynga, which was paying $63M annually to Amazon and decided to step back from Amazon to their own Data Centers again. (another fortune tech link)

The limited CPU power was also a deal breaker for many companies that needed really powerful CPUs and gigs of RAM for their Database Servers. This situation is much better now, with the introduction of the new Servers.

This developer can benefit from doing backups with a click, cloning, starting instances from an image, getting more static IPs with a click, deploying built-in (from the Cloud provider) load balancers, using monitoring services like CloudWatch, creating Volumes and attaching them to the servers for additional space…

Example B) A Startup with a fluctuating number of users and hopes of growing

Imagine a Startup with a wonderful Facebook Application.

During 80% of the day it has few visits and may only need 3 Servers, but during 20% of the hours of the day, from 10:00 to 15:00, users connect like hell, so it needs 20 servers to attend this traffic and workload, and maybe tomorrow it will need 30 servers.

With the Cloud they pay for 3 servers 24 hours per day, and for the other 17 servers they only pay the hours they are on, that’s 5 hours per day. Doing that they save money and they have an unlimited* amount of power. (*There are limits in reality: you have to specifically request authorisation to run more than the default maximum of servers for the zone, which is normally 20 instances for Amazon. It can also theoretically happen that when you request new instances, the Zone has no instances available.)
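To put rough numbers on that saving (a sketch; the hourly rate is a made-up placeholder, not a real Amazon price):

#!/usr/bin/env python3
# Rough cost comparison for Example B. The hourly rate is a
# placeholder, not a real price: plug in your instance type's rate.
RATE_USD_HOUR = 0.50   # assumption, not a real price
DAYS = 30

cloud_hours = 3 * 24 * DAYS + 17 * 5 * DAYS   # 3 always on + 17 for 5h/day
always_on_hours = 20 * 24 * DAYS              # 20 servers always on

print("Cloud:     %5d instance-hours -> USD $%.2f"
      % (cloud_hours, cloud_hours * RATE_USD_HOUR))
print("Always on: %5d instance-hours -> USD $%.2f"
      % (always_on_hours, always_on_hours * RATE_USD_HOUR))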

So, for a growing Startup, avoiding hiring 20 dedicated servers and instead running in the Cloud as many as they need, for just the time they need, Auto-Scaling up and down, being able to use the servers NOW and pay the next month with a Visa card: all of that can make a difference for a growing Startup.

If the servers chosen are not powerful enough, that is solved with a click, changing the instance type. So fast. A minute.

It’s only a matter of money.

Example C) e-Learning companies and online universities

e-Learning platforms also benefit from Auto-Scaling during the hours of full occupation.

The built-in functionality of the Cloud to clone instances is very useful to deploy new web servers, or new environments for students doing practice, in the case of teaching Information Technology subjects where the users need to practice against a real server (Linux or Windows).

Those servers can be created and destroyed, cloned from the main, ready-to-go template. Servers can also be scheduled to stop at a certain hour and to start at another, saving the money from the hours not needed.

Example D) Digital agencies, sports and other events

When there is a special event, like a motorcycle race, when a Football Team scores, or when there is a spot on TV announcing a product…

At those moments the traffic to the site can multiply, so more servers and more bandwidth have to be deployed instantly. That cannot be done with physical servers and hardware, but it is very easy to provision instances from the Cloud.

Mass mailing email campaigns can also benefit from creating new Servers when needed.

Example E) Proximity and SEO

Cloud providers have Data Centres everywhere. If you want to have servers in Asia, or static content to be deployed faster, or in South America, or in Europe… the Cloud providers have plenty of Data Centers all over the world.

Example F) Game aficionados and friends sharing content

People who love cooperative games can find the bandwidth-hungry capacity they need at a moderate price. If they run their private server a few hours at night, from 22:00 to 01:00 for example, they will benefit from the great bandwidth of the big Cloud provider and pay only 3 hours per day (excess traffic is usually charged by most providers, but the price per additional GB is usually really, really competitive).

Friends sharing content over FTP can also benefit from these Cloud servers, but they will probably find it easier to use services like Dropbox.

Example G) A Startup serving content

A Startup serving videos, images or books can benefit not only from the great bandwidth of the big Cloud providers (this has been covered before), but also from a very cheap price per additional Gigabyte transferred.

Local ISPs can’t offer 150 Mbit for a USD $20 instance with USD $0.12 per additional GB transferred.

Many Cloud providers also allow unlimited incoming traffic from the Internet, and from Server to Server through private IPs.

Example H) Enterprises that want to outsource their solutions and have better availability

Since I wrote this article in 2013, many CSPs have been incorporating more and more powerful services into their catalog.

For instance, Amazon has services like DynamoDB, RDS, Load Balancers able to balance servers in different availability zones, Auto-Scaling, SQS for Message Queues, ElastiCache, Lambda, S3…

Many Enterprises find it convenient to have all their services offloaded to Amazon, Google Cloud, Microsoft Azure… so they require fewer Operations and System Administration Engineers.

Other cases

For other cases, Dedicated Servers are much more powerful, faster and cheaper, at the price of being “static”: attached, not abstracted in layers. But all the aspects of your Project have to be taken into account before deciding to step into or out of the Cloud.

In general terms I would say that the Cloud is for Scaling, but in 2022 it is also for outsourcing Operation Services, to services fully managed by the CSPs.

NAS and Gigabit

Note 2019-05-28: This article was written in 2013. It is still valid, but since then 10 Gbps has dropped in price a lot. In my DRAID Solution I’ve qualified 10GbE based on copper, RJ45, and for fiber: 10, 25, 40 and 100 Gbps. Mellanox switches and NICs are a reference. 10GbE based on copper is cheap and easy to deploy, as you can reuse existing infrastructure and grow your segments. https://en.wikipedia.org/wiki/10_Gigabit_Ethernet

I’ve found this problem in several companies, and I’ve had to show them their error and convince experienced SysAdmins, CTOs and CEOs of the erroneous approach. Many of them made heavy investments in NAS that they are really wasting, getting very poor performance.

Normally rack servers have their local disks, but for professional solutions, like virtual machines, blade servers and farms of hundreds of servers, the local disks are not used.

NAS (Network Attached Storage) Servers are used instead.

These NAS Servers, when they are powerful (and expensive), offer very interesting features like hot backups, hot backups that do not slow the system (the most advanced ones), hot disk replacement, hot increase of total available space; and the Enterprise solutions can replicate and copy data between different NAS in different countries, etc.

Smaller NAS are also used in configurations like Webservers’ Webfarms, where all the nodes have to have the same information replicated, so that when a user uploads a new profile image, for example, it is available to all the webservers.

In these configurations the servers save and retrieve the needed data from the NAS Servers through the LAN (Local Area Network).

The main error I have seen is that no one ever considers the pipe through which all the data travels, so most configurations are simply Gigabit, and so they are the bottleneck.

Imagine a Dell blade server, like the one in the image on the left.

This enclosure hosts 16 servers, hot-pluggable, with up to two CPUs on each blade. We also call those blade servers “pizza” (like we used to call rack servers before).

A common use is to run VMware, OpenStack, Xen or other virtualization software on those servers, so they run the instances of the customers. In this scenario the virtual disks (the hard disks of the virtual machines) are stored in the NAS Server.

So if a customer shuts down his virtual server and starts it later, the physical server where the virtual machine runs may be another one, but the data (the disk of the virtual server) is stored in the NAS, and all the data is saved to and retrieved from the NAS.

The enclosure is connected to the NAS through a Gigabit connection, as 10 Gigabit connections are still too expensive and not yet supported by many servers.

With that explained, imagine those 16 servers, each with 4 or 5 virtual machines, accessing their disks through a single Gigabit connection.

If only one of these 80 virtual machines is accessing the disk there will be no problem, but if more than one is accessing it, the Gigabit connection, a maximum of 125 MB (Megabytes) per second, will be shared among all the virtual machines.

So imagine that 70 virtual machines are accessing the NAS to serve web pages, with not much traffic, OK, but the other 10 virtual machines are doing heavy data transmission: for example one is serving data through an FTP server, another is broadcasting video, another is copying heavy log files, and so on… Imagine that scenario.

The 125 MB per second is divided between the 80 servers, so those 10 servers using the disk extensively will monopolize the bandwidth, but even those 10 servers will get around 12.5 MB each, which is 100 Mbit each, and that is very slow.

Imagine one of the virtual machines broadcasts video. To broadcast video, first it has to get it from the NAS (the chunks of data), so this node serving video will only be able to serve different videos to a few customers, as the network will not provide more than 12.5 MB/s under the circumstances described.

This is a simplified scenario, as many other things have to be taken into account, like the fact that SATA, SCSI and SAS disks do not provide sustained speeds: speed depends on locating the info, fragmentation, etc. It also has to be considered that the NAS uses the iSCSI protocol, a sort of SCSI commands sent over Ethernet, and TCP/IP uses verifications in its protocol, plus protocol headers; that is also overhead. I’ve considered only traffic in one direction, the servers downloading from the NAS, assuming Gigabit full duplex: Gigabit for sending and Gigabit for receiving.

So instead of 125 MB per second we have around 100 MB per second available with a Gigabit, or even less.

Also, the virtualization servers try to handle disk access a bit better by keeping a cache in memory and not writing immediately to disk.

So you can’t do dd tests in virtual machines like you would do in any Linux box with local disks, and if you do, go for big files, like 10 GB, with random data (not just zeros; they have optimizations for that). A sketch for such a test follows.
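A minimal sketch (sizes and paths are placeholders) that writes a large file of random data and then times a sequential read of it:

#!/usr/bin/env python3
# Write a big file of random data (caches cannot optimize it away
# like zeros) and time a sequential read. Sizes and paths are
# placeholders; make the file bigger than the VM's RAM so the page
# cache does not fake the result.
import os
import time

PATH = "/tmp/testfile.bin"   # placeholder path
SIZE_MB = 1024               # use e.g. 10240 for a 10 GB test
CHUNK = 1024 * 1024

with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(os.urandom(CHUNK))
os.sync()                    # flush the write-back caches

start = time.time()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
elapsed = time.time() - start
print("Read %d MB in %.1f s -> %.1f MB/s" % (SIZE_MB, elapsed, SIZE_MB / elapsed))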

Let’s recalculate it now:

70 virtual machines using as little as 0.10 MB/second each: that’s 7 MB/second. That’s really optimistic, as most webservers running PHP read many big files to attend a single request, and webservers serve a lot of big images.

10 virtual machines using the NAS extensively, sharing 100 MB – 7 MB = 93 MB. That is 9.3 MB/second each.

So under these circumstances, for a virtual machine trying to read a 1 GB (1,000 MB) file from disk, the operation will take 107 seconds, that is 1:47 minutes.
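The same arithmetic as a small script, so you can play with the numbers of your own setup:

#!/usr/bin/env python3
# Back-of-the-envelope NAS bandwidth sharing, with the figures
# from the text. Adjust them to your own setup.
USABLE_MBS = 100.0                 # real usable Gigabit throughput, MB/s
QUIET_VMS, QUIET_EACH = 70, 0.10   # 70 light webserver VMs, MB/s each
HEAVY_VMS = 10                     # 10 I/O-heavy VMs

heavy_each = (USABLE_MBS - QUIET_VMS * QUIET_EACH) / HEAVY_VMS
print("Each heavy VM gets %.1f MB/s" % heavy_each)               # 9.3 MB/s
print("Reading 1,000 MB takes %d seconds" % int(1000 / heavy_each))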

So with these considerations in mind, you can imagine that the performance of the virtual machines under those configurations is left to luck: the luck that no other guest on the servers is abusing the disk I/O.

I’ve explained this on a theoretical plane. Sadly, reality is worse. A lot worse. Those 70 virtual machines with webservers will be so slow that they will leave your company very disappointed, and the other 10 will not be any happier.

One of the principal problems of Amazon EC2 has always been disk performance. A few months ago they released IOPS, high performance disks, which are more expensive but faster.

It has to be recognized that Amazon is always improving.

They also offer connections between your servers at 10 Gbit/second.

Returning to the Blades and NAS, an easy improvement is to aggregate two Gigabits, creating a 2 Gbit connection. This helps a bit. It is not the solution, but it helps.

Probably different physical servers with a few virtual machines each, a dedicated 1 Gbit connection (or 2 Gbit via 1+1 aggregation if possible) to the NAS, and using local disks as much as possible, would be much better (harder to maintain at big scale, but much, much better performance).

But if you provide Infrastructure as a Service (IaaS), go with 2 × 10 Gbit Fibre aggregated, so 20 Gigabit, or better, aggregate 2 × 20 Gbit Fibre. It’s expensive, but crucial.

Now compare the 9.3 MB per second, or even the theoretical 125 MB of Gigabit, with the average real sequential read of 50 MB/second that a SATA disk can offer when connected locally, or nearly double that for modern SAS 15,000 rpm disks… (writing is always slower)

… and the 550 MB/s for reading and 550 MB/s for writing that some SSD disks offer when connected locally. (I own two OCZ SSD disks that perform at 550 MB/s.)

I’ve also seen better configurations for local disks, like a good disk controller with RAID 5 and SSD disks. With my dd tests I got more than 900 MB per second for writing!

So if you are going to spend €30,000 on your NAS with SATA disks (a really bad solution, as SATA is a domestic technology not aimed at working 24×7 and not even fast) or SAS disks, and €30,000 more on your blade servers, think very well about what you need and what configuration you will use. Contact experts, but real experts, not supposedly real experts.

Otherwise you’ll waste your money, and your customers will have very, very poor performance in these times when applications on the Internet demand more and more performance.