Category Archives: Hardware

What is PrototypeC and how you can DIY

The C with the external keyboard+touchpad and PHPStorm

It all started with my best friend, and the best Engineer I know, E. H. We often meet to do hackathons and explore Node, Go, patterns, technologies… Sometimes we just meet and improvise a hackathon.

I’ve been carrying a laptop with me almost every single day for many years (that’s why I came up with some jokes about gym and IT to have fun with my friends, like: we IT guys go to the gym to get strong enough to carry a laptop everywhere, all the time, without effort).

But some of my friends suffer from back pain and can’t carry weight, so they don’t bring a laptop with them unless it’s necessary.

There are other friends, Sysadmins in the Operations field, who get alerts, so they have to carry a laptop with them when they’re on call; but sometimes they get an alert when they’re off duty, nobody else can fix it, and they don’t have a computer with them.

I’m also concerned about privacy, and sadly most smartphone makers install crappy Software that spies on the users, sends keystrokes to third-party companies, etc… So having a portable device, small as a credit card, where you can type your business ideas or your thoughts with some privacy, or that you can use to fix remote servers when they break, looked appealing. This also fits with my private messenger c-client.

So, when I configured my Raspberry Pi 2 with OSMC and started to install Software, program crons, etc… I realized that it is a tiny computer, capable enough. And then an idea came to me.

What if I could make this lightweight motherboard a wearable computer?

And I started to play with this.

OS

I first needed an OS interesting enough that I, or my DevOps friends, would like to use it.

I chose Debian Jessie for ARM (armhf). The Raspberry Pi 2 uses ARMv7, a very important improvement over previous Raspberry Pi models, not only in performance but also in floating point. That simplifies many things, and many more Software packages are available (like Snappy Ubuntu Core or Windows 10).

Using Debian Jessie provided me with a very basic system that really uses little RAM: only 65 MB, with all the supported Wifi firmwares.

Then I installed X-Windows, with LXDE as Desktop Manager. Everything uses only 120 MB of RAM.

I installed several packages; the highlights are the Epiphany web browser, the Open Java Runtime Environment, and the PHPStorm IDE.

I also use Ubuntu 15.04 Desktop for ARM.

Battery

Like the first-generation Raspberry Pi, the Raspberry Pi 2 uses a standard 5 V micro-USB input, like most Android phones. So I decided to do some tests of sustained energy input from batteries.

First I calculated the energy consumption of the Pi 2, plus Ethernet, plus some USB devices I wanted to attach.

I found that common power banks, the ones you use to charge your smartphone when its battery is depleted and you’re on the go, provide a continuous energy supply that works perfectly well. I tested it by running the Raspberry Pi 2 for hours, playing video and doing live updates, flawlessly.

Then the first version of the Prototype C was born.

Costs:

  • $40 Raspberry Pi 2
  • $10 Powerbank 2000 mAh
  • $15 USB Wifi-N card

It weighs 160 grams (0.35 pounds / 5.64 ounces).

Some powerbanks are so cool that they allow you to charge them while at the same time they provide energy at the output. So you can have the Prototype C continuously running, and if the battery is about to deplete you can just plug it into a power socket at a bar: the C will continue running non-stop while the battery charges, and after a while you can continue your walk without having stopped the C.

Prototype C-2 Spartan: 5000 mAh solar powered Levin battery, Raspberry Pi 2, blue micro-USB connector, Ziron blue Z-cable

Later I bought a Levin solar-powered battery, which is lightweight, impact-protected, has two outputs (1 A, 2 A) and charges while exposed to the sun, so I can carry the battery on the outside of my bag, and while I walk in the city it charges at a max rate of 200 mA. This is ideal for people hiking in the mountains or traveling the world, and it can save lives if a problem comes up and the phone’s battery is depleted.

I also think of children in Africa or in poor countries who have to walk a long way to school. That walk could charge the battery, so they could study at school and continue studying at home.

With the C, a solar-powered battery also allows blogging from everywhere. :)

You don’t even need a cable, just a small USB-to-micro-USB connector

Note: This Levin solar battery does not accept being charged and providing energy to the device at the same time.

Note: you can also power it from the car lighter socket with an adapter.

Monitor

That part is not easy.

The monitors I was interested in were not being sold to Catalonia. Finally I found some of interest on Amazon.es.

Most of those monitors work with the first-gen Raspberry Pi, or with Arduino, Banana Pi, etc… but not with the Raspberry Pi 2. So this is the first thing to be careful about.

Others get the signal from the Raspberry Pi GPIO (General Purpose Input Output) pins. So they need some kernel patching and binaries, usually not Open Source, and that also means that certain kinds of direct drawing in X are not displayed on a monitor attached via GPIO.


Other monitors require an external power source, often 12 V, and so are very bulky. So they are not a fit for our purposes.

Normally those monitors are touch screens; some take power from the GPIO and also handle the touch input through it, but others require two USB connections: one for power and another for the touch controller.

You also have to consider whether they have an external button to power on and off.

Also, those monitors have extremely low resolutions, like 320×480. I only found one model supporting 800×600, and it is a 5″.

And finally you have to take power consumption into consideration.

I solved this by buying power banks that have a 2 or 2.5 A output.

Simple power banks provide a 1 A output; better ones provide two outputs: a 1 A output for smartphones and a 2 A output, normally for tablets.

Part of the cool thing is the possibility of having the monitor in one place, and the Pi and the battery in another, for example for creating wearables and other cool stuff.

As I wanted to provide a DIY guide, and an elegant, cheap, lightweight solution, the difficulty of finding those monitors was a gap, so I explored several ways to use cheaper screens until I realized: what is it that every engineer (and everybody else) carries every day? The smartphone.

USB Wifi

As I wanted Wifi, I needed a USB Wifi adapter that works with the Raspberry Pi 2 and Debian Jessie.

I had some problems at the beginning (caused by a bug in wicd), but I managed to make everything work. I have made it work flawlessly with two different Wifi card models: the TP-Link Nano and the Asus Nano USB-N10.

Nothing stops you from adding two USB Wifi adapters to the device. In fact that is my preferred option, as I use one to stay connected/controlled/displayed to the phone, and the other to connect to the Wifi of the place I am in. That way I don’t use my mobile Data plan that much when I’m outside. In this case I recommend using two different cards (with different supported chipsets) to save yourself headaches configuring which one does what (unless you like to fight with configurations until you master everything :). A sketch of this split is shown below.
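As a sketch, this is how the two adapters could be split in /etc/network/interfaces on Debian Jessie (interface names, SSIDs and passphrases are invented examples; with wicd you can achieve the same from the GUI):

# /etc/network/interfaces fragment -- a minimal sketch, values are examples
# wlan0: always joins the phone's shared connection (control/display)
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "MyiPhone"
    wpa-psk  "phone-hotspot-password"

# wlan1: joins the Wifi of the place I am in (general Internet traffic)
auto wlan1
iface wlan1 inet dhcp
    wpa-ssid "BarWifi"
    wpa-psk  "bar-password"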

Some USB Wifi devices also include Bluetooth, which is very interesting for future prototypes controlling other devices (car radio, headsets, commands…).

Controlling the C from the smartphone

I started to test things to overcome the display difficulties, and finally I had a cool idea and found a very nice option.

Here, configuring the VNC client to access display :1

Sharing the Internet from my iPhone creates an internal network of the type 172.20.10.x.

Sharing the Internet connection from the phone, with the C connecting automatically, creates a private network with full visibility between the devices.

So with an ssh app for iPhone like Server Auditor I was able to connect through ssh to the C, and with a VNC Client, like Real VNC Client, I was able to connect to the Desktop of the Prototype C, control it, send keystrokes, zoom in and out. Both apps are free to download and use.

It works the same on Android: sharing the Internet connection creates an internal network in another range, and there are also free apps for the purpose.

The ip assigned is normally the same, 172.20.10.9 in my case.

The only thing required was to set up the C to auto-connect via Wifi to my phone’s shared Wifi connection, and then connect to the C device’s ip.

This is very easily done with wicd from LXDE.

Mark Automatically connect to this Network

I configured the vnc server so I can access the same display as the HDMI output, allowing me to switch from working with the smartphone to HDMI easily, and I also configured the vnc server with a second display at 1024×768 resolution, which was more comfortable for working only with the small screen of my iPhone 4. Setting an even lower resolution, like 800×600, makes it easier to work with tiny screens.
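For reference, a minimal sketch of both setups on Debian, assuming the x11vnc and tightvncserver packages are installed (the options are the standard ones; adapt as needed):

# Share the real display :0, the same one HDMI shows (x11vnc);
# set a password once with: x11vnc -storepasswd
x11vnc -display :0 -usepw -forever

# Create an independent session on display :1 at 1024x768 (tightvncserver)
vncserver :1 -geometry 1024x768 -depth 16

From the phone, the VNC client then connects to port 5900 for display :0 or 5901 for display :1 (5900 + display number).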

With the VNC client you can zoom in and out with the classical gestures and use the keyboard.

Not only that: I allowed several of my friends to have their own independent sessions on the same C. So with only one C, several Engineers can Develop or do DevOps stuff.

I must say that with the Nexus and other smartphones with bigger screens, or with tablets, the experience is amazing, much much better than with my tiny iPhone 4 screen.

And of course you can control the C also from a laptop.

Nailing it

I added an external bluetooth keyboard+touchpad to the set.

Prototype C with the bluetooth keyboard. The motherboard is hidden under the battery

If you use VNC on display :0 it’s perfect; obviously the external keyboard won’t work if you use different displays (as keystrokes are sent only to the device, not to the different X VNC sessions, of course).

With an external keyboard, the worries about spyware on the iPhone were mitigated, as all the keystrokes happen on the physical external bluetooth keyboard and not on the iPhone’s screen.

The C with a smaller battery

I used a Rii mini i8, and a Logitech.

Some photos of real use examples

Sometimes I just connect the raw motherboard to the solar charger in the bag and only use the smartphone on the table.

We had gone for dinner and to the cinema with some IT Engineer friends when suddenly one of the SysAdmins got an alert and needed a Linux box

No space? Do you need a sandbox and have no memory for a virtual one?

Opening PHPStorm at the bar, under LXDE


The C at the bar displaying the Epiphany web browser with the address http://www.prototypec.com

C over a jacket in a restaurant

Waiting time? Get C!

Mounting a USB mini webcam in a glass, while at a bar. The battery is charging the iPhone and powering the Prototype C

Honestly, I love to carry the raw Raspberry Pi 2 motherboard with me (in a cardboard box) and connect it as is, but you can use a plastic box. It is also very useful for hanging on walls, furniture, or in the car.

Raspberry Pi and osmc

There is something that fascinates me about the new Raspberry Pi, and about using it as a media center.
It is the fact that it is a really small board.
That it is powered by a 1000 mA micro-USB supply.
That it runs Linux.

I had other media centers before, but they were magnetic hard disks closed in a proprietary system.
The media center I installed on the Raspberry Pi is OSMC, the Open Source Media Center.

So I have full access via ssh to the Raspberry, and as it uses so little energy I have it up all day.
Then, as it is a Linux box and I have full access, with around 546 MB of RAM free, I can run as many background processes as I want.
Do I want it to be a jump point for my VPN? Let’s go.
Do I want some monitoring processes over a few websites? Let’s do it!
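As an illustration of the kind of background job I mean, a minimal sketch of a cron-driven website check (the URL is an example, and the mail alert assumes a configured MTA on the Pi):

# /etc/cron.d/website-check -- check a site every 5 minutes, alert root if it fails
*/5 * * * * root curl -sf -o /dev/null https://www.example.com || echo "site down" | mail -s "ALERT example.com" root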

I’m really happy about having something so tiny, consuming so little energy, running full Linux, being my media center and doing whatever I want it to do.

I must say it is wonderful having SSH and a network interface. Ok, it’s 10/100 Mbps, not Gigabit, but it is enough to allow me to copy new files in the background to the USB stick via SFTP while playing FullHD Blu-ray MKV files just fine. It also allows mounting network folders via NFS or SMB and playing from them. Copying via SFTP to the USB device is generally very slow -don’t be surprised to upload at 30 KB/s- so I recommend setting up an NFS folder on the computer, with read access for the ip of the Raspberry. It’s very cool and plays totally smoothly over the 100 Mbit ethernet connection. You can also configure an FTP server on the Pi, which will be much faster than SFTP.
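A minimal sketch of that NFS setup (ips and paths are examples): one line on the computer exporting the folder read-only to the Pi, and the mount on the Pi.

# On the computer (Debian package nfs-kernel-server), in /etc/exports:
# (192.168.1.50 being the Raspberry's ip)
/home/media 192.168.1.50(ro,sync,no_subtree_check)
# Apply the export:
exportfs -ra

# On the Raspberry (as root), 192.168.1.40 being the computer's ip:
mount -t nfs 192.168.1.40:/home/media /mnt/media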

The Raspberry micro SD card has a performance of ~22 MB/s, which is enough to boot very quickly and to load programs quite fast. I have other microSD cards with Debian Jessie, and PHPStorm (a Java-based PHP IDE) loads quite fast.

It boots really fast, in case you stop and start it frequently.

It accepts my wired Mouse and Keyboard, and also wireless bluetooth ones.

I’m really in love with this small motherboard. :)

This tiny Raspberry Pi 2 has 4 cores at 900 MHz.

The CPU announces (cat /proc/cpuinfo):

processor    : 3
model name    : ARMv7 Processor rev 5 (v7l)
BogoMIPS    : 38.40
Features    : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
CPU implementer    : 0x41
CPU architecture: 7
CPU variant    : 0x0
CPU part    : 0xc07
CPU revision    : 5

As you see, it scores only 38.40 bogomips, compared to my tower desktop’s 6384.59 and my old laptop’s 2593.45, but it’s still beautiful.

Note: you cannot trust bogomips as a performance measurement, and in addition my computers are Intel based -so CISC architecture- while the Raspberry uses ARM processors, which are RISC, a completely different architecture. I notice a very fluid speed; I only sense a bit of slowness when I install new packages. Unpacking feels slow, although that could perfectly well be caused by the SD card I/O, so I installed iotop and monitored the I/O while I was installing PHP5 :) . I got small writes of up to 1,000 KB/s, so 1 MB/s, with an average of ~30-50 KB per write operation and no iowait, while htop showed that the core doing the unpacking was at 100% CPU and the other 3 were free, so my initial conclusion is that the bottleneck was the CPU. Still happy about my little gadget. :)

The osmc image I installed comes with python 2.7.9 and Linux kernel 3.18.9, as uname -a shows:

Linux osmc 3.18.9-5-osmc #1 SMP PREEMPT Wed Mar 11 18:59:35 UTC 2015 armv7l GNU/Linux

It also comes with wget 1.16 and curl 7.38.0.

In fact, OSMC is based on the Debian Jessie distro.

The OSMC software also has upgrades, and Debian upgrades, that keep the Linux box up to date.

So that brings a lot of possibilities.

After a sudo apt-get update I was able to install htop, mc, apache2 and more:

sudo apt-get install htop
sudo apt-get install iotop
sudo apt-get install iftop
sudo apt-get install mc
sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install ncdu

So it’s a lot of fun. :)

Note: Although 1000 mA is enough (the Raspberry Pi 2 needs around 700 mA), if you plan to plug in a cheap 2.5″ hard disk case without external power -just USB- it will not be enough. In that case I recommend buying a 2000 mA power supply for the Pi, or an externally powered USB hub (2000 mA; otherwise you risk the energy for the Raspberry + USB hub being insufficient). If the disk has external power, then you’ll have no problem. Personally I use USB sticks.

When I had my incubator of Startups some years ago, one of my Startup projects was embedding motherboards within screens, offering the ability to play videos, images, even flash games and animations, and to manage and update everything, including the contents for groups of players, from the Internet or based on time triggers. I was a finalist for selling my product to an enormous multinational; it was close, but finally a Korean company with a cheaper (and less powerful) solution won. At that time, 2004, motherboards were huge compared to this tiny piece of hardware, and I had to deal with different voltages, power consumption, heat dissipation, safety, etc… so I’m really in love with this tiny piece of hardware that doesn’t even need a fan or a big dissipation mechanism.

The Cloud is for Scaling

Last Update: 2022-08-03. I added the Enterprises example, as more and more services have been provided by Amazon and the other major Cloud Providers, and they now offer a suitable solution for Enterprises requiring high availability in services managed by Amazon, without the need to configure and maintain their own setups manually.

The Cloud is for Startups, and for Scaling. Nothing more.

In the future it will be used by phone operators to re-dimension their infrastructure and bandwidth in real time according to demand, but nowadays the Cloud is for Startups.

Examine the prices in my post on cmips, take a look, and examine the performance of the different CPUs too. You’ll see that, according to CMIPS v1.03, a Desktop Processor Intel i7-4770S, worth USD $300, performs better than an Amazon M2 High Memory Quadruple Extra Large and than a Rackspace First gen. 30 GB RAM 8 Cores.

Today the public cost of an Amazon M2 High Memory Quadruple Extra Large running for a month is USD $1,180.80, which is USD $1.64 per hour, and the Rackspace First Generation 30 GB RAM 8 Cores 1200 GB of disk costs USD $1,425.60, which is USD $1.98 per hour running.
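You can verify those hourly figures yourself: they are just the monthly price divided by a 30-day month (720 hours).

echo "scale=2; 1180.80 / 720" | bc   # 1.64 USD/hour, the Amazon instance
echo "scale=2; 1425.60 / 720" | bc   # 1.98 USD/hour, the Rackspace one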

And that’s the key, the cost per hour.

Because the greatness, the majesty, of the Cloud is that you pay per hour, you pay as you need, as you go. No binding contracts. All on demand.

I had my company at a time when hosting companies and Data Centers were forcing customers to sign yearly contracts. What if a company only needs to host its Servers for three months? What if it has to close? No options. You take it or you leave it.

Even renting a dedicated server was for at least a month or more, and what if the latency was not good? What if the bandwidth of the provider was not enough?

Amazon burst into the market with strength. I really like that company, because they grew the best eCommerce company for buying books, they built a system that really worked and was able to recommend very useful computer books, and the delivery, logistics and post-sales service were so good. They simply started to rent the same infrastructure they were using to attend to their millions of customers, and it was a total success.

And for a while few people knew about Amazon’s deep technologies and functionalities, but later it became a fashion.

Now people use Amazon or whatever provider/Service that contains the word “Cloud”, because the Cloud is on everyone’s lips. Magazines and newspapers speak about the Cloud, so many, many companies use it simply because everyone is talking about it. And those ISPs that didn’t have a Cloud have invested heavily in creating one, just because they didn’t want to be the ones without a Cloud, since everyone was asking for it and all the ISP companies were offering their “Clouds”.

Every company claims to have a “Cloud” when all that many of them have is VMware servers, Xen servers, OpenStack… running the tenants or instances of the customers always on the same host servers. No real Cloud, no professional Cloud abstracted in layers in a Professional way like Amazon; only the traditional “shared hosting” with another name, sharing CPU, RAM and Disk storage using virtual machines called instances.

So the Cloud fashion has become a confusing craziness where no one knows why they are in the Cloud, but they believe they have to be in it.

But do companies need the Cloud? Cloud instances?

It depends. The best thing would be to ask those companies: why did you choose the Cloud?

If you compare the cost of having an instance in the Cloud, it is much, much more expensive than having a dedicated server. And for that high cost you don’t get more performance.

Virtualization is always slower, and disk speed is always an issue with Cloud providers, where all the data travels over the network from the NAS disk cabinets to the Host servers running the guest instances. Data cannot be on local disks, since every time you start an instance the resources like CPU and RAM are provisioned and your instance runs on totally different hardware; only your data remains in the NAS (Network Attached Storage).

So unless you run your in-the-Cloud instance with a special provider that offers local disks, like DigitalOcean, which offers SSD but with monthly payment (and so you pay the price of losing the hardware abstraction capability, because you’re attached to the CPU that has the disk connected, and you also lose the flexibility of paying per hour of use, as you go), you’ll face a bottleneck: hard disk performance (since in reality all the data comes from the NAS, where it is stored, through the local network).

So what are the motivations to use the Cloud? I’ll try to give some examples; outside of these, I think it doesn’t make much sense. You can send me your happy-in-the-Cloud scenarios if you have found other good uses.

Example A) Saving initial costs, avoiding binding contracts, and growing easily

Imagine a Developer starting his own project. Maybe it works, maybe it doesn’t, but instead of signing a monthly contract for a dedicated server, he starts with an Amazon Free Tier instance (better not: use a Small instance at least) and runs a web. If it does not work, he simply stops the instance and pays no more. If the project works and gets more and more users, he can re-dimension the server with a click: just stop the instance, change the type of instance, and start it again with more RAM and more CPU power. Fast.
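With today’s AWS CLI, that re-dimensioning can be sketched like this (the instance id and target type are just examples):

# Stop the instance, wait, change its type, and start it again
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m3.xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0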

Hiring a dedicated server implies at least a monthly contract, an average of USD $100 per month, and it is not easy to move to a bigger server: it is not fast, and it is expensive, as it requires the ISP tech guys to move the data, to migrate from one Server to another.

The available bandwidth also has to be taken into consideration. Bandwidth is expensive, and Amazon can offer 150 Mbit to its smaller machines. Not all Internet Service Providers can offer that bandwidth, even with their most advanced packages.

If the project keeps growing, with a click, in seconds, 20 instances with a lot of bandwidth can be deployed and serving traffic to your customers very quickly.

You save the initial costs of buying Servers and the time spent dealing with hardware and bandwidth limitations, and you avoid contracts, but you pay an hourly rate that is a lot more expensive. So in the long run using Amazon is much, much more expensive and less powerful than having dedicated servers. That happened to Zynga, which was paying $63M annually to Amazon and decided to step back from Amazon to their own Data Centers again. (another fortune tech link)

The limited CPU power was also a deal breaker for many companies that needed really powerful CPUs and gigs of RAM for their Database Servers. Now this situation is much better with the introduction of the new Servers.

This developer can benefit from doing backups with a click, cloning, starting instances from an image, getting more static ip’s with a click, deploying built-in (from the Cloud provider) load balancers, using monitoring services like CloudWatch, creating Volumes and attaching them to the servers for additional space…

Example B) A Startup with a fluctuating number of users and hopes of growing

Imagine a Startup with a wonderful Facebook Application.

During 80% of the day it has few visits and may only need 3 Servers, but during 20% of the hours of the day, from 10:00 to 15:00, users connect like hell, so it needs 20 servers to attend to this traffic and workload, and maybe tomorrow it will need 30 servers.

With the Cloud they pay for 3 servers 24 hours per day, and for the other 17 servers they only pay for the hours they are on, that’s 5 hours per day. Doing that they save money and they have an unlimited* amount of power. (* There are limits in reality: you have to specifically request authorisation to run more than the default maximum of servers for the zone, which is normally 20 instances on Amazon. It can also theoretically happen that when you request new instances the Zone has no instances available.)
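To see the saving, assume a hypothetical price of USD $0.10 per instance-hour over a 30-day month:

# 3 servers always on plus 17 servers for 5 hours/day, vs 20 servers always on
echo "scale=2; (3*24 + 17*5) * 30 * 0.10" | bc   # 471.00 USD/month
echo "scale=2; 20*24*30*0.10" | bc               # 1440.00 USD/month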

So, for a growing Startup, avoiding hiring 20 dedicated servers and instead running in the Cloud as many as they need, for just the time they need, Auto-Scaling up and down, and being able to use the servers NOW and pay next month with a Visa card: all of that can make a difference.

If the servers chosen are not powerful enough, that is solved with a click, by changing the instance type. So fast. A minute.

It’s only a matter of money.

Example C) e-Learning companies and online universities

e-Learning platforms also benefit from Auto-Scaling during the hours of full occupation.

The Cloud’s built-in functionality to clone instances is very useful for deploying new web servers, or new environments for students doing practice work, in the case of teaching Information Technology subjects, where the users need to practice against a real server (Linux or Windows).

Those servers can be created and destroyed, cloned from the main -ready to go- template. Servers can also be scheduled to stop at a certain hour and to start at another, saving the money of the hours not needed; a sketch of such a schedule follows.
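A minimal sketch with cron and the AWS CLI (the instance id is an example): stop the practice servers every evening and bring them back in the morning, Monday to Friday.

# crontab fragment -- stop lab instances at 20:00, start them at 08:00, Mon-Fri
0 20 * * 1-5  aws ec2 stop-instances  --instance-ids i-0123456789abcdef0
0 8  * * 1-5  aws ec2 start-instances --instance-ids i-0123456789abcdef0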

Example D) Digital agencies, sports and other events

When there is a Special event, like a motorcycle race, when a Football Team scores, or when there is a TV spot announcing a product…

At those moments the traffic to the site can multiply, so more servers and more bandwidth have to be deployed instantly. That cannot be done with physical servers and hardware, but it is very easy to provision instances from the Cloud.

Mass email campaigns can also benefit from creating new Servers when needed.

Example E) Proximity and SEO

Cloud providers have Data Centres everywhere. If you want to have servers in Asia, or in South America, or in Europe, or you want static content to be served faster… the Cloud providers have plenty of Data Centers all over the world.

Example F) Game aficionados and friends sharing contents

People who love cooperative games can find the bandwidth-hungry capacity they need at a moderate price. If they run their private server a few hours at night, from 22:00 to 01:00 for example, they will benefit from the great bandwidth of the big Cloud provider and pay only 3 hours per day (excess traffic is usually charged by most providers, but the price per additional GB is usually really, really competitive).

Friends sharing contents over Ftp can also benefit from these Cloud servers, but they will probably find services like Dropbox easier to use.

Example G) A Startup serving contents

A Startup serving videos, images, or books can benefit not only from the great bandwidth of big Cloud providers (this has been covered before), but also from a very cheap price per Gigabyte transferred over the quota.

Local ISPs can’t offer 150 Mbit for an instance of USD $20 with USD $0.12 per additional GB transferred.

Many Cloud providers also allow unlimited incoming traffic from the Internet, and from Server to Server through private ip’s.

Example H) Enterprises that want to outsource their solutions and have better availability

Since I wrote this article in 2013, many CSPs have been incorporating more and more powerful services into their catalog.

For instance, Amazon has services like DynamoDB, RDS, Load Balancers able to balance servers in different availability zones with Auto-Scaling, SQS for Message Queues, ElastiCache, Lambda, S3…

Many Enterprises find it convenient to offload all their services to Amazon, Google Cloud, Microsoft Azure… so they require fewer Operations and System Administration Engineers.

Other cases

For other cases Dedicated Servers are much more Powerful, faster and cheaper, at the price of being “static”: attached, not abstracted in layers. But all the aspects of your Project have to be taken into account before deciding to step into or out of the Cloud.

In general terms I would say that the Cloud is for Scaling, but in 2022 it is also for outsourcing Operation Services, to services fully managed by the CSPs.

NAS and Gigabit

Note 2019-05-28: This article was written in 2013. It is still valid, but since then 10 Gbps has dropped a lot in price. In my DRAID Solution I’ve qualified 10GbE based on copper, RJ45, and for fiber: 10, 25, 40 and 100 Gbps. Mellanox switches and NICs are a reference. 10GbE over copper is cheap and easy to deploy, as you can reuse existing infrastructure and grow your segments. https://en.wikipedia.org/wiki/10_Gigabit_Ethernet

I’ve found this problem in several companies, and I’ve had to show the error and convince experienced SysAdmins, CTOs and CEOs of their erroneous approach. Many of them made heavy investments in NAS that they are really wasting, getting very poor performance.

Normally rack servers have their local disks, but in professional solutions, like virtual machines, blade servers and setups of hundreds of servers, the local disks are not used.

NAS -Network Attached Storage- Servers are used instead.

These NAS Servers, when they are powerful (and expensive), offer very interesting features like hot backups, hot backups that do not slow the system (the most advanced ones), hot disk replacement, hot increase of the total available space; the Enterprise solutions can replicate and copy data between different NAS in different countries, etc…

Smaller NAS are also used in configurations like Webservers’ Webfarms, where all the nodes have to have the same information replicated: when a user uploads a new profile image, it has to be available to all the webservers, for example.

In these configurations the servers save and retrieve the needed data from the NAS Servers through the LAN (Local Area Network).

The main error I have seen is that no one ever considers the pipe through which all the data travels, so most configurations are simply Gigabit, and thus a bottleneck.

Imagine a Dell blade enclosure, like the one in the image on the left.

This enclosure hosts 16 hot-pluggable servers, with up to two CPUs per blade. We also call those blade servers “pizza” (like we used to call rack servers before).

A common use is to run VMware, OpenStack, Xen or other virtualization software on those servers, so they run the instances of the customers. In this scenario the virtual disks (the hard disks of the virtual machines) are stored in the NAS Server.

So if a customer shuts down his virtual server and starts it later, the physical server where the virtual machine runs may be another one, but the data (the disk of the virtual server) is stored in the NAS, and all the data is saved to and retrieved from the NAS.

The enclosure is connected to the NAS through a Gigabit connection, as 10 Gigabit connections are still too expensive and not yet supported by many servers.

With that explained, imagine those 16 servers, each with 4 or 5 virtual machines, accessing their disks through a single Gigabit connection.

If only one of these 80 virtual machines is accessing the disk, there will be no problem, but if more than one is accessing it, the Gigabit connection, a maximum of 125 MB (Megabytes) per second, will be shared among all the virtual machines.

So imagine: 70 virtual machines are accessing the NAS to serve web pages, with not much traffic, OK, but the other 10 virtual machines are doing heavy data transfers: for example, one is serving data through an FTP server, another is broadcasting video, another is copying heavy log files, and so on… Imagine that scenario.

The 125 MB per second is divided between the 80 servers, so those 10 servers using the disk extensively will monopolize the bandwidth, but even those 10 servers will get around 12.5 MB/s each, which is 100 Mbit each, and that is very slow.

Imagine one of the virtual machines broadcasts video. To broadcast video, first it has to get it from the NAS (the chunks of data), so this node serving video will only be able to serve different videos to a few customers, as the network will not provide more than 12.5 MB/s under the circumstances described.

This is a simplified scenario, as many other things have to be taken into account: SATA, SCSI and SAS disks do not provide sustained speeds, as speed depends on locating the info, fragmentation, etc… It also has to be considered that NAS use the iSCSI protocol, a sort of SCSI commands sent over Ethernet, and Tcp/Ip uses verifications and protocol headers; that is also an overhead. I’ve considered only traffic in one direction, i.e. the servers downloading from the NAS, assuming Gigabit full duplex, so Gigabit for sending and Gigabit for receiving.

So instead of 125 MB per second, we have around 100 MB per second available with Gigabit, or even less.

Also, the virtualization servers try to handle disk access a bit better by keeping a cache in memory and not writing immediately to disk.

So you can’t do dd tests in virtual machines like you would in any Linux with local disks, and if you do, go for big files, like 10 GB of random data (not just zeroes: they have optimizations for that).
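A minimal sketch of such a test (paths are examples; run as root to be able to drop the page cache): generate a 10 GB file of random data first, then time a sequential read of it.

# Create 10 GB of random data (slow: /dev/urandom is CPU bound)
dd if=/dev/urandom of=/tmp/test10g.bin bs=1M count=10240
# Drop the Linux page cache so the read really hits the disk/NAS
sync && echo 3 > /proc/sys/vm/drop_caches
# Measure the sequential read speed
dd if=/tmp/test10g.bin of=/dev/null bs=1M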

Let’s recalculate it now:

70 virtual machines using as little as 0.10 MB/second each: that’s 7 MB/second. That’s really optimistic, as most webservers running PHP read many big files to attend a simple request, and webservers serve a lot of big images.

10 virtual machines using the NAS extensively, sharing 100 MB - 7 MB = 93 MB. That is 9.3 MB/s each.

So under these circumstances, for a virtual machine trying to read a 1 GB file (1000 MB) from disk, the operation will take 107 seconds, i.e. 1:47 minutes.
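The division, checked:

echo "scale=1; (100 - 70*0.10) / 10" | bc   # 9.3 MB/s for each busy VM
echo "scale=0; 1000 / 9.3" | bc             # ~107 seconds to read 1 GB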

So with these considerations in mind, you can imagine that the performance of the virtual machines under those configurations is left to luck: the luck that nobody else among the other guests on the servers is abusing the disk I/O.

I’ve explained this in theoretical terms. Sadly, reality is worse. A lot worse. Those 70 virtual machines with webservers will be so slow that they will leave your company very disappointed, and the other 10 will not be happier.

One of the principal problems of Amazon EC2 has always been disk performance. A few months ago they released Provisioned IOPS, high performance disks that are more expensive but faster.

It has to be recognized that Amazon is always improving.

They also offer connections between your servers at 10 Gbit/second.

Returning to the Blades and the NAS, an easy improvement is to aggregate two Gigabit links, creating a 2 Gbit connection. This helps a bit. It is not the solution, but it helps; a sketch follows.
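A minimal sketch of that aggregation on Debian with the ifenslave package (interface names and the address are examples; the switch must support the chosen bonding mode, e.g. 802.3ad/LACP):

# /etc/network/interfaces fragment -- bond two Gigabit NICs into bond0
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100

Note that a single TCP flow still tops out at 1 Gbit; the aggregation helps when many clients or flows share the link.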

Probably different physical servers with few virtual machines each, a dedicated 1 Gbit connection to the NAS (or 2 Gbit via 1+1 aggregation if possible), and using local disks as much as possible, would be much better (harder to maintain at big scale, but much, much better performance).

But if you provide Infrastructure as a Service (IaaS), go with 2 x 10 Gbit Fibre aggregated, so 20 Gigabit, or better, aggregate 2 x 20 Gbit Fibre. It’s expensive, but crucial.

Now compare the 9.3 MB per second, or even the theoretical 125 MB/s of Gigabit, with the average real sequential read of 50 MB/second that a SATA disk can offer when connected locally, or nearly double that for modern SAS 15,000 rpm disks… (writing is always slower)

… and with the 550 MB/s for reading and 550 MB/s for writing that some SSD disks offer when connected locally. (I own two OCZ SSD disks that perform at 550 MB/s.)

I’ve also seen better configurations for local disks, like a good disk controller with Raid 5 and SSD disks. With my dd tests I got more than 900 MB per second for writing!

So if you are going to spend €30,000 on your NAS with SATA disks (a really bad solution, as SATA is domestic technology, not aimed at working 24×7 and not even fast) or SAS disks, and €30,000 more on your blade servers, think very carefully about what you need and what configuration you will use. Contact experts: real experts, not supposedly real experts.

Otherwise you’ll waste your money and your customers will have very, very poor performance in these times when applications on the Internet demand more and more performance.