
Resources for Microservices and Business Domain Solutions for the Cloud Architect / Microservices Architect

First you have to understand that Python, Java and PHP are completely different worlds.

In Python you’ll probably use Flask, listening on the port you want, inside a Docker Container.
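A minimal sketch of such a Service (the route, names and default port are only illustrative, not from any real project):

from flask import Flask, jsonify
import os

app = Flask(__name__)

@app.route('/billing/invoices/<s_invoice_id>')
def get_invoice(s_invoice_id):
    # In a real Service this would query the Service's own Database
    return jsonify({'invoice_id': s_invoice_id, 'status': 'pending'})

if __name__ == '__main__':
    # Listen on the port you want; inside Docker you would normally
    # take it from the environment and EXPOSE it in the Dockerfile
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)))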

In PHP you’ll use a Framework like Laravel, or Symfony, or Catalonia Framework (my Framework) :) and one repo or many (as the idea is that a change in one microservice cannot break another, it is recommended to have one git repo per Service), and you’ll split the requests with the API Gateway and Filters (so /billing/ goes to the right path in the right Server; it is like rewriting URLs). You’ll rely on Software to split your microservices. Usually you’ll use Docker, but you have to add a Web Server and any other tools, as the source code is not packaged with a Web Server and other Dependencies like it is in Java Spring Boot.

In Java you’ll use Spring Cloud and Spring Boot, and every Service will be self-contained in its own JAR file, which includes Apache Tomcat and all other Dependencies, and will normally run inside a Docker container. The TCP/IP listening port will be set at startup via the command line, or through the environment. You’ll have many git repositories, one per Service.

Using many repos, one per Service, also allows you to deploy only that repository, and to have better security, with independent deployment tokens.

It is not unlikely that you’ll use one language for some of your Services and another for others, as well as one Database or another, as each Service is the owner of its data.

In any case, you will be using CI/CD, and your pipeline will be something like this (a minimal sketch in Python follows the list):

  1. Pull the latest code for the Service from the git repository
  2. Compile the code (if needed)
  3. Run the Unit and Integration Tests
  4. Package the Service into an executable artifact (e.g. a Java JAR with the Tomcat server and other dependencies)
  5. Generate a Machine image with your JAR deployed (for Java, look at the Spotify Docker Plugin to do the Docker build from Maven), or with Apache, PHP, other dependencies, and the code. Normally this will be a Docker image. The image will be immutable. You will probably use Dockerhub.
  6. The Machine image is started and the Platform Tests are run.
  7. If the Platform Tests pass, the Service is promoted to the next environment (for example Dev -> Test -> PreProd -> Prod): the exact same machine image is started in the next environment and the Platform Tests are repeated.
  8. Before deploying the new Service to Production, I recommend running special Application / Behavior-driven Tests. By this I mean conducting tests that really exercise the functionality of everything, using a real browser and emulating the acts of a user (for example with Behat, Cucumber or with JMeter).
    I recommend this especially because Microservices are end-points, independent of the implementation, but normally they are APIs that serve a whole application. In an Application there are several components, and often a change in the Front End can break the application. Imagine a change in the Javascript Front End that results in a slightly different call, for example with a space before a name. Imagine that the Unit Tests for the Service do not test that, that it was not causing a problem in the old version of the Service, and so the application will crash when the new Service is deployed. Or another example: imagine that our Service for paying with Visa cards generates IDs for the Payment Gateway, and as a result of the new implementation the IDs generated change. With the mocked objects everything works, but when we deploy for real is when we are going to use the actual Bank Payment. This is also why it is a good idea to have a PreProduction environment, with PreProduction versions of the actual Services we use (all the banks, and the GDS for flight/hotel reservations like Galileo or Amadeus, have a Test Gateway, exactly like Production).
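As a minimal sketch of the flow above (the commands and the image name are hypothetical, only to show the order of the stages; in practice the pipeline would be defined in your CI server, like Jenkins, GitLab CI or Azure DevOps):

#!/usr/bin/env python
import subprocess
import sys

# One command per pipeline stage, run in order, stopping on failure
st_stages = [
    ['git', 'pull', 'origin', 'master'],                  # 1. Latest code
    ['mvn', 'test'],                                      # 2-3. Compile and run Unit/Integration Tests
    ['mvn', 'package'],                                   # 4. Executable artifact (JAR)
    ['docker', 'build', '-t', 'myorg/billing:1.0', '.'],  # 5. Immutable Machine image
    ['docker', 'push', 'myorg/billing:1.0'],              # 5. Publish it to the registry
]

for st_stage in st_stages:
    print('Running: ' + ' '.join(st_stage))
    if subprocess.call(st_stage) != 0:
        sys.exit('Stage failed, stopping the pipeline')

# 6-8. The CI server would now start the image, run the Platform Tests
# and promote the exact same image to the next environment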

If you work with Microsoft .NET, you’ll probably use Azure DevOps.

We IT Engineers, CTOs and Architects serve the Business. We have to develop the most flexible approaches, enabling the business to release as fast as it needs.

Take into account that Microservices are a tool, a pattern. We use them to bring more flexibility and speed in development, resilience of the services, and speed and independence in deployment. However, this comes at the cost of complexity.

Microservices are more about giving flexibility to the Business, and developing according to the Business Domains. Normally they are oriented to serving an API. If you have an API that is consumed by third parties you will have things like independence of Services (if one is down the others will still function), gradual degradation, the ability to scale only the Services that have more load, the ability to deploy a new version of a Service independently of the rest of the Services, etc… The complexity in the technical solution comes from all this resilience and flexibility.

If your Dev Team is up to 10 Developers, or you are writing just a CRUD Web Application or a PoC, or you are a Startup with a critical Time to Market, you probably will not want to use the Microservices approach. It is like killing flies with laser cannons. You can use the typical Web Services approach: do everything in one single HTTPS request, have transactions, a single Database, etc…

But if your team is 100 Developers, like in a big eCommerce, you’ll have multiple Teams of between 5 and 10 Developers per Business Domain, and you need independence for each Service, with less interdependence. Each Service will own its own Data. That is normally around 5 to 7 tables. Each Service will serve a Business Domain. You’ll benefit from having different technologies for the different needs; however, be careful to avoid having Teams with knowledge so different that there can hardly be any rotation, and projects become difficult to continue when the only 2 or 3 Devs that know that technology leave. Typical benefit scenarios can be having MySql for the Billing Services, but a NoSQL Database for the image catalog, or to store logs of account activity. With Microservices, some Services will be calling other Services, often asynchronously, using Queues or Streams; you’ll have Callbacks, Databases for reading, and you’ll probably want gradual and graceful failure of your applications, client load balancing, caches and read-only databases/in-memory databases… This complexity exists in order to protect one Service from the failure of others and to bring it the necessary speed under heavy load.
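As a trivial illustration of two Services communicating asynchronously through a Queue, here is a sketch assuming a local Redis instance and the redis-py library (the queue and field names are made up):

import json
import redis

r = redis.Redis(host='localhost', port=6379)

# Producer side: the Orders Service publishes an event and returns
# immediately, without waiting for the Billing Service
r.rpush('billing_queue', json.dumps({'order_id': 42, 'amount_eur': 9.99}))

# Consumer side (this would run in the Billing Service, typically in
# another process or container): block until an event arrives
s_queue, s_raw_event = r.blpop('billing_queue')
st_event = json.loads(s_raw_event)
print('Billing order %s' % st_event['order_id'])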

Here you can find a PDF Document of the typical resources I use for Microservice Projects.

You can also download it from my github repository:

https://github.com/carlesmateo/awesome-microservices

Do you use other solutions that are not listed? Leave a message. I’ll investigate them and update the Document, to share with the Community.

Update 2020-03-06: I found this very nice article explaining the same: Microservices are not for everybody and are not the default option: https://www.theregister.co.uk/AMP/2020/03/04/microservices_last_resort/

Update 2020-03-11: Monzo, with 1,600 microservices, says that the microservices architecture is the last resort: https://www.theregister.co.uk/AMP/2020/03/09/monzo_microservices/

Begin developing with Cassandra

This article is contributed to Luna Cloud blog by Carles Mateo.

We architects, developers and startups are facing new challenges.

We now have to create applications that have to scale, and scale at a world-wide level.

That puts big and exciting challenges on the table.

To allow that increasing level of scaling, we designed and architected tools, techniques and tricks, but fortunately now there are great products born to scale out and deal with these problems: NoSql databases like Cassandra, MongoDb, Riak, Hadoop’s Hbase, Couchbase or CouchDb, NoSql in-Memory solutions like Memcached or Redis, big data solutions like Hadoop, distributed file systems like Hadoop’s HDFS, GlusterFs, Lustre, etc…

In this article I will cover the first steps to develop with Cassandra, from the Developer’s point of view.

At first sight you may be interested in Cassandra because:

  • It is a Database with no single point of failure
  • All the Database Servers work in Peer to Peer over TCP/IP
  • Fault-tolerance: you can set the replication factor, and the data will be sharded and replicated over different servers, making it resilient to node failures
  • The Cassandra Cluster splits and balances the work across the Cluster automatically
  • You can scale by just adding more nodes to the Cluster; that’s scaling horizontally, and it’s linear: if you double the number of servers, you double the performance
  • You can have cool configurations like multi-datacenter and multi-rack, and have the replication done automatically
  • You can have several small, cheap, commodity servers with big SATA disks, with better results than one very big, very expensive, unable-to-scale-more server with expensive SSD or SAS disks
  • It has the CQL language -Cassandra Query Language-, which is close to SQL
  • It has the ability to send queries in async mode (the CPU can do other things while waiting for the query to return the results)

Cassandra is based on the key/value philosophy, but with columns. It supports multiple columns. That’s cool, as theoretically it supports 2 GB per column (at a practical level it is not recommended to go with data so big, especially in multi-user environments).

I will not lie to you: it is another paradigm, and it comes with a lot of knowledge to acquire, but it is necessary, and a price worth paying for being able to scale at the levels required nowadays.

Cassandra only offers native drivers for Java, .NET, C++ and Python 2.7. The rest of the solutions are contributed; sadly most of them are outdated and unmaintained.

You can find all the drivers here:

http://planetcassandra.org/client-drivers-tools/

To develop with PHP

Cassandra has no official PHP driver, but there are some contributed solutions.

I created several solutions myself: CQLSÍ, which uses cqlsh to perform queries and interfaces without needing Thrift; the Cassandra Universal Driver, a Web Gateway that I wrote in Python that allows you to query Cassandra from any language; and recently I contributed to a PHP driver that speaks the Cassandra binary protocol (v1) directly using TCP/IP sockets.

That’s the best solution for me right now, as it is the fastest and it doesn’t need any third-party library, nor Thrift either.

You can git clone it from:

https://github.com/uri2x/php-cassandra

Here we go with some samples:

Create a keyspace

A KeySpace is the equivalent of a database in MySQL.

<?php

require_once 'Cassandra/Cassandra.php';

$o_cassandra = new Cassandra();

$s_server_host     = '127.0.0.1';    // Localhost
$i_server_port     = 9042; 
$s_server_username = '';  // We don't use username
$s_server_password = '';  // We don't use password
$s_server_keyspace = '';  // We don't have created it yet

$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);

// Create a Keyspace with Replication factor 1, that's for a single server
$s_cql = "CREATE KEYSPACE cassandra_tests WITH REPLICATION = { 'class': 'SimpleStrategy', 'replication_factor': 1 };";

$st_results = $o_cassandra->query($s_cql);

We can run it from the web or from the command line with:

php -f cassandra_create.php

Create a table

<?php

require_once 'Cassandra/Cassandra.php';

$o_cassandra = new Cassandra();

$s_server_host     = '127.0.0.1';    // Localhost
$i_server_port     = 9042; 
$s_server_username = '';  // We don't use username
$s_server_password = '';  // We don't use password
$s_server_keyspace = 'cassandra_tests';

$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);

$s_cql = "CREATE TABLE carles_test_table (s_thekey text, s_column1 text, s_column2 text,PRIMARY KEY (s_thekey));";

$st_results = $o_cassandra->query($s_cql);

In CQL, VARCHAR is just an alias of TEXT (both store UTF-8 strings); if we don’t plan to insert UTF-8 strings, we can use the ASCII type instead.

Do an insert

In this sample we create an Array of 100 elements, serialize it, and then store it.

<?php

require_once 'Cassandra/Cassandra.php';

// Note this code uses the MT notation http://blog.carlesmateo.com/maria-teresa-notation-for-php/
$i_start_time = microtime(true);

$o_cassandra = new Cassandra();

$s_server_host     = '127.0.0.1';    // Localhost
$i_server_port     = 9042; 
$s_server_username = '';  // We don't have username
$s_server_password = '';  // We don't have password
$s_server_keyspace = 'cassandra_tests';  

$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);

$s_time = strval(time()).strval(rand(0,9999));
$s_date_time = date('Y-m-d H:i:s');

// An array to hold the emails
$st_data_emails = Array();

for ($i_bucle=0; $i_bucle<100; $i_bucle++) {
    // Add a new email
    $st_data_emails[] = Array('datetime'  => $s_date_time,
                              'id_email'  => $s_time);

}

// Serialize the Array
$s_data_emails = serialize($st_data_emails);

$s_cql = "INSERT INTO carles_test_table (s_thekey, s_column1, s_column2)
VALUES ('first_sample', '$s_data_emails', 'Some other data');";

$st_results = $o_cassandra->query($s_cql);

$o_cassandra->close();

print_r($st_results);

$i_finish_time = microtime(true);
$i_execution_time = $i_finish_time-$i_start_time;

echo 'Execution time: '.$i_execution_time."\n";
echo "\n";

This insert took 0.0091850757598877 seconds, executed from the CLI (Command Line).

If the INSERT works well you’ll have a [result] => ‘success’ in the resulting array.


Do some inserts

Here we do 9000 inserts.

<?php

require_once 'Cassandra/Cassandra.php';

// Note this code uses the MT notation http://blog.carlesmateo.com/maria-teresa-notation-for-php/
$i_start_time = microtime(true);

$o_cassandra = new Cassandra();

$s_server_host     = '127.0.0.1';    // Localhost
$i_server_port     = 9042; 
$s_server_username = '';  // We don't have username
$s_server_password = '';  // We don't have password
$s_server_keyspace = 'cassandra_tests';  

$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);

$s_date_time = date('Y-m-d H:i:s');

for ($i_bucle=0; $i_bucle<9000; $i_bucle++) {
    // Add a sample text, let's use time for example
    $s_time = strval(time());

    $s_cql = "INSERT INTO carles_test_table (s_thekey, s_column1, s_column2)
VALUES ('$i_bucle', '$s_time', 'http://blog.carlesmateo.com');";

    // Launch the query
    $st_results = $o_cassandra->query($s_cql);

}

$o_cassandra->close();

$i_finish_time = microtime(true);
$i_execution_time = $i_finish_time-$i_start_time;

echo 'Execution time: '.$i_execution_time."\n";
echo "\n";

Those 9,000 INSERTs take 6.49 seconds in my test virtual machine, executed from the CLI (Command Line).


Do a Select

<?php

require_once 'Cassandra/Cassandra.php';

// Note this code uses the MT notation http://blog.carlesmateo.com/maria-teresa-notation-for-php/
$i_start_time = microtime(true);

$o_cassandra = new Cassandra();

$s_server_host     = '127.0.0.1';    // Localhost
$i_server_port     = 9042; 
$s_server_username = '';  // We don't have username
$s_server_password = '';  // We don't have password
$s_server_keyspace = 'cassandra_tests';  

$o_cassandra->connect($s_server_host, $s_server_username, $s_server_password, $s_server_keyspace, $i_server_port);


$s_cql = "SELECT * FROM carles_test_table LIMIT 10;";

// Launch the query
$st_results = $o_cassandra->query($s_cql);
echo 'Printing 10 rows:'."\n";

print_r($st_results);

$o_cassandra->close();

$i_finish_time = microtime(true);
$i_execution_time = $i_finish_time-$i_start_time;

echo 'Execution time: '.$i_execution_time."\n";
echo "\n";

Printing 10 rows, passing the query with LIMIT:

$s_cql = "SELECT * FROM carles_test_table LIMIT 10;";

Echoing them as an array with print_r takes 0.01090407371521 seconds (the cost of printing is high).


If you don’t print the rows, it takes only 0.00714111328125 seconds.
Selecting 9,000 rows, if you don’t print them, takes 0.18086194992065 seconds.

Java

The official driver for Java works very well.

The only initial difficulties may be creating the required libraries with Maven and dealing with the different Cassandra native data types.

To make that journey easy, I describe what you have to do to generate the libraries, and provide you with a Db Class made by me that will abstract you from dealing with Data types and provide a simple ArrayList with the field names and all the data as Strings.

Datastax provides the pom.xml for Maven so you can create your jar files. Then you can copy those jar files to the Libraries folder of any project you want to use Cassandra with.

My Db class:

/*
 * By Carles Mateo blog.carlesmateo.com
 * You can use this code freely, or modify it.
 */

package server;

import java.util.ArrayList;
import java.util.List;
import com.datastax.driver.core.*;

/**
 * @author carles_mateo
 */
public class Db {

    public String[] s_cassandra_hosts = null;
    public String s_database = "cchat";
    
    public Cluster o_cluster = null;
    public Session o_session = null;
    
    Db() {
        // The Constructor
        this.s_cassandra_hosts = new String[10];
        
        String s_cassandra_server = "127.0.0.1";
        
        this.s_cassandra_hosts[0] = s_cassandra_server;
        
        this.o_cluster = Cluster.builder()
                                        .addContactPoints(s_cassandra_hosts[0]) // More than one, separated by commas
                                        .build();
        this.o_session = this.o_cluster.connect(s_database);  // This is the KeySpace

    }
    
    public static String escapeApostrophes(String s_cql) {
        String s_cql_replaced = s_cql.replaceAll("'", "''");
        
        return s_cql_replaced;
    }
    
    public void close() {
        // Destructor, called by the garbage collector
        this.o_session.close();
        this.o_cluster.close();
    }
    
    public ArrayList query(String s_cql) {
        
        ResultSet rows = null;
        
        rows = this.o_session.execute(s_cql);
        
        ArrayList st_results = new ArrayList();
        List<String> st_column_names = new ArrayList<String>();
        List<String> st_column_types = new ArrayList<String>();

        ColumnDefinitions o_cdef = rows.getColumnDefinitions();

        int i_num_columns = o_cdef.size();
        for (int i_columns = 0; i_columns < i_num_columns; i_columns++) {
            st_column_names.add(o_cdef.getName(i_columns));
            st_column_types.add(o_cdef.getType(i_columns).toString());                
        }                
        
        st_results.add(st_column_names);
        
        for (Row o_row : rows) {
            
            List<String> st_data = new ArrayList<String>();
            for (int i_column=0; i_column<i_num_columns; i_column++) {
                if (st_column_types.get(i_column).equals("varchar") || st_column_types.get(i_column).equals("text")) {
                    st_data.add(o_row.getString(i_column));
                } else if (st_column_types.get(i_column).equals("timeuuid")) {
                    st_data.add(o_row.getUUID(i_column).toString());
                } else if (st_column_types.get(i_column).equals("integer")) {
                    st_data.add(String.valueOf(o_row.getInt(i_column)));
                }
                // TODO: Implement other data types
                
            }
            st_results.add(st_data);
           
        }
        
        return st_results;
        
    }
    
    public static String getFieldFromRow(ArrayList st_results, int i_row, String s_fieldname) {
        
        List<String> st_column_names = (List)st_results.get(0);
        
        boolean b_column_found = false;
        
        int i_column_pos = 0;
        
        for (String s_column_name : st_column_names) {
            if (s_column_name.equals(s_fieldname)) {
                b_column_found = true;
                break;
            }
            i_column_pos++;
        }
        
        if (b_column_found == false) {
            return null;
        }
        
        List<String> st_data = (List)st_results.get(i_row);
        
        String s_data = st_data.get(i_column_pos);
        
        return s_data;
    }
    
}

 

Python 2.7

There is currently no driver for Python 3. I asked Datastax and they told me that they are working on a new driver for Python 3.

To work with Datastax’s Python 2.7 driver:

1) Download the driver from http://planetcassandra.org/client-drivers-tools/ or git clone from https://github.com/datastax/python-driver

2) Install the dependencies for the Datastax’s driver

Install python-pip (Installer)

sudo apt-get install python-pip

Install python development tools

sudo apt-get install python-dev

This is required for some of the libraries used by the original Cassandra driver.

Install Cassandra driver required libraries

sudo pip install futures
sudo pip install blist
sudo pip install metrics
sudo pip install scales

Query Cassandra from Python

The problem is the same as with Java: the different data types are hard to deal with.
So I created a function convert_to_string that converts the known data types to String, so that later we only deal with Strings.

In this sample, the results of the query are rendered in XML or in HTML.

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# Use with Python 2.7+

__author__ = 'Carles Mateo'
__blog__ = 'http://blog.carlesmateo.com'

import sys

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

s_row_separator = u"||*||"
s_end_of_row = u"//*//"
s_data = u""

b_error = 0
i_error_code = 0
s_html_output = u""
b_use_keyspace = 1 # By default use keyspace
b_use_user_and_password = 1 # Not implemented yet

def return_success(i_counter, s_data, s_format = 'html'):
    i_error_code = 0
    s_error_description = 'Data returned Ok'

    return_response(i_error_code, s_error_description, i_counter, s_data, s_format)
    return

def return_error(i_error_code, s_error_description, s_format = 'html'):
    i_counter = 0
    s_data = ''

    return_response(i_error_code, s_error_description, i_counter, s_data, s_format)
    return

def return_response(i_error_code, s_error_description, i_counter, s_data, s_format = 'html'):

    if s_format == 'xml':
        print ("Content-Type: text/xml")
        print ("")
        s_html_output = u"<?xml version='1.0' encoding='utf-8' standalone='yes'?>"
        s_html_output = s_html_output + '<response>' \
                                        '<status>' \
                                        '<error_code>' + str(i_error_code) + '</error_code>' \
                                        '<error_description>' + '<![CDATA[' + s_error_description + ']]>' + '</error_description>' \
                                        '<rows_returned>' + str(i_counter) + '</rows_returned>' \
                                        '</status>' \
                                        '<data>' + s_data + '</data>' \
                                        '</response>'
    else:
        print("Content-Type: text/html; charset=utf-8")
        print("")
        s_html_output = str(i_error_code)
        s_html_output = s_html_output + '\n' + s_error_description + '\n'
        s_html_output = s_html_output + str(i_counter) + '\n'
        s_html_output = s_html_output + s_data + '\n'

    print(s_html_output.encode('utf-8'))
    sys.exit()
    return

def convert_to_string(s_input):
    # Convert other data types to string

    s_output = s_input

    try:
        if s_input is not None:

            if isinstance(s_input, unicode):
                # string unicode, do nothing
                return s_output

            if isinstance(s_input, (int, float, bool, set, list, tuple, dict)):
                # Convert to string
                s_output = str(s_input)
                return s_output

            # This is another type, try to convert
            s_output = str(s_input)
            return s_output

        else:
            # is none
            s_output = ""
            return s_output

    except Exception as e:
        # Were unable to convert to str, will return as empty string
        s_output = ""

    return s_output

def convert_to_utf8(s_input):
    return s_input.encode('utf-8')

# ********************
# Start of the program
# ********************

s_format = 'xml'  # how you want this sample program to output

s_cql = 'SELECT * FROM test_table;'
s_cluster = '127.0.0.1'
s_port = "9042" # default port
i_port = int(s_port)

b_use_keyspace = 1
s_keyspace = 'cassandra_tests'
if s_keyspace == '':
    b_use_keyspace = 0

s_user = ''
s_password = ''
if s_user == '' or s_password == '':
    b_use_user_and_password = 0

try:
    cluster = Cluster([s_cluster], i_port)
    session = cluster.connect()
except Exception as e:
    return_error(200, 'Cannot connect to cluster ' + s_cluster + ' on port ' + s_port + '.' + e.message, s_format)

if (b_use_keyspace == 1):
    try:
        session.set_keyspace(s_keyspace)
    except:
        return_error(210, 'Keyspace ' + s_keyspace + ' does not exist', s_format)

try:
    o_results = session.execute_async(s_cql)
except Exception as e:
    return_error(300, 'Error executing query. ' + e.message, s_format)

try:
    rows = o_results.result()
except Exception as e:
    return_error(310, 'Query returned result error. ' + e.message, s_format)

# Query returned values
i_counter = 0
try:
    if rows is not None:
        for row in rows:
            i_counter = i_counter + 1

            if i_counter == 1 and s_format == 'html':
                # first row is row titles
                for key, value in vars(row).iteritems():
                    s_data = s_data + key + s_row_separator

                s_data = s_data + s_end_of_row

            if s_format == 'xml':
                s_data = s_data + '<row>'  # open the row element

            for key, value in vars(row).iteritems():
                # Convert to string numbers or other types
                s_value = convert_to_string(value)
                if s_format == 'xml':
                    s_data = s_data + '<' + key + '>' + '<![CDATA[' + s_value + ']]>' + '</' + key + '>'
                else:
                    s_data = s_data + s_value
                    s_data = s_data + s_row_separator


            if s_format == 'xml':
                s_data = s_data + '</row>'  # close the row element
            else:
                s_data = s_data + s_end_of_row

except Exception as e:
    # No iterable data
    return_success(i_counter, s_data, s_format)

# Just print the data
return_success(i_counter, s_data, s_format)


If you did not create the keyspace as in the samples before, change those lines to:

s_cql = 'CREATE KEYSPACE cassandra_tests WITH REPLICATION = { \'class\': \'SimpleStrategy\', \'replication_factor\': 1 };'
s_cluster = '127.0.0.1'
s_port = "9042" # default port
i_port = int(s_port)

b_use_keyspace = 1
s_keyspace = ''

Run the program to create the Keyspace and you’ll get:

carles@ninja8:~/Desktop/codi/python/test$ ./lunacloud-create.py 
Content-Type: text/xml

<?xml version='1.0' encoding='utf-8' standalone='yes'?><response><status><error_code>0</error_code><error_description><![CDATA[Data returned Ok]]></error_description><rows_returned>0</rows_returned></status><data></data></response>

Then you can create the table simply by setting:

s_cql = 'CREATE TABLE test_table (s_thekey text, s_column1 text, s_column2 text,PRIMARY KEY (s_thekey));'
s_cluster = '127.0.0.1'
s_port = "9042" # default port
i_port = int(s_port)

b_use_keyspace = 1
s_keyspace = 'cassandra_tests'


Cassandra Universal Driver

As mentioned above, if you use a TCP/IP-enabled language that is very new, or very old like ASP or ColdFusion, or you work from the Unix command line, and you want to use it with Cassandra, you can use my solution http://www.cassandradriver.com/.


It is basically a Web Gateway able to speak XML, JSON or CSV alike. It relies on the official Datastax python driver.

It is not as fast as a native driver, but it works pretty well and allows you to split your architecture in interesting ways, like intermediate layers to restrict security even more (for example, the WebServers may query the gateway, which will restrict some permissions, instead of having direct access to the Cassandra Cluster. That can also be used to perform real-time map-reduce operations on the data returned by the Cassandra nodes, freeing the webservers from that task and saving CPU).
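For example, querying the gateway is just an HTTP request from any language; the URL and parameter names below are hypothetical, only to show the concept:

# Python 2.7 sketch: querying a Web Gateway in front of Cassandra
# (hypothetical endpoint and parameters, just to illustrate the idea)
import urllib
import urllib2

s_params = urllib.urlencode({'cql': 'SELECT * FROM carles_test_table LIMIT 10;',
                             'format': 'xml'})
s_response = urllib2.urlopen('http://gateway.example.com/query?' + s_params).read()
print(s_response)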

Tip: If you use Cassandra for Development only, you can limit the amount of memory used by editing the file /etc/cassandra/cassandra-env.sh and hardcoding:

    # limit the memory for development environment
    # --------------------------------------------
    system_memory_in_mb="512"
    system_cpu_cores="1"
    # --------------------------------------------

Just before the line:

# set max heap size based on the following

That way Cassandra will believe your system memory is 512 MB and reserve only 256 MB for its use.

Upgrade your Scalability with NoSql

We’re experiencing another digital divide.

The first one was between people not knowing about IT and those knowing it; now we’re living another one, between the IT guys unable to Scale and those able to Scale well.

A few years ago I was working all the time with Relational Databases, designing cool relational Schemas for amazing projects. I had worked for years with Oracle, Microsoft Sql Server, Informix, Dbase, Trees, Xml, and lately with PostgreSql and MySql.

I was doing a lot of improvements to MySql installations to allow them to Scale and Scale more, to bring more reliability, to improve performance, to allow more sessions… in short, to fit the needs of the businesses in a challenging world that demanded more and more ability to handle more and more users.

Master-Master, Master with secondaries for reads, clusters of memcached or redis to use as cache, database sharding, IP failover, load balancers, additional indexes, InMemory engines, Ramdisks… everything that could help to match an increase in the load volumes.

I used commercial products like Code Futures’ dbShards, and I created my own database sharding solution in order to split the data across several MySql servers, etc…

Artisanal setups and a lot of studying and testing, everything to Scale to the needs of the companies, to handle more and more traffic, more and more users…

And I was proud of my level, since I was able to succeed where few were able.

But now that is not needed anymore.

Basically, the NoSql systems were born to deal with today’s problems.

NoSql servers -keep in mind that the term comprises a lot of different solutions- were born to:

  • Work in cluster
  • Split the load among the cluster
  • Work on cheap commodity servers (or small cloud instances)
  • Resist failures: allow the destruction of some nodes without data loss
  • Work with nodes in datacenters at distant locations

There are many different NoSql Softwares like: Cassandra, Hadoop, MongoDb, Riak, Neo4J, Redis…

And they do auto-sharding of the data, distribute the data across the network to fit the replication factor that was set, support load balancing, and in the case of Cassandra, Scaling horizontally is as easy as adding more nodes to the Cassandra Cluster.

So yes, believe it. That’s why I write this article: so you can improve your projects and save tons of money.

Databases like Cassandra allow you to Scale as easily as adding new nodes. It is a peer-to-peer cluster with no single point of failure. All the nodes know the status of the other nodes and they distribute the load.

You can query the same server all the time, and it will still be splitting the load among the other servers.
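For example, with the official Datastax Python driver you give one or several contact points (the IPs below are made up) and the driver discovers the rest of the ring and balances the load; a minimal sketch:

from cassandra.cluster import Cluster

# Any of these nodes can be used to discover the whole Cluster
o_cluster = Cluster(['10.0.0.1', '10.0.0.2', '10.0.0.3'])
o_session = o_cluster.connect('cassandra_tests')
o_rows = o_session.execute('SELECT * FROM carles_test_table LIMIT 10;')
for o_row in o_rows:
    print(o_row)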

NoSql solutions like Hadoop allow you to create a large filesystem in cluster, with files as big as the whole cluster, but the best quality of HDFS is that it balances the load, and replicates the blocks of data among different servers, so if you lose nodes of the cluster and you have enough replication factor, you will not lose data. I know companies in Barcelona with 500+ TB in HDFS and companies in the States with thousands of nodes.

So, unlike what most people believe, NoSql is not about how the information is stored in the database: Schemaless. (* take a look at Graph NoSql databases for relations in NoSql)

NoSql does not have a Schema in the traditional sense of Relational Databases, but it has aggregation, columns, supercolumns, or documents depending on the solution, and the design has an impact on the performance; but the principal virtue of the NoSql systems is that they were born to work in cluster, to distribute the load, to be resilient to errors and to Scale.

I’ve seen many Startups suffering problems of overloaded MySql databases; none of this would happen with NoSql solutions like Cassandra or MongoDb.

Before, they were scaling the MySql server vertically: adding more Ram, adding more CPU, getting better disks, until it was impossible to upgrade any more. And if sharding was not possible due to joins, the project was in serious trouble.

But with NoSql you can have, instead of one expensive very powerful server, 5 really cheap servers, and it could be faster, cheaper, resilient to errors, with a better uptime. And if you want to Scale, simply add more cheap servers.

The most important part of this article has been said, so you can start to look at NoSql solutions.

As a bonus, I add a list of NoSql solutions and the kind of Data Model that they have:

 

Database name | Type of data model | Extra info | Companies using it
--- | --- | --- | ---
Memcached | Key-Value | Storage is in Memory, so it is used mainly as cache | Companies I’ve worked for: ECManaged, Privalia. Other well known companies: LiveJournal, Wikipedia, Flickr, Bebo, Twitter, Typepad, Yellowbot, Youtube, Digg, WordPress.com, Craigslist, Mixi
Redis | Key-Value | Works in cluster. Can be used in memory or persistent | Companies I’ve worked for: Atrapalo, ECManaged. Other well known companies: Twitter, Instagram, Github, Engine Yard, Craigslist, guardian.co.uk, Blizzard, Digg, Flickr, Stackoverflow, Tweetdeck
Riak | Key-Value | Supports a REST API through HTTP and Protocol Buffers for basic PUT, GET, POST, and DELETE. MapReduce with native Javascript and Erlang. In multi-datacenter replication, one cluster acts as a “primary cluster”. | AT&T, AOL, Ask.com, Best Buy, Boeing, Bump, Braintree, Comcast, DataPipe, Gilt Group, UK National Health Services (NHS), OpenX, Rovio, Symantec, TBS, The Weather Channel, WorkDay, Voxer, Yahoo! Japan, Yandex
BerkeleyDB | Key-Value | |
LevelDB | Key-Value | |
Project Voldemort | Key-Value | | LinkedIn
Google BigTable | Key-Value | |
Amazon DynamoDB | Key-Value | DynamoDB from Amazon, run in their AWS Cloud solution. See info on Wikipedia |
Cassandra | Column-Family | My favourite Db-alike. You can download my CQLSÍ wrapper for PHP :) | NetFlix, Spotify, Facebook (until 2010), Instagram, Rackspace, Rockyou, Zoho, Soundcloud, Hailo, ComCast, Hulu
HBase | Column-Family | Provides BigTable-like, SQL-alike support on the Hadoop core |
Hypertable | Column-Family | |
Amazon SimpleDB | Column-Family | |
MongoDB | Document Databases | Written in C++, JSON-style documents, by default stores to RAM until flush, high performance but dangerous for data integrity. Supports Map-Reduce |
CouchDB | Document Databases | |
OrientDB | Document Databases | |
RavenDB | Document Databases | |
Terrastore | Document Databases (legacy) | |
Infinite Graph | Graph Databases | |
HyperGraph DB | Graph Databases | |
FlockDB | Graph Databases | |
Neo4J | Graph Databases | |
OrientDB | Graph Databases | |

Bonus for PHP Developers: very simple, lightweight key-value store components useful for one-server PHP projects are APC (through its datastore capability) and Cache Lite (part of PEAR).

I can’t miss mentioning Hadoop, a NoSql solution that does not match the Data Storage categories above, because it is a Framework for the distributed processing of large data sets across clusters: a monster, able to do many, many things and to distribute loads across its nodes. The most well-known components are HDFS, the distributed filesystem, and Map-Reduce: a YARN-based system, simple to develop for, for the parallel processing of large data sets across the clusters. All the big companies like Netflix, Amazon, Yahoo, etc… are using Hadoop. It is often used as a synonym of BigData.

Hadoop is a world in itself, with many projects surrounding it, but it is worth it, because it allows incredible possibilities to distribute loads and to Scale.

Hadoop has a single point of failure in the NameNode, which stores the names of the files of the HDFS in RAM, but solutions like MapR have overcome this.

Don’t get me wrong. Relational databases are wonderful, very useful, support transactions and stored procedures, have been tested for years, are focused on consistency, and are very reliable.

They simply don’t allow Scaling according to our current needs, while NoSql opens a wonderful world of easy, nearly infinite, Scaling.

As you see Open Source is ruling the world. :)

Companies are still sleeping and not supporting NoSql. I’m particularly disappointed with Open Source CMSs that are still based on Relational Models and are very hard to Scale. Drupal, WordPress, Joomla… and e-Commerce solutions like Magento, osCommerce… and plugins for the CMSs mentioned (Ubercart, WooCommerce, VirtueMart…) need to be ported to NoSql urgently. (Although some partial support exists in some solutions, it is not fully supported)
That’s why I’ve started to create a very simple Open Source CMS based on NoSql, to help companies and bloggers that can’t Scale their sites any more.