
Principal Data Scientist, Director for Data Science, AI, Big Data Technologies. O’Reilly author on distributed computing and machine learning.

Natalino leads the definition, design and implementation of data-driven financial and telecom applications. He has previously served as Enterprise Data Architect at ING in the Netherlands, focusing on fraud prevention/detection, SoC, cybersecurity, customer experience, and core banking processes.

Prior to that, he worked as a senior researcher at Philips Research Laboratories in the Netherlands, on system-on-a-chip architectures, distributed computing, and compilers. All-round technology manager, product developer, and innovator with a 15+ year track record in research, development, and management of distributed architectures, scalable services, and data-driven applications.

Tuesday, April 29, 2014

Hadoop and big data: Why density matters

Dimensioning a Hadoop cluster depends on many factors. The main use case is still centered around batch analytics and queries crunching large files, but other use cases are emerging and becoming more common: think, for instance, of ad-hoc queries, streaming analytics, and in-memory workflows.

Distributed processing, in order to be done efficiently, relies on the following factors:

  • available processing resources (cores, cpus)
  • available storage hierarchy (cache, ram, disk, ethernet)
  • locality of data (dataflow and data scheduling)
  • task mapping and partitioning (allocation of computing resources)

Let's start with a very well-known distributed computing paradigm: Hadoop's map-reduce.

In short, Hadoop takes chunks of data from storage (typically a file block from a local HDD), processes the data while "streaming" through the input file, and then writes the results back to file (hopefully, again, locally). Once the "map" phase is finished, the data is sorted and merged into buckets sharing the same "key", and the process is repeated once more in the "reduce" phase of map-reduce.
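The three phases described above can be sketched in a few lines of plain Python. This is only an illustrative toy (a word count, the canonical map-reduce example), not Hadoop's actual Java API; the function names are mine:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (key, value) pairs -- here, (word, 1) for a word count."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_and_sort(pairs):
    """Shuffle & sort: group all values that share the same key."""
    buckets = defaultdict(list)
    for key, value in pairs:
        buckets[key].append(value)
    return sorted(buckets.items())

def reduce_phase(buckets):
    """Reduce: fold each key's list of values into a single result."""
    return {key: sum(values) for key, values in buckets}

lines = ["big data on hadoop", "hadoop moves code to data"]
counts = reduce_phase(shuffle_and_sort(map_phase(lines)))
```

In a real cluster, each phase runs on many nodes at once and the buckets are spilled to disk and shipped over the network, which is exactly where the locality concerns below come in.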

Hadoop: map - shuffle&sort - reduce

The map-reduce paradigm can be a very efficient way of dealing with a large class of parallel problems.

However, processing resources and data must be kept as close as possible, and this is not always possible or feasible. Hence Hadoop map-reduce is an effective parallelization paradigm, provided that data stays relatively local during shuffle&sort, and provided that enough reducers run in parallel during the reduce phase, which in turn depends on how the keys are crafted/engineered.
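To see why key engineering matters, consider how a record finds its reducer. Hadoop's default partitioner hashes the key modulo the number of reducers; the sketch below mimics that idea in Python (using CRC32 as a stand-in for Java's hashCode, so the result is stable across runs):

```python
import zlib

def partition(key, num_reducers):
    """Hadoop-style partitioner sketch: hash the key, take it modulo
    the number of configured reducers."""
    return zlib.crc32(key.encode()) % num_reducers

# With only a handful of distinct keys, most reducers sit idle no
# matter how many you configure -- a classic key-skew problem.
keys = ["NL", "US", "DE"]
used = {partition(k, 16) for k in keys}
```

Three distinct keys can occupy at most three of the sixteen reducer slots, which is why coarse keys throttle the reduce phase.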

Moreover, this paradigm relies heavily on disk I/O to de-couple the various stages of the computation. Although many classes of problems can be re-coded as map-reduce operations, it is in many cases possible to gain speed and efficiency by reusing the data already in memory and executing more complex DAGs (directed acyclic graphs) as larger atomic dataflow operations.
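The in-memory DAG idea can be illustrated with chained Python generators: several stages composed into one dataflow, with no intermediate files between them (the stage names are illustrative):

```python
def read(records):
    """Stage 1: source."""
    for r in records:
        yield r

def clean(stream):
    """Stage 2: transform, passed along in memory, not via a file."""
    for r in stream:
        yield r.strip().lower()

def dedupe(stream):
    """Stage 3: stateful transform."""
    seen = set()
    for r in stream:
        if r not in seen:
            seen.add(r)
            yield r

# The three stages form a small DAG executed as one atomic in-memory
# dataflow; nothing is materialized to disk between them.
result = list(dedupe(clean(read([" Alpha", "alpha ", "Beta"]))))
```

This is, in spirit, what DAG engines do at cluster scale: fuse stages and keep intermediate data in RAM instead of paying disk I/O between every map and reduce.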

The main idea behind Hadoop is to move processing close to the storage, and to allocate enough memory and cores to balance the throughput provided by the local disks. The ideal Hadoop building block is an efficient computing unit with a tightly integrated processing, storage, and uplink hardware stack.
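That balancing act can be made concrete with a back-of-envelope calculation. All the figures below are illustrative assumptions, not measurements:

```python
# Assumed figures for one node (illustrative only):
disks_per_node = 4
disk_mb_per_s  = 120   # sequential read throughput of one spinning disk
core_mb_per_s  = 60    # data one core can crunch per second, assumed

# Aggregate local disk throughput the node can deliver:
io_throughput = disks_per_node * disk_mb_per_s   # 480 MB/s

# Cores needed so the CPUs neither starve nor bottleneck the disks:
cores_needed = io_throughput / core_mb_per_s
```

Under these assumptions the node is balanced at 8 cores; fewer cores leave the disks waiting, more cores leave silicon idle. The same arithmetic, with measured numbers, is how one sizes the "efficient computing unit" described above.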

Keeping power and space under control

From an IT and infrastructure perspective, the solution to this problem has so far been based on rack or blade servers. In the past years, rack and blade systems have become increasingly efficient in terms of form factor, power, computing, and storage resources.

Server miniaturization and scaling axes by Intel

In our quest for ever better performance, we can wait for better cores, more cores, or more nodes (or all of the above, of course). However, more cores per chip and "brainier" chips are tough to realize, because most of the low-hanging fruit there has already been picked. Instruction-level parallelism (ILP), task-level parallelism, as well as dissipated power per square inch on chip, are no longer scaling well according to Moore's law.

Smaller form factors: the microserver generation

On the other side, smaller form factors with a good hardware stack are becoming more common. What about a four-core Intel i7 chip, with 32GB of RAM and dual SSDs, that fits in the palm of your hand? Imagine these units becoming the new nodes of your Hadoop cluster. You can have tens of those instead of a single rack server. And since the density is so high, aggregate computing resources need not drop much, even if the cores are optimized for low airflow and lower power consumption.

As reported by ZDNet, microservers, because of their small size and the fact that they require less cooling than their traditional counterparts, can also be densely packed together to save physical space in the datacentre. Scaling out capacity to meet demand simply requires adding more microservers. Efficiency is further increased by the fact that microservers typically share the infrastructure controlling networking, power, and cooling, which is built into the server chassis.

HP Moonshot

HP has released figures claiming that 1,600 of its Project Moonshot Calxeda EnergyCore microservers, built around ARM-based SoCs and packed into just half a server rack, were able to carry out a light scale-out application workload that previously took 10 racks of 1U servers, reducing cabling, switching, and peripheral-device complexity.

The result, according to HP, was that carrying out the workload used 89 percent less energy and cost 63 percent less.


Dell PowerEdge

The PowerEdge™ C5220 packs up to 12 microservers in a 3U PowerEdge™ C5000 chassis.

  • Intel® Xeon® E3-1200 or E3-1200v2 processor with 2 or 4 cores, supporting up to 65 W thermal design power (TDP) for the 12-sled microserver
  • Four DIMM slots for up to 32GB DDR3 ECC UDIMM (1333 MT/s or 1600 MT/s), two DIMMs per channel
  • Two 3.5" SATA/SAS drives per server

AMD and Xi3

Up to 28 microSERV3Rs can fit onto one 3U-high, 30-inch deep data server shelf, delivering up to 9X the server density of standard 1U servers.

The microSERV3R data server features a quad-core 3.2GHz processor, 8GB DDR3 RAM, one 1Gb Ethernet port, and four eSATAp ports, plus internal SSD memory of 64GB – 512GB.
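A quick sanity check on the "up to 9X" density claim, using only the figures quoted above:

```python
# 28 microSERV3Rs fit in one 3U shelf; a standard 1U server gives one
# server per rack unit, i.e. 3 servers in the same 3U of rack space.
microservers_per_3u = 28
standard_per_3u     = 3

density_gain = microservers_per_3u / standard_per_3u   # ~9.3x
```

The arithmetic comes out at roughly 9.3x, consistent with the vendor's "up to 9X the server density of standard 1U servers".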

Microserver limitations

Microservers serve well for very specific power/dimension/processing/storage trade-offs. For systems that require heavy single tasks to be executed, more "traditional" high-performance, high-power servers should be used instead.

However, microservers are ideal when you need to distribute a large number of smaller tasks over a massive number of cores. Thanks to advanced power control, it is much easier to bring the power of those nodes down to as little as 10W when they are not in use.


The upshot is servers that, when compared to alternatives, can cost less to run and take up less physical space in the datacentre; both key concerns for businesses with a sizeable server footprint.

Microservers aren't replacing traditional servers, though; because of their dense design and their shared cooling and interconnect resources, they are ideal for loosely coupled distributed systems such as Hadoop, scalable CDN systems, and modern web-hosted applications.

This new server technology poses new and interesting issues in terms of system architecting.

If a traditional full rack now fits in a few 3U rack-mounted bricks, does a rack play the same role that datacenter zones used to?

And what about distributing over multiple datacenters, or over multiple racks? Do we need to redefine the current data-replication strategies in Hadoop HDFS and Cassandra because of the denser architectures provided by microservers?
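To make the replication question concrete, here is a toy sketch of the rack-aware placement policy HDFS uses by default: first replica on the writer's node, second replica on a node in a different rack, third replica on another node of that same remote rack. This is a simplification for illustration; the names and data layout are mine:

```python
import random

def place_replicas(writer_node, nodes_by_rack):
    """Toy sketch of HDFS-style rack-aware replica placement.
    `nodes_by_rack` maps rack id -> list of node names."""
    writer_rack = next(r for r, nodes in nodes_by_rack.items()
                       if writer_node in nodes)
    # Second replica goes off-rack, to survive a whole-rack failure:
    remote_rack = random.choice([r for r in nodes_by_rack if r != writer_rack])
    second = random.choice(nodes_by_rack[remote_rack])
    # Third replica stays on the remote rack, but on a different node:
    third = random.choice([n for n in nodes_by_rack[remote_rack] if n != second])
    return [writer_node, second, third]

racks = {"rack-a": ["a1", "a2"], "rack-b": ["b1", "b2"]}
placement = place_replicas("a1", racks)
```

If a "rack" shrinks to a single 3U microserver chassis sharing one power feed and one uplink, the failure domain this policy was designed around changes shape, which is exactly the open question raised above.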

