Principal Data Scientist, Director for Data Science, AI, and Big Data Technologies. O'Reilly author on distributed computing and machine learning.

Natalino leads the definition, design, and implementation of data-driven financial and telecom applications. He previously served as Enterprise Data Architect at ING in the Netherlands, focusing on fraud prevention/detection, SoC, cybersecurity, customer experience, and core banking processes.

Prior to that, he worked as a senior researcher at Philips Research Laboratories in the Netherlands, on the topics of system-on-a-chip architectures, distributed computing, and compilers. All-round technology manager, product developer, and innovator with a 15+ year track record in the research, development, and management of distributed architectures, scalable services, and data-driven applications.

Sunday, April 28, 2013

Ansible: the new kid on the deploy block


For those familiar with automated deployment strategies, names such as Fabric and Chef surely ring a bell. They all have their pros and cons, but now there's a new contender out there: Ansible.



The major tradeoff in deployment is flexibility versus ease of use. What we really want is to be able to define logical roles and then map those roles onto a set of machines. In a picture, it is like having a graph where we define a set of functions (roles) and push them to a set of nodes.
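As a sketch of this roles-to-nodes mapping, an Ansible inventory groups machines under logical names (the hostnames and group names below are made up for illustration):

```
# inventory.ini -- hypothetical hosts grouped by role
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com
```

A playbook can then target a group name rather than individual machines, which is exactly the "push functions to nodes" picture above.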

This is deployment in a nutshell. However, once you move closer and actually set up a deployment system, a number of issues pop up:

 - upgrade vs full re-deploy

Suppose that you have actually deployed your system. What do you do with incremental deployments? Do you start from a blank slate, or are you capable of incrementally deploying the deltas in your cluster? Or, even more ambitiously, are you able to do both?

 - dependencies

If role A is deployed on machine 1, and role B on machine 2 requires role A, how do you deal with dependencies? Is your deployment strategy able to "propagate" throughout your set of machines and eventually reach a stable configuration?
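Ansible expresses this kind of relationship with role dependencies declared in a role's metadata; a minimal sketch, assuming two hypothetical roles named roleA and roleB:

```yaml
# roles/roleB/meta/main.yml
# Declares that roleB depends on roleA:
# Ansible will apply roleA before running roleB's own tasks.
dependencies:
  - role: roleA
```

Because roles are applied in dependency order, the configuration converges toward the stable state described in the playbook rather than requiring you to hand-sequence machines.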

 - dev/op design time

How much time do you need to set up a deployment system? How much time do you need to push a new feature into the deployment (a new database, a newly installed tool, etc.)?

 - deploy revisioning

How easily can you revert to a previous deployment?

Fabric and Chef:
Once you get into a serious deployment, both Chef and Fabric tend to tackle deployment as just another programming flow. While this gives you quite some flexibility, it also means that there is now something more to debug: namely your deployment scripts, written in Python (Fabric) or Ruby (Chef). Since this code is not captured by continuous development strategies, it also implies that traditional continuous development needs to be extended with continuous deployment and dev/test/acceptance/production strategies.

Ansible:
Ansible takes a simpler approach to deployment: configuration over coding. While Chef and Fabric developers would eventually write all their recipes to exactly fit the deployment, Ansible users simply define a number of configuration entries in YAML. YAML is a great format that appeals to both humans and machines: easily readable by the former and parseable by the latter. Defining features and roles for deployment now looks more like compiling a grocery list than devising an algorithm with routines and methods.
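To give a flavor of those YAML configuration entries, here is a sketch of a playbook (the group, package, and service names are hypothetical):

```yaml
# site.yml -- install and start nginx on the "webservers" group
- hosts: webservers
  become: yes
  tasks:
    - name: install nginx
      apt:
        name: nginx
        state: present
    - name: ensure nginx is running
      service:
        name: nginx
        state: started
```

Note that there is no control flow to debug here: each entry declares a desired state, and Ansible's modules figure out whether anything needs to change.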

This is in essence why I think Ansible has earned a place in the deployment world. It comes with a collection of pre-packaged mini-recipes (modules); all you need to do is bundle them under a role definition and define which nodes need which roles. Then you let Ansible take care of updates, dependencies, and deployment; you just provide YAML configuration lists. Granted, you might lose a bit of flexibility, but it also delivers a working deployment on a multi-node cluster in hours rather than days.
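The "grocery list" of roles-to-nodes can then be sketched as a top-level playbook; the role and group names here are made up:

```yaml
# site.yml -- map roles onto groups of nodes
- hosts: webservers
  roles:
    - common      # e.g. users, ssh keys, monitoring agent
    - webserver

- hosts: databases
  roles:
    - common
    - database
```

Each role is just a directory of task lists and templates, so adding a new feature to the deployment usually means appending one line here and dropping a few YAML files into a new role directory.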