Richard Bucker

Docker, dockerclient, citadel, fig, multi-node, hipache, etcd, nginx, crypt

Posted at — Oct 20, 2014

It’s only a matter of time before the Docker team closes the loop on the multi-node Docker stack and starts to chase the complete PaaS solution. Sure, in these early days of Docker everything is open source, the various teams are absorbing the code as quickly as it becomes available, and the different framework teams are stitching together as much code as they can. But the one quote that keeps sticking in my head is something like:

Build it yourself
So while I have been testing all of the PaaS frameworks out there, they are still lacking: whether they lag the current releases (Docker < 1.3.0, CoreOS < Alpha, Go < 1.3.3), simply do not work, or only support a limited feature set.

So here’s my intuition… and this is what I would do if it were my money looking for a solution in this space:

I’m starting from the fractal dimension. Docker containers are simply another fractal dimension, one step in or out from virtual machines, mainframes, or J2EE-like enterprise SOA solutions. And just as I need a solid and stable ESX server to run VMware workloads, I also need a stable bare-metal OS, and so I start with CoreOS.

CoreOS provides five key technologies for free and a sixth with a support contract:

  1. locksmith - manages auto-updates on the server side
  2. systemd - the Linux system init
  3. etcd - raft-based replicated key/value store
  4. fleetd - distributed systemd and service manager. It maintains the service policy for each of the services: auto restart, cluster-wide distribution patterns, cron-like config, and pre and post commands that can be used to alter etcd instance data
  5. docker - just the container runtime
  6. console - enterprise-level console monitoring
With Docker 1.3.0 the Docker team has introduced governance, which will eventually end up in CoreOS and turn into a trusted compute model. Once you trust the applications or micro-services running on the machine, you might be able to trust their interconnections.

Fig was acquired by Docker and provides a single-node, multi-service configuration tool. It covers many of the config parameters of the Docker build and run commands (both sides of a volume mapping, port mapping, service linking), and it can scale services on that one node in order to test the linkage of your applications. That makes it ideal for the developer.
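A minimal fig.yml sketch of what that looks like; the service names, image, ports, and paths here are made up for illustration:

```yaml
# Two services on one node: a web app linked to a database.
web:
  build: .           # build the app image from the local Dockerfile
  ports:
    - "8000"         # expose the container port only, so the service can be scaled
  links:
    - db             # injects DB_* environment variables and a hosts entry
  volumes:
    - .:/code        # host path mounted into the container
db:
  image: postgres    # pull a stock image instead of building one
```

`fig up` brings both services up on the one node, and something like `fig scale web=3` runs extra copies of web so the linking can be exercised before anything leaves the developer’s machine.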

Nginx is a nice tool. One killer feature is the soft reload: the ability to reconfigure the proxy without restarting or dropping current connections. Someone in the Node.js community implemented a project called Hipache that is capable of the same; however, where Nginx requires a sidekick or ambassador to capture config changes, Hipache simply monitors a Redis instance in real time. Changes in the Redis db immediately affect the traffic routes. There is also a variation of the Hipache proxy that monitors various etcd keys and then updates the Redis db.
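For reference, Hipache’s route table is just Redis lists: the key is frontend:&lt;hostname&gt;, the first element is an identifier, and every element after that is a backend URL. Here is a minimal Go sketch of registering a route; the hostnames and addresses are made up, and the redigo client is my choice for the example, not something Hipache requires:

```go
package main

import (
	"log"

	"github.com/garyburd/redigo/redis"
)

func main() {
	c, err := redis.Dial("tcp", "127.0.0.1:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Create the frontend and give it an identifier.
	if _, err := c.Do("RPUSH", "frontend:www.example.com", "example-app"); err != nil {
		log.Fatal(err)
	}
	// Register two backends for that frontend.
	if _, err := c.Do("RPUSH", "frontend:www.example.com",
		"http://10.0.0.5:8080", "http://10.0.0.6:8080"); err != nil {
		log.Fatal(err)
	}
}
```

Because Hipache reads its routes straight out of Redis, the new backends take effect immediately with no reload.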

What’s nice here is that there is a Go implementation of Hipache called hipache-go. While it too uses Redis, it is not much of a stretch to replace that code with etcd code, so the route table can be updated directly by the backend services as they are started.
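A hedged sketch of what that glue could look like: watch an assumed /hipache/... prefix in etcd and mirror changes into the Redis schema shown above. The key layout and the naive delete handling are my assumptions, not hipache-go’s actual code; the clients are the 2014-era go-etcd and redigo libraries.

```go
package main

import (
	"log"
	"path"

	"github.com/coreos/go-etcd/etcd"
	"github.com/garyburd/redigo/redis"
)

func main() {
	ec := etcd.NewClient([]string{"http://127.0.0.1:4001"})
	rc, err := redis.Dial("tcp", "127.0.0.1:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()

	// Assumed key layout: /hipache/<hostname>/<backend-id> -> http://ip:port
	updates := make(chan *etcd.Response)
	stop := make(chan bool)
	go func() {
		if _, err := ec.Watch("/hipache", 0, true, updates, stop); err != nil {
			log.Fatal(err)
		}
	}()

	for resp := range updates {
		host := path.Base(path.Dir(resp.Node.Key)) // e.g. www.example.com
		frontend := "frontend:" + host

		switch resp.Action {
		case "set", "create", "update":
			// Append the backend URL to the frontend's route list.
			if _, err := rc.Do("RPUSH", frontend, resp.Node.Value); err != nil {
				log.Println("rpush:", err)
			}
		case "delete", "expire":
			// Naive handling for a sketch: drop the whole frontend and let the
			// surviving backends re-register, rather than pruning one entry.
			if _, err := rc.Do("DEL", frontend); err != nil {
				log.Println("del:", err)
			}
		}
	}
}
```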

A fleetd pre command can be used to tell etcd that a new service is being started, which in turn informs Hipache to create a route to that service. And when the service is stopped, the post command can remove the config from etcd, which in turn removes the active route.
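A sketch of what such a unit might look like, reusing the assumed /hipache/... key layout from the watcher sketch above. The image, ports, and names are invented, and I register in ExecStartPost (once the container is actually up) rather than a pre step, with ExecStopPost doing the cleanup:

```ini
# myapp@.service - a fleet template unit, started as myapp@1, myapp@2, ...
[Unit]
Description=myapp instance %i
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container, then run the service.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 8080:8080 example/myapp
# Announce the backend to etcd (%H expands to the host name).
ExecStartPost=/usr/bin/etcdctl set /hipache/www.example.com/%H http://%H:8080
ExecStop=/usr/bin/docker stop myapp
# Remove the route when the unit is stopped.
ExecStopPost=/usr/bin/etcdctl rm /hipache/www.example.com/%H
Restart=always

[X-Fleet]
Conflicts=myapp@*.service
```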

Finally, there is one additional project that can be used to protect configuration information. The crypt project is a crypto strategy for storing configuration in etcd, encrypted with a public key via a special-purpose CLI tool. The private key would be stored in a private folder on the host OS with permissions limited to a specific user (do not store the private key in the image or container; keep it on a host volume). This approach has a few weak points that still need to be worked out.
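I won’t try to reproduce crypt’s exact CLI from memory, but the underlying pattern is easy to sketch in Go: encrypt the config with an OpenPGP public key and store only the ciphertext in etcd, leaving the secret keyring on the host. The file names, keyring path, and etcd key below are all made up; the libraries are golang.org/x/crypto/openpgp and the 2014-era go-etcd client.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"io/ioutil"
	"log"
	"os"

	"github.com/coreos/go-etcd/etcd"
	"golang.org/x/crypto/openpgp"
)

func main() {
	// The application config that should never land in etcd as plaintext.
	plain, err := ioutil.ReadFile("config.json")
	if err != nil {
		log.Fatal(err)
	}

	// Load the public keyring. The matching secret keyring stays on the host,
	// readable only by a specific user, and is never baked into an image.
	f, err := os.Open("pubring.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	recipients, err := openpgp.ReadKeyRing(f)
	if err != nil {
		log.Fatal(err)
	}

	// Encrypt the config to the public key.
	var buf bytes.Buffer
	w, err := openpgp.Encrypt(&buf, recipients, nil, nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write(plain); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Store only ciphertext (base64 for safe transport) under an etcd key.
	client := etcd.NewClient([]string{"http://127.0.0.1:4001"})
	enc := base64.StdEncoding.EncodeToString(buf.Bytes())
	if _, err := client.Set("/config/myapp", enc, 0); err != nil {
		log.Fatal(err)
	}
}
```

The consuming service would do the reverse with the secret keyring it finds on the host volume.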

And with the enterprise GUI (the console above) I can set policy and monitor updates to CoreOS. This, in turn, will allow me to monitor other aspects of the system as the GUI’s features increase.

One last thing: while some of the tools I mention here are baked in, they are incomplete. That part of the puzzle will be addressed by the Citadel project, which uses the dockerclient project and can deploy multi-node containers with custom schedulers. So any place where these tools fail to deliver, they can be augmented with my own code. Now all I need to do is sprinkle in a little SQLite and maybe some RethinkDB…

There you have it.