The idea that data might have an effective date is nothing new. Using effective dating on code fragments is more interesting, but when you add zero downtime with complete code and data synchronicity it becomes something completely different; possibly evolutionary.
I had initially thought of using C, C#, Java, Perl, Ruby, Python, Go, Erlang, and Elixir. As I went back and forth, skimming away the many fractal dimensions, with every generation I found myself back where I started.
First of all, there are two major strategies and a potential hybrid.
The first approach is a router with a synchronized clock that routes transactions to blue/green server instances. Blue/green is a typical model and in most cases is implemented in an HA scenario… but the key points are that a) the code is staged in advance; b) the switch is instantaneous; and c) the system is always known to be in a particular state, so transactions are always reproducible (auditable).
The second approach is effectively a hot-plug type deployment. While Erlang and Elixir “can” support the blue/green approach, most BEAM geeks prefer hot-plugging. The only problem with the BEAM release manager is that it deploys one module at a time, so a transaction guarantee (reproducibility) is just not possible, and that is a big deal in most businesses, including gaming.
And then I recently saw a zero downtime solution for Go where the new application assumes the active port numbers as the replacement server takes over from the primary. The differences here are: a) there is no router, but transactions seem to move from the primary to the secondary in a deliberate fashion, allowing transactions in flight to complete rather than cutting over all at once; b) the loading of the replacement might not be triggered by the effective date; and c) there is some latency from startup to actual assumption of duties.
Of course there are other complications when comparing compiled versus scripted code. Scripted code is almost ready to execute once it is assembled, while compiled code requires at least one compile/link step. And then there is the library-management requirement when trying to deploy fragments versus small change sets.
Finally, all of the above is fine when you are connecting to loosely coupled databases through REST or some sort of IPC/RPC… but when there are DB schema revision requirements everything could go belly up, because the two schemas need to coexist. And then there is the librarian and the necessary regression testing.
As for the actual payoff? Who knows. But one thing is certain: with continuous deployment, effective delivery, and zero downtime in your toolbox, you can decide whether to deploy every VCS commit automatically or wait for a librarian to collect, regression-test, and commit.
And I’m working on a Go version now. I see some code for zero-downtime integration, so I’m looking forward to testing that… but the last piece that will make it all possible is some sort of ORM to manage the schema; one that is hot-pluggable in an ACID sort of way.
All of the above, however, requires new discipline.