Yesterday morning there was a fiber cut that affected everyone in Weston, Florida. As I understand it, this included every internet and cable provider in the city. If there really is only one trunk into Weston, that would make it a very bad place for businesses that depend on reliable connectivity.
Anyway, this prevented me from using my usual development machines, because they are located in Weston and the databases they connect to are at Amazon. Google’s OnHub does not have a feature that would let me bridge my entire network to a phone hotspot, nor would I want to, given how much bandwidth my kids’ YouTube habit consumes.
I managed to move my development environment and get back to work. Here is how I did it:
- Put my phone in hotspot mode.
- Connected my desktop, a Chromebox, to the hotspot. Since it was already connected to my local network via Ethernet, I was able to talk to both networks.
- Logged into DigitalOcean and created a CoreOS instance.
- Logged into the instance.
- Created a key with `ssh-keygen`.
- Gave the public key to GitHub.
- Cloned my current project.
- Added my personal credentials to `~/.ssh`.
- Tested the hello-world compile/run procedure.
- Asked the sysadmin to add my new IP to the database server’s auth list.
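The droplet-side steps above can be sketched as a short shell session. The repo URL and script names here are placeholders, not the actual project:

```shell
# Bootstrap sketch for a fresh droplet. The repo URL below is a
# placeholder (assumption), not the real project.
set -e

# Scratch directory for this sketch; on a real droplet use ~/.ssh.
KEYDIR="$(mktemp -d)"

# Create a key and hand the public half to GitHub
# (Settings -> SSH and GPG keys):
ssh-keygen -t ed25519 -N "" -f "$KEYDIR/id_ed25519" -C "droplet-dev"
cat "$KEYDIR/id_ed25519.pub"

# With the key registered on GitHub, clone and smoke-test
# (commented out here: it needs network and the registered key):
# git clone git@github.com:example/helloworld.git
# cd helloworld && ./build.sh && ./run.sh
```

Once the sysadmin whitelists the droplet’s IP, the database connections work exactly as they did from the office.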
And then I continued as if everything were normal. The whole process took about 15 minutes. I could almost make a case for shutting development down at night if there were some cost savings, and that might permit me to have a bigger machine during the day.
Here are some of the facts:
- CoreOS’ host partition is essentially immutable.
- git and many other basic tools are available on the host. These are the tools that are generally immutable, and the great news is that CoreOS keeps them safe and up to date. In effect, the CoreOS team does all of the research I would otherwise have to do if I were running the IT department, and startups are way understaffed for that.
- My build script has many layers. First of all, I use CoreOS’ rkt-builder, a Debian sid-based image meant for building rkt itself; I’m reusing that toolchain to build my project. Inside my project there is a build script that launches rkt, and once that container is running it launches the project’s own build script, which creates the binary and an associated ACI file. The build also shares a host volume so that the compiled targets can be returned to the host.
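A minimal sketch of that layering might look like the following. The rkt-builder image name follows the upstream convention, but the script names, mount paths, and the Go build command are my illustrative guesses, not the actual project scripts:

```shell
# build.sh -- outer layer, runs on the CoreOS host.
# Paths, script names, and build commands are illustrative.
set -e
SRC="$(pwd)"

# Inner layer: runs inside the rkt-builder container and
# produces the binary (and, in the real project, an ACI).
cat > "$SRC/build-inside.sh" <<'EOF'
#!/bin/sh
set -e
cd /opt/src
go build -o helloworld .            # compile the target
# acbuild steps would wrap it into helloworld.aci here
EOF
chmod +x "$SRC/build-inside.sh"

# Launch the builder, sharing the host volume so the compiled
# targets land back on the host:
if command -v rkt >/dev/null 2>&1; then
  rkt run \
    --volume src,kind=host,source="$SRC" \
    coreos.com/rkt/builder \
    --mount volume=src,target=/opt/src \
    --exec /opt/src/build-inside.sh
else
  echo "rkt not installed; sketch only"
fi
```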
- There is a separate run script that launches the executable inside a rkt container.
- There is a separate build script for a wrapper project that compiles multiple targets, combines them into one container, and can run them all at once.
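The run side can be sketched similarly; the ACI filename is an assumption carried over from the build description:

```shell
# run.sh -- launch the built image in a rkt container.
# The ACI filename is an assumption, not the real artifact name.
set -e
ACI="helloworld.aci"

if command -v rkt >/dev/null 2>&1; then
  # --insecure-options=image skips signature verification,
  # which is needed for a locally built, unsigned ACI.
  rkt run --insecure-options=image "./$ACI"
  # The wrapper project's run step could pass several ACIs to
  # one `rkt run`, so they share a pod and start together.
else
  echo "rkt not installed; would run ./$ACI"
fi
```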
- The bottom line is that I do nothing except manage keys in order to get my environment operational. This means anyone taking over the project will not have to do anything special in their environment. (One of the things I always hated about assuming someone else’s project was the net effect on my environment. With this structure the net effect is ZERO.)
The next part of this process is going to be spawning the build remotely instead of doing it locally. That way I will not need a huge local machine in order to compile and run my project. In fact, I think I can make a case for some sort of golang sandbox, making an IDE out of the Go slide server.