[UPDATE] Sadly, while I was collecting my notes and trying to reproduce the results I was getting from my n1-standard project, I was not able to get my /media/state partition to mount, and I have not been able to determine whether it has anything to do with the FREE f1-micro tier or not. I will try my steps again with a larger system and see if that makes any difference. Keep in mind that the FREE f1-micro is awful for anything other than the simplest tasks anyway.
[UPDATE 2] I was able to get it to work. The debug session started after attempting to build the system about 10 times. Then I started scanning the logs (sudo journalctl -f), but that did not yield any fruit. Finally I looked at the entire log file (sudo journalctl -a) and read it after several redeploys and reboots. I found a strange error message in the log: "Failed parsing user-data: Unrecognized user-data header: coreos:". I rebooted a few more times and did a few Google searches. Still nothing. Then I found an obscure reference in the cloud-config doc/spec that said that "#cloud-config" was the proper header. I had thought #cloud-config was a comment, and so I deleted it before trying to build my instance. I do not mind complaining about this fact: clearly a comment should not be a header. Not even in a YAML file.
I have a dedicated server at Rackspace that I once used to host a number of dedicated apps that have since been decommissioned or are now hosted via 3rd-party SaaS systems, and strangely enough none of these SaaS providers support naked domains.
(A naked domain is a domain where the host part of the name is absent. A regular domain name for a web server might be www.domain.com, and the naked domain would be domain.com. There are some historical reasons why this is the case, but it’s probably a good thing for now.)
Now my dedicated server simply intercepts the naked domains and, with a wildcard, also catches all of the typos and redirects the user to the default server. So if the user entered fred.domain.com, the browser would be redirected to www.domain.com. That server costs me $15 a month and is just generating heat. The redirect is rarely used, and at this point I just need to make it go away.
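For reference, the entire redirect job is only a few lines of nginx config. This is a sketch with illustrative names (domain.com and www.domain.com stand in for the real domains):

```
server {
    listen 80;
    # catch the naked domain and, via the wildcard, any typo'd subdomain
    server_name domain.com *.domain.com;
    # permanent redirect to the canonical host, preserving the request path
    return 301 http://www.domain.com$request_uri;
}
```

Dropped into the host path that gets mounted as sites-enabled, this is all the new server has to do.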
Enter Google’s f1-micro server. It’s an anemic server that is free to operate, and it will do the trick. The best part of this is going to be the use of CoreOS as the host OS. You’ll have to explore CoreOS for yourself, but what interests me is the upgrade process that just works, which means that every version is basically LTS (long term support).
And so we begin…
Assuming you have a GCE (Google Compute Engine) account, these are the steps I used to deploy the project.
** you’ll need a file on your local system: cloud-config.yaml
(Reconstructed here with the required first-line header; the unit bodies are a sketch based on the CoreOS examples of the time, and media-state.mount assumes the ext4-formatted /dev/sdb from the steps below.)

#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/… your key goes here...
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: media-state.mount
      command: start
      content: |
        [Mount]
        What=/dev/sdb
        Where=/media/state
        Type=ext4
- install the GCE SDK and tools
- update the SDK
- set the default project
- gcloud config set project <project-id>
- allocate some disk if needed
- gcutil adddisk --size_gb=10 --zone=us-central1-a pdisk01
- capture the latest CoreOS
- gcutil addimage --description="CoreOS 298.0.0" coreos-v298-0-0 gs://storage.core-os.net/coreos/amd64-usr/alpha/coreos_production_gce.tar.gz
- create an instance
- gcutil addinstance --image=coreos-v298-0-0 --persistent_boot_disk --zone=us-central1-a --machine_type=f1-micro --metadata_from_file=user-data:cloud-config.yaml pcore1
- attach the disk to the instance
- gcutil attachdisk --disk=pdisk01 pcore1
- shell into the system and format the disk
- gcutil ssh pcore1
- sudo mkfs.ext4 -F /dev/sdb
- update the GCE firewall to let port 80 through (by default incoming HTTP is blocked); with the gcutil of that era it is something like: gcutil addfirewall allow-http --allowed=tcp:80 (the rule name is arbitrary)
- sudo reboot
I made a mistake with my cloud-config.yaml. These two steps will help:
- get the fingerprint with this command
- gcutil getinstance pcore1
- update the config
- gcutil setinstancemetadata pcore1 --metadata_from_file=user-data:cloud-config.yaml --fingerprint=". . . fingerprint"
Here is a sample Dockerfile (the FROM, VOLUME, WORKDIR, and CMD lines are filled in as a sketch; any apt-based base image works):

# base image; plain ubuntu is assumed here
FROM ubuntu
RUN apt-get update -q
RUN apt-get install -qy nginx-extras
# nginx must run in the foreground under docker
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Attach volumes.
VOLUME ["/etc/nginx/sites-enabled", "/var/log/nginx"]
# Set working directory.
WORKDIR /etc/nginx
# default command
CMD ["nginx"]
Once the system has rebooted you need to ssh back into it.
- create a folder in your home folder where you’ll store your Dockerfiles.
- mkdir -p ~/Dockerfiles
- cd ~/Dockerfiles
- open a new Dockerfile
- vim Dockerfile
- build the container (note that rbucker is my name; you should read up on the docker registry in order to name your container properly)
- docker build -t=rbucker/nginx .
- run the container. This command does a few things. The ‘-v’ option mounts a host OS path onto a container path (the ‘:ro’ suffix makes a mount read-only; otherwise it is read-write). The ‘-p’ option maps the container’s port to the host’s port. This command runs the container in the foreground, so you’ll have to open a new ssh session and execute ‘docker stop’ in order to shut it down properly. One could also use the ‘-d’ option to run the container in the background.
- docker run -p 80:80 -v /media/state/etc/nginx:/etc/nginx/sites-enabled:ro -v /media/state/var/log/nginx:/var/log/nginx rbucker/nginx
- restarting the container later (note that ‘docker start’ takes a container ID or name, not an image name; ‘docker ps -a’ lists stopped containers)
- docker ps -a
- sudo docker start <container-id>
- you can see if the container is running with
- docker ps
- then load your host’s external IP address into a browser and give it a try. The default nginx page should display.
** CoreOS and Docker would prefer that you restart the container with systemd service commands instead of letting docker auto-restart, although that is possible and a topic for another post.
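A minimal sketch of such a unit, assuming the rbucker/nginx image from above and an illustrative container name (nginx1):

```
# /etc/systemd/system/nginx.service (sketch)
[Unit]
Description=nginx redirect container
After=docker.service
Requires=docker.service

[Service]
# the leading '-' lets the rm fail harmlessly when no old container exists
ExecStartPre=-/usr/bin/docker rm -f nginx1
# systemd supervises the foreground docker process
ExecStart=/usr/bin/docker run --name nginx1 -p 80:80 \
  -v /media/state/etc/nginx:/etc/nginx/sites-enabled:ro \
  -v /media/state/var/log/nginx:/var/log/nginx \
  rbucker/nginx
ExecStop=/usr/bin/docker stop nginx1
Restart=always

[Install]
WantedBy=multi-user.target
```

Started with ‘sudo systemctl start nginx.service’; on CoreOS the same unit could instead be declared in the units: list of cloud-config.yaml.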