I read somewhere that all I needed was 1GB of RAM. That seems pretty simple, and if you look at the various offerings from DigitalOcean and AWS, 1GB is still an easy tier to find. But after I added sysdig, my system ran for a day and then froze. A reboot failed too. Ultimately dmesg indicated that there was no disk space remaining, and it took a while to get the system back in service before I determined that the culprits were /var/lib/docker and /var/lib/journal. I would think that by now “people” (Docker, CoreOS, Rancher) would know what the ideal server configuration is, and that not recommending 1GB would be a good starting point.

UPDATE: I’m getting some flak from the owner of the Rancher GitHub account for reporting this as an issue. I’ve tried to explain that this is related to the VM configuration, but I’m starting to realize something else: (a) when I deploy bare-metal servers I usually allocate some multiple of 500GB of storage, depending on the price of drives and RAID; (b) when I deploy a virtual host on a VPS I usually start at 20GB per instance. By extension, that means each Docker instance should have 20GB. Given that Docker has a host requirement, that means I need to allocate more storage on the host side of the equation.
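For anyone hitting the same wall, here is a rough sketch of how I would diagnose and reclaim the space. The paths and commands are my assumptions for a typical systemd + Docker host (on most systemd machines the journal actually lives under /var/log/journal), not something specific to the setup above:

```shell
# Check overall usage, then the two usual suspects.
df -h /
sudo du -sh /var/lib/docker /var/log/journal 2>/dev/null

# Reclaim space: shrink the persistent journal to roughly 200MB...
sudo journalctl --vacuum-size=200M
# ...and, on newer Docker releases, prune stopped containers,
# dangling images, and unused networks in one shot.
docker system prune -f
```

To keep the journal bounded permanently rather than vacuuming it by hand, setting `SystemMaxUse=200M` in /etc/systemd/journald.conf should accomplish the same thing.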