Both Proxmox and VMware talk about High Availability (HA) and hot migration from host to host. One way they claim to be
able to do this is by storing the guest VM’s disk on a fast network drive (NAS), in this case over iSCSI. My recent
dive into the Synology iSCSI service has been a disaster.
- OpenBSD refused to connect to my Synology iSCSI service (connection reset by peer; a quick reachability probe follows this list)
- FreeBSD connected from the first server and I installed ZFS, but the second server could not connect.
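The "connection reset by peer" error says something about where the failure happens, so a quick probe helps split plain network problems from iSCSI-layer problems. Here is a minimal sketch, assuming a placeholder portal address; it only tests TCP reachability on the standard iSCSI port (3260), not the iSCSI login itself.

```python
import socket

# Placeholder Synology portal address; 3260 is the standard iSCSI port.
PORTAL = ("192.168.1.50", 3260)


def probe(addr, timeout=5.0):
    """Return True if a plain TCP connection to the portal succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError as exc:
        print(f"portal unreachable: {exc}")
        return False


if __name__ == "__main__":
    print("TCP OK" if probe(PORTAL) else "check that the iSCSI service is enabled in DSM")
```

If the TCP handshake succeeds but the initiator is still reset, the problem sits above TCP: likely candidates are the target's CHAP settings, its initiator/LUN permissions, or whether it allows multiple sessions.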
Synology’s GUI has a warning about needing a filesystem that supports multipath access, and in DSM 6.x it even offered a few
examples like cephfs and vmfs. Unfortunately …
- the google did not produce any meaningful results
- cephfs documentation offers 19 complicated steps to get started
And so I come back to the beginning of my journey. WTF! What am I building that needs that kind of availability?
How many transactions per second am I processing and how many users are there? I know I’m fixated on the idea
of what this configuration brings, but the complexity makes it easy to lose everything, as I’ve experienced many times.
- see the Fossil-scm documentation on scale
- see the SQLite documentation on scale and even large-file performance (store everything in the DB and go file-less; a sketch of that pattern follows this list)
- some of the challenges can go away with replication, blockchain replication, and encryption (still not perfect; every last transaction gets exponentially more expensive)
- some of the challenges can go away with an OS that has short boot times (see Clear Linux from Intel, CoreOS, and a few others that boot in just 5 seconds)
- some of the challenges can go away with discrete bare metal rather than VM farms (break out of the VMware guest layer)
- some of the challenges can go away when you are in the same room as the hardware (Intel management access)
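To make the file-less point concrete, here is a minimal sketch using Python's standard-library sqlite3 module; the database path, table name, and file name are placeholders, not anything from a real project. It stores files as BLOB rows inside a single SQLite database, the pattern the SQLite docs discuss when comparing blob reads to the filesystem.

```python
import sqlite3
from pathlib import Path

# Placeholder database and schema: one table holds every artifact as a BLOB,
# so the whole "filesystem" is a single file you can copy, back up, or replicate.
db = sqlite3.connect("artifacts.db")
db.execute("CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB)")


def put(path: str) -> None:
    """Store a file's bytes in the database, keyed by its base name."""
    p = Path(path)
    with db:  # implicit transaction: commits on success, rolls back on error
        db.execute("INSERT OR REPLACE INTO files VALUES (?, ?)", (p.name, p.read_bytes()))


def get(name: str) -> bytes:
    """Read a stored file back out of the database."""
    row = db.execute("SELECT data FROM files WHERE name = ?", (name,)).fetchone()
    if row is None:
        raise KeyError(name)
    return row[0]


if __name__ == "__main__":
    put("notes.txt")  # placeholder file
    print(len(get("notes.txt")), "bytes round-tripped")
```

The appeal for a small shop is that durability collapses into one problem: copy or replicate one database file, instead of keeping a tree of files and a database consistent with each other across hosts.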