Richard Bucker

Bare Metal or Virtualization

Posted at — Sep 15, 2022

Both Lawrence Systems and Level1Linux popped up in my YouTube feed this morning. If there was a reason, only Google knows, and that might be a bit disturbing… my new chicken (Minisforum BB550) arrived and it’s time to install something on it. While I was intending to install OpenBSD, these videos forced me to reconsider my decision.

making your decision should not be about fear

There are so many choices, and those choices might mean running a BSD, Linux, or Windows host, and so on.

Then there was a recent VMware patch that caused a performance problem.

What’s it good for

I have three reasons for using virtualization. They overlap somewhat, but it always comes down to money, whether it’s the cost of reliability or the cost of recovery. It’s always money.

These are the reasons… but they are crap. Here’s why:

remote operations

If you do the normal operations dance then [a] your hardware likely has an admin port, so you can KVM directly into the hardware remotely and install anything you want. This is enterprise-grade hardware, and that is to be expected (see the sketch below). [b] Before deploying the hardware, your team either installed the bare metal OS at the data center or in the lab and then relocated the machine. From that point on, maintenance should be tested in the lab before modifying the remote machines.
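
For what [a] looks like in practice, here is a minimal sketch that asks a machine’s BMC for its power state over the admin port using ipmitool. The BMC address and credentials are assumptions; all it presumes is a board with IPMI-over-LAN enabled.

```python
import subprocess

def power_status(host: str, user: str, password: str) -> str:
    """Query chassis power state over the admin (IPMI) port via ipmitool."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus",          # IPMI-over-LAN interface
         "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Hypothetical BMC address and credentials (assumptions, replace with your own).
print(power_status("10.0.0.50", "admin", "secret"))
```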

Consider the latest Voyager hardware issues and the recovery. There are data centers all over the country. With remote workers, there is no reason not to have someone close by.

multiple OS versions and flavors

This is likely the primary reason for hosting virtual machines. Many organizations cannot decide which OS to use, or maybe there is a compelling reason to host more than one OS. The counter argument is the same reason that people like Microsoft: it’s one platform. It was once cheap and ubiquitous, and Windows operators were plentiful. Also, you’re going to pay a fortune for devops staff who can operate reliably on multiple OSes (at a core/kernel level).

Also, when running in a 10x environment where every minute counts and headcount is thin, it’s impossible to give every system the same care and feeding. Ops staff should understand every aspect of every patch being applied.

CentOS is the freely available, experimental branch of Red Hat Enterprise Linux. While it’s great that Red Hat has made it available this way, there are a number of problems. It’s experimental. The “Stream” version, similar to CoreOS and Clear Linux, is a rolling release that receives continuous updates, meaning that some innovation is a challenge to deploy and some patches may impact the service being deployed.

compartmented security

The security model for virtual machines puts some of the burden on the host OS and the hardware. There are also multiple shim layers between the hardware, the host, and the guest OS. All of this extra code is an additional opportunity for failure and exposure. You’re also running more total code than you would on bare metal. And if you do not trust the components individually, why would you trust them combined?

if you do not trust a vendor, don’t use them

Unrelated to security, Level1Linux said… “use local storage for performance”. That’s not a terrible choice, and of course it depends on so many factors. For example, is the host configured optimally for guests with high disk I/O or RAM needs? How does that work when guests with different service needs are combined? (A rough way to test the storage side is sketched below.)
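
To put rough numbers behind the local-storage claim, here is a quick probe of synchronous write latency; run it once against a local disk and once against a network-backed path and compare. It is a sketch, not a real benchmark like fio, and the file path is an assumption.

```python
import os
import time

def probe(path: str, size: int = 4096, rounds: int = 100) -> float:
    """Average write+fsync latency in seconds for small synchronous writes."""
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    for _ in range(rounds):
        os.write(fd, buf)
        os.fsync(fd)   # push the write through to the device, not just the cache
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return elapsed / rounds

# Point this at the storage under test; the path here is an assumption.
print(f"avg write+fsync latency: {probe('/tmp/io-probe.bin') * 1000:.3f} ms")
```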

Continuous delivery

This was a pretty good idea. It started off as chroot and jail and then became a series of scripts and code to completely box in a service. Later the layers were made robust enough to host a complete OS. After a short period of container attacks, it was reiterated that containers were meant for services. Then followed continuous delivery (CD). The idea was that the service could be deployed over and over… everything was self contained. Nothing the service did not need was in the package. (A sketch of the original chroot idea follows.)
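
For flavor, the original “box in a service” idea fits in a few lines. This is a minimal chroot sketch, not how modern container runtimes work (those add namespaces, cgroups, and layered filesystems); the jail path is an assumption and the call requires root on a Unix host.

```python
import os

# Assumption: /srv/jail was populated with everything the service needs
# (binaries, libraries, config) and nothing it does not need.
NEW_ROOT = "/srv/jail"

os.chroot(NEW_ROOT)   # reparent this process's filesystem root (requires root)
os.chdir("/")         # step inside so relative paths resolve under the new root

# From here on, this process only sees the files packaged under NEW_ROOT.
```

Everything the service needs travels with the jail directory, which is exactly the self-contained property CD depends on.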

The same is still true. Just the service. The idea of running a VM host, then a Docker or container host, then some orchestration layer like Kubernetes or Docker Swarm is a mess. An average person cannot reason about the complete stack. And then there are so many more layers to disaster recovery.

Conclusion

VMs are interesting in the lab, but in production they’re just generating heat. As soon as I power up my BB550, it’ll get a fresh copy of OpenBSD 7.1.