In modern computing there are plenty of elegant models, and at times so-called experts wrap their arms and their wallets around them with gusto. The problem is that beneath the outward elegance there is nothing but tension.
Agile started out as a good idea: it defined the contract between the programmer/consultant and the customer/client. The problem is that managers saw an opportunity to sell certification, and then services, to uninitiated executives. It worked for a time, and now there is so much momentum and money involved…
Junior programmers are still junior programmers. No amount of process is going to accelerate their growth.
It’s a great process for finding the root cause of a “system” failure; however, over the years the consultants have eased the rules. In the beginning a failure was a failure. Now the consultants say that an intended failure is OK and does not count against you. In the agile world there is a notion of failing fast. While they might be talking about the development process… it seems to have made its way into production too.
The first description of a reasonable VLAN deployment I saw was at a company with many departments: sales, accounting, customer service, and so on. The advantage is that a customer service person is less likely to hack the CEO, either intentionally or unintentionally. There are a few similar use cases in IT, like isolating the email server and web server in a DMZ. But where this goes to pot is that many networks are loose: one adds clients and nodes here and there. I have not seen a single example where each machine gets its own VLAN, leaving the switches to enforce the complex “allow lists”. That’s because a machine can register its own VLAN affiliation if it’s connected to an untagged port, and this is further complicated by DHCP, DNS and domain names.
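To give a sense of why nobody runs per-machine VLANs, here is a minimal sketch of the allow-list table a switch would have to enforce. The machine names, VLAN IDs, and the `permitted` helper are all hypothetical, just to show how the rule count grows:

```python
from itertools import permutations

# Hypothetical inventory: one VLAN per machine, as imagined above.
machines = {
    "ceo-laptop": 101,
    "sales-01": 102,
    "accounting-01": 103,
    "custsvc-01": 104,
    "mail-server": 201,
    "web-server": 202,  # the DMZ host
}

# Explicit allow list of (source VLAN, destination VLAN) pairs.
# Anything not listed is dropped: default deny.
allowed = {
    (101, 201), (102, 201), (103, 201), (104, 201),  # everyone may reach mail
    (104, 202),                                      # customer service may reach web
}

def permitted(src_vlan: int, dst_vlan: int) -> bool:
    """Switch-side check: is this inter-VLAN flow on the allow list?"""
    return (src_vlan, dst_vlan) in allowed

# The table an admin must audit grows with the square of the host count:
possible_flows = len(list(permutations(machines.values(), 2)))
print(possible_flows)  # 6 machines -> 30 directed flows to reason about
```

Six machines already imply 30 directed flows to reason about; at a few hundred hosts the table is unmanageable, which is part of why real networks stay loose.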
While some VLAN implementations support encryption, it’s just another problematic layer. More importantly, for all this infrastructure there is typically an admin backhaul that links it all together. Oh well.
GitLab takes git and then bolts on a wiki, an issue tracker, CI/CD, gists aka snippets, and so on. I suppose if you are a GitLab operations engineer and you have a dedicated resource… it might be a great tool. But my problem is that I have encountered continuous failures. It’s just complicated. And all these add-ons are designed to make me dependent on their implementation and impossible to move away.
Sadly, a number of programming languages have integrated git as a dependency, and that in itself has caused problems. Golang’s module and package system, for one, has been vulnerable to exploits.
Once you start adding monitors and distributed syslogs with admin backhauls… neck-deep in containers and other services… there are simply too many vectors for attack.
To be clear: if you only use about 50% of your compute capacity, then a deploy or an A/B test can be implemented on the second half of the hardware… but what this means is that you must always run at 50% of operational capacity. Even though these are virtual resources, consider that a car repair shop using only half its bays is earning only half its potential.
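The 50% idea above is essentially a blue/green cutover. Here is a minimal sketch, with a hypothetical fleet and a `deploy` helper that stands in for real provisioning, showing why the idle half has to stay permanently reserved:

```python
# Hypothetical fleet: half serves traffic ("blue"), half sits idle ("green").
fleet = {"blue": ["node1", "node2"], "green": ["node3", "node4"]}
live = "blue"

def deploy(new_version: str) -> str:
    """Push the new version to the idle half, then flip traffic to it."""
    global live
    idle = "green" if live == "blue" else "blue"
    for node in fleet[idle]:
        pass  # push new_version to node (provisioning elided)
    live = idle  # cutover; the old half becomes the rollback path
    return live

deploy("v2")
print(live)  # -> "green": traffic moved, blue held back for rollback
```

The catch is the one the car-bay analogy makes: between deploys, the idle half is paid-for hardware that earns nothing.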
But for this to be effective, the operations side of the system has to be more secure and more strictly partitioned from the product.