Rule number one… know what system resources you need. Rule number two… know which of your decisions could upset the balance of rule number one.
Say you have a database server running at 100% capacity. You create a hardware copy of the resources and then try to deploy replication, but you do not take into account the overhead of the replication process and the demands it places on shared hardware. A counter-example of good design: an IBM location in the 90s ran multiple parallel networks with QoS, so that large file transfers like disk images took place on a different network than the terminal sessions. I have personally experienced network starvation when a sysadmin put active FTP and OLTP traffic on the same physical network.
In recent days I have been struggling with resource requirements because my DEV system has about 8GB of free disk space (roughly 1/625th of the system recommendation). Even that may not be enough, because this machine is expected to receive syslog messages from 15 to 20 very chatty machines. Since the messages are not filtered at the source, the entire set has to be filtered by my system; this one system carries 15-20x the filtering load. And where the filtering would normally be spread across different channels/streams, it is now consolidated all the way at the end of the pipeline.
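To make the disk pressure concrete, here is a back-of-envelope sketch. The host count, message rate, and message size below are hypothetical assumptions, not measurements from my setup; the point is how quickly unfiltered syslog from chatty senders eats a small disk.

```python
# Back-of-envelope estimate: days until free disk space is consumed by
# unfiltered syslog traffic. All rates below are assumed, not measured.

def days_until_full(free_gb: float, hosts: int,
                    msgs_per_sec_per_host: float,
                    avg_msg_bytes: int) -> float:
    """Days until `free_gb` GiB is filled by incoming log messages."""
    bytes_per_day = hosts * msgs_per_sec_per_host * avg_msg_bytes * 86_400
    return (free_gb * 1024**3) / bytes_per_day

# Assumed: 20 chatty hosts, 50 msgs/s each, ~200 bytes per message.
print(round(days_until_full(8, 20, 50, 200), 1))  # → 0.5
```

Under those assumptions the 8GB fills in about half a day, which is why filtering at the source (or at least early in the pipeline) matters so much more than raw disk size.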
sigh… know your design. know your needs. work with the team and assert your needs.