Safe practice for memory overcommit

I had another discussion with people who are passionate about memory overcommit for virtual desktops. My position is that it can be dangerous if you take it too far. Unfortunately, many of the reports I see tout the value of memory overcommit taken too far. So where is just far enough? Let's go through an example (I've generalized it because I don't want to discuss each hypervisor separately)...

Let's say you need 100 servers (192GB RAM each) to host virtual desktops for 7,500 users. These 100 servers reach their maximum capacity (CPU and memory) when each hosts 75 virtual desktop VMs (2.5GB RAM each). For fault tolerance, you go with the N + 10% formula, meaning you buy 10% more servers than you need. That gives you 110 servers in total.
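
To make the sizing concrete, here is a minimal sketch of that arithmetic in plain Python. All of the numbers come straight from the example above; nothing here is hypervisor-specific.

```python
import math

USERS = 7500          # concurrent virtual desktops to host
VMS_PER_SERVER = 75   # density at which a server hits CPU/memory capacity
RAM_PER_VM_GB = 2.5   # RAM assigned to each desktop VM
SERVER_RAM_GB = 192   # physical RAM per server

# Servers needed at full density, then 10% more for fault tolerance (N + 10%)
base_servers = math.ceil(USERS / VMS_PER_SERVER)          # 100
total_servers = base_servers + math.ceil(base_servers / 10)  # 110

print(base_servers, "servers at full density,", total_servers, "with N + 10%")
print(VMS_PER_SERVER * RAM_PER_VM_GB, "GB of", SERVER_RAM_GB,
      "GB committed on a fully loaded server")   # 187.5 of 192
```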

Because I spread the load across 110 servers, my per-server concurrency drops from 75 users to 68. That also reduces my per-server RAM usage from roughly 187GB to 170GB. I paid for that RAM, so I want to use it. In this example, to stay conservative, I set the upper memory limit for the desktops to 2.8GB RAM each and the lower limit to 2.5GB (which is what I determined those users actually need).
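
A quick sketch of the steady-state numbers, again using only the figures from the example (real brokers won't balance desktops perfectly evenly, so treat these as averages):

```python
USERS = 7500
TOTAL_SERVERS = 110
RAM_MIN_GB = 2.5      # lower memory limit -- what the users were sized for
RAM_MAX_GB = 2.8      # upper memory limit per desktop
SERVER_RAM_GB = 192

vms_per_server = USERS / TOTAL_SERVERS            # ~68 desktops per server
print(round(vms_per_server), "desktops per server")
print(round(vms_per_server * RAM_MIN_GB), "GB committed at the 2.5 GB floor")      # ~170
print(round(vms_per_server * RAM_MAX_GB), "GB if every desktop hits the 2.8 GB cap",
      "-- still under", SERVER_RAM_GB, "GB")                                       # ~191
```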

Based on this example, during normal production my desktops are not overcommitting RAM. However, during an outage (planned or not), my servers will need to accommodate additional desktop VMs. If no RAM is free, the desktop VMs become overcommitted, but because they are well distributed across the servers and the environment as a whole, the impact is low and likely to go unnoticed. In addition, the overcommit only happens during an outage, so day-to-day operations continue to run smoothly and deliver a good user experience (at least from a RAM perspective).
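
And the outage case, sketched the same way: 10 hosts are lost, the surviving 100 absorb their desktops, and overcommit only appears if the desktops actually grow toward the 2.8GB cap. This is just the arithmetic, not any particular hypervisor's admission control.

```python
SURVIVING_SERVERS = 110 - 10      # an outage takes out the 10 "spare" hosts
USERS = 7500
RAM_MIN_GB = 2.5
RAM_MAX_GB = 2.8
SERVER_RAM_GB = 192

vms_per_server = USERS / SURVIVING_SERVERS        # back to 75 desktops per server
committed_min = vms_per_server * RAM_MIN_GB       # 187.5 GB -- still fits in physical RAM
committed_max = vms_per_server * RAM_MAX_GB       # ~210 GB -- exceeds 192 GB only at the cap

print(round(committed_min, 1), "GB at the floor,", round(committed_max, 1), "GB at the cap")
print("overcommit ratio at the cap:", round(committed_max / SERVER_RAM_GB, 2))   # ~1.09
```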

What do you use in your deployments? What experiences have you had with this feature?

Daniel

XD Design Handbook
