After a long internal debate and struggle, I decided to finally write about the pagefile. Why the struggle? Because the subject is the Windows pagefile ... a pretty old concept that I think most people understand, not to mention virtual memory is a fairly abstract and boring subject, so I'm sure to lose a few people along the way. So why am I finally writing about it? Because I still get questions about this all the time ... I still see ridiculous advice being handed out (even by our own people) all the time ... and I still see our customers misconfiguring the evil pagefile all the time. And instead of repeating myself over and over and pointing people to the same resources, I thought I'd write my own article that consolidates a lot of the good information out there, while debunking some of the pagefile myths in the process. I'll finish the article with some real-world examples so you can see how I have recommended setting the pagefile in certain scenarios in the past.
Let's start with the basics. Paging is a technique by which the memory-management system stores and retrieves data from secondary storage for use in main memory. The operating system retrieves data from secondary storage in blocks called pages. So the pagefile is essentially a collection of these "pages" stored on disk (the secondary storage I just mentioned). This extension of virtual memory is important because it allows the operating system to take advantage of disk storage for data that won't fit into physical memory, which improves performance and prevents application crashes. Paging also allows the physical address space of a process to be noncontiguous (preventing things like fragmentation and other problems), but that's about as deep into paging and memory segmentation theory as I want to get in this article. If I kept rambling on about what my college professor taught me in my operating systems class, I'm sure I'd lose even more readers. 😉
So now that we have a basic understanding of what the pagefile is (an extension of virtual memory onto disk) and why it's important (speed, performance, etc.), how should we go about sizing this thing? Because that's really what I want to talk about in this article - how to size the Windows pagefile.
Now that I've stoned people for misconfiguring this thing, I'll give everyone a break. Because the reason most people configure the pagefile incorrectly is that the "authorities" out there provide unsuitable advice and don't seem to understand how paging actually works! So it's not your fault ... articles like this one are still out there and still get referenced, and that's probably why I still see pagefiles blindly set to 1 GB or 1.5-3x RAM! It doesn't help that the default setting is a "system managed" pagefile, which is almost never what we want. Perhaps this is even more revealing ... here is a quote from a comment posted on one of my favorite references in the world:
"ME was involved in the selection of default min / max size for system managed pagefiles in Vista, and I am certain that these figures are not just copied from a magazine the minimum of 1 GB was selected based on the actual load commit observed on small machines ( 512 MB of RAM). The maximum RAM * 3 may seem excessive on machines with lots of RAM, but remember that pagefile will only grow if the great demand is real. Furthermore, short of engaging (eg due to a leak in some application) can bring the whole system stopped, and a higher maximum size can make the difference between a system that unresponsive and must be restarted and a system that can be recovered by restarting a process. I must admit that the maximum size scale linearly with the size of the RAM is somewhat arbitrary. Maybe he should have a fixed constant instead. "
Pretty telling when even the guy (or gal) at MSFT owns up to it, right? It also shows that the default settings were designed primarily for desktop machines ... or for back when we bought all our servers around 2000, when 2 GB RAM boxes were the norm. The reality is these defaults are not good and they haven't been for a very long time, especially in the server-based Citrix world we live in, and particularly in 2011, when servers with 100+ GB of memory are very common.
So what does Citrix recommend for the pagefile then? This is where I turn to one of the smartest people in the world, Mark Russinovich. Name sound familiar? For starters, he's the author of one of my favorite computer books of all time, "Windows Internals." But he's also the founder/creator of Winternals and Sysinternals.com (procmon, anyone?). After Microsoft bought up everything he ever wrote and invented, he's now a Technical Fellow at MSFT. Incidentally, we studied the same thing in college - computer engineering. The only difference? He has a PhD from Carnegie Mellon and I went to Tulane ... let's just say there were "other" things to focus on in New Orleans. 😉
So now that everyone knows Mark, let me point you to one of my favorite references on virtual memory in general, and the pagefile specifically (towards the end of the article):
- Pushing the Limits of Windows: Virtual Memory
Please, please take 15 minutes to read this article. After reading through it, you'll have a much better understanding of how virtual memory and the pagefile work. And once you understand how the pagefile is used by the operating system, then you can right-size it! It all comes down to the peak commit charge, or maximum commit. I'll let Mark finish the "Citrix best practices" for sizing pagefiles for me:
To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was reached). Set the pagefile minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to allow for the kind of crash dump you are configured for). If you want to have some room for potentially large commit demands, set the maximum to double that number.
And that's really it! What a novel concept ... do some actual performance/load testing (which seems to be a lost art these days) and set the pagefile appropriately based on the peak commit. So the optimal size of the pagefile actually has very little to do with the amount of memory in the system or some multiple of RAM. It has everything to do with your unique workload and how your applications page!
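If it helps to see the arithmetic, here's a minimal sketch of Mark's formula in Python. The function name and the sample numbers are purely illustrative assumptions on my part - this is not a Citrix tool, just the math from the quote above:

```python
def pagefile_size_gb(peak_commit_gb, ram_gb, dump_floor_gb=0):
    """Sketch of MarkR's guidance: minimum = peak commit minus RAM
    (or a floor big enough for your configured crash dump type if that
    difference is negative), maximum = double the minimum for headroom."""
    minimum = peak_commit_gb - ram_gb
    if minimum <= 0:
        # Peak commit fits in RAM; fall back to whatever the chosen
        # dump type needs (a few MB for a minidump, more for a kernel dump).
        minimum = dump_floor_gb
    maximum = minimum * 2
    return minimum, maximum

# Example: a box with 16 GB RAM whose observed peak commit was 22 GB
print(pagefile_size_gb(peak_commit_gb=22, ram_gb=16))                 # -> (6, 12)

# Example: peak commit below RAM, so size for the dump type instead
print(pagefile_size_gb(peak_commit_gb=3, ram_gb=4, dump_floor_gb=1))  # -> (1, 2)
```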
So why do you still hear some Citrix or Microsoft consultants say to set it to the "size of RAM" plus maybe 1% or 12 MB? That's just a rule of thumb for when we know nothing about the workload or can't determine the peak commit through proper testing ... and making it the size of RAM plus a little extra simply allows a full memory dump to be taken. That's why you might still hear that advice ... it's better than what we said 10 years ago, like "1.5x RAM" or "2-3x RAM", but it's important to remember it's just a rule of thumb ... and the only way to optimally size your pagefile for the best performance (and cost, which we'll discuss in a minute) is to follow Mark's advice and watch the peak commit charge. We even have nifty load-testing tools that let you simulate the load so you can determine the peak commit and properly set the pagefile. Why don't more people do it? I don't know.
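If you want to see where that peak commit number comes from during a load test, here's a small sketch in Python using ctypes to call the Win32 GetPerformanceInfo API (which reports commit values in pages). This is just an illustration - Performance Monitor or Process Explorer will show you the same counters:

```python
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
if ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    def to_gb(pages):
        return pages * pi.PageSize / (1024 ** 3)
    # CommitPeak is the highest commit charge since boot - the number
    # MarkR's sizing formula keys off of.
    print(f"Commit peak:  {to_gb(pi.CommitPeak):.1f} GB")
    print(f"Commit total: {to_gb(pi.CommitTotal):.1f} GB")
    print(f"Commit limit: {to_gb(pi.CommitLimit):.1f} GB")
```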
Now let's talk about the cost aspect and memory dumps a little more. Because sizing the pagefile per Mark's guidance can also save you a lot of money, especially in certain PVS-based XA and XD scenarios and deployments. Let me explain. Let's say you deploy XA 6.5 (64-bit) on bare metal and the server has 256 GB of RAM. If you followed the size-of-RAM or 1.5x RAM logic, you'd have to order these boxes with a sizeable disk just to put the pagefile on them! (Assuming min = max and the file isn't growing on demand.) And do you even need a full or complete memory dump? Maybe, maybe not. Is a minidump or kernel memory dump enough? Maybe, maybe not. Sure, you might need a full dump if you're getting blue screens on your boxes and MSFT support gets engaged and asks for one. But that's a heavy price to pay for a full dump, since I've only seen MSFT request a full dump on maybe 2 or 3 of the projects I've been on over the last 8 years. And I've already started to see customers take that "chance" ... they're ordering boxes with, say, 256 GB of RAM and only a 128 GB SSD! So whether we're going bare metal or carving up these giant boxes with a server virtualization product like XenServer, we still won't have enough disk space for a pagefile the size of RAM so we can take a complete dump. So it seems customers have already taken this calculated risk for some XA workloads that aren't considered mission critical to the business. As a matter of fact, I recently ran into this situation at a customer and we used Storage vMotion to move some resources around and take a full dump. Of course, the dump proved useless to MSFT, but we still found a way to do it. I've also used a relatively new feature called the "Dedicated Dump File" on an XA6 / R2 system to take a full dump "outside" of the typical pagefile used to store virtual memory ... this little feature can be extremely valuable and is yet another reason why the pagefile doesn't need to be the size of physical memory! That's why I want everyone to ask their customers (or themselves, depending on who you are) whether taking a complete memory dump is really necessary or not. Or would it be better to save money on disk and maybe move things around when the time comes (in that rare case, as I said, when you truly need a full dump)?
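For reference, the Dedicated Dump File feature is controlled by values under the CrashControl registry key. Here's a hedged sketch using Python's built-in winreg module - the file path and size are example values only (sized for the 256 GB box above), so please confirm the exact values against Microsoft's documentation for your OS before relying on this:

```python
import winreg

# Sketch only: point the crash dump at a dedicated file on a volume with
# room for it, so the pagefile itself no longer has to be RAM-sized.
key_path = r"SYSTEM\CurrentControlSet\Control\CrashControl"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # DedicatedDumpFile: where Windows writes the crash dump (example path)
    winreg.SetValueEx(key, "DedicatedDumpFile", 0, winreg.REG_SZ,
                      r"D:\dumps\DedicatedDump.sys")
    # DumpFileSize is in MB; 262144 MB ~ 256 GB is just an example value
    winreg.SetValueEx(key, "DumpFileSize", 0, winreg.REG_DWORD, 262144)
# A reboot is required before the new crash dump settings take effect.
```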
Let's do another example ... say you're deploying Win7 desktops via XD and PVS. You're using the target device's hard disk for the PVS write cache (a 5 GB persistent disk) and your Win7 VM spec is 2 vCPUs and 4 GB of RAM. Again, if you follow the old rule of thumb and use the size of RAM, your 5 GB secondary drive would get swallowed by a 4 GB pagefile almost immediately. That would leave only 1 GB for the write cache itself, the event logs, ES data, etc. Not good. So you have two options - make that secondary disk bigger or properly size the pagefile! Making the secondary drive larger is a huge cost hit because it's a per-VM storage hit - that adds up and gets expensive quickly. So here's what I recommend ... the last time I did this for a customer, we saw that the peak commit was slightly below 2 GB, so we set the min and max equal to 2 GB (see the sketch after this paragraph). That left us with 3 GB for everything else, which was a safe bet in my mind with nightly reboots (and the subsequent flushing of the write cache). And since these are desktops, having a pagefile smaller than the amount of RAM was also OK because we really don't care about taking a complete dump. As a baseline, I typically configure a desktop for a minidump since it only requires a few MB ... and I might configure my servers for a minidump or kernel memory dump, as opposed to a complete memory dump. But if I have the disk space, cost isn't a problem, and the XA workload is deemed absolutely mission critical, then I'll configure my boxes for a full dump to be safe. So it depends, but those are the factors it depends on, and I want everyone to start asking these questions so we can be a little smarter about pagefile configuration and probably save some money in the process.
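And just to show the arithmetic behind that Win7/PVS example, here's a tiny Python sketch that checks whether a fixed (min = max) pagefile leaves enough room on the write-cache disk; the "headroom" figure is purely an assumption for illustration:

```python
def wc_disk_check(disk_gb, pagefile_gb, headroom_gb=2):
    """Sketch: does a fixed pagefile leave enough room on the PVS
    write-cache disk for the write cache, event logs, etc.?"""
    remaining = disk_gb - pagefile_gb
    return remaining, remaining >= headroom_gb

# The example above: 5 GB persistent disk, peak commit just under 2 GB
print(wc_disk_check(disk_gb=5, pagefile_gb=2))  # -> (3, True)

# The size-of-RAM rule of thumb on the same disk
print(wc_disk_check(disk_gb=5, pagefile_gb=4))  # -> (1, False)
```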
There are other things I could keep going on about, such as whether having a pagefile is even required, whether setting min = max is a best practice, and whether having multiple pagefiles or splitting the pagefile across multiple disks makes sense. But I think most of the industry agrees on this stuff and we already know the answers (yes almost always, probably not, and only if they are truly separate disks and not just partitions, respectively!).
To wrap this article up, let's quickly recap some of the key points:
- The pagefile is an extension of virtual memory onto disk
- The default "system managed" pagefile settings should not be trusted or used
- Setting the pagefile equal to the size of RAM (plus a small amount of overhead so a dump can be taken) is just a rule of thumb to follow when no testing can be done
- MarkR's advice should be followed to properly size a pagefile (based on proper testing and the peak commit - not a multiple of RAM)
- There are a variety of memory dumps that can be configured ... and a full or complete memory dump may only be necessary in certain situations
- The "Dedicated Dump File" feature can be very useful on newer operating systems
I really hope this article proves useful in your travels. Please drop me a note in the comments section if you have any comments or questions. Thanks for reading.
-Nick
Nick Rintalan, Senior Architect, Citrix Consulting