Performance By Design
A blog devoted to Windows performance, application responsiveness and scalability.

Wednesday, 10 July 2013

Virtual memory management in VMware: Swapping

This is a continuation of a series of blog posts on VMware memory management. The previous post in the series is here.


Swapping

To relieve a serious shortage of machine memory, VMware can reclaim physical memory pages granted to a guest OS at random, a mechanism VMware terms swapping. Swapping is triggered when free machine memory drops below a 4% threshold.
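
To make the threshold mechanism concrete, here is a minimal sketch, in Python, of how such a state classification might work. The "Hard" state and the 4% swapping trigger come from the discussion above; the other state names and cutoff percentages are illustrative assumptions, not documented ESX values.

    # Threshold-driven memory state classification: a sketch, not ESX's
    # actual logic. Only the 4% swapping trigger is taken from the post;
    # the remaining cutoffs are assumed for illustration.
    def memory_state(free_pct):
        """Classify memory pressure from the percentage of machine memory free."""
        if free_pct >= 6.0:      # assumed cutoff
            return "High"        # ample free memory; no reclamation needed
        if free_pct >= 4.0:
            return "Soft"        # reclamation via ballooning preferred
        if free_pct >= 1.0:      # assumed lower bound
            return "Hard"        # below 4% free: swapping is triggered
        return "Low"             # most severe shortage

    for pct in (8.0, 4.5, 3.0, 0.5):
        print(f"{pct:4.1f}% free -> {memory_state(pct)}")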

During the case study, VMware resorted to swapping beginning around 9:10 AM, when the Memory State variable reported a transition to the “Hard” memory state, as shown in Figure 19. Initially, VMware swapped out almost 600 MB of machine memory granted to the four guest machines. Note also that swapping was heavily biased: the ESXAS12B guest machine was barely touched, while at one point 400 MB of machine memory from the ESXAS12E machine was swapped out.
Figure 19. VMware resorted to random page replacement – or swapping – to relieve a critical shortage of machine memory when usage of machine memory exceeded 96%. Swapping was biased – not all guest machines were penalized equally.

Given how infrequently random page replacement policies are implemented, it is surprising to discover that they often perform reasonably well in simulations, although they still perform much worse than stack algorithms that order candidates for page replacement on Least Recently Used (LRU) criteria. Because VMware selects pages for swapping from a guest machine's allotted machine memory at random, it is entirely possible for VMware to choose truly awful candidates: pages in the guest machine's current working set. With random page replacement, some worst-case scenarios are entirely possible. For example, VMware might choose to swap out a frequently referenced page containing operating system kernel code or Page Table entries, pages that the guest OS itself would be among the least likely to select for page replacement.
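
To make the comparison concrete, here is a small, self-contained Python simulation contrasting random page replacement with LRU on a synthetic reference string with a skewed hot/cold access pattern. The workload parameters are invented for illustration; this is a sketch of the general argument, not a model of VMware's implementation or of any particular guest workload.

    import random
    from collections import OrderedDict

    def run_random(refs, frames):
        """Count page faults under random replacement."""
        resident, faults = set(), 0
        for page in refs:
            if page not in resident:
                faults += 1
                if len(resident) >= frames:
                    # Evict a resident page chosen at random.
                    resident.remove(random.choice(tuple(resident)))
                resident.add(page)
        return faults

    def run_lru(refs, frames):
        """Count page faults under Least Recently Used replacement."""
        resident, faults = OrderedDict(), 0
        for page in refs:
            if page in resident:
                resident.move_to_end(page)        # mark as most recently used
            else:
                faults += 1
                if len(resident) >= frames:
                    resident.popitem(last=False)  # evict the least recently used
                resident[page] = None
        return faults

    # 80% of references go to a 20-page "hot" working set (assumed skew).
    random.seed(42)
    hot, cold = range(0, 20), range(20, 200)
    refs = [random.choice(hot) if random.random() < 0.8 else random.choice(cold)
            for _ in range(50_000)]

    frames = 50
    print("random replacement faults:", run_random(refs, frames))
    print("LRU replacement faults:   ", run_lru(refs, frames))

On this skewed workload, random replacement consistently incurs more faults than LRU, because it occasionally evicts hot pages, which is exactly the worst-case behavior described above.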

To see how effective VMware's random page replacement policy was, the rate of pages swapped out was compared to the swap-in rate, as shown in Figure 20. There were two large bursts of swap-out activity, the first taking place at 9:10 AM, when the swap-out rate reached about 8 MB/sec. The swap-in rate never exceeded 1 MB/sec, but a small amount of swap-in activity continued to be necessary over the next 90 minutes of the benchmark run, until the guest machines were shut down and machine memory was no longer over-committed. In clustered VMware environments, the vMotion facility can be invoked automatically to migrate a guest machine from an over-committed ESX host to another machine in the cluster that is not currently experiencing memory contention. This may relieve the immediate over-commitment, but it may also simply shift the problem to another VM host.
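
The comparison in Figure 20 amounts to differencing cumulative swap counters over each measurement interval. Here is a minimal sketch of that rate calculation in Python; the sample values are hypothetical, not the actual case study measurements.

    # Convert sampled cumulative swap counters into MB/sec rates.
    def rates(samples):
        """Yield (time, swap-out MB/sec, swap-in MB/sec) from cumulative samples."""
        for (t0, out0, in0), (t1, out1, in1) in zip(samples, samples[1:]):
            dt = t1 - t0
            yield t1, (out1 - out0) / dt, (in1 - in0) / dt

    # (seconds elapsed, cumulative MB swapped out, cumulative MB swapped in)
    samples = [(0, 0, 0), (60, 480, 6), (120, 520, 30), (180, 524, 48)]
    for t, out_rate, in_rate in rates(samples):
        print(f"t={t:4d}s  swap-out {out_rate:5.2f} MB/s  swap-in {in_rate:4.2f} MB/s")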

As noted in the previous blog entry, the benchmark program took three times longer to execute when there was memory contention from all four active guest machines, compared to running in a standalone guest machine. Delays due to VMware swapping were certainly one of the important factors contributing to elongated program run-times.

Figure 20. Comparing pages swapped out to pages swapped in.
This entry on VMware swapping concludes the presentation of the results of the case study that stressed the virtual memory management facilities of a VMware ESX host machine. Based on an analysis of the memory usage performance data gathered both at the level of the VMware host and internally in the Windows guest machines, it was possible to observe the virtual memory management mechanisms used by VMware in operation very clearly.

With this clearer understanding of VMware memory management in mind, I'll discuss some of the broader implications for performance and capacity planning of large-scale virtualized computing infrastructures in the next (and last) post in this series.
Posted in memory management, VMware
