
October 2016

Volume 31 Number 10

[Universal Windows Platform]

By Andrew Whitechapel

Far more than any other app platform, the Universal Windows Platform (UWP) supports a vast range of background activities. If these were allowed to compete for resources in an uncontrolled manner, it would degrade the foreground experience to an unacceptable level. All concurrent processes compete for system resources—memory, CPU, GPU, disk and network I/O, and so on. The system Resource Manager encapsulates rules for arbitrating this contention, and the two most important mechanisms are memory limits and task priorities.

The promise of the UWP is that a developer can build an app that will run successfully on a wide range of Windows 10 platforms, from a minimalist IoT device, to the full range of mobile and desktop devices, plus Xbox and HoloLens. Resource policy applies to all Windows 10 platforms, and most policy is common across the range—specifically to support the UWP promise of consistency. That said, some aspects of policy do vary, because different platforms support different sets of hardware devices with different capabilities.

So, for example, the memory limits on a Lumia 950 phone are almost identical to those on a HoloLens because these two devices have similar RAM characteristics and other hardware capabilities. Conversely, the Lumia 950 limits are significantly higher than on a Lumia 650, which has far less physical RAM and a lower hardware specification, generally. Pagefile is another factor: Desktop devices have a dynamically sizeable pagefile that’s also often very fast, whereas on all other Windows 10 devices, the pagefile is small, slow and a fixed-size. This is one reason why memory limits are completely removed on desktop, but enforced on all other devices.

In a few well-defined scenarios, memory limits can also vary at different times on the same device, so apps should take advantage of the Windows.System.MemoryManager APIs to discover the limit that’s actually applied at any point in time. This API will always reliably tell the app its current limit and its current usage—and these same values are exactly the values that the Resource Manager uses in its own internal calculations. In the following example, the app pays attention to its memory limit, and before it attempts a memory-intensive operation, it checks to see that it does in fact have enough headroom available for this operation:
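The listing is omitted from this copy of the article; a minimal sketch of such a headroom check, using the Windows.System.MemoryManager properties described above (the method name and the idea of a caller-supplied size estimate are illustrative), might look like this:

```csharp
using Windows.System;

static class MemoryChecks
{
    // Illustrative helper: returns true only if the app has enough headroom
    // below its current limit to attempt an operation of estimatedBytes.
    public static bool HasHeadroomFor(ulong estimatedBytes)
    {
        ulong usage = MemoryManager.AppMemoryUsage;
        ulong limit = MemoryManager.AppMemoryUsageLimit;
        // Keep a safety margin rather than allocating right up to the limit.
        return limit > usage && (limit - usage) > estimatedBytes;
    }
}
```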

It helps to think of memory as just another device capability. That is, it’s common for an app to test the availability of the device features it can use. Is there a compass on this device? Is there a forward-­facing camera? Also, some features are available only in certain app states. For example, if a device has a microphone, it’s almost always available to the app in the foreground, but typically not available to any background task. So it behooves the app to check availability at different times. In the same way, the app should be testing how much memory is available to it at any given time. The app can adapt to this by, for example, selecting different image resolutions, or different data transfer options, or even by completely enabling or disabling certain app features. Documentation for the MemoryManager API is at bit.ly/2bqepDL.

Memory Limits

What happens if an app hits its limit? Contrary to popular belief, in most cases, the Resource Manager doesn’t terminate apps for out-of-memory conditions. Instead, if the app does something that would result in a memory allocation that would exceed its limit, the allocation fails. In some cases, the failure is surfaced to the app (as an OutOfMemoryException in a managed code app, or a null pointer in a native app). If this happens, the app can handle the failure. If not, the app will crash. Consider the following examples. DoSomething is allocating simple byte array memory in an infinite loop that will eventually result in an OutOfMemory­Exception, which the app can handle:
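The original DoSomething listing is not reproduced here; a sketch consistent with the description (the 10MB block size is illustrative) is:

```csharp
using System;
using System.Collections.Generic;

class Allocator
{
    private readonly List<byte[]> arrays = new List<byte[]>();

    // Allocates managed byte arrays until the app's commit limit is hit;
    // the resulting OutOfMemoryException is catchable by the app.
    public void DoSomething()
    {
        try
        {
            while (true)
            {
                arrays.Add(new byte[10 * 1024 * 1024]); // 10MB per iteration
            }
        }
        catch (OutOfMemoryException)
        {
            arrays.Clear(); // recover: drop the references and carry on
        }
    }
}
```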

Conversely, DoAnother is using imaging APIs in an infinite loop that are internally allocating memory on the native heap for graphics data. This allocation is outside the app’s direct control, and when it fails, it will almost certainly not propagate any exception that can be handled to the app and, therefore, the app will simply crash:
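The DoAnother listing is likewise missing; a sketch along the lines described (the bitmap dimensions are illustrative) is:

```csharp
using System.Collections.Generic;
using Windows.UI.Xaml.Media.Imaging;

class BitmapAllocator
{
    private readonly List<WriteableBitmap> bitmaps = new List<WriteableBitmap>();

    // Each bitmap's pixel buffer is allocated on the native heap, outside the
    // app's direct control; when that allocation fails there is typically no
    // catchable exception, and the app simply crashes.
    public void DoAnother()
    {
        while (true)
        {
            bitmaps.Add(new WriteableBitmap(2000, 2000));
        }
    }
}
```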

The scenario is a little contrived, as no app would realistically expect to be able to create an infinite number of bitmaps, but the point is that some allocation failures are easily handled while others are not. You should handle OutOfMemoryExceptions when you can, and examine your app code for scenarios where memory is allocated outside your direct control; police these areas carefully to avoid failures. You’re more likely to be successful handling exceptions for operations that allocate large amounts of memory—attempting to handle OutOfMemoryExceptions for small allocations is usually not worth the added complexity. It’s also worth noting that an app can hit an OutOfMemoryException well below its limit if it’s making very large allocations—and especially in managed code. This can arise as a result of address space fragmentation for your process. For example, the DoSomething method is allocating 10MB blocks, and it will hit OutOfMemoryException sooner than if it were allocating 1MB blocks. Finally, it must be said that the cases where your app can handle an OutOfMemoryException and continue in a meaningful way are rare; in practice, it’s more often used as an opportunity to clean up, notify the user and then fail gracefully.

Using Task Priorities to Resolve Contention

The system arbitrates between competing task types by weighing the relative importance of each user scenario. For example, the system generally assigns a higher priority to the app with which the user is actively engaged, and a lower priority to background activity of which the user might even be completely unaware. Even among background tasks there are different priority levels. For example, VoIP and push notification tasks are typically higher priority than time-triggered tasks.

When the user launches an app, or when a trigger event tries to activate a background task, the Resource Manager checks to see if there are sufficient free resources for this request. If there are, the activation goes ahead. If not, it then examines all running tasks and starts canceling (or in some cases rudely terminating) tasks from the lowest priority upward until it has freed enough resources to satisfy the incoming request.

Prioritization is finely nuanced, but everything falls into one of two broad priority categories, summarized in Figure 1.

Figure 1 The Two Broad Categories of App Task

Category: Critical tasks
Typical examples: Foreground app activations and some important background tasks such as VoIP, background audio playback and any background task invoked directly by a foreground app.
Description: These are effectively always guaranteed to run whenever requested (except in cases of extreme and unexpected system process activity).

Category: Opportunistic tasks
Typical examples: Everything else.
Description: These are only allowed to launch (or to continue to run) when there are sufficient available resources and there's no higher-priority task contending those resources. There are multiple finely grained priority levels within this category.

Soft and Hard Memory Limits

Resource policy limits ensure that no one app can run away with all the memory on the device to the exclusion of other scenarios. However, one of the side effects is that a situation can arise where a task can hit its memory limit even though there might be free memory available in the system.

The Windows 10 Anniversary Update addresses this by relaxing the hard memory limits to soft limits. To best illustrate this, consider the case of extended execution scenarios. In previous releases, when an app is in the foreground it has, say, a 400MB limit (a fictitious value for illustration only), and when it transitions to the background for extended execution, policy considers it to be less important—plus it doesn’t need memory for UI rendering—so its limit is reduced to perhaps 200MB. Resource policy does this to ensure that the user can successfully run another foreground app at the same time. However, in the case where the user doesn’t run another foreground app (other than Start), or runs only a small foreground app, the extended execution app may well hit its memory limit and crash even though there’s free memory available.

So in Windows 10 Anniversary Update, when the app transitions to extended execution in the background, even though its limit is reduced, it’s allowed to use more memory than its limit. In this way, if the system isn’t under memory pressure, the extended execution app is allowed to continue, increasing the likelihood that it can complete its work. If the app does go over its limit, the MemoryManager API will report that its AppMemoryUsageLevel is OverLimit. It’s important to consider that when an app is over-limit, it’s at higher risk of getting terminated if the system comes under memory pressure. The exact behavior varies per platform: Specifically, on Xbox, an over-limit app has two seconds to get itself below its limit or it will be suspended. On all other platforms, the app can continue indefinitely unless and until there’s resource pressure.

The net result of this change is that more tasks will be able to continue in the background more often than before. The only downside is that the model is slightly less predictable: Previously, a task that attempted to exceed its limit would always fail to allocate (and likely crash). Now, the allocation-failure-and-crash behavior doesn’t always follow: The task will often be allowed to exceed its limit without crashing.

The Resource Manager raises the AppMemoryUsageIncreased event when an app’s memory usage increases from any given level to a higher level, and conversely, the AppMemoryUsageDecreased event when it decreases a level. An app can respond to AppMemory­UsageIncreased by checking its level and taking appropriate action to reduce its usage:
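The handler listing is missing from this copy; a sketch of such a handler (ReleaseCaches is a hypothetical app-specific cleanup method) might be:

```csharp
using Windows.System;

partial class MainPage
{
    // Subscribe once, e.g. in the page constructor:
    //   MemoryManager.AppMemoryUsageIncreased += OnAppMemoryUsageIncreased;
    private void OnAppMemoryUsageIncreased(object sender, object e)
    {
        var level = MemoryManager.AppMemoryUsageLevel;
        if (level == AppMemoryUsageLevel.High ||
            level == AppMemoryUsageLevel.OverLimit)
        {
            ReleaseCaches(); // hypothetical app-specific cleanup
        }
    }

    private void ReleaseCaches() { /* drop caches, free buffers, etc. */ }
}
```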

Then, when it has successfully reduced its usage, it can expect to get a further notification that it has fallen to a safer level, via an AppMemoryUsageDecreased event:
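A matching sketch for the decrease notification (RestoreCaches is hypothetical):

```csharp
using Windows.System;

partial class MainPage
{
    private void OnAppMemoryUsageDecreased(object sender, object e)
    {
        // Back at a safer level: the app can resume normal behavior.
        if (MemoryManager.AppMemoryUsageLevel <= AppMemoryUsageLevel.Medium)
        {
            RestoreCaches(); // hypothetical: re-enable features trimmed earlier
        }
    }

    private void RestoreCaches() { /* rebuild caches as needed */ }
}
```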

An app can also sign up for the AppMemoryUsageLimitChanging event, which the Resource Manager raises when it changes an app’s limit. The OverLimit scenario deserves special handling, because of the associated change in priority. An app can listen to the notification event that’s raised when the system changes its limit, so it can immediately take steps to reduce its memory consumption. For this scenario, you should use the old and new limit values passed in as payload of the event, rather than querying the AppMemoryUsageLevel directly:
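A sketch of that handler, using the OldLimit/NewLimit payload of AppMemoryUsageLimitChangingEventArgs rather than the level property (ReleaseCaches is hypothetical):

```csharp
using Windows.System;

partial class MainPage
{
    private void OnAppMemoryUsageLimitChanging(
        object sender, AppMemoryUsageLimitChangingEventArgs e)
    {
        // Use the event payload: by the time AppMemoryUsageLevel is queried,
        // usage may already be over the new, lower limit.
        if (MemoryManager.AppMemoryUsage >= e.NewLimit)
        {
            ReleaseCaches(); // hypothetical app-specific cleanup
        }
    }

    private void ReleaseCaches() { /* drop caches, free buffers, etc. */ }
}
```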

Extended execution is just one of the scenarios where the limit is changed. Another common scenario is where the app calls exter­nal app services—each of these will reduce the calling app’s limit for the duration of the call. It’s not always obvious when an app is calling an app service: For example, if the app uses a middleware library, this might implement some APIs as app services under the covers. Or, if the app calls into system apps, the same might happen; Cortana APIs are a case in point.

ProcessDiagnosticInfo API

Commit usage is the amount of virtual memory the app has used, including both physical memory and memory that has been paged out to the disk-backed pagefile. Working set is the set of memory pages in the app’s virtual address space that’s currently resident in physical memory. For a detailed breakdown of memory terminology, see bit.ly/2b5UwjL. The MemoryManager API exposes both a GetAppMemoryReport and a GetProcessMemoryReport for commit metrics and working-set metrics, respectively. Don’t be misled by the names of the properties—for example, in the AppMemory­Report class, the private commit used by the app is represented by PrivateCommitUsage (which seems obvious), whereas in the ProcessMemoryUsageReport class the same value is represented by PageFileSizeInBytes (which is a lot less obvious). Apps can also use a related API: Windows.System.Diagnostics.ProcessDiagnosticInfo. This provides low-level diagnostic information on a per-process basis, including memory diagnostics, CPU and disk-usage data. This is documented at bit.ly/2b1IokD. There’s some overlap with the MemoryManager API, but there’s additional information in ProcessDiagnosticInfo beyond what’s available in MemoryManager. For example, consider an app that allocates memory, but doesn’t immediately use it:
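The example listing is missing from this copy; a sketch of an allocate-but-don't-touch method (the 10MB block size is illustrative) is:

```csharp
using System.Collections.Generic;

class CommitDemo
{
    private readonly List<byte[]> buffers = new List<byte[]>();

    // Commits 10MB per call but never touches the pages: commit usage grows
    // while the working set stays roughly flat.
    public void ConsumeMemory()
    {
        buffers.Add(new byte[10 * 1024 * 1024]);
    }
}
```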

You could use the ProcessMemoryReport or ProcessMemoryUsageReport to get information about commit and working-set, including private (used only by this app), total (includes private plus shared working set), and peak (the maximum used during the current process’s lifetime so far). For comparison, note that the memory usage reported by Task Manager is the app’s private working-set:
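A sketch of reading those counters via ProcessDiagnosticInfo (the containing class is illustrative):

```csharp
using System.Diagnostics;
using Windows.System.Diagnostics;

static class MemoryReporting
{
    public static void DumpProcessMemory()
    {
        // Snapshot the current process's memory counters.
        var report = ProcessDiagnosticInfo.GetForCurrentProcess()
                                          .MemoryUsage.GetReport();
        Debug.WriteLine($"Commit (PageFileSizeInBytes): {report.PageFileSizeInBytes}");
        Debug.WriteLine($"Working set:      {report.WorkingSetSizeInBytes}");
        Debug.WriteLine($"Peak working set: {report.PeakWorkingSetSizeInBytes}");
    }
}
```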

Each time the app calls its ConsumeMemory method, more commit is allocated, but unless the memory is used, it doesn’t significantly increase the working set. It’s only when the memory is used that the working set increases:
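Assuming ConsumeMemory keeps its allocations in a List&lt;byte[]&gt; field named buffers, a sketch of using that memory (the 4KB page-size assumption is illustrative) is:

```csharp
// Assumes ConsumeMemory stored its allocations in: List<byte[]> buffers;
public void UseMemory()
{
    foreach (byte[] buffer in buffers)
    {
        // One write per (assumed 4KB) page is enough to make it resident,
        // pulling the committed pages into the working set.
        for (int i = 0; i < buffer.Length; i += 4096)
        {
            buffer[i] = 1;
        }
    }
}
```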

Most apps only need to focus on commit (which is what the Resource Manager bases its decisions on), but some more sophisticated apps might be interested in tracking working-set, also. Some apps, notably games and media-intensive apps, rapidly switch from one set of data to the next (think graphics buffers), and the more their data is in physical memory, the more they can avoid UI stuttering and tearing.

Also, you can think of memory as a closed ecosystem: It can be useful to track your working-set just to see how much pressure you’re putting on the system as a whole. Certain system operations—such as creating processes and threads—require physical memory, and if your app’s working-set usage is excessive this can degrade performance system-wide. This is particularly important on the desktop, where policy doesn’t apply commit limits.

GlobalMemoryStatusEx API

From the Windows 10 Anniversary Update, apps also have available to them the Win32 GlobalMemoryStatusEx API. This provides some additional information beyond the WinRT APIs, and while most apps will never need to use it, it has been provided for the benefit of UWP apps that are highly complex and have very finely tuned memory behaviors. To use this API you also need the MEMORYSTATUSEX struct, as shown in Figure 2.

Figure 2 Importing the GlobalMemoryStatusEx Win32 API
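The figure's listing is missing from this copy; a plausible reconstruction of the P/Invoke declaration (the field layout follows the documented Win32 MEMORYSTATUSEX definition; the containing class name is illustrative) is:

```csharp
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
internal struct MEMORYSTATUSEX
{
    internal uint dwLength;              // must be set to the struct size
    internal uint dwMemoryLoad;          // % of physical memory in use
    internal ulong ullTotalPhys;
    internal ulong ullAvailPhys;
    internal ulong ullTotalPageFile;     // current commit limit (phys + pagefile)
    internal ulong ullAvailPageFile;
    internal ulong ullTotalVirtual;
    internal ulong ullAvailVirtual;
    internal ulong ullAvailExtendedVirtual; // reserved, always 0
}

internal static class NativeMethods
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);
}
```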

Then, you can instantiate this struct and pass it to GlobalMemoryStatusEx, which will fill in the struct fields on return:
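Assuming the struct and P/Invoke signature from Figure 2 are declared, the call might look like this:

```csharp
using System.Diagnostics;
using System.Runtime.InteropServices;

static class MemoryStatus
{
    public static void Dump()
    {
        var status = new MEMORYSTATUSEX
        {
            // dwLength must be initialized before the call.
            dwLength = (uint)Marshal.SizeOf<MEMORYSTATUSEX>()
        };
        if (NativeMethods.GlobalMemoryStatusEx(ref status))
        {
            Debug.WriteLine($"Memory load:    {status.dwMemoryLoad}%");
            Debug.WriteLine($"Total physical: {status.ullTotalPhys}");
            Debug.WriteLine($"Avail physical: {status.ullAvailPhys}");
        }
    }
}
```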

Again, don’t be misled by the names of the fields. For example, if you’re interested in the size of the pagefile, don’t just look at ullTotalPageFile, because this actually represents the current maximum amount of commit, which includes both the pagefile and physical memory. So, what most folks understand as the pagefile size is computed by subtracting the ullTotalPhys value from the ullTotalPageFile value, like so:
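Given a filled-in MEMORYSTATUSEX named status, that computation is simply:

```csharp
// What most folks mean by "pagefile size": commit limit minus physical memory.
ulong pageFileSize = status.ullTotalPageFile - status.ullTotalPhys;
```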

Also note that ullTotalPhys is not the total amount of memory physically installed on the device. Rather, it’s the amount of physical memory the OS has available to it at boot, which is always slightly less than the absolute total of physical memory.

Another interesting value returned is dwMemoryLoad, which represents the percentage of physical memory in use system-wide. In some environments it’s important for an app’s memory usage to be mostly in physical memory, to avoid the disk I/O overhead of using the pagefile. This is especially true for games and media apps—and critically important for Xbox and HoloLens apps.

Remember this is a Win32 API so it will return information that doesn’t account for the UWP sandbox and, in particular, it has no knowledge of resource policy. So, for example, the value returned in ullAvailPhys is the amount of physical memory currently available on the system, but this doesn’t mean that this memory is actually available to the app. On all platforms apart from the desktop, it’s likely to be significantly more than the amount of memory the current UWP app will actually be allowed to use, because its commit usage is constrained by policy, regardless of available physical memory.

For most apps, the MemoryManager API gives you all you need, and all the metrics that you can directly and easily influence in your app. ProcessDiagnosticInfo and GlobalMemoryStatusEx include some additional information that you can’t directly influence, but which a more sophisticated app might want to pivot off for logic decisions, for profiling during development, and for telemetry purposes.

Visual Studio Diagnostics Tools

The memory diagnostic tool in Visual Studio 2015 updates its report in real time during debugging. You can turn this on while in a debug session by selecting the Debug menu, and then Show Diagnostic Tools, as shown in Figure 3.


Figure 3 Analyzing Process Memory in the Visual Studio Diagnostic Tools

The live graph in the Process Memory window tracks private commit, which corresponds to the AppMemoryReport.PrivateCommitUsage and the ProcessMemoryUsageReport.PageFileSizeInBytes. It doesn’t include shared memory, so it represents only part of the metric reported in MemoryManager.AppMemoryUsage, for example. Note that if you hover over any point in the graph, you’ll get a tooltip with usage data for that point in time.

You can also use the tool to take snapshots for more detailed comparisons. This is especially useful if you’re trying to track down a suspected memory leak. In the following example, the app has two methods, one that allocates memory (simulating a leak) and the other that’s naively attempting to release that memory:
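The example's listing is missing from this copy; a sketch of the pair of methods described (names and the 10MB size are illustrative) is:

```csharp
using System.Collections.Generic;

class LeakDemo
{
    private readonly List<byte[]> data = new List<byte[]>();

    public void AllocateMemory()
    {
        data.Add(new byte[10 * 1024 * 1024]); // simulated leak: 10MB per call
    }

    // Naive release: dropping the references alone doesn't shrink the managed
    // heap until the garbage collector decides to run.
    public void ReleaseMemory()
    {
        data.Clear();
    }
}
```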

A glance at the memory graph will show that the memory isn’t actually getting released at all. In this example, the simplest fix is to force a garbage collection. Because collection is generational, in scenarios where you have complex object trees (which this example doesn’t), you might need to make two collection passes, and also wait for the collected objects’ finalizers to run to completion. If your app is allocating large objects, you can also set GCLargeObjectHeapCompactionMode to compact the Large Object Heap when a collection is made. The Large Object Heap is used for objects greater than 80KB; it’s rarely collected unless forced; and even when collected, it can leave heap fragmentation. Forcing it to be compacted will increase your app’s chances of allocating large objects later on. Note that the garbage collector generally does a very good job on its own without prompting from the app—you should profile your app carefully before deciding whether you need to force a collection at any time:
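A revised version of such a release method, following the forced-collection steps described above (the data field is from the hypothetical leak example):

```csharp
using System;
using System.Runtime;

// Profile first: the GC usually does the right thing without prompting.
public void ReleaseMemory()
{
    data.Clear();
    // Compact the Large Object Heap on the next blocking collection; 10MB
    // arrays are well over the LOH threshold.
    GCSettings.LargeObjectHeapCompactionMode =
        GCLargeObjectHeapCompactionMode.CompactOnce;
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect(); // second pass, after finalizers have run
}
```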

The example in Figure 3 shows three snapshots, taken before allocating memory, after allocating memory and then after releasing memory (using the updated version of the code). The increase and decrease in memory usage is clear from the graph.

The blue arrows show where the snapshots were taken. The gold arrow shows where a garbage collection was done. The snapshot data is listed in the Memory Usage window below, including details of the heap. The red up arrow in this list indicates where memory usage increased relative to the previous snapshot; conversely the green down arrow shows where it decreased. The delta in this case is 10MB, which matches the allocation done in the code. The Object column lists the total number of live objects on the heap—but the more useful count is the Diff. Both counts are also hyperlinks: for example, if you click the Diff count, it will expand out a detailed breakdown of the increase or decrease in objects allocated by the app at that point in time. From this list, you can select any object to get a more detailed view. For example, select the MainPage in the object window, and this will pull up a breakdown in the Referenced Types window, as shown in Figure 4. The size increase in this example is clearly for the 10MB array.


Figure 4 Examining the Referenced Types in a Memory Snapshot

The Referenced Types view shows you a graph of all types your selected type is referencing. The alternative view is the Paths to Root view, which shows the opposite—the complete graph of types rooting your selected type; that is, all the object references that are keeping the selected object alive.

You can choose to focus on managed memory allocations, native memory allocations or both. To change this, select the Project menu, then the project Properties. On the Debug tab, select the Debugger type (Managed, Native or Mixed), then turn on Heap Profiling in the Memory Usage tool, as shown in Figure 5. In some cases, as in this example, you'll see that even though the app is forcing a garbage collection and cleaning up managed allocations, the number and size of native allocations actually increases. This is a good way to track the full memory usage effects of any operation your app performs, rather than focusing solely on the managed side of things.


Figure 5 Tracking Both Managed and Native Heap Usage

This is turned off by default because profiling the native heap while debugging will significantly slow down the app’s performance. As always, profiling should be done on a range of target devices—and preferably using real hardware rather than emulators, as the characteristics are different. While profiling is useful during development, your app can continue to use the MemoryManager and related APIs during production, for making alternate feature decisions in production and for telemetry. On top of that, because they can be used outside the debugging environment—and without the memory overhead of debugging—they more accurately represent the app’s behavior in real use.

Wrapping Up

Users typically install many apps on their devices, and many apps have both foreground and background components. Resource policy strives to ensure that the limited resources on the device are apportioned thoughtfully, in a way that matches the user’s expectations. Priority is given to activities the user is immediately aware of, such as the foreground app, background audio or incoming VoIP calls. However, some resources are also allocated to less important background tasks, to ensure that, for example, the user’s tiles are updated in a timely manner, e-mail is kept synced in the background, and that app data can be kept refreshed ready for the next time the user launches an app.

Resource policy is consistent across all Windows 10 platforms, although it also allows for variability in device capabilities. In this way, a UWP app can be written to target Windows desktop, mobile, Xbox, HoloLens or IoT while being resilient to device variation. The app platform offers a set of APIs the app can use to track its resource usage, to respond to notifications from the system when interesting resource-related events happen and to tune its behavior accordingly. The app developer can also use debugging tools in Visual Studio to profile his app, and eliminate memory leaks.

Andrew Whitechapel is a program manager in the Microsoft Windows division, responsible for the app execution and resource policy for the Universal Windows Application Platform.

Thanks to the following technical experts for reviewing this article: Mark Livschitz and Jeremy Robinson
Mark Livschitz is a Software Engineer working on the Base and Kernel team for the Microsoft Windows division.

Jeremy Robinson is a Software Engineer working on the Developer Platform for the Microsoft Windows division.

By: Tibor Nagy | Updated: 2011-02-18 | Comments (9) | Related: More > Performance Tuning




Problem

We experience regular slowdowns on our MS SQL Server database. We would like to start the root cause investigation by examining memory bottlenecks. What is your recommendation to uncover memory bottlenecks in SQL Server?

Solution

There are many possible causes of memory-related performance problems on a MS SQL Server instance: the bottleneck can be in virtual or physical memory, and the pressure can come from other applications or from inside SQL Server itself. Fortunately, we have many built-in tools that can be used to track down the root cause.

Performance Monitor

Performance Monitor is part of the Microsoft Management Console; you can find it under Start Menu -> Administrative Tools. First, I would like to emphasize that the values below can vary from system to system, depending on the amount of memory, system volume, load, etc. I suggest saving metrics of the system under a normal working load so you have a reference for the typical values.

As a starting point, review the Memory: Available Bytes performance counter (also exposed as Available KBytes and Available MBytes). A low amount of available memory might indicate external memory pressure; a rule of thumb is to investigate when the value drops below 5% of total memory. If there are memory-related errors, you will have to look for the key memory consumers on the system. They can be identified by using the Process: Working Set performance counter. The total physical memory in use can be approximately calculated by summing the following counters:

  • Process object, Working Set counter for each process
  • Memory object
    • Cache Bytes counter for system working set
    • Pool Nonpaged Bytes counter for size of unpaged pool
    • Available Bytes counter
    • Modified Page List Bytes counter
The Process: Private Bytes counter should be around the size of the working set (Process: Working Set); if Private Bytes is significantly larger, part of the process's memory has been paged out.

Unfortunately these performance counters do not take into account AWE mechanisms. If AWE is enabled, you will need to look at the memory distribution inside SQL Server using DBCC MEMORYSTATUS command or Dynamic Management Views (see below).

You need to find out whether the page file has enough space for the virtual memory. Take a look at the following counters: Memory: Commit Limit, Paging File: %Usage and Paging File: %Usage Peak. You can estimate the amount of memory paged out per process by calculating the difference between the Process: Working Set and Process: Private Bytes counters. A high Paging File: %Usage Peak can indicate a low virtual memory condition; a solution can be to increase the size of your page file. A high Paging File: %Usage is a sign of physical memory overcommitment, so you should also look for potential external physical memory pressure.

The following performance counters on SQL Server: Buffer Manager object can also indicate memory pressure:

  • High number of Checkpoint pages/sec
  • High number of Lazy writes/sec
  • High number of Page reads/sec
  • Low Buffer cache hit ratio
  • Low Page Life Expectancy

For further reading on this topic, check out these tips:


DBCC MEMORYSTATUS command

You can use the DBCC MEMORYSTATUS command to check for any abnormal memory buffer distribution inside SQL Server. The buffer pool uses most of the memory committed by SQL Server. Run the DBCC MEMORYSTATUS command, scroll down to the Buffer Pool section (or Buffer Counts in SQL Server 2005) and look for the Target value. It shows the number of 8KB pages that can be committed without causing paging. A drop in the number of target pages might indicate a response to external physical memory pressure.

If the Committed amount is above Target, continue investigating the largest memory consumers inside SQL Server. When the server is not loaded, Target normally exceeds Committed and the value of the Process: Private Bytes performance counter.

If the Target value is low but the server's Process: Private Bytes is high, you might be facing internal SQL Server memory problems with components that use memory from outside of the buffer pool. Such components include linked servers, COM objects, extended stored procedures, SQL CLR, etc. If the Target value is low but close to the total memory used by SQL Server, then you should check whether your SQL Server received a sufficient amount of memory from the system. You can also check the server memory configuration parameters.

You can compare the Target count against the max server memory value, if it is set. The latter setting limits the maximum memory consumption of the buffer pool, so the Target value cannot exceed it. A low Target count can also indicate problems: if it is less than the min server memory setting, you should suspect external virtual memory pressure.

My last recommendation on the DBCC MEMORYSTATUS output is to check the Stolen Pages count. A high percentage (>75%) of Stolen Pages compared to Target can be a sign of internal memory pressure.

Further reading on Microsoft Support pages:

Dynamic Management Views

You can use the sys.dm_os_memory_clerks dynamic management view (DMV) to get detailed information about memory allocation by the server components in SQL Server 2005 and 2008. Some of the DMVs provide similar data as the DBCC MEMORYSTATUS command, but their output is much more programmer friendly. For example, the following query returns the amount of memory SQL Server has allocated through the AWE mechanism.
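The query itself is missing from this copy of the tip; a query along these lines (the column alias is illustrative) totals AWE allocations across all memory clerks:

```sql
SELECT SUM(awe_allocated_kb) / 1024 AS [AWE allocated, MB]
FROM sys.dm_os_memory_clerks;
```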

You can also check the amount of memory that is consumed from outside of the buffer pool through the multipage allocator.
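A sketch of such a query, summing the multi_pages_kb column of sys.dm_os_memory_clerks (column names apply to SQL Server 2005/2008; later versions renamed these columns):

```sql
SELECT SUM(multi_pages_kb) / 1024 AS [Multipage allocations, MB]
FROM sys.dm_os_memory_clerks;
```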

If you are seeing significant amounts of memory (more than 100-200 MB) allocated through the multipage allocator, check the server configuration and try to identify the components that consume the most memory by using the following query:
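The per-component breakdown query is not included in this copy; a query along these lines groups multipage consumption by clerk type, largest first:

```sql
SELECT type, SUM(multi_pages_kb) AS [Multipage KB]
FROM sys.dm_os_memory_clerks
WHERE multi_pages_kb <> 0
GROUP BY type
ORDER BY SUM(multi_pages_kb) DESC;
```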

In SQL Server 2008, you can query the sys.dm_os_process_memory DMV to retrieve similar data. Look for the columns physical_memory_in_use, large_page_allocations_kb, locked_pages_allocations_kb and memory_utilization_percentage. The process_physical_memory_low = 1 value indicates that the process responds to physical memory low notification from the OS.

Check the main consumers of the buffer pool pages:
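A sketch of such a query, ranking clerk types by their single-page (buffer pool) allocations:

```sql
SELECT TOP 10 type, SUM(single_pages_kb) AS [Buffer pool KB]
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(single_pages_kb) DESC;
```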

In SQL Server 2005 and 2008, an internal clock hand controls the relative size of each cache; it starts moving when the cache is about to reach its maximum size. The external clock hand moves when SQL Server as a whole comes under memory pressure. Information about clock hands can be obtained through the sys.dm_os_memory_cache_clock_hands DMV. Each cache has a separate entry for the internal and the external clock hand. If the rounds_count and removed_all_rounds_count values are increasing, then your server is under memory pressure.

You can get additional information about the caches by joining with the sys.dm_os_memory_cache_counters DMV (note that the page counts are NULL for USERSTORE entries):
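A sketch of such a join; entries_count and entries_in_use_count are present in both SQL Server 2005 and 2008, while the page-count columns differ between versions:

```sql
-- Clock hand movement per cache, joined with cache entry counts.
SELECT ch.name, ch.[type], ch.clock_hand,
       ch.rounds_count, ch.removed_all_rounds_count,
       cc.entries_count, cc.entries_in_use_count
FROM sys.dm_os_memory_cache_clock_hands AS ch
JOIN sys.dm_os_memory_cache_counters AS cc
  ON ch.cache_address = cc.cache_address
WHERE ch.rounds_count > 0
ORDER BY ch.removed_all_rounds_count DESC;
```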


Virtual address space (VAS) consumption can be tracked by using the sys.dm_os_virtual_address_dump DMV. If the largest available region is less than 4 MB, then your system is most likely under VAS pressure. SQL Server 2005 and 2008 actively monitor and respond to VAS pressure.
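As a sketch, the largest free region can be estimated from sys.dm_os_virtual_address_dump (an undocumented view; the region_state value 0x10000 is assumed here to correspond to MEM_FREE):

```sql
-- Largest contiguous free region in the process VAS, in MB.
SELECT MAX(region_size_in_bytes) / 1024 / 1024 AS largest_free_region_mb,
       SUM(region_size_in_bytes) / 1024 / 1024 AS total_free_vas_mb
FROM sys.dm_os_virtual_address_dump
WHERE region_state = 0x10000;  -- assumed MEM_FREE
```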

You can also use the following DMVs for memory troubleshooting both in SQL Server 2005 and 2008:

  • sys.dm_exec_cached_plans
  • sys.dm_exec_query_memory_grants
  • sys.dm_exec_query_resource_semaphores
  • sys.dm_exec_requests
  • sys.dm_exec_sessions
  • sys.dm_os_memory_cache_entries

There are several new DMVs in SQL Server 2008 that make it easier to gather memory diagnostic information. The most useful ones for memory troubleshooting are summarized below:

  • sys.dm_os_memory_brokers provides information about memory allocations using the internal SQL Server memory manager. The information provided can be useful in determining very large memory consumers.
  • sys.dm_os_memory_nodes and sys.dm_os_memory_node_access_stats provide summary information of the memory allocations per memory node and node access statistics grouped by the type of the page. This information can be used instead of running DBCC MEMORYSTATUS to quickly obtain summary memory usage. (sys.dm_os_memory_node_access_stats is populated under dynamic trace flag 842 due to its performance impact.)
  • sys.dm_os_nodes provides information about CPU node configuration for SQL Server. This DMV also reflects software NUMA (soft-NUMA) configuration.
  • sys.dm_os_sys_memory returns the system memory information. The 'Available physical memory is low' value in the system_memory_state_desc column is a sign of external memory pressure that requires further analysis.
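For example, a quick check of the overall system memory state from sys.dm_os_sys_memory might look like this sketch:

```sql
-- System-wide memory state as seen by SQL Server 2008.
SELECT total_physical_memory_kb / 1024     AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```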

Resource Governor

The Resource Governor in SQL Server 2008 Enterprise edition allows you to fine-tune SQL Server memory allocation strategies, but incorrect settings can be a cause of out-of-memory errors. The following DMVs provide information about the Resource Governor feature of SQL Server 2008: sys.dm_resource_governor_configuration, sys.dm_resource_governor_resource_pools, and sys.dm_resource_governor_workload_groups.
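As a sketch, the per-pool memory limits and current consumption can be inspected like this:

```sql
-- Resource Governor pool limits vs. actual memory usage.
SELECT name, min_memory_percent, max_memory_percent,
       used_memory_kb, max_memory_kb, target_memory_kb
FROM sys.dm_resource_governor_resource_pools;
```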

SQL Server ring buffers

Another source of diagnostic memory information is the sys.dm_os_ring_buffers DMV. Each ring buffer keeps a fixed number of the most recent notifications of a given type. You can query the ring buffer event counts using the following code:
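The original code is not shown in this copy; a sketch that counts the records currently held in each ring buffer:

```sql
-- Number of records per ring buffer type.
SELECT ring_buffer_type, COUNT(*) AS record_count
FROM sys.dm_os_ring_buffers
GROUP BY ring_buffer_type
ORDER BY record_count DESC;
```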

Here is a list of ring buffers of interest:

  • RING_BUFFER_SCHEDULER_MONITOR: Stores information about the overall state of the server. The SystemHealth records are created at one-minute intervals.
  • RING_BUFFER_RESOURCE_MONITOR: This ring buffer captures every memory state change by using resource monitor notifications.
  • RING_BUFFER_OOM: This ring buffer contains records indicating out-of-memory conditions.
  • RING_BUFFER_MEMORY_BROKER: This ring buffer contains memory notifications for the Resource Governor resource pool.
  • RING_BUFFER_BUFFER_POOL: This ring buffer contains records of buffer pool failures.

SQL Server Profiler

This tool is part of the SQL Server program suite; you can find it under Start Menu -> {your MS SQL Server version} -> Performance Tools. Additional information can be found in the articles Working with SQL Server Profiler Trace Files and Creating a Trace Template in SQL Server Profiler.

Log File Viewer in SQL Server Management Studio

You can check the SQL Server error log and the Windows application and system logs in one place. For more information about this tool, please read this article.

Next Steps
  • Collect and compare performance counters.
  • Study DBCC MEMORYSTATUS output.
  • Analyze DMV information.
  • Check Resource Governor configuration.


Last Updated: 2011-02-18



About the author
Tibor Nagy is a SQL Server professional in the financial industry with experience in SQL 2000-2012, DB2 and MySQL.