Monitoring Server Performance
Does your ECM server seem to have performance-related issues? Are requests to your ECM software taking longer than normal? If so, you have a few choices:
- You can always add some extra physical RAM. This may help, but it may not get you to the root of the problem.
- You can add a processor or two and hope for the best, or you can figure out why your ECM server is running so slowly.
Several different things can cause ECM system performance degradation. In this article we will explain how you can use Performance Monitor ("PerfMon"), a diagnostic tool built into Microsoft Windows, to help determine the cause of your ECM server's bottleneck.
Like any diagnostic tool, Performance Monitor is itself a running process, and like any other process it consumes CPU cycles, system memory, and potentially hard disk resources. This means the available system resources reported by Performance Monitor will be slightly lower than when it is not running. Even so, Performance Monitor is accurate enough to help IT staff pinpoint the bottlenecks that are causing performance-related issues.
When running Performance Monitor, we recommend turning off all of the default counters. This will let you add only the counters and attributes discussed below.
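One way to capture these counters unattended is Windows' built-in `typeperf` command-line utility. Below is a minimal sketch that assembles a `typeperf` invocation for the counters discussed in this article; the exact counter paths, instance names (`_Total`), sampling interval, and output file name are assumptions you should adjust for your own server.

```python
# Sketch: build a typeperf command line to log the counters discussed below.
# Counter paths and file names are assumptions; adjust instances as needed.

COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\PhysicalDisk(_Total)\% Disk Time",
    r"\PhysicalDisk(_Total)\Current Disk Queue Length",
    r"\Memory\Cache Bytes",
    r"\Memory\Pool Nonpaged Bytes",
    r"\Memory\Pool Nonpaged Allocs",
    r"\Memory\Pages/sec",
]

def typeperf_command(counters, interval_s=15, samples=5760, out="perf_baseline.csv"):
    """Build the argument list for Windows' typeperf utility.

    interval_s=15 with samples=5760 covers a full 24-hour baseline.
    """
    return ["typeperf", *counters,
            "-si", str(interval_s),   # seconds between samples
            "-sc", str(samples),      # number of samples to collect
            "-f", "CSV",              # output format
            "-o", out]                # output file

cmd = typeperf_command(COUNTERS)
```

On the server itself you would pass `cmd` to `subprocess.run` (or simply run the equivalent command in a console); the resulting CSV can then be analyzed offline.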
% Processor Time:
This counter shows what percentage of the available CPU cycles is being consumed by the system. For example, if the % Processor Time counter reads 40, the CPU is working at 40% capacity. If your processor consistently exceeds 65% capacity, this is a problem area.
% Disk Time:
This counter's average value should be as low as possible. An average value of 70% or above indicates that the hard disk can't keep up. Adding more spindles or faster hard disks may be required.
Current Disk Queue Length:
This counter will show you how many I/O operations are waiting for the hard disk to become available. Our recommendation is that the average disk queue length should be 4 or less. Adding more spindles or faster hard disks may be required.
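Once you have logged Current Disk Queue Length samples, checking them against the rule of thumb above is straightforward. This is an illustrative sketch; the sample values are made up, and the threshold of 4 comes from the recommendation in this article.

```python
# Sketch: check logged Current Disk Queue Length samples against the
# article's rule of thumb that the average should be 4 or less.
# The sample data here is illustrative only.

def avg_queue_ok(samples, threshold=4.0):
    """Return (average, ok) for a series of disk-queue-length samples."""
    avg = sum(samples) / len(samples)
    return avg, avg <= threshold

avg, ok = avg_queue_ok([1, 2, 3, 2, 1, 8, 2])  # one brief spike is fine
```

A short spike does not matter; it is a sustained high average that signals the disk subsystem is undersized.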
Cache Bytes:
This counter monitors the amount of memory being used for the file system cache. Anything over 10 MB would be considered too much; if this is the case, it is recommended that you add more physical memory.
Pool Non-paged Bytes and Pool Non-paged Allocations:
Another way to test for memory leaks is to monitor these two counters. The Pool Non-paged Bytes counter counts pages of memory that can't be moved to virtual memory; these stay in physical RAM. If this value is too high, you'll most likely have to add more physical memory to the system. You can also watch the Pool Non-paged Allocations counter to see how many calls are being made to that portion of memory. If the number of calls does not seem to correspond with the number of memory pages, you likely have a memory leak rather than an insufficient amount of physical RAM.
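The leak test above can be expressed as a rough heuristic: if Pool Non-paged Bytes keeps growing while the allocation count stays roughly flat, outstanding allocations are accumulating without being freed. The growth-ratio threshold and the sample series below are illustrative assumptions, not fixed rules.

```python
# Sketch: rough heuristic for the leak test described above. If nonpaged
# bytes grow much faster than the number of nonpaged allocations over the
# monitored window, flag a possible leak. Threshold and data are illustrative.

def looks_like_leak(bytes_series, alloc_series, growth_ratio=1.5):
    """Return True when bytes grow disproportionately to allocation calls."""
    byte_growth = bytes_series[-1] / bytes_series[0]
    alloc_growth = alloc_series[-1] / alloc_series[0]
    return byte_growth > growth_ratio * alloc_growth

# Bytes triple while the allocation count barely moves: suspicious.
leaky = looks_like_leak([50_000_000, 100_000_000, 150_000_000],
                        [12_000, 12_100, 12_200])
```

When both series grow in step, the system is simply busier, and more physical RAM (not a bug hunt) is the likely answer.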
Pages/sec:
This counter tracks the number of times per second that the system accesses virtual memory rather than physical memory. A value above 20 is considered high, and it may indicate a problem with the way your virtual memory is configured rather than a shortage of physical memory.
We recommend monitoring these counters over a day or more to document and establish a baseline.
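Once a day or more of samples is collected, the baseline check can be summarized per counter against the thresholds given in this article. The thresholds below come from the text; the counter names and sample data in the example are illustrative assumptions.

```python
# Sketch: summarize a day of samples per counter against the thresholds
# discussed in this article. Thresholds come from the text; names and
# sample data are illustrative.

THRESHOLDS = {
    "% Processor Time": 65,           # sustained CPU above this is a problem
    "% Disk Time": 70,                # disk can't keep up above this
    "Current Disk Queue Length": 4,   # average should be 4 or less
    "Cache Bytes": 10 * 1024 * 1024,  # over ~10 MB suggests adding RAM
    "Pages/sec": 20,                  # above 20 points at paging trouble
}

def baseline_report(samples_by_counter):
    """Return {counter: (average, exceeded?)} for each monitored counter."""
    report = {}
    for name, samples in samples_by_counter.items():
        avg = sum(samples) / len(samples)
        report[name] = (avg, avg > THRESHOLDS[name])
    return report

report = baseline_report({
    "% Processor Time": [40, 55, 80, 70],
    "Pages/sec": [5, 10, 12, 9],
})
```

Counters flagged as exceeded in the report are the places to dig deeper before buying hardware.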
Senior Systems Engineer