The brand new DPM MP is starting to become something usable. It still completely lacks any serious performance collection for trending and statistics, but at least we have some decent monitors. From a backup system MP I expect evidence of any substantial change in backup volumes and speed, to cite just one example. (Maybe I should work on that.)
But what’s going to hurt you if you implement this MP is high CPU usage on your DPM servers, caused by the logical disk monitoring in the OS MP. If you have implemented DPM you know that every protected resource gets its own disk, and these disks are implemented as mount points at the OS level. It’s not uncommon to have hundreds of such disks on a DPM server. All these disks are discovered as logical disks and monitored the way logical disks deserve:
- availability check every 5 minutes
- space availability check every hour
- avg. disk seconds per transfer check every minute
All these checks are implemented via WMI, so we end up running these WMI queries for hundreds of disks. Guess what? Your CPU is constantly overloaded, and that’s not even counting the performance collection rules.
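To get a feel for the scale, you can count the folder mount points on a DPM server straight from WMI. This is just a quick sketch: it assumes the DPM replica volumes are local fixed disks mounted without a drive letter, which is how DPM sets them up by default.

```powershell
# Count local fixed volumes mounted as folder mount points (no drive letter).
# On a DPM server each protected data source contributes such volumes,
# so this number can easily run into the hundreds.
(Get-WmiObject Win32_Volume -Filter "DriveLetter IS NULL AND DriveType=3").Count
```

Every one of those volumes gets its own availability, free space, and performance checks from the OS MP, each a separate WMI query on its schedule.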
The solution is pretty easy and, IMHO, should have been implemented in the DPM MP, or at least documented in the MP guide. Since these disks are managed by DPM, there’s no need to double-check them: if any problem arises, the DPM MP will take charge of letting us know. Given this, we can simply disable the “Mount Point Discovery Rule” (from the OS MP) for the group “DPM Server Group” (exposed by the DPM MP). And if you want to clean up your console, don’t forget to run Remove-DisabledMonitoringObject from the Command Shell.
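The steps above can be sketched from the Operations Manager Command Shell. The override itself is easiest to create in the console (target the discovery rule at the “DPM Server Group”), but locating the rule and purging the no-longer-discovered objects can be done like this; the display-name match is an assumption, so verify you picked the right rule before overriding it.

```powershell
# Find the mount point discovery rule from the OS MP
# (match on display name; confirm it's the rule you mean before overriding)
Get-Rule | Where-Object { $_.DisplayName -match 'Mount Point Discovery' } |
    Format-Table DisplayName, Enabled

# After the override disabling discovery for the DPM Server Group takes effect,
# remove the now-undiscovered logical disk objects from the console:
Remove-DisabledMonitoringObject
```

Note that Remove-DisabledMonitoringObject only removes objects whose discovery has actually been disabled by an override, so run it after the override is in place.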