This monitoring script helps you catch queue buildups and mysterious performance issues by tracking processes and other performance indicators. It can run every hour and update the dashboard with fresh information.
Note: This targets hMailServer, but it can be adapted to any custom mail solution by changing variables such as process names and paths.
The Challenge: Understanding hMailServer Health
Before diving into the solution, let me explain what makes monitoring hMailServer challenging:
- Queue Management: Email queues can build up silently, causing delivery delays
- Process Monitoring: hMailServer processes can crash or consume excessive memory
- Log Analysis: Critical information is buried in verbose log files
- Remote Monitoring: Most email servers run on dedicated machines requiring remote access
- Real-time Visibility: Traditional monitoring tools don't provide email-specific insights
My goal was to create a system that could run remotely, collect comprehensive metrics, and present them in a format that both technical staff and management could understand.
Part 1: The Health Monitor Script
The core of the system is hMailServer-HealthMonitor.ps1, a PowerShell script that performs deep analysis of hMailServer instances. Here's what it monitors:
Queue Analysis
The script starts by examining the email queue, which is often the first indicator of problems:
function Analyze-QueueDirectory {
    Write-Log "Analyzing queue directory: $hMailDataPath" -Console
    try {
        if ($Credential) {
            $TempDrive = New-PSDrive -Name "QueueDrive" -PSProvider FileSystem -Root $hMailDataPath -Credential $Credential -ErrorAction Stop
        }
        $QueueFiles = Get-ChildItem -Path $hMailDataPath -Filter "*.eml" -Recurse -ErrorAction Stop
        $QueueCount = $QueueFiles.Count
        $HealthReport.QueueHealth = @{
            TotalMessages = $QueueCount
            TotalSize     = if ($QueueFiles) { ($QueueFiles | Measure-Object -Property Length -Sum).Sum } else { 0 }
        }
    } catch {
        Write-Log "Queue analysis failed: $_" -Level "ERROR" -Console
    }
}
The script counts .eml files in the queue directory and calculates the total size. I implemented configurable thresholds: 100 messages triggers a warning, 500 triggers a critical alert. These numbers work well for our environment, but they're easily adjustable.
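The threshold logic itself is simple; here's a minimal sketch (not the script's actual implementation) of how a queue count maps to an alert level, using the same two parameters the script exposes on its command line:

```powershell
# Sketch only: map a queue count to an alert level using the two
# configurable thresholds described above (defaults match the article).
function Get-QueueAlertLevel {
    param(
        [int]$QueueCount,
        [int]$QueueThreshold    = 100,  # warning level
        [int]$CriticalThreshold = 500   # critical level
    )
    if ($QueueCount -ge $CriticalThreshold)  { return "CRITICAL" }
    elseif ($QueueCount -ge $QueueThreshold) { return "WARNING" }
    else                                     { return "OK" }
}
```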
Log File Analysis
One of the most valuable features is intelligent log parsing. Rather than analyzing every log file (which could be hundreds), the script focuses on the two most recent files:
# Get latest 2 log files
$LogFiles = Get-ChildItem -Path $hMailLogPath -Filter "*.log" -ErrorAction Stop |
Sort-Object LastWriteTime -Descending | Select-Object -First 2
$LogAnalysis = @{
FilesAnalyzed = $LogFiles.Count
InboundEmails = 0
OutboundEmails = 0
Errors = 0
TopSenders = @{}
TopRecipients = @{}
}
The parsing logic handles both traditional mail servers and relay configurations. For relay servers, I had to adjust the regex patterns to capture the right events:
# Parse inbound/outbound emails
if ($Line -match "RECEIVED|Message received|accepted for delivery|MAIL FROM") {
$LogAnalysis.InboundEmails++
# Extract sender
if ($Line -match "from\s+([^\s,]+)") {
$Sender = $matches[1]
if ($LogAnalysis.TopSenders.ContainsKey($Sender)) {
$LogAnalysis.TopSenders[$Sender]++
} else {
$LogAnalysis.TopSenders[$Sender] = 1
}
}
}
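The TopSenders hashtable built above later feeds the report. Reducing it to a top-N list is a short pipeline; this sketch is illustrative rather than the script's exact code:

```powershell
# Illustrative: pick the five busiest senders from the tally built above.
$TopFive = $LogAnalysis.TopSenders.GetEnumerator() |
    Sort-Object -Property Value -Descending |
    Select-Object -First 5
foreach ($Entry in $TopFive) {
    Write-Log "Top sender: $($Entry.Key) ($($Entry.Value) messages)"
}
```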
Process Monitoring
Monitoring the hMailServer process itself proved tricky. Remote process monitoring has limitations, so I implemented a fallback strategy:
# Try Get-Process first
try {
$hMailProcesses = Get-Process -ComputerName $ServerName -Name "hMailServer" -ErrorAction Stop
if ($hMailProcesses) {
$ProcessAnalysis.DetectionMethod = "Get-Process"
$ProcessAnalysis.ProcessCount = $hMailProcesses.Count
foreach ($Process in $hMailProcesses) {
$MemoryMB = [math]::Round($Process.WorkingSet / 1MB, 2)
$ProcessAnalysis.TotalMemoryMB += $MemoryMB
}
}
} catch {
Write-Log "Get-Process failed, trying pslist..." -Level "WARN" -Console
}
If Get-Process fails (which happens with certain security configurations), the script falls back to pslist from Sysinternals:
if ($pslistPath) {
Write-Log "Running pslist \\$ServerName hmailserver command..." -Console
$pslistOutput = & $pslistPath "\\$ServerName" hMailServer 2>&1
if ($LASTEXITCODE -eq 0 -and $pslistOutput) {
$processLines = $pslistOutput | Where-Object { $_ -match "hMailServer\s+\d+" }
foreach ($line in $processLines) {
if ($line -match "hMailServer\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+([\d:\.]+)") {
$ProcessId = $matches[1]
$MemoryMB = [math]::Round([int]$matches[5] / 1024, 2)
$CPUTime = $matches[6]
# Store process details...
}
}
}
}
Performance Metrics
The script also collects system-level performance data from the standard WMI classes via Get-CimInstance:
$CPUParams = @{ ComputerName = $ServerName; ErrorAction = 'Stop' }
if ($Credential) { $CPUParams.Credential = $Credential }
# CPU
$CPUInfo = Get-CimInstance -ClassName Win32_Processor @CPUParams
$PerformanceData.CPU = @{
Name = $CPUInfo.Name
Cores = $CPUInfo.NumberOfCores
LoadPercentage = if ($CPUInfo.LoadPercentage) { $CPUInfo.LoadPercentage } else { 0 }
}
# Memory
$MemoryInfo = Get-CimInstance -ClassName Win32_OperatingSystem @CPUParams
$TotalGB = [math]::Round($MemoryInfo.TotalVisibleMemorySize / 1MB, 2)
$FreeGB = [math]::Round($MemoryInfo.FreePhysicalMemory / 1MB, 2)
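Those two values are all that's needed to derive a used-memory percentage for the report. A quick sketch of that continuation (the figures here are made-up examples, not real measurements):

```powershell
# Illustrative continuation: derive used memory and a percentage
# from the TotalGB/FreeGB values computed above.
$TotalGB = 15.87; $FreeGB = 4.21          # example values only
$UsedGB  = [math]::Round($TotalGB - $FreeGB, 2)
$UsedPct = [math]::Round(($UsedGB / $TotalGB) * 100, 1)
Write-Log "Memory: $UsedGB GB / $TotalGB GB used ($UsedPct%)"
```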
Structured Logging
Everything gets logged to a daily log file with structured data that the web generator can parse:
function Write-Log {
param([string]$Message, [string]$Level = "INFO", [switch]$Console)
$Timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$LogMessage = "[$Timestamp] [$Level] $Message"
if ($Console) {
switch ($Level) {
"ERROR" { Write-Host $LogMessage -ForegroundColor Red }
"WARN" { Write-Host $LogMessage -ForegroundColor Yellow }
"SUCCESS" { Write-Host $LogMessage -ForegroundColor Green }
default { Write-Host $LogMessage -ForegroundColor White }
}
}
Add-Content -Path $LogFile -Value $LogMessage -Force
}
The script logs specific metrics in a format that's both human-readable and machine-parseable:
Write-Log "Queue Messages: $QueueCount" -Level "INFO"
Write-Log "Inbound Emails: $($LogAnalysis.InboundEmails)" -Level "INFO"
Write-Log "Outbound Emails: $($LogAnalysis.OutboundEmails)" -Level "INFO"
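Given the Write-Log format above, those calls land in the daily log as lines like the following (timestamps and counts are illustrative):

```
[2024-01-15 09:00:01] [INFO] Queue Messages: 12
[2024-01-15 09:00:01] [INFO] Inbound Emails: 348
[2024-01-15 09:00:01] [INFO] Outbound Emails: 295
```

This fixed shape is what makes the dashboard generator's regex matching in Part 2 reliable.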
Part 2: The Web Dashboard Generator
The second script, Report-Generator.ps1, transforms the health monitoring logs into a professional web dashboard. This separation of concerns allows the monitoring script to run on a schedule while the web generator can update the dashboard independently.
Log Parsing Engine
The heart of the web generator is its parsing engine, which extracts structured data from the monitoring logs:
function Parse-HealthLog {
param([string]$LogFilePath)
$HealthData = @{
ServerName = "Unknown"
ScanTime = Get-Date
OverallStatus = "HEALTHY"
QueueMessages = 0
QueueSizeMB = 0
InboundEmails = 0
OutboundEmails = 0
LogErrors = 0
ProcessCount = 0
MemoryUsage = 0
CPUTime = "Unknown"
ThreadCount = 0
TotalAlerts = 0
RawAlerts = @()
}
The parser uses regex patterns to extract specific metrics from log lines:
foreach ($Line in $LogContent) {
switch -Regex ($Line) {
"Starting health monitoring for (.+)" {
$HealthData.ServerName = $matches[1]
}
"Queue Messages: (\d+)" {
$HealthData.QueueMessages = [int]$matches[1]
}
"Inbound Emails: (\d+)" {
$HealthData.InboundEmails = [int]$matches[1]
}
"pslist found - PID: (\d+), Memory: ([\d\.]+) MB, CPU Time: ([^,]+)" {
$HealthData.ProcessPID = [int]$matches[1]
$HealthData.MemoryUsage = [decimal]$matches[2]
$HealthData.CPUTime = $matches[3]
}
}
}
Status Determination Logic
The script includes intelligent status determination based on multiple factors:
# Determine overall status based on collected data.
# Hard failures are checked first so a down process is never
# masked by a lesser warning condition.
if ($HealthData.LogErrors -gt 0 -or $HealthData.ProcessCount -eq 0) {
    $HealthData.OverallStatus = "CRITICAL"
} elseif ($HealthData.HasWarnings -or $HealthData.QueueMessages -gt 100) {
    $HealthData.OverallStatus = "WARNING"
} else {
    $HealthData.OverallStatus = "HEALTHY"
}
Modern Web Interface
The generated HTML uses modern CSS with a clean, professional design:
body {
    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
    background: #f8fafc;
    color: #1e293b;
    line-height: 1.5;
    padding: 20px;
}
.status-card {
    background: white;
    border-radius: 12px;
    padding: 24px;
    margin-bottom: 24px;
    box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
    text-align: center;
    border-left: 4px solid $StatusColor;
}
The dashboard uses a responsive grid layout that works on desktop and mobile:
.metrics-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 16px;
margin-bottom: 24px;
}
Dynamic Status Indicators
Each metric includes color-coded status indicators generated by PowerShell functions:
function Get-QueueIcon {
param($count)
if ($count -eq 0) { return '<span class="status-dot green"></span>' }
elseif ($count -lt 100) { return '<span class="status-dot yellow"></span>' }
else { return '<span class="status-dot red"></span>' }
}
function Get-MemoryIcon {
param($usage)
if ($usage -lt 100) { return '<span class="status-dot green"></span>' }
elseif ($usage -lt 200) { return '<span class="status-dot yellow"></span>' }
else { return '<span class="status-dot red"></span>' }
}
Auto-Refresh Capability
The generated HTML includes automatic refresh functionality:
<meta http-equiv="refresh" content="$RefreshInterval">
The default is 5 minutes (the interval is expressed in seconds, so a value of 300), but it's configurable. The footer shows when the page was generated and the refresh interval.
Implementation and Deployment
The health monitoring script is designed to run on a schedule. Here's how I typically deploy it:
# Basic usage
.\hMailServer-HealthMonitor.ps1 -ServerName "bear-mailsrv1"

# Custom thresholds
.\hMailServer-HealthMonitor.ps1 -ServerName "bear-mailsrv1" `
    -QueueThreshold 50 -CriticalThreshold 200
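To get the hourly cadence mentioned at the top, a scheduled task works well. Here's a sketch using the built-in ScheduledTasks cmdlets; the script path is an assumption, so adjust it to wherever you deploy the script:

```powershell
# Deployment sketch: run the health monitor every hour via Task Scheduler.
# "C:\Scripts\..." is a placeholder path, not the article's actual location.
$Action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\hMailServer-HealthMonitor.ps1" -ServerName "bear-mailsrv1"'
$Trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1)
Register-ScheduledTask -TaskName "hMailServer Health Monitor" `
    -Action $Action -Trigger $Trigger -RunLevel Highest
```

Running the report generator from a second task (or at the end of the same one) keeps the dashboard current without coupling the two scripts.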
Generating the Dashboard
The web generator can run independently and finds the latest log automatically:
# Generate dashboard in current directory
.\Report-Generator.ps1
# Custom output location (useful for web servers)
.\Report-Generator.ps1 -OutputPath "C:\inetpub\wwwroot\health.html"
# Custom refresh interval (10 minutes)
.\Report-Generator.ps1 -RefreshInterval 600
Conclusion
Building this monitoring system solved a real business problem while teaching me valuable lessons about PowerShell, remote system management, and web interface design. The modular approach of separating data collection from presentation has proven robust and flexible.