I recently enhanced the Proofpoint queue monitoring script to include daily pattern analysis and trending. What started as a simple alert system evolved into a comprehensive monitoring solution that tracks queue behavior over time. Here's how I built it and the challenges I encountered along the way.
The Requirements
The original script monitored two queue types (Attachment Defense and Message Defense) and sent email alerts when queues exceeded zero. I needed to add:
- Continuous monitoring with configurable intervals
- Queue pattern tracking throughout the day
- Daily summary emails at 7 AM showing 24-hour trends
- Visual heatmaps showing hourly queue states
- Separate recipient lists for alerts vs. summaries
Visual Overview
The only genuinely new feature visible here is the daily overview with its heat map, so let's look at that now:
The heat map is the key section, so let's examine it a little more closely. As you can see, messages are generally an issue between 07:00 and 15:00 in this example. The values come from the JSON file, which records the number of messages in the queue every hour for this report:
Implementation Overview
Upgrading from Plain Text to HTML Emails
The original script sent plain text notifications. I transformed these into professional HTML emails with health card-style status indicators:
# Function to determine status color
function Get-StatusColor($value) {
    if ($value -eq 0) { return "#4CAF50" }                       # Green
    elseif ($value -ge 1 -and $value -le 5) { return "#FF9800" } # Amber
    else { return "#F44336" }                                    # Red
}

# Function to determine status text
function Get-StatusText($value) {
    if ($value -eq 0) { return "Healthy" }
    elseif ($value -ge 1 -and $value -le 5) { return "Warning" }
    else { return "Critical" }
}
These thresholds provide immediate visual feedback:
- Green (0): No issues, system healthy
- Amber (1-5): Warning level, needs attention
- Red (6+): Critical, immediate action required
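A quick spot-check of the helpers against those bands (illustrative calls, not from the original script):

Get-StatusColor 0   # "#4CAF50" - green / healthy
Get-StatusColor 3   # "#FF9800" - amber / warning
Get-StatusText 7    # "Critical"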
The HTML design uses a minimalistic approach with mobile-responsive CSS:
.status-card {
    flex: 1;
    background-color: #f8f9fa;
    border-radius: 8px;
    padding: 20px;
    text-align: center;
    border: 1px solid #e0e0e0;
}

@media only screen and (max-width: 480px) {
    .status-grid {
        flex-direction: column;
    }
}
Adding the Loop Functionality
First, I wrapped the entire monitoring logic in a continuous loop with a configurable interval:
# Loop Settings
$loopIntervalMinutes = 60 # Time to wait between runs (in minutes)
$loopCount = 0

while ($true) {
    $loopCount++
    Write-Host "[INFO] Run #$loopCount started at $(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')"

    # ... monitoring logic ...

    Start-Sleep -Seconds ($loopIntervalMinutes * 60)
}
This allows the script to run indefinitely, checking queues every hour (or whatever interval you configure).
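One caveat: Start-Sleep doesn't account for how long the monitoring logic itself takes, so each run drifts slightly later than the last. A minimal variation (my sketch, not part of the original script) that keeps the interval steady:

# Subtract each run's own duration so checks stay roughly aligned
$runStart = Get-Date
# ... monitoring logic ...
$elapsedSeconds = ((Get-Date) - $runStart).TotalSeconds
Start-Sleep -Seconds ([Math]::Max(($loopIntervalMinutes * 60) - $elapsedSeconds, 1))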
Tracking Queue Data
I implemented a JSON-based storage system to track queue values throughout the day:
# Trending Data Settings
$trendingDataFolder = "C:\ProgramData\ProofpointMonitor"
$trendingDataFile = Join-Path $trendingDataFolder "QueueTrends.json"
$lastSummaryFile = Join-Path $trendingDataFolder "LastSummary.txt"
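One prerequisite the listing above assumes: the folder must exist before the first write. If your setup doesn't already create it, something like this early in the script covers it:

# Create the data folder on first run if it doesn't already exist
if (-not (Test-Path $trendingDataFolder)) {
    New-Item -Path $trendingDataFolder -ItemType Directory -Force | Out-Null
}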
Each reading captures:
- Timestamp
- Hour (for heatmap generation)
- Attachment queue value
- Message queue value
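A single entry in QueueTrends.json therefore looks something like this (illustrative timestamp and values):

{
    "Timestamp": "2024-01-15 09:00:03",
    "Hour": 9,
    "AttachmentQueue": 0,
    "MessageQueue": 2
}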
Separate Notification Streams
One key requirement was different recipient lists for alerts vs. summaries:
# SMTP Settings for immediate alerts
$to = "lee@croucher.cloud"
$subject = "Proofpoint Queue Notification"
# Summary Email Settings for daily reports
$summaryTo = "lee@croucher.cloud"
$summarySubject = "Proofpoint Queue Daily Summary Report"
This allows operational alerts to go to on-call staff while summaries reach a broader audience for trend analysis.
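In practice the two variables would point at different lists; for example (placeholder addresses, not from the original config):

# Placeholder addresses - alerts to on-call, summaries to a wider group
$to        = "oncall@example.com"
$summaryTo = "oncall@example.com,team-leads@example.com"

The summary sender splits on commas at send time, as shown later with $summaryTo.Split(',').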
Why Pattern Analysis Instead of Message Tracking
An important design decision was focusing on queue patterns rather than trying to track individual messages. The challenge: when we see a queue depth of 3, we can't determine whether it's:
- Three different messages
- One message stuck for three hours
- A combination of stuck and new messages
Since we only have visibility into queue counts, not message IDs, I chose to track patterns and trends instead. This approach provides valuable insights without making false assumptions about message identity.
The JSON Array Challenge
The first major issue I encountered was with PowerShell's JSON handling. When appending data to the JSON file, I got this error:
Method invocation failed because [System.Management.Automation.PSObject] does not contain a method named 'op_Addition'.
The problem? PowerShell's ConvertFrom-Json returns different types depending on the JSON structure. Sometimes it's an array, sometimes a PSObject. Here's how I fixed it:
function Add-TrendingData {
    param($AttachmentValue, $MessageValue)

    $newEntry = @{
        Timestamp       = (Get-Date).ToString("yyyy-MM-dd HH:mm:ss")
        Hour            = (Get-Date).Hour
        AttachmentQueue = $AttachmentValue
        MessageQueue    = $MessageValue
    }

    # Read existing data
    $data = @()
    if (Test-Path $trendingDataFile) {
        $jsonContent = Get-Content $trendingDataFile -Raw
        if ($jsonContent) {
            # @() leaves an existing array alone and wraps a single
            # PSObject in a one-element array, covering both shapes
            # that ConvertFrom-Json can return
            $data = @($jsonContent | ConvertFrom-Json)
        }
    }

    # Add new entry
    $data += $newEntry

    # Save updated data
    $data | ConvertTo-Json -Depth 10 | Set-Content $trendingDataFile -Force
}
The key improvements:
- Always ensure $data is an array by wrapping the ConvertFrom-Json output in @(), which handles both the single-object and array cases
- Add -Depth 10 to ConvertTo-Json for proper serialization
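You can reproduce the underlying quirk in isolation - a minimal repro, assuming Windows PowerShell 5.1 or later:

# Piping a one-element array unwraps it, so it serializes as a bare object...
$json = @(@{ Hour = 7 }) | ConvertTo-Json
# ...which then deserializes as a single PSCustomObject, not an array
$data = $json | ConvertFrom-Json
$data -is [System.Array]    # False
$data = @($data)            # @() restores array semantics
$data -is [System.Array]    # True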
Building the Daily Summary
The daily summary runs at 7 AM and provides insights into the previous 24 hours of queue activity. I calculate several key metrics:
# Calculate statistics
$totalReadings      = $data.Count
$attachmentReadings = @($data | Where-Object { $_.AttachmentQueue -gt 0 })
$messageReadings    = @($data | Where-Object { $_.MessageQueue -gt 0 })

$maxAttachment = ($data | ForEach-Object { $_.AttachmentQueue } | Measure-Object -Maximum).Maximum
$maxMessage    = ($data | ForEach-Object { $_.MessageQueue } | Measure-Object -Maximum).Maximum

# Scale the fraction of non-zero readings up to 24 hours; exact when
# readings are hourly, an approximation at other intervals
$attachmentActiveHours = ($attachmentReadings.Count / [Math]::Max($totalReadings, 1)) * 24
$messageActiveHours    = ($messageReadings.Count / [Math]::Max($totalReadings, 1)) * 24
Tracking Queue Events
One interesting metric is "queue events" - how many times a queue went from 0 to a non-zero value. This indicates new incidents rather than sustained issues:
# Count new queue events (transitions from 0 to >0)
$attachmentEvents = 0
$messageEvents = 0

for ($i = 1; $i -lt $data.Count; $i++) {
    if ($data[$i-1].AttachmentQueue -eq 0 -and $data[$i].AttachmentQueue -gt 0) {
        $attachmentEvents++
    }
    if ($data[$i-1].MessageQueue -eq 0 -and $data[$i].MessageQueue -gt 0) {
        $messageEvents++
    }
}
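As a quick sanity check with made-up values: a day whose hourly message counts were 0, 2, 3, 0, 1 contains two events - the spike starting at hour 1 and the one at hour 4:

# Illustrative values only - two 0-to-nonzero transitions
$counts = 0, 2, 3, 0, 1
$events = 0
for ($i = 1; $i -lt $counts.Count; $i++) {
    if ($counts[$i-1] -eq 0 -and $counts[$i] -gt 0) { $events++ }
}
$events    # 2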
Creating the Visual Heatmap
The most impactful part of the summary is the 24-hour heatmap. I build it dynamically in HTML:
# Build hourly heatmap data
$hourlyMax = @{}
0..23 | ForEach-Object {
    $hourlyMax[$_] = @{
        Attachment = 0
        Message    = 0
    }
}

foreach ($reading in $data) {
    $hour = [int]$reading.Hour
    if ([int]$reading.AttachmentQueue -gt $hourlyMax[$hour].Attachment) {
        $hourlyMax[$hour].Attachment = [int]$reading.AttachmentQueue
    }
    if ([int]$reading.MessageQueue -gt $hourlyMax[$hour].Message) {
        $hourlyMax[$hour].Message = [int]$reading.MessageQueue
    }
}
Note the explicit [int] casting - another lesson learned when PowerShell was treating numbers as strings from the JSON.
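The trap is easy to demonstrate: with a string on the left-hand side, PowerShell compares lexically rather than numerically:

"10" -gt 9        # False - 9 is coerced to "9" and compared as a string
[int]"10" -gt 9   # True  - compared numerically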
HTML Email Generation
I kept the HTML simple and MIME-compatible, avoiding Unicode characters. The design philosophy was minimalistic and professional:
# Generate color-coded cells
0..23 | ForEach-Object {
    $value = $hourlyMax[$_].Attachment
    $class = if ($value -eq 0) { "heat-green" }
             elseif ($value -le 5) { "heat-amber" }
             else { "heat-red" }
    $displayValue = if ($value -eq 0) { "0" } else { $value }
    $body += "<td><span class='heat-cell $class'>$displayValue</span></td>"
}
The heat cells use simple CSS classes with clear visual distinction:
.heat-cell {
    width: 30px;
    height: 30px;
    display: inline-block;
    border-radius: 4px;
    line-height: 30px;
    color: white;
    font-weight: 600;
}

.heat-green { background-color: #4CAF50; }
.heat-amber { background-color: #FF9800; }
.heat-red { background-color: #F44336; }
Important: No emojis or special Unicode characters - just colors and numbers that render reliably across all email clients.
Preventing Duplicate Summaries
To ensure the summary only sends once per day at 7 AM, I track the last sent date:
function Should-SendSummary {
    $currentHour = (Get-Date).Hour

    # Check if it's 7 AM
    if ($currentHour -ne $summaryHour) {
        return $false
    }

    # Check if we already sent summary today
    if (Test-Path $lastSummaryFile) {
        $lastSummaryDate = Get-Content $lastSummaryFile
        $today = (Get-Date).ToString("yyyy-MM-dd")
        if ($lastSummaryDate -eq $today) {
            return $false
        }
    }

    return $true
}
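The function compares against $summaryHour, which isn't shown in the snippets above; I'd expect it to sit with the other settings, something like:

# Assumed setting - hour (24-hour clock) at which the daily summary sends
$summaryHour = 7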
Data Cleanup for Fresh Daily Tracking
After sending the 7 AM summary, the script cleans up to start fresh for the new day:
# Send summary email
Send-MailMessage -From $from -To $summaryTo.Split(',') -Subject $summarySubject -Body $body -BodyAsHtml -SmtpServer $smtpServer
Write-Host "[INFO] Daily summary email sent to: $summaryTo"
# Clear trending data for new day
Remove-Item $trendingDataFile -Force -ErrorAction SilentlyContinue
Write-Host "[INFO] Trending data cleared for new day"
# Record that we sent summary today
(Get-Date).ToString("yyyy-MM-dd") | Set-Content $lastSummaryFile -Force
This ensures:
- Each day starts with a clean JSON file
- File size remains manageable (only 24 hours of data)
- Historical data doesn't interfere with new patterns
Maintaining Test Mode
I preserved the original FireEmail test mode, which exits after one run:
# Exit if FireEmail mode (test mode - single run)
if ($FireEmail) {
    Write-Host "[INFO] FireEmail mode - exiting after single run."
    break
}
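Assuming $FireEmail is a [switch] parameter on the script (the original parameter block isn't shown here), a test run would look like:

# Hypothetical script name - single run that sends a test email, then exits
.\ProofpointMonitor.ps1 -FireEmail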
Conclusion
The enhanced monitoring system now provides:
- Real-time alerts when queues exceed thresholds
- Historical pattern analysis showing peak times
- Quantifiable metrics (active hours, event counts)
- Visual representation of daily patterns
- Separate notification streams for different audiences
The daily summaries have already revealed patterns I hadn't noticed before - consistent morning spikes and afternoon peaks that correlate with business email patterns. This data helps with capacity planning and identifying systemic issues versus one-off incidents.
The complete script runs as a scheduled task, providing 24/7 monitoring with minimal overhead. The JSON file remains small (under 1KB for 24 hours of hourly readings), and the script automatically cleans up after sending each daily summary.