I have always wondered if I could modernize how we visualize Windows Performance Counters, as the built-in System Performance data collector runs for only 60 seconds and doesn't give you a lot of data to go on in its report.
If you're tired of staring at PerfMon's dated interface and want something more visually appealing, this post will walk you through using PerfLens - a web-based dashboard that transforms your performance data into a futuristic monitoring experience.
Visuals of PerfLens
This is the main interface when loaded; it runs from an HTML file:
Let's choose the Processor option, which then shows you only the processor counters, as below:
I needed a modern dashboard that could display Windows Performance Counters over time from data collected on Windows machines. The built-in Performance Monitor works, but let's be honest - it looks like it hasn't been updated since Windows 2000. I wanted something that would make performance data actually pleasant to look at and easier to analyze.
PerfLens takes your performance counter data and creates visually pleasing charts and metric cards that you can actually show to management without apologizing for the interface.
Understanding Windows Performance Counters
Before diving into the solution, let me quickly explain what we're working with. Windows Performance Counters are built-in system metrics that give you insight into:
- CPU utilization and scheduling pressure
- Memory availability and paging activity
- Disk latency, queue length, and throughput
- Network throughput and packet rates
- Process/thread counts and context switches
These counters follow a specific path format like \Processor(_Total)\% Processor Time or \Memory\Available MBytes. Each sample creates a row in what becomes a time-series dataset - perfect for visualization.
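To make the path format concrete, here is a minimal Python sketch (illustrative only, not part of PerfLens or any Windows tool) that splits a counter path into its optional machine segment, object, optional instance, and counter name:

```python
import re

def parse_counter_path(path):
    """Split a PDH counter path such as
    \\Machine\\Object(Instance)\\Counter into its parts.
    The machine and instance segments are optional."""
    pattern = re.compile(
        r"^(?:\\\\(?P<machine>[^\\]+))?"   # optional \\MACHINE prefix
        r"\\(?P<object>[^\\(]+)"           # \Object
        r"(?:\((?P<instance>[^)]*)\))?"    # optional (Instance)
        r"\\(?P<counter>.+)$"              # \Counter
    )
    m = pattern.match(path)
    if not m:
        raise ValueError(f"Not a counter path: {path}")
    return m.groupdict()

parts = parse_counter_path(r"\Processor(_Total)\% Processor Time")
print(parts["object"], "|", parts["instance"], "|", parts["counter"])
```

Running this on `\Memory\Available MBytes` would yield no instance and the counter `Available MBytes`, which is exactly the distinction the dashboard needs when grouping series.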
Recommended Counter Set
After years of troubleshooting performance issues, I've settled on this comprehensive set that captures all potential bottlenecks through queue lengths and wait times:
CPU Metrics
- \Processor(_Total)\% Processor Time
- \Processor(_Total)\% User Time
- \Processor(_Total)\% Privileged Time
- \Processor(_Total)\% Interrupt Time
- \Processor(_Total)\% DPC Time
- \System\Processor Queue Length (key bottleneck indicator)
- \System\Context Switches/sec
- \System\System Calls/sec
- \System\Threads
- \System\Processes
Memory Metrics
- \Memory\Available MBytes
- \Memory\% Committed Bytes In Use
- \Memory\Pages/sec (key bottleneck indicator)
- \Memory\Page Faults/sec
- \Memory\Page Reads/sec
- \Memory\Page Writes/sec
- \Memory\Pages Input/sec
- \Memory\Pages Output/sec
- \Memory\Pool Nonpaged Bytes
- \Memory\Pool Paged Bytes
- \Memory\Cache Bytes
- \Memory\Standby Cache Core Bytes
- \Memory\Standby Cache Normal Priority Bytes
- \Memory\Standby Cache Reserve Bytes
- \Paging File(_Total)\% Usage (key bottleneck indicator)
Disk Metrics
- \PhysicalDisk(_Total)\% Disk Time
- \PhysicalDisk(_Total)\Current Disk Queue Length (key bottleneck indicator)
- \PhysicalDisk(_Total)\Avg. Disk Queue Length (key bottleneck indicator)
- \PhysicalDisk(_Total)\Avg. Disk Read Queue Length
- \PhysicalDisk(_Total)\Avg. Disk Write Queue Length
- \PhysicalDisk(_Total)\Disk Reads/sec
- \PhysicalDisk(_Total)\Disk Writes/sec
- \PhysicalDisk(_Total)\Disk Bytes/sec
- \PhysicalDisk(_Total)\Disk Read Bytes/sec
- \PhysicalDisk(_Total)\Disk Write Bytes/sec
- \PhysicalDisk(_Total)\Avg. Disk sec/Read (latency indicator)
- \PhysicalDisk(_Total)\Avg. Disk sec/Write (latency indicator)
- \PhysicalDisk(_Total)\Avg. Disk sec/Transfer
- \LogicalDisk(_Total)\% Free Space
Network Metrics
- \Network Interface(*)\Bytes Total/sec
- \Network Interface(*)\Bytes Received/sec
- \Network Interface(*)\Bytes Sent/sec
- \Network Interface(*)\Packets/sec
- \Network Interface(*)\Packets Received/sec
- \Network Interface(*)\Packets Sent/sec
- \Network Interface(*)\Output Queue Length (key bottleneck indicator)
- \Network Interface(*)\Packets Received Errors
- \Network Interface(*)\Packets Outbound Errors
- \Network Interface(*)\Packets Received Discarded
- \Network Interface(*)\Packets Outbound Discarded
- \Network Interface(*)\Current Bandwidth
System Metrics (Critical for bottleneck detection)
- \System\System Up Time
- \System\Processor Queue Length (CPU bottleneck)
- \System\Threads
- \System\Processes
- \System\Exception Dispatches/sec
- \System\File Read Operations/sec
- \System\File Write Operations/sec
- \System\File Control Operations/sec
- \System\File Data Operations/sec
- \System\File Read Bytes/sec
- \System\File Write Bytes/sec
- \System\File Control Bytes/sec
Process Metrics (for identifying resource hogs)
- \Process(_Total)\Handle Count
- \Process(_Total)\Thread Count
- \Process(_Total)\Working Set
- \Process(_Total)\Private Bytes
- \Process(_Total)\Page Faults/sec
- \Process(_Total)\IO Data Operations/sec
The key to identifying bottlenecks is watching the queue lengths - if any queue consistently has waiting items, you've found your constraint. Disk queues above 2 per spindle, processor queues above 2 per core, or network output queues with any sustained value all indicate bottlenecks.
Interpreting Bottleneck Indicators
Through my experience troubleshooting performance issues, here are the critical thresholds I watch for:
CPU Bottlenecks
- Processor Queue Length > 2 per core: Threads waiting for CPU time
- % Processor Time sustained > 80%: Getting close to capacity
- High Context Switches/sec (>15,000): Could indicate lock contention
- % Interrupt Time or % DPC Time > 15%: Driver or hardware issues
Memory Bottlenecks
- Pages/sec > 1,000: Active paging indicates memory pressure
- Page Reads/sec > 5: Hard page faults requiring disk access
- Available MBytes < 10% of total RAM: Running low on memory
- Paging File % Usage > 70%: Need more RAM or reduce load
Disk Bottlenecks
- Current/Avg Disk Queue Length > 2 per spindle: I/O backing up
- Avg. Disk sec/Read or sec/Write > 0.020 (20ms): High latency
- For SSDs: > 0.010 (10ms) indicates problems
- % Disk Time at 100%: Disk is saturated
Network Bottlenecks
- Output Queue Length > 0 sustained: Network can't keep up
- Packet errors or discards increasing: Physical layer issues
- Bytes Total/sec approaching interface bandwidth: Need faster NIC or aggregation
System-Wide Issues
- Handle Count growing continuously: Handle leak
- Thread Count excessive (>2000): Poor thread management
- File Data Operations backed up: Storage subsystem struggling
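These rules of thumb are easy to automate once you've normalized queue lengths per core and per spindle. As a minimal sketch (in Python for brevity; the thresholds are the rules of thumb above, not hard limits, and the short counter names are my own labels), a checker that flags one sample of counter values:

```python
def find_bottlenecks(sample, cores, spindles):
    """Flag bottlenecks in one sample of counter values.

    `sample` maps short counter labels to values; thresholds mirror
    the rules of thumb above and should be tuned per system.
    """
    alerts = []
    if sample.get("processor_queue", 0) > 2 * cores:
        alerts.append("CPU: run queue exceeds 2 per core")
    if sample.get("pages_per_sec", 0) > 1000:
        alerts.append("Memory: heavy paging (> 1000 pages/sec)")
    if sample.get("disk_queue", 0) > 2 * spindles:
        alerts.append("Disk: queue exceeds 2 per spindle")
    if sample.get("disk_sec_per_read", 0) > 0.020:
        alerts.append("Disk: read latency above 20 ms")
    if sample.get("output_queue", 0) > 0:
        alerts.append("Network: sustained output queue")
    return alerts

# Example: a quad-core box with one spindle under disk pressure
print(find_bottlenecks(
    {"processor_queue": 3, "disk_queue": 5, "disk_sec_per_read": 0.035},
    cores=4, spindles=1))
```

Note how the processor queue of 3 is fine on a quad-core machine (the limit is 2 per core, so 8), while the same logic would flag it immediately on a single-core VM.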
I've found that bottlenecks rarely occur in isolation - a disk bottleneck often leads to memory pressure as processes wait, which then impacts CPU efficiency. That's why collecting all these counters together gives you the complete picture.
Collecting Counters with PowerShell
I prefer using PowerShell for collection because it gives me more control and can be easily automated. Here's how I do it:
Discovering Available Counters
First, I check what's available on the system:
# List all counter categories
Get-Counter -ListSet * | Select-Object CounterSetName | Sort-Object CounterSetName
# List specific counters under Processor
Get-Counter -ListSet "Processor" | Select-Object -ExpandProperty Counter
Direct CSV Export (The Simplest Method)
This is my preferred approach - it creates a CSV that's immediately ready for PerfLens:
$counters = @(
# CPU and System
"\Processor(_Total)\% Processor Time",
"\Processor(_Total)\% User Time",
"\Processor(_Total)\% Privileged Time",
"\Processor(_Total)\% Interrupt Time",
"\Processor(_Total)\% DPC Time",
"\System\Processor Queue Length",
"\System\Context Switches/sec",
"\System\System Calls/sec",
"\System\Threads",
"\System\Processes",
"\System\Exception Dispatches/sec",
# Memory
"\Memory\Available MBytes",
"\Memory\% Committed Bytes In Use",
"\Memory\Pages/sec",
"\Memory\Page Faults/sec",
"\Memory\Page Reads/sec",
"\Memory\Page Writes/sec",
"\Memory\Pool Nonpaged Bytes",
"\Memory\Pool Paged Bytes",
"\Paging File(_Total)\% Usage",
# Disk - All queue lengths for bottleneck detection
"\PhysicalDisk(_Total)\Current Disk Queue Length",
"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length",
"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length",
"\PhysicalDisk(_Total)\% Disk Time",
"\PhysicalDisk(_Total)\Disk Reads/sec",
"\PhysicalDisk(_Total)\Disk Writes/sec",
"\PhysicalDisk(_Total)\Avg. Disk sec/Read",
"\PhysicalDisk(_Total)\Avg. Disk sec/Write",
"\LogicalDisk(_Total)\% Free Space",
# Network - Including output queue for bottlenecks
"\Network Interface(*)\Bytes Total/sec",
"\Network Interface(*)\Output Queue Length",
"\Network Interface(*)\Packets Received Errors",
"\Network Interface(*)\Packets Outbound Errors"
)
Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 300 |
Export-Counter -Path "C:\PerfLogs\perflens_capture.csv" -FileFormat CSV -Force
This captures data every second for 5 minutes (300 samples) and outputs directly to CSV format that PerfLens can consume.
Dealing with Binary Log Files (.blg)
Sometimes you'll end up with .blg files, especially if you're using Performance Monitor's Data Collector Sets for longer captures. These binary files are:
- Smaller on disk than CSV
- Faster to write during collection
- More reliable for sustained monitoring
I convert them using the built-in relog tool:
Basic Conversion
relog "C:\PerfLogs\capture.blg" -f CSV -o "C:\PerfLogs\capture.csv"
Time-Range Filtering
relog "C:\PerfLogs\capture.blg" -f CSV -o "C:\PerfLogs\filtered.csv" `
-b "01/15/2024 09:00:00" -e "01/15/2024 17:00:00"
Batch Conversion Script
I often use this to convert multiple files at once:
Get-ChildItem "C:\PerfLogs\*.blg" | ForEach-Object {
$csvPath = $_.FullName -replace '\.blg$', '.csv'
relog $_.FullName -f CSV -o $csvPath
Write-Host ("Converted: {0} -> {1}" -f $_.Name, (Split-Path $csvPath -Leaf))
}
Collection Script
Here's the script I use most often. It's parameterized so I can adjust duration and sampling interval as needed, and includes all counters necessary to detect system bottlenecks:
#Requires -RunAsAdministrator
param(
[int]$DurationMinutes = 5,
[int]$IntervalSeconds = 1,
[string]$OutputPath = "C:\PerfLogs\perflens_$(Get-Date -Format 'yyyyMMdd_HHmmss').csv"
)
# Comprehensive counter set for bottleneck detection
$counters = @(
# CPU - watch for sustained queue length > 2 per core
"\Processor(_Total)\% Processor Time",
"\Processor(_Total)\% User Time",
"\Processor(_Total)\% Privileged Time",
"\Processor(_Total)\% Interrupt Time",
"\Processor(_Total)\% DPC Time",
"\System\Processor Queue Length", # KEY BOTTLENECK INDICATOR
"\System\Context Switches/sec",
"\System\System Calls/sec",
"\System\Threads",
"\System\Processes",
"\System\Exception Dispatches/sec",
# Memory - pages/sec > 1000 indicates pressure
"\Memory\Available MBytes",
"\Memory\% Committed Bytes In Use",
"\Memory\Pages/sec", # KEY BOTTLENECK INDICATOR
"\Memory\Page Faults/sec",
"\Memory\Page Reads/sec",
"\Memory\Page Writes/sec",
"\Memory\Pages Input/sec",
"\Memory\Pages Output/sec",
"\Memory\Pool Nonpaged Bytes",
"\Memory\Pool Paged Bytes",
"\Memory\Cache Bytes",
"\Paging File(_Total)\% Usage", # > 70% is concerning
# Disk - queue length > 2 per spindle indicates bottleneck
"\PhysicalDisk(_Total)\Current Disk Queue Length", # KEY BOTTLENECK
"\PhysicalDisk(_Total)\Avg. Disk Queue Length", # KEY BOTTLENECK
"\PhysicalDisk(_Total)\Avg. Disk Read Queue Length",
"\PhysicalDisk(_Total)\Avg. Disk Write Queue Length",
"\PhysicalDisk(_Total)\% Disk Time",
"\PhysicalDisk(_Total)\Disk Reads/sec",
"\PhysicalDisk(_Total)\Disk Writes/sec",
"\PhysicalDisk(_Total)\Disk Bytes/sec",
"\PhysicalDisk(_Total)\Avg. Disk sec/Read", # > 20ms is slow
"\PhysicalDisk(_Total)\Avg. Disk sec/Write", # > 20ms is slow
"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
"\LogicalDisk(_Total)\% Free Space",
# Network - any sustained output queue indicates bottleneck
"\Network Interface(*)\Bytes Total/sec",
"\Network Interface(*)\Bytes Received/sec",
"\Network Interface(*)\Bytes Sent/sec",
"\Network Interface(*)\Packets/sec",
"\Network Interface(*)\Output Queue Length", # KEY BOTTLENECK INDICATOR
"\Network Interface(*)\Packets Received Errors",
"\Network Interface(*)\Packets Outbound Errors",
"\Network Interface(*)\Packets Received Discarded",
"\Network Interface(*)\Packets Outbound Discarded",
# System I/O
"\System\File Read Operations/sec",
"\System\File Write Operations/sec",
"\System\File Data Operations/sec",
# Process totals for context
"\Process(_Total)\Handle Count",
"\Process(_Total)\Thread Count",
"\Process(_Total)\Working Set",
"\Process(_Total)\Private Bytes"
)
$maxSamples = [int](($DurationMinutes * 60) / $IntervalSeconds)
# Ensure output folder exists
$dir = Split-Path $OutputPath -Parent
if (!(Test-Path $dir)) {
New-Item -ItemType Directory -Path $dir -Force | Out-Null
}
Write-Host "Collecting performance data for $DurationMinutes minutes..."
Write-Host "Monitoring $(($counters | Measure-Object).Count) counters for bottlenecks"
Get-Counter -Counter $counters -SampleInterval $IntervalSeconds -MaxSamples $maxSamples |
Export-Counter -Path $OutputPath -FileFormat CSV -Force
Write-Host "Saved CSV to: $OutputPath"
Write-Host "Next: open PerfLens and drag & drop the CSV."
Write-Host ""
Write-Host "Bottleneck thresholds to watch for:"
Write-Host " - Processor Queue Length > 2 per core"
Write-Host " - Memory Pages/sec > 1000"
Write-Host " - Disk Queue Length > 2 per spindle"
Write-Host " - Network Output Queue Length > 0 (sustained)"
Write-Host " - Disk Latency (sec/Read or sec/Write) > 0.020"
PerfLens: Modern Performance Counter Viewer
The magic happens when you drag and drop your CSV into PerfLens. The CSV format from PerfMon is already perfectly structured:
"(PDH-CSV 4.0) (Local Time)(300)","\HOST\Processor(_Total)\% Processor Time","\HOST\Memory\Available MBytes"
"01/15/2024 10:30:00.000","25.5","8192"
"01/15/2024 10:30:01.000","32.1","8190"
- Row 1 contains the counter paths (metadata)
- Column 1 is the timestamp (X-axis)
- Remaining columns are numeric values (Y-series)
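To show how little work a consumer of this format has to do, here is a rough Python sketch of a PDH CSV parser (PerfLens itself is a web app and its parser is not shown here; this just illustrates the structure):

```python
import csv
import io

def parse_pdh_csv(text):
    """Parse PDH-style CSV: the header row holds counter paths,
    column 0 is the timestamp, remaining cells are numeric samples."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    series = {path: [] for path in header[1:]}
    timestamps = []
    for row in data:
        timestamps.append(row[0])
        for path, cell in zip(header[1:], row[1:]):
            # PDH leaves a blank cell when a sample is missing
            series[path].append(float(cell) if cell.strip() else None)
    return timestamps, series

sample = (
    '"(PDH-CSV 4.0) (Local Time)(300)",'
    '"\\\\HOST\\Processor(_Total)\\% Processor Time"\n'
    '"01/15/2024 10:30:00.000","25.5"\n'
    '"01/15/2024 10:30:01.000","32.1"\n'
)
timestamps, series = parse_pdh_csv(sample)
```

Each counter path becomes a named time series keyed by its full path, which is all a charting library needs to draw one line per counter against the shared timestamp axis.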
PerfLens parses this structure and renders:
- Area charts for utilization metrics (CPU, disk time)
- Line charts for rate metrics (bytes/sec, reads/sec)
- Sparklines for quick trend visualization
- Metric cards showing current/min/max values with trends
The result is a dashboard that actually looks like it belongs in 2024, not 1999.
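The chart-type mapping above can be sketched as a simple heuristic on the counter name. This is a hypothetical illustration in Python, not PerfLens's actual code:

```python
def pick_chart(counter_path):
    """Hypothetical heuristic for choosing a chart style from a
    counter path, mirroring the mapping described above."""
    name = counter_path.rsplit("\\", 1)[-1]
    if "/sec" in name:           # rate metrics -> line chart
        return "line"
    if name.startswith("%"):     # utilization percentages -> area chart
        return "area"
    return "card"                # everything else -> metric card

print(pick_chart(r"\Processor(_Total)\% Processor Time"))
```

A real implementation would likely also consult the counter type embedded in the log, but the name alone gets surprisingly far with PDH paths.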
The End-to-End Process
If you want to try this yourself:
- Collect the data: run my PowerShell script above
- Convert if needed: if you have .blg files, use relog to convert to CSV
- Visualize: drag the CSV into PerfLens
- Analyze: enjoy your futuristic performance dashboard
Using the GUI Alternative
If PowerShell isn't your thing, you can still use the traditional Performance Monitor:
- Open the Run dialog: Win + R
- Type: perfmon
- Navigate to: Data Collector Sets → User Defined
- Create a New Data Collector Set
- Add your performance counters
- Start collection, wait, stop
- Find the .blg output file
- Convert it: relog "C:\PerfLogs\yourcapture.blg" -f CSV -o "C:\PerfLogs\yourcapture.csv"
- Drag the CSV into PerfLens
Conclusion
I've been using this approach for monitoring everything from Exchange servers to SQL clusters, and the visual difference compared to traditional PerfMon is night and day. The ability to quickly spot trends, correlate metrics, and actually enjoy looking at performance data has made troubleshooting much more efficient.
The best part? Once you have your CSV, PerfLens handles all the heavy lifting of making it look good. No more apologizing for ugly performance reports - just drag, drop, and analyze.