Notice: Due to size constraints and loading performance considerations, scripts referenced in blog posts are not attached directly. To request access, please complete the following form: Script Request Form Note: A Google account is required to access the form.
Disclaimer: I do not accept responsibility for any issues arising from scripts being run without adequate understanding. It is the user's responsibility to review and assess any code before execution. More information

Monitoring DNS Performance Counters Across Domain Controllers with PowerShell

You often need to monitor DNS performance across multiple domain controllers to identify bottlenecks and ensure optimal network performance. Recently, I developed a PowerShell script that collects DNS query metrics remotely and logs them to CSV for analysis.

The Challenge

When troubleshooting network performance issues, I frequently find that DNS response times can be a significant factor. However, getting real-time DNS performance data from multiple domain controllers simultaneously was proving to be a challenge. The built-in Windows Performance Monitor works well for single servers, but I needed something that could:

  • Collect data from multiple domain controllers remotely
  • Run continuously without manual intervention
  • Export data in a format suitable for analysis
  • Handle network connectivity issues gracefully

Initial Attempts and Roadblocks

First Try: Using logman for Performance Counter Collection

My initial approach was to use the built-in logman utility to create a performance counter data collector, then convert the resulting binary (.blg) log to CSV with relog:

relog ".\DNS-Queries_000001.blg" -f csv -o "dns.csv"
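For context, a collector of this kind is created and started with logman before relog converts its output. A rough sketch of the setup commands (the collector name, sample interval, and output path here are illustrative assumptions, not taken from my actual configuration):

```shell
logman create counter DNS-Queries -c "\DNS\Total Query Received/sec" -si 5 -f bin -o ".\DNS-Queries"
logman start DNS-Queries
logman stop DNS-Queries
```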

However, this quickly ran into problems:

Input
----------------
File(s):
     .\DNS-Queries_000001.blg (Binary)
Error:
Unable to read counter information and data from input binary log files.

The counter was running but the log files were empty, suggesting that the performance counters weren't being properly configured when logging started. After several attempts to restart the data collector and verify counter availability, I realized that logman wasn't going to be reliable for my needs.

Second Attempt: PowerShell Script with Service Discovery

I then tried building a comprehensive PowerShell script that would automatically discover domain controllers and validate services before collecting counters. This seemed like the right approach - until I ran into multiple issues:

Domain Controller Discovery Problems

The script attempted to enumerate domain controllers using various methods:

# This was failing to return proper results
$nltest = nltest /dclist: 2>$null

The discovery function was returning empty strings, leading to errors like:

[2025-06-11 10:36:20] Testing DNS Server counter availability on ...
[2025-06-11 10:36:20] ERROR: DNS Server counters are not available on . 
Error: Unable to connect to the specified computer or the computer is offline.
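What later proved more robust was treating the nltest output as plain text and parsing it defensively, rather than trusting the enumeration to return clean values. A minimal sketch (the helper name is mine, and the sample output below imitates typical nltest /dclist results, so both are assumptions):

```powershell
# Parse 'nltest /dclist:<domain>' style output into a clean list of DC names.
function Get-DCNamesFromNltest {
    param([string[]]$Lines)

    # Keep only lines that start with \\DCNAME, strip the backslashes,
    # then drop anything after the name (the [PDC] flags, site info, etc.)
    $Lines |
        Where-Object { $_ -match '^\s*\\\\(\S+)' } |
        ForEach-Object { ($_ -replace '^\s*\\\\', '') -replace '\s.*$', '' } |
        Where-Object { $_ -and $_.Length -gt 0 }
}

# Example with mocked nltest output (the format is an assumption)
$sample = @(
    "Get list of DCs in domain 'bear.local' from '\\bearclaws'.",
    '    \\bearclaws [PDC] [DS] Site: Default-First-Site-Name',
    '    \\bearpaws        [DS] Site: Default-First-Site-Name',
    'The command completed successfully'
)
Get-DCNamesFromNltest -Lines $sample   # -> bearclaws, bearpaws
```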

Syntax Errors with Complex Pipelines

As I tried to make the script more robust, I encountered multiple PowerShell syntax errors:

At C:\DNS-Logger.ps1:85 char:36
+     # Create directory if it doesn't exist
+                                    ~
Unexpected token 't' in expression or statement.

At C:\DNS-Logger.ps1:360 char:19
+                 } | Where-Object {$_ -and $_.Length -gt 0}
+                   ~
An empty pipe element is not allowed.

The complex pipeline operations I was using weren't parsing correctly, and even apostrophes inside comments were triggering parser errors (a plain apostrophe in a # comment is normally harmless, which suggested the script text itself was being mangled somewhere along the way).

Wrong Counter Paths

Even when the syntax was correct, I was using the wrong performance counter path:

Get-Counter "\DNS Server(_Total)\Total Query Received/sec" -ComputerName $ServerName

The script would run but return errors like:

[2025-06-11 10:53:38] ERROR: DNS Server counters are not available on bearclaws.bear.local. 
Error: The specified object was not found on the server.

The Breakthrough

After struggling with these issues, I decided to test the counter collection manually:

Get-Counter -Counter "\\bearclaws\DNS\Total Query Received/sec"

And it worked perfectly!

Timestamp                 CounterSamples
---------                 --------------
11/06/2025 10:57:20       \\bearclaws\dns\total query received/sec :
                          235.781524839084

This revealed two key insights:

  1. The correct counter path was \DNS\Total Query Received/sec, not \DNS Server(_Total)\Total Query Received/sec
  2. I didn't need all the service checking - I just needed to try collecting the counter and handle failures gracefully

Basic Counter Collection

The core function collects the DNS counter from each specified server:

function Get-DNSCounter {
    param([string]$ServerName)
    
    $timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
    $counterValue = 0
    
    try {
        if ($ServerName -eq $env:COMPUTERNAME -or $ServerName -eq "localhost") {
            # Local counter
            $counter = Get-Counter "\DNS\Total Query Received/sec" `
                -MaxSamples 1 -ErrorAction Stop
        } else {
            # Remote counter - use the working format
            $counter = Get-Counter "\\$ServerName\DNS\Total Query Received/sec" `
                -MaxSamples 1 -ErrorAction Stop
        }
        
        if ($counter -and $counter.CounterSamples) {
            $counterValue = [math]::Round($counter.CounterSamples[0].CookedValue, 2)
        }
        
        return "$timestamp,$ServerName,$counterValue"
    }
    catch {
        Write-Log "Warning: Cannot collect from $ServerName - $($_.Exception.Message)"
        return "$timestamp,$ServerName,0"
    }
}

CSV Output Management

I wanted the script to create a clean CSV file that I could easily import into Excel or other analysis tools:

function Initialize-CSVFile {
    param([string]$FilePath)
    
    $headers = "Timestamp,ServerName,TotalQueryReceivedPerSec"
    
    # Create directory if needed
    $directory = Split-Path $FilePath -Parent
    if ($directory -and !(Test-Path $directory)) {
        New-Item -ItemType Directory -Path $directory -Force | Out-Null
    }
    
    # Write headers if file does not exist
    if (!(Test-Path $FilePath)) {
        $headers | Out-File -FilePath $FilePath -Encoding UTF8
        Write-Log "Created CSV file: $FilePath"
    }
}
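That header-once, append-many contract is easy to sanity-check in isolation. A standalone sketch using a temporary file (the file name and sample values are illustrative):

```powershell
# Standalone check of the create-with-header / append-rows contract.
$tempCsv = Join-Path ([System.IO.Path]::GetTempPath()) 'dns_logger_check.csv'
if (Test-Path $tempCsv) { Remove-Item $tempCsv }

# The first write creates the file with the header line...
"Timestamp,ServerName,TotalQueryReceivedPerSec" | Out-File -FilePath $tempCsv -Encoding UTF8

# ...later writes only append data rows.
"2025-06-11 10:57:25,bearclaws,235.78" | Out-File -FilePath $tempCsv -Append -Encoding UTF8

# Import-Csv sees the header plus one data row.
$rows = Import-Csv $tempCsv
$rows[0].ServerName   # -> bearclaws
```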

Main Collection Loop

The script runs continuously, collecting data at specified intervals:

try {
    while ($iteration -lt $totalIterations) {
        $iteration++
        
        # Collect from each server
        foreach ($server in $Servers) {
            $csvRow = Get-DNSCounter -ServerName $server
            $csvRow | Out-File -FilePath $OutputPath -Append -Encoding UTF8
        }
        
        # Progress update every 20 iterations
        if ($iteration % 20 -eq 0) {
            $elapsed = $iteration * $IntervalSeconds
            $remaining = ($totalIterations - $iteration) * $IntervalSeconds
            Write-Log "Progress: $iteration/$totalIterations (Elapsed: ${elapsed}s, Remaining: ${remaining}s)"
        }
        
        # Wait for next collection
        Start-Sleep -Seconds $IntervalSeconds
    }
}
finally {
    # A try block needs a matching catch or finally; log completion here
    Write-Log "Collection loop finished after $iteration iterations"
}
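The loop bound itself comes from the runtime parameters. The parameter names (MaxHours, IntervalSeconds) match the usage examples further down; exactly how the script derives $totalIterations is my assumption, but the arithmetic is straightforward:

```powershell
# Derive the iteration count from run duration and sampling interval
# (parameter names follow the usage examples; the derivation is assumed).
$MaxHours = 0.5          # a 30-minute run
$IntervalSeconds = 5     # one sample per server every 5 seconds

$totalIterations = [math]::Floor(($MaxHours * 3600) / $IntervalSeconds)
$totalIterations   # -> 360
```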

Using the Script

I can now monitor DNS performance across my domain controllers with simple commands:

# Monitor specific servers every 10 seconds
.\dns_logger.ps1 -Servers @('bearclaws','bearpaws','grizzlypaws') -IntervalSeconds 10

# Run a short test for 30 minutes
.\dns_logger.ps1 -Servers @('bearclaws','bearpaws') -MaxHours 0.5 -IntervalSeconds 5

# Custom output location
.\dns_logger.ps1 -Servers @('bearclaws') -OutputPath "C:\Reports\dns_metrics.csv"

The Results

The script generates CSV output like this:

Timestamp,ServerName,TotalQueryReceivedPerSec
2025-06-11 10:57:25,bearclaws,235.78
2025-06-11 10:57:25,bearpaws,189.45
2025-06-11 10:57:25,grizzlypaws,312.67
2025-06-11 10:57:30,bearclaws,241.33
2025-06-11 10:57:30,bearpaws,195.22
2025-06-11 10:57:30,grizzlypaws,298.41
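Once a few samples exist, output in this shape is easy to summarise straight from PowerShell. A sketch of a per-server average, with inline sample data standing in for the real log file (against the actual CSV you would use Import-Csv instead of ConvertFrom-Csv):

```powershell
# Average queries/sec per server; the inline sample stands in for
# Import-Csv against the real log file.
$rows = @"
Timestamp,ServerName,TotalQueryReceivedPerSec
2025-06-11 10:57:25,bearclaws,235.78
2025-06-11 10:57:25,bearpaws,189.45
2025-06-11 10:57:30,bearclaws,241.33
2025-06-11 10:57:30,bearpaws,195.22
"@ | ConvertFrom-Csv

$summary = $rows | Group-Object ServerName | ForEach-Object {
    # CSV fields come in as strings, so cast before averaging
    $avg = ($_.Group | ForEach-Object { [double]$_.TotalQueryReceivedPerSec } |
            Measure-Object -Average).Average
    [pscustomobject]@{ Server = $_.Name; AvgQps = [math]::Round($avg, 2) }
}

$summary | Format-Table
```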

This data allows me to:

  • Identify peak usage periods by analyzing query rates over time
  • Balance load by understanding which domain controllers are handling the most DNS requests
  • Troubleshoot performance issues by correlating DNS query spikes with user complaints
  • Plan capacity by understanding growth trends in DNS usage

The Output

When you run the script, nothing is printed to the console; the data is written to the CSV file in the background. The script creates the CSV file, which will be required for the next blog post on data visualisation.

Error Handling

The script is designed to be resilient. If a server is unreachable or doesn't have DNS services running, it logs a warning but continues collecting from other servers:

[2025-06-11 10:57:25] Warning: Cannot collect from noclaws - 
The specified object was not found on the computer.

This means I can include servers in my list even if I'm not sure they're running DNS services - the script will simply skip them and continue.

Conclusion

This PowerShell script has become an essential tool in my DNS monitoring. It provides the real-time visibility I need to maintain optimal DNS performance across my domain infrastructure. The ability to collect data remotely and export it in a standardized format makes it easy to integrate into my existing monitoring and reporting processes.
