
From 13 Hours to 3 Minutes: Building a Windows File Count and Disk Health Monitor


This is how I built a complete disk health monitoring solution for Windows servers - mainly to monitor the number of files on the primary file system and to run chkdsk in read-only mode. It also covers all the obstacles I encountered along the way.

The Problem

I needed a way to monitor disk health across multiple Windows servers - in our environment this applies only to Domain Controllers, given their critical nature. Specifically, I wanted to track:

  • File count on C:\ drive with health thresholds
  • Disk health status using read-only chkdsk scans
  • Professional reporting with a web dashboard

The requirements were simple, but the implementation proved to be anything but straightforward.

Visual Results

Here is a visual of the final website, showing the statistics and data:


This is the deployment script; it handles the folder creation, registers the scheduled task for 2 AM, and kicks that task off immediately:


This is the collect script in action:


Then the log file that drives the website is shown below. Notice that the disk errors value is 0 KB, which means Healthy. You can view it with the command:
Get-Content "\\beardc\c$\windows\temp\DiskCheck\system-info.log" -Tail 10

This is the output:

[2025-06-09 16:13:40] === Disk Health Monitor Started ===
[2025-06-09 16:13:40] Server: BearDc
[2025-06-09 16:13:40] User: BearDC$
[2025-06-09 16:13:40] Process ID: 14668
[2025-06-09 16:13:40] Start Time: 06/09/2025 16:13:39
[2025-06-09 16:13:40] Starting file count on C:\
[2025-06-09 16:14:55] File count completed: 331147 files in 1.3 minutes
[2025-06-09 16:14:55] Starting read-only disk health check (chkdsk C: /scan)
[2025-06-09 16:16:32] DISK ERRORS DETECTED:          0 KB in bad sectors.
[2025-06-09 16:16:32] === Monitoring completed in 2.9 minutes ===

Attempt #1: The Direct Approach (That Didn't Work)

My first instinct was to create a simple PowerShell script that would run locally and count files:

# This works locally in ~90 seconds
(Get-ChildItem -Recurse -Path C:\ -Force -ErrorAction SilentlyContinue |
    Measure-Object).Count

This command worked perfectly when run directly on a server - taking about 90 seconds to count ~575,000 files. Naturally, I thought extending this to remote servers would be trivial.

I was wrong.

Attempt #2: WinRM Remote Execution (The First Failure)

My next approach was to use PowerShell remoting with Invoke-Command:

$FileCount = Invoke-Command -ComputerName $ServerName -ScriptBlock {
    (Get-ChildItem -Recurse -Path C:\ -Force -ErrorAction SilentlyContinue |
        Measure-Object).Count
}

The Problem: WinRM wasn't enabled on our target servers. When I tried to run this, I got connection failures:

Test-WSMan -ComputerName st1w1660 -ErrorAction Stop
# ERROR: The client cannot connect to the destination specified in the request
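
For reference, had enabling remoting been an option, it is a one-liner on each target server - but as you'll see, WinRM was off the table in our environment:

# Shown for reference only - run on the target server to enable WinRM
Enable-PSRemoting -Force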

Lesson Learned: Always verify your infrastructure prerequisites before building solutions around them - I slipped up there.

Attempt #3: PsExec to the Rescue (Or So I Thought)

Since WinRM was off the table, I turned to PsExec - a tool that could execute commands on remote systems without requiring WinRM:

# This worked for simple commands
.\psexec.exe \\$ServerName -accepteula cmd /c "echo test"

# But this hung indefinitely
.\psexec.exe \\$ServerName -accepteula powershell.exe -Command `
    "(Get-ChildItem -Recurse -Path C:\ -Force -ErrorAction SilentlyContinue | Measure-Object).Count"

The Problem: While basic PsExec commands worked instantly, trying to execute the PowerShell file counting command via PsExec would hang for hours without returning results.

I spent considerable time debugging this, thinking it was:

  • Execution policy issues
  • Output buffering problems
  • Network timeouts
  • PowerShell startup delays

Attempt #4: UNC Path Scanning (Slow But Functional)

Frustrated with remote execution, I tried a different approach - running the scan locally but pointing to the remote drive via UNC paths:

# Scan remote drive from management server
$UNCPath = "\\$ServerName\C$"
$FileCount = (Get-ChildItem -Recurse -Path $UNCPath -Force -ErrorAction SilentlyContinue |
 Measure-Object).Count

The Good News: This approach actually worked!

The Bad News: What took 90 seconds locally took over 13 hours via UNC paths:

[2025-06-07 02:16:30] UNC scan completed successfully
[2025-06-07 02:16:30] Total files found: 739421
[2025-06-07 02:16:30] Total time: 829.6 minutes

Lesson Learned: Network overhead can make operations orders of magnitude slower than local execution.
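
If you want to quantify that overhead yourself, Measure-Command makes the comparison straightforward (the server name here is illustrative):

# Time the same scan locally and over a UNC path
$Local  = Measure-Command { (Get-ChildItem -Recurse -Path 'C:\' -Force -ErrorAction SilentlyContinue | Measure-Object).Count }
$Remote = Measure-Command { (Get-ChildItem -Recurse -Path '\\Server1\C$' -Force -ErrorAction SilentlyContinue | Measure-Object).Count }
"Local: {0:N1} min  UNC: {1:N1} min" -f $Local.TotalMinutes, $Remote.TotalMinutes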

The Breakthrough: Scheduled Tasks + File Collection

After all these failed attempts, I realized I was approaching the problem wrong. Instead of trying to force remote execution, why not:

  1. Deploy monitoring scripts to each server
  2. Run them locally via scheduled tasks
  3. Collect the results via simple file retrieval

This led to a three-script solution:

Script 1: The Deployment Script

This script deploys monitoring to target servers:

# Deploy-DiskMonitor.ps1 - Sets up monitoring on remote servers
.\Deploy-DiskMonitor.ps1 -Server "ServerName"
.\Deploy-DiskMonitor.ps1 -Servers @("Server1","Server2") 
.\Deploy-DiskMonitor.ps1 -ADDS  # All domain controllers

What it does:

  • Creates folder structure on remote server
  • Copies monitoring script to C:\Windows\Temp\DiskCheck\
  • Creates scheduled task (weekly at 2 AM) - sketched below
  • Runs initial scan immediately
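
As a rough sketch, the task-creation step could look like the following (the task name, script path, and schedule flags are my assumptions, based on the folder layout above and the schtasks examples later in this post):

# Sketch: register the weekly 2 AM task on the remote server, then fire it once
schtasks /create /s $ServerName /tn "DiskHealthMonitor" `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Windows\Temp\DiskCheck\DiskMonitor.ps1" `
    /sc weekly /d SUN /st 02:00 /ru SYSTEM /f
schtasks /run /s $ServerName /tn "DiskHealthMonitor"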

Here's the core monitoring script that gets deployed:

# The script that runs locally on each server
$StartTime = Get-Date   # start timestamp for the elapsed-time log line
$BaseFolder = 'C:\Windows\Temp\DiskCheck'

# File count (runs locally - fast!)
$FileCount = (Get-ChildItem -Recurse -Path 'C:\' -Force -ErrorAction SilentlyContinue | 
Measure-Object).Count
$FileCount | Out-File -FilePath "$BaseFolder\filecount.txt" -Encoding ASCII

# Disk health check (read-only)
$ChkdskOutput = & chkdsk C: /scan 2>&1
$ChkdskOutput | Out-File -FilePath "$BaseFolder\diskhealth.log" -Encoding ASCII

# Status file indicates completion
"COMPLETED" | Out-File -FilePath "$BaseFolder\status.txt" -Encoding ASCII

Script 2: The Collection Script

This script gathers results from all monitored servers:

# Collect-DiskResults.ps1 - Fast file collection
.\Collect-DiskResults.ps1 -Server "ServerName"
.\Collect-DiskResults.ps1 -ADDS

The magic: Instead of remote execution, this simply reads files:

# Lightning fast - just file reads
$FileCountFile = "\\$ServerName\C$\Windows\Temp\DiskCheck\filecount.txt"
$FileCount = Get-Content $FileCountFile

$DiskHealthFile = "\\$ServerName\C$\Windows\Temp\DiskCheck\diskhealth.log" 
$DiskContent = Get-Content $DiskHealthFile
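
A minimal sketch of how the collection loop might handle the edge cases (unreachable servers, missing files); $Servers and the status-file convention come from the deployment step above:

# Sketch: collect results from each server, skipping ones that aren't ready
foreach ($ServerName in $Servers) {
    $Base = "\\$ServerName\C$\Windows\Temp\DiskCheck"
    if (-not (Test-Path "$Base\status.txt")) {
        Write-Warning "$ServerName has no completed results yet - skipping"
        continue
    }
    $FileCount   = [int](Get-Content "$Base\filecount.txt" -ErrorAction SilentlyContinue)
    $DiskContent = Get-Content "$Base\diskhealth.log" -ErrorAction SilentlyContinue
}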

Script 3: The Dashboard Generator

This creates a professional web dashboard from collected results:

# Generate-HealthDashboard.ps1 - Creates web dashboard
.\Generate-HealthDashboard.ps1

Health Thresholds:

  • Healthy: ≤ 550,000 files (Green)
  • Warning: 550,001 - 920,000 files (Yellow)
  • Critical: > 920,000 files (Red)

Disk Status:

  • Healthy: 0 KB in bad sectors
  • Unhealthy: Any bad sectors detected
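
The Healthy/Unhealthy call can be made by parsing the collected chkdsk output for the bad-sectors line. A minimal sketch (the regex is my assumption, shaped to match the log lines shown in this post):

# Sketch: derive disk status from the diskhealth.log content collected above
$BadSectorLine = $DiskContent | Where-Object { $_ -match 'bad sectors' } | Select-Object -First 1
if ($BadSectorLine -match '([\d,]+)\s*KB in bad sectors') {
    $BadKB = [int]($Matches[1] -replace ',', '')
    $DiskStatus = if ($BadKB -eq 0) { 'Healthy' } else { 'Unhealthy' }
}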

The Final Architecture

The solution that finally worked:

Management Server          Target Servers
┌─────────────────┐       ┌──────────────────┐
│                 │       │                  │
│ 1. Deploy       │────→  │ Scheduled Task   │
│ 2. Collect      │←────  │ (runs locally)   │
│ 3. Dashboard    │       │                  │
│                 │       │ Results in:      │
└─────────────────┘       │ C:\Windows\Temp\ │
                          │ DiskCheck\       │
                          └──────────────────┘

Performance Results:

  • Local execution: ~90 seconds (same as manual)
  • Collection time: ~5 seconds per server
  • Total time: 2-3 minutes instead of 13+ hours

Health Thresholds in Action

The dashboard correctly interprets disk health:

# This is HEALTHY (0 bad sectors)
[2025-06-09 10:41:34] DISK ERRORS DETECTED: 0 KB in bad sectors.

# This would be UNHEALTHY (actual bad sectors)
[2025-06-09 10:41:34] DISK ERRORS DETECTED: 150 KB in bad sectors.

File Count Classification

function Get-FileCountStatus {
    param([int]$FileCount)
    
    if ($FileCount -le 550000) {
        return @{Status = "Healthy"; Color = "#28a745"}
    }
    elseif ($FileCount -le 920000) {
        return @{Status = "Warning"; Color = "#ffc107"}
    }
    else {
        return @{Status = "Critical"; Color = "#dc3545"}
    }
}
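
For example, the file count from the log output above classifies as Healthy:

# Classify the 331,147-file result from the earlier log output
$Result = Get-FileCountStatus -FileCount 331147
$Result.Status   # -> Healthy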

Need to disable the Scheduled Task?

# Disable the task
schtasks /change /tn "DiskHealthMonitor" /disable /s ServerName

# Enable the task  
schtasks /change /tn "DiskHealthMonitor" /enable /s ServerName

# Check task status
schtasks /query /tn "DiskHealthMonitor" /s ServerName

Final Thoughts

What started as a "simple" file counting script turned into a complete monitoring solution with:

  • Automated deployment across multiple servers
  • Local execution for performance
  • Professional dashboards for reporting
  • Scheduled automation for ongoing monitoring

The key was recognizing when to stop fighting the problem and start solving it differently. Sometimes the best remote solution is no remote execution at all.

Performance Improvements

From 13+ hours to 3 minutes - a 260x speed improvement.

The working scripts are production-ready and handle edge cases like:

  • Connection failures
  • Missing files
  • Parsing errors
  • Disk health interpretation
  • Professional reporting

Sometimes the journey teaches you more than the destination - but I'd rather have both the knowledge AND the working solution.
