Notice: Due to size constraints and loading performance considerations, scripts referenced in blog posts are not attached directly. To request access, please complete the following form: Script Request Form Note: A Google account is required to access the form.
Disclaimer: I do not accept responsibility for any issues arising from scripts being run without adequate understanding. It is the user's responsibility to review and assess any code before execution. More information

Automating NetScaler Zero-Day CVE Assessment

When CVE-2025-6543 was disclosed as a critical zero-day vulnerability affecting NetScaler appliances, I knew that manual assessment across multiple devices would be both time-consuming and prone to human error.

Note: This post builds on the NCSC-NL check script, available at https://github.com/NCSC-NL/citrix-2025/blob/main/TLPCLEAR_check_script_cve-2025-6543-v1.7.sh

Let’s build a solution that quietly handles vulnerability assessment, log retrieval, and stakeholder notification without any user interaction, then offers either an email or a website to present the data.

The Challenge

The automation needed to:

  • Execute vulnerability assessment scripts on three NetScaler devices
  • Retrieve the generated log files for analysis
  • Notify the security team with professional documentation
  • Do all of this without manual intervention or user prompts

The manual process would have required SSH access to each device, running shell commands, waiting for script completion, and manually copying files back to a central location.

Building the Remote Execution Engine

The first script I developed handles remote command execution and log retrieval. Here's how the core functionality works:

Secure Credential Management

Rather than hardcoding credentials or prompting users, the script uses an encrypted XML approach:

function New-CredentialsFile {
    param([string]$FilePath)
    
    $username = Read-Host "Enter NetScaler username"
    $password = Read-Host "Enter NetScaler password" -AsSecureString
    $encryptedPassword = $password | ConvertFrom-SecureString
    
    $credentialsXml = @"
<?xml version="1.0" encoding="utf-8"?>
<NetScalerCredentials>
    <Username>$username</Username>
    <EncryptedPassword>$encryptedPassword</EncryptedPassword>
</NetScalerCredentials>
"@
    
    $credentialsXml | Out-File -FilePath $FilePath -Encoding UTF8
}

This approach ensures credentials are encrypted using Windows DPAPI and are only accessible to the user account that created them. Once configured, the script runs completely silently.
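The counterpart at run time is a loader that reads the file back and reverses the encryption. A minimal sketch, assuming the XML format above (`Get-CredentialsFromFile` is an illustrative name, not part of the NCSC script):

```powershell
function Get-CredentialsFromFile {
    param([string]$FilePath)

    [xml]$xml = Get-Content -Path $FilePath -Raw
    $username = $xml.NetScalerCredentials.Username

    # ConvertTo-SecureString reverses the DPAPI encryption; this only
    # works for the same user account (and machine) that created the file
    $securePassword = $xml.NetScalerCredentials.EncryptedPassword |
        ConvertTo-SecureString

    $cred = New-Object System.Management.Automation.PSCredential($username, $securePassword)

    # plink's -pw switch needs plain text, so callers would use:
    #   $cred.GetNetworkCredential().Password
    return $cred
}
```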

Overcoming Remote Execution Challenges

The most significant hurdle was getting PowerShell to properly execute commands on NetScaler devices through plink. I encountered several critical issues:

Challenge 1: Batch Mode Authentication Failures

Initial attempts used plink's -batch parameter, which failed with interactive authentication prompts:

FATAL ERROR: Cannot answer interactive prompts in batch mode

Fixing this required replacing batch mode with explicit password authentication: using the -pw parameter and removing the -batch flag entirely.

Challenge 2: Host Key Verification

The script failed when encountering new NetScaler devices:

FATAL ERROR: Cannot confirm a host key in batch mode

Fixing this required the script to perform an initial connection attempt that accepts the host key automatically:

# Accept host key first
$plinkTestArgs = @(
    "-ssh",
    "-l", $Credentials.Username,
    "-pw", $Credentials.Password,
    $IPAddress,
    "exit"
)

try {
    # Pipe "y" so plink's host key prompt is answered automatically
    "y" | & plink @plinkTestArgs 2>$null
} catch {
    # Ignore errors - we just want to accept the host key
}

Challenge 3: Command Sequence Execution

The most frustrating issue was that commands weren't executing properly in the NetScaler shell environment. The script would hang indefinitely when trying to pipe multiple commands.

My initial approach used command piping:

$plinkCommand = @"
shell
cd /tmp
/bin/sh TLPCLEAR_check_script_cve-2025-6543-v1.7.sh
exit
"@

$plinkCommand | & plink @plinkArgs

This approach failed because NetScaler's shell interface doesn't handle piped command sequences the same way as standard Unix shells.

Fixing this required the introduction of a timeout-based job system with pseudo-terminal allocation:

$directArgs = @(
    "-ssh",
    "-t",  # Force pseudo-terminal allocation
    "-l", $Credentials.Username,
    "-pw", $Credentials.Password,
    $IPAddress,
    "shell; cd /tmp; /bin/sh TLPCLEAR_check_script_cve-2025-6543-v1.7.sh; exit"
)

$job = Start-Job -ScriptBlock {
    param($plinkArgs)   # don't name this $args - that shadows PowerShell's automatic variable
    & plink @plinkArgs 2>&1
} -ArgumentList (,$directArgs)   # leading comma passes the array as a single argument

if (Wait-Job $job -Timeout 90) {
    $output = Receive-Job $job
    Remove-Job $job
} else {
    Stop-Job $job
    Remove-Job $job
    Write-Warning "plink timed out after 90 seconds"
}

The -t parameter forces pseudo-terminal allocation, which resolves shell interaction issues, while the job-based approach with timeout prevents indefinite hanging.

Intelligent Log Retrieval

After script execution, the system waits 20 seconds for log file generation (as the CVE assessment script can take up to 15 seconds to complete), then retrieves the files:

# Wait for log file generation
Write-Host "Waiting 20 seconds for log file generation..." -ForegroundColor Gray
Start-Sleep -Seconds 20

# Retrieve log file using pscp
$pscpArgs = @(
    "-l", $Credentials.Username,
    "-pw", $Credentials.Password,
    "$IPAddress`:$remoteLogFile",
    $localLogFile
)

& pscp @pscpArgs 2>$null

The script stores all retrieved logs in a local Netscaler_CVE_Logs directory with timestamps, making them easy to identify and process.
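The timestamped local path can be built like this (a sketch; `$SiteName` and the exact name pattern are assumptions, but the site prefix is what the email script later matches on):

```powershell
# Ensure the local log directory exists
$logDir = Join-Path $PSScriptRoot "Netscaler_CVE_Logs"
if (-not (Test-Path $logDir)) {
    New-Item -ItemType Directory -Path $logDir | Out-Null
}

# e.g. NewYork_2025-07-01_14-30-05.log
$timestamp    = Get-Date -Format "yyyy-MM-dd_HH-mm-ss"
$localLogFile = Join-Path $logDir "$SiteName`_$timestamp.log"
```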

Automated Security Notification System

The second component is the email notification system that automatically alerts the security team with professional documentation and evidence.

Smart File Selection

The email script intelligently selects the most recent log file from each NetScaler device:

function Get-LatestNetScalerLogs {
    param([string]$LogPath)
    
    $logFiles = Get-ChildItem -Path $LogPath -Filter "*.log" | Sort-Object LastWriteTime -Descending
    
    # Get the latest file from each NetScaler
    $newyorkLatest = $logFiles | Where-Object { $_.Name -match "^NewYork_" } | Select-Object -First 1
    $londonLatest  = $logFiles | Where-Object { $_.Name -match "^London_" }  | Select-Object -First 1
    $torontoLatest = $logFiles | Where-Object { $_.Name -match "^Toronto_" } | Select-Object -First 1
    
    $selectedFiles = @()
    if ($newyorkLatest) { $selectedFiles += $newyorkLatest }
    if ($londonLatest)  { $selectedFiles += $londonLatest }
    if ($torontoLatest) { $selectedFiles += $torontoLatest }
    
    return $selectedFiles
}

This ensures that only the most current assessment results are included in the notification, preventing confusion from multiple assessment runs.

Option A: Security Notification Email

The email generates a comprehensive HTML-formatted security alert that includes:

  • Executive Summary: Clear description of the vulnerability and required actions
  • Critical Review Instructions: Specific guidance to focus on non-"Low Compromise" findings
  • Infrastructure Details: Complete list of assessed devices with IP addresses
  • Evidence Attachments: Automatic inclusion of the latest log files from each device

Here's how the email dynamically incorporates the actual log files:

# Add attachments and update email content
foreach ($logFile in $LogFiles) {
    $attachment = New-Object System.Net.Mail.Attachment($logFile.FullName)
    $mail.Attachments.Add($attachment)
    
    $fileSize = [math]::Round($logFile.Length / 1KB, 2)
    $attachmentList += "<li><strong>$($logFile.Name)</strong> ($fileSize KB) - Generated: $(Get-Date $logFile.LastWriteTime -Format 'yyyy-MM-dd HH:mm:ss')</li>"
}

# (?s) makes . match newlines, in case the placeholder list spans multiple lines
$mail.Body = $mail.Body -replace '(?s)<ul id="attachment-list">.*?</ul>', "<ul>$attachmentList</ul>"
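For context, the `$mail` object above is a standard `System.Net.Mail.MailMessage`; the surrounding setup looks roughly like this (server name and addresses are placeholders, not the production values):

```powershell
$mail = New-Object System.Net.Mail.MailMessage
$mail.From = "netscaler-automation@example.com"
$mail.To.Add("security-team@example.com")
$mail.Subject = "CRITICAL: CVE-2025-6543 NetScaler Assessment Results"
$mail.IsBodyHtml = $true
$mail.Body = $htmlTemplate   # HTML template containing <ul id="attachment-list">

# ... attachment loop from above runs here ...

$smtp = New-Object System.Net.Mail.SmtpClient("smtp.example.com", 25)
$smtp.Send($mail)
$mail.Dispose()              # releases the locks on the attached log files
```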

Visual Email

This is an example of the e-mail that is sent, using standard CSS formatting and styling:


Option B: Dynamic Website (generated from the report data)

An inbox full of alerts can hurt compliance and focus, so it is sometimes better to publish the same information to a website that people can simply visit.

A website is also an easier way to consume the information, and it is the option I prefer: once the site is built, the script can fold each new report into its logic so the page stays continually updated, which prevents alert fatigue.
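A minimal sketch of that idea: regenerate a static status page from the latest logs on each run. The paths, output location, and the pass/fail heuristic here are assumptions for illustration, not the production code:

```powershell
# Build one table row per device, linking to the raw log
$rows = foreach ($log in (Get-LatestNetScalerLogs -LogPath ".\Netscaler_CVE_Logs")) {
    $content = Get-Content $log.FullName -Raw
    # Treat anything other than "Low Compromise" as needing review
    $status = if ($content -match "Low Compromise") { "OK" } else { "REVIEW" }
    "<tr><td><a href=`"logs/$($log.Name)`">$($log.Name)</a></td><td>$status</td><td>$($log.LastWriteTime)</td></tr>"
}

# Overwrite the page in place so the site is always current
@"
<html><body>
<h1>CVE-2025-6543 NetScaler Assessment</h1>
<p>Last updated: $(Get-Date -Format 'yyyy-MM-dd HH:mm')</p>
<table border="1">
<tr><th>Log</th><th>Status</th><th>Generated</th></tr>
$($rows -join "`n")
</table>
</body></html>
"@ | Out-File ".\report\index.html" -Encoding UTF8
```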

Visual Website

This is a tour of how the website was presented; it shows the same information as the email but can be updated throughout the day or week without sending additional emails.


If you click on one of the devices, you see the log information right there:


Automated Workflow

When executed, the combined solution:

  1. Loads encrypted credentials from the local XML file (no user prompts)
  2. Connects to each NetScaler device using optimized plink parameters
  3. Executes the CVE assessment script in the proper shell environment
  4. Waits for script completion with appropriate timeouts
  5. Retrieves the generated log files to the local directory
  6. Identifies the latest logs from each device automatically
  7. Generates a professional security alert with complete documentation
  8. Sends the notification email/website with evidence attachments
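Stitched together, the top-level flow looks roughly like this. `Get-LatestNetScalerLogs` is the function shown earlier; the other function names, IP addresses, and file paths are illustrative:

```powershell
# Placeholder device list - IPs are not the real ones
$devices = @(
    @{ Name = "NewYork"; IP = "10.0.1.10" },
    @{ Name = "London";  IP = "10.0.2.10" },
    @{ Name = "Toronto"; IP = "10.0.3.10" }
)

# Illustrative loader for the encrypted XML credentials file
$creds = Get-CredentialsFromFile -FilePath ".\netscaler-creds.xml"

foreach ($device in $devices) {
    # Run the CVE script over plink, then pull the log back with pscp
    Invoke-NetScalerAssessment -IPAddress $device.IP -SiteName $device.Name -Credentials $creds
    Get-NetScalerLog           -IPAddress $device.IP -SiteName $device.Name -Credentials $creds
}

$latestLogs = Get-LatestNetScalerLogs -LogPath ".\Netscaler_CVE_Logs"
Send-SecurityNotification -LogFiles $latestLogs   # or regenerate the website instead
```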

Conclusion

The final scripts demonstrate how PowerShell can effectively orchestrate complex security workflows while maintaining the professional standards required for incident response.

The silent operation model ensures that security assessments can be performed consistently and immediately, without requiring security team availability or manual intervention. In an environment where zero-day vulnerabilities demand rapid response, this level of automation provides a significant advantage in threat mitigation.
