ℹ️ Many blog posts do not include full scripts. If you require a complete version, please use the Support section in the menu.
Disclaimer: I do not accept responsibility for any issues arising from scripts being run without adequate understanding. It is the user's responsibility to review and assess any code before execution.

Cross-Cloud Private DNS Between Azure and AWS

If you have ever been asked, "We have private resources in both Azure and AWS, and we just need them to resolve each other's DNS names," you probably thought it sounded simple enough, right? Wrong.

The confusion usually starts with a reasonable assumption — that somehow the clouds will magically know about each other's private DNS zones. Spoiler alert: they don't, and they're not designed to. Each cloud provider treats their private DNS as exactly that — private to their environment.

After spending way too much time debugging failed DNS queries and dealing with the inevitable "it works from my machine" scenarios, I've boiled the working setup down to the steps below.

What We're Actually Building

Let's be clear about what this solution does:

  • Azure resources can resolve AWS private hosted zone records
  • AWS resources can resolve Azure Private DNS records
  • Each cloud stays authoritative for its own zones
  • Everything flows through managed services (no custom DNS servers to babysit)

The key is Azure DNS Private Resolver and AWS Route 53 Resolver. They're essentially the equivalent services in each cloud that know how to conditionally forward DNS queries to external resolvers.

The Non-Negotiables Before You Start

I learned these the hard way, so you don't have to:

  1. IP connectivity must exist between Azure and AWS - If your packets can't route between the clouds, DNS queries certainly won't work
  2. Resolver subnets need to be routable - Both sides need to see each other's resolver IPs
  3. UDP and TCP port 53 must be allowed in both directions - DNS uses both protocols
  4. No proxy or firewall should be inspecting DNS traffic - This will break everything
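Before touching any DNS configuration, it's worth confirming the first three items with a quick preflight from a VM on either side. This is a sketch with placeholder IPs; note that `/dev/tcp` proves TCP reachability only, so follow up with an actual `nslookup` or `dig` query to exercise UDP as well.

```shell
# Preflight: is the far side's resolver IP reachable on port 53?
# The IPs below are placeholders -- substitute your real resolver endpoint IPs.
check_port53() {
  local ip=$1
  # /dev/tcp proves TCP only; DNS mostly uses UDP, so follow up with
  # nslookup or dig once this passes.
  if timeout 3 bash -c ">/dev/tcp/${ip}/53" 2>/dev/null; then
    echo "REACHABLE ${ip} tcp/53"
  else
    echo "UNREACHABLE ${ip} tcp/53"
  fi
}

check_port53 10.1.0.10   # placeholder: AWS inbound resolver IP
check_port53 10.64.0.4   # placeholder: Azure inbound endpoint IP
```

If either line comes back UNREACHABLE, fix routing and firewall rules first — nothing in the rest of this post will work until packets can flow.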

Testing Your Setup (Before You Build Everything)

Let's define what "working" looks like before you build anything.

Success from Azure looks like:

  • An Azure VM runs nslookup app.aws.bear.local
  • The DNS server shown is 168.63.129.16 (Azure-provided DNS)
  • The resolved IP belongs to the AWS VPC CIDR

Success from AWS looks like:

  • An EC2 instance runs nslookup bear.sql.privatelink.database.windows.net
  • It returns a private IP from Azure
  • No public IP resolution happens

Azure Side: Setting Up the DNS Private Resolver

Step 1: Create the Special Subnets

In the VNet that will host DNS, create two empty subnets and delegate both to Microsoft.Network/dnsResolvers. This is crucial — these subnets are exclusively for the DNS resolver service.

Inbound subnet: For receiving DNS queries from AWS
Outbound subnet: For forwarding queries to AWS

Nobody else gets to live in these subnets. They're VIP-only zones for DNS.

Step 2: Deploy the Private Resolver

In the Azure portal:

  1. Navigate to DNS Private Resolvers
  2. Create a new resolver and attach it to your VNet
  3. Add an inbound endpoint in the inbound subnet
  4. Add an outbound endpoint in the outbound subnet

Write down those inbound endpoint IPs — AWS will need them.

Step 3: Configure Forwarding Rules

Create a DNS forwarding ruleset and add rules for your AWS zones:

Domain: app.aws.bear.local.
Target IPs: [Your AWS inbound resolver IPs]

That trailing dot matters! DNS is picky about fully qualified domain names.

Link this ruleset to all VNets that need to resolve AWS names.
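Steps 2 and 3 can also be scripted. The sketch below uses the `dns-resolver` Azure CLI extension; every name, resource ID, and IP is a placeholder, and the exact flags are worth verifying against the current extension version before running anything.

```shell
az extension add --name dns-resolver

# Create the resolver attached to the hub VNet (VNet ID is a placeholder)
az dns-resolver create --name dnsres-hub --resource-group rg-dns --location uksouth \
  --id "/subscriptions/<sub-id>/resourceGroups/rg-dns/providers/Microsoft.Network/virtualNetworks/vnet-hub"

# Inbound endpoint in the delegated inbound subnet
az dns-resolver inbound-endpoint create --dns-resolver-name dnsres-hub \
  --resource-group rg-dns --name inbound --location uksouth \
  --ip-configurations '[{"id":"<inbound-subnet-id>","private-ip-address":"","private-ip-allocation-method":"Dynamic"}]'

# Outbound endpoint in the delegated outbound subnet
az dns-resolver outbound-endpoint create --dns-resolver-name dnsres-hub \
  --resource-group rg-dns --name outbound --location uksouth \
  --id "<outbound-subnet-id>"

# Ruleset tied to the outbound endpoint, plus the forwarding rule to AWS
az dns-resolver forwarding-ruleset create --name rs-aws --resource-group rg-dns \
  --location uksouth --outbound-endpoints '[{"id":"<outbound-endpoint-id>"}]'

az dns-resolver forwarding-rule create --ruleset-name rs-aws --resource-group rg-dns \
  --name aws-bear-local --domain-name "app.aws.bear.local." \
  --forwarding-rule-state Enabled \
  --target-dns-servers '[{"ip-address":"10.1.0.10","port":53}]'

# Link the ruleset to each VNet that needs AWS name resolution
az dns-resolver vnet-link create --ruleset-name rs-aws --resource-group rg-dns \
  --name link-vnet-hub --id "<vnet-id>"
```

Note the trailing dot in `--domain-name` — the same rule applies whether you configure this in the portal or the CLI.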

AWS Side: Setting Up Route 53 Resolver

Step 4: Create Resolver Subnets

In your VPC:

  • Create at least two subnets in different Availability Zones
  • Keep them small — they're just for resolver endpoints

Step 5: Deploy the Inbound Resolver

In Route 53 → Resolver → Inbound endpoints:

  1. Create a new endpoint in your VPC
  2. Choose 2 to 6 IP addresses from at least two Availability Zones
  3. Configure the security group to allow:
    • UDP port 53 from Azure resolver ranges
    • TCP port 53 from Azure resolver ranges

Note those IPs — Azure needs them for the forwarding rules.
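The security group rules above can be sketched with the AWS CLI; the group ID and Azure CIDR are placeholders, and you need both protocols.

```shell
# Allow DNS over both UDP and TCP from the Azure resolver subnet range
for proto in udp tcp; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol "$proto" --port 53 \
    --cidr 10.64.0.0/24
done
```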

Step 6: Deploy the Outbound Resolver

Still in Route 53 Resolver:

  1. Create an outbound endpoint
  2. Use the same VPC and subnets
  3. Allow outbound DNS in the security group

Step 7: Create Resolver Rules for Azure

In Route 53 → Resolver → Rules:

Domain: sql.privatelink.database.windows.net.
Target IPs: [Your Azure inbound resolver IPs]

Associate these rules with VPCs that need Azure DNS resolution.
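Steps 5 through 7 map onto a handful of AWS CLI calls. A sketch, with all IDs and IPs as placeholders (the `creator-request-id` values are arbitrary idempotency tokens):

```shell
# Step 5 -- inbound endpoint: receives queries forwarded from Azure
aws route53resolver create-resolver-endpoint \
  --name from-azure --direction INBOUND \
  --creator-request-id inbound-2024-01 \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaa111 SubnetId=subnet-bbb222

# Step 6 -- outbound endpoint: sends queries for Azure zones out of the VPC
aws route53resolver create-resolver-endpoint \
  --name to-azure --direction OUTBOUND \
  --creator-request-id outbound-2024-01 \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaa111 SubnetId=subnet-bbb222

# Step 7 -- forward the Azure Private Link zone to the Azure inbound endpoint IPs
aws route53resolver create-resolver-rule \
  --name azure-sql-privatelink --rule-type FORWARD \
  --creator-request-id rule-2024-01 \
  --domain-name sql.privatelink.database.windows.net \
  --resolver-endpoint-id rslvr-out-0123456789abcdef0 \
  --target-ips Ip=10.64.0.4,Port=53

# Associate the rule with every VPC that needs Azure name resolution
aws route53resolver associate-resolver-rule \
  --resolver-rule-id rslvr-rr-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0
```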

How to Know If It's Actually Working

Testing Azure → AWS Resolution

From an Azure VM linked to the forwarding ruleset:

# This should work
nslookup app.aws.bear.local

# Query the Azure inbound endpoint directly to isolate issues
nslookup app.aws.bear.local [Azure inbound endpoint IP]

If the first fails but the second works, check your VNet forwarding ruleset links.

Testing AWS → Azure Resolution

From an EC2 instance in a VPC with the resolver rules:

# Basic test
nslookup bear.sql.privatelink.database.windows.net

# Query the Route 53 inbound endpoint directly to isolate issues
nslookup bear.sql.privatelink.database.windows.net [AWS inbound endpoint IP]

If direct resolver queries work but regular ones don't, verify your VPC rule associations.
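When neither path works, the failure text itself narrows things down. Here's a small helper I use to triage — a sketch, with grep patterns matching typical Linux nslookup output:

```shell
# Classify nslookup output: NXDOMAIN means forwarding worked but the record
# is missing at the target; a timeout means the query never reached a resolver.
classify_dns_failure() {
  local output=$1
  if grep -q "NXDOMAIN\|can't find" <<<"$output"; then
    echo "record-missing"      # forwarding path OK; create the record
  elif grep -q "timed out\|no servers could be reached" <<<"$output"; then
    echo "no-connectivity"     # check routing, port 53, security groups
  elif grep -q "SERVFAIL" <<<"$output"; then
    echo "resolver-error"      # check forwarding rules and target IPs
  else
    echo "resolved-or-unknown"
  fi
}

classify_dns_failure "** server can't find app.aws.bear.local: NXDOMAIN"
# -> record-missing
```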

Understanding Domain-Level Forwarding vs Individual Records

Here's something crucial that trips up a lot of people: DNS forwarding rules work at the domain/zone level, not for individual hostnames. Let me explain what this means in practice; in this example, the direction is Azure to AWS:

The Root Domain Approach

When you create a forwarding rule like this:

Domain: aws.bear.local.
Target IPs: [AWS inbound resolver IPs]

You're telling Azure: "Forward ALL queries for anything under aws.bear.local to AWS."

This means:

  • app.aws.bear.local → Forwarded to AWS
  • db.aws.bear.local → Forwarded to AWS
  • api.aws.bear.local → Forwarded to AWS
  • anything-at-all.aws.bear.local → Forwarded to AWS
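That behaviour is plain suffix matching, which you can sanity-check locally. A bash sketch (`rule_matches` is a hypothetical helper, not part of either cloud's tooling):

```shell
# Does a queried name fall under a forwarding rule's domain?
# Both values are normalised by stripping the trailing dot first.
rule_matches() {
  local query=${1%.} rule=${2%.}
  [[ "$query" == "$rule" || "$query" == *".$rule" ]]
}

rule_matches app.aws.bear.local   aws.bear.local. && echo "forwarded"
rule_matches app.azure.bear.local aws.bear.local. || echo "not forwarded"
```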

The Critical Part: AWS will only resolve these if the records actually exist in the Route 53 private hosted zone. If you query for app.aws.bear.local but haven't created that A record in Route 53, you'll get NXDOMAIN (non-existent domain).

The Maintenance Contract You're Signing

When you forward an entire domain, you're essentially saying "AWS owns everything in this zone." This means:

  1. Every hostname must exist in Route 53 - No record = No resolution
  2. You can't split the zone - Once you forward aws.bear.local., you can't have some records in Azure and some in AWS
  3. Plan for growth - Every new service in that domain needs a Route 53 record

Alternative: Subdomain Forwarding

Instead of forwarding the entire root domain, consider using subdomains:

# More specific forwarding rules
Domain: prod.aws.bear.local.
Target IPs: [AWS prod resolver IPs]

Domain: dev.aws.bear.local.
Target IPs: [AWS dev resolver IPs]

This lets you:

  • Forward only specific environments to AWS
  • Reduce the blast radius of DNS changes

A Real Example That Confuses People

Let's say you set up forwarding for internal.croucher.cloud. to AWS, thinking you'll just add records as needed. Then:

  1. Someone tries to access blog.internal.croucher.cloud
  2. Azure forwards the query to AWS (because of your rule)
  3. AWS doesn't have the record (because nobody created it yet)
  4. Query fails with NXDOMAIN
  5. Everyone blames "DNS is broken" when really it's working exactly as designed

The Fix: Before forwarding an entire domain, audit what should exist in that namespace and ensure all records are created in the target location.

Comparison Table

Feature                       | Azure DNS Private Resolver    | AWS Route 53 Resolver
Resolve inbound private zones | Yes (Azure only)              | Yes (AWS only)
Forward to other resolver     | Yes (Outbound)                | Yes (Outbound)
Accept external DNS queries   | Yes (Inbound)                 | Yes (Inbound)
Cross-cloud resolution        | Only via forwarding endpoints | Only via forwarding endpoints

Common Gotchas That Will Waste Your Afternoon

  1. Forgetting the trailing dot in domain names - croucher.cloud. not croucher.cloud
  2. Not allowing both TCP and UDP - DNS uses both, not just UDP
  3. Assuming it works because one query succeeded - Test from multiple sources
  4. Using the wrong resolver IPs - Inbound IPs receive queries, outbound IPs send them
  5. Network ACLs blocking return traffic - Security groups are stateful, but AWS network ACLs are not; remember the ephemeral port range 1024-65535

Architecture Overview

What you end up with is beautifully simple:

  • Azure queries for AWS domains hit the Azure resolver, get forwarded to AWS inbound endpoints
  • AWS queries for Azure domains hit the AWS resolver, get forwarded to Azure inbound endpoints
  • Each cloud maintains authority over its own zones
  • No custom infrastructure to maintain

Why This Approach Wins

After implementing this in several production environments, here's why it's become my go-to approach:

  1. It's officially supported - Both Azure and AWS document this pattern
  2. Fully managed - No DNS servers to patch or scale
  3. Survives scrutiny - Security teams love managed services over custom VMs
  4. Actually maintainable - New team members understand it quickly

Final Sanity Checks

Before declaring victory:

  • Azure VNets that need AWS DNS are linked to the forwarding ruleset
  • AWS VPCs that need Azure DNS are associated with resolver rules
  • TCP and UDP port 53 allowed bidirectionally
  • Routing exists between resolver subnets
  • You've tested from multiple sources in each cloud

If DNS still isn't working after all this, it's almost always one of three things:

  1. Network connectivity is broken (test with ping/telnet first)
  2. A rule association is missing somewhere
  3. Someone made a typo in a domain name

Conclusion

Cross-cloud DNS doesn't have to be complicated. Both Azure and AWS provide the tools — Azure's DNS Private Resolver is the equivalent of AWS's Route 53 Resolver. Once you understand that each service conditionally forwards queries for the other cloud's domains, the rest falls into place naturally.

Skip the custom DNS servers, forget about duplicating records, and definitely don't expose private names publicly. Use the managed services, follow the pattern above, and get on with building actual business value instead of fighting DNS.

Because let's be honest — nobody ever got promoted for building a really clever DNS forwarding solution. But plenty of people have lost weekends debugging broken ones.

