When you deploy a new pool, the VMs in that pool need access to some URLs and internal IPs for the deployment to complete. First, the VMs are joined to your domain, meaning they need the standard ports open to the domain controllers and DNS servers. Second, an agent is deployed that allows each VM to log in to AAD and, with that login token, register itself with the WVD service. That last part happens over the “public” internet.
In a “normal” deployment, the VMs (and the users logging into them) would have full internet access. Even if you configure a proxy server, nothing stops a user from opening a command prompt or PowerShell and bypassing the proxy. Which brings us to this post, created in cooperation with my colleague Lyes OU-arti. Let’s completely lock down internet access for the VMs, while still allowing them to deploy, by using URL whitelisting on an Azure Firewall.
The Azure Firewall is a standard service available in almost all regions. It’s a full L3 firewall that also adds the ability to whitelist based on URLs. To deploy it, you need a dedicated subnet called AzureFirewallSubnet with at least a /26 address space.
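As a sketch, the dedicated subnet can be created up front with the Azure CLI. The resource group, VNET name and address prefix below are placeholders for your own environment:

```shell
# Create the dedicated firewall subnet; the name AzureFirewallSubnet is mandatory.
# rg-wvd, vnet-wvd and 10.0.1.0/26 are example values, not from the original post.
az network vnet subnet create \
  --resource-group rg-wvd \
  --vnet-name vnet-wvd \
  --name AzureFirewallSubnet \
  --address-prefixes 10.0.1.0/26
```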
Next is the deployment of the FW itself. Through the marketplace, search for firewall and deploy the Azure firewall.
The initial deployment speaks for itself: give it a name, put it in the right region, link it to the right VNET and give it a public IP. That’s all that needs to happen.
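If you prefer scripting over the marketplace, the same deployment can be sketched with the Azure CLI (all names and the region are example values; the firewall commands live in a CLI extension):

```shell
# The Azure Firewall commands require a CLI extension.
az extension add --name azure-firewall

# Create the firewall, a Standard static public IP, and bind them together.
az network firewall create \
  --resource-group rg-wvd \
  --name fw-wvd \
  --location westeurope
az network public-ip create \
  --resource-group rg-wvd \
  --name fw-wvd-pip \
  --sku Standard \
  --allocation-method Static
az network firewall ip-config create \
  --resource-group rg-wvd \
  --firewall-name fw-wvd \
  --name fw-config \
  --public-ip-address fw-wvd-pip \
  --vnet-name vnet-wvd
```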
I will use split routing: all regular traffic goes to on-premises and only internet traffic goes through the Azure Firewall. We will set that up later. For now, let’s create application rules that allow only certain traffic: go to Rules, Application Rules, click + New and add the following rules:
| Target FQDN | Protocol | Purpose |
| --- | --- | --- |
| *.blob.core.windows.net | https | Agent, SXS stack updates and traffic |
| www.msftconnecttest.com | http | OS connection test |
| fs.microsoft.com | https | Others – deployment |
| slscr.update.microsoft.com | https | Others – deployment |
| *.events.data.microsoft.com | https | Others – deployment |
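The same rules can be scripted. A minimal sketch with the Azure CLI, assuming the WVD subnet is 10.0.2.0/24 and using example names for the firewall and rule collection:

```shell
# First rule in a collection sets the collection's priority and action.
az network firewall application-rule create \
  --resource-group rg-wvd \
  --firewall-name fw-wvd \
  --collection-name wvd-deploy \
  --name blob-traffic \
  --priority 100 \
  --action Allow \
  --source-addresses 10.0.2.0/24 \
  --protocols Https=443 \
  --target-fqdns '*.blob.core.windows.net'

# Subsequent rules join the existing collection, so priority/action are omitted.
az network firewall application-rule create \
  --resource-group rg-wvd \
  --firewall-name fw-wvd \
  --collection-name wvd-deploy \
  --name connect-test \
  --source-addresses 10.0.2.0/24 \
  --protocols Http=80 \
  --target-fqdns www.msftconnecttest.com
```

The remaining FQDNs from the table are added the same way, one rule each.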
And also add another rule (with FQDN Tags) for Windows update and monitoring:
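In CLI form, an FQDN-tag rule looks like the sketch below. WindowsUpdate and WindowsDiagnostics are documented tag names; the collection name and priority are example values:

```shell
# FQDN tags cover whole groups of Microsoft endpoints in one rule.
az network firewall application-rule create \
  --resource-group rg-wvd \
  --firewall-name fw-wvd \
  --collection-name wvd-tags \
  --name windows-update \
  --priority 110 \
  --action Allow \
  --source-addresses 10.0.2.0/24 \
  --protocols Https=443 \
  --fqdn-tags WindowsUpdate WindowsDiagnostics
```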
Lastly, the VMs need to be able to connect to a KMS server, which is on TCP:1688 at kms.core.windows.net. Unfortunately, the FQDN-based application rules only support http, https and mssql, so we need to create a network rule allowing TCP:1688 to the KMS server based on its IP address:
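A sketch of that network rule with the Azure CLI. 23.102.135.246 is the well-known public IP behind the global Azure KMS endpoint, but verify it (for example by resolving the KMS hostname) before relying on it:

```shell
# Network rule for KMS activation on TCP 1688; names and priority are examples.
az network firewall network-rule create \
  --resource-group rg-wvd \
  --firewall-name fw-wvd \
  --collection-name wvd-kms \
  --name allow-kms \
  --priority 200 \
  --action Allow \
  --source-addresses 10.0.2.0/24 \
  --destination-addresses 23.102.135.246 \
  --destination-ports 1688 \
  --protocols TCP
```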
Note: yes, I know, the * wildcards open up a lot of endpoints, but bear with me, as we also close that down in another chapter.
Next, create a new route table and enable route propagation from your gateway to ensure the VMs can still talk to on-premises and other services.
Then create a single route in the route table pointing 0.0.0.0/0 to the private IP address of the Azure Firewall (you can find it on the overview page of the firewall).
Finally, link the route table to the subnet of the WVD VMs to make it active.
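The three routing steps above can be sketched as follows. Gateway route propagation is enabled by default on a new route table; the firewall private IP (10.0.1.4 here) and all names are example values, so check your own firewall overview page:

```shell
# Route table (BGP/gateway route propagation is left enabled by default).
az network route-table create \
  --resource-group rg-wvd \
  --name rt-wvd

# Default route sending all internet-bound traffic to the firewall's private IP.
az network route-table route create \
  --resource-group rg-wvd \
  --route-table-name rt-wvd \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the WVD subnet to activate it.
az network vnet subnet update \
  --resource-group rg-wvd \
  --vnet-name vnet-wvd \
  --name wvd-subnet \
  --route-table rt-wvd
```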
Being more Restrictive
I know that when we opened up the firewall for the URLs, we used *.blob, *.servicebus and some other wildcards. In this chapter we restrict these even further. When you deploy a new pool, the pool is actually assigned a storage account and a service bus in the backend. After the deployment we know the more specific URLs, meaning we can restrict traffic to those specific storage accounts rather than using the wildcard.
When a VM boots, the WVD/RDS agent logs all of the URLs it uses to an event log. You can extract them by going to the Application log on one of the VMs and looking for event ID 3701:
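Instead of browsing Event Viewer, you can query the log from a command prompt on one of the VMs. A sketch using the built-in Windows `wevtutil` tool:

```shell
# Query the Application log for event ID 3701 (URLs used by the agent),
# newest first, plain text, limited to the 5 most recent events.
wevtutil qe Application "/q:*[System[(EventID=3701)]]" /f:text /c:5 /rd:true
```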
Note that this only applies to the wvd and blob URLs and does not include the AAD login URLs. But at least you can limit the blob, queue and table URLs:
After restricting the firewall to only these storage account and service bus URLs, the existing pool should continue to function, but new deployments will fail. When deploying a new pool, make sure to open the wildcard URLs again.
Your own proxy
Obviously, if you want to allow your users internet access through an (external) proxy, you should also allow access to the proxy itself, by URL or by IP.
Your WVD VMs also need access to (at least) the domain controllers, both to join them to the domain and to allow users to log in to the VMs. If you have a VPN or ExpressRoute (or the DCs are in another VNET), you can also restrict the traffic from the WVD VMs to the domain controllers (in a particular site, or all DCs). For this we look at the standard workstation/member-server to domain controller traffic. There are no URLs here; you limit traffic based on IP / protocol / port.
The ports below are from the WVD (subnet) to the DCs:
| Protocol | Port | Purpose |
| --- | --- | --- |
| TCP/UDP | 464 | Kerberos password change |
| TCP | 49152-65535 | Random Windows ports |
| TCP/UDP | 636 | LDAPS (if used) |
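Since the split routing above sends on-premises traffic directly (not through the firewall), one way to enforce these ports is a network security group on the WVD subnet. A minimal sketch for the first row of the table, assuming an existing NSG named nsg-wvd and a DC range of 10.10.0.0/27 (both example values):

```shell
# Allow Kerberos password change (TCP/UDP 464) from the WVD subnet to the DCs.
# Protocol '*' covers both TCP and UDP in a single rule.
az network nsg rule create \
  --resource-group rg-wvd \
  --nsg-name nsg-wvd \
  --name allow-kerberos-pw-change \
  --priority 300 \
  --access Allow \
  --direction Outbound \
  --protocol '*' \
  --source-address-prefixes 10.0.2.0/24 \
  --destination-address-prefixes 10.10.0.0/27 \
  --destination-port-ranges 464
```

The other rows of the table translate to additional rules in the same way.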
Obviously, if you want to remotely manage the WVD VMs, you can also allow inbound ports, such as TCP:5986 for WinRM.