The bulk of what you need to know is in Microsoft's online docs here. It's good information, so I won't repeat what's in it; I will, however, cover a few gotchas I came across.
The first thing to know is that when you create the AKS cluster from the portal, it creates two Resource Groups (RGs). One has the AKS resources along with the VNET, and the other has all the nodes, scale sets, load balancers (once they get created), etc. The process also creates a Service Principal (Enterprise Application) that is used when you run kubectl commands and so on. You can find it by looking in the IAM section of the second RG that gets created. The problem is that, by default, it doesn't have any access to the VNET in the main RG. So when you try to apply a deployment that has a LoadBalancer service, it can't bind to the VNET. If you run:
az aks browse --resource-group $resGrp --name $aks
to launch the dashboard, you'll see the error. Go into the IAM section of the VNET and add the Service Principal; I added it as a Contributor.
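You can do the same role assignment from the CLI instead of the portal. This is a sketch, assuming hypothetical resource group, cluster, and VNET names (substitute your own):

```shell
# Hypothetical names -- substitute your own resource group, cluster, and VNET.
RES_GRP=myResourceGroup
AKS=myAksCluster
VNET=myVnet

# Look up the client ID of the Service Principal that AKS created.
SP_ID=$(az aks show --resource-group $RES_GRP --name $AKS \
  --query servicePrincipalProfile.clientId --output tsv)

# Get the full resource ID of the VNET so the role assignment can be scoped to it.
VNET_ID=$(az network vnet show --resource-group $RES_GRP --name $VNET \
  --query id --output tsv)

# Grant the Service Principal Contributor on the VNET (same as the portal steps above).
az role assignment create --assignee $SP_ID --role Contributor --scope $VNET_ID
```

Scoping the assignment to the VNET rather than the whole RG keeps the Service Principal's permissions as narrow as possible.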
One of the reasons to use advanced networking is so you can peer the AKS VNET with other networks, including one that has a Site-to-Site VPN connection to your on-prem site. The thing it took me a while to find in the docs is that you have to create the peering on both the Kubernetes VNET and the VNET with the VPN connection.
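The two-way peering can be set up from the CLI. A sketch, assuming hypothetical names for the two VNETs and their resource groups:

```shell
# Hypothetical names -- adjust the resource groups and VNET names for your setup.
AKS_RG=myAksResourceGroup
AKS_VNET=aksVnet
VPN_RG=vpnResourceGroup
VPN_VNET=vpnVnet

# Resource IDs are needed because the VNETs live in different resource groups.
AKS_VNET_ID=$(az network vnet show -g $AKS_RG -n $AKS_VNET --query id -o tsv)
VPN_VNET_ID=$(az network vnet show -g $VPN_RG -n $VPN_VNET --query id -o tsv)

# Peering from the AKS VNET to the VPN VNET...
az network vnet peering create --name aks-to-vpn \
  --resource-group $AKS_RG --vnet-name $AKS_VNET \
  --remote-vnet $VPN_VNET_ID --allow-vnet-access

# ...and the reverse peering from the VPN VNET back to the AKS VNET.
# Without this second one, traffic only flows one way and the peering never connects.
az network vnet peering create --name vpn-to-aks \
  --resource-group $VPN_RG --vnet-name $VPN_VNET \
  --remote-vnet $AKS_VNET_ID --allow-vnet-access
```

If you want the AKS side to reach on-prem through the other VNET's VPN gateway, you would additionally set `--allow-gateway-transit` on the VPN-side peering and `--use-remote-gateways` on the AKS-side peering.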
So this brings us to one more networking quirk. The Kubernetes Service IP range and the Docker bridge IPs don't show up anywhere in Azure; squirreled away somewhere in Kubernetes land, I guess. But that means the VNET with the VPN connection doesn't know where those addresses are, and those are the ones that matter, at least the Kubernetes Service IPs. (Still working on the routing situation for those.)
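Those invisible ranges are parameters you set (or take as defaults) when the cluster is created; they exist only in the Kubernetes configuration, not as Azure VNET resources, which is why Azure has nothing to route for them. A sketch of where they come from with advanced (Azure CNI) networking, using the default values from Microsoft's docs and a hypothetical subnet ID:

```shell
# Hypothetical names; the CIDR values shown are the documented defaults.
# $SUBNET_ID is the resource ID of the VNET subnet the AKS nodes go into.
az aks create --resource-group myResourceGroup --name myAksCluster \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --docker-bridge-address 172.17.0.1/16 \
  --generate-ssh-keys
```

The `--service-cidr` and `--docker-bridge-address` ranges must not overlap with the VNET, the peered VNETs, or your on-prem address space, precisely because Azure can't see them and can't warn you about a collision.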