Custom vNet on Kubernetes on Azure with acs-engine

Categories: Kubernetes

These are a few quirks you might encounter when you deploy Kubernetes with acs-engine onto the Azure cloud into a pre-defined custom vNet. Deploying into a custom vNet is a common scenario: you usually want to run Kubernetes alongside other services in your Azure cloud, such as legacy applications. This article provides some guidance on how to avoid the issues I experienced during acs-engine deployments.

First thing - necessary parameters for Kubernetes deployment

According to the official acs-engine documentation, the only parameter you need to deploy into a custom vNet is firstConsecutiveStaticIP. With Kubernetes, however, you also need to specify vnetCidr. If you forget to do that, your cluster will fail to start. This is because of a script on the master machine that sets up iptables. This script runs before any kubelet is started, and it requires the vnetCidr parameter. Unfortunately, acs-engine does not validate whether this parameter is provided.
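In practice, this means your masterProfile should contain both parameters, along the lines of the following fragment (the IP and CIDR values are just illustrative, taken from the full template at the end of this post):

```
"masterProfile": {
  "firstConsecutiveStaticIP": "172.20.15.170",
  "vnetCidr": "172.20.0.0/20"
}
```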

Second thing - Azure CNI plugin needs different configuration than the one in the acs-engine docs

The custom vNet example in the acs-engine documentation uses Swarm as the orchestrator. According to that documentation, with Kubernetes clusters there is additionally a route table that needs to be updated. You can find more info here.

Unfortunately, the documentation does not clarify that the route table only needs to be updated when you are using KubeNet as the network solution (option "networkPolicy": "none"). When you are using the Azure CNI plugin (option "networkPolicy": "azure"), you don't actually need to update the route table.
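In other words, a kubernetesConfig like this minimal sketch selects the Azure CNI plugin, in which case the route table can be left alone; only "networkPolicy": "none" (KubeNet) requires attaching the route table to your subnets:

```
"kubernetesConfig": {
  "networkPolicy": "azure"
}
```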

Third thing - Custom vNet on Azure requires RBAC enabled

When deploying a Kubernetes cluster using acs-engine, after having worked through all the problems in the previous paragraphs, you will discover that the kubeDNS service fails to start. It turns out that it does not have permissions to communicate with kubelet. This is because, by default, with the option "enableRbac": false, acs-engine deploys Kubernetes with Node Authorization Mode, which prevents kubeDNS from reaching kubelet or any of the Kubernetes control plane components. After turning on RBAC ("enableRbac": true), everything finally works as expected. You can read more about Node Authorization Mode here.
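The fix is a one-line change in kubernetesConfig:

```
"kubernetesConfig": {
  "enableRbac": true
}
```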

BONUS - Full kubernetes.json acs-engine config

The example kubernetes.json below should work with the custom vNet you deploy, provided that the subnet names and IP ranges match. I have also customized the template a little bit further. These are the customizations that I recommend:

Increase OS disk size

Increase the default OS disk size so that Docker has enough space to store pulled images. The Kubernetes image garbage collection mechanism might be too slow to kick in on smaller disks.

Change network policy to Azure

The azure network policy lets you use Azure-native networking to assign IP addresses to your pods. It is an easy and convenient way to deploy a Kubernetes network plugin, and I definitely recommend it for beginners as it is frictionless most of the time.

Set up a custom, well-known DNS prefix

The masterProfile dnsPrefix must be unique per cluster, as it is used to create DNS entries for the master nodes. I therefore recommend coming up with a naming convention so that you can track clusters by their DNS names.

Disable the tiller and kubernetes-dashboard addons

Both of these addons are enabled by default, but you cannot change or upgrade them: any change you try to make will be overridden by the default setup. The only way to modify them is to actually ssh into the master machines and edit the manifest files themselves. Therefore, I recommend disabling them and deploying them manually instead. This gives you more control over those components, as well as the possibility to upgrade them in the future.
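Disabling them boils down to an addons list in kubernetesConfig like this:

```
"addons": [
  {
    "name": "tiller",
    "enabled": false
  },
  {
    "name": "kubernetes-dashboard",
    "enabled": false
  }
]
```

You can then deploy tiller and the dashboard yourself with whatever configuration you need, and upgrade them independently of acs-engine.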

```
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorVersion": "1.9.1",
      "kubernetesConfig": {
        "networkPolicy": "azure",
        "enableRbac": true,
        "maxPods": 80,
        "addons": [
          {
            "name": "tiller",
            "enabled": false
          },
          {
            "name": "kubernetes-dashboard",
            "enabled": false
          }
        ]
      }
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "[YOUR DNS PREFIX HERE]",
      "vmSize": "Standard_D2_v2",
      "osDiskSizeGB": 200,
      "vnetSubnetId": "/subscriptions/[SUBSCRIPTION ID]/resourceGroups/[RESOURCE GROUP NAME]/providers/Microsoft.Network/virtualNetworks/ExampleCustomVNET/subnets/ExampleMasterSubnet",
      "firstConsecutiveStaticIP": "172.20.15.170",
      "vnetCidr": "172.20.0.0/20"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet",
        "osDiskSizeGB": 200,
        "vnetSubnetId": "/subscriptions/[SUBSCRIPTION ID]/resourceGroups/[RESOURCE GROUP NAME]/providers/Microsoft.Network/virtualNetworks/ExampleCustomVNET/subnets/ExampleAgentSubnet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "[YOUR PUBLIC SSH KEY]"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "[YOUR SERVICE PRINCIPAL ID]",
      "secret": "[YOUR SERVICE PRINCIPAL PASSWORD]"
    }
  }
}
```
