Hello Zach,
Thank you for providing the details about the AKS cluster creation issue.
The behavior you’re seeing is due to stricter validation in newer Azure CLI versions (for example, az 2.84 with the aks-preview extension). When you specify a custom --pod-cidr such as 100.64.0.0/10, the CLI now requires you to explicitly define both the network plugin and the plugin mode.
In your case, using --network-plugin kubenet will not work because kubenet has a scaling limit of 400 nodes when cluster autoscaler is enabled. Since your configuration allows scaling up to 1000 nodes, this results in the error you observed.
To support this scale, you should use Azure CNI with overlay mode. This setup allows pod IPs to be allocated from the specified pod CIDR, independent of the VNet subnet, which enables larger cluster sizes without hitting kubenet limitations.
To resolve the issue, please update your command to include the following parameters:
az aks create \
  --name myname \
  --location mylocation \
  --resource-group myresourcegroup \
  --nodepool-name default \
  --node-vm-size Standard_B2s \
  --node-count 1 \
  --enable-managed-identity \
  --tier standard \
  --outbound-type managedNATGateway \
  --nat-gateway-managed-outbound-ip-count 4 \
  --nat-gateway-idle-timeout 10 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 1000 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 100.64.0.0/10
Also, please make sure that:
- Your Azure CLI version is 2.48 or later (required for --network-plugin-mode)
- The pod CIDR range does not overlap with any existing VNet or subnet
- The aks-preview extension is up to date, if you are using preview features
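As a quick sanity check for the overlap requirement, here is a small Python sketch using the standard ipaddress module. The VNet range 10.0.0.0/16 below is just an illustrative placeholder; substitute your VNet's actual address space:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if the two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Proposed pod CIDR vs. a hypothetical VNet address space
pod_cidr = "100.64.0.0/10"
vnet_cidr = "10.0.0.0/16"  # replace with your VNet's real range
print(cidrs_overlap(pod_cidr, vnet_cidr))  # False -> no overlap, safe to use
```

If this prints True for any of your VNet or subnet ranges, pick a different pod CIDR before running az aks create.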
Hope this helps! Please let me know if you have any queries.