
Azure Container Instances (ACI) across 3 regions in under 30 seconds with Azure Traffic Manager

Azure Container Instances (ACI) is a serverless container platform that enables anyone to run one or more containers in Azure in seconds with a single CLI command or API call and be billed per second. Other features include the option to choose a restart policy that can ensure containers are automatically deleted upon completion (alternatively, you can delete them manually via the CLI, API or Portal) and a free subdomain for your instance under the azurecontainer.io domain.

ACI is typically a lower-level building block, often used with higher-level platforms such as Kubernetes (via virtual-kubelet) or invoked from serverless event-driven workflows such as Azure Logic Apps.

The recent launch of DNS name labels for Azure Container Instances enables us to pair ACI with Azure Traffic Manager to distribute traffic across multiple instances and multiple regions using global DNS load balancing, with built-in endpoint health checks and a range of traffic-routing methods.

In this example we will deploy a single container across 3 Azure Regions (East US, West US and West Europe) using bash and the CLI. You will need an Azure Account and our cross-platform Azure CLI, which can be run in Azure Cloud Shell, in a Docker Container, or on Linux, Mac, or Windows, where bash is also available via the Windows Subsystem for Linux.

You can copy/paste the snippet below or run it as a script. It will:

  1. Set bash variables we will re-use throughout. CONTAINER_NAME and LOCATION allow us to test a single container outside of the main loop. DNS_SUFFIX is a random alphanumeric string that ensures a globally unique DNS name for the Azure Traffic Manager Profile and for each of our Azure Container Instances.
  2. Create an Azure Resource Group and an Azure Traffic Manager Profile.
  3. Loop through an array of Container Instance names and regions.
  4. Delete any existing Container Instances (in case we are re-deploying in quick succession).
  5. Create Container Instances, using the public hashicorp/http-echo image from Docker Hub.
  6. Get the FQDN (a subdomain of azurecontainer.io) of each Container Instance and create an endpoint on our Azure Traffic Manager Profile.
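Steps 1 and 3 lean on two bits of bash worth calling out: a /dev/urandom pipeline for the random suffix, and parameter expansion to split each name::region pair. A standalone sketch of both, with no Azure calls involved:

```shell
# Random 5-character lowercase-alphanumeric suffix, as used for DNS_SUFFIX.
DNS_SUFFIX=$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 5)

# Splitting 'name::region': %%::* removes the longest match of '::*' from
# the end (keeping the name); ##*:: removes the longest match of '*::' from
# the front (keeping the region).
pair='container-2::westus'
CONTAINER_NAME="${pair%%::*}"   # container-2
LOCATION="${pair##*::}"         # westus
echo "$CONTAINER_NAME will deploy to $LOCATION"
```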
RESOURCE_GROUP='180300-aci'
PROFILE_NAME='180300-traffic-manager'
CONTAINER_NAME='container-1'
DNS_SUFFIX=$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 5 ; echo)
DNS_NAME='180300-traffic-manager-'$DNS_SUFFIX
LOCATION='eastus'

az group create -n $RESOURCE_GROUP -l $LOCATION

az network traffic-manager profile create --name $PROFILE_NAME \
    --resource-group $RESOURCE_GROUP \
    --routing-method Weighted \
    --unique-dns-name $DNS_NAME \
    --monitor-path / \
    --monitor-port 80 \
    --monitor-protocol HTTP \
    --status Enabled \
    --ttl 10 

array=(
    'container-1::eastus'
    'container-2::westus'
    'container-3::westeurope'
)

for index in "${array[@]}" ; do
    CONTAINER_NAME="${index%%::*}"
    LOCATION="${index##*::}"

    az container delete -y -g $RESOURCE_GROUP -n $CONTAINER_NAME

    az container create -g $RESOURCE_GROUP -n $CONTAINER_NAME -l $LOCATION \
        --cpu 1 \
        --memory 1 \
        --ip-address public \
        --image hashicorp/http-echo \
        --command-line '/http-echo -text "'$CONTAINER_NAME'" -listen :80' \
        --dns-name-label $CONTAINER_NAME'-'$DNS_SUFFIX

    CONTAINER_FQDN=$(az container show -g $RESOURCE_GROUP -n $CONTAINER_NAME | jq -r .ipAddress.fqdn)

    az network traffic-manager endpoint create -g $RESOURCE_GROUP -n $CONTAINER_NAME \
        --profile-name $PROFILE_NAME \
        --type externalEndpoints \
        --weight 1 \
        --target $CONTAINER_FQDN

done
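Before testing, it can help to sanity-check the hostnames the loop should have produced. This pure-bash sketch rebuilds them locally, with no Azure calls; it assumes the usual label.region.azurecontainer.io pattern for ACI DNS name labels, and uses a hypothetical stand-in suffix:

```shell
DNS_SUFFIX='abc12'        # stand-in for the random suffix generated earlier
CONTAINER_NAME='container-1'
LOCATION='eastus'

# ACI publishes each instance at <dns-name-label>.<region>.azurecontainer.io
CONTAINER_FQDN="$CONTAINER_NAME-$DNS_SUFFIX.$LOCATION.azurecontainer.io"
echo "$CONTAINER_FQDN"

# The Traffic Manager profile answers at <unique-dns-name>.trafficmanager.net
echo "http://180300-traffic-manager-$DNS_SUFFIX.trafficmanager.net/"
```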

Now let’s try it out. The first option is to use curl, but remember that our operating system caches DNS lookups, so it will take a little longer for requests from the same machine to be distributed across the instances.

# curl
echo 'http://'$DNS_NAME'.trafficmanager.net/'
while true; do
    curl --connect-timeout 1 'http://'$DNS_NAME'.trafficmanager.net/'
    sleep 1
done

We can also bypass the operating system cache by performing a DNS lookup for each request ourselves using dig. This returns the hostname of the specific container instance which we then use for the request.

# dig + curl
# name server sourced via: dig +short trafficmanager.net NS
while true; do
    CONTAINER_HOST=$(dig +short @tm1.msft.net $DNS_NAME'.trafficmanager.net' | sed -e 's/\.$//')
    curl --connect-timeout 1 'http://'$CONTAINER_HOST'/'
    sleep 1
done
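One detail worth noting: dig +short returns fully-qualified answers with a trailing dot, which the sed expression strips before the name is handed to curl. A quick standalone check, using a hypothetical answer string:

```shell
# dig +short appends a trailing dot to fully-qualified answers; strip it
# before building the request URL (hypothetical answer shown).
answer='container-1-abc12.eastus.azurecontainer.io.'
CONTAINER_HOST=$(echo "$answer" | sed -e 's/\.$//')
echo "$CONTAINER_HOST"
```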

Each http-echo container responds with the text we passed it (its container name), so you should see requests being distributed across the instances you have deployed.

Questions or feedback are more than welcome and my DMs (@as_w) are always open! We’ll explore different options for deploying Azure resources like this in a follow-up post!

© Aaron|MSFT ~ "My Software Fixes Things" ~ @as_w ~ aka.ms/aaronw