In part 1, I introduced Docker swarm mode, explained how to create a local swarm cluster and showed it in action. This should have given you enough to get started, but typically when you want to get more serious about running containers in a development or production environment, you can’t have things running locally. That’s where Azure comes into play.
The two main options for deploying containers in Azure are Azure Container Service (ACS) and Docker for Azure. Both offerings make it easier to create, configure and manage Virtual Machines (VMs) that are preconfigured to run containers.
If you’re interested in using something besides Docker for container orchestration, ACS will also allow you to use Marathon on the Distributed Cloud Operating System (DC/OS) or Kubernetes. For this article, I’ve chosen Docker for Azure, simply because ACS doesn’t support Docker swarm mode (as of this writing) and Docker for Azure is a bit more intuitive to use.
Prerequisites
- Access to an Azure account with admin privileges
- An SSH public/private key pair to install on the Azure VMs to gain access. It’s fairly simple to create one on Linux or Windows.
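For example, on Linux, macOS, or recent Windows builds with OpenSSH installed, you can generate a key pair with ssh-keygen (the file path and comment below are just illustrative):

```shell
# Generate a 4096-bit RSA key pair with no passphrase
# (illustrative file name and comment).
ssh-keygen -t rsa -b 4096 -N "" -C "docker-for-azure" -f ./docker_azure_key

# The public key (./docker_azure_key.pub) is what you supply to the
# Azure VMs; keep the private key (./docker_azure_key) safe.
ls docker_azure_key docker_azure_key.pub
```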
Setting up Docker for Azure
- Creating an AD Service Principal (SP)
The service principal is required to make Azure API calls when scaling nodes up or down, or when deploying apps on your swarm cluster that require Azure Load Balancer configuration.
- Get the docker4x/create-sp-azure container. This container just runs a helper script to create the SP. Download and run it with the following commands at either a command prompt or in PowerShell:
> docker pull docker4x/create-sp-azure:latest
> docker run -ti docker4x/create-sp-azure [sp-name] [rg-name] [rg-region]
- Replace sp-name with any name you want. The name itself isn’t important, but choose something you’ll recognize in the Azure portal.
- Replace rg-name with the name of the Azure resource group you want to create. If you have an existing resource group you would like to use, enter that name instead.
- Replace rg-region with the name of the Azure region you want to deploy to (e.g. eastus)
- Connecting to Your Manager Node
- You can connect to the manager node by navigating to the resource group you specified during deployment and opening externalSSHLoadBalancer. The Overview section shows the public IP address you need.
- Take your SSH private key (the counterpart of the public key you generated earlier) and make sure it’s properly loaded before trying to SSH in. On Windows, for example, you can specify it in PuTTY or load it into Pageant.
- The host address is docker@<external-ssh-lb-public-ip>
- The SSH port is 50000. By default, the inbound NAT rules of the external SSH load balancer map port 50000 to 22.
- When connecting, allow agent forwarding. This needs to be enabled if you want to SSH into a worker node, so that your SSH private key is passed through. If you are using PuTTY, make sure your private key is loaded in Pageant as well.
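If you connect with OpenSSH rather than PuTTY, the settings above can be captured in an ~/.ssh/config entry. This is just a sketch: the host alias and key path are made up, and you still need to fill in the load balancer’s public IP.

```
Host docker-azure-manager
    HostName <external-ssh-lb-public-ip>
    User docker
    Port 50000
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa
```

With this in place, `ssh docker-azure-manager` connects to the manager with agent forwarding enabled.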
- Connecting to Your Worker Node(s) (optional)
- All of your service/task deployments are done from the manager node. However, if you want to SSH into a worker node, you must do so through the manager. For security reasons, the Docker for Azure deployment blocks incoming external connections to worker nodes out of the box.
- Once connected to the manager node, run the following commands:
> cat /etc/resolv.conf
> docker node ls
> ssh <node-hostname>.<internal-domain-name>
- The first command gives you the internal domain all of the nodes run on (look at the search entry).
- The second command lists the hostnames of all nodes in the swarm cluster.
- The third command SSHes into the desired node.
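Putting the last two pieces together: the worker’s SSH address is its hostname joined to the internal search domain with a dot. The domain and hostname below are made up purely for illustration:

```shell
# Hypothetical values: a search domain as reported by /etc/resolv.conf
# and a worker hostname as listed by `docker node ls`.
INTERNAL_DOMAIN="pd2ahnqu.ix.internal.cloudapp.net"
WORKER_HOSTNAME="swarm-worker000000"

# The SSH target is <node-hostname>.<internal-domain-name>:
echo "ssh ${WORKER_HOSTNAME}.${INTERNAL_DOMAIN}"
# → ssh swarm-worker000000.pd2ahnqu.ix.internal.cloudapp.net
```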
So now you have a Docker swarm cluster up and running in Azure. From here, feel free to deploy some services and test things out. Don’t forget to delete everything when you’re done. Otherwise, you’ll keep getting charged. The easiest way to do that is to delete the resource group.