Ready to set up Docker Swarm now that we have a key/value store (Consul)
- A Swarm “cluster” consists of:
  - Swarm Master - the “master node” that acts as the Swarm manager
    - Responsible for the entire cluster and manages all the nodes
  - An arbitrary number of “ordinary nodes”
- Good article at:
  https://blog.nimbleci.com/2016/08/17/how-to-set-up-and-deploy-to-a-1000-node-docker-swarm/
Create Docker Swarm “Master”
- Only ONE SWARM MASTER
$ docker-machine create -d digitalocean --swarm \
    --swarm-master --swarm-discovery="consul://${KV_IP}:8500" \
    --engine-opt="cluster-store=consul://${KV_IP}:8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    master
>>> Output >>>
Configuring swarm...
Checking connection to Docker...
Docker is up and running! → Tells us Docker Swarm is running
Just a simple “docker-machine create” command w/lots of flags
- -d → sets the driver (digitalocean, amazonec2, virtualbox for local, …)
- --swarm → configures this docker-machine to join a Swarm
- --swarm-master → identifies this node (docker-machine) as the Swarm master
- --swarm-discovery → sets discovery (aka the key/value store)
  Gives instances of this Swarm the ability to find & communicate w/each other
- --engine-opt → lets us set Docker daemon (server) flags for this created docker-machine (called “master”)
--cluster-store flag
→ which key/value store to use for cluster coordination
--cluster-advertise flag
→ the address the master “advertises” to the cluster as connectable; this is what gets registered in Consul
If you run $ docker-machine ls, you'll see all the DigitalOcean docker-machines (nodes) listen on port 2376
- master → name of this newly created docker-machine
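The flags above can be wrapped in a small reusable script. A minimal sketch, assuming KV_IP points at the Consul host; the DRY_RUN guard and the 127.0.0.1 fallback are my additions, not from the course:

```shell
#!/bin/sh
# Sketch: create the Swarm master with one script.
# KV_IP should be the Consul host's IP; DRY_RUN=1 (default) only prints the command.
KV_IP="${KV_IP:-127.0.0.1}"
DRY_RUN="${DRY_RUN:-1}"

cmd="docker-machine create -d digitalocean --swarm \
  --swarm-master --swarm-discovery=consul://${KV_IP}:8500 \
  --engine-opt=cluster-store=consul://${KV_IP}:8500 \
  --engine-opt=cluster-advertise=eth1:2376 \
  master"

if [ "$DRY_RUN" = "1" ]; then
  echo "$cmd"    # inspect the command before running it for real
else
  eval "$cmd"    # actually create the droplet and configure Swarm
fi
```

Run with DRY_RUN=0 to actually provision the droplet.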
Creating Docker Swarm “Slaves”
- Can have as many slaves as you want
- Command to create slave node is very similar to master node
- Once again, simply creating a docker-machine
- (but no --swarm-master)
$ docker-machine create \
-d digitalocean \
--swarm \
--swarm-discovery="consul://${KV_IP}:8500" \
--engine-opt="cluster-store=consul://${KV_IP}:8500" \
--engine-opt="cluster-advertise=eth1:2376" \
slave
>>> Output >>>
(slave) Creating SSH key...
(slave) Creating Digital Ocean droplet...
(slave) Waiting for IP address to be assigned to the Droplet...
Docker is up and running!
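Since the slave command only differs by machine name, spinning up several slaves can be sketched as a loop. The slave1..slave3 names and DRY_RUN guard are illustrative, not from the course:

```shell
#!/bin/sh
# Sketch: create several identical slave nodes in a loop.
KV_IP="${KV_IP:-127.0.0.1}"
DRY_RUN="${DRY_RUN:-1}"

for name in slave1 slave2 slave3; do
  # same flags as the master, minus --swarm-master
  cmd="docker-machine create -d digitalocean --swarm \
    --swarm-discovery=consul://${KV_IP}:8500 \
    --engine-opt=cluster-store=consul://${KV_IP}:8500 \
    --engine-opt=cluster-advertise=eth1:2376 \
    ${name}"
  if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
done
```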
Connect the docker client to the swarm // note the --swarm flag
- $ eval $(docker-machine env --swarm master)
- $ docker-machine ls // note: ACTIVE column now shows “* (swarm)” instead of just “*”
- $ docker info
- 3 containers but only 2 nodes. Why?
$ docker ps -a // to find out more
slave/swarm-agent → running on the slave host
master/swarm-agent → running on the master host
master/swarm-agent-master → running on the master host
The master host is a slave node as well as a master node
- As master → decides which host runs each container
- As slave → can run containers on itself
- Also note the “strategy” is spread
  The master schedules containers on the node with the fewest containers (least loaded)
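The “spread” idea can be sketched in a few lines of shell: given per-node container counts, pick the least-loaded node. This is a toy illustration only; the real scheduler lives in the Swarm master:

```shell
#!/bin/sh
# Toy model of the "spread" scheduling strategy.
# Each argument is a "node-name container-count" pair;
# print the name of the node with the fewest containers.
pick_spread_node() {
  printf '%s\n' "$@" | sort -k2 -n | head -n 1 | cut -d' ' -f1
}

pick_spread_node "master 2" "slave 1"   # → slave
```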
- Changes to dockerapp's prod.yml file to run the app container in Swarm
  - Add a networks section to prod.yml
    - b/c with default Docker networking, multiple containers can only communicate intra-host (all containers on the same host/docker-machine). Adding networks: to the YAML file allows inter-host communication.
  - Add an environment constraint so that dockerapp always runs on the master node (otherwise Swarm assigns nodes based on load)
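Compose creates the overlay network for you, but the same network can also be created by hand once the client is pointed at the swarm. A sketch, assuming the mynet name from these notes; the DRY_RUN echo is my addition:

```shell
#!/bin/sh
# Sketch: create the multi-host overlay network manually.
# Assumes the client is connected to the swarm:
#   eval $(docker-machine env --swarm master)
DRY_RUN="${DRY_RUN:-1}"
cmd="docker network create --driver overlay mynet"
if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
```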
version: '2'
services:
  dockerapp:
    extends:
      file: common.yml
      service: dockerapp
    image: jleetutorial/dockerapp
    environment: # ensures dockerapp always runs on the master node
      - constraint:node==master
    depends_on: # ensures the redis container is up & running first,
      - redis   # b/c dockerapp needs redis
    networks:
      - mynet # allows inter-host container communication
  redis:
    extends:
      file: common.yml
      service: redis
    networks: # allows inter-host container communication
      - mynet
networks:
  mynet: # the overlay driver supports multi-host networking
    driver: overlay # natively (works “out of the box”)
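With prod.yml in place, the stack is brought up against the swarm with docker-compose. A minimal sketch; the prod.yml filename comes from these notes, and the DRY_RUN echo is my addition:

```shell
#!/bin/sh
# Sketch: deploy the app to the swarm via docker-compose.
# Assumes the client is connected to the swarm:
#   eval $(docker-machine env --swarm master)
DRY_RUN="${DRY_RUN:-1}"
cmd="docker-compose -f prod.yml up -d"
if [ "$DRY_RUN" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
```

Afterwards `docker ps` (still pointed at the swarm) shows which node each container landed on.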