Ready to set up Docker Swarm now that we have a key/value store (Consul)

      1. Swarm “Cluster” consists of:
        1. Swarm Master - The “master node” that acts as the Swarm manager
          1. Responsible for the entire cluster and manages all the nodes
        2. An arbitrary number of “ordinary nodes”
        3. Good article at

https://blog.nimbleci.com/2016/08/17/how-to-set-up-and-deploy-to-a-1000-node-docker-swarm/

Create Docker Swarm “Master”

        1. Only ONE SWARM MASTER
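The command below references ${KV_IP}, the IP of the Consul host set up earlier. A minimal sketch of capturing it into the shell (the machine name “keystore” is an assumption; use whatever name you gave your Consul docker-machine):

$ export KV_IP=$(docker-machine ip keystore)   # “keystore” is a placeholder name
$ echo ${KV_IP}                                # sanity check: should print the Consul host’s IP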

$ docker-machine create -d digitalocean --swarm \
    --swarm-master \
    --swarm-discovery="consul://${KV_IP}:8500" \
    --engine-opt="cluster-store=consul://${KV_IP}:8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    master

>>> Output >>>

Configuring swarm...

Checking connection to Docker...

Docker is up and running! → Tells us Docker Swarm is running

Just a simple “docker-machine create” command w/lots of flags

          1. -d → sets the driver (e.g. digitalocean, amazonec2, or virtualbox for local)
          2. --swarm → configures this docker-machine to join a Swarm
          3. --swarm-master → identifies this node (docker-machine) as the Swarm Master
          4. --swarm-discovery → sets the discovery backend (the key/value store)

Gives members of this Swarm the ability to find & communicate w/each other

          5. --engine-opt → allows us to set Docker daemon (server) flags for this created docker-machine (named “master”)

--cluster-store flag

→ which key/value store to use for cluster coordination

--cluster-advertise flag

→ the address:port this node “advertises” to the cluster as reachable; this is what gets registered in Consul

If you run $ docker-machine ls, you’ll see all the DigitalOcean docker-machines (nodes) are on port 2376

          6. master → name of this newly created docker-machine
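A quick sanity check, as a sketch: point the Docker client at the master’s engine and confirm the engine options and Consul registration took effect (the grep pattern is approximate, and the exact keys written into Consul vary by version):

$ eval $(docker-machine env master)            # talk to the master’s engine directly (not the swarm yet)
$ docker info | grep -i cluster                # should show the consul cluster-store and the eth1 cluster-advertise address
$ curl "http://${KV_IP}:8500/v1/kv/?recurse"   # dumps what was registered in Consul’s key/value store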

Creating Docker Swarm “Slaves”

        1. Can have as many slaves as you want (a loop sketch for adding more follows the output below)
        2. Command to create a slave node is very similar to the master node
          1. Once again, simply creating a docker-machine
          2. (but no --swarm-master flag)

$ docker-machine create -d digitalocean \
    --swarm \
    --swarm-discovery="consul://${KV_IP}:8500" \
    --engine-opt="cluster-store=consul://${KV_IP}:8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    slave

>>> Output >>>

(slave) Creating SSH key...

(slave) Creating Digital Ocean droplet...

(slave) Waiting for IP address to be assigned to the Droplet...

Docker is up and running!
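To add more slaves, repeat the same command with a different machine name. A sketch using a loop (the names slave2 and slave3 are arbitrary):

$ for name in slave2 slave3; do \
    docker-machine create -d digitalocean --swarm \
      --swarm-discovery="consul://${KV_IP}:8500" \
      --engine-opt="cluster-store=consul://${KV_IP}:8500" \
      --engine-opt="cluster-advertise=eth1:2376" \
      ${name}; \
  done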

Connect docker client to the swarm // note the --swarm flag

        1. $ eval $(docker-machine env --swarm master)
        2. $ docker-machine ls // note: ACTIVE column now shows “* (swarm)” instead of just “*”
        3. $ docker info
          1. 3 containers but only 2 nodes. Why?

$ docker ps -a // to find out more

slave/swarm-agent running on slave host

master/swarm-agent running on master host

master/swarm-agent-master running on master host

Master host is a slave node as well as a master node

As Master → responsible for deciding which host each container runs on

As Slave → can run containers on itself

          1. Also note “strategy” is spread

With the spread strategy, the Master schedules new containers on the least-loaded node (the one running the fewest containers)
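A quick way to see the spread strategy in action, as a sketch (nginx is just an arbitrary test image; which node each container lands on depends on current load):

$ eval $(docker-machine env --swarm master)
$ docker run -d --name web1 nginx
$ docker run -d --name web2 nginx
$ docker ps --format '{{.Names}}'   # classic Swarm prefixes names with the node, e.g. slave/web1, master/web2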

      1. Changes to dockerapp prod.yml file to run app container in Swarm
        1. Add networks section to prod.yml
          1. b/c by default, Docker containers can only communicate intra-host (all containers on the same host/docker-machine). Adding a networks: section to the YAML file allows inter-host communication.
        2. Add an environment constraint so that dockerapp always runs on the master node (otherwise Swarm assigns a node based on load)

version: '2'

services:
  dockerapp:
    extends:
      file: common.yml
      service: dockerapp
    image: jleetutorial/dockerapp
    environment:
      - constraint:node==master   # ensures dockerapp always runs on the master node
    depends_on:
      - redis                     # ensures the redis container is up & running first; dockerapp needs redis
    networks:
      - mynet                     # allows inter-host container communication
  redis:
    extends:
      file: common.yml
      service: redis
    networks:
      - mynet                     # allows inter-host container communication

networks:
  mynet:
    driver: overlay               # overlay driver supports multi-host networking natively (works “out of the box”)
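A sketch of deploying this file against the Swarm (assuming it is saved as prod.yml and the jleetutorial/dockerapp image is available to the nodes, e.g. on Docker Hub):

$ eval $(docker-machine env --swarm master)
$ docker-compose -f prod.yml up -d
$ docker network ls | grep overlay   # the mynet overlay network (prefixed with the compose project name) now spans the hosts
$ docker ps                          # dockerapp lands on master (constraint); redis goes wherever the spread strategy put it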
