Deploying a Symfony Application with Docker Swarm

Choosing the orchestration framework for your deployments can be hard and can lead to polarizing discussions about whether one is better than another. They're all great and each has advantages and disadvantages. None of these frameworks is the one framework to rule them all or the only acceptable solution; at the end of the day, the choice depends heavily on the requirements, budget and knowledge you bring to your particular case.

For small to medium sized projects, Docker Swarm is ideal thanks to its ease of setup: its syntax is almost the same as docker-compose's, with a few additional keys that let you configure swarm-related features.

As the example for today, I wrote a simple application that allows users to upload files and share them. The application can be found here.

This application uses the following stack:

  • nginx as the webserver
  • php-7.2 running in cgi mode to serve the dynamic content
  • mysql as the database
  • minio as the storage server

The only thing to note in the stack is the need to configure minio in a way that allows you to connect to it using the KnpGaufrette and Amazon's Aws\S3\S3Client libraries, the rest of the stack being configured in a pretty standard way.

The important configuration options are as follows:

  # services.yaml
  document.s3:
    class: Aws\S3\S3Client
    factory: [Aws\S3\S3Client, 'factory']
    arguments:
      -
        version: 'latest'
        region: 'us-east-1' # must provide region here
        endpoint: 'http://storage:9000' # use the name of the service from docker-compose.yml; can also be an environment variable if you so choose
        use_path_style_endpoint: true
        credentials:
          key: '%env(STORAGE_ACCESS_KEY)%'
          secret: '%env(STORAGE_SECRET_KEY)%'
// minio/config.json
{
  // ...
  "region": "us-east-1", // make sure you set this to be the same value as in your document.s3 service
  // ...
}

In case you came here specifically for the part above: make sure you mount the minio config as read-only so that minio doesn't change it.
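
If you want to sanity-check the read-only mount locally before touching the swarm, a plain docker run with the :ro flag does the trick. This is only a minimal sketch: the local config path, the published port and the volume name are assumptions, while the image and command mirror the stack file further down.

docker run -d --name storage \
  -p 9000:9000 \
  -v "$(pwd)/minio/config.json:/config/config.json:ro" \
  -v minio-data:/data \
  minio/minio server --config-dir=/config /data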

For this project, I'll use 5 VMs, 1 for each service and 1 for the master. This isn't mandatory, it's just intended to help demonstrate labels and deployment constraints.

First of all, we should make sure that our Ubuntu based VMs are up to date. This can easily be done by running the following command as root:

apt-get update && apt-get upgrade -y

After the updates are done, we install docker in each VM, by running the following command, again, as root:

curl -sSL get.docker.com | bash

Now it's time to create the swarm manager by running docker swarm init --advertise-addr 192.168.0.100 (replace 192.168.0.100 with the IP of the manager that is accessible from the network). The command will output something similar to this:

Swarm initialized: current node (tv9wp779n6vrxeqm6szjlfgzz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-aaaaaaaaaaaaaaaaaaaaaaaaa 192.168.0.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The output provides the command we need to run on each of the worker nodes in order to add them to the swarm.
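
If you have SSH access to the workers, a small loop saves some copy-pasting. This is a sketch only: it assumes the workers are reachable over SSH as root under the hostnames from the node list below (adjust to your own hostnames or IPs), and the token is the one from your own swarm init output.

JOIN_CMD='docker swarm join --token SWMTKN-1-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-aaaaaaaaaaaaaaaaaaaaaaaaa 192.168.0.100:2377'
for host in ubuntu-2gb-nbg1-2 ubuntu-2gb-nbg1-3 ubuntu-2gb-nbg1-4 ubuntu-2gb-nbg1-5; do
  ssh root@"$host" "$JOIN_CMD"
done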

If everything went well, running the command docker node ls on the master should show all the nodes in the swarm, looking something like this:

root@ubuntu-2gb-nbg1-1:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
tv9wp779n6vrxeqm6szjlfgzz *   ubuntu-2gb-nbg1-1   Ready               Active              Leader              18.09.1
6qc4c5b3fbu1m5br5mzysx5fl     ubuntu-2gb-nbg1-2   Ready               Active                                  18.09.1
u1c63m88zew8qdidv104674jn     ubuntu-2gb-nbg1-3   Ready               Active                                  18.09.1
r9yg3lzk3oo5w6i5h53i07nie     ubuntu-2gb-nbg1-4   Ready               Active                                  18.09.1
mza0rj2nzdril9i4cevb54nwe     ubuntu-2gb-nbg1-5   Ready               Active                                  18.09.1

We're now done with the worker nodes and the rest of the commands will be run on the master.

Because we want to control where each service in our swarm ends up, we will add a label to each of the nodes. The command is docker node update --label-add <key>=<value> <node-id>. For me this looks something like this:

  • docker node update --label-add role=master ubuntu-2gb-nbg1-1
  • docker node update --label-add role=db ubuntu-2gb-nbg1-2
  • docker node update --label-add role=storage ubuntu-2gb-nbg1-3
  • docker node update --label-add role=nginx ubuntu-2gb-nbg1-4
  • docker node update --label-add role=php ubuntu-2gb-nbg1-5
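
To double-check that the labels landed where you expect, you can inspect all nodes in one go using docker node inspect's Go template support (run this on the master, like the commands above):

docker node ls -q | xargs docker node inspect \
  -f '{{ .Description.Hostname }}: {{ .Spec.Labels }}'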

The last thing we need to do before we are able to control docker from our local machine is to create certificates and set up docker on the master to allow remote commands.

I've put together a script based on the information available on the Protect the Docker daemon socket page to ease my setup. Run this script from a safe location and make sure you keep your keys safe.

#!/usr/bin/env bash

# get external IP
export IP=`curl httpbin.org/ip | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'`

# cleanup old configs
rm -f extfile*.cnf

# generate CA
# uncomment this line if you want to use password protected keys
#openssl genrsa -aes256 -out ca-key.pem 4096
# comment this line if you want to use password protected keys
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem

# generate server keys
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$IP" -sha256 -new -key server-key.pem -out server.csr

echo subjectAltName = IP:$IP >> extfile-server.cnf
echo extendedKeyUsage = serverAuth >> extfile-server.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile-server.cnf

openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth >> extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
rm -v client.csr server.csr
chmod 400 *.pem

cp ca.pem /etc/docker/ca.pem
cp server-cert.pem /etc/docker/server-cert.pem
cp server-key.pem /etc/docker/server-key.pem

The above script will generate all the keys we need in order to set up both the master and the client. The server keys are copied to the /etc/docker/ folder for convenience.

With the keys in place, we can now configure the docker daemon to use them by creating a file at /etc/docker/daemon.json with the following content:

{
    "storage-driver": "overlay2",
    "hosts": [
        "unix:///var/run/docker.sock",
        "tcp://0.0.0.0:2376"
    ],
    "tlsverify": true,
    "tlscacert": "/etc/docker/ca.pem",
    "tlscert": "/etc/docker/server-cert.pem",
    "tlskey": "/etc/docker/server-key.pem"
}

Now we restart the docker daemon with the command systemctl restart docker.service.

You will most likely get an error here, saying that the docker daemon can't start:

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

And if you dig deeper by running either of the commands suggested in the error message, you will see something similar to this:

-- Unit docker.service has begun starting up.
Jan 24 00:11:10 ubuntu-2gb-nbg1-1 dockerd[21404]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: hosts: (from flag: [fd://], from file: [unix:///var/run/docker.sock tcp://0.0.0.0:2376])
Jan 24 00:11:10 ubuntu-2gb-nbg1-1 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:11:10 ubuntu-2gb-nbg1-1 systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 24 00:11:10 ubuntu-2gb-nbg1-1 systemd[1]: Failed to start Docker Application Container Engine.

This is because, by default, the docker service is started with a few arguments/flags that we no longer want in our scenario, so it's time to change that. To override the docker service in systemd, we simply need to create a file at /etc/systemd/system/docker.service.d/docker.conf with the following content:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd

Once this is done, we simply need to reload the systemd configuration and restart the docker service. This is as easy as running the following 2 commands:

root@ubuntu-2gb-nbg1-1:/etc/systemd/system# systemctl daemon-reload
root@ubuntu-2gb-nbg1-1:/etc/systemd/system# systemctl restart docker.service

If at this point we check the status of the docker service (systemctl status docker.service) we'll see something similar to this:

root@ubuntu-2gb-nbg1-1:/etc/systemd/system# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.conf
   Active: active (running) since Thu 2019-01-24 00:23:46 CET; 7s ago
     Docs: https://docs.docker.com
 Main PID: 21622 (dockerd)
    Tasks: 9
   CGroup: /system.slice/docker.service
           └─21622 /usr/bin/dockerd

Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.035910068+01:00" level=info msg="Node 8e08c2052b89/116.203.64.131, joined gossip cluster"
Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.036079632+01:00" level=info msg="Node 8e08c2052b89/116.203.64.131, added to nodes list"
Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.036233730+01:00" level=info msg="Node 5da982d2f987/195.201.220.107, joined gossip cluster"
Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.036363198+01:00" level=info msg="Node 5da982d2f987/195.201.220.107, added to nodes list"
Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.036555544+01:00" level=info msg="Node 50857f655645/116.203.64.129, joined gossip cluster"
Jan 24 00:23:47 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:47.036689913+01:00" level=info msg="Node 50857f655645/116.203.64.129, added to nodes list"
Jan 24 00:23:48 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:48.167328050+01:00" level=info msg="worker 6qc4c5b3fbu1m5br5mzysx5fl was successfully registered" method="(*Dispatcher).register"
Jan 24 00:23:48 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:48.569705082+01:00" level=info msg="worker u1c63m88zew8qdidv104674jn was successfully registered" method="(*Dispatcher).register"
Jan 24 00:23:48 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:48.672399583+01:00" level=info msg="worker r9yg3lzk3oo5w6i5h53i07nie was successfully registered" method="(*Dispatcher).register"
Jan 24 00:23:50 ubuntu-2gb-nbg1-1 dockerd[21622]: time="2019-01-24T00:23:50.379060067+01:00" level=info msg="worker mza0rj2nzdril9i4cevb54nwe was successfully registered" method="(*Dispatcher).register"

It's alive!... and we're done with it! Well, we're done with setting up our nodes to be able to run our docker services.

All that is left to do here is to copy the client certificates locally, and then we can close our last ssh session to the master.

We can use scp for this: scp root@12.34.56.78:/path/to/keys/{ca,cert,key}.pem ..

Once the keys are copied locally, we can test the connectivity by running any docker command with the proper options, for example: docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://12.34.56.78:2376 node ls.

radu@RaduPC:/there/is/no/spoon$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://12.34.56.78:2376 node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
tv9wp779n6vrxeqm6szjlfgzz *   ubuntu-2gb-nbg1-1   Ready               Active              Leader              18.09.1
6qc4c5b3fbu1m5br5mzysx5fl     ubuntu-2gb-nbg1-2   Ready               Active                                  18.09.1
u1c63m88zew8qdidv104674jn     ubuntu-2gb-nbg1-3   Ready               Active                                  18.09.1
r9yg3lzk3oo5w6i5h53i07nie     ubuntu-2gb-nbg1-4   Ready               Active                                  18.09.1
mza0rj2nzdril9i4cevb54nwe     ubuntu-2gb-nbg1-5   Ready               Active                                  18.09.1

Note the --tls* arguments; they are all required in order to be able to connect using TLS.
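
If typing the --tls* flags for every command gets old, the standard Docker client environment variables carry the same settings (the IP and path are placeholders, just like in the commands above):

export DOCKER_HOST=tcp://12.34.56.78:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/path/to/keys   # must contain ca.pem, cert.pem and key.pem
docker node ls                          # now talks to the swarm master over TLS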

Unfortunately, we will not be setting up CI/CD in this article, as it deserves its own article, which will come soon.

It is now time to build and actually deploy the code on the swarm.

When working with Symfony projects, I personally prefer to organize my docker specific files under config/docker to keep my repository a bit cleaner. This means that the docker-compose configs and the docker commands that I run need extra arguments to work properly, but it's a small price to pay in order to keep the repositories neat.

For example, building the application image, which would normally be docker build -t name:tag ., becomes something like this: docker build -t dilibau/docker-swarm-example-app:latest -f config/docker/php/Dockerfile ..

As you can see, I'm passing -f config/docker/php/Dockerfile instead of running something like docker build config/docker/php. This is because I want to use the current folder as the build context while using a Dockerfile that lives somewhere else.
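
One thing to keep in mind: the swarm nodes can only run images they can pull, so after building, the images need to be pushed to a registry the nodes can reach. A sketch, assuming the Docker Hub repositories used throughout this article (the nginx Dockerfile path is an assumption, following the same layout as the php one):

docker build -t dilibau/docker-swarm-example-app:latest -f config/docker/php/Dockerfile .
docker push dilibau/docker-swarm-example-app:latest

# hypothetical path, assuming the nginx image follows the same config/docker layout
docker build -t dilibau/docker-swarm-example-nginx:latest -f config/docker/nginx/Dockerfile .
docker push dilibau/docker-swarm-example-nginx:latest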

It's time to create the configs for our services. Looking at the MySQL image page on Docker Hub, we can see that it supports docker secrets. Awesome!

Let's create a docker secret to use as our root password. We can do this by simply running:

printf therootpassword | docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://12.34.56.78:2376 secret create mysql-root-password -

Note the usage of printf instead of echo. This is because echo adds a newline at the end of the string you give it as an argument.
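
You can see the difference for yourself by counting the bytes each one produces; the trailing newline from echo would otherwise end up inside the secret:

printf therootpassword | wc -c   # 15 bytes, exactly the password
echo therootpassword | wc -c     # 16 bytes, the password plus a trailing newline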

Now, let's create a docker config for the Symfony application. We do this because docker configs can be mounted anywhere; for example, we will mount the Symfony config as .env in the application folder.

First, create a new file somewhere, and add your configuration there. It should look similar to this:

MYSQL_USER=root
MYSQL_ROOT_PASSWORD=j74bqdm6tdm8832p4cghc2hgm
MYSQL_DATABASE=uploads
APP_ENV=prod
APP_SECRET=f96c2d666ace1278ec4c9e2304381bc3
DATABASE_URL=mysql://${MYSQL_USER}:${MYSQL_ROOT_PASSWORD}@db:3306/${MYSQL_DATABASE}
MAILER_URL=null://localhost
STORAGE_ACCESS_KEY=ANIHPIX324E6DDT54D0R
STORAGE_SECRET_KEY=hqVg4a+Ra0IpSG+FJfR3OFdAQNA4QvkzSdLvNAaB
BUCKET_NAME=storage

Once this is done, we can create the docker config using this command: docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://12.34.56.78:2376 config create app_config my/new/.env

We'll do the same for the minio config: docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://12.34.56.78:2376 config create minio_config my/new/config.json
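
Before moving on, it's worth checking that the secret and the two configs actually made it into the swarm. With the DOCKER_* environment variables from earlier exported (or the full --tls* flags), that's simply:

docker secret ls   # should list mysql-root-password
docker config ls   # should list app_config and minio_config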

With all that configured, it's time to put it together in a config file and deploy our services!

version: '3.5'

services:
  php:
    image: dilibau/docker-swarm-example-app:latest
    deploy:
      placement:
        constraints:
        - node.labels.role == php
    configs:
    - source: app_config
      target: /app/.env
      mode: 0555
  nginx:
    image: dilibau/docker-swarm-example-nginx:latest
    deploy:
      placement:
        constraints:
        - node.labels.role == nginx
    ports:
    - 80:80
  db:
    image: mysql:5.7
    deploy:
      placement:
        constraints:
        - node.labels.role == db
    volumes:
    - mysql:/var/lib/mysql
    secrets:
    - mysql-root-password
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql-root-password
      MYSQL_USER: root
      MYSQL_DATABASE: uploads
  storage:
    image: minio/minio:latest
    deploy:
      placement:
        constraints:
        - node.labels.role == storage
    volumes:
    - storage:/data
    configs:
    - source: minio_config
      target: /config/config.json
      mode: 0555
    command: server --config-dir=/config /data

volumes:
  mysql:
  storage:

secrets:
  mysql-root-password:
    external: true

configs:
  minio_config:
    external: true
  app_config:
    external: true

Things to note here:

  • volumes: we declare them as named volumes near the bottom of the file so that the data persists; there are better ways to do this, and we will cover them in another article
  • configs:
    • as a top level key, this is where you specify the externally defined configs that you want to bring into the stack
    • when this key is nested under a service, it allows you to specify an array of objects where you can choose the source, target and mode for mounting the configs defined in the top level key
  • secrets: same as the configs, we mark them as external so that we can manage them with docker swarm
  • the deploy key:
    • we use this to specify the constraints so that each service gets placed on the right node in our swarm
    • we can also configure other deployment related things here, like the number of replicas, global mode, and more

With all that being said and done, let's actually deploy this thing:

docker --tlsverify --tlscacert=path/to/ca.pem --tlscert=path/to/cert.pem --tlskey=path/to/key.pem -H=tcp://12.34.56.78:2376 stack deploy -c path/to/docker-stack.yml name_of_stack

We can now run docker --tlsverify --tlscacert=path/to/ca.pem --tlscert=path/to/cert.pem --tlskey=path/to/key.pem -H=tcp://12.34.56.78:2376 service ls to see the status of our services. Once they all show 1/1 in the REPLICAS column, you're almost ready to start using the application.
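
If a service is stuck at 0/1, docker stack ps shows which node each task was scheduled on, its state and the error that stopped it (use the same connection flags or environment variables as above):

docker stack ps name_of_stack --no-trunc   # one line per task, with node, state and error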

There is only one thing left to do, which is to run the migrations. Unfortunately, there's no easy one-liner that can run a command on a specific node in the swarm cluster, so you will have to ssh into it and run them manually.
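
For reference, this is roughly what that looks like on the node labelled role=php, assuming the application uses Doctrine migrations and that the stack was deployed as name_of_stack (substitute your own stack name and setup command if they differ):

# on the node running the php service
CONTAINER_ID=$(docker ps -q -f name=name_of_stack_php)
docker exec -it "$CONTAINER_ID" php bin/console doctrine:migrations:migrate --no-interaction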

And with that last step, we're actually done. We can check the application by accessing port 80 on any of the publicly available IPs in the cluster, because by default swarm publishes the ports on all the nodes and routes the traffic to the correct service.
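
A quick way to confirm the routing mesh is doing its job is to hit port 80 on a node that is not running nginx and check that the application still answers (the IP below is a placeholder for any node in the cluster):

curl -I http://12.34.56.78/   # expect an HTTP response from nginx, whichever node you pick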

Want more awesome stuff?

Check us out on our Medium Remote Symfony Team publication.