Setting up a new local environment from scratch can be challenging and time-consuming. This might not be a big deal when you're a single developer on a project, but in a team it's important that everyone shares the same infrastructure configuration. That's why we highly recommend using a tool like Docker to simplify the process.
Last summer, Jesus Manuel Olivas (project lead) and I started working on a new project, and we had to decide which setup to use for local environments. Since the project was already set up to use Lightning and BLT, we agreed to use DrupalVM with Vagrant. Everything seemed to work great apart from some permission conflicts, which were easy to resolve since the project only had two developers at the time.
DrupalVM is a tool that makes creating Drupal development environments quick and easy. It offers the option to use Docker instead of (or in addition to) Vagrant, but it is best known for, and most commonly used with, Vagrant. The merits of Vagrant versus Docker are actively debated among developers.
Why We Switched to Docker
After a few weeks of development, more developers joined the project and we started running into issues. Vagrant was not working as expected on some machines, and we were spending far too much time researching and fixing provisioning problems. Jesus and I went back to the drawing board to come up with a comprehensive solution, and we decided to switch from Vagrant to Docker.
Trying Docker
Docker is a tool for building and deploying applications by packaging them into lightweight containers. A container can hold pretty much any software component along with its dependencies (executables, libraries, configuration files, etc.), and execute it in a guaranteed and repeatable runtime environment.
This makes it very easy to build your app once and deploy it anywhere – on your laptop for testing, then on different servers for live deployment, etc.
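As a quick illustration of that workflow (not part of this project's setup), building and running an image takes two commands; the image name "myapp:latest" here is an arbitrary example:

```shell
# Build an image from the Dockerfile in the current directory
# (the tag "myapp:latest" is an example name, not from the project)
docker build -t myapp:latest .

# Run it anywhere Docker is installed -- the same runtime environment every time;
# --rm removes the container when it exits
docker run --rm myapp:latest
```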
There are plenty of ready-to-use tools for running Drupal on Docker, just to mention a few: Lando, Docksal, and Wodby's docker4drupal.
At this point we didn’t want to add an extra layer or tool to the setup process, so we decided to go straight to a plain vanilla Docker configuration.
How To Implement a Basic Docker Configuration For Drupal
Installing Docker
This should be an easy step. Once the installation completes, you should have the Docker daemon running; confirm this by running docker in your terminal, which should print the list of available commands. You can find the download link on the official Docker site.
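A minimal sanity check after installing looks like this (hello-world is Docker's official test image, which exercises the full pull-and-run path):

```shell
# Confirm the client is installed and the daemon is reachable
docker --version
docker info

# Pull and run Docker's official test image end to end
docker run --rm hello-world
```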
Step 1. Add the hostname
Edit your /etc/hosts file and add the new site name.
127.0.0.1 site.local
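If you prefer to script this step, a guarded append keeps the entry from being duplicated on repeated runs. The sketch below works on a temporary copy of the file; on a real machine, point HOSTS at /etc/hosts and run the append with sudo:

```shell
# Work on a copy of /etc/hosts for demonstration purposes;
# set HOSTS=/etc/hosts (and use sudo) to apply it for real
HOSTS=$(mktemp)
cp /etc/hosts "$HOSTS" 2>/dev/null || touch "$HOSTS"

# Append the mapping only if the exact entry is not already present (idempotent)
grep -qF '127.0.0.1 site.local' "$HOSTS" || echo '127.0.0.1 site.local' >> "$HOSTS"

# Show the entry that is now in place
grep 'site.local' "$HOSTS"
```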
Step 2. Add the docker-compose.yml file
Add the following file to your project root.
version: "2"

services:
  mariadb:
    image: wodby/mariadb:10.1-2.3.3
    env_file: .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
    ports:
      - '3306:3306'
    volumes:
      - mysqldata:/var/lib/mysql
      # - ./mariadb-init:/docker-entrypoint-initdb.d

  php:
    image: wodby/drupal-php:7.0-2.4.3
    env_file: .env
    environment:
      PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      DB_HOST: ${DATABASE_HOST}
      DB_USER: ${DATABASE_USER}
      DB_PASSWORD: ${DATABASE_PASSWORD}
      DB_NAME: ${DATABASE_NAME}
      DB_DRIVER: mysql
    volumes:
      - ./:/var/www/html:cached

  nginx:
    image: wodby/drupal-nginx:8-1.13-2.4.2
    depends_on:
      - php
    environment:
      NGINX_STATIC_CONTENT_OPEN_FILE_CACHE: "off"
      NGINX_ERROR_LOG_LEVEL: debug
      NGINX_BACKEND_HOST: php
      NGINX_SERVER_ROOT: /var/www/html/docroot
    volumes:
      - ./:/var/www/html:cached
    labels:
      - 'traefik.backend=nginx'
      - 'traefik.port=80'
      - 'traefik.frontend.rule=Host:${TRAEFIK_HOST}'

  mailhog:
    image: mailhog/mailhog
    labels:
      - 'traefik.backend=mailhog'
      - 'traefik.port=8025'
      - 'traefik.frontend.rule=Host:mailhog.${TRAEFIK_HOST}'

  traefik:
    image: traefik
    command: -c /dev/null --web --docker --logLevel=INFO
    ports:
      - '80:80'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  mysqldata:
    driver: "local"
Step 3. Add the .env file
Create a new file named .env at the project root to provide per-environment configuration.
# ENV
ENVIRONMENT=local
# Database
MYSQL_ROOT_PASSWORD=password
DATABASE_NAME=drupal
DATABASE_USER=drupal
DATABASE_PASSWORD=drupal
DATABASE_HOST=mariadb
DATABASE_PORT=3306
# Traefik host
TRAEFIK_HOST=site.local
Step 4. Starting the containers
To start the containers, execute docker-compose up -d, then grab some coffee or a beer and be patient while the images are downloaded to your local machine.
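Once docker-compose up -d returns, you can confirm that everything came up correctly:

```shell
# List the project's containers and their current state
docker-compose ps

# Tail the logs of one service while it boots (nginx here as an example);
# press Ctrl+C to stop following
docker-compose logs -f nginx
```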
Step 5. Importing a database dump (optional)
You can import a previously exported database dump by copying the dump file into the mariadb-init directory and uncommenting the following line in your docker-compose.yml file.
- ./mariadb-init:/docker-entrypoint-initdb.d
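Step by step, assuming a dump file named dump.sql (an example name). Note that the MariaDB entrypoint only runs scripts in docker-entrypoint-initdb.d against an empty data directory, so you may need to destroy the existing database volume first:

```shell
# Place the dump where the container entrypoint will look for it
mkdir -p mariadb-init
cp /path/to/dump.sql mariadb-init/

# Init scripts only run against an empty data directory, so destroy
# the existing database volume (-v) and recreate the stack
docker-compose down -v
docker-compose up -d
```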
Step 6. Checking for used ports
One common issue you'll likely run into when starting the containers is that the ports are already in use. This usually means an instance of Apache, Nginx, MySQL, or another service is already running on your machine. To find out what is using a port, run this command in your terminal:
lsof -i :<PORT_NUMBER>
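For this stack specifically, a quick loop over the two published host ports (80 for Traefik and 3306 for MariaDB) shows what, if anything, is occupying them:

```shell
# Check each host port the compose file publishes;
# lsof prints the owning process, or nothing if the port is free
for port in 80 3306; do
  echo "--- port $port ---"
  lsof -nP -i :"$port" || echo "port $port is free"
done
```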
Useful docker-compose commands
Starting the containers in detached mode
docker-compose up -d
Stopping the containers.
docker-compose stop
Destroying the containers
docker-compose down [-v]
NOTE: You can pass the -v flag to destroy the shared volumes as well. Be careful: this will destroy any data on the volumes shared between the containers and your local machine.
Checking the logs
docker-compose logs -f <CONTAINER_NAME>
Executing CLI commands.
While working with containers, it is common to see developers SSH into the machine to execute commands. To avoid this practice, you can take advantage of the docker-compose exec command.
docker-compose exec <CONTAINER_NAME> <COMMAND_NAME>
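For example, to open a shell or clear Drupal's cache inside the php container (the wodby/drupal-php image ships with Drush, though that is worth verifying for the tag you use):

```shell
# Open an interactive shell inside the php container
docker-compose exec php sh

# Run Drush inside the container as the web user (uid 82)
docker-compose exec --user=82 php drush cache-rebuild
```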
Using Composer
Drupal 8 takes full advantage of Composer: you can use it to install and uninstall dependencies and to apply patches. It's good practice to run these commands inside the container, because the PHP version on your local machine may differ from the one in the container, and you could otherwise install dependencies that are not suitable for your container instance.
docker-compose exec --user=82 php composer <COMMAND_NAME>
Using DrupalConsole
If you want to use DrupalConsole in your project, you can add an alias file to the repo at console/sites/site.yml containing the following configuration.
local:
  root: /var/www/html
  extra-options: docker-compose exec --user=82 php
  type: container
Once this file is added, you will be able to run DrupalConsole commands locally while actually executing them in the container:
drupal @site.local <COMMAND_NAME>
Find more information here: https://docs.drupalconsole.com/en/alias/using-site-alias.html
On the other hand, if you prefer to run the commands directly, you can use:
docker-compose exec --user=82 php drupal <COMMAND_NAME>
Wrapping up
The new setup worked well on everyone's computer, and we haven't had any issues since we made the change. The project has now gone live, the experience was great, and we plan to keep using Docker for future projects.
If you feel that Docker's architecture is hard to understand and complex to get up and running, you can take advantage of the projects mentioned above (Lando, Docksal, etc.) to make it easier to start working with containers.
UPDATE:
Because the project is a Drupal site, we based our Docker configuration on the docker4drupal project by Wodby. For other projects using technologies such as Symfony, ReactJS, or MeteorJS, we create our own custom Dockerfiles and images.