Containerizing a static website with Docker, part I

Posted on by Matthias Noback

Recently a former colleague of mine, Lucas van Lierop, showed me his new website, which he created using Spress. Lucas took two bold moves: he started freelancing, and he open-sourced his website code. This to me was very inspiring. I've been getting up to speed with Docker recently and am planning to do a lot more with it over the coming months, and being able to take a look at the source code of up-to-date projects that use Docker is certainly invaluable.

Taking lots of inspiration from Lucas's codebase, and after several hours of fiddling with configuration files, I can now guide you through the steps it took to containerize my blog (which is the site you're visiting now) and deploy a single container to a production server.

This blog is generated from a large set of Markdown files, some images, some SCSS files (compiled to CSS) and some JS files (minified and combined). I use Sculpin for the conversion of Markdown to HTML files and NodeJS/NPM/Bower/Grunt for all things CSS/JS. The result of generating the website from sources is a set of files in output_prod/ which for the past four years I've been happily rsync-ing with an Apache document root on my sponsored ServerGrove VPS.

This kind of setup calls for two containers: one with all the development tools (like NPM, Composer, etc.) and one which serves the generated website. If we do this right, we can use the latter container to preview the website on our development machine, and push the same container to a production environment where it can serve the website to actual visitors.

Although Sculpin makes a distinction between a prod and a dev environment, I wanted to streamline its build process: I want no surprises when I switch from dev to prod just before I release a new version of the website. I chose to hide this distinction by forcing Sculpin to always generate the website in output/. So in app/config/sculpin_kernel.yml I added:

sculpin:
    output_dir: '%sculpin.project_dir%/output'

The build container

I decided to call the container with all the development tools the build container. I created docker/build/Dockerfile containing:

FROM php:7-cli
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash && apt-get install -y nodejs
RUN npm install -g bower grunt
RUN apt-get install -y git
WORKDIR /opt
ENTRYPOINT ["make", "--makefile=docker/build/Makefile"]
CMD ["nothing"]

Starting from the most recent PHP CLI image as the base image, these RUN commands install Composer, then NodeJS, then Bower and Grunt, and finally Git (needed for downloading dependencies from Git repositories). The working directory is set to /opt, which is a sensible place to store custom software. Please note that in most cases it's a best practice to combine the RUN statements into one big concatenated statement, to reduce the number of filesystem layers Docker produces. I decided to ignore this practice, since this is a build container which doesn't have to be of "production quality".
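For reference, a combined variant would look something like this (the same steps chained into a single RUN statement; I'm sticking with separate statements here for readability):

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
    && curl -sL https://deb.nodesource.com/setup_7.x | bash \
    && apt-get install -y nodejs git \
    && npm install -g bower grunt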

The container's entrypoint runs make with the designated Makefile; the default command is the nothing target. This is what docker/build/Makefile looks like:

# I have no idea why this directive works, but at least we can now use spaces instead of tabs for recipes:
.RECIPEPREFIX +=
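# (apparently += appends with a separating space, so the otherwise empty prefix becomes a single space)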

# "nothing" is a phony target: it doesn't correspond to an actual file
.PHONY: nothing

nothing:
    @echo Nothing to be done

# Install all dependencies defined in composer.json, package.json and bower.json
install:
    npm install
    bower install --allow-root
    composer install

# Process all source files (assets and pages)
all:
    grunt
    vendor/bin/sculpin --project-dir=/opt --env=prod generate

I found out that Bower doesn't like to be executed by the root user (the user that runs the build container), so I "fixed" it using the --allow-root flag.

I decided to use docker-compose, which allows for separate configuration for production and development machines, based on the presence of a docker-compose.override.yml file. Since development work will only be done on a local development machine, I added the build container only to the list of services in docker-compose.override.yml, like this:

version: '2'

services:
    build:
        container_name: php-and-symfony-build
        image: matthiasnoback/php-and-symfony-build
        build:
            context: ./
            dockerfile: docker/build/Dockerfile
        volumes:
            - ./:/opt
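The base docker-compose.yml then only needs to list the services that should also run in production. A rough sketch (the actual blog service is the topic of the next post, so the image name below is just a placeholder):

version: '2'

services:
    blog:
        image: matthiasnoback/php-and-symfony-blog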

By running docker-compose build build (which builds the build container ;)) and then docker-compose run build, we get the message "Nothing to be done" from make (as expected, since by default it runs the nothing target we defined earlier). We can run other specific Make targets by adding an extra argument, e.g. docker-compose run build install to install the project dependencies.
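In other words, the typical workflow on the development machine looks something like this:

# Build the image for the build container
docker-compose build build

# Install Composer/NPM/Bower dependencies
docker-compose run build install

# Compile assets with Grunt and generate the website with Sculpin
docker-compose run build all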

To be honest, I don't think I've ever seen anyone use make as the entrypoint for a Docker container, so consider this an experiment. Also, the targets could be arranged a little smarter (they should probably be more specialized), and I haven't added a "watch" target yet to rebuild upon file changes.

The volumes directive instructs docker-compose to mount the local working directory (the root directory of the project) as /opt inside the container. This means that any files changed by the development tools inside the build container are actually changed on the host machine as well. This, of course, is the usual setup for a development environment.

Next up: the blog container

In the next post we'll look at how to set up the blog container, which will serve the generated website (for preview and deployment purposes).

By the way, since I'm learning Docker right now, I'm really hoping to receive some feedback from more experienced Docker users - feel free to use the comment form below!

PHP Docker Sculpin
Comments
s.molinari

I can (sort of) understand the build for a PHP app needing both Node and PHP; however, good Docker practice dictates that each process should have its own container. So, theoretically, there should be a Node container and a PHP container. Can you see that being a possibility? ;-)

Scott

Matthias Noback

For now I consider that rule to be valid only for production processes which Docker will be responsible for keeping up and running. These build utilities are more like the tools you run inside a running container (e.g. ls, vi, etc.).

Sadok

Thank you for sharing your experience with Docker. I'm not an experienced Docker user, but since I started using it last year it has changed the way I work; it's so easy and so flexible to start any project with the specific configuration I need.
I just add the dependencies I need, push it to GitHub and import it in Docker Hub.
Another good thing is that most cloud solutions support Docker. Recently I deployed a project to AWS with the EB CLI; it automatically detected the Dockerfile and created a proper running container with just two commands: eb init and eb create app-name.

Matthias Noback

That's good to hear. Thanks for mentioning AWS too. I picked Digital Ocean this time, see also http://php-and-symfony.matt...

Dmitri Lakachauskis

which for the past four _years_ (a word is missing)

Matthias Noback

Thanks, fixed it.