Containerizing a static website with Docker, part III

Posted by Matthias Noback

In the previous posts we looked at creating a build container and, after that, a blog container serving our generated static website.

It's quite surprising to me how simple the current setup is — admittedly, it's a simple application too. It takes about 50 lines of configuration to get everything up and running.

The idea of the blog container, which has nginx as its main process, is to deploy it to a production server whenever we feel like it, in just "one click". There should be no need to configure a server to host our website, nor to build the application on the server itself. This is in fact the promise, and the true power, of Docker.

Running containers on a remote server requires two things:

  1. The server should be able to retrieve the container's image.
  2. The Docker engine should be running on the server.

Pushing the container image to Docker Hub

The first step is quite easy. You can create an account at the (default) image registry Docker Hub. There are alternatives, but this seems like the usual place to start. You need to provide the full image name in docker-compose.yml (as we did in the previous post):

        image: matthiasnoback/php-and-symfony-blog

You can now build the image on your machine, using docker-compose build blog, and then push that image to Docker Hub by running docker-compose push blog. On the production server, it will later be possible (see below) to pull the container image from the registry by running docker-compose pull blog.

Deployment to Digital Ocean using Docker Machine

Now that the container image has been pushed to Docker Hub, we can continue with the next step: installing the Docker engine on the server. You can do it manually, which I did at first. However, I thought it would be a nice occasion to learn about another tool called Docker Machine that performs this task in an automated fashion: it remotely provisions a server, making it ready to run Docker containers.

I already had an account at Digital Ocean, so I just followed the steps described in the Digital Ocean example documentation page. Basically, you let docker-machine create a new "droplet" for you, which is a nice name for a virtual private server (VPS). Once you have done this, you can run docker (and consequently docker-compose) commands on the remote server, from your own laptop. It wasn't entirely clear to me at first, but it works by populating some specific environment variables, which influence the behavior of docker commands.

First I provisioned my server by running:

docker-machine create --driver digitalocean --digitalocean-access-token secret-api-token php-and-symfony-blog

After some time I could run docker-machine env php-and-symfony-blog, which showed something like:

docker-machine env php-and-symfony-blog
export DOCKER_HOST="tcp://x.x.x.x:2376"
export DOCKER_CERT_PATH="..."
export DOCKER_MACHINE_NAME="php-and-symfony-blog"
# Run this command to configure your shell: 
# eval $(docker-machine env php-and-symfony-blog)

So I followed the instructions and ran eval $(docker-machine env php-and-symfony-blog). From that moment on I could run any docker command and it would be executed against the Docker engine running on the remote server, but — and this is why it's so awesome — based on the configuration files available on the host machine.
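Under the hood this is plain shell mechanics: docker-machine env only prints export statements, and eval applies them to the current shell. Here is a minimal sketch of that mechanism, using a hypothetical stand-in function instead of the real docker-machine command (no Docker engine required):

```shell
# Hypothetical stand-in for `docker-machine env php-and-symfony-blog`;
# like the real command, it merely prints export statements.
machine_env() {
    echo 'export DOCKER_HOST="tcp://x.x.x.x:2376"'
    echo 'export DOCKER_MACHINE_NAME="php-and-symfony-blog"'
}

# eval executes those export lines in the *current* shell, so every
# docker command that runs afterwards sees DOCKER_HOST and talks to
# the remote engine instead of the local one.
eval "$(machine_env)"

echo "$DOCKER_HOST"  # prints tcp://x.x.x.x:2376
```

Running the command without eval would only print the exports and change nothing; the eval is what actually mutates your shell's environment.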

This means that I can simply run the following commands from my project root directory:

eval $(docker-machine env php-and-symfony-blog)
docker-compose -f docker-compose.yml pull blog
docker-compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog

This pulls the previously pushed blog image from Docker Hub, then starts running the blog container. Running docker-compose ps reveals that indeed, the blog is now up and running, serving the website at port 80 as it should.

Since the environment variables produced by docker-machine env cause all subsequent docker commands to be transparently run against the remote server, you should not forget to unset them when you want to communicate with your locally installed Docker engine again. Florian Klein pointed out an easy way to accomplish this in the comment section:

eval $(docker-machine env -u)

Some last suggestions:

  • It may be a good idea to write another Makefile containing recipes for the above actions (e.g. create and provision a server — if you want that to be a reproducible thing; build, push and run a container image, etc.).
  • Read more about Docker, Docker Compose, Docker Hub (and possibly Docker Machine) by browsing through their documentation pages. Digital Ocean also provides lots of useful documentation, tutorials and guides.
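As a sketch of that first suggestion, such a Makefile might look like this (target names and the machine name are purely illustrative, and it reuses the spaces-instead-of-tabs trick from the build Makefile):

```makefile
# Illustrative deployment Makefile — names are my own, adjust to taste.
.RECIPEPREFIX +=

MACHINE = php-and-symfony-blog

.PHONY: build push deploy

build:
    docker-compose build blog

push:
    docker-compose push blog

# Each recipe line runs in its own shell, so the eval and the
# docker-compose calls are chained into one command here.
deploy:
    eval $$(docker-machine env $(MACHINE)) && \
    docker-compose -f docker-compose.yml pull blog && \
    docker-compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog
```

With this in place, a release is just make build push deploy.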


Again: it's all pretty simple, very cool and highly rewarding. I like the fact that:

  • I'm in full control of every software dependency of my application.
  • I don't have to manually install anything on the production server.
  • I won't be afraid to destroy my VPS, since it's very easy to bring a new one up again.

Of course, we have to be very honest about our achievements: once we start going down this road, containerizing larger or more interconnected applications, we may soon get into trouble. I'm personally setting out on a journey to learn much more about all of this, so expect more on the subject soon.

Categories: PHP Docker

Tags: Sculpin Docker

Comments: Comments

Containerizing a static website with Docker, part II

Posted by Matthias Noback

In the previous post we looked at the process of designing a build container, consisting of all the required build tools for generating a static website from source files. In order to see the result of the build process, we still need to design another container, which runs a simple web server, serving the static website (mainly .html, .css, .js and .jpg files).

Designing the blog container

We'll use a lightweight Nginx install as the base image and simply copy the website files to the default document root (/usr/share/nginx/html), after first removing any placeholder files currently inside that directory. The complete file docker/blog/Dockerfile looks like this:

FROM nginx:1.11-alpine
RUN rm -rf /usr/share/nginx/html
COPY output /usr/share/nginx/html

Eventually, I want to turn this into something more advanced, by configuring SSL, and by making the pages "auto-fast" with the Pagespeed module developed by Google. But for now, this basic image is just fine (and pretty fast).

Let's add the blog container to docker-compose.yml too:

version: '2'

services:
    blog:
        # optional
        container_name: php-and-symfony-blog

        # tag the image, so we can later push it
        image: matthiasnoback/php-and-symfony-blog

        # should Nginx crash, always restart it
        restart: always

        # treat port 80 of the host as port 80 of the container
        ports:
            - 80:80

Remember I've used docker-compose.override.yml to define development-specific configuration for Docker? Since we're only building the container in a development environment, the build configuration for the blog container only needs to be available in docker-compose.override.yml:

version: '2'

services:
    blog:
        # already defined in the previous post...

        build:
            context: ./
            dockerfile: docker/blog/Dockerfile

        volumes:
            # Nginx should pick up local changes to files in ./output
            - ./output:/usr/share/nginx/html

For development purposes, we make sure that the current contents of the output/ directory will always be available for Nginx to serve. To achieve this, we only need to mount output/ as a volume at Nginx's default document root location.

After building the website files using docker-compose run build all, we can start serving the blog: docker-compose up -d blog. We use up -d to start the web server in detached mode and keep it running. We can now look at the website by opening http://localhost in a browser.

Next up: deploying the blog container

The promise of Docker to me was: producing a build artifact that can travel through a build pipeline and eventually be deployed as-is to a production server. Deploying a static website is particularly easy now that we have a simple blog container that really is a self-contained web server. We'll look into deployment in the next post.


Containerizing a static website with Docker, part I

Posted by Matthias Noback

Recently a former colleague of mine, Lucas van Lierop, showed me his new website, which he created using Spress. Lucas took two bold moves: he started freelancing, and he open-sourced his website code. This to me was very inspiring. I've been getting up to speed with Docker recently and am planning to do a lot more with it over the coming months, and being able to take a look at the source code of up-to-date projects that use Docker is certainly invaluable.

Taking lots of inspiration from Lucas's codebase, and after several hours of fiddling with configuration files, I can now guide you through the steps it took to containerize my blog (which is the site you're visiting now) and deploy a single container to a production server.

This blog is generated from a large set of Markdown files, some images, some SCSS files (compiled to CSS) and some JS files (minified and combined). I use Sculpin for the conversion of Markdown to HTML files and NodeJS/NPM/Bower/Grunt for all things CSS/JS. The result of generating the website from sources is a set of files in output_prod/ which for the past four years I've been happily rsync-ing with an Apache document root on my sponsored ServerGrove VPS.

This kind of setup asks for two containers: one with all the development tools (like NPM, Composer, etc.) and one which serves the generated website. If we do this right, we can use the latter container to preview the website on our development machine, and push the same container to a production environment where it can serve the website to actual visitors.

Although Sculpin makes a distinction between a prod and a dev environment, I wanted to streamline its build process: there should be no surprises when I switch from dev to prod just before releasing a new version of the website. I chose to hide the distinction by forcing Sculpin to always generate the website in output/. So in app/config/sculpin_kernel.yml I added:

    output_dir: %sculpin.project_dir%/output

The build container

I decided to call the container with all the development tools the build container. I created docker/build/Dockerfile containing:

FROM php:7-cli
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash && apt-get install -y nodejs
RUN npm install -g bower grunt
RUN apt-get install -y git
WORKDIR /opt
ENTRYPOINT ["make", "--makefile=docker/build/Makefile"]
CMD ["nothing"]

Taking the most recent PHP CLI image as the base image, these RUN commands install Composer, then NodeJS, then Bower and Grunt and finally Git (for downloading dependencies from Git). The working directory is set to /opt, which is a sensible place to store custom software. Please note that in most cases it's a best practice to combine the RUN statements into one big concatenated one, to reduce the number of filesystem layers Docker produces. I decided to ignore this practice, since this is a build container which doesn't have to be of "production quality".
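For reference, the combined variant hinted at above might look roughly like this (a sketch, assuming the standard Composer installer and NodeSource setup-script URLs):

```dockerfile
# Same tools, collapsed into a single RUN instruction so Docker creates
# one filesystem layer for all of them. The URLs are assumptions based
# on the usual Composer and NodeSource install instructions.
FROM php:7-cli
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash \
    && apt-get install -y nodejs git \
    && npm install -g bower grunt
WORKDIR /opt
ENTRYPOINT ["make", "--makefile=docker/build/Makefile"]
CMD ["nothing"]
```

For a production image this would be the better habit; for a throwaway build container the multi-RUN version above is easier to tweak, since each step stays cached separately.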

The default entrypoint for the container is to run the nothing target of the designated Make file. This is what docker/build/Makefile looks like:

# I have no idea why this directive works, but at least we can now use spaces instead of tabs for recipes:
.RECIPEPREFIX +=

# nothing is a "phony" target, it produces no actual step in the build
.PHONY: nothing
nothing:
    @echo Nothing to be done

# Install all dependencies defined in composer.json, package.json and bower.json
install:
    npm install
    bower install --allow-root
    composer install

# Process all source files (assets and pages)
generate:
    vendor/bin/sculpin --project-dir=/opt --env=prod generate

I found out that Bower doesn't like to be executed by the root user (the user that runs the build container), so I "fixed" it using the --allow-root flag.

I decided to use docker-compose, which allows for separate configuration for production and development machines, based on the presence of a docker-compose.override.yml file. Since development work will only be done on a local development machine, I added the build container only to the list of services in docker-compose.override.yml, like this:

version: '2'

services:
    build:
        container_name: php-and-symfony-build
        image: matthiasnoback/php-and-symfony-build
        build:
            context: ./
            dockerfile: docker/build/Dockerfile
        volumes:
            - ./:/opt

By running docker-compose build build (which builds the build container ;)) and then docker-compose run build, we get the message "Nothing to be done" from make (as expected, since by default it runs the nothing target we defined earlier). We can run other specific Make targets by adding an extra argument, e.g. docker-compose run build install to install project dependencies.

To be honest, I don't think I've ever seen anyone use make as the entrypoint for a Docker container, so consider this an experiment. Also, the targets could be arranged a little smarter (they should probably be more specialized), and I didn't add a "watch" target to rebuild upon file changes.

The volumes directive instructs docker-compose to mount the local working directory (the root directory of the project) as /opt inside the container. This means that all changes made by the development tools inside the build container actually happen on the host machine. This, of course, is very common in a development environment.

Next up: the blog container

In the next post we'll look at how to set up the blog container, which will serve the generated website (for preview and deployment purposes).

By the way, since I'm learning Docker right now, I'm really hoping to receive some feedback from more experienced Docker users - feel free to use the comment form below!
