About 2 years ago I created a package that combines the power of two famous Symfony components: the Form component and the Console component. In short: this package allows you to interactively fill in a form by typing in the answers at the CLI. When I started working on it, this seemed like a pretty far-fetched idea. However, it made a lot of sense to me in terms of the ports & adapters architecture that I was looking for back then (and still am, by the way). We could (and often should) write the code in our application layer in such a way that it doesn't make a big difference whether we call application services from a web controller or from a CLI "controller".
As described by Bernhard Schussek (author of the Symfony Form component) in his article Symfony2 Form Architecture, the Form component strictly separates model and view responsibilities. It also makes a clear distinction between processing a form and rendering it (e.g. in an HTML template). If you're familiar with Symfony forms, you already know this. You first define the structure and behavior of a form, then you convert it to a view (which is basically a DTO). This simple data structure is used to render HTML.
This strict separation of concerns has brought us a well-designed, yet fairly complicated form component. In my quest for "console forms", this was a great gift: it was actually quite easy to render form fields to the terminal output, instead of to an HTTP response.
I decided to rely on the existing Question Helper which allows console commands to ask questions to the user. This basically left me with the need for "bridging the gap" between the Form and Console component. This is a quick list of things I needed to fix:
- Different form fields require different console question types. A `choice` type form field matches more or less with a `ChoiceQuestion`. But since the mapping isn't one-to-one, I introduced the concept of a `FormToQuestionResolver`, which configures a `Question` object in such a way that the user experience matches that of a web form field. For example: a `password` type form field gets transformed into a `Question` with hidden output.
- It's relatively easy to ask single questions (e.g. "Your name: "), but it was a bit harder to ask for "collections" (like "Phone number 1: ", "Phone number 2: ", etc.). I fixed this by introducing something called a `FormInteractor`, which is asked to arrange any number of user interactions required to fill in the form.
- CSRF tokens aren't needed for console commands, so CSRF protection is automatically disabled.
- In some places this package relies on behaviors of the Form and Console component that are not covered by the Symfony Backwards Compatibility Promise. This means that for every minor release the package is bound to break for some (often obscure) reason. I have kept up with Symfony 2.8, 3.0, 3.1 and 3.2, but if things get too hard to maintain, I will consider dropping some versions.
A short list of changes in external libraries that have been causing trouble so far (I'm by no means criticizing the Symfony development team for this, it just might be interesting to share this with you):
- The behavior of a `ChoiceQuestion` changed at some point, switching what was returned as the selected value: instead of the selected "key" of a choice, it started returning the selected "label". See also my custom solution, `AlwaysReturnKeyOfChoiceQuestion`. Furthermore, around the same time the Form component switched the behavior of its `ChoiceType` form type, accepting an array of `"label" => "data"` pairs, where I was used to providing the exact opposite (an array of `"data" => "label"` pairs).
- Of course, there was the major overhaul of the form type system, where form types changed to being fully-qualified class names instead of simple names. I wanted to keep supporting both styles, so this took some time to get right. At some point, this became simply too complex to maintain, so I dropped support for old-style form types.
Follow the instructions from the README of the project to install the package and register it in your project as a Symfony bundle. Then define a Symfony form type like this:
```php
class DemoType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $builder->add('name', TextType::class, [
            'label' => 'Your name',
            'required' => true,
            'data' => 'Matthias'
        ]);

        // maybe add some more fields
    }

    public function configureOptions(OptionsResolver $resolver)
    {
        $resolver->setDefaults([
            'data_class' => 'Some\Namespace\Demo'
        ]);
    }
}
```
Now create a data class for your form, like this:
```php
class Demo
{
    public $name;

    // add some more properties to match the form fields
}
```
Finally, create a console command, like this one:
```php
class DemoCommand extends Command
{
    protected function configure()
    {
        $this->setName('form:demo');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        /** @var FormHelper $formHelper */
        $formHelper = $this->getHelper('form');

        $formData = $formHelper->interactUsingForm(new DemoType(), $input, $output);

        // $formData is the valid and populated form data object
    }
}
```
Register it as a console command and run it: `bin/console form:demo`. The output will look something like this (provided you've also added an email field and a country choice field):
Although it took some fiddling, along the way I found out that the Form component is in fact very suitable for types of input/output other than just plain old HTTP requests/responses.
I also found out that it was impossible (and still is, I think) to globally register:

- styles for formatting console messages,
- command helpers (like the `FormHelper` used in the example above).
So I created event listeners and compiler passes to accomplish this. These may deserve their own bundle at some point. Feel free to create it, based on the code from this package.
The main use case for me so far was to demonstrate, in my workshops, that application services can be agnostic of their delivery mechanism. But I've heard of projects adopting this package to allow installation wizards to be used from the CLI as well as the web.
This package might make it easier for you to get started with interactive console commands if you already know how to work with forms - in that case, you don't need to learn anything new.
Anyway, I'd love to hear what you're doing with it!
In the previous posts we looked at creating a build container, and after that we created a blog container, serving our generated static website.
It's quite surprising to me how simple the current setup is — admittedly, it's a simple application too. It takes about 50 lines of configuration to get everything up and running.
The idea of the `blog` container, which has `nginx` as its main process, is to deploy it to a production server whenever we feel like it, in just "one click". There should be no need to configure a server to host our website, and it should not be necessary to build the application on the server either. This is in fact the promise, and the true power, of Docker.
Running containers on a remote server requires two things:
- The server should be able to retrieve the container's image.
- The Docker engine should be running on the server.
Pushing the container image to Docker Hub
The first step is quite easy. You can create an account at the (default) image registry, Docker Hub. There are alternatives, but this seems like the usual place to start. You need to provide the full image name in `docker-compose.yml` (as we did in the previous post).
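For reference, the relevant part of `docker-compose.yml` might look something like this (a sketch: `your-account` is a placeholder for your actual Docker Hub username; the `blog` service name comes from this post):

```yaml
blog:
    # the full image name, including the registry account, so it can be pushed
    image: your-account/blog
```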
You can now build the image on your machine using `docker-compose build blog`, and then push that image to Docker Hub by running `docker-compose push blog`. On the production server, it will later be possible (see below) to pull the container image from the registry by running `docker-compose pull blog`.
Deployment to Digital Ocean using Docker Machine
Now that the container image has been pushed to Docker Hub, we can continue with the next step: installing the Docker engine on the server. You can do it manually, which I did at first. However, I thought it would be a nice occasion to learn about another tool called Docker Machine that performs this task in an automated fashion: it remotely provisions a server, making it ready to run Docker containers.
I already had an account at Digital Ocean, so I just followed the steps described in the Digital Ocean example documentation page. Basically, you let `docker-machine` create a new "droplet" for you, which is a nice name for a virtual private server (VPS). Once you have done this, you can run `docker` (and consequently `docker-compose`) commands on the remote server, from your own laptop. It wasn't entirely clear to me at first, but it works by populating some specific environment variables, which influence the behavior of the `docker` command.
First I provisioned my server by running:

```
docker-machine create --driver digitalocean --digitalocean-access-token secret-api-token php-and-symfony-blog
```
After some time I could run `docker-machine env php-and-symfony-blog`, which showed something like:

```
# Run this command to configure your shell:
# eval $(docker-machine env php-and-symfony-blog)
```
So I followed the instructions and ran `eval $(docker-machine env php-and-symfony-blog)`. From that moment on I could run any `docker` command and it would be executed against the Docker engine running on the remote server, but — and this is why it's so awesome — based on the configuration files available on the host machine.
This means that I can simply run the following commands from my project root directory:

```
eval $(docker-machine env php-and-symfony-blog)
docker-compose -f docker-compose.yml pull blog
docker-compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog
```
This pulls the previously pushed `blog` image from Docker Hub, then starts running the `blog` container. Running `docker-compose ps` reveals that indeed, the `blog` container is now up and running, serving the website at port 80 as it should.
Since the environment variables produced by `docker-machine env` cause `docker` commands to transparently run against the remote server from now on, you should not forget to unset these environment variables when you want to communicate with your locally installed Docker engine. Florian Klein pointed out an easy way to accomplish this in the comment section:

```
eval $(docker-machine env -u)
```
Some last suggestions:
- It may be a good idea to write another Make file containing recipes for the above actions (e.g. creating and provisioning a server, if you want that to be a reproducible thing; building, pushing and running a container image, etc.).
- Read more about Docker, Docker Compose, Docker Hub (and possibly Docker Machine) by browsing through their documentation pages. Digital Ocean also provides lots of useful documentation, tutorials and guides.
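As a sketch of that first suggestion, a Makefile along these lines could bundle the build, push and deploy steps (the `php-and-symfony-blog` machine name and the `docker-compose` invocations are taken from this post; the target names are my own, so adjust everything to your setup):

```makefile
.PHONY: build push deploy

# Build the blog image locally
build:
	docker-compose build blog

# Push the built image to Docker Hub
push:
	docker-compose push blog

# Pull and (re)start the image on the server provisioned with Docker Machine;
# the eval and the compose commands share one shell, so the env vars carry over
deploy:
	eval $$(docker-machine env php-and-symfony-blog) && \
		docker-compose -f docker-compose.yml pull blog && \
		docker-compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog
```

Running `make build push deploy` would then perform the whole cycle in one go.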
Again: it's all pretty simple, very cool and highly rewarding. I like the fact that:
- I'm in full control of every software dependency of my application.
- I don't have to manually install anything on the production server.
- I won't be afraid to destroy my VPS, since it's very easy to bring a new one up again.
Of course, we have to be very honest about our achievements: once we go down this road, containerizing larger or more interconnected applications, we may soon get into trouble. I'm personally setting out on a journey to learn much more about this, so expect more on this topic soon.
In the previous post we looked at the process of designing a `build` container, consisting of all the required build tools for generating a static website from source files. In order to see the result of the build process, we still need to design another container, which runs a simple web server, serving the static website (mainly HTML and CSS files).
We'll use a light-weight install of Nginx as the base image and simply copy the website files to the default document root (`/usr/share/nginx/html`), only after removing any placeholder files that are currently inside that directory. The complete file `docker/blog/Dockerfile` looks like this:
```
# a light-weight Nginx base image
FROM nginx:alpine

RUN rm -rf /usr/share/nginx/html
COPY output /usr/share/nginx/html
```
Eventually, I want to turn this into something more advanced, by configuring SSL, and by making the pages "auto-fast" with the Pagespeed module developed by Google. But for now, this basic image is just fine (and pretty fast).
Let's add the `blog` container to `docker-compose.yml`:

```yaml
blog:
    # tag the image, so we can later push it
    image: your-account/blog
    # should Nginx crash, always restart it
    restart: always
    # treat port 80 of the host as port 80 of the container
    ports:
        - 80:80
```

(Here `your-account/blog` stands in for the full image name, including your own Docker Hub account.)
Remember I've used `docker-compose.override.yml` to define development-specific configuration for Docker? Since we're only building the container in a development environment, the `build` configuration for the `blog` container only needs to be available in `docker-compose.override.yml`:

```yaml
# already defined in the previous post...
blog:
    build:
        context: .
        dockerfile: docker/blog/Dockerfile
    # Nginx should pick up local changes to files in ./output
    volumes:
        - ./output:/usr/share/nginx/html
```
For development purposes, we make sure that the current contents of the `output/` directory will always be available for Nginx to serve. To achieve this, we only need to mount `output/` as a volume at Nginx's default document root location.
After building the website files using `docker-compose run build all`, we can start serving the blog with `docker-compose up -d blog`. We use `up -d` to start the web server in detached mode and keep it running. We can now look at the website by opening `http://localhost` in a browser.
Next up: deploying the `blog` container.
The promise of Docker to me was: producing a build artifact that can travel through a build pipeline and eventually be deployed as-is to a production server. Deploying a static website is particularly easy now that we have a simple
blog container that really is a self-contained web server. We'll look into deployment in the next post.