Comments on stack topology for PHP application

Hi All,

Following on from some general questions I posted here https://forums.docker.com/t/dockerized-php-application-architecture-best-practices/3705/2 I’d like to provoke some discussion among you smarter folks about a dilemma in structuring an app…

Has anyone been running rancher with a php+nginx setup in production (or even heavy development)? How have you laid out your containers? (nginx & php-fpm as 2 containers, or a single “app server” container?) If it’s the former, why did you do it, and has there been any particular gain?

Considering separate PHP & Nginx containers linked together, would it make sense to decouple them? (i.e. scale them separately depending on load, like 1 nginx + 2 php-fpm?) And how would one make sure that a certain nginx container is accessing the “local” php instance? (All the Dockerfiles and docs I’ve seen mentioning php-fpm & nginx containers use TCP communication between them… would it be wise or even sane to consider pointing nginx at 127.0.0.1:9000 as opposed to php:9000?) Assuming every physical host has 1 container of each, how would I make sure the webservers use their adjacent PHP containers?
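For concreteness, the TCP-based wiring all those docs use looks roughly like this (the `php` host name assumes a linked service with that name, and port 9000 is just the default php-fpm port):

```nginx
# Typical nginx config for a linked php-fpm container.
# "php" is the linked service/host name; this is the line you would
# swap for 127.0.0.1:9000 if both processes shared one container.
upstream php_backend {
    server php:9000;
}

server {
    listen 80;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass php_backend;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```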

Thanks for any ideas and insights on these matters!

For me the benefits of having two containers are:

  1. Separation of concerns.
  2. PHP processes are way more resource-consuming than NGinx and will have to be scaled up more often (maybe also an interesting read).
  3. Running multiple versions of PHP becomes easier, as they are decoupled.

Last week I spent some time trying to get a good setup working, but so far I haven’t found any good solution for separate NGinx and PHP containers. I’ll list the things I’ve tried below; I hope you find it useful.

A. Building two separate images

PHP:

FROM php:7-fpm

ADD ./php/php.ini /usr/local/etc/php/php.ini
ADD ./src /var/www/html

NGinx:

FROM nginx

ADD ./nginx/default.conf /etc/nginx/conf.d/default.conf
ADD ./src /var/www/html

docker-compose.yml:

php:
    image: <your_php_image>
    links:
        - memcached
        - mysql
nginx:
    image: <your_nginx_image>
    links:
        - php
    ports:
        - "80:80"

Every time you want to deploy something to production, both a PHP image and an NGinx image need to be built, each containing your source code. This solution just makes me feel uncomfortable and doesn’t seem very DRY.

B. Volumes-From

php:
    image: <your_php_image>
    links:
        - memcached
        - mysql
nginx:
    image: nginx
    links:
        - php
    volumes_from:
        - php
    ports:
        - "80:80"

In this example only one image needs to be built, which will contain your application. This image might also contain the configuration files for NGinx if you like. If not, you only need to rebuild the nginx container when some config file needs to be updated. In Rancher we need to use a sidekick if we want to use volumes_from:

php:
    image: <your_php_image>
    links:
        - memcached
        - mysql
nginx:
    image: nginx
    volumes_from:
        - php
    ports:
        - "80:80"
    labels:
        io.rancher.sidekicks: php

This seemed like a good solution to me; however, the problem is that sidekicks are not allowed to have any links. Because PHP links to the memcached and mysql services, this won’t work. Too bad! Maybe someone knows how to bypass this?

C. Volumes

php:
    image: php
    volumes:
        - /volume/src:/var/www/html
    links:
        - memcached
        - mysql
nginx:
    image: nginx
    links:
        - php
    volumes:
        - /volume/src:/var/www/html
    ports:
        - "80:80"

A third option could be to use something like Convoy to share an underlying filesystem which both the PHP and NGinx containers could mount the source code from. I’ve been using this method on AWS Elastic Beanstalk and it works great for me there, but I haven’t tested it on Rancher yet. The benefit here is that you can just use public repositories, as there is no sensitive information in your containers. The thing I’m uncertain about is how to do rolling updates when a shared filesystem is used. Maybe someone has an idea about this?
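A rough sketch of what that could look like with a volume driver (the `convoy` driver name and the `src` volume name are assumptions on my part, untested on Rancher):

```yaml
php:
    image: <your_php_image>
    volume_driver: convoy
    volumes:
        - src:/var/www/html
    links:
        - memcached
        - mysql
nginx:
    image: nginx
    volume_driver: convoy
    volumes:
        - src:/var/www/html
    links:
        - php
    ports:
        - "80:80"
```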

In the end I might just go with a single php-nginx container as it simplifies the whole process :astonished:.
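If anyone ends up going that single-container route, the usual trick is a small process supervisor so both nginx and php-fpm stay in the foreground; a minimal Dockerfile sketch (package names and config paths here are assumptions, not a tested image):

```dockerfile
FROM php:7-fpm

# Install nginx and supervisor alongside php-fpm
RUN apt-get update && apt-get install -y nginx supervisor \
    && rm -rf /var/lib/apt/lists/*

ADD ./nginx/default.conf /etc/nginx/conf.d/default.conf
ADD ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ADD ./src /var/www/html

# supervisord runs both nginx and php-fpm in the foreground
CMD ["supervisord", "-n"]
```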

Like RVN_BR I also would like to see other solutions / ideas :smile:

Great! Thanks for the detailed response boedy…

I was experimenting with 4 containers: nginx + nginx-config, php-fpm + php-config… However, since I had to make some changes to PHP (add extensions etc.), getting the config files into it posed a problem (I wasn’t able to write a PHP config and then add a volume, as the volume was overwriting it… an option would be to add another PHP config dir to scan, but I couldn’t get it to work, despite there existing an ENV var in the PHP buildfile that adds to ./configure)…

Anyway, I ended up with 1 nginx + nginx-config, and 1 php-fpm (including configs & mods)…

It’s still pretty easy to upgrade, and I may even make a change to use just 1 nginx container, since there is the sidekicks + volumes_from incompatibility… (although I seem to remember something about this being fixed)…

1 small edit: I do have the application code in a data-only container, so it’s nginx + nginx-config + datavol, and php + datavol right now… I need to check on the incompatibility, as I really like the idea of the app code being separate…
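In compose terms, the data-only container pattern looks something like this (image names and the mount path are placeholders, not my actual files):

```yaml
data:
    image: <your_app_code_image>
    command: "true"
    volumes:
        - /var/www/html
php:
    image: <your_php_image>
    volumes_from:
        - data
    links:
        - memcached
        - mysql
nginx:
    image: <your_nginx_image>
    volumes_from:
        - data
    links:
        - php
    ports:
        - "80:80"
```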

Have you gotten the nginx+nginxconfig+datavol, and php+datavol working? Are you using the one data container for that or two separate containers?

Could you maybe share your docker-compose.yml?

Hi @boedy… sorry for the delay… I got it to work with docker-compose, but it’s not working with Rancher…

I’m having trouble running Rancher in general, but I guess it’s just me… I’ve tried 3–4 different machines, workstations, laptops and a server… but Rancher hasn’t been working for me for the last 4–5 releases… Whenever something tries to deploy, it disconnects all the agents and servers… Some problem related to networking as far as I can tell… some issue with the Vagrantfile I suppose, but I can’t pinpoint it right now…

Using docker-compose on its own is much easier and more reliable for now… I may look elsewhere, despite Rancher being the most complete offering I’ve found for creation, “block building” and an overall “PaaS-like” solution I can really put my hands around… I guess the Rancher guys are spread thin with Rancher, RancherOS and the ton of other (great) projects they are working on… It will probably be a while until Rancher is ready… I’d personally use it with CoreOS, not something “brand new” like RancherOS in production… but that’s just my opinion…

I think I’ll be looking elsewhere for now, as it’s just not stable enough anymore… (It was better for me around 0.42, 0.43, etc… the latest releases don’t even pass a Vagrant test… it’s difficult to push something like this into real testing when the Vagrant test case fails at EVERY stack launch…) It’s a pity really… Were they focused on Rancher only (not all the other projects), I’m sure this could give Tectonic a run for its money…

Most of my comments in Deploy to Rancher button apply here as well. Please run Rancher v0.51.x with CoreOS Beta; you’ll be much happier. I wanted to let you know that we are building full support such that real docker-compose, not just rancher-compose, will work in Rancher. In the past we really couldn’t use docker-compose as it lacked the ability to run across multiple nodes. Now with Swarm 1.0 much of this has been addressed, so you will be seeing support for docker-compose working in Rancher while still being able to leverage our LB, networking, and all other management functionality.

Cool!

I ended up with: 1 nginx container based on the official nginx image, which includes the config files and extra required packages; 1 php container based on the official php image, with the additional config files and additional configure/extension commands… and 1 container with the HTML/PHP files…

I had to create 2 data containers due to the container-linking limitation in Rancher… so basically I have 2 identical containers, one linked to php and one linked to nginx (same Dockerfile, different container name)… I’ll try again with the CoreOS beta, as it just crashed the RancherOS install I was trying…

Using just docker-compose would be cool… my setup works much more smoothly with docker-compose… Is there any documentation on how to use docker-compose with Swarm instead? Or is this still unreleased?

thanks!

Has anyone gotten something similar to the above working? Whenever the stack is starting, I keep getting an nginx error “1/15/2016 1:00:31 PM nginx: [emerg] host not found in upstream “php5” in /etc/nginx/sites-enabled/site.conf:16”… basically it’s as if the nginx container isn’t seeing the php container for some reason…

Nonetheless, if I remove the upstream and get the container to start, I can ping the php container and even check that port 9000 is open with nmap…

Running the same docker-compose file with docker-compose works… :confused:

This is the docker-compose file… I removed basically everything and left only the 2 containers… If I remove the upstream configuration, the nginx container works without php… (I tried using fastcgi_pass directly without an upstream; same thing…)

php5:
    build: php5-fpm
    ports:
        - 9000:9000
nginx:
    build: nginx
    ports:
        - 8080:80
    links:
        - php5

Then I came across this article http://serverfault.com/questions/341810/nginx-failing-to-resolve-upstream-names-on-reload-even-if-they-do-resolve-by-the so I removed the nginx process as the main process and tried executing it after the container was up… That made it work…

So I reasoned that the nginx process must be trying to resolve the linked container before the container network is in place, or something to that effect… So I changed my Dockerfile to use an entrypoint shell script and added a 30-second delay before starting nginx… With that it works… (EDIT: a 5-second sleep also does the trick and improves container spin-up time a bit)…
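For anyone wanting the same workaround, here is a sketch of what such an entrypoint could look like — polling instead of a fixed sleep (the `php5` service name and port 9000 match my compose file above; the probe relies on bash’s `/dev/tcp`, so the base image needs bash):

```shell
#!/bin/bash
# entrypoint.sh -- poll the linked php-fpm container, then start nginx.

wait_for() {
    host="$1"; port="$2"; tries="${3:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        # bash-only /dev/tcp probe; one retry per second
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "timed out waiting for $host:$port" >&2
    return 1
}

# Only start nginx when invoked as the container entrypoint ("start"),
# so the wait_for function can also be sourced and tested on its own.
if [ "${1:-}" = "start" ]; then
    wait_for php5 9000 30 && exec nginx -g 'daemon off;'
fi
```

With `ENTRYPOINT ["/entrypoint.sh", "start"]` in the Dockerfile, nginx only launches once php-fpm actually answers on 9000, instead of racing the link setup.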

Nonetheless, I think this may be a bug, or at least something worth mentioning… Basically, in some cases where starting a service depends on resolving/connecting to another linked container, the container startup fails (and keeps retrying) because the inter-container networking is not “up” yet or not resolving (I’m not sure if it’s not resolving or not connecting…)