Tutorial for full-stack deployment (e.g. WordPress + DB cluster + Gluster)

Hi,

I tried following several of the tutorials on rancher.com, but they all seem outdated and no longer work…

I was almost able to get http://rancher.com/building-microservices-with-docker-on-the-new-rancher-beta/ working, but it appears to use 0.23, which is really old…?

Apparently I was able to deploy everything, Gluster working, etc., but when I hit the IP of an LB I just get a blank screen instead of WordPress… (I did change the health check over to the WordPress service instead of the LB.) All containers look to be up and running…

Is there an up-to-date tutorial? I’d like to investigate getting a custom PHP app working with Redis, Gluster, and a DB (MariaDB/MySQL or XtraDB? I need HA for the DB), and maybe some other things like RethinkDB and ELK… but one step at a time…

I was trying to follow the “stock” tutorial, but even this isn’t giving me much love :smile:

Are there any up-to-date tutorials one can follow?

(Also, I suggest adding a clear note to all of the old rancher.com posts saying they won’t work with current versions, etc…)

Thanks!

Just scanning through the post quickly, the only thing that’s majorly changed is configuring health checks… so if you just skip that part you should have a balancer that works. I don’t think your problem from the other thread is actually related to the steps taken in Rancher.
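
To illustrate, a minimal rancher-compose sketch for the services in that post with the health-check section simply left out (scales are just examples; nothing else should need to change):

    pxc:
      scale: 1
    wordpress-lb:
      scale: 1
      load_balancer_config:
        name: wordpress-lb config
    wordpress:
      scale: 4    # no health_check block; the balancer still round-robins to the linked service
    gluster:
      scale: 1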

Just redid the entire setup…

Here are my yml files:

docker-compose

    pxc:
      restart: always
      environment:
        PXC_SST_PASSWORD: sstpassword
        PXC_ROOT_PASSWORD: master
      labels:
        # one container on every host whose label matches the affinity below
        io.rancher.scheduler.global: 'true'
        io.rancher.scheduler.affinity:host_label: target.service=database
      tty: true
      image: nixel/rancher-percona-xtradb-cluster:v1.1.3
      volumes:
      - /var/lib/mysql:/var/lib/mysql
      stdin_open: true
    wordpress-lb:
      ports:
      - 80:80
      restart: always
      tty: true
      # Rancher's built-in balancer; its backends come from the link below
      image: rancher/load-balancer-service
      links:
      - wordpress:wordpress
      stdin_open: true
    wordpress:
      restart: always
      environment:
        DB_PASSWORD: master
      labels:
        io.rancher.scheduler.affinity:host_label: target.service=web
      tty: true
      image: nixel/rancher-wordpress-ha:v1.1
      links:
      # link aliases: gluster is reachable as "storage", pxc as "db"
      - gluster:storage
      - pxc:db
      privileged: true
      stdin_open: true
    gluster:
      restart: always
      environment:
        ROOT_PASSWORD: master
      labels:
        io.rancher.scheduler.global: 'true'
        io.rancher.scheduler.affinity:host_label: target.service=storage
      tty: true
      image: nixel/rancher-glusterfs-server:v2.3
      privileged: true
      volumes:
      - /gluster_volume:/gluster_volume
      stdin_open: true

rancher-compose:

    pxc:
      scale: 1
    wordpress-lb:
      scale: 1
      load_balancer_config:
        name: wordpress-lb config
    wordpress:
      scale: 4
      health_check:
        port: 80
        interval: 2000
        unhealthy_threshold: 3
        request_line: GET /healthcheck.txt HTTP/1.0
        healthy_threshold: 2
        response_timeout: 2000
    gluster:
      scale: 1
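
Side note: since the health check above requests GET /healthcheck.txt, I assume that file has to exist in the webroot; if it does not, every backend gets marked unhealthy and HAProxy has nothing to send traffic to. A quick way to confirm from inside one of the wordpress containers (the path and port are my guesses for this image):

    curl -i http://localhost/healthcheck.txt   # expect 200 OK, otherwise the health check can never pass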

(At the end of the test I scaled the LB to 1 to check the logs…) Here is the LB log:

10/8/2015 1:33:50 PM INFO: Downloading agent http://mgmt.rancher.neoassist.com:8080/v1/configcontent/configscripts
10/8/2015 1:33:50 PM INFO: Updating configscripts
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//configscripts current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/configscripts/configscripts-1-f5763391fb7914dcd14001f29ffe28d1167a3bfcc0ee0ec05d2ca9c722103c02/apply.sh
10/8/2015 1:33:50 PM INFO: Sending configscripts applied 1-f5763391fb7914dcd14001f29ffe28d1167a3bfcc0ee0ec05d2ca9c722103c02
10/8/2015 1:33:50 PM INFO: Updating agent-instance-startup
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//agent-instance-startup current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/agent-instance-startup/agent-instance-startup-1-d10e0fcba01455f57a9bed779d117ffb650eeae46c70627a71832d2f6e4d93bf/apply.sh
10/8/2015 1:33:50 PM INFO: Updating services
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//services current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/services/services-1-061405f3edd960bfdfe1cfb8447be40eab5b4b608731608e224cc51c5dc30b91/apply.sh
10/8/2015 1:33:50 PM INFO: HOME -> ./
10/8/2015 1:33:50 PM INFO: HOME -> ./services
10/8/2015 1:33:50 PM INFO: Sending services applied 1-061405f3edd960bfdfe1cfb8447be40eab5b4b608731608e224cc51c5dc30b91
10/8/2015 1:33:51 PM INFO: Getting agent-instance-scripts
10/8/2015 1:33:51 PM INFO: Updating agent-instance-scripts
10/8/2015 1:33:51 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//agent-instance-scripts current=
10/8/2015 1:33:51 PM INFO: Running /var/lib/cattle/download/agent-instance-scripts/agent-instance-scripts-1-4b5124bd74cd423f98d57550b481ec77ec3a7135c6a650886ab95c043305d642/apply.sh
10/8/2015 1:33:51 PM INFO: HOME -> ./
10/8/2015 1:33:51 PM INFO: HOME -> ./events/
10/8/2015 1:33:51 PM INFO: HOME -> ./events/ping
10/8/2015 1:33:51 PM INFO: HOME -> ./events/config.update
10/8/2015 1:33:51 PM INFO: Sending agent-instance-scripts applied 1-4b5124bd74cd423f98d57550b481ec77ec3a7135c6a650886ab95c043305d642
10/8/2015 1:33:51 PM INFO: Getting monit
10/8/2015 1:33:51 PM INFO: Updating monit
10/8/2015 1:33:51 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//monit current=
10/8/2015 1:33:51 PM INFO: Running /var/lib/cattle/download/monit/monit-1-d166a713486fe4c0f039e152939d8d3b8ab8e6c2518e13d88f0b7ec68da46109/apply.sh
10/8/2015 1:33:51 PM INFO: ROOT -> ./
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/monit/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/monit/monitrc
10/8/2015 1:33:51 PM INFO: Sending monit applied 1-d166a713486fe4c0f039e152939d8d3b8ab8e6c2518e13d88f0b7ec68da46109
10/8/2015 1:33:51 PM INFO: HOME -> ./
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/cattle/
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/cattle/startup-env
10/8/2015 1:33:51 PM INFO: ROOT -> ./
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/init.d/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/init.d/agent-instance-startup
10/8/2015 1:33:51 PM INFO: Sending agent-instance-startup applied 1-d10e0fcba01455f57a9bed779d117ffb650eeae46c70627a71832d2f6e4d93bf
10/8/2015 1:33:51 PM monit: generated unique Monit id d85e2bac11d61c51f52b7ccf8b47dca6 and stored to '/var/lib/monit/id'
10/8/2015 1:33:51 PM Starting monit daemon with http interface at [localhost:2812]

When I try to access it I just get a blank page…

I scaled WP down to 1 container to look at the logs (only at the end of my tests), and here is the log for one WP container:

10/8/2015 1:34:08 PM => Checking if I can reach GlusterFS node 10.42.166.44 ...
=> GlusterFS node 10.42.166.44 is alive
=> Mounting GlusterFS volume ranchervol from GlusterFS node 10.42.166.44 ...
/usr/lib/python2.7/dist-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
  'Supervisord is running as root and it is searching '
2015-10-08 16:34:18,426 CRIT Supervisor running as root (no user in config file)
2015-10-08 16:34:18,426 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2015-10-08 16:34:18,446 INFO RPC interface 'supervisor' initialized
2015-10-08 16:34:18,447 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2015-10-08 16:34:18,447 INFO supervisord started with pid 56
2015-10-08 16:34:19,449 INFO spawned: 'haproxy' with pid 59
2015-10-08 16:34:19,450 INFO spawned: 'nginx' with pid 60
2015-10-08 16:34:19,451 INFO spawned: 'php5-fpm' with pid 61
2015-10-08 16:34:20,489 INFO success: haproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-10-08 16:34:20,489 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-10-08 16:34:20,489 INFO success: php5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

After scaling WP down, the LB began taking a few seconds to respond (with an empty page), and at one point gave a single 503 (as if HAProxy didn’t have any backends), but then it stopped giving errors… just a blank page…
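
For what it’s worth, plain curl against the balancer distinguishes the two failure modes (nothing Rancher-specific here; <lb-host-ip> is whichever host the LB runs on):

    curl -i http://<lb-host-ip>/   # 503 = HAProxy has no healthy backends; 200 with an empty body = the request reached WordPress and the blank page is an app problem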

Thanks for any hints!

Posting this as a separate reply: it turns out there was an empty wp-config.php file, and WordPress craps out when that happens…

I’m unsure what to use here, however, as I suppose DB_PASSWORD and DB_HOST should be automatic? (A load balancer for the DB host?) Anyway… I’m one step closer, but oh so far, LOL

In the container’s ENV I find DB_HOST, DB_USER and DB_PASSWORD, but these values don’t work…

I can ping the DB host but cannot connect with mysql from the wordpress container shell.
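
For reference, this is roughly what I am testing from inside the wordpress container shell (the db alias comes from the pxc:db link in my docker-compose above, and master is the PXC_ROOT_PASSWORD I set; the database name itself is still an unknown to me):

    ping -c 3 db                                        # managed-network reachability: this works
    mysql -h db -u root -pmaster -e 'SHOW DATABASES;'   # an actual MySQL connection: this is what fails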

This might help describe what needs to go in the wp-config.php. I have not been able to try setting it up myself.
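
For what it’s worth, a minimal wp-config.php usually has this shape. This is only a sketch, not something I have verified against that image: the DB_* env var names match what you found, but the database name, the keys/salts, and whether php-fpm actually passes env vars through (it strips them unless clear_env is disabled in its pool config) are all assumptions you’d need to check:

    <?php
    // Sketch only; assumes the DB_* env vars are visible to PHP.
    define('DB_NAME',     getenv('DB_NAME') ?: 'wordpress'); // database name is a guess
    define('DB_USER',     getenv('DB_USER'));
    define('DB_PASSWORD', getenv('DB_PASSWORD'));
    define('DB_HOST',     getenv('DB_HOST'));
    define('DB_CHARSET',  'utf8');
    define('DB_COLLATE',  '');
    $table_prefix = 'wp_';
    define('AUTH_KEY', 'put-a-unique-phrase-here'); // plus the other seven keys/salts
    if (!defined('ABSPATH')) define('ABSPATH', dirname(__FILE__) . '/');
    require_once(ABSPATH . 'wp-settings.php');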

Are you able to ping the mysql container IP from the wordpress container shell? If not, it sounds like you have a managed-network issue, which might need to be looked into. You can read our troubleshooting FAQs to see if that helps.

http://docs.rancher.com/rancher/faqs/troubleshooting/

Hey Denise, it turns out I actually was able to get things working after deleting wp-config.php and running the setup…

It has been quite flaky, however, and if anyone comes across a more “commented” example I’d be very interested in checking it out…

In particular, I’d also be interested in the different setups being used for production systems… which databases are being used, and how (Convoy? Other EBS-type systems? Replication? etc.). There is a pretty neat ELK post, so I think that might cover that part, but I’m still interested in the other areas of a “full stack”…

Our stack includes several DB backends (MySQL, Redis, ES), different app servers such as nginx+php-fpm and Node.js, etc… I’d like to see different full deployments…

I think a “getting started” guide that shows how to use the internal Rancher workings would be a very positive thing… Right now some of the magic (no matter how fascinating it is) seems a bit too “magic”, and that’s not how we want our systems to run, LOL… I’d really like to understand how services are registered, broadcast, load-balanced, etc…
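
To make that concrete, the only introspection I’ve managed so far is from inside a container: the link aliases from docker-compose resolve on the managed network, so standard tools show at least part of the wiring (“db” and “storage” are the aliases from my compose file above):

    getent hosts db storage          # what the link aliases resolve to
    env | grep -E '^DB_|^STORAGE_'   # the link-style env vars injected at start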

I’m extremely interested in the potential of cross-data-center management with Rancher, as well as other things…

Some one-liners that come to mind:

  • Assuming I have multiple hosts, how does one add automatic failover/multiple load balancers or a similar setup? Does Rancher support some sort of virtual-IP setup across multiple hosts?
  • In some examples containers are started with environment variables for picking up hostnames, etc… but it seems some of them are using local load balancers as well as remote hosts? How does that work, and is it always done on a one-by-one basis (e.g. the run.sh script in the microservices examples), or is there some method that builds it for you, or at least a methodology to adhere to?
  • Are there any best practices/example services for common services? (As mentioned above: RDBMSs like MySQL, PXC, MariaDB, Postgres; NoSQL like Elasticsearch, RethinkDB, Redis; load balancer examples, including user-facing load balancing as well as internal service load balancing…)

I’ll try to devote a bit more time to this and explore, but if there are some ready-to-see similar cases it would definitely make things easier… I’ve found that things tend to “break” in weird ways, and I’m confident that knowing more about the non-Docker elements that “glue” the Docker containers into services and stacks would really ease the transition…
