Just redid the entire setup…
Here are my yml files:
docker-compose.yml:

pxc:
  restart: always
  environment:
    PXC_SST_PASSWORD: sstpassword
    PXC_ROOT_PASSWORD: master
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.scheduler.affinity:host_label: target.service=database
  tty: true
  image: nixel/rancher-percona-xtradb-cluster:v1.1.3
  volumes:
    - /var/lib/mysql:/var/lib/mysql
  stdin_open: true
wordpress-lb:
  ports:
    - 80:80
  restart: always
  tty: true
  image: rancher/load-balancer-service
  links:
    - wordpress:wordpress
  stdin_open: true
wordpress:
  restart: always
  environment:
    DB_PASSWORD: master
  labels:
    io.rancher.scheduler.affinity:host_label: target.service=web
  tty: true
  image: nixel/rancher-wordpress-ha:v1.1
  links:
    - gluster:storage
    - pxc:db
  privileged: true
  stdin_open: true
gluster:
  restart: always
  environment:
    ROOT_PASSWORD: master
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.scheduler.affinity:host_label: target.service=storage
  tty: true
  image: nixel/rancher-glusterfs-server:v2.3
  privileged: true
  volumes:
    - /gluster_volume:/gluster_volume
  stdin_open: true
rancher-compose.yml:

pxc:
  scale: 1
wordpress-lb:
  scale: 1
  load_balancer_config:
    name: wordpress-lb config
wordpress:
  scale: 4
  health_check:
    port: 80
    interval: 2000
    unhealthy_threshold: 3
    request_line: GET /healthcheck.txt HTTP/1.0
    healthy_threshold: 2
    response_timeout: 2000
gluster:
  scale: 1
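For what it's worth, the health check above just issues GET /healthcheck.txt against port 80 every 2000 ms and expects a 200; after 3 failures a container is marked unhealthy. A minimal local sketch of what each WordPress container must answer (stand-in server, not the real image; http.client sends HTTP/1.1 rather than the LB's HTTP/1.0, which doesn't matter for this check):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal stand-in for a WordPress container's web root: it only needs
# to answer the balancer's "GET /healthcheck.txt" with a 200.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthcheck.txt":
            body = b"OK\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same request the balancer issues on each interval:
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/healthcheck.txt")
status = conn.getresponse().status
server.shutdown()
print(status)  # 200 -> healthy; a missing healthcheck.txt would give 404
```

If healthcheck.txt is missing from the web root of the image, every container eventually gets dropped from the LB, which would explain a 503.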
(At the end of my tests I scaled the LB down to 1 to check the logs.) Here is the LB log:
10/8/2015 1:33:50 PM INFO: Downloading agent http://mgmt.rancher.neoassist.com:8080/v1/configcontent/configscripts
10/8/2015 1:33:50 PM INFO: Updating configscripts
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//configscripts current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/configscripts/configscripts-1-f5763391fb7914dcd14001f29ffe28d1167a3bfcc0ee0ec05d2ca9c722103c02/apply.sh
10/8/2015 1:33:50 PM INFO: Sending configscripts applied 1-f5763391fb7914dcd14001f29ffe28d1167a3bfcc0ee0ec05d2ca9c722103c02
10/8/2015 1:33:50 PM INFO: Updating agent-instance-startup
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//agent-instance-startup current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/agent-instance-startup/agent-instance-startup-1-d10e0fcba01455f57a9bed779d117ffb650eeae46c70627a71832d2f6e4d93bf/apply.sh
10/8/2015 1:33:50 PM INFO: Updating services
10/8/2015 1:33:50 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//services current=
10/8/2015 1:33:50 PM INFO: Running /var/lib/cattle/download/services/services-1-061405f3edd960bfdfe1cfb8447be40eab5b4b608731608e224cc51c5dc30b91/apply.sh
10/8/2015 1:33:50 PM INFO: HOME -> ./
10/8/2015 1:33:50 PM INFO: HOME -> ./services
10/8/2015 1:33:50 PM INFO: Sending services applied 1-061405f3edd960bfdfe1cfb8447be40eab5b4b608731608e224cc51c5dc30b91
10/8/2015 1:33:51 PM INFO: Getting agent-instance-scripts
10/8/2015 1:33:51 PM INFO: Updating agent-instance-scripts
10/8/2015 1:33:51 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//agent-instance-scripts current=
10/8/2015 1:33:51 PM INFO: Running /var/lib/cattle/download/agent-instance-scripts/agent-instance-scripts-1-4b5124bd74cd423f98d57550b481ec77ec3a7135c6a650886ab95c043305d642/apply.sh
10/8/2015 1:33:51 PM INFO: HOME -> ./
10/8/2015 1:33:51 PM INFO: HOME -> ./events/
10/8/2015 1:33:51 PM INFO: HOME -> ./events/ping
10/8/2015 1:33:51 PM INFO: HOME -> ./events/config.update
10/8/2015 1:33:51 PM INFO: Sending agent-instance-scripts applied 1-4b5124bd74cd423f98d57550b481ec77ec3a7135c6a650886ab95c043305d642
10/8/2015 1:33:51 PM INFO: Getting monit
10/8/2015 1:33:51 PM INFO: Updating monit
10/8/2015 1:33:51 PM INFO: Downloading http://mgmt.rancher.neoassist.com:8080/v1//configcontent//monit current=
10/8/2015 1:33:51 PM INFO: Running /var/lib/cattle/download/monit/monit-1-d166a713486fe4c0f039e152939d8d3b8ab8e6c2518e13d88f0b7ec68da46109/apply.sh
10/8/2015 1:33:51 PM INFO: ROOT -> ./
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/monit/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/monit/monitrc
10/8/2015 1:33:51 PM INFO: Sending monit applied 1-d166a713486fe4c0f039e152939d8d3b8ab8e6c2518e13d88f0b7ec68da46109
10/8/2015 1:33:51 PM INFO: HOME -> ./
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/cattle/
10/8/2015 1:33:51 PM INFO: HOME -> ./etc/cattle/startup-env
10/8/2015 1:33:51 PM INFO: ROOT -> ./
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/init.d/
10/8/2015 1:33:51 PM INFO: ROOT -> ./etc/init.d/agent-instance-startup
10/8/2015 1:33:51 PM INFO: Sending agent-instance-startup applied 1-d10e0fcba01455f57a9bed779d117ffb650eeae46c70627a71832d2f6e4d93bf
10/8/2015 1:33:51 PM monit: generated unique Monit id d85e2bac11d61c51f52b7ccf8b47dca6 and stored to '/var/lib/monit/id'
10/8/2015 1:33:51 PM Starting monit daemon with http interface at [localhost:2812]
When I try to access it I just get a blank page…
I also scaled WP down to 1 container to look at the logs (only at the end of my tests), and here is the log for one WP container:
10/8/2015 1:34:08 PM => Checking if I can reach GlusterFS node 10.42.166.44 ...
=> GlusterFS node 10.42.166.44 is alive
=> Mounting GlusterFS volume ranchervol from GlusterFS node 10.42.166.44 ...
/usr/lib/python2.7/dist-packages/supervisor/options.py:295: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2015-10-08 16:34:18,426 CRIT Supervisor running as root (no user in config file)
2015-10-08 16:34:18,426 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2015-10-08 16:34:18,446 INFO RPC interface 'supervisor' initialized
2015-10-08 16:34:18,447 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2015-10-08 16:34:18,447 INFO supervisord started with pid 56
2015-10-08 16:34:19,449 INFO spawned: 'haproxy' with pid 59
2015-10-08 16:34:19,450 INFO spawned: 'nginx' with pid 60
2015-10-08 16:34:19,451 INFO spawned: 'php5-fpm' with pid 61
2015-10-08 16:34:20,489 INFO success: haproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-10-08 16:34:20,489 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-10-08 16:34:20,489 INFO success: php5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
After scaling WP down, the LB began taking a few seconds to respond (with an empty page), and at one point it returned a single 503 (as if haproxy didn't have any backends), but then it stopped giving errors… just a blank page.
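To tell a genuine 503 ("no backends") apart from a 200 with an empty body, I've been using a quick probe like this (the hostname in the comment is a placeholder for the real LB address):

```python
import http.client

def probe(host, port=80, path="/"):
    """One GET; returns (status, body length) so an empty 200 page can be
    told apart from a 503 'no backends' answer at the balancer."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, len(resp.read())
    finally:
        conn.close()

# Placeholder hostname -- point this at the real LB:
# probe("wordpress-lb.example.com")  # (200, 0) = blank page, (503, ...) = no backends
```

A blank page with status 200 would point at WordPress/PHP rather than at the balancer.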
Thanks for any hints!