I’m all the way at the end and don’t see anywhere in the UI to add the health check functionality depicted in the article. Just wondering if that functionality has gone away.
The trouble with blog posts is they don’t automatically keep themselves up to date.
In the current releases you define the health check policy on the target service (WordPress). This works for service discovery in general: failing services are not advertised in DNS and will get replaced if they don’t become healthy again. And when you balance to a service, the same health check is used to determine which containers are healthy enough to receive traffic. So it is no longer necessary to define it twice.
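For reference, in that model the health check lives on the target service itself. A minimal sketch of what this could look like in rancher-compose.yml — the values and the request path here are illustrative, not taken from the article:

```yaml
wordpress:
  health_check:
    port: 80
    request_line: GET /index.php HTTP/1.0  # illustrative request path
    interval: 2000            # ms between checks
    response_timeout: 2000    # ms before a check counts as failed
    healthy_threshold: 2      # consecutive successes to mark healthy
    unhealthy_threshold: 3    # consecutive failures to mark unhealthy
```

Any load balancer pointing at the wordpress service then reuses this same check to decide which backends get traffic.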
Thanks, Vincent, I appreciate your reply. I was wondering if the lack of the health check was preventing the containers from building as I never got to that point. But, yes, indeed, the posts don’t keep themselves up to date as they really should.
I’ll go back to the drawing board and see if I can get this created.
Well, I got down to the WP section. The service keeps bouncing across the four containers I’ve set up, and I’m seeing this in the log:
Mount failed. Please check the log file for more details.
=> Checking if I can reach GlusterFS node 10.42.52.62 …
=> GlusterFS node 10.42.52.62 is alive
=> Mounting GlusterFS volume ranchervol from GlusterFS node 10.42.52.62 …
Mount failed. Please check the log file for more details.
=> Checking if I can reach GlusterFS node 10.42.126.92 …
It joins Gluster, then disconnects, then joins and disconnects again.
The same tutorial won’t work for me… Using the xtradb image mentioned in the comments I was able to get everything up, including Gluster, but WP just won’t show anything: a blank screen on every URL, with no 404 or any other error.
Looking for an up-to-date tutorial if anyone has one…
Nothing anywhere… Nothing in the logs and nothing in the browser’s network/inspection tools…
I’m noticing some weird behavior with the LB, not sure if it’s related…
Created a single mysql container exposing 3306:3306.
— I can access hostip:3306.
Created an LB instance listening on 3307, connected to the mysql container’s port 3306.
— No response when trying LBHost:3307 (it’s the same host, for example’s sake).
Created an LB instance listening on 3306, connected to the mysql container.
— No response when trying LBHost:3306 (it’s the same host, for example’s sake).
I notice that when I configure the LB to listen on one port on the host and pass it to another in the container, it doesn’t “stick”: the LB config shows the “public” port twice. E.g. I want the public port to be 3307, but after saving, the LB options show Host port: 3307, Container port: 3307…
So I removed the host port on the container and left it as 3306:3306 in the LB; still no go…
I can only connect directly to the container if I expose it on the host; whenever I try to insert an LB it fails.
Accessing mysql directly on host:3333 works, but accessing host:3336 (mysqlLB -> mysql) doesn’t connect…
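For what it’s worth, a raw TCP check can distinguish “the LB isn’t accepting connections at all” from “MySQL rejects the session”. A minimal sketch in Python — the host and ports are placeholders for the setup above:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host/ports for the setup above:
# direct container port vs. port behind the LB
print("direct 3333:", port_open("127.0.0.1", 3333))
print("via LB 3336:", port_open("127.0.0.1", 3336))
```

If the direct port opens but the LB port doesn’t, the problem is in the LB configuration rather than in MySQL itself.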
Also, visually, when I click on LB_Agent it’s reporting the wrong port (that had thrown me off, but apparently it’s only visual):
Ports
State    IP Address    Public (on Host)    Private (in Container)    Protocol
ACTIVE                 3336                3336                      TCP
I hope this can help somewhat?
(In all cases my initial container with the exposed port 3306 isn’t working… I think it stopped working when I added an LB, but it’s weird nonetheless; unless there is some sort of port reservation for 3306 on the host?)
Just an FYI: I’m using only one host, so when I say host:xxxx, all containers are on the same host…
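For comparison, the setup described above would look roughly like this in docker-compose form for Rancher. This is only a sketch: the image tags, password, and the rancher/load-balancer-service links/ports convention are my assumptions about the release in use, not something from the thread:

```yaml
mysql:
  image: mysql:5.6                  # illustrative tag
  environment:
    MYSQL_ROOT_PASSWORD: changeme   # placeholder
  ports:
    - 3333:3306   # direct access on the host, as tested above

mysql-lb:
  image: rancher/load-balancer-service
  ports:
    - 3336:3306   # host port 3336 -> mysql container port 3306
  links:
    - mysql:mysql
```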
There appears to be a UI bug, which I also hit and filed, where we are showing the incorrect private port. But you can check the haproxy.cfg for the load balancer to make sure it was configured correctly. http://docs.rancher.com/rancher/faqs/troubleshooting/#load-balancer
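If the LB was configured correctly, the generated haproxy.cfg should contain a TCP section mapping the public port to the container’s IP and private port, roughly like the following (the section name and the 10.42.x.x backend IP are illustrative):

```
listen default_3336
    bind *:3336
    mode tcp
    server mysql_1 10.42.0.15:3306
```

If the bind port or the server line doesn’t match what you configured, that points at the LB setup rather than at MySQL.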
As for the docker-compose.yml, for mysql2-lb, I’m not too sure why the LB wouldn’t work. Could you try using a different container like nginx instead of mysql?