I am trying to deploy my Django container via a Stack. Inside the container I point to an NFS volume; the deployment succeeds, but I am not able to scale the Django container.
version: '2'
services:
  django:
    image: 10.42.180.249:5000/django-red:0.1
    container_name: django-red
    ports:
      - "8000:8000"
    external_links:
      - "db"
    volumes:
      - nfs:/data
volumes:
  nfs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.56.103,rw
      device: ":/nfs-data/django-data"
What is the best approach to follow so that the /data volume does not get deleted when I delete the stack?
I even tried rancher-nfs, but the problem is the same: when I delete the stack, it deletes the NFS volume too. I want my data to be kept even after I delete the stack.
FYI, your post title is different from your actual problem: scaling a container vs. NFS data retention.
I just tested it with rancher-nfs, and I can see that the directory on my NFS server is still there after I delete the stack.
docker-compose.yml
version: '2'
volumes:
  ubuntu-vol-test:
    external: true
    driver: rancher-nfs
services:
  test-ubuntu-2:
    image: ubuntu:16.04
    stdin_open: true
    volumes:
      - ubuntu-vol-test:/data/test
    tty: true
    labels:
      io.rancher.container.pull_image: always
rancher-compose.yml
version: '2'
services:
  test-ubuntu-2:
    scale: 1
    start_on_create: true
After deleting the stack, if I go to Infrastructure > Volumes, I can see the ubuntu-vol-test is still present, and the state is Detached.
I am on Rancher 1.6.3 using Cattle orchestration.
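The same idea should carry over to your original compose file with the local NFS driver: declare the volume as external, so the stack only references it instead of owning it, and deleting the stack leaves the data alone. A rough sketch, trimmed to the relevant parts and assuming the external volume (kept as nfs here) has already been created beforehand on each host where the container can run, e.g. with docker volume create --driver local --opt type=nfs --opt o=addr=192.168.56.103,rw --opt device=:/nfs-data/django-data nfs:

version: '2'
services:
  django:
    image: 10.42.180.249:5000/django-red:0.1
    ports:
      - "8000:8000"
    volumes:
      - nfs:/data
volumes:
  nfs:
    # External: the volume is created and managed outside the stack,
    # so removing the stack does not remove the volume or its data.
    external: true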
You may be interested to read about the new purge/retain options added as part of 1.6.6:
https://github.com/rancher/rancher.github.io/pull/843/files
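If I remember that change right, retention ends up as a driver option on the volume itself rather than anything on the service. This is a hedged sketch only; the option key used here (onRemove: retain) is from memory, so please check the exact name and default behaviour against the linked docs:

version: '2'
volumes:
  ubuntu-vol-test:
    driver: rancher-nfs
    driver_opts:
      # Keep the backing directory on the NFS server when the volume
      # is deleted; the other value purges the data (see the docs above).
      onRemove: retain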
geom
November 16, 2017, 9:48am
Hi,
I also have a scaling problem with Rancher v1.6.10:
3 hosts with Rancher-NFS running, and 1 service running with a working NFS volume.
version: '2'
volumes:
  directusstorage:
    driver: rancher-nfs
services:
  directus:
    image: getdirectus/directus:6.4
    container_name: directus
    ports:
      - "8080:8080"
    restart: always
    volumes:
      - directusstorage:/var/www/html/storage
    environment: ...
The volume is mounted correctly as
Directus_directusstorage_7816e Directus-directus-1: /var/www/html/storage
However, as soon as I try to scale the service to 2, the second instance hangs in a loop:
6. service.update.info Requested: 2, Created: 2, Unhealthy: 0, Bad: 0, Incomplete: 0
5. service.update.wait (Running) Waiting for instances to start
4. service.update.wait.exception Waiting: container starting [container:1i182]
3. service.update.exception Waiting: container starting [container:1i182]
2. service.update.info Requested: 2, Created: 2, Unhealthy: 0, Bad: 0, Incomplete: 0
1. service.update.wait (Running) Waiting for instances to start
I am out of ideas… Any suggestion is welcome. Thanks!
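The only lead I have left is to rule out whether the second container is being scheduled onto a host that cannot actually mount the NFS export. A rough sketch of what I plan to try: pin the service to known-good hosts with a Cattle scheduling label (the host label nfs=true is one I would have to add to those hosts myself first):

version: '2'
services:
  directus:
    image: getdirectus/directus:6.4
    volumes:
      - directusstorage:/var/www/html/storage
    labels:
      # Only schedule instances on hosts carrying the self-assigned
      # label nfs=true, i.e. hosts verified to reach the NFS server.
      io.rancher.scheduler.affinity:host_label: nfs=true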