Kubernetes rancher-ingress-controller keeps restarting

Hi,

I am running Kubernetes under Rancher and created an nginx-ingress-controller to manage the apps' URLs (via Ingress resources).
Checking the Rancher Infrastructure Stack, I noticed that my kubernetes-rancher-ingress-controller keeps restarting all the time.
Despite this, the URLs seem to be working fine.
Here are the logs:

time="2017-06-20T21:35:06Z" level=info msg="Starting Rancher LB service"
time="2017-06-20T21:35:06Z" level=info msg="LB controller: kubernetes"
time="2017-06-20T21:35:06Z" level=info msg="LB provider: rancher"
time="2017-06-20T21:35:06Z" level=info msg="starting kubernetes controller"
time="2017-06-20T21:35:06Z" level=info msg="Healthcheck handler is listening on :10241"
time="2017-06-20T21:35:06Z" level=info msg="Event(api.ObjectReference{Kind:"Ingress", Namespace:"whoami3", Name:"whoami3", UID:"3231e546-512d-11e7-96a5-021ad4b10d4d", APIVersion:"extensions", ResourceVersion:"2336348", FieldPath:""}): type: 'Normal' reason: 'CREATE' whoami3/whoami3"
time="2017-06-20T21:35:06Z" level=info msg="Event(api.ObjectReference{Kind:"Ingress", Namespace:"whoami2", Name:"whoami2", UID:"344e7da5-512d-11e7-96a5-021ad4b10d4d", APIVersion:"extensions", ResourceVersion:"2335980", FieldPath:""}): type: 'Normal' reason: 'CREATE' whoami2/whoami2"
time="2017-06-20T21:35:06Z" level=info msg="Event(api.ObjectReference{Kind:"Ingress", Namespace:"whoami1", Name:"whoami1", UID:"d6cc288e-5086-11e7-96a5-021ad4b10d4d", APIVersion:"extensions", ResourceVersion:"2336409", FieldPath:""}): type: 'Normal' reason: 'CREATE' whoami1/whoami1"
time="2017-06-20T21:35:26Z" level=error msg="Timed out waiting for condition [publicEndpoints]"
time="2017-06-20T21:35:26Z" level=info msg="Couldn't get publicEndpoints for LB [whoami2-rancherlb-whoami2], skipping endpoint update"
time="2017-06-20T21:35:26Z" level=info msg="Updating ingress whoami2/whoami2. Removing IP 10.1.7.41"
time="2017-06-20T21:35:26Z" level=info msg="Updating ingress whoami2/whoami2. Removing IP 10.1.7.35"
panic: runtime error: slice bounds out of range

goroutine 54 [running]:
panic(0x2222ca0, 0xc82001e040)
    /usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/rancher/lb-controller/controller/kubernetes.(*loadBalancerController).updateIngressStatus(0xc8202a21e0, 0xc82043b340, 0xf)
    /go/src/github.com/rancher/lb-controller/controller/kubernetes/kubernetes.go:275 +0x1784
github.com/rancher/lb-controller/controller/kubernetes.(*loadBalancerController).(github.com/rancher/lb-controller/controller/kubernetes.updateIngressStatus)-fm(0xc82043b340, 0xf)
    /go/src/github.com/rancher/lb-controller/controller/kubernetes/kubernetes.go:96 +0x34
github.com/rancher/lb-controller/utils.(*TaskQueue).worker(0xc820312840)
    /go/src/github.com/rancher/lb-controller/utils/utils.go:64 +0x138
github.com/rancher/lb-controller/utils.(*TaskQueue).(github.com/rancher/lb-controller/utils.worker)-fm()
    /go/src/github.com/rancher/lb-controller/utils/utils.go:33 +0x20
github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc82062df60)
    /go/src/github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:84 +0x19
github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc82062df60, 0x3b9aca00, 0x0, 0x1, 0xc8203e92c0)
    /go/src/github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:85 +0xb4
github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait.Until(0xc82062df60, 0x3b9aca00, 0xc8203e92c0)
    /go/src/github.com/rancher/lb-controller/vendor/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x43
github.com/rancher/lb-controller/utils.(*TaskQueue).Run(0xc820312840, 0x3b9aca00, 0xc8203e92c0)
    /go/src/github.com/rancher/lb-controller/utils/utils.go:33 +0x48
created by github.com/rancher/lb-controller/controller/kubernetes.(*loadBalancerController).Run
    /go/src/github.com/rancher/lb-controller/controller/kubernetes/kubernetes.go:340 +0x246

Can anybody help me figure out why this container keeps restarting, and how to stop it?

Best regards
Paulo Leal