Rancher Cluster K8S - 504 Gateway Timeout after 60 seconds


We have an environment with k8s + Rancher 2 (3 nodes) and an external nginx that only forwards connections to the k8s cluster according to this documentation: https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/

In a specific application running in this environment, a POST request that takes around 3 to 4 minutes to complete is interrupted with the message “504 Gateway Time-Out” after 60 seconds. I have tried to change the timeout as described in various notes, as below, but to no avail:

Ingress of application:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-loteamento-spring-hml
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 3600s;client_body_timeout 3600s;client_header_timeout 3600s;"
  labels:
    run: api-loteamento-spring-hml
spec:
  rules:
  - host: hml-api-loteamento-sp.gruposfa.bla.bla
    http:
      paths:
      - backend:
          serviceName: api-loteamento-spring-hml
          servicePort: 80

I have also tried to create a global ConfigMap with the parameters below, also without success:

[rancher@srv-rcnode01 ssl]$ kubectl get pods -n ingress-nginx
NAME                                    READY   STATUS    RESTARTS   AGE
default-http-backend-67cf578fc4-lcz82   1/1     Running   1          38d
nginx-ingress-controller-7jcng          1/1     Running   11         225d
nginx-ingress-controller-8zxbf          1/1     Running   8          225d
nginx-ingress-controller-l527g          1/1     Running   8          225d

[rancher@srv-rcnode01 ssl]$ kubectl get pod nginx-ingress-controller-8zxbf -n ingress-nginx -o yaml |grep configmap
    - --configmap=$(POD_NAMESPACE)/nginx-configuration
    - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
    - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

[rancher@srv-rcnode01 ~]$ cat global-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  client-body-timeout: "360"
  client-header-timeout: "360"
  proxy-connect-timeout: "360"
  proxy-read-timeout: "360"
  proxy-send-timeout: "360"

And apply:

kubectl apply -f global-configmap.yaml
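One way to confirm that the controller actually picked up the ConfigMap change is to check the ConfigMap contents, watch the controller logs for a reload event, and dump the rendered configuration without opening a shell in the pod (pod and ConfigMap names below are the ones from this cluster; adjust as needed):

```shell
# Check that the ConfigMap exists and holds the timeout keys
kubectl -n ingress-nginx get configmap nginx-configuration -o yaml

# The controller should log a backend reload shortly after the apply
kubectl -n ingress-nginx logs nginx-ingress-controller-8zxbf | grep -i reload

# Inspect only the timeout directives in the rendered nginx.conf
kubectl -n ingress-nginx exec nginx-ingress-controller-8zxbf -- \
  grep -E 'proxy_(read|send|connect)_timeout' /etc/nginx/nginx.conf
```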

Accessing the ingress pods and checking nginx.conf, I can see that the directives from the annotations are generated inside the application’s configuration block:

[rancher@srv-rcnode01 ~]$ kubectl -n ingress-nginx exec --stdin --tty nginx-ingress-controller-8zxbf -- /bin/bash

And view nginx.conf

keepalive_timeout 3600s;client_body_timeout 3600s;client_header_timeout 3600s;

# Custom headers to proxied server
			proxy_connect_timeout                   3600s;
			proxy_send_timeout                      3600s;
			proxy_read_timeout                      3600s;

What I noticed at the beginning of the nginx.conf file, in the “server” configuration block, is that it has default 60-second timeout values:

# Custom headers to proxied server
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;

My question is whether these values may be causing this problem, and how I can change them in Kubernetes.
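For what it’s worth, in ingress-nginx the annotation-driven timeouts are rendered only into the server/location blocks generated for the matching host; the 5s/60s values near the top of nginx.conf typically belong to a different server block (for example the default backend, or another ingress without annotations). A quick way to check which server each timeout belongs to (pod name taken from this cluster) is to print the timeouts together with the surrounding server_name directives:

```shell
# Show each server_name followed by the timeouts in its block, so the
# 3600s values can be matched to the right host
kubectl -n ingress-nginx exec nginx-ingress-controller-8zxbf -- \
  grep -nE 'server_name|proxy_(read|send|connect)_timeout' /etc/nginx/nginx.conf
```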

Has anyone been through this situation, or something similar, and can point me in the right direction?



Hope you are doing well. Have you found any solution yet for this issue?

I tried to override the nginx controller’s default timeout configuration by passing the annotations through the ingress.yaml file, but it is still using the default configuration.
Is there any way we can override those default values? Below is the snippet of my ingress file


I have tried countless ways to change this, but to no avail, so I decided to change the application instead. But if anyone succeeds, I would be interested to know how to solve it.

I was able to solve the same 60-second timeout issue by following the AWS guide “Configure the idle connection timeout for your Classic Load Balancer” (Elastic Load Balancing).
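For completeness: when the 60-second cutoff comes from an AWS Classic Load Balancer in front of the cluster rather than from nginx itself, the ELB’s idle timeout (which defaults to 60 seconds) can be raised with the AWS CLI. The load balancer name below is a placeholder:

```shell
# Raise the Classic Load Balancer idle timeout from the 60s default
# to 5 minutes ("my-load-balancer" is a placeholder name)
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"
```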