Slack Notifier Errors - proxyconnect tcp: dial tcp :0: connect: connection refused

I am hitting the same error creating a Slack notifier on two different Rancher installs (2.2.6 and 2.2.7). It’s probably worth a bug report, but first I want to get to the bottom of it in the logs so I can file one with better information.

I have a single-host Rancher server running in Docker on a standalone Hyper-V VM (homelab install) and a single-host Rancher server running in Docker on a DigitalOcean droplet. Each manages a separate Rancher cluster.

Docker was docker-ce 18.09.8 with 2.2.6 and is now docker-ce 19.03.1 with 2.2.7.

On both platforms, trying to create a Slack notifier results in “Post https://hooks.slack.com/services/token: proxyconnect tcp: dial tcp :0: connect: connection refused”.

I am new to Rancher, and while I’ve tried to do my due diligence digging through the code for anything obvious, I don’t know where this test actually executes from or which logs to dig into. (And no, as far as I can tell, saving the notifier anyway doesn’t make the actual alerts work either.)

Email and PagerDuty notifiers both work.

My guess is that this is either a Docker networking misconfiguration, or that the proxy configuration is not actually blank (I am not behind a proxy in either case). But I don’t know where to begin debugging this.

Any and all pointers appreciated.

Can you share the exact docker run command you used to start the rancher/rancher container (or the output of docker inspect), as well as the output of docker info from the host running the rancher/rancher container?
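In the meantime, a quick sanity check is to confirm no proxy environment variables are set inside the container, and that the Slack endpoint is reachable from it. A sketch (the container name is a placeholder, and curl being present in the rancher/rancher image is an assumption):

```shell
# Any HTTP_PROXY / HTTPS_PROXY / NO_PROXY (any casing) set here would explain
# a proxyconnect error; replace "rancher" with your actual container name/ID.
docker exec rancher env | grep -i proxy

# Check outbound reachability of Slack's webhook host from inside the container.
docker exec rancher curl -sS -o /dev/null -w '%{http_code}\n' https://hooks.slack.com
```

If the grep prints nothing and the curl returns an HTTP status code, the container itself is not proxy-configured and can reach Slack.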

docker run was a very vanilla:

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest

For the Rancher server running on DigitalOcean, I included an --acme-domain flag; the current docker run uses --volumes-from because of the 2.2.6 to 2.2.7 upgrade.
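For reference, the --volumes-from part followed the standard single-node upgrade pattern, roughly like this (container names here are placeholders for whatever yours are called):

```shell
# Stop the old server and snapshot its volumes into a data-only container.
docker stop rancher
docker create --volumes-from rancher --name rancher-data rancher/rancher:v2.2.6

# Start the new version against the same data.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --volumes-from rancher-data \
  rancher/rancher:v2.2.7
```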

docker info:

$ sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 5
  Running: 1
  Paused: 0
  Stopped: 4
 Images: 4
 Server Version: 19.03.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-55-generic
 Operating System: Ubuntu 18.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 2.931GiB
 Name: ballpark
 ID: S77O:KS6U:JI24:6M4Q:CJFP:YBBY:IBQH:H7W6:CSOJ:JYNB:YIWY:4FIJ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Here’s the networking portion of docker inspect:

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "1a9658dfcd0066e4689223913620f2f9d15f27eb07ae41df279814609354e2a7",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "443"
                    }
                ],
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/1a9658dfcd00",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "526ac97e218c7d5fdf3a2c0f3b5e6af5e52283cff5d1d17d2d965ba17d0f6f2e",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "5bcd5426e734fa06de2f4fb0b00a3c3e260414cfb6b6351203c2570aa728dfe1",
                    "EndpointID": "526ac97e218c7d5fdf3a2c0f3b5e6af5e52283cff5d1d17d2d965ba17d0f6f2e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }

The proxy field seems to be sent as null when it is focused but left empty. Can you try refreshing the UI, then configuring your notifier without focusing on or using the HTTP proxy field, and see if that helps?

That was it, thanks Sebastiaan! I didn’t even think to look at the form POST. And when I did “View in API” I didn’t think twice about proxyUrl being there in the properties but blank.

This definitely seems like a UI bug, because once the value is set via the API I can’t find any way in the UI to fix it. (Though arguably it’s really a bug in how the Alertmanager HTTP config properties are handled.)
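As a workaround for an already-saved notifier, editing it through the v3 API directly should work. A sketch (the server URL, token, notifier ID, and the exact slackConfig/proxyUrl field names are assumptions based on what “View in API” shows):

```shell
RANCHER=https://rancher.example.com   # your server URL
TOKEN=token-xxxxx:secret              # an API key with permission on the notifier
ID=c-abcde:n-fghij                    # the notifier's id from "View in API"

# Fetch the notifier, drop the proxyUrl key from slackConfig, and PUT it back.
curl -sk -u "$TOKEN" "$RANCHER/v3/notifiers/$ID" \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); d.get("slackConfig", {}).pop("proxyUrl", None); print(json.dumps(d))' \
  > notifier.json

curl -sk -u "$TOKEN" -X PUT -H 'Content-Type: application/json' \
  -d @notifier.json "$RANCHER/v3/notifiers/$ID"
```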

I went through the commits referenced in the issue where this was added to the UI: https://github.com/rancher/rancher/issues/17885 - but I’m afraid I don’t know enough about Ember or Go’s JSON parsing to be much help drilling down to where this should be handled. (It seems like “omitempty” should ignore proxyUrl: null rather than pass it along.)

I’m happy to file a bug report, though I’m not sure I can cogently describe anything beyond “the UI does this and then it breaks.”

Cool, thanks for confirming. I filed an issue for it here: https://github.com/rancher/rancher/issues/22144