Unable to start pod infra images in k8s

We are running k8s with a private Docker registry.
We want all pods to pull their images from the private registry so that the cluster can be air-gapped.

Our k8s version is v1.7.2.

Here is my configuration for the pod infra images:

My registry prefix is set to ip-10-0-0-160:5000/gcr.io without a trailing slash. I learnt the hard way that a trailing slash causes issues.
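
For reference, the kubelet is pointed at the private registry for the pause container roughly like this (the sysconfig variable name and the pause image tag below are typical values for 1.7, not copied verbatim from my setup):

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=ip-10-0-0-160:5000/gcr.io/google_containers/pause-amd64:3.0"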

Here is the output of a manual docker pull on the node, which works fine:

docker pull ip-10-0-0-160:5000/gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
1.14.2: Pulling from gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64

Digest: sha256:70b157a6695e5dddc5a3741c401c0507f0fd9326376c77162c690f5f5e82a264
Status: Image is up to date for ip-10-0-0-160:5000/gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
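
For completeness, the images were mirrored into the private registry with the usual pull/tag/push pattern, roughly like this (shown for one image as an example):

docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2 ip-10-0-0-160:5000/gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2
docker push ip-10-0-0-160:5000/gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2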

However, kube-dns does not come up:

NAME                                   READY     STATUS             RESTARTS   AGE
heapster-1584063577-n56m2              1/1       Running            0          16h
kube-dns-1842970385-ww5vm              0/3       Pending            0          16h
kube-dns-2943014524-gsskt              0/3       InvalidImageName   0          14m
kubernetes-dashboard-96113950-48pz7    1/1       Running            0          16h
monitoring-grafana-1854198434-ngjt4    1/1       Running            0          16h
monitoring-influxdb-2450037842-wwt38   1/1       Running            0          16h
tiller-deploy-3765135481-f9dg8         1/1       Running            0          16h

And this is what kubectl describe shows for the pod:

Events:
  FirstSeen	LastSeen	Count	From							SubObjectPath			Type		Reason			Message
  ---------	--------	-----	----							-------------			--------	------			-------
  14m		14m		6	default-scheduler									Warning		FailedScheduling	No nodes are available that match all of the following predicates:: MatchInterPodAffinity (1).
  14m		14m		1	default-scheduler									Normal		Scheduled		Successfully assigned kube-dns-2943014524-gsskt to ip-10-0-0-160.us-west-1.compute.internal
  14m		14m		1	kubelet, ip-10-0-0-160.us-west-1.compute.internal					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "kube-dns-config"
  14m		14m		1	kubelet, ip-10-0-0-160.us-west-1.compute.internal					Normal		SuccessfulMountVolume	MountVolume.SetUp succeeded for volume "kube-dns-token-p41d6"
  14m		13m		3	kubelet, ip-10-0-0-160.us-west-1.compute.internal					Normal		SandboxChanged		Pod sandbox changed, it will be killed and re-created.
  14m		4s		68	kubelet, ip-10-0-0-160.us-west-1.compute.internal					Warning		FailedSync		Error syncing pod
  13m		4s		65	kubelet, ip-10-0-0-160.us-west-1.compute.internal	spec.containers{kubedns}	Warning		InspectFailed		Failed to apply default image tag "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-kube-dns-amd64:1.14.2": couldn't parse image reference "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-kube-dns-amd64:1.14.2": invalid reference format
  13m		4s		65	kubelet, ip-10-0-0-160.us-west-1.compute.internal	spec.containers{dnsmasq}	Warning		InspectFailed		Failed to apply default image tag "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2": couldn't parse image reference "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.2": invalid reference format
  13m		4s		65	kubelet, ip-10-0-0-160.us-west-1.compute.internal	spec.containers{sidecar}	Warning		InspectFailed		Failed to apply default image tag "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-sidecar-amd64:1.14.2": couldn't parse image reference "ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-sidecar-amd64:1.14.2": invalid reference format

The issue seems to be that an extra slash is being added somewhere, producing ip-10-0-0-160:5000/gcr.io//google_containers/k8s-dns-kube-dns-amd64:1.14.2, which is an invalid image reference.
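
If it helps narrow things down, the image fields the kubelet is actually given can be inspected directly on the kube-dns deployment; a jsonpath query like the one below should show whether the double slash is already present in the deployment spec:

kubectl -n kube-system get deployment kube-dns -o jsonpath='{.spec.template.spec.containers[*].image}'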

How do we avoid this?

Thanks