Rancher/K3S Backup Restore from S3/Minio Backup

I have been playing with Rancher/K3S on a 3-node cluster with one master and two agents, and I am testing the backup/restore process before running any workloads. During the restore I have been banging my head against my desk trying to figure out why freenas.gravyflex.ca resolves to my router's public address instead of the local one. From the host itself, DNS resolves to the local address. Any ideas on what could be causing this?
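One possibility worth checking: the rancher-backup pod resolves the name through CoreDNS, which typically forwards non-cluster names to an upstream resolver, and that upstream may be handing back the public address even though the node's own resolver returns the LAN one. A common workaround is to pin the hostname with CoreDNS's hosts plugin. This is only a sketch, assuming the NAS's LAN address is 192.168.1.50 (a placeholder); on K3S the Corefile lives in the coredns ConfigMap in kube-system, and note that K3S may reapply its bundled manifest over manual edits:

```text
.:53 {
    errors
    health
    # Pin the NAS hostname to its LAN address so pods don't receive the
    # public IP from the upstream resolver. 192.168.1.50 is a placeholder.
    hosts {
        192.168.1.50 freenas.gravyflex.ca
        fallthrough
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

With fallthrough set, names not listed in the hosts block still resolve through the remaining plugins as before.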

ERRO[2022/05/04 21:29:30] error syncing 'restore-migration': handler restore: failed to check s3 bucket:k3s, err:Get "https://freenas.gravyflex.ca:9000/k3s/?location=": dial tcp <PUBLIC IP>:9000: i/o timeout, requeuing
INFO[2022/05/04 21:29:30] Processing Restore CR restore-migration
INFO[2022/05/04 21:29:30] Restoring from backup may3-e095543e-f4db-4472-a114-557927fef785-2022-05-03T23-36-38Z.tar.gz
INFO[2022/05/04 21:29:30] invoking set s3 service client                s3-accessKey=<REDACTED> s3-bucketName=k3s s3-endpoint="freenas.gravyflex.ca:9000" s3-endpoint-ca= s3-folder= s3-region=
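To confirm the split between host DNS and in-cluster DNS, a throwaway pod can run the same lookup through CoreDNS. A minimal sketch using a stock busybox image (pod name and image tag are my choices, not from the original setup):

```yaml
# One-shot debug pod: resolves the NAS hostname via the cluster's DNS
# (CoreDNS) rather than the node's resolver.
# Inspect the result with: kubectl logs dns-debug
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: lookup
      image: busybox:1.36
      command: ["nslookup", "freenas.gravyflex.ca"]
```

If this pod logs the public address while nslookup on the node returns the local one, CoreDNS's upstream forwarding is the place to look.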
# migrationResource.yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: may3-e095543e-f4db-4472-a114-557927fef785-2022-05-03T23-36-38Z.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: creds
      credentialSecretNamespace: default
      bucketName: k3s
      endpoint: freenas.gravyflex.ca:9000
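For completeness, the creds Secret referenced by credentialSecretName would look roughly like this; the rancher-backup operator expects the keys accessKey and secretKey, and the values here are placeholders:

```yaml
# S3/Minio credentials consumed by the Restore CR above.
apiVersion: v1
kind: Secret
metadata:
  name: creds
  namespace: default
type: Opaque
stringData:
  accessKey: <MINIO_ACCESS_KEY>   # placeholder
  secretKey: <MINIO_SECRET_KEY>   # placeholder
```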