Cluster provision issues after rancher-server EC2 restart

I am using Rancher on AWS EC2. I can stop and start the Docker container running Rancher server without issue. I restart it with this command:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest

However, if I shut down the EC2 instance, start it again, and then restart the rancher-server Docker container, I run into problems. Rancher server comes back up fine with my previous settings, and at this point I have 0 clusters.
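For reference, this is roughly the sequence I follow (the container ID and instance ID below are placeholders; I look the container up with docker ps, and the instance stop/start could equally be done from the AWS console):

```shell
# Stop the Rancher server container before shutting down the instance
docker ps --filter ancestor=rancher/rancher:latest
docker stop <container-id>

# Stop, and later start, the EC2 instance (placeholder instance ID)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Once the instance is back up, start the container again
# (a manually stopped container is not auto-restarted by --restart=unless-stopped)
docker start <container-id>
```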

However, I am not able to re-provision a cluster. I try creating a single-node cluster with one node holding all roles. When I attempt to do so, the cluster gets stuck provisioning and returns the following error:

Cluster must have at least one etcd plane host: please specify one or more etcd in cluster config

When I SSH into the EC2 instance that Rancher created and look at the logs for the rancher/rancher-agent container, I see this message:

ERROR: https://3.85.224.54/ping is not accessible (Failed to connect to 3.85.224.54 port 443: Connection timed out)
INFO: Arguments: --server https://3.85.224.54 --token REDACTED --ca-checksum 6cae832251c9eab5148e34cd2b593470406e7a8eea6459b7d9df279aedcd59eb -r -n m-xgkf8
INFO: Environment: CATTLE_ADDRESS=172.31.39.129 CATTLE_AGENT_CONNECT=true CATTLE_INTERNAL_ADDRESS= CATTLE_NODE_NAME=m-xgkf8 CATTLE_SERVER=https://3.85.224.54 CATTLE_TOKEN=REDACTED
INFO: Using resolv.conf: nameserver 172.31.0.2 search ec2.internal
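For completeness, this is how I pulled those logs on the provisioned node (the filter and tail length are just what I used; the agent container can also be found by name with docker ps -a):

```shell
# On the node provisioned by Rancher: locate the agent container and tail its logs
docker ps -a --filter ancestor=rancher/rancher-agent
docker logs --tail 50 <container-id>
```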

I’d appreciate any hints or links to relevant documentation.