Hi everyone,
I’m trying to create a new RKE(?) Rancher cluster with Terraform.
Everything seems to work at first, in the sense that terraform exits successfully and the cluster is created, but no nodes are provisioned as described in my manifest file.
Here is the snippet corresponding to said cluster:
resource "rancher2_cluster" "ad" {
  provider    = rancher2.admin
  name        = "ad"
  description = "admin cluster"

  rke_config {
    nodes {
      role    = ["controlplane", "worker", "etcd"]
      address = "50.0.0.120"
      user    = "root"
    }
    private_registries {
      url        = "50.0.0.1:5000"
      is_default = true
    }
  }
}
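One thing I’m unsure about: since the key only lives in my ssh agent, maybe agent auth has to be enabled explicitly. Here is a variant I’m considering (the `ssh_agent_auth` flag is my assumption based on RKE’s cluster.yml options, and the key path is just an illustrative example, not something from my setup):

  rke_config {
    # assumption: tell RKE to authenticate via the local ssh agent
    ssh_agent_auth = true

    nodes {
      role    = ["controlplane", "worker", "etcd"]
      address = "50.0.0.120"
      user    = "root"
      # alternative I've seen in cluster.yml docs: point at the key directly
      # (path below is hypothetical)
      # ssh_key_path = "~/.ssh/id_rsa"
    }
  }

If anyone can confirm whether the rancher2 provider honors these node-level ssh settings, that would help.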
As I understand it, once terraform finishes its run, Rancher should display a new cluster named “ad” with one node configured.
I can confirm that, from the host running terraform, I’m able to ssh into 50.0.0.120 with the ssh keypair added to my ssh agent.