I'm creating a proof of concept using Kubernetes Jobs running in a Rancher cluster. We use the Jobs to run long-running data analytics tasks (a job may take anywhere from a minute to an hour). So far everything is working as we expect.
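For context, each analytics task is submitted as its own Job. Below is a rough sketch of how we create them with the official Kubernetes Python client; the namespace, image name, and resource requests are placeholders rather than our real values:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
batch_v1 = client.BatchV1Api()

def submit_analytics_job(job_name: str) -> None:
    """Create one Job per analytics task; the resource requests decide how many fit per node."""
    container = client.V1Container(
        name="analytics",
        image="example.registry/analytics-task:latest",      # placeholder image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "1Gi"},        # placeholder sizing
        ),
    )
    pod_spec = client.V1PodSpec(containers=[container], restart_policy="Never")
    template = client.V1PodTemplateSpec(spec=pod_spec)
    job_spec = client.V1JobSpec(template=template, backoff_limit=2)
    job = client.V1Job(metadata=client.V1ObjectMeta(name=job_name), spec=job_spec)
    batch_v1.create_namespaced_job(namespace="analytics", body=job)  # "analytics" namespace is illustrative
```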
I need to understand what happens when we have 4 worker nodes and submit 1,000 jobs to them, but those nodes realistically only have enough resources to run 200 jobs simultaneously (50 per node). What happens to the remaining 800 jobs we request? Are they queued by Rancher or by Kubernetes? Do they fail immediately? What happens to them?
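To make the scenario concrete, this is roughly how we would fire off the whole batch and then inspect the resulting pods; it reuses the hypothetical submit_analytics_job helper from the sketch above, and the job count and names are just illustrative:

```python
from kubernetes import client

# Submit far more Jobs than the 4 nodes can run at once.
for i in range(1000):
    submit_analytics_job(f"analytics-task-{i:04d}")

# Then we count pod phases to see how many tasks are actually running
# versus still waiting for node resources.
core_v1 = client.CoreV1Api()
pods = core_v1.list_namespaced_pod(namespace="analytics")

phase_counts = {}
for pod in pods.items:
    phase = pod.status.phase
    phase_counts[phase] = phase_counts.get(phase, 0) + 1

print(phase_counts)  # e.g. how many pods report Running vs Pending
```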