Longhorn tries to avoid disrupting the data path unless it's necessary (e.g. a host has been lost). In your case, the replica on the powered-down host was considered lost because it was not reachable when you rebooted the host, so Longhorn rebuilt the replica on another available host to keep the volume healthy.
Longhorn won't automatically detect that the host is available again and move the load back to it. Maybe we can add a feature later to auto-balance the load between nodes, but it would be pretty costly to migrate the data constantly.
For now, since you have 3 replicas, you can deliberately delete one of the two replicas on the same node. That should trigger Longhorn's rebuilding process; the system will look at the current status of the nodes and should decide that the third node is a better place for the new replica, due to the soft anti-affinity rule we've set.
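If you prefer the CLI over the Longhorn UI, a rough sketch of the workflow looks like the following. This assumes a default `longhorn-system` namespace and a volume named `my-volume` (both placeholders for your setup), and that the Replica CRs carry the `longhornvolume` label and a `spec.nodeID` field as in recent Longhorn releases; field and label names may differ in your version, so verify with `kubectl explain` or the UI first.

```shell
# List this volume's replicas and the node each one is scheduled on,
# so you can spot the two replicas sharing a node.
kubectl -n longhorn-system get replicas.longhorn.io \
  -l longhornvolume=my-volume \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeID

# Delete ONE of the two co-located replicas (substitute the real name).
# Longhorn notices the volume is degraded and rebuilds a replacement;
# soft anti-affinity should steer it to the empty third node.
kubectl -n longhorn-system delete replicas.longhorn.io <replica-name>
```

Deleting a replica from the Longhorn UI (Volume → Replicas) achieves the same thing and is the safer option if you're unsure which object to remove.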
After we add support for updating the replica count of a volume (https://github.com/rancher/longhorn/issues/299), you should be able to increase the replica count temporarily to trigger the rebuilding process, then decrease the replica count back to normal, and finally remove the extra replicas.