Hey guys, I’ve mentioned before that my company hosts websites with Docker/Rancher. By January I’m going to be migrating 150+ websites to the platform, and currently I’m running an Nginx container for each website (to isolate each site from the others for security purposes). Is this an inefficient use of Rancher? I’m wondering how much more memory it will use to do things this way instead of having a global nginx instance.
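For concreteness, here’s a minimal sketch of what my per-site layout looks like in a compose file (site names and paths are made up). Each site carries its own nginx master and worker processes plus a little per-container overhead, so the memory cost grows linearly with site count:

```yaml
# docker-compose.yml -- one nginx service per site (hypothetical names/paths)
version: '2'
services:
  site-a:
    image: nginx:alpine    # small base image keeps per-site overhead low
    volumes:
      - ./site-a/html:/usr/share/nginx/html:ro          # each site's content stays isolated
      - ./site-a/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    mem_limit: 64m         # cap memory so 150+ containers stay predictable
  site-b:
    image: nginx:alpine
    volumes:
      - ./site-b/html:/usr/share/nginx/html:ro
      - ./site-b/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    mem_limit: 64m
```

From what I’ve measured, an idle nginx:alpine container sits in the single-digit-MB range, so 150 of them should land on the order of a gigabyte or two total, which seems like a fair price for the isolation. Happy to be corrected on that.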
I feel like it’s a bit of an apples-and-oranges comparison. You can accomplish the same task (serving websites) with both methods. However, an investment in containerization means you should be able to trivially scale an application out horizontally with orchestration on a platform like Rancher. I run a series of web applications, one of which is far more popular than the others. That one lives in Rancher, where we can scale it up or down, while leaving our older apps running on more traditional multi-vhost web servers.
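To make that concrete: in Rancher 1.x the scale knob is just a field in rancher-compose.yml, so you can bump the popular app without touching the quiet ones. Service names here are hypothetical:

```yaml
# rancher-compose.yml -- scale only the service that needs it (names hypothetical)
version: '2'
services:
  popular-app:
    scale: 6      # Rancher's scheduler spreads the containers across hosts
  quiet-app:
    scale: 1      # low-traffic apps stay at a single container
```

You can also change it after the fact from the Rancher UI, or (if memory serves) with `rancher-compose scale popular-app=6` from the CLI.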
If you want a self-contained environment that’s portable, then Docker is the cool new kid on the block. From an operations point of view, I’d rather deploy my dev team’s Docker containers with an orchestration tool like Rancher than spend my time configuring monolithic servers that have all the necessary software dependencies. It takes me out of the business of supporting the software and empowers my users to build containers that suit their needs.
I like Rancher’s built-in load balancing and metadata service. You don’t get those out of the box by rolling your own VMs or physical hardware running nginx. I think it’s those value-adds that make the micro-services model easier to swallow.
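For anyone who hasn’t seen it, the load balancer is itself just another service you declare. A rough sketch of Rancher 1.x’s HAProxy-based balancer routing by hostname to per-site stacks (stack/service names are hypothetical, and I’m writing the lb_config keys from memory, so double-check against the docs):

```yaml
# docker-compose.yml -- the balancer container (Rancher 1.x HAProxy image)
version: '2'
services:
  web-lb:
    image: rancher/lb-service-haproxy   # Rancher's HAProxy-based LB
    ports:
      - "80:80"
---
# rancher-compose.yml -- hostname-based routing rules live in lb_config
version: '2'
services:
  web-lb:
    lb_config:
      port_rules:
        - hostname: site-a.example.com   # route by Host header
          source_port: 80
          target_port: 80
          service: site-a/web            # stack/service target (hypothetical)
        - hostname: site-b.example.com
          source_port: 80
          target_port: 80
          service: site-b/web
```

The metadata service is similarly low-friction: containers can query an HTTP endpoint (http://rancher-metadata/latest, if I remember the path right) to discover their own stack, service, and peer IPs, which beats templating that info into configs yourself.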