Using Docker for web applications with old Apache/database/PHP

Hi,

we have several old web applications which we developed ourselves. They currently run on e.g. SLES 10 SP4, Apache 2.2, MySQL 5.0.26, Postgres 8.1.23, Perl 5.8.8 or PHP 5.2.14. And they run on old hardware. We tried to move the web applications to more recent versions of the needed software, but that didn’t work properly. The web applications would have to be adapted, and our developer left us some months ago and we will not get a new one.

Are containers a possibility to prolong the lifetime of such applications? I would build the necessary environment for the applications, but they would nevertheless run on new hardware. Do containers provide a level of security, like chroot? Are Docker containers isolated from other containers and from the host OS?
I know that I can put the whole system in a VM, but I’d like to get some experience with Docker.
Can I do live migration with Docker, like with VMs?

Thanks.

Bernd

Hi Bernd,

Docker containers do provide a level of isolation, much more than chroot does (see e.g. https://docs.docker.com/engine/security/security/). But as the processes within are in effect still running natively on the host machine, live migration is not available AFAICT.

Depending on your scenario, running these applications on old base software can pose a severe security problem. Isolating the applications may not be sufficient at all, but that’s something you’ll have to decide, as only you know the environment in which these applications are used, what security policies apply and what potential damage can result from exploiting loopholes in e.g. PHP, MySQL and/or the software packages.

Using Docker can be quite different from running your application in a VM, especially since typically, each service is isolated in a separate container. So usually there’s no bundling of httpd and MySQL in a common container.
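A minimal sketch of that one-service-per-container split, in docker-compose form. The image tags, service names and the password below are illustrative assumptions, not anything from this thread - whether such old image tags are still published is a separate question:

```yaml
# Hypothetical docker-compose.yml: httpd and MySQL in separate
# containers sharing a private network. All names/tags are examples.
version: "2"
services:
  web:
    image: httpd:2.2          # web server in its own container
    ports:
      - "8080:80"             # expose the web app on the host
    depends_on:
      - db
  db:
    image: mysql:5.5          # database in its own container, not bundled with httpd
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only
```

Compose puts both services on a shared network, where the web container reaches the database under the service name `db`.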

If you come from a VM world and will ultimately run everything inside VMs anyway, I’d recommend skipping Docker for the time being. OTOH, getting to know containers will definitely lead to a valuable expansion of your production toolset :slight_smile:

Regards,
J

Hi J (doesn’t an agent in MIB have this name?),
I just saw your message. I didn’t get an e-mail about it; maybe I didn’t subscribe or I overlooked it.
Thanks for your answer. Of course you are right, messing about with containers is valuable. So I’d like to give it a try. From what I’ve read so far I know that I need at least two containers, one for httpd and one for Postgres. But can I interconnect them?
That live migration is not possible is not a problem, it would just be a “nice to have”. But I could run my containers in a VM which I can live-migrate in my cluster.
I’m quite familiar with VMs (KVM). But I never saw VMs as a security measure. OK, I can’t break out of the VM (although I read a short time ago that it’s possible, in the heise security newsletter). But normally the VM has a network connection (otherwise a breach is difficult to imagine), so an intruder can have a look around and search for new targets.
What I’m thinking about is protecting the old web app with AppArmor (which seems to be more difficult than I imagined; I’ve already started a bit) or putting it into containers. What do you think provides more security?

Thanks.

Bernd

Hi Bernd,

…and I do wear black, most of the time. But fortunately I am not “your first, last, and only line of defense”, you have a lot of other skilled people at hand with SUSE :wink:

Sure - you’ll just have to provide some common means of communication, typically a network connection.
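As a sketch of how that looks with a user-defined Docker network (container names and image tags below are placeholders, not from this thread):

```shell
# Hypothetical example: connect an httpd and a postgres container
# over a user-defined bridge network.
docker network create webapp-net

# start the database on that network
docker run -d --name db --network webapp-net postgres:8.4

# start the web server on the same network, published on the host
docker run -d --name web --network webapp-net -p 8080:80 httpd:2.2

# Inside the "web" container, the database is now reachable under
# the hostname "db" (Docker's embedded DNS resolves container names).
```

On very old Docker versions that lack user-defined networks, the legacy `--link db:db` option served the same purpose.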

[QUOTE=berndgsflinux;38719][…]
I’m quite familiar with VMs (KVM). But I never saw VMs as a security measure. OK, I can’t break out of the VM (although I read a short time ago that it’s possible, in the heise security newsletter). But normally the VM has a network connection (otherwise a breach is difficult to imagine), so an intruder can have a look around and search for new targets.[/QUOTE]
Splitting vulnerable environments into different domains, with some form of protection between them, is an improvement when it comes to security. Communication channels between these domains lessen the level of security, but most of the time you still have more protection than running everything on a single box :wink:

It’s like with DMZ systems. Ideally, you’d isolate each DMZ system on a separate machine, with the DMZ network running on separate switches and only connected via a firewall machine. What do we see in real life? The DMZ network is often set up as a VLAN on the same switch infrastructure as the other (production!) networks, and DMZ hosts are VMs on a common hardware server. This indeed is less secure than the ideal approach, but it helps with many risks. And it is often the only way to go, budget-wise, for smaller installations. You can secure access to the DMZ hosts (even if one got compromised) per system and restrict access to the remaining network via according rules on the firewall machine.

If you put Docker containers into the picture, the level of isolation is less than with VMs (and exploiting a kernel problem brings down everything on that single server), but I see a much higher risk in running your applications on old software levels. If the PHP / Perl implementation allows bypassing security measures, then the data presented by these applications is pretty vulnerable, no matter whether the application runs on separate machines, inside VMs or via containers. That’s what I was trying to point out in my original answer.

Containers: IMO it’s easier to set up containers in a way that makes escaping that sandbox hard. Properly configuring AppArmor can be much more difficult - but that depends on the complexity of the accesses and whether you have all cases covered when testing your AppArmor rule sets. Admins tend to relax those rules too easily once they hit an AppArmor-related production issue (after initially having been too restrictive).
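To illustrate what "setting up the sandbox tightly" can mean in practice, here is a sketch using standard `docker run` hardening options (the image name `my-old-webapp` is a placeholder):

```shell
# Tighten a container's sandbox:
#   --read-only           immutable root filesystem
#   --cap-drop/--cap-add  drop all Linux capabilities except binding port 80
#   --security-opt        forbid gaining privileges via setuid binaries
docker run -d --name web \
  --read-only \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  my-old-webapp
```

Note that Docker also applies a default AppArmor profile (`docker-default`) on distributions where AppArmor is enabled, so the two approaches can be combined rather than being an either/or choice.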

Regards,
J

[QUOTE=jmozdzen;38721]
If you put Docker containers into the picture, the level of isolation is less than with VMs (and exploiting a kernel problem brings down everything on that single server), but I see a much higher risk in running your applications on old software levels. If the PHP / Perl implementation allows bypassing security measures, then the data presented by these applications is pretty vulnerable, no matter whether the application runs on separate machines, inside VMs or via containers. That’s what I was trying to point out in my original answer.

J[/QUOTE]

Hi J,

the database is read-only, the user can’t enter data into it. And the data is also not modified by us, so we constantly deliver the same content. If the DB really were compromised, I could easily restore it from a backup.
What I want to ask:
I found on Docker Hub images with our Postgres, Apache and PHP versions. But what if the Apache/PHP image is missing a needed PHP package? Can I modify an image easily? Or would it be a better choice to create our own images?
Although the wheel doesn’t have to be reinvented a thousand times.
I’d like to use containers to get used to them. Although I don’t see the real benefit yet, and I’m wondering if we’ll still be talking about containers in 5 or 10 years. I’ve already experienced some hypes in IT.

Bernd

Hi Bernd,

I didn’t mean to over-emphasize the subject “security risks with old software stacks”; I’m sure you have a solid assessment, e.g. regarding data leaks.

[QUOTE]Can I modify an image easily?[/QUOTE]

You can, and you should, e.g. when updates to packages are available. For an example workflow, see the recent announcements about SUSE Manager 3.1 (and I know that your containers in question are for old software, likely without ever receiving patches - updating images is a topic for containers that need to be kept up-to-date, especially security-wise).
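The usual way to "modify" an image is not to edit it in place, but to derive your own image from it with a Dockerfile. A sketch, where the base image name, the package manager and the package name are all assumptions - use whatever your actual hub image provides:

```Dockerfile
# Hypothetical Dockerfile: extend an existing Apache/PHP base image
# rather than building one from scratch. Names are placeholders.
FROM some-vendor/apache-php:5.2

# add the missing PHP module via the base image's package manager
RUN apt-get update && \
    apt-get install -y php5-pgsql && \
    rm -rf /var/lib/apt/lists/*

# ship your application code inside the image
COPY ./webapp /var/www/html
```

Then `docker build -t my-webapp .` produces your own image, and rebuilding after changing the Dockerfile is cheap thanks to layer caching.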

[QUOTE]I’d like to use containers to get used to them.[/QUOTE]

I like that attitude :slight_smile:

Regards,
J