Configure SuSE servers in HA

Hi There,

I have got this interesting thing which I need to configure on ESX servers.
I want to configure SuSE Linux servers in high availability using storage that is already on a SAN, and this SAN is connected to the ESX servers.

Scenario:
There will be two SuSE Linux servers running on two different ESX servers, ESX01 and ESX02: SuSE01 on ESX01 and SuSE02 on ESX02. These servers will have software HA configured between them.
These ESX servers have common iSCSI storage configured, which is already set up as targets on them, so ESX01 and ESX02 list it as LUNs.
Now, I have these two SuSE servers, which I will be installing on the local storage of the ESX servers.
But the application that I would be hosting on the SuSE servers would be using this iSCSI storage area.
As the iSCSI storage is already configured as targets on the ESX.
In order to communicate with the iSCSI channel I have also configured another network adapter and provided it with an IP in the same range as the iSCSI network.

Question:
The first question is: is this setup even possible?
If yes, then what are the steps that I might need to take to accomplish this successfully?

Thank you,

On 04/11/2015 11:04 PM, ddgaikwad wrote:[color=blue]

I have got this interesting thing which I need to configure on ESX
servers.[/color]

“interesting things”… that sounds, well, interesting?
[color=blue]

I want to configure SuSE Linux servers in high availability using storage
that is already on a SAN, and this SAN is connected to the ESX servers.

Scenario:
There will be two SuSE Linux servers running on two different ESX
servers, ESX01 and ESX02: SuSE01 on ESX01 and SuSE02 on ESX02. These
servers will have software HA configured between them.
These ESX servers have common iSCSI storage configured, which is
already set up as targets on them, so ESX01 and ESX02 list it as LUNs.
Now, I have these two SuSE servers, which I will be installing on the
local storage of the ESX servers.
But the application that I would be hosting on the SuSE servers would
be using this iSCSI storage area.[/color]

Directly? When you mention iSCSI used by the application that makes me
think that the application is somehow iSCSI-aware, but I doubt you mean that.
[color=blue]

As the iSCSI storage is already configured as targets on the ESX.[/color]

guessing this is a fragment…
[color=blue]

In order to communicate with the iSCSI channel I have also configured
another network adapter and provided it with an IP in the same range as
the iSCSI network.[/color]

That’s nice, but probably irrelevant since iSCSI can be accessed
regardless of network proximity; whether or not doing so is a good thing
is another issue entirely. I presume “network adapter” in this case means
on the virtual machine, not the ESX host, though why you would do it this
way is also confusing to me when you already have the iSCSI initiator
within ESX using the target and exposing it to the VM (per my
understanding) as local storage.
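
If you do end up running the initiator inside the SLES VMs rather than (or
in addition to) letting ESX present the LUN, the open-iscsi tools on SLES
are the usual way to reach the target. This is only a minimal sketch,
assuming a portal at 192.168.100.10 and a made-up IQN (both placeholders
for whatever your SAN actually presents); yast2 iscsi-client does the same
job interactively:

  # discover the targets offered by the storage portal (IP is a placeholder)
  iscsiadm -m discovery -t sendtargets -p 192.168.100.10

  # log in to one of the discovered targets (IQN is a placeholder)
  iscsiadm -m node -T iqn.2015-04.com.example:storage.lun1 \
      -p 192.168.100.10 --login

  # have the session re-established automatically at boot
  iscsiadm -m node -T iqn.2015-04.com.example:storage.lun1 \
      -p 192.168.100.10 --op update -n node.startup -v automatic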
[color=blue]

Question:
The first question is: is this setup even possible?[/color]

I thought you already had this setup, so that would seem to imply ‘yes’.
Is it a good idea?
[color=blue]

If yes, then what are the steps that I might need to take to accomplish
this successfully?[/color]

I have never used VMware ESX as an admin, and have only poked around
within existing systems during regular day-to-day work over the past
decade, but every time I have used iSCSI with SLES I have done so directly
from SLES systems to iSCSI targets. Using ESX in the middle may not be
terrible, and may even have benefits by having that storage work handled
on bare metal vs. within the VM, but doing anything with those targets in
both places (ESX as well as the SLES VMs) directly seems like undesirable
redundancy. Maybe that is not what you are trying to do, but it is what I
am interpreting from what you wrote.

iSCSI is happy to have a target connected to by a large number of
initiators, and that should be fine, however unnecessary, so long as you
follow the rules of the filesystem within. For example, if using things
like ext or xfs, do not mount the filesystem writable (or, to be safe, at
all) in multiple places simultaneously. With other things like OCFS2 that
restriction is lifted, but without any mention of the filesystem in play
it is probably best to just not mount things until you are ready for them.

The HAE software has simple resource agents prepared to help you do this
properly, so all you really need is to have the iSCSI pieces set up on the
SLES boxes and then configure HAE to mount/unmount at the appropriate
times when one node or the other becomes active.
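
As a rough illustration of that last bit, a Filesystem resource in the
cluster configuration is usually all it takes; this is only a sketch with
made-up device path, mount point, and resource names:

  # crm shell on one of the SLES nodes (device, directory and resource
  # names are placeholders for whatever your LUN and application use)
  crm configure primitive p_fs_appdata ocf:heartbeat:Filesystem \
      params device="/dev/disk/by-id/scsi-EXAMPLE-part1" \
             directory="/srv/appdata" fstype="ext4" \
      op monitor interval="20s" timeout="40s"

  # assuming you have also defined an IP and an application primitive,
  # group them so they always move between nodes together
  crm configure group g_app p_fs_appdata p_ip_app p_app

With ext4 (or xfs) only one node will ever have that mounted at a time; if
both nodes genuinely need it mounted simultaneously, that is when OCFS2
and its supporting cluster resources come into the picture instead.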

Also, as a reminder, for quorum reasons it is best to always have at least
three nodes in a cluster.
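
If you do stay at two nodes, the cluster stack needs to be told so; a
sketch for the corosync 2.x-based HAE releases (older releases typically
used the Pacemaker property no-quorum-policy=ignore instead):

  # /etc/corosync/corosync.conf (fragment)
  quorum {
          provider: corosync_votequorum
          two_node: 1
  }

Either way, working STONITH/fencing matters even more with only two nodes.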


Good luck.

If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…