DomUs see the same NSS volume, but not the same data

I had posted this on the OES forum but think it is more applicable to this thread. Here is the situation:

I have two OES 11 SP1 servers virtualized on a Xen SLES 11 SP2 server. The host has a RAID 5 from which an NSS volume was created. Both servers see the same volume at the same path in their directory structure, yet the two servers do not see the same directories on the volume. I know this is not good, but I am hoping that someone can give me some pointers if my plan is not the best.

More specifics:

Server 1
Examining /dev/disk/by-path, I see xvdb, which is the NSS volume. That is fine. When I examine the output of the df -h command, it shows up as expected.

Server 2
Examining /dev/disk/by-path, I see xvdb, which is not shown in the output of df -h. I also see xvdd in /dev/disk/by-path, as expected. Yet xvdb is a path that can be drilled down into from the terminal or the GUI.

The bad part is that this xvdb holds the directory for the GroupWise GWIA. My plan tonight is to stop GroupWise, copy the directory from the xvdb seen by server 2 to the xvdd on server 2, change the path in C1 for the GWIA, and start it back up. That should take care of that, but…

How do I detach (is that the right term?) the NSS volume xvdb from server 2?

Also, server 2, which runs our GroupWise system, has become sluggish even though GroupWise itself seems to be running fine. I can connect to it via ssh, but the terminal screen never becomes responsive. From the host, using the DomU's console, I couldn't get top to display, so I issued init 3 to take it out of runlevel 5, then tried to go back to init 5. Now all I see is a spinning progress icon, and the process never finishes to bring the desktop back.

Even the Dom0 is becoming less responsive. When I try to examine disk space with df -h, the process won't finish and display the data. I am hoping that someone will offer some suggestions before this whole system goes down.

jmcg wrote:
[color=blue]

I had posted this on the oes forum but think it is more applicable to
this thread. Here is the situation:

I have two oes11 sp1 servers virtualized on a xen sles 11 sp 2 server.[/color]

Okay…
[color=blue]

The host has a raid 5 from which an nss volume is created from.[/color]

Is this a hardware RAID device? Do you use the whole array or have you
created a separate logical volume (LUN)? Or are you using Linux
software RAID?

In any case, your Dom0 should see a disk (sdX) if you are using
hardware RAID or an mdX device if you are using Linux software RAID.
Which do you have?

Now, SLES11-SP2 doesn’t know anything about NSS so I assume you are
passing the sdX/mdX device to each of your two OES DomUs? And that in
one of your DomUs you created an NSS volume?

The first thing to verify is that you got this part of it right. There
are some special considerations when you share a disk.

Have you seen TID 7004451?
How to share a XEN disk with two or more virtual machines.
http://www.novell.com/support/kb/doc.php?id=7004451

It says, in part:[color=blue]

Customer needs to share a disk file (i.e.
/var/lib/xen/images/VM-NAME/disk0) or a physical device (i.e.
/dev/sdb1) with two or more XEN virtual machines in order to be able
to install a cluster services like Linux Pacemaker or Novell Cluster
Services.[/color]

All this does is ensure that a device can be shared between multiple
DomUs.
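For context, the fix described in that TID amounts to marking the device shareable in each DomU's disk definition. A minimal sketch of the relevant config line, with hypothetical device names (not taken from this setup):

```
# In each DomU's configuration, the "w!" access mode tells Xen that
# this writable device may be attached to more than one running guest.
disk = [ 'phy:/dev/sdb1,xvdb,w!' ]
```

Note that "w!" only removes Xen's safety check; it does nothing to coordinate the guests' access to the filesystem itself.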
[color=blue]

Both servers see the same volume in the same path in their
directory structure.[/color]

This is what you would expect but things get complicated if both
servers have write access. Normally you can’t have two separate systems
update the same filesystem. Some filesystems do support this shared
access, others do not.

This is not really a virtualization issue. As I suggested in your other
post:
[color=blue]

you should post this one in ‘OES-L: Storage and Backup’
[/color]
(https://forums.novell.com/novell-product-discussions/file-networking-services/open-enterprise-server/oes-linux/oes-l-storage-backup/)[color=blue]
where you’ll find those with the necessary expertise to help.[/color]

[color=blue]

Yet, the two servers do not see the same directories on the
volume. I know this is not good but am hoping that someone can give me
some pointers if my plan is not the best.

More specifics:

Server 1
Examine the /dev/disk/by-path I see the xvdb which is the nss volume.
That is fine. When I examine the output from df -h command it shows up
as expected.

Server 2
Examine the /dev/disk/by-path I see the xvdb which is not shown in the
output from df -h. I also see the xvdd in both the /dev/disk/by-path
as is expected. Yet the xvdb is a path that can be drilled down into
from the terminal or the gui.[/color]

A shared disk employs a locking mechanism to prevent concurrent updates
from multiple systems. If a system thinks it has full access to a
device but doesn’t, lots of weird things can happen… including data
corruption.

[color=blue]

The bad part is this xvdb has the directory for groupwise gwia. My
plan tonight is to stop gw and to copy it from the xvdb seen by
server 2 to the xvdd on server 2. Change the path in C1 for gwia,
start it back up, that should take care of that, but…[/color]

I really don’t know what xvdb or xvdd are used for but that is beside
the point. If you are not careful you may very well wipe out all your
data.

Rather than deal with the issues in the rest of your post, I suggest
you refer to the documentation for some guidelines then post any
follow-up questions in the ‘OES-L: Storage and Backup’ forum.

OES 11 SP1: File System Management Guide
http://www.novell.com/documentation/oes11/stor_filesys_lx/data/hn0r5fzo.html#hn0r5fzo

OES 11 SP1: NSS File System Administration Guide for Linux
http://www.novell.com/documentation/oes11/stor_nss_lx/data/front.html#front


Kevin Boyle - Knowledge Partner
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…

Kevin,
I won’t bore you with my details, but the bottom line is that the two servers do not see the same disk. The names were the same to the first part of the directory structure, but after that they are different.

To give you more specifics, the xen host has a RAID 1 that the xen os is installed on and has the OS of each DomU on it. The host also has a RAID 5 on it that I have carved up into different partitions for each DomU to use. The system is on its 6 th year and was my first experience in virtualization. The xvda, xvdb, xvdc, and xvdd are the different partitions of the RAID 5 that different DomU’s have assigned to them. So all is good on the storage end.

However, I was having some performance issues as mentioned at the end of my first post. I could not get the host to display df -h on the terminal. I wasn’t sure how to troubleshoot why that was occurring, I shut down the DomU’s one at a time to see if I could get the command to display, but it wouldn’t. So I eventually shutdown all the DomU’s and restart the Xen server and all seems to be fine. I think my problem is that the RAID one is 86% allocated. There is free space for each of the DomU’s on the disk images, but to the host it only has 14% space left. Won’t that cause performance issues? Isn’t that problematic or potentially so? Thank you for the help.

jmcg wrote:
[color=blue]

Kevin,
I won’t bore you with my details, but the bottom line is that the two
servers do not see the same disk.[/color]

That’s good to hear but it is not what I understood when I read:
[color=blue]

The host has a raid 5 from which an nss volume is created from. Both
servers see the same volume in the same path in their directory
structure.[/color]
[color=blue]
The names were the same to the first part of the directory structure,
but after that they are different.[/color]

I don’t understand what that means.

[color=blue]

To give you more specifics, the xen host has a RAID 1 that the xen os
is installed on and has the OS of each DomU on it. The host also has
a RAID 5 on it that I have carved up into different partitions for
each DomU to use. The system is on its 6 th year and was my first
experience in virtualization.[/color]

So far so good.

[color=blue]

The xvda, xvdb, xvdc, and xvdd are the
different partitions of the RAID 5 that different DomU’s have
assigned to them. So all is good on the storage end.[/color]

Again, I’m confused…

When you say “The host also has a RAID 5 on it that I have carved up
into different partitions”, that suggests to me that your Dom0 sees a
single large drive. If you partitioned it, you would have sdd1, sdd2,
sdd3, etc.

The other possibility is that you used your RAID utility to create
Logical Volumes (LUNs) in which case Dom0 would see separate drives but
I would expect them to be seen in Dom0 as sda, sdb, sdc, etc. Your DomU
would see the device as xvdX but, regardless of how the device is known in
Dom0, it would likely have the same name in each DomU (e.g. xvda). So
where do the names xvda, xvdb, xvdc, and xvdd come from?

[color=blue]

However, I was having some performance issues as mentioned at the end
of my first post. I could not get the host to display df -h on the
terminal. I wasn’t sure how to troubleshoot why that was occurring, I
shut down the DomU’s one at a time to see if I could get the command
to display, but it wouldn’t. So I eventually shutdown all the DomU’s
and restart the Xen server and all seems to be fine.[/color]

Okay.

[color=blue]

I think my problem is that the RAID one is 86% allocated.[/color]

If you are explaining it correctly, that is not necessarily a problem.
For example, if you had a 100 GB RAID 1 array and created three
partitions (LUNs) each of which was 25 GB, your array would be 75%
allocated, and that is fine, but somehow I don’t think that is what you
are saying.
[color=blue]

There is free space for each of the DomU’s on the disk images, but to
the host it only has 14% space left.[/color]

Again, you haven’t actually described how you have configured your RAID
1 array. When you say “There is free space for each of the DomU’s on
the disk images”, am I to assume that you are using file-backed storage
(meaning each virtual disk is a single image file on a larger physical
disk) for your DomUs’ system disks? If that is what you have done, are
you using sparse image files, meaning the space is not preallocated?

If you are using a 50 GB sparse image file, the DomU would see a 50 GB
disk but the file size on your Dom0 may only be 30 GB. The Dom0 file
will grow to 50 GB as the DomU uses more storage space.
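To illustrate the sparse behaviour: the apparent size and the space actually allocated can be compared with ls and du. A quick demo with a throwaway file (the path is arbitrary):

```shell
# Create a 50 MB sparse file: apparent size 50M, almost no blocks allocated
truncate -s 50M /tmp/sparse-demo.img
ls -lh /tmp/sparse-demo.img   # reports the apparent size (50M)
du -h /tmp/sparse-demo.img    # reports blocks actually allocated (~0)
rm /tmp/sparse-demo.img
```

Running du against the files under /var/lib/xen/images/ on Dom0 would show how much of each image is really consuming disk space.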

If this is how your system is configured, and if your RAID 1 array
is seen by your Dom0 as a single drive/partition (other than swap), and
if you only have 14% free space, then yes, this is a problem!

[color=blue]

Won’t that cause performance issues?[/color]

Sparse files by themselves can be a performance issue but you haven’t
confirmed that’s what you’re using.

[color=blue]

Isn’t that problematic or potentially so?[/color]

Very much so. Very bad things can happen to your system if you run out
of space. Depending upon how you have configured storage on your Dom0, you
may exhaust all available storage at a specific mount point and still
show free space available on the drive.

Much of what I have said is speculation based on assumptions. Have I
misunderstood (again) what you are trying to explain?


Kevin Boyle - Knowledge Partner
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…

I hope that my ignorance is not too obvious to you or anyone else who views this, but the more I put into this, the better I understand my system.
First, the physical server has a hardware RAID 1 and a hardware RAID 5. Those are physical disks. The host OS is SLES 11, installed as a Xen server with minimal extras on the RAID 1.
The RAID 1 shows up in the terminal (df -h) as c0d0p1 in the /dev/cciss/ directory. That is the disk space that is 78% used up.

Each DomU (guest or VM) is installed as a disk file on the RAID 1. That is, not only is the host OS installed on the RAID 1, but the disk files for each of the DomUs (in the /var/lib/xen/images/ directory) are too; they are technically files (disk images).

The RAID 5 shows up in the host directory (/dev/cciss/c0d1x) with each of the Linux partitions that I have created for the DomUs to use. Each DomU has a partition of the RAID 5 attached to it. They show up as /dev/cciss/c0d1p2, p3, or p4, depending on the order I created them.

Within each of the DomUs, I see those RAID 5 partitions as xvdb, xvdd, or whatever. That is what I see from the DomU's terminal when I enter df -h. I don't see them from the Dom0's terminal because they are not attached to it, only to the DomUs.

So, it seems that what I have done is use disk files for the OS on each DomU while using partitions on the RAID 5 for each DomU's data. So I am mixing and matching. Is that bad?

Now, with all that said, I have one more issue. One of the DomUs, an OES 11 SP1 server running my primary GroupWise domain, has become sluggish. Over a period of five or so days it becomes less responsive. Now I can't even get top or df -h to respond from the terminal or from an ssh session. The GroupWise system is running fine, but the server seems busy. What can I do to investigate why it is not responsive to terminal commands? Yesterday I restarted the GroupWise system and it took five or six minutes, and it only has 100 users, a small system. I appreciate your help.

jmcg wrote:
[color=blue]

I hope that my ignorance is not too obvious to you or anyone else who
views this, but the more I put into this, the better I understand my
system.[/color]

That is a good thing. Detailed questions make us think and even
question what has been put into place, so we can better understand it
and more easily identify issues.

[color=blue]

First, the physical server has a RAID 1 hardware type and a hardware
type RAID 5. Those are physical discs. The host OS is SLES 11 and is
install as a Xen server, minimal extras, on the RAID 1 .
The RAID 1 shows up in the terminal (df -h) as c0d0p1 in the
/dev/cciss/ directory. That is the disk space that is 78% used up.[/color]

Hardware RAID is good. 78% used (or is it 86% as stated in your other
post?) doesn’t help much without knowing a bit more. Can you provide
the output from these commands run on Dom0? It will explain a lot more
than I can get by asking specific questions.

fdisk -l
cat /etc/fstab

Note: That’s a lower case ELL on the fdisk command.

Please post your results between “code” tags (using the “#” from the
web interface) to make the output more readable.

[color=blue]

Each DomU (guests or VMs) are installed as disk files on the RAID 1.
That is, not only is the Host OS installed on the RAID 1 but also the
disk files for each of the DomUs (they are in the /var/lib/xen/images/
directory) are too, but they are technically files (disk images).[/color]

So far, that’s fine. The commands I asked you to run will tell us more.

[color=blue]

The RAID 5 shows up in the host directory (/dev/cciss/c0d1x) with each
of the partitions (linux) that I have created for the DomUs to use.
Each DomU has a partition of the RAID 5 attached to it. They show up
as /dev/cciss/c0d1p2 or 3 or 4, depending on the order I created
them.[/color]

It is good that you separated your data storage from that which
contains your operating system.

[color=blue]

Within each of the domUs, I see those partitions (RAID 5) as xvdb or
xvdd or whatever. That is what I see from the terminal of the domU
when I enter df -h. I don’t see them from the Dom0’s terminal because
they are not attached to it, only to the DomUs.[/color]

What you say makes sense but if on a particular DomU you can see
/dev/xvdd, that suggests to me you have assigned one file-backed
storage device for the operating system and three separate partitions
from your RAID 5 device. Is that correct? Why three devices for data?
(Just curious.)

[color=blue]

So, it seems that what I have done is used disk files for the OS on
each DomU but am using actually using partitions on the RAID 5 for
each of the DomUs. So I am mixing and matching. Is that bad?[/color]

No, that is not a bad idea depending upon what you are trying to
achieve. File-backed storage is created by default. It offers some
flexibilities and the ability to “thin provision” but is not
recommended for use with NSS data. Partitions on the RAID 5 (a block
storage device) offer better performance and are recommended for NSS.
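That difference is visible right in the DomU disk definition; a sketch with illustrative paths only (the image and partition names here are assumptions, not taken from this setup):

```
# File-backed image: flexible, can be thin provisioned ("sparse")
disk = [ 'file:/var/lib/xen/images/oes1/disk0,xvda,w' ]
# Physical partition passed through: block storage, better for NSS data
disk = [ 'phy:/dev/cciss/c0d1p5,xvdb,w' ]
```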

[color=blue]

Now with all that said I have one more issue. One of the DomUs that
is an OES11sp1 server that is running my primary domain for groupwise
has become sluggish. Over a period of 5 or so days it becomes less
responsive. Now I can’t even get top or df -h to activate from the
terminal or from an ssh session. The groupwise system is running
fine, but the server seems busy. What can I do to investigate why it
is not responsive to terminal commands? Yesterday I did a restart of
the groupwise system and it took 5 or 6 minutes to restart and it
only has 100 users, a small system. I appreciate your help.[/color]

The first thing we have to determine is whether this issue lies within
your Dom0 or your DomU. If only one DomU is affected, I would suspect
the DomU (Server 2?).

Please run these commands on your DomU:

fdisk -l
cat /etc/fstab

If you shutdown GroupWise, is your server more responsive?

Did you move GWIA as you planned to do in your OP?

Poor performance can be caused by IO bottlenecks. Have you checked for
excessive swapping on each server or verified that there are no disk
errors that are causing excessive retries?
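A few non-intrusive checks along those lines, runnable on either Dom0 or the DomU (all standard Linux tools; nothing here is specific to your setup):

```shell
# Swap in use? A large gap between SwapTotal and SwapFree suggests pressure
grep -E 'SwapTotal|SwapFree' /proc/meminfo
# Cumulative pages swapped in/out since boot; rising numbers mean active swapping
grep -E 'pswpin|pswpout' /proc/vmstat
# Disk errors or retries logged by the kernel (may require root)
dmesg 2>/dev/null | grep -iE 'error|fail|i/o' | tail -n 20
```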

One other suggestion: If this is a production server and you need to
get this resolved quickly, have you considered opening a Service
Request to get some hands-on support?

Please provide the requested information to get a better understanding
of what’s happening.


Kevin Boyle - Knowledge Partner
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below…

[QUOTE=KBOYLE;16512]jmcg wrote:
[COLOR=blue]

I hope that my ignorance is not too obvious to you or anyone else who
views this, but the more I put into this, the better I understand my
system.[/COLOR]

That is a good thing. Detailed questions make us think and even
question what has been put into place to help us better understand and
better able to identify issues.

[COLOR=blue]

First, the physical server has a RAID 1 hardware type and a hardware
type RAID 5. Those are physical discs. The host OS is SLES 11 and is
install as a Xen server, minimal extras, on the RAID 1 .
The RAID 1 shows up in the terminal (df -h) as c0d0p1 in the
/dev/cciss/ directory. That is the disk space that is 78% used up.[/COLOR]

Hardware RAID is good. 78% used (or is it 86% as stated in your other
post?) doesn’t help much without knowing a bit more. Can you provide
the output from these commands run on Dom0? It will explain a lot more
than I can get by asking specific questions.

fdisk -l
cat /etc/fstab

Note: That’s a lower case ELL on the fdisk command.

Please post your results between “code” tags (using the “#” from the
web interface) to make the output more readable.
[COLOR=red]

[CODE]
SLES-XEN:/ # fdisk -l

Disk /dev/cciss/c0d0: 146.8 GB, 146778685440 bytes
255 heads, 63 sectors/track, 17844 cylinders, total 286677120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00075c47

       Device Boot      Start         End      Blocks   Id  System

/dev/cciss/c0d0p1 63 4209029 2104483+ 82 Linux swap / Solaris
/dev/cciss/c0d0p2 * 4209030 286663859 141227415 83 Linux

Disk /dev/cciss/c0d1: 899.9 GB, 899898718208 bytes
255 heads, 63 sectors/track, 109406 cylinders, total 1757614684 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006a452

       Device Boot      Start         End      Blocks   Id  System

/dev/cciss/c0d1p1 32 706876064 353438016+ 5 Extended
/dev/cciss/c0d1p2 706876065 1757607389 525365662+ 83 Linux
/dev/cciss/c0d1p5 64 195309599 97654768 83 Linux
/dev/cciss/c0d1p6 195309632 292968479 48829424 83 Linux
/dev/cciss/c0d1p7 292977468 481966064 94494298+ 83 Linux
SLES-XEN:/ # cat /etc/fstab
/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part2 / reiserfs acl,user_xattr 1 1
/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part1 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part4 /mnt/mount ext3 rw 0 0
##/dev/cciss/c0d1p4 /mnt/mount ext3 rw 0 0
##next line edited by jmcg 21May2012 so partition not mounted
##/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part3 /mnt/mount2 ext3 acl,user_xattr 1 2
/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part6 /mnt/mount3 ext3 acl,user_xattr 1 2
SLES-XEN:/ #

[/CODE]
[/COLOR]

[COLOR=blue]

Each DomU (guests or VMs) are installed as disk files on the RAID 1.
That is, not only is the Host OS installed on the RAID 1 but also the
disk files for each of the DomUs (they are in the /var/lib/xen/images/
directory) are too, but they are technically files (disk images).[/COLOR]

So far, that’s fine. The commands I asked you to run will tell us more.

[COLOR=blue]

The RAID 5 shows up in the host directory (/dev/cciss/c0d1x) with each
of the partitions (linux) that I have created for the DomUs to use.
Each DomU has a partition of the RAID 5 attached to it. They show up
as /dev/cciss/c0d1p2 or 3 or 4, depending on the order I created
them.[/COLOR]

It is good that you separated your data storage from that which
contains your operating system.

[COLOR=blue]

Within each of the domUs, I see those partitions (RAID 5) as xvdb or
xvdd or whatever. That is what I see from the terminal of the domU
when I enter df -h. I don’t see them from the Dom0’s terminal because
they are not attached to it, only to the DomUs.[/COLOR]

What you say makes sense but if on a particular DomU you can see
/dev/xvdd, that suggests to me you have assigned one file-backed
storage device for the operating system and three separate partitions
from your RAID 5 device. Is that correct? Why three devices for data?
(Just curious.)

[COLOR=blue]

So, it seems that what I have done is used disk files for the OS on
each DomU but am using actually using partitions on the RAID 5 for
each of the DomUs. So I am mixing and matching. Is that bad?[/COLOR]

No, that is not a bad idea depending upon what you are trying to
achieve. File-backed storage is created by default. It offers some
flexibilities and the ability to “thin provision” but is not
recommended for use with NSS data. Partitions on the RAID 5 (a block
storage device) offer better performance and are recommended for NSS.

[COLOR=blue]

Now with all that said I have one more issue. One of the DomUs that
is an OES11sp1 server that is running my primary domain for groupwise
has become sluggish. Over a period of 5 or so days it becomes less
responsive. Now I can’t even get top or df -h to activate from the
terminal or from an ssh session. The groupwise system is running
fine, but the server seems busy. What can I do to investigate why it
is not responsive to terminal commands? Yesterday I did a restart of
the groupwise system and it took 5 or 6 minutes to restart and it
only has 100 users, a small system. I appreciate your help.[/COLOR]

The first thing we have to determine is whether this issue lies within
your Dom0 or your DomU. If only one DomU is affected, I would suspect
the DomU (Server 2?).

Please run these commands on your DomU:

fdisk -l
cat /etc/fstab

[COLOR=red]

[CODE]
groupwise:~ # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00007bc8

Device Boot      Start         End      Blocks   Id  System

/dev/xvda1 2048 4208639 2103296 82 Linux swap / Solaris
/dev/xvda2 * 4208640 41943039 18867200 83 Linux

Disk /dev/xvdb: 4189 MB, 4189161472 bytes
255 heads, 63 sectors/track, 509 cylinders, total 8181956 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn’t contain a valid partition table

Disk /dev/xvdd: 96.8 GB, 96762161664 bytes
255 heads, 63 sectors/track, 11763 cylinders, total 188988597 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001b841

Device Boot      Start         End      Blocks   Id  System

/dev/xvdd1 2048 188987391 94492672 83 Linux
groupwise:~ # cat etc/fstab
cat: etc/fstab: No such file or directory
groupwise:~ # cat /etc/fstab
/dev/xvda1 swap swap defaults 0 0
/dev/xvda2 / ext3 acl,user_xattr 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/xvdd1 /mnt/vol ext3 acl,user_xattr 1 2
198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain /media/nss/SVUSD1DATA/stu-domain nfs defaults 0 0
groupwise:~ #

[/CODE]
[/COLOR]
If you shutdown GroupWise, is your server more responsive?

Did you move GWIA as you planned to do in your OP?

Poor performance can be caused by IO bottlenecks. Have you checked for
excessive swapping on each server or verified that there are no disk
errors that are causing excessive retries?

One other suggestion: If this is a production server and you need to
get this resolved quickly, have you considered opening a Service
Request to get some hands-on support?

Please provide the requested information to get a better understanding
of what’s happening.
[/QUOTE]

I’ll post more later today.

I’m going to summarize what you told me in your four previous posts and hopefully consolidate all the important facts in one place. We still need some additional info which I hope you can provide. This way, anyone who has an idea or suggestion can jump in.

[FONT=Arial Black][COLOR="#000000"]Xen host (SLES-XEN):[/COLOR][/FONT]

[CODE]SLES-XEN:/ # fdisk -l

Disk /dev/cciss/c0d0: 146.8 GB, 146778685440 bytes
255 heads, 63 sectors/track, 17844 cylinders, total 286677120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00075c47

Device Boot Start End Blocks Id System
/dev/cciss/c0d0p1 63 4209029 2104483+ 82 Linux swap / Solaris
/dev/cciss/c0d0p2 * 4209030 286663859 141227415 83 Linux

Disk /dev/cciss/c0d1: 899.9 GB, 899898718208 bytes
255 heads, 63 sectors/track, 109406 cylinders, total 1757614684 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006a452

Device Boot Start End Blocks Id System
/dev/cciss/c0d1p1 32 706876064 353438016+ 5 Extended
/dev/cciss/c0d1p2 706876065 1757607389 525365662+ 83 Linux
/dev/cciss/c0d1p5 64 195309599 97654768 83 Linux
/dev/cciss/c0d1p6 195309632 292968479 48829424 83 Linux
/dev/cciss/c0d1p7 292977468 481966064 94494298+ 83 Linux
[/CODE]

[CODE]SLES-XEN:/ # cat /etc/fstab
/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part2 / reiserfs acl,user_xattr 1 1
/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part1 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part4 /mnt/mount ext3 rw 0 0
##/dev/cciss/c0d1p4 /mnt/mount ext3 rw 0 0
##next line edited by jmcg 21May2012 so partition not mounted
##/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part3 /mnt/mount2 ext3 acl,user_xattr 1 2
/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part6 /mnt/mount3 ext3 acl,user_xattr 1 2
SLES-XEN:/ #[/CODE]

Okay, your RAID 1 array has a swap partition and one partition for /. All the Xen host system is contained in that one partition, including /var/lib/xen/images/. If you are seeing disk utilisation of 78% (or 86%), that is an issue that needs to be looked at.
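Since everything lives in that one root partition, it is worth finding out exactly what is consuming it. A sketch of the usual survey commands for Dom0 (the images path is taken from your description; adjust as needed):

```shell
# Overall usage of the root filesystem
df -h /
# Top-level consumers, staying on this one filesystem (-x)
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 10
# The DomU disk images are the likely culprits; du reports allocated blocks,
# which for sparse files can be far less than the apparent file size
du -sh /var/lib/xen/images/* 2>/dev/null || true
```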

[QUOTE]The RAID 5 shows up in the host directory (/dev/cciss/c0d1x) with each of the partitions (linux) that I have created for the DomU’s to use. Each DomU has a partition of the RAID 5 attached to it. They show up as /dev/cciss/c0d1p2 or 3 or 4, depending on the order I created them.

Within each of the domU’s, I see those partitions (RAID 5) as xvdb or xvdd or whatever. That is what I see from the terminal of the domU when I enter df –h. I don’t see them from the Dom0’s terminal because they are not attached to it, only to the DomU’s.[/QUOTE]

According to your fstab I see:
[LIST=1]
[*]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part4 /mnt/mount
[*]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part6 /mnt/mount3
[/LIST]

That clearly contradicts what you say. The device at /mnt/mount does not appear to exist but I’m more concerned with the device at /mnt/mount3. What is that device used for? Is it also assigned to a DomU?

[FONT=Arial Black][COLOR="#000000"]DomU server 2 (groupwise):[/COLOR][/FONT]

[CODE]groupwise:~ # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00007bc8

Device Boot Start End Blocks Id System
/dev/xvda1 2048 4208639 2103296 82 Linux swap / Solaris
/dev/xvda2 * 4208640 41943039 18867200 83 Linux

Disk /dev/xvdb: 4189 MB, 4189161472 bytes
255 heads, 63 sectors/track, 509 cylinders, total 8181956 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn’t contain a valid partition table

Disk /dev/xvdd: 96.8 GB, 96762161664 bytes
255 heads, 63 sectors/track, 11763 cylinders, total 188988597 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001b841

Device Boot Start End Blocks Id System
/dev/xvdd1 2048 188987391 94492672 83 Linux[/CODE]

[CODE]groupwise:~ # cat /etc/fstab
/dev/xvda1 swap swap defaults 0 0
/dev/xvda2 / ext3 acl,user_xattr 1 1
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/xvdd1 /mnt/vol ext3 acl,user_xattr 1 2
198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain /media/nss/SVUSD1DATA/stu-domain nfs defaults 0 0
groupwise:~ #[/CODE]

[QUOTE]Server 2
Examine the /dev/disk/by-path I see the xvdb which is not shown in the output from df -h. I also see the xvdd in both the /dev/disk/by-path as is expected. Yet the xvdb is a path that can be drilled down into from the terminal or the gui.

The bad part is this xvdb has the directory for groupwise gwia. My plan tonight is to stop gw and to copy it from the xvdb seen by server 2 to the xvdd on server 2. Change the path in C1 for gwia, start it back up, that should take care of that, but…[/QUOTE]

If you look at your fdisk output you’ll see that “Disk /dev/xvdb doesn’t contain a valid partition table” and according to fstab, it isn’t mounted. Of course you would need a valid filesystem to mount it.
[LIST]
[*]What filesystem do you expect it should contain?
[*]How did you create it?
[*]How are you able to drill down into it as you state above?
[/LIST]
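Before copying anything, a few read-only probes can answer that last question without touching the device. A minimal sketch (run as root on the "groupwise" DomU; it assumes only standard tools — blkid, file, and /proc/mounts):

```shell
#!/bin/sh
# Read-only probes of /dev/xvdb; none of these modify the device.
DEV=/dev/xvdb
for cmd in "blkid $DEV" "file -s $DEV" "grep $DEV /proc/mounts"; do
    echo "## $cmd"
    $cmd || true     # keep going even if a probe finds nothing
done
```

blkid prints a TYPE= tag when it recognizes a filesystem signature, file -s falls back to plain "data" when nothing is recognizable, and the grep shows whether the kernel currently has the device mounted anywhere.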

Please explain what you are doing with /media/nss/SVUSD1DATA/stu-domain. It looks as if you are mounting a remote NSS volume, yet you say:

[QUOTE]Kevin,
I won’t bore you with my details, but the bottom line is that the two servers do not see the same disk.[/QUOTE]

I know you have described other symptoms and have performance issues but before we get to them we need answers to these questions.

Hi Kevin, hi jmcg,

[QUOTE=KBOYLE;16588]I’m going to summarize what you told me in your four previous posts and hopefully consolidate all the important facts in one place. We still need some additional info which I hope you can provide. This way, anyone who has an idea or suggestion can jump in.
[…][/QUOTE]

I’ve been following this thread and was/am confused by the diverging information, too. May I ask you (jmcg) to provide the current output of “mount” from all three systems as well? The comment characters in fstab and the entry for a non-existent partition make it clear that the actual mount situation may differ from the way it is set up in the files.

Additionally, I’d like to ask for at least the disk definitions for the two DomUs, as contained in the DomUs’ config files (or persisted configurations), plus “ls -l /var/lib/xen/images” from the Dom0 (where the disk image files are said to be stored).

This is all just to clear up the potentially wrong conclusions that could be drawn from the information provided so far.

Regards,
Jens

I’m not confident the code box will be right. Hopefully you can see what you have asked for. If you need more information, please let me know.

From the host


SLES-XEN:/var/lib/xen/images # mount -l
/dev/cciss/c0d0p2 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/sr0 on /media/O type udf (ro,nosuid,nodev,uid=0) [O]
/root/Desktop/OES11-SP1-addon_with_SLES11-SP2-x86_64-DVD.iso on /nfs type iso9660 (ro) [CDROM]
nfsd on /proc/fs/nfsd type nfsd (rw)
198.nnn.nnn.250:/nfs on /tmp/tmpcOUDYL type nfs (rw,addr=198.nnn.nnn.250)
SLES-XEN:/var/lib/xen/images # ls -l /var/lib/xen/images/
total 4
-rw-r--r-- 1 root root  65 Sep 20 08:57 .disk-list
drwxr-xr-x 2 root root  72 May 21  2012 WinSrvr1
drwxr-xr-x 2 root root  80 Feb 19  2013 grpwisenew
drwxr-xr-x 2 root root 152 Sep 20 09:15 stu-grpwise
drwxr-xr-x 2 root root  80 May 16  2012 student-ehs-server
SLES-XEN:/var/lib/xen/images # cd ../

SLES-XEN:/etc/xen/vm # ls
grpwisenew      sles10-rescue   stu-grpwise.xml
grpwisenew.xml  sles10-rescue1  student-ehs-server
old             stu-grpwise     student-ehs-server.xml
SLES-XEN:/etc/xen/vm # cat grpwisenew
name="grpwisenew"
description="New grp wise server to install gw 2012 on "
uuid="3d7fb292-e28d-ee75-6a0b-854b0ff1ee6c"
memory=4000
maxmem=4000
vcpus=1
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
keymap="en-us"
builder="linux"
bootloader="/usr/bin/pygrub"
bootargs=""
extra=" "
disk=[ 'file:/var/lib/xen/images/grpwisenew/disk0.raw,xvda,w', 'file:/root/Desktop/OES11-SP1-addon_with_SLES11-SP2-x86_64-DVD.iso,xvdb:cdrom,r', ]
vif=[ 'mac=00:16:3e:74:77:91,bridge=br0', ]
vfb=['type=vnc,vncunused=1']
SLES-XEN:/etc/xen/vm # cat stu-grpwise
name="stu-grpwise"
description="Server for Studetn Groupwise domain and PO"
uuid="4ba7ac47-8cb6-ab99-c103-e50eefbf4496"
memory=4000
maxmem=4000
vcpus=1
on_poweroff="destroy"
on_reboot="restart"
on_crash="destroy"
localtime=0
keymap="en-us"
builder="linux"
bootloader="/usr/bin/pygrub"
bootargs=""
extra=" hostip=198.nnn.nnn.209 netmask=255.255.255.0 gateway=198.nnn.nnn.1"
disk=[ 'file:/var/lib/xen/images/stu-grpwise/disk0.raw,xvda,w', ]
vif=[ 'mac=00:16:3e:5e:37:09,bridge=br0', ]
vfb=['type=vnc,vncunused=1']
SLES-XEN:/etc/xen/vm # fdisk --help
fdisk: invalid option -- '-'
Usage:
 fdisk [options] <disk>    change partition table
 fdisk [options] -l <disk> list partition table(s)
 fdisk -s <partition>      give partition size(s) in blocks
 
Options:
 -b <size>             sector size (512, 1024, 2048 or 4096)
 -c[=<mode>]           compatible mode: 'dos' or 'nondos' (default)
 -h                    print this help text
 -u[=<unit>]           display units: 'cylinders' or 'sectors' (default)
 -v                    print program version
 -C <number>           specify the number of cylinders
 -H <number>           specify the number of heads
 -S <number>           specify the number of sectors per track
 
SLES-XEN:/etc/xen/vm # fdisk -l
 
Disk /dev/cciss/c0d0: 146.8 GB, 146778685440 bytes
255 heads, 63 sectors/track, 17844 cylinders, total 286677120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00075c47
 
           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1              63     4209029     2104483+  82  Linux swap / Solaris
/dev/cciss/c0d0p2   *     4209030   286663859   141227415   83  Linux
 
Disk /dev/cciss/c0d1: 899.9 GB, 899898718208 bytes
255 heads, 63 sectors/track, 109406 cylinders, total 1757614684 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006a452
 
           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d1p1              32   706876064   353438016+   5  Extended
/dev/cciss/c0d1p2       706876065  1757607389   525365662+  83  Linux
/dev/cciss/c0d1p5              64   195309599    97654768   83  Linux
/dev/cciss/c0d1p6       195309632   292968479    48829424   83  Linux
/dev/cciss/c0d1p7       292977468   481966064    94494298+  83  Linux
SLES-XEN:/etc/xen/vm #

From DomU=grpwisenew

groupwise:~/Desktop # mount -l
/dev/xvda2 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/xvdd1 on /mnt/vol type ext3 (rw,acl,user_xattr)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
securityfs on /sys/kernel/security type securityfs (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain on /media/nss/SVUSD1DATA/stu-domain type nfs (rw,addr=198.nnn.nnn.209)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
nfsd on /proc/fs/nfsd type nfsd (rw)
admin on /_admin type nssadmin (rw)
novfs on /var/opt/novell/nclmnt type novfs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
groupwise:~/Desktop # top

groupwise:~/Desktop # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00007bc8

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048     4208639     2103296   82  Linux swap / Solaris
/dev/xvda2   *     4208640    41943039    18867200   83  Linux

Disk /dev/xvdb: 4189 MB, 4189161472 bytes
255 heads, 63 sectors/track, 509 cylinders, total 8181956 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn't contain a valid partition table

Disk /dev/xvdd: 96.8 GB, 96762161664 bytes
255 heads, 63 sectors/track, 11763 cylinders, total 188988597 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001b841

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdd1            2048   188987391    94492672   83  Linux
groupwise:~/Desktop #

From Domu=stu-grpwise

login as: root
Using keyboard-interactive authentication.
Password:
Last login: Tue Sep 24 07:53:45 2013 from ehs11.sisnet.ssku.k12.ca.us
stu-grpwise:~ # mount -l
/dev/xvda2 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
securityfs on /sys/kernel/security type securityfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
admin on /_admin type nssadmin (rw)
/dev/pool/SVUSD1DATA on /opt/novell/nss/mnt/.pools/SVUSD1DATA type nsspool (rw,name=SVUSD1DATA)
SVUSD1DATA on /media/nss/SVUSD1DATA type nssvol (rw,name=SVUSD1DATA)
novfs on /var/opt/novell/nclmnt type novfs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
mail.svusd.us:/mnt/vol/gwdomain on /mnt/vol/gwdomain type nfs (rw,addr=198.nnn.nnn.240)
stu-grpwise:~ # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000dfc59

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048     4208639     2103296   82  Linux swap / Solaris
/dev/xvda2   *     4208640    41943039    18867200   83  Linux

Disk /dev/xvdb: 100.0 GB, 99998482432 bytes
255 heads, 32 sectors/track, 23934 cylinders, total 195309536 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1              32   195301439    97650704   65  Novell Netware 386

Disk /dev/mapper/SVUSD1DATA: 100.0 GB, 99994304512 bytes
255 heads, 63 sectors/track, 12156 cylinders, total 195301376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x42530000

Disk /dev/mapper/SVUSD1DATA doesn't contain a valid partition table
stu-grpwise:~ #

There is a lot of data here. To make it easier to discuss, I will create a separate post for each server. If Jens or anyone else has any ideas, please jump in.

[SIZE=3]Xen host (SLES-XEN):[/SIZE]

Items I have highlighted in [COLOR="#00FF00"]green[/COLOR] look fine. Items I have highlighted in [COLOR="#FF0000"]red[/COLOR] require some additional explanation.

[CODE]SLES-XEN:/ # cat /etc/fstab
[COLOR="#00FF00"]/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part2 / reiserfs acl,user_xattr 1 1
/dev/disk/by-id/cciss-3600508b1001032373120202020200004-part1 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0[/COLOR]
usbfs /proc/bus/usb usbfs noauto 0 0
[COLOR="#00FF00"]devpts /dev/pts devpts mode=0620,gid=5 0 0[/COLOR]
[COLOR="#FF0000"]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part4 /mnt/mount ext3 rw 0 0[/COLOR]
##/dev/cciss/c0d1p4 /mnt/mount ext3 rw 0 0
##next line edited by jmcg 21May2012 so partition not mounted
##/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part3 /mnt/mount2 ext3 acl,user_xattr 1 2
[COLOR="#FF0000"]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part6 /mnt/mount3 ext3 acl,user_xattr 1 2[/COLOR]
SLES-XEN:/ #[/CODE]

These two items appear in /etc/fstab but are not mounted. What do they represent? If they are not needed, they should be removed from fstab.
[LIST=1]
[*]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part4 /mnt/mount
[*]/dev/disk/by-id/cciss-3600508b1001032373120202020200005-part6 /mnt/mount3
[/LIST]

[CODE]SLES-XEN:/var/lib/xen/images # mount -l
[COLOR="#00FF00"]/dev/cciss/c0d0p2 on / type reiserfs (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)[/COLOR]
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/sr0 on /media/O type udf (ro,nosuid,nodev,uid=0) [O]
[COLOR="#FF0000"]/root/Desktop/OES11-SP1-addon_with_SLES11-SP2-x86_64-DVD.iso on /nfs type iso9660 (ro) [CDROM][/COLOR]
nfsd on /proc/fs/nfsd type nfsd (rw)
[COLOR="#FF0000"]198.nnn.nnn.250:/nfs on /tmp/tmpcOUDYL type nfs (rw,addr=198.nnn.nnn.250)[/COLOR][/CODE]

These devices do not appear in /etc/fstab:
[LIST]
[*]OES11-SP1-addon_with_SLES11-SP2-x86_64-DVD.iso on your root Desktop which you have mounted on /nfs. Please explain why you have done this and what you hope to accomplish.
[*]198.nnn.nnn.250:/nfs is mounted on /tmp/tmpcOUDYL. Please explain why you have done this and what you hope to accomplish.
[/LIST]

[SIZE=3]DomU Analysis[/SIZE]

[CODE]SLES-XEN:/var/lib/xen/images # ls -l /var/lib/xen/images/
total 4
-rw-r--r-- 1 root root  65 Sep 20 08:57 .disk-list
drwxr-xr-x 2 root root  72 May 21  2012 WinSrvr1
drwxr-xr-x 2 root root  80 Feb 19  2013 grpwisenew
drwxr-xr-x 2 root root 152 Sep 20 09:15 stu-grpwise
drwxr-xr-x 2 root root  80 May 16  2012 student-ehs-server[/CODE]

According to this directory listing, you may have image files for four DomUs. If WinSrvr1 no longer exists, if that directory still contains a disk image file, and if you no longer need it, you can free up some space by deleting it.

Note:
When you delete a DomU (xm delete &lt;domU_name&gt;) it removes the DomU’s definition from the XenStore but it does not delete the disk image file.
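To spot leftover image files, you can compare the image directories against the config file names. This is only a rough sketch using the default paths shown in this thread — the config directory can itself contain stale entries, so treat any hit as a candidate to investigate, not something to delete blindly:

```shell
#!/bin/sh
# Flag image directories that have no matching config file in /etc/xen/vm.
IMAGES=/var/lib/xen/images     # default image location on this Dom0
CONFIGS=/etc/xen/vm            # default config location on this Dom0
for dir in "$IMAGES"/*/; do
    name=$(basename "$dir")
    [ -e "$CONFIGS/$name" ] || echo "possibly orphaned: $dir"
done
```

Here it would flag WinSrvr1, since no config of that name appears in your /etc/xen/vm listing.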

[CODE]SLES-XEN:/etc/xen/vm # ls
grpwisenew      sles10-rescue   stu-grpwise.xml
grpwisenew.xml  sles10-rescue1  student-ehs-server
old             stu-grpwise     student-ehs-server.xml
SLES-XEN:/etc/xen/vm # cat grpwisenew[/CODE]

This shows definitions for these DomUs:
[LIST=1]
[*]grpwisenew
[*]stu-grpwise
[*]student-ehs-server
[*]sles10-rescue
[*]sles10-rescue1
[*]old
[/LIST]

Note:
These files are created when you create a new DomU. The definitions are then imported into the XenStore. After that, these files are not maintained or updated. For example, any changes you make to a DomU using the Virtual Machine Manager are not reflected in these files. To view the current definition for a DomU, use "xm list -l &lt;domU_name&gt;"

If you want a better understanding of how this works, see my article:
Managing your Xen DomU: Difference between “xm new” and “xm create”
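For reference, here is a quick way to dump the live disk entries for one DomU. This is only a sketch — "stu-grpwise" is one of the DomU names in this thread, and the SXP field names being filtered for (vbd, uname, dev) may differ slightly between Xen versions:

```shell
#!/bin/sh
# Show the live (XenStore) disk entries for one DomU, rather than
# the stale file under /etc/xen/vm. Run this on the Dom0.
if command -v xm >/dev/null 2>&1; then
    xm list -l stu-grpwise | grep -E 'vbd|uname|\(dev '
else
    echo "xm not found: run this on the Dom0"
fi
```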

[SIZE=3]DomU server 2 (groupwise):[/SIZE]

I’m going to add your “mount -l” to the information from my earlier post. Please provide the information I requested.

[CODE]groupwise:~ # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00007bc8

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048     4208639     2103296   82  Linux swap / Solaris
/dev/xvda2   *     4208640    41943039    18867200   83  Linux

Disk /dev/xvdb: 4189 MB, 4189161472 bytes
255 heads, 63 sectors/track, 509 cylinders, total 8181956 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[COLOR="#FF0000"]Disk /dev/xvdb doesn’t contain a valid partition table[/COLOR]

Disk /dev/xvdd: 96.8 GB, 96762161664 bytes
255 heads, 63 sectors/track, 11763 cylinders, total 188988597 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001b841

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdd1            2048   188987391    94492672   83  Linux
[/CODE]

[CODE]groupwise:~ # cat /etc/fstab
/dev/xvda1           swap                 swap     defaults         0 0
/dev/xvda2           /                    ext3     acl,user_xattr   1 1
proc                 /proc                proc     defaults         0 0
sysfs                /sys                 sysfs    noauto           0 0
debugfs              /sys/kernel/debug    debugfs  noauto           0 0
devpts               /dev/pts             devpts   mode=0620,gid=5  0 0
/dev/xvdd1           /mnt/vol             ext3     acl,user_xattr   1 2
[COLOR="#FF0000"]198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain /media/nss/SVUSD1DATA/stu-domain nfs defaults 0 0[/COLOR]
groupwise:~ #[/CODE]

[CODE]groupwise:~/Desktop # mount -l
/dev/xvda2 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/xvdd1 on /mnt/vol type ext3 (rw,acl,user_xattr)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
securityfs on /sys/kernel/security type securityfs (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
[COLOR="#FF0000"]198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain on /media/nss/SVUSD1DATA/stu-domain type nfs (rw,addr=198.nnn.nnn.209)[/COLOR]
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
nfsd on /proc/fs/nfsd type nfsd (rw)
admin on /_admin type nssadmin (rw)
novfs on /var/opt/novell/nclmnt type novfs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
groupwise:~/Desktop #[/CODE]

[QUOTE=jmcg;16416]Server 2
Examine the /dev/disk/by-path I see the xvdb which is not shown in the output from df -h. I also see the xvdd in both the /dev/disk/by-path as is expected. Yet the xvdb is a path that can be drilled down into from the terminal or the gui.

The bad part is this xvdb has the directory for groupwise gwia. My plan tonight is to stop gw and to copy it from the xvdb seen by server 2 to the xvdd on server 2. Change the path in C1 for gwia, start it back up, that should take care of that, but… [/QUOTE]

If you look at your fdisk output you’ll see that “Disk /dev/xvdb doesn’t contain a valid partition table” and according to fstab, it isn’t mounted. Of course you would need a valid filesystem to mount it.

[LIST]
[*]What filesystem do you expect it should contain?
[*]How did you create it?
[*]How are you able to drill down into it as you state above?
[/LIST]

[QUOTE=jmcg;16430]Kevin,
I won’t bore you with my details, but the bottom line is that the two servers do not see the same disk. [/QUOTE]
Please explain what you are doing with /media/nss/SVUSD1DATA/stu-domain. The third code box clearly shows that it is mounted on /media/nss/SVUSD1DATA/stu-domain.

We need to know what you are trying to do before we can provide any suggestions on how to improve things. If you have already resolved your issues, please let us know.

[SIZE=3]DomU Server 1 (stu-grpwise):[/SIZE]

[CODE]stu-grpwise:~ # fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000dfc59

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048     4208639     2103296   82  Linux swap / Solaris
/dev/xvda2   *     4208640    41943039    18867200   83  Linux

Disk /dev/xvdb: 100.0 GB, 99998482432 bytes
255 heads, 32 sectors/track, 23934 cylinders, total 195309536 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1              32   195301439    97650704   65  Novell Netware 386

Disk /dev/mapper/SVUSD1DATA: 100.0 GB, 99994304512 bytes
255 heads, 63 sectors/track, 12156 cylinders, total 195301376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x42530000

[COLOR="#FF0000"]Disk /dev/mapper/SVUSD1DATA doesn’t contain a valid partition table[/COLOR]
stu-grpwise:~ #[/CODE]

[CODE]stu-grpwise:~ # mount -l
/dev/xvda2 on / type ext3 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
securityfs on /sys/kernel/security type securityfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
admin on /_admin type nssadmin (rw)
[COLOR="#0000FF"]/dev/pool/SVUSD1DATA on /opt/novell/nss/mnt/.pools/SVUSD1DATA type nsspool (rw,name=SVUSD1DATA)
SVUSD1DATA on /media/nss/SVUSD1DATA type nssvol (rw,name=SVUSD1DATA)[/COLOR]
novfs on /var/opt/novell/nclmnt type novfs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
[COLOR="#FF0000"]mail.svusd.us:/mnt/vol/gwdomain on /mnt/vol/gwdomain type nfs (rw,addr=198.nnn.nnn.240)[/COLOR]
stu-grpwise:~ #[/CODE]

[QUOTE]Server 1
Examine the /dev/disk/by-path I see the xvdb which is the nss volume. That is fine. When I examine the output from df -h command it shows up as expected.[/QUOTE]

That confirms what I see too:
[LIST]
[*]/dev/xvdb is 100 GB.
[*]/dev/xvdb1 shows partition type Novell Netware 386.
[*]/dev/pool/SVUSD1DATA is your NSS pool. Although the info you provided doesn’t show it, I assume, as you stated, that it is using /dev/xvdb1.
[*]SVUSD1DATA is your only NSS volume, mounted on /media/nss/SVUSD1DATA.
[/LIST]
All of this looks fine, although I normally use different names for my pools and volumes just to avoid confusion. [COLOR="#FF0000"]Disk /dev/mapper/SVUSD1DATA doesn’t contain a valid partition table[/COLOR] is expected here because the device is managed by NSS.

The last item requires some additional explanation: mail.svusd.us:/mnt/vol/gwdomain. I assume from its name it contains your GroupWise domain.
[LIST]
[*]What is the source server?
[*]What is running on that server?
[*]If it is one of the servers we are discussing here, which device actually contains the data?
[/LIST]

I have just posted my comments about your DomU Server 1 (stu-grpwise) and have gained some additional insight.

It would seem that the mounted device “198.nnn.nnn.209:/media/nss/SVUSD1DATA/stu-domain” as shown in your fstab and your list of mounted devices is actually a directory on your NSS volume “/media/nss/SVUSD1DATA” which is mounted and, presumably, being used on your “stu-grpwise” server. If that is true, then it is an issue that requires your attention.

[COLOR="#FF0000"]Disk /dev/xvdb doesn’t contain a valid partition table.[/COLOR] We see the same message on your stu-grpwise DomU but concluded it was okay because it was used by NSS. On this server I see no NSS pools or NSS volumes in your fstab or among your mounted devices which raises some additional questions:
[LIST]
[*]Where did /dev/xvdb come from?
[*]Was it added to the DomU as an additional device after the DomU was created?
[*]Does it, in fact, contain an NSS filesystem? If so, how/where was it created?
[*]Since you haven’t provided current configurations for your DomUs, is it possible that the /dev/xvdb source device is the same source device used by /dev/xvdb in your stu-grpwise DomU?
[/LIST]
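One way to test that last point from the Dom0 is to filter the file: backing entries out of the DomU definitions and look for duplicates. The sketch below runs the filter over the two disk= lines copied from the (possibly stale) config files shown earlier; for a real answer, feed it the current "xm list -l" output for each DomU instead:

```shell
#!/bin/sh
# Extract every file:-backed device and print any path used more than once.
# Input here is the two stale disk= lines quoted earlier in this thread.
grep -o "file:[^,']*" <<'EOF' | sort | uniq -d
disk=[ 'file:/var/lib/xen/images/grpwisenew/disk0.raw,xvda,w', 'file:/root/Desktop/OES11-SP1-addon_with_SLES11-SP2-x86_64-DVD.iso,xvdb:cdrom,r', ]
disk=[ 'file:/var/lib/xen/images/stu-grpwise/disk0.raw,xvda,w', ]
EOF
```

An empty result means no backing file is shared — which, for these stale files, only tells us the extra xvdb devices were attached after the configs were written; the live definitions are the ones that need comparing.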