XEN SLOWWWW To Boot

SLES 12 SP1 host with two Xen VMs which are Server 2012 R2. Ever since the update to SP1, these VMs take almost 10 minutes to boot. Is there something that can be done to bring these VMs back to a more acceptable boot time?


So I just cannot get the memory lowered on Dom0! I followed:

https://www.suse.com/documentation/sles-12/book_virt/data/sec_xen_vhost_memory.html

and

http://wiki.xen.org/wiki/Xen_Best_Practices#Xen_dom0_dedicated_memory_and_preventing_dom0_memory_ballooning

for GRUB2. Then I rebooted the server and Dom0 still has all the memory. xl info shows:

and xl list:

OK, so I am trying to better understand Dom0 according to https://www.suse.com/documentation/sles-12/book_virt/data/sec_xen_basics_components.html

Dom0 appears to be the OS (in this case SLES 12 SP1) running on the physical server?

So I changed /etc/default/grub to GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:1024M" and upon reboot the server OS was super slow to boot and to respond to whatever I typed or clicked. I then changed /etc/default/grub to GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=7096M" and the server OS is much better in boot performance and response time. However, the Xen guests (which are Server 2012 R2) are still incredibly slow to boot and respond. When I look at the connection details of Dom0 in virt-manager, I see the current allocation is now 6996 (which is NOT the 7096 I entered into the grub file) but the max allocation is still 10240000, which is why, I believe, my guest VMs are so incredibly slow!
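To summarize what those two links have me doing (the memory figure below is only a placeholder, and depending on the Xen version the xl.conf setting may be autoballoon=0 instead of "off"):

[CODE]
# /etc/default/grub -- pin Dom0 to a fixed amount of memory (min = max, so it cannot balloon)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M"   # example value only - size Dom0 to what it really needs

# /etc/xen/xl.conf -- keep the xl toolstack from auto-ballooning Dom0 when guests start
autoballoon="off"   # older xl versions use autoballoon=0

# rebuild the GRUB2 configuration, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
[/CODE]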

So, I finally got the Dom0 memory worked out, and that did not fix the VMs' slow boot/response issue. It took 5 minutes to boot and 4 minutes to log in once it did boot. It took a little over 1 minute to open the Services MMC! Both VMs are 2012 R2, have 4 GB RAM, and did NOT used to be this slow before the SP1 update.

So today, the system stopped responding to any requests. I had no video, couldn't VNC to the system, nothing! I rebooted the server and now I only get a black screen, no login screen. Same thing when trying a snapshot. Any ideas how to get the server back up and running without losing any data?

After doing everything I know of, including btrfs check --repair (which did find errors; after a reboot it hung on loading kernel modules), I finally got the system back up. This time I selected the advanced SLES 12 with Xen option and chose a different kernel. The system now boots and the Xen VMs appear to be working, checking now…

Any ideas out there?

Hi carnold6,

if the issue is still slow-booting VMs, I'd suggest starting with a bottleneck analysis.

From previous messages, I see that Dom0 is taking its fair share of memory. You attempted to boot with 512M, which is awfully small, so no wonder booting the server took so long. Have you ever run "vmstat 1" to see the actual memory consumption of Dom0, so that you'd be able to set an appropriate value (which in effect means enough memory to run without major initial swapping and especially without constant swapping, then add some (hundreds of) MB for file system cache)?

Once you've restricted Dom0 to some reasonable value (to avoid the overhead of ballooning), you ought to run "vmstat 1" (on Dom0) during start-up of your VM. What is taking resources… CPU? (I doubt it.) Memory? (Maybe Dom0 is now starting to swap.) I/O?
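For example, something along these lines - the duration and the way you start the guest are just placeholders:

[CODE]
# on Dom0: sample once per second for five minutes and keep a copy for later
vmstat 1 300 | tee /tmp/dom0-vmstat-during-vm-boot.log

# in a second terminal, start the guest so the samples cover its whole boot phase
# (start it from virt-manager as usual, or e.g. "xl create /etc/xen/vm/<guest>.cfg")
[/CODE]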

Once you've identified the major slowing point, we can try to work out how to improve on that.

You never mentioned the resource setup of the VMs - are the disks local, on NFS, on iSCSI, on SAN? Is the server's memory large enough to hold both Dom0 and the DomU(s)? Such details will help in giving advice on reducing the identified bottlenecks.

With regards,
J

Your knowledge of the tools for finding the bottleneck far exceeds mine, and I am hoping for help here too: how do I read the output of "vmstat 1"? I looked here to interpret the output:
https://www.thomas-krenn.com/en/wiki/Linux_Performance_Measurements_using_vmstat

Seems to be a whole lotta swapping going on?

[CODE]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 1 0 69568 36024 12 5275148 0 0 605 247 19 15 2 1 89 8 0
 0 0 69568 33248 12 5277308 0 0 2048 3756 9727 6364 3 2 94 1 1
 0 0 69568 33776 12 5276992 0 0 4096 0 12421 8200 2 1 96 0 0
 0 0 69568 33248 12 5277128 0 0 0 0 12375 7601 1 2 96 0 0
 0 0 69568 35264 12 5275176 0 0 2048 496 8858 5668 2 1 96 0 0
 0 0 69568 33520 12 5277108 0 0 2048 0 7815 4995 2 1 97 0 0
 1 0 69568 33680 12 5277120 0 0 0 564 12163 7613 2 1 96 0 0
 0 0 69568 35364 12 5275232 0 0 2048 0 8938 5779 2 1 96 0 0
 0 0 69568 35568 12 5275120 0 0 32 0 11165 7067 2 1 96 0 0
 1 0 69568 35304 12 5275312 0 0 4064 0 11428 7212 1 1 96 1 1
 0 0 69568 31176 12 5279108 0 0 4096 0 10352 6522 2 2 96 0 0
 2 0 69568 35048 12 5275196 0 0 0 860 11276 7460 3 1 96 0 0
 0 0 69568 33912 12 5277228 0 0 2048 0 11314 7665 1 1 98 0 0
 0 0 69568 33984 12 5277196 0 0 140 8136 10859 7342 2 2 92 5 0
 0 0 69568 33832 12 5277192 0 0 0 0 15453 11446 4 2 93 0 1
 0 0 69568 31888 12 5279252 0 0 1908 0 12559 7889 1 1 97 1 0
 0 0 69568 36364 12 5273840 0 0 1880 544 11170 6972 1 1 94 4 0
 0 0 69568 32228 12 5277988 0 0 4256 0 12076 7457 1 1 98 0 0
 0 0 69568 30248 12 5280056 0 0 2048 0 10443 6423 0 1 98 0 0
 0 0 69568 30248 12 5280060 0 0 32 0 9347 5848 1 1 98 0 0
 0 1 69568 35212 12 5273668 0 0 4704 0 10905 7029 1 1 96 2 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 2 0 69560 35768 12 5273820 0 0 80 488 12882 8007 0 1 97 1 0
 0 0 69560 31752 12 5277860 0 0 4096 0 11485 6982 1 1 97 1 0
 1 0 69560 31800 12 5277900 0 0 0 0 14506 8909 1 1 97 0 0
 0 0 69560 38400 12 5271000 0 0 2188 8 10443 6672 2 1 96 0 0
 1 1 69560 34500 12 5272536 0 0 1788 0 9969 6969 6 2 90 2 0
 0 1 69560 33372 12 5266800 0 0 21168 160 12942 8493 7 2 77 13 0
 1 0 69560 38816 12 5261724 0 0 5632 0 8764 6308 9 1 83 6 0
 0 0 69560 34832 12 5265868 0 0 4064 0 13449 9003 4 2 94 0 0
 1 0 69560 34800 12 5265868 0 0 32 296 18274 12935 4 2 93 0 1
 0 0 69560 32384 12 5268032 0 0 2048 336 13609 9179 2 2 96 0 1
 0 0 69560 32480 12 5268084 0 0 0 15152 16132 10623 1 2 96 0 1
 1 0 69560 37416 12 5262936 0 0 4064 68 9962 6314 1 1 92 6 0
 0 0 69560 37224 12 5263056 0 0 84 68 10148 6466 1 1 97 1 0
 1 0 69560 33000 12 5267068 0 0 4012 0 12332 7714 1 1 97 0 1
 1 0 69560 36768 12 5263184 0 0 220 0 9300 6322 6 1 91 1 0
 2 0 69560 32784 12 5267288 0 0 4096 280 11401 7551 3 2 95 0 0
 0 0 69560 36616 12 5262044 0 0 84 0 11856 7624 5 2 92 0 0
 0 0 69560 32696 12 5266200 0 0 4012 0 10369 6651 1 1 98 0 0
 0 0 69560 32736 12 5266124 0 0 0 0 10167 6567 1 1 97 0 1
 1 0 69560 30944 12 5268164 0 0 2048 0 12373 7664 1 1 98 0 1
 0 0 69560 36024 12 5262964 0 0 4096 140 10867 6760 1 2 97 1 0
[/CODE]

As for the resource setup, forgive me for not posting that bit of helpful info. It seems I was in a panic when the system would not boot. Here it is:
16 GB RAM total, with 4 GB set aside for each of the 2 VMs; the rest, 8 GB, should be left over.
2 SAS drives (local) with a PERC 6 controller, 1x300GB and 1x750GB in a non-RAID setup (I didn't configure the server). /opt is mounted on the 750 GB drive.
2 quad-core Xeons. Dom0 has 8 vCPUs and both of the VMs have 6 vCPUs apiece.

While, for now, I am OK with the system booting on a different kernel, I would like to get back to the latest kernel (like it was before).

Hi,

[QUOTE=carnold6;33354]
Seems to be a whole lotta swapping going on?

[vmstat output from the previous post snipped][/QUOTE]

no, actually there’s (during normal operations) no swapping going on, and that’s good.

Here’s how I read that output:

procs: let’s ignore that for now

memory: while there's some "swapped" memory reported, this only tells you that at some point in time the kernel decided to swap out some (at that time) unused stuff, in favor of making better use of that physical memory. There's still some "free" memory (only saying there's no operational pressure on physical memory management), a bit is used for buffers (that number appears awfully small to me, I have no explanation for that) and a lot is used for caching. As any free memory is used for I/O caching, that big "cache" number seems to imply that you may have more memory committed to that Dom0 than actually required - but it's a trade-off versus block I/O (coming to that further below).

swap: those zeros tell you that there's no current swap activity - nothing swapped in (stuff that got swapped out because of memory constraints but is now needed again for current operations), nor swapped out (which would happen if memory were tight - and it obviously isn't).

io: actual I/O to local block devices. You're doing some reads, and there are some writes, too… nothing to worry about. Depending on access patterns, file system settings and available cache memory (see above), these numbers might get high because every read/write needs to go to the actual block device, rather than being served by the cache. In your case I doubt that, and because of the low percentage of "waits" (see below, CPU "wa" number) you're not having an issue there anyhow.

system: "in" are interrupts, e.g. caused by devices signaling available data. According to my personal experience, these numbers are a bit high - but depending on your setup, this may be normal rather than an indicator of real problems. I'd look at the numbers in /proc/interrupts to get a feeling for where these are coming from (a quick way to compare those counters is sketched after the "cpu" item below). "cs" are so-called "context switches", telling you the scheduler let a different process get its share of CPU.

cpu: "us" is "user space", hence programs you're running. "sy" is "system space", stuff that's being handled by the kernel. Then you have "idle time", which often (and in your case) is high, indicating that those CPUs are not doing much - more than enough horse-power for the actual workload. "wa" is the percentage of CPU time spent waiting for I/O to complete - for DomUs on local disks, that's definitely a number to watch. But in your case, watching gets mostly boring; only once does that number show a significant value.
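As for chasing the interrupts, a crude but effective approach is to compare two snapshots of /proc/interrupts - whichever counters grew the most point at the devices generating them:

[CODE]
# take two snapshots ten seconds apart and compare them
cat /proc/interrupts > /tmp/irq.1
sleep 10
cat /proc/interrupts > /tmp/irq.2
diff /tmp/irq.1 /tmp/irq.2
[/CODE]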

So looking at these numbers gives the impression of a Dom0 with too much available memory (which might help the VMs instead - I'd immediately turn it down by 2 or 3 GB). The CPUs are mostly bored, there's a bit of reading and writing to disk going on, but nothing to actually worry about. The overall I/O wait is at 8 percent (shown by the first line of output), which looks a bit high, but may be explainable. That high number of interrupts may be a pointer to something, too, but I'm not sure about that.

You gave the VMs 4 GB each - I suspect that that's not enough to reach optimum performance, as the VMs themselves (the Windows OS) might be forced to swap out memory during boot. This then might be the cause of that 8 percent overall I/O wait reported by vmstat - you might want to take a look at vmstat's output during start of the VMs and check the I/O wait percentage at that time… if it's going high, I'd try to throw more memory at these VMs (of course, doing a similar analysis inside the VMs would be a better starting point, but might prove to be difficult during the startup phase). Swapping of a DomU slows it down significantly, even worse than if the Dom0 needs to swap.

Hope this helps a bit to clear up the picture :-)

Regards,
J

I think I might have bigger problems. When the server would not boot (post above), I ran btrfs check --repair and remember it returned bad block errors. Now that I have gotten the system booted with a different kernel, I am seeing major slow-response issues when I just click to open a folder or YaST; I end up having to force quit. It may be that I need to start getting the VMs, web sites and all web app data off the server in preparation to rebuild the OS. Is it possible to copy the qcow2 files to an external network location and, when/if the OS rebuild happens, copy them back and have Xen use them without recreating the VMs?

So the btrfs scrub finished with 1 unrecoverable error. Still seeing a performance problem with the OS. Can I reuse the existing qcow2 VMs?

Re,

sorry for the late reply… having a business to run, too :-)

You can “save” your VMs by exporting the configuration and then checking for all referenced virtual disks’ files - saving the VM config and those virtual disk files to an external location and restoring them after re-creating the server installation should keep your VMs safe.
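Since you manage the guests with virt-manager, I assume libvirt is in play - in that case it could look roughly like this (guest names and target paths are made up, adjust them, and do it with the guests shut down):

[CODE]
# dump each guest's libvirt definition; it lists the virtual disk paths under <disk>/<source file=...>
virsh dumpxml dc-guest > /backup/dc-guest.xml
virsh dumpxml edge-guest > /backup/edge-guest.xml

# copy the referenced disk images (default location shown - check the XML for the real paths)
cp /var/lib/libvirt/images/dc-guest.qcow2 /backup/
cp /var/lib/libvirt/images/edge-guest.qcow2 /backup/

# after rebuilding the host: copy the files back and re-register the guests
virsh define /backup/dc-guest.xml
virsh define /backup/edge-guest.xml
[/CODE]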

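Regarding that unrecoverable scrub error: before trusting the image files, it may be worth checking which file the error actually hit. A rough sketch - scrub usually reports the affected path to the kernel log when it can resolve it:

[CODE]
# per-device error counters recorded so far
btrfs device stats /

# scrub/checksum errors end up in the kernel log, often including the file path
dmesg | grep -i btrfs | grep -iE 'checksum|corrupt|error'
[/CODE]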
Regards,
J

[QUOTE=jmozdzen;33356]Hi,
So looking at these numbers gives an impression of a Dom0 with too much available memory (which might help the VMs instead, I’d immediately turn it down by 2 or 3 GB). The CPUs are mostly bored, there’s a bit of reading and writing to disk going on, but nothing to actually worry about. The overall i/o wait is at 8 percent (shown by the first line of output), that looks a bit high, but may be explainable. Those high number of interrupts may be a pointer to something, too, but I’m not sure about that.[/quote]

So Dom0 had 7092 min and max memory. On your advice, I took it down by 3 GB. Now Dom0 has 4096 min and max memory (I made the changes in /etc/default/grub and then ran grub2-mkconfig -o /boot/grub2/grub.cfg to write the changes to grub.cfg; I have not rebooted as of yet).
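Once I do reboot, I plan to verify it roughly like this (going by the xl man page, so correct me if there is a better way):

[CODE]
# hypervisor totals: how much memory is installed and how much is still free for guests
xl info | grep -E 'total_memory|free_memory'

# per-domain view: Domain-0's actual allocation shows up here
xl list
[/CODE]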

Before the update to SP1, 4 GB for each of the VMs was fine: no slow VM boots, and everything inside the VMs was very responsive. One VM is a secondary domain controller; that's all this VM does, nothing else. The other VM is an Exchange Edge Transport server; that's all it does, nothing else.

Got some new numbers from vmstat:

[CODE]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 0 0 428 36304 456 5095020 0 0 352 337 1075 1078 8 3 73 15 0
 0 0 428 36520 456 5095084 0 0 0 284 1270 1807 2 0 98 0 0
 0 0 428 36440 456 5095092 0 0 0 280 1453 2006 3 1 97 0 0
 1 0 428 36440 456 5095092 0 0 0 0 1126 1650 2 0 98 0 0
 0 0 428 36776 456 5095076 0 0 0 0 1091 1665 2 0 98 0 0
 0 0 428 36424 456 5095076 0 0 0 140 1235 1821 2 0 97 0 0
 0 0 428 36712 456 5095076 0 0 0 0 4174 3885 2 1 97 0 0
 0 0 428 36712 456 5095076 0 0 0 0 4796 4949 3 1 95 0 0
 0 0 428 36896 456 5095068 0 0 0 0 1958 2304 2 0 97 0 0
 0 0 428 36800 456 5095068 0 0 0 516 1711 2080 2 0 98 0 0
 1 0 428 36776 456 5095120 0 0 0 152 1401 1842 2 0 98 0 0
 0 0 428 36784 456 5095176 0 0 0 280 2383 2590 3 1 96 0 0
 0 0 428 37072 456 5095260 0 0 0 5476 1952 2212 3 1 96 1 0
 0 0 428 37160 456 5095260 0 0 0 0 1369 1883 3 1 96 0 0
 0 0 428 36784 456 5095260 0 0 0 0 1235 1766 2 0 97 0 0
 0 0 428 36688 456 5095260 0 0 16 416 1734 2249 2 0 97 1 0
 0 0 428 37040 456 5095336 0 0 0 0 1309 1828 3 0 96 0
[/CODE]

One thing I really would like to know (I think I know the answer): is Dom0 the actual OS, in this case SLES 12 SP1? So if Dom0 has 4 GB of memory, SLES 12 and all the apps running on it have access to only 4 GB. Am I thinking about this right?

Hi carnold6,

hm, you're right, then this points more toward changes introduced by upgrading the server's OS. But I'm still wondering:

[QUOTE=carnold6;33421]Got some new numbers from vmstat:

[CODE]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 0 0 428 36304 456 5095020 0 0 352 337 1075 1078 8 3 73 15 0
 0 0 428 36520 456 5095084 0 0 0 284 1270 1807 2 0 98 0 0
 0 0 428 36440 456 5095092 0 0 0 280 1453 2006 3 1 97 0 0
[/CODE]
[/QUOTE]

There still must be times when your server is massively waiting for I/O - 15 % overall is quite a lot. Do you have a chance to run some longer-term monitoring producing graphical output, e.g. OpenNMS or the like? It might be helpful to see the graphs of the CPU values over a longer time range.
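If a full monitoring system is too much effort right now, even the sysstat package would give you some history to look at (a sketch - I'm assuming the stock SLES 12 sysstat packaging here):

[CODE]
# install and enable periodic collection (samples every 10 minutes by default)
# (depending on the package version, collection is driven by a systemd service/timer or a cron job)
zypper install sysstat
systemctl enable sysstat
systemctl start sysstat

# later: CPU utilization including %iowait, and block-device I/O, for the current day
sar -u
sar -b
[/CODE]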

In the sense you’re asking, you’re right. Technically speaking, I’d have to watch my back if I say “Dom0 is the actual OS” - think of it more like the “management VM”, used to operate Xen. But either way: Yes, all apps inside Dom0 have access to Dom0’s memory only. The major difference to DomUs, memory-wise, is that available memory on Dom0 can change automatically when you let Xen do “ballooning”.

Regards,
Jens

jmoz,
This just gets more interesting by the minute! Today, I had the chance to add more memory to a VM. This VM had 4 GB, so I upped it to 8 GB and the VM would not start; I got a xenlight could not create domain "" error. So I ran vmstat whilst none of the VMs were running:

[CODE]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 4 0 0 1330048 884 1095440 0 0 250 87 155 160 8 1 84 7 0
 0 0 0 1329416 884 1095480 0 0 0 0 627 1065 7 1 92 0 0
 0 0 0 1329308 884 1095476 0 0 0 0 726 1145 9 1 90 0 0
 8 0 0 1330036 884 1095476 0 0 0 0 540 978 7 1 92 0 0
 0 0 0 1330028 884 1095476 0 0 0 0 3136 2941 15 2 83 0 0
 0 0 0 1329852 884 1095500 0 0 0 0 759 1217 9 1 90 0 0
 9 0 0 1329780 884 1095492 0 0 0 0 1504 2456 22 2 76 0 0
 1 0 0 1329628 884 1095492 0 0 0 0 1229 1552 16 1 83 0 0
 1 0 0 1327988 884 1095492 0 0 0 0 1110 1679 15 1 84 0 0
 1 0 0 1327804 884 1095628 0 0 0 0 873 1405 7 1 92 0 0
 0 0 0 1328340 884 1095576 0 0 0 0 620 1165 9 1 90 0 0
 1 0 0 1328340 884 1095576 0 0 0 0 548 1041 8 1 91 0 0
 0 0 0 1327964 884 1095576 0 0 0 0 485 1022 7 1 93 0 0
 0 0 0 1328184 884 1095576 0 0 0 0 771 1421 12 1 87 0 0
 1 0 0 1328376 884 1095524 0 0 0 0 809 1306 12 1 87 0 0
 0 0 0 1328116 884 1095528 0 0 0 0 541 1027 7 1 92 0 0
 0 0 0 1328116 884 1095532 0 0 0 0 745 1399 12 1 87 0 0
 1 0 0 1328108 884 1095532 0 0 0 0 745 1282 11 1 88 0 0
 0 0 0 1328268 884 1095504 0 0 0 0 553 1035 7 1 92 0 0
 0 0 0 1327700 884 1095504 0 0 0 0 3133 3305 19 2 79 0 0
 0 0 0 1327580 884 1095508 0 0 0 0 753 1201 9 1 90 0 0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 1 0 0 1327708 884 1095508 0 0 0 0 769 1452 12 1 87 0 0
 0 0 0 1327668 884 1095512 0 0 0 0 652 1156 9 1 90 0 0
 0 0 0 1327668 884 1095512 0 0 0 0 691 1161 9 1 90 0 0
 0 0 0 1327660 884 1095512 0 0 0 0 864 1437 14 1 85 0 0
 0 0 0 1327508 884 1095536 0 0 0 16 671 1192 9 1 90 0 0
 0 0 0 1327340 884 1095552 0 0 0 0 688 1164 9 1 90 0 0
 0 0 0 1327332 884 1095552 0 0 0 0 835 1422 12 1 87 0 0
 0 0 0 1327492 884 1095552 0 0 0 0 697 1173 9 1 90 0 0
 0 0 0 1327428 884 1095552 0 0 0 4876 931 1321 9 1 89 1 0
 0 0 0 1327412 884 1095552 0 0 0 0 839 1434 12 1 87 0 0
 0 0 0 1327332 884 1095568 0 0 0 0 741 1175 9 1 90 0 0
 1 0 0 1327524 884 1095568 0 0 0 0 849 1411 12 1 87 0 0
 0 0 0 1327180 884 1095568 0 0 0 0 705 1187 9 1 90 0 0
 0 0 0 1327396 884 1095568 0 0 0 0 3223 3107 17 2 82 0 0
 0 0 0 1327084 884 1095600 0 0 0 80 893 1406 12 1 87 0 0
 0 0 0 1327228 884 1095608 0 0 0 0 666 1175 9 1 90 0 0
 0 0 0 1327068 884 1095608 0 0 0 0 683 1192 9 1 90 0 0
 0 0 0 1327204 884 1095604 0 0 0 0 707 1434 12 1 87 0 0
 0 0 0 1327228 884 1095604 0 0 0 0 751 1198 9 1 90 0 0
 0 0 0 1327316 884 1095604 0 0 0 0 642 1165 9 1 90 0 0
 1 0 0 1327516 884 1095604 0 0 0 0 870 1456 12 1 87 0 0
[/CODE]

Then, whilst the VMs were booting:

[CODE]
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 0 0 0 1118092 884 1202524 0 0 197 69 157 177 9 1 84 5 0
 0 0 0 1117852 884 1202520 0 0 0 0 750 1197 10 1 89 0 0
 1 2 0 1107828 884 1205440 0 0 416 12928 2452 4081 27 3 64 6 0
 2 3 0 1101228 884 1206560 0 0 16 7936 4771 5932 25 7 52 16 0
 1 5 0 1100300 884 1207228 0 0 568 10912 1904 2014 10 2 59 29 0
 1 3 0 1099412 884 1208088 0 0 528 9184 2123 1844 14 6 47 34 0
 1 3 0 1099756 884 1208436 0 0 288 4960 4569 3841 27 6 44 24 0
 1 5 0 1097100 884 1210108 0 0 1284 5376 2353 2323 15 7 41 37 0
 2 3 0 1092412 884 1216212 0 0 5948 7424 2624 2677 23 7 44 26 0
 2 3 0 1093124 884 1216692 0 0 348 8096 2579 2516 26 5 45 24 0
 12 3 0 1092724 884 1216752 0 0 16 1216 2421 3015 31 7 43 20 0
 3 1 0 1087284 884 1221152 0 0 3804 1612 3108 3027 23 12 55 10 0
 4 0 0 1073140 884 1227088 0 0 2184 0 4988 3777 24 15 53 8 0
 14 3 0 1058424 884 1234544 0 0 2068 0 5865 6722 20 14 43 22 0
 4 3 0 1029192 884 1240912 0 0 1348 0 17759 17360 20 19 49 12 1
 1 0 0 1027284 884 1242740 0 0 544 0 24447 19886 24 19 54 3 1
 2 2 0 1059472 884 1240228 0 0 88 8616 5228 5041 23 14 61 2 0
 2 3 0 1059624 884 1240588 0 0 60 4768 25791 34544 8 5 71 15 1
 3 3 0 1058232 884 1240912 0 0 380 5216 65522 64276 17 9 62 10 2
 2 3 0 1057924 884 1242748 0 0 1948 6528 72970 70938 15 9 51 22 2
 3 3 0 1057172 884 1243256 0 0 384 4992 69147 65575 25 11 45 17 3
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 5 4 0 1055680 884 1245156 0 0 404 7488 70584 55030 38 10 32 18 2
 3 2 0 1053836 884 1245940 0 0 692 2912 77220 65796 26 10 47 15 3
 6 2 0 1046468 884 1252340 0 0 3600 12 77510 68893 13 17 54 14 2
 3 3 0 1047024 884 1255812 0 0 2296 4 73527 71405 21 15 47 14 3
 2 1 0 1044380 884 1258976 0 0 3192 0 71042 68214 21 10 53 14 2
 3 3 0 1038544 884 1264064 0 0 4988 0 71115 68692 22 10 54 12 2
 2 4 0 1035556 884 1268164 0 0 3804 16 71517 70263 20 11 51 15 3
 1 3 0 1022764 884 1279680 0 0 5804 896 74351 66822 16 10 59 12 2
 2 2 0 1019700 884 1283672 0 0 2168 7264 78544 73323 17 11 49 21 2
 3 3 0 1015936 884 1286480 0 0 3088 11360 72019 70805 22 11 43 22 2
 2 3 0 1016140 884 1287356 0 0 644 6144 86192 64289 15 11 52 20 2
 4 3 0 1010364 884 1292624 0 0 5364 20 70873 77635 23 12 46 17 3
 2 3 0 1004608 884 1298572 0 0 5680 0 54120 90058 24 14 35 24 3
 3 3 0 1001604 884 1301680 0 0 2852 0 59656 78179 26 10 37 23 3
 4 2 0 996844 884 1306996 0 0 5240 0 61491 66765 26 10 45 16 3
 3 0 0 986588 884 1316032 0 0 8884 0 64034 57978 40 14 42 3 2
 4 0 0 980740 884 1322468 0 0 6144 56 70514 72963 26 18 55 0 1
 4 0 0 976728 884 1326664 0 0 4112 0 62150 67371 26 17 55 0 1
 4 1 0 972540 884 1329332 0 0 2548 0 73070 89415 23 16 57 2 2
 16 1 0 968212 884 1334896 0 0 6212 0 68555 84749 34 17 45 2 2
 2 2 0 961572 884 1341432 0 0 6156 0 80497 103012 23 19 48 8 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 5 0 0 952764 884 1350924 0 0 9492 80 85975 85825 23 18 53 4 2
 3 1 0 946080 884 1358436 0 0 7024 0 75362 97781 24 17 51 6 2
 6 1 0 935528 884 1367248 0 0 8852 0 77316 72739 36 19 39 3 3
 11 1 0 930272 884 1373000 0 0 5788 0 82908 71842 35 20 38 5 2
 4 0 0 925796 884 1377660 0 0 4408 0 88502 76912 36 18 37 6 2
 3 2 0 922164 884 1381284 0 0 3652 0 97951 80875 24 18 45 10 3
 4 2 0 919324 884 1384344 0 0 3068 0 65708 107273 34 18 40 5 3
 3 1 0 915720 884 1388464 0 0 4028 0 66566 106743 34 19 34 9 3
 4 1 0 907816 884 1394352 0 0 6252 0 79160 82616 31 18 39 10 3
 4 2 0 902220 884 1400804 0 0 5936 0 89479 88671 30 16 44 8 3
 3 3 0 899480 884 1403896 0 0 2992 76 89381 84162 25 18 43 11 3
 4 4 0 896012 884 1407640 0 0 3632 0 105728 108067 22 17 46 13 3
 4 3 0 886648 884 1416256 0 0 7984 0 97581 87079 18 18 53 8 2
 4 5 0 877368 884 1425996 0 0 8284 0 80781 100376 20 18 48 13 2
 4 3 0 865104 884 1438572 0 0 6748 0 80675 81140 26 18 33 21 2
 3 3 0 856120 884 1447188 0 0 6612 20 75629 79150 26 17 48 7 2
 2 1 0 848632 884 1453676 0 0 6204 0 70250 81561 26 18 49 5 2
 4 0 0 840040 884 1461788 0 0 8108 0 79569 92775 25 18 44 9 3
 4 2 0 833816 884 1469404 0 0 7428 0 76832 104559 25 19 42 11 2
 4 2 0 823452 884 1478160 0 0 8040 0 76535 92930 30 19 39 9 3
 20 3 0 806136 884 1491308 0 0 11660 20 77341 78896 35 18 33 12 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 12 4 0 799128 884 1496172 0 0 1512 15776 74031 105582 20 20 40 18 2
 3 4 0 798800 884 1496964 0 0 484 8416 68406 114317 17 17 46 16 3
 4 4 0 799312 884 1497376 0 0 372 9440 68903 112764 23 20 32 22 3
 5 3 0 798692 884 1498044 0 0 584 4096 83522 97503 35 18 24 20 3
 3 1 0 795036 884 1502264 0 0 4372 12 76433 104558 22 20 37 19 3
 4 1 0 784124 884 1508660 0 0 6336 0 82664 93961 24 19 47 7 3
 5 2 0 779320 884 1514328 0 0 5424 0 78103 113975 18 20 51 9 2
 4 0 0 771300 884 1521140 0 0 6948 0 80861 92592 19 18 52 8 3
 4 0 0 772032 884 1528704 0 0 7332 0 76961 81925 27 20 44 6 3
 3 0 0 750244 884 1536304 0 0 7716 0 73314 83922 28 20 45 5 2
 4 0 0 743904 884 1544076 0 0 7548 0 77061 75125 33 21 43 1 2
 2 2 0 736568 884 1551136 0 0 6932 0 62879 94448 25 16 52 4 3
 4 1 0 727992 884 1559732 0 0 8676 0 72555 104421 21 20 47 10 3
 2 1 0 719688 884 1566864 0 0 7292 0 81088 97707 23 19 46 11 2
 3 2 0 712880 884 1574688 0 0 7576 0 92254 78076 23 17 49 8 3
 4 1 0 707108 884 1580572 0 0 6052 0 73342 91887 30 20 42 5 3
 4 1 0 699856 884 1588220 0 0 7248 0 71891 100198 25 20 39 13
[/CODE]

I dropped the memory back to 4GB and the VM did start.

Hi carnold6,

so why didn’t it start? Couldn’t the DomU be created and if so, what caused that failure?
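If the error message itself didn't say more, the usual suspects are a lack of free memory in the hypervisor or something in the logs - a quick sketch of where I'd look (the log path assumes the libvirt/libxl stack that virt-manager uses):

[CODE]
# is there enough unallocated memory left in the hypervisor for an 8 GB guest?
xl info | grep -E 'total_memory|free_memory'

# reason for the failed domain creation, as logged by libvirt's libxl driver
tail -n 50 /var/log/libvirt/libxl/libxl-driver.log

# and the hypervisor's own messages
xl dmesg | tail -n 50
[/CODE]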

The vmstat, if at a one-second interval, isn't too bad and in particular covers just about a minute of high I/O load - were the iowaits back to low values after that? I'm pretty old-school; having to wait a minute till a machine is up looks fast to me :D

Regards,
J

[QUOTE=jmozdzen;33463]Hi carnold6,

so why didn’t it start? Couldn’t the DomU be created and if so, what caused that failure?

The vmstat, if in seconds interval, isn’t too bad and especially covers just about a minute of high i/o load - were iowaits back to low values after that? I’m pretty old-school, having to wait a minute 'till a machine is up looks fast to me :D[/QUOTE]

No, the iowaits stay at high values; I just didn't post all the output because it would be 30 or more minutes' worth of text, which is approximately how long it takes for the VM to boot up. After the VM boots, the iowaits return to low values.

I don't know if you would be game for this or not, but I could set you up with remote access so you can see exactly what I see. Let me know how you feel about this.

BTW, every time I search for xen export vm/config, I get stupid Citrix results. Do you know of a link to walk me through exporting those VMs correctly so I can reuse them after the disk is replaced?