either you created a new account or it was some other place you originally posted this question - you have a post count of “1” with this user, so there’s no “above”
I guess you’re seeing the effects of block buffering on Dom0 - the writes from DomU go to the I/O buffers in DomU and are flushed there when you close the FS - but those writes go to the Dom0 block device, which will do its own buffering.
On the other Dom0, where you run vm1, you’ll run into the same situation: Dom0 caches the block device again. So for a “clean test”, you’ll have to:
1. do your write action & VG deactivation on vm1
2. sync & flush the block cache on Dom0(vm1)
3. flush the block cache on Dom0(vm2)
4. activate the VG, read, write, deactivate the VG on vm2
5. sync & flush the block cache on Dom0(vm2)
6. flush the block cache on Dom0(vm1)
7. activate the VG, read on vm1
Obviously this is no way to go for a production system.
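As a rough sketch of that sequence (the VG name “vgshared”, LV “lv01” and the mount point are made-up placeholders; the cache-dropping steps need root on the Dom0s):

```shell
# --- inside vm1: write, then hand the VG over ---
umount /mnt/shared            # flush the guest-side FS buffers
vgchange -an vgshared         # deactivate the VG in vm1

# --- on the Dom0 hosting vm1 ---
sync                          # write out dirty pages
echo 3 > /proc/sys/vm/drop_caches   # drop clean cached pages

# --- on the Dom0 hosting vm2 ---
echo 3 > /proc/sys/vm/drop_caches   # drop possibly stale cached blocks

# --- inside vm2: activate, use, deactivate ---
vgchange -ay vgshared
mount /dev/vgshared/lv01 /mnt/shared
# ... read / write here ...
umount /mnt/shared
vgchange -an vgshared
```

The same sync/drop steps would then be repeated in the other direction before reading on vm1 again.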
Regards,
Jens
Hi Jens,
Not above, but below; sorry, that was my mistake. And there I wrote that one of my attempts was to specify the cache mode with the value “none”.
I’d be really happy to hear your opinion on how to provide storage resources for two VMs (the resources are in a cluster state and must move between the VMs, and I cannot use iSCSI because the SAN is on FC), because I have no more ideas.
how to provide storage resources for two VMs (the resources are in a cluster state and must move between the VMs, and I cannot use iSCSI because the SAN is on FC), because I have no more ideas
before walking down that route, have you tried explicitly disabling caching on the Dom0s to verify that Dom0 caching really is the cause? Maybe it’s actually something different that’s preventing the update from being seen on the other DomU, especially since data written on vm1 seems to be visible on vm2?
Also, how about doing two rounds of “writing on vm1, then reading on vm2” to verify that it works reliably in that direction? It could help in better understanding the issue.
Regards,
Jens
Ok, but I am new to Xen virtualization. Could you please explain how to do this correctly: sync & flush the block cache on Dom0?
could you please explain how to do this correctly: sync & flush the block cache on Dom0
I was referring to “sync && echo 3 > /proc/sys/vm/drop_caches”, being the standard Linux sequence to first try to write all dirty cache pages and then drop any (clean) cached pages.
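For reference, the value written to drop_caches selects what is dropped (this needs root, and only clean pages are dropped, which is why the sync comes first):

```shell
sync                                  # first write dirty pages back to disk
echo 1 > /proc/sys/vm/drop_caches     # drop the page cache only
echo 2 > /proc/sys/vm/drop_caches     # drop dentries and inodes
echo 3 > /proc/sys/vm/drop_caches     # drop both
```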
What about the next steps: how can I bypass this buffer and write directly to the SAN from the VMs? Any suggestions?
My best bet would have been using “cache=none” (which you already tried) - this will not avoid the write cache, but in your sequence of operations, the guest should have issued the required flush I/O commands to purge that cache.
In other words, I have no idea how to make sure the caches will be fully avoided. Perhaps you should consider an active/active approach using a cluster FS?
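As a side note for testing: you can at least take the writing system’s own page cache out of the picture with O_DIRECT writes, e.g. via dd (the target path is just an example). Whether the data actually reaches the physical disk still depends on any caching in Dom0 and the storage backend:

```shell
# write 4 KiB bypassing the page cache of the system doing the write
dd if=/dev/zero of=./directtest bs=4096 count=1 oflag=direct conv=fsync
# oflag=direct -> open the output file with O_DIRECT
# conv=fsync   -> fsync() at the end to push data through remaining caches
```

Note that O_DIRECT requires aligned block sizes and is not supported by every filesystem (tmpfs, for instance, rejects it).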
If you can spare a “service request”, you might try to get an answer from the SUSE engineers on how to successfully de-configure caching.