In the past, I always used the NOOP scheduler for my SLES guest VMs running under VMware ESXi (elevator=noop). I see that in SLES 15 SP2, NOOP is gone. Or has it been replaced with NONE?
Is this still a best practice? And if so, what is the proper way to make it persistent? I’ve been playing around with udev rules. Is it proper/best practice to directly modify 60-io-scheduler.rules in /lib/udev/rules.d?
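(For reference, the scheduler actually in use can be checked per device; the active one is shown in brackets. sda below is just an example device name:)
# Show the schedulers available for a device; the active one is in [brackets]
cat /sys/block/sda/queue/scheduler
# On a blk-mq kernel this typically lists something like: [mq-deadline] kyber bfq none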
TIA
Matt
@MATT, Hi, what is the underlying storage? Use the new block schedulers for sure; boot with scsi_mod.use_blk_mq=1
in the grub kernel options if it’s not already present. For rotating rust use bfq, for SSDs mq-deadline, and for NVMe none.
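A minimal sketch of making that option persistent, assuming the stock GRUB2 setup on SLES:
# Append the option to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="... scsi_mod.use_blk_mq=1"
# then regenerate the grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg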
If you’re going to create your own scheduler rule, add a /etc/udev/rules.d/61-bfq-scheduler.rules
and modify it as required.
I used to use the following for just HDD/SSD:
# Add multi-queue scheduler support
# Also add to grub kernel options
# scsi_mod.use_blk_mq=1
# Filename: /etc/udev/rules.d/61-bfq-scheduler.rules
# SSD
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
# Rotating
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
Does it matter that it is a VM and not direct-attached storage? The controller is VMware Paravirtual SCSI (PVSCSI). The storage (in this particular case) is a SAN backed by SSD (I believe; trying to verify). It was defaulting to mq-deadline for the scheduler. I’m surprised how hard it is to find any current information on this topic. Sounds like I should just leave it alone then?
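(For what it’s worth, here is one way to see how the guest kernel classifies each disk; ROTA=0 means it is treated as non-rotational, and SCHED shows the active scheduler:)
# List whole disks with their rotational flag and active I/O scheduler
lsblk -d -o NAME,ROTA,SCHED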
So I should NOT be modifying the 60-io-scheduler.rules then?
Matt
@MATT Hi, if it’s defaulting to mq-deadline, then you should be good to go. Any configuration changes should be done via new rules in /etc/udev/rules.d, else you make a change and it will get overwritten on an update. You might also want to run
rpmconfigcheck
to see if there are any unresolved configuration changes. I normally run
diff -Naur systemfile systemfile.rpmnew
to check the changes and replace with the rpmnew one if required…
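A short sketch of that workflow, with a hypothetical config file as the example:
# List config files left with unresolved .rpmnew/.rpmorig/.rpmsave variants
rpmconfigcheck
# For each flagged file, compare the installed copy against the packaged one,
# e.g. for a hypothetical /etc/example.conf:
diff -Naur /etc/example.conf /etc/example.conf.rpmnew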