SLES 11 SP3 - recommendations for SSD disks

Are there any recommendations for getting the most out of an SSD disk on an application server running SLES 11 SP3 with ext4 file systems? Any best practices?

Which of these will be better for an ext3/ext4 application mount point:

noatime,nodiratime,nobh,data=writeback, then set the I/O scheduler to noop (restart):
echo noop > /sys/block/sdb/queue/scheduler

or

noatime,nodiratime,discard,errors=remount-ro 0 1
echo noop > /sys/block/sdb/queue/scheduler

or

ext4 defaults,errors=remount-ro,noatime,discard 0 1
echo noop > /sys/block/sdb/queue/scheduler

or

ext4 noatime,data=writeback,barrier=0,nobh,errors=remount-ro 1 1
echo noop > /sys/block/sdb/queue/scheduler
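Note that echoing into /sys/block/sdb/queue/scheduler is not persistent across reboots. On SLES 11 (GRUB legacy) the scheduler can instead be set globally via the kernel command line in /boot/grub/menu.lst; the kernel line below is only a sketch, and the root/resume devices are example values to be adapted to the actual boot entry:

[CODE]# /boot/grub/menu.lst -- append elevator=noop to the existing kernel line:
kernel /boot/vmlinuz root=/dev/sda1 resume=/dev/sda2 splash=silent elevator=noop[/CODE]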

Hi and welcome to the Forum :slight_smile:
Back when I had an SSD with SLE 11, all I used was the noop elevator plus the discard mount option, and I left everything else at its defaults. I also reduced swappiness via the /etc/sysctl.conf file.

Use a modern and robust file system. Recommendation for SLE11: XFS v4, and for SLE12: XFS v5:
https://forums.suse.com/showthread.php?10294-about-partition&p=40594#post40594

Check the TRIM support of your SSD:
https://forums.suse.com/showthread.php?9508-discard-option-is-not-supported-why&p=38320#post38320

https://techgage.com/article/enabling_and_testing_ssd_trim_support_under_linux/2/

Remove all discard mount options from /etc/fstab (for ext3/ext4/xfs) and use fstrim instead!

[CODE]# man xfs

man ext4

man fstrim[/CODE]

# command-not-found fstrim => rpm "util-linux"

# fstrim -a -v
/home: 366.7 GiB (393764179968 bytes) trimmed
/: 6.4 GiB (6835650560 bytes) trimmed

There is an automatic weekly systemd job for fstrim in SLE12:

# rpm -ql util-linux-systemd | grep -i fstrim
/usr/lib/systemd/system/fstrim.service
/usr/lib/systemd/system/fstrim.timer
/usr/sbin/rcfstrim

[CODE]# systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Sam 2018-07-28 10:37:11 CEST; 52min ago
Docs: man:fstrim

Jul 28 10:37:11 core2duo2400 systemd[1]: Started Discard unused blocks once a week.[/CODE]

There is probably a cron job for fstrim under SLE11 in /etc/cron.weekly/!?
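If not, a minimal weekly cron drop-in could be created by hand. This is only a sketch, assuming fstrim from util-linux is installed and that / and /home are the SSD-backed mount points on your system:

[CODE]#!/bin/sh
# Hypothetical /etc/cron.weekly/fstrim for SLE11 (no systemd timer there).
# -v prints how much space was discarded on each mount point.
for mnt in / /home; do
    /sbin/fstrim -v "$mnt"
done[/CODE]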

Activate “discard=once” for the swap partition in /etc/fstab:

/dev/sda6            swap                 swap       defaults,discard=once
# man swapon
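The same policy can be tried at runtime. The long --discard=once form needs a newer util-linux (older versions only know -d); /dev/sda6 is the example device from the fstab line above:

[CODE]# swapoff /dev/sda6
# swapon --discard=once /dev/sda6
# swapon -s[/CODE]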

Reduce swappiness in /etc/sysctl.conf:

vm.swappiness = 10
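The /etc/sysctl.conf entry only takes effect at the next boot; as root it can be applied and verified immediately:

[CODE]# sysctl -w vm.swappiness=10
vm.swappiness = 10
# cat /proc/sys/vm/swappiness
10[/CODE]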

Use the mount options “noatime,nodiratime” in /etc/fstab to extend the lifetime of your SSD:

/dev/sda1 /     xfs rw,noatime,nodiratime,nodev,attr2,inode64,noquota 1 1
/dev/sda5 /home xfs rw,noatime,nodiratime,nodev,attr2,inode64,noquota,nosuid,noexec 1 2

Check your SSD’s health state periodically with smartctl:

cnf smartctl => rpm smartmontools

[CODE]# /usr/sbin/smartctl -t long /dev/sda
=> wait some minutes…
# /usr/sbin/smartctl -i /dev/sda
# /usr/sbin/smartctl -H -l error /dev/sda
# /usr/sbin/smartctl -A /dev/sda
# /usr/sbin/smartctl -l selftest /dev/sda[/CODE]

Linux 4.12 I/O Scheduler Benchmarks:
https://www.phoronix.com/scan.php?page=article&item=linux-412-io&num=1

Mount /tmp and /var/tmp as tmpfs:

/etc/fstab

tmpfs /tmp     tmpfs rw,noatime,nodiratime,nodev,noexec,nosuid 0 0
tmpfs /var/tmp tmpfs rw,noatime,nodiratime,nodev,noexec,nosuid 0 0
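Note that tmpfs defaults to a size limit of half the RAM; an explicit limit can be set with the size mount option (size=2G below is just an example value):

[CODE]tmpfs /tmp tmpfs rw,noatime,nodev,noexec,nosuid,size=2G 0 0[/CODE]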

Some file system benchmarks:
https://www.phoronix.com/scan.php?page=article&item=linux-416-fs

https://dzone.com/articles/xfs-vs-ext4-comparing-mongodb-performance-on-aws-e

/dev/SSD_DG/SSD_LVM /u02 xfs defaults,noatime,discard,nodiratime 0 0

Is this correct for an SSD mount point for the Oracle application tier?

Did you read the manual?

[CODE]# man xfs
       discard|nodiscard
              Enable/disable the issuing of commands to let the block device
              reclaim space freed by the filesystem.  This is useful for SSD
              devices, thinly provisioned LUNs and virtual machine images,
              but may have a performance impact.

              Note: It is currently recommended that you use the fstrim
              application to discard unused blocks rather than the discard
              mount option [B]because the performance impact of this option
              is quite severe[/B].  For this reason, nodiscard is the
              default.[/CODE]

There is “wiper.sh” in the “hdparm” rpm for manual TRIM support on SLE11:
https://www.suse.com/documentation/sles11/stor_admin/data/sec_filesystems_info.html

oul164:/u02 # /sbin/wiper.sh /dev/sdb

wiper.sh: Linux SATA SSD TRIM utility, version 2.5, by Mark Lord.
wiper.sh: This tool is DANGEROUS! Please read and understand
wiper.sh: /usr/share/doc/packages/hdparm/README.wiper
wiper.sh: before going any further.
HDIO_DRIVE_CMD(identify) failed: Invalid exchange
/dev/sdb: DSM/TRIM command not supported (continuing with dry-run).
/dev/sdb: offline TRIM not supported for LVM2_member filesystems, aborting.