Shared hard drive problem (multi-attach EBS)

Hi,
I have a load balancer with three frontend servers (AWS EC2, SUSE Linux), and because of our application requirements I need all three systems to read and run the application from a single source. For this I created a multi-attach EBS volume formatted with XFS.
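
Roughly, I created and attached the volume like this (the volume/instance IDs and the availability zone below are placeholders; Multi-Attach requires an io1/io2 Provisioned IOPS volume):

# create the Multi-Attach volume
aws ec2 create-volume --volume-type io2 --iops 3000 --size 100 \
  --availability-zone eu-west-1a --multi-attach-enabled
# attach the same volume to each of the three frontends
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf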

The problem is that whenever any of the three frontends is restarted, filesystem corruption appears. Here is part of the logs:

Jan 20 09:53:44 FRONT-2 kernel: XFS (nvme1n1p1): Metadata corruption detected at xfs_iread_extents+0x420/0x4f0 [xfs], >
Jan 20 09:53:44 FRONT-2 kernel: XFS (nvme1n1p1): Unmount and run xfs_repair
Jan 20 09:53:44 FRONT-2 kernel: XFS (nvme1n1p1): First 72 bytes of corrupted metadata buffer:
Jan 20 09:53:44 FRONT-2 kernel: 00000000: 42 4d 41 33 00 00 00 28 ff ff ff ff ff ff ff ff  BMA3...(........
Jan 20 09:53:44 FRONT-2 kernel: 00000010: ff ff ff ff ff ff ff ff 00 00 00 00 00 00 6f 98  ..............o.
Jan 20 09:53:44 FRONT-2 kernel: 00000020: 00 00 00 04 00 00 4e 63 43 f4 45 43 2b 1a 43 7c  ......NcC.EC+.C|
Jan 20 09:53:44 FRONT-2 kernel: 00000030: 80 36 c5 56 5c 5f 18 e9 00 00 00 00 00 00 41 bf  .6.V\_........A.
Jan 20 09:53:44 FRONT-2 kernel: 00000040: 0c 59 9a d7 00 00 00 00                          .Y......

If I repair the filesystem and fill it with the application again, it is only a short matter of time before the error reappears, although in a different part of the disk. I have also tried formatting the volume with ext4, and the problem keeps appearing.
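
The repair I run each time on the affected frontend is roughly the following (using the device name reported in the kernel log above):

# stop the application, then take the shared volume offline
umount /srv
# xfs_repair needs the filesystem unmounted
xfs_repair /dev/nvme1n1p1
mount /srv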

This is my /etc/fstab, identical on all three servers:

LABEL=ROOT        /            xfs   defaults  0  0
LABEL=EFI         /boot/efi    vfat  defaults  0  0
LABEL=SCHOOLYARD  /srv         xfs   defaults  0  0
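
On each frontend I can check that LABEL=SCHOOLYARD really is the multi-attached volume (the nvme1n1p1 device from the log), for example:

# which device carries the SCHOOLYARD label (prints /dev/nvme1n1p1 here)
blkid -L SCHOOLYARD
# filesystem, label and mountpoint of the shared disk
lsblk -f /dev/nvme1n1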

I have also deleted the EBS volume and created a new one, just in case, but the problem persists.

Can you help me with this problem?

Thank you in advance,

Please read:
https://stackoverflow.com/questions/62770861/what-must-be-the-file-system-on-aws-ebs-multi-attach-volume
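
In short: Multi-Attach only makes the same block device visible to several instances. XFS and ext4 are not cluster-aware, so each node caches and writes metadata independently, which is exactly the corruption you see after a reboot. You need either a cluster filesystem (e.g. GFS2 or OCFS2 with proper cluster locking) on top of the multi-attach volume, or a shared network filesystem such as Amazon EFS. As a rough sketch, an EFS mount could replace the SCHOOLYARD line in /etc/fstab (the filesystem ID and region are placeholders):

# EFS (NFS) export mounted on /srv on all three frontends
fs-12345678.efs.eu-west-1.amazonaws.com:/  /srv  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport  0  0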