One of the researchers at my organization was gifted a JBOD enclosure populated with sixty 10TB drives. He was also given two 480GB SSDs for use with the enclosure.
To this point, all of my large-scale storage experience has been with SAN arrays or NAS enclosures.
Since this enclosure does not have the intelligence of either of those devices, I’m trying to figure out the best way to utilize the enclosure.
I’ve had the researcher purchase a server with a Xeon Gold 5122 CPU and 256GB of RAM to serve as the front end for the enclosure.
The question is, what software to use?
The SSDs were provided with the intent of using them as a cache for the enclosure.
As far as I can tell, the only way to add a caching layer to an enclosure that has no built-in mechanism is to pair it with a software-defined storage product, such as SUSE Enterprise Storage.
The enclosure will only be used by a half dozen researchers to store large images and the output of the system that will be used to analyze those images.
Given that the data will be 1-5TB files, and that most of the activity will involve writing those files rather than frequently reading them, I'm not sure the cache is needed.
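For what it's worth, a quick back-of-envelope calculation (with assumed numbers) suggests the cache would be small relative to the files anyway:

```python
# Rough sanity check with assumed numbers: how much of one large file
# a single 480 GB SSD cache could absorb before writes fall back to
# the speed of the spinning disks behind it.
cache_gb = 480          # one of the provided SSDs
file_gb = 1 * 1024      # smallest expected file: 1 TB

fraction_cached = min(1.0, cache_gb / file_gb)
print(f"Cache covers {fraction_cached:.0%} of a 1 TB file")
# → Cache covers 47% of a 1 TB file
```

For the 5TB files the coverage drops below 10%, so sustained write speed would be dominated by the spindles either way.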
In addition, the multiple servers needed to provide SDS storage nodes and monitor nodes seem like an expensive, overly complex solution for this particular situation.
I've also considered FreeNAS, since it can use the SSDs for write caching, but I've seen a fair amount of discussion over how dependable the ZFS file system is.
The fact that the server has ECC memory addresses at least one common concern with ZFS and its in-memory ARC caching (strictly speaking, the ARC is a read cache; synchronous writes go through the ZIL, which is what the SSDs would back as a SLOG).
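If I did go the FreeNAS/ZFS route, my understanding is the layout would look something like the following. This is only a sketch: the device names are placeholders and the grouping into 10-disk raidz2 vdevs is my assumption, not a recommendation from any vendor.

```shell
# Sketch: pool the 60 x 10 TB drives as six 10-disk raidz2 vdevs.
# Device names (da0, da1, ...) are placeholders for illustration.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
# (the remaining four raidz2 vdevs would follow the same pattern)

# Attach the two 480 GB SSDs as a mirrored SLOG.
# Note: a SLOG only accelerates synchronous writes; large async
# SMB transfers would mostly bypass it.
zpool add tank log mirror ada0 ada1
```

That last point is part of why I doubt the SSDs would buy much for this workload.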
My original intention was simply to set up the server as a SLES 11/12 server, use software RAID to manage the storage, and export it as an SMB share.
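Concretely, that plan would be something along these lines. Again, just a sketch: device names are placeholders, and the RAID-6 grouping, mount point, and share name are my assumptions.

```shell
# Sketch: one RAID-6 array per group of 10 drives (placeholder
# device names), formatted with XFS and exported over SMB.
mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
mkfs.xfs /dev/md0
mkdir -p /srv/images
mount /dev/md0 /srv/images

# /etc/samba/smb.conf fragment for the share:
# [images]
#    path = /srv/images
#    read only = no
#    valid users = @researchers
```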
I wanted to make sure, however, that I’m not missing an alternative solution that might be a better fit for the hardware.
Any thoughts are welcome.