All performance specifications of NeoSapphire all-flash arrays are measured under a sustained workload of 100% 4KB random writes. Rather than publishing 4KB random write figures, most vendors instead quote mixed read/write numbers, figures obtained with a different, more favorable block size, or small IOPS numbers derived from the most basic measurable unit (without scaling, stacking, etc.).
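For comparison on the host side, 4KB random write IOPS can be measured with a tool such as fio. The command below is only a minimal sketch; the target device, queue depth, job count, and runtime are assumptions to be adapted to the actual test setup, and note that writing directly to a block device destroys its data:
# fio --name=4k-randwrite --filename=/dev/sdX --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=300 --time_based --group_reporting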
The two HA nodes communicate using heartbeat signals sent over a Gigabit Ethernet interlink to inform each other of their operational status. The heartbeat signal is exchanged between the two nodes at 1-second intervals.
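The heartbeat itself runs inside the arrays and is not configured from the host; purely as a conceptual sketch, its behavior resembles the loop below, where each node probes its peer over the interlink once per second (the peer address is an illustrative placeholder):
# while true; do ping -c 1 -W 1 <peer_interlink_ip> >/dev/null || echo "peer heartbeat lost"; sleep 1; done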
While data is synchronizing over the InfiniBand (IB) link, the updated synchronization status (to/from) is displayed under the iSCSI server in the HA information.
Additional host-side MPIO (multipath I/O) configuration is required to correctly fail over I/O between the different paths to ports on the two nodes. I/O will only switch automatically from a failed path to a working one if the MPIO settings are configured correctly.
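As one possible starting point on a Red Hat Linux host, the native dm-multipath stack can be installed and enabled with the commands below. This is only a sketch; the multipath policies appropriate for NeoSapphire volumes should be taken from the vendor's host configuration guide:
# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# multipath -ll
After configuration, multipath -ll should list each volume once, with the paths to both nodes grouped under it.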
Using asynchronous access mode does not affect HA behavior. The data is still written to and mirrored between the two HA nodes.
Yes, the two nodes of a NeoSapphire H710 work in an active/active HA configuration, and all synchronization and communication between the two nodes takes place over the IB link.
Yes, the replacement SSD will automatically become a hot spare dedicated to its node.
The NeoSapphire H710 has two HA nodes, each configured with an identical SSD group. Each disk group consists of 12 SSDs, one of which serves as a spare. The logs/tables are saved across the remaining 11 SSDs in the group, one of which stores parity. For single-node NeoSapphire models, the logs/tables are saved in the same way across all SSDs apart from one spare.
After replacing a controller, the new controller must be configured with its IP settings (Management Port/Data Port). To join the new controller to the existing HA group (the original controller), set the new controller's version to ZERO. Once the new controller has successfully joined the HA group, data begins to synchronize from the SSDs on the existing controller to the SSDs on the new controller.
The NeoSapphire 3501 and 3505 support NFS over RDMA. The service is enabled for NFS clients by default, and the NFS RDMA port is 20049.
In a Red Hat Linux environment, you can use the following commands to install the RDMA package and mount NFS over RDMA:
# yum install rdma; chkconfig --level 2345 rdma on
# mount -t nfs -o rdma,port=20049 <server_ip>:/<export_path> /<local_mount_point>
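If the mount fails, the NFS RDMA transport module may need to be loaded first, and the active transport can then be checked in /proc/mounts. The following is a sketch assuming the same Red Hat environment:
# modprobe xprtrdma
# grep rdma /proc/mounts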