Where can I configure the network MTU (Maximum Transmission Unit) value? (Jumbo Frame)
On the web interface, under “System \ Network,” you can find the data port setting.
Ensure the green link indicator is lit when the physical link is connected.
Note:
For the 10GbE SFP+ transceiver/GBIC, please use the following SFP+ transceiver:
Intel Ethernet SFP SR Optics, P/N: E10GSFPSR http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000005528.html
Currently, NeoSapphire products are installed with Intel x710-DA4 10GbE SFP+ network cards to provide 10Gbps access. However, some 10GbE SFP+ transceivers are incompatible with the Intel x710-DA4 card.
We use a 4KB block size in the dashboard to measure the IOPS value. If you test storage benchmarks with 8KB, 32KB, or even 128KB blocks instead, the IOPS reported by your benchmarking utilities, such as fio or IOmeter, will be much lower than what you see in the Web UI dashboard. The total throughput, however, stays the same: Throughput = block size * IOPS.
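The relationship above can be sketched in a few lines of Python. The 360K IOPS / 4KB figures below are taken from elsewhere in this FAQ and used purely as example inputs:

```python
def throughput_bytes_per_sec(block_size_bytes: int, iops: float) -> float:
    """Throughput = block size * IOPS."""
    return block_size_bytes * iops

# Example: 4 KB blocks at 360,000 IOPS
tp = throughput_bytes_per_sec(4 * 1024, 360_000)
print(f"{tp / 1e9:.2f} GB/s")  # roughly 1.47 GB/s

# At the same throughput, 128 KB blocks yield far fewer IOPS
print(tp / (128 * 1024))  # 11250.0
```

This shows why a 128KB benchmark reports a fraction of the dashboard's 4KB IOPS number even though both move the same amount of data per second.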
Google Chrome 48 and above
Microsoft Internet Explorer 11 and above
Mozilla Firefox 42 and above
If you still encounter login issues, please clear the browser’s cookies and try again.
For Internet Explorer:
Go to Tools in the menu bar, then click on Internet Options in the drop-down menu.
Click on the Privacy tab on top.
Click on Sites, then a new window will open called Per Site Privacy Actions.
Under the Managed websites box, a list of all the websites you’ve visited will appear.
To remove all cookies, simply click Remove All.
For Chrome:
Enter chrome://settings in the address bar, expand the Advanced section, and clear the browsing data.
Web UI access on NeoSapphire products uses TCP port 80. You can also use HTTPS for access via TCP port 443. These port number settings are adjustable in the Web UI, under the System\General Settings\Web Administration page.
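If the Web UI does not load, a quick TCP reachability check can rule out network problems. This is a minimal sketch using Python's standard socket module; the host and ports shown are the defaults mentioned above:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the default HTTP (80) and HTTPS (443) Web UI ports
for port in (80, 443):
    state = "open" if port_open("192.168.1.1", port) else "closed/unreachable"
    print(port, state)
```

If both ports report closed/unreachable, verify cabling and subnet settings before looking at browser-side issues.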
The default IP address of the management port on a NeoSapphire product is 192.168.1.1, with subnet mask 255.255.255.0. Before you can start configuring, you might have to configure your client to the same subnet as the NeoSapphire to gain access to the management port. Use the ping test utility to check and verify the connectivity between the NeoSapphire and your client if necessary.
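The same-subnet requirement can also be verified programmatically. A short sketch using Python's standard ipaddress module (the client addresses below are hypothetical examples):

```python
import ipaddress

# Default management subnet: 192.168.1.1 with mask 255.255.255.0
MGMT_NETWORK = ipaddress.ip_network("192.168.1.0/24")

def same_subnet(client_ip: str) -> bool:
    """True if the client address falls inside the management subnet."""
    return ipaddress.ip_address(client_ip) in MGMT_NETWORK

print(same_subnet("192.168.1.50"))  # True: can reach the management port
print(same_subnet("10.0.0.5"))      # False: reconfigure the client first
```

A client whose address falls outside 192.168.1.0/24 must be re-addressed (or routed) before it can reach the management port.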
For detailed information, please refer to the “Core Technology” -> “Technical Insights”.
AccelStor NeoSapphire can accommodate up to 20 SSDs in a 1U rack space, delivering sustained 360K IOPS for 4KB random writes (with standard iSCSI protocol over mainstream 10GbE connectivity) and 11TB usable capacity. No comparable product is available with such density of both performance and capacity in a 1U rack space.
People generally understand that the data reduction ratio is highly dependent on the workload. So, the common metric for comparing the capacity is “usable capacity without any data reduction” of the basic unit of the all-flash array. Current models of AccelStor NeoSapphire all-flash arrays come with 5TB - 13TB usable capacity in a single 2U basic unit (without stacking).
The 1GbE management port is for web management access only. To access data, connect to the data port using a 10GbE SFP+ transceiver.