Backup setup for two HP Microserver Gen8 NAS servers

February 8, 2017

I am using two HP MicroServer Gen8 machines as NAS servers with the following setup:

NAS1 (Linux Fedora 21):

  • LSI SAS 9207-4i4e Host Bus Adapter Kit (4-port internal, 4-port external, 6Gb/s SATA+SAS, PCIe 3.0 HBA)
  • RAID1: 2x WD Red 5TB on the LSI HBA – RAID1 data read speed is around 145MB/s, so enough to feed the LTO-4 drive.
  • Tape drive: HP StorageWorks 1760 LTO-4 – the tape drive requires a sustained rate of 4.5GB/min to avoid the shoe-shine effect (constant stopping and repositioning of the tape when the data feed rate is too low). The required rate is roughly 80MB/sec.

NAS2 (Linux Fedora 21):

  • RAID1: 2x WD Red 3TB on the 6Gb ports of the B120i – RAID1 data read speed is around 125MB/s (it looks like the B120i is slower than the LSI HBA), so in theory enough to feed the LTO-4 drive.

NAS Switch

HP 1810-8G gigabit switch between the NAS1 and NAS2 servers.

And a picture with the whole stack 🙂


For the backup of the resources from NAS1 there are no issues; it executes at the maximum speed of 4.5GB/min (approx. 80MB/sec) using Yosemite Barracuda Server Backup Basic.

The backup of the resources from NAS2 (an NFS export of the storage directory) is another issue.

  1. The first trial was with the default setup: a simple NFS export from NAS2. I got constant shoe-shine and a rate of around 2GB/min maximum (below 40MB/sec). It was clear the bottleneck was not at the NAS2 disk read level, as 125MB/sec exceeds the 40MB/sec feed to the tape drive.
  2. Because Yosemite Barracuda does some caching in the /tmp directory on NAS1 (the host of the tape drive), my first idea was to check the write/read speed to /tmp. The read speed from the OS SSD of NAS1, which sat on the ODD port, was around 145MB/sec, clearly very bad for an SSD. After moving the SSD to port 0 of the B120i HBA (note that the RAID1 drives are already on the LSI HBA) I got a much better read speed (and therefore also write speed) of 295MB/sec. As a result the backup of the NAS2 NFS export was faster, but still only around 3GB/min.
  3. The next remaining bottleneck was the network link between NAS1 and NAS2. Testing with iperf I got a real transfer rate of 980Mbps between NAS1 and NAS2. In a real-case scenario that limited an NFS copy to a maximum of 70MB/sec, and with a lot of small files the rate dropped below 50MB/sec. The idea to remove this issue was to bond the two gigabit Ethernet ports on both NAS1 and NAS2 to get double the connection speed.
  4. After trying several solutions, the following is the best setup so far. On both servers, create a bond0 interface bonding both Ethernet ports of that server:
  • a. make sure the bond uses round-robin bonding: "mode=0 miimon=100"
  • b. make sure it uses jumbo frames: MTU=9000

The configuration files on NAS1 (similar on NAS2)

[root@nas1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1
TYPE=Ethernet
BOOTPROTO=none
NAME=em1
UUID=bc2f1e54-9d8a-4e23-8a19-5af83d3b9aa2
ONBOOT=no
HWADDR=A0:1D:48:C7:9C:24
SLAVE=yes
MASTER=bond0

[root@nas1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em2
TYPE=Ethernet
BOOTPROTO=none
NAME=em2
UUID=62d398e2-40f5-4b03-89a2-e6f647ecef5a
ONBOOT=no
HWADDR=A0:1D:48:C7:9C:25
SLAVE=yes
MASTER=bond0

[root@nas1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=0 miimon=100"
BOOTPROTO=none
IPADDR=192.168.1.103
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
MTU=9000
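
Once the three files are in place, the bond can be brought up and checked. This is just a quick sketch, assuming the ifcfg files are picked up by the legacy network service (NetworkManager also reads them on Fedora); the bond status file is what confirms that round robin is active and that both ports are enslaved:

[root@nas1 ~]# systemctl restart network
[root@nas1 ~]# cat /proc/net/bonding/bond0

The second command should report "Bonding Mode: load balancing (round-robin)" and list both em1 and em2 as slave interfaces with MII Status up.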

On the switch:

  • do not configure any LACP or static trunking on the server ports
  • make sure jumbo frame support is enabled
  • create two VLANs (21 and 22)
  • put NAS1 port1 and NAS2 port1 in VLAN 21
  • put NAS1 port2 and NAS2 port2 in VLAN 22
  • no VLAN tagging needs to be done at the OS level on NAS1 or NAS2
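
A quick way to check that jumbo frames really pass through the switch end to end (not part of the setup itself, just a sanity check) is to send a non-fragmentable packet that fills a 9000-byte frame; 8972 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header add up to exactly 9000:

nas1# ping -M do -s 8972 -c 3 nas2

If any hop is still limited to 1500-byte frames, the ping fails with a "message too long" error instead of getting replies.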

It is very important to note that:

  1. LACP bonding will never exceed the speed of one port for a single server-to-client connection, so it is a no-go.
  2. Jumbo frames are needed to reach higher speeds; otherwise you end up with a lot of re-transmits that lower the transfer rate.
  3. By making a VLAN for each pair of ports there is only a "one MAC" to "one port" relation in each VLAN, so the switch is not confused about which port to send the traffic to. With simple round-robin bonding but without the VLANs, even when NAS2 was sending data on both ports, the switch ended up delivering the packets to only one of NAS1's ports, limiting the transfer rate to the speed of a single port.
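
An easy way to see whether the round-robin bond really spreads the traffic over both links (a small verification sketch, not part of the original steps) is to watch the per-interface counters on NAS1 while a big NFS copy is running:

[root@nas1 ~]# ip -s link show em1
[root@nas1 ~]# ip -s link show em2

The RX byte counters of both interfaces should grow at roughly the same rate; if only one of them moves, the switch is still delivering everything to a single port.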

In the end, with this setup I am able to get a 115MB/sec transfer rate from the NFS directory exported by NAS2, a rate higher than the 80MB/sec required by the tape drive.

The 115MB/sec is very close to the 125MB/sec read speed of the RAID1 on NAS2, and I think that is the bottleneck now. The highest transfer rate measured by iperf between the NAS1 and NAS2 servers is now 1.48Gbps, so a maximum of roughly 185MB/sec. I think I am hitting another limitation here due to the memory size of NAS2 (4GB, compared to the 16GB of NAS1).

To test the network speed without the interference of disk resources:

nas1# iperf -s

nas2# iperf -c nas1 -m -P 10
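
Here -P 10 runs ten parallel streams and -m prints the TCP MSS, which is also a quick way to confirm that the jumbo-frame MTU is actually used on the path (the reported MSS should be close to 9000 rather than around 1448).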

To test the NFS transfer speed:

nas1# rsync -ah /media/storage-nas2/test.file /tmp/test.file

where /media/storage-nas2/ is the directory on NAS1 where the storage exported by NAS2 is NFS-mounted.
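
For reference, a minimal sketch of the export and mount involved (the /storage path and the options are illustrative, not copied verbatim from my servers):

On NAS2, in /etc/exports:

/storage 192.168.1.0/24(rw,sync,no_subtree_check)

On NAS1:

nas1# mount -t nfs -o rsize=1048576,wsize=1048576 nas2:/storage /media/storage-nas2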

To test the disk write speed without using the disk cache:

nas1# dd if=/dev/zero of=/root/tmp.tmp oflag=dsync bs=1M count=1024
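
The dd above measures write speed; a corresponding read test that avoids the page cache can be done either with hdparm against the device node (here /dev/sda is only an example and has to match the actual disk) or with dd in direct I/O mode on the file written above:

nas1# hdparm -t /dev/sda

nas1# dd if=/root/tmp.tmp of=/dev/null iflag=direct bs=1M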

I hope to get a better rate after I add another LSI HBA to the NAS2 server as well (I am waiting to receive the HBA from Amazon). I expect to end up with at least a 135MB/sec real transfer rate for transfers from the NFS directory of NAS2, enough to accommodate even LTO-5 tape drives over NFS.

The speed will be further increased if:

1. I add more RAM to NAS2 to get past the 4GB limitation

2. I also move the SSD of NAS2 to port 0 or 1 of the B120i HBA instead of the ODD port.
