Exploring Software-Defined Storage with GlusterFS on the ODROID-HC1: Part 2 - Client Performance


In my previous article, I described how to set up a Distributed Replicated GlusterFS Volume as well as a simple Replicated Volume, and how to use the GlusterFS Native Client to access the volumes. In this part, I am going to show you how to set up NFS and Samba clients to access the GlusterFS volume, and compare the performance of the different clients.

NFS Client

An NFS server is automatically set up when we install GlusterFS and create a Distributed Replicated volume. However, if your installation is like mine, when you execute the following command, you will find that the NFS servers are all offline, as shown in Figure 1.

$ gluster volume status
Figure 1 - NFS Offline
This is most likely because rpcbind is not running, as is evident in /var/log/glusterfs/nfs.log, shown in Figure 2.
Figure 2 - RPC Error
Executing the following commands on all GlusterFS servers will start the NFS servers, where gvolume0 is the GlusterFS Distributed Replicated volume I created in Part 1:
$ sudo /etc/init.d/rpcbind start
$ sudo gluster volume set gvolume0 nfs.disable off
$ sudo gluster volume stop gvolume0
$ sudo gluster volume start gvolume0
$ sudo gluster volume status gvolume0
Figure 3 - NFS Online
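With the volume back online, you can verify the NFS export from any machine using showmount, which comes with the nfs-common package; it should list gvolume0:
$ showmount -e xu4-gluster0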
I then set up an NFS client on one of the machines in my ODROID-MC1 cluster, namely xu4-master, where xu4-gluster0 is one of the GlusterFS servers:
$ sudo apt-get update
$ sudo apt-get install nfs-common
$ sudo mkdir /mnt/nfs
$ sudo mount -t nfs -o vers=3,mountproto=tcp xu4-gluster0:/gvolume0 /mnt/nfs
Now, you can access the GlusterFS volume on xu4-master. For example, the following command lists the 50 files we created in Part 1:
$ ls /mnt/nfs/testdir
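To confirm that the directory is really being served over NFS and not from a local file system, check the mount type with df:
$ df -hT /mnt/nfs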

Samba Client

The simplest way to access a GlusterFS volume via Samba is to export the Gluster mount point as a Samba share and mount it using the CIFS protocol. Of course, you need to install the Samba and CIFS packages first:

$ sudo apt-get update
$ sudo apt-get install samba cifs-utils
Then, set a Samba password for the odroid user:
$ sudo smbpasswd -a odroid
Edit the /etc/samba/smb.conf file with the following settings:
[global]
 security = user
 #guest account = nobody

[gvolume0]
 guest ok = yes
 path = /mnt/gfs
 read only = no
 valid users = odroid
 admin users = root
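Before restarting Samba, it is worth validating the edited configuration; testparm ships with Samba and will flag any syntax errors:
$ testparm -s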
Next, create a mount point, restart Samba, and mount the share:
$ sudo mkdir /mnt/samba
$ sudo /etc/init.d/samba restart
$ sudo mount -t cifs -o user=odroid,password=odroid //192.168.1.80/gvolume0 /mnt/samba
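If the mount fails, you can verify that the share is actually being exported with smbclient, part of the standard Samba tooling (on Ubuntu it may live in a separate smbclient package):
$ smbclient -L //192.168.1.80 -U odroid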
To export the NFS mount instead of the Gluster mount, simply unmount the Samba share, modify the /etc/samba/smb.conf file as follows, then restart Samba and mount the file system again:
[global]
 security = user
 #guest account = nobody

[gvolume0]
 guest ok = yes
 path = /mnt/nfs
 read only = no
 valid users = odroid
Restart Samba and mount the share:
$ sudo /etc/init.d/samba restart
$ sudo mount -t cifs -o user=odroid,password=odroid //192.168.1.80/gvolume0 /mnt/samba

Client Performance

To compare the performance of the various clients, we need a baseline: the native file system performance, i.e., a partition mounted locally on a server. This gives us five different configurations to compare:

  1. Native File System - the performance test runs on a server where a local disk partition is mounted
  2. GlusterFS Native Client - mounted on a machine which is not a GlusterFS server using the GlusterFS Native Client
  3. Gluster NFS client - mounted on a machine which is not a GlusterFS server using the GlusterFS NFS Client
  4. Samba client based on GlusterFS Native Client - CIFS share of the GlusterFS Native Client file system
  5. Samba client based on GlusterFS NFS Client - CIFS share of the GlusterFS NFS Client file system

The file system performance benchmark tool used is iozone. It is not in the default Ubuntu software repository, but can be downloaded from http://bit.ly/2BnxxfA. It generates and measures a variety of file operations. We use the following four benchmarks to compare client performance: single-thread write, 8-thread write, single-thread read, and 8-thread read.
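For reference, building iozone from source takes only a minute on the XU4. The following is a sketch; the version number is just an example, so check the download page for the current release:
$ wget http://www.iozone.org/src/current/iozone3_487.tgz
$ tar xzf iozone3_487.tgz
$ cd iozone3_487/src/current
$ make linux-arm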

Single-Thread Write

The iozone command used in the test is:

$ iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F path/f0.ioz
The options used are:

 -c   Include close() in the timing
 -e   Include flush (fsync) in the timing (-c and -e are used together to measure the time it takes for data to reach persistent storage)
 -w   Do not unlink temporary files when finished using them
 -i   Test to run: 0=write, 1=read (we only use 0 and 1 in these tests)
 -+n  Save time by skipping the re-read and re-write tests
 -C   Show how much each thread participated in the test
 -r   Data transfer (record) size
 -s   Per-thread file size
 -t   Number of threads
 -F   List of files, one per thread

The command is run for every client. The output of the command is shown in Figure 4, and the result is summarized in the bar chart in Figure 5.

Figure 4 - Native W1

Figure 5 - 1 Thread Write
Native is the fastest, followed by the NFS and GlusterFS native clients. As expected, the Samba clients are the slowest in our configuration, because they sit on top of the underlying GlusterFS native and NFS clients.

8-Thread Write

The command used is:

$ iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 8 -F path/f{0,1,2,3,4,5,6,7}.ioz
The output is shown in Figure 6, and the results are charted in Figure 7.
Figure 6 - Native W8

Figure 7 - 8 Thread Write
In the multithreaded write benchmark, Native is again the fastest, followed by the NFS client and the Samba client using NFS. Note that the Samba client using NFS is faster than the NFS client itself, for which I do not have a clear explanation.

Single-Thread Read

The cache is cleared before the read benchmark:

$ sync
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
$ iozone -w -c -e -i 1 -+n -C -r 64k -s 1g -t 1 -F path/f0.ioz
The results are charted in Figure 8.
Figure 8 - 1 Thread Read
The result is consistent: Native is the fastest, followed by the NFS-based clients and then the clients based on the GlusterFS native client. Again, I have no explanation as to why the Samba client using NFS is faster than the NFS client itself.

8-thread Read

Finally, run the multithread read benchmark using the following commands:

$ sync
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
$ iozone -w -c -e -i 1 -+n -C -r 64k -s 1g -t 8 -F path/f{0,1,2,3,4,5,6,7}.ioz
The result is shown in Figure 9.
Figure 9 - 8 Thread Read
The picture is quite different here: the GlusterFS native client is the fastest, likely due to distribution. Files are spread across different servers, which adds parallelism to reads, whereas for writes the same distribution adds the overhead of writing data to multiple servers.

Auto-Failover and high availability clients

Of all the clients tested, only the GlusterFS Native Client provides auto-failover and high availability. This means that if the GlusterFS server specified in the mount command fails, the client automatically switches over to another Gluster server in our Replicated or Distributed Replicated Volume.
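Recent versions of the native client also let you name fallback volfile servers explicitly at mount time (older releases spell the option backupvolfile-server). As a sketch, assuming the other two servers are named xu4-gluster1 and xu4-gluster2:
$ sudo mount -t glusterfs -o backup-volfile-servers=xu4-gluster1:xu4-gluster2 xu4-gluster0:/gvolume0 /mnt/gfs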

The NFS and Samba clients used do not have such capabilities. If you want that capability for NFS, you have to disable the GlusterFS NFS server and install the NFS-Ganesha server (http://bit.ly/2BuH9Ek).
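For reference, NFS-Ganesha exports a Gluster volume through its GLUSTER FSAL. The snippet below is a minimal sketch based on the NFS-Ganesha documentation, not a configuration tested in this article:
EXPORT {
 Export_Id = 1;
 Path = "/gvolume0";
 Pseudo = "/gvolume0";
 Access_Type = RW;
 FSAL {
  Name = GLUSTER;
  Hostname = "xu4-gluster0";
  Volume = "gvolume0";
 }
}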

Similarly, for Samba/CIFS, you have to install the Samba VFS plugin for GlusterFS from http://bit.ly/2i7gMjI. In addition to providing high availability and auto-failover, it uses libgfapi to avoid the performance penalty of crossing between user and kernel mode, which our Samba client based on the GlusterFS Native Client incurs.

Note that the Gluster plugin is absent from the Ubuntu package samba-vfs-modules. I encourage you to look into these options if you want high availability and auto-failover for your NFS and Samba clients.
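With a Samba build that includes the plugin, the share definition points at the volume itself instead of a FUSE mount point. This is a sketch based on the vfs_glusterfs documentation, with our volume and server names filled in:
[gvolume0]
 vfs objects = glusterfs
 glusterfs:volume = gvolume0
 glusterfs:volfile_server = xu4-gluster0
 path = /
 read only = no
 valid users = odroid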

Conclusion

I have shown you how to set up GlusterFS Replicated and Distributed Replicated Volumes using ODROID-HC1 devices, and how to access them using the GlusterFS Native Client, NFS, and Samba clients. I have also charted their performance for easy comparison, so you now have sufficient information to select the appropriate client. Personally, I think GlusterFS is a good enterprise technology that lends itself to easy home use, and ODROID-HC1 devices are more economical and flexible than off-the-shelf NAS systems. I hope you will share my enthusiasm for using them at home.
