200TB GlusterFS Server: Using The ODROID-HC2 For Massively Distributed Applications

Over the years, I have upgraded my home storage several times. Like many, I started with a consumer-grade NAS. My first was a Netgear ReadyNAS, then several QNAP devices. About two years ago, I got tired of the limited CPU and memory of the QNAP and devices like it, so I built my own using a Supermicro Xeon D, Proxmox, and FreeNAS. It was great, but adding more drives was a pain, and migrating between RAID-Z levels was basically impossible without lots of extra disks.

The fiasco that was FreeNAS 10 was the final straw. I wanted to be able to add disks in smaller quantities, and I wanted better partial-failure modes, kind of like unRAID, while remaining able to scale to as many disks as I wanted. I also wanted to avoid any single point of failure, such as a host bus adapter, motherboard, or power supply.

I had been experimenting with GlusterFS and Ceph, using roughly forty small virtual machines (VMs) to simulate various configurations and failure modes such as power loss, failed disks, and corrupt files. In the end, GlusterFS was the best at protecting my data, because even if GlusterFS itself were a complete loss, my data would still be mostly recoverable, since it is stored as plain files on an ext4 filesystem on each node. Ceph did a great job too, but it was rather brittle (though recoverable) and difficult to configure.
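That recoverability is easy to verify for yourself. Here is a minimal sketch, assuming the brick layout shown later in this article and an illustrative client mount point of /mnt/gvol0: a file written through the GlusterFS mount shows up as an ordinary file on the ext4 brick of whichever nodes hold its replicas.

$ # Write a file through the GlusterFS mount point...
$ echo "still here" | sudo tee /mnt/gvol0/test.txt
$ # ...then, on a node holding one of its replicas, read the same bytes
$ # straight off the brick's ext4 filesystem -- no GlusterFS required
$ cat /mnt/gfs/brick/gvol0/test.txt
still here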

Enter the ODROID-HC2. With 8 cores, 2 GB of RAM, Gigabit Ethernet, and a SATA port, it offers a great base for massively distributed applications. I grabbed four ODROIDs and started retesting GlusterFS. After proving out my idea, I ordered another 16 nodes and got to work migrating my existing array.

Figure 1 - This is a prime example of a cluster of ODROIDs handling real-world data applications.

In a speed test, I can sustain writes at 8 Gbps and reads at 15 Gbps over the network when operations are sufficiently distributed across the filesystem. Single-file reads are capped at the performance of a single node, so roughly 910 Mbit/s in either direction.
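Numbers like that only appear when many files are in flight at once. A minimal benchmark sketch, again assuming a client mount at /mnt/gvol0 (file size and job count are illustrative): run several writers in parallel with fio, then compare against a single-job run, which should sit near the ~910 Mbit/s single-node ceiling.

$ # 16 parallel writers, each streaming its own 1 GiB file to the volume
$ sudo fio --name=seqwrite --directory=/mnt/gvol0 --rw=write --bs=1M --size=1G --numjobs=16 --group_reporting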

In terms of power consumption, with moderate CPU load and a high disk load (rebalancing the array), and while also running a pfSense box, 3 switches, 2 UniFi access points, a Verizon Fios modem, and several VMs on the Xeon D host, the entire setup uses about 250 watts. Where I live in New Jersey, that works out to about $350 a year in electricity. I'm writing this article because I couldn't find much information about using the ODROID-HC2 at any meaningful scale.
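As a sanity check on that figure: 250 W around the clock is 0.25 kW × 8,760 hours ≈ 2,190 kWh per year, so $350 implies an electricity rate of roughly $0.16/kWh, which is about what New Jersey residential rates run.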

Parts list

In brief, the build consists of 20x ODROID-HC2 single-board computers, each with its own boot microSD card, 12V power supply, and a single SATA hard drive, plus enough Gigabit switch ports to connect them all.

The crazy thing is that there isn't much configuration for GlusterFS. That's what I love about it. It takes literally three commands to get GlusterFS up and running after you get the OS installed and the disks formatted. I'll probably post a write-up on my GitHub at some point in the next few weeks. First, I want to test out Presto (https://prestodb.io/), a distributed SQL engine, on these puppies before doing the write-up.

$ sudo apt-get install glusterfs-server glusterfs-client
$ # Join each node to the trusted pool; peer probe takes one host at a time
$ sudo gluster peer probe gfs01.localdomain
...
$ sudo gluster peer probe gfs20.localdomain
$ # Create and start a replica-2 volume spanning bricks on all twenty nodes
$ sudo gluster volume create gvol0 replica 2 transport tcp gfs01.localdomain:/mnt/gfs/brick/gvol0 ... gfs20.localdomain:/mnt/gfs/brick/gvol0
$ sudo gluster volume start gvol0
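Once the volume is started, any machine with glusterfs-client installed can mount it; a minimal sketch, with an illustrative mount point:

$ sudo mkdir -p /mnt/gvol0
$ # Any node's hostname works here; the client only uses it to fetch the
$ # volume layout, then talks to all the bricks directly
$ sudo mount -t glusterfs gfs01.localdomain:/gvol0 /mnt/gvol0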
For comments, questions, and suggestions, please visit the original article at https://www.reddit.com/r/DataHoarder/comments/8ocjxz/200tb_glusterfs_odroid_hc2_build/.
