ZFS is a file system developed by Sun for Solaris, specialized in datacenter storage.
ZFS offers many features that put it ahead of almost all of its competitors:
Uses no partitions, but a logical system of pools
A pool can combine several disks and provide RAID without any additional software layer
Error detection and correction
Hot replacement of failed disks with hot spares
Monitoring
Ability to create “slices” in a pool, for example one for / and one for /usr (see the sketch after this list)
No size needs to be specified for these slices
Optional compression on a per-slice basis
Attributes on slices, for example forbidding the execution of programs
Highly configurable snapshots
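To give an idea of what this looks like in practice, here is a minimal sketch of creating a mirrored pool and a few slices; the pool name tank is an assumption, and the disk names (ad4, ad6) depend on your machine:
# zpool create tank mirror ad4 ad6
# zfs create tank/usr
# zfs set compression=on tank/usr
# zfs create tank/tmp
# zfs set exec=off tank/tmp
# zfs snapshot tank/usr@before-upgrade
No size is ever given: each slice grows and shrinks freely within the pool, and a quota (zfs set quota=10G tank/usr) is optional rather than mandatory.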
ZFS is thus the ideal system for resizing and moving data (even between machines), for data safety (through redundancy and checksum algorithms), and for continuity of service. Once you get the hang of administering it, you are literally freed from the constraints of old partitioning systems such as NTFS or EXT4.
Note: while ZFS was originally developed for OpenSolaris/Solaris, it is available on FreeBSD, NetBSD, and partially on Linux. Its license (CDDL) is incompatible with the GPLv2, which means it will never be integrated into the Linux kernel. Nevertheless, there are third-party ports and a FUSE implementation. Oracle is developing btrfs specifically for Linux, which should provide functionality similar to ZFS.
Test hardware
Currently I have a file server fitted with a single 1 TB hard drive. To keep my data safe, I got myself a second identical drive and set out to build a mirror (RAID1). My motherboard does not support RAID and I did not really want to buy a card for it, so I turned to software RAID.
One solution would be to install Debian, since its installer can set up software RAID. But being in a “FreeBSD” phase and having heard a lot about ZFS, I wanted to give it a try. Server configuration:
An Intel D945GSEJT motherboard with a single-core 1.6 GHz Atom (with Hyper-Threading)
One 512 MB stick of DDR2 PC5300 memory
Two Samsung Spinpoint F3 1 TB, 7200 RPM drives
An ATX power supply connected with flying leads
A geek's home never fails to impress.
The FreeBSD version used is the latest stable one, namely 8.1-RELEASE in its i386 flavor, since this Atom does not support 64-bit.
Installation
The procedure to install FreeBSD on a ZFS file system is available here. The painful part is having to download the DVD version, which weighs 2 GiB even though the final installed system takes less than 200 MiB. But the DVD is required to enjoy both the package installation and the “Livefs” mode, which provides a console and all the Unix tools needed to work. The wiki describes a procedure that builds a system divided into slices: /var/log, /var/mail, etc., some of them compressed. If the procedure is followed carefully, without rushing and reading calmly, it should work. If, like me, you skim over the instructions, you will end up with a system that does not boot.
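For the record, the boot-critical part of that procedure boils down to a handful of settings; here is a sketch based on the FreeBSD 8.x root-on-ZFS guides (the pool name zroot and the disk ad4 are assumptions), and skipping one of these is typically what leaves the machine unbootable:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad4
# zpool set bootfs=zroot zroot
Then in /boot/loader.conf:
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"
and in /etc/rc.conf:
zfs_enable="YES"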
Discovery and testing
Once the system finally boots, we can have some fun running a few tests. The command:
# zpool status
returns the state of your pool. You can check that there are no errors and that the drives are fully functional. Here, for example, is a status taken from the Sun documentation:
# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  UNAVAIL      0     0     0  cannot open

errors: No known data errors
Here we can see that operation is not optimal, and reading the output we realize that the disk “c1t1d0” is missing. I even unplugged one of my drives (cold, with the machine off) and noted that the system still boots, and that the error is reported.
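In such a degraded state, getting the mirror back to health only takes a command or two. A quick sketch, reusing the device names from the Sun example above:
# zpool online tank c1t1d0
puts a disk that was merely disconnected back in service, while
# zpool replace tank c1t1d0 c2t0d0
rebuilds the mirror onto a fresh disk (the name c2t0d0 is an assumption); a new
# zpool status
then shows the resilvering progress.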
Transfer speed
My server is used to store files, so it must be able to transfer at full speed to my desktop computer over a 1000 Mbps (Gigabit) network. A lightweight and simple transfer protocol is FTP, which I use for its speed. On FreeBSD you can use the base system's FTP server (ftpd) or install vsftpd via ports or packages.
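Setting either one up is quick. A sketch, under the usual FreeBSD 8.x conventions (the rc.conf knob and the package name are the stock ones, but check rc.conf(5) on your version):
To use the base ftpd, add to /etc/rc.conf:
ftpd_enable="YES"
then start it with:
# /etc/rc.d/ftpd start
Or install vsftpd from the package repository:
# pkg_add -r vsftpd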
The big surprise here is that transfer rates are really very low, and regularly interrupted by micro-stalls. I cannot get past 12 MB/s, while the expected rate on a Gigabit network is around 60 MB/s (limited by the disk write speed). Since 12 MB/s is the speed of a 100 Mbps network, I wondered whether it was a network card problem. Unfortunately an ifconfig confirms that the link is correctly negotiated at 1000, just like my switch, whose green LED indicates the same thing. So it is not a network card problem.
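For the record, this is the kind of check I mean; the interface name re0 is only an example:
# ifconfig re0 | grep media
media: Ethernet autoselect (1000baseT <full-duplex>)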
After various tweaks to the ZFS setup, guided by the documentation, I got no conclusive results and even ended up breaking my... OS, which would no longer boot. I reinstalled FreeBSD, but this time on a single disk and with the default options (UFS file system, no ZFS). And there, no problem: maximum transfer rate. So my problems did indeed come from ZFS.
Explanation: it would seem that ZFS requires a relatively powerful machine, particularly because of its use of RAM as a cache. The Solaris documentation thus recommends running it in 64-bit mode with at least 1 GB of RAM. The various people who reported running a NAS on ZFS all had at least a dual-core processor backed by several GB of memory.
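The standard workaround on small i386 machines at the time was to rein in the ARC via /boot/loader.conf; a sketch of the usual knobs from the FreeBSD ZFS tuning guide (the values are illustrative, to be adapted to the machine):
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="40M"
vfs.zfs.prefetch_disable="1"
Even with these, 512 MB of RAM on a 32-bit kernel leaves very little headroom for the ARC, which is consistent with the stalls I observed.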
Are the performance issues due to my hardware being too “primitive” to keep up properly? Possible, but not 100% certain, because it would seem that the FreeBSD implementation of ZFS suffers a significant performance penalty compared to OpenSolaris. Benchmarks published on this page compare FreeNAS (= FreeBSD), OpenSolaris and Nexenta. I'll let you look at the graphs, but the difference in operations per second can vary by a factor of 10.
Conclusion
The purpose of my tests was to find out whether FreeBSD + ZFS was a real step up from UFS, or from EXT on Linux. The answer is yes, without a doubt: much more flexibility in administration. However, to the question “will I keep it?” the answer is no, since there is a serious performance problem.
What next?
Tests with OpenSolaris (even though it is no longer supported) and Nexenta. If neither gives satisfaction, it will be back to Debian GNU/Linux with software RAID via dmraid.