The Z file system, developed by Sun™, is a new technology designed to use a pooled storage method. This means that space is only used as it is needed for data storage. It has also been designed for maximum data integrity, supporting data snapshots, multiple copies, and data checksums. A new data replication model, known as RAID-Z, has been added. The RAID-Z model is similar to RAID5 but is designed to prevent data write corruption.
The ZFS subsystem uses a significant amount of system resources, so some tuning may be required to provide maximum efficiency during everyday use. As an experimental feature of FreeBSD, this may change in the near future; however, at this time, the following steps are recommended.
The total system memory should be at least one gigabyte, with two gigabytes or more recommended. In all of the examples here, the system has one gigabyte of memory with several other tuning mechanisms in place.
Some people have had luck using less than one gigabyte of memory, but with such a limited amount of physical memory, it is very likely that FreeBSD will panic due to memory exhaustion when the system is under heavy load.
It is recommended that unused drivers and options be removed from the kernel configuration file. Since most devices are available as modules, they may simply be loaded using the /boot/loader.conf file.
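For example, a driver left out of the custom kernel may instead be loaded at boot time with an entry in /boot/loader.conf. The driver name below is only an illustration; substitute the module required by the local hardware:

if_em_load="YES"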
Users of the i386™ architecture should add the following option to their kernel configuration file, rebuild their kernel, and reboot:
options KVA_PAGES=512
This option will expand the kernel address space, thus allowing the vm.kvm_size tunable to be pushed beyond the currently imposed limit of 1 GB (2 GB for PAE). To find the most suitable value for this option, divide the desired address space in megabytes by four (4). In this case, it is 512 for 2 GB.
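To make the arithmetic explicit, the value above follows from the desired 2 GB address space:

options KVA_PAGES=512    # 2 GB = 2048 MB; 2048 / 4 = 512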
The kmem address space should be increased on all FreeBSD architectures. On the test system with one gigabyte of physical memory, success was achieved with the following options, which should be placed in /boot/loader.conf before restarting the system:
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
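After the reboot, the new kmem size may be confirmed with sysctl(8); the following invocation is one way to check it:

# sysctl vm.kmem_size vm.kmem_size_max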
For a more detailed list of recommendations for ZFS-related tuning, see http://wiki.freebsd.org/ZFSTuningGuide.
There is a start-up mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, issue the following commands:
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# /etc/rc.d/zfs start
The remainder of this document assumes that three SCSI disks are available, with the device names da0, da1, and da2. Users of IDE hardware may substitute ad device names for the SCSI ones.
To create a simple, non-redundant ZFS pool using a single disk device, use the zpool command:
# zpool create example /dev/da0
To view the new pool, review the output of df(1):
# df
Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a    2026030  235230  1628718    13%    /
devfs                1       1        0   100%    /dev
/dev/ad0s1d   54098308 1032846 48737598     2%    /usr
example       17547136       0 17547136     0%    /example
This output clearly shows that the example pool has not only been created but mounted as well. It is also accessible just like a normal file system: files may be created on it, and users are able to browse it, as in the following example:
# cd /example
# ls
# touch testfile
# ls -al
total 4
drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
-rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile
Unfortunately this pool is not taking advantage of any ZFS features. Create a file system on this pool, and enable compression on it:
# zfs create example/compressed
# zfs set compression=gzip example/compressed
The example/compressed file system is now a ZFS compressed file system. Try copying some large files to /example/compressed to see the effect.
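As an optional check, the space saved by compression may be examined with the compressratio property, assuming some files have already been copied onto the file system:

# zfs get compressratio example/compressed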
The compression may now be disabled with:
# zfs set compression=off example/compressed
To unmount the file system, issue the following command and then verify by using the df utility:
# zfs umount example/compressed
# df
Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a    2026030  235232  1628716    13%    /
devfs                1       1        0   100%    /dev
/dev/ad0s1d   54098308 1032864 48737580     2%    /usr
example       17547008       0 17547008     0%    /example
Re-mount the file system to make it accessible again, and verify with df:
# zfs mount example/compressed
# df
Filesystem          1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a           2026030  235234  1628714    13%    /
devfs                       1       1        0   100%    /dev
/dev/ad0s1d          54098308 1032864 48737580     2%    /usr
example              17547008       0 17547008     0%    /example
example/compressed   17547008       0 17547008     0%    /example/compressed
The pool and file system may also be observed by viewing the output from mount:
# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
example on /example (zfs, local)
example/data on /example/data (zfs, local)
example/compressed on /example/compressed (zfs, local)
As observed, ZFS file systems, once created, may be used like ordinary file systems; however, many other features are also available. In the following example, a new file system, data, is created. Important files will be stored here, so the file system is set to keep two copies of each data block:
# zfs create example/data
# zfs set copies=2 example/data
It is now possible to see the data and space utilization by issuing df again:
# df
Filesystem          1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a           2026030  235234  1628714    13%    /
devfs                       1       1        0   100%    /dev
/dev/ad0s1d          54098308 1032864 48737580     2%    /usr
example              17547008       0 17547008     0%    /example
example/compressed   17547008       0 17547008     0%    /example/compressed
example/data         17547008       0 17547008     0%    /example/data
Notice that each file system on the pool has the same amount of available space. This is the reason for using df throughout these examples: to show that the file systems use only as much space as they need and all draw from the same pool. ZFS does away with concepts such as volumes and partitions, and allows several file systems to occupy the same pool. Destroy the file systems and then destroy the pool, as they are no longer needed:
# zfs destroy example/compressed
# zfs destroy example/data
# zpool destroy example
Disks inevitably go bad and fail. When a disk fails, the data stored on it is lost. One method of avoiding data loss due to a failed hard disk is to implement RAID. ZFS supports this feature in its pool design, which is covered in the next section.
As previously noted, this section assumes that three SCSI disks exist as devices da0, da1, and da2 (or ad0 and beyond if IDE disks are being used). To create a RAID-Z pool, issue the following command:
# zpool create storage raidz da0 da1 da2
Note: Sun recommends that the number of devices used in a RAID-Z configuration be between three and nine. If your needs call for a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If you only have two disks and still require redundancy, consider using a ZFS mirror instead. See the zpool(8) manual page for more details.
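For comparison only, and not as part of this example, a redundant two-disk mirror could be created with a command along the following lines (the pool name mirrorpool is just a placeholder):

# zpool create mirrorpool mirror da0 da1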
The storage zpool should now have been created. This may be verified by using the mount(8) and df(1) commands as before. More disk devices could have been allocated by adding them to the end of the list in the command above. Make a new file system in the pool, called home, where user files will eventually be placed:
# zfs create storage/home
It is now possible to enable compression and keep extra copies of the user's home directories and files. This may be accomplished just as before using the following commands:
# zfs set copies=2 storage/home
# zfs set compression=gzip storage/home
To make this the new home directory for users, copy the user data to this directory, and create the appropriate symbolic links:
# cp -rp /home/* /storage/home
# rm -rf /home /usr/home
# ln -s /storage/home /home
# ln -s /storage/home /usr/home
Users should now have their data stored on the freshly created /storage/home file system. Test by adding a new user and logging in as that user.
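A quick way to test this, assuming a throwaway account name such as testuser, is to create the user with pw(8) and then log in as that user:

# pw useradd testuser -m -s /bin/sh
# su - testuser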
Try creating a snapshot which may be rolled back later:
# zfs snapshot storage/home@08-30-08
Note that a snapshot captures only a real file system, not a home directory or an individual file. The @ character is the delimiter between the file system or volume name and the snapshot name. When a user's home directory is damaged, restore it with:
# zfs rollback storage/home@08-30-08
To get a list of all available snapshots, run ls in the file system's .zfs/snapshot directory. For example, to see the previously taken snapshot, run the following command:
# ls /storage/home/.zfs/snapshot
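Alternatively, snapshots may be listed with the zfs command itself; assuming a reasonably recent ZFS version, one option is:

# zfs list -t snapshot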
It is possible to write a script to perform monthly snapshots on user data; however, over time, snapshots may consume a great deal of disk space. The previous snapshot may be removed using the following command:
# zfs destroy storage/home@08-30-08
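A monthly snapshot job, as mentioned above, can be as simple as a one-line sh(1) script driven by cron(8). The script below is only a sketch; the date-based naming scheme and the target file system are assumptions to adapt as needed:

#!/bin/sh
# Take a dated snapshot of the users' home file system.
/sbin/zfs snapshot storage/home@$(date +%Y-%m)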
After all of this testing, there is no reason to keep /storage/home around in its present state. Make it the real /home file system:
# zfs set mountpoint=/home storage/home
Issuing the df and mount commands will show that the system now treats our file system as the real /home:
# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
storage on /storage (zfs, local)
storage/home on /home (zfs, local)
# df
Filesystem    1K-blocks    Used    Avail Capacity  Mounted on
/dev/ad0s1a     2026030  235240  1628708    13%    /
devfs                 1       1        0   100%    /dev
/dev/ad0s1d    54098308 1032826 48737618     2%    /usr
storage        26320512       0 26320512     0%    /storage
storage/home   26320512       0 26320512     0%    /home
This completes the RAID-Z configuration. To get status updates about the file systems created during the nightly periodic(8) runs, issue the following command:
# echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf
Every software RAID has a method of monitoring its state, and ZFS is no exception. The status of RAID-Z devices may be viewed with the following command:
# zpool status -x
If all pools are healthy and everything is normal, the following message will be returned:
all pools are healthy
If there is an issue, for example a disk that has gone offline, the pool state will be returned and will look similar to:
  pool: storage
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            da0     ONLINE       0     0     0
            da1     OFFLINE      0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors
This indicates that the device was taken offline by the administrator, which is true for this particular example. To take the disk offline, the following command was used:
# zpool offline storage da1
It is now possible to replace da1 after the system has been powered down. When the system is back online, the following command may be issued to replace the disk:
# zpool replace storage da1
From here, the status may be checked again, this time without the -x flag, to get state information:
# zpool status storage
  pool: storage
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors
As shown from this example, everything appears to be normal.
As previously mentioned, ZFS uses checksums to verify the integrity of stored data. They are enabled automatically upon creation of file systems and may be disabled using the following command:
# zfs set checksum=off storage/home
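The current value of the property may be confirmed at any time; one way, using the storage/home file system from these examples, is:

# zfs get checksum storage/home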
Disabling checksums is not a wise idea, however, as they take very little storage space and are far more useful when enabled. There also appear to be no noticeable costs to having them enabled. While enabled, it is possible to have ZFS check data integrity using checksum verification. This process is known as “scrubbing.” To verify the data integrity of the storage pool, issue the following command:
# zpool scrub storage
This process may take considerable time depending on the amount of data stored. It is also very I/O intensive, so much so that only one of these operations may be run at any given time. After the scrub has completed, the status is updated and may be viewed by issuing a status request:
# zpool status storage
  pool: storage
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Aug 30 19:57:37 2008
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors
The completion time of the scrub is shown in this example. This feature helps to ensure data integrity over a long period of time.
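To run scrubs on a regular schedule, an entry along the following lines could be added to /etc/crontab; the weekly timing here is only a suggestion:

0 3 * * 0 root /sbin/zpool scrub storage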
There are many more options for the Z file system; see the zfs(8) and zpool(8) manual pages.
This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/.
For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
For questions about this documentation, e-mail <doc@FreeBSD.org>.