ZFS is a stable, robust, and fault-tolerant file system with built-in RAID-like properties and drive pools. We show you how to create a ZFS drive pool and control access to it.
What is ZFS?

ZFS is an advanced file system that originated at Sun Microsystems for their Solaris operating system. It has been owned by Oracle Corporation since Oracle's 2009 acquisition of Sun Microsystems. However, Sun Microsystems had released an open source version from 2005 onwards, and this was ported to Linux, making it widely available. The open source version of ZFS is managed and maintained by the OpenZFS project.

ZFS is a high-capacity, fault-tolerant file system. ZFS originally stood for Zettabyte File System; nowadays it can store up to 256 zebibytes of data.

ZFS is exceptionally fault-tolerant. It natively combines features that deliver file system pooling, cloning and copying, and RAID-like functionality. Every block of data is checksummed, so ZFS can tell whether data has become corrupted.

We're going to walk through the steps required to install ZFS on Ubuntu 20.04 (Focal Fossa), to set up and use a drive pool, and to control access to the data in the pool.
Installing ZFS

To install ZFS, use this command:
sudo apt-get install zfsutils-linux
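If apt reports that it can't find the package, refreshing the package lists first usually resolves it:

sudo apt-get update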
When the installation is complete, you can check that ZFS is present and correct by using the which and whereis commands.
which zfs
whereis zfs
The which command confirms that zfs is in your command search path. The whereis command shows where the ZFS binaries are located, where its supporting and additional files are located, and that the man page has been installed too.
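As a further check, recent OpenZFS releases (including the packages shipped with Ubuntu 20.04) can report their own version:

zfs version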
ZFS Pool Types

A ZFS pool is created by logically combining different physical hard drives and treating them as though they were a single addressable entity.

There are two ways to do this. If you combine the hard drives as a striped, or RAID 0, pool, you get to use all of the combined capacity of the hard drives. However, there is no redundant storage: if a hard drive fails, it will break the file system and you will lose data.

The preferred, and strongly recommended, method is to create a mirrored, or RAID 1, pool. With this type of pool, your capacity is limited to the size of the smallest drive in the pool but, even with the loss of a hard drive, the file system remains operational.

You can replace a failed drive with no loss of data and no downtime. With a pool of three drives, you could withstand the failure of two of the physical drives and still have an operational system with intact data.

The generic form of the command to create a striped pool is:
sudo zpool create pool-name drive-1 drive-2 drive-3 ...
To create a mirrored pool, we add the word "mirror" to the command:

sudo zpool create pool-name mirror drive-1 drive-2 drive-3 ...
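If you want to sanity-check a layout before committing to it, zpool create accepts a -n (dry run) option that prints the pool configuration it would build without actually creating anything:

sudo zpool create -n pool-name mirror drive-1 drive-2 drive-3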
Creating a Mirrored Pool

Before we can tell ZFS which hard drives to include in the pool, we need to identify them. To do so, use this command:
sudo blkid | grep /dev/sd
The blkid (print block device attributes) command lists the block devices in your system, and we're piping its output through grep to filter out everything except the /dev/sd devices. These are the hard drives; we can see the four hard drives fitted to this computer.
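If you prefer a tree-style overview of the drives and their partitions, the lsblk command provides much the same information:

lsblk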
Linux identifies drives by letter and partitions by number. We're going to use hard drives two, three, and four, so we will be using /dev/sdb, /dev/sdc, and /dev/sdd.
Here is the command to create the pool. Note that we are including the "mirror" parameter to create a RAID 1 pool, and that we're naming our pool "itenterpriser." We'll be able to refer to the pool by that name later.

sudo zpool create itenterpriser mirror /dev/sdb /dev/sdc /dev/sdd
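One caveat worth noting: /dev/sdX names are not guaranteed to stay the same across reboots. For a pool you intend to keep, many administrators prefer the persistent device names listed under /dev/disk/by-id, which can be passed to zpool create in exactly the same way:

ls -l /dev/disk/by-id/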
Verifying the Pool

After running zpool create, you're quietly returned to the command prompt. Did anything actually happen? We can check the status of all of our ZFS pools using the zpool status command.
sudo zpool status
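For the three-drive mirror we just created, the output should look something like this (the exact details will vary):

  pool: itenterpriser
 state: ONLINE
config:

        NAME               STATE     READ WRITE CKSUM
        itenterpriser      ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            sdb            ONLINE       0     0     0
            sdc            ONLINE       0     0     0
            sdd            ONLINE       0     0     0

errors: No known data errors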
We have a single pool configured on this computer, and it is called "itenterpriser." Let's use the df (disk free) command and pipe its output through grep (search using regular expressions) to locate entries with "itenterpriser" in them. The -h (human-readable) option tells df to show capacities in user-friendly units.
df -h | grep itenterpriser
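Assuming, purely for illustration, three 1 TB drives, you'd see a single line resembling this:

itenterpriser   899G  128K  899G   1% /itenterpriser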
This tells us two things. Our "itenterpriser" pool is mounted on "/itenterpriser" and, as expected, although there are three hard drives in the pool, it has the capacity of just one of them. We can cd into that location just as though it were any other directory in the computer's file system.
cd /itenterpriser/
Defining the Mount Point

That's working. Great. Now let's destroy it.

By default, the pool is mounted on a mount point in the root of the file system, and the mount point is named the same as the pool. Usually, you'd choose where to mount your pool. To remove a pool, we use the zpool destroy command and the name of the pool:
sudo zpool destroy itenterpriser
Again, we're silently returned to the command prompt. We'll recreate our pool, this time using the -m (mount point) option to specify where we'd like the pool to be mounted.
sudo zpool create -m /usr/share/itenterpriser itenterpriser mirror /dev/sdb /dev/sdc /dev/sdd
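Incidentally, you don't have to destroy a pool just to relocate it. The mount point is an ordinary ZFS property that can be changed after the fact; the path below is just an example:

sudo zfs set mountpoint=/mnt/itenterpriser itenterpriser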
Setting User Permissions

Only root is able to store information in the pool. To allow other users to have write access to it, we need to follow a few steps.

We're going to control who can access the pool. We'll create a new user group and set that group as the group-owner of the data location. That means we can add and remove users from the group to grant or revoke access to the data.
We use groupadd (create a new group) to add a user group; we're calling it "ite-pool." We then use the usermod (modify a user account) command to add a user to the group. The -a (append) and -G (groups) options combine to add the new group to the list of groups the user is already in.
sudo groupadd ite-pool
sudo usermod -a -G ite-pool dave

The user must log out and back in before they are seen as a member of the group.
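You can verify that the membership has been recorded without waiting for a new login by querying the user database:

id dave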
We're going to create a directory in the pool and change its group ownership to the "ite-pool" group. We'll then set the group file permissions for that directory to read, write, and execute. The effect of this is to grant those permissions to any users who are in the "ite-pool" group.

We could do this on the root folder of the pool, of course, but we gain flexibility and control by setting the permissions on a new directory. For example, we can create as many directories in the pool as we need and configure different groups of users to have access to each of them. It also means the users don't need read, write, and execute permissions across the entire pool.

To create a directory called "data" in the pool, we type:

sudo mkdir /usr/share/itenterpriser/data/
We'll use chgrp (change group ownership) to set the group owner of the directory to "ite-pool":
sudo chgrp ite-pool /usr/share/itenterpriser/data/
To set the group permissions for the directory, we'll use chmod. The "s" flag sets the setgid bit, which means files and folders created under the directory will inherit its group ownership.
sudo chmod g+rwsx /usr/share/itenterpriser/data/
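A quick way to confirm the changes took effect is to inspect the directory itself; the group column should read "ite-pool" and the group permissions should include "rws":

ls -ld /usr/share/itenterpriser/data/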
What we've achieved is to set the group ownership of the "data" directory to "ite-pool." Members of that group have read, write, and execute permissions in that directory. Earlier, we added our current user "dave" to the "ite-pool" group.

If that user tries to create a file in the root of the pool, they are denied permission:

touch /usr/share/itenterpriser/text.txt
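That attempt fails with an error along these lines:

touch: cannot touch '/usr/share/itenterpriser/text.txt': Permission denied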
If they repeat that command inside the "data" directory, they're granted permission and the file is created:

touch /usr/share/itenterpriser/data/text.txt
Listing the directory confirms the new file is there:

ls /usr/share/itenterpriser/data/

Our permissions are working.
Data is the New Gold

ZFS makes it simple to keep your data as safe and as accessible as possible, and with relative ease. But fault-tolerant file systems don't replace backups. What ZFS does is allow you to ride out drive failures without downtime and without having to resort to restoring backups.

You must maintain your backup regime and schedule regular backups. But with ZFS, you should need to turn to your backups much less frequently.