Book Review: FreeBSD Mastery: Storage Essentials

Saint Aardvark writes If, like me, you administer FreeBSD systems, you know that (like Linux) there is an embarrassment of riches when it comes to filesystems. GEOM, UFS, soft updates, encryption, disklabels — there is a *lot* going on here. And if, like me, you're coming from the Linux world, your experience won't be directly applicable, and you'll be scaling Mount Learning Curve. Even if you *are* familiar with the BSDs, there is a lot to take in. Where do you start? You start here, with Michael W. Lucas' latest book, FreeBSD Mastery: Storage Essentials. You've heard his name before; he's written Sudo Mastery (which I reviewed previously), along with books on PGP/GnuPG, Cisco Routers and OpenBSD. This book clocks in at 204 pages of goodness, and it's an excellent introduction to managing storage on FreeBSD. From filesystem choice to partition layout to disk encryption, with sidelong glances at ZFS along the way, he does his usual excellent job of laying out the details you need to know without ever veering into the dry or boring. Keep reading for the rest of Saint Aardvark's review.
FreeBSD Mastery: Storage Essentials
author: Michael W. Lucas
pages: 240
publisher: Tilted Windmill Press
rating: 9/10
reviewer: Saint Aardvark
ISBN: 0692343202
summary: FreeBSD Mastery: Storage Essentials takes you on a deep dive into FreeBSD's disk management systems.
Do you need to know about GEOM? It's in here: Lucas takes you from "What *is* GEOM, anyway?" (answer: FreeBSD's system of layers for filesystem management) through "How do I set up RAID 10?" to "Here's how to configure things to solve that weird edge case." Still trying to figure out GUID partitions? I sure was... and then I read Chapter Two. Do you remember disklabels fondly, and wonder whatever happened to them? They're still around, but mainly on embedded systems that still use MBR partitions — so grab this book if you need to deal with them.
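
For a taste of what the GEOM chapters build up to, here is a minimal sketch of assembling a RAID 10 (a stripe over mirrors) by hand; the device names are placeholders for your own disks, and the book covers the flags and failure modes properly:

    # Sketch: GEOM RAID 10 = gstripe across two gmirror pairs
    kldload geom_mirror geom_stripe              # load the GEOM classes if needed
    gmirror label -v gm0 ada1 ada2               # first mirrored pair
    gmirror label -v gm1 ada3 ada4               # second mirrored pair
    gstripe label -v st0 mirror/gm0 mirror/gm1   # stripe across the two mirrors
    newfs -U /dev/stripe/st0                     # UFS with soft updates enabled
    mount /dev/stripe/st0 /data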

The discussion of SMART disk monitoring is one of the best introductions to this subject I've ever read, and should serve *any* sysadmin well, no matter what OS they're dealing with; I plan on keeping it around for reference until we no longer use hard drives. RAID is covered, of course, but so are more complex setups — as well as UFS recovery and repair for when you run into trouble.
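
For reference, the checks that chapter walks through are driven by smartctl from the sysutils/smartmontools port; a minimal sketch, with the device name a placeholder:

    # Sketch: basic SMART checks with smartmontools (pkg install smartmontools)
    smartctl -i /dev/ada0            # identify the drive and confirm SMART is enabled
    smartctl -H /dev/ada0            # the drive's overall health self-assessment
    smartctl -A /dev/ada0            # attributes: watch Reallocated_Sector_Ct and friends
    smartctl -t short /dev/ada0      # kick off a short self-test
    smartctl -l selftest /dev/ada0   # read the self-test log afterwards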

Disk encryption gets three chapters (!) full of details on the two methods in FreeBSD, GBDE and GELI. But just as important, Lucas outlines why disk encryption might *not* be the right choice: recovering data can be difficult or impossible, it might get you unwanted attention from adversaries, and it will *not* protect you against, say, an adversary who can put a keylogger on your laptop. If it still makes sense to encrypt your hard drive, you'll have the knowledge you need to do the job right.
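
To give a flavor of the GELI material, here is a minimal sketch of encrypting a data partition with a passphrase; the partition name is a placeholder, and the book goes into the flags, key files, and tradeoffs in depth:

    # Sketch: a passphrase-protected GELI partition
    geli init -s 4096 /dev/ada0p3    # one-time setup, 4k sectors; prompts for a passphrase
    geli attach /dev/ada0p3          # creates /dev/ada0p3.eli once the passphrase checks out
    newfs -U /dev/ada0p3.eli         # the filesystem lives on the .eli device
    mount /dev/ada0p3.eli /secret
    # ...and when you're done:
    umount /secret && geli detach ada0p3.eli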

I said that this covers *almost* everything you need to know, and the big omission here is ZFS. It shows up, but only occasionally and mostly in contrast to other filesystem choices. For example, there's an excellent discussion of why you might want to use FreeBSD's plain UFS filesystem instead of all-singing, all-dancing ZFS. (Answer: modest CPU or RAM, or a need to do things in ways that don't fit in with ZFS, makes UFS an excellent choice.) I would have loved to see ZFS covered here — but honestly, that would be a book of its own, and I look forward to seeing one from Lucas someday; when that day comes, it will be a great companion to this book, and I'll have Christmas gifts for all my fellow sysadmins.

One big part of the appeal of this book (and Lucas' writing in general) is that he is clear about the tradeoffs that come with picking one solution over another. He shows you where the sharp edges are, and leaves you well-placed to make the final decision yourself. Whether it's GBDE versus GELI for disk encryption, or what might bite you when enabling soft updates journaling, he makes sure you know what you're getting into. He makes recommendations, but always tells you their limits.
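
To make one of those sharp edges concrete: soft updates journaling is toggled with tunefs on an unmounted filesystem, and turning it on costs you UFS snapshots, which dump -L and background fsck depend on. A minimal sketch (device and mount point are placeholders carried over from the RAID 10 example above):

    # Sketch: enabling soft updates journaling (SUJ) on an existing UFS filesystem
    umount /data
    tunefs -j enable /dev/stripe/st0   # requires soft updates; adds a journal
    mount /data                        # assumes an /etc/fstab entry for /data
    # The bite: with SUJ enabled, UFS snapshots are unavailable, so
    # dump -L (live dumps) and background fsck no longer work.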

There's also Lucas' usual mastery of writing; well-written explanations with liberal dollops of geek humor that don't distract from the knowledge he's dropping. He's clear, he's thorough, and he's interesting — and that's an amazing thing to say about a book on filesystems.

Finally, the technical review was done by Poul-Henning Kamp; he's a FreeBSD developer who wrote huge parts of the GEOM and GBDE systems mentioned above. That gives me a lot of warm fuzzies about the accuracy of this book.

If you're a FreeBSD (or Linux, or Unix) sysadmin, then you need this book; it has a *lot* of hard-won knowledge, and will save your butt more than you'll be comfortable admitting.

You can purchase FreeBSD Mastery: Storage Essentials from amazon.com. Slashdot welcomes readers' book reviews (sci-fi included) -- to see your own review here, read the book review guidelines, then visit the submission page. If you'd like to see what books we have available from our review library, please let us know.
Comments:
  • by Anonymous Coward
    I prefer NTFS as a journaling file system. It provides all the functionality and none of the incompatibility that you get with these niche OS's.
    • I prefer NTFS as a journaling file system. It provides all the functionality and none of the incompatibility that you get with these niche OS's.

      Isn't NTFS kind of frozen in time as of 10 years ago at least? As I understand it, devs are mortally afraid to touch it out of fear of breaking it. No new features of any note for how long, a dozen years? And no new optimizations. Kind of zombified, really. But don't believe me; see Microsoft's ADHD ReFS effort. They obviously have ZFS envy but don't quite know what to do about it. Supposedly started with gutting the NTFS code base. Great way to start with a clean sheet, guys. Or not.

      Bottom line, nobody looks

      • Isn't NTFS kind of frozen in time as of 10 years ago at least?

        AFAIK it gets revisions with every major release. Like the ext family, it's backwards compatible, transparently.

        No new features of any note for how long, a dozen years?

        What big features is it missing aside from the checksumming / self-healing stuff that's already in ReFS? Feature-wise it's a pretty decent FS; its biggest flaw AFAICT is bad performance in directories with huge numbers of files.

      • I'm thinking, if there's one piece of software I'd kinda like "frozen in time", it would be a dependable journaling filesystem.
        • NTFS is metadata-only journaling, no data protection. Kinda stone age. But good enough for Windows, I guess; meets or exceeds the braindamage quotient for the system as a whole.

      • No, NTFS has continued to evolve. The original design team were given changing requirements right up until Windows NT shipped. The on-disk format is basically an efficient way of storing large blobs of data and an efficient way of storing small blobs of data (much like BFS, though with a different approach). Everything else is policy that is layered on top.
  • by phoenix_rizzen ( 256998 ) on Monday January 19, 2015 @03:43PM (#48851859)

    I said that this covers *almost* everything you need to know, and the big omission here is ZFS. It shows up, but only occasionally and mostly in contrast to other filesystem choices. For example, there's an excellent discussion of why you might want to use FreeBSD's plain UFS filesystem instead of all-singing, all-dancing ZFS. (Answer: modest CPU or RAM, or a need to do things in ways that don't fit in with ZFS, makes UFS an excellent choice.) I would have loved to see ZFS covered here — but honestly, that would be a book of its own, and I look forward to seeing one from Lucas someday; when that day comes, it will be a great companion to this book, and I'll have Christmas gifts for all my fellow sysadmins.

    That's planned as another book in the Storage Mastery series (with a possible third on networked storage). But whether that book gets written depends on how well this first book is received and what his schedule is like for other books. If the first book doesn't sell enough or garner enough attention, then it will be the last one in that series.

    There's a bunch more detail on Michael's blog [michaelwlucas.com] about this.

  • by dmoen ( 88623 ) on Monday January 19, 2015 @03:46PM (#48851867) Homepage

    Now that ZFS is the default operating system for new installs of FreeBSD 10.x, it sounds like this book documents a lot of hard won technical insights that have been made obsolete by ZFS. Why would I configure RAID 10 for UFS when ZFS provides superior data protection? And so on. It's probably useful for people who have parachuted in and now must maintain a legacy FreeBSD system. It doesn't sound particularly useful for someone who is migrating from Linux to FreeBSD right now, since this is all about how people *used* to configure FreeBSD storage.

    • by agshekeloh ( 67349 ) on Monday January 19, 2015 @03:48PM (#48851875) Homepage

      ZFS is NOT the default in FreeBSD 10. UFS is still the standard.

      (I try not to comment on reviews of my books, but a technical statement merits a technical answer.)

      ==ml

      • That's interesting, given that it is the default in PC-BSD. Also, I'd think it'd be the default in any 64-bit FreeBSD installation
        • UFS vs ZFS

          by agshekeloh ( 67349 ) on Monday January 19, 2015 @04:03PM (#48851967) Homepage

          PC-BSD is built atop FreeBSD, but it's unquestionably a different thing than FreeBSD.

          There are reasons to use ZFS, and other reasons to use UFS. Sometimes you really DO want UFS on raid-10. It depends entirely on the workload.

          UFS has been around for decades now. I can't say it's bug free--nothing is--but most of the code paths have been quite well exercised. ZFS is newer and more complex than UFS, and more actively developed.

          UFS is likely to remain the default in mainstream FreeBSD, for licensing reasons if nothing else.

          • Licensing? BSDL ain't religious about it the way GNU people are, so that's not so much of an issue: the CDDL is not so incompatible w/ BSDL, so FreeBSD happily accepts it when needed. It would be more of a problem in Linux, since GPL and CDDL are completely incompatible.
            • All of the CDDL stuff in our tree is in separate cddl directories. You can get a copy of the tree that does not include them, and you can build the system without them. This is a requirement for a number of downstream consumers of FreeBSD.

              That said, we do enable DTrace by default and the installer lets you choose UFS or ZFS. I'd recommend ZFS over UFS for anything with a reasonable amount of RAM (not your Raspberry Pi, but anything with a few GBs).
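
              For readers following along at home, the by-hand equivalent of the installer's ZFS option is only a couple of commands; a minimal sketch with placeholder disk names:

                  # Sketch: a simple mirrored pool, a dataset, and a health check
                  zpool create tank mirror da0 da1
                  zfs create tank/home
                  zpool status tank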

          • There are reasons to use ZFS, and other reasons to use UFS.

            Like, for example, UFS actually has a repairing FSCK. ZFS fanatics will argue to the ends of the earth that ZFS doesn't need fsck repair because it has built-in raid. Riiiggght.

            Bottom line is, ZFS is groovy and all (though no speed daemon) until it breaks. Chances are excellent that you are well and truly screwed.

            • In what scenario would fsck help you with a ZFS filesystem? What is this "break" you speak of that fsck would fix?
              • Here you go. [freebsd.org] Please tell me how the poor sod has any hope of getting anything back without a proper repair tool. Oh yeah, I forgot, when your blood runs thick with ZFS koolaid, all those unrecoverable pool errors out there never happened.

                • So you don't have an actual example then? Vague references aren't really helpful to having a meaningful discussion. In most circles what you're doing at this point would be considered FUD at best.
                  • Don't be an ass. Anybody can search "ZFS unrecoverable" or the like. Up the creek without a fsck. [nas4free.org]

                    • by koinu ( 472851 )
                      ZFS has got "self-healing" and does not need fsck. It may well destroy more data than an interactive tool would, since the user cannot give hints with "y" or "n" during self-healing, but the filesystem will come back stable and available. ZFS can of course become unrecoverable, just as UFS can be un-fsck-able, because essential metadata might have been destroyed. You'll always need backups for important filesystems; no matter whether it is UFS or ZFS, everything can get FUBAR.
                    • ZFS has got "self-healing" and does not need fsck.

                      Every time I hear this absurd blather repeated I hear "up the creek without a paddle", but hey, it's ok because ZFS is "self-healing". Hah, what rubbish. Do you even know what the so-called self-healing is? Recovering data from parity. Now... Suppose you lose a whole stripe, what now? Hmm?

                      The truth about why ZFS has no repairing fsck is, nobody in that camp is smart enough to write it, especially considering the baroque mess that is ZFS metadata.

                    • by ImdatS ( 958642 )

                      I think there is a mismatch between "self-healing filesystem" and "recovering data".

                      The "self-healing filesystem" (as I understand in the case of ZFS) is that it makes sure that the filesystem itself is not corrupted, i.e. the *whole* system is automatically healed. This doesn't guarantee recoverability of any single file that might have been corrupted.

                      Fsck (UFS) usually helps you to (a) heal a corrupted filesystem and (b) (partially) recover lost data (in case of corrupted files).

                      ZFS, AFAIK, p

                    • by koinu ( 472851 )

                      It is not the task of a filesystem to recover data, but to keep the data as consistent as possible. UFS does not have any protections against bit rot or against hardware failures, so you'll never know if data is broken.

                      Data recovery is done with backups (which you'll always have, if the data is important for you). There is no way around it.

                      How exactly does fsck help, if it shows structural inconsistencies to you? It says "blabla... CLEAR [y/n]?" and when you press "y" something is lost and when you say

                    • I don't think so. ZFS protects all data blocks and data pointers with checksums. Not only the special filesystem structures (superblock, inodes, metadata tables, ...).
                    • The "self-healing filesystem" (as I understand in the case of ZFS) is that it makes sure that the filesystem itself is not corrupted, i.e. the *whole* system is automatically healed.

                      False. ZFS "self-healing" has no concept of filesystem structure at all, it only knows about raid parity sets so it can fix a single block dropout or corruption of that sort. But blow big holes in the filesystem the way a head crash or failing flash does and ZFS "self-healing" gets useless fast. And yes, I have seen exactly that kind of corruption with Ext3/4 on multiple occasions and e2fsck has been able to get nearly all the data back. Obviously, if you blow a hole right in the middle of nonredundant data

                    • But the protection is not perfect. Throw random data into a few adjacent blocks the way a head crash does, and if those blocks happen to be structural metadata, think about how extensive the data loss could be. In most cases, e2fsck can repair damage like that. ZFS can't.

                      If you need help imagining this, think about the effect of sticking a pin into your aortic valve.

                    • It is not the task of a filesystem to recover data, but to keep the data as consistent as possible.

                      You are right, it is the task of the recovery tools to recover data*, which for ZFS are deficient or completely missing.

                      * Short of online repair, which today only exists in the limited form of raid recovery, and BTW, ZFS is far from the only fs that has that.

                      But the protection is not perfect. Throw random data into a few adjacent blocks the way a head crash does, and if those blocks happen to be structural metadata, think about how extensive the data loss could be. In most cases, e2fsck can repair damage like that. ZFS can't.

                      True, I've wondered about this myself lately with btrfs. Ext? has backup copies of the superblock, which it uses during repair, I presume.
                      There is no reason ZFS shouldn't have redundant copies of the critical structures. How foolproof is ZFS da

                    • by koinu ( 472851 )

                      You are partially right. I miss the good old dump/restore tools for ZFS, but zfs send/receive do their job (in a limited fashion) well.

                      I see ZFS as the best option for running larger systems. I've had some problems with it when it was still experimental on FreeBSD, but at the moment it is running fine. I had to replace 2 faulty drives recently; it was painless and hasn't cost me a single bit of data. I cannot complain, because that's one less administrative problem I have to care about.

                • His pool has problems but I'm willing to bet it was still mountable (ONLINE) at that time.
            • Like, for example, UFS actually has a repairing FSCK. ZFS fanatics will argue to the ends of the earth that ZFS doesn't need fsck repair because it has built-in raid. Riiiggght.

              That's not the argument. fsck is not magic. It is designed around a number of possible kinds of error. It verifies on-disk structures and will attempt to reconstruct metadata if it finds that it is corrupted. Equivalent logic to fsck is built into ZFS. Every ZFS I/O operation runs a small part of this, validating metadata and attempting to repair it if it is damaged. You could pull this logic out into a separate tool, but why bother? zpool scrub will do the same thing, forcing the zpool to read (and
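
              For concreteness, that workflow looks something like this (the pool name is a placeholder):

                  # Sketch: the scrub-instead-of-fsck workflow
                  zpool scrub tank        # read every block, verify checksums, repair from redundancy
                  zpool status -v tank    # check progress; -v lists any files with unrecoverable errors
                  zpool clear tank        # reset the error counters once the cause is dealt with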

      • Naming a book "Storage Essentials" and then not talking about ZFS was a mistake. If you're going to be building any type of NAS, you're going to want to use ZFS for its scalability, reliability and stability. While you might get away with UFS for a couple of terabytes, you're going to have a bad time of it when you've got 40TB worth of storage space to manage.
        • by mlts ( 1038732 )

          That can be debated. A DIY NAS that does the job can be built pretty easily using RAID-Z2 [1]. However, an unRAID appliance has some flexibility in that one can dynamically add more hard disks as one sees fit, without having to rebuild the entire array. Next to an EMC Isilon (which has 3+ nodes connected via Infiniband), this does the job quite well.

          Maybe this is the next step up for evolution of filesystems, where an array can be upgraded (disks added/subtracted) without affecting the data on them. Of cour

        • Naming a book "Storage Essentials" and then not talking about ZFS was a mistake. If you're going to be building any type of NAS, you're going to want to use ZFS for its scalability, reliability and stability. While you might get away with UFS for a couple of terabytes, you're going to have a bad time of it when you've got 40TB worth of storage space to manage.

          Essentials means "the essentials," not "everything you should know about X."

          After quickly looking through the table of contents, I don't think there's actually enough room in the book to even introduce ZFS. What should he have taken out? SMART? RAID? Encryption? I would argue that all of that is way more "essentials" than ZFS. ZFS deserves its own book.

          • He could have simply made the book 260 pages instead of 240 and put in a 20-page chapter on ZFS right after RAID. The first couple of pages would be about the design philosophy of ZFS. Next, introduce the concepts of vdevs, pools and pool types (in relation to what the reader just learned about RAID), sub-filesystems, snapshots and filesystem attributes. Next, lay out some scenarios using 8 disks in a JBOD, as in the sketch below. Create a raidZ, raidZ2 and a raid10. Next, talk about tacking on another 8 disks and what the optio
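
            For what it's worth, the 8-disk scenarios proposed above each come down to a one-liner; a sketch with placeholder disk names (these are alternatives, not sequential steps):

                # Sketch: three alternative pool layouts from the same 8 disks
                zpool create tank raidz  da0 da1 da2 da3 da4 da5 da6 da7   # single-parity raidz
                zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7   # double-parity raidz2
                zpool create tank mirror da0 da1 mirror da2 da3 \
                                 mirror da4 da5 mirror da6 da7             # stripe of mirrors ("raid10")
                # Tacking on more disks later: zpool add tank mirror da8 da9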
      • by dmoen ( 88623 )

        Sorry about that mistake. I'm not a FreeBSD expert, I'm someone who is setting up a FreeBSD file server, after 10 years of using Linux. So I'm in the market for a book like this, and of course it is largely useless to me, because of course I'm using ZFS. I was confused by the PC-BSD installer, which *does* default to ZFS.

        • by mcrbids ( 148650 )

          Are you switching to BSD just for ZFS?

          Learning BSD is probably a good investment, but ZFS on Linux [zfsonlinux.org] is production/stable and is excellent. I've been using it on CentOS 6 for over a year and it has been even more stable than EXT4 in a production environment.

            • While ZFS on Linux is pretty good and I'm using it on a few things, it's still a bit behind the other versions in speed and reliability. At the moment BSD is still a long way ahead if you have a lot of disks and want raidz or raidz2.
              I had a failure on every disk of a sixteen-disk pool when setting up on Linux late last year, for example, just after I'd copied a few TB to it - there are still some edge cases where it falls over and dies. Reinstalled with BSD it was much faster, nearly twice as fast, and hasn't
    • by Noxal ( 816780 )

      ZFS is a filesystem, not an operating system.

    • Now that ZFS is the default operating system for new installs of FreeBSD 10.x, it sounds like this book documents a lot of hard won technical insights that have been made obsolete by ZFS. Why would I configure RAID 10 for UFS when ZFS provides superior data protection? And so on. It's probably useful for people who have parachuted in and now must maintain a legacy FreeBSD system. It doesn't sound particularly useful for someone who is migrating from Linux to FreeBSD right now, since this is all about how people *used* to configure FreeBSD storage.

      O_o I'm assuming you meant filesystem.

  • It's a minor quibble, but the review states it's 204 pages and the table says 240. Obviously at least one of these numbers is incorrect.
  • Thank you Saint Aardvark for taking the time to write this review. You write very well.
    --Z.

  • The cover artwork is nice, but I like the cover style of his "Absolute ..." series more.

  • If it was, it wouldn't be about UFS, it'd be about ZFS.

    UFS is still fine for small installations on embedded systems, but for anything of any size, you should be using ZFS. It's superior in every way other than memory usage.

    • ZFS lacks fsck, it's slow unless you throw massive hardware at it, and "send" as a replication API is a drug-addled piece of crap. Can't resume interrupted replication without starting from the beginning; what kind of amateur effort is that?
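
      For context, the replication being complained about looks roughly like this (dataset and host names are placeholders); at the time, a stream that died partway had to be restarted from the first byte:

          # Sketch: snapshot replication with zfs send/receive
          zfs snapshot tank/data@monday
          zfs send tank/data@monday | ssh backuphost zfs receive backup/data
          # Incremental follow-up, sending only the changes since @monday:
          zfs snapshot tank/data@tuesday
          zfs send -i @monday tank/data@tuesday | ssh backuphost zfs receive backup/data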

      • by dbIII ( 701233 )

        it's slow unless you throw massive hardware at it

        I've got some 32 bit stuff with 4GB of memory running ZFS that still can saturate gigabit if you ask it for a file. Not the sort of thing you want a few people hitting at once but still not slow even on crap hardware. In general terms a ZFS filesystem that is nearly full gets very slow but that's something that afflicts others as well.
        You have a point with send but it still shits all over rsync in terms of speed so if there's a rare chance of interruption

        • At work I have a little AMD (1.6GHz, I think) FreeBSD system with 2GB of RAM that saturates gigabit easily. I have actually been quite impressed by the performance of ZFS. Perhaps I am easily impressed, though. The last system we had was an off-the-shelf Seagate home-level NAS that ran some form of embedded Linux and failed horribly.

          It keeps complaining it wants another 2GB of RAM to enable prefetch or something, but I can't be bothered because it is really better than it needs to be, and it does provide me

        • by Shinobi ( 19308 )

          I've recently gone back to my roots and started dabbling with 3D animation and compositing again. My fileserver is a FreeBSD machine running on a decent 64-bit CPU with 16GiB RAM, with ZFS. And let me tell you, ZFS is dog slow for some uses, without being anywhere near full. In my case, with lossless-encoded video and directories with thousands of 4MiB+ images, working against that in realtime (or trying to), the filesystem stalled out at 80MiB/s, while my old fileserver running Linux and XFS easily satur

          • So what is the dog-slow setup? A dozen disks in a single raidz2 vdev will be slow; splitting it into multiple vdevs will not, and mirrored disks (one vdev per two disks) should be very fast. It's not the same as RAID, where the speed is proportional to the number of disks; here the speed is proportional to the number of vdevs (virtual devices).
            Then there's the possibility of running 4k-sector disks as if they are not, and losing a lot of performance.
            So what did you do to get a speed slower than a single dis
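
            To make the vdev-count point concrete (disk names are placeholders): both pools below use twelve disks, but the second spreads random I/O across six vdevs instead of one:

                # Sketch: same 12 disks, very different random-I/O behavior
                zpool create onewide raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
                zpool create sixfast mirror da0 da1 mirror da2 da3 mirror da4 da5 \
                                     mirror da6 da7 mirror da8 da9 mirror da10 da11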
            • by Shinobi ( 19308 )

              6 WD Red 2TB disks split over 3 VDEVs, sector size is set correctly etc. No encryption, no compression. And it makes no difference whether I use NFS or Samba.

              ZFS internals are something with which I have yet to familiarize myself, so I can only guess, but my initial impression is that it's similar to older Unix filesystems (and why Silicon Graphics developed XFS) in that it is not that good at handling many large files simultaneously. So I have the original video clip, then I have individual folders w

              • by dbIII ( 701233 )
                OK then, it's not a setup failure, but something is definitely wrong. It's probably down to ZFS using 512-byte sectors on disks that claim to be 512-byte but really have a 4k sector size.

                in that it is not that good at handling many large files simultaneously

                I'm using it for seismic data with a lot of multi-TB files being accessed simultaneously and it still saturates bonded gigabit links (better networking coming soon). Other stuff may be better but it's tasks like handling many large files simultan
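
                On FreeBSD 10 and later, the usual guard against sector-size lies is to force a minimum ashift before creating the pool; a sketch (pool and disk names are placeholders):

                    # Sketch: force 4k alignment for newly created vdevs
                    sysctl vfs.zfs.min_auto_ashift=12    # 2^12 = 4096-byte sectors
                    zpool create tank mirror da0 da1
                    zdb -C tank | grep ashift            # should report ashift: 12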

      • it's slow unless you throw massive hardware at it,

        Ran my home file server / desktop PC on a 32-bit Intel P4 with only 2 GB of RAM. Booted off a pair of 2 GB USB sticks (/ and /usr installed there, RAID1 via gmirror), and a 4 GB USB stick for L2ARC, while using 4x 160 GB SATA1 harddrives in a raidz1 vdev. Ran XBMC locally to catalogue all the shows into MySQL, and then to stream the videos to the other two XBMC systems in the house (10/100 Ethernet). No issues watching 480p and 720p shows while others w

  • The thing that is stopping me right now from moving my server fleet wholesale to a BSD is the lack of good choices when it comes to clustered filesystems, like OCFS2. I know there is GlusterFS, which is not the right solution for me, and HAMMER/HAMMER2, which don't have the maturity and feature set we need yet. A shared drive, with a simple and fast DLM. Maybe I have overlooked something, but that is what is lacking right now in filesystems for BSD...