Managing RAID on Linux

rjnagle writes "The availability of HOWTOs and newsgroups is supposed to make the sysadmin's job easier, right? Much as I am a proponent of the 'distributed learning model' for Linux, the endless searching of the Web for answers on setting up Linux RAID was getting to be a royal pain. Sure, there was a RAID HOWTO and an excellent newsgroup, but some of the information is out of date, and the tricks suggested by people a year ago may no longer be needed today." Robert reviews the O'Reilly title Managing RAID on Linux below to see how it stacks up against HOWTOs, guesswork, and anecdotal evidence.
Managing RAID on Linux
author: Derek Vadala
pages: 245
publisher: O'Reilly
rating: The best
reviewer: Robert Nagle (aka idiotprogrammer)
ISBN: 1565927303
summary: This book brings RAID to the masses

A person deciding to go with RAID faces a panoply of options and gotchas. Hardware or software? How many controllers? ATA or SCSI (or ataraid)? RAID 1 or RAID 5? Which file system or distribution? Kernel options? Mdadm or raidtools? /swap or /boot on RAID? Hybrid? Left or right symmetric? One poster pointed out that putting two ATA drives on the same controller could impact performance. Yikes! Didn't I do that? Upon discovering that O'Reilly had just published its Managing RAID on Linux book, and after looking at a sample chapter, I bought the book and let my blood pressure return to normal.

RAID is one of those subjects that is not really complex; it's just very hard to find all the information in one place. This is precisely the book to solve that problem. Author Derek Vadala, sysadmin and founder of Azurance.com, an open source/security consulting firm, has gathered a lot of information and even personal anecdotes to walk us through the decision-making process of moving to RAID. He goes step by step through that process, educating us about hard drives, controllers, and bottlenecks along the way. This exhaustive book may be the first to bring RAID to the masses.

Although parts of the book (RAID types, file system types) may seem familiar to experienced Linux users, it is helpful nonetheless to have everything in one nifty little book. The section on file systems provides not only a rundown of the merits and drawbacks of each one, but also a guide to their utilities. I learned, for example, what "file tails" are in ReiserFS, and why using them causes performance to degrade after reaching 85% capacity. The book compares raidtools with mdadm, and covers lovely commands like nohup mdadm --monitor --mail=paranoidsysadmin@home.com (which, if you haven't guessed, has mdadm email RAID status reports to that address).
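For the curious, a full monitoring invocation along those lines might look something like the following sketch (the mail address is the reviewer's example; the --delay value and the /dev/md0 array name are illustrative additions, not taken from the book):

# Watch /dev/md0 and send mail when an event (failed disk, degraded array) occurs.
# --delay is the polling interval in seconds.
nohup mdadm --monitor --mail=paranoidsysadmin@home.com --delay=300 /dev/md0 &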

People who use software RAID may skip over the chapter on RAID utilities for the leading RAID controller cards. Still, there was one interesting tidbit: why, the author asks, do makers of controller cards put all their BIOS utilities on DOS floppies, which require us to find a DOS boot disk? Seriously, how many of us carry around DOS boot disks nowadays? The book made me aware for the first time of FreeDOS, an open-source DOS that solves precisely that problem.

The software RAID material is pretty thorough and clarified a lot of things. The book does an excellent job of helping to identify and eliminate bottlenecks and of optimizing hard drive performance (using hdparm and various monitoring commands). The anecdotes and case studies definitely clarified which RAID solution is suited to which task.
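To give a flavor of the kind of tuning involved, here is a rough sketch of the usual hdparm drill (the device name is illustrative, and these are generic commands rather than the book's own recipe; test setting changes on a non-critical machine first):

hdparm -tT /dev/hda      # quick cached vs. buffered read benchmark
hdparm -v /dev/hda       # show current settings (DMA, 32-bit I/O, multcount, ...)
hdparm -d1 -c1 /dev/hda  # enable DMA and 32-bit I/O if the drive and controller support them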

I am less impressed by the book's sections on disaster recovery and troubleshooting. Although these subjects come up at several places in the software RAID chapter, the book could have discussed several failure scenarios or used a fault tree (such as the famous fault tree in Chapter 9 of the Samba book, a marvel for any tech writer to read). The book doesn't even discuss booting with software RAID until the last ten pages, and then gives it only a single paragraph (even though the author acknowledges it as "one of the most frequently asked questions on the linux-raid mailing list"). Call me old-fashioned, but isn't the ability to boot into your RAID system ... kinda important? As someone who just spent a significant amount of time troubleshooting RAID booting problems in Gentoo, I for one would have liked more insight into the GRUB/LILO question. Also, in the next paragraph of the last chapter, on page 228, the author casually mentions that "all /boot and / partitions must be on a RAID-1." Say what? Please pity the poor newbie who religiously follows the instructions in the book but fails to read until the end. I'm not sure what the author meant by this statement, but it requires a much more substantial explanation and belongs in a much earlier chapter.
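Since booting gets so little space in the book, here is a hedged sketch of the sort of lilo.conf fragment people were using at the time for a mirrored /boot; this is my illustration rather than the book's recipe, and the device names are placeholders:

# /etc/lilo.conf (fragment): /boot on /dev/md0 (RAID-1), root on /dev/md1
boot=/dev/md0
raid-extra-boot=mbr-only   # write a boot record to each member disk's MBR
image=/boot/vmlinuz
    label=linux
    root=/dev/md1
    read-only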

These complaints don't detract very much from this excellent book, a true O'Reilly classic and a model of clarity and helpfulness. This book provides enough knowledge to avoid the dread and uncertainty that comes with trying to tackle Linux RAID. With a book like this, a sysadmin can sleep a little easier.

Robert Nagle (aka Idiotprogrammer) is a Texas technical writer, trainer, and Linux aficionado. You can purchase Managing RAID on Linux from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

  • by supun ( 613105 ) on Wednesday February 12, 2003 @11:07AM (#5287921)
    but the easiest way I've found is to go with hardware RAID. It's easier to set up, doesn't put any extra load on the CPU, and only costs a few hundred dollars extra.

    Mind you, I'm thinking of RAID used in production rather than someone RAIDing two drives in their home machine.
    • Few hundred? For IDE they're under a hundred, sometimes right on the motherboard.
      • by Anonymous Coward on Wednesday February 12, 2003 @11:49AM (#5288270)
        For IDE they're under a hundred, sometimes right on the motherboard

        Those cheap-o-RAIDs are essentially software RAIDs. Most if not all RAID functions are done by the drivers, not on the card itself.

        Entry-level real hardware IDE RAID cards [3ware.com] cost approximately $500 - almost the same as a SCSI RAID. That's obviously offset by the cheaper disks, but still...

        • Actually, 3ware IDE RAID cards are much cheaper than that. About $120 for a 2-channel card good for RAID 1/0, $245 for a 4-channel, $365 for an 8-channel, and $520 for a 12-channel. I pulled these prices from hypermicro.com and no, I'm not affiliated with them, just a satisfied customer.

          If you're looking to do any more than 4 channels, I'd take a serious look at the SATA cards and drives simply to reduce cabling hassles.
        • Why pay for that when a few IDE cards @ $50 each and a decent Athlon CPU will perform the same task as a fancy hardware RAID? Is there some benefit to hardware RAID other than server-class equipment? We are talking IDE here, after all. I thought they stopped making server-class IDE drives, which is why we RAID them in the first place, I guess.
          If you need a RAID solution that never fails and want to spend some money, you might as well go with a Sun A/Dx000 and build a couple of redundant servers to set up load balancing and failover. The whole point of using IDE RAID is to save money and get massive amounts of relatively stable storage space.
          • The reason that some people like hardware RAID over software is that it only costs around $300 more AND most implement HOT SWAP features. This is kinda nice. The ONLY downside is that you need to use the vendor's driver as opposed to a standard IDE driver.

            The other thing you get is the ability to add drives on the fly.

            I would agree that if I didn't have the $300.00 for a 3ware RAID controller then I would try software RAID again. My first experience with it (Red Hat 7.3) sucked big time. I couldn't get the thing to install and be stable. This brings up another point: your OS doesn't know that it is being RAIDed. In some cases that is nice.

        • by spongman ( 182339 ) on Wednesday February 12, 2003 @03:19PM (#5290046)
          Not so; there are plenty of comparative reviews, like this one on Tom's Hardware [tomshardware.com], that suggest that the cheap-o-RAIDs, while not as feature-complete (i.e. no RAID 5) as some of the more expensive offerings, are just as performant and sometimes faster and less CPU-taxing than the more expensive options.

          You can get excellent performance for less than $100. Why pay more?

      • And are basically software raid.
        Those cheap IDE RAID cards do most of the work in the driver, and don't give you much advantage over software RAID.
        True hardware raid is a few hundred dollars, like the 3ware series of cards.
    • About a year and a half ago I was looking for a hardware RAID controller that had stable Linux drivers and supported IDE for my personal server. Know of any?
      • 3ware (Score:3, Informative)

        by Anonymous Coward
        See 3ware's site [3ware.com]. They have an excellent range of IDE RAID cards that are real in the sense that the processing is done by the card and not by your computer's CPU (unlike in the cheap RAID-on-a-motherboard kludges). They are Linux friendly too.

        Up until now I've bought only SCSI drives because heavy compiles (which I do a lot) just choke IDE down. I now have a 4 x 60 GB RAID-1 and it just screams. With a one time investment in a proper IDE RAID card with escalator scheduling, tagged queueing and big cache I still save a lot of money by being able to buy large but cheap IDE disks.

      • tom's hardware [tomshardware.com] is your friend...

        check out the highpoint and dawicontrol offerings unless you need RAID5.

    • I got a nice RAID for one of my boxes for $200. It's 100% hardware. The OS doesn't even know it's there. Hot swappable cheap-o IDE drives. It's great. I would *never* deal with the headache of a software RAID solution. That's like buying a buggy, slow software modem to save $10. There's no point.
    • There are plenty of reasons to go software RAID over hardware RAID. With Linux, one of the main reasons is the same reason many of us choose Linux to begin with -- it's open source. I know that isn't traditionally a factor to be considered when picking hardware, but remember that when a hardware controller fails you are at the mercy of the vendor. If a Linux software RAID fails, you have access to the source code and perhaps also the developers, so you have a shot at recovering data in a catastrophic event, even if it does mean writing some recovery tool on your own. In fact, with RAID-1 in the Linux kernel, if something goes kablooey you can just mount a member disk standalone and get some rest (a sketch of this follows below).

      That's only one consideration. It used to be that the headache of booting from, and installing to Linux with software RAID was a huge hassle. Today almost every distribution supports out of the box installation to software RAID. So the 'ease of use' considerations for going hardware are all but gone.

      Now here's the issue that always starts the tug of war -- performance. Traditionally hardware RAID was simply better because it didn't hit the CPU. Today that doesn't make a difference, especially if you use SCSI. Now with ATA you might see the overhead of RAID a little more, but that's because ATA already has overhead to begin with. The CPU hit with SCSI is negligible, and I doubt it will be noticed in most cases, even in so-called "production". That's because the real bottleneck in most systems is I/O throughput and not CPU performance. That's most systems, not all systems. Obviously if you are a good sysadmin you are evaluating these issues on a case-by-case basis.

      Finally, I just want to say that it's a widely held opinion in the Linux RAID community that the kernel RAID (the md driver) outperforms all but the most high-end SCSI RAID controllers. I'm sure many will disagree, but that's been my experience, and I know that if you ask certain kernel developers who shall remain nameless they will tell you the same thing.

      Run bonnie, you'll see.

      Derek Vadala, lowly author.
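
      To illustrate the point above about RAID-1's escape hatch: with the old 0.90-format persistent superblock stored at the end of each member partition, a mirror half can usually be mounted directly when the md layer itself is giving you grief. A sketch, with made-up device names:

      # Array won't assemble? Mount one half of the mirror read-only and copy the data off.
      mount -o ro -t ext2 /dev/sda1 /mnt/rescue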
  • The "hidden" problem with hardware RAID is that often the operating system isn't aware when an active drive has failed. Some vendors offer monitoring utilities that install into the host OS (e.g., MegaRAID controllers have a Linux utility), but this raises dozens of issues. Will the utility impact the server's stability or performance? What library dependencies does the utility require? How do I integrate the utility into my enterprise monitoring system, e.g., Nagios or Tivoli?

      Another problem - perhaps less serious - is that hardware RAID controllers often require a reboot into their proprietary BIOS to do anything. This isn't very useful if you want to expand the RAID array without disrupting service. Some vendors offer utilities to modify the RAID configuration but I've never found all the functionality to be exposed within the utilities. Of course, if you are mucking about with disk arrays on production systems then you have bigger issues to deal with.
  • by VitrosChemistryAnaly ( 616952 ) on Wednesday February 12, 2003 @11:10AM (#5287952) Journal
    ...was the use of the word "panoply".

    That word simply isn't used enough in the modern vernacular.

    Okay, mod me down now...
  • RAID and Firewire (Score:5, Interesting)

    by syr ( 647840 ) on Wednesday February 12, 2003 @11:14AM (#5287979)
    Is there any way the peripheral-to-peripheral features of FireWire could be used to create an advanced disk redundancy solution à la RAID? I ask this because I know the new FireWire specs shipping on the fancy new Apple machines are getting quite speedy, and one of the prime advantages of 1394 over USB is the device-to-device communication that is possible.

    Is it possible to use FireWire and a service like Rendezvous to make an intelligent redundant system? It's a thought at least. The FireWire drive I use for my Inspiron works nicely enough. Would FireWire be cheaper than RAID for servers, however?


    Syr GameTab.com [gametab.com] - Game Reviews Database

    • Re:RAID and Firewire (Score:4, Interesting)

      by Oculus Habent ( 562837 ) <oculus.habent@gm ... m minus caffeine> on Wednesday February 12, 2003 @11:44AM (#5288227) Journal
      Interesting Thought.

      1. Rendezvous probably wouldn't come into play - it's really system-to-system.

      2. The device to device communication could be especially useful when recovering a failed disk - no overhead on the controller. This, though, would require the devices themselves be better than mere drives, driving the cost up.

      3. Unfortunately - without drives with actual FireWire interfaces (all externals use FW-IDE bridges, the Oxford 911 being the fastest at 50MB/s [fwdepot.com], 35MB/s sustained) the true potential of FireWire will remain untapped. Perhaps as we move to Serial-ATA and away from the standard parallel IDE, manufacturers will be prompted to offer FireWire drives as well.

      Additional possibilities:
      Think of a trimmed-down Xserve RAID [apple.com] with FireWire instead of Fibre Channel - it would be able to take advantage of the bandwidth of FireWire and still maintain (?) affordability for low-to-mid range businesses looking for large high-speed external storage.

      All sorts of possibilities.
    • FireWire is a bus; RAID is a configuration.

      There are RAID arrays with FireWire interfaces, and software RAID using FireWire drives is quite possible. (OS X makes it easy as pie.)

      Here are some cool FireWire RAID products:

      http://www.usbshop.com/firewireraid.html
      http://www.sancube.com/
      http://www.voyager.uk.com/products_master.asp?prodType=firewire

      The x-stream from sancube has two FireWire busses for double the speed, or for sharing.
      • Here's how I interpreted the parent poster's idea:

        Attached to the firewire bus is a RAID controller, but just a controller -- no disks or cabinet. The controller is configured to read/write to N drives also connected to the firewire bus. Writes to the card would be buffered and parceled out to the individual drives as the bus becomes available.

        The beauty would be you could connect generic firewire drives to the bus, and wouldn't need an expensive cabinet or dedicated drives. With enough buffer and courage to do cached writes, you could get good throughput since the real disk writes could wait for the bus to be free.

  • by SysPig ( 63656 ) on Wednesday February 12, 2003 @11:15AM (#5287984)
    ...but the best part was, I learned a new word today.

    panoply
    n. pl. panoplies

    1. A splendid or striking array
    2. Ceremonial attire with all accessories
    3. Something that covers and protects
    4. The complete arms and armor of a warrior

    Looks like number one is most appropriate, although I've never referred to my arrays as "splendid".

  • /boot / on RAID 1? (Score:5, Informative)

    by mj01nir ( 153067 ) on Wednesday February 12, 2003 @11:17AM (#5288015)
    "all /boot and / partitions must be on a RAID-1."

    With raidtools, at least, /boot must be RAID1, but / can most assuredly be RAID 5 (or, I presume, any of the other RAID levels). I have this running on an ol' RedHat 7.0 box:

    Hunk 'o fstab:
    /dev/md1 / ext2 defaults 1 1
    /dev/md0 /boot ext2 defaults 1 2

    Similar hunk 'o raidtab
    raiddev /dev/md0
    raid-level 1
    nr-raid-disks 2
    chunk-size 64k
    persistent-superblock 1
    #nr-spare-disks 0
    device /dev/sdb1
    raid-disk 0
    device /dev/sda1
    raid-disk 1

    raiddev /dev/md1
    raid-level 5
    nr-raid-disks 3
    chunk-size 64k
    persistent-superblock 1
    #nr-spare-disks 0
    device /dev/sda6
    raid-disk 0
    device /dev/sdb6
    raid-disk 1
    device /dev/sdc5
    raid-disk 2

    *Shrug* Wonder what the context of that quote was within the book?
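
    For what it's worth, roughly the same layout built with mdadm instead of raidtools might look like this (a sketch reusing the device names from the raidtab above, not a tested recipe):

    # /boot: RAID-1 across sdb1 and sda1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sda1
    # /: RAID-5 across sda6, sdb6 and sdc5 with 64k chunks
    mdadm --create /dev/md1 --level=5 --raid-devices=3 --chunk=64 /dev/sda6 /dev/sdb6 /dev/sdc5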
  • The learning curve on most Wintel software is on the order of the time needed to search through half a dozen menus to find the right command.

    Trying to make everyone be an expert before they can operate their machine is how operating systems die.
  • multipath? (Score:2, Informative)

    by Anonymous Coward

    Does this book talk about the md driver's multipath personality?

    This is the most poorly documented part of the md driver.

    If you read the raidtab man page ("man raidtab") you will find _no_ mention of multipath whatsoever.

    Yet, the md driver can do multipath (well, failover) if you set it up right.

    It has limitations, though... You can't install to multipath devices or boot from them (LILO/GRUB and the various distributions' installers don't understand md multipath), and if an HBA fails in such a way that interrupts are not generated -- commands just go out to lunch -- then md won't notice anything is wrong, and so won't fail over. Also, it does nothing to check whether the failover path is actually working, so if that path fails you won't have any notice that redundancy is lost.

    Well, multipath is not RAID, so maybe this book doesn't cover it, but any book on software RAID for Linux should probably cover all the features of the md driver.

    I will be interested to see this book.
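
    For reference, the md multipath personality can also be set up with mdadm; a minimal sketch, with hypothetical device names (sda1 and sdb1 being two paths to the same disk):

    # present two paths to one disk as a single failover device
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda1 /dev/sdb1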

  • by thefoobar ( 131715 ) on Wednesday February 12, 2003 @11:22AM (#5288064) Homepage
    I've stepped away from the software RAID idea on my boxes, due to the availability of cheap hardware RAID, such as Promise's SX4000. It will do hardware RAID 5 for four or more drives and has an SDRAM slot for cache expansion. Coupled with LVM, it ended up being a good solution for me, as I had both the reliability and good volume management if I wanted to combine arrays.

    The problem I've had with software RAID is reliability and expandability. It is a pain in the ass if you lose a drive in the array, and it is next to impossible to add a drive (other than a standby drive) to your existing RAID 5 setup.

    Aah, opinions...
    • by Sludge ( 1234 ) <slashdot&tossed,org> on Wednesday February 12, 2003 @01:08PM (#5289010) Homepage

      You actually feel good about the Linux drivers that Promise gives you with the SX4000? I bought this card, and I wish I had stayed away from it.

      I am using it with four 120GB IDE drives with 8MB cache. For starters, if you use anything but the sxcslapp program in Linux to configure the drives, your drives are corrupt. All of 'em. And your BIOS will return corrupt information regarding them. This causes DOS not to boot (hard freeze), and Linux to produce keyboard smashings on boot. This is a known firmware problem, and I'll be damned if they have any flashes available, even though the card is four months old. I just checked before writing this review.

      Once I figured out that all the work had to be done with sxcslapp in Linux, I started building my RAID 5, albeit with caution. Things here went pretty well, except that a) performance sucked about as bad as a single drive, and b) the closed-source drivers rebuild the RAID array with no warning if a drive fails and is replaced, even if the file system is mounted. So this means that if you have a drive that bombs and you replace it, anything you write to the RAID array will be wiped out. I could have used some notification.

      The Linux drivers are horrible. They are written in 'Engrish', and the documentation might as well have been written by someone who doesn't understand computers. "Select the remove drive from array option to remove a drive from array". This continues for all of the options in their menu-driven app.

      I am also forced to use Red Hat 7.3 for this. Great. I now have a cluster of Debian 3 servers I administer and one Red Hat server.

      I would have returned the card if my reseller would have taken it back. It's about equally expensive to buy IDE add-on cards, or maybe a bit less, and the software RAID in Linux seems to be solidly documented. I've used RAID 1 in software on servers before, and it works nicely.

  • by vasqzr ( 619165 ) <vasqzr@n[ ]cape.net ['ets' in gap]> on Wednesday February 12, 2003 @11:23AM (#5288070)


    Please pity the poor newbie who religiously follows the instructions in the book but fails to read until the end.


    On the other hand, pity the newbie who cracks a book open and starts setting a server up page-by-page.
  • by Wee ( 17189 ) on Wednesday February 12, 2003 @11:34AM (#5288156)
    I've looked everywhere to find out why I keep getting these error messages on my Red Hat 7.3, 2.4.18-3 kernel RAID1 setup:

    Jan 26 04:15:02 hostname kernel: hdb: dma_intr: status=0x51 { DriveReady SeekComplete Error }
    Jan 26 04:15:02 hostname kernel: hdb: dma_intr: error=0x84 { DriveStatusError BadCRC }

    I've looked all over the place for the answer: Google, mailing list archives, Usenet, local Linux friends, etc., and haven't been able to find a definitive answer. It's like nobody really knows what that error message really means.

    Newsgroups suggested bad cables, so I replaced those (twice, once with brand new cables bought specifically for the purpose). Some info suggested the drive or the drive's controller was failing, so I replaced it. Other info pointed to my IDE controller, so I installed a new one dedicated only to the RAID pair. I saw info that said the raid tools were to blame, and to see if the errors go away when the mirror is broken. No dice. Other info I found suggested that it was the IDE drivers in the kernel and that the messages were nothing to worry about unless I was seeing data corruption. I'm not seeing corruption so I'm left with this option.

    If the book can shed some light on the error message voodoo one sees with Linux's IDE driver, then I'll buy it. I'd pay double what they're asking, even.

    -B

    • When I've gotten that error it has meant that the drive itself is heading towards the great hardware graveyard in the sky. Since it's raid1 you should be able to simply put in a new /dev/hdb and all should be fine.
      • When I've gotten that error it has meant that the drive itself is heading towards the great hardware graveyard in the sky. Since it's raid1 you should be able to simply put in a new /dev/hdb and all should be fine.

        Well, data safety is the reason I have RAID. Trouble is, this is the replacement drive on /dev/hdb. The first had the same problem, which is why I was looking for controller/cable/driver issues.

        I'm tempted to go buy a real RAID controller card and get away from software RAID. Problem is the Linux drivers are usually pretty strange. I like being able to upgrade my kernel, for instance.

        -B

        • A couple of thoughts:

          - Have you tried putting the hard drive in another Linux box and seeing if the same errors show up?
          - Have you tried diddling with the DMA/hdparm settings? I know some controllers have "issues" with DMA.
          - Are there actual errors, trouble getting data, slow performance, etc.? This could be random warnings thrown out by the driver that may not mean anything is disastrously wrong (wild guess).

            Have you tried diddling with the DMA/hdparm settings? I know some controllers have "issues" with DMA.

            Turns out that the error isn't actually related to RAID at all. I mean, it is. The drives don't have errors when used outside a RAID setup. Put 'em back into a mirrored pair, and I start getting errors. But the problem isn't really caused by Linux's software RAID per se.

            You are exactly correct about the DMA stuff, though. Someone else suggested it, and I found out [slashdot.org] that it was in fact DMA. I had DMA enabled on one drive and not the other. Take the drives out of the RAID pair, and they don't individually show errors. Put them together, errors. That's why I thought RAID was the culprit (and why the book might help).

            Thanks for the suggestions, BTW. Very much appreciated.

            -B

        • I'm tempted to go buy a real RAID controller card and get away from software RAID.

          What do you think it'll buy you, honestly? I've got a half dozen software RAID1 systems out there, three of them being pounded mightily every day (10k-user ISP mail/radius servers) without so much as a squeak of complaint. Throughput is pretty decent as well:

          hdparm -tT /dev/md0

          /dev/md0:
          Timing buffer-cache reads: 128 MB in 0.87 seconds = 147.13 MB/sec
          Timing buffered disk reads: 64 MB in 2.16 seconds = 29.63 MB/sec

          (yes I know it's not a thorough benchmark) -- So without bringing the drive cache into play, I can hit about 30MB/sec sustained. If I had better drives I bet I could boost those numbers significantly. Probably close to the 90MB/sec I am seeing on my new server, single-drive stats.

          • What do you think it'll buy you, honestly?

            Well, I had thought that my IDE controller was bad, the IDE drivers were wonky, the raid tools stuff was weird, whatever. I mean, I had two drives which both worked great when used by themselves. I put them in a RAID pair, and I got errors. Turns out I had DMA disabled on one of them, but I was looking at Linux software RAID as the culprit. I thought buying dedicated hardware would isolate any problems. It was a last-ditch, straw-grasping effort, to tell the truth.

            I'm actually a fan of Linux's software RAID1. No "special" drivers, I can use any kernel I want, easy to set up, minimal performance impact, and fairly transparent to use. Now that I know why I was getting errors, and that it wasn't anything to do with software RAID, I'm fine with it.

            -B

    • I've seen those errors on 3 drives over the years, and all 3 have failed within a month of starting to see them.

      Check your backups... :)
    • by dentar ( 6540 ) on Wednesday February 12, 2003 @11:54AM (#5288297) Homepage Journal
      Dude, that's hardware. Turn off the dma on your drives with hdparm.

      hdparm -d 0 /dev/hdb

      You might also have to turn off 32 bit mode:

      hdparm -c 0 /dev/hdb

      Of course, this will slow things down.

      Be sure everything's jumpered correctly.

      Also, of course, I'm not responsible if you fry your data!
      • Dude, that's hardware. Turn off the dma on your drives with hdparm.

        You know what? The other drive in the RAID pair (/dev/hdd) had DMA off, while /dev/hdb had it turned on. I don't know why that was the case. Perhaps my late-night fiddling resulted in some sort of fat-fingering (wait... that sounded really bad). Anyway, I decided to do some tests by copying about 150MB of MP3s to my array while setting DMA to either on or off.

        With DMA on/off (regardless of which drive has DMA on or off), I get the errors. With it set to off/off, I don't get errors, and the array is slower than a wounded prawn and a huge CPU hog (the copy takes around 50 seconds and the load avg hovers around 4.50). I don't care about slow since this is an NFS/Samba server and CAT5 is my bottleneck. The CPU load I do care about since the box does other things besides simply serve files. With DMA set to on for both drives, I also don't get the errors, which is very cool. The copy takes around 10 seconds and the load avg is about 0.70. All to be expected, since DMA gives quite a performance boost. But it's good to know I can turn it on.

        Looks like my issue was with wacked DMA settings, and not the hardware going bad. So thanks for getting me to take another look! I probably ought to go buy the RAID book now...

        -B
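
        For anyone hitting the same thing, a quick way to check whether both halves of the mirror agree on DMA (device names match this thread; adjust to taste):

        hdparm -d /dev/hdb /dev/hdd    # reports using_dma = 0 (off) or 1 (on) for each drive
        hdparm -d1 /dev/hdb /dev/hdd   # switch DMA on for both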

        • The problem may be that you're running the drive at a higher UDMA level than the controller can support. I had problems like this when I plugged an ATA-100 hard drive into an older computer with only an ATA-33 controller.
          The workaround is either to turn the maximum UDMA level down on the hard drive itself (with the DOS utils provided by the manufacturer) or to turn it down in Linux using hdparm -X6n, where n =
          6 -> UDMA2 -> ATA-33
          8 -> UDMA4 -> ATA-66
          9 -> UDMA5 -> ATA-100
          ... IIRC; check your man page to make sure.
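
          Concretely, capping the drive in this thread at ATA-33 would be something like the following (illustrative device name):

          hdparm -X66 /dev/hdb    # 64 + 2 = UDMA mode 2, i.e. ATA-33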
    • by ThePurpleBuffalo ( 111594 ) on Wednesday February 12, 2003 @12:17PM (#5288552)
      If the drive is fairly new, it may be SMART capable. Go and install http://www.linux-ide.org/smart/smartsuite-2.1.tar.gz.
      This can be done with (as root):

      wget http://www.linux-ide.org/smart/smartsuite-2.1.tar.gz
      tar -xzvf smartsuite-2.1.tar.gz
      cd smartsuite-2.1
      make
      make install

      You might get some non-fatal type errors. The makefile doesn't always work for setting up the rc.d scripts.

      Now run:

      /usr/local/sbin/smartd
      /usr/local/sbin/smartctl -a /dev/hda

      I'm assuming the bad disk is /dev/hda, but change it to suit your needs. If you get some errors, then SMART may not be enabled, so you'll need to run:

      /usr/local/sbin/smartctl -e /dev/hda

      Anyway, when you run smartctl with the -a, it will tell you all about hardware failures and whatnot. For more info on the codes it returns, go to this page: http://www.ariolic.com/activesmart/docs/smart-attribute-meaning.html

      I hope this helps

      Beware TPB

    • Nice one. You just managed to post a question on /. in the hope of eliciting technical help, qualifying it with an unconvincing statement that you might even buy the book.

      I can't wait for a review of a book about Gentoo (1.4rc2) installation so that I don't have to camp out on irc.openprojects.net every time GCC segfaults on my Athlon MP :)

      • Nice one. You just managed to post a question on /. in the hope of eliciting technical help, qualifying it with an unconvincing statement that you might even buy the book.

        Dig those knee-jerk accusations, man.

        I've got a Linux RAID setup, it's been giving me errors for a while. I read the book review, and was wondering if maybe the book had any info about those errors, since no online source I could find did. After all, the problems are most definitely related to the drives being in a RAID pair because they don't have problems otherwise. So I composed a wool-gathering post about wondering how much detail could fit in 245-odd pages, and whether or not the book was worth it.

        Then I read what I was about to post and judged it to be completely useless, uninformative, and uninteresting. So I added a question as to whether or not anyone had actually read the book, and could they tell me if it had info about the errors I was seeing. That was basically useless as well, so I pasted in an actual error (in order to be specific and get away from some lame "uh, I have RAID and it has errors... can the book help?" question; it was also easier to copy-n-paste than to explain what the error was), explained my situation, and said I'd buy the book if it could help me. Turns out it probably wouldn't be able to, which is exactly what I wanted to know.

        Anyway, that was the rationale behind my post. Anything else you'd like me to explain to you?

        -B

        • Dig those knee-jerk accusations, man.
          Knee jerk? I just read it as I saw it, so yes, I suppose that could be construed as knee-jerk. As a certain AC so eloquently phrased it [slashdot.org], it did appear to be a rather cheeky way of posting a tech support request, but hey, if you can get away with it, then go for it. Personally, I think you should have been modded +5 Crafty-SOB :) Thankfully, whoever moderated my comment as 'Funny' had the insight to interpret my response in the spirit in which it was intended.

          This does, however, prompt an interesting question. Maybe /. could have a section devoted to technical queries?

          • As a certain AC so eloquently phrased it [slashdot.org], it did appear to be a rather cheeky way of posting a tech support request, but hey, if you can get away with it, then go for it. Personally, I think you should have been modded +5 Crafty-SOB :)

            Crafty or not, my post really wasn't a thinly veiled tech support plea. I'd been up and down and back and had even mapped that road. I pretty much considered my problem unsolvable, since there appeared to be a million ways to solve it. I never figured I'd get any kind of answer here I hadn't gotten elsewhere the few dozen times I'd gone looking. I'd googled, gone through newsgroups, asked on mailing lists, asked friends that do lots of RAID stuff at work, asked folks at a LUG, asked around at school, and -- back to my point -- looked through more than a couple of books. I got pretty much different answers every time, like I said, and none of them worked. So you can see where I'd rate my chances of getting an answer on Slashdot to be fairly slim. Besides, I could sneak in a tech support question much better than that. :-)

            Looking back, I can see how you came to think I was trying to get cheap tech support. The trouble is that I tend to err on the side of giving too much information. It's easy to start making something like that into a bug report. But my post was more of a "Yeah, well, if that tiny book is so great, does anyone know if it can explain this apparently unexplainable mystery? If so, I'm buyin' it..." I bought the book, BTW.

            Anyway, the horse has been well beaten by now.

            Thankfully, whoever moderated my comment as 'Funny' had the insight to interpret my response in the spirit in which it was intended.

            Heh... I get you. I just don't like accusations, and saw your reply as such. Maybe I saw it as questioning my intent (another pet peeve). I dunno. It's really no big deal; it's just a Slashdot post after all.

            This does, however, prompt an interesting question. Maybe /. could have a section devoted to technical queries?

            That's a good idea, but it might be hard to set up. You'd have to vet the people that did the answering, maybe like Google [google.com] does it. The whole thing would hinge on the quality and speed of the answers. You might be able to have the people who ask pay a small fee (two bucks? 5? 10?) and then the people that answer get a small kickback, or a Slashdot subscription or discontinued Thinkgeek stuff or karma or something. They could make answering questions like tossing rings onto bottles at the fair: the more you get, the higher up the shelves you get to pick your prize from. Those that wanted to get way into it could, those that pitched in here and there would get a small bennie.

            I think it'd work. There are probably lots of folks who see the Slashdot membership as more clueful than most (I'm taking the Fifth on that issue). In any case, it's normally good to have a lot of eyes look at a problem and there are lots of eyes here if nothing else.

            -B

  • by Slartibartfast ( 3395 ) <ken@@@jots...org> on Wednesday February 12, 2003 @11:36AM (#5288164) Homepage Journal
    While I'm certainly a proponent of "dead-tree" documentation, I have to take a moment to disagree with one of the statements made -- I'm sorry, but newsgroups, while perhaps containing out-of-date info, are (if it's a good newsgroup) capable of letting you know the current state of affairs. This is substantially -less- true of books. Case in point is Samba: it's *DARN* hard to know from the Amazon description (or wherever) which Samba books describe the current state (2.4 and above) of Samba, whereas the FAQs, newsgroups, etc., are fairly obvious about it. Bottom line? I'll take a good book any day, but when in doubt, I'll go with current info gleaned off the newsgroups and other online resources.
  • BIOS utilities (Score:5, Interesting)

    by tmark ( 230091 ) on Wednesday February 12, 2003 @11:37AM (#5288169)
    Why, the author asks, do makers of controller cards put all their BIOS utilities on DOS floppies which require us to find a DOS boot disk? Seriously, how many of us carry around DOS boot disks nowadays?

    Well, given Dell's recent announcements, I suppose fewer and fewer of us will be doing so.

    But really, the author's point is so moot that it's embarrassing: if it's my job to maintain a RAID array, and the utilities are on DOS floppies, of course I'm going to have access to a DOS boot disk. So what? Just how hard is it to carry such a thing around, and why is this a worthy thing to rail about in a book about RAID? If the author wastes too much time talking about stuff like this, the book can't be that useful - arggh, I've wasted too much of my own time already.

    • Why, the author asks, do makers of controller cards put all their BIOS utilities on DOS floppies which require us to find a DOS boot disk? Seriously, how many of us carry around DOS boot disks nowadays?

      But really, the author's point is so moot that it's embarrassing: if it's my job to maintain a RAID array, and the utilities are on DOS floppies, of course I'm going to have access to a DOS boot disk. So what? Just how hard is it to carry such a thing around, and why is this a worthy thing to rail about in a book about RAID? If the author wastes too much time talking about stuff like this, the book can't be that useful - arggh, I've wasted too much of my own time already.

      I thought it was an issue last week, when I was at a customer's site and needed one to flash a BIOS on an old Pentium. Google is your friend: "dos bootdisk" turns up bootdisk.com.

      I had more problems trying to boot off my new Asus A7V333 with Promise's 'RAID LITE' crap. The drives are found, but individually... I ended up using ataraid, but that just doesn't seem right. (Note: I've had a Promise Ultra 66 RAID running for two years now - installing the driver 'just worked'. But not with this 'Lite' version.)

  • by grub ( 11606 ) <slashdot@grub.net> on Wednesday February 12, 2003 @11:39AM (#5288186) Homepage Journal

    It's not that hard.

    - Power down the computer
    - Remove cover
    - Blow out all dust and insect husks
    - Spray in RAID
    - Put cover back on for 15 minutes
    - Remove cover again
    - Blow out insect husks
  • by xchino ( 591175 ) on Wednesday February 12, 2003 @11:52AM (#5288284)
    That onboard Promise RAID controller you dished out the extra $50 for on that new motherboard is not going to get you a nice hardware RAID 5. AFAIK they can only do 1, 0, 0+1, or 1+0. Also, I see people whining about software RAID as compared to hardware RAID. Running a striped set through software was nearly unfeasible a few years ago, but with the resources new machines have these days, the difference is almost negligible, as long as it doesn't have to fight for system resources. Let's not forget that software RAID is a lot cheaper than buying a RAID controller.

    At any rate, taking the view that hardware RAID is always the solution and software RAID is never the solution is just bad sysadministration.
  • Last time I tried software RAID under Linux (about a year ago), it was less than optimal. But the idea that somebody wrote a book about RAID under Linux and only spends 10 pages on software RAID is amazing - in a bad way. Hardware RAID, as many have pointed out, is the way to go if the state of software RAID has not progressed.

    And why would I buy this book, or any book on RAID, if I am going to use a hardware solution? If I have hardware then I am going to just make sure it has support and instructions for Linux and be done with it.

  • by erroneus ( 253617 ) on Wednesday February 12, 2003 @12:18PM (#5288559) Homepage
    When I decided to set up a RAID under Linux, I recalled seeing an icon in Webmin. I used Webmin almost exclusively in setting up the RAID. I didn't need any HOWTOs in the process of setting up this thing.

    So while there are good collections of information out there, there are also very good tools out there with which to accomplish useful tasks.

    I think it's precisely because HOWTOs are rarely if ever needed with Windows stuff that it still has an edge over Linux where the masses are concerned. So while it's nice that HOWTOs are out there, I think it's more important that good tools are out there that are easy and self-explanatory.
  • by Greedo ( 304385 )
    This guy really likes to review books [amazon.com].
  • EVMS [sourceforge.net] is IBM's version of RAID for Linux. It is natively available on Gentoo Linux. I've been running it on a few boxes with great success. The utilities make it a lot easier to set up RAID, LVM, etc. Definitely worth looking at for those interested.
  • All I want is a software RAID 5 array that I can use in both Windows AND Linux. Maybe it's possible. I dunno. I've never found any instructions.
  • by Peter H.S. ( 38077 ) on Wednesday February 12, 2003 @01:15PM (#5289077) Homepage
    3-4 years ago, when we decided to use hardware RAID on our Linux servers, we bought some DPT SmartRAID V hardware RAID controllers. Unfortunately, DPT was bought by Adaptec some time after. Adaptec has been really good about getting the driver included in the kernel, but the takeover seemed to delay this process, so the time in between was a rough ride.
    The lesson learned was: never have a production Linux system with (binary) drivers tied to a specific kernel or distro version.

    That said, we have been very happy with the controllers, and since at least two disks have died without warning, the expense has easily been worth it. Our systems are used 24/7/365, so every minute of downtime annoys somebody. RAID really makes me sleep better; restoring a server from a slow tape streamer at some ungodly hour, while people nervously check in asking when we will be up again, is something I really want to avoid.

    YMMV, but I think hardware RAID still has an edge over software RAID, mostly because I find it simpler to maintain in the long run.

    If you are into LVM's, FS tools, and software RAID, go to:
    http://evms.sourceforge.net/
    and _drool_. Still future stuff as far as production servers go, but nevertheless.

  • Tons of Dells with the PERC (aacraid/megaraid) controllers.

    The nice thing about hardware RAID is that, other than the driver for the SCSI card, the OS thinks there is just one drive sitting there. No configuration on the OS side.

    Also, the RAID is going before the OS even starts booting. If a disk dies, so what?

    Please correct me if I'm wrong, but if you have software RAID and the disk the OS/boot/RAID config files are on goes, you have a dead box.
  • At O'Reilly: mdadm [oreillynet.com].
    And I'd recommend the Enterprise Volume Management System [sourceforge.net] rather than LVM (Logical Volume Manager), simply because LVM seems to be being dropped as redundant (ironic, that :) as EVMS gets more effective, and I don't want the conversion work from LVM to EVMS if I can just do EVMS right now, see

  • complete waste (Score:2, Interesting)

    by Britz ( 170620 )
    I've just been through setting up a RAID system. I set up a file server that automatically backs up, every week, the data that the users on the network put on it via Samba. Since I only want to show up at the place every six months or so to check on the server, it needs to be bulletproof to the max and still cheap, because as social workers they don't have much money.

    I purchased a used P2 system with a stable motherboard and two IBM SCSI drives on an Adaptec controller. I installed Debian GNU/Linux stable and upgraded to the latest stable. Then I set up a software RAID and opted for XFS in case of a power failure. I decided against a UPS, because I hooked the machine up to the local power network, which is very stable since the server lives in Berlin, Germany, and I wanted to save the cost.
    Then I moved the root filesystem over to the RAID device. Up to that point everything was documented very well, except that I had heard ReiserFS doesn't work with software RAID and couldn't find that info on the net anymore. I would have used ReiserFS instead if I had had a reliable source, such as the book, telling me it was OK.
    The only thing I had problems with was how to make the system boot off the RAID device. Here the HOWTOs and man pages took contradictory stands on how to do it.

    I read this Slashdot article with some regret, because I thought the book could have saved me a lot of trouble. But the only section that gave me trouble also seems to confuse the author of the book. Now that is no help at all. So this book is a waste of time if you know how to use Google, which I had to learn painfully fast getting into Debian :-(, since documentation is the last thing those guys seem to think about.

    But since Debian is still by far the best system out there overall, I have no choice. If you start to rely on seemingly simple things, such as a reliable update of your system with very little hassle, then you are hooked.