Hardware RAID vs Software RAID: Your Opinions

By | 2008/06/14

I’ve been teaching software RAID on RHEL5 for some time now, and today I came to the realization that nearly every student I’ve ever taught who is currently using RAID is using hardware RAID.  Nobody seems to use software RAID, at least among the people I’ve run into.

Now, in my mind, the performance difference between the two (software vs hardware) can come down to how much you’re willing to spend on your hardware RAID controller.  A quick example:

Let’s say you put a cheap hardware RAID controller in your quad-core machine, which normally has a minimal load.  I would think the quad-core machine would have more than enough processing power to handle the RAID, compared to the cheap hardware controller.

On the flip-side, if you get a quality card that has RAID specific instruction sets it could likely perform even better than a quad-core machine.

Is this an accurate assumption?  I should mention that I have only really used software RAID, so I don’t have a lot of first-hand experience on the other end.  What I’m looking for is your experience.
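For context, here is roughly what the software RAID setups I teach look like; a minimal sketch using mdadm on RHEL5, with example device names you would adjust for your own disks:

```shell
# Build a three-disk RAID5 array (device names are examples).
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1

# Watch the initial resync and confirm the array is healthy.
cat /proc/mdstat
mdadm --detail /dev/md0

# Record the array so it reassembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```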

I’d really like to hear the community’s thoughts and experiences on hardware RAID vs software RAID.  Which do you use and why? (The why is what I’m looking for.)

33 thoughts on “Hardware RAID vs Software RAID: Your Opinions”

  1. Daniel Robitaille

    This just made me realize that all my servers use hardware RAID, and I have never actually used software RAID. I wasn’t involved in the purchasing of any of them, so I can always say it wasn’t my fault our group went that way :)

    So software vs hardware RAID is now another thing to add to my to-do list of things to improve my knowledge of.

  2. Thomas

    With a decent controller, disk I/O is very cheap. And writing the same block to two disks is a simple operation.

    For RAID5 or RAID6 it eats a bit more CPU. How much depends on bandwidth. To see how fast the CPU handles it (per core), check the boot log, where the various methods are benchmarked. If your I/O writes are well below that rate, it will eat only a fraction of the CPU.
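    A quick way to dig that benchmark back out of the kernel log (assuming the md driver is present, as on stock RHEL5-era kernels):

    ```shell
    # The md driver times its xor/parity routines at boot and logs
    # the per-core throughput of each method; grep them back out.
    dmesg | grep -iE 'xor|raid5|raid6'
    ```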

    But hardware RAID cards may have battery-backed memory, which could improve the response time for I/O writes. This may make them worthwhile even if only running a single disk, or when doing software RAID to a JBOD config on that controller.

  3. Jon

    On critical work servers, hardware RAID5 because the PHB requires it. On everything else, software RAID1 over hardware RAID1 because we’re rarely CPU/IO-limited and there are fewer un-recoverable failure modes.
    - I have had zero success replacing a RAID controller and having it work seamlessly with old drives. I don’t want to spend the money to buy two high-dollar controllers and shelve one, so software it is. Even a wholly new OS will understand half of a RAID1 pair.
    - I’ve had one flaky RAID controller trash all the drives; yeah, the CPU/memory/kernel can do that in software RAID mode, too, but it could also do that by sending bad data to the RAID controller, so I’m still better off.
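    The recoverability point is concrete with md: a single surviving member of a RAID1 pair can be inspected and started degraded on any Linux box (device names are examples):

    ```shell
    # Read the md superblock off a lone RAID1 member...
    mdadm --examine /dev/sdb1
    # ...then start the array degraded, with only that half present.
    mdadm --assemble --run /dev/md0 /dev/sdb1
    ```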

  4. mlissner

    I have always used software RAID because it’s what was in my computer when I was given it. I would rather use hardware RAID, since it would be an upgrade for my server, but figuring out yet another piece of hardware always seems complicated. With software RAID, everything is set to go when I install the OS, and I don’t have to think about figuring out drivers and such.

    Maybe I’m unnecessarily scared, but on the other hand, if it ain’t broke…

  5. Chris Samuel

    We tend to use software RAID anywhere where performance is important (local scratch space on clusters, for instance, or for cached local copies of databases that are rebuilt from a master). Don’t underestimate the performance boost you get: on some high-spec’d HP boxes at a client, I left a Bonnie++ run going for a few hours on H/W RAID and had to kill it; when we rebuilt the striped system using software RAID it completed in about 30 minutes (off the top of my head).

    We use hardware RAID anywhere where we would just want to pull out a failed drive and replace it without the OS needing to care about it.

    In some cases we use a hybrid system where we build multiple HW RAID 5 or 6 volumes on a large array and then use software RAID (striping or mirroring) to pull them into a single volume.
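    That hybrid layout can be sketched with mdadm; here /dev/sdb and /dev/sdc stand in for two volumes the hardware controller already exports (names are illustrative):

    ```shell
    # Stripe (RAID0) across two hardware RAID5/6 volumes to make
    # one large, fast logical device, then put a filesystem on it.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext3 /dev/md0
    ```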

  6. CombatWombat

    I have always used software RAID, after an absolute nightmare trying to wrangle a Promise hardware RAID controller into behaving on Red Hat 7.3. It never did settle down, so I went to software and IT JUST WORKED. Has ever since. Also, where I am (bottom of the world, turn left), the hardware cards are nigh on impossible to find. I would spend more time trying to find one than configuring the software RAID systems.

    Another reason for software is that, as you point out, a grunty enough machine will handle it without sneezing. The servers I have made all had power to spare, just to be sure.

  7. Robvdl

    On one of my friends’ PCs we are running both Windows and Ubuntu on two 80 gig drives. We would have liked to run hardware RAID0, because then both Windows and Linux would utilise the RAID and we would only have needed to partition the 160 gig RAID0 drive in two.

    However, Ubuntu doesn’t seem to recognise the NVidia onboard SATA RAID controller (it sees two 80 gig drives, not one 160 gig), and this is not the first RAID controller I have seen that Ubuntu won’t recognise. I had an IDE RAID controller which I purchased because it was supposed to be “Linux Compatible”, but Ubuntu didn’t recognise it either, so I blew away money buying the card; maybe it was only for Red Hat.

    In the end we set up a pretty complex partitioning scheme that ran Windows in non-RAID and Linux in LVM. It works, but it wasn’t exactly a pretty partitioning layout.

    My own machine also dual boots, as I am a heavy gamer (as well as Python hacker and Ubuntu fan). I would also like my Windows XP to utilise the RAID0 functionality so the games run super fast, so I pretty much need hardware RAID. I have been looking high and low for a hardware SATA PCI or PCI-e x1 RAID controller that is guaranteed to work in Ubuntu out of the box. I have asked on the Ubuntu forums before, but nobody seems to know.

    I don’t wish to buy another RAID card which claims to be Linux compatible, but doesn’t work out of the box with Ubuntu, does anyone know of any that will fit the bill?

  8. sharms

    Software RAID is the way to go for most enterprise systems, especially since you are going to want to run LVM on top of it anyway.

  9. Timo Zimmermann

    I always use software RAID.

    I never trusted RAID controllers: after two years the controller broke, there was no new controller available that could replace the old one and import the drives, and boom, that’s it for your data.

    Since there is ZFS in (Open)Solaris, FreeBSD and soon in OSX server I think we’ll see less RAID controllers.

  10. Ante

    Every technology has its use case. One can’t decide between those two, because both have advantages and disadvantages.

    It’s important to note that when talking about hardware RAID, we should talk only about the RAID controllers that cost more than $300, not about those on-board wannabe fake-RAID controllers.

    Software RAID just can’t beat hardware RAID’s options like hot swap of disks, battery-backed cache, live enlargement of logical disks… On the other hand, hardware RAID can’t beat software RAID’s price.

    On mission-critical systems I use only hardware RAID.

  11. Ante

    @sharms one can always use LVM on top of RAID, whether it’s software or hardware.
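    Either way the stacking looks the same from LVM’s point of view; with software RAID underneath, it is something like this (device and volume names are examples):

    ```shell
    # Use the md array as the LVM physical volume, then carve
    # logical volumes out of the volume group as usual.
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate --size 20G --name data vg0
    mkfs.ext3 /dev/vg0/data
    ```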

  12. Ante

    @Robvdl – any 3ware SATA RAID controller will work. It will cost you, of course, but that’s hardware RAID – it costs money.