Hardware RAID vs Software RAID: Your Opinions

By | 2008/06/14

I’ve been teaching software RAID on RHEL5 for some time now, and today I came to the realization that nearly every student I’ve ever taught who currently uses RAID is using hardware RAID.  Nobody seems to use software RAID, at least not among the people I’ve run into.

Now, in my mind, the performance difference between the two (software vs hardware) can come down to how much you’re willing to spend on your hardware RAID controller.  A quick example:

Let’s say you put a cheap hardware RAID controller in your quad-core machine, which normally has a minimal load.  I would think the quad-core machine would have more than enough processing power to handle the RAID itself, compared to the cheap hardware controller.

On the flip side, if you get a quality card with RAID-specific instruction sets, it could likely perform even better than the quad-core machine.

Is this an accurate assumption?  I should mention that I have only really used software RAID, so I don’t have a lot of first-hand experience on the other end.  What I’m looking for is your experience.

I’d really like to hear the community’s thoughts and experiences on hardware RAID vs software RAID.  Which do you use and why? (The why is what I’m looking for.)

33 thoughts on “Hardware RAID vs Software RAID: Your Opinions”

  1. Daniel Robitaille

    This just made me realize that all my servers use hardware RAID, and I have never actually used software RAID. I wasn’t involved in the purchasing of any of them, so I can always say it wasn’t my fault our group went that way 🙂

    So software vs hardware RAID is now another thing to add to my to-do list of things to improve my knowledge of.

  2. Thomas

    With a decent controller, disk I/O is very cheap. And writing the same block to two disks is a simple operation.

    For RAID5 or RAID6 it eats a bit more CPU. How much depends on your bandwidth. To see how fast the CPU can handle it (per core), check the boot log, where the various checksumming methods are benchmarked; a rough sketch of checking this is at the end of this comment. If your I/O writes are well below that figure, it will eat only a fraction of the CPU.

    But hardware RAID cards may have battery-backed memory, which can improve the response time for I/O writes. This may make them worthwhile even if only running a single disk, or when doing software RAID across a JBOD config on that controller.
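
    To put rough numbers on that, a quick script like the sketch below can compare the kernel’s boot-time parity benchmark against a given write rate. This is only a sketch: it assumes a reasonably modern Python 3, that dmesg still holds the boot messages, and the exact wording of the benchmark lines varies between kernel versions.

    ```python
    #!/usr/bin/env python3
    # Rough estimate of how much CPU software RAID5/6 parity will cost:
    # compare the kernel's boot-time checksumming benchmark with a given
    # sustained write rate. Exact dmesg wording varies by kernel version.
    import re
    import subprocess

    SUSTAINED_WRITE_MBS = 50  # example figure; substitute your own workload

    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

    # Benchmark lines look roughly like "raid6: sse2x4  gen() 9876 MB/s" or
    # "xor: using function: ... (12345.000 MB/sec)".
    speeds = [float(m) for line in dmesg.splitlines()
              if "raid" in line or "xor" in line
              for m in re.findall(r"(\d+(?:\.\d+)?)\s*MB/s", line)]

    if speeds:
        best = max(speeds)
        print("Fastest parity/XOR routine: ~%.0f MB/s per core" % best)
        print("At %d MB/s of writes, parity needs roughly %.1f%% of one core."
              % (SUSTAINED_WRITE_MBS, 100.0 * SUSTAINED_WRITE_MBS / best))
    else:
        print("No RAID benchmark lines found in dmesg; check your boot log.")
    ```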

  3. Jon

    On critical work servers, hardware RAID5 because the PHB requires it. On everything else, software RAID1 over hardware RAID1 because we’re rarely CPU/IO-limited and there are fewer un-recoverable failure modes.
    – I have had zero success replacing a RAID controller and having it work seamlessly with old drives. I don’t want to spend the money to buy two high-dollar controllers and shelve one, so software it is. Even a wholly new OS will understand half of a RAID1 pair.
    – I’ve had one flaky RAID controller trash all the drives; yeah, the CPU/memory/kernel can do that in software RAID mode, too, but it could also do that by sending bad data to the RAID controller, so I’m still better off.

  4. mlissner

    I have always used software RAID because it’s what was in my computer when it was given to me. I would rather use hardware RAID, since it would be an upgrade for my server, but it always seems complicated to figure out yet another piece of hardware. With software RAID, everything is set to go when I install the OS, and I don’t have to think about figuring out drivers and such.

    Maybe I’m unnecessarily scared, but on the other hand, if it ain’t broke…

  5. Chris Samuel

    We tend to use software RAID anywhere performance is important (local scratch space on clusters, for instance, or cached local copies of databases that are rebuilt from a master). Don’t underestimate the performance boost you can get: on some high-spec’d HP boxes at a client, I left a Bonnie++ run going for a few hours on H/W RAID and had to kill it; when we rebuilt the striped system using software RAID it completed in about 30 minutes (off the top of my head).

    We use hardware RAID anywhere we just want to pull out a failed drive and replace it without the OS needing to care about it.

    In some cases we use a hybrid system where we build multiple HW RAID 5 or 6 volumes on a large array and then use software RAID (striping or mirroring) to pull them into a single volume.

  6. CombatWombat

    I have always used software RAID, after an absolute nightmare trying to wrangle a Promise hardware RAID controller into behaving on Red Hat 7.3. It never did settle down, so I went to software RAID and IT JUST WORKED. It has ever since. Also, where I am (bottom of the world, turn left), the hardware cards are nigh on impossible to find. I would spend more time trying to find one than configuring the software RAID systems.

    Another reason for software is that, as you point out, a grunty enough machine will handle it without sneezing. The servers I have made all had power to spare, just to be sure.

  7. Robvdl

    On one of my friends’ PCs we are running both Windows and Ubuntu on two 80 gig drives. We would have liked to run hardware RAID0, because then both Windows and Linux would utilise the RAID and we would only have needed to partition the 160 gig RAID0 drive in two.

    However, Ubuntu doesn’t seem to recognise the NVidia onboard SATA RAID controller (it sees two 80 gig drives, not one 160 gig), and this is not the first RAID controller I have seen that Ubuntu won’t recognise. I once purchased an IDE RAID controller that was supposed to be “Linux compatible”, but Ubuntu didn’t recognise it either, so I blew money on the card; maybe it was only for Red Hat.

    In the end we set up a pretty complex partitioning scheme that ran Windows in non-RAID and Linux in LVM. It works, but it isn’t exactly a pretty partitioning layout.

    My own machine also dual boots, as I am a heavy gamer (as well as a Python hacker and Ubuntu fan). I would also like my Windows XP to utilise the RAID0 functionality so the games run super fast, so I pretty much need hardware RAID. I have been looking high and low for a hardware SATA PCI or PCI-e x1 RAID controller that is guaranteed to work in Ubuntu out of the box. I have asked on the Ubuntu forums before, but nobody seems to know.

    I don’t wish to buy another RAID card which claims to be Linux compatible, but doesn’t work out of the box with Ubuntu, does anyone know of any that will fit the bill?

  8. sharms

    Software RAID is the way to go for most enterprise systems, especially since you are going to want to run LVM on top of it anyway.

  9. Timo Zimmermann

    I always use software RAID.

    I never trusted RAID controllers: after two years the controller breaks, there is no new controller available that can replace the old one and import the drives, and boom. That’s it for your data.

    Since there is ZFS in (Open)Solaris, FreeBSD and soon in OS X Server, I think we’ll see fewer RAID controllers.

  10. Ante

    Every technology has its use case. One can’t decide between those two, because both have advantages and disadvantages.

    It’s important to note that when talking about hardware RAID, we should talk only about the RAID controllers that cost more than $300, not about those on-board wannabe fake-RAID controllers.

    Software RAID just can’t beat hardware RAID’s options like hot-swapping of disks, batteries, live enlargement of logical disks… On the other hand, hardware RAID can’t beat software RAID’s price.

    On mission-critical systems I use only hardware RAID.

  11. Ante

    @sharms one can always use LVM on top of RAID, whether it’s software or hardware.

  12. Ante

    @Robvdl – any 3ware SATA RAID controller will work. It will cost you, of course, but that’s hardware RAID – it costs money.

  13. Serge van Ginderachter

    Overall, I prefer software RAID (on Linux, that is) because
    – more control
    – easy to migrate disks
    – no problems with replacing controllers
    – no controllers which might ruin your whole array when replacing a disk (happened to me on a Dell box)
    – most hosts have enough CPU power to handle the average loads in most of my environments

    Downside of software raid:
    – I wouldn’t use it when your apps need big iron (e.g. a heavily used and optimised database environment)
    – Bad support from most installers for easily booting from the second mirror drive when the first has failed; it’s also not so easy to configure GRUB to handle all possible failures (e.g. bad FS vs. bad disk) – a sketch of the usual GRUB-on-both-disks workaround follows below

    But I’m surprised to read the earlier comments being so positive on software RAID, especially on performance.
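
    On the boot-from-the-second-mirror problem above, the usual workaround is to install the boot loader on both members of the mirror so either disk can start the machine. A minimal sketch, assuming GRUB legacy (as on RHEL5-era systems) and that the mirror members are /dev/sda and /dev/sdb:

    ```python
    #!/usr/bin/env python3
    # Install GRUB legacy onto both members of a RAID1 pair so either disk can
    # boot. Disk names are assumptions; adjust to your layout.
    import subprocess

    GRUB_SCRIPT = """\
    device (hd0) /dev/sda
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/sdb
    root (hd0,0)
    setup (hd0)
    quit
    """

    # Feed the commands to the GRUB shell in batch mode; mapping the second
    # disk as (hd0) makes GRUB write a boot sector that works when that disk
    # ends up being the one the BIOS boots from.
    subprocess.run(["grub", "--batch"], input=GRUB_SCRIPT, text=True, check=True)
    ```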

  14. pyyhttu

    For my home system it’s now definitely software RAID, mainly because of ease of recovery and hardware portability. I researched this for some time before committing to formatting and setting up my RAID1.

    I recommend reading this excellent but long post from /.: http://ask.slashdot.org/comments.pl?sid=111305&threshold=1&commentsort=0&mode=thread&cid=9446712

    It analyzes the benefits of both software and hardware RAID.

    If you’re interested but not sure what it takes to set up and test RAID, then read this: http://users.piuha.net/martti/comp/ubuntu/en/raid.html

    And you’re ready to go.
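
    If that guide ever disappears, the core of a basic two-disk RAID1 setup comes down to a handful of commands; the sketch below wraps them in Python only to keep the examples in one language. The partition names and the mdadm.conf path are assumptions (Debian-family systems use /etc/mdadm/mdadm.conf instead):

    ```python
    #!/usr/bin/env python3
    # Minimal sketch: build a two-disk software RAID1, put a filesystem on it,
    # and persist the array definition. Partition names are placeholders.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    run("mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1")
    run("mkfs.ext3 /dev/md0")                        # filesystem of the era
    run("mdadm --detail --scan >> /etc/mdadm.conf")  # so the array assembles at boot
    run("cat /proc/mdstat")                          # watch the initial sync
    ```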

  15. Timucin Kizilay

    I have used hardware RAID on critical servers, with expensive RAID cards with lots of cache, a dedicated processor and battery backup.
    Now, in my own small company, we could only buy a small server with an onboard SATA RAID card; I assume it is fakeraid, so I’m using software RAID.
    It depends on the hardware: if there is a decent RAID card, I use it; if there is a cheap onboard fakeraid, I don’t bother to install drivers for it.

  16. troll

    It really is not about performance, it’s about reliability. A real hardware RAID card will do wonderful things for you on that front. Also, you can get one with several hundred megabytes of cache and disable all of the software-side caching, for some real performance boost.

  17. Herman Bos

    Software RAID:
    If using software RAID, I would only recommend RAID1. It’s quite ideal for this, and easy to recover as well.

    I didn’t feel particularly safe with software RAID5 (identifying which disk is the faulty one and swapping it is quite tricky).

    Fake RAID:
    Stay the hell away from these chips. In general, your first warning sign is that it’s cheaper, or onboard.

    Hardware RAID:
    Can handle many more RAID levels.
    You can easily tell which port is connected to which disk.
    Onboard cache.
    A bit more expensive.
    Has better performance, especially at levels other than RAID1.

    When you have lots of I/O or just a big volume, I would go for hardware RAID. For anything other than RAID1, also go for the hardware variant.

  18. Chris Samuel

    It is about performance when you’re running HPC codes that do lots of disk I/O, such as BLAST or some of the out-of-core solvers in engineering & computational chemistry.

    We had one person who was running a code that needed over 600 GB of scratch space.

  19. https://me.yahoo.com/a/ODuMKFx4s.1juPGbX6GmmxUNrQhsxt2YQg--#d92b4

    I set up my software RAID a few years ago on a production server.
    It has multiple roles: file server, web server, database server.
    I made it with 4 SATA disks on an Athlon 64 3000 (socket 754), and it’s still running fine today.
    I use RAID 5 with LVM on top of it. Back in the day I remember I pretty much had the combined speed of 3 drives, scaling linearly.
    The most intensive operation is writing, and I remember it reached a maximum of 30% CPU usage on that piece of hardware, which is old by today’s standards. With the newer SSE extensions and beefy CPUs, this is certainly going to be even less taxing on the system. As long as the kernel scheduler gives the RAID code the CPU cycles it needs with very low latency, I see no reason for software RAID to perform poorly.

  20. Thomas

    HW Raid
    + more likely to support hot-swap
    + off-load CPU (if you paid enough)
    + PHB-friendly, buzzword compliant
    + Nifty stuff like Cache backup (for when you care enough to buy the very best)

    SW Raid
    + cheaper
    + more flexible
    + less likely to be proprietary
    + less likely to have braindead limitations
    + less likely to have driver problems
    + more likely to have SMART support

    Summary:
    If you don’t know better, use SW raid ‘cos you’ll probably waste money buying a cheap HW Raid solution.
    If you do know better, and are prepared to spend enough money on HW Raid to get a decent system, then WTF are you doing reading this?

    Check out the link too. It’s mainly about NFS but has some high-end RAID stuff too.

    P.S.
    RAID != Backup
    You still need backups, and you still need to test them.

  21. Colin

    Hardware RAID is the best way to get fucked up the day the hardware RAID controller dies.

  22. Nathan Dbb

    All RAID is software RAID; the only question is whether the software runs on the host CPU or on a dedicated card. The dedicated cards have some advantages.

    Software RAID-5 under Linux has a single point of failure at boot time. Either the MBR and boot files are on a non-RAID boot drive, or they sit on the first disk of the RAID array. If the boot device fails, the system will not boot (hence the name “boot device”).

    A server could be booted off of cheap flash memory devices and then start/rebuild the system’s RAID array. This would make the system more reliable. It could also allow the system to send a call for help over the network in case of disk failure (a rough sketch of that idea follows at the end of this comment). Maintenance is then easy: people just replace drives and don’t touch the software.

    I would like to set up something that was zero-skill-maintenance like the ReadyNAS.

    Look at the Infrant (now Netgear) ReadyNAS products. These products are Samba/NFS/FTP/RSYNC/webDAV, DHCP, and print servers that feature disk-failure-resistant storage. The downside is that they are slow and expensive.

    http://www.readynas.com/?p=214

    The X-RAID appears to be just software RAID5 (ext3 + lvm2 + md + mdadm) plus scripts to start RAID rebuilds. It is easy: just put drives in and the array expands or rebuilds a lost drive. If the OS is on the flash memory, it is less likely to fail, so the system will boot and run the detection/rebuilding scripts.

    I don’t know if this is best done with chroot switching, odd mount points & directory structures, or live-CD-like (and Eee-PC-like) Union-FS tricks.

    Any input?
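
    On the “call for help” idea above: mdadm --monitor already does this properly, so the sketch below is only an illustration of the principle, scanning /proc/mdstat for a degraded array and mailing a warning. The addresses and the local mail server are placeholders.

    ```python
    #!/usr/bin/env python3
    # Watch /proc/mdstat for a degraded md array and send a warning email.
    # Illustrative only; mdadm --monitor is the production-grade way to do this.
    import re
    import smtplib
    from email.message import EmailMessage

    with open("/proc/mdstat") as f:
        mdstat = f.read()

    # A healthy two-disk RAID1 shows "[UU]"; an underscore marks a missing
    # member, e.g. "[U_]" when one drive has dropped out.
    degraded = [line for line in mdstat.splitlines()
                if re.search(r"\[U*_+U*\]", line)]

    if degraded:
        msg = EmailMessage()
        msg["Subject"] = "RAID array degraded"
        msg["From"] = "raid-monitor@example.com"   # placeholder address
        msg["To"] = "admin@example.com"            # placeholder address
        msg.set_content("Degraded arrays:\n" + "\n".join(degraded) + "\n\n" + mdstat)
        smtplib.SMTP("localhost").send_message(msg)  # assumes a local MTA
    ```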

  23. Robvdl

    Ante: thanks for this. I had a look at my local supplier and they do stock 3ware RAID cards, very nice I must say, but as you said they are quite expensive; then again, you get what you pay for.

    Chris Samuel: thanks for this. I stumbled upon dmraid last week while checking the Ubuntu 8.10 blueprints; although it doesn’t seem like any work has been done on this blueprint just yet, it has been approved and the priority is set to high. I hope this will make it into Intrepid.

    https://blueprints.launchpad.net/ubuntu/+spec/dmraid-support

  24. Brendan

    I use an Adaptec 2100S RAID card for a RAID 5 of U160 SCSI disks at home. At work all our servers use hardware RAID 1 with a hot spare; new ones are on SAS and older ones are on SCSI. Some Dell 1850s only have a single RAID1.

    We always buy the hardware RAID controller, because Dell basically flogs it off for next to nothing, and it supports RAID 1 with a hot spare, which is what we normally use. I have used software RAID 1 before (both on the motherboard of an Abit NF7-S with the Promise Sil3112 controller and on an Asus A8V Deluxe with a VIA VT8237 controller).

    Motherboard RAID (all those nForce, Promise and VIA chipset boards; some Tyan/Supermicro boards actually have real RAID chips from LSI and Adaptec onboard) is worse than software RAID in my experience. It is slow, buggy, proprietary and a real hassle to recover, and doesn’t do half the things a real card does either.

    I’ve used software RAID 0 as well, and it worked flawlessly in SuSE Linux, and I still use software RAID 1 in SuSE 9.3 with an Adaptec SATA controller (a non-RAID controller).

    Hardware RAID is great if you have the money, and the space (my Adaptec 2100S is a full PCI-X card; it’s huge in my ATX case). It provides exotic RAID modes: RAID 5/6/10/01/25/50… you get the idea. There is also a battery-backed cache, so my card has 128 MB of RAM on it just for the disks. The new Dell 1950s with the PERC 6/i LSI controller come with 256 MB of cache, which is really quite nice.

    I think it all boils down to cost: if you can afford it, it’s awesome and works flawlessly; if you can’t, ignore the RAID on your motherboard (it isn’t worth bugger all) and use software RAID.

  25. syed

    I would say hardware RAID for RAID5. I have run software RAID on 4 x 1 TB SATA drives with average writes of 20-27 MB/s; adding a Smart Array P800 gave me 35-47 MB/s.

    This is all on a SuSE 10.1 box. The downside I see with hardware RAID on Linux is that when you add a drive to extend the array on the controller side, the OS doesn’t let you do any resizing of the drive array.

    Is there a way round this or is this a limitation?

  26. Brett

    For servers/production machines, we have all hardware RAID, RAID5 mostly. Drives have failed, and thus far all we did was put a new drive in, rebuild, and we were back in business in no time. It does depend mostly on the card. They do get quite steep, but there is a reason for it. Don’t get a 100 dollar card and expect miracles to happen.

    For home use I use software RAID0 on XP 64. It is amazingly faster, and I really have no idea why someone would say software RAID0 offers no better performance. It’s the difference between night and day. I use Acronis 2009 for my data backups, so I don’t really need a RAID5; I had a RAID 0+1 before, and it worked just fine.

    I really have to use software RAID now because I want to install Ubuntu 7.10 on my RAID0 for a dual boot, and with hardware RAID… I don’t even want to think about the headache I would get trying that.

  27. Pieter

    If I may suggest something: try RAID 10 instead of RAID 5. Much better write speeds and more reliable.

  28. Jon

    I've been using software RAID 5 on Windows for a while and it's worked great. I just recently switched to using Linux for my RAID needs, mostly for flexibility reasons. It is true that write speeds on software RAID 5 aren't that good, but I'm not sure a hardware controller would be that much faster.

    For me, cost was the key factor that easily swayed me towards software RAID. The cost saving was twofold: not only did I not have to buy a RAID controller, but I could also make better use of my disks. Linux allows for maximum use of disks by letting users add to existing RAID arrays and mix different RAID arrays together using LVM. This allowed me to make a RAID 5 array and a RAID 1 array and have them used as one large drive; a rough sketch of that layout follows below.

    If anybody is interested, here's a link to a tutorial on how to make a RAID 0, 1, 5, 6 or 10 array in Fedora Linux with a GUI. Also, if somebody doesn't want to move away from Windows, here is a tutorial on how to set up a RAID 0, 1, 5 or JBOD in Windows XP Pro SP3.
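
    The pooling step is just LVM layered over the two md devices. A minimal sketch, assuming the arrays already exist as /dev/md0 (RAID 5) and /dev/md1 (RAID 1); the volume names, mount point and filesystem are examples only:

    ```python
    #!/usr/bin/env python3
    # Pool an existing RAID 5 array and RAID 1 array into one large LVM volume.
    # Device and volume names are placeholders.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    run("pvcreate /dev/md0 /dev/md1")          # make both arrays LVM physical volumes
    run("vgcreate bigvg /dev/md0 /dev/md1")    # pool them into one volume group
    run("lvcreate -l 100%FREE -n data bigvg")  # carve one large logical volume
    run("mkfs.ext3 /dev/bigvg/data")           # filesystem of the era
    run("mkdir -p /srv/data && mount /dev/bigvg/data /srv/data")
    ```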

  32. Tomas

    Hi,
    I have noticed that software RAID is faster than hardware RAID on Red Hat 5.8 systems.
    The hardware RAID controller card we tested was an HP P812 card with 1 GB of memory.

    One interesting note I read in another article was that the CPU in the system is probably faster than the one on the controller card.
    Another observation is that with CFQ (the I/O elevator) read performance is faster than write; with DEADLINE it is the opposite!?

    I have involved both HP and Red Hat to explain this strange behaviour.
    If someone has noticed the same, then please update me.

    Regards Tomas

Comments are closed.