21st November, 2013, 07:41 AM
Gizmo
Chief BBS Administrator
 
Join Date: May 2003
Location: Webb City, Mo
Posts: 16,178

Quote:
Originally Posted by ThunderRd
I could be wrong, but I'm quite sure that if one drive fails in the 0+1 scenario, some of the mirrored component of the RAID is gone forever, and the array can't be rebuilt without it.
RAID 0+1 and RAID 1+0 are both redundant configurations, so as long as the array is simply degraded (and not offline) all of the data necessary to operate the array still exist.

RAID 1+0 is preferable to RAID 0+1, as RAID 1+0 gives you the same net performance, better degraded performance, and better fault tolerance: RAID 0+1 on 4 drives means that any single drive failure takes down that entire stripe and breaks the mirror (meaning only two drives are operational and there is NO redundancy), where RAID 1+0 on 4 drives can lose any single drive and still have both mirrors working. In addition, RAID 1+0 on 4 drives can tolerate any 2 drive failures that hit DIFFERENT mirror pairs (2 of the 3 possible second failures) and continue operating, where RAID 0+1 on 4 drives can tolerate 2 drive failures only under one specific condition: the second failure lands in the stripe that already failed (1 of the 3 possible second failures).
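
If it helps to see the combinatorics, here's a quick Python sketch that enumerates all six possible two-drive failures on a 4-drive array. The drive labels 0-3 and the pairing of (0,1)/(2,3) are my own hypothetical layout for illustration, not anything controller-specific:

Code:
from itertools import combinations

# Hypothetical layout: drives 0-3; the pairs (0,1) and (2,3) are the two
# mirrors in RAID 1+0, and the two stripes in RAID 0+1.

def raid10_survives(failed):
    # Alive as long as each mirror pair still has at least one member.
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    # Alive as long as at least one complete stripe is untouched.
    return not (failed & {0, 1}) or not (failed & {2, 3})

for name, survives in (("RAID 1+0", raid10_survives),
                       ("RAID 0+1", raid01_survives)):
    ok = [p for p in combinations(range(4), 2) if survives(set(p))]
    print(f"{name}: survives {len(ok)} of 6 two-drive failures: {ok}")

That prints 4 of 6 survivable combinations for RAID 1+0 versus 2 of 6 for RAID 0+1, which is just the 2-in-3 vs. 1-in-3 odds on the second failure expressed in numbers.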

Further, when recovering a degraded RAID 0+1, you have to rebuild the ENTIRE STRIPE, not just the failed drive, whereas with RAID 1+0, only the defective drive needs to be rebuilt.
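
To make the rebuild-scope point concrete, here's a toy continuation of the same hypothetical layout showing which drives have to be resynced after a single failure:

Code:
# Same hypothetical labels as above: drives 0-3, with (0,1) and (2,3) as
# the mirrors in RAID 1+0 and the stripes in RAID 0+1.
def rebuild_set(layout, failed_drive):
    if layout == "RAID 1+0":
        # Only the replaced drive is resynced, copied from its mirror partner.
        return {failed_drive}
    # RAID 0+1: the whole failed stripe is re-copied from the surviving
    # mirror, so every drive in that stripe gets rewritten.
    return {0, 1} if failed_drive in (0, 1) else {2, 3}

for d in range(4):
    print(d, rebuild_set("RAID 1+0", d), rebuild_set("RAID 0+1", d))

For RAID 1+0 the rebuild set is always one drive; for RAID 0+1 it's always two, i.e. twice the rebuild writes on a 4-drive array.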

Short story long, I see no valid reason to use RAID 0+1.

All that being said, the reason nVidia's tool doesn't let you rebuild the array is probably down to what I mentioned above: you have to rebuild the entire stripe, and so nVidia's tool probably sees the whole thing as hopeless. As I have no personal experience with the tool, I can only speculate, however.
