AOA Forums


barneygumble742 28th April, 2005 10:18 PM

raid pci card
 
hi...let's say i have a storage system with an 80 gig hd for the OS and 4x250 gigs on a sata 150 controller on the pci bus. it's not pci-X, just regular old pci, 32-bit. i heard that pci cards actually limit the bandwidth because of the bus's limitations, yet so many systems have sata controllers on the pci bus. i also heard that having the raid on the mobo itself is better, but the maximum number of devices on most mobos is like 2. so what's the bandwidth on the above mentioned system with the sata 150 pci controller?

GrahamGarside 28th April, 2005 11:20 PM

well a new sata hard drive will have a read rate of about 65mb/s, maybe more. Standard pci has a max of 133mb/s, so two drives striped would easily reach that, and that's without other things going on over the bus. Onboard additional sata chips which use the pci bus have the same limitation.
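To put rough numbers on that, here's a back-of-the-envelope Python sketch; the 133mb/s figure is the theoretical PCI ceiling, and real transfers come in lower:

Code:

# Rough PCI saturation estimate using the figures from this thread.
PCI_BUS_MB_S = 133      # 32-bit/33MHz PCI, theoretical peak
DRIVE_READ_MB_S = 65    # ballpark sequential read for a 2005-era SATA drive

for drives in range(1, 5):
    demand = drives * DRIVE_READ_MB_S
    status = "saturated" if demand >= PCI_BUS_MB_S else "ok"
    print(f"{drives} striped drive(s): ~{demand} MB/s wanted, bus {status}")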

The sata ports on the southbridge/chipset won't have this limitation, but as you said most boards only have 2 sata channels native, other than nforce and I think the intel 9XX chipsets.

barneygumble742 29th April, 2005 06:23 PM

so getting a 5-channel controller card won't increase my performance?

or more specifically, where can i see the huge performance increase if i get a 5-channel controller card?

the main difference i heard between onboard and pci is that with the onboard, the main cpu is being used, which slows overall performance. correct?

GrahamGarside 29th April, 2005 06:29 PM

Yeah the main cpu will be taxed about 5-10% during heavy load, whereas a good hardware controller will have its own processing unit and additional cache.
You won't see a huge increase in performance really though, not for single user stuff, as most software is optimised to minimise disk writes during operation anyway. Of course, if you have background programs running that write to disk, the benefit will be more apparent.
Until the first pci-e cards come along there is little gain to be had from having more than 2 drives striped, other than the fact that 60-70mb/s is usually a drive's peak read rate and its slowest is usually around 40mb/s, so more than 2 striped would give a constant peak rate.

dolanenwindrift 29th April, 2005 06:48 PM

Honestly, if you have 4 or more drives, skip striping entirely and go for a RAID 5 setup. There are ways to set up RAID 5 within XP if your controller doesn't support it directly, and believe me, if you lose a drive you will be very happy you had redundancy. Let's see, that's basically 4 x 250GB, or roughly a terabyte? That's a lot of data to lose in one shot.

GrahamGarside 29th April, 2005 06:57 PM

That's the other good thing about controller cards: hardware raid 5 mode. You will notice a fair drop in performance using a cheap card or software to do it.

Aedan 30th April, 2005 09:54 AM

Hardware RAID 5 is typically slightly slower than software RAID 5. Why? Well, the usual hardware used on a RAID 5 card is a 100MHz processor. Compare that to the main CPU which is sitting there with GHz worth of speed, and you can quickly see why.

Additionally, most hardware RAID 5 cards have cache memory onboard (If it doesn't, it's probably not doing anything much in hardware). Whilst this sounds good in theory, you have to realise it's on the wrong side of the PCI bus to have much of an impact. Data that is cached on the RAID 5 card has to go across the 133Mbyte/sec PCI bus. Data that is cached in main memory only has to go across a 1+Gbyte/sec memory bus.
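As a toy illustration of why the cache's location matters, here's a quick Python comparison of how long a 64MB cached chunk takes to cross each path (the 1Gbyte/sec memory figure is the conservative end of the range above):

Code:

# Time to move a 64 MB chunk across each bus (theoretical peak rates).
CHUNK_MB = 64
PCI_MB_S = 133        # a RAID card's onboard cache sits behind this
MEM_BUS_MB_S = 1024   # main-memory cache sits behind this (1+ Gbyte/s)

print(f"via PCI bus:    {CHUNK_MB / PCI_MB_S:.2f} s")
print(f"via memory bus: {CHUNK_MB / MEM_BUS_MB_S:.3f} s")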

On the other hand, RAID 5 hardware tends to be a bit more reliable, as the RAID 5 control is separated from the main system. One thing to bear in mind is that typically RAID 5 doesn't offer great performance. RAID0 and RAID1 offer far higher performance levels than RAID 5 does.

barneygumble742 30th April, 2005 07:00 PM

so for a storage (1TB) pc on a home lan, getting a controller card is definitely worth it, to save myself from the hassle of lower performance and maintenance? the data will be read/written via 802.11g and 10/100mbps lines, so i shouldn't see a huge drop in response times?

i was told that the max bandwidth i can get out of the pci bus is 33mb/sec, so that might not be a wise idea.

thanks.

Aedan 1st May, 2005 12:43 AM

The PCI bus is capable of 133Mbyte/sec maximum theoretical bandwidth. Depending on the platform, it translates to either ~90Mbyte/sec or ~120Mbyte/sec.

If you're accessing the data via 802.11g, then you've got a maximum bandwidth of about 6Mbyte/sec. If you're using 100M ethernet, you have a maximum bandwidth of about 11Mbyte/sec. Neither of those two are going to push a PCI adapter hard!
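Those figures are easy to sanity-check; a quick Python conversion (the usable numbers sit below the raw line rate because of protocol overhead):

Code:

# Line rate in megabits/s versus rough real-world usable throughput.
links = {"802.11g": (54, 6), "100M ethernet": (100, 11)}  # (Mbit/s, ~MB/s usable)

for name, (mbit, usable) in links.items():
    raw = mbit / 8  # theoretical Mbyte/s
    print(f"{name}: {raw:.1f} MB/s raw, ~{usable} MB/s usable in practice")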

Proper hardware RAID controller cards are not cheap however. The basic rule of thumb is that if the card doesn't take some kind of memory (like SIMM or DIMM), it's probably not a hardware RAID controller, just a glorified IDE interface. Hardware RAID does not get you higher performance - software RAID is slightly faster than hardware RAID.

Either way, bear in mind that companies like Sun and Veritas have been using software RAID for years without any issues.

amarkarian 15th May, 2005 03:22 PM

I was thinking of installing this Ultra ATA 133/100 IDE RAID PCI Controller Card for two 20 gig hard drives on a home built server. My motherboard has no built in raid controller, so two questions: is this a good idea? And do I just plug it in and it works, or is it more complicated?

MONKEYMAN 15th May, 2005 03:33 PM

I have done it on an nforce2 board and it worked very well. Seek times were unaffected and write times had little improvement, but reads were a bit faster; I got 88mb/s with 2x80gb 8mb cache drives. You will need to reinstall your os from scratch and use the raid bios to set up the array, and you will also need to install the drivers when installing your os.

amarkarian 15th May, 2005 04:39 PM

Sorry to sound so stupid, but what exactly is the raid bios display? i am going to install Red Hat linux, is it different using that os?

MONKEYMAN 15th May, 2005 04:52 PM

not a clue on that one I'm afraid. the raid bios screen will show after your POST screen when you turn your pc on.

Aedan 15th May, 2005 08:42 PM

Unless you're using a fully hardware RAID card, it's best not to use the RAID supplied on the card. This is because the Linux drivers generally don't support the software RAID that those cards do.

There's a lot of information on the net on how to set up RAID using Linux's LVM or similar, which doesn't need the card's RAID to work at all (and will work with the cheap cards/onboard 'RAID' adapters acting as plain disk controllers).
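For the curious, here's a minimal sketch of what that looks like with the Linux md driver's mdadm tool (device names are placeholders, and this assumes mdadm is installed; LVM or the older raidtools are alternative routes):

Code:

# Sketch: build a 3-disk software RAID 5 with mdadm (run as root).
# /dev/sdb1 etc. are placeholder partitions; adjust for your own disks.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mdadm", "--create", "/dev/md0", "--level=5",
     "--raid-devices=3", "/dev/sdb1", "/dev/sdc1", "/dev/sdd1"])
run(["mkfs.ext3", "/dev/md0"])          # put a filesystem on the array
run(["mdadm", "--detail", "/dev/md0"])  # check state and rebuild progress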

shakey_shane 3rd June, 2005 01:23 PM

Raid?
 
What are Raid drivers, what do they drive? large HDDs?
I'm new to this jargon btw so don't mock.

from an extremely confused newbie

Aedan 3rd June, 2005 01:38 PM

RAID is a technology that's been around for a while, but originally was designed for big servers. To give you context, it stands for "Redundant Array of Inexpensive Disks". Basically, it's a way of talking to disks that combines them into one disk. Many RAID configurations are actually redundant, in that a hard disk can fail and the system carries on working.

Common RAID "levels" are:
RAID0: Two or more disks combined into a single 'volume'. No redundancy, but you get all your space. Hence 2x 80Gb disks give 160Gb of space.
RAID1: Two (or another even number of) drives combined into a single 'volume'. Drives are in 'mirrored' sets, so there are always two copies, but you only get half your space.
RAID5: A complex system that requires at least 3 drives, and splits data and error correction across the drives. Any one drive can fail, and the system will be ok. You get the space of all but one drive.
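In code form, the usable space works out like this (a small Python sketch; the min() reflects the usual rule that the smallest drive sets the usable size of every member):

Code:

# Usable capacity per RAID level, in GB; the smallest drive limits all members.
def usable_gb(level, sizes):
    n, per_drive = len(sizes), min(sizes)
    if level == 0:
        return n * per_drive          # all the space, no redundancy
    if level == 1:
        return (n // 2) * per_drive   # mirrored sets: half the space
    if level == 5:
        return (n - 1) * per_drive    # one drive's worth goes to parity
    raise ValueError("unsupported level")

print(usable_gb(0, [80, 80]))    # 160
print(usable_gb(1, [80, 80]))    # 80
print(usable_gb(5, [250] * 4))   # 750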

krazefinn 4th June, 2005 05:02 PM

2nd post here after lurking for months....does the third drive in a raid5 array also have to be equal size? Or should it be twice the size? offhand i would guess that if the first two, (say 80 g) drives are striped (into a 160), that third drive, to mirror the whole first pair, *should* be a 160 gig. Or is my cursory understanding impaired somehow? I'm also assuming that your raid card MUST officially support raid 5, or you are SOL, and furthermore, to avoid the master/slave slowdown (no concurrent read/writes on same channel) that one really requires a four channel...Thx for edifying me......

gedon 6th June, 2005 09:03 PM

@krazefinn
what i know about raid 5 is that all drives should have the same size, and one of your 3 or 8 or 12 disks can fail without losing data.

i'm still searching for some benchmarks comparing a raid0 setup and a raid5 (4 HDDs) setup.

i'm not sure if i should upgrade my sys or not. what i see is that speed goes down when 3 or more read/write accesses are running.

does raid 5 (4hdd@pci) have a constant datarate?

GrahamGarside 6th June, 2005 09:07 PM

Raid 5 needs all the disks to be the same size, just like 0 and 1. It uses parity data so that when a drive fails, even though a third of the data is gone (in a 3 disk setup), it can calculate the missing data from what's left.
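A toy Python illustration of that recovery trick (real controllers work per-stripe with rotating parity, but the XOR idea is the same):

Code:

# Parity is the XOR of the data blocks; XORing the survivors with the
# parity block rebuilds whatever block was lost.
drive_a = bytes([1, 2, 3, 4])
drive_b = bytes([9, 8, 7, 6])
parity = bytes(a ^ b for a, b in zip(drive_a, drive_b))

# drive_b dies; reconstruct its data from drive_a and the parity block
rebuilt = bytes(a ^ p for a, p in zip(drive_a, parity))
assert rebuilt == drive_b
print("recovered:", list(rebuilt))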

Gizmo 6th June, 2005 09:13 PM

Quote:

Originally Posted by krazefinn
2nd post here after lurking for months....does the third drive in a raid5 array also have to be equal size? Or should it be twice the size? offhand i would guess that if the first two, (say 80 g) drives are striped (into a 160), that third drive, to mirror the whole first pair, *should* be a 160 gig. Or is my cursory understanding impaired somehow? I'm also assuming that your raid card MUST officially support raid 5, or you are SOL, and furthermore, to avoid the master/slave slowdown (no concurrent read/writes on same channel) that one really requires a four channel...Thx for edifying me......

Sorry for not responding before now.

Eh, no. Err..yes. Umm.......what? Sorry, your question has really confused me. :)

In general, all drives in a RAID should be the same size, so if you currently have two drives that are 80G in a RAID0 or a RAID1, and you add a third drive, it should also be 80G. However, if you are adding the drive to an existing RAID1, then you will only be able to add the drive as a 'hot-spare', because RAID1 only works with 2 drives. For RAID0, depending on the controller, you may be able to build the drive into the array and expand the array, or you may have to flush all the data on the drives and reinitialize the array in the new configuration.

Note that I said 'in general'. If you have two 80G drives in a RAID and you can't get another 80G drive, but you do have a 160G drive, you can put the larger drive into the array, but you will only be able to use 80G of the storage space on the drive, unless your controller allows 'spanning' of dissimilar drives. I have only ever seen this done with a RAID0 configuration. Because of the way RAID works, I would be HIGHLY surprised if you could do it in any of the truly redundant modes (like RAID1 or RAID5).

