  #1 (permalink)  
Old 13th May, 2006, 06:21 AM
cadaveca
Member/Contributor/Resident Crystal Ball

PCI-E Peer-to-Peer Write.

Crossfire, SLi...and the above.

thoughts?
  #2 (permalink)  
Old 13th May, 2006, 02:18 PM
Gizmo
Chief BBS Administrator

Hmm... my first thought is, do you really want all that data traveling across your main data bus for the card? Any bandwidth you chew up for Crossfire/SLI comes out of the bandwidth you have available for regular CPU/memory-to-graphics-card communications.
  #3 (permalink)  
Old 13th May, 2006, 02:35 PM
cadaveca
Member/Contributor/Resident Crystal Ball

All what data?

...and how does a point-to-point link across the PCI-E bus have anything to do with the CPU and memory subsystems?

Peer-to-Peer.


Um, yeah, what happened to AGP 8x being "far more than enough"? Isn't a 32x or 16x pci-e even more?


And why is this a bios option?
  #4 (permalink)  
Old 13th May, 2006, 02:41 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
All what data?

...and how does a point-to-point link across the PCI-E bus have anything to do with the CPU and memory subsystems?

Peer-to-Peer.


Um, yeah, what happened to AGP 8x being "far more than enough"? Isn't a 32x or 16x pci-e even more?


And why is this a bios option?
Err, doesn't the CPU send data to the card for textures and such? Or does the video card just magically manufacture that stuff out of the ether?

As for Peer-to-Peer, riddle me this. If my two graphics cards are busy yammering at each other, how do I go about talking to either one of them? All Peer-to-Peer means is that the conversation between the video cards won't keep me from talking to my hard-drive, which is what happens now on the PCI bus.

Now, it may be that there really isn't all that much data being sent from the CPU to the video card, I really don't know. But if there isn't all that much data going from the CPU/Memory to the video card, then why do we need PCIe-x16? For that matter, why did we need AGP?
  #5 (permalink)  
Old 13th May, 2006, 02:49 PM
cadaveca
Member/Contributor/Resident Crystal Ball

x1800 GTO dongleless.
x1600 dongleless.
x1300 dongleless.

6200 bridgeless.
6600 bridgeless (don't forget the 6600s that had no bridge connector; change the ASIC ID to match one that does... and SLI works).
6800 bridgeless.
7200 bridgeless.
7600 bridgeless.
7800 bridgeless.
7900 bridgeless.
In fact, all nVidia SLI cards work without the connector in SLI mode, albeit with a performance hit.


And the final thing...

7900 SLI is FASTER on the Crossfire 3200 chipset WITHOUT the bridge PCB than on nForce4 32x WITH the bridge PCB.

Break out the math: how much bandwidth does AGP have?

How much in an 8-lane and a 16-lane PCI-E config?

And instead of answering with more questions, find out how much bandwidth the SLI connector has.
  #6 (permalink)  
Old 13th May, 2006, 04:00 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
In fact, all nVidia SLI cards work without the connector in SLI mode, albeit with a performance hit.
I never said that you couldn't do something like SLI without a dedicated bus. I asked the question "Would you really want to?". It's all about bandwidth and latency. If I'm using the PCIe bus to talk from the CPU to the video card, then the two video cards can't talk to each other.

Quote:
Originally Posted by cadaveca
7900 SLI is FASTER on the Crossfire 3200 chipset WITHOUT the bridge PCB than on nForce4 32x WITH the bridge PCB.
So? It is entirely reasonable that SLI on a lightly loaded PCIe bus might be faster than on a dedicated but lower bandwidth bus. What happens when you start shoveling a lot of other traffic across that PCIe bus, though? Although PCIe is a point-to-point bus, would I not want to send traffic to the card from elsewhere? If I'm already chewing up a chunk of bandwidth for just basic video generation because of SLI, then I go adding additional data transfers for extra texture data and so on, isn't it possible that the additional data transfers will interfere with the video traffic?

BTW, you've admitted in the previous post that SLI with the bridge connector on nVidia hardware is faster than SLI without it on nVidia hardware. Clearly, there's a performance advantage there somewhere for the dedicated SLI bus.

Quote:
Originally Posted by cadaveca
Break out the math: how much bandwidth does AGP have?

How much in an 8-lane and a 16-lane PCI-E config?

And instead of answering with more questions, find out how much bandwidth the SLI connector has.
AGP 1x was simply a 66 MHz PCI implementation, and offered 266 MB/s maximum transfer. AGP 8x is still 66 MHz PCI, but using 4 clocks that are 90 degrees out of phase, double pumped, resulting in a 2,133 MB/s maximum transfer rate. Practical transfer rates are rather lower, although exactly how much lower depends on the type of transfers taking place.

PCIe offers a maximum transfer rate of 250 MB/s per lane. Thus, PCIe 8x offers performance on par with AGP 8x, and PCIe 16x offers twice that. A key difference, though, is that the bandwidth is in EACH direction (send and receive) on PCIe, whereas it is aggregate bandwidth on AGP.

The SLI connector is, from what I understand, basically a PCIe 4x implementation, offering about 1 GB/s in data transfer.
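
To put the arithmetic in one place, here is a quick back-of-the-envelope sketch (Python; the SLI connector width is my estimate above, not a published figure):

Code:
# Back-of-the-envelope peak-bandwidth comparison (commonly quoted figures;
# the SLI connector width is an assumption, not a spec number).
AGP_1X_MB_S = 266          # AGP 1x: 66 MHz x 32-bit
PCIE_LANE_MB_S = 250       # PCIe 1.x: per lane, per direction

agp_8x = AGP_1X_MB_S * 8            # ~2133 MB/s, aggregate
pcie_x8 = PCIE_LANE_MB_S * 8        # ~2000 MB/s, each direction
pcie_x16 = PCIE_LANE_MB_S * 16      # ~4000 MB/s, each direction
sli_link = PCIE_LANE_MB_S * 4       # ~1000 MB/s, if it really is ~4 lanes

print(f"AGP 8x        : {agp_8x} MB/s aggregate")
print(f"PCIe x8       : {pcie_x8} MB/s per direction")
print(f"PCIe x16      : {pcie_x16} MB/s per direction")
print(f"SLI connector : {sli_link} MB/s (assumed ~4 lanes)")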

So, back to my original questions:

Quote:
Originally Posted by gizmo
Err, doesn't the CPU send data to the card for textures and such? Or does the video card just magically manufacture that stuff out of the ether?

As for Peer-to-Peer, riddle me this. If my two graphics cards are busy yammering at each other, how do I go about talking to either one of them? All Peer-to-Peer means is that the conversation between the video cards won't keep me from talking to my hard-drive, which is what happens now on the PCI bus.

Now, it may be that there really isn't all that much data being sent from the CPU to the video card, I really don't know. But if there isn't all that much data going from the CPU/Memory to the video card, then why do we need PCIe-x16? For that matter, why did we need AGP?
  #7 (permalink)  
Old 13th May, 2006, 04:25 PM
cadaveca
Member/Contributor/Resident Crystal Ball

Quote:
Originally Posted by gizmo
I never said that you couldn't do something like SLI without a dedicated bus. I asked the question "Would you really want to?". It's all about bandwidth and latency. If I'm using the PCIe bus to talk from the CPU to the video card, then the two video cards can't talk to each other.
Assuming that all the PCI-E bandwidth was actually being used.

What was the purpose of flipping the paddle in SLI, then?



Quote:
Originally Posted by gizmo
BTW, you've admitted in the previous post that SLI with the bridge connector on nVidia hardware is faster than SLI without it on nVidia hardware. Clearly, there's a performance advantage there somewhere for the dedicated SLI bus.
Um, WHAT? Better spell that one out for me. Faster on Crossfire without a bridge connecting the cards than on nForce4 with the bridge. According to you, that's missing 4 PCI-E lanes... yet on the Crossfire chipset, this does not matter. Are you trying to say that the 32x SLI config is truly only 28 lanes, with 4 of them through the bridge PCB? Or just to distract everyone from what I was really saying? You refer only to one config... and not the same one that I do.

Or... is it not needing TWO chipsets for 32 lanes, and the link between those chipsets, that is causing SLI-on-Crossfire to show better numbers?


So, on a 16-lane config we have 4 GB/s (x2 directions), roughly 4x that of AGP.

We have 16x PCI-E video cards working in 8x slots. Flip a paddle and a peer-to-peer link is created (not bi-directional, because of patents, I hear). That creates a need for more bandwidth, which is supplied by the external connector.

On a 32-lane config we have 8 GB/s (x2 directions), roughly 8x that of AGP.

We have 16x PCI-E video cards working in 16x slots, no paddle flipping, and a performance increase (bidirectional peer-to-peer working, and enabled in the BIOS). There are still 8 lanes to each card dedicated to CPU/memory, as is the case with the first config.

So, as before, I don't see where your complaint about lack of bandwidth comes from... it's all still there. This is why the use of the bridge PCB, or the lack of it, is so important. It's necessary for the 8x8 config, but not for the 16x16. And who holds the patents for a single-chip, 32-lane config?
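
To lay out the lane budget I'm describing, a rough sketch (the 8-lanes-for-CPU split per card is my reading of the argument, not a spec figure):

Code:
# Hypothetical lane budgets for the two configs being discussed.
# The CPU-vs-peer split per card is an assumption, not a spec number.
PCIE_LANE_MB_S = 250  # PCIe 1.x per lane, per direction

def config(total_lanes_per_card, lanes_reserved_for_cpu):
    peer = total_lanes_per_card - lanes_reserved_for_cpu
    return {
        "cpu/mem per card (MB/s)": lanes_reserved_for_cpu * PCIE_LANE_MB_S,
        "peer-to-peer per card (MB/s)": peer * PCIE_LANE_MB_S,
    }

# 8x8 config: each card in an x8 slot; if all 8 lanes go to the CPU,
# nothing is left over, hence the external bridge connector.
print("8x8  :", config(8, 8))

# 16x16 config: each card in an x16 slot; with 8 lanes kept for CPU/memory,
# 8 lanes per card could, in principle, carry peer-to-peer writes.
print("16x16:", config(16, 8))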

But what's that you say? Not using the connector on nForce4 SLI 32x still results in a performance loss? Well, why's that?

And why doesn't that happen on the Crossfire chipset... why does it actually get a wee bit faster (2-3%) WITHOUT the bridge when using the Crossfire chipset?
Last edited by cadaveca; 13th May, 2006 at 04:32 PM.
  #8 (permalink)  
Old 13th May, 2006, 05:03 PM
cadaveca
Member/Contributor/Resident Crystal Ball

It gets more confusing...

SuperTiling... a Crossfire exclusive...

But tiling was originally a technology of Gigapixel, who were purchased by 3Dfx... who were then bought by nVidia.

But nVidia didn't do a full acquisition of 3Dfx; they merely bought the patent rights, engineering designs, and technologies.

Seems to me nVidia holds the rights to SuperTiling... but it's a Crossfire "exclusive".
  #9 (permalink)  
Old 13th May, 2006, 08:20 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
Or just to distract everyone from what I was really saying?
Cad, I really don't know what is up with you lately. My purpose in life is not to obfuscate the truth, or to deride or ridicule those who have a different opinion from mine. Yet this is the second time you've made snide comments in my direction when we've engaged in a technical discussion.

Tell you what. I'll leave you alone and you leave me alone. Ok?
  #10 (permalink)  
Old 13th May, 2006, 10:48 PM
cadaveca
Member/Contributor/Resident Crystal Ball

All apologies, Gizmo, it was not directed at you at all. Your comment directly relates to how nVidia portrays the scenario... an SLI motherboard, using the bridge PCB, is best, but really, Crossfire chipsets, with the bridge, are far better.

It strikes me as odd, however, when nVidia says that you need the connector, and their chipset, and goes as far as writing drivers that exclude the competitor's chipsets, because "it performs best as a platform".

nVidia is about to release a new platform shortly... one that will see a faster PCI-E bus, with all chipsets in the platform connected together to bring the performance boost, through the use of exclusive drivers.

I've slowly come to realize that once a tech company makes their tech "exclusive"... they are almost declaring that someone does part of their platform better, and "we aren't gonna let on."

It's this attitude, Gizmo, that my anger is directed at, not you. Unfortunately, you were the messenger.
  #11 (permalink)  
Old 14th May, 2006, 03:45 AM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
All apologies, Gizmo, it was not directed at you at all.
Apology accepted in the spirit in which it was given.

Quote:
Originally Posted by cadaveca
Your comment directly relates to how nVidia portrays the scenario... an SLI motherboard, using the bridge PCB, is best, but really, Crossfire chipsets, with the bridge, are far better.
I think which is 'best' rather depends on the scenario. As I mentioned previously, if you are already sucking bandwidth on the PCIe bus for basic video stuff, and then you try to communicate something else with the card on top of it, there MAY be issues with that. I can quite easily cause issues with VOIP on a 100 megabit network while only pulling about 20 megabits/second of bandwidth, because that 20 megabits adversely affects the latency of the network, and VOIP is very sensitive to latencies. Video is probably even less tolerant of such latencies than voice applications like VOIP.
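
A toy calculation of that latency effect, just to illustrate the mechanism (the frame sizes here are assumptions for the example):

Code:
# Toy serialization-delay example: why modest bandwidth use can still
# add latency and jitter to a latency-sensitive stream.
LINK_MBIT_S = 100          # 100 Mbit/s Ethernet
BULK_FRAME_BYTES = 1500    # a full-size data frame (assumed)
VOIP_FRAME_BYTES = 200     # a small voice frame (assumed)

def serialization_ms(size_bytes, link_mbit_s):
    # Time to clock one frame onto the wire, in milliseconds.
    return size_bytes * 8 / (link_mbit_s * 1_000_000) * 1000

# If a voice frame queues behind even one bulk frame, it waits this long:
wait = serialization_ms(BULK_FRAME_BYTES, LINK_MBIT_S)
print(f"one bulk frame ahead adds ~{wait:.2f} ms of delay")

# Queue a handful of bulk frames at each hop and the jitter adds up,
# even though the bulk stream uses only a fraction of the link.
print(f"ten bulk frames ahead: ~{10 * wait:.1f} ms")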

Now, is this a realistic consideration with the PCIe bus? Quite frankly, I don't know. But I can understand where the nVidia engineers might have believed that having the dedicated SLI bus would have been a good idea. Of course, a good engineer would have done some modeling and maybe even prototyping to test the theory. It may even be that the engineers originally said "Hey, we can do this all across the PCIe bus" and some marketing goon said "could we use a dedicated bus like the VESA feature connector, so that it could be proprietary?".

Quote:
Originally Posted by cadaveca
It strikes me as odd, however, when nVidia says that you need the connector, and their chipset, and goes as far as writing drivers that exclude the competitor's chipsets, because "it performs best as a platform".
I'll agree that this smells.

Quote:
Originally Posted by cadaveca
I've slowly come to realize that once a tech company makes their tech "exclusive"... they are almost declaring that someone does part of their platform better, and "we aren't gonna let on."
Not necessarily that someone is doing part of their platform better, but that someone MIGHT do part of their platform better, if they don't put some roadblocks up.

Dude, this is the whole PREMISE of proprietary hardware and software. It's why the industry is fighting open source so freaking hard. When people can actually scrutinize the nitty-gritty details of your work and point out what's wrong with it, people with large egos suddenly feel very small, and companies that charge exorbitant prices for mediocre tech suddenly have to work for a living.
  #12 (permalink)  
Old 14th May, 2006, 04:36 AM
cadaveca
Member/Contributor/Resident Crystal Ball

I think, Gizmo, the part you are missing from what I see...the part that has me confused...

I agree wholeheartedly about traffic causing confusion, and this is even more important with PCI-E, compared to PCI, because of how IRQs are managed, and even more so how Windows manages IRQs... because I have yet to find a board that sticks to the manual IRQ assignments in the BIOS once Windows is loaded.

Anyway, the point I'm getting to is the whole peer-to-peer scenario. This aspect completely disregards any current traffic that may be on the bus... because it's not on the same working bus that the CPU uses... it's a peer-to-peer link using the PCI-E spec, and not a peer-to-peer link using the PCI-E bus.


And what the hell is wrong with working? What else ya gonna do?
Last edited by cadaveca; 14th May, 2006 at 04:44 AM.
  #13 (permalink)  
Old 14th May, 2006, 04:44 AM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
Anyway, the point I'm getting to is the whole peer-to-peer scenario. This aspect completely disregards any current traffic that may be on the bus... because it's not on the same working bus that the CPU uses... it's a peer-to-peer link using the PCI-E spec, and not a peer-to-peer link using the PCI-E bus.
Eh, I think you are massively confused here about what peer-to-peer really means:

As I know you are well aware, the whole idea of peer-to-peer is that card A and card B can talk to each other via a dedicated link, without taking away bandwidth from other cards on the bus. It's rather like a network switch for ethernet, if it helps to think of it in those terms.

However, the thing that is not at all obvious is that the cards can only carry on one conversation at a time. I may have dedicated phone lines to everyone that I want to talk to, meaning that no one ever gets a busy signal when they want to call me, but I can only talk to one person at a time, no matter how hard I try. If I'm very fast and clever, I may be able to manage several conversations at one time so that, TO THE PERSON ON THE OTHER END, I appear to be talking only to them, but the fact remains that I am only able to talk to one person at a time, and if they say something that requires my attention, they may have to wait until I finish the conversation I am currently engaged in on another line.
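
A tiny sketch of that idea, one device round-robining a single link between two peers (purely illustrative, nothing PCIe-specific about the numbers):

Code:
# Illustrative only: one device time-slicing a single link between two
# peers. Each peer sees steady progress, but the device can only service
# one transfer at any instant, so each transfer takes the full total time.
LINK_MB_S = 1000                        # pretend the link moves 1000 MB/s
requests = {"cardB": 400, "CPU": 400}   # MB outstanding to each peer (made up)

time_ms = 0
while any(v > 0 for v in requests.values()):
    for peer in requests:               # round-robin, 1 ms time slices
        if requests[peer] > 0:
            requests[peer] -= LINK_MB_S / 1000   # MB moved in 1 ms
            time_ms += 1

# 800 MB total at 1000 MB/s takes 800 ms; each peer waited through the
# other's slices, so each 400 MB transfer took ~800 ms end to end.
print(f"total time: {time_ms} ms")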

Does that help any?
  #14 (permalink)  
Old 14th May, 2006, 05:56 AM
cadaveca
Member/Contributor/Resident Crystal Ball

Not really. An interface with the GPU's dispatch processor, and one with the memory controller, solves the issues. Or simply two different links... or how about 8?
  #15 (permalink)  
Old 14th May, 2006, 03:53 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
Not really. An interface with the GPU's dispatch processor, and one with the memory controller, solves the issues. Or simply two different links... or how about 8?
Eh, what?

I can have as many PCIe interfaces on the graphics card as I want, but I've still only got one connector on the mobo. Even if that connector is a 16x connector with a 16x link, that doesn't mean I can handle 16 conversations; it just means I can handle one conversation 16 times as fast. I can still only talk to the other video card OR the CPU, not the other video card AND the CPU.

What am I missing here?
  #16 (permalink)  
Old 14th May, 2006, 04:06 PM
cadaveca
Member/Contributor/Resident Crystal Ball

Well, what happens when the card switches to an 8x connection instead of 16x... does it get half the information?

More lanes does not make for FASTER... it makes for MORE. Maybe this is why nVidia had to go that route... ATI had already acquired a patent for traffic steering, and nVidia couldn't use chipset steering for the traffic... which would explain why we flip a paddle for SLI 16x... it creates a link between the cards that is separate from the PCI-E bus, in addition to the bridge PCB. With the addition of the bridge, we have two separate links...

Either I'm right, or the info going down that second set of 8 lanes is simply replicated data?
  #17 (permalink)  
Old 14th May, 2006, 04:21 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
Well, what happens when the card switches to an 8x connection instead of 16x... does it get half the information?

More lanes does not make for FASTER... it makes for MORE. Maybe this is why nVidia had to go that route... ATI had already acquired a patent for traffic steering, and nVidia couldn't use chipset steering for the traffic... which would explain why we flip a paddle for SLI 16x... it creates a link between the cards that is separate from the PCI-E bus, in addition to the bridge PCB. With the addition of the bridge, we have two separate links...

Either I'm right, or the info going down that second set of 8 lanes is simply replicated data?
AIUI, the paddle card is used to route the second 8 lanes to the second connector when you are using SLI. You take one 16x PCIe link and split it into two 8x PCIe links (one for each card slot). The two devices at each end of a PCIe link negotiate the highest bandwidth they can transmit on, and then use that bandwidth for transmission.

And yes, more lanes DOES mean faster, NOT more. PCIe is a serial protocol, yes, but the data are striped across the lanes. If you have a 1x link and transmit 8 bytes down it, the bytes will be transmitted in order, 1-8. However, if you have a 2x PCIe link, all of the odd bytes are transmitted on one lane, and all the even bytes on the other. For a 4x link, bytes 1 and 5 would go on lane 1, bytes 2 and 6 on lane 2, bytes 3 and 7 on lane 3, and bytes 4 and 8 on lane 4. If you have an 8x link, you would send one byte on each lane. If you have an 8x link and only need to send 4 bytes, the remaining lanes are 'padded' with null data. As you can see, it is a case of FASTER, not MORE.
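
A little sketch of that byte striping (illustrative only; real PCIe framing also adds control symbols and padding):

Code:
# Illustrative byte striping across PCIe lanes (framing/padding omitted).
def stripe(data: bytes, lanes: int):
    """Distribute bytes round-robin across `lanes` lanes."""
    out = [[] for _ in range(lanes)]
    for i, b in enumerate(data):
        out[i % lanes].append(b)
    return out

payload = bytes(range(1, 9))          # bytes 1..8
for width in (1, 2, 4, 8):
    print(f"x{width}:", stripe(payload, width))

# With x4, the first lane carries bytes 1 and 5, the second carries 2 and 6,
# and so on. More lanes means the same 8 bytes go out in fewer symbol times:
# faster, not 'more' independent conversations.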

All PCIe devices MUST support, AT MINIMUM, transmitting data at their native bus width AS WELL AS 1x. Devices MAY negotiate OTHER widths in between, so long as both ends support doing so. So, for example, a 16x PCIe card could plug into a 16x slot and then negotiate 8x, 4x, 2x, or 1x transmission. This also allows for things like splitting a 16x link into two 8x links and routing them to two different PCIe slots.
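
And a minimal sketch of that width negotiation, assuming both ends simply settle on the widest width they share (real link training happens in hardware, not software):

Code:
# Illustrative link-width negotiation: both ends settle on the widest
# width they have in common.
def negotiate(widths_a, widths_b):
    common = set(widths_a) & set(widths_b)
    return max(common) if common else None

card = {16, 8, 4, 2, 1}   # a x16 card that also supports narrower widths
slot = {8, 1}             # an x8 slot (e.g. one half of a split x16)
print("negotiated width: x%d" % negotiate(card, slot))   # -> x8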
  #18 (permalink)  
Old 14th May, 2006, 04:28 PM
cadaveca
Member/Contributor/Resident Crystal Ball

So then the flip-paddle is a cost-saving feature?


Um, so the 16x slots are actually 2x8 grouped together?

Uh, yes, it is, if I remember the PCI-E SIG approval.


nVidia PCI-E cards need more PCI-E bandwidth. The new platform will see a 25% overclock of the PCI-E bus.

Ideas as to why?
  #19 (permalink)  
Old 14th May, 2006, 04:34 PM
Gizmo
Chief BBS Administrator

Quote:
Originally Posted by cadaveca
So then the flip-paddle is a cost-saving feature?
Well, yeah, you could look at it that way.

Quote:
Originally Posted by cadaveca
Um, so the 16x slots are actually 2x8 grouped together?

Uh, yes, it is, if I remember the PCI-E SIG approval.


nVidia PCI-E cards need more PCI-E bandwidth. The new platform will see a 25% overclock of the PCI-E bus.

Ideas as to why?
You just said it. The PCIe cards need more PCIe bandwidth. Adding more lanes is expensive. It requires extra chip real-estate, more routing on the board, etc., etc. It's cheaper to make the existing signaling run faster (at least, up to a point).
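
For a sense of scale, simple scaling math for a 25% bus overclock on an x16 link (assuming usable bandwidth tracks the clock linearly):

Code:
# Rough scaling: what a 25% bus overclock means for an x16 link,
# assuming bandwidth scales linearly with the clock.
PCIE_LANE_MB_S = 250
lanes = 16

stock = lanes * PCIE_LANE_MB_S            # ~4000 MB/s per direction
overclocked = stock * 1.25                # ~5000 MB/s per direction
equivalent_extra_lanes = lanes * 0.25     # same gain as ~4 more lanes

print(f"stock x16     : {stock:.0f} MB/s per direction")
print(f"+25% clock    : {overclocked:.0f} MB/s per direction")
print(f"equivalent to : ~{equivalent_extra_lanes:.0f} extra lanes, with no new traces or pins")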
  #20 (permalink)  
Old 14th May, 2006, 04:40 PM
cloasters
Asst. BBS Administrator

"Huh?" said the kid at the back of the classroom. I'm glad that we have some Members who know about this 'somewhat' baffling subject. Thanks, guys!
__________________
When the world will be better.