  #1 (permalink)  
Old 14th September, 2004, 10:18 AM
brian770
Member
 
Join Date: July 2004
Location: HighBridge Wisconsin
Posts: 279

duh!

hi all!
I named this post for what many of you will probably say when you read it. lol!
I was looking at the level 1 and level 2 caches on both Intel and AMD processors and was wondering why they don't just put in 1 MB of level 1, or a big chunk of level 2 (like 5 or 10 MB). It can't be because of the size, can it? The CPUs would be bigger, but so what? Does silicon really cost that much to make? Also, with AMD basically proving that their chips are more efficient, why don't they add a few stages to the pipelines to get a faster chip? Can you imagine an Athlon running at 3.6 GHz straight from the factory? What would they call it, an Athlon 6000+? It would just be a faster version of the same efficient chip, wouldn't it?
I guess it all comes down to money. I know these are probably rookie questions, but I was just wondering... please excuse my "ignorance".
brian770
__________________
overclocking is like raising a child: you do everything you can to make sure it's right, and hopefully when you're done it turns out perfect... but you're never really done. lol
AOA Team fah
  #2 (permalink)  
Old 14th September, 2004, 10:28 AM
Chief Systems Administrator
 
Join Date: September 2001
Location: Europe
Posts: 13,075

Size is of the essence. The cache takes up the majority of the silicon itself. Whilst size might not seem like an issue, once you understand how chips are made you'll see why it is.

When a wafer is made, there are a number of chips on that wafer. The wafer is cut up and the chips tested. Now, the manufacturing process is not perfect, and there will be areas on the wafer where there is damage. Obviously, any chip sitting on a damaged area will not work.

For argument's sake, let's say there are four spots on the wafer where there's damage. A manufacturer makes a small chip and can fit 400 of them on a wafer. Of those 400, four land where the damage is, so the manufacturer gets 396 working chips.

Another manufacturer makes a CPU with a large amount of cache and can only fit 10 chips on a wafer. Assuming those four damaged spots are evenly distributed, that manufacturer only gets 6 working chips from the wafer.

Let's make another assumption: the investment in the production plant and the process required to make the wafer mean each wafer costs $3000 to produce. The first manufacturer only needs to charge about $7.58 per working chip to recoup that cost; the second manufacturer needs to charge $500 per chip. That's why size is such a big issue!
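
To put rough numbers on it, here's a minimal C sketch that simply reruns the arithmetic above. The $3000 wafer cost, 4 defect spots, and 400 vs. 10 dies per wafer are the illustrative figures from this post, not real fab data:

[CODE]
/* Back-of-the-envelope yield arithmetic from the example above.
 * Assumed, illustrative numbers: $3000 per wafer, 4 defect spots,
 * 400 small dies or 10 large-cache dies per wafer. */
#include <stdio.h>

int main(void)
{
    const double wafer_cost = 3000.0;   /* assumed cost per wafer */
    const int    defects    = 4;        /* assumed defect spots per wafer */

    const int dies_small = 400;         /* small die: 400 per wafer */
    const int dies_large = 10;          /* big-cache die: 10 per wafer */

    int good_small = dies_small - defects;   /* 396 working chips */
    int good_large = dies_large - defects;   /* 6 working chips   */

    printf("small die: %d good, $%.2f each to break even\n",
           good_small, wafer_cost / good_small);
    printf("large die: %d good, $%.2f each to break even\n",
           good_large, wafer_cost / good_large);
    return 0;
}
[/CODE]

Compiled and run, it prints roughly $7.58 for the small die and $500.00 for the big one, which is the whole argument in two lines of output.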
__________________
Any views, thoughts and opinions are entirely my own. They don't necessarily represent those of my employer (BlackBerry).
  #3 (permalink)  
Old 14th September, 2004, 10:36 AM
brian770
Member
 
Join Date: July 2004
Location: HighBridge Wisconsin
Posts: 279

I didn't realise that the silicon cost that much to produce, or that the process has errors in it; I just figured that after making wafers for so many years it would be error free. Also, why do they still make round wafers? Wouldn't it be more cost effective to make them square?
Thanks for your answer. It seems my oversimplification came from a lack of... what's the word... thinking about it before I stuck my foot in my mouth. lol
thanks
brian770
__________________
overclocking is like raising a child: you do everything you can to make sure it's right, and hopefully when you're done it turns out perfect... but you're never really done. lol
AOA Team fah
  #4 (permalink)  
Old 14th September, 2004, 05:49 PM
Gizmo
Chief BBS Administrator
 
Join Date: May 2003
Location: Webb City, Mo
Posts: 16,178

The wafers are round because the silicon ingot they are cut from is 'grown' from a seed crystal pulled out of molten silicon, and the ingot naturally comes out as a round cylinder; the wafers are slices of that cylinder.

This link has a rather nice little writeup about it.
  #5 (permalink)  
Old 14th September, 2004, 06:29 PM
GrahamGarside
Member/Contributor
 
Join Date: September 2004
Location: England
Posts: 4,572

And the reason AMD doesn't lengthen the pipeline to ramp up the clock speed is that their shorter pipeline is part of what makes their chips efficient. The longer the pipeline, the longer it takes to recover when the CPU guesses wrongly and the pipeline has to be cleared and refilled, or something along those lines.
  #6 (permalink)  
Old 14th September, 2004, 06:49 PM
Chief Systems Administrator
 
Join Date: September 2001
Location: Europe
Posts: 13,075

Pretty close!

Most software has to handle decisions somewhere, even if it's only to see whether the user has clicked a mouse button. These decisions are usually pretty basic, but they're a fundamental problem for a pipeline. Usually there are only two code paths. Using our mouse button example: most of the time the mouse button isn't clicked. However, sometimes it is clicked, and the decision goes the opposite way to normal. In a pipeline this means the whole pipeline has to be cleared out, as it was loaded with the instructions for what to do if the button wasn't clicked. Then it has to be reloaded with the instructions for what to do when the button IS clicked.

This flushing and reloading of the pipeline exacts a performance penalty. The deeper the pipeline, the bigger the penalty when the CPU guesses a decision wrongly: until the pipeline fills up again, the CPU sits waiting!
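
A small experiment makes the penalty visible. This C sketch is my own illustration, nothing official: it times the same branchy loop over random data (the branch is effectively a coin toss, so the predictor guesses wrongly a lot) and over the same data sorted (the branch almost always goes the same way). Compile without heavy optimisation (e.g. gcc -O1), otherwise the compiler may replace the branch with branch-free code:

[CODE]
/* Times one branchy loop twice: once over random data, once over the
 * same data sorted. The branch "data[i] >= 128" is unpredictable in the
 * first case and almost perfectly predictable in the second, so the
 * first run pays for far more pipeline flushes. Illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N       1000000
#define REPEATS 100

static long long sum_over_threshold(const int *data, int n)
{
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        if (data[i] >= 128)      /* the branch the predictor must guess */
            sum += data[i];
    }
    return sum;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    /* Pass 1: random order - branch outcome is unpredictable. */
    long long s1 = 0;
    clock_t t0 = clock();
    for (int r = 0; r < REPEATS; r++)
        s1 += sum_over_threshold(data, N);
    double random_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Pass 2: sorted order - branch outcome is highly predictable. */
    qsort(data, N, sizeof data[0], cmp_int);
    long long s2 = 0;
    t0 = clock();
    for (int r = 0; r < REPEATS; r++)
        s2 += sum_over_threshold(data, N);
    double sorted_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("random: %.2fs   sorted: %.2fs   (checksums %lld %lld)\n",
           random_secs, sorted_secs, s1, s2);
    return 0;
}
[/CODE]

The deeper the pipeline, the wider the gap between the two times tends to be, which is exactly the trade-off described above.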
__________________
Any views, thoughts and opinions are entirely my own. They don't necessarily represent those of my employer (BlackBerry).
  #7 (permalink)  
Old 14th September, 2004, 07:38 PM
Gizmo
Chief BBS Administrator
 
Join Date: May 2003
Location: Webb City, Mo
Posts: 16,178

To expand further: what Áedán is talking about above is known as a pipeline flush, and it's the reason CPU manufacturers invest a great deal of resources in a technology called 'branch prediction'. The better their branch prediction, the greater the chance that they won't have to flush the pipeline. Intel has better branch prediction than AMD, AFAIK (at least with the P4 vs. the 32-bit Athlon; I understand the Athlon 64 has improved this). On the other hand, Intel HAS to have better branch prediction, because Intel has, IIRC, something like 20 pipeline stages versus AMD's 12. That means a missed branch prediction is roughly 1.7 times (20 ÷ 12) more expensive for Intel than for AMD. I believe this is one of the reasons that AMD chips have historically excelled at games when running at the same clock. Things where the algorithm is highly predictable (for...next loops, for example) benefit greatly from Intel's branch prediction. However, things that are essentially random (like Áedán's mouse button example) need a shorter pipeline, because there is no branch prediction algorithm in the world that will accurately predict when a user is going to click a button.

Note that the programmer can help things out somewhat by being consistent in how their logic functions. For example, if the true condition of an if branch is always the exception, the branch predictor can eventually learn this and make more accurate predictions. If the true condition is the exception one time and the normal condition the next, that makes things a bit more difficult for the predictor. At least, that is my understanding; I may be wrong, since I haven't really gotten into this in depth. For high-level languages this is actually usually handled at the compiler level, as in the sketch below.
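
As a concrete example of that kind of hint (a sketch of my own, not something from the thread): GCC exposes a __builtin_expect extension that lets the programmer state which way a branch usually goes, so the compiler can lay the common path out straight for the CPU. The event loop here is made up purely for illustration:

[CODE]
/* GCC's __builtin_expect lets the programmer state the usual outcome of
 * a branch; the compiler arranges the code so the common path falls
 * straight through. Illustration only - the event handler is made up. */
#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical event handler: button clicks are rare, so that branch is
 * marked unlikely. */
static void handle_event(int button_clicked)
{
    if (unlikely(button_clicked))
        printf("button clicked - take the rare path\n");
    /* else: common path, nothing to do */
}

int main(void)
{
    for (int i = 0; i < 10; i++)
        handle_event(i == 7);    /* clicked once in ten events */
    return 0;
}
[/CODE]

For high-level languages the same information usually comes from the compiler's own heuristics or from profile-guided optimisation rather than from hand-written hints.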