GTX 660 and other 600-series cards have different boost clocks?
I own two GTX 660s and recently came across the following:
One card boosts up to 1097 MHz (according to NVIDIA Inspector).
The other card only boosts up to 1058 MHz (according to NVIDIA Inspector).
My question is:
How can two cards from the same manufacturer, with the same BIOS and the same setup, end up with two different boost clocks? And has anyone else come across the same or a similar problem?

#1
Posted 03/02/2013 12:31 AM   
Maximum GPU Boost frequencies can vary between GPUs, even ones that are cut from the same wafer, simply due to differences in ASIC quality. While completely irrelevant for a GPU on the market in terms of performance, functionality, and component lifetime, it does mean some chips will require slightly more power and produce more heat than others. GPU Boost algorithms account for this by increasing the boost frequency in 13 MHz steps until the thermal/power targets are reached for a given application, which varies.
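In rough code terms, that stepping behaviour looks something like the sketch below. This is only an illustrative Python sketch, not NVIDIA's actual algorithm; the 13 MHz bin size and the per-chip maximum boost come from the description above, while the power and temperature targets are placeholder numbers.

[code]
# Illustrative sketch of GPU Boost-style stepping - not NVIDIA's real code.
# Only the 13 MHz bin and the per-chip max boost reflect the description above;
# the targets below are placeholder values.
BASE_CLOCK_MHZ = 980     # GTX 660 reference base clock
STEP_MHZ = 13            # boost bin size
POWER_TARGET_W = 140     # placeholder board power target
TEMP_LIMIT_C = 95        # placeholder thermal limit

def next_clock(current_mhz, power_w, temp_c, max_boost_mhz):
    """One boost step: raise the clock a bin while under the targets, otherwise back off."""
    if power_w < POWER_TARGET_W and temp_c < TEMP_LIMIT_C:
        return min(current_mhz + STEP_MHZ, max_boost_mhz)   # max boost differs per chip
    return max(current_mhz - STEP_MHZ, BASE_CLOCK_MHZ)

# A chip rated to boost to 1097 MHz keeps stepping up under light load,
# while one rated to 1058 MHz tops out earlier or backs off over the power target.
print(next_clock(1084, 120, 70, max_boost_mhz=1097))   # -> 1097
print(next_clock(1058, 150, 70, max_boost_mhz=1058))   # -> 1045
[/code]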

GeForce Technical Marketing

#2
Posted 03/02/2013 12:58 AM   
So in my case:
Should I use the 1097 MHz card as No. 1 and the 1058 MHz card as No. 2?
I mean in SLI?
Or is there a way to change the power target limit somewhere so that both cards will boost up to 1097 MHz?

Or, for the latter case:
When it comes to overclocking (since both are ASUS GTX 660 DC2 cards), which card should lead the way?
I think it would be smart to use the faster card as the 1st and the slower as the 2nd, right?

Anyone else got some ideas on how to handle these symptoms?

I know that every die made is a "unique little snowflake", but in my career as a geek I never saw a gap as huge as the one between these two cards!

BTW: I LOVE THEM. THE PRICE-TO-PERFORMANCE IN NEARLY EVERY GAME I PLAY IS LEGENDARY!

#3
Posted 03/02/2013 01:17 AM   
That is considered good practice with SLI, yes. Having the fastest GPU installed as the primary device allows single-GPU or low-scaling applications to benefit from the slight performance advantage.

The power target for your GPU is adjustable through third-party applications like EVGA Precision (http://www.evga.com/precision/) and MSI Afterburner (http://event.msi.com/vga/afterburner/download.htm). However, because GPU Boost is adjusted on the fly, there's no guarantee you will be able to perfectly synchronize their boosted frequencies. Of course, whether or not you do is of no particular consequence; SLI will happily work in either case.
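If you want to log what each card actually does under load, one option is a small script against NVML. The sketch below is only a suggestion and assumes the pynvml package (the NVML Python bindings) is installed; the tools mentioned above can record the same information through their own logging features.

[code]
# Minimal per-GPU clock logging sketch, assuming the pynvml package is installed.
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

for _ in range(60):  # one sample per second for a minute while the benchmark runs
    clocks = [pynvml.nvmlDeviceGetClockInfo(h, pynvml.NVML_CLOCK_GRAPHICS)
              for h in handles]
    print(clocks)    # e.g. [1097, 1058] while both cards are boosting
    time.sleep(1)

pynvml.nvmlShutdown()
[/code]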

GeForce Technical Marketing

#4
Posted 03/02/2013 02:11 AM   
...so as far as I understand, SLI is limited to double the frame rate of your weakest card (for AFR as well as SFR, right?)

So if my 1058 MHz card performs better in the Heaven benchmark than the 1097 MHz one, should I use the 1058 MHz card as my first card?

I really don't get the point:
Is it better to have the best-performing card 1st, or:
Is it better to have the highest-clocked card 1st?

The other thing is:
What if I could overclock the 1058 to more than 1200 MHz and the 1987 just to 1188.9 MHz?
Shouldn't I take the 1058 MHz card as the 1st one in this case as well? (Its performance, of course, is better.)

Or should I dig into that stuff and run the cards separately for some time to see which one is better?

Again, and the most important question:
Best performance = 1st card?
Highest clock = 1st card?

THX

#5
Posted 03/02/2013 03:52 AM   
Sorry, I meant:
1097 just to 1188.9 MHz

#6
Posted 03/02/2013 03:57 AM   
Johnathan_Doe said: ...so as far as I understand, SLI is limited to double the frame rate of your weakest card (for AFR as well as SFR, right?)

SLI does not forcibly synchronize the frequencies of the grouped GPUs, so no. If we were to assume performance is completely static, and given one GPU with a 1097 MHz core clock paired to another GPU with a 1058 MHz core clock, then performance would be comparable to two GPUs with 1077.5 MHz core clocks. In reality, the difference between these hypothetical PCs would probably be lost to the noise of minor deviations in measurements. Even if it weren't, I seriously doubt it would be as apparent to the eye as it was on paper.
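Spelling out the arithmetic behind that 1077.5 MHz figure (it is just the average of the two maximum boost clocks):

[code]
# Average of the two maximum boost clocks quoted above.
print((1097 + 1058) / 2)   # -> 1077.5 MHz
[/code]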

AFR is the only rendering mode that can potentially offer doubled performance. SFR does not scale with geometry and is susceptible to overdrawing as well as load balancing errors, which is why it's mostly used with older games and applications.

Johnathan_Doe said: So if my 1058 MHz card performs better in the Heaven benchmark than the 1097 MHz one, should I use the 1058 MHz card as my first card? I really don't get the point: Is it better to have the best-performing card 1st, or is it better to have the highest-clocked card 1st?

Between two cards of the same kind, higher frequencies will pretty much always translate to better performance. The only time it won't is when you're CPU-limited, and since you're using SLI I certainly hope this is not something you encounter often!

Johnathan_Doe said: The other thing is: What if I could overclock the 1058 to more than 1200 MHz and the 1097 just to 1188.9 MHz? Shouldn't I take the 1058 MHz card as the 1st one in this case as well? (Its performance, of course, is better.) Or should I dig into that stuff and run the cards separately for some time to see which one is better?

The answer here is more or less the same as above: whichever clocks higher is going to be the one that yields higher performance.

How far either of them can stably overclock is not as predictable, so a fair bit of experimentation is in order.

GeForce Technical Marketing

#7
Posted 03/02/2013 04:21 AM   
Yeah, that was exactly what I thought, but the fact is:
The 1058 MHz card, which won't boost higher than that, scores ~1353 points in Heaven Bench Extreme DX11.
The 1097 MHz card, which won't boost higher than that, scores ~1336 points in Heaven Bench Extreme DX11.

I logged everything and took a closer look at the curves, and it seems the 1058 MHz card holds its boost clock more often and for longer periods.
The 1097 MHz card doesn't boost up to 1097 that often, but it never drops under 1084 MHz, and yet it scores less than the 1058 MHz card?

Do you now get the point that confuses me here?

I really don't feel like one of the GPUs is damaged; everything works fine no matter which card I use. The only difference is:
1097 MHz card ... 1336 points
1058 MHz card ... 1353 points

So what would you do?

Because in SLI I really get more stuttering and hiccups than with a single GPU, and no, I'm not talking about microstuttering! It also feels like something is holding SLI back when I look at the benchmarks!

PS: The scores I wrote are calculated from multiple benchmark runs!

#8
Posted 03/02/2013 05:19 PM   
I do, and I see similar behavior between the four GPUs in my setup: despite all of them being able to boost up to 1202 MHz, one of them is regularly clocked lower than the others by maybe 65-130 MHz. The others are more comfortable working at 1150-1176 MHz, and although this one could do the same, it's less likely to.

The thing is that once you leave the base clocks and get into GPU Boost territory, it's in the hands of NVIDIA's calculations, which for GPU Boost 1.0 even they will admit are based on the somewhat awkward notion of power. GPU Boost 2.0 (as seen in the GTX TITAN) is centered on temperature, which is far easier to work with and has the GPU spending more time, on average, closer to its maximum boost frequency.

Putting all of that aside, however, you're looking at a ~1.25% difference in just one benchmark. What would I do? See how far they overclock and look at what the difference is then, both in the games I play and in benchmarks. If it's still that small afterwards, I'd probably put off switching them around until the next time I took my case apart to clean it out. If you can prove that one graphics card is always offering better performance than the other, there's no reason you shouldn't have it as the primary, but when dealing with such a tiny distinction I wouldn't be jumping out of my seat. Not without pushing them as hard as I could first, at least.
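For reference, that ~1.25% figure is just the gap between the two Heaven scores you posted:

[code]
# Relative gap between the two Heaven Extreme scores quoted earlier.
score_1058 = 1353
score_1097 = 1336
print(round((score_1058 - score_1097) / score_1058 * 100, 2))   # -> 1.26 (~1.25%)
[/code]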

What's giving you the impression something is holding back your SLI performance? We haven't discussed anything but your graphics cards until now, so I have no idea what kind of system you have, what resolution you're playing at, or even what drivers you're using.

GeForce Technical Marketing

#9
Posted 03/02/2013 06:16 PM   
Phenom II 1100T @ 3.7/4.1 GHz
Crosshair V Formula-Z, BIOS 1201 I think
Corsair Dominator 1600 MHz, 8 GB
Force GT 120 GB x2 in AHCI with TRIM support and no page file on any drive

Well, it feels like I'm missing about 10% of performance in SLI.
I know you can't double performance just by GPU count;
what I mean is that I mostly get about 50% more frames, which usually boosts me up to 140 FPS or so in most SLI-loving games...
but only 50% more frames?

Oh yeah, the resolution is of course 1080p: 1920 x 1080.

But I'm starting to think that the max frames or some buffer depends on the game once you break certain FPS bars...?

#10
Posted 03/02/2013 06:49 PM   
What games and applications are you seeing the +50% performance gains in? 2-way SLI can come very close to doubling your frame rates, but with some graphics engines this is hard to achieve.

Johnathan_Doe said: But I'm starting to think that the max frames or some buffer depends on the game once you break certain FPS bars...?

Not sure what you mean here, sorry.

GeForce Technical Marketing

#11
Posted 03/02/2013 11:05 PM   
I meant buffer/pool sizes etc. that are so small that a more powerful card won't get loaded enough and is mostly only stressed to about 50-75% at max (GPU load, bus load, memory controller load, etc.), which just shows up more in SLI.
I never saw a game that really maxes out all capacities of a high-end card (I'm talking about games and cards of the same age).
Older high-end cards often get loaded more, of course, but not always; there are some card/game combinations that just don't match!
I've been digging into programming and computer-related stuff focused on gaming over the last year, and I've read about some games which were massively fu**ed up and later hotfixed/patched together to "just" run, however they performed.
It's just a shame that so much potential gets wasted because of something called the "free market", but that's another subject...

But to get back to my problem:
In GTA IV, for example, I do get more lag/stutter (very short stops and then suddenly a game-speed boost up to the point where I would have been without the small stop). It just feels like the VRAM loading in GTA isn't synced very well, or is even done for each card separately rather than simultaneously, because with a single GPU I get "the same stutters" but half as often and without the game speed changing!
In BF3, for example, I can't say I'm missing performance. In Hitman: Absolution (I know there is no proper SLI profile) I don't even get a single frame more, except in very rare cases...

And so my list goes on and on:
Some games scale beautifully in SLI,
and some don't!

#12
Posted 03/03/2013 02:47 PM   
Johnathan_Doe said: lag/stutter (very short stops and then suddenly a game-speed boost up to the point where I would have been without the small stop)

Dude, you just perfectly described what happens with my SLI setup in some games/benchmarks.

Do you by any chance use NVIDIA System Tools? I have no idea why, but I realized that on my system what was causing this freeze-lag was System Tools. I always used it in single-GPU mode with no problem, but I switched to SLI a while ago and this freeze-lag started. I was messing around trying to find out whether some app or driver was causing the problem, and found out that WITHOUT NVIDIA System Tools the issue was gone... Strange, isn't it? Well, I don't know if that's your case, but if you have NVIDIA System Tools running on your system, try uninstalling it and give your games a try!
Cheers

#13
Posted 08/01/2013 05:04 PM   