OpenGL glReadPixels performance MUCH lower on GTX480 vs GTX285
I am evaluating the GTX470 and GTX480 cards versus a GTX285 card and I am seeing much lower glReadPixels performance with the 470/480 cards.
I'm testing under XP using the latest drivers (197.41 for the 470/480, 197.45 for the 285).
I'm using these Nvidia PBO Texture Performance utilities to do the evaluation.

Look here for a description: http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#TexturePerformancePBO
You can download it here: http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/OpenGL/TexturePerformancePBO.zip

Running the DEMOS\OpenGL\bin\release\TexturePerformancePBO.exe test, I am seeing the following results:
With ImageSource=static ImageSink=glReadPixels
GTX480 : FPS=178
GTX285 : FPS=457
With ImageSource=static ImageSink=PBO
GTX480 : FPS=290
GTX285 : FPS=457

As a side note, I am seeing an improvement in texture upload for the 480 vs. the 285 (FPS=940 vs. FPS=895).

Is anyone else seeing similar issues?
I'm trying to find out whether trade-offs were made to boost texture upload performance at the expense of download, or whether this is just because the card/driver is new and has not yet been optimized.
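
For reference, here is a minimal sketch (my own illustration, not the NVIDIA sample's actual code; the function names, the BGRA format, width/height, and an already-rendered back buffer are all assumptions) of the two "ImageSink" paths the test exercises, a direct synchronous glReadPixels versus glReadPixels into a pixel buffer object:

[code]
/* Minimal sketch of the two readback paths the benchmark exercises.
 * Assumes an OpenGL context with pixel-buffer-object support (GL 2.1+ / GLEW)
 * and a width x height frame already rendered to the back buffer. */
#include <GL/glew.h>
#include <string.h>

/* Path 1: direct glReadPixels -- synchronous; the CPU stalls until the GPU
 * finishes rendering and the pixels land in client memory. */
static void read_direct(int width, int height, unsigned char *dst)
{
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, dst);
}

/* Path 2: glReadPixels into a GL_PIXEL_PACK_BUFFER -- the call returns
 * quickly because the copy targets driver-managed buffer memory; any stall
 * moves to the glMapBuffer that follows. */
static void read_via_pbo(GLuint pbo, int width, int height, unsigned char *dst)
{
    size_t size = (size_t)width * height * 4;

    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);

    void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        memcpy(dst, src, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}
[/code]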

#1
Posted 04/23/2010 04:05 PM   
Was this issue ever resolved??

NVIDIA? hello?

#2
Posted 10/01/2010 07:17 PM   
I know this is resurrecting a dead thread, but even now the problem is still there, and it is perhaps relevant to the current inability of 400-series cards to perform well (compared to older cards) in non-game 3D applications such as Maya, 3ds Max, and Blender. I tested my GTX 460 card, and it seems to have the slow glReadPixels problem as well, although not as slow as the GTX 480 in this benchmark. With PBO readback, however, the speed is pretty good. Here are my numbers:

With Image Source=Static Image and Image Sink=glReadPixels
GTX 460 : FPS=224

With Image Source=Static Image and Image Sink=PBO Readback
GTX 460 : FPS=402
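
That gap is consistent with how asynchronous readback is usually structured. Below is a minimal sketch (my own illustration, not the sample's code; the function name and parameters are assumptions) of the usual double-buffered scheme, assuming two pre-created PBOs sized to the frame and that one extra frame of latency is acceptable:

[code]
/* Double-buffered (ping-pong) PBO readback: start the transfer for the
 * current frame into one PBO while mapping the PBO filled on the previous
 * frame, so glReadPixels never has to stall the CPU.
 * pbo[0] and pbo[1] are assumed to be pre-created with
 * glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_STREAM_READ). */
#include <GL/glew.h>
#include <string.h>

static void readback_pingpong(GLuint pbo[2], int frame,
                              int width, int height, unsigned char *dst)
{
    int write_idx = frame & 1;      /* PBO receiving this frame's pixels */
    int read_idx  = 1 - write_idx;  /* PBO holding the previous frame    */

    /* Kick off an asynchronous copy of the current frame. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[write_idx]);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);

    /* Map the PBO written one frame ago; by now that copy should have
     * completed, so the map should not block. (The result from the very
     * first frame contains no data yet and should be skipped.) */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[read_idx]);
    void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        memcpy(dst, src, (size_t)width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}
[/code]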

There's also another thread about this problem over at http://forums.nvidia.com/index.php?showtopic=181574

Now, what I want to know is Nvidia's official statement on this. I hope ManuelG or others on this forum affiliated with Nvidia can pass this request on to the higher-ups. Is this a bug in the current OpenGL drivers for the Fermi cards that will eventually be fixed? Or is this a way for Nvidia to sell their Quadro cards by purposely crippling the drivers for the 400-series cards in non-game OpenGL 3D applications? Even a statement such as "The 400-series cards are for playing games only, and you must buy Quadro cards for 3D productivity software since the 400-series cards totally suck at it. If you're using 400-series cards for 3D productivity apps, YOU'RE DOING IT WRONG!" would be enough for us, so that we know not to buy or recommend 400-series cards to other people who need to do work in non-game 3D apps. Keeping silent on this issue is very poor PR on Nvidia's part, and only shows Nvidia's lack of goodwill towards its customers, which can backfire and drive them away from doing business with Nvidia again in the future.

CPU: Intel Core i5-2550K @4.4GHz
Mainboard: MSI Z77A-GD43 (Intel Z77 chipset)
Graphics: MSI N660Ti PE 2GD5/OC (GeForce GTX 660 Ti @1019MHz)
RAM: 2 x 4GB Visipro PC3-12800 (1.5V @933MHz)
OS: Windows 7 Ultimate 64-bit Service Pack 1
PSU: Seasonic Eco 600W SS-600BT Active PFC T3
Monitor: Asus VX239H (23" Full HD AH-IPS LED Display)

#4
Posted 10/02/2010 04:55 AM   
[quote name='Ital' date='02 October 2010 - 06:55 AM' timestamp='1285995334' post='1125285']
...current inability of 400-series cards to perform well (compared to older cards) in non-game 3D applications such as Maya, 3ds Max, and Blender.

(...)

Now, what I want to know is Nvidia's official statement on this. (...) Is this a bug in the current OpenGL drivers for the Fermi cards that will eventually be fixed? Or is this a way for Nvidia to sell their Quadro cards by purposely crippling the drivers for the 400-series cards in non-game OpenGL 3D applications?
[/quote]

So there have been no comments at all from Nvidia's team??
How can it be that the new GTX 480 is SLOWER than older cards?

Where exactly is the bug?
Is it a deliberate hardware limitation on transfer bandwidth, or a bug in the OpenGL driver?

Are Nvidia officials going to comment on this? Why the silence?

It's still unclear how there could be such a dramatic slowdown,
since every specification of the new cards is better than the old ones: memory bandwidth, GDDR clocks, etc.
Please comment!

#6
Posted 12/11/2010 09:55 PM   
[quote name='ascender' date='11 December 2010 - 06:55 PM' timestamp='1292104535' post='1159489']
So there have been no comments at all from Nvidia's team??
How can it be that the new GTX 480 is SLOWER than older cards?

Where exactly is the bug?
Is it a deliberate hardware limitation on transfer bandwidth, or a bug in the OpenGL driver?

Are Nvidia officials going to comment on this? Why the silence?

It's still unclear how there could be such a dramatic slowdown,
since every specification of the new cards is better than the old ones: memory bandwidth, GDDR clocks, etc.
Please comment!
[/quote]

Well, I felt the same in Blender + Linux.
I confess I was inclined to buy a GTX 580 (wow, ultra specs!), but then, wow,
incredible slowdowns (redraw, anyone???) and artifact problems with lamps when they are on both sides of the mesh.
Luckily, I tested the GTX 580 in another computer, running Blender from a pen drive, and then I started to
search the net for more info.

The result?
The GTX 4xx and 5xx are super-expensive cards that perform worse than the GTX 2xx series in OpenGL...
So I took the money and bought 2 used 280 cards. The GTX 580 is an expensive way to be s*****d
in OpenGL apps by artificial and "non-disclosed" limitations...
Better still, the older drivers are also very stable!
Sorry, Nvidia, no new money coming from my pocket to you this time...

#7
Posted 01/19/2011 09:21 PM   
I've just created an account to tell you how disappointed I am.

I'm the owner of a small computer graphics company.
I've just bought NINE Nvidia 480s, I tested them with Blender, and I see this awful performance! The old 9600 was faster than this!!!

Then I looked for good drivers, and I see it is something Nvidia has done INTENTIONALLY!

Shame on you. You made me lose $2,700!

#8
Posted 03/04/2011 02:41 PM   
[quote name='dddjef' date='04 March 2011 - 09:41 PM' timestamp='1299249680' post='1202135']
I've just created an account to tell you how disappointed I am.

I'm the owner of a small computer graphics company.
I've just bought NINE Nvidia 480s, I tested them with Blender, and I see this awful performance! The old 9600 was faster than this!!!

Then I looked for good drivers, and I see it is something Nvidia has done INTENTIONALLY!

Shame on you. You made me lose $2,700!
[/quote]
Since you still need usable graphics cards for your workstations, I'm sure Nvidia will appreciate more of your money thrown their way if you buy either Quadro cards or the older 200-series GeForce cards for your 3D graphics workstations. :D

CPU: Intel Core i5-2550K @4.4GHz
Mainboard: MSI Z77A-GD43 (Intel Z77 chipset)
Graphics: MSI N660Ti PE 2GD5/OC (GeForce GTX 660 Ti @1019MHz)
RAM: 2 x 4GB Visipro PC3-12800 (1.5V @933MHz)
OS: Windows 7 Ultimate 64-bit Service Pack 1
PSU: Seasonic Eco 600W SS-600BT Active PFC T3
Monitor: Asus VX239H (23" Full HD AH-IPS LED Display)

#9
Posted 03/04/2011 03:38 PM   
I am sure this is a driver bug.
Are you guys using the latest 267.14 (or 267.31 if you are a 580 user) beta drivers?
John

MSI GTX 580 3GB Lightning XE, Factory Overclocked 832MHz Core, 1664MHz Shader and 4200MHz Memory
Intel Core 2 Extreme QX9650
ASUS Rampage Extreme X48 Motherboard
8GB of DDR3 @ 1333MHz CL8
NEC 24WMGX3
Windows 7 x64 with Service Pack 1
Creative Labs X-Fi Fatal1ty

#10
Posted 03/04/2011 06:35 PM   
[quote name='JohnZS' date='04 March 2011 - 01:35 PM' timestamp='1299263710' post='1202242']
I am sure this is a driver bug.
[/quote]

It's not. I use Maya and have known about this since day 1 of Fermi; I don't know what kind of caves you people live in. They sabotaged OpenGL performance on purpose to try to force people to buy Quadro cards. As if that's going to create some giant revenue stream. What percent of the world is involved in 3D content creation? 0.0000000000000001%?

The whole idea of even having a Quadro series at all is obsolete in the first place, since they are allowing full-blown GPGPU processing on consumer-level cards. So they are going to let you do scientific work or brute-force encryption cracking on your consumer GPU, but they're not going to let you display 3D images on it? Makes no sense.

They would probably even save money by ending the Quadro scam and consolidating all the cards together, releasing only 2 or 3 different cards per product line; then they'd spend less on different assembly processes, driver teams, packaging, etc.

Motherboard: Gigabyte Z77 UD5H + 2500k
Ram: 8g (4g x 2 Samsung 30nm MV-3V4G3)
GPU: MSI 570GTX reference card - stock
PSU: SeaSonic S12 Energy Plus SS-650HT 650W
Sound: Creative Audigy 2 ZS PCI
HD: OCZ Agility 60g, Intel RST AHCI driver, Win7 x64
Mouse: G9x, Sensei, + more
One LCD attached, DVI mode, DPC Latency = 4

#11
Posted 03/04/2011 11:54 PM   
There are hardware differences between Quadro Fermi and desktop (GeForce) Fermi.
For example, double-precision compute performance is castrated at the hardware level on desktop Fermi compared to Tesla and Quadro Fermi.
VRAM quantity is different too; for example, Quadro Fermi ranges from 3-6GB on a single GPU.

I do agree with you to some extent that there are some software locks when it comes to Maya and 3ds Max; however, there are justifiable differences between the products.

The coolers on Quadro cards are something else, and the cards also take up a lot more depth in your PC (they require a full tower) as they are designed to be in use 24x7.

I am pretty sure the OpenGL problem you are experiencing is software/driver related.

Remember, Fermi is optimised for OpenGL 4.0 and uses a different render path to OpenGL 3.3 cards such as the GeForce 8 through 200 series; I would not be surprised if a bug lies here somewhere.
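
If the render path is the suspect, it is at least worth logging which context and driver the test actually ran on. A small sketch (standard GL queries only, nothing vendor-specific; the function name is mine) that could accompany the benchmark numbers:

[code]
/* Print the vendor, renderer, and GL version strings for the current
 * context -- useful when comparing results across driver releases. */
#include <GL/gl.h>
#include <stdio.h>

static void print_gl_info(void)
{
    printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
}
[/code]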

John

MSI GTX 580 3GB Lightning XE, Factory Overclocked 832MHz Core, 1664MHz Shader and 4200MHz Memory
Intel Core 2 Extreme QX9650
ASUS Rampage Extreme X48 Motherboard
8GB of DDR3 @ 1333MHz CL8
NEC 24WMGX3
Windows 7 x64 with Service Pack 1
Creative Labs X-Fi Fatal1ty

#12
Posted 03/05/2011 12:19 AM   
[quote name='JohnZS' date='04 March 2011 - 07:19 PM' timestamp='1299284344' post='1202448']
There are hardware differences between Quadro Fermi and desktop (GeForce) Fermi.
For example, double-precision compute performance is castrated at the hardware level on desktop Fermi compared to Tesla and Quadro Fermi.
[/quote]

And I'm sure that's as much of a cost-saving improvement as laser-severing shader units on 580s to create 570s.

It's just an *attempt* to squeeze the consumer for more profit. You can call it whatever you want, but damaging a perfectly working chip on purpose with a laser and then selling it to consumers is bad business practice. It only happens because the number of players in the market is so small that they get to create their own status quo. It's the same reason all governments evolve into tyrannical dictatorships or kleptocracies: because they have a monopoly on what they do, they create their own status quo.

The overwhelming majority of GeForce users do not buy the most expensive card. So for the majority of their clientele, it's like a big jackass corporate executive running up to you and saying, "Hey man, I think I gave you a little too good of a deal on this video card. Here, let me bash it with a baseball bat a few times to lower its performance some. OK, now it doesn't work as well, I can sleep better at night."

The only reason no public outrage occurred when they transitioned to this business practice is that all the big hardware sites are on the take and don't want to bite the hand that feeds them, so they either didn't report on it at all or reported on it as good business practice without any negative light. Others erroneously embrace the idea of monopoly capitalism. An economist will try to claim two companies in an industry isn't a monopoly, but they can easily inherit most of the benefits of one by playing their cards right.

If they want to create more expensive product lines to try to reap more profits, I would lock GPGPU functionality out of consumer cards and have it Quadro-only, since the history of graphics cards has been about displaying images, not encryption breaking or scientific computing, so nobody really expected to receive that for free. There's no point damaging shader units on purpose to create separate 570 and 580 lines, or sabotaging 3D performance for pro 3D apps on purpose, when alternate, much less shady routes can be taken.

Motherboard: Gigabyte Z77 UD5H + 2500k
Ram: 8g (4g x 2 Samsung 30nm MV-3V4G3)
GPU: MSI 570GTX reference card - stock
PSU: SeaSonic S12 Energy Plus SS-650HT 650W
Sound: Creative Audigy 2 ZS PCI
HD: OCZ Agility 60g, Intel RST AHCI driver, Win7 x64
Mouse: G9x, Sensei, + more
One LCD attached, DVI mode, DPC Latency = 4

#13
Posted 03/05/2011 12:38 AM   
[quote name='JohnZS' date='04 March 2011 - 07:19 PM' timestamp='1299284344' post='1202448']
There are hardware differences between Quadro Fermi and desktop (GeForce) Fermi.
For example, double-precision compute performance is castrated at the hardware level on desktop Fermi compared to Tesla and Quadro Fermi.

[/quote]

But that's just a hard block; the stuff is still in there to run at full speed, not removed AFAIK.

Originally they did it purely in the driver, but they didn't like it getting hacked.

#14
Posted 03/05/2011 09:20 AM   
[quote name='JohnZS' date='05 March 2011 - 07:19 AM' timestamp='1299284344' post='1202448']
The coolers on Quadro cards are something else, and the cards also take up a lot more depth in your PC (they require a full tower) as they are designed to be in use 24x7.
[/quote]

Hmm, I wonder how the Quadro Fermi cards behave thermally under FurMark. If they do run significantly cooler than the GeForce Fermi cards, does that mean we're not supposed to game 24x7 with our GeForce Fermi cards in hot tropical countries without air conditioning?

And if these performance problems are indeed due to driver bugs, why haven't Nvidia or ManuelG said so in order to clarify the matter? AFAIK Nvidia has said absolutely nothing about this, either in this thread or in the other threads with similar complaints. The fact that they have kept mum about this issue shows Nvidia's lack of goodwill.

CPU: Intel Core i5-2550K @4.4GHz
Mainboard: MSI Z77A-GD43 (Intel Z77 chipset)
Graphics: MSI N660Ti PE 2GD5/OC (GeForce GTX 660 Ti @1019MHz)
RAM: 2 x 4GB Visipro PC3-12800 (1.5V @933MHz)
OS: Windows 7 Ultimate 64-bit Service Pack 1
PSU: Seasonic Eco 600W SS-600BT Active PFC T3
Monitor: Asus VX239H (23" Full HD AH-IPS LED Display)

#15
Posted 03/05/2011 09:47 AM   