Would you like to ask Nvidia a question?
Greetings everyone. I would like to take this chance to invite Nvidia customers and enthusiasts to ask Nvidia a question.

1) Is a recent trend or development on your mind?

2) Have a technology-related question about Nvidia hardware?

3) Have a question about The Way It's Meant to Be Played?


Amorphous and I will be collecting the questions that Nvidia will answer. Please keep in mind that not all questions will be answered; we will pick the best questions on the most widely supported subjects. There are, of course, some limitations: we cannot field questions about unreleased products or products Nvidia does not support (for example, Radeon questions). Also, be aware that if we feel a question has already been answered, we will link you back to the existing answer.

This is an exciting chance for us to help you communicate your questions, concerns, and feedback to Nvidia, and it will help Nvidia interact better with the community. Nvidia will be providing a place to answer these questions in the near future; we will update this post once it is available. We will try to submit 3 to 5 questions a week, and if this is successful, Nvidia is committed to continuing it. The number of answers received will also depend on the number of questions asked.


Remember: Amorphous and I will be closely monitoring this thread. Do not troll in this thread; it will not be allowed.


Final note: this is not a debate thread. If you wish to debate the answers that are received, please do so in another thread. This post is specifically for asking questions and receiving answers, not for arguing about or debating them. You are free to discuss them anywhere else in this community or elsewhere.


[url="http://forums.nvidia.com/index.php?showtopic=109971"]Nvidia Question and Answers Archive[/url]


[b]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[/b]



[b]November 1st updates and Questions.[/b]


[b]Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?[/b]

[i]
Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general-purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. We’ve encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology team whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget on advancing visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build GeForce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build GeForce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are focused on visual computing.

For all of the gamers, there should be no doubt: you can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or just to make your computer interaction lively. It surely gives me great joy to know that what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.

Jensen[/i]
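
To make "GPU computing" concrete for readers who have not seen it, here is a minimal, hypothetical CUDA sketch of the CPU/GPU "co-processing" model Jensen describes: the CPU sets up data and launches work, and the GPU runs one lightweight thread per element. It is only an illustration under our own assumptions (the kernel and variable names are made up), not NVIDIA's code.

[code]
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GPU personality: a massively parallel kernel, one thread per element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // CPU personality: allocate and fill host data.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    vecAdd<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);           // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
[/code]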

[b]Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?[/b]

[i]Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.



For now, the only working solution for GPU-accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs we build on are not the most important part of the story for developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps complete their development cycles.[/i]
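
To give a feel for the kind of data-parallel work GPU-accelerated physics involves, here is a minimal, hypothetical CUDA sketch in which one thread integrates one particle per time step under gravity. It is only an illustration under our own assumptions (the struct, kernel, and constants are made up), not the PhysX API.

[code]
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

struct Particle { float x, y, z; float vx, vy, vz; };

// One thread advances one particle by dt: apply gravity, integrate the
// position, and bounce off a crude ground plane with damping. A physics
// engine runs this kind of update across thousands of particles at once.
__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vy -= 9.81f * dt;      // gravity
    p[i].x  += p[i].vx * dt;    // advance position (explicit Euler)
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;

    if (p[i].y < 0.0f) {        // hit the ground: bounce, losing energy
        p[i].y  = 0.0f;
        p[i].vy = -0.5f * p[i].vy;
    }
}

int main()
{
    const int n = 4096;
    const size_t bytes = n * sizeof(Particle);

    // Drop all particles from a height of 10 units, starting at rest.
    Particle* h = (Particle*)calloc(n, sizeof(Particle));
    for (int i = 0; i < n; ++i) h[i].y = 10.0f;

    Particle* d;
    cudaMalloc((void**)&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // Simulate one second at 60 steps per second.
    for (int step = 0; step < 60; ++step)
        integrate<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 60.0f);

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
    printf("height after 1 s: %f\n", h[0].y);

    cudaFree(d);
    free(h);
    return 0;
}
[/code]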



[b]Q: How is NVIDIA approaching the tessellation requirements for DX11, given that none of the previous or current generation cards have hardware specific to this technology?[/b]


[i]Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly![/i]

#1
Posted 10/14/2009 08:38 PM   
I think it's a great thing and I wish those two the best of luck with it, but I have a question...

...what was it that brought about this good thing? What made it happen? Is this the beginning of a trend?

#2
Posted 10/14/2009 09:11 PM   
[quote name='Digiwand' post='601018' date='Oct 14 2009, 02:11 PM']I think it's a great thing and I wish those two the best of luck with it, but I got a question...

...what was it that brought about this good thing? What made it happen? Is this the beginning of a trend?[/quote]


I can answer that question myself. I collected feedback from multiple forum users, privately and openly, regarding their assessment of Nvidia. I then initiated a conversation with Nvidia and all members of the user group about how the enthusiast community feels about Nvidia. The other user group members and I looked over the trends and discussed them with Nvidia. It was agreed that Nvidia needs more involvement with the community and should follow trends more closely so it can answer questions in an open dialogue. Eventually that work and involvement led to this. If it is successful, it may be expanded, but we are starting with this.

Chris

#3
Posted 10/14/2009 09:39 PM   
Thanks Chris, good job then. :thumbup:

Again, best of luck!

#4
Posted 10/14/2009 10:55 PM   
I agree with this because there are some negative feelings out there, some of them very strong, and I feel more involvement and a clear presentation of Nvidia's side would help ease some of this tension. There are a lot of passionate gamers in the community, and more openness may go a long way toward building a closer bond with them over time.

Thanks for this; it is very welcome.

#5
Posted 10/14/2009 11:06 PM   
I have a few questions. I have been a long-time Nvidia supporter, so much so that since 1998 I have helped set people straight with Nvidia info on GPUs, performance numbers, cost/performance reasons to use SLI, and driver installation (this was always people's biggest issue between 1998 and 2003, as people never knew which driver was best to use). But lately I have become hard pressed to help, let alone support, Nvidia, and hopefully you can answer my questions to help ease the problem.

1. Why disable PhysX with ATI cards? Allow it, but make it known that a user running it with an ATI card does so at their own risk, as you currently do not have the time, funds, or means to test every possible configuration. At the same time, you could and should re-enable it and do case-by-case testing to fix bugs as they become evident.

2. Who is really running the company, the PR department or the engineers? I, and almost everyone who surfs the web, am sick and tired of your PR department's shenanigans in regard to the way they handle things. At the same time, though, it seems as though the engineers have struggled of late. Come on, get with it: the G9x line should never have come out; you should have brought out the GT200 instead, and the rebadging of GPUs is totally bogus.

3. Why leave the chipset business? You have had some of the best innovations, forcing others to step up their game in the chipset business; we need you to stay in it to keep them on their toes.

#6
Posted 10/15/2009 12:00 AM   
For question number 1:

This is Nvidia's official stance on the issue: Nvidia does not officially support this configuration, for performance/QA/development/business reasons.

[url="http://physxinfo.com/news/330/official-nvidia-position-on-hybrid-ati-nv-physx-configurations/"]http://physxinfo.com/news/330/official-nvi...configurations/[/url]

On one of the later points, I just want to clarify the QA and technical-support implications, using a scenario provided by Nvidia.

[b]Joe Consumer turns on PhysX in his AMD/NVIDIA setup. It breaks something or does not work correctly. What happens?


Step 1) He calls the maker of the game for tech support (costs the game maker money).

Step 2) He is told it is PhysX that does not work, and to disable it (a black eye for NVIDIA).
[/b]

Nvidia does not wish to support this configuration. If you wish to discuss or comment further on this, please create a different thread; I want to keep this thread focused on questions.

Thank you.

The rest of your questions have been noted and written down.

#7
Posted 10/15/2009 01:01 AM   
3D Vision

1) Is nVidia fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as many 3D Vision users feel is the current trend?

2) Can we expect more features to be added to 3D Vision to help with compatibility and user control, such as shutter sync timing, custom profiles, and a power-user settings mode?

3) Are the shadow issues in many modern games something that can be addressed in profiles, or are they simply not fixable in most cases?

4) How actively is nVidia cooperating with as many developers as possible to provide a robust experience for 3D Vision users, now that the launch novelty has worn off?

5) Is nVidia working toward including 3D stereo standards in DirectX so that all DirectX-compliant games might be 3D Vision ready? I imagine that would do quite a bit to push the 3D stereo industry as a whole.

6) Will we see OpenGL support soon?

PhysX

1) Is it true that nVidia has offered to open up PhysX to ATI without stipulation, so long as ATI offers its own support and codes its own driver, or is ATI correct in asserting that nVidia has stated it will never allow PhysX on ATI GPUs? What is nVidia's official stance on allowing ATI to create a driver, at no cost, for PhysX to run on their GPUs via OpenCL?

2) Does nVidia expect the PhysX standard to take hold in a DirectX 11 world?

3) Does nVidia have any plans to release a CUDA-only card that would process only PhysX and other CUDA-based software? Has the idea been entertained? Would such a card be usable in a system with other vendors' GPUs?

The human race divides politically into those who want people to be controlled and those who have no such desire.

--Robert A. Heinlein

#8
Posted 10/15/2009 02:01 AM   
Are you going to fix the framerate loss caused by using the 3D crosshair with 3D Vision? This affects games like Left 4 Dead and TF2: when the crosshair is set for 3D Vision, the framerate plummets. No idea why this is happening.

Just wondering if this will be fixed soon.

#9
Posted 10/15/2009 09:10 AM   
1. Is Nvidia moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing about how good it is for gamers.

2. Does Nvidia plan to implement 3-monitor support (as ATI's latest card does)?

3. Proprietary features (such as PhysX) are usually replaced by a general-purpose implementation over time (tessellation, for example, was a feature only ATI had until DirectX 11 included it in its feature set). Does Nvidia expect a physics engine to be built into DirectX in the future? If not, what do you see as the way forward for a physics engine that can be used by all?

4. With GPUs such as Fermi becoming more and more programmable, is it possible that cards will become 'future proof', able to support future DirectX features through driver updates alone? For example, is tessellation implemented in dedicated hardware on Fermi, or as code that runs on the CUDA cores?

That's all I can think of for now.

#10
Posted 10/15/2009 03:52 PM   
I have a few questions that really need to be answered:

1.) SLI performance in World of Warcraft: most people are under the assumption that there is no SLI support in World of Warcraft. I was sure it worked, but as of testing last weekend, SLI and even 3-way SLI offer no benefit over a single card in World of Warcraft. Are there any plans to support World of Warcraft with SLI?

2.) nForce chipset issues with Creative sound cards: is there, or will there ever be, a BIOS or chipset driver release that finally solves the age-old dilemma that is the "snap, crackle, pop" issue with Creative sound cards on an nForce motherboard, especially with SLI? We have never received a straight answer from either Nvidia or Creative; each just blames the other. I think we finally need a straight answer, as the issue certainly looks like an Nvidia problem.

3.) Poor SSD performance on all nForce motherboards: this has a lot of users very unhappy with the nForce chipset, considering that a RAID 0 SSD setup can barely push the speed of a single SSD. Compared with other chipsets of the same period (X38, X48, and AMD chipsets), where performance is very good, a RAID 0 array of two OCZ Vertex drives typically reaches reads of 425 and writes of 200, whereas on a 790i we see only 220 reads and 140 writes.

4.) TRIM functionality: is TRIM supported, or will it ever be supported, through the driver/controller on an nForce system? With Windows 7 less than a week away, will my RAID setup be able to use TRIM, given that support for the TRIM command is up to the driver/controller?

Intel Core i7 2600K @ 5.2GHz, Asus Maximus IV Extreme, Corsair Vengeance 16GB CL8, EVGA 680 Classified TRI-SLI, Creative Labs Recon 3D, OCZ Vector 512GB RAID 0, Samsung POS 22x DVDRW, SilverStone Strider Gold 1200WATT PSU, SilverStone Temjin TJ11, Blackwidow Ultimate 2013, Logitech G700, Dell UltraSharp U3011 LCD, Windows 8 Professional x64

#11
Posted 10/15/2009 04:01 PM   
Thanks for the continued questions. Questions will be submitted to Nvidia every Monday. If your question does not get answered in the first round, don't hesitate to ask it again. If it is passed over, that may not be because it is a bad question, but only because we can field only so many a week.

Thank you.

Chris

#12
Posted 10/15/2009 10:17 PM   
A while ago, you showed off a slick interface running on a Tegra-powered smartphone. You still have a Flash-powered demo on your site, but it has been over a year, I believe, and no phone currently runs this user interface/OS (I'm not sure how sophisticated the demo was). After viewing the phone mock-ups on the nVidia website and seeing the demo, I have been obsessed with these devices. I know there currently aren't any, but I haven't heard any news of plans either.

I understand you can't confirm devices, as other parties are or may be manufacturing them, but I was wondering whether nVidia has any plans to enter the smartphone industry. Seeing as everyone was quite excited about your prototype phone, I'm surprised nothing has come of it yet. I know I personally would love to have an nVidia-built phone running the nVidia OS.

#13
Posted 10/17/2009 08:28 PM   
Hi everyone :rolleyes:
Apologies if this question is too simple; I'm not familiar with hardware stuff :wacko:

I'm using a GA-K8N-SLI motherboard, which according to [url="http://www.gigabyte.com.tw/Products/Motherboard/Products_Spec.aspx?ClassValue=Motherboard&ProductID=1928&ProductName=GA-K8N-SLI"]http://www.gigabyte.com.tw/Products/Mother...Name=GA-K8N-SLI[/url] can only use DDR400 memory.

I'm thinking of upgrading the graphics card from the present NX6600GT to a GeForce 9500 GT. However, I notice from [url="http://www.nvidia.com/object/product_geforce_9500gt_us.html"]http://www.nvidia.com/object/product_geforce_9500gt_us.html[/url] that the GeForce 9500 GT seems to use DDR2 memory...
=> Can a GeForce 9500 GT work on an old motherboard like the GA-K8N-SLI?

Thanks in advance for any clarification on the above query :rolleyes:

#14
Posted 10/17/2009 08:52 PM   
Yes, it will work, Flashz; they are both PCIe compliant. The DDR2 on the 9500 GT's spec page refers to the memory on the card itself, not your system memory, so your motherboard's DDR400 limit doesn't matter.

Chris

#15
Posted 10/17/2009 11:23 PM   