NVIDIA Questions and Answers Updated 2/19/2010
[b]Q: Why leave the chipset business?[/b]

Tom Petersen, Director of Technical Marketing for SLI and PhysX: We will continue to innovate in integrated solutions for Intel's FSB architecture. We firmly believe that this market has a long, healthy life ahead. But because of Intel's improper claims to customers and to the market that we aren't licensed for the new DMI bus, it is effectively impossible for us to market chipsets for future CPUs. So, until we resolve this matter in court, we'll postpone further chipset investments for Intel DMI CPUs.

Despite Intel's actions, we have innovative products that we are excited to introduce to the market in the months ahead. We know these products will bring with them some amazing breakthroughs that will surprise the industry.


[b]Q: Now that ATI has made it a standard feature, what is NVIDIA doing to support 3+ monitor gaming? How would it work with SLI? Now that this is a known feature, when will we see driver support for Surround gaming and 3D Vision Surround?[/b]

Andrew Fear, Product Manager for 3D Vision: GTX 200 or GTX 400 GPUs in SLI will provide triple monitor gaming support. Not only that, we’ll also be supporting 3D Vision across the three panels, enabling a truly spectacular 3D gaming experience. We'll have more information on driver availability in the near future.


[b]Q: Is NVIDIA working with Pande Group on OpenCL for a rumored new F@H GPU client?[/b]

Andrew Humber, Senior PR Manager for Tesla: The OpenCL client development effort is being driven by the Pande Group at Stanford so we should allow them to comment on its status. What we can say is that we are working closely with them on this and a number of other projects that will continue to deliver improvements in Folding@Home performance for NVIDIA GPU contributors. Our view is to support the Folding@Home effort, irrespective of their choice of API.


[b]Q: How did you get so behind schedule on the Fermi? I just saw that it was delayed to 2010. How will you recover from lost sales to AMD/ATi?[/b]

Jason Paul, GeForce Product Manager: On the GF100 schedule—I think Ujesh Desai (our Vice President of Marketing) said it best when he said "designing GPUs is f'ing hard!" :) With GF100, we chose to tackle some of the toughest problems of graphics and compute. If we had merely doubled up on GT200, we might have shipped earlier, but essential elements for DX11 gaming, like support for scalable tessellation in hardware, would have remained unsolved.

While we all wish GF100 had been completed earlier, our investment in a new graphics and compute architecture is showing fantastic results, and we're glad that we took the time to do it right so gamers can get a truly great experience.

Regarding "lost sales" -- despite some rumors to the contrary, we have been shipping our GTX 200 GPUs in volume and they continue to sell well. In fact, our overall GeForce desktop market share grew during the last quarter: [url="http://www.pcper.com/comments.php?nid=8312"]http://www.pcper.com/comments.php?nid=8312[/url]


[center]
12/03/2009[/center]

[b]Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world?[/b]

Tom Petersen, Director of Technical Marketing: PhysX does not compete with OpenCL or DX11’s DirectCompute.

PhysX is an API and runtime that allows games and game engines to model the physics in a game. Think of PhysX as a layer above OpenCL or DirectCompute, which in contrast are very generic, low-level interfaces that enable GPU-accelerated computation. Game developers don’t create content in OpenCL or DirectCompute. Instead they author in toolsets (some of which are provided by NVIDIA) that allow them to be creative quickly. Once they have good content they “compile” it for a specific platform (PC, Wii, Xbox, PS3, etc.) using another tool flow.
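To make that layering concrete, here is a rough sketch of the level developers actually work at – scenes, actors, and simulation steps rather than kernels and command queues. It is written from memory against the PhysX 2.x SDK, so treat the exact names and signatures as illustrative, not copy-paste ready:

[code]
// Illustrative PhysX 2.x-style rigid body setup (names from memory, not production code)
#include "NxPhysics.h"

int main()
{
    // Create the SDK and a scene with gravity
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk) return 1;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    NxScene* scene = sdk->createScene(sceneDesc);

    // Static ground plane
    NxPlaneShapeDesc planeDesc;
    NxActorDesc planeActor;
    planeActor.shapes.pushBack(&planeDesc);
    scene->createActor(planeActor);

    // A dynamic sphere dropped from 10 units up
    NxSphereShapeDesc sphereDesc;
    sphereDesc.radius = 1.0f;
    NxBodyDesc bodyDesc;
    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&sphereDesc);
    actorDesc.body = &bodyDesc;
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 10.0f, 0.0f);
    scene->createActor(actorDesc);

    // Step the simulation at 60 Hz; the runtime decides how each stage executes
    for (int i = 0; i < 60; ++i) {
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
    }

    sdk->releaseScene(*scene);
    NxReleasePhysicsSDK(sdk);
    return 0;
}
[/code]

Notice that nothing in that code says how the simulation runs – on CPU cores, on a GPU through CUDA, or on a console's hardware. That choice lives below the API, which is exactly why PhysX and the low-level compute interfaces don't compete.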

During this process game studios have three basic concerns:[list=1]
[*]Does PhysX make it easier to develop games for all platforms – including consoles?
[*]Does PhysX make it easier to have kick-ass effects in my game?
[*]Will NVIDIA support my efforts to integrate this technology?
[/list]And the answer to the three questions above is: yes, yes, and yes. We are spending our time and money pursuing those goals to support developers, and right now the developer community is not telling us that OpenCL or DirectCompute support is required.

In the future this may or may not change, and the dynamics of this situation are hard to predict. We can say this though:[list=1]
[*]AMD and Intel are not investing today at the same pace as NVIDIA in GPU accelerated physics.
[*]AMD and Intel will need to do the bulk of the work required to support GPU accelerated PhysX on their products. NVIDIA is not going to do QA or design for AMD or Intel.
[/list]At the end of the day, the success of PhysX as a technology will depend on how easy it is for game designers to use and how incredible the game effects are that they create. Batman: Arkham Asylum is a good example of the type of effects we can achieve with PhysX running on NVIDIA GPUs, and we are working to make the next round of games even more compelling. At this time, NVIDIA has no plan to move from CUDA to either OpenCL or DirectCompute as the implementation engine for GPU acceleration. Instead we are working to support developers and implement killer effects.

So does NVIDIA profit from all this? We sure hope so. If we make our GPUs more desirable because they do incredible things with PhysX, then we have done a great job for our customers and made PC gaming more compelling.


[b]Q: Will PhysX become open-source?[/b]

Tom Petersen: NVIDIA is investing a lot of time and effort in PhysX and we do not plan to make it open source today. Of course the binaries for the SDK are distributed for free, and source code is available for licensing if game designers need it.


[center]
11/02/2009[/center]

[b]Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?[/b]

Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialized graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics” requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general-purpose parallel processor with massive computational power.

While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. That future is surely coming: films like Harry Potter and Transformers already use GPUs to simulate many of their special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. We’ve encapsulated our algorithm expertise into engines, such as the OptiX ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world-class graphics and parallel computing experts in our Content Technology organization whose passion is to inspire and collaborate with developers to make their games and applications better.

Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build GeForce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile users who want a great computing experience anywhere. A simple view of our business is that we build GeForce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these targets different users, and thus each requires a very different solution, but all are focused on visual computing.

For all of the gamers, there should be no doubt: you can count on the thousands of visual computing engineers at NVIDIA to create the absolute best graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy, help detect cancer early, or simply make your interaction with your computer more lively. It surely gives me great joy to know that what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

Keep in touch.

Jensen


[b]Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?[/b]

Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.

For now, the only working solution for GPU-accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the API we build on is not the most important part of the story for developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps complete their development cycles.


[b]Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology?[/b]

Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly!

[center]
10/23/2009[/center]

[b]1. Is NVIDIA moving away from gaming and focusing more on GPGPU? We have heard a lot about Fermi's compute capability, but nothing of how good it is for gamers.[/b]

Jason Paul, GeForce Product Manager: Absolutely not. We are all gamers here! But, like G80 and GT200 before it, Fermi has two personalities: graphics and compute. We chose to introduce Fermi’s compute capability at our GTC conference, which was very compute-focused and attended by developers, researchers, and companies using our GPUs and CUDA for compute-intensive applications. Such attendees require fairly long lead times for evaluating new technologies, so we felt it was the right time to unveil Fermi’s compute architecture. Fermi has a very innovative graphics architecture that we have yet to unveil.

Also, it’s important to note that our reason for focusing on compute isn’t all about HPC. We believe next generation games will exploit compute as heavily as graphics. For example:

[list]
[*]Physical simulation – whether using PhysX, Bullet, or DirectCompute, GPU computing can add incredible dynamic realism to games through physical simulation of the environment.
[*]Advanced graphical effects – compute shaders can be used to speed up advanced post-processing effects such as blurs, soft shadows, and depth of field, helping games look more realistic.
[*]Artificial intelligence – compute shaders can be used for artificial intelligence algorithms in games.
[*]Ray tracing – this is a little more forward-looking, but we believe ray tracing will eventually be used in games for incredibly photo-realistic graphics. NVIDIA’s ray tracing engine uses CUDA.
[/list]
Compute is important for all of the above. That’s why Fermi is built the way it is, with a strong emphasis on compute features and performance.
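To make the second bullet concrete, here is a toy compute kernel we sketched for this answer (not from any shipping title): a naive box blur of the kind that sits at the heart of many post-processing effects, written in CUDA.

[code]
// Naive horizontal box blur: one thread per output pixel (illustrative sketch)
__global__ void boxBlurRow(const float* in, float* out,
                           int width, int height, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float sum = 0.0f;
    int count = 0;
    for (int dx = -radius; dx <= radius; ++dx) {
        int sx = min(max(x + dx, 0), width - 1);  // clamp to the image edge
        sum += in[y * width + sx];
        ++count;
    }
    out[y * width + x] = sum / count;
}

// Launch with one 16x16 thread block per image tile:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   boxBlurRow<<<grid, block>>>(d_in, d_out, width, height, 4);
[/code]

The same kernel could be expressed as a DirectCompute or OpenCL shader; the point is that thousands of pixels are processed in parallel, which is exactly the workload GPUs are built for.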

In addition, we wouldn't be investing so heavily in gaming technologies if we were really moving away from gaming. Here are a few of the substantial investments NVIDIA is currently making in PC gaming:

[list]
[*]PhysX and 3D Vision technologies
[*]The Way It’s Meant to Be Played program, including technical support, game compatibility testing, developer tools, antialiasing profiles, ambient occlusion profiles, etc.
[*]LAN parties and gaming events (including PAX, PDX LAN, Fragapalooza, Million Man LAN, BlizzCon, and QuakeCon, to name a few recent ones). Here are some links to videos from those events:
[/list]
[url="http://www.slizone.com/object/slizone_eventsgallery_aug09.html"]http://www.slizone.com/object/slizone_even...lery_aug09.html[/url]

[url="http://www.nzone.com/object/nzone_quakecon09_trenches.html"]http://www.nzone.com/object/nzone_quakecon09_trenches.html[/url]

[url="http://www.nzone.com/object/nzone_blizzcon09_trenches.html"]http://www.nzone.com/object/nzone_blizzcon09_trenches.html[/url]

[url="http://www.nzone.com/page/nzone_section_trenches.html"]http://www.nzone.com/page/nzone_section_trenches.html[/url]

We put our money where our mouth is here.

Finally, Fermi has plenty of “traditional” graphics goodness that we haven’t talked about yet. Fermi’s graphics architecture is going to blow you guys away! Stay tuned.


[b]2. Why has NVIDIA continued to refresh the G92? Why didn't NVIDIA create an entry-level GT200 piece of hardware? The constant G92 renames and reuse of this aging part have caused a lot of discontent amongst the 3D enthusiast community.[/b]

Jason Paul, GeForce Product Manager: We hear you. We realize we are behind with GT200 derivative parts, and we are doing our best to get them out the door as soon as possible. We invested our engineering resources in transitioning our G9x-class products from 65nm to 55nm manufacturing technology as well as adding several new video and display features to GT 220/210, which pushed these GT200-derivative products out later than usual. Also, 40nm capacity has been limited, which has made the transition more difficult.

Since its introduction, G92 has remained a strong price/performance product in our line-up. So why did we rebrand it? While hardware enthusiasts often look at GPUs in terms of the silicon core (e.g., G92) and architecture (e.g., GT2xx), many of our less techie customers instead think about GPUs simply in terms of performance, price, and feature set, summarized via the product name. The product name is an easy way to communicate how products with the same base feature set (e.g., DirectX 10 support) compare to each other in terms of price and performance. Let’s take an example – which is the higher-performance product, an 8800 GT or a 9600 GT? The average Joe looking at an OEM web configurator or a Best Buy retail shelf probably won’t know the answer. But if they saw a 9800 GT and a 9600 GT, they would know that the 9800 GT would provide better performance. By keeping G92 branding current with the rest of our DirectX 10 product line-up, we were able to more effectively communicate to customers where the product fit in terms of price and performance. At the same time, we tried to make it clear to the technical press that these new brands were based on the G92 core so enthusiasts would know this information up front.


[b]3. Is it true that NVIDIA has offered to open up PhysX to ATi without stipulation so long as ATi offers its own support and codes its own driver, or is ATi correct in asserting that NVIDIA has stated that NV will never allow PhysX on ATi gpus? What is NVIDIA’s official stance in allowing ATi to create a driver at no cost for PhysX to run on their GPUs via OpenCL?[/b]

Jason Paul, GeForce Product Manager: We are open to licensing PhysX, and have done so on a variety of platforms (PS3, Xbox, Nintendo Wii, and iPhone to name a few). We would be willing to work with AMD, if they approached us. We can’t really give PhysX away for “free” for the same reason why a Havok license or x86 license isn’t free—the technology is very costly to develop and support. In short, we are open to licensing PhysX to any company that approaches us with a serious proposal.


[b]4. Is NVIDIA fully committed to supporting 3D Vision for the foreseeable future with consistent driver updates, or will we see a decrease in support, as appears to many 3D Vision users to be the current trend? For example, a lot of games have major issues with shadows while running 3D Vision. Can profiles fix these issues, or are we going to have to rely on developers to implement 3D Vision-compatible shadows? What role do developers play in having a good 3D Vision experience at launch?[/b]

Andrew Fear, 3D Vision Product Manager: NVIDIA is fully committed to 3D Vision. In the past four driver releases, we have added more than 50 game profiles to our driver and we have seeded over 150 3D Vision test setups to developers worldwide. Our devrel team works hard to evangelize the technology to game developers and you will see more developers ensuring their games work great with 3D Vision. Like any new technology, it takes time and not every developer is able to intercept their development/release cycles and make changes for 3D Vision. In the specific example of shadows, sometimes these effects are rendered with techniques that need to be modified to be compatible with stereoscopic 3D, which means we have to recommend users disable them. Some developers are making the necessary updates, and some are waiting to fix it in their next games.

In the past few months we have seen our developer relations team work with developers to make Batman: Arkham Asylum and Resident Evil 5 look incredible in 3D. And we are excited now to see new titles that are coming – such as Borderlands, Bioshock 2, and Avatar – that should all look incredible in 3D.

Game profiles can help configure many games, but game developers spending time to optimize for 3D Vision will make the experience better. To help facilitate that, we have provided new SDKs for our core 3D Vision driver architecture that lets developers have almost complete control over how their game is rendered in 3D. We believe these changes, combined with tremendous interest from developers, will result in a large growth of 3D Vision-Ready titles in the coming months and years.
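As a hedged sketch of what that control looks like in practice (the function names below are from the public NVAPI stereo interface as we understand it, and the D3D device pointer is whatever device your engine already owns – treat this as illustrative rather than definitive):

[code]
#include <windows.h>   // IUnknown and friends
#include "nvapi.h"

// Give the engine explicit control over stereo parameters (illustrative sketch)
void ConfigureStereo(IUnknown* d3dDevice)
{
    StereoHandle stereo = 0;
    if (NvAPI_Initialize() != NVAPI_OK) return;
    if (NvAPI_Stereo_CreateHandleFromIUnknown(d3dDevice, &stereo) != NVAPI_OK) return;

    // Convergence: the depth at which objects sit exactly at screen level
    NvAPI_Stereo_SetConvergence(stereo, 1.5f);

    // Separation: eye separation as a percentage of the driver's maximum
    NvAPI_Stereo_SetSeparation(stereo, 25.0f);

    NvAPI_Stereo_DestroyHandle(stereo);
}
[/code]

With hooks like these, a developer can tune convergence per scene or per camera instead of relying on driver defaults.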

In addition to making gaming better, we are also working on expanding our ecosystem to support better picture, movie, and Web experiences in 3D. A great example is our support for the Fujifilm FinePix REAL 3D W1 camera. We were the first 3D technology provider to recognize the new 3D picture file format produced by the camera and provide software for our users. In upcoming drivers, you will also see even more enhancements for a 3D Web experience.


[b]5. Could Favre really lead the Vikings to a Super Bowl?[/b]

Ujesh Desai, Vice President of GeForce GPU Business: We are glad that the community looks to us to tackle the tough questions, so we put our GPU computing horsepower to work on this one! After simulating the entire 2009-2010 NFL football season using a Tesla supercomputing cluster running a CUDA simulation program, we determined there is a 23.468% chance of Favre leading the Vikings to a Super Bowl this season.* But Tesla supercomputers aside, anyone with half a brain knows the Eagles are gonna finally win it all this year! :)

[i]*Disclaimer: NVIDIA is not liable for any gambling debts incurred based on this data.[/i]

Feel free to discuss NVIDIA's responses in this thread.

[b]Do not ask new questions in this thread.[/b] Use the question submission thread: [url="http://forums.nvidia.com/index.php?showtopic=109093"]http://forums.nvidia.com/index.php?showtopic=109093[/url]

Amorphous

Good to hear news on Fermi, I can't wait till it gets released.

The thing I'm most curious about is whether the top card will be a two-in-one arrangement or a single card, as per my question in the question thread...
    Good to hear news on Fermi, i cant wait till it gets released.



    the thing im most curious about is whether the top card will be a two-in-one arrangement or a single card, as per my question in the question thread...
    [b]Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?[/b]

    Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.

    We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.

    The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.

    We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialize graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics" requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.

    While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.

    To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. And we’ve encapsulated our algorithm expertise into engines, such as the Optix ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world class graphics and parallel computing experts in our Content Technology whose passion is to inspire and collaborate with developers to make their games and applications better.

    Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.

    NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build Geforce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile user who want a great computing experience anywhere. A simple view of our business is that we build Geforce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these target different users, and thus each require a very different solution, but all are visual computing focused.

    For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or to just make your computer interaction lively. It surely gives me great joy to know what started out as “the essential gear of gamers for universal domination” is now off to really save the world.

    Keep in touch.

    Jensen


    [b]Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?[/b]

    Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.

    For now, the only working solution for GPU accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the APIs that we do that on is not the most important part of the story to developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers and generally anything that helps complete their development cycles.


    [b]Q: How is NVIDIA approaching the tessellation requirements for DX11 as none of the previous and current generation cards have any hardware specific to this technology?[/b]

    Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly!
    Q: With AMD's acquisition of ATI and Intel becoming more involved in graphics, what will NVIDIA do to remain competitive in the years to come?



    Jen-Hsun Huang, CEO and founder of NVIDIA: The central question is whether computer graphics is maturing or entering a period of rapid innovation. If you believe computer graphics is maturing, then slowing investment and “integration” is the right strategy. But if you believe graphics can still experience revolutionary advancement, then innovation and specialization is the best strategy.



    We believe we are in the midst of a giant leap in computer graphics, and that the GPU will revolutionize computing by making parallel computing mainstream. This is the time to innovate, not integrate.



    The last discontinuity in our field occurred eight years ago with the introduction of programmable shading and led to the transformation of the GPU from a fixed-pipeline ASIC to a programmable processor. This required GPU design methodology to include the best of general-purpose processors and special-purpose accelerators. Graphics drivers added the complexity of shader compilers for Cg, HLSL, and GLSL shading languages.



    We are now in the midst of a major discontinuity that started three years ago with the introduction of CUDA. We call this the era of GPU computing. We will advance graphics beyond “programmable shading” to add even more artistic flexibility and ever more power to simulate photo-realistic worlds. Combining highly specialize graphics pipelines, programmable shading, and GPU computing, “computational graphics” will make possible stunning new looks with ray tracing, global illumination, and other computational techniques that look incredible. “Computational graphics" requires the GPU to have two personalities – one that is highly specialized for graphics, and the other a completely general purpose parallel processor with massive computational power.



    While the parallel processing architecture can simulate light rays and photons, it is also great at physics simulation. Our vision is to enable games that can simulate the interaction between game characters and the physical world, and then render the images with film-like realism. This is surely in the future since films like Harry Potter and Transformers already use GPUs to simulate many of the special effects. Games will once again be surprising and magical, in a way that is simply not possible with pre-canned art.



    To enable game developers to create the next generation of amazing games, we’ve created compilers for CUDA, OpenCL, and DirectCompute so that developers can choose any GPU computing approach. We’ve created a tool platform called Nexus, which integrates into Visual Studio and is the world’s first unified programming environment for a heterogeneous computing architecture with the CPU and GPU in a “co-processing” configuration. And we’ve encapsulated our algorithm expertise into engines, such as the Optix ray-tracing engine and the PhysX physics engine, so that developers can easily integrate these capabilities into their applications. And finally, we have a team of 300 world class graphics and parallel computing experts in our Content Technology whose passion is to inspire and collaborate with developers to make their games and applications better.



    Some have argued that diversifying from visual computing is a growth strategy. I happen to believe that focusing on the right thing is the best growth strategy.



    NVIDIA’s growth strategy is simple and singular: be the absolute best in the world in visual computing – to expand the reach of GPUs to transform our computing experience. We believe that the GPU will be incorporated into all kinds of computing platforms beyond PCs. By focusing our significant R&D budget to advance visual computing, we are creating breakthrough solutions to address some of the most important challenges in computing today. We build Geforce for gamers and enthusiasts; Quadro for digital designers and artists; Tesla for researchers and engineers needing supercomputing performance; and Tegra for mobile user who want a great computing experience anywhere. A simple view of our business is that we build Geforce for PCs, Quadro for workstations, Tesla for servers and cloud computing, and Tegra for mobile devices. Each of these target different users, and thus each require a very different solution, but all are visual computing focused.



    For all of the gamers, there should be no doubt: You can count on the thousands of visual computing engineers at NVIDIA to create the absolute graphics technology for you. Because of their passion, focus, and craftsmanship, the NVIDIA GPU will be state-of-the-art and exquisitely engineered. And you should be delighted to know that the GPU, a technology that was created for you, is also able to help discover new sources of clean energy and help detect cancer early, or to just make your computer interaction lively. It surely gives me great joy to know what started out as “the essential gear of gamers for universal domination” is now off to really save the world.



    Keep in touch.



    Jensen





    Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world? Will PhysX become open-source?



    Tom Petersen, Director of Technical Marketing: NVIDIA supports and encourages any technology that enables our customers to more fully experience the benefits of our GPUs. This applies to things like CUDA, DirectCompute and OpenCL—APIs where NVIDIA has been an early proponent of the technology and contributed to the specification development. If someday a GPU physics infrastructure evolves that takes advantage of those or even a newer API, we will support it.



    For now, the only working solution for GPU-accelerated physics is PhysX. NVIDIA works hard to make sure this technology delivers compelling benefits to our users. Our investments right now are focused on making those effects more compelling and easier to use in games. But the API we implement that on is not the most important part of the story for developers, who are mostly concerned with features, cost, cross-platform capabilities, toolsets, debuggers, and generally anything that helps them complete their development cycles.
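    For a sense of the layer PhysX sits above, here is a hedged sketch of hand-rolled GPU physics at the raw compute level: a single CUDA kernel integrating particles under gravity, with illustrative names throughout. Everything a real game needs on top of this (collision shapes, a constraint solver, joints, authoring tools) is exactly what the middleware provides.

[code]
#include <cuda_runtime.h>

// Plain data is all a raw compute API hands you: no scenes, actors, or joints.
struct Particle {
    float3 pos;
    float3 vel;
};

// Explicit Euler step with gravity and a crude ground plane at y = 0.
__global__ void integrate(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    p[i].vel.y += -9.81f * dt;   // gravity
    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;

    if (p[i].pos.y < 0.0f) {     // bounce off the ground, losing energy
        p[i].pos.y = 0.0f;
        p[i].vel.y = -0.5f * p[i].vel.y;
    }
}

// Called once per frame by the game loop (host setup elided).
void stepSimulation(Particle* devParticles, int n, float dt) {
    integrate<<<(n + 255) / 256, 256>>>(devParticles, n, dt);
}
[/code]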





    Q: How is NVIDIA approaching the tessellation requirements of DX11, given that none of the previous or current generation cards have any hardware specific to this technology?



    Jason Paul, Product Manager, GeForce: Fermi has dedicated hardware for tessellation (sorry Rys :-P). We’ll share more details when we introduce Fermi’s graphics architecture shortly!
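    As an aside on what tessellation without dedicated hardware would mean in practice (a question raised later in this thread), here is an illustrative CUDA sketch of one level of uniform midpoint subdivision written as a plain compute kernel; the names and structure are assumptions for illustration, not how any driver actually works.

[code]
#include <cuda_runtime.h>

struct Tri {
    float3 a, b, c;
};

__device__ float3 midpoint(float3 p, float3 q) {
    return make_float3(0.5f * (p.x + q.x),
                       0.5f * (p.y + q.y),
                       0.5f * (p.z + q.z));
}

// One level of uniform subdivision: each input triangle becomes four,
// split along the midpoints of its edges.
__global__ void subdivide(const Tri* in, Tri* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Tri t = in[i];
    float3 ab = midpoint(t.a, t.b);
    float3 bc = midpoint(t.b, t.c);
    float3 ca = midpoint(t.c, t.a);

    out[4 * i + 0] = Tri{ t.a, ab, ca };
    out[4 * i + 1] = Tri{ ab, t.b, bc };
    out[4 * i + 2] = Tri{ ca, bc, t.c };
    out[4 * i + 3] = Tri{ ab, bc, ca };
}
[/code]

    This works, but each level quadruples the triangle count and takes a full pass through device memory, whereas a fixed-function tessellator expands patches on-chip and feeds the results straight into the next pipeline stage.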


    #4
    Posted 11/03/2009 12:14 AM   
    [quote name='Amorphous' post='944036' date='Nov 2 2009, 07:14 PM']Jen-Hsun Huang, CEO and founder of NVIDIA:

    ...We believe we are in the midst of a giant leap in computer graphics, and that [i]the GPU will revolutionize computing by making parallel computing mainstream[/i]. This is the time to innovate, not integrate...[/quote]
    The singular reason I chose nVidia...

    ...and the most compelling reason to remain nVidia.

    #5
    Posted 11/03/2009 08:56 AM   
    I'm surprised tessellation has dedicated hardware... I thought Fermi could use some of its shader cores to perform the tessellation calculations. At least if that were the case, the algorithms used could be 'upgraded' to improve performance or add features, etc.

    It's almost like the fixed-function stuff we had years ago, before programmable shaders were available.

    Surely tessellation could have been done in a 'programmable' way?

    #6
    Posted 11/03/2009 01:56 PM   
    For what it's worth, my response to Andrew Fear's answers regarding 3D Vision is [url="http://forums.nvidia.com/index.php?s=&showtopic=109782&view=findpost&p=605215"]here[/url].
    Cheers,
    DD

    #7
    Posted 11/21/2009 08:59 AM   
    [quote name='DickDastardly' post='953879' date='Nov 21 2009, 04:59 AM']For what it's worth, my response to Andrew Fear's answers regarding 3D Vision is [url="http://forums.nvidia.com/index.php?s=&showtopic=109782&view=findpost&p=605215"]here[/url].
    Cheers,
    DD[/quote]

    It ain't worth much, Dick. It just looks like a bash fest instead of civilly addressing points one by one. But to each his own, I guess.
    Catch more flies with honey.........

    #8
    Posted 11/23/2009 11:14 AM   
    [quote name='keysplayr' post='954554' date='Nov 23 2009, 11:14 AM']It aint worth much Dick. Just looks like a bash fest instead of civilly addressing points one by one. But, to each his own I guess.
    Catch more flies with honey.........[/quote]
    If you're familiar with 3D Vision and you disagree with [i]any[/i] of my comments, then I'd be interested to hear which. As for the general tone of frustration in my remarks, if you read the 3D Vision forum you'll find it's a feeling shared by many other users who are similarly disappointed by the almost total lack of progress on fixing flaws and missing features in the drivers, staggeringly feeble support and, perhaps most annoying of all, the way nVidia have completely failed to capitalize on their enormous potential influence with game developers to ensure 3D Vision compatibility.
    Cheers,
    DD

    #9
    Posted 11/23/2009 08:23 PM   
    Once again, these forums are for user-to-user support. They are not an official support channel, and they are not here to send feedback to NVIDIA (use the appropriate NVIDIA feedback page). If you have a question you think it would be worthwhile for NVIDIA to respond to, use the established question submission thread and take note of its rules and guidelines for question submission. If you wish to discuss issues with your setup, do it in your own thread; do not post about your technical issues in this thread. The purpose of this thread is to discuss NVIDIA's responses to questions from the community, listed above.


    Amorphous

    #10
    Posted 11/23/2009 08:37 PM   
    Sorry about the delay in getting this thread updated. Entirely my fault!

    [b]Q: How do you expect PhysX to compete in a DirectX 11/OpenCL world?[/b]

    Tom Petersen, Director of Technical Marketing: PhysX does not compete with OpenCL or DX11’s DirectCompute.

    PhysX is an API and runtime that allows games and game engines to model the physics in a game. Think of PhysX as a layer above OpenCL or DirectCompute, which in contrast are very generic, low-level interfaces that enable GPU-accelerated computation. Game developers don’t create content in OpenCL or DirectCompute. Instead they author in toolsets (some of which are provided by NVIDIA) that allow them to be creative quickly. Once they have good content they “compile” it for a specific platform (PC, Wii, Xbox, PS3, etc.) using another tool flow.

    During this process game studios have three basic concerns:[list=1]
    [*]Does PhysX make it easier to develop games for all platforms – including consoles?
    [*]Does PhysX make it easier to have kick-ass effects in my game?
    [*]Will NVIDIA support my efforts to integrate this technology?
    [/list]And the answer to the three questions above is: yes, yes, and yes. We are spending our time and money pursuing those goals to support developers, and right now the developer community is not telling us that OpenCL or DirectCompute support is required.

    In the future this may or may not change, and the dynamics of this situation are hard to predict. We can say this though:[list=1]
    [*]AMD and Intel are not investing today at the same pace as NVIDIA in GPU accelerated physics.
    [*]AMD and Intel will need to do the bulk of the work required to support GPU accelerated PhysX on their products. NVIDIA is not going to do QA or design for AMD or Intel.
    [/list]At the end of the day, the success of PhysX as a technology will depend on how easy it is for game designers to use and how incredible the game effects are that they create. Batman: Arkham Asylum is a good example of the type of effects we can achieve with PhysX running on NVIDIA GPUs, and we are working to make the next round of games even more compelling. At this time, NVIDIA has no plan to move from CUDA to either OpenCL or DirectCompute as the implementation engine for GPU acceleration. Instead we are working to support developers and implement killer effects.

    So does NVIDIA profit from all this? We sure hope so. If we make our GPUs more desirable because they do incredible things with PhysX, then we have done a great job for our customers and made PC gaming more compelling.


    [b]Q: Will PhysX become open-source?[/b]

    Tom Petersen: NVIDIA is investing a lot of time and effort in PhysX and we do not plan to make it open source today. Of course the binaries for the SDK are distributed for free, and source code is available for licensing if game designers need it.

    #11
    Posted 12/08/2009 07:05 AM   
    [b]Q: Why leave the chipset business?[/b]

    Tom Petersen, Director of Technical Marketing for SLI and PhysX: We will continue to innovate in integrated solutions for Intel's FSB architecture. We firmly believe that this market has a long healthy life ahead. But because of Intel's improper claims to customers and the market that we aren't licensed to the new DMI bus, it is effectively impossible for us to market chipsets for future CPUs. So, until we resolve this matter in court, we'll postpone further chipset investments for Intel DMI CPUs.

    Despite Intel's actions, we have innovative products that we are excited to introduce to the market in the months ahead. We know these products will bring with them some amazing breakthroughs that will surprise the industry.


    [b]Q: Now that ATI has made it a standard feature, what is NVIDIA doing to support 3+ monitor gaming? How would it work with SLI? Now that this is a known feature, when will we see driver support for Surround gaming and 3D Vision Surround?[/b]

    Andrew Fear, Product Manager for 3D Vision: GTX 200 or GTX 400 GPUs in SLI will provide triple monitor gaming support. Not only that, we’ll also be supporting 3D Vision across the three panels, enabling a truly spectacular 3D gaming experience. We'll have more information on driver availability in the near future.


    [b]Q: Is NVIDIA working with Pande Group on OpenCL for a rumored new F@H GPU client?[/b]

    Andrew Humber, Senior PR Manager for Tesla: The OpenCL client development effort is being driven by the Pande Group at Stanford so we should allow them to comment on its status. What we can say is that we are working closely with them on this and a number of other projects that will continue to deliver improvements in Folding@Home performance for NVIDIA GPU contributors. Our view is to support the Folding@Home effort, irrespective of their choice of API.


    [b]Q: How did you get so behind schedule on the Fermi? I just saw that it was delayed to 2010. How will you recover from lost sales to AMD/ATi?[/b]

    Jason Paul, GeForce product manager: On the GF100 schedule—I think Ujesh Desai (our Vice President of Marketing) said it best when he said "designing GPUs is f'ing hard!" :) With GF100, we chose to tackle some of the toughest problems of graphics and compute. If we had merely doubled up on GT200, we might have shipped earlier, but essential elements for DX11 gaming, like support for scalable tessellation in hardware, would have remained unsolved.

    While we all wish GF100 would have been completed earlier, our investment in a new graphics and compute architecture is showing fantastic results, and we're glad that we took the time to do it right so gamers can get a truly great experience.

    Regarding "lost sales" -- despite some rumors to the contrary, we have been shipping our GTX 200 GPUs in mass and they continue to sell well. In fact, our overall GeForce desktop market share grew during the last quarter: [url="http://www.pcper.com/comments.php?nid=8312"]http://www.pcper.com/comments.php?nid=8312[/url]

    #12
    Posted 02/20/2010 09:26 AM   
    Hey there, I have an NVIDIA GeForce 6200 graphics card (256 MB).
    Can I play all the latest games?
    Like CoD MW2?

    #13
    Posted 03/28/2010 10:38 AM   
    The Rocket Sled demo: does it have a loop mode, and are there any plans to create a DX11 benchmarking application?

    #14
    Posted 04/23/2010 11:25 PM   
    My current case does not have the room to fit a dual-slot card. My best friend has one of the 480 cards, and they are amazing cards and a dream come true. I know that as EVGA and other card companies get their hands on them, they will most likely release these same cards with less power but in a single-slot format; I know this for a fact based on a conversation I had with a person from NVIDIA. I am just posting here hoping that maybe someone has heard news of single-slot 470/480s coming down the line.

    #15
    Posted 05/02/2010 09:35 AM   