Fermi "Graphic" Facts
Fermi has 16 tessellation units.
1 tessellation unit per SM, which works out to 1 tessellation unit per 32 shaders.
Seeing upwards of a 3x-6x performance advantage in tessellation benchmarks.
4-triangles-per-clock setup engine (a 4x improvement over previous engines, which do it at 1x).
1.5 Gigabytes of Memory
512 CUDA Cores
384-bit Memory Bus
GDDR5
TMUs are decoupled and connected to the SMs rather than the shaders, increasing efficiency.
64 TMUs
48 ROPs
PolyMorph Tessellation Engine:
Vertex Fetch -> Tessellator -> Viewport Transform -> Attribute Setup -> Stream Output
The tessellator units are fixed-function hardware, but the pipeline around them is programmable.
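As a quick sanity check on the ratios quoted above, here is a back-of-the-envelope sketch in Python. The 700 MHz core clock is purely an assumed placeholder (final Fermi clocks were not public at the time), and the setup engine may well run in a different clock domain:

```python
# Back-of-the-envelope check of the unit ratios quoted in the facts list.
# ASSUMED_CORE_CLOCK_HZ is a hypothetical figure for illustration only.

CUDA_CORES = 512
TESSELLATION_UNITS = 16        # one PolyMorph engine per SM
TRIANGLES_PER_CLOCK = 4        # vs. 1 on previous setup engines
ASSUMED_CORE_CLOCK_HZ = 700e6  # assumption, not a confirmed spec

shaders_per_tess_unit = CUDA_CORES // TESSELLATION_UNITS
print(shaders_per_tess_unit)   # 32, matching "1 tessellation unit per 32 shaders"

peak_setup_rate = TRIANGLES_PER_CLOCK * ASSUMED_CORE_CLOCK_HZ
print(f"{peak_setup_rate / 1e9:.1f} billion triangles/s peak setup")  # 2.8
```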
Improved 8x MS performance: on average it should be at most about 10-15% slower than 4x.
32x CSAA = 8x MS + 24 coverage samples.
Transparency AA now works with coverage samples. In previous generations, 16x CSAA would only provide 4 color samples for Transparency AA; now it's covered by coverage samples as well.
Improved shader engine: better cache, better units. More units = improved PhysX support.
Unlike previous-generation Nvidia hardware, GF100 is "very" focused on geometry, whereas in the past Nvidia has primarily been focused on pixel fidelity.

Main disappointment: it won't be available till the end of Q1. Sorry I kept quiet for so long, but this is a dramatic shift in architecture for Nvidia, and they wanted to keep it under wraps for a while. Can't post performance analysis yet; we're gonna have to wait a little longer for that.
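For the curious, the CSAA numbers in the facts above can be turned into a rough per-pixel storage comparison. The per-sample byte costs here are illustrative assumptions (32-bit color plus 32-bit depth/stencil per fully stored sample, one bit per coverage-only sample), not official figures; the point is that the 24 extra coverage samples in 32x CSAA are nearly free compared to the 8 stored MS samples:

```python
# Rough per-pixel storage comparison of 8x MSAA vs. 32x CSAA.
# Byte costs are illustrative assumptions, not official numbers.

def pixel_bytes(color_samples, coverage_only_samples):
    stored = color_samples * (4 + 4)       # 32-bit color + 32-bit depth/stencil each
    coverage_bits = coverage_only_samples  # assume 1 bit per coverage-only sample
    return stored + coverage_bits / 8

msaa_8x = pixel_bytes(color_samples=8, coverage_only_samples=0)
csaa_32x = pixel_bytes(color_samples=8, coverage_only_samples=24)  # 8x MS + 24 coverage

print(msaa_8x)   # 64.0 bytes/pixel
print(csaa_32x)  # 67.0 bytes/pixel: 32 samples' worth of coverage for ~5% more storage
```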

#1
Posted 01/18/2010 06:00 AM   
Please keep Fermi launch posts to this thread, guys. I'd like to collect some feedback.

#2
Posted 01/18/2010 06:08 AM   
How is it a 338-bit memory bus? 5.3 memory channels?

#3
Posted 01/18/2010 06:19 AM   
Was a typo. I fixed it.

#4
Posted 01/18/2010 06:22 AM   
Looks like the NDA is over. Unfortunately, a lot of these tech stats are Greek to me; maybe I need to do some more Googling on what does what, but I also like to occasionally use my computer for fun stuff. It all looks pretty promising, though.

Guru3d.com just launched a tech preview as well. [url="http://www.guru3d.com/article/nvidia-gf100-fermi-technology-preview/"]HERE[/url]

I haven't even read it all yet, but I was quite intrigued by what I did read. A difference of 50 FPS on a GTX 285 versus 84 FPS on the GF100 in their Far Cry 2 benchmark is quite encouraging. If the GF100 can maintain an average 60-75% FPS increase over the 285 on high settings, it's definitely going to whoop the Radeon HD 5800 series, although probably not by as much as many of us would have hoped.

#5
Posted 01/18/2010 06:22 AM   
Fermi will be faster by a good margin compared to the GTX 285. But it's really gonna spread its wings and scream in DX11.

#6
Posted 01/18/2010 06:32 AM   
[quote name='ChrisRay' post='983262' date='Jan 17 2010, 10:32 PM']Fermi will be faster by a good margin compared to the GTX 285. But it's really gonna spread its wings and scream in DX11.[/quote]
Good to know.

I'm just glad that with some real numbers out now, expectations will come back down to Earth. Honestly, a 30% increase in gaming performance was the best that I was hoping for, but if we can see an average 60% increase over the GTX 285 maintained across the board, the GF100 can be seen as the next 8800 in terms of potential lifespan as a high-performance card.

If my sentences are missing huge parts or just not making sense, please forgive me: I'm on a lot of painkillers because I hurt my shoulder a few days ago.

#7
Posted 01/18/2010 06:47 AM   
I can't imagine a game right now worthy of a faster GPU than a GTX 285 (except Crysis and Crysis Warhead, but for those you can use SLI or Tri-SLI). IMO a single 300 (I mean "Fermi" by this) could replace an SLI or 3-way SLI setup of GTX 285s.

Asus Maximus IV - I7 2600K- Toughpower 1200W - 8GB Kingston 1600 - Noctua NCH + fans - SB XFi usb - SSD Intel 120 GB- TriSLI Zotac GTX580 - 3D vision Acer GD245HQ - W7 Ult. 64 bit

#8
Posted 01/18/2010 06:59 AM   
[quote name='Olonese' post='983267' date='Jan 17 2010, 10:59 PM']I can't imagine a game right now worthy of a faster GPU than a GTX 285 (except Crysis and Crysis Warhead, but for those you can use SLI or Tri-SLI). IMO a single 300 (I mean "Fermi" by this) could replace an SLI or 3-way SLI setup of GTX 285s.[/quote]

Dirt 2 with DX11; once your wait is over, you'll see...

:rolleyes:

#9
Posted 01/18/2010 07:14 AM   
I'm very interested in how it will handle DX11 as well. I have been looking at the ATI 5000 series, and sometimes the FPS is cut almost in half when going from DX10 to DX11. If NVIDIA can keep at least 75% of the performance when going to DX11, that would be good.


#10
Posted 01/18/2010 07:15 AM   
Many thanks, Chris. It is an impressive shift of architecture; although you can see a lot of the shift is about making the GPU into a better computing module, luckily a lot of this transfers over to gaming well!

A few websites (LOTS) with some form of technology preview information:

[url="http://www.guru3d.com/article/nvidia-gf100-fermi-technology-preview/1"]http://www.guru3d.com/article/nvidia-gf100...ology-preview/1[/url]

[url="http://www.anandtech.com/video/showdoc.aspx?i=3721"]http://www.anandtech.com/video/showdoc.aspx?i=3721[/url]

[url="http://www.tomshardware.com/reviews/gf100-fermi-directx-11,2536.html"]http://www.tomshardware.com/reviews/gf100-...tx-11,2536.html[/url]

[url="http://hothardware.com/Articles/NVIDIA-GF100-Architecture-and-Feature-Preview/"]http://hothardware.com/Articles/NVIDIA-GF1...eature-Preview/[/url]

I've only managed to read the Guru3d version so far, and it's already looking very impressive. It is a shame there are no real-world gaming figures just yet, though. I was kind of hopeful for power requirements and confirmed model versions, but no such luck.

3D Vision requires SLI for the three screens; not surprising, and what I suspected, although I was secretly hoping we would be able to use all three outputs on a card (even despite the differing connector types).

Guru3d makes an interesting point about how easily people are dismissing the rocket sled demo, in that it's actually a very complex tech demo. I get this, but this is half the problem with tech demos: compare one to something we know and it makes more sense. But you can't, because some of it is new technology. Compare it to something similar (ATI) and it still makes no sense due to the lack of PhysX.

More to the point, although a lot of the info coming out is just confirmation of rumours, it's still some mighty impressive stats. I mean, it's really a small computing array crammed into a GPU. Now all we need are the model names and stats, power and cooling requirements... plus perhaps some promised games that know how to use the hardware :)

J

Official GeForce Forums Benchmarking Leaderboards
NVIDIA SLI Technology: A Canine's Guide

Corsair Obsidian 350D mATX, Asus Maximus VI GENE Z87 mATX, Intel Core i7-4770k @ 4.40GHz, Corsair H110, Corsair Dominator Platinum 16GB (4x4GB) @ 2400MHz, 1x OCZ Vertex 4 256GB, 1x WD Scorpio Black 750GB, 2x WD Caviar Black 1TB, EVGA GeForce GTX 780Ti Superclock, Enermax 1250W Evolution, Windows 8 64bit.

Logitech G9x, Razer Black Widow Ultimate, Logitech G930, 2x Eizo EV2333W.

Twitter | Steam

#11
Posted 01/18/2010 07:19 AM   
Thought this was funny in the HotHardware preview:

[quote]And it gives developers the ability to provide data to the GPU at coarser resolution. This saves artists the time it would normally take to create more complex polygonal meshes and reduces the data's memory footprint.[/quote]

When a model tessellates (for example, a box), it can't just tessellate into a beautiful tree; the model needs to know what to tessellate into. Therefore a developer would have to provide both the low-poly model and the high-poly model for the tessellation to properly adhere. So it wouldn't save time for a developer, only add to it. However, most companies already have high-poly versions of their meshes, so really it's just the same process as before.

On a side note, I'm really excited for Fermi :) Can't wait to get my hands on one.
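That point can be illustrated with a toy sketch: subdividing an edge adds a vertex, but without a displacement source (a high-poly mesh or a displacement map) the new vertex just sits on the original flat surface, so no new detail appears. The `height_fn` here is a made-up stand-in for sampling an artist-authored displacement map:

```python
# Midpoint subdivision of one edge: without displacement, the new
# vertex lies exactly on the original flat edge, so the silhouette
# is unchanged. height_fn is a hypothetical stand-in for sampling
# an artist-authored displacement map.

def midpoint(a, b):
    return tuple((pa + pb) / 2 for pa, pb in zip(a, b))

def displace(v, height_fn):
    # push the vertex along +z by the sampled height
    x, y, z = v
    return (x, y, z + height_fn(x, y))

flat = midpoint((0.0, 0.0, 0.0), (2.0, 0.0, 0.0))
print(flat)    # (1.0, 0.0, 0.0): still on the original edge

bumpy = displace(flat, lambda x, y: 0.25)  # pretend the map says 0.25 here
print(bumpy)   # (1.0, 0.0, 0.25): only now does real new detail appear
```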

#12
Posted 01/18/2010 07:53 AM   
Ferrrr...mi!
Ferrrr...mi!
Ferrrr...mi!

FOR ME!

Intel Siler DX79SI Desktop Extreme | Intel Core i7-3820 Sandy Bridge-Extreme | DangerDen M6 and Koolance MVR-40s w/Black Ice Stealths | 32 GB Mushkin PC3-12800LV | NVIDIA GTX 660 Ti SLI | PNY GTX 470 | 24 GB RAMDisk (C:\Temp\Temp) | 120 GB Intel Cherryville SSDs (OS and UserData)| 530 GB Western Digital VelociRaptor SATA 2 RAID0 (C:\Games\) | 60 GB G2 SSDs (XP Pro and Linux) | 3 TB Western Digital USB-3 MyBook (Archive) | LG BP40NS20 USB ODD | LG IPS236 Monitor | LogiTech X-530 Speakers | Plantronics GameCom 780 Headphones | Cooler Master UCP 1100 | Cooler Master HAF XB | Windows 7 Pro x64 SP1

Stock is Extreme now

#13
Posted 01/18/2010 10:21 AM   
One thing we've gotta keep in mind, folks, is that these numbers could well be for the MIDRANGE card they were supposedly releasing first (last I heard). If so, it could be as little as half as powerful as the flagship card.

Now if that IS the flagship card... anything less than a 2:1 performance increase over the GTX 285 in single-card applications would make it not worth it to me to upgrade. But, having just gotten my Classifieds, I'm leaning a lot more towards keeping the old ;-)

Help fight Cancer, Alzheimer's and Parkinson's Disease by donating unused CPU and GPU power to Stanford University's Research Folding@Home projects:

Simplest method is to setup the FAH v7 client with this Windows Installation Guide

#14
Posted 01/18/2010 11:20 AM   
[quote name='Goddess84' post='983356' date='Jan 18 2010, 05:20 AM']One thing we've gotta keep in mind, folks, is that these numbers could well be for the MIDRANGE card they were supposedly releasing first (last I heard). If so, it could be as little as half as powerful as the flagship card.

Now if that IS the flagship card... anything less than a 2:1 performance increase over the GTX 285 in single-card applications would make it not worth it to me to upgrade. But, having just gotten my Classifieds, I'm leaning a lot more towards keeping the old ;-)[/quote]

Supposedly, the Far Cry 2 benchmark was done on the GTX 360 (the 448-CUDA-core version), so no one really knows how fast the high-end GTX 380 is, but I am guessing you can add at least another 15-20 FPS to that.

Oh, I also have a question for ChrisRay: will the GF100 match the GT200 in raw IQ, or will it be much better, like the HD 5870?

Case: Antec 1200

PSU: Corsair AX850 850W

CPU: Intel Core i7 860 @4GHz 1.352v cooled w. Xigmatek Thor's Hammer

RAM: 2x4GB G. Skill Sniper Series DDR3-1866 @1910MHz 9-10-9-28 2T 1.5v 2:10 FSB:DRAM

MB: ASUS Maximus III Formula Intel P55

HD: WD Velociraptor 160GB (Boot), WD Caviar Black 750GB (all my junk)

OD: ASUS Blu-Ray

GPU: 2 x EVGA GeForce GTX 670 FTW

Sound Card: Creative X-Fi Titanium HD

OS: Windows 7 Home Premium 64-bit

Peripherals: Logitech G510 keyboard, Logitech G500 laser mouse, Logitech Z-5500 5.1 speakers, Gateway FPD2485W 24'' monitor


#15
Posted 01/18/2010 12:13 PM   