Pre-render & OpenGL: gathering info regarding pre-render settings and nvidia drivers
[b]First of all hi nvidia forum,[/b]

I come here with a question I've spent a lot of time searching for the answer to all over the internet, and I've resigned myself to coming here and asking the nvidia community. So hi again, I'm new here.


Two years ago (or maybe a bit more) I talked with the creator of nHancer (Grestorn) about the nvidia drivers and the pre-render option regarding OpenGL games (e.g. Unreal 3, QuakeLive, etc.).

At that time we ran a test: we benchmarked Unreal 3 with pre-render set to 0, 1, 2 and 3, and came up with this result:
* pre-render at 2 or 3 obviously gave more fps, and conversely, when set to 0, the input (keyboard & mouse) was more responsive. It was clear that the pre-render option was working with OpenGL at that time.

Also, back in the day, some people were talking about a registry key you could modify (ogl_maxframesallowed).
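(Just to illustrate what I mean by a registry tweak, here is a minimal Win32 sketch that reads such a DWORD value. The subkey path below is only a placeholder; I don't know where ogl_maxframesallowed actually lived in any given driver version.)

[code]
// Minimal sketch: read a DWORD driver value from the registry.
// NOTE: the subkey path is a placeholder, not the real location of
// ogl_maxframesallowed (it moved around between driver versions).
#include <windows.h>
#include <cstdio>
#pragma comment(lib, "Advapi32.lib")

int main() {
    DWORD value = 0;
    DWORD size  = sizeof(value);
    LSTATUS rc = RegGetValueA(
        HKEY_LOCAL_MACHINE,
        "SOFTWARE\\NVIDIA Corporation\\Global\\OpenGL",   // placeholder path
        "ogl_maxframesallowed",
        RRF_RT_REG_DWORD, nullptr, &value, &size);
    if (rc == ERROR_SUCCESS)
        printf("ogl_maxframesallowed = %lu\n", value);
    else
        printf("value not found (error %ld)\n", rc);
    return 0;
}
[/code]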


So here comes my question: let's say I buy a recent nvidia graphics card (e.g. the 500 series). Will I be able to set the pre-render to 0 in OpenGL games? If so, by which method?

Cordially.

#1
Posted 10/31/2011 08:19 PM   
[color="#008000"]*Update (May 2, 2012)*
In r300 drivers, nVidia reintroduced a pre-render setting for OpenGL. I didn't test if the NVCPL applies it, but Inspector certainly does. It's called "Maximum frames allowed".

Thanks nVidia for adding this.


*Update (May 8, 2012)*
Just checked, and the NVCPL does indeed adjust this setting when you change "Maximum pre-rendered frames". For OpenGL the range is from 0 to 4, with a default of 2, and zero reserved as "let the 3d application or driver decide".

[img]http://forums.nvidia.com/uploads/monthly_05_2012/post-361-1336490829295.png[/img]
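For anyone who would rather script this than click through Inspector, here is a rough sketch using the NVAPI driver-settings (DRS) interface, which is what Inspector drives under the hood. The identifier OGL_MAX_FRAMES_ALLOWED_ID is my assumption; check NvApiDriverSettings.h in your NVAPI SDK for the exact name and ID.

[code]
// Rough sketch: set the OpenGL "Maximum frames allowed" value via NVAPI DRS.
// ASSUMPTION: the setting is exposed as OGL_MAX_FRAMES_ALLOWED_ID in
// NvApiDriverSettings.h; verify the name/ID against your SDK header.
#include <nvapi.h>
#include <NvApiDriverSettings.h>
#include <cstdio>

int main() {
    if (NvAPI_Initialize() != NVAPI_OK) return 1;

    NvDRSSessionHandle session = 0;
    NvDRSProfileHandle profile = 0;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);              // load current driver settings
    NvAPI_DRS_GetBaseProfile(session, &profile);  // global (base) profile

    NVDRS_SETTING setting = {0};
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = OGL_MAX_FRAMES_ALLOWED_ID;  // assumed identifier
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = 1;                  // range 0-4, default 2, 0 = app decides

    if (NvAPI_DRS_SetSetting(session, profile, &setting) == NVAPI_OK &&
        NvAPI_DRS_SaveSettings(session) == NVAPI_OK)
        printf("Maximum frames allowed set to %u\n", setting.u32CurrentValue);

    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
    return 0;
}
[/code]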

-----------------------------------------------
-----------------------------------------------[/color]

The control panel says maximum pre-rendered frames only works with DirectX, and I believe that to be the case.

There used to be a registry tweak for OpenGL pre-render, but it has been gone since driver r256.

Pre-render of zero is not a good configuration anyway, as it will only succeed in stalling the CPU/GPU and decreasing performance (for a negligible decrease in latency -- if any).

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#2
Posted 10/31/2011 08:22 PM   
Thank you for your answer. Back in those days, the nvidia CP also said it works only for DirectX, but that clearly was not the case.

Regarding the usefulness of such an option: if your framerate is very low and you have a 3-frame delay, the input lag will be big. If you play a game at a good framerate, the same 3-frame delay is obviously shorter, so it's ok. Now imagine you play a fast-paced FPS with the framerate capped at 125; in that situation you would want to disable pre-rendering at all costs to avoid any input lag.
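(To put rough numbers on that, the extra latency is just queue depth times frame time; this is generic arithmetic, nothing driver-specific.)

[code]
// Back-of-the-envelope: extra latency added by a render-ahead queue
// is roughly (queued frames) x (frame time).
#include <cstdio>

int main() {
    const int queued_frames = 3;
    const double fps_values[] = {30.0, 60.0, 125.0};
    for (double fps : fps_values) {
        double added_ms = queued_frames * 1000.0 / fps;
        printf("%6.1f fps -> up to %5.1f ms of extra lag from %d queued frames\n",
               fps, added_ms, queued_frames);
    }
    return 0;
}
[/code]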

I was investigating the question since I'm very sensitive to any input delay. I play a very fast-paced FPS (QuakeLive) at a constant 125 fps, 800x600 @ 200 Hz, and would like to upgrade my 8-year-old graphics card to something new.

While writing this last line I was thinking about something: maybe the easiest way would be to find someone willing to run a benchmark with an OpenGL game to see if pre-rendering really is nonexistent these days.

#3
Posted 10/31/2011 08:59 PM   
[quote name='nvtweakman' date='31 October 2011 - 09:22 PM' timestamp='1320092545' post='1318219']
There use to be registry tweak for OpenGL pre-render, but that is gone since driver r256.
[/quote]

The latest driver I could find before r256 is 197.45; do you think the option was still available in that one?

#4
Posted 11/01/2011 10:06 AM   
Maybe, but I can't recommend such old drivers.

Lately I've been playing Doom 3 and Rage (both OpenGL-based engines capped to 60 FPS) and there is no noticeable input lag, even with vsync and triple buffering enabled. The pre-render setting doesn't even introduce lag when I set it to 255.

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#5
Posted 11/01/2011 09:42 PM   
Thanks. Maybe the pre-render setting didn't work at such a high value. Theoretically, with games like Doom 3 and Rage, which are capped at 60 fps, asking for a pre-render of 255 frames would make the input really delayed (more than 1000 ms; 255 frames at 60 fps is over four seconds). Either that, or it could also mean that it doesn't work with OpenGL games anymore.

If you could try it with some DirectX games, that would make more sense :). I'm gonna run some tests on another computer featuring a GTX 560 Ti, another 0 ms CRT @ 200 Hz and a Q9550 @ 3.7 GHz. I hope I can sort it out.

But I want to add that during my chats with other people I discovered a point of confusion: some people say the pre-render option makes the CPU pre-render stuff, others say the GPU pre-renders the frames.


[quote name='on another forum by Nukm' date='01 November 2011 - 10:42 PM' timestamp='1320183778' post='1318804']
if your CPU is not as quick as your GPU then you'll get some input lag, this is where the Max Prerendered Frames comes to the rescue as its less job for your CPU (set it less than 3), if your CPU is quicker than your GPU then actually increasing the max-prerendered frames improves things. (don't go above 5 though).
Maximum Pre-Rendered Frames: If available, this option - previously known as 'Max Frames to Render Ahead' in old Forceware versions - controls the number of frames the CPU prepares in advance of being rendered by the GPU. The default value is 3 - higher values tend to result in smoother but more laggy gameplay, while lower values can help reduce mouse and keyboard lag. However extremely low values such as 0 may hurt performance, so I recommend this option be kept at its default of 3 globally, and only adjusted downwards in specific game profiles. Remember, in most cases mouse lag is due to low framerates, so adjusting this option is not an automatic cure to lag issues, nor should it be the first thing you try. Finally, it only works in DirectX games, not OpenGL games.[/quote]

I must admit I hit a wall here. I don't understand the CPU preparing frames; I thought only the GPU did such a thing. Or are these frames related to input + game engine? If so, how could we quantify these frames? I'm totally confused now, as you can see.

I would (and I'm sure I'm not alone :) ) be very grateful if some people with knowledge about this could take a little bit of time to explain it.

Again, thanks for taking the time to answer.
Cordially.

#6
Posted 11/02/2011 07:07 PM   
I'm pretty sure it doesn't apply to OpenGL; my experiments with Quake 3, Jedi Academy and such have shown zero difference when altering this setting (from 0 to 8).

On the other hand, I remember it making a huge difference in UT3 (DX9) at some point; a patch later added an option that let you change it from the game's menu, because many hardcore gamers begged for a fix to the obvious mouse lag. Turning it on resulted in noticeably smoother gameplay but also noticeable input lag, while turning it off made the mouse input much better at the expense of smoothness. Although if you had 100+ fps, disabling it couldn't really hurt.

However, the setting didn't look exactly like the one in the nvidia control panel: it was just ON or OFF. But it had exactly the same effect as reducing the max pre-rendered frames pre-patch. They call it "one frame thread lag".

#7
Posted 11/02/2011 10:28 PM   
[quote name='Somebody from some other forum']if your CPU is not as quick as your GPU then you'll get some input lag[/quote]

That information is ass-backwards, actually. Swap the terms CPU and GPU and it becomes correct.

[quote name='Dwayne_Hicks' date='02 November 2011 - 03:07 PM' timestamp='1320260827' post='1319372']I must admit I hit a wall here. I don't understand the CPU preparing frames; I thought only the GPU did such a thing. Or are these frames related to input + game engine? If so, how could we quantify these frames? I'm totally confused now, as you can see.[/quote]

Pre-rendering improves performance by allowing the CPU to prepare the set of commands necessary to render the next frame while the GPU is busy rendering the current frame. Think of it as a queue or buffer that helps prevent the CPU and GPU from getting stuck waiting on each other. Perhaps now you could imagine what would happen in a CPU-limited situation versus a GPU-limited scenario.
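If it helps, here is a toy model of that queue; it is purely illustrative (no real graphics API involved). One thread plays the CPU preparing frames and blocks once the queue already holds the maximum number of pre-rendered frames, while the other plays the GPU draining it.

[code]
// Toy model of the render-ahead queue: the "CPU" thread prepares frames and
// blocks once max_prerendered of them are waiting; the "GPU" thread drains
// them. Purely illustrative; no real graphics API involved.
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    const int max_prerendered = 3;    // the driver setting being discussed
    std::queue<int> q;                // "frames" waiting for the GPU
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    std::thread gpu([&] {             // consumer: pretends to render
        while (true) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty() || done; });
            if (q.empty()) break;     // nothing left and producer finished
            int frame = q.front(); q.pop();
            lk.unlock();
            cv.notify_all();          // tell the CPU a queue slot freed up
            std::this_thread::sleep_for(std::chrono::milliseconds(16));  // "GPU-bound"
            printf("GPU finished frame %d\n", frame);
        }
    });

    for (int frame = 0; frame < 10; ++frame) {   // producer: the game/CPU side
        std::unique_lock<std::mutex> lk(m);
        // This wait is the pre-render limit: with the queue full, the CPU stalls.
        cv.wait(lk, [&] { return (int)q.size() < max_prerendered; });
        q.push(frame);   // input sampled for this frame is now several frames
        lk.unlock();     // away from actually reaching the screen
        cv.notify_all();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_all();
    gpu.join();
    return 0;
}
[/code]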

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#8
Posted 11/03/2011 11:04 PM   
Thank you to both of you for all the answers you provided.

#9
Posted 11/05/2011 07:35 PM   
In r300, nVidia reintroduced a pre-render setting for OpenGL. I didn't test if the NVCPL applies it, but Inspector certainly does. It's called "Maximum frames allowed".

Thanks nVidia for adding this.

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#10
Posted 05/02/2012 10:31 AM   
[quote]That information is ass-backwards, actually. Swap the terms CPU and GPU and it becomes correct.[/quote]

No, actually at the time it wasn't.

However, since the discovery that locking the framerate just below the refresh rate with vsync doesn't produce much, if any, input lag, there's less of a reason to touch the pre-rendered frames count.
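For illustration, a minimal sketch of that kind of limiter; it is just a generic sleep-based pacer, not what any particular game or driver does internally. The idea is that capping at, say, 58 fps on a 60 Hz screen keeps the render queue from ever filling up.

[code]
// Minimal sleep-based frame limiter: pacing the CPU slightly below the
// refresh rate keeps the render-ahead queue from filling, which is why a
// cap just under the refresh rate avoids most vsync-induced lag.
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const double target_fps = 58.0;   // just under a 60 Hz refresh
    const auto frame_budget =
        std::chrono::duration_cast<clock::duration>(
            std::chrono::duration<double>(1.0 / target_fps));

    auto next_frame = clock::now();
    for (int frame = 0; frame < 600; ++frame) {
        // ... simulate and render the frame here ...
        next_frame += frame_budget;
        std::this_thread::sleep_until(next_frame);  // wait out the remainder
    }
    return 0;
}
[/code]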

In Memory of Chris "ChrisRay" Arthington, 1982-2010

CPU:Intel i7 920 @ 3.8(D0), Mainboard:Asus Rampage II Gene, Memory:12GB Corsair Vengeance 1600
Video:EVGA Geforce GTX 680+ 4GB, Sound:Creative XFI Titanium Fatal1ty Pro, Monitor:BenQ G2400WD
HDD:500GB Spinpoint F3, 1TB WD Black, 2TB WD Red, 1TB WD Black
Case:NZXT Guardian 921RB, PSU:Corsair 620HX, OS:Windows 7 SP1

#11
Posted 05/02/2012 03:36 PM   
[quote name='Sora' date='02 May 2012 - 11:36 AM' timestamp='1335972981' post='1403442']
no
[/quote]

Yes... and your statement about the fps limiter further illustrates my point: GPU limitation, not CPU limitation, is a primary cause of input lag.

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#12
Posted 05/02/2012 04:40 PM   
The input lag caused by the CPU pre-render / max frames allowed setting is a different sort from the lag caused by vsync.

However, adjusting this setting may reduce some of the lag that occurs in high-end games that easily choke the CPU (like Skyrim used to).

Oblivion benefited from adjusting this setting back in the day because vertex batching was choking the Athlon XPs and P4s of the time.

Basically, this setting isn't much help anymore because CPUs have multiple cores these days. The frame pre-rendering can happen on a separate core from the one the game's DirectX batch is executed on, so you don't have one thread stalling the other as much as in the past.

In Memory of Chris "ChrisRay" Arthington, 1982-2010

CPU:Intel i7 920 @ 3.8(D0), Mainboard:Asus Rampage II Gene, Memory:12GB Corsair Vengeance 1600
Video:EVGA Geforce GTX 680+ 4GB, Sound:Creative XFI Titanium Fatal1ty Pro, Monitor:BenQ G2400WD
HDD:500GB Spinpoint F3, 1TB WD Black, 2TB WD Red, 1TB WD Black
Case:NZXT Guardian 921RB, PSU:Corsair 620HX, OS:Windows 7 SP1

#13
Posted 05/02/2012 05:17 PM   
[quote name='Sora' date='02 May 2012 - 01:17 PM' timestamp='1335979034' post='1403483']
Basically, this setting isn't much help anymore
[/quote]

Maybe, maybe not. I don't experience much input lag on my current gaming system. But it's nice to be able to adjust it if need be. I'm just glad nVidia is adding features rather than removing them :)

"This is your code. These are also your bugs. Really. Yes, the API runtime and the
driver have bugs, but this is not one of them. Now go fix it already." -fgiesen

#14
Posted 05/02/2012 09:03 PM   
Input lag is much worse than the small performance hit you get from having the pre-render limit disabled.
And it doesn't matter whether the input lag is created by the GPU or the CPU: a high pre-render value will add to it, so it's an evil feature :thumbsdown:

i7 @ 4GHz

Radeon HD7950 @ 1180/1500

A-DATA 4x2GB @1600MHz CL7

Intel X25-V 40GB + few HDD >4.5TB total

Vista Home Premium + rarely used XP&Win7

Xonar Essence STX + Technics RP-F30

Sony GDM-FW900 24" Trinitron CRT (no AG)

LG Flatron W2420R RGB-LED A-TW H-IPS

#15
Posted 05/03/2012 08:47 AM   